1. Introduction
The ability of process calculi to build large systems by composing smaller ones in parallel, using a parallel composition operator, distinguishes them from other models of computation. In order to understand and rigorously reason about complex concurrent systems, several approaches were developed in (theoretical) computer science, e.g., process calculi [1], Petri nets [2], the actor model [3], and membrane computing [4,5]. Process calculi provide a way to define multi-agent systems such that (i) agents are able to communicate messages in order to instantiate variables, (ii) a small number of primitives and operators is needed to describe large systems, and (iii) behavioral equivalences and equational reasoning are used to manipulate agents. Over the years, several (families of) process calculi have emerged: CSP [6], ACP [7], CCS [8], and the π-calculus [9], to name just some of the best-known ones. One drawback of all these approaches in modeling multi-agent systems is the lack of explicit capabilities for reasoning about information sharing; usually, the information is stored in an implicit manner.
In this article, we present a prototyping language for describing and reasoning about multi-agent systems that uses timers for mobility and communication in a network of explicit locations, an environment in which the agents act in parallel. The agents can migrate to new locations and are able to communicate messages in order to instantiate variables; in early process calculi, message passing and local variables were used instead of global variables [10]. Recently, there have been some efforts to reintroduce globally available data [11], while in the current approach, we use a combination of both local and global variables. In our approach, all agents have access to public information, while each agent also has access to its private information when deciding its next actions. The flexibility of the language comes from the parallel composition of the agents (together with their information); this compositionality provides an easy way of describing large systems starting from small ones and of organizing the information of the network better. To capture the system's complex evolution (involving exchange of information between agents), we use labeled transition systems; this allows a rigorous presentation of the behavior of a multi-agent system and a convenient way to prove properties about agents.
While the introduction of private and public information adds expressiveness, it also increases the complexity of the language. However, this is somewhat unavoidable, as we consider a combination that we did not find before in formalisms defined for modeling multi-agent systems: communication, migration, time, and private/public information that can be accessed to perform tests. In order to illustrate the syntax and semantics of our language, and also to motivate multi-agent systems with information sharing, we consider an example in which agents communicate and share information, but also migrate between distributed locations according to explicit timers. We assume a scenario in which a student has finished their lectures at the university and searches for available transportation to go home. In front of the university, there are two stations: one for buses and the other for taxis; thus, in order to reach their home location, the student can choose between a bus and a cab. The student has to take into account several pieces of information to decide their means of transportation: the costs, the duration of migration, the availability, and the established priority between the transportation choices. The difference between the buses and the cabs is that the buses move between locations according to a known, predetermined schedule, while the cabs move only on demand from clients; note that the bus schedule is available as public information at each location and is updated at each tick.
Several tools for performing simulations and model checking of real-time systems are available, e.g., Uppaal [12], PRISM [13], and MCMAS [14]. However, none of these tools is able to model every feature of our approach. For this reason, we provide an implementation in the rewriting engine Maude 3 to observe that the multi-agent systems evolve as desired. As expected, when the number of agents, their delays, and the amount of stored information increase, the size of the system state space grows exponentially. Using this implementation, we emphasize how various strategies can significantly decrease the state space of the systems by controlling the application of the rules and the evolution of information in such highly nondeterministic and concurrent systems. This is possible due to the new strategy module and language included in the Maude 3 software platform. We are thus able to use the model checking tools in Maude 3 to verify several properties of our executable specification of multi-agent systems. Note that, due to the provided implementation, the decidability of model checking for our prototyping language reduces to the decidability of model checking for Maude 3 with strategies [15].
This article is organized as follows: In Section 2, we define the syntax and semantics of our prototyping language named iMAS (where 'i' stands for 'information' and 'MAS' for 'multi-agent system'), while in Section 3, we illustrate them with a running example. In Section 4, we show how iMAS is implemented in Maude 3. In Section 5, we use various strategies in Maude 3 to guide the evolution of agents described in iMAS, while in Section 6, we use the model checking tools in Maude 3 to verify several properties of our executable specification of multi-agent systems. The conclusion and related work, together with the references, end the article.
2. Syntax and Semantics of iMAS
Inspired by process calculi, we introduce a prototyping language named iMAS for multi-agent systems with information sharing that allows agents to act in parallel and to migrate between distributed locations according to explicit timers. We give the syntax of iMAS in Table 1, where we assume the following:
- Four sets, Loc, Chan, Id, and , containing names for location variables or locations, communication channels, recursive definitions, and networks, respectively;
- For each identifier in Id, a unique process definition exists;
- Natural number t is a timeout of actions, integer number k is a threshold appearing in tests, u is a variable, v is a value (integer, string, Boolean), f is an information field, and p is either private (to indicate information accessed only by the agent to which it belongs) or public (to indicate that the information can be accessed by any agent located where the information is available). For example, if, for , its process definition is , and for the values , there exists such that , then the agent instances and are different.
An agent A is given as a pair of a process P and an information I, where P describes how the agent should behave, while I describes the information of the agent to which P has access. An agent whose first action is a migration stays at its current location for t units of time, consumes the action after the t units of time have passed, and changes its location by migrating to location l, where it will behave according to P. Since the name l appearing in migration actions can be a location variable, it can be instantiated during the execution with a location name, thus allowing agents to have a dynamic evolution, as they adapt their behaviors depending on the interactions they have with other agents.
An agent whose first action is an output is able to send a value v on channel a for at most t units of time, while an agent whose first action is an input is able to receive a value on channel a for at most t′ units of time and to use that value to instantiate the variable u. The two agents can communicate on channel a only if they are at the same location. If the agents communicate, then the variable u is instantiated by the value v, the two agents remain at the current location, and they will behave according to P and P′ afterwards. If the agents cannot communicate in the time interval given by their timers (e.g., because they are at different locations), then the two agents remain at their current locations, and they will behave according to Q and Q′ afterwards.
An agent whose first action is a test checks the truth value of the test by making use of the public information available at the current location and of its own information I. Regardless of the truth value of the test, the agent will remain at the current location; it will behave as P if the test is true, or as Q if it is false.
An agent whose first action is an update, regardless of whether p is private or public, updates the information and remains at the current location, where it will behave according to P. The update depends on whether or not the field f already exists in the information: if f does not exist, then the pair formed by f and v is added to the information, while if f exists, then the value stored in f is updated to v. The agent stop stays at the current location doing nothing, regardless of its information I.
The information consists of pairs of a field f and the value v assigned to f. A private piece of information and a public piece of information are treated as different pieces of information (even if they have the same field f). The stored information, either private (accessed only by the agent to which it belongs) or public (accessed by any agent located where the information is available), is used by agents to perform tests in order to decide how to behave afterwards. For example, an agent can check whether the value associated with the field f is strictly greater than the threshold k before continuing its execution as P or Q at the current location, depending on the returned truth value. A test on the public information is only slightly different, syntactically, from a test on the private information. Note that the tests can only read information from the private and public information, while the update of information is performed only by the update action.
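As a small illustration of these constructs, the following fragment is written in the Maude encoding of iMAS presented later in Section 4 (rather than in the notation of Table 1); the constants uni, a, and price are hypothetical names introduced only for this sketch. It describes an agent located at uni that first updates its private field price to 7 and then offers the value 7 on channel a for at most 5 time units before giving up; its private information initially stores the pair < price ; 0 >:
uni [[ empty <| ((update(private, price, 7)
then ((a ^ 5 ! < 7 >) then stop else stop))
|> < price ; 0 >) ]]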
In Table 1, all variables are free, except for the variable u appearing in an input agent, where u is bound within P, but not in Q. We also consider the sets of free variables appearing in a process P and in a network N. For an identifier with a given process definition, the free variables of the defining process are among the parameters of the definition. An instantiated agent behaves according to P in which v replaces all the free occurrences of the variable u; in order to avoid name clashes due to the initialization of u by v, some α-conversion might be used beforehand.
A network is constructed as a set of parallel distributed locations, where each location l contains the information I together with the agents located there. Dedicated notations are used for a location l with empty information and no agents, and for a network without any locations.
In order to allow migration, we define the structural equivalence relation ≡ over the set of networks as the smallest congruence satisfying the expected equalities for rearranging networks: the commutativity and associativity of the parallel compositions of agents and of locations, together with their identity elements.
The operational semantics of iMAS is split in two parts (for ease of presentation): the rules for actions are given in Table 2, while the rules for time passing are given in Table 3. In Table 2, we use a transition relation labeled by a multiset of actions to denote the execution of that multiset of actions by the agents of the network N in order to transform it into a network N′. If the multiset contains a single action, we label the transition with that action directly.
In rule (Stop), a network consisting of only a location l with information I and no agents will not perform any action. Rule (Com) allows two agents from location l to use the channel a to communicate the value v in order to instantiate the variable u. After communication, the two agents remain at the current location l, and they will behave according to P and P′ afterwards. A dedicated label is used to mark the successful communication on the channel a.
If the timer of the active output or input action of an agent is 0, then this action is removed by using the rule (Put0) or (Get0), respectively. This does not lead to a change of the agent's current location or of its information I, but it does change the behavior of the agent, which will behave according to Q from this point forward. Note that, when the timer of the active communication action of an agent is 0, the agent can be involved in any of the rules (Com), (Put0), or (Get0); since only one of these rules can be applied to this agent, the rule to be applied is nondeterministically chosen.
If an agent whose migration timer has expired is at the location l, then by using the rule (Move0) it moves to its destination location, where it will behave according to P. An agent performing a test checks its truth value by making use of the public information available at the current location and of its own information I. Regardless of the truth value of the test, the agent will remain at the current location; if the test is true, then by applying the rule (IfT) the agent will behave according to P from this point forward, while if the test is false, then by applying the rule (IfF) the agent will behave as Q from this point forward.
Depending on the value of p, one of the rules (CrtPr), (CrtPu), (UpdPr), and (UpdPu) will be used by an agent performing an update. If the field f does not exist in the private information, then the rule (CrtPr) is used to extend the private information with the pair formed by f and v; otherwise, if the field f does exist in the private information, then the rule (UpdPr) is used to update the private information by replacing the value of the existing field f by v. Similarly, if the field f does not exist in the public information, then the rule (CrtPu) is used to extend the public information with the pair formed by f and v; otherwise, if the field f does exist in the public information, then the rule (UpdPu) is used to update the public information by replacing the value of the existing field f by v. This does not lead to a change of the current location of the agent, which will behave according to P from this point forward.
An agent can use the rule (Call) to unfold its recursive definition. This does not lead to a change of the current location of the agent, which will behave according to the unfolded process from this point forward.
The rule (Par) is applied to obtain the behavior of large systems by putting in parallel the behavior of smaller ones, while the rule (Equiv) is used to re-arrange networks by means of the structural equivalence relation ≡.
In Table 3, we use a time-passing relation to denote the passing of t units of time in the network N in order to transform it into the network N′.
In the rule (DStop), a network consisting of only a location l with its information and no agents is not affected by the passage of time. If the timer of the active output or input action of an agent is greater than 0, then the timer is decreased by using the rule (DPut) or (DGet), respectively. Similarly, if the timer of the active migration action of an agent is greater than 0, then the timer is decreased by using the rule (DMove). This does not lead to a change of the current location of the agent or of its information I.
The rule (DPar) is applied to obtain the behavior of large systems by putting in parallel the behavior of smaller ones, while the rule (DEquiv) is used to rearrange networks by means of the structural equivalence relation ≡. In the rule (DPar), a negative premise marks a network that cannot execute any rule of Table 2; the use of negative premises is possible as they do not lead to inconsistencies.
A derivation combining the action relation and the time-passing relation denotes a complete computational step, that is, the execution of a multiset of actions followed by the advance of time for t time units. If in a system N we perform a complete computational step leading to a system N′, we say that N′ is directly reachable from N. In case none of the rules of Table 2 can be applied in the system N, then the complete computational step consists only of time passing.
In the next result, we illustrate that applying the rules of Table 3 to a network does not lead to nondeterministic behaviors (the obtained system is unique).
Theorem 1. For all the networks N, N′, and N″, the following hold:
1. If , then ;
2. If and , then .
Proof. Employing structural induction on the network N, as outlined in [16]. □
In the next result, we illustrate that if a network allows for applying the rules of Table 3, the advance of time is continuous (we do not skip time instances to execute migration and communication actions).
Theorem 2. If , then .
Proof. Employing structural induction on the network N, as outlined in [16]. □
3. The Running Example Modeled by Using iMAS
To illustrate the syntax and semantics of iMAS, let us describe in more detail the example mentioned in the introduction. A student can use either a bus or a cab to move to a destination location (both the bus and the cab are encoded as agents). Additionally, the student has to take into account a priority relation established between the transportation means: the cab is given a lower priority than the bus. To make it easier to read the definitions of agents, we use the following notations: (waiting time), (bus arrival time), TT (travel time), BC (bus capacity), BMC (maximum bus capacity), ub (get on the bus), db (get off the bus), (waiting according to the schedule), (student is waiting), and (student travel time). Similar notations are used for the interactions between the student and the cab.
The bus at location l having the destination l′ is described by a recursive process definition (its Maude encoding is given in Section 4). The passengers are allowed to get off the bus by using a channel db, and this is followed by a decrease in the number of passengers in the bus. Afterwards, any passenger willing to travel by bus communicates on the channel ub to receive the travel time from the TT field of the private information of the bus, and this is followed by an increase in the number of passengers in the bus. Once the bus has taken all the passengers, it moves to the location l′. Similarly, a cab at the location l is described by a recursive process definition. The cab awaits for a location to be communicated by the student on a dedicated channel; once it receives this location, it sends back the travel time between the current location and the desired one. Afterwards, the cab moves to that location, where the student can exit the cab.
The description of the student at the location l having the destination l′ is given by a recursive process definition as well. If the bus and the student are at the same location at the same time, then they can communicate using the channel ub. If, however, the bus and the student are not at the same location at the same time, and it takes longer for the bus to arrive than the student is willing to wait, then the student will try to find a cab to reach their destination. If the cab and the student are at the same location, then they can communicate using a dedicated channel. If the student is unable to communicate with either the bus or the cab, then the passage of time is performed, and the above conditions are re-checked. Once the student communicates with the bus on the channel ub or with the cab on its channel, the student will wait for a communication on a dedicated channel for several time units before continuing at the current location; this continuation is placed on the timeout branch, as there is no one to communicate on this channel, and the agent will always behave according to this branch.
In order to keep the public information about the bus schedule up to date, it is required to have a dedicated schedule agent at each location, given by its own recursive process definition.
The network composed of the previous agents and locations is denoted by TravelSystem.
Here we describe how TravelSystem evolves according to the rules of Table 2 and Table 3. To avoid wasting space, for each unfolded recursive definition, we indicate only the active action, and in each of its possible continuations, we replace the process definition by its first action followed by dots.
Since the network contains several recursive definitions, we have to use the rules (Call) and (Par) several times to unfold them in order to be able to execute their actions, namely, the rule (Call) four times with one label and once with another label. Note that, due to the nondeterminism of applying the five instances of the rule (Call), several ways of unfolding are possible, as follows:
(Call), (Par)
In the above evolution, as the bus cannot communicate with any other agent on the channel db (namely, no one is willing to get off the bus), the rule (Get0) can be applied, leading to a test between two private information values: the bus capacity BC and the maximum bus capacity BMC. The rule (IfT) can then be applied, and the bus can now use the channel ub to communicate with potential passengers. Since the cab has, as its first action, an input with an infinite timer, it can evolve only by communicating on its channel, which is not the case at this point. By performing a test between the public information value of the bus arrival time and the private information of the student (the waiting time of the student at the university), it is the rule (IfT) that can be applied, and the student can now use the channel ub to interact with the bus. Since the remaining tests return true at one location and false at the other, the rule (IfF) and the rule (IfT) can also be applied. Thus, the evolution leads to the network as follows:
(Get0), (IfT), (IfF), (Par)
Since the agents whose first action is an input with a timer strictly greater than zero can evolve only by communicating on their respective channels, and no such communication is possible at this point, they do not evolve. On the other hand, the schedule agent updates the public information by applying the rule (UpdPu). Afterwards, another agent can apply the rule (Call) to unfold, followed by the rule (IfT) to pass its test and then be able to communicate on its channel.
Until this point, all the agents evolved in parallel by interacting only with the public and private information; this changes now, as the student and the bus are able to communicate on the channel ub by using the rule (Com), such that the student receives the value 4 from the TT field of the private information of the bus, a value representing the travel time between the two locations. This is followed by two private updates performed by using the rule (UpdPr) twice: one by the bus to increase the number of its passengers (field BC) from 5 to 6, and another by the student to store the travel time 4 between the locations in a dedicated field. Afterwards, the bus can apply the rule (Call) to unfold; since the bus cannot communicate with any other agent on the channel db (namely, no passenger is willing to get off the bus), the rule (Get0) is applied, leading to a test between two private information values: the bus capacity BC and the maximum bus capacity BMC. The rule (IfT) can be applied, and the bus can now use the channel ub to communicate with potential passengers. However, since the bus cannot communicate with any other agent on the channel ub (namely, no passenger is willing to get on the bus), the rule (Put0) can be applied. After these steps, the resulting network is
(UpdPu), (Call), (IfT), (Comm), (UpdPr), (Get0), (Put0), (Par)
Only time passing rules are applicable now: all the timers of the active actions are decreased by one (e.g., the timer of a pending input action). Thus, after a complete computational step, we obtain the following network:
(DMove), (DGet), (DPar)
During the next three time units, only the schedule agents of the locations can evolve. First, since the schedule agents of both locations cannot communicate with any other agent on their channel, only two instances of the rule (Get0) can be applied. Next, the schedule agents of both locations update the public information by applying the rule (UpdPu) twice. Afterwards, the schedule agents of both locations can apply the rule (Call) twice to unfold, followed by two instances of the rule (IfT) to pass their tests and then be able to communicate on their channel. Thus, before applying the next time step, the reached network is
(Get0), (UpdPu), (IfT), (Par)
Note that the evolution is nondeterministic; this means that there are several ways of applying the evolution rules.
4. Implementing Multi-Agent Systems with Information Sharing
We supply an implementation to check that the evolutions of the systems described by our prototyping language iMAS are performed correctly. Moreover, we emphasize the use of strategies to control the rule application in order to guide the evolution of such a highly nondeterministic and concurrent system. This control is achievable due to the new strategy language of the rewriting engine Maude 3.
Maude 3 is a robust software system that supports the efficient execution of specifications based on rewriting logic. Rewriting logic [17] is a computational logic combining term rewriting and equational logic. Starting from the semantics of our language iMAS, we define a rewrite theory. Note that the syntax of the rewrite theory is that of Maude [18] for the untimed aspects, and also that of Real-Time Maude [19] for the timed aspects. Just like in [20], we use a typed setting that includes sorts, together with the subsort inclusion relationship among types. Considering a given rewrite theory, we say that a rewrite is derivable in the theory if it can be obtained by using its rewrite rules.
To implement the multi-agent systems defined by iMAS, we utilize sorts that correspond to the sets in our language. For example, the set of channels is represented by the sort Channel, while the sort MValue is utilized to handle multisets of values within the system. For convenience, the iMAS terms are decomposed into parts using the sorts AGuard, MGuard, and IGuard. The sort AGuard consists of the output and input prefixes of the communicating agents, the sort MGuard consists of the migration prefixes of the moving agents, while the sort IGuard consists of the update prefixes of the updating agents. The prefixes contained in the sorts AGuard, MGuard, and IGuard are crucial for defining the behavior of agents in a sequential manner.
The subsort declaration subsorts Var < Location Channel Nat demonstrates that variables can be instantiated using location names, channel names, or natural numbers. This relationship is part of the broader subsorting hierarchy defined within the system.
sorts PId Agent Inf Field MField Test Location Channel Value
VarL VarC VarN Var MValue Process
AGuard MGuard IGuard Guard System GlobalSystem.
subsorts Var < Location Channel Nat < Value < MValue.
For each operator in the iMAS syntax (as shown in Table 1), we assign two attributes: the ctor attribute, which designates the operator as a data constructor, and the prec attribute, followed by a numerical value, which establishes the operator's precedence in relation to other operators. To accurately represent the parallel operators | and || in (Real-Time) Maude, we include the attributes comm and assoc, which declare that the operators are commutative and associative constructors, respectively. This encoding reflects the structural congruence rules of the system; in fact, the majority of the structural rules are implemented through the use of these comm and assoc attributes.
op empty : -> Inf [ctor].
ops private public : -> PId [ctor].
op __ : Inf Inf -> Inf [ctor comm assoc id: empty].
op < _ ; _ > : Field Nat -> Inf.
op _^_! <_> : Channel TimeInf Value -> AGuard [ctor prec 2].
op _^_? (_) : Channel TimeInf Var -> AGuard [ctor prec 2].
op go ^__ : Time Location -> MGuard [ctor prec 2].
ops update : PId Field Nat -> IGuard [ctor].
op _then _ else _ : AGuard Process Process -> Process [ctor prec 1].
op _then _ : MGuard Process -> Process [ctor prec 1].
op _ then _ : IGuard Process -> Process [ctor prec 1].
op if _ then _ else _ : Test Process Process -> Process [ctor prec 1].
op stop : -> Process [ctor].
op _ |> _ : Process Inf -> Agent [ctor].
op Zero : -> Agent [ctor].
op _||_ : Agent Agent -> Agent [ctor prec 5 comm assoc id: Zero].
op _ [[_<|_]] : Location Inf Agent -> System [ctor prec 1].
op void : -> System [ctor].
op _|_ : System System -> System [ctor prec 5 comm assoc id: void].
op {_} : System -> GlobalSystem [ctor].
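For illustration, these constructors can be combined into a small network term. The following is a minimal sketch in which the constants uni, home, a, and price, as well as ExampleNet, are hypothetical names introduced only for this example (they are not part of the running example): the location uni stores the public field < price ; 3 > and hosts the agent sketched in Section 2, while the location home is empty.
ops uni home : -> Location .
op a : -> Channel .
op price : -> Field .
op ExampleNet : -> GlobalSystem .
eq ExampleNet =
{ uni [[ < price ; 3 > <| ((update(private, price, 7)
then ((a ^ 5 ! < 7 >) then stop else stop))
|> < price ; 0 >) ]]
| home [[ empty <| Zero ]] } .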
The remaining structural rules are implemented as equations, in the following manner:
eq stop |> empty = Zero.
The majority of the rules presented in Table 2 include hypotheses. To accurately translate these rules into Maude, we employ conditional rewrite rules, in which the hypotheses of the semantic rules are represented as conditions.
Note that the rules (Par), (DEquiv), and (Equiv) are not implemented as rewrite rules in Maude. This is because the rewrite theory of Maude incorporates the commutativity, associativity, and congruence properties of the | and || operators through the use of the comm and assoc annotations.
To identify the specific rule of Table 2 being applied, we use clear and concise labels for each of the rewrite rules listed below.
crl [Comm] :
k[[I <| (((c ^ t ! < val >) then (P) else (Q)) |> I')
|| (((c ^ t' ? ( X )) then (P') else (Q')) |> I'') || A]]
=> k [[ I <| (P |> I') || ((P' {val / X}) |> I'') || A]]
if notin(val , bnP(P')).
rl [UpdatePrivate] :
k[[I <| ((update(private,f,v) then (P)) |> (I' < f ; v' >)) || A]]
=> k[[I <| (P |> (I' < f ; v >)) || A]].
crl [CreatePrivate] :
k[[I <| ((update(private,f,v) then (P)) |> (I')) || A]]
=> k[[I <| (P |> (I' < f ; v >)) || A]] if notinF(f,I').
crl [Move] :
k[[I <| (((go ^ t l) then (P)) |> I') || A]] | l[[I'' <| B]]
=> k[[I <| A]] | l[[I'' <| (P |> I') || B]] if t == 0.
crl [Input0] :
(k[[I <| (((c ^ t ! < val >) then (P) else (Q)) |> I') || A]])
=> k[[I <| (Q |> I') || A]] if t == 0.
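A symmetric rule is needed for an input prefix whose timer has expired; the following sketch assumes it carries the label Output0 used later in the list of rules, and simply mirrors the rule above (it is not taken from the actual specification):
crl [Output0] :
(k[[I <| (((c ^ t ? ( X )) then (P) else (Q)) |> I') || A]])
=> k[[I <| (Q |> I') || A]] if t == 0.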
The recursive definitions of iMAS are not directly encodable into Maude because Maude does not prevent the infinite unfolding of recursive definitions into infinite sequences of actions. To unfold a definition only when it appears at the top level, we extend each recursive construction with a Boolean flag b that controls unfolding; a definition is unfolded only when its flag allows it, and by negating the flag, the unfolding can be performed again later. Using such a solution, the definition for the bus appearing in Section 3 becomes
op bus : Location Location Bool -> Process [ctor].
ceq bus(l,l',b) =
((db ^ 0 ? (x1))
then (update(private, BC, sd(get(private,BC),1))
then bus(l,l',not b))
else (if get(private,BC) < get(private,BMC)
then ((ub ^ 0 ! < get(private,TT(l , l')) >)
then (update(private, BC, get(private,BC) + 1)
then bus(l,l',not b))
else ((go ^ (get(private,TT(l , l'))) l')
then bus(l',l,not b)))
else ((go ^ (get(private,TT(l,l'))) l') then bus(l',l,not b)))
) if b.
crl [UnfoldBus] :
k[[I1 <| ((bus(l,l',b)) |> I2 ) || B]]
=> k[[I1 <| ((bus(l,l',not b)) |> I2 ) || B]] if not b.
The definitions for the cab, the student, and the bus schedule agent are defined in a similar way.
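As an illustration, the unfolding rule for the cab presumably mirrors UnfoldBus; the sketch below assumes a definition cab(l,b) with a single location parameter and a Boolean flag (both the arity and the rule itself are assumptions, not the paper's code):
crl [UnfoldCab] :
k[[I1 <| ((cab(l,b)) |> I2 ) || B]]
=> k[[I1 <| ((cab(l,not b)) |> I2 ) || B]] if not b.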
The passage of time can be performed in two ways. On the one hand, the rule tick models the advancing of time in the encoded system by the maximum possible value. On the other hand, the rule tickt models the advancing of time in the encoded system by a fixed value t that is not greater than the maximum possible value. The rule tickt is not executable (indicated by nonexec) because the variable t (denoting how much time is consumed) does not occur in the left-hand side of the rule; thus, t needs to be instantiated when the rule tickt is applied.
crl [tick] : {M} => {delta(M, mte(M))} if mte(M) =/= INF and mte(M) =/= 0.
crl [tickt] : {M} => {delta(M, t)} if t <= mte(M) [nonexec].
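For instance, within the Maude strategy language, the nonexecutable rule tickt can be applied by providing an explicit substitution for t; a hypothetical usage sketch is
srew {TravelSystem} using tickt[t <- 1] .
which advances time in TravelSystem by exactly one unit, provided that one unit does not exceed the value computed by mte.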
The tick and tickt rules apply the delta function to decrement the time constraints of all agents in a network by an identical positive value.
op delta : System TimeInf -> System.
eq delta(k[[I <| A]] , t') = k[[I <| deltaA (A, t')]].
ceq delta(M | N , t') = delta(M , t') | delta(N , t')
if M =/= void and N =/= void.
eq delta(M , t') = M [owise].
where the function deltaA is used to decrease time constraints in agents, as follows:
op deltaA : Agent TimeInf -> Agent.
eq deltaA((((c ^ t ! < val >) then (P) else (Q)) |> I') , t')
= (((c ^ (t monus t') ! < val >) then (P) else (Q)) |> I').
eq deltaA((((c ^ t ? ( X ) ) then (P) else (Q)) |> I') , t')
= (((c ^ (t monus t') ? ( X )) then (P) else (Q)) |> I').
eq deltaA( (((go ^ t l) then (P)) |> I') , t')
= (((go ^ (t monus t') l) then (P)) |> I').
ceq deltaA( A || B , t') = deltaA(A , t') || deltaA(B , t')
if A =/= Zero and B =/= Zero.
eq deltaA(A , t') = A [owise].
The mte function computes the maximum possible time advancement that can be applied without violating its constraints, as follows:
op mte : System -> TimeInf.
eq mte(k[[I <| (stop |> I') ]] ) = INF.
eq mte(k[[I <| (((c ^ t ! < val >) then (P) else (Q)) |> I') ]]) = t.
eq mte(k[[I <| (((c ^ t ? ( X ) ) then (P) else (Q)) |> I') ]]) = t.
eq mte(k[[I <| (((go ^ t l) then (P)) |> I')]]) = t.
eq mte(k[[I <| (((c ^ t ! < val >) then (P) else (Q)) |> I')
|| (((c ^ (t') ? ( X )) then (P') else (Q')) |> I'') || A]] | N) = INF.
ceq mte(k[[I <| A || B ]]) = min(mte (k[[I <| A ]]) , mte (k[[I <| B ]]))
if A =/= Zero /\ B =/= Zero.
ceq mte(M | N) = min(mte(M), mte(N)) if M =/= void /\ N =/= void.
eq mte(M) = 0 [owise].
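As a quick sanity check, one can reduce mte on a single pending output, reusing the hypothetical constants uni and a introduced in the sketch after the constructor declarations; the maximal time elapse then equals the timer of the output:
red mte(uni [[ empty <| (((a ^ 5 ! < 7 >) then stop else stop) |> empty) ]]) .
The expected result of this (hypothetical) reduction is 5.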
We prove the equivalence between the transition system generated by our Maude specification and the reduction semantics of iMAS. Given a system M, we consider its Maude encoding together with the rewrite theory previously introduced, consisting of the rewrite rules Comm, IfT, IfF, CreatePrivate, CreatePublic, UpdatePrivate, UpdatePublic, UnfoldScheduleBus, UnfoldStudent, UnfoldBus, UnfoldCab, Input0, Output0, Move, and tick, together with additional defined equations.
We establish a relationship between the structural equivalence of terms in iMAS and the equality of the corresponding encoded terms in the rewrite theory.
Lemma 1. Two iMAS networks are structurally equivalent iff their Maude encodings are equal in the rewrite theory.
Proof. ⇒: Using induction on the congruence rules of iMAS.
⇐: Using induction on the equations of the rewrite theory. □
The following result demonstrates the operational correspondence between iMAS networks and their rewrite theory translations. In what follows, we consider an arbitrary rule from Table 2 and Table 3.
Theorem 3. An iMAS network can perform a step by such a rule iff its Maude encoding can perform the corresponding rewrite in the rewrite theory.
Proof. ⇒: Using induction on the derivation in iMAS.
Case (Put0). This transition arises from a network containing an output prefix whose timer has expired. Applying the (Put0) rule from Table 2, the output prefix is discarded and the agent continues as Q. Since the Maude encoding of the network is
l[[I_l <| (((a ^ 0 ! < v >) then (P) else (Q)) |> I) || A]]
applying the Input0 rule of the rewrite theory, it is rewritten into
l[[I_l <| ((Q) |> I) || A]]
which is equal to the encoding of the resulting iMAS network, as desired.
The remaining cases can be handled analogously.
⇐: Using induction on the derivation in the rewrite theory.
- Input0: This transition arises when the source term is
l[[I_l <| (((a ^ 0 ! < v >) then (P) else (Q)) |> I) || A]]
while the target term is
l[[I_l <| ((Q) |> I) || A]]
In accordance with the definition of the encoding, the source term encodes an iMAS network whose agent has an output prefix on channel a with an expired timer. By applying the (Put0) rule from Table 2, this network evolves into the network encoded by the target term. Based on the definition of the encoding, the two derivations correspond as desired.
- The remaining cases can be handled analogously.
□
5. Controlling Multi-Agent Systems by Strategies
In programming, strategies have long been used to evaluate expressions (according to certain rules); several evaluation strategies are well known: call by value, call by name, call by need, etc. In rewriting systems, strategies establish the sequence of rewrite rules to be applied and outline the available choices when decisions have to be made. The outcome of applying a strategy is the subset of computations produced according to that strategy. In what follows, we emphasize the use of strategies to significantly decrease the possible evolutions of multi-agent systems with information sharing.
The precise control over the application of rules in Maude 3 became possible after the inclusion of a strategy language [21]. The Maude 3 command srew explores all possible execution paths starting from a given term and produces a set of solutions. The command for rewriting a system t using the strategy alpha is
srew t using alpha,
and the output presents the generated solutions; several solutions can be generated because the nondeterminism is not always eliminated by using strategies.
The fundamental component of the strategy language is the application of a rule, the simplest form being the strategy all, which applies the rules without any restriction. The strategy (all)* repeats the strategy all any number of times, including zero times. For example, running the system using the strategy (all)* is performed by using the following command:
srew {TravelSystem} using (all)*.
Since there is no restriction on applying the rules, the number of reachable networks is the largest possible. In the remainder of this section, we will also examine several other strategies in addition to the iteration strategy (all)*, as follows:
- (i) The strategy idle simply returns the input system.
- (ii) The disjunction strategy α | β performs either the strategy α or the strategy β.
- (iii) The conditional strategy α ? β : γ executes first the strategy α and then uses its output as input for the strategy β; if the strategy α yields no output, it instead executes the strategy γ on the initial input system (if the strategy β is defined as idle, then the conditional strategy coincides with the strategy α or-else γ); a small illustration is sketched after this list.
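The following declarations are a hypothetical illustration of these combinators, using the rule labels introduced in Section 4 (they are not part of the specification): both strategies try to perform a communication and, if none is enabled, let time advance instead, so the two declarations are equivalent.
sd tryComm1 := Comm ? idle : tick .
sd tryComm2 := Comm or-else tick .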
Note that, in the definitions of the mte function and of the tick rule, we enforce the passage of time to take place only when no communication, unfolding, updating, or testing can be performed, namely, when the value returned by the mte function is not 0. This means that the strategy (all)* is in fact equivalent to the next strategy, called mtestep, which formally describes the order in which the iMAS rules from Table 2 and Table 3 are applied, as follows:
sd step := ( IfT | IfF | Move | Comm | Input0 | Output0
| UnfoldBus | UnfoldCab | UnfoldStudent | UnfoldScheduleBus
| UpdatePrivate | UpdatePublic | CreatePrivate | CreatePublic ).
sd mtestep := (step or-else tick )*.
However, the number of solutions can be decreased by establishing a priority among the rules. Since in iMAS the unfoldings, tests, creations, updates, and moves could be performed in parallel, one strategy to consider is to keep only one of all the possible interleaving sequences, to apply the Comm, Input0, and Output0 rules afterwards, and the time rule tick last. Formally, this can be described in Maude 3 by the following two strategies:
sd step1 := UnfoldBus or-else (UnfoldCab or-else (UnfoldStudent
or-else (UnfoldScheduleBus or-else (IfT or-else (IfF or-else
(CreatePublic or-else (CreatePrivate or-else
(UpdatePublic or-else (UpdatePrivate or-else
(Move or-else (Comm | Output0 | Input0))))))))))).
sd mtestep1 := (step1 or-else tick )*.
In this instance, applying the strategy mtestep1 to run the system is performed using the following command:
srew {TravelSystem} using mtestep1.
Using mtestep1, the number of reachable networks decreases to 13424. This number can be further reduced by considering a more restrictive way of applying the rules, namely, by using an approach similar to the one in [22] that forces the rule (Com) to always be applied before the rules (Put0) and (Get0), as follows:
sd step2 := UnfoldBus or-else (UnfoldCab or-else (UnfoldStudent
or-else (UnfoldScheduleBus or-else (IfT or-else (IfF or-else
(CreatePublic or-else (CreatePrivate or-else
(UpdatePublic or-else (UpdatePrivate or-else
(Move or-else (Comm or-else (Output0 | Input0)))))))))))).
sd mtestep2 := (step2 or-else tick )*.
This application of rules leads to another reduction in the number of reachable networks, more exactly to 372. Note that none of the defined strategies is specific to the running example; they can be used with any iMAS system.
The behavior of two parallel agents is typically understood to encompass all possible interleavings of their steps: the actions of both agents can be mixed together in any order, as long as the order of actions is preserved for each agent. Just like in [23], we consider interleaving based on a specific strategy, as this more accurately reflects how multi-threading operates in modern programming languages.
The above strategies show that reducing the number of disjunction strategies of the form α | β results in a significant reduction of the state space, due to the decrease in the number of rules applicable at any step of any evolution. For a given network N and strategy, consider the number of networks reached starting from N by applying the rules according to that strategy; then the following result holds for the strategies mentioned above:
Proposition 1. For any network N, the number of networks reachable using (all)* equals the number reachable using mtestep; this number is at least the number reachable using mtestep1, which in turn is at least the number reachable using mtestep2.
Proof. The motivation for the equality was provided previously, before defining the mtestep strategy.
For the inequalities, we use structural induction on N to prove our claim, as follows:
N contains only one agent. This means that, at each step, at most one rule of iMAS is applicable, and so the numbers of reachable networks coincide regardless of which rule is applicable.
N contains at least two agents. Depending on the structure of the agents, there exist several cases. Assume that one agent can perform a movement between two locations, and the other can perform an update. In this case, there exist two networks reachable from N, one by performing the movement and one by performing the update. Since these two rules can be applied in parallel, there also exists a network reachable from each of these two networks. Counting the networks reachable under each strategy then yields the stated inequalities. The remaining cases can be handled analogously.
□
6. Model Checking Multi-Agent Systems with Information Sharing
Since the temporal aspects of iMAS systems together with the private and public information lead to a large number of possible interactions, the verification should be much easier by using software tools. For the iMAS networks controlled by strategies, we can examine and validate multiple properties by employing the model-checking tool umaudemc of the unified Maude 3 [24].
The command-line syntax for invoking the umaudemc tool is as follows:
umaudemc check <file name> <initial term> <formula> [ <strategy> ]
The umaudemc tool analyzes the input formula, determines the most general logic that the formula fits into, and subsequently invokes corresponding model checkers like NuSMV [25] and Spot [26].
To show how it works, we verify a few CTL* properties of the multi-agent system presented in Section 3. The branching-time logic CTL* [27] is an extension of both the LTL [28] and CTL [29] logics. Formulas in CTL* are built from the atomic propositions of a Kripke structure, along with a set of temporal operators and path quantifiers.
Path formulas describe properties of execution paths. The operators X, <>, and [] indicate that a property holds in the next state, in some state, or in all states of the path, respectively; the until operator states that its first argument is satisfied in all states preceding a state in which its second argument is satisfied.
State formulas describe properties of states within a system. They indicate whether an atomic property p holds in a state and specify how the paths originating from that state are quantified, either universally (A) or existentially (E).
To follow the movement of an agent in our running example, we add to the private information of each agent a pair < AID ; n >, where n is a unique number identifying the agent.
The CTL* formula below uses the expression AIDInLocation, which checks whether the agent with a given AID is at a given location; the formula states that every state of TravelSystem can be continued to one where the agent with AID 201 is located at the location univ. Interestingly, this formula is not satisfied under the mtestep strategy, but it is satisfied under the mtestep2 strategy.
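In standard CTL* notation, writing p for the atomic proposition AIDInLocation(< AID ; 201 >, univ), the property checked below reads A □ (E ◇ p): on every path, in every state, there exists a continuation reaching a state where p holds.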
$ umaudemc check iMASSpecStrat.maude {TravelSystem}
’A [] E <> AIDInLocation( < AID ; 201 >, univ)’ mtestep
The property is not satisfied in the initial state (112800 system
states, 1097608 rewrites, holds in 34565/112800 states)
$ umaudemc check iMASSpecStrat.maude {TravelSystem}
’A [] E <> AIDInLocation( < AID ; 201 >, univ)’ mtestep2
The property is satisfied in the initial state (744 system states,
3719 rewrites, holds in 744/744 states)
The reason is that, in the mtestep2 strategy, we enforce communication to happen before communication capabilities expire (if this is possible).
We can also verify an LTL formula that checks whether both the bus and the student are always infinitely often located at univ. This is not satisfied because the bus always moves between locations, regardless of whether anyone boards it or not, while the student gets stuck at home after the first trip with the bus. The student remains stuck at the location home once reaching it because there is no cab available at home, and the timeouts assigned to the student and the bus prevent them from interacting in the future.
$ umaudemc check iMASSpecStrat.maude {TravelSystem}
’ [] <> ((AIDInLocation( < AID ; 101 >, univ)
/\ AIDInLocation( < AID ; 301 >, univ))) ’ mtestep1
The property is not satisfied in the initial state (90 system states,
623 rewrites, 2 Buchi states)
We can also easily consider other variants of the initial system. For example, consider a system in which one of the transportation agents is not initially in the same location as the student. One can then check whether, regardless of where that agent resides, the student eventually moves.
$ umaudemc check iMASSpecStrat.maude {TravelSystem}
’ A [] (AIDInLocation( < AID ; 301 > , univ)
-> E <> AIDInLocation( < AID ; 301 > , home)
/\ (AIDInLocation( < AID ; 301 > , home)
-> E <> AIDInLocation( < AID ; 301 > , univ))) ’ mtestep1
The property is not satisfied in the initial state (26848 system states,
151931 rewrites, holds in 7585/26848 states)
Note that, just like the previous property, this property is not satisfied because the bus always moves between locations, regardless of whether anyone boards it or not, while the student gets stuck at home after the first trip with the bus. The student remains stuck at the location home once reaching it because there is no cab available at home, and the timeouts assigned to the student and the bus prevent them from interacting in the future.
However, by only slightly modifying TravelSystem to TravelSystem2, in which the student initially resides in home instead of univ, the property becomes satisfied, since the timeouts assigned to the student and the bus and their initial positioning in the system allow them to interact:
$ umaudemc check iMASSpecStrat.maude {TravelSystem}
’ A [] (AIDInLocation( < AID ; 301 > , univ)
-> E <> AIDInLocation( < AID ; 301 > , home)
/\ (AIDInLocation( < AID ; 301 > , home)
-> E <> AIDInLocation( < AID ; 301 > , univ))) ’ mtestep1
The property is satisfied in the initial state (1476 system states, 8228 rewrites, holds in 1476/1476 states)
Besides the above (qualitative) formulae, one can also check quantitative formulae (related to stored information values, for instance). Simulating and verifying complex systems that involve timed migration and communication in distributed environments necessitates the ability to easily describe the key entities and actions of these systems (such as mobility, message exchange, time constraints, and both private and public information) using iMAS systems. Subsequently, it requires the automated verification of both qualitative aspects (like reachability, safety, and liveness) and quantitative aspects using Maude 3 and its strategies.
7. Conclusions and Related Work
In multi-agent systems, information is commonly handled using epistemic logics [30], specifically the multi-agent epistemic logic. These epistemic logics are modal logics that characterize various types of information. They differ not only in syntax but also in their expressiveness and complexity. Essentially, they rely on two key concepts: Kripke structures for modeling their semantics and logic formulas for representing the information of the agents.
In this article, the public and private information appear as information structures, each agent having a private information store used to decide its next actions, and public information being available for all the agents. More importantly, the agents with information sharing are described by a prototyping programming language inspired by process calculi, allowing a compositional construction of large systems by using a parallel composition of the agents (together with their information). The agents are coordinated in space and time by migration in a distributed environment of explicit locations and by explicit timeouts for interactions among agents. It is worth mentioning that the timeouts can define a non-monotonic behavior of the system (e.g., a timeout may trigger a recovery action whenever a timer expires). The nondeterministic and concurrent evolution of such a system (updating the information) is given by formal operational semantics; this allows for describing rigorously the complex behavior of the entire system. In order to illustrate the syntax and semantics of our language and also motivate the multi-agent systems with information sharing, we consider an example in which agents communicate and share information, but also migrate between distributed locations according to explicit timers.
We implemented the prototyping language iMAS in the rewriting engine Maude 3 and emphasized how strategies are used to significantly decrease the possible evolutions of multi-agent systems with information sharing. The entire approach could be used to verify various properties in a strategy-controlled system having a restricted behavior, and to describe and analyze context-sensitive rewriting [31]. For each verified property, the Maude 3 tool indicates the performance by providing the number of reached states and of the performed rewrites.
The way the agents migrate between specific locations to engage in local communications with other agents is similar to the one presented in TiMo, a process calculus introduced initially in [32] and then followed by several extensions. Regarding the implementation, a Java-based software facilitating timed migration for TiMo is described in [33]. Additionally, [34] provides a translation of TiMo into the Real-Time Maude language. It is worth mentioning [35], which models agents with reputation in repuTiMo, and [16], which explores agents' knowledge through sets of trees with information-bearing nodes in knowTiMo. In [36], the safety of medical systems is improved by using multi-agent systems with synchronous and asynchronous information sharing.