Article

Communication-Focused Top-Down Design of Robotic Systems Based on Binary Decomposition

Institute of Control and Automation Engineering, Faculty of Electronics and Information Technology, Warsaw University of Technology, Nowowiejska 15/19, 00-665 Warsaw, Poland
* Author to whom correspondence should be addressed.
Energies 2022, 15(21), 7983; https://doi.org/10.3390/en15217983
Submission received: 29 September 2022 / Revised: 21 October 2022 / Accepted: 22 October 2022 / Published: 27 October 2022

Abstract:
This article proposes a formal method of designing robotic systems that focuses on communication between components and on the standardization of the messages exchanged between them. The objective is to design a robotic system controller systematically, concentrating on communication at an abstract agent level. Communication thus organized, together with its properly defined specification, facilitates further development of the system. The method uses a standard message structure, based on the IEEE FIPA standards, for communication within robotic systems composed of agents. A communication-focused top-down design method based on binary decomposition is proposed and used to design a companion robot working in a kitchen environment. The implemented robotic system is verified by checking whether the specification conforms to the stated requirements, and the characteristics of the designed communication are evaluated. The obtained results show that the proposed method is formally correct, facilitates the implementation of agents, and separates the specification of the system from its implementation. The proposed formal notation facilitates understanding of how the system operates, organizes the design process, and puts the communication between system components at the forefront. The resulting system specification facilitates implementation, while the tools for experimental evaluation of its characteristics confirm that the system fulfills the requirements and that the communication between the system components is correct.

1. Introduction

The energy consumption of electromechanical systems, including robots, depends on the quality of control, and that is influenced by the control system architecture. This article delves into the problem of producing the architecture of the control system by analysing the task, or the category of tasks, that the system has to accomplish. Those tasks provide guidance for the system decomposition. Systems are decomposed into modules (agents in this case); however, those modules need to communicate to produce an integrated system. All these aspects influence the optimality of energy consumption of the overall system, thus it is necessary to study them. The presented design method addresses the pre-specification design phase, which is the basis for the creation of system specification, which in turn is the foundation of its implementation.
The spectrum of approaches to robotic system design stretches from purely implementation-based to specification-based. Implementation-based approaches encompass: (1) coding the system software in a universal programming language (e.g., C, C++, Python) targeting a specific implementation platform, (2) composing the system out of library modules (e.g., functions, procedures, software objects) coded in a universal programming language (e.g., RCCL [1], PASRO [2]), (3) utilising a programming framework, which is composed of a library of modules supplemented by use patterns (e.g., Player [3], MRROC++ [4]), (4) composing the system out of components [5] (usually software black-boxes, or less frequently white-boxes, providing services through well defined interfaces), which form a programming framework (e.g., OROCOS [6], ORCA [7], GenoM [8], ROS [9]). In all implementation-based cases, the system specification is formulated as a general architecture defined in terms of boxes and arrows diagrams. At the other end of this spectrum, systems are built using model driven approaches [10]. The system specification is provided as a system model that can be used for either manual or automatic code generation. In the manual mode, the model is a blueprint for the developers writing the system code. In the automatic mode, a compiler transforms the model into code automatically, usually in many transformation steps—tool-chains are used for that purpose [11]. This article addresses the problem of providing adequate guidance to the designer by proposing a specification method rationalising the design decisions, at the same time facilitating the system architecture design.
This article aspires to provide means for reasoning about the composition of the designed system, both from the point of view of constituent elements and the point of view of the communication between them. The problem tackled in this article is further aggravated by the fact that, although many robotic systems have been created, there is no single method of describing their architecture, e.g., [12,13,14,15,16,17,18,19]. References [20,21] suggest the decomposition of the architecture presentation into structure and activities (style). Nevertheless, Reference [20] acknowledges that it is difficult to distinguish between those two elements in the presentation of the majority of systems. Formal descriptions of robotic systems also exist, e.g., [22,23,24,25,26,27,28]; however, this approach is not popular in the robotics community. This is unfortunate as, besides a clear presentation, they enable formal verification of some system properties at the specification stage of their design, e.g., [29,30]. As most robotics systems have been designed using implementation-based methods, the composition of modules or components dominates, i.e., bottom-up design prevails.
However, the method postulated here uses a decomposition-based top-down approach. In the bottom-up method, the system’s task emerges from the composition of modules, while in the top-down method, the system’s task is the design-guiding principle. Instead of using an implementation-guided approach, the task-guided approach is favoured here. This approach focuses on the system-specification stage.
It should be noted that even if formal system specification methods are employed, the communication aspect is treated as an implementation detail, e.g., [31,32].
As robotic systems are usually inherently complex, the first problem the designer faces is how to tackle this complexity. System engineering provides a hint—use decomposition. Decomposition implies a division of the system into modules or components, allowing the designer to subsequently focus on those elements. However, this subdivision also implies that communication between them will have to be established. Although this communication aspect of system design is of paramount importance, it tends to be hidden behind the computational aspects of individual system modules. The services that the modules or components provide are at the forefront. Thus, this inter-module communication should gain a prominent position in the design process, but usually is treated as subordinate. Communication has to be discussed from two perspectives: (1) who communicates with whom, (2) what is being communicated, i.e., the contents of the communication. The design methodology proposed here takes into account all of these aspects of system design. The novelty of the proposed design methodology lies in the rationalisation of the decomposition process and putting inter-module communication on equal terms with computational aspects of modules.
Reference [20] points out that, for the majority of created systems, it is difficult to determine both their structure and activities. Usually architectures are presented by using informal textual descriptions and block diagrams with a very varied level of detail. This usually leads to problems during implementation and documentation of such systems. This is the consequence of the fact that the robotic system controller design lacks well-established design methodology. Sometimes the design relies on the software specification methods based on general software engineering practice, e.g., the specification is expressed in terms of UML diagrams [33]. However, there is no hint as to how to create the system architecture and how the task influences it. Knowing the hardware system composition and the tasks that it will be put to, there is no standard procedure leading to the specification of the structure and description of activities of the designed system. This paper tries to fill this gap and formulate such a methodology, simultaneously taking into account the abundant communication that appears in complex systems. The focus on communication is its unique feature.
The theses of the article are as follows. The proposed design method leads to an organised analysis of the communication at an abstract agent level. The so-organised communication and its properly defined specification facilitate further system development. The proposed method uses a standard message structure for communication within robotic systems composed of agents. It is based on the IEEE FIPA standard, as utilised by multi-agent systems. The work contributes to the design of robotic systems in terms of: (i) standardization of the message structure for communication within robotic systems composed of agents, applying the IEEE FIPA standards (FIPA, the Foundation for Intelligent Physical Agents, brings together the creators of agent technology); (ii) a top-down, binary, communication-focused design method for robotic systems. However, our work concerns neither the distribution of the system over multiple physical platforms [34,35] nor the security of communication [36,37,38,39].
The structure of the article is as follows. Section 2 reviews related work. Section 3 introduces the notation used in the article and presents the formal approach to the communication within robotic systems composed of embodied agents. Section 4 describes the proposed communication-focused design of robotic systems. Section 5 provides an example of the use of this approach. Section 6 draws the conclusions.

2. Related Work

Many architectures of robotic systems have been proposed (e.g., reactive [40], behaviour-based [12,29], subsumption [41]); however, layered architectures dominate. The decomposition into layers is conducted either based on the frequency of subsystem behaviour repetition or on the subdivision into subtasks the system is to execute [20]. Three-layered structures dominate, e.g., Sense–Plan–Act (SPA), subsumption [42], hybrid planning–reactive [43], hierarchic [44,45], biologically inspired [22,23,46], and those using the belief–desire–intention (BDI) approach [47,48]. Two-layered structures have also been proposed [49]. However, all of them provide guidance to the designer only as templates that can be used for the purpose of imitation.
Ahmad and Babar [50] presented a systematic classification of software architectural solutions for robotic systems. They suggest that architectural solutions support operations enabling information and resource distribution and development such as modelling, designing and programming robotic software.
Among component-based approaches, the Robot Operating System (ROS) is becoming the standard programming framework for developing robotic applications [9,51]. A ROS-based software architecture is composed of components and the connectors between them, which are partially specified in the code and created at run-time. However, there are no generally accepted rules and guidelines for developing applications performing the required task. Moreover, static information about the architecture during system configuration is limited, and consistency checking during development is usually very difficult or even impossible. The quality of the created software depends primarily on the skills and experience of the developers and is mostly a matter of trial and error [52].
The service-driven architecture model or Service-Oriented Architecture (SOA) for robotics relies on loosely connected, heterogeneous and dynamically composed services [53], which are typically located in the cloud. SOA involves using Internet technologies to define services through which robots can be accessed or robots can access the resources of other machines. Service-based robotics enables robot operations by utilizing powerful computing capacities, virtually unlimited storage, and communication resources available from cloud-based infrastructures.
Model Driven Engineering (MDE) is a formal methodology that can be used for the design of robotics software, which involves a transition from code-based software development to model-based development [52]. MDE for robotics provides an opportunity to systematically and simultaneously bring together different levels of abstraction at which developers can work to improve the quality of the system in terms of security, reliability and reusability. The model is defined in terms of formal meta-models (or generic models). Software languages for the specification of system models play a similar role to that of meta-models (e.g., [54]). In rare cases, meta-models are provided, guiding the developers in the design of the system model (e.g., [55]). The problem is that the above mentioned approaches must strike a balance between too much freedom of choice and too little—this is termed proper freedom from choice [56,57]. Unfortunately, there is no one generally accepted definition of a meta-model. Another major problem among existing model-based robotic system design methodologies is their complexity. The software development process is multi-stage, and the number of models required is significant. In consequence, multiple domain languages and tools are needed to support the development process of robotic systems.

3. Inter-Agent Communication

3.1. Notational Convention

Formalisation of the discussion of any subject requires adequate notation. The numerosity of the discussed abstract objects necessitates the introduction of a naming convention that facilitates the decryption of the symbols used. The article discusses both the system structure and its activities. The system structure is described in terms of agents $a$ and their subsystems $s$. Each agent has its own name, which, in general, is represented by $j$ used as a right subscript of the central symbol, i.e., $a_j$, or its control subsystem $c_j$. Similarly, a subsystem of $a_j$ may have its own name $v$; thus, to distinguish it, $s_{j,v}$ is used. Subsystems interact with each other using input (left subscript $x$) and output (left subscript $y$) buffers, i.e., ${}_{x}s_{j,v}$, ${}_{y}s_{j,v}$. Usually the type of subsystem $s'$ from which the information is received, or to which it is dispatched, is denoted by a left superscript of a buffer symbol, i.e., ${}_{x}^{s'}s_{j,v}$, ${}_{y}^{s'}s_{j,v}$, where $s \in \{c, e, r\}$ and $s' \in \{c, e, r, E, R, T\}$, i.e., control subsystem $c$, virtual effector $e$, virtual receptor $r$, real effector $E$, real receptor $R$ or an inter-agent transmission buffer $T$. The internal memory of a subsystem is denoted by ${}^{s}s_{j,v}$. As it is not connected to other subsystems, no leading subscript $x$ or $y$ is used. As the values contained in the buffers change in time, a discrete time index has to be introduced for each of the subsystems separately. If the time counter is $i$, the contents of a specific buffer are labelled with a right superscript, e.g., ${}_{x}s_{j,v}^{i}$ or ${}_{y}s_{j,v}^{i+1}$, to denote that the contents are considered at the current instant $i$ or the next one, $i+1$, respectively.
The activities of an agent result from the activities of its subsystems and their interaction through the abovementioned buffers. The activities of a subsystem $s_{j,v}$ of an agent $a_j$ are described hierarchically using the following concepts. At the lowest level appear transition functions ${}^{s}f_{j,v,\omega}$, terminal conditions ${}^{s}f^{\tau}_{j,v,\omega}$ and error conditions ${}^{s}f^{\varepsilon}_{j,v,\omega}$, where $\omega$ is the name of a particular function. At the intermediate level, behaviours ${}^{s}B_{j,v,\gamma}$ appear, where $\gamma$ is the name of a particular behaviour. At the highest level, a finite state machine (FSM) ${}^{s}F_{j,v}$ appears. ${}^{s}F_{j,v}$ is represented by a graph containing nodes/states ${}^{s}S^{\nu}_{j,v}$ (where $\nu$ is a particular state designator) and directed arcs labelled by initial conditions ${}^{s}f^{\sigma}_{j,v,\delta}$, where $\delta$ is its name. In general, all functions are symbolised by $f$. Terminal, error and initial conditions are predicates, distinguished by the right superscripts $\tau$, $\varepsilon$ and $\sigma$, respectively. Such a superscript is missing in the case of a transition function.
A group of agents $ag_q$, where $q$ is the group name, aggregates one or more agents or subgroups. A group of agents is used as a component of a robotic system when its exact internal structure is unknown or when this component will be decomposed. Communication between groups $ag_q$ and $ag_{q'}$ is carried out through the communication channel $C^{k}_{q,q'}$, where $k$ is its name, and $q$ and $q'$ are the names of the communicating groups of agents. One or more communication channels may exist between a pair of agent groups. The communication channels $C^{k}_{q,q'}$ and $C^{k}_{q',q}$ are identical.

3.2. Influence of Communication on System Activities

An embodied agent $a$ is subdivided into subsystems, namely: a single control subsystem $c$, zero or more virtual and real effectors ($e$ and $E$, respectively), and zero or more virtual and real receptors ($r$ and $R$, respectively) (Figure 1) [27,28,58,59,60]. As only the control subsystem of an agent has the capability of interacting with other agents (precisely speaking, with their control subsystems), this subsystem will be the focus of our attention. The primary activity of the control subsystem $c_j$ of an agent $a_j$ is the computation of the control subsystem transition function ${}^{c}f_{j,m}$ ($m$ is the name of the particular function), which in the decomposed form can be presented as:
$$
\begin{aligned}
{}^{c}c_{j}^{i+1} &:= {}^{c,c}f_{j,m}\left({}^{c}c_{j}^{i},\ {}_{x}^{e}c_{j}^{i},\ {}_{x}^{r}c_{j}^{i},\ {}_{x}^{T}c_{j}^{i}\right) \\
{}_{y}^{e}c_{j}^{i+1} &:= {}^{c,e}f_{j,m}\left({}^{c}c_{j}^{i},\ {}_{x}^{e}c_{j}^{i},\ {}_{x}^{r}c_{j}^{i},\ {}_{x}^{T}c_{j}^{i}\right) \\
{}_{y}^{r}c_{j}^{i+1} &:= {}^{c,r}f_{j,m}\left({}^{c}c_{j}^{i},\ {}_{x}^{e}c_{j}^{i},\ {}_{x}^{r}c_{j}^{i},\ {}_{x}^{T}c_{j}^{i}\right) \\
{}_{y}^{T}c_{j}^{i+1} &:= {}^{c,T}f_{j,m}\left({}^{c}c_{j}^{i},\ {}_{x}^{e}c_{j}^{i},\ {}_{x}^{r}c_{j}^{i},\ {}_{x}^{T}c_{j}^{i}\right)
\end{aligned}
\tag{1}
$$
From the point of view of inter-agent communication, the utilisation of the transmission input buffer ${}_{x}^{T}c_j$ and output buffer ${}_{y}^{T}c_j$ is most important. Whenever ${}^{c}f_{j,m}$ is used, ${}^{c,T}f_{j,m}$ is the only partial function producing the contents of the output buffer ${}_{y}^{T}c_j$. Thus, only the partial transition function ${}^{c,T}f_{j,m}$ produces the response of the agent $a_j$ to the messages inserted into ${}_{x}^{T}c_j$ by the other agents. However, all partial transition functions use ${}_{x}^{T}c_j$ to compute the commands for the other subsystems of the agent $a_j$; thus, the obtained message influences all the activities of the agent. The transition function (1) takes in its arguments at discrete time $i$ and computes the values dispatched to the other subsystems at time $i+1$. This defines the subsystem sampling time. Hence the elementary action of a subsystem is defined for a single sampling period and consists of:
  • Computation of the transition function ${}^{c}f_{j,m}$ and assignment of the computed values to the appropriate components of the output buffer ${}_{y}c_j = [{}_{y}^{e}c_j,\ {}_{y}^{r}c_j,\ {}_{y}^{T}c_j]$ and the internal memory ${}^{c}c_j$,
  • Dispatching the contents of ${}_{y}c_j$ to the associated subsystems,
  • Incrementation of the discrete time counter $i$,
  • Acquisition into ${}_{x}c_j = [{}_{x}^{e}c_j,\ {}_{x}^{r}c_j,\ {}_{x}^{T}c_j]$ of new input values from the cooperating subsystems.
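The elementary action enumerated above can be sketched in code. This is a minimal illustration only, not the article's implementation; the function `elementary_action`, the `transition` dictionary keyed by destination subsystem, and all argument names are hypothetical:

```python
# Sketch of one sampling period of a control subsystem c_j: compute the
# decomposed transition function (1), then hand the results to the run-time.

def elementary_action(memory, inputs, transition):
    """memory     -- internal memory of c_j at instant i
    inputs     -- input buffers, e.g. {'e': ..., 'r': ..., 'T': ...}
    transition -- partial functions keyed by destination: 'c', 'e', 'r', 'T'
    """
    # All partial functions read the SAME instant-i memory and inputs.
    new_memory = transition['c'](memory, inputs)
    outputs = {dst: transition[dst](memory, inputs) for dst in ('e', 'r', 'T')}
    # Dispatching outputs, incrementing i and acquiring new inputs are the
    # run-time's job; here we just return the instant i+1 values.
    return new_memory, outputs
```

Note that every partial function sees the instant-$i$ values, so the message obtained through the transmission buffer influences all of the computed outputs, as stated above.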
This elementary action is iterated until a predicate ${}^{c}f^{\tau}_{j,m}$, called the terminal condition, or ${}^{c}f^{\varepsilon}_{j,m}$, called an error condition, becomes true. The process of iterating the elementary action is called a behaviour ${}^{c}B_{j,m}$ and can be represented by a graph (Figure 2). As a behaviour depends on the transition function, the terminal condition and the error condition, it is parameterised by all of them: ${}^{c}B_{j,m}({}^{c}f_{j,m}, {}^{c}f^{\tau}_{j,m}, {}^{c}f^{\varepsilon}_{j,m})$; however, for brevity, the dependence on ${}^{c}f^{\varepsilon}_{j,m}$ will be neglected. Once behaviour ${}^{c}B_{j,m}$ terminates its actions, another behaviour has to be chosen. This choice is based on a predicate ${}^{c}f^{\sigma}_{j,v,m,m'}$, called the initial condition ($m$ and $m'$ are the designators of the nodes connected by the directed arc labelled by ${}^{c}f^{\sigma}_{j,v,m,m'}$). Initial conditions label the directed arcs connecting the nodes of a graph representing a Finite State Machine (FSM) ${}^{c}F_{j}$. With each of its nodes (states) ${}^{c}S^{m}_{j}$, a behaviour ${}^{c}B_{j,m}$ is associated; thus, switching states switches behaviours. As initial conditions are parameterised by the contents of input buffers, here again the acquired message influences the activities of the agent. Hence the messages introduced into the input buffer ${}_{x}^{T}c_j$ can influence the activities of an agent in three ways:
  • In each iteration of the behaviour, through the influence on the values computed by the iterated transition function (1),
  • By terminating the behaviour, i.e., by causing the terminal condition ${}^{c}f^{\tau}_{j,m}$ to be satisfied,
  • By selecting a new behaviour, i.e., by causing an adequate initial condition ${}^{c}f^{\sigma}_{j,v,m,m'}$ to become true.
All these ways of influencing the activities of an agent result from the assumed model of an embodied agent. However, the designer of the system can also exert influence on the structure of the FSM ${}^{c}F_{j}$. Some of its states can be distinguished as communication states (a single such state is quite common), in which the behaviour of the agent waits for a message. Once a message is obtained, the FSM reacts to it by directing the execution to an appropriate section of the FSM, which is responsible for the execution of a set of behaviours that compose a service; a service-oriented architecture (SOA) results. At each of the four mentioned communication levels, FIPA messages can be utilised.
It should be noted that this propagation of the received message through the four levels of the agent execution structure is beneficial from the point of view of the agent's reactivity to outside stimuli, i.e., incoming messages. Each message can be accessed at the control subsystem sampling rate. If the currently executed service has to be interrupted, e.g., due to a request for a higher-priority service, this subsystem, and hence the agent, will detect this through the evaluation of the terminal condition associated with the currently executed behaviour. The satisfaction of the terminal condition will cause the current behaviour to be aborted, and that in turn will initiate the evaluation of the initial conditions, which will result in switching the FSM state, e.g., to the state responsible for invoking the newly requested service. Such an instantaneous switch from one service to another might not be possible, e.g., if the current service consists in the delivery of a cup of tea to the user, this cup cannot simply be dropped to start the execution of the higher-priority service. In this case the FSM will need to pass through an intermediate state, in which it will execute a behaviour of placing the cup in a safe location, and only then proceed to the execution of the new service.
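The two upper levels of this execution structure, a behaviour iterating the elementary action until its terminal (or error) condition holds, and an FSM selecting the next behaviour through initial conditions, can be sketched as follows. This is an illustrative sketch under our own representation choices (`run_behaviour`, `run_fsm`, behaviours as a state-keyed dictionary, arcs as `(src, initial_condition, dst)` triples), not the article's implementation:

```python
def run_behaviour(memory, read_inputs, transition, terminal, error):
    """Iterate the elementary action until the terminal predicate (f^tau)
    or the error predicate (f^eps) becomes true."""
    while True:
        inputs = read_inputs()                 # acquire the input buffers
        if error(memory, inputs):
            return memory, 'error'
        if terminal(memory, inputs):           # terminal condition satisfied
            return memory, 'done'
        memory = transition(memory, inputs)    # one elementary action


def run_fsm(state, memory, behaviours, arcs, read_inputs, final_states):
    """Execute the behaviour of the current state, then pick the next state
    by evaluating the initial conditions labelling the outgoing arcs."""
    while state not in final_states:
        memory, _ = behaviours[state](memory, read_inputs)
        inputs = read_inputs()                 # initial conditions read buffers
        state = next(dst for (src, cond, dst) in arcs
                     if src == state and cond(memory, inputs))
    return state, memory
```

Because both the predicates and the transition function receive the freshly acquired inputs on every iteration, an incoming message can act at all three points listed above: inside the iterated computation, by satisfying the terminal condition, and by selecting the next behaviour.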
It is worth noticing that in the case of complex robotic systems, e.g., cognitive robots, their control systems contain many CT-type agents [28,61], i.e., computational agents having the capability of communicating with other agents, but lacking effectors and receptors. In this case, (1) becomes:
$$
\begin{aligned}
{}^{c}c_{j}^{i+1} &:= {}^{c,c}f_{j,m}\left({}^{c}c_{j}^{i},\ {}_{x}^{T}c_{j}^{i}\right) \\
{}_{y}^{T}c_{j}^{i+1} &:= {}^{c,T}f_{j,m}\left({}^{c}c_{j}^{i},\ {}_{x}^{T}c_{j}^{i}\right)
\end{aligned}
\tag{2}
$$
In other words, the activities of the FSM of such an agent (of its control subsystem, to be precise) rely mainly on inter-agent communication, i.e., on the contents of the messages transferred between such agents. Thus, ${}^{c,T}f_{j,m}$ is of paramount importance.

3.3. Structure of a Communicate

To take into account all the necessary possibilities of message contents, we propose that the contents of the output and input buffers (a communicate) be set in accordance with the IEEE FIPA standards. The partial transition function ${}^{c,T}f_{j,m}$ should produce the content of the output buffer ${}_{y}^{T}c_j$ as a message $M^{i}_{j,j'}$ (where $j$ and $j'$ are the designators of the communicating agents) and accept the content of the input buffer ${}_{x}^{T}c_j$ in the format of $M^{i}_{j',j}$ (see Figure 3). A communicate transmitted from agent $a_j$ to agent $a_{j'}$ at instant $i$ is a tuple:
$$
M^{i}_{j,j'} = \left\langle V, Z, L, O, I, t_b, m_w, m_t, m_c \right\rangle
\tag{3}
$$
The tuple $M^{i}_{j,j'}$, according to the FIPA ACL Message Structure Specification standard [62], contains the following items:
  • Performative verb $V \in \hat{V}$, where $\hat{V}$ is the set of possible performative verbs defined by [62]: $\hat{V}$ = {accept-proposal, agree, cancel, cfp, confirm, disconfirm, failure, inform, inform-if, inform-ref, not-understood, propagate, propose, proxy, query-if, query-ref, refuse, reject-proposal, request, request-when, request-whenever, subscribe}. It defines the intention of sending the communicate (request, inquiry, statement, etc.), and how the recipient of the message should interpret it.
  • Content $Z$ of the communicate. If the language $L$ is provided, the content should conform to the syntax and semantics of that language.
  • Name of the language $L \in \hat{L}$ used to formulate the content, where $\hat{L}$ is the set of languages used for the formulation of the contents of the communicates. Proper indication of the language expressing the content of each message helps the sender to formulate the content of the message correctly, and the recipient to understand this content correctly. The FIPA standard does not require a language to be defined (no syntax and semantics are required). Different languages can be assigned to different communicative acts between pairs of agents.
  • Name of the ontology $O \in \hat{O}$, where $\hat{O}$ is a set of ontologies. An ontology defines the meaning of, and the relations between, the symbols used in the message content. An ontology is needed when defining the language and the performative verb does not suffice to understand the message content correctly, and also when the agent does not have the common knowledge required for its operation. The ontology determines for the agent how to understand the message in the context of the knowledge that the ontology specifies.
  • Name of the interaction protocol $I \in \hat{I}$, where $\hat{I}$ is a set of possible interaction protocols defined by [62]: $\hat{I}$ = {query, request, subscribe, request-when, cfp, icfp, propose, ...}. It defines the pattern of interaction governing the conversation. The protocol organizes the exchange of messages and determines their order. A conversation between agents is usually bi-directional; the goal is not only to pass an information item, but also to request and obtain a piece of information, to query the other agent, to subscribe to a specific piece of information, etc. Thus, pairs of input and output buffers are considered (${}_{x}^{T}c_j$, ${}_{y}^{T}c_j$); for the sake of brevity, the pair is denoted as ${}_{x,y}^{T}c_j$.
  • Reply-by $t_b \in \hat{T}$, a timestamp indicating the latest time by which the reply to this communicate should be produced.
  • Reply-with identifier $m_w \in \hat{M}$ and in-reply-to identifier $m_t \in \hat{M}$. This pair of identifiers is used to combine messages into short-term conversations. If the sender of a message wants the recipient's reply to match the message it sent, it should set the $m_w$ identifier value. In turn, the recipient who wants to refer to the message with the $m_w$ identifier should set this value in $m_t$.
  • Conversation identifier $m_c \in \hat{M}$, which identifies the ongoing sequence of communicative acts that together form a conversation.
Here $\hat{T}$ is the set of timestamps, $\hat{M}$ is the set of identifiers of the communicates sent, while $\hat{\mathcal{M}}$ is the set of all communicates. The symbol $f^{M}: \hat{\mathcal{M}} \to \hat{M}$ is a function mapping a message to its identifier. The set of all messages that can be sent by the agent $a_j$ is denoted as ${}_{y}^{a}\hat{\mathcal{M}}_j$. The set of all messages that can be received by the agent $a_j$ is denoted as ${}_{x}^{a}\hat{\mathcal{M}}_j$.
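As an illustration only, the communicate tuple (3) can be rendered as a simple data structure. The `Communicate` class and its field names are our hypothetical rendering; the performative set is the FIPA ACL list quoted above:

```python
from dataclasses import dataclass
from typing import Any, Optional

# FIPA ACL performatives, as enumerated in the set V-hat above.
FIPA_PERFORMATIVES = {
    "accept-proposal", "agree", "cancel", "cfp", "confirm", "disconfirm",
    "failure", "inform", "inform-if", "inform-ref", "not-understood",
    "propagate", "propose", "proxy", "query-if", "query-ref", "refuse",
    "reject-proposal", "request", "request-when", "request-whenever",
    "subscribe",
}

@dataclass
class Communicate:
    performative: str                      # V   -- intention of the message
    content: Any                           # Z   -- the content itself
    language: Optional[str] = None         # L   -- language of the content
    ontology: Optional[str] = None         # O   -- ontology for the symbols
    protocol: Optional[str] = None         # I   -- interaction protocol
    reply_by: Optional[float] = None       # t_b -- latest time for a reply
    reply_with: Optional[str] = None       # m_w -- identifier for the reply
    in_reply_to: Optional[str] = None      # m_t -- identifier being answered
    conversation_id: Optional[str] = None  # m_c -- conversation identifier

    def __post_init__(self):
        # Reject performatives outside the FIPA-defined set V-hat.
        if self.performative not in FIPA_PERFORMATIVES:
            raise ValueError(f"unknown performative: {self.performative}")
```

Only the performative and the content are mandatory here, mirroring the fact that the FIPA standard leaves the remaining fields optional.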

4. Robotic System Design Method

The design method assumes that the designer begins with a general robotic system characterized by a set of generic tasks for the robot. Next, he/she specifies the tree of requirements, making the requirements ever more detailed and thus expanding the tree. Based on that, the robotic system is divided into smaller parts (agent groups), and as a result, the communication channels appear. The decomposition is stopped when the requirements are fully addressed, and the resulting agent groups are within the mental grasp of the designer, so that they can be substituted by agents. During decomposition, adequate specification of communication has to be produced. Finally, the specification of agents’ interfaces is done. The top-down approach based on system decomposition implies that system properties follow directly from the requirements. This is in contrast to the bottom-up compositional approach, where system properties emerge during the design process. Detailed specification of communication is completed when the agents substitute the groups. The process is depicted in Figure 4 and consists of the following steps.
  • Specification of system requirements. The designer formulates a set of tasks that the robot should perform. On this basis, a set of requirements is formulated for the robotic system. Each general requirement is specified as a set of more detailed requirements; thus, a hierarchy of requirements is formulated. It can be represented as a requirement tree. This tree will serve as a basis for system decomposition.
    The output of this step is a hierarchy of requirements, specified as a tree.
  • Binary decomposition of a robotic system. The designer divides the system into smaller elements, assuring a proper assignment of tasks (requirements) to each of them. Each decomposition step has to produce elements that have clearly defined and coherent tasks. The elements resulting from decomposition are groups of agents $ag_q$. This step is an iterative process consisting of the following stages, concerned with the analysis of successive requirements from the tree in top-down order.
    (a) Decomposition of a group. One group of agents is divided into two groups, hence the name: binary decomposition. When $ag_q$ is decomposed into $ag_p$ and $ag_r$, the following notation is used: $ag_q := \langle ag_p, ag_r \rangle$.
    (b) Creation of new channels. The need to create from one to $N$ new communication channels might arise. In this case, it is necessary to determine correctly which component needs an action to be realized (e.g., an answer in the case of the query interaction protocol, command execution in the case of the request interaction protocol, new information in the case of the subscribe interaction protocol).
    (c) Reallocation of existing channels. After each decomposition step, the designer must decide on the proper allocation of the existing communication channels, associated with the decomposed group, to the newly created groups. When group $ag_q$ is decomposed into $ag_p$ and $ag_r$, and $C^{k}_{q,q'}$ is reallocated to $ag_p$, the following notation is used: $C^{k}_{q,q'} \rightarrow C^{k}_{p,q'}$. When both decomposed groups $ag_p$ and $ag_r$ need to communicate with the non-decomposed group $ag_{q'}$, the channel is split into two: $C^{k}_{q,q'} \rightarrow C^{m}_{p,q'}, C^{n}_{r,q'}$.
    (d)
    Complementing communication channels. Due to the analysis of new requirements, there may be a need to supplement the existing communication channels, without need for creating new ones.
    Stop condition. The decomposition process stops when all of the requirements have been processed. When the designer concludes that the agent groups have clearly defined and coherent tasks that are comprehensively understood by the programmer, he/she should substitute each of the existing agent groups with a single agent. Otherwise, the analysis of the requirements has to be repeated, making them more detailed.
    The outputs of binary decomposition are the set of agents a ^ and the set of communication channels C ^ . Any group of agents can be composed of a single agent. Specific agents have strictly assigned functionalities.
  • Specification and conversion of the communication channels. The next step is to specify the communication exchanged between the pairs of agents a j and a j′ , substituting groups a g q and a g q′ , between which the communication channel C q , q′ k was created initially. There is a need to properly convert the communication channel into the specification of input and output buffers, and to define the patterns of communication between the agents a j and a j′ . This results in the creation of four buffers for agents a j and a j′ : two input buffers, c x T j and c x T j′ , and two output buffers, c y T j and c y T j′ . Buffers c y T j and c y T j′ are used to send, while c x T j and c x T j′ are used to receive the messages. Those messages are specified by a number of interaction protocols. The interaction protocol I j , j′ p (or I j′ , j p —depending on who initiates the communication) specifies the pattern of communication between the agents. A conversation is an exchange of messages within a specific communication channel. A number of interaction protocols can be associated with one communication channel.
    The definition of the interaction protocol determines the performative verbs for the specific messages. Moreover, every message contains the communication language, and possibly the ontology used.
    While specifying new communication channels, an analysis targeted at communication deadlocks is mandatory. In our approach the analysis is focused on the newly created channels, therefore, it is easier to conduct. This article does not delve into deadlock analysis, as it focuses on the problem of system decomposition. However, in [31], communication analysis of Embodied Agent-based systems is provided, and in [59], a Petri-Net-based description of Embodied Agents is presented. The formal specification and verification of FIPA interaction protocols by means of Colored Petri Nets is proposed in [63]. Therefore, the system designers utilising this approach can check deadlocks with the known tools for Petri-Net deadlock analysis, and the proposed decomposition method enables concentration of the analysis on a part of a complex system, making the analysis simpler.
    The output of this step is the specification of each of the defined communication channels from the set C ^ . Each channel C j , j′ k specifies the set of interaction protocols I ^ j , j′ . Each interaction protocol I j , j′ p ∈ I ^ j , j′ defines a set of messages that are sent from the agent a j to the agent a j′ : M ^ j , j′ , and back from the agent a j′ to the agent a j : M ^ j′ , j . The messages should contain the performative verb V , the interaction protocol I , and optionally the language L and the ontology O .
  • Specification of agent interfaces. The previous steps concentrated on the communication channels, while the current one focuses on the agents (or groups of them). This step specifies the following elements: the sets of performative verbs y V ^ j and x V ^ j that agents send and receive, respectively, in the messages; the sets of languages y L ^ j and x L ^ j used in those messages; as well as the sets of ontologies y O ^ j and x O ^ j employed by those messages. The specification of the interfaces organizes the project, and enables the detection of inconsistencies. Moreover, it helps to organize the handling of errors that occur when an unsupported message is received. However, this is not discussed in this article.
The design process should be iterative as, after the initial system design, it may be necessary to return to earlier steps and re-run through them several times.
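The decomposition bookkeeping described above (groups split in two, channels keeping their numbers while their endpoints are re-attached) can be expressed as a small data model. The following Python sketch is purely illustrative; the class and function names are ours and not part of the formal notation.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Channel:
    number: int        # k: integer assigned when the channel is created
    ends: frozenset    # names of the two agent groups it connects


def decompose(groups, q, p, r):
    """Binary decomposition ag_q := (ag_p, ag_r)."""
    assert q in groups, f"{q} is not an existing group"
    return (groups - {q}) | {p, r}


def reallocate(channels, k, old, new):
    """Reallocation C^k_(old,x) -> C^k_(new,x): the channel keeps its
    number k but is re-attached to the group that absorbed its endpoint."""
    def move(ch):
        if ch.number == k and old in ch.ends:
            other = next(iter(ch.ends - {old}))
            return Channel(k, frozenset({new, other}))
        return ch
    return [move(ch) for ch in channels]


# Two decomposition steps, mirroring the notation above:
groups = decompose({"q"}, "q", "p", "r")         # ag_q := (ag_p, ag_r)
channels = [Channel(1, frozenset({"p", "r"}))]   # create C^1_(p,r)
groups = decompose(groups, "p", "s", "t")        # ag_p := (ag_s, ag_t)
channels = reallocate(channels, 1, "p", "s")     # C^1_(p,r) -> C^1_(s,r)
```

Keeping channel numbers immutable, as the method prescribes, is what makes `reallocate` a pure re-attachment of endpoints rather than a renumbering.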

5. Illustrative Example: Robot Companion

The illustrative example presented herein shows how to apply the proposed design method. It is purposefully kept simple to let the reader concentrate on the method; nevertheless, the resulting system is useful. The purpose of a robot companion is to assist users (usually elderly) at their homes. A survey of robot tasks that an elderly person might request is presented in [64]. A robot companion, as pointed out in [65,66], should exhibit the following capabilities: (1) moving around in the user’s home; (2) being safe for humans, household appliances and furniture; (3) being aware of the dynamic environment; (4) having a simple and intuitive human-robot interface (e.g., voice, display, gestures, tactile); (5) possessing manipulation capability; (6) having a friendly look; (7) having its own task management. A robot companion should be versatile and address as many requests of the elderly as possible. However, there are still technology limitations that force robot specialization. The mentioned range of tasks for a companion robot is wide; therefore, an analysis of the requirements set for a specific robot should be carried out and a selection has to be made.

5.1. Formulation of Requirements

For brevity, we shall focus on the Robot Companion helping in a kitchen. The above general requirements should be analysed from the point of view of the chosen task. The most general requirement for the Robot Companion is that it should be able to acquire and subsequently execute human commands. Next, the robot should decide upon the necessary activity, i.e., interpret the command. To be able to perform the activity, there is a need for creating and managing a plan and afterwards executing it. As the Robot Companion interacts with an elderly person, who can change his/her mind, it is crucial for it to be able to interrupt the action currently performed and schedule the next one. To plan in a kitchen environment, the Robot Companion must be able to model this environment. The environment in which it works is dynamic, so in order to correctly plan its activities, the Robot Companion should deduce what activities it should perform. Moreover, it must identify and understand relationships (including spatial ones) between objects, both existing currently and those that will result from the execution of the plan. To execute the plan, it has to be able to sense and influence the environment using appropriate sensors and actuators. The environment is dynamic, because the robot and other agents change the positions and orientations of objects, thus, the Robot Companion should be able to update the environment model. The hierarchy of proposed requirements is presented in Figure 5.

5.2. Decomposition of the Robot Companion

The Robot Companion can be treated as a monolithic system represented as a group of agents a g RoCo . For brevity, only two decomposition steps (the first and the fourth) are described in detail; the remaining ones are only outlined.
The first decomposition is due to the requirement at the root of the tree (see Figure 5) that says that a robotic system should acquire and execute human commands. The division a g RoCo : = a g ac , a g ec results in two agent groups:
  • a g ac (ac for acquire commands)—group responsible for acquiring the commands from a human;
  • a g ec (ec for execute commands)—group responsible for execution of the commands.
These two groups communicate by sending commands and responses. In general, communication channels C j , j′ k created in the successive steps of the decomposition process are marked by consecutive integers k. Those numbers remain unchanged during the design process. Moreover, those channels are also marked with the symbols of the groups; however, those symbols will evolve. The communication channel C ac , ec 1 is created between groups a g ac and a g ec (Table 1).
The second division results from the requirement that the robotic system interprets the commands obtained from a human, and decides about the command to be executed first. The group a g ac is decomposed into: a g hi (human-machine interface), and a g da (decides upon the command to be executed). This results in the reallocation of the communication channel C ac , ec 1 → C da , ec 1 , and the creation of a new one: C da , hi 2 , that is used to pass interpreted human commands.
The third division results from the requirements that demand the creation and management of a plan, and allow the interruption of the current action and the scheduling of the next one. The group a g da is divided into a g co (coordinates command execution and scheduling), and a g cp (creates and manages a plan). Two channels are reallocated: C da , ec 1 → C co , ec 1 and C da , hi 2 → C co , hi 2 . Also, a new one is created: C co , cp 3 , which is used to request plan generation.
In the fourth step the designer takes into account that the Robot Companion should infer actions, model the environment and manage the knowledge (see Figure 5). Inference of actions requires a component responsible for creating abstract plans: a g pl (pl for planner). In turn, the group responsible for knowledge management will provide knowledge about the objects and the relationships between them: a g km (km for knowledge manager). Thus, the group a g cp is split into two: a g cp : = a g pl , a g km (see Table 2). There is a need to reallocate the channel C co , cp 3 , and it is obvious that it will be handled by the group responsible for creating the plan: C co , cp 3 → C co , pl 3 . A new channel C pl , km 4 must be created between the new groups to enable the transfer of knowledge extracted from the knowledge base.
In the fifth step, we take into account that the robotic system must execute the plan, and influence and sense the environment by controlling the robot. Thus, a g ec is decomposed into: a g ex (responsible for influencing and sensing the environment by operating the hardware) and a g ta (responsible for executing the plan). C co , ec 1 is reallocated: C co , ec 1 → C co , ta 1 . Two communication channels are created: C ex , ta 5 , where the low-level commands for the hardware, i.e., acquiring sensor readings and controlling the actuators, are transmitted, and C ta , km 6 , passing the knowledge needed by a ta .
The last step takes into account that the robotic system should update the environment model. A group responsible for managing the environment model ( a g km ) already exists, sensor data are provided by a g ex , and the communication channel C ta , km 6 already exists; thus, only the content transmitted by this channel has to be supplemented.
After exhausting the requirement tree, the decomposition process is complete. Six groups of agents emerged (see Figure 6). Again, for brevity, the description of the decomposition of a g ex , which controls the robot hardware, has been omitted here. The hardware configuration is described below. Five groups ( a g ta , a g pl , a g km , a g co , a g hi ) are transformed into respective agents: a ^ = { a ta , a pl , a km , a co , a hi } . Those agents have clearly defined and coherent tasks: a pl generates a plan based on a problem definition; a km manages the knowledge database; a co coordinates the set of commands received from a human, orders plans from a pl , and schedules their execution; a ta executes the plan and dispatches low-level instructions to a g ex ; a hi acts as a human-machine interface. Further decomposition would result in too much fragmentation and too much communication effort. Moreover, during decomposition, six communication channels were identified: C ^ = { C co , ta 1 , C co , hi 2 , C co , pl 3 , C pl , km 4 , C ex , ta 5 , C ta , km 6 } .
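The consistency of the resulting sets can be checked mechanically. The following Python sketch (our own transcription of the agent and channel names listed above, not part of the authors' tooling) verifies that every channel connects two distinct, existing components and that channel numbers are unique:

```python
# Components of the final decomposition: five agents plus the group ag_ex.
components = {"ta", "pl", "km", "co", "hi", "ex"}

# Channels as (number, end_1, end_2), transcribed from the set C^.
channels = [(1, "co", "ta"), (2, "co", "hi"), (3, "co", "pl"),
            (4, "pl", "km"), (5, "ex", "ta"), (6, "ta", "km")]

# Every channel must connect two distinct, existing components.
assert all(a in components and b in components and a != b
           for _, a, b in channels)

# Channel numbers must be unique (they never change once assigned).
assert len({k for k, _, _ in channels}) == len(channels)
```

Such checks are trivial here, but they scale to larger decompositions where manual inspection of the channel table becomes error-prone.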

5.3. Hardware Configuration

The Velma robot (Figure 7) has adequate capabilities to satisfy the a g ex requirements; thus, it became the hardware part of the Robot Companion [67]. It is the torso of a humanoid robot with two KUKA LWR arms, with additional F/T sensors embedded in the wrists, to which three-fingered BarrettHand grippers are attached. The torso is mounted on a 1 DOF revolute column which also supports a 2 DOF neck equipped with cameras and a Kinect sensor (the head). The arms are torque-controlled, employing impedance control [67,68]. The control laws offer two basic modes of operation: Cartesian and configuration space [69]. The neck and grippers are position-controlled. The control system is implemented using the FABRIC programming framework [70].

5.4. Specification of Communication Channels

At this stage of design, the messages sent between pairs of agents need to be refined, leading to the specification of the interaction protocols. Each communication channel should have a specified set of interaction protocols according to which the conversations take place. Moreover, every interaction protocol should contain: performative verbs, languages that specify the content of the messages, and possibly the ontologies. Due to the limited space, only two communication channels are described. However, the creation of the remaining follows the same pattern.
The communication channel C co , ta 1 is used to send plans to be executed. The first message requests a ta to execute the plan. The plan should be understandable to the agents; therefore, the task notation using an HFSM (Hierarchical Finite State Machine) is chosen. As a co issues a request to a ta , the performative verb is V request , and the content Z is expressed using L HFSM . L HFSM is used for requesting the execution of the generated plan; thus, a typical message is up to dozens of kB.
While executing the plan, a ta sends its status (the name of its current state). Together with the performative verb V inform it sends Z from a specified set of states: Z ∈ {Idle, Running, Error, Terminating, Succeeded, Preempted, Replan}. The language L SW (SW for Supervised Worker) was proposed to organize the list; thus, Z should be formulated using L SW . The detailed description of Z is as follows. Idle—awaiting the next plan. Running—executing a plan. Error—the plan execution ended with an error. Terminating—the plan execution is being terminated. Succeeded—the plan finished with success. Preempted—the plan is preempted due to an interruption from a co . Replan—the plan cannot be further executed and a new plan is required.
The agent a co confirms each message it receives from a ta by a message with the performative verb V confirm and empty content, as the intention provides complete information.
In the case when the plan has to be interrupted, a co sends an interrupting message to a ta . The performative verb is V cancel , and again sending any content is not necessary. The messages sent with L SW have no content, so their size is 100 to 200 bytes, and they are sent every time a ta modifies its internal state. The detailed specification is presented as follows.
V ∈ { confirm , cancel } ⇒ ( L = L SW ) ∧ ( Z = ∅ )
V ∈ { request } ⇒ ( L = L HFSM ) ∧ ( Z = plan )
These four messages make up the interaction protocol I co , ta coordinate . The interaction protocol managing the conversation between a co and a ta , together with the detailed formulation of the messages is shown in Figure 8.
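The admissibility rule for messages on this channel can be sketched as a simple check. The Python function below is an illustration only, not the authors' implementation; the state names follow the textual description of L SW above.

```python
# States reported by a_ta in L_SW inform messages (as described in the text).
SW_STATES = {"Idle", "Running", "Error", "Terminating",
             "Succeeded", "Preempted", "Replan"}


def valid_coordinate_msg(performative, language, content):
    """Admissibility of a message on channel C^1_(co,ta):
    confirm/cancel carry no content in L_SW, inform carries a state
    name in L_SW, and request carries a plan expressed in L_HFSM."""
    if performative in ("confirm", "cancel"):
        return language == "L_SW" and content is None
    if performative == "inform":
        return language == "L_SW" and content in SW_STATES
    if performative == "request":
        return language == "L_HFSM" and content is not None
    return False
```

Encoding the rule this way allows both ends of the channel to reject malformed messages before attempting to interpret their content.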
The communication channel C pl , km 4 is used to transmit knowledge, in the form of queries produced by a pl , and answers formulated and sent by a km . For both agents to understand the queries and knowledge, the language L WM (WM for World Model) is proposed. The agent initiating the conversation is a pl , asking for knowledge with content Z expressed in L WM . The ontology O RoCo specifies the knowledge about the robot’s environment. It is built from scratch in OWL to keep it as simple as possible. The ontology was inspired by the KnowRob2 project [71]. It was used to implement a simple knowledge base. The performative verb of the message is V query ref . Agent a km formulates the answer to the query, and informs the inquirer; thus, the performative verb is V inform ref .
The interaction protocol I pl , km query , together with the detailed formulation of the messages is presented in Figure 9. After designing the six communication channels, the system shown in Figure 10 results.
The size of the content formulated using L WM depends on the purpose of the message, e.g., requests are up to 250 bytes, but the size of the answers depends on the particular knowledge—from 250 bytes (e.g., the state of the gripper) up to 1 MB (in the case of complex or accurate information about the environment). The messages are sent every time knowledge is needed, i.e., mainly during plan generation.
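A query/answer exchange of this kind can be sketched as follows. This is a minimal illustration: the Message fields mirror the FIPA-style structure used in this article, while the function names and string constants are our own.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Message:
    performative: str               # V
    protocol: str                   # I
    language: str                   # L
    content: object                 # Z
    ontology: Optional[str] = None  # O


def make_query(content):
    """a_pl asks a_km for knowledge: query-ref in L_WM under O_RoCo."""
    return Message("query-ref", "I_pl_km_query", "L_WM", content, "O_RoCo")


def make_answer(query, knowledge):
    """a_km replies with inform-ref, reusing protocol/language/ontology."""
    assert query.performative == "query-ref"
    return Message("inform-ref", query.protocol, query.language,
                   knowledge, query.ontology)
```

Reusing the protocol, language, and ontology fields of the incoming query in the reply keeps the conversation internally consistent without extra negotiation.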

5.5. Agents in the Robot Companion System

Having defined the communication between agents, we can proceed to the specification of the embodied agents themselves. A crude specification of the agents arises from the system decomposition; however, a more detailed description is required. Thus, a brief description of the key behaviours, and the specification of the FSMs governing them, for each agent, is presented below (Figure 11). All of the agents considered here (Figure 10) contain only their control subsystem—they are purely computational agents.
The objective of the coordinator agent a co is to select one task out of all the tasks requested and coordinate its execution. The problem of task execution coordination and its solution for service robots is published in [72,73].
Based on configurable parameters and the scheduling algorithm, a co decides when to switch the plan being executed by a ta to another one. The agent manages the procedure of suspending the current plan and starting the next plan execution. The FSM F c co of the control subsystem c co of the agent a co is tailored to fit the Robot Companion requirements. The FSM has 5 states (Figure 11a). A brief description of the behaviours executed in each of them is shown in Table 3. The agent receives messages from a hi in state S c co trigger , from a pl in state S c co compSP , and from a ta in all states.
The purpose of a pl is to generate the plan on demand of a co , in response to I co , pl request . During plan generation it may need additional knowledge, so it queries a km using I pl , km query . The activities of the control subsystem c pl of the agent a pl are governed by the FSM shown in Figure 11b. In each state of the FSM, an associated behaviour is executed (Table 4).
The agent a km is responsible for query answering and for updating the knowledge database. At initialization the agent starts the I km , ta subscribe protocol, which results in receiving updates on the environment states. Its activity is governed by the FSM shown in Figure 11c. In each state of the FSM, a corresponding behaviour is executed (Table 5).
The agent a ta communicates with a co , a km , and a g ex . It receives plans and plan execution interruption messages from a co by means of I co , ta coordinate . In reaction to the plan execution and interruption requests, a ta switches its states. For the sake of brevity, the communication with a g ex is not presented. The FSM that governs the activity of a ta is shown in Figure 11d. In each state of the FSM, a corresponding behaviour is executed (Table 6).
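The control-subsystem FSMs of these agents can be sketched with a generic skeleton. The Python code below is illustrative only; the real state names and transition events are given in Figure 11 and Tables 3 to 6, while the ones here are our own simplification.

```python
class FSM:
    """Minimal finite state machine: `transitions` maps
    (state, event) pairs to successor states; unknown events
    leave the state unchanged."""

    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions

    def dispatch(self, event):
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state


# Illustrative plan-executing agent reacting to I_co_ta messages:
ta = FSM("Idle", {
    ("Idle", "request"): "Running",        # plan received from a_co
    ("Running", "cancel"): "Terminating",  # plan execution preempted
    ("Running", "finished"): "Idle",
    ("Terminating", "stopped"): "Idle",
})
```

Driving each agent from a single transition table of this kind keeps the mapping between the specification (the FSM diagrams) and the implementation direct and checkable.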

5.6. Implementation

The system is implemented using ROS 1. The message exchange is performed using the Remote Procedure Call (RPC) mechanism. Agents are implemented as ROS nodes.
The standardized message format, proposed by the IEEE FIPA standards and specified in this article, is treated as a design pattern from the system implementation point of view. It is implemented as a ROS service, and used for all agents. The interaction protocols of the agents are specified with the ClassInterfaceInfo class. Thus, the messages of any agent are specified by filling in the elements of an IEEE FIPA standard message. For example, the interaction protocol I co , pl request is specified as:
ClassInterfaceInfo   ( [ ( request , generatePlan ,   [ GroundTaskNetwork ] ,   HierarchicalPlan ) ] ) ,
where ‘request’ is the performative, ‘generatePlan’ is the query name, ‘[GroundTaskNetwork]’ is the request content class expressed in L HFSM , and ‘HierarchicalPlan’ is the response content class, also given in L HFSM . Besides enforcing the IEEE FIPA message standard, the ClassInterfaceInfo class verifies whether the content follows the specified language. The part of the code responsible for communication has been separated, and packaged as programming modules. The modules implement, e.g., the ClassInterfaceInfo class, all interfaces for all agents, and a variety of world model objects transmitted between the agents.
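The article does not show the implementation of ClassInterfaceInfo, so the following Python sketch is an illustrative reconstruction under our own assumptions: entries are (performative, query name, request content class, response content class) tuples, and checking is reduced to an isinstance test. The placeholder content classes stand in for the real L HFSM classes.

```python
class GroundTaskNetwork:    # placeholder content class; the real one
    pass                    # is defined by L_HFSM in the actual system


class HierarchicalPlan:     # placeholder response content class
    pass


class ClassInterfaceInfo:
    """Declares interaction protocols as (performative, query name,
    request content class, response content class) entries and checks
    that a call conforms to its declaration."""

    def __init__(self, entries):
        self.entries = {(p, name): (req, resp)
                        for p, name, req, resp in entries}

    def check_request(self, performative, name, content):
        req_cls, _ = self.entries[(performative, name)]
        if not isinstance(content, req_cls):
            raise TypeError(f"{name} expects {req_cls.__name__}")
        return True


iface = ClassInterfaceInfo(
    [("request", "generatePlan", GroundTaskNetwork, HierarchicalPlan)])
```

With such a declaration in place, a malformed call fails at the interface boundary rather than deep inside the receiving agent.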
The standardization of interaction protocols enabled verification of the implementation against the specification. It made it easier to verify the implementation of the agents by checking that all kinds of queries are supported at the agent code level, and that the arguments and returned values comply with the specification. Due to the structured messages and well-organised communication at the abstract agent level, it is straightforward to implement query checks in the agents and to analyse inter-agent communication abstracting from the implementation details. The approach presented in this article requires the definition of six communication languages used in the system. As a result, an organization of the transmitted data structures into a hierarchy of classes emerged.

5.7. Experiment

In the previous sections, the proposed method was used to design a robotic system, and then its implementation was described. The experiment presented herein verifies that the thus-produced specification conforms to the requirements. Moreover, the characteristics of the designed communication are assessed. These characteristics are as follows: the size of the messages, their number, and the frequency of their occurrence in specific timeframes.
The sizes of the messages were estimated in Section 5.4, but their true sizes, their number and frequency of occurrence, depend on particular tasks that the robotic system executes. Therefore, in order to determine whether the communication between the individual components of the system is efficient, these characteristics should be verified in simulations, and then in the real robotic system. The approach for testing the communication is also sketched. Due to limited space, only one test case is described.
The designed robotic system was run under the following conditions. The group a g ex is built of robot hardware components and software running on a machine with a 4-core CPU and a Linux OS with an RT (real-time) kernel. The software is semi-automatically generated from the specification using FABRIC [70] and it meets hard RT requirements. The remaining agents, i.e., a co , a ta , a hi , a pl , a km , do not require hard RT, and they are deployed on another machine with a Linux OS and ROS 1. Both machines are connected by a LAN using an Ethernet cable.
In the presented experiment, a complex task was performed that employs interleaved planning and execution. The experimental task was to pour content of a jar into a bowl. Both objects were initially located in an open cabinet. A video of the experiment is available at the link: https://vimeo.com/720200335 (accessed on 21 October 2022). The provided video, as well as Table 7, show that the requirements imposed on the system, as stated in Section 5.1, are fulfilled.
During the experiment the occurrence of each message was logged along with the metadata (size, sender, recipient, performative, and interaction protocol) and the time at which it was sent. Based on this information, the detailed analysis presented below was carried out.
The execution of the given task took 778 s and required 1856 messages to be exchanged between the agents. A timetable summarizing the experiment is shown in Table 7.
Agents a co , a ta , a hi , a pl and a km share the same code for inter-agent communication. The intensity of communication in the whole system is presented in Figure 12. There is intense communication from 97 s up to 152 s (planning #1), and a few other communication episodes. There are two planning processes. In the first one (#1) the robot plans how to take the jar and bowl out of the cabinet, and put the bowl on the table. In the second one (#2) the robot plans how to pour the contents of the jar into the bowl.
86% of the sent messages are below 0.5 kB, but there are larger ones too, of up to 852 kB. This shows a great variety in message sizes. During the execution of a command, agents apply specific communication, depending on the current situation (see Table 7). The differences in message sizes are in line with the estimates of the various message types made in Section 5.4.
The number of messages sent and received by particular agents is presented in Table 8. Most of the communication is carried out by agents a km and a pl through the communication channel C pl , km 4 . For the whole planning process #1 the required bandwidth is on average 32 kbps, but the peak (150 kbps) occurs at the end of the first planning process (second 153) (see Figure 13). The number of messages sent in planning #1 is 1598. Later on, the channel is not used (154–680 s), until 681–682 s, when planning #2 is done. Then, 57 messages are exchanged, with an average bandwidth of 32 kbps. In total, the minimum message size is 231 B, the maximum is 98 kB, and the average size is 654 B. This shows the accuracy of the predictions made during the construction of the communication channels (see Section 5.4). The distribution of messages over time is difficult to estimate accurately. The size of a message querying a km is small, as it contains only the specification of the knowledge needed. However, the size of the reply from a km depends on the specific piece of knowledge. The message size can be small (up to 1 kB), e.g., information on the gripper state, or the creation of a hypothetical state of the knowledge base; large (dozens of kB), e.g., a sequence of positions of the bowl held by the gripper, or possible grasps of the jar; or even very large (up to 100 kB), e.g., the current state of the whole environment.
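The kind of log analysis described above can be reproduced with a few lines of code. The Python sketch below is illustrative: the field layout and the sample entries are our own, not the actual experiment log.

```python
from collections import defaultdict

# Each logged entry: (time_s, sender, recipient, performative, size_bytes)
log = [
    (97.2, "a_pl", "a_km", "query-ref", 240),
    (97.4, "a_km", "a_pl", "inform-ref", 12_000),
    (153.0, "a_km", "a_pl", "inform-ref", 98_000),
]


def bytes_per_second(log):
    """Total bytes sent in each 1-second window (basis of bandwidth plots)."""
    buckets = defaultdict(int)
    for t, _src, _dst, _perf, size in log:
        buckets[int(t)] += size
    return dict(buckets)


def per_agent_counts(log):
    """Number of messages sent and received by each agent."""
    sent, received = defaultdict(int), defaultdict(int)
    for _t, src, dst, _perf, _size in log:
        sent[src] += 1
        received[dst] += 1
    return dict(sent), dict(received)
```

Aggregating the log this way yields exactly the quantities reported in this section: per-second bandwidth, per-agent message counts, and the size distribution.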

6. Conclusions

The article presents a method of designing robotic systems which puts the communication between system components at the forefront. This systematic design method facilitates system implementation and enables experimental evaluation of its characteristics, thus confirming that the system fulfills the requirements, and that the communication between the system components is correct. This strengthens the authors’ claim that the communication-focused top-down robotic system design method based on binary decomposition is correct and useful. Decomposition-based top-down design methods, if applied systematically, ensure that the resulting system fulfills the requirements, as opposed to bottom-up synthesis techniques, which produce emergent system properties that have to be deduced and subsequently tested against the requirements. Moreover, the method uses the standardization of the message structure for communication within robotic systems composed of agents, based on the IEEE FIPA standard known from multi-agent systems. In contrast to the commonly used bottom-up approach, where designers take components from a library or a framework and connect them in compliance with their interfaces, the proposed method is strictly top-down. It separates the specification of the system from its implementation. It shows how to systematically decompose groups and connect them by communication channels.
The communication-focused system decomposition enables the analysis of the communication at the abstract agent level. Organized communication and its properly defined specification facilitate the system’s further development. System-specific messages and language definitions are grouped and separated from the implementation. The implementation of the software that handles the communication process is generic. Therefore, there is no need to test the software multiple times. Only verification against the system specifications is required.
Highlighting communication in the design process reveals the critical aspects of the system’s operation. The bandwidth of the communication channels and the communication intensity can be estimated during development. On this basis, it is possible to properly define or redefine the architecture of the entire robotic system and avoid bottlenecks. The application of the communication-focused system decomposition method, with adequately defined requirements, allows for designing a robotic system with optimal control and energy consumption.

Author Contributions

Conceptualization, C.Z.; methodology, P.P. and C.Z.; software, D.S. and W.D.; validation, D.S. and P.P.; writing—original draft preparation, C.Z., P.P. and W.S.; writing—review and editing, C.Z., P.P., W.S., D.S. and W.D.; supervision, W.S.; project administration, P.P. and W.S.; funding acquisition, W.S. All authors have read and agreed to the published version of the manuscript.

Funding

Research was funded by the Centre for Priority Research Area Artificial Intelligence and Robotics of Warsaw University of Technology within the Excellence Initiative: Research University (IDUB) programme.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hayward, V.; Paul, R. Robot Manipulator Control Under Unix RCCL: A Robot Control C Library. Int. J. Robot. Res. 1986, 5, 94–111. [Google Scholar] [CrossRef]
  2. Blume, C.; Jakob, W. PASRO: Pascal for Robots; Springer: Berlin/Heidelberg, Germany, 1985. [Google Scholar]
  3. Collett, T.; MacDonald, B.; Gerkey, B. Player 2.0: Toward a Practical Robot Programming Framework. In Proceedings of the Australasian Conference on Robotics and Automation (ACRA), Sydney, Australia, 5–7 December 2005. [Google Scholar]
  4. Zieliński, C.; Winiarski, T. Motion Generation in the MRROC++ Robot Programming Framework. Int. J. Robot. Res. 2010, 29, 386–413. [Google Scholar] [CrossRef]
  5. Brugali, D.; Scandurra, P. Component-based Robotic Engineering. Part I: Reusable building blocks. IEEE Robot. Autom. Mag. 2009, 16, 84–96. [Google Scholar] [CrossRef]
  6. Bruyninckx, H. OROCOS: Design and implementation of a robot control software framework. In Proceedings of the IEEE International Conference on Robotics and Automation, Washington, DC, USA, 11–15 May 2002. [Google Scholar]
  7. Brooks, A.; Kaupp, T.; Makarenko, A.; Williams, S.; Orebäck, A. Orca: A component model and repository. In Software Engineering for Experimental Robotics; Springer: Berlin/Heidelberg, Germany, 2007; pp. 231–251. [Google Scholar]
  8. Mallet, A.; Pasteur, C.; Herrb, M.; Lemaignan, S.; Ingrand, F. GenoM3: Building middleware-independent robotic components. In Proceedings of the IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 3–7 May 2010; pp. 4627–4632. [Google Scholar]
  9. Quigley, M.; Gerkey, B.; Conley, K.; Faust, J.; Foote, T.; Leibs, J.; Berger, E.; Wheeler, R.; Ng, A. ROS: An open-source Robot Operating System. In Proceedings of the Open-Source Software Workshop at the International Conference on Robotics and Automation (ICRA), Kobe, Japan, 12–17 May 2009. [Google Scholar]
  10. Brambilla, M.; Cabot, J.; Wimmer, M. Model-Driven Software Engineering in Practice. Synth. Lect. Softw. Eng. 2017, 3, 1–207. [Google Scholar]
  11. MacDonald, B.; Biggs, G.; Collett, T. Software Environments for Robot Programming. In Software Engineering for Experimental Robotics; Brugali, D., Ed.; Springer: Berlin/Heidelberg, Germany, 2007; pp. 107–124. [Google Scholar] [CrossRef]
  12. Matarić, M.J.; Michaud, F. Behavior-Based Systems. In Springer Handbook of Robotics; Siciliano, B., Khatib, O., Eds.; Springer: Berlin/Heidelberg, Germany, 2008; pp. 891–909. [Google Scholar]
  13. Butler, Z.; Rizzi, A. Distributed and Cellular Robots. In Springer Handbook of Robotics; Khatib, O., Siciliano, B., Eds.; Springer: Berlin/Heidelberg, Germany, 2008; pp. 911–920. [Google Scholar]
  14. Yim, M.; Shen, W.M.; Salemi, B.; Rus, D.; Moll, M.; Lipson, H.; Klavins, E.; Chirikjian, G.S. Modular Self-Reconfigurable Robot Systems [Grand Challenges of Robotics]. IEEE Robot. Autom. Mag. 2007, 14, 43–52. [Google Scholar] [CrossRef]
  15. Farinelli, A.; Iocchi, L.; Nardi, D. Multirobot systems: A classification focused on coordination. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2004, 34, 2015–2028. [Google Scholar] [CrossRef] [Green Version]
  16. Dudek, G.; Jenkin, M.R.M.; Milios, E.; Wilkes, D. A taxonomy for multi-agent robotics. Auton. Robot. 1996, 3, 375–397. [Google Scholar] [CrossRef]
  17. Doriya, R.; Mishra, S.; Gupta, S. A brief survey and analysis of multi-robot communication and coordination. In Proceedings of the 2015 International Conference on Computing, Communication & Automation, Noida, India, 15–16 May 2015; pp. 1014–1021. [Google Scholar] [CrossRef]
  18. Chibani, A.; Amirat, Y.; Mohammed, S.; Matson, E.; Hagita, N.; Barreto, M. Ubiquitous robotics: Recent challenges and future trends. Robot. Auton. Syst. 2013, 61, 1162–1172. [Google Scholar] [CrossRef]
  19. Campusano, M.; Fabry, J.; Bergel, A. Live programming in practice: A controlled experiment on state machines for robotic behaviors. Inf. Softw. Technol. 2019, 108, 99–114. [Google Scholar] [CrossRef]
  20. Kortenkamp, D.; Simmons, R.; Brugali, D. Robotic Systems Architectures and Programming. In Springer Handbook of Robotics, 2nd ed.; Siciliano, B., Khatib, O., Eds.; Springer: Berlin/Heidelberg, Germany, 2016; pp. 283–306. [Google Scholar]
  21. Coste-Maniere, E.; Simmons, R. Architecture, the backbone of robotic systems. In Proceedings of the IEEE International Conference on Robotics and Automation ICRA ’00, San Francisco, CA, USA, 24–28 April 2000; Volume 1, pp. 67–72. [Google Scholar] [CrossRef]
  22. Lyons, D.M.; Arbib, M.A. A Formal Model of Computation for Sensory-Based Robotics. IEEE Trans. Robot. Autom. 1989, 5, 280–293. [Google Scholar] [CrossRef]
  23. Lyons, D.M. A Schema-Theory Approach to Specifying and Analysing the Behavior of Robotic Systems. In Adaptive Behavior and Intelligent Systems without Symbols and Logic; Studies in Cognitive Systems; Kluwer Academic: Alphen aan den Rijn, The Netherlands, 2001; Volume 2, pp. 51–70. [Google Scholar]
  24. Zieliński, C.; Kornuta, T. Diagnostic Requirements in Multi-robot Systems. In Intelligent Systems in Technical and Medical Diagnostics; Korbicz, J., Kowal, M., Eds.; Advances in Intelligent Systems and Computing; Springer: Berlin/Heidelberg, Germany, 2014; Volume 230, pp. 345–356. [Google Scholar] [CrossRef]
  25. Armbrust, C.; Kiekbusch, L.; Ropertz, T.; Berns, K. Soft Robot Control with a Behaviour-Based Architecture. In Soft Robotics; Springer: Berlin/Heidelberg, Germany, 2015; pp. 81–91. [Google Scholar]
  26. Figat, M.; Zieliński, C. Methodology of Designing Multi-agent Robot Control Systems Utilising Hierarchical Petri Nets. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 3363–3369. [Google Scholar]
  27. Kornuta, T.; Zieliński, C.; Winiarski, T. A universal architectural pattern and specification method for robot control system design. Bull. Pol. Acad. Sci. Technol. Sci. 2020, 68, 3–29. [Google Scholar] [CrossRef]
  28. Zieliński, C. Robotic System Design Methodology Utilising Embodied Agents. In Automatic Control, Robotics and Information Processing; Kulczycki, P., Korbicz, J., Kacprzyk, J., Eds.; Advances in Intelligent Systems and Computing; Springer: Berlin/Heidelberg, Germany, 2021; Volume 296, pp. 523–561. [Google Scholar] [CrossRef]
  29. Kiekbusch, L.; Armbrust, C.; Berns, K. Formal verification of behaviour networks including sensor failures. Robot. Auton. Syst. 2015, 74, 331–339. [Google Scholar] [CrossRef]
  30. Kiekbusch, L.; Armbrust, C.; Berns, K. Formal Verification of Behaviour Networks Including Hardware Failures. In Proceedings of the International Conference Intelligent Autonomous Systems IAS 13; Advances in Intelligent Systems and Computing. Springer: Berlin/Heidelberg, Germany, 2016; Volume 302, pp. 1571–1582. [Google Scholar]
  31. Zieliński, C.; Figat, M.; Hexel, R. Communication Within Multi-FSM Based Robotic Systems. J. Intell. Robot. Syst. 2019, 93, 787–805. [Google Scholar] [CrossRef] [Green Version]
  32. Hussain, R.; Zielinska, T.; Hexel, R. Finite state automaton based control system for walking machines. Int. J. Adv. Robot. Syst. 2019, 16, 1729881419853182. [Google Scholar] [CrossRef]
  33. Pilone, D.; Pitman, N. UML 2.0 in a Nutshell; O’Reilly: Sebastopol, CA, USA, 2005. [Google Scholar]
  34. Kaptein, F.; Kiefer, B.; Cully, A.; Celiktutan, O.; Bierman, B.; Rijgersberg-peters, R.; Broekens, J.; Van Vught, W.; Van Bekkum, M.; Demiris, Y.; et al. A Cloud-Based Robot System for Long-Term Interaction: Principles, Implementation, Lessons Learned. J. Hum. Robot Interact. 2021, 11, 1–27. [Google Scholar] [CrossRef]
  35. Dudek, W.; Szynkiewicz, W.; Winiarski, T. Cloud computing support for the multi-agent robot navigation system. J. Autom. Mob. Robot. Intell. Syst. 2017, 11, 67–74. [Google Scholar] [CrossRef]
  36. Dieber, B.; Breiling, B.; Taurer, S.; Kacianka, S.; Rass, S.; Schartner, P. Security for the Robot Operating System. Robot. Auton. Syst. 2017, 98, 192–203. [Google Scholar] [CrossRef]
  37. DiLuoffo, V.; Michalson, W.R.; Sunar, B. Robot Operating System 2: The need for a holistic security approach to robotic architectures. Int. J. Adv. Robot. Syst. 2018, 15, 1729881418770011. [Google Scholar] [CrossRef]
  38. Dudek, W.; Szynkiewicz, W. Cyber-security for Mobile Service Robots—Challenges for Cyber-physical System Safety. J. Telecommun. Inf. Technol. 2019, 2, 29–36. [Google Scholar] [CrossRef]
  39. Krzykowska-Piotrowska, K.; Dudek, E.; Siergiejczyk, M.; Rosiński, A.; Wawrzyński, W. Is Secure Communication in the R2I (Robot-to-Infrastructure) Model Possible? Identification of Threats. Energies 2021, 14, 4702. [Google Scholar] [CrossRef]
  40. Baklouti, E.; Amor, N.B.; Jallouli, M. Reactive control architecture for mobile robot autonomous navigation. Robot. Auton. Syst. 2017, 89, 9–14. [Google Scholar] [CrossRef]
  41. Brooks, R.A. Intelligence Without Representation. Artif. Intell. 1991, 47, 139–159. [Google Scholar] [CrossRef]
  42. Brooks, R.A. A robust layered control system for a mobile robot. IEEE J. Robot. Autom. 1986, 2, 14–23. [Google Scholar] [CrossRef] [Green Version]
  43. Arkin, R.C. Behavior-Based Robotics; MIT Press: Cambridge, MA, USA, 1998. [Google Scholar]
  44. Gat, E. On Three-Layer Architectures. In Artificial Intelligence and Mobile Robots; Kortenkamp, D., Bonnasso, R.P., Murphy, R., Eds.; AAAI Press: Cambridge, MA, USA, 1998; pp. 195–210. [Google Scholar]
  45. Alami, R.; Chatila, R.; Fleury, S.; Ghallab, M.; Ingrand, F. An Architecture for Autonomy. Int. J. Robot. Res. 1998, 17, 315–337. [Google Scholar] [CrossRef]
  46. Tang, F.; Parker, L. A Complete Methodology for Generating Multi-Robot Task Solutions using ASyMTRe-D and Market-Based Task Allocation. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation, Rome, Italy, 10–14 April 2007; pp. 3351–3358. [Google Scholar]
  47. Shoham, Y. Agent-Oriented Programming. Artif. Intell. 1993, 60, 51–92. [Google Scholar] [CrossRef]
  48. Padgham, L.; Winikoff, M. Developing Intelligent Agent Systems: A Practical Guide; John Wiley & Sons: Hoboken, NJ, USA, 2004. [Google Scholar]
  49. Nesnas, I. The CLARAty Project: Coping with Hardware and Software Heterogeneity. In Software Engineering for Experimental Robotics; Brugali, D., Ed.; Springer: Berlin/Heidelberg, Germany, 2007; pp. 31–70. [Google Scholar]
  50. Ahmad, A.; Babar, M.A. Software architectures for robotic systems: A systematic mapping study. J. Syst. Softw. 2016, 122, 16–39. [Google Scholar] [CrossRef]
  51. Malavolta, I.; Lewis, G.A.; Schmerl, B.; Lago, P.; Garlan, D. Mining guidelines for architecting robotics software. J. Syst. Softw. 2021, 178, 110969. [Google Scholar] [CrossRef]
  52. de Araújo Silva, E.; Valentin, E.; Carvalho, J.R.H.; da Silva Barreto, R. A survey of Model Driven Engineering in robotics. J. Comput. Lang. 2021, 62, 101021. [Google Scholar] [CrossRef]
  53. Remy, S.; Blake, M. Distributed Service-Oriented Robotics. IEEE Internet Comput. 2011, 15, 70–74. [Google Scholar] [CrossRef]
  54. Kleppe, A. The Field of Software Language Engineering. In Software Language Engineering; Gašević, D., Lämmel, R., Wyk, E., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2009; pp. 1–7. [Google Scholar]
  55. Figat, M.; Zieliński, C. Parameterised robotic system meta-model expressed by Hierarchical Petri nets. Robot. Auton. Syst. 2022, 150, 103987. [Google Scholar] [CrossRef]
  56. Cisternino, A.; Colombo, D.; Ambriola, V.; Combetto, M. Increasing Decoupling in the Robotics4.NET Framework. In Software Engineering for Experimental Robotics; Brugali, D., Ed.; Springer: Berlin/Heidelberg, Germany, 2007; pp. 307–324. [Google Scholar]
  57. Stampfer, D.; Lotz, A.; Lutz, M.; Schlegel, C. The SmartMDSD Toolchain: An Integrated MDSD Workflow and Integrated Development Environment (IDE) for Robotics Software. J. Softw. Eng. Robot. 2016, 7, 3–19. [Google Scholar]
  58. Zieliński, C. General Robotic System Software Design Methodology. In Proceedings of the Advances in Mechanism and Machine Science, 15th IFToMM World Congress, Kraków, Poland, 30 June–4 July 2019; Uhl, T., Ed.; Springer: Berlin/Heidelberg, Germany, 2019; Volume 73, pp. 2779–2788. [Google Scholar]
  59. Figat, M.; Zieliński, C. Robotic System Specification Methodology Based on Hierarchical Petri Nets. IEEE Access 2020, 8, 71617–71627. [Google Scholar] [CrossRef]
  60. Janiak, M.; Zieliński, C. Control System Architecture for the Investigation of Motion Control Algorithms on an Example of the Mobile Platform Rex. Bull. Pol. Acad. Sci. Technol. Sci. 2015, 63, 667–678. [Google Scholar] [CrossRef] [Green Version]
  61. Zieliński, C.; Trojanek, P. Stigmergic cooperation of autonomous robots. J. Mech. Mach. Theory 2009, 44, 656–670. [Google Scholar] [CrossRef]
  62. Foundation for Intelligent Physical Agents. FIPA ACL Message Structure Specification. 2000. Available online: http://www.fipa.org/specs/fipa00061/SC00061G.html (accessed on 21 October 2022).
  63. Mazouzi, H.; Seghrouchni, A.E.F.; Haddad, S. Open protocol design for complex interactions in multi-agent systems. In First International Joint Conference on Autonomous Agents and Multiagent Systems: Part 2; Association for Computing Machinery: New York, NY, USA, 2002; pp. 517–526. [Google Scholar]
  64. Bedaf, S. The Future is Now: The Potential of Service Robots in Elderly Care. Ph.D. Thesis, Datawyse/Universitaire Pers Maastricht, Maastricht, The Netherlands, 2017. [Google Scholar] [CrossRef] [Green Version]
  65. Salichs, M.A.; Encinar, I.P.; Salichs, E.; Castro-González, Á.; Malfaz, M. Study of scenarios and technical requirements of a social assistive robot for Alzheimer’s disease patients and their caregivers. Int. J. Soc. Robot. 2016, 8, 85–102. [Google Scholar] [CrossRef]
  66. Frennert, S.; Eftring, H.; Östlund, B. What older people expect of robots: A mixed methods approach. In International Conference on Social Robotics; Springer: Berlin/Heidelberg, Germany, 2013; pp. 19–29. [Google Scholar]
  67. Winiarski, T.; Jarocki, S.; Seredyński, D. Grasped Object Weight Compensation in Reference to Impedance Controlled Robots. Energies 2021, 14, 6693. [Google Scholar] [CrossRef]
  68. Seredyński, D.; Banachowicz, K.; Winiarski, T. Graph–based potential field for the end–effector control within the torque–based task hierarchy. In Proceedings of the 21th IEEE International Conference on Methods and Models in Automation and Robotics, MMAR’2016, Miedzyzdroje, Poland, 29 August–1 September 2016; pp. 645–650. [Google Scholar] [CrossRef]
  69. Winiarski, T.; Banachowicz, K.; Seredyński, D. Two mode impedance control of Velma service robot redundant arm. In Progress in Automation, Robotics and Measuring Techniques; Szewczyk, R., Zieliński, C., Kaliczyńska, M., Eds.; Advances in Intelligent Systems and Computing (AISC); Springer: Berlin/Heidelberg, Germany, 2015; Volume 351, pp. 319–328. [Google Scholar] [CrossRef]
  70. Seredyński, D.; Winiarski, T.; Zieliński, C. FABRIC: Framework for Agent-Based Robot Control Systems. In Proceedings of the 12th International Workshop on Robot Motion and Control (RoMoCo), Poznań, Poland, 8–10 July 2019; pp. 215–222. [Google Scholar] [CrossRef]
  71. Beetz, M.; Beßler, D.; Haidu, A.; Pomarlan, M.; Bozcuoğlu, A.K.; Bartels, G. Knowrob 2.0—A 2nd generation knowledge processing framework for cognition-enabled robotic agents. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 512–519. [Google Scholar]
  72. Dudek, W.; Winiarski, T. Scheduling of a Robot’s Tasks With the TaskER Framework. IEEE Access 2020, 8, 161449–161471. [Google Scholar] [CrossRef]
  73. Dudek, W.; Węgierek, M.; Karwowski, J.; Szynkiewicz, W.; Winiarski, T. Task harmonisation for a single–task robot controller. In Proceedings of the 12th International Workshop on Robot Motion and Control (RoMoCo), Poznań, Poland, 8–10 July 2019; pp. 86–91. [Google Scholar] [CrossRef]
Figure 1. Internal structure of an agent a j .
Figure 2. The c j FSM F c j executing any behaviour.
Figure 3. Messages M j,j′ and M j′,j transmitted between agents a j and a j′ .
Figure 4. Binary decomposition process.
Figure 5. Requirement hierarchy for the Robot Companion.
Figure 6. The decomposition after six steps of the process.
Figure 7. Velma robot operating in the environment.
Figure 8. I co,ta coordinate interaction protocol. The initiator agent a co , which contains the pair of buffers c x,y T co , starts the interaction, while the participating agent a ta , which contains the pair of buffers c x,y T ta , takes part in it. The dashed lines are lifelines; the arrows represent individual messages. A diamond indicates an alternative in messaging. The ’loop’ sub-diagram denotes repetition of its contents.
Figure 9. Interaction protocol I pl , km query .
Figure 10. The considered part of Robot Companion realised as five agents: a ta , a km , a co , a hi , and a pl , including interaction protocols. The group a g ex is not shown.
Figure 11. FSMs governing the behaviours of the agents, i.e., their control subsystems. The arc labels specify conditions; a missing label means the condition is always True. The conditions are checked as soon as a behaviour terminates.
Figure 12. Size (in bytes) of messages sent at specific times.
Figure 13. Bandwidth used by C pl , km 4 in every second of robotic system operation. The plot contains only the time slots in which the communication takes place.
Table 1. Summary of the first step of a g RoCo decomposition process. The arrows show which groups are the result of the decomposition of the initial group. Dashed lines show emerging communication channels. The light coloured groups form the decomposition tree fringe, which shows which groups exist at the current step—the dark ones disappear, because they were decomposed.
Requirements: acquire and execute human commands
Decomposition: a g RoCo := a g ac , a g ec
a g ac tasks: acquiring commands from a human
a g ec tasks: executing commands
Channel reallocation: none
New channels: C ac,ec 1
C ac,ec 1 transmits: commands to be executed
Table 2. Summary of the fourth step— a g cp decomposition.
Requirements: infer actions, model environment, knowledge management
Decomposition: a g cp := a g pl , a g km
a g pl tasks: planning
a g km tasks: knowledge management
Channel reallocation: C co,cp 3 → C co,pl 3
New channels: C pl,km 4
C co,ec 1 transmits: commands to be executed
C co,hi 2 transmits: interpreted human commands
C co,pl 3 transmits: generated plan
C pl,km 4 transmits: knowledge needed by the planner
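One step of the binary decomposition summarized in Tables 1 and 2 can be sketched as follows. This is a minimal illustrative sketch, not the paper's formal definition: the group, channel, and endpoint names mirror the tables, but the function signature and data representation are our assumptions.

```python
# Sketch of one binary decomposition step: a group on the decomposition tree
# fringe is split into two subgroups, channels touching the split group are
# reallocated to a subgroup, and a new channel links the two offspring.

def decompose(fringe, channels, group, left, right, realloc, new_channel):
    """fringe: set of current groups; channels: dict name -> (src, dst);
    realloc: dict channel name -> new endpoint replacing `group`."""
    fringe = (fringe - {group}) | {left, right}
    channels = dict(channels)
    for name, endpoint in realloc.items():
        src, dst = channels[name]
        channels[name] = (endpoint if src == group else src,
                          endpoint if dst == group else dst)
    name, src, dst = new_channel
    channels[name] = (src, dst)
    return fringe, channels

# Fourth step (Table 2): a_g_cp is split into a_g_pl and a_g_km;
# C3 is reallocated from a_g_cp to a_g_pl, and C4 is created.
fringe = {"a_g_co", "a_g_cp", "a_g_hi", "a_g_ta"}
channels = {"C3": ("a_g_co", "a_g_cp")}
fringe, channels = decompose(
    fringe, channels, "a_g_cp", "a_g_pl", "a_g_km",
    realloc={"C3": "a_g_pl"}, new_channel=("C4", "a_g_pl", "a_g_km"))
```

After the step, C3 connects a_g_co with a_g_pl and the fringe contains the two new subgroups instead of a_g_cp, matching the channel-reallocation row of Table 2.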
Table 3. Mapping of the states of c co FSM to a co behaviours.
State | Behaviour | Description
S c co trigger | B c co,trigger | a co is idle; it receives a new command via I hi,co request and the a ta state via I co,ta coordinate
S c co procReq | B c co,procReq | a co stores the task received from a hi and calculates its deadline and start time
S c co compSP | B c co,compSP | a co requests a pl to update the plans (initiates I co,pl request) of all tasks awaiting execution and, based on them, determines the schedule parameters
S c co schedule | B c co,schedule | a co executes the scheduling algorithm, which produces a decision: either to switch from the plan being executed to another one, or to continue with the current one. The transition labelled continue or switch is satisfied when the corresponding decision is made.
S c co switch | B c co,switch | a co executes the plan switch procedure; it initiates the I co,ta coordinate protocol
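The state-to-behaviour mapping of Table 3 lends itself to a table-driven implementation. The sketch below assumes each behaviour is a callable returning the label of the transition it triggers; the behaviour bodies and transition labels are illustrative placeholders, not the paper's implementation.

```python
# Minimal table-driven sketch of the coordinator's FSM (states of c_co).
# Each behaviour runs to termination and returns a transition label;
# the label is checked only after the behaviour terminates.

def trigger():  return "new_command"   # idle, wait for a command from a_hi
def proc_req(): return "stored"        # store task, compute deadline
def comp_sp():  return "params_ready"  # query a_pl, derive schedule parameters
def schedule(): return "switch"        # decision: "switch" or "continue"
def switch():   return "done"          # execute the plan-switch procedure

# state -> (behaviour, {transition label -> next state})
FSM = {
    "trigger":  (trigger,  {"new_command": "procReq"}),
    "procReq":  (proc_req, {"stored": "compSP"}),
    "compSP":   (comp_sp,  {"params_ready": "schedule"}),
    "schedule": (schedule, {"continue": "trigger", "switch": "switch"}),
    "switch":   (switch,   {"done": "trigger"}),
}

def step(state):
    behaviour, transitions = FSM[state]
    label = behaviour()
    return transitions[label]

state = "trigger"
for _ in range(4):          # trigger -> procReq -> compSP -> schedule -> switch
    state = step(state)
```

The dictionary keeps the specification (states and transitions) separate from the behaviour code, in the spirit of the method's separation of specification and implementation.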
Table 4. Mapping of the states of c pl FSM to a pl behaviours.
State | Behaviour | Description
S c pl idle | B c pl,idle | a pl waits for a planning problem to be transmitted through I co,pl request
S c pl initialize | B c pl,initialize | a pl initializes the planning problem; I pl,km query may be initiated multiple times
S c pl search | B c pl,search | a pl searches for a solution
S c pl finalize | B c pl,finalize | a pl generates the plan
Table 5. Mapping of the states of c km FSM to a km behaviours.
State | Behaviour | Description
S c km idle | B c km,idle | a km waits for three kinds of messages: a query passed via I pl,km query or I ta,km query, or a new observation of the environment. Based on the received message, it prepares a state switch.
S c km questionAnswering | B c km,questionAnswering | a km reasons over the knowledge database to answer the received query and formulates an appropriate message
S c km knowledgeSaving | B c km,knowledgeSaving | a km updates the knowledge database with the received information
Table 6. Mapping of the states of c ta to a ta behaviours.
State | Behaviour | Description
S c ta idle | B c ta,idle | a ta waits for a plan execution request, transmitted by the message M co,ta = ⟨ V request , Z , L HFSM , …, I coordinate , …, m w , …, m c ⟩ within the I co,ta coordinate interaction protocol
S c ta running | B c ta,running | a ta executes the plan specified by a co. The plan is defined as a Hierarchical Finite State Machine (HFSM); therefore, S c ta running is a super-state defined by the HFSM. It constantly reports the state of plan execution by sending M ta,co = ⟨ V inform , Z , L SW , …, I coordinate , …, m w , …, m c ⟩. The behaviour terminates when the HFSM terminates or when a ta receives an interruption message M co,ta = ⟨ V cancel , Z , L HFSM , …, I coordinate , …, m w , …, m c ⟩ from a co (via I co,ta coordinate). When it needs a piece of knowledge, it initiates the I ta,km request interaction protocol to query a km for it.
S c ta finished | B c ta,finished | a ta waits for an acknowledgment from a co regarding the plan termination
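The message tuples above follow the FIPA ACL message structure [62], whose standard fields include a performative, sender, receiver, content, content language, and protocol. The sketch below shows such a record; the mapping of the paper's tuple components (V, Z, L, I, m w, m c) onto these fields is our assumption for illustration only.

```python
from dataclasses import dataclass
from typing import Optional

# Sketch of a FIPA-ACL-style message record (cf. FIPA SC00061).
# Field-to-symbol correspondence (V, Z, L, I) is an illustrative assumption.

@dataclass
class AclMessage:
    performative: str              # e.g. "request", "inform", "cancel"  (V)
    sender: str                    # sending agent, e.g. "a_co"
    receiver: str                  # receiving agent, e.g. "a_ta"
    content: object                # payload, e.g. an HFSM plan          (Z)
    language: str = "HFSM"         # content language                    (L)
    protocol: str = "coordinate"   # interaction protocol                (I)
    conversation_id: Optional[str] = None

def make_cancel(request: AclMessage) -> AclMessage:
    """Build the cancel message a_co could send to interrupt plan execution,
    reusing the metadata of the original request."""
    return AclMessage(
        performative="cancel",
        sender=request.sender,
        receiver=request.receiver,
        content=request.content,
        language=request.language,
        protocol=request.protocol,
        conversation_id=request.conversation_id,
    )

req = AclMessage("request", "a_co", "a_ta",
                 content={"plan": "pour"}, conversation_id="c-1")
cancel = make_cancel(req)
```

Keeping the conversation identifier lets the participant match the cancel message to the interaction it interrupts, which is what the interaction protocols in Figures 8 and 9 rely on.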
Table 7. Messages and robot actions in the experiment.
Time [s] | Event | Channels used
0–37 | a km receives initial knowledge about the environment | C ex,ta 5 , C ta,km 6
83 | a hi receives a new command and requests a co to coordinate its execution | C co,hi 2
97 | a co requests plan #1 from a pl | C co,pl 3
97–152 | a pl generates plan #1 and queries a km | C pl,km 4
153 | a co receives the generated plan #1 from a pl | C co,pl 3
153–681 | a ta executes the plan and reports its state to a co | C co,ta 1
293–315, 674–680 | a km receives information about the modified environment from a ta | C ex,ta 5 , C ta,km 6
300 | the jar is grasped with the left gripper | –
315 | the jar is pulled out of the cabinet | –
495 | the bowl is grasped with the right gripper | –
508 | the bowl is pulled out of the cabinet | –
620 | the bowl is put down on the table and released | –
680 | the bowl is localized | –
681 | a co requests plan #2 from a pl | C co,pl 3
681–682 | a pl generates plan #2 and queries a km | C pl,km 4
682 | a co receives the generated plan #2 from a pl | C co,pl 3
682–778 | a ta executes the plan and reports its state to a co | C co,ta 1
674–682 | a km receives information about the environment from a ta | C ex,ta 5 , C ta,km 6
763 | the content of the jar is poured into the bowl | –
778 | the task is finished | –
Table 8. Number of messages sent and received by agents.
Receiver \ Sender | a co | a hi | a km | a pl | a ta | a g ex | Received
a co | – | 1 | – | 2 | 8 | – | 11
a km | – | – | – | 925 | 83 | 24 | 1032
a pl | 2 | – | 730 | – | – | – | 732
a ta | 4 | – | 71 | – | – | – | 75
a g ex | – | – | 14 | – | 1 | – | 15
Sent | 6 | 1 | 815 | 927 | 92 | 24 | 1865
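Totals of the kind reported in Table 8 can be derived directly from a log of transmitted messages. The sketch below uses a tiny illustrative sample log, not the experiment's data; the invariant it checks (the grand totals of sent and received messages coincide) is the consistency property the table's margins satisfy.

```python
from collections import Counter

# Sketch: aggregating a (sender, receiver) message log into per-agent
# sent/received totals, as in Table 8. Sample log is illustrative only.
log = [
    ("a_hi", "a_co"), ("a_co", "a_pl"), ("a_pl", "a_km"),
    ("a_km", "a_pl"), ("a_pl", "a_co"), ("a_co", "a_ta"),
]

sent = Counter(s for s, _ in log)          # column margins of the table
received = Counter(r for _, r in log)      # row margins of the table

# Every message is sent exactly once and received exactly once,
# so both grand totals must equal the log length.
assert sum(sent.values()) == sum(received.values()) == len(log)
```

The same aggregation over per-message byte sizes and timestamps yields the message-size and bandwidth plots of Figures 12 and 13.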

Pałka, P.; Zieliński, C.; Dudek, W.; Seredyński, D.; Szynkiewicz, W. Communication-Focused Top-Down Design of Robotic Systems Based on Binary Decomposition. Energies 2022, 15, 7983. https://doi.org/10.3390/en15217983
