EARL—Embodied Agent-Based Robot Control Systems Modelling Language

The paper presents the Embodied Agent-based Robot control system modelling Language (EARL). EARL follows the Model-Driven Software Development (MDSD) approach, which facilitates robot control system development. It is based on a mathematical method of robot controller specification, employing the concept of an Embodied Agent, and on a graphical modelling language: the Systems Modeling Language (SysML). It combines the ease of use of SysML with the precision of mathematical specification of selected aspects of the designed system. This makes the whole system specification effective in terms of the time needed to create it, the conciseness of the specification and the possibility of its analysis. By using EARL it is possible to specify systems with both fixed and variable structure. This was achieved by introducing a generalised system model and presenting particular structures of the system as configurations of modelling blocks adapted by using instances. The FABRIC framework was created to support the implementation of EARL-based controllers. EARL is compatible with component-based robotic middlewares (e.g., ROS and Orocos).


Introduction
Robotic systems are becoming increasingly complex due to the incorporation of a growing number of sensors and effectors, as well as the complexity of the executed tasks. One of the main problems of modern robotics is how to design robot controllers effectively and correctly. At the core of this problem lies the formulation of a robot controller specification method. Two requirements are of utmost importance: the specification should be platform-independent and, moreover, it should be simple to transform into an implementation compatible with a chosen technology. There is no single solution satisfying those requirements. The literature on this subject focuses on best practices in the field [1,2]. The provided set of tips, principles and proposals for structuring and representing the operation of controllers is frequently presented with the help of Model-Driven Software Development (MDSD) techniques. This representation method helps to implement those principles effectively when constructing a robot controller. In our work we propose EARL, an MDSD approach. To present the context of the creation of EARL, in the following, other MDSDs that preceded EARL are described. Moreover, SysML and the concept of an Embodied Agent are introduced, as EARL is based on them.

MDSD in Robotics
In 2016, Nordmann et al. [3] surveyed 137 publications dealing with domain-specific modelling approaches in robotics. Those publications took into account problems from nine subdomains of robotics. Based on these works, the following requirements for robotics MDSD tools can be formulated; such tools:

• Utilise well-known graphical specification languages: the MDSD itself, as well as the system described using this MDSD, are presented in the form of graphical diagrams derived from one of the well-known graphical specification languages, e.g., UML [11] or SysML [12].
• Are based on formal models: rules and constraints defined by the MDSD language are represented in the form of a formal model.
• Enable direct transformation of the specification into implementation: specific entities defined in the MDSD tools map to specific entities of the framework used for the creation of the robot software, e.g., components, communication channels and data types.
• Are compatible with open source frameworks: the specification can be easily transformed into executable code created by using open source frameworks.
• Are supplemented with controller code creation software: the MDSD tool provides or can cooperate with mechanisms that enable the transformation of specifications into the robot controller code.

SysML
Reliable methods of creating robot controllers are vital for the quality and certification of the produced software [13,14]. Thus, robotics MDSD tools usually include robot controller code generation facilities [15]. The frameworks utilised to implement lower layers of controllers are usually component oriented to make easier code reuse [16]. Additionally, components are the natural equivalent of blocks, classes, actions and other entities of UML and SysML [12].
Historically UML preceded SysML. Its first documentation, adopted as an Object Management Group (OMG) standard, was published in 1997. Many robotic MDSD tools such as BCM, RobotML, SmartSoft and V3CMM either use directly UML, or use UML profiles or at least adopt some UML concepts. In 2007, OMG published the first SysML specification, created as an extension/profile of the UML language. SysML enables the specification of general concepts, not only related to software, but also to the physical representation of cyber-physical systems. SysML graphical modelling language is widely used by engineers and specifically utilised in the robotics domain, e.g., [17][18][19][20]. SysML defines nine types of diagrams (Table 1), that enable multi-dimensional decomposition of the system into: Packages, Requirements, that the designed system should comply with, as well as Behaviours and system Structure. Table 1. SysML diagrams and their abbreviations [12]. As SysML has no formal semantics, a number of works has proposed its formalisation. By embedding SysML within a formal logic, formal methods can be used to maintain consistency as the design evolves [21]. In [22], the authors present TEPE, a graphical TEmporal Property Expression language based on SysML parametric diagrams. Properties are built upon temporal and logical relations between block attributes and signals. TEPE may be integrated with a SysML real-time profile. The paper [23] presents a method of formalizing the SysML Internal Block Diagram (IBD) semantics, by mapping it into the Hierarchical Colored Petri Net (HCPNs) semantics. The Description Logic, namely, SHIOQ(D), is used in [24] to describe the block diagrams. However, the informal semantics of SysML is often not completely captured or preserved when encoded in logic-based languages. Examples include the generation of a B-Specification from a UML class [25]. In [26], Chouali et al. 
use interface automata to formalize the semantics of SysML sequence and definition block diagrams. The work proposes verification of interoperability in component-based systems by combining interface automata and SysML models. The formalisation is presented by an algorithm and illustrated with an example. The approach is neither automated nor analysed.

Table 1. SysML diagrams and their abbreviations [12].

Diagram Group   Diagram Kind               Abbreviation
Behaviour       Activity Diagram           act
Behaviour       Sequence Diagram           sd
Behaviour       State Machine Diagram      stm
Behaviour       Use Case Diagram           uc
Requirement     Requirement Diagram        req
Structure       Block Definition Diagram   bdd
Structure       Internal Block Diagram     ibd
Structure       Package Diagram            pkg
Structure       Parametric Diagram         par
The literature review presented above indicates that so far no single dominating standard of formalization and use of SysML has emerged. Usually, the method of SysML formalisation and how it is used depends on a specific application.

Embodied Agent
A robotic system can be composed of one or more agents. The discussion whether a symbolic representation of the environment is necessary in the control of intelligent robots led to the reformulation of the concepts of embodiment and situatedness within robotics, which subsequently led to the formulation of the concept of an embodied agent [27][28][29][30][31][32][33][34][35]. The classification presented in, e.g., [35] or Section 2.4, points out that structurally an embodied agent is the most complete type of agent. It gathers information about the state of the environment using its receptors and influences the environment using its effectors. Its control system is aware of the task that it has to execute. Using that knowledge, combined with the information produced by the receptors, it commands the effectors in such a way as to fulfil the task.
A monolithic control system would be too complex to specify and implement, thus its decomposition into subsystems is required. A natural way of decomposing complex systems is to partition them into hardware drivers and the task-dependent part. Thus, the control system of an embodied agent is decomposed into its control subsystem, virtual effectors and virtual receptors [32][33][34][35]. Virtual effectors transform commands obtained from the control subsystem into commands acceptable by the real (hardware) effectors. Virtual receptors aggregate the information acquired by the real (hardware) receptors. The control subsystem, being aware of the task it has to accomplish, uses the aggregated receptor data to produce effector commands. The transformative abilities of the virtual effectors and receptors enable the control system to express the task in terms of concepts more appropriate for that purpose than if it had to process raw sensor readings and produce hardware control commands.
Thus, the structure of an embodied agent produces the natural control loop: from the environment, through the real and virtual receptors, further to the control subsystem and finally through the virtual and real effectors back to the environment. As receptors sometimes have to be configured and the control subsystem might need proprioceptive data, a reverse path also exists.
The subsystems of an embodied agent communicate through data buffers. Subsystems also have internal memory. The contents of subsystem input buffers and internal memory form the arguments of transition functions producing the contents of the output buffers and internal memory. Transition functions are responsible for performing computations only. However, data has to be also propagated between subsystems. Iterative acquisition of new data, computation of the transition function and dispatch of the results is called a behaviour. The iterations cease when either a terminal or error condition is fulfilled. Both conditions take as arguments the contents of input buffers and internal memory and produce a Boolean value, and are therefore predicates. Multiplicity of subsystem behaviours leads to the necessity of choosing the next one, once the previous one terminates its activities. This is done by the subsystem finite state machine (FSM). The directed arcs of the state graph of the FSM are labeled by predicates called initial conditions. The true initial condition directs the FSM to its next state, in which a successive behaviour is invoked.
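The behaviour iteration described above can be sketched in code. The following is a minimal illustration, not EARL's formal API; the struct name, the use of plain `int` data and the `run()` signature are our assumptions. It shows a behaviour that repeatedly applies a transition function to its memory and the freshly acquired input, ceasing when the terminal or error predicate becomes true.

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Illustrative sketch: a subsystem behaviour iterates its transition
// function until a terminal or error condition (a predicate over the
// input buffer contents and internal memory) becomes true.
struct Behaviour {
    std::function<void(int& memory, const int& input, int& output)> transition;
    std::function<bool(const int& memory, const int& input)> terminal;
    std::function<bool(const int& memory, const int& input)> error;

    // Returns the number of iterations performed.
    int run(int& memory, const std::vector<int>& inputs, int& output) {
        int iterations = 0;
        for (int in : inputs) {                 // acquisition of new data
            if (terminal(memory, in) || error(memory, in)) break;
            transition(memory, in, output);     // computation
            ++iterations;                       // dispatch of results would follow
        }
        return iterations;
    }
};
```

For example, a behaviour accumulating inputs into memory with the terminal condition "memory is at least 5" terminates after three iterations when fed a stream of 2s.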
Embodied agents were used to specify research-oriented controllers for the investigation of control laws [36,37], fuzzy logic based controllers [38] and object-oriented ontology [39]. They utilised Finite State Machines, Hierarchical Finite State Machines and Petri Nets [40] to describe the system activities. This approach was used to develop the controllers of many different types of robots:
• Industrial robots, e.g., the modified IRb-6 manipulator [34], whose control software was an inspiration for the example included in this article.
• Service robots, e.g., the Velma robot [37].
• Mobile robots, e.g., Lynx [41], with selectable modes of locomotion, either horizontal or vertical.

EARL
Although the Robot Operating System (ROS) is the most commonly used robot control system implementation tool, the authors of [45] indicate that no hints guiding the creation of robot controllers are provided with it. The authors state that ROS is very well suited for creating various control systems, but it lacks support for the reuse of once-created architectural solutions. They opted for SciROS, dedicated to the implementation of hybrid behaviour-deliberative systems. It purposefully constrains the developer to a specific set of functionalities. On the one hand, this approach makes it impossible to create a controller with an architecture that does not meet the requirements, and on the other hand, it allows the reuse of its fragments. The problems indicated in this publication do not appear only in the popular frameworks used for the creation of modern robot controllers; those problems have a very general nature. In our work, we propose another solution, guiding robot controller developers in creating systems composed of both real-time (RT) and non-RT components. This approach, in particular, can be applied to systems implemented using the ROS and Orocos frameworks.
EARL proposes a standardised approach to the specification of the control systems of cyber-physical systems. The Embodied Agent (Section 1.3) is its foundation. EARL maps the concepts associated with Embodied Agents into SysML (Section 1.2) blocks with their properties, i.e., parts, references, values and operations. EARL fulfils all of the requirements formulated at the end of Section 1.1. It extends the set of best practices by answering the following questions:
• How to organise a specification into SysML packages?
• For what purposes should the graphical tools be used, and where should the mathematical notation be applied directly?
• How to map the specification into component systems?
• How to describe systems with a time-varying structure?

Figure 1 presents the dependencies of EARL packages. The model utilised by EARL is defined in the Model package (Section 2). The system instances that «realize» EARL model constraints are defined in the System Instance package (Section 3). This package «uses» independently defined computational structures from the Calculation Components package and data types from the DataTypes package. A robot controller is created by first producing an EARL-based system specification, which is then implemented with the support of the FABRIC framework (Section 4). Section 5 provides a comparison of EARL with other MDSD tools. Section 6 provides final remarks and conclusions.

Model Formulation
The model of a system specified in EARL is composed of concepts describing its structure and behaviour. The structure of the model is specified with SysML Block Definition Diagrams (bdd) and Internal Block Diagrams (ibd) [46]. For clarity of presentation, the various aspects of the structure are presented in separate diagrams. The model is thus composed of a set of diagrams. Each of the diagrams presents only a part of the structure; however, the whole set has to be consistent. Some of the model constraints are defined by mathematical equations.

System and Its Parts
System and Robot are the most general EARL concepts. They are structurally defined as in Figure 2. A System must contain at least one Robot r. A Robot is composed of at least one Agent a. A System may contain Agents that are not elements of Robots, e.g., an Agent coordinating the work of a group of Robots [35]. Agents are connected with aa inter-agent communication Links. Each aa Link can be referred to by a Robot. In general, Link part names are created by placing the source block part name at the beginning of the Link part name and the destination block part name at its end. In cyber-physical systems an Agent usually has a physical body, thus it is an Embodied Agent. It represents either a whole robot or a part of one [47]. The structure of an Agent is defined in Figure 2. The specific features of robotics, where an Agent can take on various roles, from real-time control, through sensor data processing, to the execution of computationally demanding tasks [48], require its decomposition into various types of Subsystems and specialised Links between them. The variety of Link names was introduced to distinguish the types of Subsystems or Agents that communicate with each other and the direction of data transmission. The block cardinalities presented in Figure 2 are general, but a particular system structure may introduce stricter constraints according to the extra rules presented further on.
There are five different specialisations of Subsystems (right side of Figure 2). The main one (indispensable for an Agent) is the Control Subsystem cs, which coordinates the Agent's Subsystems and communicates with other Agents. Real Effectors re are Subsystems which affect the environment, whereas Real Receptors rr (exteroceptors) gather information from the environment. Virtual Subsystems (Virtual Receptors vr and Virtual Effectors ve) supervise the work of Real Subsystems. Therefore, the Real Subsystems of a particular type cannot exist without virtual ones and vice versa; see Equation (1).
The inequalities in Equation (1) represent the necessary conditions ensuring the preservation of system integrity. Additional constraints have to be imposed on the number of Subsystems due to the specificity of inter-subsystem communication Links (Section 2.3).

Subsystem and Its Parts
The structure of a Subsystem is defined in Figure 3.

Figure 4b depicts the relations between a particular Subsystem and its communication Buffers. The communication constraints described in Section 2.3 imply that each Virtual Receptor or Virtual Effector must have at least one Input Buffer and one Output Buffer. A Real Effector needs at least one Input Buffer to receive commands, and a Real Receptor needs at least one Output Buffer to send sensory data.
Input Buffer, Output Buffer and Internal Memory are defined analogously to [49]. Each Buffer contains a data structure msg, which stores data of type dataType. The dataType can be defined either as a primitive type or as a composite, nested structure. An Input Buffer possesses an operation receive(), which enables communication with Output Buffers and stores the received data in the Input Buffer. Analogously, an Output Buffer has a send() operation, which dispatches the data stored in the Output Buffer to the connected Input Buffers. Internal Memory stores data, which is a value of type dataType. Input and Output Buffers are graphically represented by squares connected by an arrow showing the direction of data transfer. Internal Memory is represented by a square with a bidirectional arrow. Various forms of communication between Subsystems are described in [33].
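The buffer concepts above can be sketched as follows. This is an illustrative model, not EARL's or FABRIC's actual API: the template names are ours, and a simple queue stands in for the communication Link between a send() and the matching receive().

```cpp
#include <cassert>
#include <deque>
#include <string>

// Sketch: an Output Buffer dispatches its stored msg to a connected
// Input Buffer; the Input Buffer stores what it receives until the
// transition function reads it. DataType plays the role of dataType.
template <typename DataType>
struct InputBuffer {
    std::deque<DataType> link;   // pending messages on the Link
    DataType msg{};              // data structure visible to the subsystem
    bool receive() {             // returns false when no new data arrived
        if (link.empty()) return false;
        msg = link.front();
        link.pop_front();
        return true;
    }
};

template <typename DataType>
struct OutputBuffer {
    DataType msg{};
    void send(InputBuffer<DataType>& dst) { dst.link.push_back(msg); }
};
```

In a real middleware, the Link would be a ROS topic or an Orocos port rather than an in-process queue, but the send()/receive() contract is the same.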
Similarly to [5,32], the EARL Subsystem structural model contains a Finite State Machine (FSM) that determines its activities (Figure 3). To define the FSM, the set s of FSM States and the set t of FSM Transitions are distinguished. With each of the states a behaviour bb is associated. Figure 5a defines how the run() operation works. The FSM starts in the initial FSM State ifs. Then, while the Subsystem is running, the bb.execute() operation executes the behaviour associated with the current state, which is represented by cfs. The fsm.selectState() operation evaluates the predicates associated with the FSM Transitions emerging from cfs to select the next FSM State. An FSM Transition (Figure 3) is defined by its source and destination FSM States as well as the associated Initial Condition, i.e., the predicate ic.
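The state-selection mechanism can be illustrated with a short sketch. The struct names and the string-based state identifiers are our assumptions, not EARL notation: selectState() returns the destination of the first transition emerging from the current state whose initial condition holds.

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// Sketch of the FSM described above: each transition carries an initial
// condition (predicate ic); selectState() evaluates the conditions of
// the transitions emerging from the current state.
struct Transition {
    std::string source, destination;
    std::function<bool()> initialCondition;  // predicate ic
};

struct Fsm {
    std::vector<Transition> transitions;
    std::string selectState(const std::string& current) {
        for (const auto& t : transitions)
            if (t.source == current && t.initialCondition())
                return t.destination;
        return current;  // no condition true: remain in the current state
    }
};
```

In EARL the predicates take the contents of input buffers and internal memory as arguments; here they are closures over local variables for brevity.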
In the following part of the article a SysML dot "." notation [46] is used to depict the nesting of the part instances as well as other block properties. The dot "." can be treated as an extraction operator. It is assumed that if a specific instance of a part is not indicated, the set of all instances of the part is taken into account. In particular, if there is only one instance, there is no need to name it explicitly, only the part name is needed. The same rule applies to references. As the particular parts compose objects of the same type, they can be interpreted as sets in mathematical formulas.
The structure of a Basic Behaviour is defined in Figure 3. It should be noted that in our previous papers using the concept of an Embodied Agent, e.g., [32][33][34][35], the "Basic Behaviour" was simply called a "Behaviour". The name has been extended, as "Behaviour" is a very general term in UML. A Basic Behaviour includes a Transition Function tf; a Terminal Condition tc, which is a Predicate determining when the execution of the Basic Behaviour should terminate; and an error condition ec, which is a Predicate indicating that an error has been detected in the execution of the Basic Behaviour. A Basic Behaviour also possesses an execute() operation (Figure 5b). That operation iteratively executes the Transition Function, dispatches the results through the Output Buffers and acquires new data into the Input Buffers, until either the Terminal Condition tc or the error condition ec is fulfilled.
The structure of a Transition Function is defined in Figure 3. A Transition Function is decomposed into Partial Transition Functions. This sometimes reduces the redundancy of the specification, making it more comprehensible. Moreover, if the implementation of the specified system is based on components, a Partial Transition Function can be identified with a separate component or a set of components [50,51]. In this case, a Partial Transition Function can be reused in more than one Transition Function, similarly as a component can be reused in more than one of the separate groups of components, where one group implements one specific behaviour of the system. Partial Transition Functions composing a Transition Function can be executed in diverse orders, see, e.g., [47]. To define the execution of Partial Transition Functions within a Transition Function, the operation execute() was introduced. The operation may vary between particular instances of Subsystems.
The structure of a Partial Transition Function is defined in Figure 3. It refers to Input Buffers, Output Buffers, as well as the Subsystem Internal Memory (Figure 6). A Partial Transition Function can read from the Internal Memory (using the mi reference) or write to it (using the mo reference). It can be defined as a composition of components from the Calculation Components package (Figure 1). The composition is defined by a tf.execute() operation. The Partial Transition Function algorithm is executed by a pf.execute() operation. The concept of the Embodied Agent as presented in this paper introduces no restrictions on how to implement either of these operations.
Terminal Conditions used by a Basic Behaviour and Initial Conditions utilised within an FSM Transition can be decomposed into Primitive Predicates. A Primitive Predicate takes its arguments from Subsystem Buffers, see Figures 3 and 6. Both a Predicate and a Primitive Predicate execute an operation called fun, producing a Boolean output.

Embodied Agent Communication
The general system architecture is defined by the Agents and their Subsystems, being the building blocks forming the system structure, and the communication links between those entities. In a way, the architecture is defined by the constraints that are imposed on permissible connections. If no constraints are imposed on the communication links, then the system designer has excessive freedom of choice, which, in the case of their limited experience, might lead to an obscure structure. Therefore, architectures limiting this choice are preferred, thus leading to freedom from choice [9]. This provides guidance to designers, which results in a clear system structure.
In the case of EARL, inter-agent and inter-subsystem communication [47] is defined by unidirectional communication Links (see Figures 2 and 6). The communication takes place between Input Buffers and Output Buffers of Subsystems. Figure 7 presents the acceptable communication links between pairs of Subsystems. Note that inter-agent communication is realised between the Control Subsystems of the communicating Agents. Additionally, Figure 7 shows that for each Real Effector present in the system at least one transmission chain should exist: the commands produced by the Control Subsystem, transformed by the Virtual Effector, must reach the Real Effector. Analogously, for each Real Receptor there is one compulsory communication chain that transmits and processes sensory data: the Real Receptor provides data to the Virtual Receptor, which passes it in an aggregated form to the Control Subsystem. The other communication Links appearing in Figure 7 are not obligatory. To define bidirectional communication, a pair of unidirectional communication Links is used. A detailed discussion of communication in Embodied Agent systems is presented in [33].
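The two compulsory chains can be expressed as a simple integrity check over the set of Links. This is a hypothetical helper, not part of EARL or FABRIC; Links are modelled here as (source, destination) pairs of subsystem names, with "cs" standing for the Control Subsystem.

```cpp
#include <cassert>
#include <set>
#include <string>
#include <utility>

// Sketch of the constraints from Figure 7: every Real Effector must be
// reachable through a cs -> ve -> re chain, and every Real Receptor
// must feed an rr -> vr -> cs chain.
using Link = std::pair<std::string, std::string>;

bool hasEffectorChain(const std::set<Link>& links,
                      const std::string& ve, const std::string& re) {
    return links.count({"cs", ve}) && links.count({ve, re});
}

bool hasReceptorChain(const std::set<Link>& links,
                      const std::string& rr, const std::string& vr) {
    return links.count({rr, vr}) && links.count({vr, "cs"});
}
```

Such a check could be run over a system instance to detect a Real Effector or Real Receptor left without its compulsory chain.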

Types of Agents
Four general activities of an Agent can be distinguished [35]: C, the overall control of the agent; E, exerting influence over the environment by using effectors; R, gathering information from the environment by using receptors; and T, inter-agent communication (transmission).
The first activity is indispensable, but the other three are optional, thus eight types of Agents result (Table 2), depending on their capabilities. However, only seven are of utility, as an agent without any of the optional capabilities is useless.
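The enumeration of agent types can be made concrete with a few lines of code; the function name is ours, and each combination of the optional capabilities E, R and T is encoded as a bit mask, with C always present.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Enumerates the useful agent types: of the eight combinations of the
// optional capabilities E, R and T, the bare-C one (mask 0) is useless,
// leaving seven types, e.g., CE, CT, CET, CERT.
std::vector<std::string> usefulAgentTypes() {
    std::vector<std::string> types;
    for (int mask = 1; mask < 8; ++mask) {  // skip mask 0 (C alone)
        std::string t = "C";
        if (mask & 1) t += 'E';
        if (mask & 2) t += 'R';
        if (mask & 4) t += 'T';
        types.push_back(t);
    }
    return types;
}
```

The CT and CET types produced here are exactly the ones used by the example system specified later in the paper.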

Specification of a Particular Robot Control System
The particular structure of a system is specified by the application of instances of specialisations of the blocks [12] constituting the general model presented above. The names of instances should be long enough to be descriptive and intuitive to interpret, thus reducing the need for additional glossaries. In our approach, each instance can set the number of parts and references (e.g., associated Buffers), however within the limits imposed by the general model. Similarly, each instance can redefine the particular operations of parent blocks present in the general model (e.g., each instance of a Partial Transition Function redefines the pf.execute() operation).
In general, a system instance is defined as a graph. Its nodes represent Agents a and the directed arcs represent the communication Links aa between them. It is a good practice to name Links by using the names of the communicating Agents: first the source Agent name, then a comma, and finally the destination Agent name. Input Buffers and Output Buffers of the Control Subsystems are depicted as sources and destinations of the dataTypes being transmitted through the Links. The Buffer names reflect the content of the dataType being transmitted. The Subsystems are defined analogously.
A specification refers either to a system with a static structure and invariable behaviour, or to a system with a variable structure considered at a certain time instant; in both cases the structure can be analysed using the optimisation techniques proposed in [52,53]. To specify a particular system, instances of the relevant concepts appearing in the general system model should be concretised. The SysML diagrams [54] are a part of an EARL-based system specification (Table 3). Some of the EARL concepts are specified mathematically:
• model and system instance constraints that cannot be practically formulated in diagrams,
• fun operations of Predicates and Primitive Predicates, and
• some calculations performed inside actions of Activity Diagrams of Partial Transition Functions, e.g., control laws.

In addition, mathematical notation is used to express formal conditions ascertaining the correctness of the composition of Partial Transition Functions.

System Part and Function                                                    SysML Diagrams
System and its parts, initial analysis                                      req, uc
System and Agent internal structure, Links, Input Buffer, Output Buffer     ibd
FSM, FSM State                                                              stm
Operations of blocks                                                        act

Example of a System Specified Using EARL
This section illustrates how to use the EARL language to specify a robot control system. The example presents a single-robot multi-agent system containing CT and CET agents. A manipulation robot with N degrees of freedom and a gripper is considered, capable of performing, e.g., a pick-and-place task. The specification process starts with the definition of the System structure. Tips on the specification of requirements and use cases using SysML can be found in [55,56].

Structure of the System Composed of Agents
There are three Agents in the System (Figure 8). The Agent task/a supervises the task execution, i.e., picking and placing objects; the Agent manip/a controls the N-DOF manipulator; and the Agent grip/a controls the gripper. The gripper controller is separate from the manipulator controller because different grippers can be attached to the manipulator; thus, separate Agents facilitate system modification.
Figure 9 presents the dataTypes transmitted between the Agents. The Task Agent task/a sends ManipulatorCommands to the Manipulator Agent manip/a. The commands contain parameters, e.g., operational or joint space position setpoints, and a command to perform an emergency stop. In return, task/a receives a ManipulatorState dataType containing: the current operational or joint position, the status of the manipulator movement and information on whether an emergency stop occurred. The Task Agent task/a sends GripperCommand messages to the Gripper Agent grip/a and receives GripperStatus in return. Similarly to the messages exchanged between the manip/a and task/a Agents, the GripperCommand and GripperStatus messages contain parameters describing the desired and current gripper finger positions.

Manipulator Agent manip/a
The structure of the Manipulator Agent manip/a is presented in Figure 10. Each Real Effector re represents one of the N drives of the manipulator joints. Each drive is controlled by a Virtual Effector that, e.g., implements a motor position regulator. All N Virtual Effectors ve are controlled by a single Control Subsystem cs, which causes the manipulator to move either in joint space, where it interpolates between joint positions, or in operational space, where it interpolates between Cartesian poses of a frame affixed to a chosen link of the kinematic chain.
The dataTypes transmitted inside the Manipulator Agent manip/a are presented in Figure 11. The Control Subsystem cs sends a MotorControllerCommand to each Virtual Effector motorControllerK/ve. The dataType contains the desired winding current value or a command switching the hardware driver to the emergency stop state. Each Virtual Effector motorControllerK/ve sends to the Control Subsystem cs information about the current motor position and whether the hardware driver is in the emergency stop state. Each Virtual Effector motorControllerK/ve sends the desired motor winding current to its respective Real Effector motorK/re, and in return receives the encoder readings. Table 4 describes the types of data stored in manip/a.cs.
Table 6 describes the Predicates utilised by manip/a.cs: the initial conditions labelling manip/a.cs.fsm transitions and the terminal conditions of manip/a.cs.bb. It is assumed that task/a cannot simultaneously set a new joint position and an operational space pose. Figure 12 shows the possible transitions between the FSM States of the Control Subsystem manip/a.cs, as well as the association of Basic Behaviours with particular FSM States. Figure 13 shows the execute() operation of the jointMove/pf Partial Transition Function. This Partial Transition Function realises, e.g., a PI type motor position regulator for each joint, Equations (2) and (3):

e(t) = q_d(t) − q(t), (2)

i_d(t) = K_p e(t) + K_i ∫_0^t e(τ) dτ, (3)

where K_p and K_i are, respectively, the proportional and integral gain factors, e is the joint position error, q_d and q are the desired and measured joint positions, i_d is the desired motor winding current, and t is time.
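A discrete-time sketch of such a PI regulator is given below. The struct and signal names are illustrative, not taken from the paper's implementation, and rectangular integration stands in for the integral term; the desired winding current is computed from the joint position error and its running integral.

```cpp
#include <cassert>
#include <cmath>

// Sketch of a PI position regulator in discrete time (assumes C++17
// aggregate initialisation): step() is called once per control period.
struct PiRegulator {
    double Kp, Ki;
    double integral = 0.0;
    // dt is the control period; returns the desired winding current.
    double step(double desiredPos, double measuredPos, double dt) {
        double e = desiredPos - measuredPos;   // position error
        integral += e * dt;                    // rectangular integration
        return Kp * e + Ki * integral;         // PI control law
    }
};
```

In the specified system, a regulator of this kind would sit inside the jointMove/pf Partial Transition Function of each Virtual Effector.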
It was assumed that any two Partial Transition Functions used by a particular Transition Function do not produce data for the same Output Buffers and Internal Memories; therefore, the following conditions are formulated for the pfc set (Equation (7)) and the pfo set (Equation (8)), respectively:

(∀x/bb)(∀x/bb.i/pf, x/bb.j/pf ∈ pfc, i ≠ j)(x/bb.i/pf.k/mo ≢ x/bb.j/pf.k/mo, k/mo ∈ mbo), (7)

(∀x/bb)(∀x/bb.i/pf, x/bb.j/pf ∈ pfo, i ≠ j)(x/bb.i/pf.k/ob ≢ x/bb.j/pf.k/ob), (8)

where ≢ stands for "is not the same entity". Transition Functions act in the following way. First, they compute the Partial Transition Functions from the pfc set, and then they compute the Partial Transition Functions from the pfo set. The fulfilment of Equations (7) and (8) makes it possible to run the Partial Transition Functions belonging to pfc in parallel in the first stage of the Transition Function execution, and then to run the Partial Transition Functions belonging to pfo in parallel in the second stage. To illustrate the above considerations, Figure 15 shows the definition of the jointMove/bb.tf.execute() operation, a practical realisation of Partial Transition Function execution for the jointMove Basic Behaviour.
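The two-stage execution scheme can be sketched as follows. This is a minimal illustration, with names of our choosing; the partial transition functions are plain callables, and each stage is run sequentially here, although conditions (7) and (8) would also permit running the members of each stage in parallel.

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Sketch of the two-stage Transition Function execution: the pfc set
// (writing to internal memory) runs first, then the pfo set (writing
// to output buffers). Within a stage, no two members write to the same
// target, so each stage could be parallelised.
struct TransitionFunction {
    std::vector<std::function<void()>> pfc;  // write to internal memory
    std::vector<std::function<void()>> pfo;  // write to output buffers
    void execute() {
        for (auto& pf : pfc) pf();  // stage 1
        for (auto& pf : pfo) pf();  // stage 2
    }
};
```

The ordering guarantee matters: stage-2 functions may read the internal memory that stage-1 functions have just written.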

FABRIC Framework
Four roles can be assigned to people developing and using robotics software: end users, application builders, component builders and framework builders [57]. The following considerations pertaining to code generation take those roles into account. The code of a Subsystem is generated out of its specification expressed using EARL and the source code written manually in C++. The specification items are referred to using numbers and letters, e.g., 1.a for the deployer. Figure 16 presents the relationship between specification, patterns, parameters and code.
Figure 16. Specification decomposition into patterns and their parameters at various levels of generality [16]. Parameters are implemented using either code or patterns at a lower level of generality. Patterns are expressed as code. Arrows point from more general entities to more specific ones, i.e., their implementation.
For the purpose of Subsystem code generation EARL uses the FABRIC [16] framework. The Subsystem code generation procedure uses the following input data.

1.
FABRIC framework code, common to all Subsystems; it was written manually by the framework builders; it invokes Subsystem-specific parts of code loaded at run-time; it is composed of the following items:
a) deployer-code that loads the Orocos framework together with its components and configures them;
b) master_component-a specialised Orocos component that manages the activities of the Subsystem.

2.
Subsystem specification complying with MDSD, created by application builders, and delivered as two separate XML files:
a) the first complies with the EARL system model, containing the definitions of the FSM, the terminal and error conditions of Basic Behaviours, the initial conditions of FSM Transitions, the dataTypes, and the send() and receive() operations employed by the Buffers;
b) the second is used for run-time configuration; it contains the description of the Orocos components that compose the Transition Functions, and the parameters of those components.

3.
Orocos components that form Partial Transition Functions, written by component builders; the set of components to be used is selected by application builders.

4.
Source code of the Primitive Predicates, produced by application builders; it is delivered in the form of Orocos services.

5.
ROS message definitions specifying the dataTypes of the variables composing the Buffers; this code is written by application builders.

Figure 17 shows the method of creating the executable code out of the above-mentioned items and the library source code.  Figure 17. Creation of the Subsystem executable code based on its library source code and specification, as well as Subsystem deployment at run-time [16].
Source code is generated out of the specification in item 2.a. It is subsequently compiled together with the manually written code, i.e., both the source code of the framework (items 1.a and 1.b) and the code of the Orocos components and services (items 3 and 4). The deployer loads the necessary components and item 2.b of the specification, configures the Orocos components, and thus composes a working Subsystem.
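To make the role of the master_component (item 1.b) more concrete, the cyclic execution of Basic Behaviours selected by the FSM can be sketched as follows. This is a hypothetical simplification written for illustration only; the type and function names (BasicBehaviour, FsmTransition, run) do not come from FABRIC:

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Hypothetical simplification: a Basic Behaviour repeats its
// Transition Function until its terminal or error condition holds.
struct BasicBehaviour {
    std::function<void()> transitionFunction;  // computes Buffers/Memory
    std::function<bool()> terminalCondition;   // normal completion
    std::function<bool()> errorCondition;      // abort
};

// An FSM transition is guarded by an initial condition.
struct FsmTransition {
    std::string from, to;
    std::function<bool()> initialCondition;
};

// Runs behaviours and switches FSM states; returns the visited states.
std::vector<std::string> run(const std::map<std::string, BasicBehaviour>& behaviours,
                             const std::vector<FsmTransition>& transitions,
                             std::string state, const std::string& finalState) {
    std::vector<std::string> visited{state};
    while (state != finalState) {
        const BasicBehaviour& bb = behaviours.at(state);
        while (!bb.terminalCondition() && !bb.errorCondition())
            bb.transitionFunction();           // one control cycle
        bool moved = false;
        for (const auto& t : transitions)      // pick first enabled transition
            if (t.from == state && t.initialCondition()) {
                state = t.to;
                visited.push_back(state);
                moved = true;
                break;
            }
        if (!moved) break;                     // no enabled transition: stop
    }
    return visited;
}
```

In the real system the conditions are built out of the Primitive Predicates (item 4) and the Transition Functions are composed of the Orocos components (item 3).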

Discussion
The transformation of the standard description of Embodied Agent based systems [32,33,42,44,47,58] into EARL facilitates robot control system implementation. The introduction to this paper lists some of the other MDSD languages used to specify robot controllers: RobotML, SmartSoft, BCM (BRICS) and V3CMM. The article [4] introduced nine features of Model-Driven Software Development approaches, which were used to compare those four MDSDs. We extend this comparison by adding EARL. The results of the comparison are presented in Table 8; see the following definitions.

•
Composability-in the phase of system integration, all the main properties of components should remain unchanged. The problem may concern, e.g., duplication of component names. Neither EARL nor the other discussed MDSD languages meet this requirement.

•

Ontology [59]-RobotML is based on an ontology that defines various concepts, entities and relationships between them, pertaining to the domain of robotics. EARL is based on the concept of an Embodied Agent, which can be treated as an ontology defining such robotics concepts as: effector, receptor, control subsystem, communication buffers, internal memory, FSM, behaviour and transition function. These concepts enable the composition of a robot system architecture taking into account the separation-of-concerns approach to software design [60], resulting in a hierarchic, layered system.

•
System Level Reasoning-refers to reasoning about the execution time or the model semantics.
Only SmartSoft implements some aspects of this kind of reasoning.

•
Non-Functional Property Model-defines such aspects of system operation as reliability or performance. None of the discussed MDSD languages addresses this aspect. Neither does EARL; however, it is based on SysML, which enables the definition of such requirements.

Section 1.1 lists five features that are common to the four described MDSD languages. We used those features to compare EARL with the other MDSD languages. The assessment is based on the papers describing the MDSD languages, i.e., RobotML [5,6], BCM (BRICS) [1,2], SmartSoft [8] and V3CMM [10], as well as the web page of the SmartSoft project [61]. The results of the comparison are presented in Table 9; please see the following.

•
Utilisation of well-known graphical specification languages-EARL is based on the SysML standard, RobotML implements a UML profile, SmartSoft relies on the UML standard, BCM (BRICS) defines a meta-model using UML, and V3CMM, although it does not use UML directly, adopts and adapts it.

•
Reliance on a formal model-EARL uses the formal SysML notation, while V3CMM uses the Object Constraint Language (OCL) to define model constraints formally. Although we have found no information on the use of formal models to define the other MDSD languages, it can be assumed that they are formally defined, because it is possible to generate executable code out of them. It should be emphasised that defining the model in the form of a UML profile can be considered a formal description, as there are tools enabling the verification of the correctness of instances of such models.

•

Controller code generation software-V3CMM and RobotML are supported by the Eclipse platform. BCM (BRICS) uses the BRIDE toolchain, which is based on Eclipse. SmartSoft is associated with the SmartMDSD Toolchain, which utilises Eclipse. EARL uses FABRIC for code generation. The FABRIC configuration files are created using any text editor, whereas system tests and on-line analysis are performed using FABRIC-based graphical tools.
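The Embodied Agent concepts used throughout this comparison (control subsystem, communication buffers, internal memory, FSM, virtual effectors and receptors) can be pictured as a plain data structure sketch; all type and field names below are our assumptions, introduced only to illustrate how the ontology composes a hierarchic, layered system:

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch of the Embodied Agent ontology as plain types.
struct Buffer { std::string dataType; };  // typed communication buffer

struct ControlSubsystem {
    std::map<std::string, Buffer> inputBuffers;   // from effectors/receptors
    std::map<std::string, Buffer> outputBuffers;  // to effectors/receptors
    std::map<std::string, Buffer> internalMemory;
    std::vector<std::string> fsmStates;           // each state selects a behaviour
};

struct VirtualEffector { Buffer command, proprioception; };
struct VirtualReceptor { Buffer aggregatedReading, configuration; };

// An Embodied Agent composes a control subsystem with virtual
// effectors and receptors, yielding the layered architecture.
struct EmbodiedAgent {
    ControlSubsystem control;
    std::vector<VirtualEffector> effectors;
    std::vector<VirtualReceptor> receptors;
};
```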

Final Remarks and Future Work
EARL is a convenient tool to effectively specify cyber-physical systems due to the following features:
• it employs model-driven engineering, especially the rules governing the hierarchic composition of system layers out of lower-level elements;
• it uses the FABRIC framework to automatically create controllers out of their specification;
• it is based on the concept of an Embodied Agent, which proved to be instrumental in the specification and implementation of many practical applications;
• it utilises standardised tools, i.e., SysML, supported by auxiliary software tools for developers;
• a large part of the designed system is specified using graphical diagrams;
• there is no redundancy in the specification;
• it is compact: contextual notation enables the introduction of long, descriptive names of block instances, which do not have to be repeated frequently.
The above-mentioned advantages made EARL the tool of preference for the specification and implementation of the currently developed systems. The Velma robot (https://www.robotyka.ia.pw.edu.pl/robots/velma) (Figure 18a,b) is used as the main test-bed. As a two-handed robot, capable of force/torque sensing, equipped with three-fingered grippers, and having a movable head with mounted cameras and a Kinect sensor, it is sufficiently complex to perform tasks that one would expect of service robots. Our current research concentrates on several topics: (i) robot task planning, especially for the purpose of grasping (continuation of [37,62,63]), (ii) automatic identification of the physical parameters of the grasped object and their reflection in the impedance control law (continuation of [36,37]) and (iii) ontology-based task planning and execution on the example of searching for a lost object somewhere at home [64,65] (continuation of [37]). Additionally, EARL is currently being used to develop a system assisting elderly people at home. The system uses the Rico robot (https://www.robotyka.ia.pw.edu.pl/robots/rico) (Figure 18c) and intelligent house components. This also takes EARL into an area beyond robotics, into the realm of cyber-physical systems [66], where the robot works together with external devices, both sensors and effectors. Event-driven interruption of the Agent's behaviours [67], as well as the simulation of human behaviour for the purpose of testing social robots, are currently being investigated. In both cases, EARL is used to specify and implement the investigated systems.