Article

Simulator-Based Digital Twin of a Robotics Laboratory †

by
Lluís Ribas-Xirgo
School of Engineering, Campus UAB, Universitat Autònoma de Barcelona, 08193 Barcelona, Spain
This paper is an extended version of our paper published in International Workshop on Physical Agents, 4–5 September 2025, Cartagena, Spain.
Machines 2026, 14(3), 273; https://doi.org/10.3390/machines14030273 (registering DOI)
Submission received: 30 December 2025 / Revised: 18 February 2026 / Accepted: 24 February 2026 / Published: 1 March 2026
(This article belongs to the Section Automation and Control Systems)

Abstract

Simulator-based digital twins are widely used in robotics education and industrial development to accelerate prototyping and enable safe experimentation. However, they often hide implementation details that are essential for understanding, diagnosing, and correcting system failures. This paper introduces a technology-independent model-based design framework that provides students with full visibility of the computational mechanisms underlying robotic controllers while remaining feasible within a 150-h undergraduate course. The approach relies on representing controller behavior using networks of Extended Finite State Machines (EFSMs) and their stacked extension (EFS2M), which unify all abstraction levels of the control architecture—from low-level reactive behaviors to high-level deliberation—under a single formal model. A structured programming template ensures traceable, optimization-free software synthesis, facilitating debugging and enabling self-diagnosis of design flaws. The framework includes real-time synchronized simulation, transparent switching between virtual and physical robots, and a smart data logger that captures meaningful events for model updating and error detection. Integrated into the Intelligent Robots course, the system supports topics such as kinematics, control, perception, and simultaneous localization and mapping (SLAM) while avoiding dependency on specific middleware such as Robot Operating System (ROS) 2. Over three academic years, students reported positive hands-on experiences, strong adaptability to diverse modeling approaches, and consistently high survey ratings reflecting the course’s overall quality. The proposed environment thus offers an effective methodology for teaching end-to-end robot controller design through transparent, simulation-driven digital twins.

1. Introduction

Simulator-based digital twins streamline the development, deployment, and operation of robotic and industrial systems. These systems are typically specified with formal behavioral models that rely on hierarchy and abstraction to manage complexity. As a result, both industry and students often favor digital-twin workflows to build monitoring and control software and to quickly deliver tangible results.
However, a major drawback is that design details can be obscured, which makes failures harder to diagnose and correct. Engineers—and increasingly artificial intelligence (AI) assistants—must therefore understand the underlying formal models and the key technologies in the development toolchain. Building this foundation is essential for efficient systems and for addressing failures effectively.
In robotics, core technologies converge within the digital twin: a robot’s data model, often linked to a simulator, and its control software. Robot development thus includes both the physical robot and its digital counterpart.
Time constraints force most robotics courses to emphasize a limited subset of robot design while still enabling hands-on laboratory work. In this paper, we describe how we designed a course that covers the controllers across the full stack from a single computational model, still allowing practical experience and, critically, providing visibility into the internal workings of robotic systems. The latter supports debugging and fault analysis, though it typically increases learning time. To fit a 150 h course with 50 contact hours, we introduced a simulator-based digital twin of an elementary mobile robot and a simplified model-to-software framework.
The Intelligent Robots course (within an AI undergraduate program) reviews kinematics for wheeled mobile robots and manipulator arms, as well as low-level control, assuming no prior robotics background. The course focuses on modeling behavior with state machines and demonstrates, through code examples, how to achieve real-time simulation, model updates, and synchronization between simulation and physical systems.
The contributions of this work are: (i) a single, simple model that spans all abstraction levels of robotic controllers; (ii) a programming template for synthesizing software from system models; and (iii) a set of models and corresponding software components enabling real-time synchronized simulation of a digital twin that can also serve as the controller for the physical robot. Building on the work of Ribas-Xirgo [1], we additionally present a model-based design system that subsumes features of well-known modeling frameworks, including support for representing autonomous, intelligent agents, and we provide practical guidelines for synthesizing software manually or with AI assistance.
The specific contributions of this extended paper are: (i) the extension of basic EFSM graphs to incorporate decision diagrams and state stacks, enabling more flexible state sequencing and the representation of deliberative behaviors; (ii) a commented EFSM-based programming pattern with integrated self-diagnosis mechanisms, illustrating how this pattern can serve as a reference for programmers and AI tools when synthesizing simulation code; and (iii) detailed specifications of the simulation workflow and model-update mechanisms.
The paper is organized as follows. Section 2 reviews system-modeling techniques and AI-assisted software generation, as well as related courses and development environments. Section 3 details the critical aspects of robotic controllers in the course setting. We then discuss the course experience and outcomes.

2. State of the Art

Our primary goal was to create a technology-agnostic, model-based design (MBD) framework that minimizes the learning curve while covering the knowledge and skills required to develop control software for industrial machinery and robots.
MBD uses formal, machine-readable models that can be simulated, verified, and deployed atop middleware that links application software with visualization, monitoring tools, simulators, and the actual plant or robot.
In this context, unified modeling language (UML) statecharts, often the first formalism encountered by engineers transitioning into MBD methodologies, present a steep learning curve. This difficulty stems from the wide range of possible action triggers (on event occurrence, on state entry or exit, or during transitions) and is further compounded by composite states, which introduce intricate action-ordering semantics [2]. While hierarchical UML state machines are highly expressive and can help experienced engineers describe behavior more succinctly, they also add conceptual overhead. Our proposal reduces this complexity by limiting action execution points to the start and end of states, avoiding fixed action ordering (actions ideally execute in parallel), and implementing hierarchy through master–slave communication. This approach preserves the ability to represent complex behavior using relatively simple diagrams, albeit at the cost of losing an explicit visual representation of master–slave relationships.
Contemporary software development increasingly leverages AI for code generation. Yet large language models (LLMs) do not inherently capture model intent and may produce incomplete or incorrect code, increasing the verification burden. Engineering education must therefore provide practical, end-to-end experience, even in introductory courses, which often means carefully scoped lab activities.

2.1. Model-Based Design

Model-based systems engineering (MBSE) has become essential for coping with heterogeneous hardware, distributed software, real-time constraints, and safety-critical requirements. MBSE captures requirements, architecture, behavior, and verification artifacts across the lifecycle, improving consistency, traceability, and multidisciplinary integration [3,4]. MBD uses such models to represent behavior, enable verification, generate code automatically, and accelerate deployment and maintenance.
Models can be tailored to functional and nonfunctional requirements—for example, in manufacturing automation [5] and satellite systems [6]—and translated to executable platforms such as PauWare [7] that support simulation, execution tracing, and runtime verification. Statecharts [8], a common formalism for concurrent hierarchical EFSMs (HCEFSMs), help designers reason about behavior, explore states comprehensively, and scale to complex systems.
Single-state state machines (SSSMs) are sometimes used to encapsulate behavior designed with different methods. An ASML study [9] reported that SSSMs comprised over 25% of 1500 industrial models and were relatively stable over time.
While code generation depends on both the model and the tools, many common models can be compiled into a model intermediate representation (MIR) to support efficient scheduling, simulation, and code generation in tools like Simulink and Ptolemy-II [10].
In practice, many engineers are comfortable coding ad hoc state machines but are less familiar with designing software from formal models. Projects evolve, so designs must remain adaptable. We address this by leveraging networks of simple EFSMs that emulate statechart features and can be implemented without specialized libraries or environments.
The framework must also account for ROS 2, now a dominant open middleware in industrial robotics, offering real-time-friendly communication, DDS-based security, cross-platform support, and modularity [11,12]. ROS 2 aligns well with formal modeling frameworks such as Timed Rebeca for verifying concurrency, timing, and safety properties and for bridging analysis, simulation, and deployed code [13,14]. Industry-focused ROS 2 frameworks improve modularity and reuse, support rapid development, and provide unified, real-time-capable interfaces [15]. Simulators—most notably Gazebo (tight ROS integration) and Unity-based XR environments—enable simulation-driven development and digital twins that support early validation, model refinement, and faster commissioning [16,17,18]. While this ecosystem increases the complexity of the environment, our work focuses on HCEFSMs, implemented using collections of EFSMs together with direct links to raw code. We also relate HCEFSMs to the hierarchical state machines (HSMs) and behavior trees (BTs) commonly used in ROS 2 projects.

2.2. Software Generation

We distinguish programming from software synthesis. Programming translates a model into a particular language and introduces optimizations; software synthesis prioritizes a systematic translation that preserves traceability from the model to the code.
LLMs can generate code [19,20,21], but may propagate errors and security issues from training data [22]. They tend to perform well on simpler problems, struggle with extended reasoning, and often optimize for compilability over robustness, motivating combined correctness-and-security evaluations such as CodeSecEval [23]. Social and technical biases in automatic generation have been documented [24,25,26]. Automatic translation with the help of LLMs has limited accuracy [27], with success rates ranging from 2% to 47% [28], and is not reliable without thorough human review [29,30], complemented by testing and targeted prompts that reduce errors.
Fine-tuning can improve generation quality at lower computational cost [31,32,33,34], but interactive clarification is typically required [35], and performance degrades with overly long or noisy prompts [36]. While top models achieve student-level exam performance in programming [37,38], human experts still excel at structured reasoning and quality control.
Our approach converts formal state-machine diagrams directly into executable code, reducing ambiguity, bias, and translation errors. LLMs can then be used as transducers given paired examples (diagram → code), but they may insert “false optimizations” (e.g., reorganizing code or adding unnecessary initializations) that mask model flaws. We therefore prefer synthesis patterns that maximize model–code traceability and embed self-diagnostics.

2.3. Robotics Education

According to Educations.com, there are 127 undergraduate AI programs worldwide (76 in Europe, 26 in the UK, and 6 in Spain), including ours at the Universitat Autònoma de Barcelona. Many programs include a 6 ECTS (≈150 h) robotics course with ≈50 contact hours. Our course must introduce robotics fundamentals [39,40] while focusing on intelligent robots [41], covering kinematics and dynamics, controllers, perception (notably vision), execution of predefined plans, SLAM and Kalman filtering, and AI links for perception and decision making—topics that often demand an entire course [42,43] or a dedicated master’s program.
Leading universities (e.g., Oxford, Stanford, UC3M, EPFL) offer compatible overviews, while others (e.g., Berkeley, Imperial, Harvard, NTU, NUS, Caltech, and UPC) provide specialized courses in robotics and AI. Affordable robot platforms enable rich student labs [44]. Digital twins extend this work beyond the lab with safe, risk-free testing. Popular simulators, such as Player/Stage and Gazebo [45], CoppeliaSim [46], and Webots [47], offer model libraries and support building twins for the lab robots. In our case, we use in-house-developed mobile robots and digital twins consisting of the corresponding robot model and a small set of high-level data (Figure 1). We also have a UR robot arm, for which we use the built-in model in CoppeliaSim.
Low-level controllers can directly drive actuators, but complex procedures are best programmed atop ROS [48]. In fact, some introductory courses on robotics use ROS [43] and there are educational resources based on ROS [49,50,51]. While we include only a practical ROS example, our programming model mirrors ROS node organization and can generate ROS code. Unlike ROS pub–sub timing, our execution can satisfy stricter time requirements.
Inspired by Ptolemy II [52], our computational model represents behavior as a network of EFSMs communicating via signals in a cycle-by-cycle execution, enabling accurate timing. We simplify Ptolemy II to reduce errors and ease code generation and debugging. Prior work has proposed correct-by-construction behaviors with built-in recovery [53] and fault-handling architectures [54].
Our course content is organized around hands-on modules but emphasizes technology-agnostic controller thinking and robust signal handling—present, out-of-range, and absent—so that components embed error detection and recovery.

3. Course Contents

The course covers foundational material common to many offerings: kinematic models of wheeled mobile robots (with emphasis on differential drive) and robotic arms (including inverse kinematics), as well as open-loop and closed-loop control with PID controllers. It also introduces image processing (OpenCV [55]), AI-based detection (YOLO [56]), SLAM, and the Kalman filter.
For the practical part of the course, although state machines can be created manually or with any diagramming software, draw.io [57] is recommended (indeed, all diagrams in this paper were produced with it). Since the digital twin of the real robot was created in CoppeliaSim, which uses Lua [58] as its scripting language, Lua is also used to develop stand-alone simulators and the code for the state-machine system models. Any integrated development environment (IDE) can be used, but we recommend ZeroBrane [59].
The differentiating element is the software development process: behavior is modeled as an EFSM network and paired with an ad hoc simulator so students can see how systems work internally and where failures originate. The remainder of this section details the computational model and the real-time execution model used to synchronize the digital twin with reality.

3.1. Behavior Modeling

Because EFSMs underlie more complex models (statecharts, HSMs, and BTs), any of these can be transformed into EFSMs. We thus represent the system as a set of EFSMs executing in parallel. Unlike plain FSMs, EFSMs carry an extended state: a control state plus a data store retained across cycles.
Figure 2 shows a programmable counter with a clock input, two inputs (P and b), one output (e), and an internal counter C. Diagram labels define transition conditions and associated actions. Decision nodes (e.g., C > 0) partition cases to ensure coverage. All activated actions are conceptually concurrent within a cycle. Delayed assignments (e.g., C += P − 1) update at the next cycle, while the next control state is determined by the graphical transitions. Designers must avoid conflicting assignments to the same signal (e.g., e = false and e = true simultaneously).
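The course implements such machines in Lua, but the delayed-assignment semantics can be sketched in a few lines of Python. In this sketch every model variable is a (current, next) pair, plan computes only next values, and update commits them all at once. The concrete transition conditions (b loads the preset P, counting down, and e raised when the count ends) are assumptions for illustration, since only the counter's interface is described here.

```python
class Counter:
    """One EFSM: each model variable is a [current, next] pair (Moore outputs)."""

    def __init__(self):
        self.S = ["WAIT", None]   # control state
        self.C = [None, None]     # internal counter; deliberately uninitialized
        self.e = [False, None]    # output signal

    def plan(self, P, b):
        # Compute next values only; current values are untouched until update().
        if self.S[0] == "WAIT":
            if b:   # load the preset and start counting (assumed semantics)
                self.S[1], self.C[1], self.e[1] = "COUNT", P - 1, False
            else:
                self.S[1], self.C[1], self.e[1] = "WAIT", self.C[0], False
        elif self.S[0] == "COUNT":
            if self.C[0] > 0:     # decision node C > 0
                self.S[1], self.C[1], self.e[1] = "COUNT", self.C[0] - 1, False
            else:                 # count finished: raise e
                self.S[1], self.C[1], self.e[1] = "WAIT", self.C[0], True

    def update(self):
        # All delayed assignments take effect simultaneously, as in the model.
        self.S[0], self.C[0], self.e[0] = self.S[1], self.C[1], self.e[1]
```

Driving the object through plan/update cycles reproduces the cycle-by-cycle execution of the diagram: with P = 2 and a single pulse on b, the machine spends two cycles in COUNT and then pulses e on returning to WAIT.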
Hierarchy arises when a state contains a subordinate machine (Figure 3, top). In our implementation (Figure 3, bottom), we simplify semantics by running master and slave EFSMs in parallel, sharing variables and I/O and coordinating with active/ready handshake signals. This pattern reproduces statechart behavior while making priorities and continuations explicit and observable.
Arrows leaving composite states that do not originate from stop states of the subordinate machine (such as C_x) can be assigned various meanings. It must be determined whether the actions of the subordinate machine should take effect, with what priority in case of conflict, and whether, upon returning to the master state, the subordinate machine remembers its current state or restarts from the initial one. In the format of the diagrams at the top of Figure 3, this should be specified explicitly, as is done, for example, in statecharts.
To simplify, the hierarchy is implemented so that the slave machines work in parallel with the masters, sharing variables and input and output. The hierarchy is established through a handshake protocol with activation and readiness signals. Thus, when the master state machine reaches the SUB state, the active signal is set to true so that the slave machine exits the IDLE state in the next cycle.
When the slave machine returns to IDLE, the ready signal returns to true so that the master can continue. This reproduces the same behavior as the arrival at END in the slave machine depicted above and the exit arc from the octagon in the composite state of the master machine.
The semantics associated with exit arcs that do not wait for the subordinate EFSM to return to an inactive state can easily be simulated in the composite-state part with an extra decision node, as shown in the lower-right part of Figure 3. On the subordinate-machine side, the designer must consider all situations and decide, for each state S_J, whether to suspend the corresponding activity by adding a decision node on active and, in that case, whether to restart the activity in that state or return to IDLE. In this way, there are no errors in interpreting which actions are taken. Note that the case in which the subordinate machine continues its activity after the master has exited SUB can be monitored, because the subordinate's ready signal will remain false.
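The active/ready handshake can be sketched as two plan functions sharing a pair of signals. This is a hedged Python illustration (the course material is in Lua): the intermediate JOIN state in the master is an assumption added here so that the master does not read a stale ready == true in the same cycle it enters SUB, and the slave's activity is a one-cycle placeholder.

```python
def master_plan(state, sig):
    if state == "RUN":
        return "SUB"                              # enter the composite state
    if state == "SUB":
        # ready is still true on entry: wait until the slave has started
        return "JOIN" if not sig["ready"] else "SUB"
    if state == "JOIN":
        # now wait until the slave is back in IDLE (ready returns to true)
        return "NEXT" if sig["ready"] else "JOIN"
    return state

def slave_plan(state, sig):
    if state == "IDLE":
        return "WORK" if sig["active"] else "IDLE"  # woken by the master
    if state == "WORK":
        return "IDLE"                               # placeholder activity
    return state

def cycle(m, s):
    """One synchronous cycle: both plans read the same current signals,
    which are derived from the current control states (Moore style)."""
    sig = {"active": m == "SUB", "ready": s == "IDLE"}
    return master_plan(m, sig), slave_plan(s, sig)
```

Running the two machines cycle by cycle shows the protocol of the text: the master raises active on reaching SUB, the slave leaves IDLE in the following cycle, and the master resumes only once ready has dropped and returned to true.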
Thus, concurrent EFSMs can represent hierarchical machine behaviors such as statecharts, HSMs, and BTs, the latter being more common in the field of robotics. In fact, the case of BTs is even simpler, since the various nodes represent communication between machines working in parallel.
For intelligent behaviors, we extend EFSMs with a state stack (EFS2M) inspired by pushdown automata [60]. The EFS2M model adds a state stack and push, pop, and top operations and the corresponding graphical representation.
The EFS2M in Figure 4 generates a plan P1 when plan1() returns true; the plan consists of following a sequence of states, from P1,1 through P1,N−1 to GOAL. The plan appears in reverse order because the bubbles with state names placed after the tip of an arrow are pushed states, which will therefore be popped in the reverse order to which they were stacked.
State P1,1 has three types of outgoing arcs. The arc that leaves the state bubble directly and fires on next1_1_2() is a regular arc leading to the state at the top of the stack; the hexagon at the tip of the arrow indicates that the next state is the one popped from the stack. The arc labeled skip1_2() departs from a bubble that represents a pop operation on the state stack. To indicate that the stack is emptied completely, it is not necessary to draw as many pop bubbles as there are stacked states; a single hexagon suffices, as shown on the arc labeled abort1_1().
This model naturally represents the planning style of belief-desire-intention (BDI) agents [61], where intentions correspond to stacked plans and desires to the initially pushed states.
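The stack mechanics can be sketched as follows. This is a Python illustration only: the operation names adopt, step_regular, skip, and abort are invented here for readability, whereas the course expresses these operations graphically (push/pop bubbles and hexagons).

```python
class EFS2M:
    """Minimal sketch of the EFS2M state stack of Figure 4."""

    def __init__(self):
        self.state = "IDLE"
        self.stack = []          # the state stack

    def adopt(self, plan):
        """Enter the first state of a plan and push the rest in reverse,
        so that stacked states pop in execution order."""
        self.state = plan[0]
        self.stack = list(reversed(plan[1:]))

    def step_regular(self):
        """Regular arc ending in a hexagon: the next state is popped."""
        self.state = self.stack.pop() if self.stack else "IDLE"

    def skip(self, n=1):
        """Arc preceded by n pop bubbles (n >= 1): discard n stacked
        states, then take the popped state as the next one."""
        del self.stack[-n:]
        self.step_regular()

    def abort(self):
        """Arc through an 'empty' hexagon: drop the whole remaining plan."""
        self.stack.clear()
        self.state = "IDLE"
```

For instance, adopting the plan [P1,1; P1,2; P1,3; GOAL] leaves P1,2 at the top of the stack; a regular step advances to P1,2, and a one-bubble skip from there discards P1,3 and jumps straight to GOAL.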
As mentioned, the computational model is an EFSM network with implicit hierarchy; that is, the model includes no explicit indication that one component is subsumed by another. The description of such a model comprises the architecture of the system and the EFSMs it contains. The architecture diagram also indicates how the different components relate to each other and can itself be hierarchical. In this sense, the computational model is comparable to behavior trees.
At the architectural level (Figure 5), we adopt a conventional three-tier stack for a mobile robot. L2 (deliberation) can host a BDI-like agent but, in our course, serves as a user-facing interface that selects missions (SWEEP, FOLLOW, NAVIGATE). L1 (execution) decomposes missions into L0 (reactive) primitives.
Each architectural element expands to one or more EFSMs, i.e., multiple EFSMs may coexist per layer. Outputs are driven from state variables (Moore semantics) to prevent circular dependencies and order-sensitive updates. For example, Figure 6 shows the LIDAR sweep state machine, part of the main state machine in L1.
In this case, the message received from L1 is decomposed into three values forming the input instruction I. The operation code is stored in I[1]; if it equals 2, L1 is requesting a lidar sweep, and the EFSM therefore transitions to state LIDAR. The sweep angles are stored in variable C, and variable J indexes this table. From LIDAR, the EFSM moves to ECHO in the next cycle. In ECHO, variables P and g are updated: P specifies the angle at which the lidar must point, and g requests that the lidar-controller state machine read the distance at that angle. Note that the delayed assignment to P contains a function call. This represents another form of hierarchy, in which the subordinate element always produces a response and the upper-level state machine need only evaluate whether that response is consistent.
In the ECHO state, the machine waits for the lidar controller to respond. When the distance measurement becomes available, the sensor controller sets u = true, and the value can then be read via input signal Q. In this design, the presence or absence of an event and its associated value are split into two distinct signals, as is also the case for g and P. When a sentinel value can encode the absence of an event, a single signal is sufficient.
Once Q becomes available, the EFSM transitions to RAY and subsequently to RESUME, where it waits for L1 to acknowledge reception of the data and issue the next step (operation code 3). This handshake protocol ensures that no lidar-sweep data are lost. Note, however, that this state machine does not include a timeout for communication with L1. This is acceptable in the simulated-robot configuration, where L1 and L0 run on the same computing node, but a timeout should be incorporated when L0 executes on a separate processor.
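Such a timeout could be sketched as an extra guard in the plan step for the RESUME wait. This is a hedged Python illustration: Figure 6 contains no timeout, so the cycle budget TIMEOUT_CYCLES, the ERROR state, and the function name are all assumptions introduced here.

```python
TIMEOUT_CYCLES = 100   # assumed budget, in cycles, for L1 to acknowledge

def resume_plan(state, t, ack_from_L1):
    """One plan step for the RESUME wait; t counts cycles spent waiting.
    Returns the next (state, t) pair."""
    if state != "RESUME":
        return state, 0
    if ack_from_L1:               # L1 acknowledged (operation code 3)
        return "ECHO", 0          # continue with the next sweep step
    if t + 1 >= TIMEOUT_CYCLES:
        return "ERROR", 0         # surface the lost handshake instead of blocking
    return "RESUME", t + 1
```

The ERROR state plays the role of the sink states with self-diagnosis described in Section 3.2.1: rather than hanging forever when L0 runs on a separate processor and a message is lost, the machine reaches an observable state that a monitor can report.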

3.2. EFS2M Software Generation

Modern control/embedded environments provide modeling, simulation, verification, and code generation—often bound to a specific middleware. A key advantage of EFSM/EFS2M networks is that their behavior can be coded directly in any language without special libraries.

3.2.1. EFS2M Programming Template

There are several ways to translate EFS2M diagrams into programs. They can be classified into those based on state-machine interpreters and those that are hard-coded. In the first case, the transition and output tables of the EFS2Ms are interpreted by a program that simulates them. In the second case, the simulator program calls functions specific to each EFS2M.
In our case, each state machine is an object of a class with methods to initialize it (init) and simulate its behavior. Simulation is divided into four phases: reading inputs (read_inputs), calculating the next extended state (plan), updating it (update), and writing outputs (write_outputs).
The input/output functions must be programmed specifically for each platform because they depend on the input and output devices used. For example, for the low-level code of the virtual robot, input and output go through CoppeliaSim.
Methods init, plan, and update depend only on the state machine, so their code can be generated following a pattern of correspondence between the elements of the diagram and the instructions of the corresponding function (see Listing 1).
In this programming pattern, all variables in the state machines are tuples of two values: the current value and the value in the next state. The main state variable tuple has an additional component to store the state stack, which is only manipulated in the plan function if the corresponding diagram contains some element that indicates it, such as push and pop bubbles or empty or pop hexagons (to change to the state at the top of the stack).
Programming variables (not to be confused with model/state variables) are intentionally left uninitialized to prevent them from hiding model errors. For example, the init function only sets variables that appear in the label of the initial arrow, and the plan function does not assign any default values to the next values of model variables.
Additionally, plan includes code to check that exactly one transition is activated in every cycle. If no transition fires, the state machine is incompletely specified and the next state is set to “__INCOMPLETE__”; if two or more fire, it is set to “__NON-DETERMINISTIC__”.
Local variable blocked is false by default and set to true when the state machine cannot evolve unless some input event occurs or it requires external attention (e.g., on reaching sink states, planned or not, such as the error states shown in lines 40, 42, and 43 of Listing 1). In the example, it defaults to true because the Counter cycles are synchronized with an external clock.
Although not shown here, state-tracing and accounting blocks can be added to update to trace back errors and profile state-machine evolution over time.
This implementation is not the most efficient in terms of execution cost, but it is one of the most efficient in terms of engineering cost.
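The self-diagnosis check described above can be sketched as follows. This is a Python illustration of the pattern (the course's Listing 1 is in Lua); the WAIT/COUNT states and the guard on b are illustrative. Each guard is evaluated independently, a firing counter is incremented per activated transition, and the counter is inspected after all guards.

```python
class DiagEFSM:
    """Plan phase with the self-diagnosis pattern: no default next state,
    and the number of fired transitions is checked at the end of plan."""

    def __init__(self):
        self.S = ["WAIT", None]   # [current, next] control state

    def plan(self, b):
        fired = 0
        if self.S[0] == "WAIT":
            if b:                 # guard of the WAIT -> COUNT arc
                self.S[1] = "COUNT"; fired += 1
            if not b:             # guard of the WAIT self-loop
                self.S[1] = "WAIT"; fired += 1
        # Deliberately no 'else' chain for other states and no default
        # assignment: an unhandled control state must surface as an error.
        if fired == 0:
            self.S[1] = "__INCOMPLETE__"
        elif fired > 1:
            self.S[1] = "__NON-DETERMINISTIC__"

    def update(self):
        self.S[0], self.S[1] = self.S[1], None
```

Because this sketch's plan handles only the WAIT state, driving the machine into COUNT and calling plan again immediately exposes the omission as “__INCOMPLETE__” instead of silently keeping a stale state, which is exactly the behavior the template is designed to guarantee.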
Unfortunately, it is common for programmers to introduce false optimizations [62] that may have unintended effects, such as masking coding errors or even altering the model. Listing 2 shows a program that follows the given pattern but incorporates some false optimizations, mostly aimed at minimizing the number of instructions based on interpretations of the model. For example, interpreting the outgoing arcs from WAIT as complementary leads to the if-then-else structure of lines 11–15. (Typically, since the firing count is incremented regardless of the condition's value, the increment would be moved out of the conditional structure and the else part removed.)
Furthermore, there is no guarantee that premature optimizations are necessary, that micro-optimizations such as changes in data format (e.g., replacing state names with numeric codes) have any effect on the final executable code, or that code exploiting specific features of a programming language is more efficient in memory or time than code based on more general instructions.
Listing 1. The code for the EFSM of Figure 2 serves as a pattern for software synthesis that includes both direct correspondence between graphical elements and program instructions and self-diagnosis (lines 37–42).
Listing 2. Programming the EFSM of Figure 2 includes false optimizations that may mask model errors.
In summary, the aim is not to manually program state machines, but rather to synthesize their code while preserving, as faithfully as possible, the correspondence between each element of the state-machine diagram and the code that implements it. In this approach, the course emphasizes that any error present in the model must have associated code capable of exposing it. For example, the monitor function must allow the inspection of variables even when they do not yet hold a defined value. The course also stresses that any optimization of program execution should be left for a later stage, and only if it is actually needed.

3.2.2. Automatic Software Synthesis

Given that EFS2M programming follows well-defined patterns, it is feasible to automate the process, which also avoids the problem of false optimizations.
Automatic code synthesis can be done algorithmically or with the support of AI. In fact, an EFS2M software generator was already available that reads the XML of the models and generates the corresponding Lua code [60]. However, the graphical representation of a model must be syntactically correct for the synthesis software to generate code, and variations in the input format or output language can require significant, costly modifications.
The rise of LLM-based programming assistants builds on the availability of a good database of both algorithmic solutions to problems and implementations of those algorithms. As discussed above, this does not mean that the generated code does not need extensive review.
However, in the particular case of behavioral models, the LLM does not need to be trained to correlate problems with algorithmic solutions, but rather to correlate models with code that follows a specific pattern.
To this end, a series of examples has been created pairing EFS2Ms with their corresponding programs, so that the AI assistant can generate, for a target model, an output that correlates very closely with the given examples.
In experiments with several AI assistants—ChatGPT (5 and 5-mini), Copilot (based on ChatGPT 5), DeepSeek (R1 and V3), and Gemini 2.5 (Pro and Flash)—all were observed to work well for simple cases where the input is a syntactically correct diagram. In these cases, it is irrelevant whether the input is an image of the state machine or an XML file, or which programming language is requested as output.
With images, synthesis works better because it does not depend on the attributes of the objects in the drawing: arrow connections, for example, are established by proximity, whereas in the XML format they depend on whether the two objects are explicitly linked.
It has also been observed that they tend to introduce false optimizations and corrections such as initializations not present in the models, which can mask behavioral errors. This can partly be avoided by combining synthesis examples with rules.
In short, software-synthesis algorithms produce code without false optimizations and therefore with reliable self-diagnosis. However, they require inputs with clear graphical syntax and must be tailored to each specific input and output format. AI assistants, by contrast, can operate with almost any input or output format, provided the prompts include examples and clear rules that enforce the required programming patterns. However, they may still reproduce the same types of mistakes that can appear in code written by human programmers.
These findings are also shared with students to make them aware of the risks of relying on AI assistants for programming, and to help them understand how to evaluate prompts and integrate such tools responsibly into the software development process.

3.3. Real-Time Simulation

The execution model of state machines can itself be represented by a state machine (Figure 7) that cyclically repeats the phases of reading input data (SENSE), calculating the next extended state (PLAN), updating it (UPDATE), thereby moving to the next cycle, and writing primary outputs (ACT). Consequently, all machines in the system are always in the same phase.
This execution model links each state to a call to a specific function of every state machine in the system. It is easy to program in any language, which allows the system simulation to be implemented in any development environment.
In this EFSM, the variable C is a cycle counter, i.e., it indicates how many transitions the state machines have made up to the current cycle; the variable sT is the simulated time, which starts at 0; and the inputs T and dT provide, respectively, the real time of the host system and the time step of each cycle. For simplicity, this time step is fixed, and the condition for real-time execution is that the execution time of each cycle is always less than or equal to dT.
To compute the time elapsed in a cycle, the variable B (for "begin time") stores the time at which each cycle begins. The state machine remains in IDLE until T − B ≥ dT. Here, B is increased by dT rather than assigned the value of T, so that excess execution time in previous cycles can be compensated for by subsequent cycles in which the system runs faster. Note, however, that some systems do not tolerate this soft real-time behavior.
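This timing policy can be sketched as follows (a Python illustration, not the course's Lua code; the step function stands in for the SENSE/PLAN/UPDATE/ACT phases):

```python
# Minimal sketch of the IDLE timing policy described above, assuming a
# monotonic clock T and a fixed step dT: B advances by dT, not to T, so a
# slow cycle is compensated by later, faster ones (soft real time).
import time

def run_cycles(step_fn, dT, n_cycles, now=time.monotonic):
    B = now()                       # begin time of the current period
    sT = 0.0                        # simulated time
    for C in range(n_cycles):       # C counts cycles (machine transitions)
        step_fn(C, sT)              # SENSE/PLAN/UPDATE/ACT of all machines
        while now() - B < dT:       # stay in IDLE until T - B >= dT
            time.sleep(0.001)
        B += dT                     # catch-up policy: B += dT, not B = now()
        sT += dT
    return sT

elapsed = run_cycles(lambda C, sT: None, dT=0.01, n_cycles=5)
```

If one cycle overruns dT, the IDLE wait in the following cycles shrinks until B catches up with the real clock, which is exactly the compensation behavior described above.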
In systems whose execution times are very short compared with dT, more than one cycle can be advanced per time step. In this case, cycles are advanced until the next-state computation of some state machine in the system requires an external input. This condition is stored in the variable X upon reaching the UPDATE state from PLAN, which is where the plan function is called and returns it.
The variable I indicates the cycle number within the current period, so that event signals cease to be present when I > 0. This must be taken into account when programming the read_inputs functions, since they are the ones that must distinguish between event inputs and inputs that are continuously present.
The EFSM in Figure 7 can be executed through a super-loop that repeatedly calls the update, monitor, write_outputs, read_inputs, and plan functions of all the state machines in the system, until one of them becomes inactive, i.e., it transitions to a state (valid or not) from which no outgoing arc exists. All the stand-alone simulators built in Lua for the course are intended for students to detect errors and, in professional environments, for early verification. In systems that must operate continuously, inactivity must be handled explicitly through either a stop or a reset function: the former removes the affected state machine from execution, whereas the latter returns it to an active state; in both cases, the overall system becomes active again.
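A skeletal super-loop might look as follows (a Python sketch, whereas the course simulators are in Lua; the Machine class is a hypothetical stand-in, and the call order is simplified, with read_inputs and plan preceding update, for clarity):

```python
# Hedged sketch of the super-loop described above: all machines advance in
# lock-step, and execution stops when any machine reaches a state with no
# outgoing arc. Only the five function names come from the text.

class Machine:
    def __init__(self, transitions, start):
        self.transitions = transitions      # state -> next state
        self.state = start
        self.next = start
    def read_inputs(self, I):               # I > 0: event inputs are stale
        pass
    def plan(self):
        self.next = self.transitions.get(self.state)
    def update(self):
        self.state = self.next
    def monitor(self):
        pass
    def write_outputs(self):
        pass
    def active(self):                       # inactive: no outgoing arc
        return self.state in self.transitions

def super_loop(machines, max_cycles=1000):
    """Run all machines in lock-step until one of them becomes inactive."""
    for I in range(max_cycles):
        for m in machines: m.read_inputs(I)
        for m in machines: m.plan()
        for m in machines: m.update()
        for m in machines: m.monitor()
        for m in machines: m.write_outputs()
        if any(not m.active() for m in machines):
            return I + 1                    # number of cycles executed
    return max_cycles

chain = Machine({"A": "B", "B": "C"}, "A")  # C has no outgoing arc
cycles = super_loop([chain])
```

A stop function would correspond to removing a machine from the list, and a reset function to restoring its state to one with outgoing arcs, after which the loop can resume.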
When integrated into a simulator such as CoppeliaSim, the simulator’s own super-loop invokes these functions to execute the EFSM in Figure 7, which in turn triggers the execution of all EFSMs in the system. In this setup, the loop runs until the system reaches an IDLE state, after which control returns to the simulator, allowing it to regulate simulated time.
In ROS, a similar approach should be used: each EFSM-based subsystem is executed within a node that spins at a defined time period. During each time slice, the node repeatedly evaluates the EFSM until it reaches the IDLE state. (In the course, although this integration is explained, no exercises currently cover the use of EFSM network simulators within ROS).

3.4. Simulation and Reality Synchronization

Synchronization between simulation models and real-world systems is essential for maintaining virtual models consistent with the evolving state of a physical system. Real-time simulation allows the simulated time to progress at the same pace as physical time; however, discrepancies between the simulated state and the physical state must be continuously monitored and used to update the system model parameters and, when possible, the configuration of the embedded controllers of the physical elements [63]. Consequently, virtual models and their physical counterparts maintain synchronization by sharing data [64], thereby ensuring consistent behavior.
Synchronization may also occur indirectly through interaction with other entities in the system. For example, the digital twin of a manipulator robot can synchronize with its physical counterpart via an external module that observes the manipulated object in the real environment and updates its state in the simulated environment. At the same time, the digital twin sends the control inputs that drive the physical robot [65].
In our case, synchronization is performed internally through dedicated modules integrated into the robot’s control architecture. The physical robot and its digital twin share the upper-level layers of the controller stack, which execute exclusively within the simulator, whereas the lower-level layers run in both the real and virtual robots. Both robots communicate with the upper layers, ensuring coherent operation across the system.
As shown in Figure 8, the control stack is divided between layers L0 and L1. Furthermore, communication is maintained with both the virtual and the physical robot, enabling the system to operate with both simultaneously.
More specifically, communication takes place through a transparent module, L0L1Link, which receives commands (C) from L1 and broadcasts them to both virtual and real L0 robots (output signal Z). It then waits for the L0 replies, via inputs R and V, in order to construct a single reply message M for L1. The EFSM shown in Figure 8 represents a basic synchronization mechanism for R and V, where R has priority. Cases in which messages on V arrive later than those on R are counted in the variable D. For simplicity, the diagram does not include time tolerance considerations. Inconsistencies between the contents of R and V during these time windows result in informative messages being sent to L1, so as to preserve the transparency of the module.
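The merging policy of L0L1Link can be illustrated with a small Python sketch (the actual module is an EFSM in Lua; names are hypothetical, and time tolerances are omitted here, as in Figure 8):

```python
# Illustrative sketch of the L0L1Link reply-merging policy described above:
# the real robot's reply (R) has priority over the virtual one (V), V replies
# that arrive after R are counted in D, and inconsistencies produce an
# informative message for L1 so that the module stays transparent.

class L0L1Link:
    def __init__(self):
        self.D = 0                      # count of V replies arriving after R

    def forward_command(self, C):
        return ("Z", C)                 # broadcast C to both L0 robots

    def merge_replies(self, R, V, v_after_r=False):
        """Build the single reply message M for L1 from inputs R and V."""
        if v_after_r:
            self.D += 1
        if R is not None and V is not None and R != V:
            return {"reply": R, "note": f"virtual/real mismatch: {V} != {R}"}
        return {"reply": R if R is not None else V}

link = L0L1Link()
m1 = link.merge_replies("done", "done")
m2 = link.merge_replies("done", "stuck", v_after_r=True)
```

Giving R priority means L1 always acts on the physical robot's behavior when it is connected, while the note field preserves visibility of any divergence.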
Note that when the real robot is connected, the responses sent to L1 are those generated by the physical robot. This may lead to discrepancies between the simulation and reality, since the simulated environment does not necessarily match the real one, and no mechanisms are currently implemented to perform this update.
However, because development can be completed entirely in the simulated environment and the laboratory connection to the real robot is transparent to the model, the overall experience is more positive.
This system architecture is implemented such that the physical robot executes only basic commands (L0), just like its virtual counterpart, while the remaining layers of the controller stack run exclusively in the simulator. In this setup, the CoppeliaSim simulator is linked to a single physical robot through a serial-profile Bluetooth connection; consequently, a multi-robot system would require as many instances of CoppeliaSim as robots in the system. This configuration works well for the laboratory sessions, as each team can work at home with its digital twin and, in the lab, with the physical robot.
Simulating multiple virtual robots within a single instance of CoppeliaSim can become computationally demanding and, to maintain synchronization in real time, may require reducing the system’s visualization refresh rate or even simplifying robot models. For practical applications, it therefore seems more appropriate to preserve an individual digital-twin simulation architecture and add an additional layer to each robot’s controller stack to enable communication with the other robots and with the system’s monitoring and control software.
In this regard, the system’s simulation software can be built from a model that integrates high-level models of each robot and, optionally, a coordinating component [66]. This state-machine system would communicate with the state-machine controllers of each virtual and physical robot to form a multi-robot system. Such an architecture can scale effectively, but the centralized implementation of the highest-level simulator makes it sensitive to communication delays or failures. Nevertheless, simulation at this higher level—where robot models are heavily simplified—can be performed for systems comprising thousands of units [67]. However, with real robots involved, this architecture has only been tested with dozens of units. As with any distributed system, communication remains a major dependency. Fortunately, at this level, events occur at much lower frequencies than in the lower and more reactive layers.

3.5. Model Update

Adapting the values of model parameters to maintain consistency between reality and simulation can be achieved through different strategies. These range from approaches in which specific controllers treat the discrepancies between the real and simulated systems as errors and output corrected parameter values, to approaches that replace traditional controllers with neural networks to provide a more adaptive method for identifying model parameters.
There are two types of parameters to update [68]: temporal parameters (when events occur) and behavioral parameters (what their values are). In this course, we focus on the first category, aiming to identify operation times, such as travel times.
Simulation fidelity refers to the degree of accuracy with which the simulation reflects the real world. Achieving this requires that digital-twin model parameters be updated in real time, which is challenging at lower levels of abstraction (physical fidelity) but feasible at higher levels (functional fidelity).
Sophisticated approaches use high-level abstraction simulation running concurrently with the digital twin to achieve real-time operation. The high-level simulator determines optimal parameter values, typically using AI techniques [69], while the main simulation concentrates on functional alignment rather than exact physical replication [70]. Another strategy for real-time simulation is to update model parameters offline or online but at a lower frequency than the control-system time step [71].
Alternatively, simpler methods based on less computationally expensive techniques, such as Kalman filters, can operate in real time [72].
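As a minimal example of this class of methods, a one-dimensional Kalman filter can track a travel-time parameter from noisy measurements (a Python sketch; the noise values are illustrative assumptions, not taken from the course):

```python
# Scalar Kalman filter of the kind mentioned above, tracking a travel-time
# estimate x (with variance P) from measurements z. R is the measurement
# noise variance and Q the process noise (slow parameter drift); both are
# assumed values for illustration.

def kalman_update(x, P, z, R=0.04, Q=0.001):
    """One predict/correct step; returns the updated (x, P)."""
    P = P + Q                   # predict: the parameter may drift slowly
    K = P / (P + R)             # Kalman gain
    x = x + K * (z - x)         # correct the estimate toward the measurement
    P = (1 - K) * P             # reduce uncertainty accordingly
    return x, P

x, P = 2.0, 1.0                 # initial travel-time guess (s) and variance
for z in [2.4, 2.5, 2.6, 2.5]:  # measured travel times from the data logger
    x, P = kalman_update(x, P, z)
```

Each update is a few arithmetic operations, which is why such filters can run in real time alongside the controller.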
In any case, all methods require data collection, and the course focuses on demonstrating how data loggers can be integrated into a robot's software so that data can be collected and used, in a subsequent stage, to update model parameters and detect system faults. In the laboratory sessions, students calibrate the virtual robot's movement speed to match that of the real physical one by analyzing the time differences observed during the execution of movement operations.
Various data loggers are considered, from those that store everything to smarter ones that record only events of interest, whether errors or pre-processed data for updating model parameters. The smart data logger presented in Figure 9 stores only data related to the robot's motion operations, in order to determine how well the real-time simulation tracks the real robot, together with alarms triggered by out-of-context messages and timeout situations.
In the log file, each entry is labeled with the real time (T) at which the command was sent to the robots, the rotation and displacement they were instructed to perform, and the times at which the first response (lap1) from one robot and the second (lap2) from the other were registered (i.e., the individual recorded time intervals for each robot). To determine which robot each response corresponds to, the system checks whether the state is AHEAD (the first response is from the virtual robot), BEHIND (the first response is from the real robot), or SYNC, meaning that both responses arrived simultaneously. If the responses are inconsistent or if any of them arrive after a specified timeout, an error is generated.
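The interpretation of a log entry can be sketched as follows (a Python illustration with hypothetical field names; lap1 and lap2 are the recorded response times relative to T, and the state indicates which robot answered first):

```python
# Hedged sketch of how a smart-logger entry could be interpreted, following
# the description above: AHEAD means the virtual robot answered first,
# BEHIND means the real robot did, SYNC means both arrived simultaneously.
# Late or missing responses, and unknown states, produce errors.

def classify(lap1, lap2, state, timeout=5.0):
    """Return per-robot intervals {'virtual': t, 'real': t} or an error."""
    if lap1 is None or lap2 is None or max(lap1, lap2) > timeout:
        return {"error": "timeout"}
    if state == "AHEAD":                # first response: virtual robot
        return {"virtual": lap1, "real": lap2}
    if state == "BEHIND":               # first response: real robot
        return {"real": lap1, "virtual": lap2}
    if state == "SYNC":                 # both responses simultaneous
        return {"virtual": lap1, "real": lap1}
    return {"error": "inconsistent state"}
```

The per-robot intervals recovered this way are what students compare when calibrating the virtual robot's movement speed against the physical one.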

4. Conclusions

Developing software for industrial monitoring, control, and robotics requires substantial knowledge and skill. Simulator-based digital twins help practitioners focus on behavior, but can hide details that matter when systems fail. We presented a development environment that couples a digital twin with a transparent computational model: any system behavior can be described as an EFS2M network.
Compared with statecharts, HSMs, and BTs, the proposed modeling style is simpler to learn, robust by construction, and easy to implement without specialized middleware. To be widely useful, however, it should align with the tools most engineers use, notably ROS 2. Our initial motivation was pedagogical: cover a broad, practice-oriented spectrum so students can see and reason about the full path from model to executable controller, both in simulation and on the real robot.
Over three years of teaching, students reported positive hands-on experiences and demonstrated the ability to adapt to projects using different modeling styles and toolchains, while requesting greater design freedom. University subject surveys across several dimensions (material quality, teaching, assessment system, workload, and perceived utility) indicate that the course is well aligned with the rest of the AI degree program. It has an overall average score of 3.20 out of 4 (based on 62 responses out of 121 enrolled students), compared with the program-wide average of 3.26. Its perceived usefulness is rated at 2.99, slightly above the program average of 2.98. Although some students criticize the course for not incorporating as much AI as expected, it also receives highly positive feedback, such as: “The strength of this subject is that it combines theory with practical assignments” (23/24), “I think it can be very interesting for people who want to pursue a career in robotics” (24/25), and “I really enjoyed the unique way of explaining robotics using finite state machines and loved how we actually worked with robots in class and in the lab. […] This is by far my favorite subject in the program” (25/26).
The student materials for the 2025/26 mobile-robotics module are publicly available [73] and include class presentations with stand-alone model simulators and problem statements, complete and fill-in-the-gaps Lua programs, laboratory guides, and the corresponding CoppeliaSim scenes featuring the digital twin of the physical robot. These materials constitute a reproducible baseline environment: all stand-alone examples can be executed directly in ZeroBrane, and the CoppeliaSim scenes can be simulated without any hardware-specific drivers. Example programs are ready to run, whereas the code for exercises and laboratory sessions must be completed by students, in line with the course’s emphasis on guided practice and progressive autonomy. Full functionality is achieved once students complete the provided code, but all shared components run out of the box and are fully accessible through the public repository.
From an educational standpoint, future work should include parallel examples implemented in other environments to ease transfer. From a research perspective, we will continue to embed self-diagnosis and error management in the generated code to detect both model and synthesis issues early.

Funding

This research received no external funding.

Data Availability Statement

The student materials for the course are provided in [73], whereas supplementary instructor resources are available upon request.

Acknowledgments

The author thanks Jordi Guerrero for building the robot and carrying out the tests; Ismael Chaile, Pragna Das, Joaquín Saiz-Alcaine and Daniel Rivas for their contributions to various versions of the physical and virtual robots over the years; Marc Serra-Asensio for his contribution to AI-assisted software synthesis of models; and Carlos García-Calvo for sharing with me the teaching of the Intelligent Robots course.

Conflicts of Interest

The author declares no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
BDI: Belief–Desire–Intention
BT: Behavior Tree
DDS: Data Distribution Service
ECTS: European Credit Transfer System
EFSM: Extended Finite State Machine
EFS2M: Extended Finite State Stack Machine
FSM: Finite State Machine
HCEFSM: Hierarchical Concurrent Extended Finite State Machine
HSM: Hierarchical State Machine
IDE: Integrated Development Environment
LLM: Large Language Model
MBD: Model-Based Design
MBSE: Model-Based Systems Engineering
MIR: Model Intermediate Representation
PID: Proportional–Integral–Derivative Controller
ROS, ROS 2: Robot Operating System (versions 1 and 2)
SLAM: Simultaneous Localization and Mapping
SSSM: Single-State State Machine
UML: Unified Modeling Language
XR: Extended Reality
YOLO: You Only Look Once

References

  1. Ribas-Xirgo, L. State Machine Model of a Controller System for an Educational Mobile Robot. In Proceedings of the International Workshop on Physical Agents, Cartagena, Spain, 4–5 September 2025; Available online: http://hdl.handle.net/10317/21020 (accessed on 2 February 2026). [CrossRef]
  2. Cruz-Lemus, J.A.; Genero, M.; Manso, M.E.; Morasca, S.; Piattini, M. Assessing the understandability of UML statechart diagrams with composite states—A family of empirical studies. Empir. Softw. Eng. 2009, 14, 685–719. [Google Scholar] [CrossRef]
  3. Pasupuleti, S. Model-Based Systems Engineering (MBSE) for the Design and Integration of Complex Robotics Systems. ESP J. Eng. Technol. Adv. 2023, 3, 126–132. Available online: http://espjeta.org/Volume3-Issue3/JETA-V3I7P116.pdf (accessed on 2 February 2026). [CrossRef]
  4. Zhang, L.; Chen, Z.; Laili, Y.; Ren, L.; Deen, M.J.; Cai, W.; Zhang, Y.; Zeng, Y.; Gu, P. MBSE 2.0: Toward More Integrated, Comprehensive, and Intelligent MBSE. Systems 2025, 13, 584. [Google Scholar] [CrossRef]
  5. Vogel-Heuser, B.; Schütz, D.; Frank, T.; Legat, C. Model-driven engineering of Manufacturing Automation Software Projects—A SysML-based approach. Mechatronics 2014, 24, 883–897. [Google Scholar] [CrossRef]
  6. Center, K. Describing and Deploying Satellite Behaviors Using Rules-based Statecharts. In Small Satellite Conference; Utah State University’s (USU) Institutional Repository: Logan, UT, USA, 2014; Available online: https://digitalcommons.usu.edu/smallsat/2014/IntellSoftware/2/ (accessed on 2 February 2026).
  7. Cariou, E.; Brunschwig, L.; Le Goaer, O.; Barbier, F. A software development process based on UML state machines. In Proceedings of the International Conference on Advanced Aspects of Software Engineering (ICAASE), Constantine, France, 1–8 November 2020. [Google Scholar] [CrossRef]
  8. Harel, D. Statecharts: A Visual Formalism for Complex Systems. Sci. Comput. Program. 1987, 8, 231–274. [Google Scholar] [CrossRef]
  9. Yang, N.; Cuijpers, P.; Schiffelers, R.; Lukkien, J.; Serebrenik, A. Single-state state machines in model-driven software engineering: An exploratory study. Empir. Softw. Eng. 2021, 26, 6. [Google Scholar] [CrossRef]
  10. Su, Z.; Wang, D.; Yang, Y.; Yu, Z.; Chang, W.; Li, W. MDD: A Unified Model-Driven Design Framework for Embedded Control Software. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2022, 41, 3252–3265. [Google Scholar] [CrossRef]
  11. Open Robotics. ROS—Robot Operating System: Home and ROS 2 Releases. 2024–2025 Updates. Available online: www.ros.org/ (accessed on 2 February 2026).
  12. Pagare, T. The Future of Robotics Simulation and ROS 2 in 2025. LinkedIn Article. 29 May 2025. Available online: www.linkedin.com/pulse/future-robotics-simulation-ros-2-2025-tejas-pagare-gh03e/ (accessed on 2 February 2026).
  13. Trinh, H.H.; Sirjani, M.; Ciccozzi, F.; Masud, A.N.; Sjödin, M. Modelling and Model-Checking a ROS2 Multi-Robot System using Timed Rebeca. arXiv 2025. Available online: https://arxiv.org/abs/2511.15227 (accessed on 2 February 2026).
  14. Dust, L.; Gu, R.; Mubeen, S.; Ekström, M.; Seceleanu, C. A model-based approach to automation of formal verification of ROS 2-based systems. Front. Robot. AI 2025, 12, 1592523. [Google Scholar] [CrossRef]
  15. Sarraf, G. ROS 2 Control for Custom Robots: Mastering Real-Time Robot Control. ThinkRobotics Blog. 3 September 2025. Available online: https://thinkrobotics.com/blogs/learn/ros-2-control-for-custom-robots-mastering-real-time-robot-control (accessed on 2 February 2026).
  16. Naderhirn, M.; Köpf, M.; Mendler, J. Model Based Design for Safety Critical Controller Design with ROS and Gazebo. In ROSCon 2017. Available online: https://roscon.ros.org/2017/presentations/ROSCon%202017%20Kontrol.pdf (accessed on 2 February 2026).
  17. Flores González, J.M.; Coronado, E.; Yamanobe, N. ROS-Compatible Robotics Simulators for Industry 4.0 and Industry 5.0: A Systematic Review of Trends and Technologies. Appl. Sci. 2025, 15, 8637. [Google Scholar] [CrossRef]
  18. Winiarski, T.; Kaniuka, J.; Giełdowski, D.; Ostrysz, J.; Radlak, K.; Kushnir, D. ROS-Related Robotic Systems Development with V-Model-Based Application of MeROS Metamodel. arXiv 2025. [Google Scholar] [CrossRef]
  19. Heller, M. LLMs and the Rise of the AI Code Generators. InfoWorld. May 2023. Available online: www.infoworld.com/article/2338500/llms-and-the-rise-of-the-ai-code-generators.html (accessed on 2 February 2026).
  20. Huynh, N.; Lin, B. A Survey On Large Language Models For Code Generation. arXiv 2025. [Google Scholar] [CrossRef]
  21. Jiang, J.; Wang, F.; Shen, J.; Kim, S.; Kim, S. A Survey on Large Language Models for Code Generation. ACM Trans. Softw. Eng. Methodol. 2025, 35, 2. [Google Scholar] [CrossRef]
  22. Yan, H.; Vaidya, S.S.; Zhang, X.; Yao, Z. Guiding AI to Fix Its Own Flaws: An Empirical Study on LLM-Driven Secure Code Generation. arXiv 2025. Available online: https://arxiv.org/html/2506.23034v1 (accessed on 2 February 2026).
  23. Wang, J.; Luo, X.; Cao, L.; He, H.; Huang, H.; Xie, J.; Jatowt, A.; Cai, Y. Is Your AI-Generated Code Really Secure? Evaluating Large Language Models on Secure Code Generation with CodeSecEval. arXiv 2024. [Google Scholar] [CrossRef]
  24. Liu, Y.; Chen, X.; Gao, Y.; Su, Z.; Zhang, F.; Zan, D.; Lou, J.; Chen, P.; Ho, T. Uncovering and quantifying social biases in code generation. In Proceedings of the 37th International Conference on Neural Information Processing Systems (NIPS ’23); Curran Associates Inc.: Red Hook, NY, USA, 2023; pp. 2368–2380. [Google Scholar]
  25. Wang, C.; Li, Z.; Gao, C.; Wang, W.; Peng, T.; Huang, H.; Deng, Y.; Wang, S.; Lyu, M. Exploring Multi-Lingual Bias of Large Code Models in Code Generation. arXiv 2024. [Google Scholar] [CrossRef]
  26. Huang, D.; Zhang, J.; Bu, Q.; Xie, X.; Chen, J.; Cui, H. Bias Testing and Mitigation in LLM-based Code Generation. ACM Trans. Softw. Eng. Methodol. 2026, 35, 5. [Google Scholar] [CrossRef]
  27. Dou, S.; Jia, H.; Wu, S.; Zheng, H.; Wu, M.; Tao, Y.; Zhang, M.; Chai, M.; Fan, J.; Xi, Z.; et al. What is Wrong with Your Code Generated by Large Language Models? An Extensive Study. Sci. China Inf. Sci. 2025, 69, 112107. [Google Scholar] [CrossRef]
  28. Pan, R.; Ibrahimzada, A.R.; Krishna, R.; Sankar, D.; Wassi, L.P.; Merler, M.; Sobolev, B.; Pavuluri, R.; Sinha, S.; Jabbarvand, R. Lost in Translation: A Study of Bugs Introduced by Large Language Models while Translating Code. In Proceedings of the IEEE/ACM 46th International Conference on Software Engineering (ICSE ’24); Association for Computing Machinery: New York, NY, USA, 2024; p. 82. [Google Scholar] [CrossRef]
  29. Liu, Y.; Le-Cong, T.; Widyasari, R.; Tantithamthavorn, C.; Li, L.; Le, X.-B.D.; Lo, D. Refining ChatGPT-Generated Code: Characterizing and Mitigating Code Quality Issues. ACM Trans. Softw. Eng. Methodol. 2024, 33, 116. [Google Scholar] [CrossRef]
  30. Wang, Z.; Zhou, Z.; Song, D.; Huang, Y.; Chen, S.; Ma, L.; Zhang, T. Towards Understanding the Characteristics of Code Generation Errors Made by Large Language Models. In Proceedings of the 2025 IEEE/ACM 47th International Conference on Software Engineering, ICSE 2025; IEEE Computer Society: Washington, DC, USA, 2025; pp. 2587–2599. [Google Scholar] [CrossRef]
  31. Ma, Z.; Guo, H.; Chen, J.; Peng, G.; Cao, Z.; Ma, Y.; Gong, Y. LLaMoCo: Instruction Tuning of Large Language Models for Optimization Code Generation. arXiv 2024. [Google Scholar] [CrossRef]
  32. Tsai, Y.D.; Liu, M.; Ren, H. Code less, align more: Efficient LLM fine-tuning for code generation with data pruning. arXiv 2024. [Google Scholar] [CrossRef]
  33. Weyssow, M.; Zhou, X.; Kim, K.; Lo, D.; Sahraoui, H. Exploring Parameter-Efficient Fine-Tuning Techniques for Code Generation with Large Language Models. ACM Trans. Softw. Eng. Methodol. 2025, 34, 204. [Google Scholar] [CrossRef]
  34. Chawre, H. Fine-Tuning LLMs: Overview, Methods, and Best Practices. Turing. March 2025. Available online: www.turing.com/resources/finetuning-large-language-models (accessed on 2 February 2026).
  35. Mu, F.; Shi, L.; Wang, S.; Yu, Z.; Zhang, B.; Wang, C.; Liu, S.; Wang, Q. ClarifyGPT: A Framework for Enhancing LLM-Based Code Generation via Requirements Clarification. Proc. ACM Softw. Eng. 2024, 1, 103. [Google Scholar] [CrossRef]
  36. Tian, H.; Lu, W.; Li, T.; Tang, X.; Cheung, S.; Klein, J.; Bissyandé, T. Is ChatGPT the ultimate programming assistant: How far is it? arXiv 2023, arXiv:2304.11938. [Google Scholar] [CrossRef]
  37. Bordt, S.; Luxburg, U. ChatGPT Participates in a Computer Science Exam. arXiv 2023, arXiv:2303.09461. [Google Scholar] [CrossRef]
  38. Hou, W.; Ji, Z. Comparing Large Language Models and Human Programmers for Generating Programming Code. Adv. Sci. 2025, 12, 2412279. [Google Scholar] [CrossRef]
  39. Siegwart, R.; Nourbakhsh, I.R. Introduction to Autonomous Mobile Robots; The MIT Press: Cambridge, MA, USA, 2004. [Google Scholar]
  40. Craig, J.J. Introduction to Robotics: Mechanics and Control; Pearson Education International: London, UK, 2005. [Google Scholar]
  41. Murphy, R.R. Introduction to AI Robotics; The MIT Press: Cambridge, MA, USA, 2019. [Google Scholar]
  42. Gil-Vázquez, P. Intelligent Robotics. Master’s Thesis, Universitat d’Alacant, Sant Vicent del Raspeig, Spain, 2023. [Google Scholar]
  43. Sacré, P. Introduction to Intelligent Robotics. Master’s Thesis, University of Liège, Liège, Belgium, 2025. [Google Scholar]
  44. Čehovin Zajc, L.; Rezelj, A.; Skočaj, D. Teaching Intelligent Robotics with a Low-Cost Mobile Robot Platform. In Proceedings of the Robotics in Education (RiE), Yverdon-les-Bains, Switzerland, 20–23 May 2015. [Google Scholar]
  45. PlayerStage Homepage. Available online: https://sourceforge.net/projects/playerstage/ (accessed on 2 February 2026).
  46. Coppelia Robotics: CoppeliaSim. Available online: www.coppeliarobotics.com (accessed on 2 February 2026).
  47. Cyberbotics: Webots. Available online: https://cyberbotics.com/ (accessed on 2 February 2026).
  48. Macenski, S.; Foote, T.; Gerkey, B.; Lalancette, C.; Woodall, W. Robot Operating System 2: Design, architecture, and uses in the wild. Sci. Robot. 2022, 7, eabm6074. [Google Scholar] [CrossRef] [PubMed]
  49. Cañas-Plaza, J.; Perdices, E.; García-Pérez, L.; Fernández-Conde, J. A ROS-Based Open Tool for Intelligent Robotics Education. Appl. Sci. 2020, 10, 7419. [Google Scholar] [CrossRef]
  50. Roldán-Álvarez, D.; Mahna, S.; Cañas, J.M. A ROS-based Open Web Platform for Intelligent Robotics Education. In Robotics in Education (RiE); Merdan, M., Lepuschitz, W., Koppensteiner, G., Balogh, R., Obdržálek, D., Eds.; Advances in Intelligent Systems and Computing Series; Springer: Cham, Switzerland, 2022; Volume 1359. [Google Scholar] [CrossRef]
  51. Martín-Rico, F. A Concise Introduction to Robot Programming with ROS2; Chapman & Hall: London, UK, 2022. [Google Scholar]
  52. Ptolemaeus, C. (Ed.) System Design, Modeling, and Simulation using Ptolemy II; Ptolemy.org: Berkeley, CA, USA, 2014. [Google Scholar]
  53. Wong, K.; Ehlers, R.; Kress-Gazit, H. Correct High-level Robot Behavior in Environments with Unexpected Events. In Proceedings of the 2014 Robotics: Science and Systems Conference, Berkeley, CA, USA, 12–16 July 2014. [Google Scholar] [CrossRef]
  54. Gharbi, A. Faulty control system. Cogn. Syst. Res. 2024, 86, 101233. [Google Scholar] [CrossRef]
  55. Bradski, G. The OpenCV Library. Dr. Dobb's J. Softw. Tools 2000, 25, 2236121. [Google Scholar]
  56. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR); IEEE: New York, NY, USA, 2016; pp. 779–788. [Google Scholar] [CrossRef]
  57. JGraph: Diagrams.net, Draw.io. Available online: https://app.diagrams.net/ (accessed on 2 February 2026).
  58. Ierusalimschy, R.; Figueiredo, L.H.d.; Celes, W. Lua 5.1 Reference Manual; Lua.org: Rio de Janeiro, Brazil, 2006. [Google Scholar]
  59. Kulchenko, P. ZeroBrane Studio: Lightweight IDE for Your Lua Needs. Available online: https://studio.zerobrane.com (accessed on 2 February 2026).
  60. Rivas, D.; Das, P.; Saiz-Alcaine, J.; Ribas-Xirgo, L. Synthesis of Controllers from Finite State Stack Machine Diagrams. In IEEE International Conference on Emerging Technologies and Factory Automation (ETFA); IEEE: New York, NY, USA, 2018; pp. 1179–1182. [Google Scholar]
  61. de Silva, L.; Meneguzzi, F.; Logan, B. BDI agent architectures: A survey. In Proceedings of the 29th International Joint Conference on Artificial Intelligence (IJCAI); Bessiere, C., Ed.; IJCAI Organization: Menlo Park, CA, USA, 2020; pp. 4914–4921. [Google Scholar]
  62. Hyde, R. The Fallacy of Premature Optimization. Ubiquity 2009, 2009, 1. [Google Scholar] [CrossRef]
  63. Bernateau, S.; Derigent, W.; Marangé, P.; Aubry, A.; Attar, A.B. Review of Methods for Synchronizing a Digital Twin in an Industrial Context. In Service Oriented, Holonic and Multi-Agent Manufacturing Systems for Industry of the Future; Borangiu, T., Trentesaux, D., Leitão, P., Legat, C., Eds.; SOHOMA 2024; Studies in Computational Intelligence; Springer: Cham, Switzerland, 2025; Volume 1197. [Google Scholar] [CrossRef]
  64. Soykan, B.; Blanc, G.; Rabadi, G. A Proof-of-Concept Digital Twin for Real-Time Simulation: Leveraging a Model-Based Systems Engineering Approach. IEEE Access 2025, 13, 58899–58912. [Google Scholar] [CrossRef]
  65. Abou-Chakra, J.; Sun, L.; Rana, K.; May, B.; Schmeckpeper, K.; Suenderhauf, N.; Minniti, M.V.; Herlant, L. Real-is-Sim: Bridging the Sim-to-Real Gap with a Dynamic Digital Twin. arXiv 2025. [Google Scholar] [CrossRef]
  66. Chaile, I.F.; Ribas-Xirgo, L. MASYM, a Framework to Deploy Synchronized Industrial Systems Based on Any ABM Simulator. IEEE Lat. Am. Trans. 2015, 13, 3244–3252. [Google Scholar] [CrossRef]
  67. Ribas-Xirgo, L. A state-based multi-agent system model of taxi fleets. Multimed. Tools Appl. 2022, 81, 3515–3534. [Google Scholar] [CrossRef]
  68. Deubert, D.; Schultze, S.; Selig, A.; Verl, A. Synchronized online simulation at field level for virtual sensing of web tension. Discov. Appl. Sci. 2025, 7, 574. [Google Scholar] [CrossRef]
  69. Cao, Y.; Currie, C.; Onggo, B.S.; Higgins, M. Simulation Optimization for a Digital Twin Using a Multi-Fidelity Framework. In Winter Simulation Conference (WSC), Phoenix, AZ, USA, 12–15 December 2021; IEEE: New York, NY, USA, 2021; pp. 1–12. [Google Scholar] [CrossRef]
  70. Maddukuri, A.; Jiang, Z.; Chen, L.Y.; Nasiriany, S.; Xie, Y.; Fang, Y.; Huang, W.; Wang, Z.; Xu, Z.; Chernyadev, N.; et al. Sim-and-Real Co-Training: A Simple Recipe for Vision-Based Robotic Manipulation. In Proceedings of the Robotics: Science and Systems (RSS), Los Angeles, CA, USA, 21–25 June 2025; Available online: https://co-training.github.io/ (accessed on 2 February 2026).
  71. Xiu, D.; Tartakovsky, D.M. Computational Framework for Real-Time Digital Twins. Thermopedia 2025. Available online: www.thermopedia.com/content/10452 (accessed on 2 February 2026).
  72. Das, P.; Ribas-Xirgo, L. Travel Time Estimation for Optimal Planning in Internal Transportation. World Electr. Veh. J. 2024, 15, 565. [Google Scholar] [CrossRef]
  73. Ribas-Xirgo, L. Introduction to Intelligent, Mobile Robots with State-based Controllers. Zenodo 2026. [Google Scholar] [CrossRef]
Figure 1. The real mobile robot and its virtual twin.
Figure 2. An EFSM representing a programmable counter that outputs an end event e after P cycles following the one in which the input event b was true.
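The counter of Figure 2 can be sketched as a small Python class; this is a minimal illustration of the EFSM semantics described in the caption, and the state and variable names (`IDLE`, `COUNT`, `C`) are assumptions, not taken from the paper.

```python
# Hypothetical sketch of the programmable counter EFSM of Figure 2:
# after the input event b is seen, it counts P cycles and then raises e.

class CounterEFSM:
    """Two-state EFSM: IDLE waits for b; COUNT decrements C each cycle."""
    def __init__(self, P):
        self.P = P            # programmed number of cycles
        self.state = "IDLE"
        self.C = 0            # cycle counter (extended-state variable)

    def step(self, b):
        """Advance one cycle; return the output event e (True/False)."""
        e = False
        if self.state == "IDLE":
            if b:                     # start counting on input event b
                self.C = self.P
                self.state = "COUNT"
        elif self.state == "COUNT":
            self.C -= 1
            if self.C == 0:           # P cycles elapsed after b
                e = True
                self.state = "IDLE"
        return e
```

Calling `step` once per simulation cycle reproduces the caption's behavior: with `P = 3`, the event `e` becomes true exactly three cycles after the cycle in which `b` was true.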
Figure 3. Explicit hierarchical state machines (top) and simulation by communicating parallel EFSMs sharing memory (bottom).
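One way to emulate the hierarchical machine of Figure 3 with communicating parallel EFSMs is to let a parent machine activate a child machine through shared memory; the sketch below is an assumed minimal illustration of that idea, with all state and flag names invented for the example.

```python
# Hypothetical sketch: a parent EFSM activates a child EFSM through
# shared-memory flags, emulating a hierarchical (nested) state.

shared = {"child_active": False, "child_done": False}

def parent_step(state):
    if state == "WORK":
        shared["child_active"] = True   # "entering" the nested machine
        return "WAIT"
    if state == "WAIT" and shared["child_done"]:
        shared["child_active"] = False  # nested machine has finished
        return "DONE"
    return state

def child_step(state):
    if not shared["child_active"]:
        return "OFF"                    # inactive while parent is elsewhere
    if state == "OFF":
        return "RUN"
    if state == "RUN":
        shared["child_done"] = True     # report completion to the parent
        return "OFF"
    return state

# Lockstep simulation: every machine takes one step per global cycle.
p, c = "WORK", "OFF"
for _ in range(4):
    p, c = parent_step(p), child_step(c)
```

After four cycles the parent has reached `DONE`, showing how the nested behavior runs as an ordinary parallel EFSM while the parent waits.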
Figure 4. EFS2Ms are EFSMs extended with a state stack. The smaller bubbles represent operations on the state stack: a pop when placed at the beginning of an arrow and a push when placed at the end. The hexagons denote emptying the stack when placed at the beginning of an arrow, and jumping to the state at the top of the stack (performing the corresponding pop) when placed at the end.
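The stack mechanics of Figure 4 resemble subroutine calls: a transition pushes a return state before entering a shared sub-behavior, and the sub-behavior's exit transition pops the stack to resume the caller. The following sketch illustrates those semantics only; the class and state names are assumptions for the example.

```python
# Hypothetical sketch of EFS2M stack operations (Figure 4):
# push on entering a sub-behavior, pop on its exit, clear on abort.

class EFS2M:
    def __init__(self):
        self.state = "MAIN"
        self.stack = []                   # the state stack

    def call(self, sub, return_state):
        self.stack.append(return_state)   # push at the end of the arrow
        self.state = sub

    def ret(self):
        # hexagon at the end of an arrow: go to the state on top
        # of the stack (performing the corresponding pop)
        self.state = self.stack.pop()

    def abort(self):
        # hexagon at the beginning of an arrow: empty the stack
        self.stack.clear()
        self.state = "MAIN"

m = EFS2M()
m.call("TURN", return_state="FOLLOW")  # invoke TURN, resume at FOLLOW
m.ret()                                # back to FOLLOW via the stack
```

Because the return state is held on the stack rather than hard-coded, the same sub-behavior (`TURN` here) can be entered from, and return to, different parts of the machine.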
Figure 5. The EFSM network of the mobile robot controller is organized as a three-tier stack. The upper layer (L2) receives high-level instructions from the user-interface module and issues commands such as “F path” to the executive layer (L1), which interprets this as “follow the specified path”. The executive layer then generates commands such as “1 angle dist”, instructing the reactive layer (L0) to move to the indicated polar coordinates. Each layer acknowledges received commands upon completion or returns warning or error messages, possibly including supplementary data for diagnostic analysis and model updates.
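The command flow of Figure 5 can be illustrated with message queues between layers; this is a hedged sketch of the decomposition of an "F path" command into "1 angle dist" moves with completion acknowledgments, and the queue names and data shapes are assumptions, not the paper's actual interfaces.

```python
# Hypothetical sketch of the three-tier command flow of Figure 5:
# L2 issues "F path", L1 decomposes it into reactive "1 angle dist"
# moves, and L0 acknowledges each move on completion.
from collections import deque

to_L1, to_L0, acks_L1 = deque(), deque(), deque()

def L2_issue(path):
    to_L1.append(("F", path))            # "follow the specified path"

def L1_step():
    if to_L1:
        cmd, path = to_L1.popleft()
        if cmd == "F":
            for angle, dist in path:     # decompose into reactive moves
                to_L0.append(("1", angle, dist))

def L0_step():
    if to_L0:
        _, angle, dist = to_L0.popleft()
        # ... drive motors toward the polar target (angle, dist) ...
        acks_L1.append("ok")             # acknowledge completion to L1

L2_issue([(90, 0.5), (0, 1.0)])
L1_step()
while to_L0:
    L0_step()
```

In the full design, the acknowledgments may instead carry warning or error codes and supplementary data, which is what enables the diagnostic analysis mentioned in the caption.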
Figure 6. Portion of the L1 EFSM implementing the lidar-sweep procedure. Its states either wait for commands from L2 or for responses from the lidar controller, or they schedule variable updates in a defined temporal order: for example, initializing J when transitioning from LISTEN to LIDAR, and subsequently setting P from J when moving from LIDAR to ECHO.
Figure 7. The simulation of the EFSM network in real time includes a waiting state (IDLE) to synchronize the simulation time sT with the real time T. An optional END state (shown in grey) can be added for cases in which some action must be taken when any EFSM reaches a sink or erroneous state.
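The IDLE state of Figure 7 amounts to sleeping until the wall clock catches up with simulation time after each network step. The loop below is a minimal sketch of that synchronization, assuming a fixed cycle period (`CYCLE` is an invented parameter).

```python
# Hypothetical sketch of the real-time synchronization of Figure 7:
# after each simulation step, idle until wall-clock time T reaches sT.
import time

CYCLE = 0.05                     # simulated seconds per network step

def run(network_step, cycles):
    t0 = time.monotonic()
    sT = 0.0
    for _ in range(cycles):
        network_step()           # advance every EFSM one cycle
        sT += CYCLE
        lag = sT - (time.monotonic() - t0)
        if lag > 0:
            time.sleep(lag)      # IDLE state: wait until T = sT
```

If `lag` is negative the simulation is running behind real time, and a production scheduler would have to decide whether to skip the wait, drop cycles, or flag the overrun.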
Figure 8. The real and the virtual robot models share the higher layers of the controller; communication between these layers and the lower ones goes through the transparent layer L0L1Link, which behaves as shown in the EFSM on the left. It gives priority to the real robot and counts (in D) how many cycles the simulation lags behind reality.
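The essence of the L0L1Link layer of Figure 8 (prioritizing the real robot's replies and tracking the simulation's lag in D) can be sketched as follows; the method name and reply encoding are assumptions for illustration.

```python
# Hypothetical sketch of the transparent L0L1Link layer of Figure 8:
# it reports the real robot's reply upward with priority and counts
# in D how many cycles the simulation lags behind reality.

class L0L1Link:
    def __init__(self):
        self.D = 0                    # cycles the simulation is behind

    def exchange(self, real_reply, sim_reply):
        """Called once per cycle with each robot's reply (or None)."""
        if real_reply is not None and sim_reply is None:
            self.D += 1               # real robot answered first: sim lags
        elif sim_reply is not None and self.D > 0:
            self.D -= 1               # simulation catching up
        # priority to the real robot's reply when both are available
        return real_reply if real_reply is not None else sim_reply
```

Because the layer is transparent, L1 sees a single lower layer regardless of whether the reply originated in the physical robot or in its virtual twin.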
Figure 9. The smart data logger records filtered event entries into a log file, which can later be analyzed to refine the robot’s high-level model and identify the causes of potential system failures.
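The smart data logger of Figure 9 reduces to writing only the events that pass a filter predicate, stamped with the time at which they occurred. The sketch below assumes a simple dictionary event format and an invented default filter; the real logger's event schema is not shown in this excerpt.

```python
# Hypothetical sketch of the smart data logger of Figure 9: only events
# matching a filter predicate are written, with a timestamp, so later
# analysis can refine the model or trace failures.
import time

def make_logger(path, keep=lambda ev: ev["level"] in ("warn", "error")):
    log = open(path, "a")
    def record(event):
        if keep(event):                          # filter meaningful events
            log.write(f'{time.time():.3f} {event["level"]} '
                      f'{event["msg"]}\n')
            log.flush()
    return record

log_event = make_logger("robot.log")
log_event({"level": "info",  "msg": "L0 ack"})       # filtered out
log_event({"level": "error", "msg": "L1 timeout"})   # recorded
```

Swapping the `keep` predicate changes what counts as a "meaningful" event, which is how the same mechanism can serve both model updating and failure diagnosis.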

Share and Cite

MDPI and ACS Style

Ribas-Xirgo, L. Simulator-Based Digital Twin of a Robotics Laboratory. Machines 2026, 14, 273. https://doi.org/10.3390/machines14030273
