Article

Flexible Constraint-Based Controller Framework for Ros_Control

1 TECNALIA, Basque Research and Technology Alliance (BRTA), Mikeletegi Pasalekua 2, 20009 Donostia-San Sebastián, Spain
2 IKERBASQUE, Basque Foundation for Science, 48013 Bilbao, Spain
* Author to whom correspondence should be addressed.
Robotics 2025, 14(8), 109; https://doi.org/10.3390/robotics14080109
Submission received: 28 May 2025 / Revised: 28 July 2025 / Accepted: 8 August 2025 / Published: 11 August 2025
(This article belongs to the Section Sensors and Control in Robotics)

Abstract

Generating robot behaviors in dynamic real-world situations generally requires programming multiple, often redundant, degrees of freedom to meet the multiple goals governing the desired motions. In this work, we propose a constraint-based controller specification methodology. A novel declarative language is used to combine semantically specialized building blocks into composite controllers. This description is automatically transformed at runtime into an executable form, which can leverage multiple threads to parallelize computations whenever possible. Enabling runtime definition of controller topologies from declarative descriptions not only reduces the work required to develop such controllers but also allows new controllers to be synthesized dynamically by higher-level task planners or through user interaction with Graphical User Interfaces (GUIs). Our solution adds new functionality to the Robot Operating System (ROS)/ros_control ecosystem, where robot behaviors are typically achieved by deploying single-objective, off-the-shelf controllers for tasks like following joint trajectories, executing interpolated point-to-point motions in Cartesian space, or achieving basic compliant behaviors. Our proposed constraint-based framework enhances ros_control by providing the means to easily construct composite controllers from existing primary elements using our design language. Building on top of the ros_control infrastructure facilitates the usage of our controller with a wide range of supported robots and enables quick integration with the existing ROS ecosystem.

1. Introduction

Programming robot applications commonly consists of describing the motion of a certain reference frame attached to the robot flange, often referred to as the Tool Control Point (TCP). This requires determining the position and orientation of said frame, which in our 3D world means independently controlling six degrees of freedom. Industrial manipulators with six joints provide exactly the mobility required in these cases. However, there are robots with different degrees of freedom, like Cartesian robots with only 3 degrees of freedom or humanoids with tens of them. Likewise, certain tasks may constrain fewer degrees of freedom, leaving the others to vary freely. For instance, polishing or drilling requires determining motion in only five directions, since rotation about the tool axis is free. In other cases, the degrees of freedom might be shared across multiple objectives, which must be fulfilled simultaneously.
Constraint-based approaches are useful in these types of scenarios. They generalize techniques used to control the 3D motion of a single TCP and instead define tasks in terms of an arbitrary number of constraints, each of which determines the behavior in some constraint-specific, n-dimensional space. An exact solution to the constrained motion problem exists provided that the constraint spaces are mutually orthogonal and that the number of constraint dimensions does not exceed the total number of degrees of freedom of the manipulator. Furthermore, available techniques to solve this problem offer solutions with different trade-offs for degenerate cases where an exact solution does not exist. These techniques can be applied in the context of pre-planning robot motions, as seen in [1]. Alternatively, as in [2,3,4,5], they can be used to dynamically generate robot motions at runtime, tracking a set of constraints using closed-loop control techniques.
As mentioned, the constraint-based methodology is a powerful tool to design complex robot behaviors by composing individual constraints, where each of the constraints can be expressed in relation to the relevant system features. Software support for designing controllers using constraint-based approaches exists for some of the cited methodologies. Given the complexity of designing and executing these types of controllers in a generic way, all of these are built on some supporting software infrastructure. The key parts of this infrastructure, which are common to the aforementioned approaches as well as the one presented here, are (a) drivers providing access to the robot hardware, (b) a language or specification format to define specific controller characteristics, and (c) communication middleware allowing one to combine the controllers with the rest of the application.
The choice of the tools to provide this infrastructure is a key aspect in supporting the constraint-based approach. Other similar frameworks make use of scripting languages, such as Python or Lua, to provide interfaces for controller designers to build the mathematical expressions that the runtime controller later uses to execute the task. Our approach differs from these by defining a novel, purely declarative YAML-based specification language which can be used to efficiently compose controllers out of predefined behavioral blocks. This comes with the burden of initially creating said building blocks but provides a higher abstraction level to the designers of the combined controllers, thus facilitating their work.
With respect to the robot hardware interface, we ensure wide, cross-platform applicability by building on the ros_control framework, which is the de facto standard for writing manipulation controllers in ROS-based applications. While other constraint-based frameworks already exist, their implementations require special support packages built for each individual hardware platform, thus limiting their applicability within the robotics community at large. Either because they predate the popularity of the Robot Operating System (ROS) or because they come from institutions invested in their own robot control software frameworks, none of the existing solutions offer first-class support for the ROS ecosystem. The framework that we propose here builds on top of the ros_control drivers [6], taking advantage of the excellent hardware support provided by the community to ensure quasi-universal compatibility.
Summarizing, we present here a constraint-based software framework that differentiates itself from the existing alternatives in these five relevant aspects:
  • Extensive hardware support provided by the existing community and manufacturer-provided ros_control-compatible drivers.
  • Declarative specification language based on YAML syntax to compose controllers out of a user-extensible library of predefined blocks.
  • A novel syntax with semantically specialized types that provides a concise and expressive controller-specification language.
  • Definition of a Dataflow construct, derived from Taskflow, that represents the controller architecture as directed, acyclic graphs that lend themselves to automatic scheduling on multiple threads.
  • Native integration into the larger ROS ecosystem, allowing easy integration with ROS-based applications.
This framework can be viewed as a tool set with which users can design controllers by composing existing behaviors and configuring the data sources and associated filter pipelines. The provided specification language allows this composition to be defined declaratively and tuned to obtain the desired performance in each robot platform. Tools for outputting internal data from the controller to the ROS ecosystem are also provided, facilitating introspection for fine-tuning and debugging during development using ROS visualization tools.
The rest of this document presents the proposed control framework. In Section 2, additional background about constraint-based control techniques and the ros_control framework is provided. It finishes with a discussion about where the current work lies in comparison with other approaches and how it tries to address their limitations. An in-depth description of the proposed controller framework is provided in Section 3. The first half of this section lists all its features, and in the second, some relevant implementation details are explained. A set of use cases where the proposed work has been used to solve real-world robotics applications is presented in Section 4. Finally, Section 5 contains conclusions and final remarks and proposes potential lines for future work.

2. Background

Prior to discussing the design and implementation of the ros_control-native constraint-based controller, this section gives a brief background. Firstly, one way to formulate the constraint-based control problem and its relation to optimization techniques is introduced. This is followed by a brief introduction to the ros_control framework and the tools it provides to a controller designer.

2.1. Constraint-Based Programming as Optimization Problem

Mathematical optimization in the most general form consists of obtaining an element that maximizes or minimizes some criterion from a set of possible values. Depending on the specific form of the function to maximize/minimize and on the shape of the set of possible values, different categories of optimization can be found. These can range from problems with fast analytical solutions to problems for which no closed-form solutions exist and where heuristics must be used to find correct, not necessarily globally optimal, solutions. In robotics, different categories of problems can be formulated in this manner. At the mission level, finding the optimal sequence in which a series of tasks should be carried out can be modelled as the combinatorial optimization problem known as the travelling salesman problem [7]. Motion planning problems, both for mobile robots and for robot manipulators, can also be formulated as different types of optimization problems [8,9]. Low-level torque-based control of nonlinear models of robot manipulators has also been a matter of study for optimization, with recent approaches leveraging neural networks and sliding mode controllers applied to control manipulators with varying loads [10].
Here, we focus our work on the sensor-based control of industrial robotic manipulators, where commonly only position or velocity control of the robot joints is exposed to the programmer. Finding the right joint command for a position- or velocity-resolved robot manipulator can often be reduced to finding the inverse solution to the instantaneous velocity kinematics equation
v = J \dot{q} .    (1)
That is, one must find the joint-space velocity vector q ˙ that results in the desired Cartesian space velocity v of the robot end effector with respect to some reference frame, given the Jacobian J, which instantaneously defines a linear relationship between them.
Similarly, image-based visual-servoing applications [11] are, when deployed to control a robot in its joint space, defined by determining a feature Jacobian relating the robot joint velocities q ˙ to the image space velocity s ˙ of certain features with
\dot{s} = J_s \dot{q} .    (2)
Problems of this Ax = b form, like (1) and (2), have a unique, well-defined solution only when the Jacobian is square and full rank; that is, when the robot's number of joints and the dimensionality of the task space that the Jacobian maps to are equal (and the Jacobian is not singular). This unique solution is obtained using the matrix inverse to compute the joint velocities as
\dot{q} = J^{-1} v .    (3)
In the more general case, a solution may not exist, or it may exist but not be unique. In both cases, the Jacobian matrix inverse does not exist, and typically some sort of pseudoinverse, such as the Moore–Penrose pseudoinverse, is used. The solution given by this particular pseudoinverse is the minimum norm joint velocity vector q ˙ * that is also a solution to the minimization problem
\text{Find } \dot{q}^* \in \operatorname{argmin}_{\dot{q}} \, \| J \dot{q} - v \|^2 .    (4)
This pseudoinverse-based approach is used, for example, in the ViSP software framework [12] for implementing visual-servoing applications.
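As a concrete illustration, the minimum-norm solution of (4) can be computed directly with a numerical library's pseudoinverse routine. The following sketch uses NumPy with arbitrary illustrative values, not taken from any real robot model:

```python
import numpy as np

# A 2-D task space for a 3-joint (redundant) arm; values are illustrative.
J = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5]])
v = np.array([0.2, -0.1])          # desired task-space velocity

# Moore-Penrose pseudoinverse: minimum-norm least-squares solution to (4).
q_dot = np.linalg.pinv(J) @ v

# J has full row rank here, so the command reproduces v exactly; among all
# joint velocities that do so, q_dot is the one with the smallest norm.
assert np.allclose(J @ q_dot, v)
```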
Considering that (1) and (2) actually form a system of linear equations, where additional equations can be added to the system to further constrain the solution, it is easy to see that stacking more Jacobian matrices and desired velocity vectors results in a system of equations to find the joint space velocity that minimizes the combined error. That is, given J A and J B , which express the velocities of bodies A and B in relation to the robot’s joint velocities, and given the desired velocities for said bodies, v A and v B , the combination of both constraints can be written as
\begin{bmatrix} v_A \\ v_B \end{bmatrix} = \begin{bmatrix} J_A \\ J_B \end{bmatrix} \dot{q} .    (5)
The solution to (5) is then the minimum norm vector q ˙ * which satisfies
\text{Find } \dot{q}^* \in \operatorname{argmin}_{\dot{q}} \, \| J_A \dot{q} - v_A \|^2 + \| J_B \dot{q} - v_B \|^2 .    (6)
Here, the task space is the one formed by the combination of J A and J B , which map q ˙ to the 12-dimensional space of the combined velocities of both bodies. The desired velocity in that space is, therefore, the combination of the individual velocities v A and v B . In the more general case, N pairs of Jacobian matrices and target velocities J i , v i may be defined. When considered in the context of a minimization problem as (6), these pairs of Jacobian matrices and desired velocities can be viewed as constraints for the system in the velocity space.
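The stacking of (5) and its least-squares solution (6) amount to a few lines of linear algebra. A sketch with illustrative random data follows:

```python
import numpy as np

# Two equality constraints on a 4-joint arm, each in a 2-D task space;
# the Jacobian entries are random illustration data, not a robot model.
rng = np.random.default_rng(0)
J_A = rng.standard_normal((2, 4))
J_B = rng.standard_normal((2, 4))
v_A = np.array([0.1, 0.0])
v_B = np.array([0.0, -0.2])

J = np.vstack((J_A, J_B))          # stacked task Jacobian, as in (5)
v = np.concatenate((v_A, v_B))     # stacked desired task velocity

# Least-squares solution to the combined problem (6).
q_dot, *_ = np.linalg.lstsq(J, v, rcond=None)
```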
The Moore–Penrose pseudoinverse is well defined even at singular configurations, but the solutions it produces can result in arbitrarily high joint velocities. In order to reduce these near kinematic singularities, the damped pseudoinverse considers the joint velocity norm as part of the minimization criterion, turning the optimization problem into
\text{Find } \dot{q}^* \in \operatorname{argmin}_{\dot{q}} \, \| J \dot{q} - v \|^2 + \lambda \| \dot{q} \|^2 .    (7)
The solution to (7) minimizes all task space errors and joint velocities equally. By using weighting matrices as in [13], more importance can be given to errors in certain directions of the task velocity space and to individual joint velocity components.
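The damped problem (7) has the closed-form minimizer (J^T J + \lambda I)^{-1} J^T v. The sketch below, with illustrative values, shows how damping bounds the command near a singular configuration where the plain pseudoinverse produces very large joint velocities:

```python
import numpy as np

def damped_pinv_solve(J, v, lam=1e-2):
    """Closed-form minimizer of ||J q_dot - v||^2 + lam * ||q_dot||^2."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ v)

# A nearly singular 2x2 Jacobian (illustrative, not a robot model).
J_near_singular = np.array([[1.0, 1.0],
                            [1.0, 1.0 + 1e-6]])
v = np.array([1.0, -1.0])

q_plain = np.linalg.pinv(J_near_singular) @ v     # explodes near singularity
q_damped = damped_pinv_solve(J_near_singular, v)  # stays bounded

# Damping trades a small tracking error for a bounded joint command.
assert np.linalg.norm(q_damped) < np.linalg.norm(q_plain)
```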
Weighting constraints still means every constraint has an effect on the solution, even if its contribution is very small. Alternatively, strict priority between constraints can be enforced by sequentially solving each individual constraint (or set of constraints). Solutions to lower-priority constraints can be projected into the null space of higher-priority ones, as originally proposed in [14] and generalized in [2]. The general idea is that by using P = I - J^+ J (with the superscript + denoting a pseudoinverse here), the matrix projecting into the null space of J, one can compute a solution to a problem with a hierarchy of N constraints using
\dot{q}^*_N = \sum_{i=1}^{N} (J_i P_{i-1})^{+} (v_i - J_i \dot{q}^*_{i-1}) + P_N \dot{q}_{N+1} ,    (8)
with P_0 = I, \dot{q}^*_0 = 0, and P_i the projector into the null space of the augmented Jacobian
J^A_i = \begin{bmatrix} J_1 \\ J_2 \\ \vdots \\ J_i \end{bmatrix} .    (9)
This technique results in a least squares solution for each level of the hierarchy and ensures that lower-level constraints can only use the remaining degrees of freedom of the system without ever perturbing higher-level ones. A systematic way to apply this hierarchical constraint-based approach, combined with the estimation of geometrically uncertain parameters, is proposed in [3] and later extended in [15,16]. The iTaSC software framework [17] provides software support for this methodology on top of the Orocos-RTT software framework [18,19].
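The recursion of (8) and (9) can be sketched in a few lines. The helper below and its data are illustrative, not the actual implementation of any of the cited frameworks:

```python
import numpy as np

def hierarchical_solve(constraints):
    """Null-space hierarchy of (8)-(9); constraints is a list of (J_i, v_i)
    pairs ordered from highest to lowest priority."""
    n = constraints[0][0].shape[1]
    q_dot = np.zeros(n)
    P = np.eye(n)                       # P_0 = I
    J_aug = np.empty((0, n))
    for J_i, v_i in constraints:
        # Solve level i inside the null space of all higher-priority levels.
        q_dot = q_dot + np.linalg.pinv(J_i @ P) @ (v_i - J_i @ q_dot)
        # Update the projector from the augmented Jacobian (9).
        J_aug = np.vstack((J_aug, J_i))
        P = np.eye(n) - np.linalg.pinv(J_aug) @ J_aug
    return q_dot

# Two 2-D constraints on a 4-joint arm (illustrative random data).
rng = np.random.default_rng(1)
J1, v1 = rng.standard_normal((2, 4)), np.array([0.1, -0.1])
J2, v2 = rng.standard_normal((2, 4)), np.array([0.2, 0.0])
q_dot = hierarchical_solve([(J1, v1), (J2, v2)])

# The top-priority constraint is satisfied exactly; the lower level only
# uses the remaining degrees of freedom.
assert np.allclose(J1 @ q_dot, v1)
```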
The above examples have all shown equality constraints being applied to a system, where the desired velocity in each direction of the task space is exactly determined. However, certain tasks, like enforcing joint limits, are best described as inequality constraints, using formulations such as
\min_{x} \; \frac{1}{2} x^T Q x + c^T x \quad \text{subject to} \quad A x \leq b .    (10)
Problems of the form (10) can be solved with a technique known as quadratic programming (QP). These problems have traditionally been too costly to solve at the speed required by typical closed-loop control rates in the millisecond range. Recent developments [20,21] have, however, enabled solving certain instances of such problems, provided their complexity is not too high and certain optimizations are applied. Stack-of-Tasks [4,22] and eTaSL [5] both rely on this type of efficient QP solver to provide complete software frameworks that can be used to implement controllers. While both of them offer ways to obtain a certain level of ROS integration, the former uses its own custom framework, and the latter is based on Orocos-RTT, which makes them somewhat more cumbersome to integrate into a ROS ecosystem.
All of the cases described in this section share a common problem formulation. On one hand, a linear mapping between the control variables (joint space velocities \dot{q}) and some task space variables is defined by a Jacobian matrix J. On the other hand, certain constraints are defined for these task space variables, either as exact desired values (equality constraints) or as bounds (inequality constraints). Depending on whether the problem is defined by equality or inequality constraints, different methods can be used to find the optimal robot joint velocities that achieve the desired combination. The design of our proposed controller framework provides infrastructure to support both equality and inequality constraints, even though the currently provided solver implementations focus on the former.
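For illustration, a problem of the form (10) can be handed to a generic nonlinear solver. The sketch below uses SciPy's SLSQP method as a stand-in for the specialized real-time QP solvers cited above; the problem data are arbitrary illustration values:

```python
import numpy as np
from scipy.optimize import minimize

# Minimize 0.5 x^T Q x + c^T x subject to A x <= b, as in (10).
Q = np.eye(2)
c = np.array([-1.0, -1.0])       # unconstrained optimum would be [1, 1]
A = np.array([[1.0, 1.0]])
b = np.array([1.0])              # joint-limit-style bound: x0 + x1 <= 1

res = minimize(lambda x: 0.5 * x @ Q @ x + c @ x,
               x0=np.zeros(2),
               constraints=[{"type": "ineq", "fun": lambda x: b - A @ x}],
               method="SLSQP")
x_opt = res.x                    # constraint is active; by symmetry ~[0.5, 0.5]
```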

2.2. Ros_Control

The ROS middleware [23,24] provides tooling to build robotic software solutions as a collection of distributed processes, named nodes. Basic communication between nodes can be done via remote procedure calls, named services, or via a publisher–subscriber mechanism, named topics. Continuous data flow is typically implemented using said topics, which, by default, use a custom interface description language to specify message types. The messages are serialized and transmitted through network sockets from publishers to subscribers.
Running closed-loop controllers for dynamic applications typically requires rates in the hundreds of Hz to kHz range. The serialization and network transmission used by the publish–subscribe mechanism described above, together with the scheduling of the different processes/nodes by the operating system, introduces latencies incompatible with these rates. When ROS was originally being developed at Willow Garage, together with the PR2 robot, an ad hoc framework named pr2_mechanism [25] was developed, including a node that interfaced with the robot's hardware via an EtherCAT bus and could run different controllers in hard real time. These controllers were implemented following a common abstract C++ interface, compiled into dynamically loadable libraries, and instrumented to be used as runtime plugins.
Later, from an effort to generalize this concept to make it robot-agnostic, the ros_control [6] project was created. An abstract hardware_interface::RobotHW class is declared here to provide an abstraction of the actual robot hardware. ros_control-compliant ROS drivers for sensors, actuators, or complete robots implement this abstract interface and expose read or read+write software handles to the underlying hardware. The controller_manager is then responsible for managing the access of controllers to these handles, taking care that no conflict occurs with more than one controller trying to access a given read+write resource at the same time. Here, controllers are also plugins implementing a generic controller interface and communicate through the hardware by claiming the handles, which are requested by name from the hardware interface.
Similarly to controllers, the hardware abstractions themselves can also be implemented in plugin form, allowing a single node and controller manager to load multiple instances into a single running process. By configuring the system in this way, individual controllers can claim interfaces from multiple hardware elements, providing support for complex hardware setups via runtime composition.
The key to this infrastructure is that hardware abstractions and controllers all run in a single process, where the controller_manager takes care of scheduling read/write operations to the actual hardware and updates the controllers in a synchronized way. In this process, the handles, which the controllers use to interact with the hardware abstractions, are internally implemented using shared pointers. The deterministic scheduling and shared memory communication provide a level of performance that cannot be achieved in a multiprocess scenario using ROS communication primitives. The ros_control architecture offers this performant execution model while maintaining a great degree of runtime flexibility, where controllers can be loaded, started, stopped, and unloaded at runtime. The ROS interfaces provided for this are handled by the aforementioned infrastructure in non-real-time threads, whereas a real-time thread is reserved for the read/update/write loop, where running controllers are processed. For controllers declaring ROS interfaces themselves (e.g., to receive high-level set points), the realtime_tools library provides a collection of tools to communicate between the core update loop and the threads interacting with the ROS ecosystem. By building on a collection of available hardware abstraction and controller plugins, which make use of pluginlib [26] to provide dynamic lookup and loading capabilities, a great degree of flexibility is provided to application developers.
Under ROS, once a given controller is implemented, configuring it to run on a given ros_control robot is typically achieved in two steps. First, the kinematic model of the robot, described using the Unified Robot Description Format (URDF), is made available on the ROS parameter server, usually under the robot_description key. Second, the configuration for the specific controller to use is made available on the parameter server, which usually involves loading it from a single text file, encoded using YAML syntax, using the ROS-native tools. Designing complex control architectures is thus turned into selecting the plugins to load and configuring them, without the need to write custom code. This feature, plus the broad community of ROS users and contributors, makes it a highly suitable foundation for the declarative framework for constructing constraint-based controllers to be developed here.
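As an illustration of this second step, a typical ros_control configuration file might look like the following (the controller and joint names are hypothetical; the parameter layout follows the conventions of the ros_controllers packages):

```yaml
# Loaded onto the ROS parameter server, e.g. with: rosparam load controllers.yaml
arm_trajectory_controller:
  type: velocity_controllers/JointTrajectoryController
  joints:
    - shoulder_joint
    - elbow_joint
    - wrist_joint
  constraints:
    goal_time: 0.5          # tolerance on trajectory end time, in seconds
  state_publish_rate: 125   # Hz
```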

2.3. Positioning of Proposed Solution

As discussed in previous sections, multiple strategies exist in the literature to define optimization-based control laws. These techniques have great expressive power since they allow expressing each objective in the task-specific coordinates where it is most naturally described. Existing frameworks providing software support for these techniques are highly sophisticated and can handle a wide range of scenarios. With respect to our stated goal of providing a simple, declarative method for defining complex controllers based on constraints, we submit that our novel formulation for constructing complex constraint-based controllers adds new benefits in the following ways:
  • In Stack-of-Tasks, the intended way to define controllers is by composing a dynamic task graph using the provided Python bindings. The usage of Python code to define the controller structure provides a very high level of flexibility, but its open nature makes it more difficult to understand than other declarative formats. Furthermore, specific support packages must be implemented for each individual robot, and integration with ros_control is only possible indirectly through these handcrafted packages. Additionally, there is no clear documentation on how to add support for new robots other than browsing existing support packages.
  • The eTaSL framework is built on the concept of modelling control tasks as expression graphs defining the robot structure and task model using the Lua scripting language. Tools to deploy tasks defined this way are provided for robots with a compatible driver on the Orocos-RTT middleware. Means to integrate with ROS are only provided on top of topics, which are not well suited for real-time control. Some third-party direct integration with ros_control is available, but at the time of writing, it appears to be no longer maintained.
  • Existing ros_control controllers provide functionalities with a narrower focus. The ros_controllers collection [27], maintained by the same organization in charge of the ros_control framework itself, contains some of the most widely used controllers in this framework. For example, the joint_trajectory_controller, oriented towards trajectory tracking, is very often used in combination with motion planning algorithms to follow precomputed collision-free trajectories. The cartesian_controllers suite [28] implements a collection of controllers for motion and force control applications defined in Cartesian space coordinates; however, they can only handle tasks described in terms of a single TCP.
In addition, while the available constraint-based control frameworks are very powerful and expressive for building task-oriented controllers for very complex robot platforms, the need to build framework-specific drivers for each robot or to adopt additional middleware hinders their adoption by the large ROS/ros_control community. The number of robot drivers provided and maintained by this community is continuously growing, and it is desirable for any new control framework to facilitate integration into this ecosystem.
The objective of this work is, therefore, to fill the gap that exists between these families of solutions: to provide a framework that adheres to ROS/ros_control standard practices while implementing the rich set of control strategies that can be formulated using the constraint-based methodology. To achieve this objective, the present work focuses on
  • defining a specification for how to describe the structure of the desired constraint-based controller; 
  • providing a system to produce an executable ros_control controller from an instance of said specification.

3. Constraint-Based Ros_Controller Framework

The solution is implemented in the form of a generic, configurable controller. This generic controller is designed to solve the problem of finding optimal robot velocity commands that respect a series of constraints, each expressed as a pair of a Jacobian matrix J and a target velocity v. The Jacobian defines the mapping from the robot's joint velocity space to the space in which the constraint is defined, and the target velocity is the desired velocity in said space.
The controller must also be configured with a solver that attempts to find an optimal robot joint velocity respecting the constraints as defined above. The formulation of the constraints enables using solvers of the types discussed in Section 2.1. A family of pseudoinverse-based solvers is currently implemented, and the plugin-based architecture, further detailed in Section 3.2.2, allows controller designers to choose from the available solvers or implement their own.
Upon loading the proposed configurable controller using the standard ros_control mechanism, the controller reads its individual specification from the parameter server and configures itself accordingly. The specification format, which is discussed in Section 3.1, implicitly describes the structure of a computational graph that calculates the commands to apply to the robot joints so that the declared constraints are satisfied. Following this specification, the controller leverages the tools described in Section 3.2 to build the runtime structures needed to update the computational graph at every update cycle.
Although highly configurable, an individual instance of the configurable controller has a static structure once the controller has been loaded. The ability to switch between different behaviors is provided by the capability of the ros_control infrastructure, in particular its controller_manager component, to load, start, stop, and unload multiple controllers at runtime. The controller_manager loads and configures new controllers in threads separate from the ones used to run the update cycle of the running controllers. Implementations of ros_control robot drivers and generic control nodes leverage this structure to set up higher priority threads that update the controllers undisturbed by the potentially blocking controller initialization and configuration logic. Switching the active controllers for new ones is also designed to be executed in a non-blocking way, synchronized with the main update loop. The configurable controller proposed here adheres to this workflow by performing the potentially nondeterministic initialization of the computational graph off the critical real-time threads.
The next two subsections describe the designed format for specifying the controllers in the first place and then some relevant implementation details.

3.1. Specification Design

A generic controller that can solve optimization problems of the form presented in the previous section cannot be modeled with a rigid structure. In particular, each controller specification may present an arbitrary number of constraints. Each constraint, in turn, may require an arbitrary number of input signals of arbitrary types, some of which may require custom processing for filtering, derivation, etc. This inherently defines a graph-like structure, which is handled in different ways by the existing frameworks implementing similar characteristics to the present one.
In eTaSL, this structure is modeled as a graph of symbolic mathematical expressions. Explicitly modelling the expressions provides the ability to manipulate them to, for instance, automatically obtain derivatives. Controllers in iTaSC are built out of computational blocks exposing input/output ports to handle data flow. The components themselves and the interconnection of their ports embed the graph structure.
The present work uses the latter approach, using a computational graph structure to logically conceptualize the controller structure. Nodes in this graph represent individual computations, and edges represent the flow of information between them. The implementation uses a high-performance parallel execution model underneath, described with additional detail in Section 3.2.1, which pairs explicit declarative definition of computational graphs, together with efficient parallel execution.
With reusability and encapsulation in mind, a hierarchical structure is used in this framework, where nodes in the graph can be atomic computations or composite subgraphs acting as nodes for outer levels of the hierarchy. For generality and unless otherwise specified, the term graph is used in the rest of this description to refer to either actual graphs of nodes or to individual nodes themselves. The controller itself is implemented as a composite graph, composed of many specialized graphs for, e.g., processing sensor data, computing the constraint Jacobians and target velocities, solving the optimization problem, etc. In order to pass data between the graphs inside a controller, each of them declares named ports, which can then be connected to one another. Instead of bundling a fixed set of graphs together with the controller, the framework also loads the individual graphs that make up a controller as plugins themselves to be dynamically loaded on demand. Figure 1 represents a very simplified structure for a controller, where each block represents one of the aforementioned graphs in the controller structure.
Providing a collection of prebuilt, control-oriented computational blocks, together with the ability to load them and connect their ports, could suffice to describe arbitrary controller structures. In that case, a controller specification would consist of a list of graph components and another list with the connections between them. That would be a very flexible format, accommodating a wide range of computational structures. This flexibility would, however, require users to build every controller from the ground up, defining details that a more sophisticated, albeit more restrictive, system could automate on their behalf. Additionally, in such an open-ended structure, the role of each individual component in the graph would be difficult to ascertain without deep knowledge of its implementation internals.
Our framework instead favors a specification format where the individual graphs in a controller are declared with semantically specialized types. These types are parsed from the YAML text purely on the basis of conventions on each component’s input and output port types and names. For instance, blocks for constraints must declare output ports named jacobian and taskvel, with matrix and vector types. Likewise, blocks for solvers must declare a configurable number of input port pairs named jacobian_i and taskvel_i, one for each of the constraints in a given controller. These conventions indicate the role each graph plays in the controller structure and allow our framework to interconnect the graphs automatically. The use of these specialized types results, in turn, in a more concise and expressive controller specification language.
The remainder of this section, therefore, enumerates the types of elements built into the controller specification format, providing an overview of the capabilities of the framework. First, some general concepts of the format chosen for the specification language are introduced. Then, the definition of the most basic building blocks for a controller, namely the constraints and the solver, is explained. It is followed by the description of a series of support functionalities that may be required depending on the scenario, e.g., defining custom I/O for the controller with optional filtering, defining controller-specific feature frames, or generating visual representations of the controller computational graph for debugging purposes.

3.1.1. Controller Specification Syntax

Following standard practices in the ROS pluginlib ecosystem, ros_control controller configurations for an application are declared by loading parameters to the ROS parameter server. Data is stored there using key/value mappings, where each value can consist of either a scalar, an ordered sequence, or a nested key/value mapping itself. When loading a new controller, the tooling in ros_control looks up a key matching the name of the controller and reads the necessary configuration from its associated mapping.
To support loading complex application configurations to the parameter server, the ROS ecosystem provides the tools to declare them using YAML syntax. This syntax allows one to define configuration data in a key/value format using a convenient human-readable form, as shown in Listing 1, and to directly upload that data to the ROS parameter server for applications to consume.
Listing 1. YAML syntax for a key/value data structure containing three keys, associated with a scalar value, a sequence, and a nested structure, respectively. The colon (:) symbol is used there to separate keys (displayed in black in listings) from values. Scalar values of any type are displayed in blue font except for sequences, which use indentation and a dash (-) character preceding each item (which are also shown in black). Indentation before a nested key defines its nesting level for key/value structures nested under outer keys. Comments can be added after a hash (#) character and are displayed in red text in the listings. A more exhaustive description of the YAML syntax can be found in [29].
[Listing 1 image]
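From the caption’s description, the structure of Listing 1 can be sketched as follows; the key names and values here are purely illustrative and not taken from the original listing:

```yaml
# A comment introducing the structure
scalar_key: 3.14        # a key associated with a scalar value
sequence_key:           # a key associated with an ordered sequence
  - first_item
  - second_item
nested_key:             # a key associated with a nested key/value mapping
  inner_key: inner_value
```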
In order to integrate seamlessly into the ROS and ros_control ecosystem, it is desirable to leverage this background knowledge that is expected from ROS developers. Therefore, the controller specification language for our proposed framework consists of a specific structure for these key/value maps in the ROS parameter server. Examples of controller specifications in the remainder of this document are presented using the same YAML syntax above, which a ROS-based application can use to configure the controllers at runtime.
The contents of the YAML specification of controllers in ros_control are typically uploaded to the parameter server by the tooling that is part of the roslaunch system, which is used to bring up ROS nodes and configure them as necessary. Syntax errors in the parsed YAML files are identified by roslaunch, which halts the startup of the complete system. When a syntactically correct YAML configuration contains a controller description that fails to comply with the required specification, our controller framework detects this at the time of configuring the controller. The ros_control infrastructure propagates the problem to the client requesting the controller to be loaded, which can then react appropriately.

3.1.2. Core Controller Definition: Constraints and Solver

Strictly speaking, there are only two required elements to define a controller in the system discussed here: a set of constraints and a solver. The constraints determine the conditions that must be fulfilled in some task-specific space, and the solver is used to find the robot joint-space command that best satisfies the given constraints.
As described in the introduction, this work currently focuses on modelling constraints at the velocity level. These take the form of a pair of elements composed of a constraint Jacobian together with a constraint-space velocity target. A constraint graph that follows the specification must, therefore, declare output ports for both of these elements. Likewise, a solver graph must declare as many pairs of Jacobian and desired-velocity input ports as there are constraints defined for the given controller. An arbitrary number of constraints can be added to a controller definition, and their output is automatically connected by the framework to the inputs of the solver. An example of a minimal controller is shown in Figure 2.
Here, each of the N constraints receives state information from the robot and generates its Jacobian and target velocity data. The solver on their right receives the output of all the constraints and computes an optimal (according to its configuration) joint-velocity command, which is sent back to the robot. The configuration in Listing 2 below declares a controller matching this structure with only two constraints, named dual_arm_twist_controller.
Listing 2. dual_arm_twist_controller definition. A controller with multiple constraints. Definition of the details of each constraint is left out for clarity.
[Listing 2 image]
The first line in Listing 2 starts a key/value map with the name of the controller, here dual_arm_twist_controller. This name is used to refer to this specific controller configuration when working with the ros_control interfaces to load, start, and stop it. The type entry, required by ros_control, declares it to be of the configurable constraint-based controller type presented here and allows the driver to load the controller from its dynamic library. Two variants of the controller exist, each supporting different joint-command interface handle types that can be exposed by the robot hardware abstraction. The current example uses ConfigurableConstraintVelocityController, which expects joint interfaces to accept velocity commands. A corresponding version for position-resolved joint interfaces can be selected by defining the type to be ConfigurableConstraintPositionController. If the latter is used, the optimal joint velocities obtained from the solver are automatically integrated into a position command.
The rest of the contents in the configuration are part of the selected specification format for the configurable constraint-based controller defined in this document. In this minimal example, this consists of the two other top-level keys inside the controller configuration map. The constraint_names key lists the names of the constraints that this controller must apply—here left_arm_twist_constraint and right_arm_twist_constraint. Likewise, damped_pseudoinverse_solver is the name of the solver to use, declared with the solver_name key.
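Putting together the keys named in the text, the configuration in Listing 2 reads approximately as follows; the exact form of the type value (e.g., whether it carries a package-name prefix) is an assumption:

```yaml
dual_arm_twist_controller:
  # possibly namespaced, e.g., <package>/ConfigurableConstraintVelocityController
  type: ConfigurableConstraintVelocityController
  constraint_names:
    - left_arm_twist_constraint
    - right_arm_twist_constraint
  solver_name: damped_pseudoinverse_solver
```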
Because components are assigned specialized roles in the controller structure, all data flowing between the constraints and the solver in this example is implicitly determined. At controller initialization, every constraint’s Jacobian and target velocity output ports are automatically connected to the solver’s matching input ports, without the user needing to specify these connections manually. Also, since the framework proposed here is mainly designed to generate controllers for articulated robot structures, logic to automatically handle robot state and command information is also in place. The robot state is provided to every component that requires it by connections created automatically at controller initialization. The optimal joint-velocity command found by the solver is also connected automatically to the joint command interfaces (after integration if the underlying joints are position-driven). The state/command data flowing from/to the robot is illustrated in Figure 2 and others by the gray blocks containing a robot icon at the left and right of the graph. Note that these icons are duplicated to improve graphical layout and figure compactness, but both represent the same robot being controlled.
Combining the output of multiple constraints into a single optimization problem for the solver requires a common structure for the joint space representation. This structure is automatically obtained by the controller from a common URDF model and used to configure two components on the graph that output the state of the robot joints and that receive the commands in every update cycle of the controller.
Details of how the constraint and solver graphs are configured are omitted in Listing 2 for clarity. The constraints and solver, referred to by name in the controller specification, are defined in a similar way to the controller itself. Listing 3 shows how a complete configuration may look.
Listing 3. dual_arm_twist_controller definition including constraint and solver details.
[Listing 3 image]
Note that the constraints themselves are defined here in separate mappings at the same level as the controller itself, not nested inside it. This design allows individual graph definitions—constraints, solvers, or other types of graphs—to be reused by multiple controllers if needed. Both the constraint and solver graph descriptions contain the graph_type entry, telling the controller which graph implementations to load at initialization. Similar to the controller itself, in addition to the type of graph, these graph-specific mappings should contain any additional parameters required by the specific graph being loaded.
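The graph definitions referenced by name might look like the following sketch; the graph_type values and the graph-specific parameters shown here are assumptions, since the text does not reproduce them:

```yaml
left_arm_twist_constraint:
  graph_type: CartesianTwistConstraint    # assumed type name
  # ...constraint-specific parameters (frames, gains) would go here

right_arm_twist_constraint:
  graph_type: CartesianTwistConstraint    # assumed type name

damped_pseudoinverse_solver:
  graph_type: DampedPseudoinverseSolver   # assumed type name
  damping: 0.01                           # assumed parameter
```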
The structure of the graph configuration is similar to that of the controller because the graphs themselves are also implemented as plugins in the framework. Defining the subgraphs used to compose the controller as plugins means that users of the framework can implement and use their own graphs in the controller. However, a collection of reusable graph plugins is also implemented and provided alongside the configurable controller itself, which includes a set of common constraints and solvers.
The provided constraints fall into two families: joint-space and Cartesian-space constraints. The joint-space ones can be used to define the behavior of a subset of joints in the robot, where the Jacobian is simply a selection matrix composed of suitably located ones and zeros. The provided Cartesian-space constraints can be used to control the relative velocity of any link in the robot scene with respect to any other. Both joint- and Cartesian-space constraints provided with this system include a common set of strategies to compute the target velocity in their respective constraint space:
  • Hold initial position/relative pose.
  • Move at a desired velocity (pure velocity feedforward).
  • Track desired position + velocity (using proportional + derivative control with the addition of a velocity feedforward component).
Since active compliance was one of the principal use cases for which this framework was developed, the Cartesian-space constraints also offer compliant versions of all three control strategies above. When using these, sensed forces are used to adjust the motion of the robot according to an admittance control law, resulting in an actively compliant behavior even for rigid manipulators.
The solvers provided off-the-shelf are all based on the pseudoinverse-based solutions discussed before and include the following:
  • Moore–Penrose pseudoinverse.
  • Damped pseudoinverse.
  • Weighted damped pseudoinverse.
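As a reference, the standard closed forms behind these three solvers, for a stacked constraint Jacobian $J$ and target constraint-space velocity $\dot{x}_{d}$, are the following; the exact damping and weighting conventions used inside the framework may differ:

```latex
% Moore–Penrose pseudoinverse solution
\dot{q}^{*} = J^{+}\,\dot{x}_{d}

% Damped pseudoinverse (damping factor \lambda)
\dot{q}^{*} = J^{\top}\!\left(J J^{\top} + \lambda^{2} I\right)^{-1}\dot{x}_{d}

% Weighted damped pseudoinverse (joint-space weight matrix W)
\dot{q}^{*} = W^{-1} J^{\top}\!\left(J W^{-1} J^{\top} + \lambda^{2} I\right)^{-1}\dot{x}_{d}
```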

3.1.3. Inputs and Outputs

The example in the previous section omits an important part of the controller: the definition of the input data sources. In its current design, certain common connections between the graphs in the controller are automatically established by the framework itself. Robot state data and timing information are fed into the constraint graphs’ input ports automatically, the constraints’ output ports are automatically connected to the solver’s input ports, and the optimal command is automatically sent to the robot by reading from the solver graph’s appropriate output port.
Often, closed-loop controllers require additional sensor information to compute the robot commands. For instance, wrench measurements from a force-torque sensor may be required as an input by a compliant constraint. Likewise, a controller may need to output more than just a joint position/velocity command to the robot. In a teleoperation setup, a force feedback signal may be computed and reflected to the user through a haptic interface.
To achieve this, this framework proposes a way to forward data in both directions: from external sources into the graph inputs, using configurable inputs, and from graph outputs to external destinations, using configurable outputs. The configurable inputs read external data from one of the available sources and forward it to one of the graphs’ input ports, whereas the configurable outputs forward data from one of the graphs’ output ports to the desired external destination.
These inputs and outputs are configured while designing the controller and can read from and write to different communication channels using a variety of data types. The available channel types are each suited to different use cases. For data streams, two channels can serve as either a source or a destination for inputs or outputs, respectively: ros_control hardware interface handles and ROS topics. Exclusively for inputs, and for scenarios where data does not change very often, the controller can also provide setter ROS service servers or even read the input value from constants embedded in the controller configuration itself. Table 1 and Table 2 summarize the formats currently available as part of the configurable input and output systems, respectively.
Note that the types of inputs and outputs that read from or write to ros_control handles are more limited than the other types. This reflects the limited types of hardware interfaces used by the hardware abstractions available in the community. Since joint state and command interfaces are automatically handled by the framework without using this configurable input/output system, this only leaves some very specific types of Cartesian interfaces provided by some industrial robot drivers, haptic devices, and force-torque sensors, which are all covered by the types in these tables.
An important consideration in the handling of input signals is that data from the inputs is sampled at the start of the controller update logic and held constant for the duration of the update. For constant values, this is trivially true. For inputs that consume data from a ros_control hardware interface handle, the requirement is ensured by the overall read/update/write structure: data from the hardware handles is refreshed, the controllers are updated, and then the commands are written back to the hardware. For inputs consuming information from the outer ROS system, the implementation uses separate threads to process the input data and only synchronizes with the rest of the control graph through mechanisms from the ros_control-provided realtime_tools library, which ensures lock-free read access to the underlying shared memory. This means that the update logic of an individual controller always runs with the most recent data provided by its inputs, irrespective of how frequently that data is produced.
An illustrative example of using the configurable input/output system could be that of a teleoperation controller commanding the movement of a robot’s end effector to follow an external velocity (twist) command from a joystick. This example assumes the joystick exposes its Cartesian twist command using a ros_control-compatible handle. Additionally, it is also assumed that the Cartesian-space tracking constraint provides the tracking error in an output port and that the error signal must be forwarded to a ROS topic for monitoring purposes.
The resulting controller is illustrated in the schematic shown in Figure 3. In addition to the automatic connections to the robot joint interfaces discussed before, this figure shows the input command from the joystick and the output of the error to a ROS topic.
Listing 4 shows how the configuration for such a controller would look. Here, the first lines declare the type of the controller and the constraint and solver names (their details are omitted here, but assume they are defined elsewhere). In particular, assume left_arm_twist_constraint is a type of constraint with the input/output ports described above.
Listing 4. twist_tracking_controller definition, using the configurable I/O to read an external command and broadcast the tracking error.
[Listing 4 image]
Following the constraint and solver names, input_names and output_names each declare a list of configurable inputs and outputs forwarded by this controller. These lists only contain the names of the input and output port forwarding rules. Their detailed definitions, also in the controller configuration, must define the ports to forward into or to read from and the type of forwarding rule (from Table 1 or Table 2). Specific input or output types require additional type-specific parameters.
The definition of left_arm_target_twist declares its input_type to be twist_from_handle, which means twist data is read from a ros_control hardware handle (other available types are shown in Table 1). Therefore, in addition to the port into which this data is forwarded, it must also specify which ros_control handle to read the data from. This is provided by specifying the handle name using the input_handle key. Finally, the port_names property determines which graph input this data is forwarded into. In this custom notation, the dot (.) character refers to nested elements: here, an input port named target_twist in the left_arm_twist_constraint graph. Note that, unlike output port forwardings, a given input signal may be forwarded to more than one port in the control graph, hence the plural in port_names.
The definition of the actual_taskvel output forwarding declares its output_type to forward vector data (since this is not technically a twist but a stacking of both left and right twist vectors) using a rule of the ...to_topic family. Similarly to the input_handle above, forwarding an output port into a ROS topic requires output_topic to determine the name of the topic. Here, the port_name value determines from which graph port in the controller the data must be read and sent to the configured topic: an actual_taskvel_projection port exposed by the solver component.
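Collecting the keys described above, Listing 4 might read approximately as follows; the handle name, topic name, the exact output_type identifier, and the controller type prefix are all assumptions:

```yaml
twist_tracking_controller:
  type: ConfigurableConstraintVelocityController
  constraint_names:
    - left_arm_twist_constraint
  solver_name: damped_pseudoinverse_solver
  input_names:
    - left_arm_target_twist
  output_names:
    - actual_taskvel
  left_arm_target_twist:
    input_type: twist_from_handle
    input_handle: joystick_twist            # assumed handle name
    port_names:
      - left_arm_twist_constraint.target_twist
  actual_taskvel:
    output_type: vector_to_topic            # assumed exact type name
    output_topic: /tracking_error           # assumed topic name
    port_name: damped_pseudoinverse_solver.actual_taskvel_projection
```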
The implementation of the input and output port forwarding layer is decoupled from the rest of the control graph generation by implementing each configurable input/output forwarding as a specialized graph element. Custom logic is used to construct them independently from the rest of the graph and to insert them in the controller graph, where they are updated synchronously with the controller execution. This input and output forwarding capability of the controller infrastructure is particularly relevant for future reusability because it belongs to the boundary between the internal core graph structure computing the control law and other parts of the ROS ecosystem. By design, the implementation of the internal control law has been made ROS-agnostic and should be easy to port to other environments.

3.1.4. I/O Filters

One common scenario when dealing with sensor data is the need to apply different varieties of filters. For instance, a signal coming from a wrist-mounted force-torque sensor may need to go through a series of filters before being used in an admittance control constraint: an offset to compensate for temperature-based drift, followed by a compensation for the load of the robot gripper.
The input and output system introduced in the previous section also provides a means to put the data through filter chains. Filters can be arbitrary graph components and are also loaded from the same general plugin type as constraints and solver graphs. They must provide at least one input and one output port with the right names to allow the system to automatically insert them between the desired input or output and the desired port(s) in the control graph. The optional filter_names entry in the input and output definitions can contain an ordered list of the filters to be applied to the input or output signal. These filters are connected in series between the input or output graph ports and the external sources from which the forwarded data is read or written.
Figure 4 shows an example of a controller where data from a force-torque sensor is forwarded into an input port in a compliant leadthrough constraint. In this example, the data goes through two filters before being input into the constraint: first, a fixed offset is applied, and then the gravitational forces from a given load are compensated.
The example in Listing 5 shows how a controller with the structure of Figure 4 would be configured. It shows a controller named leadtrough_controller declaring a solver and a single constraint, the details of which are omitted. It then declares one input named sensor_wrench, which is ultimately connected to a port named wrench in the leadthrough_constraint graph. In addition to the entries already shown for the configurable input in the last section, this time filter_names is used to list the filter graphs through which the data is passed before being fed into the constraint graph. The two filter graphs, whose configurations are defined below, are automatically loaded into the controller and connected in the specified order between the force-torque signal source and the constraint’s input port, resulting in a topology like the one in Figure 4.
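A hedged sketch of the configuration described for Listing 5 follows; the input_type and graph_type identifiers, handle name, and filter parameters are assumptions:

```yaml
leadtrough_controller:
  type: ConfigurableConstraintVelocityController
  constraint_names:
    - leadthrough_constraint
  solver_name: damped_pseudoinverse_solver
  input_names:
    - sensor_wrench
  sensor_wrench:
    input_type: wrench_from_handle          # assumed type name
    input_handle: wrist_ft_sensor           # assumed handle name
    port_names:
      - leadthrough_constraint.wrench
    filter_names:                           # applied in order
      - wrench_offset_filter
      - load_compensation_filter

wrench_offset_filter:
  graph_type: WrenchOffsetFilter            # assumed type name
  offset: [0.0, 0.0, 1.5, 0.0, 0.0, 0.0]    # assumed parameter

load_compensation_filter:
  graph_type: LoadGravityCompensation       # assumed type name
  load_mass: 0.8                            # assumed parameter
```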
Listing 5. leadtrough_controller definition, which filters the signal coming from a force-torque sensor through two filters in sequence. Constraint and solver graph definitions are omitted.
[Listing 5 image]

3.1.5. Constraint Transformers

In order to avoid unnecessary multiplicity of constraint implementations, the current system supports a feature named constraint transformers. These are graphs to be inserted in the control graph structure, which act in a filter-like way on the outputs from the desired constraints. That is, when a transformer graph is defined for a given constraint, the Jacobian and target velocity outputs from the constraint are passed through this graph before being fed to the solver.
Possible use cases for transformer graphs are many. One of the offered transformers can be used to apply coordinate transformations to the Jacobian and/or target velocities before inputting to the solver. This allows one to effectively change the reference systems to which they refer and alters the behavior of the constraint—for instance, to track a twist command as if expressed in some static frame or as if expressed in local coordinates of a moving frame. Another use case is to implement constraint-space velocity limits to the signals before entering the solver.
One particular use case that provides great flexibility when using this type of Jacobian-based constraint formulation is the use of transformers to select specific subsets of directions in which the constraint must take effect. Note that the Jacobian matrix defining a constraint in this framework represents a list of linear equations mapping a given space, namely the joint velocity space, to some other constraint-specific space. By selecting specific rows of the matrix while discarding the rest, a constraint will only apply in certain directions of its constraint space.
For instance, when describing the relative Cartesian space velocity (twist) between two bodies, the six rows of the Jacobian matrix can be divided into two blocks representing the three components of the relative translational velocity and the three components of the relative angular velocity. By selecting the top three rows of this Jacobian, only the relative translation between the two bodies is constrained, leaving the rotation free; likewise, by selecting the bottom three rows, only the relative orientation is constrained. Other types of selections are also possible by picking different sets of rows. For instance, by selecting only one of the translational components and controlling its value to remain constant, a point in the second body is constrained in a given plane with respect to the first.
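In matrix form, this row selection amounts to pre-multiplying the constraint by a selection matrix; for the translational part of a six-dimensional Cartesian twist constraint:

```latex
S = \begin{bmatrix} I_{3} & 0_{3\times 3} \end{bmatrix}, \qquad
J_{\mathrm{sel}} = S\,J \in \mathbb{R}^{3\times n}, \qquad
\dot{x}_{\mathrm{sel}} = S\,\dot{x}_{d},
```

so that the solver receives only the three translational constraint equations $J_{\mathrm{sel}}\,\dot{q} = \dot{x}_{\mathrm{sel}}$, leaving the rotational degrees of freedom unconstrained.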
This method provides great flexibility in achieving different behaviors. Very importantly, it also allows defining controllers that constrain only the degrees of freedom required by a given task. By doing so, tasks that would otherwise be unfeasible because of unnecessary overconstraining can become feasible, and previously feasible tasks can be augmented with secondary task objectives, which would not be possible otherwise.
Figure 5 shows the location on the control graph where this component is inserted. In this figure, a transformer labelled position selector is inserted between the twist constraint and the damped solver graphs. As annotated under the arrows representing data connections, the constraint, which acts in Cartesian space, outputs a six-dimensional Jacobian and target velocity. After the selection is applied, constraints for only three of the degrees of freedom are input into the solver.
Listing 6 shows how a constraint transformer graph is included in the controller description. As in previous examples, controller configuration starts by defining the solver and a single constraint in the usual way. The constraint_transformers feature then contains a nested mapping, where entries can be added listing transformers to be applied to each constraint. An entry for a given constraint can contain a list of transformer graph names to be applied sequentially.
In this example, a single transformer graph named position_selector is defined for the twist_constraint graph. The transformer is then defined like any other graph outside of the body of the controller. In this case, the selector is configured with a selection_mask indicating which rows in the constraint must be selected for the solver. The result of selecting the first three components is that only the translational velocity is actually constrained, with the rotational velocity being left unspecified.
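Based on this description, Listing 6 might read approximately as follows; the graph_type identifier and the exact format of selection_mask are assumptions:

```yaml
trans_vel_controller:
  type: ConfigurableConstraintVelocityController
  constraint_names:
    - twist_constraint
  solver_name: damped_pseudoinverse_solver
  constraint_transformers:
    twist_constraint:              # transformers applied to this constraint
      - position_selector

position_selector:
  graph_type: RowSelector                   # assumed type name
  selection_mask: [1, 1, 1, 0, 0, 0]        # assumed format: keep translational rows
```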
Listing 6. trans_vel_controller definition, applying a transformer (position_selector) to a constraint (twist_constraint). Constraint and solver graph definitions are omitted.
[Listing 6 image]

3.1.6. Scene Model and Additional Frames

When discussing the constraint and solver formulation, the need to share a common representation of the state and command space has been mentioned. This shared representation necessarily implies a shared scene kinematic model, which relates the configuration state and command space with the motion of the different links in the robot. Since one of the objectives of this work is to provide a ros_control-native implementation of the controller specification described here, it is assumed that a global model of the robot and the static scene around it is available in the form of a URDF description. This global model is made available to every element in the control graph through an interface, which is later described in Section 3.2.3.
However, sometimes task-specific frames can be required, which may not belong in the URDF model, since this model is typically static and contains only the robot itself and parts of the scene that do not change. For example, a perception system can be used to detect objects that are not known beforehand and therefore are not part of the scene represented in the URDF model. Also, after the robot picks up an object, additional feature frames of said object may be attached to the robot flange to be referred to by the controllers. In both examples, these task-specific frames are not part of the robot’s kinematic model itself but are nevertheless relevant for some specific controller. A system to define controller-specific feature frames rigidly attached to arbitrary links in the URDF scene model is thus made available in this controller specification format. Listing 7 shows how this can be achieved by using the additional_frames entry, which can contain a list of feature frames listing which link they are attached to and the relative transform from them.
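Based on the caption of Listing 7, a sketch of such a configuration follows; the key names inside each additional_frames entry and the transform format are assumptions:

```yaml
feature_frame_controller:
  type: ConfigurableConstraintVelocityController
  constraint_names:
    - feature_twist_constraint
  solver_name: damped_pseudoinverse_solver
  additional_frames:
    - frame_name: grasped_object            # assumed key names
      parent_link: robot_end_effector
      position: [0.0, 0.0, 0.12]            # assumed transform format
      orientation: [0.0, 0.0, 0.0, 1.0]     # assumed quaternion convention

feature_twist_constraint:
  graph_type: CartesianTwistConstraint      # assumed type name
  tip_frame: grasped_object                 # may refer to the additional frame
```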
Listing 7. feature_frame_controller definition, declaring a frame named grasped_object attached to robot_end_effector. Note that feature_twist_constraint can then directly refer to this additional frame. Solver graph definition is omitted.
[Listing 7 image]
Note that the task-specific frames defined using this functionality are statically defined, in a topological sense, for the lifetime of a given controller. That means that in the tree-like representation of the kinematic scene, the parent frame of these task-specific frames cannot be modified, only their relative transformation. A different scene topology requires a new controller to be defined, where each additional frame can be declared from scratch to be a child of a different parent frame.

3.1.7. Custom Graphs and Connections

In addition to the semantically specialized graph types described above, there are cases where computational elements may not clearly fit into any of these categories. Also, graphs that do belong to one of the categories above may require connections in addition to the ones automatically established by the described logic. In order to accommodate more general graph topologies in a controller, support for custom_graphs and custom_connections elements is also offered. The custom_graphs element may contain a list of graph names to be added to the controller and for which no automatic connections are made. Pairs of ports labelled source and destination can be added to a list under custom_connections to connect those ports. By using this functionality, arbitrary graph plugins can be inserted into the controller, and connections can be made between any pair of ports with matching types. In the example illustrated by Figure 6, a custom trapezoidal velocity profile generator graph is displayed on the top left of the diagram. A connection is shown between it and the single constraint, representing the output of the profile being used as a target position for the robot’s end effector.
In order to add arbitrary graphs and establish connections between their ports, a configuration like the one shown in Listing 8 can be used, which would generate a controller like the one shown in Figure 6. There, after the declaration of the constraint and solver graphs to use, the custom_graphs entry is used to instantiate the profile generator graph. Similar to other cases, the details of which type of graph to load and the specific configuration it requires are described outside the scope of the controller configuration itself. In addition to instantiating the profile generator, the custom_connections key is used to list connections between arbitrary pairs of ports in the graph. This specific example lists one connection from the output of the motion profile to the input of the constraint as a target.
Listing 8. profile_tracking_controller definition, tracking a pose output by a custom trapezoidal motion profile generator graph. Constraint and solver graph definitions are omitted.
Robotics 14 00109 i008

3.1.8. Output to DOT

Although this tool is designed to facilitate controller design by providing reusable structure, getting every detail right on the first attempt can be difficult until higher-level tooling to guide the user becomes available. One feature that helps troubleshoot or incrementally build a controller is the ability to output a DOT file describing the structure of the graph, enabled by defining a filename using the dot_output_file key. When a filename is specified using this key, the controller initialization logic automatically generates this file with the structure of the controller.
The DOT files are plain text files containing a description of the graph topology, together with optional markup for the desired visual representation of some of the elements. A diagram representation like the one in Figure 7 can be generated in multiple supported image formats by processing the DOT file using the tools in the Graphviz [30] toolset. These graphs display each of the atomic computational elements as rectangular blocks. Ellipses represent the ports used to define the flow of information in the graph, with solid ellipses displaying output ports and dashed ellipses displaying input ports. Subgraphs in the hierarchy are represented by rectangular shapes that contain other elements inside. Additional details about how this computational graph is implemented and executed are discussed in Section 3.2.1 below.
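As a rough illustration of these conventions (the node and port names below are hypothetical, not taken from an actual generated file), a minimal DOT fragment in this style could look as follows:

```dot
digraph controller {
  subgraph cluster_example_constraint {
    label = "example_constraint";                 // subgraphs drawn as containing rectangles
    compute_jacobian [shape=box];                 // atomic computational block
    state_in [shape=ellipse, style=dashed, label="robot_state"];  // input port (dashed)
    jacobian_out [shape=ellipse, label="jacobian"];               // output port (solid)
    state_in -> compute_jacobian -> jacobian_out;
  }
  solver [shape=box];
  jacobian_out -> solver;
}
```

Processing such a file with, e.g., `dot -Tpng controller.dot -o controller.png` renders the diagram.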
For a more comprehensive example of the types of graphs that can be generated in a moderately complex controller used in practical scenarios, refer to the example discussed in Appendix A.

3.2. Implementation

Looking at some of the details of the controller framework implementing the specification above can help understand the design choices made and the resulting behavior of the generated controllers. At its core, the controller framework requires a way to dynamically build and update arbitrarily complex computational graphs. One option to achieve this is using code generation techniques to produce a controller performing the required computations from a given specification. This approach provides maximum flexibility in structuring the resulting controller but makes it very difficult to use it dynamically at runtime. Our controller relies on dataflow, a C++ library, to dynamically compose directed acyclic graphs of interconnected computational blocks, providing typed inputs and outputs to connect them together.
Reusability of the computational blocks used in these graphs may be maximized by encapsulating computations at the smallest possible granularity level. However, exposing overly granular blocks deprives them of clear high-level semantics and requires too much low-level composition work from the user. When designing this framework, we preferred to provide a collection of prebuilt, higher-level composable graphs. Each of these graphs has a clear intended use in a controller structure: compute a constraint, filter a signal, etc. Furthermore, these built-in graphs are packaged and loaded dynamically as plugins, and mechanisms are also provided for end users to supply their own graph plugins.
In addition to the runtime data transmitted amongst the blocks in the graph using their input/output data ports, each individual block in the graph may require static configuration to be read on initialization. As discussed, in the ROS ecosystem, this is typically achieved by reading the configuration from the ROS parameter server. Our infrastructure generalizes configuration loading through a ROS-independent abstract interface. By reading their configuration from this abstract interface, the composable graphs provided to build the controllers can also be used in different contexts.
Each constraint configured for a given controller ultimately outputs a Jacobian matrix relating robot joint velocities to some task space velocity. In order to be able to combine them into a single solver, it is important that every constraint shares a common representation of the robot joint-space vector. This is achieved by defining a common robot model object at the controller level to which every block in the computational graph has access. The current state of the robot at each time step is also provided to each block as an input, in an ordering coherent with that used by this model.
Each of these fundamental building blocks is described in more detail in the following sections.

3.2.1. Dataflow

The backbone of the proposed controller framework is a system, which we refer to as Dataflow, to represent computations as a directed acyclic graph of atomic operations. Nodes in these graphs contain arbitrary computations, and edges model precedence requirements, i.e., that the computations in a given node must finish before the execution of another may start. The Taskflow header-only C++ library [31,32] provides infrastructure to model computations in this way. By using this library, computational graphs can be declaratively defined and handed over to an executor, which can schedule them on multiple threads using a work-stealing algorithm. The importance of the computational dependency graph being acyclic comes from the automated parallel execution scheduling used by Taskflow: in order to determine a correct order of execution of the atomic computation blocks, the graph must contain no cycles.
In a general sense, there can be arbitrary factors determining computational precedence requirements. In particular, any computation that involves external side effects can dictate an order of operations. Therefore, the Taskflow library provides a general way to declare precedence relations between computations. However, the computational graphs produced by the controller framework proposed here have a specific objective: read sensor input, compute the optimal control commands, and apply these commands to the robot actuators. Since the only operation with side effects is the application of the control commands to the robot actuators, data flow dependencies between atomic computations are the only remaining source of precedence requirements. This observation guides the design of the Dataflow software layer built on top of Taskflow, which emphasizes the flow of data by explicitly modelling it via ports.
A very simple scenario, like the one shown in Figure 1, shows a computational graph consisting of atomic computations: read from sensors, determine control targets based on it, and compute the optimal command. Data-based dependencies between the computations are represented by the arrows in the figure, where each block needs the result of the ones preceding it. This graph topology could be implemented, using Taskflow, by the pseudocode in Listing 9. The first few lines forward-declare functions performing the actual computation, highlighting the data they need as input and produce as output. Inside the run_control method, variables to hold intermediate results are created first; then the individual computational elements, modeled as C++ lambda expressions, are added to the graph; and finally, the precedence relationships between them are explicitly declared.
Listing 9. Declaring a pipeline of tasks requires using global data and declaring precedences matching flow of this data between tasks.
Robotics 14 00109 i009a
Robotics 14 00109 i009b
The tf::Taskflow object holds the topology of the graph, composed of the computations to be executed and the required precedence relations between them. Taskflow then provides an implementation of a tf::Executor object, which, given the graph constructed above, can update all the nodes exactly once in the desired order. This matches the expectation of the ros_control infrastructure that every controller updates synchronously at the configured rate. Long-running computations that are not suitable for deterministic execution within the duration of a control loop should be handled separately from the controller, with the results injected into the controller using the input system described in Section 3.1.3.
Note that the data that must flow from one computation to the next (the sensor data and the command) is not handled by Taskflow in any way. The user of this library is responsible for defining and handling variables to hold the intermediate results of the computations. By manually using these variables to provide input to the computations and store their results, the data flow is implicitly defined. However, the precedence between the declared computations must still be explicitly declared to Taskflow, independently of the implicit data flow. This duplication requires additional work from the developer and is a source of difficult-to-detect runtime race conditions if the explicitly declared dependencies do not match the actual implicit ordering.
To mitigate both these problems, dataflow explicitly models the flow of information between the components using input/output ports. Atomic computations in dataflow must implement the specific C++ dataflow::Task interface and explicitly declare named and typed input or output ports. The user can then assemble the computational graph by defining the tasks and the connections between their ports. In this way, the intermediate data no longer needs to be handled by the user and is instead held by each task, specifically by the output ports it defines. Furthermore, when declaring a connection between a pair of input and output ports, a precedence relationship in the internal Taskflow graph representation is automatically defined, removing the previously mentioned duplication. The connection mechanism also ensures type correctness when connecting the ports in the graph and is capable of detecting unconnected input ports, which would result in updating the required computations with uninitialized data. The controller performs both these checks when loading a task specification into the dataflow structure and fails gracefully, preventing the activation of a controller containing these types of errors.
The same example used above could look like the pseudocode in Listing 10 when using dataflow. The first lines forward declare the specializations of dataflow::Task, where the ports are explicitly declared and whose associated computations are performed in a Task::update() method. Then, a run_control method equivalent to the one in Listing 9 can be found. After creating an instance of a graph and adding instances of the necessary computational blocks, the last few lines declare the connections between the ports in the tasks just once.
Listing 10. Declarative connections of task ports define precedences implicitly in dataflow.
Robotics 14 00109 i010a
Robotics 14 00109 i010b
When the objective is to build general computational graphs in which the precedence between tasks is uniquely determined by the data that flows between them, dataflow builds upon the foundations provided by Taskflow to offer a more expressive and less error-prone way for users to define graphs. Taskflow, on its own, can guarantee the consistency of the data used in the control graph as long as the precedence relations between the computational blocks are correctly declared by the user. With dataflow, the explicit modelling of the data flowing between the computational blocks is automatically transformed into execution precedence declarations. Therefore, a graph built using dataflow guarantees that the data is free from parallelism-related race conditions.

3.2.2. Graph Plugins

In order to be able to dynamically build controllers using the specification format described above, the controller framework must be able to inject multiple different types of preconstructed graphs into the structure. Pre-determining the set of available graph types would create too tight a coupling between the infrastructure to build the controllers and the actual computational elements inside it. It would also make it impossible for end users to extend the controller design framework without recompiling the complete stack of controllers and constraints. The alternative adopted here is to implement the graph building blocks, which the controller design framework uses to build specific controllers, as plugins. This approach also fits naturally into the ros_control framework, which itself uses a plugin system to dynamically load controllers into the robot control node.
The plugin system, as defined by the class_loader and pluginlib utilities in the ROS ecosystem, allows categories of plugins to be defined by a base class interface. This base class must have an empty constructor to allow factories to be able to construct objects in a generic way and is typically accompanied by plugin category-specific initialization methods. To adhere to this format, a base class that acts as a factory for the individual type of graph included in each plugin is defined as shown in Listing 11.
Listing 11. Graph factory base class, taking the generic scene model and configuration interface components designed for this purpose. The public-facing make_graph method internally calls the private, virtual make_graph_impl method, which individual graph implementations must override to construct the specific graph they provide.
Robotics 14 00109 i011
The built-in graphs, which are included by default in the core set of graphs, are implemented in such a way as to maximize their reusability:
  • A header with a function constructing the desired graph structure is first implemented. This header can be used to directly build the provided graphs into a C++ executable if the dynamic plugin and controller system described here is not necessary. This function takes the scene model but is independent of the generic configuration interface to reduce dependencies.
  • A graph factory class deriving from the base class in Listing 11 takes care of unpacking the required configuration parameters from the generic interface, calls the pure C++ function, and returns its graph. This class is registered as a dynamically loadable plugin using the ROS-agnostic macros provided with the class_loader tools. These classes can still be used as plugins in a ROS-independent environment.
  • A ROS package wrapper is configured to export the plugin to the ROS ecosystem using the conventions required by the pluginlib ecosystem. This facilitates the discovery of the plugin at runtime by the controller infrastructure by inspecting the configured ROS environment paths.
Since the ros_control ecosystem is inherently ROS dependent, the controller design framework discussed in this work directly or indirectly uses the three levels of structure mentioned above. However, it was deemed interesting to provide this separation to allow other uses in ROS-independent setups and particularly for a future migration to ROS 2 and ros2_control.

3.2.3. Scene Model

In the context of this framework, constraints for the robot motion are expressed by defining the relation between desired constraint-space velocities and the joint velocities through a Jacobian matrix. These are used to define bounds or desired values in said constraint space, and a solver algorithm is used to find the joint velocities that result in the desired constraint-space velocities. The solver must, therefore, consume the constraints in a way where the variables to solve (joint-space velocities, in our case) are coherent between all the constraints defined. Also, the computation of the desired constraint-space velocities (or their bounds) often requires knowledge of the current state of the robot. Our control framework supports this by offering every building block involved a common geometric model of the robot and its environment, as well as a snapshot of the current state of the robot. This scene model can be queried for relative Jacobians between arbitrary pairs of frames, keeping a consistent representation of the robot joint space. It can also be used to compute relative poses between frames in the model, which is often required in error computations. The joint state of the robot provided by the framework to each individual computational module is always kept coherent with what the scene model expects.
The model can be initialized with multiple instances of robot models described using URDF, which is the standard format used in the ROS ecosystem. Beyond these, additional fixed frames can be defined, rigidly connected to any existing frame in the model. Furthermore, the transforms defining these additional frames can be updated dynamically in order to support injecting information from externally tracked objects and declaring constraints with respect to them and the robot links. The scene model implementation builds a tree-like representation of the scene by incrementally composing the provided robot URDF models (which, by definition, must themselves adhere to a kinematic tree structure) and individual frames. The incremental construction ensures that no duplicate frames exist in the tree, which would otherwise lead to ambiguity when addressing such frames from the computational elements in the controller graph.
Arbitrary modifications of the structure of the scene model can lead to nondeterministic recomputation of the internal data structures used to keep track of the scene graph. To ensure deterministic execution of the controller discussed here, the flexibility of the scene model object is explicitly limited to prevent the topology of the underlying scene graph from being modified after the controller has been initially loaded, as already discussed in Section 3.1.6.

3.2.4. Configuration Interface

Even though first-class integration with ROS and ros_control has been one of the objectives of this work, an effort has been made to keep as much of the development as possible in ROS-independent C++. This should facilitate reusing most of the functionality of this development in other environments, including an eventual port to ros2_control and ROS 2.
In addition to the runtime data flow via topics or service calls, one of the key elements of the infrastructure provided by ROS is the tooling for configuring the components of a system when initializing it. Here, a configuration is typically loaded into the ROS parameter server from either YAML files or values included directly in the launch files used to define the elements composing a runtime system. An abstract configuration_interface has been implemented for individual elements of this system to consume generic configuration values as a hierarchy of key/value pairs. For effortless use in a ROS scenario, a concrete implementation of this interface that reads from the ROS parameter server is offered. Likewise, for ROS-independent setups, an implementation that internally reads directly from YAML files is also provided.

4. Experimental Use Cases

To validate the proposed system for authoring complex controllers based on intertwined constraints, we describe, in the following, three specific use cases where our novel declarative language for creating complex controllers has been put to the test. As the combined controllers are based on existing, lower-dimensional controllers, it is not within the scope of this article to test performance per se in terms of stability, cycle time, etc. Such performance criteria will depend on the performance of the selected component controllers, the design of which is not a function of the system for assembling composite controllers that we have developed here. Instead, the motivation for using the newly proposed system is that it facilitates the authoring of complex controllers. Demonstrating the effectiveness of the system would mean comparing the development costs of using the new system to implement a given application vs. authoring the system by hand using more traditional methods, a comparison that is not realistically achievable in a quantitative sense. As proof of the efficacy of the new system, we describe in the following the robotic behavior that is needed, show the controller graphs that illustrate the structure of the composite controller, provide the YAML code (as Supplementary Materials) to allow the reader to evaluate how constructing composite controllers using our declarative language compares to more traditional methods, and provide a link to a video (https://doi.org/10.5281/zenodo.16738065) of the composite controllers in use as evidence that the systems so programmed actually work.

4.1. Laboratory Automation

The TraceBot project addresses the automation of a sterility testing procedure in laboratory environments, with a focus on the traceability of the robotic procedure to ensure each step of the process is performed according to requirements. The system deployed to automate the procedure is composed of a dual-arm robot equipped with electric grippers and an RGB-D camera. The hardware elements available to the ros_control loop are both arms and their respective wrist-mounted FT sensors. This results in active control capability over 12 position-controlled DoFs. The grippers are not integrated in the ros_control loop in the main control PC, since they are directly controlled from the Universal Robots robot controllers.
The environment in which the robot operates and the sterility testing procedure present multiple control challenges. On the one hand, the elements to manipulate form a testing kit comprising a pair of plastic canisters attached to flexible tubes, all of which are transparent and must be inserted into their respective receptacles on top of a reflective metal pump. This poses a great challenge for the perception components in the setup, which is being addressed in [33,34]. On the control side, these efforts are supported as much as possible by designing specific controllers for individual steps of the process, which can accommodate imprecision in the object pose estimation.
One specific case, illustrated in Figure 8, where this has proven very effective is the implementation of a strategy to insert the plastic canisters into their receptacles in the pump. Since this is a peg-in-hole application with circular shapes, a two-step sequence is used in which a compliant behavior constraint first inserts the canister vertically in a tilted orientation and then rotates it until it is fully inserted. Each step in this sequence is executed by a dedicated controller, both of which are defined using the same set of constraints:
  • Vertical translation is controlled with a constraint combining spring-like proportional pull towards a target position with compliance.
  • Translation of the canister in the plane parallel to the hole is left purely compliant to allow the contact between both circular shapes and automatically center the canister in the hole.
  • Orientation of the canister is rigidly controlled.
In both scenarios, the task space consists of six DoFs, all of which are only influenced by the joints in the arm manipulating the canister. This, together with the joint-space velocity-minimizing behavior of the available solvers, results in the second arm remaining motionless.
The differences between the two controllers, depicted in Figure 9, lie in the targets used for the canister’s vertical displacement and orientation constraints. For the initial insertion step, the target position in the vertical translation axis is controlled by a linear motion generator towards a target slightly below the hole, producing, via the programmed compliance, a finite vertical force which helps the insertion, while the orientation is kept fixed at a tilted angle. In the second step, this is reversed. A constant vertical displacement target is set beneath the tray hole to maintain the downwards push, while a rotational motion is generated with a smooth profile to bring the canister upright and finish the insertion. A direct upright insertion of the canister into its receptacle hole in the pump would require submillimeter accuracy. However, the proposed strategy can accommodate positioning errors of a few centimeters by leveraging object and environment affordances through robot compliance.
Another challenging step in the procedure addressed in TraceBot is inserting a flexible tube into a narrow slot in the pump’s head, illustrated with a 3D printed mockup of the pump in Figure 10. In order to insert the tube into the slot, pulling tension must be applied to the tube to straighten it and align it with the slot. Additionally, the longitudinal pulling tension also creates transversal contraction of the tube diameter, facilitating the insertion. In order to achieve this, a custom strategy for bimanual manipulation, as described in [35], is implemented as a constraint. Here, the combined motion of both robot end effectors is controlled in a compliant way by combining two behaviors:
  • An absolute motion, applied to either one of the end effectors, affected by an additive combination of the measured wrenches in both arms.
  • A relative motion between both end effectors, affected by a subtractive combination of the measured wrenches in both arms.
Using this approach while manipulating a single object with both arms provides an intuitive means to describe the desired behavior. The absolute component can be made to control how the object moves and interacts with the environment. By adjusting the relative component, internal forces on the manipulated object can be controlled.
In this specific use case, this custom 12 DoF constraint is used to control the complete dual-arm manipulator, as shown in the controller diagram in Figure 11. The absolute component setpoints and admittance parameters are adjusted to first position the tube over the pump head slot and then lower it compliantly into its inserted position. The vertical target position is set below the slot to generate a vertical insertion force, and a back-and-forth oscillatory motion in the tube’s longitudinal direction is overlaid on the vertical motion profile by using a filter graph to help with overcoming friction. The setpoint for the relative component is set to produce a constant pulling force on the tube, which keeps it straight and also slightly contracts it radially, facilitating the insertion.

4.2. Teleoperation

In the context of the HELDU project, contact-based control is being investigated, implementing motion primitives that leverage the capabilities of this controller infrastructure. One particular scenario in which the control framework is applied is that of teleoperation. A six-DoF manipulator and an electric gripper are controlled using a haptic device, which can sense and actuate six DoFs. Both devices are shown in Figure 12.
ros_control hardware drivers for the haptic devices were used for this, with custom Cartesian handles implemented to read the device handle pose and twist and to command the wrench to be applied as feedback to the user. By using the same types of handles supported by the control framework, it is possible to define controller input and output signals that read and write into this device.
At the core of the controller, shown in Figure 13, is an admittance control constraint, where the motion of the robot’s end effector is set to track the pose of the haptic device, adjusted by the wrench measured by its built-in force-torque sensor. This admittance-control constraint works in the Cartesian velocity space of the robot end-effector frame and therefore constrains all six DoFs in the system. Implementing constrained teleoperation in only some of the directions of this Cartesian velocity space could be achieved here by selecting specific rows in the constraint Jacobian using the constraint transformer feature described in Section 3.1.5. For the directions excluded from the admittance teleoperation, another constraint could be defined to act exclusively on them and keep them fixed, either rigidly or with a compliant response.
In order to improve usability, a technique based on the Bubble technique described in [36] has been implemented as a filter graph and is applied to the haptic pose input to extend the reduced workspace of the haptic device to the bigger workspace of the robot, making use of filters as described in Section 3.1.4. The force feedback signal to the haptic device is computed as a combination of a virtual component related to the Bubble technique and another component proportional to the tracking error. This combination of pose differentiation and atomic vector operators is also implemented by using output filters.

4.3. Mobile Dual-Arm Collaborative Manipulation

Dual-arm mobile manipulation, where both the motion of the arms and the mobile base have to be controlled in a synchronous way, is another use case where the constraint-based approach can be used to great effect. In the SMARTHANDLE project, parts of the used car battery disassembly process are being automated. Some of the operations involved are considered too challenging to automate and are left to the human operator; others are being fully automated, and there is a third category of operations to be solved from a human–robot collaboration perspective. One operation in the latter category is that of removing the cover of the battery. After all the screws holding the battery cover in place are removed by the robot using an automatic screwdriver, the heavy and bulky battery cover is removed in collaboration by the dual-arm mobile robot and a human operator.
The mobile manipulator robot used for this task consists of a holonomic omnidirectional mobile platform, offering the ability to command its motion in three DoFs, onto which two LBR iiwa manipulators are mounted, each offering seven DoFs to be controlled. The complete platform provides, therefore, 17 velocity-controllable DoFs to the control layers. Ad-hoc hardware abstractions are implemented, both for running controllers on the physical platform and on a simulated version. In both cases, the DoFs of the mobile platform are modeled as three virtual joints (two prismatic and one revolute), which are exposed to the control framework, matching the degrees of freedom of the mobile platform. Virtual joint control commands are then translated to the appropriate velocities for the platform wheels. The platform and its associated simulator are shown in Figure 14.
For the cover manipulation per se, a collaborative admittance control constraint similar to the one used for the bimanual tube handling is defined to control the motion of the manipulator flanges, resulting in a controller with a structure like the one shown in Figure 15. Here, both arms grasp the car battery cover and provide lifting force, whereas the operator can guide the motion by pulling and pushing from the opposite side.
The task space defined by this setup constrains only 12 DoFs, leaving the system redundant. A controller defined using this framework splits the resulting motion between the arm joints and the motion of the mobile base. How the motion is shared between the arms and the base depends on the solver chosen for the optimization and on how it is parameterized. Future work will evaluate the suitability of weighted and priority-based solvers in order to select the most appropriate one for collaborative applications.
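One possible way such a solver can bias the redundancy resolution is a weighted pseudoinverse, where larger weights penalize motion of selected joints (e.g., the base). This is a minimal sketch of that idea, not the framework's actual solver implementation:

```python
import numpy as np

def weighted_pinv_solve(J, xdot, weights):
    """Resolve redundancy with a weighted pseudoinverse:
    qdot = W^-1 J^T (J W^-1 J^T)^-1 xdot.
    Larger weights penalize motion of the corresponding joints, which can
    be used, e.g., to favor arm motion over motion of the mobile base."""
    W_inv = np.diag(1.0 / np.asarray(weights, dtype=float))
    JWJt = J @ W_inv @ J.T
    return W_inv @ J.T @ np.linalg.solve(JWJt, xdot)
```

With equal weights this reduces to the plain pseudoinverse; raising the weight of one joint shifts the same task-space motion onto the others while still satisfying the constraint.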
In parallel to the analysis of the behavior resulting from using different solvers for this task, another line of work consists of exploring what additional objectives could be added to the task. In particular, an approach where a reactive potential field-based collision avoidance constraint is selectively added to the control when obstacles are near will be explored in this project. This additionally constrains the control problem to ensure the mobile base does not collide with obstacles on the factory floor.
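A reactive potential-field term of the kind mentioned above is commonly built from a repulsive potential that is active only within an influence distance of the obstacle. The following is a generic sketch of that classic formulation (after Khatib), with illustrative gain and distance parameters, not the constraint planned for the project:

```python
import numpy as np

def repulsive_velocity(base_pos, obstacle_pos, d0=1.0, eta=0.5):
    """Classic repulsive potential-field term: pushes the base away from
    an obstacle only when it is closer than the influence distance d0;
    zero otherwise. Derived as the negative gradient of
    U = 0.5 * eta * (1/d - 1/d0)^2 with respect to the base position."""
    diff = np.asarray(base_pos, float) - np.asarray(obstacle_pos, float)
    d = np.linalg.norm(diff)
    if d >= d0 or d == 0.0:
        return np.zeros_like(diff)
    return eta * (1.0 / d - 1.0 / d0) * (1.0 / d**2) * (diff / d)
```

Because the term vanishes beyond `d0`, it can be added to the optimization only when obstacles are near, matching the selective activation described above.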

4.4. Performance Analysis

In order to evaluate the runtime performance of the controllers generated using this framework, we conducted an experimental evaluation of the execution time required for a single update of various controllers. We followed a systematic approach to build increasingly complex controllers with which the analysis is performed. We also compared the execution time of the controllers generated by the framework with equivalent hand-coded variants to assess the impact of using our constraint-based specification on execution performance.
We used a simulated robot model for the analysis, consisting of two UR robots mounted on a vertical stand, totalling 12 controllable DoFs. Starting with a controller constraining a single task-space DoF, the number of constraints was incremented in each successive controller, up to constraining the full 12 DoFs of the robot. Each constraint used in designing these controllers was of a type that forces a link of the model to track an input twist signal with an overlaid compliance to external forces. In each iteration, a single direction in the constrained Cartesian velocity space is selected using the constraint transformer capability presented in Section 3.1.5, resulting in one additional DoF being constrained by each successive controller. First, six such constraints were defined for one of the two robot flanges, and then six more were incrementally added for the second arm. Note that this is a highly inefficient implementation, since most of these constraints perform duplicate calculations. However, it allowed us to build up the complexity systematically and consistently. The hand-coded variant was also built with an equivalent structure of duplicated constraints, from which individual constrained DoFs were selected for the solver. It was designed in a parametric way to allow computing the same control law with anywhere from one to twelve single-DoF constraints.
Each of the controllers was loaded on the simulator and executed for a total of 5 s, during which the time required by the controller to update its internal computational graph was measured using Linux’s monotonic clock. In order to avoid memory allocations, which could affect the controller’s performance, the running mean and standard deviation are computed using Welford’s algorithm. Furthermore, since our framework allows one to seamlessly execute the computational graph in a distributed manner, the controllers generated with our framework were executed 12 times. In each run, the number of threads assigned to the parallel executor that updates the computational graph was incremented, starting with one thread and going up to twelve threads.
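Welford's algorithm updates the mean and the sum of squared deviations incrementally from each new sample, so no buffer of past measurements (and hence no allocation) is needed inside the real-time loop. A minimal sketch:

```python
class RunningStats:
    """Allocation-free running mean and variance (Welford's algorithm),
    suitable for timing a real-time loop without disturbing it."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        # Population variance; return 0 until at least two samples exist.
        return self.m2 / self.n if self.n > 1 else 0.0
```

Each `update` costs a handful of arithmetic operations, which is why this formulation is preferred over storing all samples when instrumenting a control loop.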
The controllers from our framework, as well as the hand-coded variant, were compiled on an Ubuntu 20.04 system using gcc 9.4, which is the default supported version, on CMake’s Release configuration. The robot mockup and all the controllers were executed on a desktop workstation with an AMD Ryzen 9 5900x 12-core processor with a base clock speed of 3.7 GHz.
Figure 16 shows the aggregated results of all the experiment runs. Each additional constraint requires computing a Jacobian matrix and several relative geometric transformations by applying the forward kinematics model of the platform. As expected, the execution time grew together with the number of constraints that must be computed and optimized by the controller. Furthermore, it can be seen that in tasks with low complexity, the controllers generated with our framework presented a slight overhead with respect to the hand-coded variant, shown as a dotted line in the figure. However, the results also show the effectiveness of parallelizing the computations, particularly when controllers with a high number of constraints were being evaluated. In general, adding a second thread reduced the mean execution time to about 80% of the execution time on a single thread. A third thread reduced this further to about 65% and as the number of threads allocated increased, the reduction stabilized around 55% of the single-threaded execution time. In this specific scenario, the controllers generated by our framework outperformed the hand-coded equivalents when at least three threads were assigned for the most complex controller configurations.
Another relevant observation is that the mean execution times were always below a quarter of a millisecond, even in the worst-case scenario, when the controller with 12 constraints was executed on a single thread. As a reference, the UR robots accept commands every 2 ms; this result therefore leaves ample headroom to accommodate higher-frequency control loops or even more complex controllers. During all the experiments recorded, no single update of the controller loop took longer than 1 ms. These results show a certain computation-time overhead for controllers generated with our framework with respect to the hand-coded variant, presumably coming from the dynamic loading used by our plugin-based architecture or from the underlying mechanisms required to set up the distributed parallel execution by Taskflow. Whatever the source, the overhead appeared only in the low-complexity controller variants and can be considered acceptable, since execution times in these controllers were always well below the required loop times of common industrial manipulators.
Given the flexibility this framework offers for designing arbitrarily complex controllers, the results cannot be completely generalized, and specific benchmarking experiments should be performed under the exact conditions of any given scenario before running such a controller on real hardware. However, we conclude that the ability to generate controllers dynamically is a powerful tool for researching different controller architectures. In the cases where the overhead makes them less efficient than the hand-coded variant, the computations are nevertheless fast enough to make this overhead negligible. Our framework compensates for this slight performance decrease by enabling seamless parallelism, which is difficult and error-prone to implement efficiently by hand.

5. Conclusions and Future Work

A software framework for designing controllers using the constraint-based approach has been presented here. The proposed specification language provides expressive primitives to describe the most common patterns found in the computational graph for such a controller, namely the constraints themselves, the solver, input and output signals and their associated filters, etc. More general-purpose primitives for unexpected scenarios are also provided, where structures that do not fall into any of the categories above can be accommodated. A collection of reusable composable graphs implementing common constraints, filters, solvers, and more is also provided. These can be combined into task-specific controllers using the proposed specification format. The collection of available graphs is built on a plugin system, so the system can be easily extended with additional user-defined computational blocks, which can then be used to build new controllers. We have also demonstrated several use cases highlighting the flexibility of the proposed framework to implement a wide range of task-oriented controllers for different scenarios. All the presented use case scenarios are built by merely declaring combinations of constraints, solvers, filters, and other elements using the YAML-based specification language described in Section 3. At runtime, the execution of the resulting controller logic is performed by automatically parallelizing the computation of the individual blocks on the control graph wherever possible.
We have thus demonstrated the effectiveness of the proposed authoring system for solving real-world robot control problems. Nevertheless, after using the framework in practical scenarios, some aspects of the proposed specification format have been identified for future revision. For example, reusing named graph definitions in multiple controllers can reduce duplication but results in coupling between these controllers. Other mechanisms to reduce duplication, based on YAML anchors and aliases, might lead to better results. Some inconsistencies are also found in how different types of elements inside the controller are specified and parsed, like the inputs/outputs and the transformers, and should be homogenized. Finally, reusability of the software tooling could be improved by splitting the specification into the semantically informed format used currently and a more generic graph description with no specialized types. The explicit transformation step from the first into the second would also help separate concerns in the implementation.
Another aspect identified for future work is that of checking controller specifications offline. Invalid controller specifications can arise for several reasons. As mentioned in Section 3.2.1, errors such as addressing nonexistent graphs or ports, or leaving input ports unconnected, are already detected when loading a controller specification, before the controller activation can even be requested. However, the specific case of a cyclic controller structure is particularly troublesome because it can have unexpected results. When this happens, the algorithms in Taskflow can sometimes ignore parts of the computational graph, and in other cases, the execution of the graph can be completely blocked. Neither of those outcomes is acceptable, and further analysis of the generated computational graph structure is being explored to detect cyclic structures, either by looking for back edges in a depth-first search of the graph or by evaluating whether a topological sort of the graph nodes can be completed. Such checks should enhance the safety of using this tool to design complex controllers without risking undefined behaviors.
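The topological-sort approach to cycle detection can be sketched with Kahn's algorithm: if the sort cannot consume every node, the leftover nodes form (or depend on) a cycle. This is a generic sketch, not code from the framework:

```python
from collections import deque

def has_cycle(nodes, edges):
    """Detect cycles in a computational graph via Kahn's topological sort.
    nodes: iterable of node ids; edges: iterable of (src, dst) pairs."""
    indegree = {n: 0 for n in nodes}
    succ = {n: [] for n in nodes}
    for src, dst in edges:
        succ[src].append(dst)
        indegree[dst] += 1
    ready = deque(n for n in nodes if indegree[n] == 0)
    visited = 0
    while ready:
        n = ready.popleft()
        visited += 1
        for m in succ[n]:
            indegree[m] -= 1
            if indegree[m] == 0:
                ready.append(m)
    # If some nodes were never freed of incoming edges, a cycle exists.
    return visited < len(indegree)
```

Running this check at specification-loading time would reject cyclic controller structures before they ever reach the Taskflow executor.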
The present work proposes not a specific controller but rather a framework providing a set of building blocks and a structured way to design custom controllers for specific tasks. The framework provides certain tools for the controller designer to evaluate the correctness and performance of a running controller. The ability to generate a graphical representation of the internal computational graph, discussed in Section 3.1.8, can be leveraged to identify an incorrectly built controller structure. Furthermore, inspecting the evolution of the internal variables in the controller can be achieved by using the configurable I/O infrastructure presented in Section 3.1.3 to broadcast the data flowing through arbitrary ports to the ROS environment. ROS then provides tools like rviz or rqt_plot to inspect this data at runtime and to record it for offline inspection using rosbag.
The current implementation attempts to facilitate performance analysis by automatically projecting the joint velocities computed by the solver back into the task space where the targets are defined. However, it is currently the controller designers or application architects who must compare the targets with the computed results and determine how to act if the deviations are beyond the thresholds that are acceptable in each scenario. A possible future line of work is that of systematizing this analysis as part of the tools provided by the framework. Infrastructure for the designers to declare bounds for allowable deviations from the specified targets could be implemented, including automated tracking of the deviations. Such a system could be generalized to keep track of various internal values in the controller, with the condition number or singular values of the Jacobian matrices being a clear candidate. A set of primitives could also define basic reactions to identified violations of the declared bounds, such as stopping the controller, outputting null velocity commands, or simply broadcasting events to the ROS environment. Even greater expressive power could be provided to the user by combining this controller framework with a higher-level coordination layer, which can switch the active controllers based on the events generated by monitoring tools such as the ones just proposed. This would allow one to define the responses to degraded performance or to conflicts between constraints that may arise at runtime in the most appropriate manner in the context of each application.
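The kind of deviation monitor proposed above could look like the following sketch. The function name and the scalar-bound formulation are hypothetical; the framework currently leaves this comparison to the application:

```python
import numpy as np

def check_tracking(target_taskvel, achieved_taskvel, bound):
    """Hypothetical deviation monitor: compare the commanded task-space
    velocity with the one reprojected from the solver output, and flag a
    violation when the error norm exceeds a declared bound. The caller
    decides the reaction (stop, null command, broadcast event, ...)."""
    error = np.linalg.norm(
        np.asarray(target_taskvel, float) - np.asarray(achieved_taskvel, float)
    )
    return error, error > bound
```

In a full implementation, the returned flag would feed the reaction primitives or the higher-level coordination layer described above.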
These problems notwithstanding, the framework has been used to produce working and stable controllers in the applications discussed in the previous section. In all these cases, the controller has been able to compute the required control laws without triggering timeouts in the robot driver. A preliminary evaluation of the performance of the generated controllers when compared with equivalent hand-coded variants was also presented. This evaluation shows that the automatic parallel execution capability of our generated controllers can offset the overhead introduced by the generic implementation, which is only noticeable, and negligible in practice, for simple controller architectures. A more exhaustive execution time characterization is left as future work, considering a greater variety of controller complexity levels and different types of computing hardware. In addition to analyzing the temporal response of the generated controllers, another aspect which is not currently evaluated in this work is the memory footprint of the controllers generated by the framework. In particular, the cumulative memory usage of a system where multiple controllers are synthesized at runtime should be analyzed, paying close attention to whether memory is properly cleaned up upon unloading unused controllers. This is particularly relevant for transitioning this system from its current use in prototype setups to production use in long-running applications.
Lastly, the proposed framework is based mostly on pure C++ with few dependencies and integrated natively into the ros_control ecosystem. This allows seamless integration into ROS 1 applications, provided that hardware drivers that support the ros_control hardware abstraction system are available. Since the start of the framework development, the ros2_control framework for the newer ROS 2 has reached a significant maturity level. Plans to port our framework to support ROS 2 and ros2_control are also left as future work. As the implementation has already tried to decouple the core of the controller graph from the ROS/ros_control interaction parts, this porting should be relatively simple to achieve.

Conclusions

We have presented a novel authoring system that facilitates the building of composite, multi-dimensional controllers based on an aggregate of constraints. This system contributes to the body of knowledge and tools to support the deployment of ever more sophisticated robotic control systems.

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/robotics14080109/s1, Listing S1: YAML-based specifications of canister insertion controller, Listing S2: Specification for dual-arm tube handling controller, Listing S3: Specification for dual-arm teleoperation controller, Listing S4: Specification for simulation demo mobile dual-arm controllers, Listing S5: Specification for collaborative mobile dual-arm controller.

Author Contributions

Conceptualization, M.P. and A.R.; methodology, M.P.; software, M.P. and A.F.; validation, M.P. and A.F.; investigation, M.P.; writing—original draft preparation, M.P.; writing—review and editing, A.R. and J.M.; supervision, J.M.; project administration, A.R.; funding acquisition, A.R. and J.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work has received funding from the European Union’s Horizon 2020 research and innovation program, as part of the projects TraceBot under Grant Agreement No. 101017089 and SMARTHANDLE under Grant Agreement No. 101091792 and from the Basque Government’s ELKARTEK program, as part of the HELDU project (KK-2023/00055).

Data Availability Statement

Data is contained within the article or Supplementary Materials.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Real Use Case Controller Diagram

In this appendix, the computational structure of a more complex controller is shown and discussed. Specifically, one of the controllers from the laboratory automation use-case discussion is analyzed, the controller for the canister insertion task depicted in Figure 8, for which a schematic representation is shown in Figure 9.
As described in Section 4.1, this controller is designed to maintain three constraints during its execution: compliant position tracking on the Z axis of translation, pure compliant behavior on the XY axes of translation, and rigid orientation control.
Figure A1 shows the detailed computational graph corresponding to the vertical insertion controller shown in the left half of Figure 9.
The top part of the diagram, with red shading, shows three large blocks that compute the three constraints this controller keeps track of, including their inputs; it is further described in Appendix A.1. Underneath them, three blocks shaded in blue show the transformer blocks used to select the specific directions in Cartesian space to which each of the constraints is limited. A more detailed description of these is provided in Appendix A.2. The bottom subgraph, shaded in green, shows the solver. It computes the optimal solution to the defined constraints and is explained in Appendix A.3.
Figure A1. Diagram of the tilted canister insertion controller rendered from the automatically generated DOT description with zones shaded according to their function. Constraints and their inputs are on top with red shading, transformers are in the middle with blue shading, and the solver is at the bottom with green shading.

Appendix A.1. Constraints and Their Inputs

The red shaded area in Figure A1 contains three graphs generating the Jacobian matrices and target velocity vectors, which represent the three enforced constraints.
The leftmost constraint is shown with greater detail in Figure A2. It constrains an arbitrary target frame to follow an input twist command with an overlaid compliant behavior that reacts to measured external forces. The Jacobian it outputs, computed by the block labelled Jacobian, is the one describing the motion of the target frame with respect to the selected base frame. The rest of the constraint graph is dedicated to transforming the wrench measurements into the correct reference frame and point, multiplying by the configured admittance matrix, and combining it with the input target twist.
The resulting target velocity produced by the constraint can be represented by
$v_{in} + K_{adm}\,\mathrm{ChRefFrameAndPoint}(w_{meas})$.   (A1)
Here, $v_{in}$ is the base target velocity, represented in the graph by the input block labelled xy_vel_constraint_twist. $K_{adm}$ is a square admittance matrix that is provided as a parameter to the constraint configuration and is displayed in the graph as a block labelled admittance_k. The wrench measured by the force-torque sensor is denoted $w_{meas}$ in the equation.
Since the wrench is measured at the origin of the sensor and its numeric representation is expressed in the reference frame aligned with the axes of the sensor, the $\mathrm{ChRefFrameAndPoint}$ operator represents a two-step process: translating the wrench to the equivalent wrench as applied at the target frame of the constraint, and then expressing it in coordinates of the reference frame configured as the base frame for the constraint.
Translating the reference point and changing the reference frame of the measured wrench requires determining transformation matrices between the related frames at the current robot configuration. The blocks labelled fk_base_to_sensor and fk_sensor_to_frame compute the transforms between the base and sensor frames and between the sensor and target frame, respectively. In order to do this, they use the forward kinematic model provided by our scene model and consume the current joint position, which is generated by the block labeled joint_position.
The output transformation matrices containing these two transforms are then connected to respective blocks, which separate them into two separate components represented by the transformation: a rotation matrix and a translation vector. This is performed by the blocks labelled base_to_sensor_trans_rot and sensor_to_frame_trans_rot.
The translation from the sensor to the target frame of the constraint is then used by the change_refp_sensor_to_frame block to compute the equivalent wrench as if applied directly on the target frame. This equivalent wrench is then transformed to be expressed in the robot base frame by the change_base_sensor_to_base block, which uses the current rotation matrix between the sensor and the base frame as an input.
The resulting wrench is multiplied by the admittance matrix in the block labelled admittance_vel and added to the base target velocity in the block combine_vel. The output of the latter, implementing the calculation expressed by (A1), is finally output by this whole constraint graph as the target velocity.
Note that this constraint has two input signals, which are configured following the description in Section 3.1.3. The target base twist input ( v i n in (A1)) is shown on the left, labelled xy_vel_constraint_twist. This input is configured as a constant null value, resulting in the described purely compliant behavior; that is, no velocity is generated in this constraint space unless as a reaction to external forces. These external forces are the second input defined for this constraint, which is shown to the right, labelled wrench. This is introduced into the section of the graph that deals with changing its reference frame and point as described above.
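The computation expressed by (A1), including the change of reference point and frame of the measured wrench, can be sketched as follows. The function name, argument names, and frame conventions are illustrative assumptions, not the framework's API:

```python
import numpy as np

def admittance_target_velocity(v_in, K_adm, wrench_sensor,
                               R_base_sensor, p_sensor_to_frame):
    """Sketch of (A1): shift the measured wrench from the sensor origin to
    the constraint's target frame, express it in the base frame, multiply
    by the admittance matrix, and add the base target twist.
    wrench_sensor is a 6-vector [force; torque] at the sensor origin."""
    w = np.asarray(wrench_sensor, float)
    f, tau = w[:3], w[3:]
    # Change of reference point: the torque picks up the moment of the
    # force about the new point (r: vector from target frame to sensor).
    r = -np.asarray(p_sensor_to_frame, float)
    tau_frame = tau + np.cross(r, f)
    # Change of reference frame: rotate both components into the base frame.
    w_base = np.concatenate([R_base_sensor @ f, R_base_sensor @ tau_frame])
    return np.asarray(v_in, float) + K_adm @ w_base
```

With a null input twist (as configured here), the output is purely the admittance reaction $K_{adm}$ times the transformed wrench, producing the described compliant behavior.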
The second constraint, in the center of the red shaded area in Figure A1, is shown with greater detail in Figure A3. It implements tracking a reference pose and twist pair with an overlaid compliant response to external forces. The graph for this constraint is implemented hierarchically by combining the same compliant twist tracking constraint from Figure A2, and shown with a red shading in Figure A3, with an outer layer that computes its input twist based on the output of a motion profile generator and a proportional controller to force convergence towards the pose output by said profile. This outer part, shaded in blue in Figure A3, is shown more closely in Figure A4.
Figure A2. Diagram of computation for the pure compliant behavior constraint acting on the XY axes of translation.
The block labelled canister_base_lin_to_profile in Figure A4 implements a smooth linear minimum jerk motion profile as a target for the constraint. The computation of the profile itself is performed by the block labeled profile, which requires initial and target poses for the interpolation, the maximum allowable velocity for the generated profile, and the current time. The initial pose of the profile is computed automatically when starting the controller from the current pose of the relevant frame in the model by the block labelled fk_task. The target pose and maximum allowable velocities are provided as external inputs and shown to the right of the figure, labelled target_pose and max_twist, respectively. The time is automatically provided by the framework in the correct port by convention, but it is offset to start at zero when the controller is started by the time_tare block.
The pose and twist produced by this profile are then fed into the block labelled pff. This adds the input twist to a twist proportional to the error between the target pose and the current pose, which is also automatically computed by the fk block in the figure. The result is then processed by the red shade part in Figure A3, equivalent to the previously described constraint in Figure A2.
The resulting target velocity of the combination of this constraint and the profile generator is therefore equivalent to computing (A1) with
$v_{in} = \mathrm{ProfileTwist}(Pose_{ini}, Pose_{tgt}, v_{max}, t) + K\,\Delta\big(\mathrm{ProfilePose}(Pose_{ini}, Pose_{tgt}, v_{max}, t),\ Pose_{current}\big)$
The insertion of the profile generator into the controller is achieved by using the functionality described in Section 3.1.7.
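The combination of a minimum-jerk profile with a proportional law plus velocity feedforward, as computed by the profile and pff blocks, can be sketched for a single scalar dimension. This is an illustrative reconstruction: the framework derives the profile duration from the maximum allowed twist, whereas here the duration `T` is passed directly:

```python
def min_jerk(p0, p1, t, T):
    """Scalar minimum-jerk interpolation between p0 and p1 over duration T.
    Returns (position, velocity) at time t, clamped to [0, T]."""
    s = min(max(t / T, 0.0), 1.0)
    pos = p0 + (p1 - p0) * (10 * s**3 - 15 * s**4 + 6 * s**5)
    vel = (p1 - p0) * (30 * s**2 - 60 * s**3 + 30 * s**4) / T
    return pos, vel

def pff(profile_pos, profile_vel, current_pos, k_p):
    """Velocity feedforward plus proportional correction toward the
    profile position, as in the equation above."""
    return profile_vel + k_p * (profile_pos - current_pos)
```

The feedforward term keeps the tracking error small while the profile is moving, and the proportional term forces convergence toward the profile pose whenever the current pose lags behind.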
As described in Section 4.1, only specific sub-dimensions of the constraints above are actually enforced by the controller. The combination of both of them is limited to controlling the three-dimensional translational velocity. The mechanism for how this is performed using the constraint transformer mechanism is explained in Appendix A.2.
Unless the orientation is explicitly constrained, the result of solving for the above translational motion constraints generally results in a varying orientation of the target frame. The remaining constraint, shown in Figure A5, is of a type used to keep the relative pose between two frames fixed.
The internal subgraph, labelled cart_posvel_pff_constraint, implements tracking to externally provided pose and twist targets. It is similar to the last analyzed constraint, with the elements performing the computations required for the compliant behavior removed. The pff component implements a proportional control law pushing the current pose towards the target pose. In this case, the target is provided by the small chain of blocks in the top right, which takes the current robot configuration (joint_positions block), uses it to compute the current relative pose between the configured base and controlled frame (current_pos_fk block), and then holds the initial value as the target position for the duration of the controller execution (hold_value block). The final result is a constraint that tries to keep the relative pose between two configured frames fixed while the controller is executed.
All three graphs presented in detail above constrain the motion of a selected frame with respect to some other. The relative motion of two frames represents six DoFs, and therefore, the combined application of the three constraints above would attempt to constrain 18 DoFs. In reality, these are configured to constrain the same frame with respect to the same base, so the application of three different constraints must be either redundant or incompatible. The next section explains the mechanism through which each of the constraints is limited to act in only specific sub-spaces, resulting in limiting the active constrained DoFs to 6, fully determining the motion of the manipulated object.
Figure A3. Diagram of computation for the compliant position tracking behavior constraint acting on the Z axis of translation. Other parts of the controller are shown but grayed out to improve readability.
Figure A4. Computation of the input twist, which tracks the output of a motion profile generator by using a proportional law with velocity feedforward.
Figure A5. Diagram of computation for the constraint maintaining orientation fixed.

Appendix A.2. Constraint Transformers

The blocks in blue shading in Figure A1 are constraint transformers, which can be added to postprocess the Jacobian and target velocity signals output by the constraints.
Shown more closely in Figure A6, the transformers configured in this controller implement the selection of specific directions from each of the constraints. This is achieved by selecting the relevant rows from each Jacobian matrix and task velocity vector and discarding the rest. Internally, this is implemented by computing a selection matrix at the controller configuration phase which, when multiplied by the Jacobian or target velocity, selects the desired rows.
For the controller under consideration, the transformer blocks select specific directions from each constraint—translation in X and Y, translation in Z, and orientation, respectively—such that the behavior described in Section 4.1 above is produced, totalling six constrained DoFs by combining these directions from the three constraints.
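The row-selection mechanism can be sketched directly. Building the selection matrix once at configuration time and applying it by multiplication is exactly the scheme described above; the helper name is illustrative:

```python
import numpy as np

def make_selector(rows, dim=6):
    """Build a selection matrix that keeps only the given task-space
    directions (rows of the 6-DoF Cartesian space). Multiplying it by a
    constraint Jacobian or target velocity keeps just those rows:
    J_sel = S @ J;  v_sel = S @ v."""
    S = np.zeros((len(rows), dim))
    for i, r in enumerate(rows):
        S[i, r] = 1.0
    return S
```

For example, `make_selector([0, 1])` keeps translation in X and Y, `make_selector([2])` keeps translation in Z, and `make_selector([3, 4, 5])` keeps the orientation directions, matching the three transformers in this controller.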
Figure A6. Diagram of computation for the transformer blocks selecting configured components from each general constraint.

Appendix A.3. Solver

Finally, Figure A7 shows a close-up of the solver configured for this controller.
The solver in this case applies a plain pseudoinverse to solve the task resulting from the direct combination of the three chosen constraints. The first step, shown on top, is for two blocks, labelled merge_taskvels and merge_constraints, to combine the three constraint velocity vectors and three Jacobian matrices into a single task-space velocity vector and a single Jacobian by stacking them.
This would be done differently by other types of solvers: for the weighted pseudoinverse, weighing matrices would be introduced in this step; in solvers applying strict priorities, the Jacobians and task velocities would not be stacked together but solved separately, with each subsequent constraint being solved in the null-space of the higher priority ones, etc.
The computation of the optimal joint velocity takes place inside the pseudoinverse_solver block, which outputs it on the port labelled x (reflecting that it is the solution of an $Ax = b$-type problem).
For evaluation purposes, the solver graph also contains a block that projects the optimal joint velocity back into the desired task space using the merged Jacobian computed at the start. The result can be compared with the desired task velocity and used to monitor the task tracking error in application-specific logic, to deal with problems such as the inability to fulfill the defined constraints in the proximity of singularities.
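The stacking, pseudoinverse solution, and reprojection steps described above amount to the following sketch (illustrative names, not the framework's C++ implementation):

```python
import numpy as np

def solve_and_reproject(jacobians, taskvels):
    """Stack per-constraint Jacobians and target velocities, solve
    J qdot = v with the Moore-Penrose pseudoinverse, and reproject the
    joint solution back into task space for monitoring."""
    J = np.vstack(jacobians)          # merge_constraints
    v = np.concatenate(taskvels)      # merge_taskvels
    qdot = np.linalg.pinv(J) @ v      # pseudoinverse_solver, port "x"
    achieved = J @ qdot               # reprojection for error monitoring
    return qdot, achieved
```

When the stacked constraints are feasible, `achieved` matches the desired task velocity; near singularities the two diverge, which is precisely the deviation the reprojection block exposes for monitoring.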
Figure A7. Diagram of computation for the optimization of the specified constraints and reprojection into task space.

Figure 1. A potential controller design in a graph structure. The solver is configured with a task consisting of constraints A and B. Constraint A uses data from sensors 1 and 2, whereas constraint B only uses data from sensor 2.
Figure 2. Controller combining multiple constraints. Each constraint receives joint state data from the robot, labelled q, and outputs a Jacobian, labelled J, and a target velocity, labelled v*. The * superscript denotes a desired, or target, value.
Figure 3. Controller diagram with configurable inputs and outputs (I/O). Input from a joystick device is forwarded as target velocity v into the tracking constraint, and the measured tracking error e is broadcast to a ROS topic.
Figure 4. Controller diagram with filtered configurable I/O. The input from the FT sensor is passed through an offset filter and a gravity compensation filter before being fed as an input to a leadthrough constraint.
Figure 5. Controller diagram with a transformer applied to the single constraint corresponding to Listing 6. As noted, the original twist constraint affects six degrees of freedom (DoF) in the system, whereas after the selection, only three DoF are constrained.
Figure 6. Controller tracking a motion profile generated by a profile, representing the configuration in Listing 8.
Figure 7. Diagram of a controller rendered from the DOT description file for a controller similar to the one in Listing 4 (output of the actual_taskvel_projection port is not displayed here). The top half of the diagram represents the computation of the Jacobians and task velocity commands for two simple constraints, inside the blocks labelled left_arm_twist_constraint and right_arm_twist_constraint. In the bottom half, the solver merges the Jacobians and target velocities of both constraints and computes the solution using a pseudoinverse-based solver.
Figure 8. (Left) TraceBot dual-arm manipulator inserting a canister from the sterility testing kit into the pump’s drain tray. (Right) Detail of the tilted insertion to leverage robot compliance and interaction between object and receptacle shape to obtain self-centering behavior.
Figure 9. Controllers used to perform the canister insertion task. For the vertical insertion, on the left, a motion profile is input into the vertical compliant spring-like constraint, and a constant target orientation is kept. During the final reorientation, a vertical force is generated by a constant setpoint to the vertical spring-like constraint while a smooth rotation is executed.
Figure 10. Dual-arm tube handling use case, where the TraceBot robot manipulates a flexible tube with both grippers to insert it into a narrow slot while maintaining a pulling tension.
Figure 11. Controller for the bimanual tube manipulation. A single constraint constrains the motion of both manipulators, implementing a compliant behavior.
Figure 12. Teleoperation scenario wherein the robot manipulator is programmed to follow the movements and forces produced by the hands of the operator.
Figure 13. Controller for haptic teleoperation. The input of a joystick is used to drive the motion of a robot, adjusting it based on the reading of a force-torque sensor to provide compliant behavior. Tracking error and virtual forces from a custom workspace-extension method are combined to compute the force feedback applied through the haptic device.
Figure 14. (Left) Mobile dual-arm platform unscrewing bolts from the car battery cover. (Right) Simulation of the mobile dual-arm platform.
Figure 15. Collaborative control of mobile dual-arm manipulator. Both arms are commanded to generate a vertical lifting force, and their relative pose is required to maintain a fixed value.
Figure 16. Mean execution time of increasingly complex controller graphs with different numbers of threads assigned to the executor. Each continuous line shows the result of executing a controller generated by our framework with a given number of threads, as indicated in the legend. The dotted green line shows the mean execution time for hand-coded equivalent controllers solving from one to twelve constraints. The vertical axis measures the mean execution time of a single controller update in milliseconds, and the horizontal axis shows the number of constraints in the controller.
Table 1. List of supported configurable input types according to the data type they read and sources they read from. The input type is identified in the controller settings by the combination of data type and source, e.g., pose_from_constant or vector_from_topic.
| Data Type | …_from_handle | …_from_topic | …_from_service | …_from_constant |
|-----------|---------------|--------------|----------------|-----------------|
| pose      | X             | X            | X              | X               |
| twist     | X             | X            | X              | X               |
| wrench    | X             | X            | X              | X               |
| vector    |               | X            | X              | X               |
| boolean   |               | X            | X              | X               |
Table 2. List of supported configurable output types according to the data type they write and the destination they write into. The output type is identified in the controller settings by the combination of data type and destination, e.g., twist_to_handle or boolean_to_topic.
| Data Type | …_to_handle | …_to_topic |
|-----------|-------------|------------|
| pose      | X           | X          |
| twist     | X           | X          |
| wrench    | X           | X          |
| vector    |             | X          |
| scalar    |             | X          |
| boolean   |             | X          |
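As a hypothetical sketch of how these identifiers might appear in a controller description (the surrounding keys and values are invented for illustration and are not the framework's actual schema; only the type identifiers follow the <data type>_from_<source> and <data type>_to_<destination> convention from Tables 1 and 2):

```yaml
# Illustrative only: the key layout here is hypothetical, not the
# framework's actual configuration schema.
inputs:
  target_twist:
    type: twist_from_topic      # read a target twist from a ROS topic
    topic: /teleop/cmd_vel
  reference_pose:
    type: pose_from_constant    # fixed value provided at configuration time
    value:
      position: [0.4, 0.0, 0.3]
      orientation: [0.0, 0.0, 0.0, 1.0]
outputs:
  tracking_error:
    type: scalar_to_topic       # publish a scalar measurement to a topic
    topic: ~tracking_error
```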

Share and Cite

Prada, M.; Fernandez, A.; Remazeilles, A.; McIntyre, J. Flexible Constraint-Based Controller Framework for Ros_Control. Robotics 2025, 14, 109. https://doi.org/10.3390/robotics14080109
