1. Introduction
Robots that move by attaining and breaking contact with supporting surfaces, especially walking robots, are finding an increasing number of practical applications. Quadruped walking robots have been used for search and rescue operations [
1], industrial inspection and maintenance [
2], agriculture [
3], and healthcare [
4]. However, the lack of robustness, and of formal robustness guarantees, prevents a fuller range of practical uses for such robots.
Among the main sources of uncertainty that impair the function of walking robots are model uncertainty, a lack of information about the environment, and unexpected interactions and other types of external forces. For trajectory planning and model predictive control (MPC), such uncertainty means that the expected behavior of the robot does not match its actual motion; for training-based control, it leads to a difference in the performance of the policy between training and deployment [
5,
6].
There are various strategies that are employed to combat the effects of uncertainty. In control architectures relying on convex MPC [
7,
8], the robustness emerges due to the high update rates of the MPC-based control loop. Similarly, the model predictive path integral (MPPI) demonstrates an emergent robustness due to the high update rates of the control loop [
9,
10]. In training-based methods, robustness is targeted in conjunction with the “sim-to-real” problem: the task of preparing a policy trained purely on simulated data to work effectively in the real world [
11,
12,
13]. This is achieved by such techniques as domain randomization [
14,
15]. Alternatively, in [
16], the authors used a novel loss function motivated by the H∞ norm in order to teach the actor network robustness against external disturbances, instead of achieving robustness by applying disturbances during training. However, all these methods lack formal guarantees of robustness, which is a liability when the robot is required to perform critical tasks.
Optimal control methods have been used to tune a linear control law for robots with mechanical constraints, leveraging orthogonal decomposition of the states into active and static [
17,
18]. The resulting linear quadratic regulator implicitly relies on the existence of an accurate inverse dynamics algorithm. Existing inverse dynamics algorithms, such as those in [
19,
20,
21], in their turn, rely on accurate information about the robot’s state. In the case of active states, such requirements are not restrictive, since the same information is used by the feedback controller; however, in the case of the static states, this represents an expansion of the minimal requirements, necessitating either direct measurements or estimation of the static states. Thus, unknown (or unmeasured) static states represent a challenge to the traditional control pipeline (stabilizing control and inverse dynamics).
In linear control, a number of approaches have been developed to handle various types of uncertainty. Specifically, the uncertainties that we described above can be thought of as multiplicative and additive model uncertainties. One of the most successful approaches to handling uncertain inputs has been H∞ control. An H∞ controller guarantees attenuation of the input signal as measured by the L2 norm [22]. H∞ control has been used in a variety of applications, from fault-tolerant control [23] to vibration suppression for higher-precision machining [24] to robust unmanned aerial vehicle control [25].
In [
26], a simultaneous gain tuning method for a controller–observer pair was introduced, guaranteeing the desired L2 gain of the closed-loop system. The key design choice in that work was the decomposition of the state variables into active and static (following [
17,
18]) and the use of the Luenberger-style observer for constrained systems introduced in [
27]. The use of the full state observer allowed compensation for the action of the static states while attenuating the external disturbances. However, the observer-based approach introduced limitations: it necessitated the use of the Young relation (a matrix inequality approximation that bounds the definiteness of a matrix product), which introduced conservatism into the design, and, furthermore, it could not handle multiplicative model uncertainties. In this work, we introduce a framework that (1) does not rely on the estimation of static states, (2) is able to handle norm-bounded multiplicative uncertainties, and (3) does not introduce Young relation approximation when multiplicative uncertainties are not present.
H∞ control was developed for L2 input signals. However, in [22,28], it was shown that H∞ control can be used not only for L2 input signals, but also for periodic and almost periodic signals. Almost periodic signals are time functions that can be closely approximated (in the sense of a uniform norm) by a sum of harmonic functions [29]. An almost periodic signal is characterized by a set of almost-periods. Pure sinusoids and constant signals are subsets of the almost periodic signals. This allows us to use H∞ control to minimize the effect of unmeasured static states on the behavior of the system, as well as to attenuate other external disturbances. This unified approach leads to stronger results than separate static state compensation and disturbance attenuation.
The ability of H∞ control to attenuate input signals allows us to use it to suppress the effect of additive model uncertainties. We can further ensure a guaranteed level of attenuation in the presence of norm-bounded multiplicative uncertainties. Here, we can take advantage of the instruments developed for simultaneous robust observer and controller design, such as using Young's relation to handle the bilinearity in the matrix inequalities that appears due to the presence of multiplicative model uncertainties [30]. This represents a trade-off between conservatism (introduced by the linear approximation of bilinear matrix relations) and robustness. The conservatism of the method can be improved by the choice of hyperparameters (scaling factors) of the Young approximation. In [31], a grid-search procedure was developed to search for optimal values of such parameters.
Further, in this work, we introduce a weighted regularization, which allows us to influence the aggressiveness of the controller without directly prescribing the attenuation levels. We demonstrate the effectiveness of this regularization scheme in managing the trade-off between higher control gains and higher attenuation levels.
The main contributions of this work are as follows:
We propose a robust control design scheme that attenuates the effect of the unmeasured static states on the dynamics of a constrained system;
The proposed method does not require a state observer (or any other filter or dynamic controller) to suppress the effect of the static states to the desired level, leading to a smaller optimization problem and much faster control design (in terms of computational expense);
Unlike existing methods, the proposed control works with uncertain models, specifically in the presence of norm-bounded multiplicative model uncertainties;
The proposed control design can avoid conservative Young relation approximation when the state and control matrices are known exactly;
Unlike observer-based approaches, the proposed robust control design does not require a grid search to tighten the Young relation approximation; this is done automatically as a part of the optimization process.
The rest of the paper is organized as follows:
Section 2 gives the preliminary information on H∞ control and describes the tools related to linear matrix inequalities used in this paper.
Section 3 describes a linear model (certain and uncertain) of a robot with contact-related constraints.
Section 4 introduces two proposed methods: one for models with known model matrices and one for models with multiplicative uncertainties.
Section 5 describes the numerical experiments performed to verify the proposed method, studies the effect of the proposed regularization coefficients, and examines the gap between the upper bound on the L2 gain found in the control design process for a system with multiplicative uncertainties and the tighter bounds that can be established for particular realizations of the uncertain system.
2. Preliminaries
We start by defining a space of almost periodic functions and a root mean square norm on that space:
Definition 1. A continuous function f : ℝ → ℝⁿ is almost periodic (a.p.) if, for every ε > 0, there exists a relatively dense set T(ε) ⊂ ℝ such that, for all τ ∈ T(ε) and all t ∈ ℝ, the following holds: ‖f(t + τ) − f(t)‖ < ε, where ‖·‖ is the Euclidean norm on ℝⁿ [32,33]. AP is the space of such almost periodic functions.
Following [28], we define a root mean square norm over AP:
‖f‖_RMS = lim_{T→∞} ( (1/T) ∫₀ᵀ ‖f(t)‖² dt )^{1/2}.
Consider a system operator G : AP → AP. We can define an induced norm for the operator as follows:
‖G‖ = sup_{‖w‖_RMS ≠ 0} ‖Gw‖_RMS / ‖w‖_RMS.
It is proven that this induced operator norm is equivalent to the H∞ norm of the same system. This allows us to use H∞ control design to achieve suppression of almost periodic inputs [22,28].
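The RMS norm of an almost periodic signal can be checked numerically. A minimal NumPy sketch, with illustrative signals rather than the paper's inputs:

```python
import numpy as np

# Approximate ||f||_RMS = lim_{T->inf} sqrt( (1/T) * int_0^T ||f(t)||^2 dt )
# on a long finite window.  All signals below are illustrative examples.
def rms_norm(f, T=2000.0, n=200_000):
    t = np.linspace(0.0, T, n)
    return np.sqrt(np.mean(f(t) ** 2))

s1 = rms_norm(lambda t: 3.0 * np.sin(2.0 * t))               # sinusoid: 3/sqrt(2)
s2 = rms_norm(lambda t: -0.5 * np.ones_like(t))              # constant: 0.5
s3 = rms_norm(lambda t: np.sin(t) + np.sin(np.sqrt(2.0) * t))
# incommensurate harmonics: squared RMS norms add, so s3 is close to 1
```

For a constant ("static") signal the RMS norm is simply its magnitude, which is what lets the H∞ machinery treat unmeasured static states as attenuable inputs.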
In this paper, we cast the control design as an optimization problem with linear matrix inequality (LMI) constraints. Below, we present some of the key identities used in the derivations of the LMI constraints proposed in the paper.
Schur complement [34]: Given symmetric matrices A and C, a matrix B, and a block matrix M defined as
M = [ A, B ; Bᵀ, C ],
the following statements are equivalent: (1) M < 0 and (2) C < 0 together with A − B C⁻¹ Bᵀ < 0, where the signs < and ≤ denote negative definiteness and semidefiniteness when applied to matrices or quadratic forms (equivalently for the opposite sign).
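A quick numerical sanity check of this equivalence, with random example matrices unrelated to the robot model:

```python
import numpy as np

# Schur complement check: M = [[A, B], [B^T, C]] < 0  iff
# C < 0 and A - B C^{-1} B^T < 0 (for symmetric A, C).
rng = np.random.default_rng(0)
n, m = 4, 3
B = rng.standard_normal((n, m))
C = -5.0 * np.eye(m)

for a_scale in (5.0, 0.1):          # one definite and one indefinite case
    A = -a_scale * np.eye(n)
    M = np.block([[A, B], [B.T, C]])
    lhs = bool(np.all(np.linalg.eigvalsh(M) < 0))
    schur = A - B @ np.linalg.inv(C) @ B.T
    rhs = bool(np.all(np.linalg.eigvalsh(C) < 0)
               and np.all(np.linalg.eigvalsh(schur) < 0))
    assert lhs == rhs               # the two tests always agree
```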
Young’s Relation: For any symmetric positive definite matrix S, a positive scalar ε, and matrices X and Y of compatible dimensions, the following inequality holds [34]:
XᵀY + YᵀX ≤ ε XᵀS X + ε⁻¹ YᵀS⁻¹Y.
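The relation is easy to verify numerically for random matrices (an illustrative check, not tied to the robot model):

```python
import numpy as np

# Young's relation: X^T Y + Y^T X <= eps * X^T S X + (1/eps) * Y^T S^{-1} Y
# for any symmetric S > 0 and scalar eps > 0; "<=" is in the PSD sense.
rng = np.random.default_rng(1)
m, n = 5, 4
X = rng.standard_normal((m, n))
Y = rng.standard_normal((m, n))
G = rng.standard_normal((m, m))
S = G @ G.T + np.eye(m)                     # symmetric positive definite

for eps in (0.1, 1.0, 10.0):
    gap = (eps * X.T @ S @ X + (1.0 / eps) * Y.T @ np.linalg.inv(S) @ Y
           - X.T @ Y - Y.T @ X)
    gap = 0.5 * (gap + gap.T)               # symmetrize numerical residue
    assert np.linalg.eigvalsh(gap).min() > -1e-9   # gap is PSD for every eps
```

The tightness of the bound depends on ε, which is why treating the scaling factor as an optimization variable, as the proposed method does, matters.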
4. Proposed Method
As shown in the previous section, a walking robot model can be approximated as a linear system with a constant input, which is the result of the mechanical constraints applied to the robot. We can take advantage of the fact that a constant input is an almost periodic signal, which allows us to leverage H∞ control design to find a linear feedback control law that guarantees the desired attenuation of the input signal. An additional benefit of this approach is that an H∞ controller guarantees attenuation across the entire frequency spectrum; thus, unmodeled non-static additive disturbances will be attenuated at least as well as the static input signal.
In this section, we solve the problems stated previously. We start by considering Problem 1, which corresponds to a known plant (the linearized model of a walking robot is known precisely, while the static states are still not known or measured). Then, we proceed to Problem 2, the case where only a nominal linearized model is known, and the control needs to account for norm-bounded multiplicative uncertainties.
4.1. Control Design for a Known Plant
In this subsection, we solve the previously stated Problem 1: we propose a design for an H∞ controller that guarantees an upper bound on the L2 gain of the system, resulting in the attenuation of the effect of the static states on the system’s dynamics.
The system for which we design the control is (9), with the output capturing the active states:
We can introduce feedback control u = Kx. Based on the Bounded-Real Lemma (BRL), we can write the LMI condition that needs to be satisfied for the L2 gain of the system to be upper-bounded by a positive scalar γ:
where the control design variables P and Z are related to the control gain K through the expression K = ZP⁻¹. To facilitate the search for a lower-gain controller that still fulfills the H∞ requirements, we cast the control design as an optimization problem and introduce a regularization term into the cost function:
where α and β are positive scalar weights. This choice of cost function creates a trade-off between increasing the attenuation and limiting the controller gain.
4.2. Robust Control Design for a Plant with Multiplicative Uncertainties
In this subsection, we solve Problem 2 by designing an H∞ controller that guarantees an upper bound on the L2 gain of the uncertain system with norm-bounded multiplicative uncertainties, as well as unknown and unmeasured static states.
Theorem 1. System (10) with output z and linear control law u = Kx is stable and has an L2 gain of less than γ for all admissible uncertainty matrices Δ_A, Δ_B, and Δ_D (with spectral norms bounded by 1) if there exist matrices P and Z and positive scalars such that the LMI (15) holds.
Proof. Using (13), we can state the condition for any admissible realization of the system:
where
Using Young’s relation, we can write the following relations:
Using these inequalities, we find a conservative approximation of the original condition. Applying the Schur complement to each inequality, we recover the LMI (
15) linear in the decision variables. □
The optimal control problem is then cast as the following semidefinite program:
Figure 1 shows the control scheme for the proposed robust control method. It illustrates that the tuning method (21) based on Theorem 1 works for all admissible system realizations (i.e., for any multiplicative uncertainty within the prescribed bounds).
4.3. Upper Bound on the L2 Gain
In this subsection, we make a remark regarding the computation of the upper bounds on the L2 gain for systems with multiplicative uncertainties when the proposed method is used. The proposed method (21) allows us to design a control law that guarantees a bound on the L2 gain for any admissible realization of the uncertain system (10) (a realization of system (10) corresponds to a specific choice of the uncertain matrices Δ_A, Δ_B, and Δ_D within the prescribed bounds). Applying the designed control law and choosing an admissible realization of these matrices, we derive a particular realization of the closed-loop system.
In the robust control design phase, we compute the common upper bound on the L2 gain, valid for all admissible realizations of the uncertain system. However, for any particular closed-loop realization, we can find a tighter upper bound through the direct application of the Bounded-Real Lemma. Since the robust control design method uses the Young relation approximation, while the direct application of the BRL does not, we can expect the bound to be notably tighter for all admissible realizations of the uncertain system.
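For a particular realization with no feedthrough, this tighter bound can be computed by bisecting γ in the Bounded-Real Lemma; equivalently, γ exceeds the L2 gain exactly when the associated Hamiltonian matrix has no purely imaginary eigenvalues. A Python sketch with an illustrative scalar closed loop (not the robot model):

```python
import numpy as np

def l2_gain(A_cl, D, C, tol=1e-6):
    """Bisection for ||C (sI - A_cl)^{-1} D||_inf of a stable system."""
    def below(gamma):
        # gamma is an upper bound iff this Hamiltonian matrix has no
        # eigenvalues on the imaginary axis (away from zero)
        H = np.block([[A_cl, (D @ D.T) / gamma**2],
                      [-C.T @ C, -A_cl.T]])
        eig = np.linalg.eigvals(H)
        return not np.any((np.abs(eig.real) < 1e-8) & (np.abs(eig.imag) > 1e-8))
    lo, hi = 1e-9, 1e6
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if below(mid) else (mid, hi)
    return hi

g = l2_gain(np.array([[-1.0]]), np.array([[1.0]]), np.array([[1.0]]))
# for G(s) = 1/(s+1) the gain is 1, attained at zero frequency
```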
5. Numerical Study
We test the proposed algorithm on a flat quadruped robot in simulation. The robot is represented in
Figure 2. It consists of a body with front and rear legs. Each leg includes two joints (a hip joint connecting to the robot’s body and a knee joint). The model parameters are given in
Table 1; the model matrices are given in
Appendix A.
5.1. Comparative Analysis Against Comparable Methods
In this subsection, we make a comparative analysis of the proposed method against the previously developed controllers. We identify two comparable methods for constrained mechanical systems which realize a stabilizing feedback control law, minimizing control error in the active states:
The constrained linear quadratic regulator (CLQR) proposed in [
17,
18]. The method takes advantage of projecting the robot dynamics into the null space of constraint equations, which allows the optimal control problem to be cast with quadratic cost as a Riccati equation;
The observer-based H∞ control proposed in [26], which directly compensates for (rather than attenuates, as in the proposed method) the effect of the static states, based on their accurate estimation. The method requires a grid search to reduce the conservatism of the Young approximation.
The first experiment we perform is a comparative study of the two methods: the proposed H∞ method and the CLQR tuned to the following cost:
where the cost weight matrices are denoted Q and R.
Figure 3 shows the error dynamics for the active states of the quadruped robot using the linear control law u = Kx, where the gain K is tuned by solving the optimal control problem (14).
Figure 4 shows analogous error dynamics, but for the case where the CLQR is used.
As we can see from Figure 3, the application of the proposed method leads to convergence of the state error to a value close to zero. This is a direct result of the attenuation of the effect of the unmeasured static states on the robot dynamics. In contrast, the CLQR results in a noticeable steady-state error. In this experiment, the norm of the steady-state error for the proposed method is 0.0036, while, for the CLQR, it is 0.21; the norm of the unmeasured static states is 0.412. Comparing the graphs in Figure 3 and Figure 4 shows the practical importance of using a control scheme that attenuates or compensates for the static states.
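Steady-state error norms of this kind follow from a simple linear-algebra fact: for a Hurwitz closed loop ẋ = A_cl x + D w̄ driven by a constant input w̄, the state settles at x_ss = −A_cl⁻¹ D w̄. A toy sketch (the 2-state matrices are illustrative, not the quadruped model):

```python
import numpy as np

# Steady-state error under a constant (static-state) input.
A_cl = np.array([[0.0, 1.0], [-20.0, -9.0]])   # illustrative stable closed loop
D = np.array([[0.0], [1.0]])
w_bar = np.array([0.412])                       # input magnitude taken from the text

x_ss = -np.linalg.solve(A_cl, D @ w_bar)        # x_ss = -A_cl^{-1} D w_bar
err = np.linalg.norm(x_ss)                      # steady-state error norm
```

A design that attenuates the static input keeps the map from w̄ to the performance output small; a design that ignores the static states does not.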
In the next experiment, we compare the performance and computational expense of three methods: the proposed static feedback H∞ controller, the CLQR, and the observer-based H∞ control designed in [26]. We apply all three controllers to the same problem as in the previous experiment. To tune the controllers, we use the same semidefinite programming (SDP) solver, MOSEK [35], with a CVXpy wrapper. We compare the number of variables reported by the solver, the time to solution (the time it takes the solver to arrive at the optimal solution, as reported by the solver), and the steady-state error, measured after the transient modes have decayed (at the end of the simulation interval). The results of the experiment are presented in Table 2.
Table 2 demonstrates the following: (1) The proposed method, while slightly less accurate than the observer-based H∞ control, demonstrates clear qualitative superiority over the CLQR. (2) In terms of computation time, the proposed controller is 280 times faster than the observer-based H∞ control and 5 times slower than the CLQR. (3) In terms of the number of decision variables, the proposed method is nearly equivalent to the CLQR and nearly 6 times more compact than the observer-based H∞ control. The explosive growth in the computation time that we observed is directly linked to the computational complexity of primal-dual interior-point methods; the growth of the computation time with the number of decision variables p that we observed here fits a polynomial model. The per-iteration complexity of primal-dual interior-point methods, when applied to control design problems, is often described as a polynomial in the size of the principal positive semidefinite matrix [36]. The number of iterations itself is a function of the problem size and the desired accuracy.
Note that the number of variables in the design problem depends on the dimensions of the state space of the robot; the relation is close to quadratic, but different for different robots. For a full spatial model of a quadruped robot with fully actuated legs, 18 degrees of freedom, 12 mechanical constraints, and 12 active states, the proposed design will result in a problem with 288 decision variables. Doubling the number of active states and tripling the number of static states (compared with the flat model) leads to more than quadruple the number of decision variables. This illustrates the importance of casting the control design criteria as compact LMI when possible.
We can summarize the findings thus: The proposed method has a large advantage in accuracy over the CLQR without the prohibitive computational expense of the observer-based H∞ control, and it has a large advantage in computation time over the observer-based H∞ control without the inaccuracy of the CLQR. The existing methods offer either high accuracy with very long computation times or low accuracy with very fast computations; we thus proposed a method that is almost as accurate as the best alternative, with a computation time closer to that of the fastest available method.
5.2. Attenuation of the External Almost Periodic Signals
One of the critical problems in mobile robotics is the presence of external disturbances, which deteriorate the performance of mobile robots. In this subsection, we demonstrate that the proposed method can attenuate various types of disturbances, including almost periodic signals and signals with rich spectral images, such as the square wave.
In the first experiment, we add almost periodic signals to the input channels while keeping the static states intact, which causes the total input to be shifted, as shown in Figure 5. In Figure 6, we plot the active-state error for this experiment. We can observe that the input is attenuated, and the error dynamics are dominated by the initial conditions (the transient response) rather than by the input.
In the second experiment, we add square waves to the input channels, again keeping the static states intact, as shown in Figure 7. Note that square waves have a spectrum with both low and high frequencies, which allows us to test the performance of the controller in challenging conditions. In Figure 8, we plot the active-state error for this experiment. As in the previous experiment, the input is attenuated and the error dynamics are dominated by the initial conditions, showing that the proposed controller is capable of handling both low- and high-frequency disturbances, as well as the effect of the unmeasured static states.
5.3. Influence of Multiplicative Uncertainties on the Controller Performance
As stated in Section 4.3, there exists a gap between the common upper bound γ on the L2 gain of the uncertain system, found in the control design phase, and the tighter upper bounds γ̂ computed for particular realizations of that system. In this subsection, we demonstrate this gap for a quadruped robot while also showing that the upper bounds γ̂ computed through the Bounded-Real Lemma have low variation and thus can be effectively approximated.
The experiment is set up as follows: We solve the optimal control problem (21) for the flat quadruped robot, with the model parameters given in the previous subsection (the exact values of the model matrices are given in Appendix A). We find the optimal control law and the corresponding common upper bound γ. Then, we generate N specific realizations of the uncertain model (10) by choosing randomly generated values for the matrices Δ_A, Δ_B, and Δ_D with a spectral norm of less than 1. For each system realization, we compute the particular upper bound γ̂ through the Bounded-Real Lemma.
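Drawing admissible realizations amounts to sampling random matrices and rescaling them into the spectral-norm ball; a sketch (the shapes are illustrative):

```python
import numpy as np

def sample_bounded(rng, shape, bound=1.0):
    """Random matrix with spectral norm at most `bound`."""
    M = rng.standard_normal(shape)
    s = np.linalg.norm(M, 2)            # largest singular value
    return M * (bound * rng.uniform() / s)

rng = np.random.default_rng(42)
# one Delta_A-like, Delta_B-like, Delta_D-like sample per realization
deltas = [sample_bounded(rng, (4, 4)) for _ in range(100)]
max_norm = max(np.linalg.norm(d, 2) for d in deltas)
```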
Figure 9 shows the distribution of the particular upper bounds γ̂ as a bar graph. We can observe that the distribution has a well-defined center and trails off rapidly; moreover, the entire distribution is situated in a narrow range, and the difference in the upper bounds affects only the third significant figure. The upper bound γ found during control design is equal to 0.092. We can draw the following conclusions: (1) the common performance parameter significantly overestimates the upper bound on the L2 gain of the particular realizations of the system; (2) the tighter upper bounds can be effectively approximated.
We can demonstrate that the distribution of the upper bounds γ̂ depends on the bounds on the matrices Δ_A, Δ_B, and Δ_D. In the next experiment, we generate particular realizations of the uncertain system (10) using the following common bounds on the uncertain matrices:
where κ is the uncertainty scaling factor, a positive scalar.
Figure 10 shows a scatter plot of the upper bounds γ̂ for the particular realizations generated for each given value of κ. All experiments are performed with the same control law, designed for a single fixed value of κ.
We can observe that an increase in the uncertainty scaling factor leads to a linear increase in the spread of the observed upper bounds. We can use this relation to assess the gap between the expected and actual performance caused by the overestimation of the uncertainty.
5.4. Study of the Effect of Regularization
Our second experiment aims at exposing the effect that the regularization hyperparameters have on the solution of the optimal control problem (14). Specifically, we study the effects of the parameter α, which promotes more aggressive disturbance attenuation, and of the parameter β, which penalizes excessive control action.
Figure 11 shows two graphs: the dependence of the Frobenius norm of the control gain on the value of α and the dependence of the attenuation bound γ on α. In both cases, β is fixed at 0.001.
Figure 12 shows the same dependencies, but for a different fixed value of β.
Analyzing Figure 11 and Figure 12, we find that the parameter α acts as a slider, allowing us to choose the desired trade-off between higher performance (in terms of the attenuation bound γ) and a lower controller gain. The parameter β, on the other hand, controls the sharpness (the sensitivity) of this slider: the larger β is, the more sensitive the optimal control problem is to the choice of the slider variable α.
6. Discussion
In this section, we discuss the results presented in the previous sections and give practical recommendations for the use of the proposed control method.
6.1. Implementation and Complexity Analysis
Hyperparameter tuning. The proposed method features two important hyperparameters: α and β. While the preferred values of the parameters can be chosen using the analysis presented in Section 5.4, we can give general recommendations here. We recommend fixing the value of β to be low (in our case, β = 0.001) and choosing α based on the desired upper bound on the L2 gain of the system. The relation between the two parameters with a fixed β is shown in Figure 11.
Computational complexity. In the previous section, we demonstrated that the proposed algorithm shows a computational advantage in comparison with an observer-based method of comparable accuracy. This is a direct result of a more compact LMI (with fewer decision variables), as the primal-dual interior-point methods used in modern solvers scale poorly with the number of decision variables [36]. The compact size of the proposed LMI makes the method computationally affordable for robots with a large number of states, such as humanoid robots. The computational expense of the method is also directly linked to the maximal gain tuning update frequency that can be achieved for the robot. A lighter computational load allows the controller gains to be updated rapidly when the state moves away from the linearization point.
There exist a number of ways to integrate the proposed controller with the prevalent walking robot control pipelines. For instance, it could be used as a replacement for the whole-body impulse controller in the pipeline proposed in [
7].
6.2. Robustness
The robustness of the proposed method to norm-bounded uncertainties was directly addressed in the proof of Theorem 1. Other types of disturbance, which we did not address directly, include time delays, unmodeled dynamics, and the deviation from the linearization point.
Robustness to additive disturbances. The inherent ability of H∞ controllers to attenuate external inputs makes the method robust to additive disturbances, including almost periodic signals, as was shown in the previous section. One of the key features of H∞ control is that it upper-bounds the input amplification at all frequencies. We showed in the previous section that inputs with instantaneous changes (representing signals with high-frequency components) can be handled directly by the proposed method. Such signals may be experienced as a result of a change in contact or due to collisions with the environment. Other types of robot–environment interactions may result in aperiodic disturbance inputs, which can also be handled directly, as we demonstrated.
Robustness to time delays. The stability margins with respect to time delays, especially delays in the feedback channel of the controller, can be estimated using the small-delay approximation. In modern walking robots, the update frequency in the feedback channels is often much higher than 1 kHz, alleviating time delay problems.
Robustness to the unmodeled dynamics. Unmodeled dynamics and the deviation from the linearization point, if bounded, can both be conservatively approximated through the norm-bounded uncertainties, allowing us to use the proposed method to directly account for this type of uncertainty.
L2 gain bounds. We demonstrated that the common performance parameter γ, found as a part of the robust control design, significantly overestimates the upper bound on the L2 gain of particular realizations of the system. This can be linked to the use of the Young relation approximation in deriving the robust control design conditions. If a relatively accurate upper bound on the L2 gain of the system is required, it can be computed based on the nominal model of the system through the BRL.
7. Conclusions
In this paper, we presented a robust H∞ control method for mechanical systems with contact, especially suited to walking robots and other systems where the full state is not measured directly. Our method overcomes the effect of the static states (states that are fixed by the constraints imposed on the system), which, if unmeasured, can lead to a steady-state error and thus degraded performance of the robot. Unlike previous works, which relied on estimating the static states in order to compensate for their action through the feed-forward component of the control law, we proposed the use of an H∞ controller to attenuate their effect. This allowed us to propose a robust control design capable of handling bounded multiplicative uncertainties in the state, control, and disturbance matrices while guaranteeing an upper bound on the L2 gain of the uncertain system.
Handling multiplicative uncertainties leads to a bilinear matrix inequality (BMI) reformulation of the Bounded-Real Lemma. To handle the bilinear terms, we used Young’s relation approximation. We found that, for our formulation, it is possible to automatically tighten the approximation by including the Young’s relation scaling parameter as a variable of the optimization problem while maintaining linearity (the final optimization problem is an SDP). This allows us to avoid grid search, which is often required in Young’s relation-based methods, resulting in a radical improvement in the computational complexity, as grid search requires solving N instances of an SDP, and our method requires solving a single instance. The casting of the optimal control problem as a static feedback design resulted in smaller LMIs and much faster computation times compared with the observer-based alternative.
Also, we established that the proposed formulation overestimates the upper bounds on the L2 gain (the solution to the robust control design problem for a system with bounded multiplicative uncertainties generates a conservative upper bound on the closed-loop L2 gain). This bound can be tightened by solving the Bounded-Real Lemma problem for particular instances of the uncertain model (fixing the uncertain matrices by drawing them from an appropriate distribution). Our numerical experiments showed that such tightened upper bounds demonstrate a clear pattern and allow an approximation without a loss of robustness guarantees.
Further work could be done towards building a custom problem-tailored SDP solver for the proposed control design method, with the aim of radically increasing its speed and thus combating the issues related to high computational load when the controller gain needs to be frequently re-tuned. Combining highly efficient custom solvers with efficient and robust controllers may represent a new direction for walking robotics.