Article

H∞ Control for Walking Robots Robust to the Bounded Uncertainties in the State and the Model

Research Center for Artificial Intelligence, Innopolis University, Innopolis 420500, Russia
*
Author to whom correspondence should be addressed.
Robotics 2026, 15(4), 67; https://doi.org/10.3390/robotics15040067
Submission received: 24 February 2026 / Revised: 16 March 2026 / Accepted: 24 March 2026 / Published: 25 March 2026
(This article belongs to the Section Sensors and Control in Robotics)

Abstract

In recent years, we have seen a constant increase in the capabilities of walking robots, leading to early cases of their practical use, and much broader application is expected in the near future. However, creating a robust control design (in the presence of disturbances and model uncertainties) for walking robots remains a challenge. One challenging source of uncertainty is the combination of the contact constraints and the lack of full state information, which can potentially lead to an offset (a steady-state error) in the robot’s position, interfering with tasks requiring high accuracy and deteriorating the overall performance of the robot. This is further exacerbated by the presence of multiplicative model uncertainties, common to mobile robots. In this work, we introduce an H∞ control formulation designed to attenuate this type of disturbance. The proposed method can handle norm-bounded multiplicative uncertainties in the state, control, and disturbance matrices using a full-state static feedback control. The resulting control design procedure is a single semidefinite program, which provides a large computational advantage over the alternative dynamic feedback controller methods. We demonstrate the effectiveness of the method in comparison with the alternative formulations in simulation. We demonstrate that the method can be effectively tuned using a regularization term in the cost function. We show that the upper bounds on the H∞ gain of the closed-loop system can be effectively tightened after the control design.

1. Introduction

Robots that move by attaining and breaking contact with supporting surfaces, especially walking robots, are finding an increasing number of practical applications. Quadruped walking robots have been used for search and rescue operations [1], industrial inspection and maintenance [2], agriculture [3], and healthcare [4]. However, the lack of robustness and robustness guarantees prevents a fuller range of practical use for such robots.
Among the main sources of uncertainty that impair the function of walking robots are the model uncertainty, lack of information about the environment, and unexpected interactions and other types of external forces. For trajectory planning and model predictive control (MPC), such uncertainty means that the expected behavior of the robot does not match its actual motion; for training-based control, such uncertainty leads to a difference in performance of the policy between training and deployment [5,6].
There are various strategies employed to combat the effects of uncertainty. In control architectures relying on convex MPC [7,8], robustness emerges due to the high update rates of the MPC-based control loop. Similarly, model predictive path integral (MPPI) control demonstrates emergent robustness due to the high update rates of the control loop [9,10]. In training-based methods, robustness is targeted in conjunction with the “sim-to-real” problem: the task of preparing a policy trained purely on simulated data to work effectively in the real world [11,12,13]. This is achieved by techniques such as domain randomization [14,15]. Alternatively, in [16], the authors used a novel loss function motivated by the H∞ norm to teach the actor network to learn robustness against external disturbances, instead of achieving robustness by applying disturbances during training. However, all these methods lack formal guarantees of robustness, which is a liability when the robot is required to perform critical tasks.
Optimal control methods have been used to tune a linear control law for robots with mechanical constraints, leveraging orthogonal decomposition of the states into active and static [17,18]. The resulting linear quadratic regulator implicitly relies on the existence of an accurate inverse dynamics algorithm. Existing inverse dynamics algorithms, such as those in [19,20,21], in their turn, rely on accurate information about the robot’s state. In the case of active states, such requirements are not restrictive, since the same information is used by the feedback controller; however, in the case of the static states, this represents an expansion of the minimal requirements, necessitating either direct measurements or estimation of the static states. Thus, unknown (or unmeasured) static states represent a challenge to the traditional control pipeline (stabilizing control and inverse dynamics).
In linear control, a number of approaches have been developed to handle various types of uncertainty. Specifically, the uncertainties described above can be thought of as multiplicative and additive model uncertainties. One of the most successful approaches to handling uncertain inputs has been H∞ control. An H∞ controller guarantees attenuation of the input signal as measured by the H∞ norm [22]. H∞ control has been used in a variety of applications, from fault-tolerant control [23] to vibration suppression for higher-precision machining [24] to robust unmanned aerial vehicle control [25].
In [26], a simultaneous gain tuning method for a controller–observer pair was introduced, guaranteeing the desired H∞ gain of the closed-loop system. The key design choice in that work was the decomposition of the state variables into active and static (following [17,18]) and the use of the Luenberger-style observer for constrained systems introduced in [27]. The use of the full-state observer allowed compensation for the action of the static states while attenuating the external disturbances. However, the observer-based approach introduced limitations: it necessitated the use of the Young relation (a matrix inequality approximation that bounds the definiteness of a matrix product), which introduced conservatism into the design, and, furthermore, it could not handle multiplicative model uncertainties. In this work, we introduce a framework that (1) does not rely on the estimation of static states, (2) is able to handle norm-bounded multiplicative uncertainties, and (3) does not introduce the Young relation approximation when multiplicative uncertainties are not present.
H∞ control was developed for L2 input signals. However, in [22,28], it was shown that H∞ control can be used not only for L2 input signals but also for periodic and almost periodic signals. Almost periodic signals are time functions that can be closely approximated (in the sense of a uniform norm) by a sum of harmonic functions [29]. An almost periodic signal is characterized by a set of almost-periods. Pure sinusoids and constant signals are a subset of almost periodic signals. This allows us to use H∞ control to minimize the effect of unmeasured static states on the behavior of the system, as well as to attenuate other external disturbances. This unified approach leads to stronger results than separate static state compensation and disturbance attenuation.
The ability of H∞ control to attenuate input signals allows us to use it to suppress the effect of additive model uncertainties. We can further ensure a guaranteed level of attenuation in the presence of norm-bounded multiplicative uncertainties. Here, we can take advantage of the instruments developed for simultaneous robust observer and controller design, such as using Young’s relation to handle the bilinearity in the matrix inequalities that appears due to the presence of multiplicative model uncertainties [30]. This represents a trade-off between conservatism (introduced by the linear approximation of bilinear matrix relations) and robustness. The conservatism of the method can be improved by the choice of the hyperparameters (scaling factors) of the Young approximation. In [31], a grid-search procedure was developed to search for optimal values of such parameters.
Further, in this work, we introduce a weighted regularization, which allows us to influence the aggressiveness of the controller without directly prescribing the attenuation levels. We demonstrate the effectiveness of this regularization scheme in managing the trade-off between higher control gains and higher attenuation levels.
The main contributions of this work are as follows:
  • We propose a robust control design scheme that attenuates the effect of the unmeasured static states on the dynamics of a constrained system;
  • The proposed method does not require a state observer (or any other filter or dynamic controller) to suppress the effect of the static states to the desired level, leading to a smaller optimization problem and much faster control design (in terms of computational expense);
  • Unlike existing methods, the proposed control works with uncertain models, specifically in the presence of norm-bounded multiplicative model uncertainties;
  • The proposed control design can avoid conservative Young relation approximation when the state and control matrices are known exactly;
  • Unlike observer-based approaches, the proposed robust control design does not require a grid search to tighten the Young relation approximation; this is done automatically as a part of the optimization process.
The rest of the paper is organized as follows: Section 2 gives preliminary information on H∞ control and describes the tools related to linear matrix inequalities used in this paper. Section 3 describes a linear model (certain and uncertain) of a robot with contact-related constraints. Section 4 introduces the two proposed methods: one for models with known model matrices and one for models with multiplicative uncertainties. Section 5 describes the numerical experiments performed to verify the proposed method, studies the effect of the proposed regularization coefficients, and examines the gap between the upper bound on the H∞ gain found in the control design process for a system with multiplicative uncertainties and the tighter bounds that can be established for particular realizations of the uncertain system.

2. Preliminaries

We start by defining the space of almost periodic functions AP^n and a root-mean-square norm ‖·‖_M on that space:
Definition 1.
A continuous function f : ℝ → ℝⁿ is almost periodic (a.p.) if, for every ϵ > 0, there exists a relatively dense set {τ}_ϵ such that, for all τ ∈ {τ}_ϵ, the following holds:
\sup_{t \in \mathbb{R}} \| f(t + \tau) - f(t) \|_2 \le \epsilon,
where ‖·‖₂ is the Euclidean norm on ℝⁿ [32,33]. AP^n is the space of such almost periodic functions.
Following [28], we define a root-mean-square norm over AP^n:
\|f\|_M = \lim_{T \to \infty} \left( \frac{1}{2T} \int_{-T}^{T} f(t)^\top f(t)\, dt \right)^{1/2}.
Consider a system operator G : AP^n → AP^m. We can define an induced norm for the operator as follows:
\|G\|_M = \sup_{f \in AP^n,\ \|f\|_M \neq 0} \frac{\|G f\|_M}{\|f\|_M}.
It has been proven that the induced operator norm ‖G‖_M is equal to the H∞ norm of the same system. This allows us to use the H∞ control design to achieve suppression of almost periodic inputs [22,28].
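As a quick numerical check of this definition (a sketch assuming the 1/(2T) normalization over a symmetric window), the M-norm of a unit sinusoid equals its RMS value, 1/√2 ≈ 0.7071:

```python
import numpy as np

# Numerical estimate of the M-norm (root-mean-square norm) of f(t) = sin(t)
# over a large window [-T, T]; the limit for a unit sinusoid is 1/sqrt(2).
def m_norm_estimate(f, T, n=200_000):
    t = np.linspace(-T, T, n)
    dt = t[1] - t[0]
    # Riemann-sum approximation of (1/(2T)) * integral of f(t)^T f(t)
    return np.sqrt(np.sum(f(t) ** 2) * dt / (2.0 * T))

est = m_norm_estimate(np.sin, T=1000.0)
assert abs(est - 1.0 / np.sqrt(2.0)) < 1e-3
```

A constant signal f(t) = c gives ‖f‖_M = |c| under the same formula, which is why static offsets register as nonzero almost periodic inputs.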
In this paper, we cast the H∞ control design as an optimization problem with linear matrix inequality (LMI) constraints. Below, we present some of the key identities used in the derivations of the LMI constraints proposed in the paper.
Schur complement [34]: Given symmetric matrices A and C, a matrix B, and a block matrix M defined as
M = \begin{bmatrix} A & B \\ B^\top & C \end{bmatrix},
the following statements are equivalent: (1) M < 0 and (2) A - B C^{-1} B^\top < 0, C < 0, where the signs < and ≤ denote negative definiteness and semidefiniteness when applied to matrices or quadratic forms (and analogously for the opposite signs).
Young’s Relation: For any positive scalar ϵ > 0 and matrices X, Y, and F, where F^\top F \le I, the following inequality holds [34]:
X^\top F Y + Y^\top F^\top X \le \epsilon X^\top X + \frac{1}{\epsilon} Y^\top Y.
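Both identities are easy to sanity-check numerically; a small NumPy sketch with random matrices (illustrative sizes, not tied to the robot model):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Young's relation: X^T F Y + Y^T F^T X <= eps X^T X + (1/eps) Y^T Y for F^T F <= I
X, Y = rng.standard_normal((n, n)), rng.standard_normal((n, n))
F = rng.standard_normal((n, n))
F /= np.linalg.norm(F, 2) * 1.01              # enforce ||F||_2 < 1, i.e., F^T F <= I
eps = 0.7
lhs = X.T @ F @ Y + Y.T @ F.T @ X
rhs = eps * X.T @ X + (1.0 / eps) * Y.T @ Y
gap = np.linalg.eigvalsh(rhs - lhs).min()     # >= 0 iff lhs <= rhs in the matrix sense
assert gap >= -1e-9

# Schur complement: M < 0  <=>  C < 0 and A - B C^{-1} B^T < 0
R = rng.standard_normal((2 * n, 2 * n))
M = -(R.T @ R + np.eye(2 * n))                # symmetric negative definite by construction
A_blk, B_blk, C_blk = M[:n, :n], M[:n, n:], M[n:, n:]
schur = A_blk - B_blk @ np.linalg.solve(C_blk, B_blk.T)
assert np.linalg.eigvalsh(C_blk).max() < 0
assert np.linalg.eigvalsh(schur).max() < 0
```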

3. Problem Statement

In this section, we begin by introducing the nonlinear model of a walking robot and outline the derivation of the linearized description of the robot’s dynamics with and without uncertainties. Then, we formulate the problems this paper solves.

3.1. Linearized Robot Model

The dynamics of a walking robot are governed by a set of differential equations and constraints:
H \ddot{q} + V \dot{q} + g = S \tau + J^\top \lambda, \qquad h(q) = 0,
where the matrices H, V, J = ∂h/∂q, and S are the generalized inertia matrix, the matrix of Coriolis and normal inertial forces, the constraint Jacobian, and the actuation selector matrix, while the vectors g, q, λ, and τ are the generalized gravity force, generalized coordinates, reaction forces, and joint torques. The second equation is the contact-related constraint. The first and second derivatives of the constraint equation form an additional system of equations:
J \dot{q} = 0, \qquad J \ddot{q} + \dot{J} \dot{q} = 0,
which we use to identify the tangent coordinates r for the Differential Algebraic Equation (DAE) (6); we will refer to these as active states. We define the active states r as null-space coordinates of the constraint matrix
G = \begin{bmatrix} 0 & J \\ J & \dot{J} \end{bmatrix}
and the static states ρ as its row-space coordinates. The relation between the generalized coordinates and the active and static states is expressed as follows:
\begin{bmatrix} \dot{q} \\ q \end{bmatrix} = N r + \bar{N} \rho,
where N and N̄ are matrices whose columns form orthonormal bases of the null space and the row space of G, respectively.
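The bases N and N̄ can be computed from a singular value decomposition of the constraint matrix; a sketch with a random stand-in for G (hypothetical sizes, not the robot’s actual constraints):

```python
import numpy as np

rng = np.random.default_rng(1)
nq, nc = 6, 2                                # toy sizes: 6 state components, 2 constraint rows
G = rng.standard_normal((nc, nq))            # stand-in for the constraint matrix

# Rows of Vt associated with nonzero singular values span the row space;
# the remaining rows span the null space.
U_, s, Vt = np.linalg.svd(G)
rank = int(np.sum(s > 1e-10))
N_bar = Vt[:rank].T                          # orthonormal basis of the row space
N = Vt[rank:].T                              # orthonormal basis of the null space

# Sanity checks: G annihilates N, and [N, N_bar] is an orthonormal basis
assert np.allclose(G @ N, 0)
basis = np.hstack([N, N_bar])
assert np.allclose(basis.T @ basis, np.eye(nq))

# Any state x decomposes as x = N r + N_bar rho with r = N^T x, rho = N_bar^T x
x = rng.standard_normal(nq)
r, rho = N.T @ x, N_bar.T @ x
assert np.allclose(N @ r + N_bar @ rho, x)
```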
In the case of independent constraints, the matrix J J^\top has full rank, which allows us to solve Equation (6) for the generalized accelerations. Linearizing with respect to the tangent coordinates gives us a linear system:
\dot{r} = A_r r + B u + A_\rho \rho,
where u is the control input, A_r and B are the state and control matrices, and A_ρ is a matrix that represents the action of the static states on the active state velocities.
Uncertainty with respect to the model parameters or an unknown offset with respect to the linearization point can be captured by introducing bounded multiplicative uncertainties into the model, leading to the following modification of Equation (9):
\dot{r} = (A_r + \Delta A_r) r + (B + \Delta B) u + (A_\rho + \Delta A_\rho) \rho,
where \Delta A_r = M_1 F_1(t) N_1, \Delta B = M_2 F_2(t) N_2, and \Delta A_\rho = M_3 F_3(t) N_3 are norm-bounded structured uncertainty matrices with F_i^\top(t) F_i(t) \le I for i = 1, 2, 3.

3.2. Problem Formulation

In this paper, we solve the following two problems:
Problem 1.
Given the dynamical system (9) and an output channel z = C r, we find a control law that guarantees an upper bound on the H∞ gain of the closed-loop system.
Problem 2.
Given the uncertain dynamical system
\dot{r} = (A_r + \Delta A_r) r + (B + \Delta B) u + (A_\rho + \Delta A_\rho) \rho, \qquad z = C r,
we find a control law that guarantees an upper bound on the H∞ gain of the closed-loop system for any admissible realization of the system (an admissible realization corresponds to particular values of the matrices F_i such that F_i^\top(t) F_i(t) \le I for i = 1, 2, 3).

4. Proposed Method

As shown in the previous section, a walking robot model can be approximated as a linear system with a constant input, which is the result of the mechanical constraints applied to the robot. We can take advantage of the fact that a constant input is an almost periodic signal, which allows us to leverage the H∞ control design to find a linear feedback control law that guarantees the desired attenuation of the input signal. An additional benefit of such an approach is the fact that an H∞ controller guarantees attenuation across the entire frequency spectrum; thus, unmodeled non-static additive disturbances will be attenuated at least as well as the static input signal.
In this section, we solve the problems stated previously. We start by considering Problem 1, which corresponds to a known plant (the linearized model of a walking robot is known precisely, while the static states are still not known or measured). Then, we proceed to Problem 2, the case where only a nominal linearized model is known, and the control needs to account for norm-bounded multiplicative uncertainties.

4.1. H∞ Control Design for a Known Plant

In this subsection, we solve the previously stated Problem 1: we propose a design for an H∞ controller that guarantees an upper bound on the H∞ gain of the system, resulting in the attenuation of the effect of the static states on the system’s dynamics.
The system for which we design the control is (9), with the output capturing the active states r:
\dot{r} = A_r r + B u + A_\rho \rho, \qquad z = C r.
We introduce the feedback control u = K r. Based on the Bounded-Real Lemma (BRL), we can write the LMI condition that needs to be satisfied for the H∞ gain of the system to be upper-bounded by a positive scalar γ:
M = \begin{bmatrix} (A_r P + B U)^\top + A_r P + B U & A_\rho & (C P)^\top \\ A_\rho^\top & -\gamma I & 0 \\ C P & 0 & -\gamma I \end{bmatrix} < 0,
where the control design variables P > 0 and U are related to the control gain through the expression K = U P^{-1}. To facilitate the search for a lower-gain controller that still fulfills the H∞ requirements, we cast the control design as an optimization problem and introduce a regularization term in the cost function:
\underset{\gamma,\, \eta,\, P,\, U}{\text{minimize}} \;\; \alpha \gamma + \beta\, \frac{\operatorname{tr}(U^\top U)}{\eta}, \qquad \text{subject to} \;\; P > \eta I, \;\; \gamma > 0, \;\; \eta > 0, \;\; M < 0,
where α and β are positive scalar weights. This choice of cost function creates a trade-off between increasing attenuation and limiting the controller gain.

4.2. Robust H∞ Control Design for a Plant with Multiplicative Uncertainties

In this subsection, we solve Problem 2 by designing an H∞ controller that guarantees an upper bound on the H∞ gain of the uncertain system with norm-bounded multiplicative uncertainties as well as unknown and unmeasured static states.
Theorem 1.
System (10) with output z = C r and linear control law u = K r is stable and has an H∞ gain of less than γ for all \Delta A_r = M_1 F_1(t) N_1, \Delta B = M_2 F_2(t) N_2, and \Delta A_\rho = M_3 F_3(t) N_3 (where F_i^\top(t) F_i(t) \le I, i = 1, 2, 3) if there exist matrices P > 0 and U and scalars ϵ_i > 0 such that K = U P^{-1} and the following LMI holds:
\begin{bmatrix} \Pi & A_\rho & (C P)^\top & P N_1^\top & U^\top N_2^\top & 0 \\ A_\rho^\top & -\gamma I & 0 & 0 & 0 & N_3^\top \\ C P & 0 & -\gamma I & 0 & 0 & 0 \\ N_1 P & 0 & 0 & -\epsilon_1 I & 0 & 0 \\ N_2 U & 0 & 0 & 0 & -\epsilon_2 I & 0 \\ 0 & N_3 & 0 & 0 & 0 & -\epsilon_3 I \end{bmatrix} < 0,
where
\Pi = (A_r P + B U)^\top + A_r P + B U + \epsilon_1 M_1 M_1^\top + \epsilon_2 M_2 M_2^\top + \epsilon_3 M_3 M_3^\top.
Proof. 
Using (13), we can state the condition for any admissible realization of the system:
M + M_{\Delta 1} + M_{\Delta 2} + M_{\Delta 3} < 0,
where
M_{\Delta 1} = \begin{bmatrix} (\Delta A_r P)^\top + \Delta A_r P & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} = \begin{bmatrix} (M_1 F_1 N_1 P)^\top + M_1 F_1 N_1 P & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix},
M_{\Delta 2} = \begin{bmatrix} (\Delta B U)^\top + \Delta B U & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} = \begin{bmatrix} (M_2 F_2 N_2 U)^\top + M_2 F_2 N_2 U & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix},
M_{\Delta 3} = \begin{bmatrix} 0 & \Delta A_\rho & 0 \\ \Delta A_\rho^\top & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & M_3 F_3 N_3 & 0 \\ (M_3 F_3 N_3)^\top & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.
Using Young’s relation, we can write the following relations:
M_{\Delta 1} \le \frac{1}{\epsilon_1} \begin{bmatrix} P N_1^\top \\ 0 \\ 0 \end{bmatrix} \begin{bmatrix} P N_1^\top \\ 0 \\ 0 \end{bmatrix}^\top + \epsilon_1 \begin{bmatrix} M_1 \\ 0 \\ 0 \end{bmatrix} \begin{bmatrix} M_1 \\ 0 \\ 0 \end{bmatrix}^\top,
M_{\Delta 2} \le \frac{1}{\epsilon_2} \begin{bmatrix} U^\top N_2^\top \\ 0 \\ 0 \end{bmatrix} \begin{bmatrix} U^\top N_2^\top \\ 0 \\ 0 \end{bmatrix}^\top + \epsilon_2 \begin{bmatrix} M_2 \\ 0 \\ 0 \end{bmatrix} \begin{bmatrix} M_2 \\ 0 \\ 0 \end{bmatrix}^\top,
M_{\Delta 3} \le \frac{1}{\epsilon_3} \begin{bmatrix} 0 \\ N_3^\top \\ 0 \end{bmatrix} \begin{bmatrix} 0 \\ N_3^\top \\ 0 \end{bmatrix}^\top + \epsilon_3 \begin{bmatrix} M_3 \\ 0 \\ 0 \end{bmatrix} \begin{bmatrix} M_3 \\ 0 \\ 0 \end{bmatrix}^\top.
Using these inequalities, we obtain a conservative approximation of the original condition. Applying the Schur complement to each Young term, we recover the LMI (15), which is linear in the decision variables. □
The optimal control problem is then cast as the following semidefinite program:
\underset{\gamma,\, \eta,\, \epsilon_i,\, P,\, U}{\text{minimize}} \;\; \alpha \gamma + \beta\, \frac{\operatorname{tr}(U^\top U)}{\eta}, \qquad \text{subject to} \;\; P > \eta I, \;\; \gamma > 0, \;\; \eta > 0, \;\; \epsilon_1 > 0, \;\; \epsilon_2 > 0, \;\; \epsilon_3 > 0, \;\; \text{LMI (15)}.
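The block LMI of Theorem 1 can be assembled numerically to verify its dimensions and symmetry before handing it to a solver; the sizes and matrices below are illustrative stand-ins, not the robot model:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, p, s = 4, 2, 4, 3           # state, input, output, static-state dims (illustrative)

# Illustrative stand-ins for the model, uncertainty, and design matrices
A_r, B, A_rho, C = (rng.standard_normal(sh) for sh in [(n, n), (n, m), (n, s), (p, n)])
M1, N1 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
M2, N2 = rng.standard_normal((n, m)), rng.standard_normal((m, m))
M3, N3 = rng.standard_normal((n, s)), rng.standard_normal((s, s))
P, U = np.eye(n), rng.standard_normal((m, n))   # P must be symmetric
gamma, e1, e2, e3 = 1.0, 0.5, 0.5, 0.5

X = A_r @ P + B @ U
Pi = X + X.T + e1 * M1 @ M1.T + e2 * M2 @ M2.T + e3 * M3 @ M3.T
Z = np.zeros
L = np.block([
    [Pi,        A_rho,              (C @ P).T,          P @ N1.T,       U.T @ N2.T,     Z((n, s))],
    [A_rho.T,   -gamma * np.eye(s), Z((s, p)),          Z((s, n)),      Z((s, m)),      N3.T],
    [C @ P,     Z((p, s)),          -gamma * np.eye(p), Z((p, n)),      Z((p, m)),      Z((p, s))],
    [N1 @ P,    Z((n, s)),          Z((n, p)),          -e1 * np.eye(n), Z((n, m)),     Z((n, s))],
    [N2 @ U,    Z((m, s)),          Z((m, p)),          Z((m, n)),      -e2 * np.eye(m), Z((m, s))],
    [Z((s, n)), N3,                 Z((s, p)),          Z((s, n)),      Z((s, m)),      -e3 * np.eye(s)],
])
assert np.allclose(L, L.T)        # an LMI matrix must be symmetric
```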
Figure 1 shows the control scheme for the proposed robust control method. It illustrates that the tuning method (21) based on Theorem 1 works for all admissible system realizations (corresponding to the multiplicative uncertainty staying within its bounds).

4.3. Upper Bound on the H∞ Gain

In this subsection, we make a remark regarding the computation of the upper bounds on the H∞ gain for systems with multiplicative uncertainties when the proposed method is used. The proposed method (21) allows us to design a control law that guarantees a bound on the H∞ gain for any admissible realization of the uncertain system (10) (a realization of system (10) corresponds to a specific choice of the uncertain matrices F_1, F_2, and F_3 within the prescribed bounds). Applying the designed control law and choosing an admissible realization of the matrices F_1, F_2, and F_3, we obtain a particular realization of the closed-loop system G = G(F_1, F_2, F_3).
In the robust control design phase, we compute the common upper bound γ on the H∞ gain, valid for all admissible realizations of the uncertain system. However, for any particular closed-loop system G_i, we can find a tighter upper bound γ_i through the direct application of the Bounded-Real Lemma. Since the robust control design method uses the Young relation approximation, while the direct application of the BRL does not, we can expect the bound γ_i to be notably tighter for all admissible realizations of the uncertain system.
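For a particular realization with no feedthrough, the H∞ norm (and hence the tightest γ_i) can also be computed without an SDP, via bisection on the imaginary-axis eigenvalue test of the associated Hamiltonian matrix; a minimal sketch with toy matrices (not the robot model):

```python
import numpy as np

def hinf_norm(A, B, C, tol=1e-6):
    """H-infinity norm of the stable system (A, B, C), D = 0, by bisection:
    ||G||_inf < g iff the Hamiltonian H(g) has no imaginary-axis eigenvalues."""
    assert np.linalg.eigvals(A).real.max() < 0          # A must be Hurwitz
    def gamma_is_upper_bound(g):
        H = np.block([[A, (B @ B.T) / g], [-(C.T @ C) / g, -A.T]])
        return np.min(np.abs(np.linalg.eigvals(H).real)) > 1e-8
    lo, hi = 1e-9, 1e9
    while hi - lo > tol * max(1.0, lo):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if gamma_is_upper_bound(mid) else (mid, hi)
    return hi

# Two-state closed-loop example: G(s) = 1 / ((s + 1)(s + 2)), peak gain 0.5 at s = 0
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
g = hinf_norm(A, B, C)
assert abs(g - 0.5) < 1e-3
```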

5. Numerical Study

We test the proposed algorithm on a flat quadruped robot in simulation. The robot is represented in Figure 2. It consists of a body with front and rear legs. Each leg includes two joints (a hip joint connecting to the robot’s body and a knee joint). The model parameters are given in Table 1; the model matrices are given in Appendix A.

5.1. Comparative Analysis Against Comparable Methods

In this subsection, we make a comparative analysis of the proposed method against the previously developed controllers. We identify two comparable methods for constrained mechanical systems which realize a stabilizing feedback control law, minimizing control error in the active states:
  • The constrained linear quadratic regulator (CLQR) proposed in [17,18]. The method takes advantage of projecting the robot dynamics into the null space of constraint equations, which allows the optimal control problem to be cast with quadratic cost as a Riccati equation;
  • The observer-based H∞ control proposed in [26], which directly compensates for (rather than attenuates, as in the proposed method) the effect of the static states based on their accurate estimation. The method requires a grid search to reduce the conservatism of the Young approximation.
The first experiment we perform is a comparative study of the two methods: the proposed H∞ method and the CLQR tuned with the following cost:
J_l = \int_0^{\infty} \left( r^\top Q_l\, r + u^\top R_l\, u \right) dt,
where the cost weights are Q_l = 2000 I_{6 \times 6} and R_l = 0.5 I_{3 \times 3}.
Figure 3 shows the error dynamics for the active states of the quadruped robot using the linear control law u = K r , where the gain is tuned by solving the optimal control problem (14). Figure 4 shows analogous error dynamics, but for the case where the CLQR is used.
As we can see from Figure 3, the application of the proposed method leads to convergence of the state error to a value close to zero. This is the direct result of the attenuation of the effect of the unmeasured static states on the robot dynamics. In contrast, the CLQR results in a noticeable steady-state error. In this experiment, the norm of the steady-state error for the proposed method is 0.0036, while, for the CLQR, it is 0.21; the norm of the unmeasured static states is 0.412. Comparing the graphs in Figure 3 and Figure 4 shows the practical importance of using a control scheme that attenuates or compensates for the static states.
In the next experiment, we compare the performance and computational expense of three methods: the proposed static feedback H∞ controller, the CLQR, and the observer-based H∞ control designed in [26]. We apply all three controllers to the same problem as in the previous experiment. To tune the controllers, we use the same semidefinite programming (SDP) solver, MOSEK [35], with a CVXPY wrapper. We compare the number of variables reported by the solver, the time to solution (the time it takes the solver to arrive at the optimal solution, as reported by the solver), and the steady-state error (measured after the transient modes have decayed, at the end of the simulation interval). The results of the experiment are presented in Table 2.
Table 2 demonstrates the following: (1) The proposed method, while slightly less accurate than the observer-based H∞ control, demonstrates clear qualitative superiority over the CLQR. (2) In terms of computation time, the proposed controller is 280 times faster than the observer-based H∞ control and 5 times slower than the CLQR. (3) In terms of the number of decision variables, the proposed method is nearly equivalent to the CLQR and nearly 6 times more compact than the observer-based H∞ control. The explosive growth in the computation time that we observed is directly linked to the computational complexity of primal-dual interior-point methods. The computational complexity with respect to the number of decision variables p that we observed here fits the O(p^4) model. The per-iteration complexity of primal-dual interior-point methods, when applied to control design problems, is often described as O(n^6) in terms of the size of the principal positive semidefinite matrix, in our case P ∈ S^n [36]. The number of iterations itself is a function of the problem size and the desired accuracy.
Note that the number of variables in the H design problem depends on the dimensions of the state space of the robot; the relation is close to quadratic, but different for different robots. For a full spatial model of a quadruped robot with fully actuated legs, 18 degrees of freedom, 12 mechanical constraints, and 12 active states, the proposed H design will result in a problem with 288 decision variables. Doubling the number of active states and tripling the number of static states (compared with the flat model) leads to more than quadruple the number of decision variables. This illustrates the importance of casting the control design criteria as compact LMI when possible.
We can summarize the findings thus: The proposed method has a large advantage in accuracy over the CLQR, without the prohibitive computational expense of the observer-based H∞ control, and it has a large advantage in computation time over the observer-based H∞ control, without the inaccuracy of the CLQR. The existing methods offer high accuracy with a very long computation time or low accuracy with very fast computations; thus, we proposed a method that is almost as accurate as the best alternative, with a computation time closer to that of the fastest available method.

5.2. Attenuation of the External Almost Periodic Signals

One of the critical problems in mobile robotics is the presence of external disturbances, which deteriorate the performance of mobile robots. In this subsection, we demonstrate that the proposed method can attenuate various types of disturbances, including almost periodic signals and signals with rich spectral content, such as square waves.
In the first experiment, we add to the input channels signals of the type d(t) = 0.1 sin(t) + 0.1 sin(2t) while keeping the static states intact, which causes the total input to be shifted, as shown in Figure 5. In Figure 6, we plot the active state error for this experiment. We can observe that the input is attenuated, and the error dynamics are dominated by the initial conditions (the transient response) rather than by the input.
In the second experiment, we add to the input channels square waves d(t) = 0.1 sign(sin(sin(0.2 t))) while also keeping the static states intact, as shown in Figure 7. Note that square waves have a spectrum with both low and high frequencies, which allows us to test the performance of the controller in challenging conditions. In Figure 8, we plot the active state error for this experiment. As in the previous experiment, the input is attenuated, and the error dynamics are dominated by the initial conditions, showing that the proposed controller is capable of handling both low- and high-frequency disturbances, as well as the effect of the unmeasured static states.
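The qualitative behavior can be reproduced on a toy closed-loop system; the sketch below uses a double integrator with a hypothetical stabilizing gain (not the designed H∞ gain) and a square-wave input disturbance:

```python
import numpy as np
from scipy.signal import StateSpace, lsim

# Toy closed-loop system: double integrator under state feedback u = K x,
# with a square-wave disturbance entering through the input channel.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[-6.0, -5.0]])              # hypothetical gain; closed-loop poles at -2, -3
A_cl = A + B @ K
sys = StateSpace(A_cl, B, np.eye(2), np.zeros((2, 1)))

t = np.linspace(0.0, 30.0, 3000)
d = 0.1 * np.sign(np.sin(0.2 * t))        # square-wave disturbance
_, x, _ = lsim(sys, d, t, X0=[0.5, 0.0])

# After the transient decays, the state error stays small despite the disturbance
assert np.abs(x[t > 10]).max() < 0.05
```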

5.3. Influence of Multiplicative Uncertainties on the Controller Performance

As stated in Section 4.3, there exists a gap between the common upper bound γ on the H∞ gain of the uncertain system, found in the control design phase, and the tighter upper bounds γ_i computed for particular realizations of that system. In this subsection, we demonstrate this gap for a quadruped robot while also showing that the upper bounds γ_i computed through the Bounded-Real Lemma have low variation and thus can be effectively approximated.
The experiment is set up as follows: We solve the optimal control problem (21) for the flat quadruped robot, with the model parameters given in the previous subsection (the exact values of the model matrices are given in the Appendix A). We find the optimal control law and the corresponding common upper bound γ . Then, we generate N specific realizations of the uncertain model (10) by choosing specific randomly generated values for the matrices F 1 , F 2 , and F 3 with a spectral norm less than 1. For each system realization, we compute the particular upper bounds γ i through the Bounded-Real Lemma.
Figure 9 shows the distribution of the particular upper bounds γ i as a bar graph. We can observe that the distribution has a well-defined center, and it trails off rapidly; moreover, the entire distribution is situated in the range [ 0.02167 , 0.02178 ] , and the difference in the upper bounds affects only the third significant figure. The upper bound γ found during control design is equal to 0.092. We can draw the following conclusions: (1) the common performance parameter significantly overestimates the upper bound on the H gain of the particular realizations of that system; (2) the tighter upper bounds could be effectively approximated.
We can demonstrate that the distribution of the upper bounds γ i depends on the bounds on the matrices F 1 , F 2 , and F 3 . In the next experiment, we generate particular realizations for the uncertain system (10) using the following common bounds on the uncertain matrices:
\|F_1\| \le \kappa, \qquad \|F_2\| \le \kappa, \qquad \|F_3\| \le \kappa,
where κ is the uncertainty scaling factor, a positive scalar. Figure 10 shows a scatter plot of the upper bounds γ i for particular realizations generated for a given value of κ . All experiments are performed with the same control law, designed for κ = 1 .
We can observe that an increase in the uncertainty scaling factor leads to a linear increase in the spread of the upper bounds γ i we observe. We can use this relation to assess the gap between the expected and actual performance due to the overestimation of uncertainty.
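Admissible realizations with ‖F_i‖ ≤ κ can be sampled by rescaling random matrices by their spectral norm; a small sketch of such a sampler (the function name and sizes are illustrative):

```python
import numpy as np

def sample_admissible_F(shape, kappa, rng):
    """Draw a random matrix rescaled so its spectral norm is at most kappa."""
    F = rng.standard_normal(shape)
    norm = np.linalg.norm(F, 2)               # largest singular value
    scale = rng.uniform(0.0, kappa)           # random norm within the bound
    return F * (scale / norm)

rng = np.random.default_rng(3)
kappa = 1.0
Fs = [sample_admissible_F((3, 3), kappa, rng) for _ in range(100)]
assert all(np.linalg.norm(F, 2) <= kappa + 1e-12 for F in Fs)
```

Each sampled triple (F_1, F_2, F_3) then yields one closed-loop realization for which a particular bound γ_i can be computed via the BRL.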

5.4. Study of the Effect of Regularization

Our second experiment aims at exposing the effect the regularization hyperparameters have on the solution of the optimal control problem (14). Specifically, we study the effects of the parameter α , which promotes more aggressive disturbance attenuation, and the parameter β , which punishes excessive control action. Figure 11 shows two graphs: the dependence of the Frobenius norm of the control gain on the value of α and the dependence of the attenuation bound γ on α . In both cases, β is fixed at 0.001. Figure 12 shows the same dependencies, but for β = 0.01 .
Analyzing Figure 11 and Figure 12, we find that the parameter α acts as a slider, allowing us to choose the desired trade-off between higher performance (in terms of the attenuation bound γ) and a lower controller gain ‖K‖₂. On the other hand, the parameter β controls the sharpness (the sensitivity) of this slider: the larger β is, the more sensitive the optimal control problem is to the choice of the slider variable α.

6. Discussion

In this section, we discuss the results presented in the previous sections and give practical recommendations for the use of the proposed control method.

6.1. Implementation and Complexity Analysis

Hyperparameter tuning. The proposed method features two important hyperparameters: α and β . While preferable values can be chosen using the analysis presented in Section 5.4, we can give general recommendations here. We recommend fixing β at a low value (in our case, β = 0.001 ) and choosing α based on the desired upper bound on the H∞ gain of the system. The relation between α and the attenuation bound γ for a fixed β is shown in Figure 11.
Computational complexity. In the previous section, we demonstrated that the proposed algorithm offers a computational advantage over an observer-based method of comparable accuracy. This is a direct result of a more compact LMI (with fewer decision variables), as the primal-dual interior-point methods used in modern solvers scale poorly with the number of decision variables [36]. The compact size of the proposed LMI makes the method computationally affordable for robots with a large number of states, such as humanoid robots. The computational expense of the method also directly limits the maximal frequency at which the gains can be re-tuned for the robot: a lighter computational load allows the controller gains to be updated rapidly when the state moves away from the linearization point.
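As a rough illustration of why the static formulation yields a smaller program, consider a back-of-the-envelope tally of decision variables. The structure assumed here is hypothetical; the exact counts for problem (21) and for the observer-based design in [26] include additional slack variables, as the figures in Table 2 reflect.

```python
# Back-of-the-envelope decision-variable tally for SDP-based designs.
# Illustrative only: actual formulations add slack matrices and
# scalings, so these numbers differ from the counts in Table 2.

def static_feedback_vars(n: int, m: int) -> int:
    sym_P = n * (n + 1) // 2   # symmetric Lyapunov matrix P = P^T
    gain_Y = m * n             # linearizing change of variables Y = K P
    return sym_P + gain_Y + 1  # + the performance bound gamma

def observer_based_vars(n: int, m: int, p: int) -> int:
    # Augmenting the plant with the observer-error dynamics doubles
    # the state, and both a feedback and an observer gain are sought.
    n_aug = 2 * n
    return n_aug * (n_aug + 1) // 2 + m * n + n * p + 1

n, m, p = 6, 4, 6  # states, inputs, measured outputs (example sizes)
print(static_feedback_vars(n, m))    # 46
print(observer_based_vars(n, m, p))  # 139
```

Even in this simplified tally, doubling the state for the observer roughly triples the variable count, and interior-point iteration cost grows faster than linearly in that count.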
There are a number of ways to integrate the proposed controller with the prevalent walking robot control pipelines. For instance, it could be used as a replacement for the whole-body impulse controller in the pipeline proposed in [7].

6.2. Robustness

The robustness of the proposed method to norm-bounded uncertainties was directly addressed in the proof of Theorem 1. Other types of disturbance, which we did not address directly, include time delays, unmodeled dynamics, and the deviation from the linearization point.
Robustness to additive disturbances. The disturbance-attenuation property inherent to H∞ controllers makes the method robust to additive disturbances, including almost periodic signals, as shown in the previous section. A key feature of H∞ control is that it bounds the input amplification at all frequencies. We showed in the previous section that inputs with an instantaneous change (representing signals with high-frequency components) can be handled directly by the proposed method. Such signals may arise from a change in contact or from collisions with the environment. Other types of robot–environment interaction may produce aperiodic disturbance inputs, which, as we demonstrated, can also be handled directly.
Robustness to time delays. The stability margins with respect to time delays, especially time delays in the feedback channel of the controller, can be estimated using the small delay approximation. In modern walking robots, the update frequency in the feedback channels is often much higher than 1 kHz, alleviating time delay problems.
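The small delay estimate above can be checked numerically: a pure feedback delay τ adds a phase lag of ω τ, so the loop remains stable roughly while τ < PM / ω c , where ω c is the gain-crossover frequency and PM the phase margin there. A minimal sketch, using an illustrative loop transfer function rather than the paper's closed loop:

```python
import numpy as np

def delay_margin(L, w_min=1e-3, w_max=1e3, n=200_000):
    """Estimate the delay margin of a SISO loop transfer function L(jw).

    Small-delay approximation: a pure delay tau contributes phase
    -w*tau, so stability is preserved while tau < PM / w_c.
    """
    w = np.logspace(np.log10(w_min), np.log10(w_max), n)
    mag = np.abs(L(1j * w))
    i = np.argmin(np.abs(mag - 1.0))      # gain-crossover frequency
    wc = w[i]
    pm = np.pi + np.angle(L(1j * wc))     # phase margin in radians
    return pm / wc

# Example loop: an integrator with gain 10, L(s) = 10 / s.
# Crossover at wc = 10 rad/s, PM = 90 deg, so tau_max = (pi/2)/10.
tau = delay_margin(lambda s: 10.0 / s)
print(tau)  # approximately 0.157 s (= pi/20)
```

For this example, a 1 kHz feedback channel (a 1 ms worst-case delay) sits two orders of magnitude below the estimated margin, which is the sense in which high update rates alleviate time delay problems.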
Robustness to unmodeled dynamics. Unmodeled dynamics and the deviation from the linearization point, if bounded, can both be conservatively approximated by norm-bounded uncertainties, allowing the proposed method to account for this type of uncertainty directly.
H∞ gain bounds. We demonstrated that the common performance parameter γ , found as part of the robust control design, significantly overestimates the upper bound on the H∞ gain of particular realizations of the system. This can be attributed to the use of the Young's relation approximation in deriving the robust control design conditions. If a relatively accurate upper bound on the H∞ gain of the system is required, it can be computed from the nominal model of the system through the BRL.
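For a fixed realization, such a bound can also be estimated numerically. Below is a minimal sketch of a frequency-sweep estimate of the H∞ norm; this yields a lower bound that converges to the true norm as the grid is refined, whereas a certified upper bound for the realization would come from solving the BRL LMI with an SDP solver.

```python
import numpy as np

def hinf_norm_sweep(A, B, C, D, w_max=1e3, n=20_000):
    """Approximate ||C (sI - A)^{-1} B + D||_Hinf by a frequency sweep.

    Evaluates the largest singular value of the transfer matrix on a
    logarithmic frequency grid (plus w = 0) and returns the peak.
    """
    I = np.eye(A.shape[0])
    ws = np.concatenate(([0.0], np.logspace(-3, np.log10(w_max), n)))
    peak = 0.0
    for w in ws:
        G = C @ np.linalg.solve(1j * w * I - A, B) + D
        peak = max(peak, np.linalg.svd(G, compute_uv=False)[0])
    return peak

# Sanity check on a first-order lag G(s) = 1/(s + 1): Hinf norm = 1.
A = np.array([[-1.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]);  D = np.array([[0.0]])
print(hinf_norm_sweep(A, B, C, D))  # approximately 1.0
```

In our experiments, the same role is played by the BRL evaluated at each sampled realization, which produces the per-realization bounds γ i shown in Figure 9.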

7. Conclusions

In this paper, we presented a robust control design method for mechanical systems with contact, especially suited to walking robots and other systems where the full state is not measured directly. Our method overcomes the effect of the static states (states that are fixed by the constraints imposed on the system), which, if unmeasured, can lead to a steady-state error and thus degrade the performance of the robot. Unlike previous works, which relied on estimating the static states in order to compensate for their action through the feed-forward component of the control law, we proposed using an H∞ controller to attenuate their effect. This allowed us to propose a robust control design capable of handling bounded multiplicative uncertainties in the state, control, and disturbance matrices while guaranteeing an upper bound on the H∞ gain for the uncertain system.
Handling multiplicative uncertainties leads to a bilinear matrix inequality (BMI) reformulation of the Bounded-Real Lemma. To handle the bilinear terms, we used the Young's relation approximation. We found that, for our formulation, the approximation can be tightened automatically by including the Young's relation scaling parameter as a variable of the optimization problem while maintaining linearity (the final optimization problem is an SDP). This allows us to avoid the grid search often required in Young's relation-based methods, radically reducing the computational cost: grid search requires solving N instances of an SDP, whereas our method requires solving a single one. Casting the optimal control problem as a static feedback design resulted in smaller LMIs and much faster computation times compared with the observer-based alternative.
We also established that the proposed formulation overestimates the upper bounds on the H∞ gain: the solution to the robust control design problem for a system with bounded multiplicative uncertainties provides a conservative upper bound on the closed-loop H∞ gain. This bound can be tightened by solving the Bounded-Real Lemma problem for particular instances of the uncertain model (fixing the uncertain matrices by drawing them from an appropriate distribution). Our numerical experiments showed that such tightened upper bounds follow a clear pattern and can be approximated without losing robustness guarantees.
Further work could be directed towards building a problem-tailored SDP solver for the proposed control design method, with the aim of radically increasing its speed and thus addressing the high computational load incurred when the controller gain needs to be re-tuned frequently. Combining highly efficient custom solvers with efficient and robust controllers may represent a new direction for walking robotics.

Author Contributions

Conceptualization, A.A. and S.S.; methodology, A.A. and S.S.; software, A.A. and S.S.; validation, A.A.; formal analysis, A.A. and S.S.; investigation, A.A. and S.S.; writing—original draft preparation, A.A. and S.S.; writing—review and editing, A.A. and S.S.; visualization, A.A.; supervision, S.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Economic Development of the Russian Federation (agreement no. 139-10-2025-034 dd. 19.06.2025, IGK 000000C313925P4D0002).

Data Availability Statement

Data will be made available upon request from the corresponding authors.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MPC   Model Predictive Control
MPPI  Model Predictive Path Integral
LMI   Linear Matrix Inequality
DAE   Differential Algebraic Equation
BRL   Bounded-Real Lemma
CLQR  Constrained Linear Quadratic Regulator
SDP   Semidefinite Programming
BMI   Bilinear Matrix Inequality
OCP   Optimal Control Problem

Appendix A

This appendix shows the numerical values of the linear model used in this paper.
A r = 1.98 1.44 0.81 0.35 0.25 1.45 3.57 3.35 0.69 0.51 1.64 1.26 6.26 7.70 4.31 0.06 2.58 1.23 2.13 6.26 3.78 1.56 0.40 0.63 0.01 0.54 0.11 1.70 2.05 0.49 8.38 3.94 0.68 0.88 1.39 3.79
A ρ T = 5.87 × 10 6 1.57 × 10 5 7.19 × 10 5 5.37 × 10 5 9.16 × 10 5 5.51 × 10 5 2.13 3.75 9.91 9.05 1.69 3.40 0.18 0.48 0.54 0.36 0.41 1.39 2.40 × 10 8 6.68 × 10 8 5.89 × 10 7 7.34 × 10 7 3.82 × 10 7 6.71 × 10 7 1.38 2.05 3.26 1.66 2.32 5.52 5.64 × 10 6 2.02 × 10 5 9.21 × 10 5 7.35 × 10 5 1.05 × 10 4 6.76 × 10 5 2.44 3.95 11.46 11.84 3.07 2.31 7.14 × 10 6 1.10 × 10 5 6.05 × 10 5 4.79 × 10 5 8.43 × 10 5 5.90 × 10 5
B = 0.13 0.08 0.03 0.44 0.67 0.02 0.48 0.81 1.49 0.45 1.39 0.68 0.47 0.51 0.61 0.70 1.37 0.59 1.78 0.20 0.66 0.71 0.21 2.96
C = I 6 × 6
N 1 = 0.10 0.94 0.23 0.06 0.10 0.21 0.97 0.04 0.22 0.05 0.07 0.09 0.13 0.06 0.49 0.60 0.41 0.46
M 1 = 1.04 × 10 3 2.56 × 10 4 7.12 × 10 5 4.21 × 10 4 8.17 × 10 4 5.57 × 10 5 4.98 × 10 3 1.24 × 10 3 2.07 × 10 4 4.60 × 10 3 1.84 × 10 4 3.41 × 10 4 8.31 × 10 3 1.01 × 10 3 2.89 × 10 4 7.54 × 10 3 1.84 × 10 3 1.95 × 10 5
N 2 = 0.52 0.07 0.85 0.09 0.15 0.17 0.20 0.95
M 2 = 1.11 × 10 5 9.13 × 10 6 5.18 × 10 5 1.16 × 10 5 1.25 × 10 4 3.98 × 10 5 1.57 × 10 8 5.07 × 10 5 2.46 × 10 4 2.22 × 10 5 3.40 × 10 6 3.33 × 10 7
N 3 T = 2.31 × 10 3 8.96 × 10 3 0.18 0.70 0.08 0.49 1.42 × 10 2 0.04 0.68 3.61 × 10 3 4.07 × 10 3 0.02 6.26 × 10 3 0.99 0.10 6.65 × 10 4 1.00 × 10 3 0.01 0.71 0.07 0.50 4.39 × 10 3 1.46 × 10 3 0.08
M 3 = 1.16 × 10 3 2.34 × 10 4 8.71 × 10 5 1.54 × 10 3 2.49 × 10 3 1.27 × 10 4 8.16 × 10 3 1.45 × 10 3 2.80 × 10 4 5.74 × 10 3 6.09 × 10 3 2.49 × 10 4 1.29 × 10 2 6.73 × 10 3 1.59 × 10 4 8.01 × 10 3 8.35 × 10 3 2.21 × 10 4

References

  1. Li, Q.; Cicirelli, F.; Vinci, A.; Guerrieri, A.; Qi, W.; Fortino, G. Quadruped Robots: Bridging Mechanical Design, Control, and Applications. Robotics 2025, 14, 57. [Google Scholar] [CrossRef]
  2. Gehring, C.; Fankhauser, P.; Isler, L.; Diethelm, R.; Bachmann, S.; Potz, M.; Gerstenberg, L.; Hutter, M. Anymal in the Field: Solving Industrial Inspection of an Offshore HVDC Platform with a Quadrupedal Robot. In Field and Service Robotics: Results of the 12th International Conference; Springer: Singapore, 2021; pp. 247–260. [Google Scholar]
  3. Rodríguez-Lera, F.; González-Santamarta, M.; Orden, J.; Fernández-Llamas, C.; Matellán-Olivera, V.; Sánchez-González, L. Lessons Learned in Quadruped Deployment in Livestock Farming. arXiv 2024, arXiv:2404.16008. [Google Scholar] [CrossRef]
  4. Cai, S.; Ram, A.; Gou, Z.; Shaikh, M.A.W.; Chen, Y.A.; Wan, Y.; Hara, K.; Zhao, S.; Hsu, D. Navigating Real-World Challenges: A Quadruped Robot Guiding System for Visually Impaired People in Diverse Environments. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (CHI ’24); Association for Computing Machinery: New York, NY, USA, 2024. [Google Scholar] [CrossRef]
  5. Hu, X.; Sun, Q.; He, B.; Liu, H.; Zhang, X.; Lu, C.; Zhong, J. Impact of Static Friction on Sim2Real in Robotic Reinforcement Learning. In Proceedings of the 2025 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hangzhou, China, 19–25 October 2025; pp. 17107–17114. [Google Scholar] [CrossRef]
  6. Sobanbabu, N.; He, G.; He, T.; Yang, Y.; Shi, G. Sampling-based System Identification with Active Exploration for Legged Sim2Real Learning. arXiv 2025, arXiv:2505.14266. [Google Scholar]
  7. Kim, D.; Carlo, J.D.; Katz, B.; Bledt, G.; Kim, S. Highly Dynamic Quadruped Locomotion via Whole-Body Impulse Control and Model Predictive Control. arXiv 2019, arXiv:1909.06586. [Google Scholar] [CrossRef]
  8. Kim, D.H.; Cho, J.; Park, J.H. Model Predictive Impedance Control and Gait Optimization for High-Speed Quadrupedal Running. Appl. Sci. 2025, 15, 8861. [Google Scholar] [CrossRef]
  9. Xue, H.; Pan, C.; Yi, Z.; Qu, G.; Shi, G. Full-Order Sampling-Based MPC for Torque-Level Locomotion Control via Diffusion-Style Annealing. In Proceedings of the 2025 IEEE International Conference on Robotics and Automation (ICRA), Atlanta, GA, USA, 19–23 May 2025; pp. 4974–4981. [Google Scholar] [CrossRef]
  10. Alvarez-Padilla, J.; Zhang, J.Z.; Kwok, S.; Dolan, J.M.; Manchester, Z. Real-Time Whole-Body Control of Legged Robots with Model-Predictive Path Integral Control. In Proceedings of the 2025 IEEE International Conference on Robotics and Automation (ICRA), Atlanta, GA, USA, 19–23 May 2025; pp. 14721–14727. [Google Scholar] [CrossRef]
  11. Chen, S.; Zhang, B.; Mueller, M.W.; Rai, A.; Sreenath, K. Learning Torque Control for Quadrupedal Locomotion. In Proceedings of the 2023 IEEE-RAS 22nd International Conference on Humanoid Robots (Humanoids), Austin, TX, USA, 12–14 December 2023; pp. 1–8. [Google Scholar] [CrossRef]
  12. Hoeller, D.; Rudin, N.; Sako, D.; Hutter, M. Anymal parkour: Learning agile navigation for quadrupedal robots. Sci. Robot. 2024, 9, eadi7566. [Google Scholar] [CrossRef] [PubMed]
  13. Sleiman, J.P.; Mittal, M.; Hutter, M. Guided Reinforcement Learning for Robust Multi-Contact Loco-Manipulation. In Proceedings of the 8th Annual Conference on Robot Learning, Munich, Germany, 6–9 November 2024. [Google Scholar]
  14. Li, Z.; Cheng, X.; Peng, X.B.; Abbeel, P.; Levine, S.; Berseth, G.; Sreenath, K. Reinforcement Learning for Robust Parameterized Locomotion Control of Bipedal Robots. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 2811–2817. [Google Scholar] [CrossRef]
  15. Feng, G.; Zhang, H.; Li, Z.; Peng, X.B.; Basireddy, B.; Yue, L.; Song, Z.; Yang, L.; Liu, Y.; Sreenath, K.; et al. GenLoco: Generalized Locomotion Controllers for Quadrupedal Robots. In Proceedings of the 6th Conference on Robot Learning, Auckland, New Zealand, 14–18 December 2023; Proceedings of Machine Learning Research. Volume 205, pp. 1893–1903. [Google Scholar]
  16. Long, J.; Yu, W.; Li, Q.; Wang, Z.; Lin, D.; Pang, J. Learning H-Infinity Locomotion Control. arXiv 2024, arXiv:2404.14405. [Google Scholar] [CrossRef]
  17. Mason, S.; Righetti, L.; Schaal, S. Full dynamics LQR control of a humanoid robot: An experimental study on balancing and squatting. In Proceedings of the 2014 IEEE-RAS International Conference on Humanoid Robots, Madrid, Spain, 18–20 November 2014; pp. 374–379. [Google Scholar] [CrossRef]
  18. Mason, S.; Rotella, N.; Schaal, S.; Righetti, L. Balancing and walking using full dynamics LQR control with contact constraints. In Proceedings of the 2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), Cancun, Mexico, 15–17 November 2016; pp. 63–68. [Google Scholar] [CrossRef]
  19. Aghili, F. A unified approach for inverse and direct dynamics of constrained multibody systems based on linear projection operator: Applications to control and simulation. IEEE Trans. Robot. 2005, 21, 834–849. [Google Scholar] [CrossRef]
  20. Mistry, M.; Buchli, J.; Schaal, S. Inverse dynamics control of floating base systems using orthogonal decomposition. In Proceedings of the 2010 IEEE International Conference on Robotics And Automation, Anchorage, AK, USA, 3–7 May 2010; pp. 3406–3412. [Google Scholar] [CrossRef]
  21. Righetti, L.; Buchli, J.; Mistry, M.; Schaal, S. Inverse dynamics control of floating-base robots with external constraints: A unified view. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 1085–1090. [Google Scholar] [CrossRef]
  22. Scherer, C.W. The Riccati Inequality and State-Space H-Optimal Control. Ph.D. Thesis, University of Wurzburg, Wurzburg, Germany, 1990. [Google Scholar]
  23. Du, Z.; Chen, C.; Li, C.; Yang, X.; Li, J. Fault-Tolerant H-Infinity Stabilization for Networked Cascade Control Systems With Novel Adaptive Event-Triggered Mechanism. IEEE Trans. Autom. Sci. Eng. 2025, 22, 22597–22608. [Google Scholar] [CrossRef]
  24. Zhang, R.; Wang, Z.; Keogh, P. H-infinity optimised control of external inertial actuators for higher precision robotic machining. Int. J. Comput. Integr. Manuf. 2022, 35, 129–144. [Google Scholar] [CrossRef]
  25. Hui, N.; Guo, Y.; Han, X.; Wu, B. Robust H Dual Cascade MPC-Based Attitude Control Study of a Quadcopter UAV. Actuators 2024, 13, 392. [Google Scholar] [CrossRef]
  26. Aldaher, A.; Savin, S. H Control for Systems with Mechanical Constraints Based on Orthogonal Decomposition. Robotics 2025, 14, 64. [Google Scholar] [CrossRef]
  27. Savin, S.; Balakhnov, O.; Khusainov, R.; Klimchik, A. State Observer for Linear Systems with Explicit Constraints: Orthogonal Decomposition Method. Sensors 2021, 21, 6312. [Google Scholar] [CrossRef] [PubMed]
  28. Mäkilä, P.M. H optimization and optimal rejection of persistent disturbances. Automatica 1990, 26, 617–618. [Google Scholar] [CrossRef]
  29. Cooke, R.L. Almost-periodic functions. Am. Math. Mon. 1981, 88, 515–526. [Google Scholar] [CrossRef]
  30. Petersen, I.R. A stabilization algorithm for a class of uncertain linear systems. Syst. Control Lett. 1987, 8, 351–357. [Google Scholar] [CrossRef]
  31. Kheloufi, H.; Zemouche, A.; Bedouhene, F.; Boutayeb, M. On LMI conditions to design observer-based controllers for linear systems with parameter uncertainties. Automatica 2013, 49, 3700–3704. [Google Scholar] [CrossRef]
  32. Amerio, L.; Prouse, G. Almost-Periodic Functions and Functional Equations; Springer: New York, NY, USA, 1971. [Google Scholar]
  33. Katznelson, Y. An Introduction to Harmonic Analysis; Cambridge University Press: Cambridge, UK, 1976. [Google Scholar]
  34. Boyd, S.; Ghaoui, L.E.; Feron, E.; Balakrishnan, V. Linear Matrix Inequalities in System and Control Theory; Society for Industrial and Applied Mathematics (SIAM): Philadelphia, PA, USA, 1994. [Google Scholar]
  35. MOSEK ApS. The MOSEK Python Fusion API Manual, version 11.0; MOSEK ApS: København, Denmark, 2026. [Google Scholar]
  36. Vandenberghe, L.; Balakrishnan, V.R.; Wallin, R.; Hansson, A.; Roh, T. Interior-point algorithms for semidefinite programming problems derived from the KYP lemma. In Positive Polynomials in Control; Springer: Berlin/Heidelberg, Germany, 2005; pp. 195–238. [Google Scholar]
Figure 1. The control diagram for the proposed robust control method. In the diagram, Controller represents the feedback control u = K r with K tuned by solving the problem (21); Model uncertainty represents the action of the norm-bounded multiplicative uncertainty blocks; Nominal plant is represented by the system (12).
Figure 2. A diagram of the flat quadruped robot. The robot consists of a body and two-link legs. Each link has length l 1 ; the distance between the attachment joints for the front and rear legs is l 2 . The joint angles describing the hip joints are Θ 1 and Θ 2 , the knee joints Θ 3 and Θ 4 .
Figure 3. Dynamics of the active state errors (the difference between the desired and actual values) for a quadruped under the proposed control design method. The model matrices are assumed to be known exactly.
Figure 4. Dynamics of the active state errors (the difference between the desired and actual values) for a quadruped under the CLQR control law. The model matrices are assumed to be known exactly.
Figure 5. Almost periodic signal disturbance input added to the input channel of the system. The input signal used in this experiment is d ( t ) = 0.1 sin ( t ) + 0.1 sin ( 2 t ) .
Figure 6. Dynamics of the active state errors in case of almost periodic additive external disturbance. The input signal used in this experiment is d ( t ) = 0.1 sin ( t ) + 0.1 sin ( 2 t ) .
Figure 7. Almost periodic signal disturbance input added to the static states. The input signal used in this case is d ( t ) = 0.1 sign ( sin ( sin ( 0.2 t ) ) ) .
Figure 8. Dynamics of the active state errors in case of almost periodic additive external disturbance. The input signal used in this case is d ( t ) = 0.1 sign ( sin ( sin ( 0.2 t ) ) ) .
Figure 9. Distribution of the upper bounds γ i of the H∞ norms for particular realizations of the system with multiplicative uncertainties using a common robust control law.
Figure 10. A scatter plot of the upper bounds γ i for particular realizations of the uncertain system (10) generated for a given value of κ , which bounds the range of admissible realizations of the uncertain systems via the expression (23). The vertical axis shows the upper bound of the H∞ norm, computed via the Bounded-Real Lemma; the horizontal axis is the value of κ with which the system was generated.
Figure 11. The effect of the regularization hyperparameters on the solution of the optimal control problem (14). In blue, the Frobenius norm of control gain | | K | | 2 as a function of α ; in red, the attenuation bound γ as a function of α ; the hyperparameter β is fixed at β = 0.001 . The horizontal axis is on a log scale, and the vertical axis for | | K | | 2 (in blue) is on a log scale.
Figure 12. The effect of the regularization hyperparameters on the solution of the optimal control problem (14). In blue, the Frobenius norm of control gain | | K | | 2 as a function of α ; in red, the attenuation bound γ as a function of α ; the hyperparameter β is fixed at β = 0.01 . The horizontal axis is on a log scale, and the vertical axis for | | K | | 2 (in blue) is on a log scale.
Table 1. Mechanical parameters of the flat quadruped robot model used in the simulation experiments.
Parameter    | Symbol | Value
Link length  | l 1    | 0.3 m
Body length  | l 2    | 0.3 m
Link mass    | m 1    | 3 kg
Body mass    | m 2    | 10 kg
Table 2. Comparative analysis of the proposed controller against benchmarks.
Method                  | Steady-State Error | Number of Variables | Computation Time
The proposed method     | 0.0036             | 62                  | 0.103 s
CLQR [17]               | 0.2094             | 60                  | 0.022 s
Observer-based H∞ [26]  | 0.0007             | 3692                | 8.207 s
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Aldaher, A.; Savin, S. H Control for Walking Robots Robust to the Bounded Uncertainties in the State and the Model. Robotics 2026, 15, 67. https://doi.org/10.3390/robotics15040067
