Article

H∞ Control for Systems with Mechanical Constraints Based on Orthogonal Decomposition

Institute of Robotics and Computer Vision, Innopolis University, Innopolis 420500, Russia
*
Author to whom correspondence should be addressed.
Robotics 2025, 14(5), 64; https://doi.org/10.3390/robotics14050064
Submission received: 27 March 2025 / Revised: 9 May 2025 / Accepted: 14 May 2025 / Published: 16 May 2025
(This article belongs to the Section Sensors and Control in Robotics)

Abstract

In this paper, we study H∞ control for systems with explicit mechanical constraints and a lack of state information, such as walking robots. This paper proposes an H∞ control design scheme based on solving an optimization problem with linear matrix inequality constraints. Our method is based on the orthogonal decomposition of the state variables and the use of two linear controllers and a Luenberger observer, tuned to achieve the desired properties of the closed-loop system. The method takes into account static linear additive disturbance, which appears due to the uncertainties associated with the mechanical constraints. We propose a dynamics linearization procedure for systems with mechanical constraints, taking into account the inevitable lack of information about the environment; this procedure allows a nonlinear system to be transformed into a form suitable for the application of the proposed control design method. The method is tested on a constrained underactuated three-link robot and a flat quadruped robot, showing the desired behavior in both cases.

1. Introduction

Legged robots have always been a focus of robotics research. Rapid progress over the last several years has led to the first instances of mass production of legged robots, as well as to the first cases of their practical use [1]. This in turn opens new questions, such as the reliability and robustness of legged robots and their control systems. In particular, one may pose the following questions: is it possible to design a control system for a legged robot that would be robust with respect to external disturbances, and is it possible to prove this property? In this paper, we aim to give a particular answer to these questions.
The progress achieved in legged robotics has been partially due to a successful adaptation of well-known control methods, such as Model-Predictive Control (MPC) [2,3], Proportional-Differential (PD) control, Extended Kalman Filters (EKFs) [4], and others. These methods in their standard formulations are directly applicable to systems like industrial robot arms; however, they require modifications to be used for legged robots. Legged robots acquire and break mechanical contact with the supporting surface as a part of their motion, leading to discontinuities in their models. Between discontinuous events (acquisition or breaking of contact), the dynamics of legged robots are well described via differential-algebraic equations (DAEs), where the mechanical constraints are explicitly stated. From the point of view of state estimation, it presents a qualitative difference from models with smooth dynamics described via ordinary differential equations (ODEs).
There are well-known robust control design methods: disturbance observer, adaptive control, quadratic stability, S-procedure-based robust control, etc. [5,6,7,8]. H∞ control is an especially promising type of robust control, as can be seen in recent advancements in the area [9,10,11,12]. In this work, we focus on H∞ control. Effective H∞ control design methods require the plant to be modeled as a system of linear ODEs, while legged robots are modeled with nonlinear DAEs.
Numerous methods can be used to transform a legged robot model into an ODE; these methods usually rely on differentiating constraint equations and introducing a change of variables [13,14,15]. Among these is the orthogonal decomposition method, which consists of projecting dynamics onto the null space of the constraints equations, while decomposing the state variables into the sets of static and independent states. This results in a system of independent ODEs with independent variables. In this paper, we discuss how this process can be combined with linearization to produce an accurate linear model. The resulting linear model can be used to derive H control.
For practical legged robots, the full-state information is not available (some of the floating base states are usually not measured), requiring the use of observers or equivalent constructions. In this paper, we use the Luenberger state observer, adapting it to the model produced by the orthogonal decomposition method. To tune the controller and observer gains and ensure H∞ gain properties, we derive and solve a linear matrix inequality (LMI) condition. The systems where the controller and observer are tuned simultaneously produce bi-linear matrix inequalities (BMIs); to transform these into an LMI, we use Young’s relation, following an earlier example proposed in [5].
In this paper, we propose a control design method for constrained mechanical systems, finding the optimal gains for a linear controller and Luenberger observer in terms of the H∞ norm of the system, minimizing the system’s reaction to an external disturbance in terms of its L₂ norm. To our knowledge, this is the first H∞ control method designed to handle constrained mechanical systems with unmeasured static states. We present an LMI condition that guarantees the upper bound on the H∞ gain and that can be used directly in the optimal control design formulated as a semidefinite program (SDP). Our method allows us to account for additive terms in the model, related to the uncertainty in the constraints (or equivalently, the uncertainty in our knowledge about the environment in which the robot operates). To this end, we give a comprehensive description of a linearization method for dynamical systems with mechanical constraints, producing a linear model in the tangent space to the constraint manifold. We verify the results via numerical experiments on an underactuated three-link robot with a constrained end-effector and a flat quadruped model.
The remainder of the paper is organized as follows. In Section 2, we review the state of the art and discuss closely related works. Section 3 introduces the proposed linearization method for constrained systems. In Section 4, we provide definitions and formulas useful in the remaining sections. The mathematical formulation of the problem solved in this paper is given in Section 5, and in Section 6, we describe and prove the proposed solution to that problem. The last section provides a description of numeric experiments verifying the method.

2. State of the Art

The control system of a modern walking robot usually consists of multiple layers. The configuration and type of these layers vary significantly. A typical example would include a high-level trajectory and step planner, an MPC that runs over the pre-determined foothold positions, a QP-based feedback controller, and an array of motor-level PID controllers. This is augmented with an EKF-based state estimator and a simultaneous localization and mapping (SLAM) algorithm [2,16].
The control systems of the type described above use several simplified dynamical models of the robot. These include a point-mass model, a single rigid body (SRB) model (often with additional assumptions and simplifications), and a linear model that relates generalized accelerations to reaction forces and control inputs (but not the state) based on the full dynamical model evaluated at a single moment in time. The first of these is often used in the design of MPC, the second in EKF, and the last in QP-based feedback controllers [4,17].
Our goal is to propose a control system that guarantees robustness with respect to bounded-energy exogenous inputs. This motivates the decision to use the linearization of the full dynamics of the robot as our model and to simultaneously design both the controller and the observer, guaranteeing that the system’s H∞ gain is below the desired threshold. The linear model used to tune the proposed controller is required to be the optimal local linear approximation of the robot dynamics on the tangent space to the constraint manifold.
Previously, successful attempts to design optimal controllers and observers for systems with explicit contact constraints have been made [13,18,19]; we describe these works in the next subsection. However, the question of robustness to exogenous inputs has not been raised in those papers. In the area of optimal control, a similar (but more general) class of linear systems has been considered—descriptor systems [20]. For these systems, several robust control methods have been proposed [7]; we review them in Section 2.2.

2.1. Orthogonal Projection and Decomposition Methods

Orthogonal projection and decomposition (OPD) methods can be seen as an attempt to re-model the dynamics of systems with explicit constraints (presented as a DAE) into a form suitable for control design, using orthogonal projections and decomposition of the state variables.
An early example of such work can be found in [21], where orthogonal projections are used to exclude algebraic variables from equations of dynamics. Same as in most OPD methods, [21] takes advantage of a projector onto the null space of the constraint Jacobian. One of the key features of the work is the construction of a constraint inertia matrix—an analog to the generalized inertia matrix in Lagrangian mechanics. This approach is applied for solving problems related to simulation, inverse dynamics, and control design [21,22,23].
A parallel series of works proposed using QR decomposition of the constraint Jacobian and introduced a QR decomposition-based projector matrix [14,24,25,26]. They showed the equivalence of several independently developed OPD methods for inverse dynamics [25].
The works we have cited so far used an orthogonal projection of the dynamics equations to find a better representation of the relations between the generalized accelerations, control inputs, and reaction forces; as such, they were dealing with nonlinear second-order ODEs. A new class of methods originated in [13,18], where (1) a projection of the state variables (rather than the generalized accelerations) onto the tangent space of the constraint manifold was proposed and (2) a locally linearized model of the robot dynamics was considered. This allowed for the implementation of an LQR control for a walking robot.
In the paper [19], a decomposition of the state variables into active (lying on the tangent space to the constraint manifold) and static (orthogonal to the active ones) states was used to allow simultaneous controller and observer design, including a disturbance observer estimating the value of static state variables. This paper used the same linearized model as [18]. In [27,28], orthogonal decomposition was used to facilitate a simulation-based reinforcement learning algorithm on a constraint manifold.
While orthogonal projection and decomposition methods have successfully covered problems such as controller and observer design, as well as forward and inverse dynamics, there are still numerous control-related problems that have not been solved within this framework. One of them is the problem of designing a control law robust to an energy-bound exogenous input, which would guarantee a bounded output. This motivates the aim of the present paper: to design an H∞ controller–observer pair for the systems with explicit mechanical constraints, which can be represented by a linearized model in the tangent space to the constraint manifold. Additionally, we present a detailed treatment of the question of constructing linearized models, mentioned above, which has not been performed previously in OPD-related papers.

2.2. Linear Models for Constrained Systems

A descriptor system is a state-space model that expresses the relation between state variables and their derivatives in a general form, allowing the system to be described as a mix of differential and algebraic equations. The generality of such a description allows the systems with explicit mechanical constraints to be presented as descriptor systems. Descriptor systems have been extensively studied, and several important results have been achieved in the development of stability criteria as well as control and observer design methods [29]. The problem of designing an H∞ controller for a descriptor system was considered in [7,30], where a version of the Bounded Real Lemma was proposed, and in [31], where the Riccati equation was used to design an H∞ controller.
While these results are of significant interest, it has not been shown that a mechanical system with changing contact, such as a walking robot, can be successfully controlled using these methods. The class of descriptor systems is very general and includes far more than only systems with mechanical constraints; control methods designed for a smaller class of systems, such as OPD methods, can take advantage of the specific dynamic properties of the individual systems. For example, in [19], an OPD controller–observer design method allowed an unobservable system to be reduced to an observable one without the loss of information relevant to the control task.
Aside from the descriptor models, one can attempt the linearization of systems with explicit mechanical constraints based on Taylor expansion. The end result can be a model, linear with respect to the full state of the original nonlinear system, control actions, and reaction forces; we can call it an explicit form of a linear model with mechanical constraints. Alternatively, a model can be linearized on the tangent plane to the constraint manifold, producing a model with no reaction forces and a state vector of fewer dimensions; we can call it an implicit model. In papers [18,19], the transformation of an explicit model to an implicit one is discussed. In [13], the construction of a full-state implicit linear model is considered, while in [32], a linear model for floating-base coordinates (the position and orientation of the robot’s body and corresponding velocities) is constructed using an orthogonal projection of the nonlinear equations. In this paper, we extend and elaborate on this method, presenting a finite difference-based linearization scheme that produces models equivalent to the type required by OPD methods such as those proposed in [18].

3. Preliminaries

We start by recalling a few general results, which will be used in the following derivations.
Schur complement [33]: Given symmetric matrices $A \in \mathbb{S}^{n}$ and $C \in \mathbb{S}^{m}$ and a matrix $B \in \mathbb{R}^{n \times m}$, as well as their concatenation
$$M = \begin{bmatrix} A & B \\ B^{T} & C \end{bmatrix},$$
we can make the following equivalent statements:
  • $M < 0$.
  • $A - B C^{-1} B^{T} < 0$, $C < 0$.
Here and later, the signs < and ≤ imply negative definiteness and semidefiniteness when applied to the matrices or quadratic forms (equivalently for the opposite sign).
Young’s Relation [33]: For any symmetric positive definite matrix $M > 0$, positive scalar $\epsilon > 0$, and $X, Y \in \mathbb{R}^{n \times m}$, the following inequality is true:
$$X^{T} Y + Y^{T} X \leq \epsilon X^{T} M^{-1} X + \frac{1}{\epsilon} Y^{T} M Y.$$
L₂ Gain for Linear Systems [33]: Given an LTI system:
$$\dot{x} = A x + B w, \qquad z = C x + D w,$$
where $x$, $z$, and $w$ are the system state, desired output vector, and the disturbance input, respectively, while $A$, $B$, $C$, and $D$ are the state, input, output, and feed-through matrices, respectively, we define its L₂ gain (which is also its H∞ norm) as:
$$\sup_{\|w\|_2 \neq 0} \frac{\|z\|_2}{\|w\|_2},$$
where $\|z\|_2$ is the L₂ norm of the signal, defined as:
$$\|z\|_2^{2} = \int_0^{\infty} z(t)^{T} z(t) \, dt,$$
and $\|w\|_2$ is defined equivalently. If there exists a function $V(x) = x^{T} P x > 0$ and the following inequality holds for some scalar $\gamma > 0$:
$$J = \dot{V} + \frac{1}{\gamma} z^{T} z - \gamma w^{T} w < 0,$$
then the H∞ norm of the system is bounded by $\gamma$.
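The bound above can be checked numerically by passing its Schur-complemented form to an SDP solver. The following is a minimal illustrative sketch (not part of the paper) for a toy stable LTI system, written with cvxpy; the system matrices are placeholders chosen only for demonstration.

```python
import numpy as np
import cvxpy as cp

# Toy stable LTI system: x_dot = A x + B w, z = C x + D w (illustrative values).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

n, nw, nz = A.shape[0], B.shape[1], C.shape[0]

P = cp.Variable((n, n), symmetric=True)
gamma = cp.Variable(nonneg=True)

# Schur complement of J = V_dot + (1/gamma) z^T z - gamma w^T w < 0 with V = x^T P x
# gives the standard bounded-real LMI below, which is linear in P and gamma.
M = cp.bmat([
    [A.T @ P + P @ A, P @ B,               C.T],
    [B.T @ P,         -gamma * np.eye(nw), D.T],
    [C,               D,                   -gamma * np.eye(nz)],
])
Ms = 0.5 * (M + M.T)   # numerically symmetrize before the definiteness constraint

constraints = [P >> 1e-8 * np.eye(n), Ms << -1e-8 * np.eye(n + nw + nz)]
prob = cp.Problem(cp.Minimize(gamma), constraints)
prob.solve()
print("H-infinity norm upper bound:", gamma.value)   # about 0.5 for this toy system
```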

4. The Linearization of a Constrained Model

A walking robot can be modeled using a combination of Lagrange and constraint equations:
$$H_m \ddot{q} + C_m \dot{q} + g_m = S_m \tau + J_m^{T} \lambda, \qquad J_m \ddot{q} + \dot{J}_m \dot{q} = 0,$$
where $H_m$, $C_m$, $g_m$, and $J_m = \partial h_m / \partial q$ are the generalized inertia matrix, Coriolis and normal inertial force matrix, generalized gravity force, and constraint Jacobian, while $q$, $\lambda$, $\tau$, and $S_m$ are generalized coordinates, reaction forces, joint torques, and an actuation selector matrix. The second equation is obtained by twice-differentiating the constraint equation $h_m(q) = 0$.
These equations can be solved to find an explicit formula for the generalized accelerations:
$$\ddot{q} = \psi_m(q, \dot{q}, \tau) = (I - J_m^{\#} J_m) H_m^{-1} (S_m \tau - C_m \dot{q} - g_m) - J_m^{\#} \dot{J}_m \dot{q},$$
where $J_m^{\#} = H_m^{-1} J_m^{T} (J_m H_m^{-1} J_m^{T})^{-1}$ is an inertially weighted pseudoinverse. We can describe the dynamics in a state-space form with state variables $q$, $\dot{q}$:
$$\frac{d}{dt} \begin{bmatrix} q \\ \dot{q} \end{bmatrix} = \begin{bmatrix} \dot{q} \\ \psi_m(q, \dot{q}, \tau) \end{bmatrix}.$$
Introducing the state variable $x = \begin{bmatrix} q^{T} & \dot{q}^{T} \end{bmatrix}^{T} \in \mathbb{R}^{n}$ and control input $u = \tau$, we re-write the last system as:
$$\dot{x} = \phi_m(x, u).$$
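For reference, the constrained forward dynamics above can be evaluated numerically as in the following sketch. It assumes user-supplied callables H_m(q), C_m(q, dq), g_m(q), J_m(q), and dJ_m(q, dq) returning the corresponding model matrices (these are model-specific and not given here), so it is an illustration of the formulas rather than a complete implementation.

```python
import numpy as np

def psi_m(q, dq, tau, H_m, C_m, g_m, J_m, dJ_m, S_m):
    """Constrained generalized accelerations, following the formula for psi_m above."""
    H = H_m(q)
    Hinv = np.linalg.inv(H)
    J = J_m(q)
    dJ = dJ_m(q, dq)
    # Inertially weighted pseudoinverse J_m^# = H^-1 J^T (J H^-1 J^T)^-1
    J_sharp = Hinv @ J.T @ np.linalg.inv(J @ Hinv @ J.T)
    free_acc = Hinv @ (S_m @ tau - C_m(q, dq) @ dq - g_m(q))
    return (np.eye(H.shape[0]) - J_sharp @ J) @ free_acc - J_sharp @ dJ @ dq

def phi_m(x, u, model):
    """State derivative x_dot = phi_m(x, u) for x = [q; dq]; model = (H_m, C_m, g_m, J_m, dJ_m, S_m)."""
    nq = x.shape[0] // 2
    q, dq = x[:nq], x[nq:]
    ddq = psi_m(q, dq, u, *model)
    return np.concatenate([dq, ddq])
```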

4.1. Orthogonal Decomposition of State Coordinates

The state-space model obtained above produces correct state derivatives, given states that correspond to the constraints. We use this model to produce a least-squares-type linearization scheme for the original constrained model. To this end, we need to identify variations Δ x i of the state variable x that do not leave the constraint manifold defined by the equation h m ( q ) = 0 and its derivative J m q ˙ = 0 . We consider the Taylor expansion of both equations around a point q 0 , q ˙ 0 :
$$0 = h_m(q) = h_m(q_0) + \frac{\partial h_m}{\partial q}(q - q_0) + \mathrm{h.o.t.}, \qquad 0 = J_m \dot{q} = J_m(q_0) \dot{q}_0 + \frac{\partial (J_m \dot{q})}{\partial q}(q - q_0) + \frac{\partial (J_m \dot{q})}{\partial \dot{q}}(\dot{q} - \dot{q}_0) + \mathrm{h.o.t.},$$
where higher-order terms (h.o.t.) include quadratic and higher-order terms in the variables $q$ and $\dot{q}$ of the Taylor expansion. Since the linearization point lies on the constraint manifold, we cancel the terms $h_m(q_0) = 0$ and $J_m(q_0) \dot{q}_0 = 0$. Introducing variations $\Delta q = q - q_0$ and $\Delta \dot{q} = \dot{q} - \dot{q}_0$ and dropping the higher-order terms, we obtain a linear system of equations governing the admissible values of the variations:
$$\begin{bmatrix} J_m & 0 \\ \dot{J}_m & J_m \end{bmatrix} \begin{bmatrix} \Delta q \\ \Delta \dot{q} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$
We denote the matrix on the left-hand side of the system as $G$, obtaining the condition $G \Delta x_i = 0$, with the implication that any admissible $\Delta x_i$ lies in the null space of that matrix.
Let $N \in \mathbb{R}^{n \times n_r}$ be an orthonormal basis in the null space of $G$, and let $R \in \mathbb{R}^{n \times n_\rho}$ be an orthonormal basis in its orthogonal complement. Any vector $x \in \mathbb{R}^{n}$ can be represented via its coordinates in the direct sum of the two subspaces:
$$x = N r + R \rho,$$
where $r$ are coordinates in the tangent plane to the constraint manifold, which we will call active states or null space coordinates, and $\rho$ are coordinates in the linear space that is normal to the manifold, which we will refer to as static states or a static state discrepancy. Since $\Delta x_i$ lies entirely in the null space of $G$, we can represent it via its null space coordinates $\Delta r_i$ as $\Delta x_i = N \Delta r_i$.
Note that by differentiating the constraint equation twice, we obtain a condition on the state derivative, $G \dot{x} = 0$. Then, any admissible state derivative $\dot{x}$ can be expressed via its null space coordinates: $\dot{x} = N \dot{r}$, and equivalently $\dot{r} = N^{T} \dot{x} = N^{T} \phi_m(x, u)$. With that, we can construct linearized dynamics in the tangent plane to the constraint manifold.
Let $E_r \in \mathbb{R}^{n_r \times n_r}$ be a full-rank matrix and $e_i$ be its $i$-th column. Then, we can find an admissible state variation $\Delta x_i = N e_i$; we denote the corresponding state derivative variation as:
$$\Delta v_i = N^{T} \left( \phi_m(x_0 + N e_i, u_0) - \phi_m(x_0, u_0) \right).$$
Concatenating the state derivative variations into a matrix $V_r = [\Delta v_1, \ldots, \Delta v_{n_r}]$ and noting that the null space coordinates of the variations $N e_i$ are simply $e_i$ (since $N^{T} N = I$), we obtain the active state matrix $A_r$ of the linear model by finding the least-squares solution to the following system of equations:
$$V_r = A_r E_r.$$
We find the control matrix B in a similar way, considering variations in control actions u .
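A minimal numerical sketch of this construction is shown below. It assumes a state-derivative function phi(x, u) (for example, the previous sketch wrapped in a closure over the model) and a linearization point (x0, u0) on the constraint manifold; the bases N and R are obtained from an SVD of G, and E_r and E_u are taken as scaled identity matrices. This is an illustration of the least-squares fit, not the authors' code.

```python
import numpy as np

def bases_from_constraints(G, tol=1e-10):
    """Orthonormal bases: N spans the null space of G, R spans its orthogonal complement (row space)."""
    U, s, Vt = np.linalg.svd(G)
    rank = int(np.sum(s > tol))
    N = Vt[rank:].T        # null-space basis, n x n_r
    R = Vt[:rank].T        # row-space basis,  n x n_rho
    return N, R

def linearize_active(phi, x0, u0, N, step=1e-6):
    """Least-squares fit of A_r and B from variations on the tangent space (E_r, E_u = step * I)."""
    n_r, n_u = N.shape[1], np.atleast_1d(u0).shape[0]
    E_r = step * np.eye(n_r)
    E_u = step * np.eye(n_u)
    f0 = phi(x0, u0)
    V_r = np.column_stack([N.T @ (phi(x0 + N @ E_r[:, i], u0) - f0) for i in range(n_r)])
    V_u = np.column_stack([N.T @ (phi(x0, u0 + E_u[:, i]) - f0) for i in range(n_u)])
    # Fit A_r from V_r = A_r E_r and B from V_u = B E_u (E_r, E_u are invertible here).
    A_r = V_r @ np.linalg.inv(E_r)
    B = V_u @ np.linalg.inv(E_u)
    return A_r, B
```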

4.2. Constraint Uncertainty and the Static States

Assume that equation h m ( q ) = 0 and its derivative are augmented with a small additive term, which is uncertain. Then, expanding both equations around the point q 0 , q ˙ 0 exactly as was performed in the above section, we arrive at a new system of equations:
$$G \Delta x_i = \varepsilon,$$
where $\varepsilon$ represents the uncertain term. For a walking robot climbing stairs, it can represent an error related to an incorrect estimation of the height of the stairs. Equation (15) has a particular solution $\Delta x_p = G^{+} \varepsilon$, where $G^{+}$ is the pseudoinverse of $G$; this value represents a distance between the point of expansion $x_0$ and a point on the constraint manifold. Since $\Delta x_p$ lies in the row space of $G$, we can re-write it in terms of the earlier proposed static state discrepancy coordinates:
$$\Delta \rho = R^{T} \Delta x_p = R^{T} G^{+} \varepsilon.$$
We can find how the value of the discrepancy $\Delta \rho$ influences the active state derivative. Let $E_\rho \in \mathbb{R}^{n_\rho \times n_\rho}$ be a full-rank matrix and $e_j$ be its $j$-th column. Then, we can find the discrepancy-related state variation as $\Delta x_j = R e_j$; we denote the corresponding state derivative variation as:
$$\Delta v_j = N^{T} \left( \phi_m(x_0 + R e_j, u_0) - \phi_m(x_0, u_0) \right).$$
Concatenating the state derivative variations into a matrix $V_\rho = [\Delta v_1, \ldots, \Delta v_{n_\rho}]$, we obtain the static state matrix $A_\rho$ of the linear model by finding the least-squares solution to the following linear system:
$$V_\rho = A_\rho E_\rho.$$
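The static state matrix can be fitted with the same finite-difference pattern, now perturbing along the columns of R; a brief sketch under the same assumptions as the previous one:

```python
import numpy as np

def linearize_static(phi, x0, u0, N, R, step=1e-6):
    """Fit A_rho from variations along the orthogonal complement R (E_rho = step * I)."""
    n_rho = R.shape[1]
    E_rho = step * np.eye(n_rho)
    f0 = phi(x0, u0)
    V_rho = np.column_stack([N.T @ (phi(x0 + R @ E_rho[:, j], u0) - f0) for j in range(n_rho)])
    return V_rho @ np.linalg.inv(E_rho)   # least-squares fit of V_rho = A_rho E_rho
```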

4.3. External Disturbance

Consider an additive external disturbance $w \in \mathbb{R}^{n_w}$ added to the model Equation (9):
$$\dot{x} = \phi_m(x, u) + \phi_d(x, u, w),$$
where $\phi_d(\cdot)$ is a function that maps the disturbance $w$ to the state derivatives. The sources of external disturbances for walking robots range from collisions to high-speed wind. We find the linearization of this model as performed previously. Let $E_w \in \mathbb{R}^{n_w \times n_w}$ be a full-rank matrix and $e_i$ be its $i$-th column. Then, we can find a disturbance variation $\Delta w_i = e_i$; we denote the corresponding state derivative variation as:
$$\Delta v_i = N^{T} \left( \phi_d(x_0, u_0, e_i) - \phi_d(x_0, u_0, 0) \right).$$
Concatenating the state derivative variations into a matrix $V_w = [\Delta v_1, \ldots, \Delta v_{n_w}]$, we obtain the disturbance matrix $D_1$ of the linear model by finding the least-squares solution to the following linear system:
$$V_w = D_1 E_w.$$
Remark 1.
We linearized the model around the point $w = 0$. If $\phi_d(x, u, 0) = 0$, or if $\partial \phi_d / \partial x = 0$ and $\partial \phi_d / \partial u = 0$, then the previous derivation of the matrices $A_r$, $A_\rho$, and $B$ remains unaffected. Otherwise, when computing the state derivative variations in Equations (13) and (17), we need to use $\phi_m(x, u) + \phi_d(x, u, w)$ rather than only $\phi_m(x, u)$.
Together, this gives us a linear model that depends on both the active states and the discrepancy:
$$\dot{r} = A_r r + A_\rho \rho + B u + D_1 w,$$
where r represents the distance from the linearization point and ρ represents the state discrepancy related to uncertainty in the constraint equations. For robots moving over static environments, ρ = const , motivating its description as a static state.
Finally, we observe that the size of the matrices E r , E ρ , E u , and E w (as measured by their spectral norm) determines the accuracy of the linearization, similar to how the step size determines the accuracy of a regular finite-difference method.

5. Problem Statement

In this section, we introduce the problem of H∞ control with a Luenberger observer for a linear system with constraints. Let us consider the linear system (22) and the measurement equation with sensor disturbances:
$$\dot{r} = A_r r + A_\rho \rho + B u + D_1 w, \qquad y = C N r + C R \rho + D_2 w,$$
where C is the measurement matrix, D 1 , D 2 are the process and sensor disturbance matrices, and w is the disturbance. The state of the system can be expressed as x = N r + R ρ , and the measurement is modeled as y = C x + D 2 w , giving us the second equation of the system. To facilitate the observer design, we re-write the dynamics in the form:
$$\begin{bmatrix} \dot{r} \\ \dot{\rho} \end{bmatrix} = \begin{bmatrix} A_r & A_\rho \\ 0 & 0 \end{bmatrix} \begin{bmatrix} r \\ \rho \end{bmatrix} + \begin{bmatrix} B \\ 0 \end{bmatrix} u + \begin{bmatrix} D_1 \\ 0 \end{bmatrix} w, \qquad y = \begin{bmatrix} C N & C R \end{bmatrix} \begin{bmatrix} r \\ \rho \end{bmatrix} + D_2 w.$$
We introduce the Luenberger observer as follows:
$$\begin{bmatrix} \dot{\hat{r}} \\ \dot{\hat{\rho}} \end{bmatrix} = \begin{bmatrix} A_r & A_\rho \\ 0 & 0 \end{bmatrix} \begin{bmatrix} \hat{r} \\ \hat{\rho} \end{bmatrix} + \begin{bmatrix} B \\ 0 \end{bmatrix} u + L \left( y - \begin{bmatrix} C N & C R \end{bmatrix} \begin{bmatrix} \hat{r} \\ \hat{\rho} \end{bmatrix} \right).$$
Defining the estimation error as $e = \begin{bmatrix} (r - \hat{r})^{T} & (\rho - \hat{\rho})^{T} \end{bmatrix}^{T}$ and introducing the variables $S = \begin{bmatrix} I & 0 \end{bmatrix}^{T}$, $E = \begin{bmatrix} N & R \end{bmatrix}$, and $A_c = \begin{bmatrix} A_r & A_\rho \\ 0 & 0 \end{bmatrix}$, we write the error dynamics as:
$$\dot{e} = (A_c - L C E) e + (S D_1 - L D_2) w.$$

5.1. Feedback State Controller

Using the observer state r ^ , ρ ^ , we can introduce the control law as follows:
$$u = K_r \hat{r} + K_\rho \hat{\rho},$$
where $K_r$ and $K_\rho$ are control gains related to the active and static states. Defining the concatenated control gain $K = \begin{bmatrix} K_r & K_\rho \end{bmatrix}$, we write the closed-loop dynamics for the system (23) as:
$$\dot{r} = (A_r + B K_r) r - B K e + (A_\rho + B K_\rho) \rho + D_1 w.$$
We propose the following choice of the control gain K ρ :
$$K_\rho = -B^{+} A_\rho,$$ where $B^{+}$ is the pseudoinverse of $B$.
As long as columns of A ρ lie in the column space of B , this control law negates the effect of ρ on the dynamics, at the cost of having less freedom of choice when tuning the controller gains to shape the state observer error dynamics. With the aforementioned assumption, the equation of dynamics takes the form:
$$\dot{r} = (A_r + B K_r) r - B K e + D_1 w.$$
Combining the last equation with the error dynamics (26), we have the closed-loop system with a linear controller, which we need to tune to attain the desired H properties.
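For completeness, the static gain introduced above can be computed directly from the linearized matrices; the small sketch below (a hypothetical helper, not the authors' code) also checks the column-space assumption.

```python
import numpy as np

def static_gain(B, A_rho, tol=1e-8):
    """K_rho = -B^+ A_rho; full cancellation requires the columns of A_rho to lie in range(B)."""
    K_rho = -np.linalg.pinv(B) @ A_rho
    residual = np.linalg.norm(A_rho + B @ K_rho)   # ~0 when the column-space condition holds
    if residual > tol:
        print(f"warning: A_rho not fully cancelled (residual {residual:.2e})")
    return K_rho
```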

5.2. H∞ Control Problem

Consider the desired output defined as:
$$z = C_1 r + C_2 e + D_3 w,$$
where D 3 is the disturbance feed-through matrix for the output. Noting that K r = K S , we combine the error dynamics (26) and the state dynamics (30), presenting the system we intend to control:
$$\begin{bmatrix} \dot{r} \\ \dot{e} \end{bmatrix} = \begin{bmatrix} A_r + B K S & -B K \\ 0 & A_c - L C E \end{bmatrix} \begin{bmatrix} r \\ e \end{bmatrix} + \begin{bmatrix} D_1 \\ S D_1 - L D_2 \end{bmatrix} w, \qquad z = C_1 r + C_2 e + D_3 w.$$
In the next section, we present a method of tuning the controller and observer gains K and L for this system, minimizing its H∞ gain.

6. Proposed LMI Method

In this section, we propose LMI conditions sufficient both to prove an upper limit of the H∞ gain of the closed-loop system and to recover the controller and observer gains of this system. These conditions are given by the following theorem:
Theorem 1.
The system:
$$\dot{r} = A_r r + A_\rho \rho + B u + D_1 w, \qquad y = C N r + C R \rho + D_2 w, \qquad z = C_1 r + C_2 e + D_3 w,$$
with observer (25) has an H∞ norm less than $\gamma > 0$ if there exist positive-definite matrices $P_1 > 0$, $Q_2 > 0$, matrices $V$, $U$, and a constant $\epsilon > 0$ such that the following LMI is satisfied:
$$\begin{bmatrix}
\Psi_1 & 0 & D_1 & P_1 C_1^{T} & \Xi & 0 \\
* & \Psi_2 & Q_2 S D_1 - V D_2 & C_2^{T} & 0 & I \\
* & * & -\gamma I & D_3^{T} & 0 & 0 \\
* & * & * & -\gamma I & 0 & 0 \\
* & * & * & * & -\frac{1}{\epsilon} H & 0 \\
* & * & * & * & * & -\epsilon H
\end{bmatrix} < 0,$$
where
$$\Psi_1 = P_1 A_r^{T} + A_r P_1 + B U + U^{T} B^{T},$$
$$\Psi_2 = A_c^{T} Q_2 + Q_2 A_c - V C E - E^{T} C^{T} V^{T},$$
$$H = \begin{bmatrix} P_1 & 0 \\ 0 & I \end{bmatrix},$$
$$\Xi = \begin{bmatrix} B U & B K_\rho \end{bmatrix},$$
and the controller and observer gains are given by $L = Q_2^{-1} V$, $K_r = U P_1^{-1} = K S$, and $K_\rho = -B^{+} A_\rho$, while the control law is defined as $u = K_r \hat{r} + K_\rho \hat{\rho}$.
Proof. 
Rewriting the first and second equations from (33) as in (24) and applying the observer dynamics equation (25), we obtain the state estimation error dynamics (26). Using the feedback control (27) and selecting the gain K ρ as in (29), we can express the first equation of (33) as in (30). By combining this with the estimation error dynamics (26) and the desired output equation for z , we arrive at Equation (32).
We introduce a variable $\vartheta = \begin{bmatrix} r^{T} & e^{T} \end{bmatrix}^{T}$ and write a Lyapunov candidate function:
$$V = \begin{bmatrix} r \\ e \end{bmatrix}^{T} \begin{bmatrix} Q_1 & 0 \\ 0 & Q_2 \end{bmatrix} \begin{bmatrix} r \\ e \end{bmatrix} = \vartheta^{T} Q \vartheta > 0.$$
The H∞ norm of the system will be less than $\gamma > 0$ if the following condition holds: $\dot{V} + \frac{1}{\gamma} z^{T} z - \gamma w^{T} w < 0$. We can expand the terms of this condition:
$$\dot{V} = \dot{\vartheta}^{T} Q \vartheta + \vartheta^{T} Q \dot{\vartheta},$$
and using (32), we have:
$$\dot{\vartheta}^{T} Q \vartheta = w^{T} D_1^{T} Q_1 r + w^{T} (S D_1 - L D_2)^{T} Q_2 e + \begin{bmatrix} r \\ e \end{bmatrix}^{T} \begin{bmatrix} (A_r + B K S)^{T} Q_1 & 0 \\ -K^{T} B^{T} Q_1 & (A_c - L C E)^{T} Q_2 \end{bmatrix} \begin{bmatrix} r \\ e \end{bmatrix},$$
$$\frac{1}{\gamma} z^{T} z = \begin{bmatrix} r \\ e \\ w \end{bmatrix}^{T} \begin{bmatrix} C_1^{T} \\ C_2^{T} \\ D_3^{T} \end{bmatrix} \frac{1}{\gamma} \begin{bmatrix} C_1 & C_2 & D_3 \end{bmatrix} \begin{bmatrix} r \\ e \\ w \end{bmatrix}.$$
We can re-write the condition as follows:
$$\begin{bmatrix} r^{T} & e^{T} & w^{T} \end{bmatrix} \Lambda \begin{bmatrix} r \\ e \\ w \end{bmatrix} < 0,$$
where
$$\Lambda = \begin{bmatrix} \Sigma_1 & -Q_1 B K & Q_1 D_1 \\ * & \Sigma_2 & Q_2 (S D_1 - L D_2) \\ * & * & -\gamma I \end{bmatrix} + \begin{bmatrix} C_1^{T} \\ C_2^{T} \\ D_3^{T} \end{bmatrix} \frac{1}{\gamma} \begin{bmatrix} C_1 & C_2 & D_3 \end{bmatrix} < 0,$$
and
$$\Sigma_1 = (A_r + B K S)^{T} Q_1 + Q_1 (A_r + B K S),$$
$$\Sigma_2 = (A_c - L C E)^{T} Q_2 + Q_2 (A_c - L C E).$$
Using the Schur complement on (44), we have:
$$\begin{bmatrix} \Sigma_1 & -Q_1 B K & Q_1 D_1 & C_1^{T} \\ * & \Sigma_2 & Q_2 (S D_1 - L D_2) & C_2^{T} \\ * & * & -\gamma I & D_3^{T} \\ * & * & * & -\gamma I \end{bmatrix} < 0.$$
Conjugating (47) by a block-diagonal matrix $\mathrm{diag}(P_1, I, I, I)$, where $P_1 = Q_1^{-1}$, we find:
$$\begin{bmatrix} \Pi_1 & -B K & D_1 & P_1 C_1^{T} \\ * & \Sigma_2 & Q_2 (S D_1 - L D_2) & C_2^{T} \\ * & * & -\gamma I & D_3^{T} \\ * & * & * & -\gamma I \end{bmatrix} < 0,$$
where $\Pi_1 = P_1 (A_r + B K S)^{T} + (A_r + B K S) P_1$. Let us re-write (48) as:
$$\begin{bmatrix} \Pi_1 & 0 & D_1 & P_1 C_1^{T} \\ * & \Sigma_2 & Q_2 (S D_1 - L D_2) & C_2^{T} \\ * & * & -\gamma I & D_3^{T} \\ * & * & * & -\gamma I \end{bmatrix} - \begin{bmatrix} B K \\ 0 \\ 0 \\ 0 \end{bmatrix} \begin{bmatrix} 0 \\ I \\ 0 \\ 0 \end{bmatrix}^{T} - \begin{bmatrix} 0 \\ I \\ 0 \\ 0 \end{bmatrix} \begin{bmatrix} B K \\ 0 \\ 0 \\ 0 \end{bmatrix}^{T} < 0.$$
Now, we use Young’s relation to convert (49) to an LMI [5]. Introducing variables:
$$M = \begin{bmatrix} Q_1 & 0 \\ 0 & I \end{bmatrix}, \qquad M^{-1} = H = \begin{bmatrix} P_1 & 0 \\ 0 & I \end{bmatrix},$$
and using Young’s relation (1), we have:
$$- \begin{bmatrix} B K \\ 0 \\ 0 \\ 0 \end{bmatrix} \begin{bmatrix} 0 \\ I \\ 0 \\ 0 \end{bmatrix}^{T} - \begin{bmatrix} 0 \\ I \\ 0 \\ 0 \end{bmatrix} \begin{bmatrix} B K \\ 0 \\ 0 \\ 0 \end{bmatrix}^{T} \leq \begin{bmatrix} \epsilon B K H K^{T} B^{T} & 0 & 0 & 0 \\ 0 & \frac{1}{\epsilon} M & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}.$$
Substituting (51) into (49), we have:
$$\begin{bmatrix} \Theta & 0 & D_1 & P_1 C_1^{T} \\ * & \Sigma_2 + \frac{1}{\epsilon} M & Q_2 (S D_1 - L D_2) & C_2^{T} \\ * & * & -\gamma I & D_3^{T} \\ * & * & * & -\gamma I \end{bmatrix} < 0,$$
where
$$\Theta = \Pi_1 + \epsilon B K H K^{T} B^{T}.$$
Expressions quadratic in the decision variables appear in the first and second diagonal blocks of the matrix. Using the Schur complement separately on each block, we find a new condition:
$$\begin{bmatrix} \Pi_1 & 0 & D_1 & P_1 C_1^{T} & B K H & 0 \\ * & \Sigma_2 & Q_2 (S D_1 - L D_2) & C_2^{T} & 0 & I \\ * & * & -\gamma I & D_3^{T} & 0 & 0 \\ * & * & * & -\gamma I & 0 & 0 \\ * & * & * & * & -\frac{1}{\epsilon} H & 0 \\ * & * & * & * & * & -\epsilon H \end{bmatrix} < 0.$$
Since $V = Q_2 L$ and $U = K S P_1$, we can transform the block:
$$B K H = \begin{bmatrix} B K_r P_1 & B K_\rho \end{bmatrix} = \begin{bmatrix} B U & B K_\rho \end{bmatrix} = \Xi.$$
We substitute into (54):
$$\begin{bmatrix}
\Psi_1 & 0 & D_1 & P_1 C_1^{T} & \Xi & 0 \\
* & \Psi_2 & Q_2 S D_1 - V D_2 & C_2^{T} & 0 & I \\
* & * & -\gamma I & D_3^{T} & 0 & 0 \\
* & * & * & -\gamma I & 0 & 0 \\
* & * & * & * & -\frac{1}{\epsilon} H & 0 \\
* & * & * & * & * & -\epsilon H
\end{bmatrix} < 0,$$
where Ψ 1 and Ψ 2 are defined as expressions (35) and (36).
This is an LMI in variables Q 2 , U , V , P 1 . □
Using Theorem 1, we can formulate a semidefinite program, by solving which we obtain the controller gain that guarantees an H∞ gain of the system below $\gamma$:
$$\underset{\gamma, P_1, Q_2, V, U}{\mathrm{minimize}} \;\; \gamma, \qquad \text{subject to} \;\; P_1 > 0, \; Q_2 > 0, \; \gamma > 0, \; \text{condition (34)}.$$
This optimization problem can be efficiently solved using an SDP solver. We will refer to it as the optimal control problem (OCP) in the rest of the paper. It contains ϵ as a parameter. The resulting closed-loop control system is illustrated in Figure 1.
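A minimal cvxpy sketch of this optimization problem is given below. It assembles the LMI (34) for a fixed ϵ and recovers the gains; the matrix shapes follow Section 5, the helper name solve_ocp is ours, and an SDP-capable solver such as SCS is assumed to be installed. This is an illustrative reconstruction, not the authors' implementation.

```python
import numpy as np
import cvxpy as cp

def solve_ocp(Ar, Arho, B, C, N, R, D1, D2, C1, C2, D3, eps):
    """Assemble the LMI (34) for a fixed eps and solve the SDP; returns (gamma, K_r, K_rho, L) or None."""
    nr, nrho = Ar.shape[0], Arho.shape[1]
    ne, nu = nr + nrho, B.shape[1]
    nw, nz, ny = D1.shape[1], C1.shape[0], C.shape[0]

    # Constant matrices from Section 5.
    Ac = np.block([[Ar, Arho], [np.zeros((nrho, nr)), np.zeros((nrho, nrho))]])
    E = np.hstack([N, R])
    S = np.vstack([np.eye(nr), np.zeros((nrho, nr))])
    CE = C @ E
    Krho = -np.linalg.pinv(B) @ Arho          # static gain, chosen as in Eq. (29)

    # Decision variables of the SDP.
    P1 = cp.Variable((nr, nr), symmetric=True)
    Q2 = cp.Variable((ne, ne), symmetric=True)
    U = cp.Variable((nu, nr))
    V = cp.Variable((ne, ny))
    gam = cp.Variable(nonneg=True)

    Psi1 = P1 @ Ar.T + Ar @ P1 + B @ U + U.T @ B.T
    Psi2 = Ac.T @ Q2 + Q2 @ Ac - V @ CE - CE.T @ V.T
    H = cp.bmat([[P1, np.zeros((nr, nrho))], [np.zeros((nrho, nr)), np.eye(nrho)]])
    Xi = cp.hstack([B @ U, B @ Krho])
    W2 = Q2 @ (S @ D1) - V @ D2               # block (2,3) of the LMI
    Z = np.zeros

    LMI = cp.bmat([
        [Psi1,        Z((nr, ne)), D1,                 P1 @ C1.T,          Xi,           Z((nr, ne))],
        [Z((ne, nr)), Psi2,        W2,                 C2.T,               Z((ne, ne)),  np.eye(ne)],
        [D1.T,        W2.T,        -gam * np.eye(nw),  D3.T,               Z((nw, ne)),  Z((nw, ne))],
        [C1 @ P1,     C2,          D3,                 -gam * np.eye(nz),  Z((nz, ne)),  Z((nz, ne))],
        [Xi.T,        Z((ne, ne)), Z((ne, nw)),        Z((ne, nz)),        -(1/eps) * H, Z((ne, ne))],
        [Z((ne, nr)), np.eye(ne),  Z((ne, nw)),        Z((ne, nz)),        Z((ne, ne)),  -eps * H],
    ])
    LMIs = 0.5 * (LMI + LMI.T)                # numerically symmetrize before the definiteness constraint

    margin = 1e-7
    cons = [P1 >> margin * np.eye(nr), Q2 >> margin * np.eye(ne),
            LMIs << -margin * np.eye(LMIs.shape[0])]
    prob = cp.Problem(cp.Minimize(gam), cons)
    prob.solve(solver=cp.SCS)
    if prob.status not in ("optimal", "optimal_inaccurate"):
        return None
    Kr = U.value @ np.linalg.inv(P1.value)
    L = np.linalg.inv(Q2.value) @ V.value
    return gam.value, Kr, Krho, L
```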
Remark 2.
The scalar $\epsilon > 0$ is chosen ahead of solving the problem (57). To overcome the difficulty of choosing the proper ϵ, we use a gridding method, searching over a range of values of ϵ to find the one that provides the best γ. Let $\kappa = \frac{\epsilon}{\epsilon + 1}$; hence, $\epsilon = \frac{\kappa}{1 - \kappa}$. Since $\epsilon > 0$, it follows that $0 < \kappa < 1$. We divide the interval $(0, 1)$ evenly, find the ϵ associated with each value of κ on that interval, and solve the OCP for that ϵ value. Among the multiple OCP solutions, we choose the one that demonstrates the smallest gain γ. This method was introduced in [8].
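A sketch of this gridding procedure, reusing the hypothetical solve_ocp helper from the previous sketch (a step of 0.001 over κ corresponds to roughly 999 grid points):

```python
import numpy as np

def grid_search(model_matrices, n_grid=999):
    """Grid kappa over (0, 1), map each value to eps = kappa / (1 - kappa), keep the best gamma."""
    best = None
    for kappa in np.linspace(1e-3, 1 - 1e-3, n_grid):
        eps = kappa / (1.0 - kappa)
        result = solve_ocp(*model_matrices, eps)   # solve_ocp from the sketch above; None on failure
        if result is None:
            continue
        gamma, Kr, Krho, L = result
        if best is None or gamma < best[0]:
            best = (gamma, Kr, Krho, L, eps)
    return best   # (gamma, K_r, K_rho, L, eps) with the smallest certified gamma
```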

7. Numerical Simulations and Discussion

To verify the proposed method, we apply it to the design of H∞ control for two mechanical systems: an underactuated three-link robot with a fixed end-effector and a flat quadruped robot model.

7.1. Constrained Underactuated Three-Link Robot

In this subsection, we design a control law for a serial mechanism consisting of three links connected via rotary joints. Each link has length l = 0.5 m, mass m = 10 kg, and inertia 0.2083 kg·m². Let q 1 , q 2 , and q 3 be joint angles; then, the state of the robot is formed by the joint angles and their derivatives. The end-effector of the robot is fixed, providing two mechanical constraints and leaving the robot with one mechanical degree of freedom, or equivalently, two active states (using terminology proposed in Section 4). A scheme of the robot is shown in Figure 2.
In this experiment, the coordinates at which the end-effector is fixed are $x_e = 0.25$, $y_e = 0.79$. The mechanical constraints related to the fixed end-effector have the following form:
$$\begin{bmatrix} l \sin(q_1) + l \sin(q_1 + q_2) + l \sin(q_1 + q_2 + q_3) \\ l \cos(q_1) + l \cos(q_1 + q_2) + l \cos(q_1 + q_2 + q_3) \end{bmatrix} = \begin{bmatrix} x_e \\ y_e \end{bmatrix}.$$
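For reference, the constraint map h_m and its Jacobian for this robot can be written down directly; the short sketch below follows the kinematic convention of the constraint equation above and is provided only as an illustration.

```python
import numpy as np

l = 0.5  # link length, m

def h_m(q, x_e=0.25, y_e=0.79):
    """End-effector constraint h_m(q) = 0 for the three-link robot (planar forward kinematics)."""
    c = np.cumsum(q)                  # cumulative joint angles q1, q1+q2, q1+q2+q3
    return np.array([l * np.sum(np.sin(c)) - x_e,
                     l * np.sum(np.cos(c)) - y_e])

def J_m(q):
    """Constraint Jacobian dh_m/dq (2 x 3)."""
    c = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = l * np.sum(np.cos(c[i:]))    # d(x)/d(q_i)
        J[1, i] = -l * np.sum(np.sin(c[i:]))   # d(y)/d(q_i)
    return J
```

At the linearization configuration given in the text below, h_m evaluates to approximately zero (up to the rounding of the printed coordinates), and J_m is the ingredient needed to build the matrix G of Section 4.1.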
Choosing the configuration $q_1 = \pi/4$, $q_2 = -2\pi/3$, and $q_3 = 0.7\pi$ as the linearization point (with these values, the end-effector sits at the fixed coordinates given above), we use the method discussed in Section 4 to produce the following linear approximation of the robot model:
$$\dot{r} = A_r r + A_\rho \rho + B u + D_1 w, \qquad y = C N r + C R \rho + D_2 w, \qquad z = C_1 r + C_2 e,$$
where the state, control, and constraint matrices $A_r$, $A_\rho$, $B$, $J_m$ attain the following values:
$$A_r = \begin{bmatrix} 0.71 & 0.97 \\ 1.28 & 0.07 \end{bmatrix}, \qquad A_\rho = \begin{bmatrix} 0.00 & 8.56 & 0.00 & 0.41 \\ 0.00 & 27.39 & 0.00 & 1.31 \end{bmatrix},$$
$$B = \begin{bmatrix} 0.11 \\ 0.34 \end{bmatrix}.$$
The matrix $N$, providing a basis in the null space of the constraint equation, is given as:
$$N^{T} = \begin{bmatrix} 0.54 & 0.61 & 0.50 & 0.17 & 0.19 & 0.16 \\ 0.17 & 0.19 & 0.16 & 0.54 & 0.61 & 0.50 \end{bmatrix}.$$
The external force is assumed to act on the center of mass of the second link in the vertical direction. The corresponding D 1 is found to be:
$$D_1 = \begin{bmatrix} 0.04 \\ 0.13 \end{bmatrix}.$$
We consider the case where the rank of the measurement matrix is 4; additionally, we assume that there is a feed-through from the external disturbance to the measurement. Thus, the measurement matrix C and the input matrix D 2 take the form:
$$C = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 & 0 \end{bmatrix}, \qquad D_2 = \begin{bmatrix} 0.01 \\ 0.01 \\ 0.01 \\ 0.01 \end{bmatrix}.$$
The desired output matrices C 1 and C 2 have the form:
$$C_1 = \begin{bmatrix} 1 & 1 \end{bmatrix}, \qquad C_2 = \begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 \end{bmatrix}.$$
For brevity, we do not discuss units of the model matrices here; however, we note that the units are chosen in such a way that the input and output signals w ( t ) , z ( t ) are both unitless. We use the static control law (29) to find the following control gain:
$$K_\rho = \begin{bmatrix} 0.00 & 79.32 & 0.00 & 3.79 \end{bmatrix}.$$
Solving a semidefinite program with the conditions given by (34), minimizing γ , and choosing the ϵ that gives the lowest value for γ using Remark 2, we find the optimal controller and observer gains:
$$K_r = \begin{bmatrix} 158.25 & 154.26 \end{bmatrix},$$
$$L = \begin{bmatrix} 42.97 & 11.84 & 17.89 & 17.59 \\ 84.87 & 24.29 & 36.47 & 37.53 \\ 3.81 & 2.72 & 0.88 & 0.43 \\ 0.37 & 4.49 & 1.57 & 3.31 \\ 11.49 & 3.91 & 3.74 & 3.86 \\ 1.11 & 0.43 & 0.84 & 1.56 \end{bmatrix}.$$
In our case, the optimal value of γ was found to be 0.03, achieved with ϵ = 0.014. It was easy to verify that the closed-loop system was stable. We tested the system’s response to the signal $w(t) = \exp(-t)$, whose L₂ norm is 0.707. The ratio of the L₂ norm of the output signal to that of the input signal in that experiment was 0.002. The time required to solve a single OCP was 0.127 s, while the time for the gridding search was 4.967 s with a fixed step of size 0.001 for κ over the interval (0, 1), as outlined in Remark 2. The gridding is terminated for ϵ > 0.5 as the solver fails for larger ϵ values.
Figure 3 shows the output signal z ( t ) for this experiment; Figure 4 shows the same signal plotted together with the input signal w ( t ) using logarithmic scale. We observe that the output signal quickly decays to zero. As the input signal vanishes, so does the output. This is the desired behavior of the controller. Let us note that the time horizon of the graph in Figure 3 was chosen with the aim to demonstrate the shape of the signal; however, the primary way of analyzing the signal is linked to its L 2 norm, rather than the speed of its decay.
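The reported energy ratio can be reproduced by integrating the closed-loop system (32) forward in time; below is a minimal sketch, assuming the closed-loop matrices Acl, Bcl, Ccl, Dcl (assembled from A_r, B, K_r, K_ρ, L, and the output matrices as in Section 5) are available as numpy arrays.

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

def l2_ratio(Acl, Bcl, Ccl, Dcl, T=20.0):
    """Simulate the closed loop driven by w(t) = exp(-t) from x(0) = 0 and return ||z||_2 / ||w||_2."""
    nw = Bcl.shape[1]
    w = lambda t: np.exp(-t) * np.ones(nw)
    f = lambda t, x: Acl @ x + Bcl @ w(t)
    sol = solve_ivp(f, (0.0, T), np.zeros(Acl.shape[0]), dense_output=True, rtol=1e-8, atol=1e-10)
    t = np.linspace(0.0, T, 4000)
    X = sol.sol(t)                                   # state trajectory, shape (n, len(t))
    W = np.exp(-t)[None, :] * np.ones((nw, 1))       # disturbance samples, shape (nw, len(t))
    Z = Ccl @ X + Dcl @ W                            # output z(t)
    z_norm = np.sqrt(trapezoid(np.sum(Z ** 2, axis=0), t))
    w_norm = np.sqrt(trapezoid(np.sum(W ** 2, axis=0), t))
    return z_norm / w_norm                           # should not exceed the certified gamma
```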

7.2. Flat Quadruped Robot Model

In this subsection, we apply the proposed method to a flat quadruped robot model, as shown in Figure 5. The robot consists of two legs, each attached to the body via a rotary joint; each leg consists of two links connected via a rotary joint. All joints are actuated, and the feet of the robot remain on the ground throughout the experiment, experiencing constraints associated with unilateral mechanical contact. The lengths of the leg links are $l_1 = 0.3$ m, and the length of the body is $l_2 = 0.5$ m. The mass of each link is 2 kg, and the mass of the body is 10 kg. The inertia of each link is 0.0750 kg·m², and the inertia of the body is 0.2083 kg·m². The state of the robot includes the position and orientation of the robot’s body, the robot’s joint angles $q_1$, $q_2$, $q_3$, $q_4$, and the derivatives of all these quantities. We linearize the model around the configuration $q_1 = \pi/6$, $q_2 = \pi/6$, $q_3 = \pi/3$, and $q_4 = \pi/3$, while the robot’s body is horizontal. The external force is assumed to act on the center of mass of the robot’s body in the horizontal direction.
Performing the grid search, we find the optimal ϵ = 0.025, while the optimal upper estimate of the H∞ gain of the system is γ = 0.19. We test the system with the exponential input $w(t) = \exp(-t)$, finding that the ratio of the L₂ norm of the output signal (chosen as the sum of the robot states and observer error states) to that of the input is 0.04. The time required to solve a single OCP is 0.507 s, while the total time for the gridding search is 28.207 s. The gridding is terminated for ϵ > 0.3 as the solver fails for larger ϵ values.
Figure 6 shows the output signal z ( t ) for this experiment; Figure 7 shows the same signal plotted together with the input signal w ( t ) on a logarithmic scale. As we can see, the output signal is orders of magnitude below the input signal; as the input signal vanishes, so does the output. Like in the previous experiment, what we observe is the desired behavior of the controller.

7.3. Limitations

When analyzing the practicality of the proposed method, it is important to consider its computational cost. Computationally expensive control may lead to demanding hardware requirements or a lower-frequency control loop, both of which are undesirable. The proposed H control method presents two sources of computational expense. The first is the linear controller and observer, which are updated on each control loop iteration. They pose a negligible computational expense related to vector addition and matrix–vector multiplication. The second is related to solving OCP (57) multiple times during the gridding, as explained in Remark 2. This is a computationally expensive process: solving the OCP a single time requires 0.127 s for a three-link robot arm and 0.507 s for a quadruped. Gridding pushes these numbers to 4.967 s and 28.207 s, respectively. However, these computations do not need to be performed during the operation of the robot. If the controller and observer gains are tuned before the robot starts its operation, this source of computational expense will not affect the performance of the control system.
The OCP (57) is an SDP problem and thus requires an SDP solver. The computational expense of a semidefinite program grows with the size of its LMI. In the case of problem (57), the size of the LMI (34) grows quadratically (in terms of the number of elements on the LMI matrix) with the increase in the number of state variables describing the robot. Additionally, the SDP solver may fail to solve a large-scale problem even if a solution exists. Other considerations, such as the structure and sparsity of the matrices and the solver settings, also affect the computation time and accuracy [34].
Like other model-based methods, the proposed controller assumes that the system can be modeled by (23) within a reasonable degree of accuracy. This assumption can be violated for several reasons: if the linearization point is too far from the current operational point, if there is significant parametric uncertainty or unmodeled dynamics, if the system undergoes a hybrid transition (e.g., impact or collision), or for other reasons. Given an accurate nonlinear model, it is possible to use nonlinear control tools to ascertain the stability of the nonlinear system controlled by a linear feedback law [35]. Methods such as sum-of-squares programming (SOS) allow us to estimate the region of attraction in the state space where the control law guarantees the stability of the system [36]. Estimating the region where the H norm of the nonlinear system is below a given threshold remains a problem to be solved and poses an interesting direction for further research.

8. Conclusions

In this paper, we presented an H∞ control design method for systems with explicit constraints, taking into account both the geometry of the constraint manifold and the uncertainty in the constraints, which acts as an additive disturbance. We showed a linearization technique, producing a linear model with respect to the active states (the states that lie on the tangent space to the constraint manifold) as well as to the static states (the orthogonal complement to the active ones). We presented an LMI condition required for the desired H∞ gain of the system to be achieved. We demonstrated the work of the proposed method on a constrained underactuated three-link robot as well as on a flat quadruped model. The proposed method is limited by the necessity of finding an accurate linearization for the robot dynamics as well as by the use of semidefinite programming in the formulation of the optimal control problem.

Author Contributions

Conceptualization, A.A. and S.S.; methodology, A.A. and S.S.; software, A.A. and S.S.; validation, A.A.; formal analysis, A.A. and S.S.; investigation, A.A. and S.S.; writing—original draft preparation, A.A. and S.S.; writing—review and editing, A.A. and S.S.; visualization, A.A.; supervision, S.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data will be available upon request from the corresponding authors.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MPC: Model-Predictive Control
PD: Proportional-Differential
EKF: Extended Kalman Filter
DAE: Differential-Algebraic Equation
ODE: ordinary differential equation
LMI: linear matrix inequality
BMI: bi-linear matrix inequality
QP: Quadratic Programming
SLAM: simultaneous localization and mapping
SRB: single rigid body
OPD: orthogonal projection and decomposition
LQR: Linear Quadratic Regulator
SDP: semidefinite programming
OCP: optimal control problem
SOS: sum of squares

References

  1. Bellicoso, C.D.; Bjelonic, M.; Wellhausen, L.; Holtmann, K.; Gunther, F.; Tranzatto, M.; Fankhauser, P.; Hutter, M. Advances in real-world applications for legged robots. J. Field Robot. 2018, 35, 1311–1326. [Google Scholar] [CrossRef]
  2. Kim, D.; Di Carlo, J.; Katz, B.; Bledt, G.; Kim, S. Highly dynamic quadruped locomotion via whole-body impulse control and model predictive control. arXiv 2019, arXiv:1909.06586. [Google Scholar]
  3. Bratta, A.; Focchi, M.; Rathod, N.; Semini, C. Optimization-Based Reference Generator for Nonlinear Model Predictive Control of Legged Robots. Robotics 2023, 12, 6. [Google Scholar] [CrossRef]
  4. Camurri, M.; Ramezani, M.; Nobili, S.; Fallon, M. Pronto: A Multi-Sensor State Estimator for Legged Robots in Real-World Scenarios. Front. Robot. AI 2020, 7, 68. [Google Scholar] [CrossRef]
  5. Kheloufi, H.; Zemouche, A.; Bedouhene, F.; Boutayeb, M. On LMI conditions to design observer-based controllers for linear systems with parameter uncertainties. Automatica 2013, 49, 3700–3704. [Google Scholar] [CrossRef]
  6. hadi Sarajchi, M.; Ganjefar, S.; Hoseini, S.M.; Shao, Z. Adaptive Controller Design Based On Predicted Time-delay for Teleoperation Systems Using Lambert W function. Int. J. Control Autom. Syst. 2013, 17, 1445–1453. [Google Scholar] [CrossRef]
  7. Inoue, M.; Wada, T.; Ikeda, M.; Uezato, E. State-space H∞ controller design for descriptor systems. Automatica 2015, 59, 164–170. [Google Scholar] [CrossRef]
  8. Li, H.; Fu, M. A linear matrix inequality approach to robust H∞ filtering. IEEE Trans. Signal Process. 1997, 45, 2338–2350. [Google Scholar] [CrossRef]
  9. Xue, W.; Lian, B.; Kartal, Y.; Fan, J.; Chai, T.; Lewis, F.L. Model-free inverse H-infinity control for imitation learning. IEEE Trans. Autom. Sci. Eng. 2024, 22, 5661–5672. [Google Scholar] [CrossRef]
  10. Yu, T.; Song, J.; He, S. Decentralized H-infinity Control for Switched Interconnected Large-Scale Systems: A Computationally Attractive Method. IEEE Syst. J. 2022, 17, 3918–3927. [Google Scholar] [CrossRef]
  11. Zhi, Y.L.; Liu, X.; He, S.; Lin, W.; Chen, W. Adaptively Event-Triggered H-infinity Control for Networked Autonomous Aerial Vehicles Control Systems Under Deception Attacks. IEEE Trans. Ind. Inform. 2025, 21, 3458–3465. [Google Scholar] [CrossRef]
  12. Schwerdtner, P.; Voigt, M. Fixed-order H-infinity controller design for port-Hamiltonian systems. Automatica 2023, 152, 110918. [Google Scholar] [CrossRef]
  13. Mason, S.; Righetti, L.; Schaal, S. Full dynamics LQR control of a humanoid robot: An experimental study on balancing and squatting. In Proceedings of the 2014 IEEE-RAS International Conference on Humanoid Robots, Madrid, Spain, 18–20 November 2014; pp. 374–379. [Google Scholar] [CrossRef]
  14. Kim, S.S.; Vanderploeg, M.J. QR Decomposition for State Space Representation of Constrained Mechanical Dynamic Systems. J. Mech. Trans. Autom. Des. 1986, 108, 183–188. [Google Scholar] [CrossRef]
  15. Haug, E.J.; Yen, J. Generalized Coordinate Partitioning Methods for Numerical Integration of Differential-Algebraic Equations of Dynamics. In Real-Time Integration Methods for Mechanical System Simulation; Springer: Berlin/Heidelberg, Germany, 1991; pp. 97–114. [Google Scholar]
  16. Gao, F.; Tang, W.; Huang, J.; Chen, H. Positioning of Quadruped Robot Based on Tightly Coupled LiDAR Vision Inertial Odometer. Remote Sens. 2022, 14, 2945. [Google Scholar] [CrossRef]
  17. Bledt, G.; Powell, M.J.; Katz, B.; Di Carlo, J.; Wensing, P.M.; Kim, S. MIT Cheetah 3: Design and Control of a Robust, Dynamic Quadruped Robot. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 2245–2252. [Google Scholar] [CrossRef]
  18. Mason, S.; Rotella, N.; Schaal, S.; Righetti, L. Balancing and walking using full dynamics LQR control with contact constraints. In Proceedings of the 2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), Cancun, Mexico, 15–17 November 2016; pp. 63–68. [Google Scholar] [CrossRef]
  19. Savin, S.; Balakhnov, O.; Khusainov, R.; Klimchik, A. State Observer for Linear Systems with Explicit Constraints: Orthogonal Decomposition Method. Sensors 2021, 21, 6312. [Google Scholar] [CrossRef]
  20. Luenberger, D.G. Time-invariant descriptor systems. Automatica 1978, 14, 473–480. [Google Scholar] [CrossRef]
  21. Aghili, F.; Piedbœuf, J.C. Simulation of motion of constrained multibody systems based on projection operator. Multibody Syst. Dyn. 2003, 10, 3–16. [Google Scholar] [CrossRef]
  22. Aghili, F. A unified approach for inverse and direct dynamics of constrained multibody systems based on linear projection operator: Applications to control and simulation. IEEE Trans. Robot. 2005, 21, 834–849. [Google Scholar] [CrossRef]
  23. Farhad, A. Control and simulation of motion of constrained multibody systems based on projection matrix formulation. arXiv 2022, arXiv:2210.17053. [Google Scholar]
  24. Mistry, M.; Buchli, J.; Schaal, S. Inverse dynamics control of floating base systems using orthogonal decomposition. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 3–7 May 2010; pp. 3406–3412. [Google Scholar] [CrossRef]
  25. Righetti, L.; Buchli, J.; Mistry, M.; Schaal, S. Inverse dynamics control of floating-base robots with external constraints: A unified view. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, 9–13 May 2011; pp. 1085–1090. [Google Scholar] [CrossRef]
  26. Reher, J.; Ames, A.D. Inverse dynamics control of compliant hybrid zero dynamic walking. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May 2021; pp. 2040–2047. [Google Scholar] [CrossRef]
  27. Liu, P.; Tateo, D.; Bou-Ammar, H.; Peters, J. Robot Reinforcement Learning on the Constraint Manifold. In Proceedings of the 5th Conference on Robot Learning (CoRL), Auckland, New Zealand, 14–18 December 2022. [Google Scholar]
  28. Liu, P.; Zhang, K.; Tateo, D.; Jauhri, S.; Hu, Z.; Peters, J.; Chalvatzaki, G. Safe Reinforcement Learning of Dynamic High-Dimensional Robotic Tasks: Navigation, Manipulation, Interaction. In Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May–2 June 2023; pp. 9449–9456. [Google Scholar] [CrossRef]
  29. Kawaji, S. Design of Observer for Linear Descriptor Systems. IFAC Proc. Vol. 1990, 23, 241–245. [Google Scholar] [CrossRef]
  30. Abdullah, A.A. Model following control of linear parameter-varying descriptor systems. Optim. Control Appl. Methods 2022, 43, 943–961. [Google Scholar] [CrossRef]
  31. Wang, H.S.; Yung, C.F.; Chang, F.R. Bounded real lemma and H∞ control for descriptor systems. In Proceedings of the 1997 American Control Conference, Albuquerque, NM, USA, 6 June 1997; Volume 3, pp. 2115–2119. [Google Scholar] [CrossRef]
  32. Xin, G.; Xin, S.; Cebe, O.; Pollayil, M.J.; Angelini, F.; Garabini, M.; Vijayakumar, S.; Mistry, M. Robust footstep planning and LQR control for dynamic quadrupedal locomotion. IEEE Robot. Autom. Lett. (RA-L) 2021, 6, 4488–4495. [Google Scholar] [CrossRef]
  33. Boyd, S.; Ghaoui, L.E.; Feron, E.; Balakrishnan, V. Linear Matrix Inequalities in System and Control Theory; Society for Industrial and Applied Mathematics (SIAM): Philadelphia, PA, USA, 1994. [Google Scholar]
  34. Todd, M.J. Semidefinite optimization. Acta Numer. 2001, 10, 515–560. [Google Scholar] [CrossRef]
  35. Slotine, J.J.E.; Li, W. Applied Nonlinear Control; Prentice Hall: Englewood Cliffs, NJ, USA, 1991; Volume 199. [Google Scholar]
  36. Topcu, U.; Packard, A.; Seiler, P. Local stability analysis using simulations and sum-of-squares programming. Automatica 2008, 44, 2669–2675. [Google Scholar] [CrossRef]
Figure 1. The flow chart of the proposed control method; the LMI OCP block refers to problem (57), while the ϵ-search block refers to the gridding method outlined in Remark 2.
Figure 2. A diagram of a 3-link robot with a constrained end effector; l_1, l_2, and l_3 are the link lengths, q_1, q_2, and q_3 are the joint angles, and x_e, y_e are the coordinates of the end effector.
Figure 3. The output signal z(t) (computed by simulating the closed-loop system forward) for the underactuated constrained three-link robot.
Figure 4. The input signal w(t) and the output signal z(t) (computed by simulating the closed-loop system forward) for the underactuated constrained three-link robot; log scale.
Figure 5. A diagram of a flat quadruped robot; l_1 is the leg link length, l_2 is the body length; q_1, q_3 and q_2, q_4 are the joint angles of the back and front legs, respectively; the points K_1 and K_2 represent contacts between the robot and the environment.
Figure 6. The output signal z(t) (computed by simulating the closed-loop system forward) for the flat quadruped robot model.
Figure 7. The input signal w(t) and the output signal z(t) (computed by simulating the closed-loop system forward) for the flat quadruped robot model; log scale.
