Article

Adaptive Shared Trajectory Tracking Control for Output-Constrained Euler–Lagrange Systems

Key Laboratory of Intelligent Bionic Unmanned Systems, Ministry of Education, School of Intelligence Science and Technology, University of Science and Technology Beijing, Beijing 100083, China
*
Author to whom correspondence should be addressed.
Actuators 2025, 14(8), 383; https://doi.org/10.3390/act14080383
Submission received: 25 June 2025 / Revised: 29 July 2025 / Accepted: 31 July 2025 / Published: 3 August 2025

Abstract

This study presents state-feedback and output-feedback adaptive shared trajectory tracking control laws for nonlinear Euler–Lagrange systems subject to parametric uncertainties and output constraints expressed as linear inequalities. A logarithm-driven coordinate transformation is used to ensure that the system outputs remain within the defined regions, model-based adaptive laws are used in the machine controller to estimate and cancel the parametric uncertainties, and the human controller can be given arbitrarily. The stability of the whole controlled system is proved by Lyapunov stability theory, and simulation examples illustrate the performance of the proposed shared control laws.

1. Introduction

Industrial manipulators and many other man-made systems exhibit major safety risks due to human tiredness, inattention, and judgment mistakes. Thus, researchers endeavor to synthesize automatic controllers that undertake part of the control task, ensuring the security of the controlled system. In essence, the operation authority of the system is switched bidirectionally between an automatic controller and a human operator. A shared control framework involves feedback control inputs and human operator inputs simultaneously in the controlled system [1,2]. Over the past decades, it has attracted significant attention and interest from a wide range of control system societies in both academia and industry, as it combines the reliable, precise, and fast capabilities of automatic controllers with the interactive, adaptive, and creative task-execution capabilities of human controllers [3]. Under normal circumstances, the system is handled by human operators, while in dangerous situations, the operation authority is taken over by the automatic controller. For a human–machine coordination control system in dynamic scenarios, the human is responsible for developing a policy, while the automatic controller evaluates the security of that policy. If the policy is not safe, the automatic controller should adjust the strategy and apply the updated one. Moreover, human beings are able to perform their operations while feeling that the system is reacting to their behavior, and they receive feedback from the shared controller that signifies the level of risk associated with their behavior. This is why human–machine control holds considerable significance and is widely employed in both military and civilian facilities. Common applications of human–machine control can be found in robots [4,5,6], automotive vehicles [7,8,9], and human–robot systems [10,11].
To improve the safety and robustness of the controlled system, adaptive shared controllers are developed for Euler–Lagrange systems in this study. The new contributions are as follows. First, unlike traditional machine automatic control laws designed with known parameters of the system model, certainty-equivalent and non-certainty-equivalent principles are employed to develop two kinds of state-feedback controllers for human–machine systems that deal with model uncertainties. Second, in contrast to the full-state measurements required by traditional state-feedback control techniques, a different adaptive output-feedback law is designed in the shared control framework based on the inherent passivity of the human operator; in particular, a velocity observer is designed for the output-feedback law to guarantee the automatic control performance. Third, different from the authority-sharing methods in many human–machine system studies, a hysteresis sharing-function-based smooth transition mechanism between the automatic control inputs and the human inputs is designed to allocate the operation authority of the shared control system, and a joystick-handle-based passive human input is given to obtain a human-intention-weighted shared controller. In fact, the proposed human–machine control system can accept any human input freely. Rigorous proofs of all theoretical results are presented carefully, and simulation examples are given to show the safety and robustness of the proposed algorithm for the system under output constraints.

2. Problem Statement

The category of human–machine systems under consideration is presumed to be represented by the Euler–Lagrange equations [12].
M ( q ) q ¨ + C ( q , q ˙ ) q ˙ + g ( q ) = u s ,
where q ( t ) ∈ R n , q ˙ ( t ) ∈ R n , q ¨ ( t ) ∈ R n represent the configuration and its first and second derivatives, respectively; M ( q ) represents the inertia matrix; C ( q , q ˙ ) represents the centripetal–Coriolis matrix; g ( q ) is the gravity term; and u s ∈ R n denotes the external control input, which is a blend of the inputs provided by the human, u h , and by the automatic controller, u f .
Assumption 1. 
An admissible set Q a is characterized by a group of linear matrix inequalities as
Q a = { q ∈ R n | K q + T ≤ 0 } ,
where T = [ t 1 , t 2 , … , t m ] T ∈ R m , K = [ k 1 T , k 2 T , … , k m T ] T ∈ R m × n with t i ∈ R and k i ∈ R 1 × n for i ∈ { 1 , 2 , … , m } . When m > n , T and K meet the condition rank ( [ k r 1 T , k r 2 T , … , k r h T ] T ) < rank ( [ s r 1 T , s r 2 T , … , s r h T ] T ) under s r i = [ k r i T , t r i ] T for i ∈ { 1 , 2 , … , h } , r 1 , r 2 , … , r h ∈ { 1 , 2 , … , m } , h ∈ [ n + 1 , m ] .
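As a concrete reading of (2), checking whether a configuration is admissible is a componentwise evaluation of the linear inequalities. The following minimal sketch (the box constraint and all numbers are hypothetical illustrations, not taken from the paper) shows the check:

```python
import numpy as np

def in_admissible_set(q, K, T):
    # Q_a = {q : K q + T <= 0}, checked componentwise.
    return bool(np.all(K @ q + T <= 0.0))

# Hypothetical box constraint |q1| <= 1, |q2| <= 2 written as K q + T <= 0.
K = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
T = np.array([-1.0, -1.0, -2.0, -2.0])
assert in_admissible_set(np.array([0.5, -1.5]), K, T)
assert not in_admissible_set(np.array([1.5, 0.0]), K, T)
```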
Definition 1. 
The jth constraint is considered to be a motivation for the first-order derivative q ˙ R n if k j ( q + h q ˙ ) + t j = 0 with h > 0 .
Definition 2. 
There are three subspaces in the whole state space, namely the safe set O s , the hazardous set O d , and the hysteresis set O h , determined by the position and the speed toward the borderline. With respect to the ith set of activation constraints
x i = K i q + T i ≤ 0
with T i R n and invertible matrix K i , mathematical descriptions of the safe, hazardous, and hysteresis sets are provided by [1]
O ¯ s i = { ( x i , x ˙ i ) X a i × R n : x ˙ j i 1 x j i + b 2 1 b 2 if x j i b 2 , j { 1 , 2 , , n } } , O ¯ d i = { ( x i , x ˙ i ) X a i × R n : j { 1 , 2 , , n } s . t . x ˙ j i 1 x j i + b 1 1 b 1 and b 1 x j i < 0 } , O ¯ h i = { ( x i , x ˙ i ) X a i × R n : j { 1 , 2 , , n } s . t . x ˙ j i > 1 x j i + b 2 1 b 2 and x j i b 2 , and x ˙ k i < 1 x k i + b 1 1 b 1 if x k i b 1 , k { 1 , 2 , , n } } ,
with X a i = K i Q a + T i and b 2 > b 1 > 0 .
Remark 1. 
It is noted that the subsets O s , O d , and O h are related to the original system states ( q , q ˙ ) , but the subsets O ¯ s i , O ¯ d i , and O ¯ h i are related to the constrained system states ( x i , x ˙ i ) .
Definition 3. 
The s-closed-loop system is defined by (1), and the h-closed-loop system is defined by M ( q ) q ¨ + C ( q , q ˙ ) q ˙ + g ( q ) = u h . Additionally, define Λ s and Λ h to represent the limit sets for the two kinds of closed-loop systems.
Assumption 2. 
System states q ( t ) and q ˙ ( t ) are available for the state-feedback design, while the state q ˙ ( t ) is unavailable for the output-feedback design. It is assumed that the desired trajectory q d ( t ) ∈ R n and its derivatives q ˙ d ( t ) ∈ R n and q ¨ d ( t ) ∈ R n exist and are bounded.
The goal of designing the human–machine shared controller in this study is to find an automatic controller, a safe subset, and an operation authority sharing mechanism under a specified human input u h such that
  • The states of the system (1) remain within a specified and compact admissible set Q a under a safe subset O s ⊆ O , where O ⊆ Q a × R n is a forward invariant set;
  • u s does not modify the objective of the human operator. Specifically, Λ s = Λ h if Λ h ⊆ O s , and Λ s = Π O s ( Λ h ) if Λ h ⊄ O s , where Π O s ( Λ h ) represents a projection of Λ h into O s to be designed;
  • u s = u h if the system state remains within the safe subset O s .

3. Machine Controller Design and Analysis

3.1. Machine Certainty-Equivalent Adaptive Control

Generally, the number of constraints on the system outputs mainly depends on the control mission and the environmental conditions. When m > n , only n constraints are activated according to (2), while when m < n , augmented constraints p i < α ( i = 1 , 2 , … , n ) with a sufficiently large constant α can be added by the designer until m = n . Without loss of generality, we devise the automatic controller for the scenario with m = n . The Euler–Lagrange system (1) in the coordinate x i defined by (3) under the corresponding automatic controller u f i can be reformulated as
x ˙ i = K i q ˙ i = K i v i , M ( q i ) v ˙ i + C ( q i , v i ) v i + g ( q i ) = u f i , x i ≤ 0 ,
where q i = ( K i ) − 1 ( x i − T i ) . A new variable z i = [ z 1 i , z 2 i , … , z n i ] T is defined by
z j i = ln ( x j i / x r j i ) , j ∈ { 1 , 2 , … , n } ,
to remove the constraint on x i , where x r i = [ x r 1 i , x r 2 i , , x r n i ] T is the reference signal for x i with x r j i defined by
x r j i = − ϵ if m ≥ − ϵ , and x r j i = − ϵ + m [ 1 − e γ ( m + ϵ ) ] if m < − ϵ ,
and m = s j i p d j + t j i , γ > 0 , ϵ > 0 . It is noted that x r j i is a projected feasible reference that satisfies all constraints, i.e., x r j i < 0 . The reference signal (7) is one possible smooth solution. In addition, according to (7), we also find that x r j i ≈ m when m is sufficiently negative, indicating that the reference signal tracks the human operator’s intention as long as the human behavior is sufficiently safe (i.e., q d is far away from obstacles).
Since the smooth function x r j i < 0 for all j ∈ { 1 , 2 , … , n } , x ˙ r i ( t ) and x ¨ r i ( t ) exist, and q r i ( t ) = ( K i ) − 1 ( x r i ( t ) − T i ) , v r i ( t ) = ( K i ) − 1 x ˙ r i ( t ) , v ˙ r i ( t ) = ( K i ) − 1 x ¨ r i ( t ) . Furthermore, note that ( q r i , v r i ) ∈ Q a × R n .
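The logarithmic change of coordinates (6) and its inverse can be sketched as follows; since x j i and x r j i are both negative inside the admissible region, the ratio inside the logarithm is positive, and z j i stays finite exactly as long as x j i stays strictly negative (the numbers are a hypothetical illustration):

```python
import numpy as np

def to_z(x, x_r):
    # z_j = ln(x_j / x_r_j); both entries are negative, so the ratio is > 0.
    return np.log(x / x_r)

def from_z(z, x_r):
    # Inverse map: x_j = x_r_j * exp(z_j), negative whenever x_r_j < 0.
    return x_r * np.exp(z)

x_r = np.array([-0.5, -1.0])      # feasible reference, strictly negative
x = np.array([-0.2, -3.0])        # constrained output, strictly negative
z = to_z(x, x_r)
assert np.all(np.isfinite(z))
assert np.allclose(from_z(z, x_r), x)
assert np.all(from_z(z, x_r) < 0.0)   # the constraint x < 0 is built in
```

As x j i approaches the boundary 0 from below, z j i diverges, which is how the transformation removes the constraint from the design.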
Suppose ( q d , q ˙ d ) denotes a point in Λ h in the ( q , q ˙ ) space; then the projection of ( q d , q ˙ d ) into the safe subset O s with regard to the ith set of motivation constraints, denoted by Π O s i ( q d , q ˙ d ) , is characterized by Π O s i ( q d , q ˙ d ) = ( q r i , v r i ) . Thus, the projection of Λ h into the subset O s with regard to the ith group of active constraints is characterized as
Π O s i ( Λ h ) = { φ ∈ O s | φ = Π O s i ( q d , q ˙ d ) , ( q d , q ˙ d ) ∈ Λ h } .
The velocity error is defined by v e i = v i − v r i under the ith set of motivation constraints. Next, by differentiating z i and v e i , the tracking dynamics of the constrained Euler–Lagrange system are obtained as
z ˙ i = Z 1 K i v e i + Z 2 K i v r i M ( q i ) v ˙ e i + C ( q i , v i ) v e i + g r i = u f i
where g r i = M ( q i ) v ˙ r i + C ( q i , v i ) v r i + g ( q i ) , Z 1 = diag ( e − z 1 i / x r 1 i , e − z 2 i / x r 2 i , … , e − z n i / x r n i ) , Z 2 = diag ( ( e − z 1 i − 1 ) / x r 1 i , ( e − z 2 i − 1 ) / x r 2 i , … , ( e − z n i − 1 ) / x r n i ) .
In the backstepping control design framework, the virtual input of the first subsystem in (9) is formulated as K i v e i ∗ = z i + Z 3 K i v r i , where Z 3 = diag ( e z 1 i − 1 , e z 2 i − 1 , … , e z n i − 1 ) . Then the virtual input derivative is K i v ˙ e i ∗ = z ˙ i + Z ˙ 3 K i v r i + Z 3 K i v ˙ r i . Furthermore, define y i = v e i − v e i ∗ , and the tracking dynamics (9) is reformulated by
z ˙ i = Z 1 K i y i + Z 1 z i M ( q i ) y ˙ i + C ( q i , v i ) y i + g e i = u f i
where g e i = M ( q i ) v ˙ e i ∗ + C ( q i , v i ) v e i ∗ + g r i .
Define p i = y i − λ q ( K i ) − 1 z i with a tunable constant λ q > 0 ; then, the model (10) can be reformulated as
z ˙ i = Z 1 K i p i + ( λ q + 1 ) Z 1 z i M ( q i ) p ˙ i + C ( q i , v i ) p i + Y i θ = u f i
where Y i θ = M ( q i ) [ λ q ( K i ) − 1 z ˙ i + v ˙ e i ∗ + v ˙ r i ] + C ( q i , v i ) [ λ q ( K i ) − 1 z i + v e i ∗ + v r i ] + g ( q i ) , with the parametric vector θ ∈ R m formulated from the terms M ( q i ) , C ( q i , v i ) , and g ( q i ) and with the corresponding regression matrix Y i of θ . Actually, this is a consequence of the linear parameterization property of model (1). Different orderings of the parameters in the parameter vector result in different regression matrices. Thus, the explicit form of the known regressor Y i and the unknown parameter θ in (11) cannot be given directly from the general model (1), unless the model (1) of the mechanical system is explicit.
Design the certainty-equivalent adaptive state feedback controller as
u f i = − K p i − ( K i ) T Z 1 T z i + Y i θ ^ , θ ^ ˙ = − ( 1 / γ θ ) ( Y i ) T p i ,
where K is a positive-definite matrix; θ ^ is the estimation of θ ; and γ θ > 0 is a tunable constant.
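A minimal scalar analogue of the certainty-equivalent law (12) can be simulated for a mass with unknown inertia, m v ˙ = u , tracking v r ( t ) = sin t : the regressor is Y = v ˙ r , the tracking error is p = v − v r , and the adaptive law integrates − γ Y p . All gains, signals, and the plant are illustrative choices, not the manipulator model of the paper:

```python
import numpy as np

m_true = 2.0                 # true mass, unknown to the controller
k, gamma, dt = 5.0, 10.0, 1e-3
v, theta_hat = 0.0, 0.5      # state and initial parameter estimate
for step in range(int(30.0 / dt)):
    t = step * dt
    v_r, dv_r = np.sin(t), np.cos(t)    # reference and its derivative
    p = v - v_r                         # velocity tracking error
    Y = dv_r                            # regressor of the unknown mass
    u = -k * p + Y * theta_hat          # certainty-equivalent control
    theta_hat += dt * (-gamma * Y * p)  # integrator-type adaptive law
    v += dt * (u / m_true)              # plant integration
assert abs(v - np.sin(30.0)) < 0.1      # tracking error has converged
```

The closed loop gives m p ˙ = − k p + Y θ ˜ with θ ˜ = θ ^ − m , which mirrors the Lyapunov cancellation used in the proof of Theorem 1.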
Theorem 1. 
For the nonlinear constrained system (1) under Assumptions 1 and 2, if the state-feedback adaptive control law is designed by (12), then it can be readily demonstrated that the controlled system satisfies q i → q r i and v i → v r i as t → ∞ .
Proof. 
Choose a candidate Lyapunov function for the system (11) as L i = 1 2 [ z i T z i + p i T M ( q i ) p i + γ θ θ ˜ T θ ˜ ] ≥ 0 with the parametric estimation error θ ˜ = θ ^ − θ . Then, the derivative of L i along the trajectory of (11) with the controller (12) satisfies L ˙ i = ( λ q + 1 ) z i T Z 1 z i − p i T K p i . Since Z 1 and K are negative-definite and positive-definite matrices, respectively, L ˙ i ≤ 0 , which indicates that both z i and p i are bounded. In addition, the second derivative of L i is given as L ¨ i = ( λ q + 1 ) ( 2 z i T Z 1 z ˙ i + z i T Z ˙ 1 z i ) − ( p ˙ i ) T K p i − p i T K p ˙ i . According to the definition of p i and (10), both p ˙ i and z ˙ i exist and are bounded; therefore, L ¨ i exists and is bounded. From Barbalat’s Lemma [13], it is immediately concluded that z i → 0 and p i → 0 as t → ∞ . Furthermore, it can be concluded that q i → q r i and v i → v r i as t → ∞ . □
Remark 2. 
The concept of certainty equivalence is an important feature of traditional adaptive control. Initially, a controller is constructed under the assumption that all system parameters are known, and the controller parameters are defined as a function of the model parameters. Given the actual values of the model parameters, the controller parameters are computed by solving design equations aimed at pole-zero placement, model matching, or optimality. When the actual model parameters are unavailable, the controller parameters can either be estimated directly or computed indirectly by solving the same design equations using estimates of the model parameters. The resulting controller, designed for the estimated model, is termed a certainty-equivalence adaptive controller. However, since the certainty-equivalent adaptive law is generally just an integrator of system states, the parameter estimator performance depends on the state tracking performance.

3.2. Machine Non-Certainty-Equivalent Adaptive Control

Based on (11), one can design the non-certainty-equivalent adaptive control law as
u f i = − W i ( θ ^ + β ) − γ n W n i ( W n i ) T [ p i + ( k q − α n ) p n i ] , θ ^ ˙ = γ n ( α n W n i − W i ) T p n i + γ n k q ( W n i ) T p n i ,
with β = γ n ( W n i ) T p n i and the stable first-order linear filters
p ˙ n i = − α n p n i + p i , W ˙ n i = − α n W n i + W i ,
and the regression matrix W i defined by W i θ = M ˙ ( q i ) [ p i + ( k q − α n ) p n i ] + k q M ( q i ) p i − C ( q i , v i ) p i − Y i θ , where α n > 0 , k q > 0 , and γ n ≥ 1 / k q .
Theorem 2. 
For the nonlinear constrained system (1) under Assumptions 1 and 2, if the machine adaptive controller is designed by (13) and (14) with conditions
p n i ( 0 ) = p i ( 0 ) / ( α n − k q ) , α n − k q ≠ 0 , W n i ( 0 ) = 0 ,
then it is also readily demonstrated from [14,15] that the closed-loop system satisfies q i → q r i and v i → v r i as t → ∞ .
Proof. 
Adding and subtracting M ˙ ( q i ) [ p i + ( k q − α n ) p n i ] + k q M ( q i ) p i on the right-hand side of the dynamics M ( q i ) p ˙ i + C ( q i , v i ) p i + Y i θ = u f i in (11) results in
M ( q i ) p ˙ i = W i θ + u f i − M ˙ ( q i ) [ p i + ( k q − α n ) p n i ] − k q M ( q i ) p i .
Denote a linear filter for the control signal as
u ˙ n i = − α n u n i + u f i , u n i ( 0 ) = 0 .
Then substitute (14) and (17) into (16) to yield the filtered-state error dynamics
d d t [ M ( q i ) ( p ˙ n i + k q p n i ) − W n i θ − u n i ] = − α n [ M ( q i ) ( p ˙ n i + k q p n i ) − W n i θ − u n i ] .
Thus, the solution of system (18) can be immediately derived as p ˙ n i = − k q p n i + M − 1 ( q i ) ( W n i θ + u n i ) + M − 1 ( q i ) ζ ( t ) , where ζ ( t ) = [ M ( q i ( 0 ) ) ( p ˙ n i ( 0 ) + k q p n i ( 0 ) ) − W n i ( 0 ) θ − u n i ( 0 ) ] e − α n t . It can be seen that ζ ( t ) is an exponentially decaying term whose initial value directly depends on q i ( 0 ) , p ˙ n i ( 0 ) , p n i ( 0 ) , W n i ( 0 ) , and u n i ( 0 ) . Thus ζ ( t ) ≡ 0 for all t ≥ 0 is equivalent to the following conditions on the tunable parameters: p ˙ n i ( 0 ) + k q p n i ( 0 ) = 0 , W n i ( 0 ) = 0 , and u n i ( 0 ) = 0 . Furthermore, based on (14), this means that the initial values of the filters should satisfy (15).
Thus, ζ ( t ) ≡ 0 leads to the filtered-state error dynamics
p ˙ n i = − k q p n i + M − 1 ( q i ) ( W n i θ + u n i ) .
Define the parameter estimation error θ ˜ = θ ^ + β − θ and the filtered control signal u n i = − W n i ( θ ^ + β ) ; then, the filtered-state error dynamics (19) can be rewritten as
p ˙ n i = − k q p n i − M − 1 ( q i ) W n i θ ˜ ,
and the dynamics of the estimation error is obtained as
θ ˜ ˙ = θ ^ ˙ + β ˙ = − γ n ( W n i ) T M − 1 ( q i ) W n i θ ˜ .
Set a candidate Lyapunov function for (11) as L i = 1 2 p n i T p n i + ( 1 / ( 2 m ̲ ) ) θ ˜ T θ ˜ ≥ 0 , where m ̲ denotes the minimum eigenvalue of M ( q i ) . Then, recalling γ n ≥ 1 / k q and taking the time derivative of L i along the trajectories of (20) and (21) yields
L ˙ i = p n i T [ − k q p n i − M − 1 ( q i ) W n i θ ˜ ] + ( 1 / m ̲ ) θ ˜ T ( θ ^ ˙ + β ˙ ) = − k q p n i T p n i − p n i T M − 1 ( q i ) W n i θ ˜ − ( γ n / m ̲ ) θ ˜ T ( W n i ) T M − 1 ( q i ) W n i θ ˜ ≤ − ( k q / 2 ) ∥ p n i ∥ 2 − ( γ n / 2 ) ∥ M − 1 ( q i ) W n i θ ˜ ∥ 2 ≤ 0 .
Thus, it can be concluded from Barbalat’s Lemma [13] that p n i → 0 and M − 1 ( q i ) W n i θ ˜ → 0 as t → ∞ . Then, based on the stability of the linear filters in (14), it can be concluded that p i → 0 as t → ∞ . Furthermore, since Z 1 is a negative-definite matrix in (11) and p i → 0 as t → ∞ , it is immediately concluded that z i → 0 and y i → 0 as t → ∞ . Therefore, q i → q r i and v i → v r i as t → ∞ .
Finally, the adaptive state-feedback control input u f i can be retrieved from the filtered control signal u n i in (17) and u n i = − W n i ( θ ^ + β ) as u f i = u ˙ n i + α n u n i = − W i ( θ ^ + β ) − W n i ( θ ^ ˙ + β ˙ ) = − W i ( θ ^ + β ) − γ n W n i ( W n i ) T [ p i + ( k q − α n ) p n i ] , which is the same as (13), thus concluding the proof. □
Remark 3. 
Founded on the non-certainty-equivalent principle, the adaptive law for the plant parameters can be designed as a stable filter, and the estimator performance can be improved through the tunable parameter in the filter. Moreover, the parameter estimate of the non-certainty-equivalent adaptive system encompasses not only the estimate from the adaptive dynamic subsystem but also additional nonlinear functions, whose inclusion within the estimated parameter vector enhances the performance of the controller. A typical non-certainty-equivalent adaptive design technique is the immersion and invariance manifold method, which aims to render the manifold invariant and attractive such that the overall system is stable. This idea is also applied to the adaptive estimator: if the error between the estimate of the plant parameter and its true value is chosen as the invariant manifold and a suitable nonlinear function is designed in the adaptive law, then the estimation error can always be bounded by small constants and can even converge asymptotically to zero. Thus, the non-certainty-equivalent adaptive controller generally has better parameter estimation capability than the traditional certainty-equivalent adaptive controller. Moreover, since the considered nonlinear mechanical system has a double-integrator structure and a linearly parameterizable model, the certainty-equivalent and non-certainty-equivalent principles can both be easily employed in the feedback control subsystem designed in this study.
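The role of the stable filters in (14) and (17) can be illustrated on a scalar plant m v ˙ = u : passing both sides through 1 / ( s + α ) with zero initial conditions gives the differentiation-free relation m ( v − α v f ) = u f , from which m is recovered algebraically from filtered signals. This is only a simplified sketch of the filtering idea, not the full controller of this subsection; the plant, input, and gains are illustrative:

```python
# Filtered-signal parameter recovery for m * dv/dt = u (illustrative values).
m_true, alpha, dt = 2.0, 1.0, 1e-3
v = v_f = u_f = 0.0
for step in range(int(10.0 / dt)):
    u = 1.0                         # constant test input
    v += dt * (u / m_true)          # plant: m * dv/dt = u
    v_f += dt * (-alpha * v_f + v)  # filtered velocity, dv_f/dt = -a*v_f + v
    u_f += dt * (-alpha * u_f + u)  # filtered input, du_f/dt = -a*u_f + u
w = v - alpha * v_f                 # filtered derivative of v (no differentiation)
m_hat = u_f / w
assert abs(m_hat - m_true) < 0.05
```

The filter time constant 1 / α plays the same tuning role as α n in (14): it trades noise sensitivity against estimation lag.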

3.3. Machine Adaptive Output-Feedback Control

The error Equation (9) can be reformulated as
M ( q i ) ( K i ) − 1 Z 1 − 1 z ¨ i + f a + f b = u f i ,
where f a = − M ( q i ) Z 1 − 1 Z ˙ 1 [ Z 1 − 1 Z 2 v r i − ( K i ) − 1 Z 1 − 1 z ˙ i ] + C ( q i , ( K i ) − 1 Z 1 − 1 z ˙ i ) [ ( K i ) − 1 Z 1 − 1 z ˙ i + 2 v r i − 2 Z 1 − 1 Z 2 v r i ] − M ( q i ) Z 1 − 1 Z ˙ 2 v r i , f b = M ( q i ) [ v ˙ r i − Z 1 − 1 Z 2 v ˙ r i ] + C ( q i , v r i − Z 1 − 1 Z 2 v r i ) ( v r i − Z 1 − 1 Z 2 v r i ) + g ( q i ) . Clearly, since f b can be written as f b = Ψ i θ based on the linear parametrization, the model (23) can be parameterized as
M ( q i ) ( K i ) − 1 Z 1 − 1 z ¨ i + f a + Ψ i θ = u f i .
Furthermore, it can be seen that M ( q i ) ( K i ) − 1 Z 1 − 1 and f a are first-order continuously differentiable functions.
The adaptive output feedback control law for (23) is proposed as
u f i = − ( K i ) T Z 1 T [ K 1 sgn ( z i + z f i ) − ( k 2 + 1 ) r f i + z i ] − Ψ i θ ^ ,
z ˙ f i = − z f i + r f i , z f i ( 0 ) = 0 ,
r f i = ξ i − ( k 2 + 1 ) z i ,
ξ ˙ i = − r f i − ( k 2 + 1 ) ( z i + r f i ) + z i − z f i , ξ i ( 0 ) = ( k 2 + 1 ) z i ( 0 ) ,
θ ^ ˙ = ( 1 / γ θ ) ( Ψ i ) T ( K i ) − 1 Z 1 − 1 ( z i + r f i ) ,
where z f i and r f i are filtered signals; θ ^ is the estimation of θ ; ξ i is an auxiliary variable used in the filter; K 1 is a constant, diagonal, positive-definite matrix; k 2 is a positive constant; and sgn ( a ) = [ sgn ( a 1 ) , sgn ( a 2 ) , , sgn ( a n ) ] T for any a R n with sgn ( · ) being the standard signum function.
To facilitate the stability analysis of the controlled system, define a signal as
η i = z ˙ i + z i + r f i ,
Then, taking the time derivative of r f i in (27) with a signal in (30) leads to
r ˙ f i = − r f i − ( k 2 + 1 ) η i + z i − z f i .
Taking the time derivative of η i in (30) yields
η ˙ i = z ¨ i − z f i − 2 r f i − k 2 η i .
After multiplying (32) by Z 1 − T ( K i ) − T M ( q i ) ( K i ) − 1 Z 1 − 1 and substituting (25) into (24), the closed-loop system is
Z 1 − T ( K i ) − T M ( q i ) ( K i ) − 1 Z 1 − 1 η ˙ i = − k 2 Z 1 − T ( K i ) − T M ( q i ) ( K i ) − 1 Z 1 − 1 η i − 1 2 Z 1 − T ( K i ) − T M ˙ ( q i ) ( K i ) − 1 Z 1 − 1 η i − Z 1 − T ( K i ) − T M ( q i ) ( K i ) − 1 Z ˙ 1 − 1 η i + f ˜ − K 1 sgn ( z i + z f i ) + ( k 2 + 1 ) r f i − z i − Z 1 − T ( K i ) − T Ψ i θ ˜ ,
where f ˜ = − Z 1 − T ( K i ) − T [ f a + M ( q i ) ( K i ) − 1 Z 1 − 1 ( z f i + 2 r f i ) − 1 2 M ˙ ( q i ) ( K i ) − 1 Z 1 − 1 η i − M ( q i ) ( K i ) − 1 Z ˙ 1 − 1 η i ] and the parameter estimation error is θ ˜ = θ ^ − θ .
Remark 4. 
Since f a defined in (23) is continuously differentiable, it can be shown that f ˜ in (33) is upper-bounded by ∥ f ˜ ∥ ≤ ϕ ( ∥ ϕ i ∥ ) ∥ ϕ i ∥ with ϕ i ≜ [ ( z i ) T , ( z f i ) T , ( r f i ) T , ( η i ) T ] T , where the positive function ϕ ( · ) is non-decreasing.
Lemma 1. 
Define a function l ( t ) = − ( η i ) T K 1 sgn ( z i + z f i ) . If the control gain matrix K 1 is selected to satisfy the sufficient condition k 1 j > 0 , where k 1 j is the jth entry of the diagonal matrix K 1 , then it can be derived that ∫ 0 t l ( τ ) d τ ≤ l ¯ b with the positive constant l ¯ b = ∑ j = 1 n k 1 j | z j i ( 0 ) | .
Proof. 
From (26) and (30), one has η i = ( z ˙ i + z ˙ f i ) + ( z i + z f i ) with z f i ( 0 ) = 0 . Then ∫ 0 t l ( τ ) d τ ≤ − ∫ 0 t d ( z i + z f i ) T d τ K 1 sgn ( z i + z f i ) d τ = − [ ∑ j = 1 n k 1 j | z j i ( τ ) + z f j i ( τ ) | ] 0 t ≤ l ¯ b . □
Theorem 3. 
Consider the output-constrained system (1) under Assumptions 1 and 2. If the adaptive output-feedback controller is designed by (25), then the closed-loop system signals are bounded and q i → q r i and v i → v r i as t → ∞ , where K 1 is chosen to satisfy k 1 j > 0 , and k 2 = k n + 1 / m ̲ with constant k n > 0 .
Proof. 
Define a function l c ≜ l ¯ b − ∫ 0 t l ( τ ) d τ ≥ 0 from Lemma 1. Choose a function as
L i = 1 2 [ ( z i ) T z i + ( z f i ) T z f i + ( r f i ) T r f i + γ θ θ ˜ T θ ˜ + 2 l c + ( ( K i ) − 1 Z 1 − 1 η i ) T M ( q i ) ( K i ) − 1 Z 1 − 1 η i ] ≥ 0 .
Note that (34) can be bounded by
σ a ( z i ) ∥ δ ∥ 2 ≤ L i ≤ σ b ( z i ) ∥ δ ∥ 2 ,
where δ = [ ( ϕ i ) T , θ ˜ T , l c ] T , σ a ( z i ) ≜ 1 2 min { 1 , γ θ , m ̲ ∥ ( K i ) − 1 Z 1 − 1 ∥ 2 } and σ b ( z i ) ≜ max { 1 , 1 2 γ θ , 1 2 m ¯ ∥ ( K i ) − 1 Z 1 − 1 ∥ 2 } . The time derivative of L i along the system trajectory of (33) is given by
L ˙ i ≤ − ( z i ) T z i − ( z f i ) T z f i − ( r f i ) T r f i + f c − k 2 ( η i ) T Z 1 − T ( K i ) − T M ( q i ) ( K i ) − 1 Z 1 − 1 η i ≤ − ∥ ϕ i ∥ 2 + [ ∥ η i ∥ ϕ ( ∥ ϕ i ∥ ) ∥ ϕ i ∥ − k n ∥ η i ∥ 2 ] ≤ − [ 1 − ϕ 2 ( ∥ ϕ i ∥ ) 4 k n ] ∥ ϕ i ∥ 2 ,
where f c = ( η i ) T f ˜ − ( z ˙ i ) T Z 1 − T ( K i ) − T Ψ i θ ˜ , f c ≤ ∥ η i ∥ ϕ ( ∥ ϕ i ∥ ) ∥ ϕ i ∥ , k n ≥ 1 4 ϕ 2 ( ( σ b ( z i ( 0 ) ) / σ a ( z i ( 0 ) ) ) 1 / 2 ∥ δ ( 0 ) ∥ ) , and ∥ δ ( 0 ) ∥ = ( ∥ z i ( 0 ) ∥ 2 + ∥ η i ( 0 ) ∥ 2 + ∑ j = 1 n k 1 j | z j i ( 0 ) | ) 1 / 2 . This means that
L ˙ i ≤ − ϖ ∥ ϕ i ∥ 2
for k n > 1 4 ϕ 2 ( ∥ ϕ i ∥ ) , i.e., ∥ ϕ i ∥ < ϕ − 1 ( 2 √ k n ) , where ϖ is a positive constant. Then, following Theorem 8.4 in [13], one can define the region Λ ≜ { δ ∈ R 4 n × R m × R + | ∥ δ ∥ < ϕ − 1 ( 2 √ k n ) } . From (35) and (37), we know that z i , z f i , r f i , η i , and θ ^ are bounded. From (30), we further know that z ˙ i is also bounded. Then, from (26)–(28), we know that z ˙ f i , ξ i , and ξ ˙ i are bounded. Furthermore, since f a is a first-order continuously differentiable function, it is also bounded. Then, from (25) and (23), we know that the feedback control input u f i and z ¨ i are bounded. Based on the above boundedness statements, it can be deduced that W ˙ c ( δ ) is also bounded, which is a sufficient condition for W c ( δ ) = ϖ ∥ ϕ i ∥ 2 being uniformly continuous. If we define the region Λ 0 ≜ { δ ∈ Λ | W b ( δ ) < σ a ( z i ( 0 ) ) ( ϕ − 1 ( 2 √ k n ) ) 2 } , then we can recall Theorem 8.4 in [13] to state that ϖ ∥ ϕ i ∥ 2 → 0 as t → ∞ for any δ ( 0 ) ∈ Λ 0 . Then, from the definition of ϕ i in Remark 4, we know z i → 0 , z f i → 0 , r f i → 0 , and η i → 0 as t → ∞ for any δ ( 0 ) ∈ Λ 0 . Furthermore, it can be concluded from (30) that z ˙ i → 0 as t → ∞ for any δ ( 0 ) ∈ Λ 0 . Therefore, all closed-loop system signals are bounded, and q i → q r i and v i → v r i as t → ∞ for any δ ( 0 ) ∈ Λ 0 . It can further be noticed that a larger k n results in a larger attraction region Λ that includes any initial condition of the system, so that a semi-global type of stability result is derived. Then, based on W b ( δ ) = σ b ( z i ) ∥ δ ∥ 2 and Λ 0 ≜ { δ ∈ Λ | W b ( δ ) < σ a ( ϕ − 1 ( 2 √ k n ) ) 2 } , we can derive that the attraction region is ∥ δ ( 0 ) ∥ < ( σ a ( z i ( 0 ) ) / σ b ( z i ( 0 ) ) ) 1 / 2 ϕ − 1 ( 2 √ k n ) such that W b ( δ ( 0 ) ) < σ a ( z i ( 0 ) ) ( ϕ − 1 ( 2 √ k n ) ) 2 . □
In spite of the availability of the proposed adaptive output-feedback controller (25), the velocity z ˙ i of system (23) is still unmeasured, so the safety of system (1) cannot be monitored in real time. Fortunately, based on the separation principle of observation and control, a velocity observer for (23) can be designed under the adaptive output-feedback controller (25). In fact, once the velocity observation z ^ ˙ i is derived by the velocity observer, the observation of the actual velocity v i in (5) can be directly computed from (6) and the first sub-equation in (9) as
v ^ i = ( K i ) − 1 [ Z 3 z ^ ˙ i + Z 4 x ˙ r i ] ,
where Z 3 = diag ( e z 1 i x r 1 i , e z 2 i x r 2 i , … , e z n i x r n i ) and Z 4 = diag ( e z 1 i , e z 2 i , … , e z n i ) . Thus, the objective of the velocity observer design is to ensure that the observation error z ˜ ˙ i ≜ z ˙ i − z ^ ˙ i → 0 as t → ∞ using only the measurement z i ( t ) .
To facilitate designing the velocity observer for (23) subject to parametric uncertainty, it is assumed that the matrix M ( q i ) can be written as M ( q i ) = M 0 ( q i ) + M Δ ( q i ) with known part M 0 ( q i ) and unknown part M Δ ( q i ) . Then the system model (23) is rewritten as
z ¨ i = f d + Z 1 K i M 0 − 1 ( q i ) u f i ,
where f d = − Z 1 K i M ¯ Δ − 1 ( q i ) u f i − Z 1 K i M − 1 ( q i ) ( f a + f b ) and M ¯ Δ − 1 ( q i ) ≜ M 0 − 1 ( q i ) − M − 1 ( q i ) .
Assumption 3. 
The parametric uncertainty M Δ ( q i ) is bounded, and f d in (39) is a first-order continuously differentiable bounded function.
Based on the above assumption, the following velocity observer is designed as
z ^ ˙ i = β o i + K a z ˜ i , β ˙ o i = K b sgn ( z ˜ i ) + K c z ˜ i ,
where z ˜ i = z i − z ^ i , β o i is an auxiliary variable, and K j ∈ R n × n ( j = a , b , c ) are constant, diagonal, and positive-definite matrices. Furthermore, the observer (40) can be reformulated as
z ^ ¨ i = K b sgn ( z ˜ i ) + K c z ˜ i + K a z ˜ ˙ i .
Then, based on the system model (39), the observation error system is
z ˜ ¨ i = f d + Z 1 K i M 0 − 1 ( q i ) u f i − K b sgn ( z ˜ i ) − K c z ˜ i − K a z ˜ ˙ i .
Define a signal k β i = z ˜ ˙ i + z ˜ i and set K c = K a − I n with the n × n identity matrix I n . Then, from (42), one has
k ˙ β i = f s − K b sgn ( z ˜ i ) − K c k β i ,
where f s = f d + Z 1 K i M 0 − 1 ( q i ) u f i . Note that f s and f ˙ s are bounded functions due to Assumption 3.
It is easily proven that the proposed velocity observer (40) for system (23) guarantees that z ˜ i → 0 and z ˜ ˙ i → 0 as t → ∞ if the gain matrix K b ≜ diag ( k b 1 , k b 2 , … , k b n ) satisfies k b j > | f s j | + | f ˙ s j | , where k b j and f s j are the jth entries of K b and f s , respectively.
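A scalar sketch of the observer (40) on a synthetic measurement z ( t ) = sin t (so that f s plays the role of a bounded acceleration with | f s | ≤ 1 and | f ˙ s | ≤ 1 ) is given below. The gains are illustrative assumptions, with K c taken as K a − I n in the scalar case and k b chosen above the bound | f s | + | f ˙ s | :

```python
import numpy as np

dt, Ka, Kb = 1e-4, 5.0, 3.0
Kc = Ka - 1.0                      # scalar reading of K_c = K_a - I_n
z_hat, beta = 0.0, 0.0
for step in range(int(10.0 / dt)):
    z = np.sin(step * dt)          # measured position signal
    z_tilde = z - z_hat            # observation error
    z_hat += dt * (beta + Ka * z_tilde)
    beta += dt * (Kb * np.sign(z_tilde) + Kc * z_tilde)
# After the transient, z_hat tracks z, and beta + Ka*z_tilde estimates dz/dt.
assert abs(z_hat - np.sin(10.0)) < 0.05
```

The signum term rejects the unknown bounded acceleration, so no model of the plant is needed inside the observer, only the position measurement.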
Theorem 4. 
Given the output-constrained system (1) under Assumptions 1 and 2 and the reference signal (7), assume q i ( 0 ) ∈ Q a for all i ∈ { 1 , 2 , … , N c } . If the adaptive state-feedback controller for (1) is given by (12) or (13), or if the adaptive output-feedback controller for (1) is given by (25), then q i → q r i and v i → v r i as t → ∞ , and q i ( t ) ∈ Q a for all t ≥ 0 .
Proof. 
It has been clearly proven in Theorems 1 and 2 that q i → q r i and v i → v r i as t → ∞ under the condition that the feedback controller does not switch from u f i to u f j with i ≠ j . Now consider the case of switching from u f i to u f j with i ≠ j . If u s in (1) switches without delay from u f i to u f j at t = t 0 , then L j ( t 0 ) ≤ L i ( t 0 ) and L ˙ j ( t ) < 0 for t ≥ t 0 . If u s in (1) does not switch directly from u f i to u f j , e.g., if ( q , q ˙ ) departs from O d i at t = t 0 and falls into O d j at t = t 1 , then owing to the existence of O h i , there is δ i t > 0 ensuring that u s ( t ) = u f i ( t ) for t ∈ [ t 0 , t 0 + δ i t ] . Define Δ t = min i = 1 , … , N c δ i t and note that Δ t > 0 ; then, we have ∫ t 0 t 0 + Δ t L ˙ j d t ≤ L j ( t 0 + Δ t ) − L i ( t 0 ) and L j ( t 0 + Δ t ) ≤ L i ( t 0 ) . Let { j 1 , j 2 , … , j I } represent a succession of motivation constraint groups with j i ∈ { 1 , 2 , … , N c } for i ∈ { 1 , 2 , … , I } , where the j i th constraint group is motivated in the time interval ( t j i , T j i ] with t j i + 1 = T j i for i ∈ { 1 , 2 , … , I − 1 } . Thus, 0 ≤ L j I ( T j I ) < ⋯ < L j 2 ( T j 2 ) < L j 1 ( T j 1 ) . Setting the multiple Lyapunov function L ( t ) = L j i ( t ) for the whole system if t ∈ ( t j i , T j i ] , it follows that q i → q r i and v i → v r i as t → ∞ for t ≥ 0 . Furthermore, founded on the signal z i , it can be derived that x j i ( t ) ≤ − ϵ < 0 ; thus, q i ( t ) ∈ Q a for t ≥ 0 . □

3.4. Human Control Inputs

In order to provide human inputs through joystick handles, the handle position should be moved away from its zero position, and the human can give the mechanical system different speed and direction commands based on the human’s intent. When this occurs, a feedback force is exerted on the human’s hand, where the force from the handle scales linearly with the displacement away from the tracked position along each degree of freedom of the mechanical system, and the control force can be expressed as
u h = k p q ^ h
where k_p > 0 is a spring constant and q̂_h represents the deviation from the tracked position along each degree of freedom. Note that only the stiffness parameter k_p is used to compute the joystick feedback forces, so the human experiences an identical stiffness response in both directions.
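As a small illustration, the linear force law above can be sketched as follows; the gain value and displacement vector are hypothetical, and only the relation u_h = k_p q̂_h is taken from the text.

```python
import numpy as np

def joystick_force(q_hat_h, k_p=2.0):
    """Joystick control force u_h = k_p * q_hat_h: it scales linearly
    with the handle's displacement from the tracked position along
    each degree of freedom (k_p is a hypothetical spring constant)."""
    return k_p * np.asarray(q_hat_h, dtype=float)

# Handle displaced by 0.1 along x, -0.2 along y, and 0 in yaw
u_h = joystick_force([0.1, -0.2, 0.0])
```

Because the same k_p scales every axis, the stiffness felt by the operator is identical in all degrees of freedom.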
Assumption 4. 
In general, the human–machine interactive system based on joystick handles is passive, that is, ∫_0^t (u_h^T q̇_h − u_e^T q̇_i) dt ≥ 0, where u_h denotes the input force exerted by the human, u_e denotes an external force exerted by the environment, q̇_h is the velocity of the joystick handle, and q̇_i is the velocity of the controlled mechanical system. Since the mechanical system does not manipulate its environment, no external forces act on the system apart from disturbances, so ∫_0^t u_h^T q̇_h dt ≥ 0 and the human control input is passive.
Measures of the human's intent to control the position of the mechanical system can be defined using the above assumption. Denote the rate at which the joystick handle is displaced from the tracked position as q̂˙_h. Then, the human's operation intent is measured as
v_h = ∫_0^t u_h^T q̂˙_h dt = ∫_0^t k_p q̂_h^T q̂˙_h dt = (1/2) k_p q̂_h^T q̂_h ≥ 0
Note that v_h > 0 whenever the joystick handle is moved away from the tracked position in any direction, and v_h = 0 only when q̂_h = 0, so the human's intent to control the system is conveniently measured.
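Using the closed form above, the intent measure can be sketched directly; the gain and displacement values are hypothetical.

```python
import numpy as np

def human_intent(q_hat_h, k_p=2.0):
    """Intent measure v_h = (1/2) * k_p * ||q_hat_h||^2: zero only at
    the tracked position, strictly positive for any displacement."""
    q = np.asarray(q_hat_h, dtype=float)
    return 0.5 * k_p * float(q @ q)

v_zero = human_intent([0.0, 0.0, 0.0])   # no displacement -> no intent
v_move = human_intent([0.3, -0.4, 0.0])  # any displacement -> v_h > 0
```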

4. Human–Machine Shared Controller Design and Analysis

Within the ith group of n constraints, the state space of the system is composed of three subsets O_ki = diag((K_i)^{−1}, (K_i)^{−1})(O̅_ki − col(T_i, 0)) with k ∈ {s, h, d}. Then, O = O_si ∪ O_hi ∪ O_di for all i ∈ {1, 2, …, N_c}. This fact implies that the union of the safe, hazardous, and hysteresis sets with respect to the ith group of activated constraints equals the whole feasible set. Meanwhile, the overall safe, hazardous, and hysteresis sets for the varied constraints are, respectively, constructed as O_s = O_s1 ∩ O_s2 ∩ ⋯ ∩ O_sN_c, O_d = O_d1 ∪ O_d2 ∪ ⋯ ∪ O_dN_c, and O_h = O ∖ (O_d ∪ O_s). Based on the above sets, the state-feedback-based human–machine shared control law for (1) is designed as
u_s = (min_{1≤i≤N_c} k_si) k_h u_h + Σ_{i=1}^{N_c} (1 − k_si) k_f u_fi,
where k_h = e^{−v_h}, k_f = 1 − e^{−v_h}, and v_h is used as a measure of the human's intent to control the system based on the mixed-initiative approach for control input blending, with k_f + k_h = 1; u_fi is the state-feedback controller (12) or (13); and the operation authority sharing function is
k_si(q, v, t) =
 0, if (q, v) ∈ O_di and L_i = min_{1≤j≤N_c} L_j,
 l_si(q, v, t), if (q, v) ∈ O_hi,
 1, otherwise,
with l_si(q, v, t) = 0 if k_si(t⁻) = 0 and l_si(q, v, t) = 1 if k_si(t⁻) = 1.
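The authority-sharing logic can be sketched for a single constraint group as follows; the set-membership predicates stand in for the tests against O_di and O_si, and the blend uses k_h = e^{−v_h}, k_f = 1 − e^{−v_h}. This is a simplified single-group sketch under those assumptions, not the full multi-constraint law.

```python
import math

class AuthoritySharing:
    """Hysteresis sharing weight k_s for one constraint group: 0 in the
    hazardous set (machine in charge), 1 in the safe set (human in
    charge), and the previous value is held in the hysteresis set."""

    def __init__(self, in_hazard, in_safe):
        self.in_hazard = in_hazard  # placeholder predicate for (q, v) in O_d
        self.in_safe = in_safe      # placeholder predicate for (q, v) in O_s
        self.k_s = 1.0              # start with full human authority

    def update(self, q, v):
        if self.in_hazard(q, v):
            self.k_s = 0.0
        elif self.in_safe(q, v):
            self.k_s = 1.0
        # otherwise: hysteresis region, keep the previous value
        return self.k_s

def blend(k_s, v_h, u_h, u_f):
    """Mixed-initiative blend u_s = k_s*k_h*u_h + (1 - k_s)*k_f*u_f,
    with k_h = exp(-v_h) and k_f = 1 - k_h (so k_h + k_f = 1)."""
    k_h = math.exp(-v_h)
    k_f = 1.0 - k_h
    return [k_s * k_h * h + (1.0 - k_s) * k_f * f
            for h, f in zip(u_h, u_f)]

# 1-D example: hazardous below 0, safe above 1, hysteresis in between
share = AuthoritySharing(lambda q, v: q < 0.0, lambda q, v: q > 1.0)
```

Holding the previous weight inside the hysteresis region is what prevents chattering when the state lingers between the safe and hazardous sets.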
Furthermore, the output-feedback-based human–machine shared control law for (1) is designed as
u_s = (min_{1≤i≤N_c} k_oi) k_h u_h + Σ_{i=1}^{N_c} (1 − k_oi) k_f u_fi,
where u f i is the output-feedback controller (25), and the operation authority sharing function is
k_oi(q, v̂, t) =
 0, if (q, v̂) ∈ O_di and L_i = min_{1≤j≤N_c} L_j,
 l_oi(q, v̂, t), if (q, v̂) ∈ O_hi,
 1, otherwise,
with l_oi(q, v̂, t) = 0 if k_oi(t⁻) = 0 and l_oi(q, v̂, t) = 1 if k_oi(t⁻) = 1.
Lemma 2. 
Consider the nonlinear constrained system (1) with (q(0), q̇(0)) ∈ O_s and the shared controller (46) or (47). Suppose there is a t̄ > 0 such that q(t̄) ∉ Q_a; then, there exists 0 < t_s < t̄ such that (q(t_s), v(t_s)) ∈ O_d.
Theorem 5. 
Consider the nonlinear constrained system (1) under the shared controller (46) or (47) with q(0) ∈ Q_a. Then, there exist a minimum switching time interval Δt > 0 between two groups of activated constraints and parameters b_2 > b_1 > 0 in Definition 2 ensuring that the s-closed-loop system satisfies (1) q(t) ∈ Q_a for all t > 0; (2) Λ_s = Π_{O_s}(Λ_h); (3) u_s(t) = u_h(t) for all times t at which (q(t), v(t)) ∈ O_s ∖ O_d and (q(t), v̂(t)) ∈ O_s ∖ O_d for the state-feedback and output-feedback cases, respectively.
Proof. 
(1) By the proof of Theorem 2, the feedback controller u_f ensures that the state q stays in Q_a along all trajectories. Moreover, Lemma 1 implies that any trajectory in O must first enter O_d, where the feedback controller is active. Thus, the set O is forward invariant, so q(t) ∈ Q_a for all t > 0. (2) If Λ_h ⊂ O_s, then Λ_s = Π_{O_s}(Λ_h) follows directly from the general result presented in [16], and Λ_h is the Λ-limit set of the s-closed-loop system. Otherwise, the Λ-limit set of the s-closed-loop system with machine control inputs is Π_{O_s}(Λ_h) in light of the proof of Theorem 2. Moreover, Lemma 1 implies that the system's trajectory enters O_d, where the machine controller is activated, thereby restoring the system's state to O_s before it exits the feasible set O. Claim (3) follows directly from the definition of the shared control law. □
Remark 5. 
Safety (i.e., being collision-free) is a crucial requirement for autonomous systems. The safety of the closed-loop system with shared control has been proven in Theorem 5. Compared with pure human control, the shared scheme is much safer; thus, we conclude that safety and collision-avoidance capability can be improved by using shared control. Human control remains important in the design of many autonomous systems. Some autonomous systems cannot be widely promoted in the market because human input is not integrated into the control system, even though the system is meant to serve humans. In fact, manned systems are preferred over unmanned systems by many people, as a human involved in the control loop can act as the supervisor or the controller according to his/her willingness. Moreover, fully autonomous systems face many other issues. For instance, fully autonomous driving technologies still have many problems to solve before becoming available on the market, such as insurance, legal, and ethical issues.

5. Simulation Validations

Consider the omnidirectional mobile robot [17] characterized by (1), where the signals and parameters of the system are given by q = [x, y, ψ]^T, q̇ = [ẋ, ẏ, ψ̇]^T,
M(q) = [ m1 0 0; 0 m1 0; 0 0 m3 ],  C(q, q̇) = a [ 0 ψ̇ 0; −ψ̇ 0 0; 0 0 0 ],
u_s = B τ_s,  B = (1/r) [ sin(δ+ψ) sin(δ−ψ) cos(ψ); cos(δ+ψ) cos(δ−ψ) sin(ψ); L_r L_r L_r ],
G(q) = 0, m1 = M_p + 3I_r/(2r²), m3 = I_p + 3I_r L_r²/(2r²), and a = 3I_r/(2r²), where (x, y, ψ) denotes the robot's mass-center position and yaw angle; M_p and I_p are the mass and inertia parameters; r and I_r denote the wheel's radius and inertia; and L_r denotes the distance from the mass center to any one of the wheels. All robot parameters are set to M_p = 9.85 kg, I_r = 0.0086 kg·m², I_p = 0.522 kg·m², L_r = 0.205 m, r = 0.03965 m, and δ = π/6, respectively. In the simulation example, the initial values of the robot's mass-center position and yaw angle are set to q = [0.9, 3, 0]^T, and the initial values of its derivatives are set to q̇ = [0, 0, 0]^T. The parametric vector θ = [m1, m3, a]^T is unknown because of the unknown parameters M_p, I_r, and I_p of the robot, and the initial value of its estimate is set to θ̂ = [0, 0, 0]^T. Then, the human–machine shared controller is given by τ_s = B^{−1} u_s under (46) or (47), and the human control inputs are imitated by a simple proportional controller (44).
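The lumped parameters and system matrices above can be assembled numerically as sketched below; the sign pattern inside C is assumed to be the standard skew-symmetric one, since the source typesetting does not show the signs explicitly.

```python
import numpy as np

# Robot parameters from the simulation example
M_p, I_p = 9.85, 0.522           # platform mass (kg) and inertia (kg*m^2)
I_r, r = 0.0086, 0.03965         # wheel inertia (kg*m^2) and radius (m)
L_r, delta = 0.205, np.pi / 6    # wheel offset (m) and mounting angle (rad)

# Lumped parameters m1, m3, a
m1 = M_p + 3 * I_r / (2 * r**2)
m3 = I_p + 3 * I_r * L_r**2 / (2 * r**2)
a = 3 * I_r / (2 * r**2)

def M():
    """Constant diagonal inertia matrix."""
    return np.diag([m1, m1, m3])

def C(psi_dot):
    """Coriolis matrix (skew-symmetric sign pattern assumed)."""
    return a * psi_dot * np.array([[0.0, 1.0, 0.0],
                                   [-1.0, 0.0, 0.0],
                                   [0.0, 0.0, 0.0]])

def B(psi):
    """Input matrix mapping the three wheel torques to generalized
    forces, u_s = B @ tau_s."""
    return (1.0 / r) * np.array(
        [[np.sin(delta + psi), np.sin(delta - psi), np.cos(psi)],
         [np.cos(delta + psi), np.cos(delta - psi), np.sin(psi)],
         [L_r, L_r, L_r]])
```

A quick sanity check is that C is skew-symmetric for any ψ̇ (so it does no work in the energy balance) and that m1 exceeds the bare platform mass because of the reflected wheel inertias.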
Assume the convex feasible set Q a is given by
Q_a = { q = [x, y, ψ]^T : x ≤ 1, 0.5 ≤ y ≤ 4, −π/2 < ψ < π/2 }.
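A membership test for this feasible set might look as follows; the inequality directions (x ≤ 1, 0.5 ≤ y ≤ 4, |ψ| < π/2) are reconstructed from the flattened set definition, so treat the exact bounds as assumptions.

```python
import numpy as np

def in_convex_Qa(q):
    """Check q = [x, y, psi] against the convex feasible set
    (bound directions assumed from the flattened definition)."""
    x, y, psi = q
    return (x <= 1.0) and (0.5 <= y <= 4.0) and (-np.pi / 2 < psi < np.pi / 2)

inside = in_convex_Qa([0.9, 3.0, 0.0])    # the simulated initial state
outside = in_convex_Qa([0.9, 5.0, 0.0])   # violates the y bound
```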
Figure 1, Figure 2, Figure 3 and Figure 4 show the trajectory tracking results based on the proposed human–machine shared controller (46) with (12) and (13) and the shared controller (47) with (25)–(29), respectively. Figure 1 clearly demonstrates the trajectory tracking performance under output constraints, where the arrows indicate the trajectory direction. With the proposed shared controllers, the robot does not violate the output constraints and accurately follows the desired trajectory. Figure 2, Figure 3 and Figure 4, respectively, illustrate the system responses under the three shared controllers, allowing the evolution of the system states over time to be observed in detail. Furthermore, the control input of the system remains relatively small. If the initial position is far from the desired trajectory, the control input increases so that the robot can rapidly track the required trajectory. When appropriate control parameters are selected, the robot still performs trajectory tracking stably, and the stability of the system is not affected by variations in the initial position and control input. These results illustrate that the robot's motion trajectory is kept within the convex feasible set and that safety is guaranteed by combining human and machine control inputs, where the feedback controller has full control authority if the state belongs to the hazardous set and the human operator is in full charge if the state belongs to the safe set. Furthermore, the proposed adaptive output-feedback controller (47) is employed to complete the straight-line path-tracking task within the convex feasible set, and the simulation results in Figure 1, Figure 2, Figure 3 and Figure 4 show that the adaptive output-feedback shared controller also ensures satisfactory performance of the human–robot system in the convex feasible set of the motion space.
Assume the non-convex feasible set Q a is given by
Q_a = { q = [x, y, ψ]^T : x ≤ 0; y ≥ 0 if x ≥ 0; −π/2 < ψ < π/2 }.
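Similarly, a hedged sketch of the non-convex membership test, reading the flattened definition as excluding the quadrant {x > 0, y < 0} (the rectangular obstacle region); the inequality directions are assumptions.

```python
import numpy as np

def in_nonconvex_Qa(q):
    """Check q = [x, y, psi] against the non-convex feasible set:
    feasible when x <= 0, or when y >= 0 if x >= 0, i.e. the
    quadrant {x > 0, y < 0} is excluded (bounds assumed)."""
    x, y, psi = q
    if not (-np.pi / 2 < psi < np.pi / 2):
        return False
    return (x <= 0.0) or (y >= 0.0)
```

Under this reading the set is non-convex: two feasible points such as [−1, −1] and [1, 1] are connected only by paths that bend around the excluded quadrant.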
Under the non-convex feasible set in the robot motion space, the desired straight-line path can still be tracked well with the presented adaptive shared controllers, and the tracking performance remains satisfactory, as shown in Figure 5, Figure 6, Figure 7 and Figure 8, where the rectangular obstacle is well avoided. The adaptive state-feedback shared controller (46) with (12) and (13) drives the robot away from the collision area even when the robot approaches the border of the feasible set. Similarly, Figure 5, Figure 6, Figure 7 and Figure 8 show that the performance of the robot working in the non-convex feasible set is guaranteed by the presented adaptive output-feedback controller (47). All results imply that the presented shared control approaches are effective even when the feasible set is non-convex.

6. Conclusions

This study developed human–machine shared controllers for output-constrained Euler–Lagrange systems subject to parametric uncertainties, where certainty-equivalent and non-certainty-equivalent principles were employed to develop two types of adaptive state-feedback shared controllers, and an output-feedback shared controller was also designed. A hysteresis sharing function was designed to combine the human controller and the machine controller, and the trajectory tracking performance of the shared control systems was analyzed in a Lyapunov framework. Simulation results illustrate that the presented human–machine shared controllers are effective for Euler–Lagrange systems even under non-convex feasible regions. Safety is guaranteed by the presented shared controllers, since the desired trajectory is tracked precisely and collision areas are avoided. The robustness of the system is mainly reflected in its good tracking performance under model parametric uncertainties. In addition, owing to current experimental limitations, we have not yet conducted experimental validation. In future studies, we therefore plan to apply the proposed method to our omnidirectional mobile robot platform to evaluate its feasibility and robustness.

Author Contributions

Conceptualization, L.S.; Methodology, L.S.; Software, K.T.; Validation, K.T.; Formal analysis, K.T. and L.S.; Investigation, K.T.; Data curation, K.T.; Writing—original draft, K.T.; Writing—review & editing, L.S.; Visualization, K.T.; Supervision, L.S.; Project administration, L.S.; Funding acquisition, L.S. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the National Natural Science Foundation of China (no. 62373038) and the Fundamental Research Funds for the Central Universities (no. FRF-IDRY-GD22-002).

Data Availability Statement

Data not available due to ethical restrictions.

Conflicts of Interest

The authors declare that they have no conflict of interest.

References

  1. Jiang, J.; Astolfi, A. State and output-feedback shared control for a class of linear constrained systems. IEEE Trans. Autom. Control 2015, 61, 389–392. [Google Scholar] [CrossRef]
  2. Sun, L.; Jiang, J. Adaptive state-feedback shared control for constrained uncertain mechanical systems. IEEE Trans. Autom. Control 2022, 67, 949–956. [Google Scholar] [CrossRef]
  3. Habboush, A.; Yildiz, Y. An adaptive human pilot model for adaptively controlled systems. IEEE Control Syst. Lett. 2022, 6, 1964–1969. [Google Scholar] [CrossRef]
  4. Tika, A.; Bajcinca, N. Predictive control of cooperative robots sharing common workspace. IEEE Trans. Control Syst. Technol. 2024, 32, 456–471. [Google Scholar] [CrossRef]
  5. Becanovic, F.; Bonnet, V.; Dumas, R.; Jovanovic, K.; Mohammed, S. Force sharing problem during gait using inverse optimal control. IEEE Robot. Autom. Lett. 2023, 8, 872–879. [Google Scholar] [CrossRef]
  6. Xu, P.; Wang, Z.; Ding, L.; Li, Z.; Shi, J.; Gao, H.; Liu, G.; Huang, Y. A closed-loop shared control framework for legged robots. IEEE-ASME Trans. Mechatron. 2024, 29, 190–201. [Google Scholar] [CrossRef]
  7. Li, X.; Wang, Y. Shared steering control for human-machine co-driving system with multiple factors. Appl. Math. Modell. 2021, 100, 471–490. [Google Scholar] [CrossRef]
  8. Fang, Z.; Wang, J.; Wang, Z.; Chen, J.; Yin, G.; Zhang, H. Human-machine shared control for path following considering driver fatigue characteristics. IEEE Trans. Intell. Transp. Syst. 2024, 25, 7250–7264. [Google Scholar] [CrossRef]
  9. Feng, J.; Yin, G.; Liang, J.; Lu, Y.; Xu, L.; Zhou, C.; Peng, P.; Cai, G. A robust cooperative game theory-based human-machine shared steering control framework. IEEE Trans. Transp. Elect. 2024, 10, 6825–6840. [Google Scholar] [CrossRef]
  10. Guo, W.; Zhao, S.; Cao, H.; Yi, B.; Song, X. Koopman operator-based driver-vehicle dynamic model for shared control systems. Appl. Math. Modell. 2023, 114, 423–446. [Google Scholar] [CrossRef]
  11. Zhou, H.; Zhang, X.; Liu, J. A corrective shared control architecture for human–robot collaborative polishing tasks. Robot. Comput.-Integr. Manuf. 2025, 92, 102876. [Google Scholar] [CrossRef]
  12. Ortega, R.; Perez, J.A.L.; Nicklasson, P.J.; Sira-Ramirez, H. Passivity-Based Control of Euler-Lagrange Systems: Mechanical, Electrical and Electromechanical Applications; Springer: London, UK, 2013. [Google Scholar]
  13. Khalil, H. Nonlinear Systems; Prentice-Hall: Englewood Cliffs, NJ, USA, 2002. [Google Scholar]
  14. Seo, D.; Akella, M.R. Non-certainty equivalent adaptive control for robot manipulator systems. Syst. Control Lett. 2009, 58, 304–308. [Google Scholar] [CrossRef]
  15. Astolfi, A.; Karagiannis, D.; Ortega, R. Nonlinear and Adaptive Control Design with Applications; Springer: London, UK, 2008. [Google Scholar]
  16. Prieur, C.; Teel, A. Uniting local and global output feedback controller. IEEE Trans. Autom. Control 2010, 56, 1636–1649. [Google Scholar] [CrossRef]
  17. Sira-Ramirez, H.; Lopez-Uribe, C.; Velasco-Villa, M. Linear observer-based active disturbance rejection control of the omnidirectional mobile robot. Asian J. Control 2013, 15, 51–63. [Google Scholar] [CrossRef]
Figure 1. Trajectory tracking results under convex Q a .
Figure 2. System response based on (46) with (12) under convex Q a .
Figure 3. System response based on (46) with (13) under convex Q a .
Figure 4. System response based on (47) under convex Q a .
Figure 5. Trajectory tracking results under non-convex Q a .
Figure 6. System response based on (46) with (12) under non-convex Q a .
Figure 7. System response based on (46) with (13) under non-convex Q a .
Figure 8. System response based on (47) under non-convex Q a .

Share and Cite

MDPI and ACS Style

Tang, K.; Sun, L. Adaptive Shared Trajectory Tracking Control for Output-Constrained Euler–Lagrange Systems. Actuators 2025, 14, 383. https://doi.org/10.3390/act14080383
