 
 
Article

Adaptive Learning Control for Vehicle Systems with an Asymmetric Control Gain Matrix and Non-Uniform Trial Lengths

School of Railway Transportation, Guangzhou Railway Polytechnic, Guangzhou 511300, China
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(8), 1203; https://doi.org/10.3390/sym17081203
Submission received: 1 July 2025 / Revised: 21 July 2025 / Accepted: 24 July 2025 / Published: 29 July 2025
(This article belongs to the Special Issue Symmetry and Asymmetry in Intelligent Control and Computing)

Abstract

Intelligent driving is a key technology in the field of automotive manufacturing due to its advantages in environmental protection, energy efficiency, and economy. However, since the intelligent driving model is an uncertain multi-input multi-output dynamic system, especially in an interactive environment, it faces uncertainties such as non-uniform trial lengths, unknown nonlinear parameters, and unknown control directions. In this paper, an adaptive iterative learning control method is proposed for vehicle systems with non-uniform trial lengths and asymmetric control gain matrices. Unlike existing research on adaptive iterative learning control with non-uniform trial lengths, this paper assumes that the elements of the system's control gain matrix are asymmetric. Therefore, the assumption made in traditional adaptive iterative learning methods that the control gain matrix of the system is known, or is real, symmetric, and positive definite (or negative definite), is relaxed. Finally, a composite energy function is designed to prove the convergence of the system, and the effectiveness of the adaptive iterative learning method is verified on vehicle systems. This paper addresses the challenges in intelligent driving control and decision-making caused by environmental and system uncertainties and provides a theoretical basis and technical support for intelligent driving, promoting the high-quality development of intelligent transportation.

1. Introduction

As an important component of future transportation systems, intelligent driving has significant development advantages. In the actual operation of intelligent driving vehicles [1,2,3,4], tracking task repetition is one of the common tasks. This repetition typically involves the repeated driving of vehicles along specific paths or trajectories, and it occurs in fixed-route buses, logistics delivery vehicles, park shuttle buses, port container transportation, airport baggage transportation, and factory material transportation. These operations are usually highly regular and predictable, making them well-suited for automated processing by intelligent driving systems. By continuously optimizing path-tracking algorithms and control strategies, intelligent driving vehicles can achieve efficient and safe operation in these scenarios.
For trajectory-tracking tasks that are repeated within a finite time interval, which is different from asymptotic tracking over infinite time [5,6,7,8,9,10,11,12,13,14,15,16,17,18], iterative learning control [19,20,21,22,23,24,25,26] is an effective control strategy that performs the same task repeatedly over a finite time interval. The core idea is to improve tracking performance from one execution (called a "trial", "pass", or "iteration") to the next by utilizing data (such as tracking errors and control signals) from previous executions, leveraging past control experience to enhance the current tracking performance of dynamic systems. For example, iterative learning control effectively addresses the challenges of repetitive operations for underwater robots in complex hydrodynamic environments such as subsea pipeline inspection systems and underwater equipment maintenance scenarios. These applications have promoted the development of iterative learning control methods. Wei et al. proposed an iterative learning control (ILC) approach for nonlinear systems with iteratively varying reference trajectories and random initial state shifts, and established theoretical proofs of error convergence [26]. The advantage of the iterative learning control method is that it does not require a precisely known system model, the controller structure is simple, and it is suitable for controlled systems with high tracking accuracy requirements, strong coupling, or complex nonlinearity. In the past three decades, significant progress has been made in iterative learning control in both basic theory and experimental applications [23,24,25,26].
However, most existing iterative learning control designs require that the system plant, initial state, reference trajectory, and trial length remain constant as the number of iterations increases [22], especially the requirement of identical trial lengths [26,27]. In fact, it is difficult to repeatedly perform the same task within a fixed time interval in practice. For example, during underwater robotic seabed inspections, when the multimodal sensor array detects deviations in critical hydroacoustic signal characteristics beyond preset thresholds, the system activates failure determination protocols, terminates the current operation, and autonomously transitions to the next inspection target. Another example is in the field of fully automated precision manufacturing, where semi-finished products undergo a series of processing steps to ultimately form highly integrated end products. This method intertwines the processing and inspection stages in a complex manner; any defect in the processing operation can jeopardize the success of the entire production line, and detecting irreparable faults in semi-finished products during the inspection phase can lead to automatic stoppage of the manufacturing process. Therefore, non-uniform trial lengths are quite common in real life. For non-uniform trial lengths, the average operator technique and modified tracking error have been used to provide corresponding iterative learning control algorithms [28,29], and the robustness and convergence of iterative learning control errors are analyzed using constant norms. On the other hand, most of the research has focused on discrete-time linear systems [30,31], mainly due to the beneficial system structure and mature discrete random variable analysis techniques. For nonlinear systems [32], the nonlinear functions involved are typically required to satisfy a global Lipschitz condition.
In the past few decades, by introducing the concept of adaptive control into the design of iterative learning control, an important class of iterative learning control algorithms has been proposed, known as adaptive iterative learning control methods [33,34,35]. The main features of such methods are the estimation of uncertain control parameters between successive iterations and the use of these parameters to generate the current control input. Adaptive iterative learning control is a control method applied to dynamic systems in continuous or discrete time. It combines the ideas of adaptive control and iterative learning control methods and gradually improves the control performance of the system through learning during the iterative process. In the adaptive iterative learning control method, the controller adjusts according to the error between the real-time response of the system and the desired output. Through continuous iteration, the controller can gradually learn the dynamic characteristics of the system and improve the control strategy based on the learned knowledge, thereby achieving better control performance. Adaptive iterative learning control methods are typically employed for underwater robots performing repetitive operational tasks in complex hydrodynamic environments, such as subsea trajectory tracking or precision manipulation of seabed equipment. They can gradually reduce the tracking error of the system through multiple iterations and improve the stability and robustness of the system. In practical applications, adaptive iterative learning control methods need to consider factors such as the dynamic characteristics, learning speed, and convergence of the system. At the same time, selecting the appropriate learning algorithm and control strategy is also a crucial step. 
Adaptive iterative learning control methods have obvious advantages in relaxing the global Lipschitz continuity condition, handling non-uniform initial states, iteratively changing reference trajectories, and dealing with external disturbances. Liu et al. studied the problem of variable trial lengths for continuous-time nonlinear systems based on adaptive iterative learning control methods [33]. However, their system was a single-input single-output system, and the input gain, or control direction, was a scalar. In [34], an adaptive iterative learning method was proposed to handle randomly varying trial lengths for multi-input multi-output systems; however, the system was parameterized, and its control gain matrix was assumed to be known.
In fact, the control gain matrix of a multi-input multi-output system [36,37,38] is the derivative of the system output with respect to the control input, and it is usually required to be symmetric and positive definite (or negative definite). However, in most underwater robotic propulsion systems (e.g., multi-thruster power distribution systems [36] and articulated drive systems [37]), the control gain matrices consistently exhibit off-diagonal coupling, a prominent system feature arising from hydrodynamic anisotropy. Obviously, the assumption that the control gain matrix is positive (or negative) definite, or even the assumption that it is known, greatly limits the development of adaptive iterative learning control.
This paper proposes a new adaptive iterative learning control strategy for multi-input multi-output nonlinear vehicle systems with non-uniform trial lengths and asymmetric control gain matrices, which relaxes the traditional assumption that the control gain matrix is positive definite (or negative definite), or even known. The key features and contributions of the paper are as follows:
  • Vehicle systems for intelligent driving are investigated for reference trajectory tracking in an infinite iteration domain with non-uniform trial lengths.
  • In the adaptive control community, the control gain matrices of plants are generally required to be real, symmetric, and positive definite (or negative definite); in this paper, the control gain matrices of the vehicle systems are only assumed to be asymmetric.
  • Unlike fuzzy-system- or neural-network-based techniques for adaptive iterative learning control, fewer adaptive variables need to be adjusted or updated in the proposed method, so the structure of the controller is very simple and the memory required for computation is greatly reduced.

2. Problem Formulation

2.1. Intelligent Driving Vehicle System

This paper focuses on an in-depth study of a control system for intelligent driving vehicles in interactive environments. In real-world traffic scenarios, intelligent driving vehicles do not operate in isolation but constantly interact with other traffic participants, such as other vehicles and pedestrians. This interaction has a crucial impact on the safety and efficiency of intelligent driving vehicles. Therefore, constructing an accurate and reasonable dynamic model of the control system is a key foundation for achieving safe and efficient intelligent driving. Consider the following dynamic model of a control system for intelligent driving vehicles in an interactive environment that moves within a finite time interval:
$$\dot{x}_{p,k}(t) = g(x_{p,k}(t), t) + G u_{p,k}(t) + A_{p,q} x_{q,k}(t), \quad t \in [0, T] \tag{1}$$
Here, $k$ represents the iteration number. During the system optimization and parameter adjustment process, the accuracy and adaptability of the model are continuously improved through multiple iterations. $t \in [0, T]$ represents continuous time, where $T$ is the set upper limit of the finite time interval, covering the time period of vehicle motion that we are concerned with. $x_{p,k}(t) \in \mathbb{R}^l$ is the state vector of the $p$-th vehicle at the $k$-th iteration and time $t$. This vector usually includes key state variables such as the vehicle's position, velocity, and acceleration, comprehensively describing the vehicle's motion state in space. $x_{q,k}(t) \in \mathbb{R}^l$ is the state vector of the $q$-th vehicle at the corresponding time and iteration number, also recording the motion information of that vehicle. $u_{p,k}(t) \in \mathbb{R}^l$ is the control input vector of the $p$-th vehicle; for example, control commands such as the steering wheel angle, accelerator pedal opening, and braking force are used to adjust the vehicle's operating state. $G \in \mathbb{R}^{l \times l}$ is the control gain matrix. It not only determines the direction of the effect of the control input on the system output but also, essentially, represents the derivative of the system output with respect to the control input, reflecting the sensitivity of the vehicle state to the control commands. However, in practical applications, $G$ is unknown, its entries are bounded, and it is non-symmetric, which poses challenges for the design of the control system. $g(x_{p,k}(t), t) \in \mathbb{R}^l$ is the unknown nonlinear vector function of vehicle $p$ itself, used to describe the parts of the vehicle's own motion characteristics that cannot be expressed by linear relationships, such as changes in the vehicle's dynamic characteristics when driving on complex road surfaces.
In an interactive environment, to accurately depict the interactions between vehicles, it is necessary to determine the interaction relationships according to the behavioral characteristics of traffic participants. Taking vehicle-to-vehicle interactions as an example, the influence of vehicle p on vehicle q may be reflected in many aspects. Common ones include car-following behavior: vehicle p adjusts its own speed according to the speed and distance of the preceding vehicle q to maintain a safe following distance; evasion behavior: when vehicle p detects that vehicle q may pose a collision risk, it actively changes its driving direction or speed to avoid the collision; and path adjustment: vehicle p may re-plan its driving route due to the influence of vehicle q’s driving path on its own planned path.
Based on the characteristics of these interaction relationships, further quantifying the interaction intensity is an important part of constructing an accurate model. In the quantification process, various indicators can be used. For example, physical quantities such as distance, speed difference, and acceleration difference are used to measure the degree of interaction between vehicles. At the same time, in order to reflect the differences in the importance of different state variables during the interaction process, weight coefficients are introduced to represent the influence degree of different state variables. Finally, the quantified interaction intensity is represented in matrix form as follows:
$$A_{p,q} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1l} \\ a_{21} & a_{22} & \cdots & a_{2l} \\ \vdots & \vdots & \ddots & \vdots \\ a_{l1} & a_{l2} & \cdots & a_{ll} \end{bmatrix} \tag{2}$$
The element $a_{ij}$ in the matrix $A_{p,q}$ represents the influence weight of the $j$-th state variable of the $q$-th vehicle on the $i$-th state variable of the $p$-th vehicle. The entire matrix $A_{p,q}$ comprehensively describes the influence of vehicle $q$ on vehicle $p$.
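To make the interactive vehicle model above concrete, the following minimal sketch steps it forward with an explicit Euler scheme. The nonlinear term `g`, gain matrix `G`, interaction weights `A_pq`, and the step size are illustrative placeholders, not values taken from the paper.

```python
import numpy as np

def g(x, t):
    # Placeholder self-dynamics: position integrates velocity,
    # velocity sees mild linear-plus-quadratic damping.
    return np.array([x[1], -0.1 * x[1] - 0.05 * x[1] ** 2])

G = np.array([[-0.1, 0.5],
              [-0.8, 0.2]])          # an asymmetric control gain matrix
A_pq = np.array([[0.02, 0.01],
                 [0.00, 0.03]])      # interaction weights a_ij

def step(x_p, x_q, u_p, t, dt=0.001):
    """One Euler step of x_dot = g(x_p, t) + G u_p + A_pq x_q."""
    return x_p + dt * (g(x_p, t) + G @ u_p + A_pq @ x_q)

x_p = step(np.zeros(2), np.array([1.0, 0.5]), np.array([0.1, 0.0]), t=0.0)
```

Iterating `step` over a time grid produces one trial of the vehicle's state trajectory under a given control signal.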
Assumption 1. 
Let the nonlinear mapping $g(x_{p,k}(t), t)$ satisfy the following sublinear growth constraint:
$$\|g(x_{p,k}(t), t)\| \le \alpha_1 \|x_{p,k}(t)\| + \alpha_2,$$
where $\alpha_1$ and $\alpha_2$ are positive constants.
Assumption 2. 
The iterative learning process satisfies the following uniform boundedness conditions: the initial values $x_{p,k}(0) \in \mathbb{R}^n$, the desired trajectory $r_d(t)$, and the external disturbance $w_k(t) = A_{p,q} x_{q,k}(t)$ are all bounded, and $x_{p,k}(0) = r_d(0)$.
Assumption 1 is often used in iterative learning control designs for nonlinear dynamic systems and practical applications. Assumption 2 is a common constraint condition and is reasonable in practical physical systems. In fact, if the nonlinear function of a system changes too rapidly or the initial value of the system is unbounded, it is obvious that it will not meet the requirements of an actual system.

2.2. Design of Adaptive Iterative Learning Controller for Non-Uniform Test Length

2.2.1. Non-Uniform Trial Length

In conventional iterative learning control (ILC) algorithms, the task execution interval is strictly confined to a fixed duration to facilitate an idealized iterative learning process. However, in practical applications, this requirement frequently becomes infeasible due to inherent uncertainties and unpredictable disturbances. This fundamental challenge has motivated substantial research addressing non-uniform trial lengths, typical manifestations of which were described in the introduction.
As a foundational step in constructing the adaptive iterative learning control (AILC) framework, the system’s tracking error must be formally defined:
$$E_k(t) = r_d(t) - x_{p,k}(t),$$
where $r_d(t)$ represents the desired trajectory and $E_k(t) = [E_k^1(t)\;\, E_k^2(t)\;\, \cdots\;\, E_k^n(t)]^T$.
Under non-uniform trial length conditions, $\tau_k$ denotes the duration of the $k$-th iterative operation. The modified tracking error is formulated as follows:
$$f_k(t) = \begin{cases} E_k(t), & 0 \le t \le \tau_k \\ E_k(\tau_k), & \tau_k < t \le T \end{cases}$$
Equivalently, $f_k(t) = \lambda_k E_k(t) + (1 - \lambda_k) E_k(\tau_k)$, where $\lambda_k \in \{0, 1\}$ is an indicator that equals $1$ for $0 \le t \le \tau_k$ and $0$ for $\tau_k < t \le T$. When $\tau_k < T$, the $k$-th trial stops partway through and fails to reach the desired duration, so the error is frozen at $E_k(\tau_k)$ for the remainder of the interval; when $\tau_k = T$, the trial runs for the full desired duration.
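On a sampled time grid, the modified tracking error can be computed by freezing the error at the moment the trial stops. This is a sketch; the grid handling and freezing logic are our implementation choices, not prescribed by the paper.

```python
import numpy as np

def modified_error(E_k, tau_k, t_grid):
    """Modified tracking error f_k: equals E_k(t) while the k-th trial runs
    (t <= tau_k) and is frozen at E_k(tau_k) for tau_k < t <= T.
    E_k has shape (len(t_grid), n); returns an array of the same shape."""
    E_k = np.asarray(E_k, dtype=float)
    stop = np.searchsorted(t_grid, tau_k, side="right") - 1  # last sample with t <= tau_k
    f_k = E_k.copy()
    f_k[stop + 1:] = E_k[stop]       # freeze the error after the trial stops
    return f_k

t_grid = np.linspace(0.0, 2.0, 5)    # T = 2 sampled at 0, 0.5, 1.0, 1.5, 2.0
E_k = np.arange(10.0).reshape(5, 2)  # toy error trajectory
f_k = modified_error(E_k, tau_k=1.0, t_grid=t_grid)
```

With `tau_k = 1.0`, the samples at `t = 1.5` and `t = 2.0` are replaced by the error at `t = 1.0`; with `tau_k = T`, the error is returned unchanged.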

2.2.2. Controller Design

Building upon Assumptions 1 and 2 as the theoretical foundation, this study develops an adaptive iterative learning control scheme with the following structure:
$$u_{p,k}(t) = -\hat{G} E_k(t)\, \frac{E_k^T(t)\, \xi(E_k(t))\, \hat{\epsilon}_k(t)}{\|E_k(t)\|_2^2} \tag{3}$$
$$\hat{\epsilon}_k(t) = \begin{cases} \hat{\epsilon}_{k-1}(t) + \Gamma\, \xi^T(E_k(t))\, E_k(t) \in \mathbb{R}^2, & 0 \le t \le \tau_k \\ \hat{\epsilon}_{k-1}(t) \in \mathbb{R}^2, & \tau_k < t \le T \end{cases} \tag{4}$$
$$\xi(E_k(t)) = \left[ E_k(t)\;\; \mathrm{sign}(E_k(t)) \right] \in \mathbb{R}^{n \times 2}, \tag{5}$$
where $\hat{\epsilon}_{-1}(t) = 0 \in \mathbb{R}^2$, and $\Gamma \in \mathbb{R}^{2 \times 2}$ is the positive-definite learning gain to be designed. Equation (3) formulates the control input that constitutes the core controller architecture, while Equations (4) and (5) define the adaptive parameter-updating mechanism. The algorithm's innovation resides in combining cross-iteration parameter transfer with error feedback, calibrating the control parameters with historical iteration data and real-time error signals to progressively refine the control input.
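A minimal per-time-sample sketch of how the control law (3) and the update law (4)-(5) could be implemented is given below. The values of `G_hat` and `Gamma` are illustrative, and the guard against division by a vanishing $\|E_k(t)\|_2^2$ is our addition, not part of the paper's formulas.

```python
import numpy as np

def xi(E):
    """Regressor xi(E) = [E, sign(E)], an n x 2 matrix."""
    return np.column_stack([E, np.sign(E)])

def control_input(E, eps_hat, G_hat, guard=1e-8):
    """u = -G_hat E * (E^T xi(E) eps_hat) / ||E||_2^2, guarded near E = 0."""
    n2 = float(E @ E)
    if n2 < guard:
        return np.zeros_like(E)
    s = float(E @ (xi(E) @ eps_hat)) / n2
    return -(G_hat @ E) * s

def update_eps(eps_hat_prev, E, Gamma, running):
    """eps_hat_k(t): updated while the trial runs (t <= tau_k), else kept."""
    if not running:
        return eps_hat_prev.copy()
    return eps_hat_prev + Gamma @ (xi(E).T @ E)

E = np.array([0.5, -1.0])
u = control_input(E, eps_hat=np.array([1.0, 2.0]), G_hat=np.eye(2))
eps_new = update_eps(np.zeros(2), E, Gamma=5.0 * np.eye(2), running=True)
```

In a full simulation, `control_input` would be evaluated at every sample of every trial, while `update_eps` carries the parameter estimate from one iteration to the next.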

2.2.3. Stability Analysis

Theorem 1. 
Based on Assumptions 1 and 2, under the adaptive control law (3) and the adaptive parameter update laws (4) and (5), if the control gain matrix $G$ of the vehicle system satisfies $\sigma(G\hat{G}) \subset \mathbb{C}^-$ and $\Gamma > 0$, then the tracking error of the system satisfies $\lim_{k \to \infty} E_k(t) = 0$.
Proof. 
Because $\sigma(G\hat{G}) \subset \mathbb{C}^-$, there exists a positive-definite symmetric matrix $Q = Q^T \in \mathbb{R}^{n \times n}$ satisfying
$$\left(G\hat{G}\right)^T Q + Q\, G\hat{G} = -2 I_n. \tag{6}$$
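The existence of such a $Q$ can be checked numerically by solving the Lyapunov equation through vectorization with Kronecker products. The matrices below are illustrative (an asymmetric gain and a rotation-form estimate chosen so that $G\hat{G}$ is Hurwitz), not asserted to be exactly the paper's.

```python
import numpy as np

def lyapunov_Q(M):
    """Solve M^T Q + Q M = -2 I via (I kron M^T + M^T kron I) vec(Q) = vec(-2 I)."""
    n = M.shape[0]
    L = np.kron(np.eye(n), M.T) + np.kron(M.T, np.eye(n))
    q = np.linalg.solve(L, (-2.0 * np.eye(n)).flatten())
    return q.reshape(n, n)

G = np.array([[-0.1, 0.5],
              [-0.8, 0.2]])
a = 4.0 * np.pi / 3.0
G_hat = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
Q = lyapunov_Q(G @ G_hat)
```

Because $G\hat{G}$ is Hurwitz here, the solver returns a symmetric positive-definite $Q$, exactly as the proof requires.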
Building upon the theoretical foundation of Assumptions 1 and 2, we establish the following definitions:
$$\sup_{0 \le t \le T} \left\| Q \dot{r}_d(t) - Q w_k(t) \right\| \le \alpha, \tag{7}$$
$$\|Q\|_m \le c, \tag{8}$$
and
$$h = \sqrt{n}\, c\, \alpha_1, \qquad j = \sqrt{n}\, \alpha + \sqrt{n}\, c\, \alpha_1 \sup_{0 \le t \le T} \|r_d(t)\| + \sqrt{n}\, c\, \alpha_2, \tag{9}$$
$$\epsilon = [h\;\; j]^T \in \mathbb{R}^2. \tag{10}$$
A composite-type energy function is formulated as follows:
$$V_k(t) = \frac{1}{2} f_k^T(t)\, Q\, f_k(t) + \frac{1}{2} \int_0^t \tilde{\epsilon}_k^T(\tau)\, \Gamma^{-1} \tilde{\epsilon}_k(\tau)\, d\tau, \tag{11}$$
where the estimation error of $\epsilon$ is $\tilde{\epsilon}_k(t) = \epsilon - \hat{\epsilon}_k(t)$, and $\hat{\epsilon}_k(t)$ is the estimate of $\epsilon$.
Step 1: To prove that the sequence $V_k(t)$ is monotonically non-increasing, the difference of $V_k(t)$ in the iteration direction is first defined:
$$\Delta V_k(t) = V_k(t) - V_{k-1}(t) = \frac{1}{2} f_k^T(t) Q f_k(t) - \frac{1}{2} f_{k-1}^T(t) Q f_{k-1}(t) + \int_0^t \frac{1}{2} \left[ \tilde{\epsilon}_k^T(\tau) \Gamma^{-1} \tilde{\epsilon}_k(\tau) - \tilde{\epsilon}_{k-1}^T(\tau) \Gamma^{-1} \tilde{\epsilon}_{k-1}(\tau) \right] d\tau \tag{12}$$
It can be observed that the difference of $V_k(t)$ consists of two parts: the error term and the adaptive parameter estimation error term.
When $0 \le t \le \tau_k$, the adaptive parameter estimation error term of $V_k(t)$ yields the following result:
$$\begin{aligned}
&\frac{1}{2} \tilde{\epsilon}_k^T(t) \Gamma^{-1} \tilde{\epsilon}_k(t) - \frac{1}{2} \tilde{\epsilon}_{k-1}^T(t) \Gamma^{-1} \tilde{\epsilon}_{k-1}(t) \\
&\quad = \frac{1}{2} \left[ \tilde{\epsilon}_k(t) - \tilde{\epsilon}_{k-1}(t) \right]^T \Gamma^{-1} \left[ \tilde{\epsilon}_k(t) + \tilde{\epsilon}_{k-1}(t) \right] \\
&\quad = -\frac{1}{2} \left[ \hat{\epsilon}_k(t) - \hat{\epsilon}_{k-1}(t) \right]^T \Gamma^{-1} \left[ 2 \tilde{\epsilon}_k(t) + \hat{\epsilon}_k(t) - \hat{\epsilon}_{k-1}(t) \right]
\end{aligned} \tag{13}$$
By incorporating the adaptive parameter update mechanism into Equation (13) and performing integration on both sides, the following relationship is derived:
$$\int_0^t \frac{1}{2} \left[ \tilde{\epsilon}_k^T(\tau) \Gamma^{-1} \tilde{\epsilon}_k(\tau) - \tilde{\epsilon}_{k-1}^T(\tau) \Gamma^{-1} \tilde{\epsilon}_{k-1}(\tau) \right] d\tau = -\int_0^t \frac{1}{2} \left[ \hat{\epsilon}_k(\tau) - \hat{\epsilon}_{k-1}(\tau) \right]^T \Gamma^{-1} \left[ 2 \tilde{\epsilon}_k(\tau) + \hat{\epsilon}_k(\tau) - \hat{\epsilon}_{k-1}(\tau) \right] d\tau \tag{14}$$
Regarding the error term of $V_k(t)$, using $f_k(t) = E_k(t)$ on $0 \le t \le \tau_k$ and $E_k(0) = 0$ from Assumption 2, the following result is established:
$$\begin{aligned}
\frac{1}{2} f_k^T(t) Q f_k(t) &= \frac{1}{2} E_k^T(t) Q E_k(t) = \int_0^t E_k^T(\tau) Q \dot{f}_k(\tau)\, d\tau + \frac{1}{2} E_k^T(0) Q E_k(0) = \int_0^t E_k^T(\tau) Q \dot{f}_k(\tau)\, d\tau \\
&= \int_0^t E_k^T(\tau) \left[ Q \dot{r}_d(\tau) - Q \dot{x}_{p,k}(\tau) \right] d\tau \\
&= \int_0^t E_k^T(\tau) \left[ Q \dot{r}_d(\tau) - Q g(x_{p,k}(\tau), \tau) - Q G u_{p,k}(\tau) - Q w_k(\tau) \right] d\tau \\
&= \int_0^t \left\{ E_k^T(\tau) \left[ Q \dot{r}_d(\tau) - Q g(x_{p,k}(\tau), \tau) - Q w_k(\tau) \right] - E_k^T(\tau) Q G u_{p,k}(\tau) \right\} d\tau
\end{aligned} \tag{15}$$
where
$$E_k^T(t) \left[ Q \dot{r}_d(t) - Q g(x_{p,k}(t), t) - Q w_k(t) \right] \le \|E_k(t)\| \left( h \|E_k(t)\| + j \right) \le E_k^T(t) \left[ h E_k(t) + j\, \mathrm{sign}(E_k(t)) \right] = E_k^T(t)\, \xi(E_k(t))\, \epsilon \tag{16}$$
and
$$\begin{aligned}
E_k^T(t) Q G u_{p,k}(t) &= \frac{1}{2} \left[ E_k^T(t) Q G u_{p,k}(t) + u_{p,k}^T(t) G^T Q E_k(t) \right] \\
&= -\frac{1}{2} E_k^T(t) \left[ Q G \hat{G} + \hat{G}^T G^T Q \right] E_k(t)\, \frac{E_k^T(t)\, \xi(E_k(t))\, \hat{\epsilon}_k(t)}{\|E_k(t)\|_2^2} \\
&= E_k^T(t) E_k(t)\, \frac{E_k^T(t)\, \xi(E_k(t))\, \hat{\epsilon}_k(t)}{\|E_k(t)\|_2^2} \\
&= E_k^T(t)\, \xi(E_k(t))\, \hat{\epsilon}_k(t),
\end{aligned} \tag{17}$$
where the third equality uses Equation (6). That is,
$$E_k^T(t) Q G u_{p,k}(t) = E_k^T(t)\, \xi(E_k(t))\, \hat{\epsilon}_k(t).$$
Substituting Equations (16) and (17) into Equation (15) yields
$$\frac{1}{2} f_k^T(t) Q f_k(t) \le \int_0^t \left[ E_k^T(\tau) \xi(E_k(\tau)) \epsilon - E_k^T(\tau) \xi(E_k(\tau)) \hat{\epsilon}_k(\tau) \right] d\tau = \int_0^t E_k^T(\tau)\, \xi(E_k(\tau))\, \tilde{\epsilon}_k(\tau)\, d\tau \tag{18}$$
Substituting Equations (14) and (18) into Equation (12) yields
$$\begin{aligned}
\Delta V_k(t) &= V_k(t) - V_{k-1}(t) \\
&\le \int_0^t E_k^T(\tau) \xi(E_k(\tau)) \tilde{\epsilon}_k(\tau)\, d\tau - \frac{1}{2} E_{k-1}^T(t) Q E_{k-1}(t) + \int_0^t \left[ -E_k^T(\tau) \xi(E_k(\tau)) \tilde{\epsilon}_k(\tau) - \frac{1}{2} E_k^T(\tau) \xi(E_k(\tau)) \Gamma \xi^T(E_k(\tau)) E_k(\tau) \right] d\tau \\
&= -\frac{1}{2} \int_0^t E_k^T(\tau) \xi(E_k(\tau)) \Gamma \xi^T(E_k(\tau)) E_k(\tau)\, d\tau - \frac{1}{2} E_{k-1}^T(t) Q E_{k-1}(t)
\end{aligned} \tag{19}$$
For $0 \le t \le \tau_k$, Equation (14) can be rewritten using the adaptive law (4) as
$$\begin{aligned}
\int_0^t \frac{1}{2} \left[ \tilde{\epsilon}_k^T(\tau) \Gamma^{-1} \tilde{\epsilon}_k(\tau) - \tilde{\epsilon}_{k-1}^T(\tau) \Gamma^{-1} \tilde{\epsilon}_{k-1}(\tau) \right] d\tau &= -\int_0^t \frac{1}{2} \left[ \Gamma \xi^T(E_k(\tau)) E_k(\tau) \right]^T \Gamma^{-1} \left[ 2 \tilde{\epsilon}_k(\tau) + \Gamma \xi^T(E_k(\tau)) E_k(\tau) \right] d\tau \\
&= -\int_0^t \left[ E_k^T(\tau) \xi(E_k(\tau)) \tilde{\epsilon}_k(\tau) + \frac{1}{2} E_k^T(\tau) \xi(E_k(\tau)) \Gamma \xi^T(E_k(\tau)) E_k(\tau) \right] d\tau
\end{aligned} \tag{20}$$
In the second scenario ($\tau_k < t \le T$), Equation (18) becomes
$$\frac{1}{2} f_k^T(t) Q f_k(t) \le \int_0^{\tau_k} E_k^T(\tau)\, \xi(E_k(\tau))\, \tilde{\epsilon}_k(\tau)\, d\tau \tag{21}$$
It then follows that
$$\begin{aligned}
\Delta V_k(t) &= V_k(t) - V_{k-1}(t) \\
&\le \int_0^{\tau_k} E_k^T(\tau) \xi(E_k(\tau)) \tilde{\epsilon}_k(\tau)\, d\tau - \frac{1}{2} E_{k-1}^T(\tau_k) Q E_{k-1}(\tau_k) - \int_0^{\tau_k} \left[ E_k^T(\tau) \xi(E_k(\tau)) \tilde{\epsilon}_k(\tau) + \frac{1}{2} E_k^T(\tau) \xi(E_k(\tau)) \Gamma \xi^T(E_k(\tau)) E_k(\tau) \right] d\tau \\
&= -\frac{1}{2} \int_0^{\tau_k} E_k^T(\tau) \xi(E_k(\tau)) \Gamma \xi^T(E_k(\tau)) E_k(\tau)\, d\tau - \frac{1}{2} E_{k-1}^T(\tau_k) Q E_{k-1}(\tau_k)
\end{aligned} \tag{22}$$
Combining Equations (19) and (22) yields
$$\Delta V_k(t) = V_k(t) - V_{k-1}(t) \le -\frac{1}{2} \int_0^{\tau_k^t} E_k^T(\tau) \xi(E_k(\tau)) \Gamma \xi^T(E_k(\tau)) E_k(\tau)\, d\tau - \frac{1}{2} E_{k-1}^T(\tau_k^t) Q E_{k-1}(\tau_k^t), \tag{23}$$
where $\tau_k^t = \min\{\tau_k, t\}$. Therefore, $\Delta V_k(t) \le 0$.
Step 2: Because
$$V_0(t) = \frac{1}{2} f_0^T(t) Q f_0(t) + \frac{1}{2} \int_0^t \tilde{\epsilon}_0^T(\tau)\, \Gamma^{-1} \tilde{\epsilon}_0(\tau)\, d\tau, \tag{24}$$
differentiating with respect to time gives
$$\dot{V}_0(t) = \frac{1}{2} \frac{d}{dt} \left[ f_0^T(t) Q f_0(t) \right] + \frac{1}{2} \tilde{\epsilon}_0^T(t)\, \Gamma^{-1} \tilde{\epsilon}_0(t) \tag{25}$$
From Equation (19) and $f_0(t) = \lambda_0 E_0(t) + (1 - \lambda_0) E_0(\tau_0)$, we can obtain
$$\frac{1}{2} \frac{d}{dt} \left[ E_0^T(t) Q E_0(t) \right] \le \lambda_0\, E_0^T(t)\, \xi(E_0(t))\, \tilde{\epsilon}_0(t) \tag{26}$$
By substituting Equation (26) into Equation (25) and taking into account Equations (16) and (17), we obtain the following:
$$\dot{V}_0(t) \le \lambda_0\, E_0^T(t)\, \xi(E_0(t))\, \tilde{\epsilon}_0(t) + \frac{1}{2} \tilde{\epsilon}_0^T(t)\, \Gamma^{-1} \tilde{\epsilon}_0(t) \tag{27}$$
Because $\hat{\epsilon}_{-1}(t) = 0$, it follows that $\hat{\epsilon}_0(t) = \hat{\epsilon}_{-1}(t) + \lambda_0 \Gamma \xi^T(E_0(t)) E_0(t) = \lambda_0 \Gamma \xi^T(E_0(t)) E_0(t)$. Hence,
$$\begin{aligned}
\frac{1}{2} \tilde{\epsilon}_0^T(t) \Gamma^{-1} \tilde{\epsilon}_0(t) &= \frac{1}{2} \left[ \epsilon - \hat{\epsilon}_0(t) \right]^T \Gamma^{-1} \left[ \epsilon - \hat{\epsilon}_0(t) \right] \\
&= \frac{1}{2} \epsilon^T \Gamma^{-1} \epsilon - \hat{\epsilon}_0^T(t) \Gamma^{-1} \epsilon + \frac{1}{2} \hat{\epsilon}_0^T(t) \Gamma^{-1} \hat{\epsilon}_0(t) \\
&= \frac{1}{2} \epsilon^T \Gamma^{-1} \epsilon - \hat{\epsilon}_0^T(t) \Gamma^{-1} \left[ \tilde{\epsilon}_0(t) + \hat{\epsilon}_0(t) \right] + \frac{1}{2} \hat{\epsilon}_0^T(t) \Gamma^{-1} \hat{\epsilon}_0(t) \\
&= \frac{1}{2} \epsilon^T \Gamma^{-1} \epsilon - \hat{\epsilon}_0^T(t) \Gamma^{-1} \tilde{\epsilon}_0(t) - \frac{1}{2} \hat{\epsilon}_0^T(t) \Gamma^{-1} \hat{\epsilon}_0(t) \\
&= \frac{1}{2} \epsilon^T \Gamma^{-1} \epsilon - \lambda_0\, E_0^T(t)\, \xi(E_0(t))\, \tilde{\epsilon}_0(t) - \frac{1}{2} \hat{\epsilon}_0^T(t) \Gamma^{-1} \hat{\epsilon}_0(t)
\end{aligned} \tag{28}$$
Substituting Equation (28) into Equation (27), we can obtain
$$\dot{V}_0(t) \le \frac{1}{2} \epsilon^T \Gamma^{-1} \epsilon - \frac{1}{2} \hat{\epsilon}_0^T(t) \Gamma^{-1} \hat{\epsilon}_0(t) \le \frac{1}{2} \epsilon^T \Gamma^{-1} \epsilon \triangleq M, \tag{29}$$
where $M$ is a positive constant. Integrating both sides of Equation (29), we obtain
$$V_0(t) = \int_0^t \dot{V}_0(\tau)\, d\tau \le M T \tag{30}$$
Step 3: From Equation (23), the following can be readily derived:
$$\sum_{i=1}^{k} \Delta V_i(t) \le -\sum_{i=1}^{k} \frac{1}{2} E_{i-1}^T(\tau_i^t)\, Q\, E_{i-1}(\tau_i^t) \tag{31}$$
It then follows that
$$V_k(t) = V_0(t) + \sum_{i=1}^{k} \Delta V_i(t) \le V_0(t) - \sum_{i=1}^{k} \frac{1}{2} E_{i-1}^T(\tau_i^t)\, Q\, E_{i-1}(\tau_i^t) \tag{32}$$
Combining the above relation with Equation (30), we derive
$$\sum_{i=1}^{k} E_{i-1}^T(\tau_i^t)\, Q\, E_{i-1}(\tau_i^t) \le 2 \left[ V_0(t) - V_k(t) \right] \le 2 V_0(t) \le 2 M T \tag{33}$$
Finally, since the partial sums on the left-hand side of the above inequality are uniformly bounded, the series converges and its terms vanish, so it is concluded that $\lim_{k \to \infty} E_k(t) = 0$, which completes the proof. □

3. Experimental Results and Discussion

Based on the two-vehicle interaction scenario, each vehicle's state vector includes two variables, position and velocity ($l = 2$). The state vector of the $p$-th vehicle is $x_{p,k}(t) = [x_{p,k}(t)\;\, v_{p,k}(t)]^T$, and the state vector of the $q$-th vehicle is $x_{q,k}(t) = [x_{q,k}(t)\;\, v_{q,k}(t)]^T$. The control input vector is $u_{p,k}(t) = [a_{p,k}(t)\;\, \delta_{p,k}(t)]^T$, where $a_{p,k}(t)$ denotes the acceleration control command of the $p$-th vehicle at time $t$ during the $k$-th iteration, and $\delta_{p,k}(t)$ represents the steering angle control command.
The unknown nonlinear vector function $g(x_{p,k}(t), t)$ of vehicle $p$ is defined as follows:
$$g(x_{p,k}(t), t) = \begin{bmatrix} v_{p,k}(t) \cos(v_{p,k}(t)) \\ -c_1 v_{p,k}(t) - c_2 v_{p,k}^2(t) \end{bmatrix}$$
where $c_1$ and $c_2$ are constants used to model nonlinear factors such as resistance during vehicle motion. The control gain matrix $G$ is configured as an asymmetric matrix as follows:
$$G = \begin{bmatrix} \alpha_{11} & \alpha_{12} \\ \alpha_{21} & \alpha_{22} \end{bmatrix}$$
Here, $\alpha_{11}$ represents the degree of influence of the acceleration control input $a_{p,k}(t)$ on the vehicle's position $x_{p,k}(t)$; $\alpha_{12}$ represents the degree of influence of the steering angle control input $\delta_{p,k}(t)$ on the vehicle's position $x_{p,k}(t)$; $\alpha_{21}$ represents the degree of influence of the acceleration control input $a_{p,k}(t)$ on the vehicle's velocity $v_{p,k}(t)$; and $\alpha_{22}$ represents the degree of influence of the steering angle control input $\delta_{p,k}(t)$ on the vehicle's velocity $v_{p,k}(t)$. Due to the dynamic characteristics of vehicles, different control inputs have varying effects on position and velocity, rendering $G$ an asymmetric matrix.
Assuming the interaction between vehicle $q$ and vehicle $p$ affects both position and velocity, the interaction matrix $A_{p,q}$ is configured as follows:
$$A_{p,q} = \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix}$$
where $b_{ij}$ ($i, j = 1, 2$) are weight coefficients quantifying the influence of vehicle $q$'s position and velocity on vehicle $p$'s position and velocity.
Substituting the above settings into the dynamic model $\dot{x}_{p,k}(t) = g(x_{p,k}(t), t) + G u_{p,k}(t) + A_{p,q} x_{q,k}(t)$ and letting $w_k(t) = A_{p,q} x_{q,k}(t)$, we have
$$\begin{bmatrix} \dot{x}_{p,k}(t) \\ \dot{v}_{p,k}(t) \end{bmatrix} = \begin{bmatrix} v_{p,k}(t) \cos(v_{p,k}(t)) \\ -c_1 v_{p,k}(t) - c_2 v_{p,k}^2(t) \end{bmatrix} + \begin{bmatrix} \alpha_{11} & \alpha_{12} \\ \alpha_{21} & \alpha_{22} \end{bmatrix} \begin{bmatrix} a_{p,k}(t) \\ \delta_{p,k}(t) \end{bmatrix} + w_k(t)$$
where $t \in [0, 2]$; the sampling interval is 0.001; the disturbance $[w_k^{(1)}(t)\;\, w_k^{(2)}(t)]^T$ takes random values in $[-0.5, 0.5]$, with the corresponding random realizations shown in Figure 1; and the desired trajectory is $r_d(t) = [2 + \sin(3t),\; 1 + \cos(2t)]^T$.
1. Control Input Parameters: $a_{p,k}(t)$: The acceleration control command of the $p$-th vehicle at time $t$ during the $k$-th iteration, measured in meters per second squared ($\mathrm{m/s^2}$). It is a command issued by the vehicle control system to adjust vehicle speed and can be implemented by controlling actuators such as the throttle and brakes. $\delta_{p,k}(t)$: The steering angle control command of the $p$-th vehicle at time $t$ during the $k$-th iteration, typically measured in radians (rad). It is used to control the vehicle's driving direction and is executed through the steering system.
2. Nonlinear Function Parameters: $c_1$ and $c_2$: Constants used to describe nonlinear factors in the vehicle's own motion characteristics. $c_1$ mainly reflects the resistance related to the first power of velocity, such as rolling friction between the tires and the road surface; $c_2$ mainly reflects the resistance related to the square of velocity, such as air resistance. Their values need to be determined through experiments or simulations based on the vehicle's actual performance and driving environment.
3. Control Gain Matrix Parameters: $\alpha_{11}$: The coefficient quantifying the influence of the acceleration control input $a_{p,k}(t)$ on the vehicle's position $x_{p,k}(t)$. In actual vehicle motion, changes in acceleration indirectly affect the distance traveled by the vehicle over time, thereby influencing position; $\alpha_{11}$ quantifies this effect. $\alpha_{12}$: The coefficient quantifying the influence of the steering angle control input $\delta_{p,k}(t)$ on the vehicle's position $x_{p,k}(t)$. Steering operations change the vehicle's driving direction, which, in turn, affects its position; $\alpha_{12}$ measures this influence. $\alpha_{21}$: The coefficient quantifying the influence of the acceleration control input $a_{p,k}(t)$ on the vehicle's velocity $v_{p,k}(t)$, typically related to the vehicle's dynamic performance. It reflects the direct effect of the acceleration command on changes in vehicle velocity. $\alpha_{22}$: The coefficient quantifying the influence of the steering angle control input $\delta_{p,k}(t)$ on the vehicle's velocity $v_{p,k}(t)$. When turning, the vehicle needs to overcome additional resistance, leading to velocity changes; $\alpha_{22}$ quantifies this effect. Due to the different mechanisms and degrees of influence of acceleration and steering angle on position and velocity, the matrix $G$ is asymmetric, with
$$G = \begin{bmatrix} -0.1 & 0.5 \\ -0.8 & 0.2 \end{bmatrix}.$$
Selecting the estimated matrix

Ĝ = [cos(4π/3), sin(4π/3); sin(4π/3), cos(4π/3)],

we have

spec(G Ĝ) = spec([0.1, 0.5; 0.8, 0.2] × [cos(4π/3), sin(4π/3); sin(4π/3), cos(4π/3)])
          = {0.5879 + 0.1853i, 0.5879 − 0.1853i} ⊂ ℂ.
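The spectral condition above can be checked numerically for any candidate estimated matrix. The sketch below takes the entries of Ĝ exactly as printed; note that sign conventions (e.g., a −sin term in a rotation-type matrix) may have been lost in typesetting, so it verifies only the structural identity that the eigenvalue product equals det(G)·det(Ĝ), rather than asserting the printed eigenvalues.

```python
import numpy as np

theta = 4 * np.pi / 3
# Asymmetric control gain matrix from the simulation setup.
G = np.array([[0.1, 0.5],
              [0.8, 0.2]])
# Estimated matrix G_hat with entries as printed (sign convention assumed).
G_hat = np.array([[np.cos(theta), np.sin(theta)],
                  [np.sin(theta), np.cos(theta)]])

# Spectrum of the product, as required by the theorem's condition.
eigvals = np.linalg.eigvals(G @ G_hat)

# Structural sanity check: the product of the eigenvalues of G @ G_hat
# must equal det(G) * det(G_hat).
assert np.isclose(np.prod(eigvals), np.linalg.det(G) * np.linalg.det(G_hat))
```

In practice one would inspect `eigvals` directly to confirm the theorem's spectral requirement for the chosen Ĝ.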
The results satisfy the conditions of the theorem. The adaptive iterative learning algorithm is evaluated using the maximum absolute error. With the gain Γ = 5 in the parameter update rule, Figure 2 illustrates the tracking error of the system, and Figure 3 shows that different iterations correspond to different lengths of the tracking trajectory. It can be observed that, when the iteration count reaches 50, the tracking error of the system approaches 0. Figure 4, Figure 5, Figure 6, and Figure 7 show the actual and desired trajectories of the vehicle system at the 1st, 3rd, 10th, and 50th iterations under the non-uniform trial lengths τ_1 = 0.3, τ_3 = 0.8, τ_10 = 1, and τ_50 = 2 listed in Table 1, where τ_50 = 2 equals the desired trial length; together they demonstrate how the actual output tracks the expected trajectory as the iteration count grows. Notably, at the 50th iteration the actual output fully tracks the expected trajectory. Figure 8 shows the control input.
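The non-uniform trial-length mechanism of this experiment can be mimicked with a toy iterative loop. The sketch below uses a hypothetical first-order plant and a simplified proportional learning update with Γ = 5 (not the paper's full adaptive law, which additionally estimates unknown parameters), learning only on the interval each trial actually runs; the early trial lengths are drawn randomly, of which the paper's sequence τ_1 = 0.3, …, τ_50 = 2 is one realization.

```python
import numpy as np

rng = np.random.default_rng(0)
T_desired = 2.0                 # desired trial length (tau_k = 2)
dt = 0.01
n_full = int(T_desired / dt)
Gamma = 5.0                     # adaptation gain, as in the simulation

y_d = np.sin(np.linspace(0, T_desired, n_full))  # hypothetical reference
u = np.zeros(n_full)            # input profile, refined trial by trial
max_err = []                    # maximum absolute error per trial

for k in range(50):
    # Non-uniform trial lengths: early trials may stop before T_desired.
    tau_k = T_desired if k >= 10 else rng.uniform(0.3, T_desired)
    n_k = int(tau_k / dt)
    y = 0.9 * u[:n_k]           # placeholder static plant response
    e = y_d[:n_k] - y
    # Learn only on the samples the trial actually produced; samples
    # beyond tau_k keep their previous input values.
    u[:n_k] += dt * Gamma * e
    max_err.append(np.abs(e).max())
```

Even though early trials update only a prefix of the input profile, every sample is revisited often enough over the iteration axis for the maximum absolute error to shrink, mirroring the behavior seen in Figure 2.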

4. Conclusions

For vehicle systems with asymmetric control gain matrices, even when the trial length varies with the iteration number, the proposed adaptive iterative learning control algorithm still ensures that the tracking error tends to zero. Compared with existing research on adaptive iterative learning control with non-uniform trial lengths, this paper relaxes the usual assumptions by allowing the control gain matrix of the system to be asymmetric and proves the convergence of the method by designing a composite energy function of Lyapunov type. The effectiveness of the method is confirmed through simulation experiments on a vehicle system. In future work, we will further address additional disturbances (e.g., time delays and actuator failures) and uncertainty modeling in nonlinear models and will apply the results to real systems.

Author Contributions

Conceptualization, H.W.; Methodology, Y.T.; Software, Y.T. and Z.C.; Validation, H.W.; Formal analysis, Y.T.; Resources, Z.C.; Data curation, Z.C.; Writing—original draft, Y.T.; Visualization, H.W.; Supervision, H.W. All authors have read and agreed to the published version of the manuscript.

Funding

The work of this paper was supported by the 2021 Characteristic and Innovative Science Research Project of the Guangdong Provincial Department of Education (Project No. 2021KTSCX270).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Variation profiles of system disturbance w k ( t ) .
Figure 2. Absolute error of the vehicle system.
Figure 3. An example of varying trial lengths along the iteration axis.
Figure 4. Actual trajectory and desired trajectory of the vehicle system at the first iteration with trial length τ 1 = 0.3 .
Figure 5. Actual trajectory and desired trajectory of the vehicle system at the third iteration with trial length τ 3 = 0.6 .
Figure 6. Actual trajectory and desired trajectory of the vehicle system at the 10th iteration with trial length τ 10 = 1 .
Figure 7. Actual trajectory and desired trajectory of the vehicle system at the 50th iteration with trial length τ 50 = 2 .
Figure 8. Control input of the system.
Table 1. Trial lengths with different iterations and tracking effects.

Iterations | Trial Length | Tracking Effect
1          | τ_1 = 0.3    | Not well tracked
3          | τ_3 = 0.8    | Not well tracked
10         | τ_10 = 1     | Well tracked
50         | τ_50 = 2     | Well tracked