Article

Explicit Form of Solutions of Second-Order Delayed Difference Equations: Application to Iterative Learning Control

1
Department of Mathematics, Eastern Mediterranean University, T.R. North Cyprus Mersin 10, Famagusta 99628, Turkey
2
Research Center of Econophysics, Azerbaijan State University of Economics (UNEC), Istiqlaliyyat Str. 6, Baku 1001, Azerbaijan
3
Department of Mathematics and Statistics, College of Science, King Faisal University, Hofuf 31982, Al Ahsa, Saudi Arabia
*
Authors to whom correspondence should be addressed.
Mathematics 2025, 13(6), 916; https://doi.org/10.3390/math13060916
Submission received: 5 February 2025 / Revised: 8 March 2025 / Accepted: 8 March 2025 / Published: 10 March 2025

Abstract

A system of inhomogeneous second-order difference equations whose linear parts have noncommutative matrix coefficients is considered. The closed form of its solution is derived by means of a newly defined delayed matrix sine/cosine, using the Z transform and the determining function. This representation supports the analysis of iterative learning control: appropriate updating laws are applied, and sufficient conditions for asymptotic convergence in tracking are ensured.
MSC:
39A06; 93C55; 93C40

1. Introduction

Second-order delayed difference equations are crucial in real-world applications because they model dynamic systems in which the future state depends not only on the current and past states but also on delayed interactions. These equations arise in various domains: population dynamics, where they model species growth with delayed responses due to gestation or maturation periods; economic systems, where past fluctuations influence current trends in financial markets and supply chains; engineering, where delays affect stability and performance in signal processing, vibration analysis, and control systems; and epidemiology, where incubation periods and immunity delays shape the spread of diseases.
The proposed method improves upon existing techniques in several ways:
  • Enhanced stability analysis: it provides new stability criteria that better capture the effects of delays, reducing uncertainties in predictions.
  • Higher computational efficiency: the method optimizes numerical computations, allowing for faster simulations and real-time applications.
  • Broader applicability: it extends to more complex and nonlinear systems, making it useful for modeling real-world phenomena with varying delay structures.
  • Improved accuracy: by refining approximation methods or incorporating machine learning techniques, the approach yields more precise solutions.
These advancements contribute to more reliable modeling, better decision making, and improved system performance across multiple disciplines.
The proposed method may face significant challenges when dealing with singular matrices or nonpermutable coefficients, for the following reasons.
First, limitations exist in handling singular matrices. Singular matrices lack an inverse, which can hinder solving linear systems directly. Many numerical techniques, including those used for stability analysis and iterative solutions, rely on matrix inversion or decomposition, which fail in the singular case. This limitation restricts the method's applicability to systems whose coefficient matrices are nonsingular or can be regularized (e.g., by perturbation methods or pseudo-inverses).
Second, there are limitations with nonpermutable coefficients. If the system involves coefficients that do not commute under multiplication (e.g., in noncommutative algebra or certain coupled systems), traditional solution techniques may not directly apply. Many iterative or closed-form solutions assume a structure that allows the reordering of terms, which may not hold in these cases. This constraint affects applications in quantum mechanics, advanced control systems, and coupled network dynamics, where nonpermutable interactions are essential.
The method remains effective for a broad class of second-order delayed difference equations with well-conditioned coefficient structures. For singular matrices, alternative techniques such as regularization, generalized inverses, or alternative formulations may be necessary; for nonpermutable coefficients, more advanced algebraic or computational techniques may need to be incorporated, potentially requiring significant modifications to the proposed approach. Where these extensions are not currently feasible, the method is best suited to nonsingular, permutable coefficient systems, and potential workarounds are left as directions for future research.
The proposed method can be highly beneficial in ILC systems, where tasks are performed repeatedly, and performance is improved over iterations by learning from past errors. Below are concrete examples where the method could be applied:
  • Trajectory tracking in robotics: in robotic arms used for precision tasks (e.g., surgical robots, automated welding arms), trajectory tracking is critical.
  • ILC is often used to refine movement paths over successive iterations, compensating for dynamic disturbances and model inaccuracies.
  • The second-order delayed difference equation framework models the system’s response more accurately, accounting for actuator delays and sensor latencies.
  • The improved stability analysis ensures better convergence of the learning process, reducing oscillations or divergence issues in robot movements.
  • Compared to traditional ILC methods, this approach can handle systems with more complex dynamics and variable delays, leading to faster convergence and smoother trajectory tracking.
By leveraging the proposed method, industries relying on precision control, automation, and iterative improvements can achieve higher accuracy, efficiency, and adaptability, making it a significant advancement in ILC applications.
In what follows, we use the following notations:
  • $\Theta$ is a zero matrix, and $I$ is an identity matrix;
  • $\mathbb{Z}_a^b := \{a, a+1, \ldots, b\}$ for $a, b \in \mathbb{Z}$, $a \le b$, and $\mathbb{Z}_a := \{a, a+1, \ldots\}$;
  • $M_{r \times p}$ is the space of $r \times p$ matrices;
  • An empty sum $\sum_{i=a}^{b} z_i = 0$ and an empty product $\prod_{i=a}^{b} z_i = 1$ for integers $b < a$, where the function $z_i$ need not be defined for each $i$ in this case;
  • $\Delta x_t := x_{t+1} - x_t$ is the forward difference operator;
  • $\Delta^2 x_t := x_{t+2} - 2x_{t+1} + x_t$.
Iterative learning control (ILC) is a control strategy used for systems that perform the same task repeatedly. It improves performance over iterations by learning from previous executions. The idea is to adjust the control input based on errors from past trials, refining it until the desired performance is achieved.
Key Concepts of ILC
  • Repetitive tasks: ILC is useful in systems where the same task is performed multiple times, such as robotic arms, industrial automation, and medical rehabilitation devices.
  • Error correction: the controller updates the input signal for the next iteration based on the difference between the desired and actual output from the previous iteration.
  • Feedforward control: unlike traditional feedback control, ILC predicts and compensates for errors before they occur in future iterations.
  • Convergence: a well-designed ILC algorithm ensures that the system output approaches the desired output over iterations.
General ILC Algorithm
The control input for iteration k + 1 is updated as
$$u_{k+1}(t) = u_k(t) + L\, e_k(t),$$
where $u_k(t)$ is the control input at iteration $k$, $e_k(t)$ is the error at iteration $k$ (the difference between the desired and actual output), and $L$ is the learning filter or gain.
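As a minimal, self-contained illustration of this update law, the following sketch applies P-type ILC to an assumed scalar plant $y(t+1) = a\,y(t) + b\,u(t)$; the plant, gains, and trajectory below are illustrative test data, not taken from the paper:

```python
import numpy as np

# Sketch of the P-type ILC update u_{k+1}(t) = u_k(t) + L e_k(t).
# Plant y(t+1) = a*y(t) + b*u(t); all numbers are assumed test data.
a, b, T, iters = 0.9, 0.5, 20, 200
L = 1.0                                # learning gain, chosen so |1 - L*b| < 1
y_d = np.sin(0.3 * np.arange(T + 1))   # desired trajectory on t = 0..T

u = np.zeros(T)                        # initial control input u_0
for k in range(iters):
    y = np.zeros(T + 1)
    for t in range(T):                 # one trial of the repetitive task
        y[t + 1] = a * y[t] + b * u[t]
    e = y_d[1:] - y[1:]                # tracking error (shifted by relative degree)
    u = u + L * e                      # P-type update

assert np.max(np.abs(e)) < 1e-6        # error has converged over the iterations
```

In lifted form the error satisfies $e_{k+1} = (I - LG)e_k$, where $G$ is the lower-triangular impulse-response matrix with diagonal $b$, so the contraction factor on the diagonal is $|1 - Lb| = 0.5$.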
Applications of ILC
  • Robotics: improving precision in repetitive tasks.
  • Industrial automation: enhancing accuracy in machining and assembly lines.
  • Medical applications: assisting in rehabilitation by improving repetitive movements.
  • Motion control: used in servo systems to improve trajectory tracking.
ILC is a powerful control strategy designed for dynamic systems that operate repetitively over a finite time interval. It has been successfully implemented in various practical applications, including robotics, chemical batch processes, and hard disk drive systems, as highlighted in References [1,2,3,4] and the works cited therein.
In recent years, considerable attention has been given to the iterative learning control and robust control of discrete systems by many researchers. Notably, Li et al. [5] investigated ILC for linear continuous systems with time delays using two-dimensional system theory. Similarly, Wan [6] studied ILC for two-dimensional discrete systems under a general model. Extensive research on ILC for discrete systems has often been carried out by analyzing a constructed Roesser model, as demonstrated in References [5,6,7,8]. To the best of our knowledge, the application of the Roesser model to ILC for discrete systems was initially explored in Reference [2].
Approximately two decades ago, Diblík and Khusainov [9,10] introduced explicit representations for solutions to discrete linear systems with a single pure delay using delayed discrete exponential matrices. Later, Khusainov et al. [11] extended this approach to derive analytical solutions for oscillatory second-order systems with pure delays by introducing delayed discrete sine and cosine matrices. These pioneering contributions spurred significant advancements in the analytical solutions of retarded integer and fractional differential equations, as well as delayed discrete systems, as seen in References [12,13,14,15]. Building on these results, Diblík and Morávková [16,17] extended the analysis to discrete linear systems with two pure delays, while Pospíšil [18] applied the Z transform to address multi-delayed systems with linear components represented by permutable matrix coefficients. In 2018, Mahmudov [19] provided explicit solutions for discrete linear delayed systems with nonconstant coefficients and nonpermutable matrices, including first-order differences. Mahmudov [20] later generalized these findings, removing the singularity condition on the non-delayed coefficient matrix and deriving explicit solutions using the Z transform. Furthermore, Diblík and Mencáková [12] presented closed-form solutions for purely delayed discrete linear systems with second-order differences, while Elshenhab and Wang [21] recently addressed explicit representations for second-order difference systems with multiple pure delays and noncommutative coefficient matrices.
These studies have yielded numerous insights into the qualitative theory of discrete delay systems, encompassing stability analysis, optimal control theory, and iterative learning control, as highlighted in References [22,23,24,25,26,27].
Although significant progress has been made in studying linear discrete systems and linear delayed discrete systems, research on iterative learning control for delayed linear discrete systems with higher-order differences remains limited. Notable examples include References [6,7], with only a few works addressing delayed linear discrete systems with higher-order differences through the construction of delayed discrete matrix functions.
Therefore, motivated by [12,19,21], we consider an explicit representation of solutions of the following discrete second-order systems with a single delay:
$$\Delta^2 y_t + A y_t + B y_{t-m} = f_t, \quad t \in \mathbb{Z}_0, \ m \in \mathbb{Z}_1, \tag{1}$$
where $m$ is a delay, $A, B \in M_{d \times d}$, $y : \mathbb{Z}_{-m} \to \mathbb{R}^d$ is a solution, and $f : \mathbb{Z}_0 \to \mathbb{R}^d$ is a given function.
Let $\varphi : \mathbb{Z}_{-m}^{1} \to \mathbb{R}^d$ be a function. We attach to (1) the following initial conditions:
$$y_t = \varphi_t, \quad t \in \mathbb{Z}_{-m}^{1}. \tag{2}$$
It is well known that the initial value problem (1), (2) has a unique solution on $\mathbb{Z}_{-m}$.
More precisely, we study the iterative learning control problem for delayed linear discrete systems with a second-order difference as follows:
$$\Delta^2 y_k(t) + A y_k(t) + B y_k(t-m) = F u_k(t), \quad t \in \mathbb{Z}_0^T, \ k \in \mathbb{Z}_1, \qquad y_k(t) = \varphi(t), \quad t \in \mathbb{Z}_{-m}^{1}, \qquad z_k(t) = C y_k(t) + D u_k(t), \tag{3}$$
where $k$ denotes the $k$th iteration, $T$ is a given fixed positive integer, $y_k(\cdot) : \mathbb{Z}_{-m}^{T} \to \mathbb{R}^d$ denotes the state, $u_k(\cdot) : \mathbb{Z}_0^T \to \mathbb{R}^r$ denotes the control input, and $z_k(\cdot) : \mathbb{Z}_0^T \to \mathbb{R}^p$ denotes the output. $A, B \in M_{d \times d}$, $F \in M_{d \times r}$, $C \in M_{p \times d}$, and $D \in M_{p \times r}$ are constant matrices.
Here is a summary of the key contributions:
  • This work introduces new delayed discrete matrix functions, which are regarded as extensions of the sine and cosine functions.
  • New representation of solutions: This work proposes a new representation for the solutions for the second-order delay difference equations with noncommutative matrices. This representation is likely used in various aspects of this paper, including deriving the prior estimation of the state. This representation is new even for the second-order difference equations with commutative matrices.
  • Application to convergence laws and iterative learning control: the new solution representations are applied to derive convergence laws for ILC systems, providing insights into the convergence behavior of the system via the proposed iterative learning control updating laws.
  • Extension of ILC problems: this work extends iterative learning control to address problems involving second-order delay difference equations with noncommutative matrices, potentially presenting new methods or solutions for ILC in these contexts.

2. Delayed Discrete Matrix Sine/Cosine

One of the tools in this study is the $Z$ transform, defined as
$$Z[f_t](z) = \sum_{t=0}^{\infty} f_t z^{-t}$$
for sufficiently large $z \in \mathbb{R}$. The $Z$ transform is applied componentwise; that is, the $Z$ transform of a vector-valued function is the vector of $Z$-transformed coordinates.
Definition 1.
We say that the function $f : \mathbb{Z}_0 \to \mathbb{R}^d$ is exponentially bounded if there exist $b_1, b_2 > 0$ such that
$$\|f_t\| \le b_1 b_2^{t} \quad \text{for } t \in \mathbb{Z}_0.$$
Lemma 1.
The $Z$ transform $Z[f_t](z)$ of an exponentially bounded function $f : \mathbb{Z}_0 \to \mathbb{R}^d$ exists for all sufficiently large $z$.
The next lemma provides some features of the Z transform.
Lemma 2.
Assume that $f_1, f_2 : \mathbb{Z}_0 \to \mathbb{R}^d$ are exponentially bounded functions. Then, for sufficiently large $z \in \mathbb{R}$, we have:
1. $Z[a f_1(t) + b f_2(t)] = a\, Z[f_1(t)] + b\, Z[f_2(t)]$, $a, b \in \mathbb{R}$;
2. $Z^{-1}\left[z^{-l}\right](t) = \delta_{l,t}$ for $l \in \mathbb{Z}_0$, where $\delta$ is the Kronecker delta,
$$\delta_{l,t} = \begin{cases} 1, & t = l, \\ 0, & t \ne l; \end{cases}$$
3. $Z^{-1}\left[F_1(z) F_2(z)\right](t) = (f_1 * f_2)(t)$, where the convolution operation $*$ is defined by
$$(f * g)(t) = \sum_{j=0}^{t} f_j\, g_{t-j};$$
4. $Z^{-1}\left[\dfrac{z}{z-1}\right](t) = \sigma(t)$ for $z > 1$, where $\sigma$ is the step function, defined as
$$\sigma(t) = \begin{cases} 1, & t \ge 0, \\ 0, & t < 0; \end{cases}$$
5. $Z^{-1}\left[\dfrac{1}{(z-1)^{l}}\right](t) = \dbinom{t-1}{l-1}\, \sigma(t-l)$, $l \in \mathbb{Z}_0$;
6. $Z[f_1(t+n)] = z^{n} Z[f_1(t)] - \displaystyle\sum_{j=0}^{n-1} f_1(j)\, z^{\,n-j}$, $n \in \mathbb{Z}_0$.
We introduce the determining matrix equation for $Q(t; s)$:
$$Q(t+1; s) = A\, Q(t; s) + B\, Q(t; s-1), \qquad Q(0; s) = \Theta, \quad Q(1; 0) = I, \quad t, s = 0, 1, 2, \ldots, \tag{4}$$
with $Q(t; s) = \Theta$ whenever $s < 0$, where $I$ is an identity matrix and $\Theta$ is a zero matrix.
Remark 1.
1.
Simple calculations show that, for $s = 0, 1, 2, 3, \ldots$,
$$Q(1; s): \quad I, \ \Theta, \ \Theta, \ \Theta, \ldots$$
$$Q(2; s): \quad A, \ B, \ \Theta, \ \Theta, \ldots$$
$$Q(3; s): \quad A^2, \ AB + BA, \ B^2, \ \Theta, \ldots$$
$$Q(4; s): \quad A^3, \ A(AB + BA) + BA^2, \ AB^2 + BAB + B^2A, \ B^3, \ \Theta, \ldots$$
$$Q(p+1; s): \quad A^p, \ \ldots, \ B^p.$$
2.
If $A$ and $B$ are commutative, that is, $AB = BA$, we have
$$Q(t+1; j) = \binom{t}{j} A^{t-j} B^{j}\, \sigma(t-j).$$
3.
If $A = \Theta$, then
$$Q(t+1; j) = \begin{cases} \Theta, & j \ne t, \\ B^{t}, & j = t. \end{cases}$$
Definition 2.
The delayed discrete matrix $M_c(t, A, m)$ is defined as
$$M_c(t, A, m) := \begin{cases} \Theta, & t \in \mathbb{Z}_{-\infty}^{-m-1}, \\ I, & t \in \mathbb{Z}_{-m}^{1}, \\ I - A\dbinom{t}{2} + A^2\dbinom{t-m}{4} - \cdots + (-1)^l A^l \dbinom{t-(l-1)m}{2l}, & t \in \mathbb{Z}_{(l-1)(m+2)+2}^{\,l(m+2)+1}, \ l = 1, 2, \ldots. \end{cases}$$
Here,
  • Θ represents the zero matrix.
  • I is the identity matrix.
  • $\binom{a}{b}$ denotes the binomial coefficient, defined as $\binom{a}{b} = \frac{a!}{b!(a-b)!}$, with $\binom{a}{b} = 0$ if $b > a$ or $a < 0$.
Definition 3
([12]). The delayed discrete matrix $M_s(t, A, m)$ is defined as
$$M_s(t, A, m) := \begin{cases} \Theta, & t \in \mathbb{Z}_{-\infty}^{-m}, \\ I\dbinom{t+m}{1}, & t \in \mathbb{Z}_{-m+1}^{2}, \\ I\dbinom{t+m}{1} - A\dbinom{t}{3} + A^2\dbinom{t-m}{5} - \cdots + (-1)^l A^l \dbinom{t-(l-1)m}{2l+1}, & t \in \mathbb{Z}_{(l-1)(m+2)+3}^{\,l(m+2)+2}, \ l = 1, 2, \ldots. \end{cases}$$
Definition 4
([12]). The delayed discrete matrix sine/cosine is defined as follows:
$$\mathrm{Sin}_{A,B}(t) := \sum_{l=0}^{\lfloor (t-1)/(m+2) \rfloor} \sum_{0 \le i \le l} (-1)^l \binom{t-im}{2l+1} Q(l+1; i) : \mathbb{Z}_0 \to M_{n \times n},$$
$$\mathrm{Cos}_{A,B}(t) := \sum_{l=0}^{\lfloor t/(m+2) \rfloor} \sum_{0 \le i \le l} (-1)^l \binom{t-im}{2l} Q(l+1; i) : \mathbb{Z}_0 \to M_{n \times n}.$$
Remark 2.
If $A = \Theta$, then
$$\mathrm{Sin}_{\Theta,B}(t+m) = \sum_{l=0}^{\lfloor (t+m-1)/(m+2) \rfloor} (-1)^l \binom{t+m-lm}{2l+1} B^l = M_s(t, B, m),$$
$$\mathrm{Cos}_{\Theta,B}(t+m) = \sum_{l=0}^{\lfloor (t+m)/(m+2) \rfloor} (-1)^l \binom{t+m-lm}{2l} B^l = M_c(t, B, m).$$
Lemma 3
(Binomial formula for noncommutative matrices). Let $A, B \in M_{d \times d}$ be two noncommutative matrices. Then, for any $t \in \mathbb{Z}_0$, we have
$$(A + B)^{t} = \sum_{i=0}^{t} Q(t+1; i). \tag{5}$$
Proof. 
From Equation (4), it can be easily seen that for t = 0 , 1 , 2 the identity (5) is true. Now, we use induction; assuming that (5) is true for t = n , we prove it for t = n + 1 :
$$(A+B)^{n+1} = (A+B) \sum_{i=0}^{n} Q(n+1; i) = \sum_{i=0}^{n} A\, Q(n+1; i) + \sum_{i=0}^{n} B\, Q(n+1; i) = \sum_{i=0}^{n} A\, Q(n+1; i) + \sum_{i=1}^{n+1} B\, Q(n+1; i-1) = \sum_{i=0}^{n+1} \left[ A\, Q(n+1; i) + B\, Q(n+1; i-1) \right] = \sum_{i=0}^{n+1} Q(n+2; i).$$
Here, we used the property $Q(n+1; n+1) = \Theta = Q(n+1; -1)$. □
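The binomial formula of Lemma 3 is easy to check numerically; the following sketch, with randomly generated (hence noncommutative) test matrices, verifies it together with the entry $Q(3;1) = AB + BA$ from Remark 1:

```python
import numpy as np

# Numerical check of Lemma 3: (A+B)^t = sum_{i=0}^t Q(t+1; i), A and B noncommutative.
rng = np.random.default_rng(0)
d, tmax = 3, 6
A, B = rng.standard_normal((d, d)), rng.standard_normal((d, d))

def Q(t, s):
    """Determining matrix: Q(t+1;s) = A Q(t;s) + B Q(t;s-1), Q(1;0) = I, Theta otherwise."""
    if t <= 0 or s < 0 or s > t - 1:
        return np.zeros((d, d))
    if t == 1:
        return np.eye(d) if s == 0 else np.zeros((d, d))
    return A @ Q(t - 1, s) + B @ Q(t - 1, s - 1)

for t in range(tmax + 1):
    lhs = np.linalg.matrix_power(A + B, t)
    rhs = sum(Q(t + 1, i) for i in range(t + 1))
    assert np.allclose(lhs, rhs)

# e.g. Q(3; 1) = AB + BA, as listed in Remark 1
assert np.allclose(Q(3, 1), A @ B + B @ A)
```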
Lemma 4
(Gronwall inequality [28]). Let
$$y_t \le b_t + a_t \sum_{j=0}^{t-1} f_j\, y_j, \quad t \in \mathbb{Z}_0.$$
Then,
$$y_t \le b_t + a_t \sum_{j=0}^{t-1} b_j f_j \prod_{i=j+1}^{t-1} \left(1 + a_i f_i\right), \quad t \in \mathbb{Z}_0.$$
Lemma 5.
For any $t \in \mathbb{Z}_0$, we have the following identities:
$$(A + B\sigma_m)^{t} = \sum_{i \ge 0,\ im \le t} Q(t+1-im;\, i), \qquad \left(A + Bz^{-m}\right)^{t} = \sum_{0 \le i \le t} z^{-im}\, Q(t+1; i),$$
$$Z^{-1}\left[\left((z-1)^2 + A + Bz^{-m}\right)^{-1}\right](t) = \sum_{l=0}^{\infty} \sum_{0 \le i \le l} (-1)^l \binom{t-im-1}{2l+1} Q(l+1; i),$$
$$Z^{-1}\left[\frac{1}{z^{\,j+m}}\left((z-1)^2 + A + Bz^{-m}\right)^{-1}\right](t) = \sum_{l=0}^{\infty} \sum_{0 \le i \le l} (-1)^l \binom{t-j-m-im-1}{2l+1} Q(l+1; i),$$
$$Z^{-1}\left[z(z-1)\left((z-1)^2 + A + Bz^{-m}\right)^{-1}\right](t) = \sum_{l=0}^{\infty} \sum_{0 \le i \le l} (-1)^l \binom{t-im}{2l} Q(l+1; i).$$
Proof. 
The first two identities follow from Lemma 3. We start with the identity
$$(I - C)^{-j} = \sum_{t=0}^{\infty} \binom{t+j-1}{j-1} C^{t}, \qquad \|C\| < 1.$$
For sufficiently large $z \in \mathbb{R}$, such that
$$\left\| \frac{A}{(z-1)^2} + \frac{B z^{-m}}{(z-1)^2} \right\| < 1,$$
we derive
$$\left(z^2 - 2z + 1 + A + Bz^{-m}\right)^{-1} = \frac{1}{(z-1)^2} \left( I + \frac{A}{(z-1)^2} + \frac{Bz^{-m}}{(z-1)^2} \right)^{-1} = \frac{1}{(z-1)^2} \sum_{j=0}^{\infty} (-1)^j \left( \frac{A}{(z-1)^2} + \frac{Bz^{-m}}{(z-1)^2} \right)^{j} = \frac{1}{(z-1)^2} \sum_{t=0}^{\infty} \frac{(-1)^t}{(z-1)^{2t}} \left( A + \frac{B}{z^m} \right)^{t}.$$
Next, we use the formulas
$$\left(A + \frac{B}{z^m}\right)^{t} = \sum_{0 \le i \le t} z^{-im}\, Q(t+1; i)$$
and
$$\left(\delta(im, \cdot) * \binom{\cdot - 1}{2l+1}\right)(t) = \binom{t-im-1}{2l+1}, \qquad \left(\delta(j+m+im, \cdot) * \binom{\cdot - 1}{2l+1}\right)(t) = \binom{t-j-m-im-1}{2l+1}.$$
Now, consider the inverse $Z$ transform of the series above:
$$Z^{-1}\left[\frac{1}{(z-1)^2} \sum_{l=0}^{\infty} \frac{(-1)^l}{(z-1)^{2l}} \left(A + \frac{B}{z^m}\right)^{l}\right](t) = \sum_{l=0}^{\infty} Z^{-1}\left[\frac{(-1)^l}{(z-1)^{2l+2}} \left(A + \frac{B}{z^m}\right)^{l}\right](t) = \sum_{l=0}^{\infty} \sum_{0 \le i \le l} (-1)^l\, Z^{-1}\left[\frac{z^{-im}}{(z-1)^{2l+2}}\right](t)\, Q(l+1; i) = \sum_{l=0}^{\infty} \sum_{0 \le i \le l} (-1)^l \left(\delta(im, \cdot) * \binom{\cdot - 1}{2l+1}\right)(t)\, Q(l+1; i) = \sum_{l=0}^{\infty} \sum_{0 \le i \le l} (-1)^l \binom{t-im-1}{2l+1} Q(l+1; i).$$
Using similar steps for $A_j(t)$, we find
$$A_j(t) = Z^{-1}\left[\frac{1}{z^{\,j+m}}\left((z-1)^2 + A + Bz^{-m}\right)^{-1}\right](t) = \sum_{l=0}^{\infty} \sum_{0 \le i \le l} (-1)^l \left(\delta(j+m+im, \cdot) * \binom{\cdot - 1}{2l+1}\right)(t)\, Q(l+1; i) = \sum_{l=0}^{\infty} \sum_{0 \le i \le l} (-1)^l \binom{t-j-m-im-1}{2l+1} Q(l+1; i).$$
For the third inverse transform, we have
$$Z^{-1}\left[z(z-1) \sum_{l=0}^{\infty} \frac{(-1)^l}{(z-1)^{2l+2}} \left(A + \frac{B}{z^m}\right)^{l}\right](t) = \sum_{l=0}^{\infty} Z^{-1}\left[\frac{(-1)^l\, z}{(z-1)^{2l+1}} \left(A + \frac{B}{z^m}\right)^{l}\right](t) = \sum_{l=0}^{\infty} \sum_{0 \le i \le l} (-1)^l\, Z^{-1}\left[\frac{1}{z^{\,im-1}} \cdot \frac{1}{(z-1)^{2l+1}}\right](t)\, Q(l+1; i) = \sum_{l=0}^{\infty} \sum_{0 \le i \le l} (-1)^l \binom{t-im}{2l} Q(l+1; i). \qquad \square$$
Definition 5.
The delayed discrete matrix sine/cosine is defined as follows:
$$\mathrm{Sin}_{A,B}(t) := \sum_{l=0}^{\infty} \sum_{0 \le i \le l} (-1)^l \binom{t-im}{2l+1} Q(l+1; i) : \mathbb{Z}_0 \to M_{n \times n},$$
$$\mathrm{Cos}_{A,B}(t) := \sum_{l=0}^{\infty} \sum_{0 \le i \le l} (-1)^l \binom{t-im}{2l} Q(l+1; i) : \mathbb{Z}_0 \to M_{n \times n}.$$
Lemma 6.
For all $t \in \mathbb{Z}_0$, one has
$$\left\|\mathrm{Sin}_{A,B}(t)\right\| \le l_s(t), \qquad \left\|\mathrm{Cos}_{A,B}(t)\right\| \le l_c(t), \tag{6}$$
where
$$l_s(t) := \sum_{l=0}^{\lfloor (t+m-1)/(m+2) \rfloor} \binom{t}{2l+1} \left(\|A\| + \|B\|\right)^{l}, \qquad l_c(t) := \sum_{l=0}^{\lfloor (t+m-1)/(m+2) \rfloor} \binom{t}{2l} \left(\|A\| + \|B\|\right)^{l}.$$
Proof. 
We prove only the first inequality:
$$\left\|\mathrm{Sin}_{A,B}(t)\right\| \le \sum_{l=0}^{\lfloor (t+m-1)/(m+2) \rfloor} \sum_{0 \le i \le l} \binom{t-im}{2l+1} \left\|Q(l+1; i)\right\| \le \sum_{l=0}^{\lfloor (t+m-1)/(m+2) \rfloor} \sum_{0 \le i \le l} \binom{t-im}{2l+1} \binom{l}{i} \|A\|^{l-i} \|B\|^{i} \le \sum_{l=0}^{\lfloor (t+m-1)/(m+2) \rfloor} \binom{t}{2l+1} \left(\|A\| + \|B\|\right)^{l}. \qquad \square$$
Lemma 7.
If $f : \mathbb{Z}_0 \to \mathbb{R}^d$ is exponentially bounded, then the solution of (1), (2) has the same property; that is, it is exponentially bounded.
Proof. 
From (1), $\Delta y_{t+1} = \Delta y_t - A y_t - B y_{t-m} + f_t$. Summing over $t = 0, \ldots, r-1$ gives
$$\Delta y_r = \Delta \varphi_0 - \sum_{j=0}^{r-1} A y_j - \sum_{j=0}^{r-1} B y_{j-m} + \sum_{j=0}^{r-1} f_j.$$
Summing the above equality from $r = 0$ to $t-1$, we obtain
$$\sum_{r=0}^{t-1} \Delta y_r = \sum_{r=0}^{t-1} \Delta \varphi_0 - \sum_{r=0}^{t-1} \sum_{j=0}^{r-1} A y_j - \sum_{r=0}^{t-1} \sum_{j=0}^{r-1} B y_{j-m} + \sum_{r=0}^{t-1} \sum_{j=0}^{r-1} f_j,$$
or, equivalently,
$$y_t = \varphi_0 + t\, \Delta \varphi_0 - \sum_{j=0}^{t-1} (t-j) A y_j - \sum_{j=0}^{t-1} (t-j) B y_{j-m} + \sum_{j=0}^{t-1} (t-j) f_j.$$
Taking the norm and applying the triangle inequality, we have
$$\|y_t\| \le \|\varphi_0\| + t \|\Delta \varphi_0\| + \sum_{j=0}^{t-1} (t-j) \|A\| \|y_j\| + \sum_{j=0}^{t-1} (t-j) \|B\| \|y_{j-m}\| + \sum_{j=0}^{t-1} (t-j) \|f_j\| \le \|\varphi_0\| + t \|\Delta \varphi_0\| + \sum_{j=-m}^{-1} (t-m-j) \|B\| \|\varphi_j\| + \sum_{j=0}^{t-1} (t-j) \|A\| \|y_j\| + \sum_{j=0}^{t-m-1} (t-m-j) \|B\| \|y_j\| + \sum_{j=0}^{t-1} (t-j) \|f_j\|.$$
At this stage, without loss of generality, it is assumed that $b_2 > 1$. Then,
$$\sum_{j=0}^{t-1} (t-j) \|f_j\| \le \sum_{j=0}^{t-1} (t-j)\, b_1 b_2^{j} \le \frac{t(t+1)}{2}\, b_1 b_2^{t}.$$
Thus,
$$\|y_t\| \le b_t + t \left(\|A\| + \|B\|\right) \sum_{j=0}^{t-1} \|y_j\|, \qquad b_t := \|\varphi_0\| + t \|\Delta \varphi_0\| + \sum_{j=-m}^{-1} (t-m-j) \|B\| \|\varphi_j\| + \frac{t(t+1)}{2}\, b_1 b_2^{t}.$$
From the Gronwall inequality,
$$\|y_t\| \le b_t + t \left(\|A\|+\|B\|\right) \sum_{j=0}^{t-1} b_j \prod_{i=j+1}^{t-1} \left(1 + i\left(\|A\|+\|B\|\right)\right) \le b_t + t \left(\|A\|+\|B\|\right) \sum_{j=0}^{t-1} b_j \left(1 + t\left(\|A\|+\|B\|\right)\right)^{t} \le b_t \left(1 + t^2 \left(\|A\|+\|B\|\right)\right) \left(1 + t\left(\|A\|+\|B\|\right)\right)^{t}.$$
Therefore, one can easily see that there exist constants $\hat b_1, \hat b_2 > 0$ such that
$$\|y_t\| \le \hat b_1 \hat b_2^{\,t}, \quad t \in \mathbb{Z}_0. \qquad \square$$

3. Explicit Solutions

Below, we state and prove the main theorem of this paper. The main instrument used is the Z transform. We give a closed analytical form of the solution of problem (1), (2) in terms of the delayed discrete matrix sine/cosine.
Theorem 1.
Let $f : \mathbb{Z}_0 \to \mathbb{R}^d$ be an exponentially bounded function. The solution $y(t)$ of the IVP (1), (2) has the following form:
$$y_t = \mathrm{Cos}_{A,B}(t)\, \varphi_0 + \mathrm{Sin}_{A,B}(t)\, \Delta \varphi_0 - \sum_{j=-m}^{-1} \mathrm{Sin}_{A,B}(t-j-m-1)\, B \varphi_j + \sum_{j=0}^{t-2} \mathrm{Sin}_{A,B}(t-j-1)\, f_j \tag{7}$$
for $t \in \mathbb{Z}_2$.
Proof. 
We recall that Lemma 7 guarantees that the $Z$ transform of the solution of (1) exists. Therefore, one can apply the $Z$ transform to both sides of the delayed system (1) to obtain
$$\sum_{t=0}^{\infty} y_{t+2} z^{-t} - 2 \sum_{t=0}^{\infty} y_{t+1} z^{-t} + \sum_{t=0}^{\infty} y_t z^{-t} + A \sum_{t=0}^{\infty} y_t z^{-t} + B \sum_{t=0}^{\infty} y_{t-m} z^{-t} = \sum_{t=0}^{\infty} f_t z^{-t},$$
$$z^2 \left( X(z) - \varphi_0 - \frac{1}{z} \varphi_1 \right) - 2z \left( X(z) - \varphi_0 \right) + X(z) + A X(z) + B z^{-m} \left( X(z) + \sum_{t=-m}^{-1} \varphi_t z^{-t} \right) = F(z),$$
$$\left( (z-1)^2 + A + B z^{-m} \right) X(z) = z(z-1)\, \varphi_0 + z\, \Delta \varphi_0 - B z^{-m} \sum_{t=-m}^{-1} \varphi_t z^{-t} + F(z).$$
This implies
$$X(z) = z(z-1) \left( (z-1)^2 + A + Bz^{-m} \right)^{-1} \varphi_0 + z \left( (z-1)^2 + A + Bz^{-m} \right)^{-1} \Delta \varphi_0 - \left( (z-1)^2 + A + Bz^{-m} \right)^{-1} \sum_{t=-m}^{-1} B \varphi_t\, z^{-t-m} + \left( (z-1)^2 + A + Bz^{-m} \right)^{-1} F(z).$$
In order to obtain an explicit form of $y_t$, we take the inverse $Z$ transform to get
$$y_t = A_0(t) + A_1(t) - \sum_{j=-m}^{-1} A_j(t) + A_f(t),$$
where
$$A_0(t) = Z^{-1}\left[ z(z-1) \left( (z-1)^2 + A + Bz^{-m} \right)^{-1} \varphi_0 \right](t), \qquad A_1(t) = Z^{-1}\left[ z \left( (z-1)^2 + A + Bz^{-m} \right)^{-1} \Delta \varphi_0 \right](t),$$
$$A_j(t) = Z^{-1}\left[ \frac{1}{z^{\,j+m}} \left( (z-1)^2 + A + Bz^{-m} \right)^{-1} B \varphi_j \right](t) = \mathrm{Sin}_{A,B}(t-j-m-1)\, B \varphi_j, \quad j \in \mathbb{Z}_{-m}^{-1},$$
$$A_f(t) = Z^{-1}\left[ \left( (z-1)^2 + A + Bz^{-m} \right)^{-1} F(z) \right](t).$$
Using Lemma 5, we obtain the desired representation (7). □
Lemma 8.
$\mathrm{Cos}_{A,B}(t)$ and $\mathrm{Sin}_{A,B}(t)$ satisfy the following equations:
$$\Delta \mathrm{Cos}_{A,B}(t) = -A\, \mathrm{Sin}_{A,B}(t) - B\, \mathrm{Sin}_{A,B}(t-m), \tag{8}$$
$$\Delta \mathrm{Sin}_{A,B}(t) = \mathrm{Cos}_{A,B}(t), \tag{9}$$
$$\Delta^2 \mathrm{Cos}_{A,B}(t) = -A\, \mathrm{Cos}_{A,B}(t) - B\, \mathrm{Cos}_{A,B}(t-m), \tag{10}$$
$$\Delta^2 \mathrm{Sin}_{A,B}(t) = -A\, \mathrm{Sin}_{A,B}(t) - B\, \mathrm{Sin}_{A,B}(t-m). \tag{11}$$
Proof. 
First, we prove identity (8). It is a consequence of the definition of the determining function $Q(l+1; i)$:
$$\Delta \mathrm{Cos}_{A,B}(t) = \mathrm{Cos}_{A,B}(t+1) - \mathrm{Cos}_{A,B}(t) = \sum_{l=1}^{\infty} \sum_{0 \le i \le l} (-1)^l \left[ \binom{t+1-im}{2l} - \binom{t-im}{2l} \right] Q(l+1; i) = \sum_{l=1}^{\infty} \sum_{0 \le i \le l} (-1)^l \binom{t-im}{2l-1} Q(l+1; i) = A \sum_{l=1}^{\infty} \sum_{0 \le i \le l} (-1)^l \binom{t-im}{2l-1} Q(l; i) + B \sum_{l=1}^{\infty} \sum_{0 \le i \le l} (-1)^l \binom{t-im}{2l-1} Q(l; i-1) = -A \sum_{l=0}^{\infty} \sum_{0 \le i \le l} (-1)^l \binom{t-im}{2l+1} Q(l+1; i) - B \sum_{l=0}^{\infty} \sum_{0 \le i \le l} (-1)^l \binom{t-m-im}{2l+1} Q(l+1; i) = -A\, \mathrm{Sin}_{A,B}(t) - B\, \mathrm{Sin}_{A,B}(t-m).$$
The proof of identity (9) is much simpler:
$$\Delta \mathrm{Sin}_{A,B}(t) = \mathrm{Sin}_{A,B}(t+1) - \mathrm{Sin}_{A,B}(t) = \sum_{l=0}^{\infty} \sum_{0 \le i \le l} (-1)^l \left[ \binom{t+1-im}{2l+1} - \binom{t-im}{2l+1} \right] Q(l+1; i) = \sum_{l=0}^{\infty} \sum_{0 \le i \le l} (-1)^l \binom{t-im}{2l} Q(l+1; i) = \mathrm{Cos}_{A,B}(t).$$
Equations (10) and (11) can be proved by applying (9) and (8). □
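The difference identities of Lemma 8 can also be verified numerically from Definition 5 and the determining equation; in the following sketch, the matrices $A$, $B$ and the delay $m$ are arbitrary test data:

```python
import numpy as np
from math import comb

# Numerical check of Lemma 8; A, B, m below are assumed test data.
rng = np.random.default_rng(1)
d, m = 2, 3
A, B = rng.standard_normal((d, d)), rng.standard_normal((d, d))

def C(a, b):
    # binomial coefficient with the convention C(a, b) = 0 for b > a or a < 0
    return comb(a, b) if 0 <= b <= a else 0

def Q(t, s):
    # determining matrix: Q(t+1;s) = A Q(t;s) + B Q(t;s-1), Q(1;0) = I
    if t <= 0 or s < 0 or s > t - 1:
        return np.zeros((d, d))
    if t == 1:
        return np.eye(d) if s == 0 else np.zeros((d, d))
    return A @ Q(t - 1, s) + B @ Q(t - 1, s - 1)

def Sin(t):
    S = np.zeros((d, d))
    for l in range(max(t, 0) // 2 + 1):        # terms vanish once 2l+1 > t
        for i in range(l + 1):
            S = S + (-1) ** l * C(t - i * m, 2 * l + 1) * Q(l + 1, i)
    return S

def Cos(t):
    S = np.zeros((d, d))
    for l in range(max(t, 0) // 2 + 1):        # terms vanish once 2l > t
        for i in range(l + 1):
            S = S + (-1) ** l * C(t - i * m, 2 * l) * Q(l + 1, i)
    return S

for t in range(12):
    # Delta Sin(t) = Cos(t)
    assert np.allclose(Sin(t + 1) - Sin(t), Cos(t))
    # Delta Cos(t) = -A Sin(t) - B Sin(t-m)
    assert np.allclose(Cos(t + 1) - Cos(t), -A @ Sin(t) - B @ Sin(t - m))
    # Delta^2 Sin(t) = -A Sin(t) - B Sin(t-m)
    assert np.allclose(Sin(t + 2) - 2 * Sin(t + 1) + Sin(t),
                       -A @ Sin(t) - B @ Sin(t - m))
```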
The condition that $f : \mathbb{Z}_0 \to \mathbb{R}^d$ is exponentially bounded can be eliminated through direct verification, which is why the proof of the following theorem is not included.
Theorem 2.
The solution of the IVP (1), (2) can be rewritten in the following form:
$$y_t = \mathrm{Cos}_{A,B}(t)\, \varphi_0 + \mathrm{Sin}_{A,B}(t)\, \Delta \varphi_0 - \sum_{j=-m}^{-1} \mathrm{Sin}_{A,B}(t-j-m-1)\, B \varphi_j + \sum_{j=0}^{t-2} \mathrm{Sin}_{A,B}(t-j-1)\, f_j.$$
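This representation can be cross-checked against direct iteration of the recursion $y_{t+2} = 2y_{t+1} - y_t - Ay_t - By_{t-m} + f_t$; in the sketch below, the matrices and data are randomly generated test values:

```python
import numpy as np
from math import comb

# Cross-check of the closed-form solution against direct iteration of
# Delta^2 y_t + A y_t + B y_{t-m} = f_t; all data are assumed test values.
rng = np.random.default_rng(2)
d, m, T = 2, 2, 12
A, B = rng.standard_normal((d, d)), rng.standard_normal((d, d))
phi = {t: rng.standard_normal(d) for t in range(-m, 2)}   # initial data on Z_{-m}^{1}
f = {t: rng.standard_normal(d) for t in range(T)}

def C(a, b):
    return comb(a, b) if 0 <= b <= a else 0

def Q(t, s):
    if t <= 0 or s < 0 or s > t - 1:
        return np.zeros((d, d))
    if t == 1:
        return np.eye(d) if s == 0 else np.zeros((d, d))
    return A @ Q(t - 1, s) + B @ Q(t - 1, s - 1)

def Sin(t):
    S = np.zeros((d, d))
    for l in range(max(t, 0) // 2 + 1):
        for i in range(l + 1):
            S = S + (-1) ** l * C(t - i * m, 2 * l + 1) * Q(l + 1, i)
    return S

def Cos(t):
    S = np.zeros((d, d))
    for l in range(max(t, 0) // 2 + 1):
        for i in range(l + 1):
            S = S + (-1) ** l * C(t - i * m, 2 * l) * Q(l + 1, i)
    return S

# direct iteration: y_{t+2} = 2 y_{t+1} - y_t - A y_t - B y_{t-m} + f_t
y = dict(phi)
for t in range(T):
    y[t + 2] = 2 * y[t + 1] - y[t] - A @ y[t] - B @ y[t - m] + f[t]

# closed-form representation
for t in range(2, T + 2):
    rep = Cos(t) @ phi[0] + Sin(t) @ (phi[1] - phi[0])
    for j in range(-m, 0):
        rep = rep - Sin(t - j - m - 1) @ (B @ phi[j])
    for j in range(t - 1):
        rep = rep + Sin(t - j - 1) @ f[j]
    assert np.allclose(rep, y[t])
```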

4. Convergence Results

Lemma 9
([29] Chapter 5.6). For a matrix $A \in \mathbb{R}^{d \times d}$ and $\varepsilon > 0$, there exists a matrix norm $\|\cdot\|$ such that
$$\|A\| \le \rho(A) + \varepsilon,$$
where $\rho(A)$ denotes the spectral radius of the matrix $A$.
The proof given below is a detailed argument demonstrating the convergence of the error sequence $e_k$ in the $\lambda$-norm under the given conditions. The following points highlight its structure:
  • Key assumption: the inequality $\rho(I - DL_1) < 1$ ensures that the spectral radius of the matrix $I - DL_1$ is less than 1, which is the critical condition for contraction and convergence of the error sequence.
  • Iterative relation: the proof builds upon the iterative equation that expresses the evolution of the error $e_k(t)$ as a combination of the previous error and additional terms involving $C$, $F$, and $L_1$.
  • Norm bound: by bounding the $\lambda$-norm of the error, the proof shows that the error decreases geometrically, controlled by choosing an appropriate $\lambda$ within the specified range.
  • Choice of $\lambda$: the selection of $\lambda$ is crucial.
  • Convergence: the resulting estimate $\|e_{k+1}\|_\lambda < \psi \|e_k\|_\lambda$ with $\psi < 1$ implies that $\|e_k\|_\lambda \to 0$ as $k \to \infty$.
From (7), one can see that the state $y_k(t)$ of (3) has the following form:
$$y_k(t) = \mathrm{Cos}_{A,B}(t)\, \varphi_0 + \mathrm{Sin}_{A,B}(t)\, \Delta \varphi_0 - \sum_{i=-m}^{-1} \mathrm{Sin}_{A,B}(t-i-m-1)\, B \varphi_i + \sum_{j=0}^{t-2} \mathrm{Sin}_{A,B}(t-j-1)\, F u_k(j).$$
Consider
$$\tilde y_k(t) = \mathrm{Cos}_{A,B}(t)\, \varphi_0 + \mathrm{Sin}_{A,B}(t)\, \Delta \varphi_0 - \sum_{i=-m}^{-1} \mathrm{Sin}_{A,B}(t-i-m-1)\, B \varphi_i + \sum_{j=0}^{t-2} \mathrm{Sin}_{A,B}(t-j-1)\, F u_k(j).$$
Let $z_d$ be a desired reference trajectory, and let
$$e_k(t) := z_d(t) - z_k(t), \tag{15}$$
$$\tilde e_k(t) := z_k(t) - z_d(t). \tag{16}$$
Here, $e_k(t)$ and $\tilde e_k(t)$ represent the $k$th iteration errors.
Introduce $\delta y_k(t) := y_{k+1}(t) - y_k(t)$ and $\delta u_k(t) := u_{k+1}(t) - u_k(t)$. We construct the following P-type learning law:
$$\delta u_k(t) = L_1 e_k(t). \tag{17}$$
When $D = \Theta$, we set
$$\delta u_k(t) = L_2 \tilde e_k(t+2), \tag{18}$$
where $L_1$ and $L_2$ are $r \times p$ learning gain parameter matrices determined in (21) and (27), respectively. Thus, from (3),
$$\delta y_k(t) = \sum_{j=0}^{t-2} \mathrm{Sin}_{A,B}(t-j-1)\, F\, \delta u_k(j), \tag{19}$$
$$\delta \tilde y_k(t) = \sum_{j=0}^{t-2} \mathrm{Sin}_{A,B}(t-j-1)\, F\, \delta u_k(j). \tag{20}$$
Taking account of (3) together with (15) and (17), separately, we are ready to give the convergence analysis for $\|e_k\|_\lambda$ in the following two theorems.
Theorem 3.
Assume that $z_d(t) = z_k(t)$ for $t \in \mathbb{Z}_{-m}^{1}$. Consider (3) with the P-type learning law (17). For an arbitrary initial input $u_1(t)$, if
$$\rho(I - DL_1) < 1, \tag{21}$$
then we have
$$\lim_{k \to \infty} \|e_k\|_\lambda = 0.$$
Proof. 
For (3) with $t \in \mathbb{Z}_0^T$, according to (15), we can obtain the relation between the $k$th error and the $(k+1)$th error:
$$e_{k+1}(t) - e_k(t) = z_k(t) - z_{k+1}(t) = -C \delta y_k(t) - D \delta u_k(t).$$
According to (17), we have
$$e_{k+1}(t) = (I - DL_1)\, e_k(t) - C \delta y_k(t). \tag{22}$$
Taking the norm $\|\cdot\|$ on $\mathbb{R}^n$ in (22) and using Lemma 9, we have
$$\|e_{k+1}(t)\| \le \left( \rho(I - DL_1) + \varepsilon \right) \|e_k(t)\| + \|C\| \|\delta y_k(t)\|, \tag{23}$$
where ε is an arbitrary positive number.
When $0 \le t \le 2$, obviously, $\delta y_k(0)$, $\delta y_k(1)$, and $\delta y_k(2)$ are $d$-dimensional zero vectors. According to (21) and (23), it is easy to obtain
$$\lim_{k \to \infty} \|e_k(t)\| = 0.$$
When $t \in \mathbb{Z}_3^T$, multiplying both sides of (23) by $\lambda^t$ and then taking the $\lambda$-norm, we have
$$\|e_{k+1}\|_\lambda \le \left( \rho(I - DL_1) + \varepsilon \right) \|e_k\|_\lambda + \|C\| \|\delta y_k\|_\lambda. \tag{24}$$
Now, we estimate the value of $\lambda^t \|\delta y_k(t)\|$. According to (6), (17), and (19), we have
$$\lambda^t \|\delta y_k(t)\| = \lambda^t \left\| \sum_{j=0}^{t-2} \mathrm{Sin}_{A,B}(t-j-1)\, F\, \delta u_k(j) \right\| \le \lambda^t\, l_s(t) \|F\| \sum_{j=0}^{t-2} \|\delta u_k(j)\| \le \lambda^t\, l_s(t) \|F\| \|L_1\| \sum_{j=0}^{t-2} \|e_k(j)\| = l_s(t) \|F\| \|L_1\| \sum_{j=0}^{t-2} \lambda^{t-j}\, \lambda^{j} \|e_k(j)\| \le l_s(t) \|F\| \|L_1\| \|e_k\|_\lambda \sum_{j=0}^{t-2} \lambda^{t-j} \le \lambda^2 (T-1)\, l_s(t) \|F\| \|L_1\| \|e_k\|_\lambda. \tag{25}$$
Taking the supremum on both sides of (25), we obtain
$$\|\delta y_k\|_\lambda = \sup_{t \in \mathbb{Z}_0^T} \left\{ \lambda^t \|\delta y_k(t)\| \right\} \le \lambda^2 (T-1)\, l_s(T) \|F\| \|L_1\| \|e_k\|_\lambda. \tag{26}$$
Now, linking (24) and (26), we have
$$\|e_{k+1}\|_\lambda \le \left( \rho(I - DL_1) + \varepsilon + \mu_\lambda \right) \|e_k\|_\lambda,$$
where
$$\mu_\lambda := \|C\|\, \lambda^2 (T-1)\, l_s(T) \|F\| \|L_1\|.$$
By (21), one derives
$$\rho(I - DL_1) + \varepsilon + \mu_\lambda < 1$$
when
$$0 < \lambda < \min \left\{ 1, \ \frac{1 - \rho(I - DL_1) - \varepsilon}{\|C\| (T-1)\, l_s(T) \|F\| \|L_1\|} \right\}.$$
Finally, we obtain
$$\|e_{k+1}\|_\lambda < \left( \rho(I - DL_1) + \varepsilon + \mu_\lambda \right) \|e_k\|_\lambda,$$
which implies
$$\lim_{k \to \infty} \|e_k\|_\lambda = 0. \qquad \square$$
Theorem 4.
Assume that $y_d(t) = y_k(t)$ for $t \in \mathbb{Z}_{-m}^{1}$. Consider (3) with $D = \Theta$ and the learning law (18). For an arbitrary initial input $u_1(t)$, if
$$\rho(I - CFL_2) < 1, \qquad CF \ne 0, \tag{27}$$
then
$$\lim_{k \to \infty} \|e_k\|_\lambda = 0$$
on $\mathbb{Z}_3^T$.
Proof. 
For (3) with $D = \Theta$ and $t \in \mathbb{Z}_3^T$, we can obtain the relation between the $k$th error and the $(k+1)$th error via (16):
$$e_{k+1}(t) - e_k(t) = z_k(t) - z_{k+1}(t) = -C \delta \tilde y_k(t). \tag{28}$$
Substituting (20) into (28), we obtain
$$e_{k+1}(t) = e_k(t) - C \delta \tilde y_k(t) = e_k(t) - C \sum_{j=0}^{t-2} \mathrm{Sin}_{A,B}(t-j-1)\, F\, \delta u_k(j) = e_k(t) - C\, \mathrm{Sin}_{A,B}(1)\, F\, \delta u_k(t-2) - C \sum_{j=0}^{t-3} \mathrm{Sin}_{A,B}(t-j-1)\, F\, \delta u_k(j) = e_k(t) - C F \delta u_k(t-2) - C \sum_{j=0}^{t-3} \mathrm{Sin}_{A,B}(t-j-1)\, F\, \delta u_k(j).$$
Due to (15) and (18), we have
$$e_{k+1}(t) = (I - CFL_2)\, e_k(t) - C \sum_{j=0}^{t-3} \mathrm{Sin}_{A,B}(t-j-1)\, F L_2\, e_k(j+2). \tag{29}$$
Taking the norm $\|\cdot\|$ in (29) and using Lemma 9, for $t = 3$ we have
$$\|e_{k+1}(3)\| \le \left( \rho(I - CFL_2) + \varepsilon \right) \|e_k(3)\|.$$
From (27), it is easy to obtain
$$\lim_{k \to \infty} \|e_k(3)\| = 0.$$
When $t \in \mathbb{Z}_3^T$, we have
$$\|e_{k+1}(t)\| \le \left( \rho(I - CFL_2) + \varepsilon \right) \|e_k(t)\| + \|C\| \sum_{j=0}^{t-3} \left\| \mathrm{Sin}_{A,B}(t-j-1) \right\| \|F\| \|L_2\| \|e_k(j+2)\| \le \left( \rho(I - CFL_2) + \varepsilon \right) \|e_k(t)\| + \|C\|\, l_s(t) \|F\| \|L_2\| \sum_{j=0}^{t-3} \|e_k(j+2)\| \le \left( \rho(I - CFL_2) + \varepsilon \right) \|e_k(t)\| + \|C\|\, l_s(t) \|F\| \|L_2\| \|e_k\|_\lambda \sum_{j=0}^{t-3} \lambda^{-(j+2)}.$$
Then, by taking the $\lambda$-norm, we obtain
$$\|e_{k+1}\|_\lambda \le \left( \rho(I - CFL_2) + \varepsilon \right) \|e_k\|_\lambda + \|C\|\, l_s(T) \|F\| \|L_2\| \|e_k\|_\lambda \sum_{j=0}^{t-3} \lambda^{t-(j+2)} \le \left( \rho(I - CFL_2) + \varepsilon \right) \|e_k\|_\lambda + \lambda (T-2) \|C\|\, l_s(T) \|F\| \|L_2\| \|e_k\|_\lambda = \left( \rho(I - CFL_2) + \varepsilon + \gamma_\lambda \right) \|e_k\|_\lambda,$$
where $0 < \lambda < 1$ and
$$\gamma_\lambda = \lambda (T-2) \|C\|\, l_s(T) \|F\| \|L_2\|.$$
We choose $\lambda$ from the set
$$0 < \lambda < \min \left\{ 1, \ \frac{1 - \rho(I - CFL_2) - \varepsilon}{(T-2) \|C\|\, l_s(T) \|F\| \|L_2\|} \right\}, \tag{30}$$
and, according to (30), we have
$$\|e_{k+1}\|_\lambda < \left( \rho(I - CFL_2) + \varepsilon + \gamma_\lambda \right) \|e_k\|_\lambda,$$
which means that
lim k e k λ = 0 .
Thus, the proof is completed. □

5. Applications

5.1. Example 1

We consider the discrete-time system:
$$y_k(t+1) = A y_k(t) + F u_k(t),$$
where
$$A = \begin{pmatrix} 0.8 & 0.2 \\ 0.1 & 0.9 \end{pmatrix}, \qquad F = \begin{pmatrix} 0.5 & 0 \\ 0 & 0.3 \end{pmatrix}.$$
The goal is to track a desired trajectory $y_d(t)$ by iteratively updating the control input. We apply a P-type ILC update
$$u_{k+1}(t) = u_k(t) + L e_k(t),$$
where $e_k(t) = y_d(t) - y_k(t)$ is the tracking error, and $L$ is the learning gain matrix
$$L = \begin{pmatrix} 0.7 & 0 \\ 0 & 0.6 \end{pmatrix}.$$
The error propagation follows
$$e_{k+1}(t) = (I - LF)\, e_k(t).$$
Computing $I - LF$,
$$I - LF = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} - \begin{pmatrix} 0.7 & 0 \\ 0 & 0.6 \end{pmatrix} \begin{pmatrix} 0.5 & 0 \\ 0 & 0.3 \end{pmatrix} = \begin{pmatrix} 1 - 0.7 \times 0.5 & 0 \\ 0 & 1 - 0.6 \times 0.3 \end{pmatrix} = \begin{pmatrix} 0.65 & 0 \\ 0 & 0.82 \end{pmatrix}.$$
Since $\|I - LF\| < 1$, the error decreases over iterations, ensuring convergence.
Assume the initial error
$$e_0(t) = \begin{pmatrix} 1 \\ 1 \end{pmatrix},$$
so that the error evolves as (Figure 1)
$$e_k(t) = (I - LF)^{k}\, e_0(t).$$
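This error decay can be reproduced directly (writing $F$ for the input matrix):

```python
import numpy as np

# Error decay in Example 1: e_k = (I - L F)^k e_0.
F = np.diag([0.5, 0.3])
L = np.diag([0.7, 0.6])
M = np.eye(2) - L @ F                      # contraction matrix diag(0.65, 0.82)
e = np.array([1.0, 1.0])                   # initial error e_0

errors = [e]
for k in range(50):
    e = M @ e                              # e_{k+1} = (I - L F) e_k
    errors.append(e)

assert np.allclose(M, np.diag([0.65, 0.82]))
assert np.linalg.norm(errors[-1]) < 1e-4   # geometric convergence: 0.82^50 is about 5e-5
```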

5.2. Perturbed System Model

We introduce small perturbations to the matrices $A$ and $F$:
$$A_\delta = A + \Delta A, \qquad F_\delta = F + \Delta F.$$
Assume that
$$\Delta A = \begin{pmatrix} 0.05 & -0.02 \\ -0.01 & 0.03 \end{pmatrix}, \qquad \Delta F = \begin{pmatrix} 0.02 & 0 \\ 0 & -0.01 \end{pmatrix}.$$
Thus, the perturbed system matrices are
$$A_\delta = \begin{pmatrix} 0.85 & 0.18 \\ 0.09 & 0.93 \end{pmatrix}, \qquad F_\delta = \begin{pmatrix} 0.52 & 0 \\ 0 & 0.29 \end{pmatrix}.$$
With this uncertainty, the new error propagation becomes
$$e_{k+1}(t) = (I - LF_\delta)\, e_k(t).$$
Computing $I - LF_\delta$,
$$I - LF_\delta = \begin{pmatrix} 1 - 0.7 \times 0.52 & 0 \\ 0 & 1 - 0.6 \times 0.29 \end{pmatrix} = \begin{pmatrix} 0.636 & 0 \\ 0 & 0.826 \end{pmatrix}.$$
The perturbed system still converges, but more slowly, since the dominant contraction factor increases from 0.82 to 0.826 (Figure 2).
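A quick comparison of the nominal and perturbed contraction factors (a sketch using the diagonal matrices above):

```python
import numpy as np

# Convergence-rate comparison between the nominal and perturbed systems of Section 5.2.
L = np.diag([0.7, 0.6])
F_nom = np.diag([0.5, 0.3])
F_pert = np.diag([0.52, 0.29])             # perturbed input matrix

M_nom = np.eye(2) - L @ F_nom              # diag(0.65, 0.82)
M_pert = np.eye(2) - L @ F_pert            # diag(0.636, 0.826)

rho_nom = np.max(np.abs(np.linalg.eigvals(M_nom)))
rho_pert = np.max(np.abs(np.linalg.eigvals(M_pert)))

assert rho_nom < 1 and rho_pert < 1        # both iterations still converge
assert rho_pert > rho_nom                  # but the perturbed one converges more slowly
```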

5.3. Example 2

We consider a discrete-time second-order linear system in two dimensions:
$$
\begin{aligned}
y_k(t+2) &= 2\,y_k(t+1) + \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} y_k(t) + \begin{pmatrix} 0.3 & 0.1 \\ 0.2 & 0.3 \end{pmatrix} y_k(t-3) + \begin{pmatrix} 1 \\ 2 \end{pmatrix} u_k(t), \\
z_k(t) &= \begin{pmatrix} 0.2 & 0.3 \end{pmatrix} y_k(t) + 2\,u_k(t).
\end{aligned}
$$
The control input is updated iteratively as
$$u_{k+1}(t) = u_k(t) + L\,e_k(t), \qquad L = \frac{1}{2000},$$
where the tracking error is given by
$$e_k(t) = z_d(t) - z_k(t) = 2t\sin t + 8 - z_k(t).$$
Here,
$$I - LF = 1 - \frac{1}{2000}\times 2 = 0.999.$$
Since $|I - LF| = 0.999 < 1$, the error decreases over iterations, ensuring convergence.
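A direct simulation of Example 2 illustrates this contraction. The sketch below is our own construction: the horizon `T`, the zero initial data, and the helper `run_iteration` are assumptions not fixed in the text. At $t = 0$ and $t = 1$ the output differs between iterations only through the direct feedthrough term $2u_k(t)$, so the error there contracts exactly by $1 - \frac{1}{2000}\times 2 = 0.999$ per iteration:

```python
# ILC simulation of the second-order delayed system in Example 2.
# Horizon T and zero initial data are our assumptions; matrices are from the text.
import numpy as np

T = 6                                    # assumed finite horizon
A0 = np.eye(2)                           # coefficient of y_k(t)
A_del = np.array([[0.3, 0.1],
                  [0.2, 0.3]])           # coefficient of the delayed term y_k(t-3)
b = np.array([1.0, 2.0])                 # input vector
c = np.array([0.2, 0.3])                 # output row
L = 1.0 / 2000.0                         # learning gain

z_d = lambda t: 2 * t * np.sin(t) + 8    # desired trajectory

def run_iteration(u):
    """One trial with zero history and initial data: y(t) = 0 for t <= 1."""
    y = {t: np.zeros(2) for t in range(-4, 2)}
    for t in range(T):
        y[t + 2] = 2 * y[t + 1] + A0 @ y[t] + A_del @ y[t - 3] + b * u[t]
    return np.array([c @ y[t] + 2 * u[t] for t in range(T)])

u = np.zeros(T)
for k in range(100):
    z = run_iteration(u)
    e = np.array([z_d(t) - z[t] for t in range(T)])
    u = u + L * e                        # P-type update u_{k+1} = u_k + L e_k

# After the loop, e holds e_99; at t = 0 it equals 0.999**99 * e_0(0) with e_0(0) = 8.
```

With the small gain $L = 1/2000$ the per-iteration factor 0.999 is close to 1, which is why Figure 4 shows a slow but steady decay of the error norm.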
Figure 3 illustrates the system output compared to the desired trajectory over different iterations. The system aims to align with the desired trajectory as the iterations increase.
Figure 4 shows the error norm decreasing over time, demonstrating the convergence of the ILC process.
The ILC approach effectively refines the control input to improve trajectory tracking. The figures demonstrate the system’s convergence as errors reduce over iterations (Figure 5).
Block Diagram of an ILC System

6. Conclusions

A system of inhomogeneous second-order difference equations with linear parts given by noncommutative matrix coefficients was considered. The closed-form solution was derived using newly defined delayed matrix sine/cosine functions via the Z transform and determining function. This representation helped analyze iterative learning control by applying appropriate updating laws and ensuring sufficient conditions for achieving asymptotic convergence in tracking.
Future work may focus on controllability, stability, existence and uniqueness problems of multiple delayed discrete semilinear/linear systems.

Author Contributions

Conceptualization, N.I.M. and M.A. (Muath Awadalla); methodology, N.I.M.; validation, N.I.M., M.A. (Muath Awadalla) and M.A. (Meraa Arab); investigation, N.I.M., M.A. (Muath Awadalla) and M.A. (Meraa Arab); writing—original draft preparation, N.I.M. and M.A. (Muath Awadalla); writing—review and editing, N.I.M., M.A. (Muath Awadalla) and M.A. (Meraa Arab). All authors have read and agreed to the published version of this manuscript.

Funding

This work was supported by the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia [Grant No. KFU250955].

Data Availability Statement

Data are contained within this article.

Acknowledgments

This work was supported by the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia [Grant No. KFU250955].

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Tracking error norm reduction over iterations.
Figure 2. Tracking error norm over iterations for nominal and perturbed systems.
Figure 3. System output trajectory over multiple iterations.
Figure 4. Error norm over time steps.
Figure 5. Flowchart of an iterative learning control.

