Article

Complex, Temporally Variant SVD via Real ZN Method and 11-Point ZeaD Formula from Theoretics to Experiments

1 School of Humanities and Management, Youjiang Medical University for Nationalities, Baise 533000, China
2 School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou 510006, China
3 Key Laboratory of Machine Intelligence and Advanced Computing, Ministry of Education, Guangzhou 510006, China
4 School of Intelligent Systems Engineering, Sun Yat-sen University, Shenzhen 518107, China
* Authors to whom correspondence should be addressed.
Mathematics 2025, 13(11), 1841; https://doi.org/10.3390/math13111841
Submission received: 22 April 2025 / Revised: 23 May 2025 / Accepted: 28 May 2025 / Published: 31 May 2025

Abstract: The complex, temporally variant singular value decomposition (SVD) problem is formulated and investigated in this paper. First, the original problem is transformed into an equation system. Then, by using the real zeroing neurodynamics (ZN) method, matrix vectorization, the Kronecker product, the vectorized transpose matrix, and a dimensionality reduction technique, a dynamical model, termed the continuous-time SVD (CTSVD) model, is derived and investigated. Furthermore, a new 11-point Zhang et al. discretization (ZeaD) formula with fifth-order precision is proposed and studied. In addition, with the use of the 11-point and other ZeaD formulas, five discrete-time SVD (DTSVD) algorithms are further acquired. Meanwhile, theoretical analyses and numerical experimental results substantiate the correctness and convergence of the proposed CTSVD model and DTSVD algorithms.

1. Introduction

As a fundamental matrix decomposition in linear algebra, singular value decomposition (SVD) plays an important role in modern scientific research and engineering applications [1,2,3,4,5,6,7,8]. For instance, in [1], SVD was used for the security protection of medical images. In [2], a recommender system algorithm using SVD and Gower's Ranking was proposed. In [3], an SVD-based parameter identification method for discrete-time stochastic systems with unknown exogenous inputs was presented. A k-SVD-based compressive sensing method for visual chaotic image encryption was discussed in [4]. A robust speech steganography method using differential SVD was provided in [5]. In [6], an adaptive signal denoising method based on SVD for the fault diagnosis of rolling bearings was analyzed and discussed. In [7], an SVD method was proposed and applied to temporally variant symmetric matrices in the real number domain. In [8], a decomposition model for SVD in the real number domain was presented and investigated.
In recent years, a dynamical method termed the zeroing neurodynamics (ZN) method has been proposed and widely used for solving temporally variant problems [9,10,11,12,13,14,15,16,17,18,19,20,21,22], such as nonlinear system control [9], predictive computation [10], robot control [11,12,13,14], matrix inversion [15,16], nonlinear optimization [17,18,19], matrix inequalities [20], and QR decomposition [21,22]. Generally, when the ZN method is used to solve a temporally variant problem, an error function is defined first, and its value is then forced to converge to zero by a design formula, yielding a dynamical model, also called a continuous-time solution model. Moreover, to facilitate implementation on modern digital hardware, the dynamical model needs to be discretized into a discrete-time algorithm. For that purpose, a class of finite difference formulas, termed Zhang et al. discretization (ZeaD) formulas [23,24,25,26], has been proposed and used. For example, in [23], a four-point ZeaD formula was derived and applied to matrix inversion. In [24,25], ZeaD formulas were presented and used for manipulator motion generation and nonlinear optimization, respectively. A model for robot control was proposed in [26] by using the eight-point ZeaD formula.
Unlike [7,8], this paper is dedicated to the SVD of temporally variant matrices in the complex number domain; in other words, the SVD method proposed in this paper primarily targets complex-valued matrices. To the best of our knowledge, the complex, temporally variant SVD problem, which is formulated and studied in this paper, has not yet been investigated by other researchers. To address this problem, an equation system is presented first. Then, by using the real ZN method, i.e., applying the ZN method in the real number domain, six error functions are defined. Next, the linear design formula is applied, together with matrix vectorization, the Kronecker product, the vectorized transpose matrix, and a dimensionality reduction technique, and a dynamical model, i.e., a continuous-time SVD (CTSVD) model, is derived for solving the complex, temporally variant SVD problem. Meanwhile, a new 11-point ZeaD formula is proposed. Moreover, five discrete-time SVD (DTSVD) algorithms are further obtained by using the 11-point and other ZeaD formulas.
For better readability, the remaining contents of this paper are organized into four sections. The problem formulation and related preparation are provided in Section 2. In Section 3, the CTSVD model and the new 11-point ZeaD formula as well as five DTSVD algorithms are derived and investigated. Meanwhile, the corresponding theoretical analyses are given in this section. In Section 4, the numerical experiments are described and the results are displayed. The concluding remarks are given in Section 5.
Additionally, the main contributions of this paper are listed below.
  • The complex, temporally variant SVD problem is formulated and studied in this paper for the first time.
  • A new 11-point ZeaD formula with $O(\tau^5)$ precision is proposed and investigated.
  • A new CTSVD model and five DTSVD algorithms are derived, with their correctness substantiated both theoretically and experimentally.

2. Problem Formulation and Preparation

Generally, the temporally variant SVD problem in the complex number domain is described as
$$C(t) = U(t)\,\Sigma(t)\,V^{*}(t), \qquad (1)$$
in which $C(t)\in\mathbb{C}^{m\times n}$ is a given smooth temporally variant matrix, $U(t)\in\mathbb{C}^{m\times m}$ and $V(t)\in\mathbb{C}^{n\times n}$ are unknown temporally variant unitary matrices, $\Sigma(t)\in\mathbb{R}^{m\times n}$ is an unknown non-negative diagonal matrix, and superscript $*$ denotes the conjugate transpose operation of a matrix. According to the definitions of unitary matrix and diagonal matrix, the following equations are obtained:
$$\Sigma(t) = U^{*}(t)C(t)V(t),\quad U(t)U^{*}(t) = I_m,\quad V(t)V^{*}(t) = I_n,\quad \sigma_{ij}(t) = 0,\ i\neq j,$$
where $I_m\in\mathbb{R}^{m\times m}$ and $I_n\in\mathbb{R}^{n\times n}$ are identity matrices, and $\sigma_{ij}(t)$ represents the $(i,j)$th element of $\Sigma(t)$. Since a complex number is expressed as the sum of its real and imaginary parts, we obtain
$$C(t) = C_R(t) + C_I(t)i,\quad U(t) = U_R(t) + U_I(t)i,\quad U^{*}(t) = U_R^{T}(t) - U_I^{T}(t)i,$$
$$V(t) = V_R(t) + V_I(t)i,\quad V^{*}(t) = V_R^{T}(t) - V_I^{T}(t)i,$$
with superscript T representing the transpose operator of a matrix and i denoting the pure imaginary unit. Consequently, the following equations are derived:
$$\Sigma(t) = U_R^T(t)C_R(t)V_R(t) + U_R^T(t)C_I(t)V_R(t)i - U_I^T(t)C_R(t)V_R(t)i + U_I^T(t)C_I(t)V_R(t) + U_R^T(t)C_R(t)V_I(t)i - U_R^T(t)C_I(t)V_I(t) + U_I^T(t)C_R(t)V_I(t) + U_I^T(t)C_I(t)V_I(t)i,$$
$$U_R(t)U_R^T(t) - U_R(t)U_I^T(t)i + U_I(t)U_R^T(t)i + U_I(t)U_I^T(t) = I_m,$$
$$V_R(t)V_R^T(t) - V_R(t)V_I^T(t)i + V_I(t)V_R^T(t)i + V_I(t)V_I^T(t) = I_n,$$
$$\sigma_{ij}(t) = 0,\ i\neq j,$$
that is,
$$\Sigma(t) = \big(U_R^T(t)C_R(t)V_R(t) + U_I^T(t)C_I(t)V_R(t) - U_R^T(t)C_I(t)V_I(t) + U_I^T(t)C_R(t)V_I(t)\big) + \big(U_R^T(t)C_I(t)V_R(t) - U_I^T(t)C_R(t)V_R(t) + U_R^T(t)C_R(t)V_I(t) + U_I^T(t)C_I(t)V_I(t)\big)i,$$
$$U_R(t)U_R^T(t) + U_I(t)U_I^T(t) + \big(U_I(t)U_R^T(t) - U_R(t)U_I^T(t)\big)i = I_m,$$
$$V_R(t)V_R^T(t) + V_I(t)V_I^T(t) + \big(V_I(t)V_R^T(t) - V_R(t)V_I^T(t)\big)i = I_n,$$
$$\sigma_{ij}(t) = 0,\ i\neq j.$$
Since the real and imaginary parts of both sides of the above equations are equal, we further have
$$U_R^T(t)C_R(t)V_R(t) + U_I^T(t)C_I(t)V_R(t) - U_R^T(t)C_I(t)V_I(t) + U_I^T(t)C_R(t)V_I(t) = \Sigma(t),$$
$$U_R^T(t)C_I(t)V_R(t) - U_I^T(t)C_R(t)V_R(t) + U_R^T(t)C_R(t)V_I(t) + U_I^T(t)C_I(t)V_I(t) = 0,$$
$$U_R(t)U_R^T(t) + U_I(t)U_I^T(t) = I_m,\quad U_I(t)U_R^T(t) - U_R(t)U_I^T(t) = 0,$$
$$V_R(t)V_R^T(t) + V_I(t)V_I^T(t) = I_n,\quad V_I(t)V_R^T(t) - V_R(t)V_I^T(t) = 0,$$
$$\sigma_{ij}(t) = 0,\ i\neq j. \qquad (2)$$
Evidently, the original problem (1) is a description in complex form and (2) is a description in real form. Therefore, we obtain the solution of (1) by solving (2). That is to say, if we acquire the solution of (2) for $t\in[t_0, t_f)\subseteq[0, +\infty)$, the SVD of $C(t)$ is achieved.
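As a numerical sanity check of the real-form system (2), one can take the SVD returned by a standard routine and verify each real equation. The following sketch is illustrative only; it uses numpy's `linalg.svd` as the baseline, not the proposed model.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 3
C = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

# Baseline decomposition: C = U @ diag(s) @ Vh, where Vh = V* (conjugate transpose).
U, s, Vh = np.linalg.svd(C)
V = Vh.conj().T
Sigma = np.diag(s)

CR, CI = C.real, C.imag
UR, UI = U.real, U.imag
VR, VI = V.real, V.imag

# First two equations of (2): the real part of U*(t)C(t)V(t) equals Sigma(t),
# and the imaginary part vanishes.
real_part = UR.T @ CR @ VR + UI.T @ CI @ VR - UR.T @ CI @ VI + UI.T @ CR @ VI
imag_part = UR.T @ CI @ VR - UI.T @ CR @ VR + UR.T @ CR @ VI + UI.T @ CI @ VI
assert np.allclose(real_part, Sigma)
assert np.allclose(imag_part, 0.0, atol=1e-12)

# Unitarity of U(t) and V(t) in real form, i.e., the remaining equations of (2).
assert np.allclose(UR @ UR.T + UI @ UI.T, np.eye(m))
assert np.allclose(UI @ UR.T - UR @ UI.T, 0.0, atol=1e-12)
assert np.allclose(VR @ VR.T + VI @ VI.T, np.eye(n))
assert np.allclose(VI @ VR.T - VR @ VI.T, 0.0, atol=1e-12)
```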

3. Dynamical Model and Algorithms

In this section, the CTSVD model is first obtained by the real ZN method and then a new 11-point ZeaD formula is proposed. By using the 11-point and other ZeaD formulas, five DTSVD algorithms are derived and investigated. Meanwhile, the corresponding theoretical analyses are provided to ensure the correctness of the proposed model and algorithms.

3.1. CTSVD Model and Theoretical Analyses

To facilitate description and comprehension, the following definitions and lemmas [8,21,22,27,28], including matrix vectorization, Kronecker product, vectorized transpose matrix, and dimensionality reduction technique, are presented.
Definition 1
(Vectorization of matrix [27,28]). The mathematical symbol vec(·) denotes the large column vector formed by concatenating all the columns of a matrix. If A is an $m\times n$ matrix, then the vectorization of matrix A is vec(A) $= [a_{11}\ a_{21}\ \cdots\ a_{m1}\ a_{12}\ a_{22}\ \cdots\ a_{mn}]^T$.
Definition 2
(Kronecker product [27,28]). $A\otimes B$ is the Kronecker product of A and B. If A is an $m\times n$ matrix and B is a $p\times q$ matrix, then $A\otimes B$ is an $mp\times nq$ matrix and equals the block matrix $[a_{11}B\ \cdots\ a_{1n}B;\ \cdots;\ a_{m1}B\ \cdots\ a_{mn}B]$.
Definition 3
(Vectorized transpose matrix [8,21,22,27,28]). For a transpose situation, $T_{mn}$ is the orthogonal $mn\times mn$ permutation matrix whose $(i,j)$th element is 1 if $j = 1 + m(i-1) - (mn-1)\,\mathrm{floor}\big((i-1)/n\big)$ and 0 otherwise.
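Definition 3 translates directly into code. The sketch below (a hypothetical helper, with the 1-based indices of the definition shifted to 0-based) builds $T_{mn}$ and checks its defining property vec$(A^T) = T_{mn}\,$vec$(A)$ together with its orthogonality.

```python
import numpy as np

def vec(A):
    # Column-stacking vectorization of Definition 1 (Fortran order).
    return A.flatten(order="F")

def T_perm(m, n):
    # Vectorized transpose matrix of Definition 3: entry (i, j) is 1
    # iff j = 1 + m(i-1) - (mn-1) * floor((i-1)/n), using 1-based i, j.
    mn = m * n
    T = np.zeros((mn, mn))
    for i in range(1, mn + 1):
        j = 1 + m * (i - 1) - (mn - 1) * ((i - 1) // n)
        T[i - 1, j - 1] = 1.0
    return T

m, n = 2, 3
A = np.arange(1.0, m * n + 1).reshape(m, n)
T = T_perm(m, n)
assert np.allclose(T @ vec(A), vec(A.T))      # defining property
assert np.allclose(T @ T.T, np.eye(m * n))    # T_mn is an orthogonal permutation
```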
Definition 4
([8,27]). The mathematical symbol diag(·) denotes the vector consisting of the diagonal elements of a matrix.
Lemma 1
([21,22]). With A, F, G, and H respectively representing matrices of sizes $m\times n$, $m\times i$, $i\times j$, and $j\times n$, the vectorized form of $A = FGH$ is vec(A) $= (H^T\otimes F)\,$vec(G).
Lemma 2
(Dimensionality reduction [8]). At any time instant $t\in[t_0,t_f)\subseteq[0,+\infty)$, an $mn\times\hat m$ constant matrix $\tilde I_{mn}$, with $\hat m = \min(m,n)$, is constructed for an $m\times n$ temporally variant diagonal matrix $D(t)$, such that vec$(D(t)) = \tilde I_{mn}\,$diag$(D(t))$ holds true.
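Both lemmas can be confirmed numerically. In the sketch below, `I_tilde` is an assumed construction of $\tilde I_{mn}$ (a zero-one matrix selecting the diagonal positions inside vec(·)); it is written for illustration and is not taken from [8].

```python
import numpy as np

rng = np.random.default_rng(2)
vec = lambda X: X.flatten(order="F")   # column-stacking vec(.)

# Lemma 1: vec(F G H) = (H^T kron F) vec(G).
F = rng.standard_normal((2, 3))
G = rng.standard_normal((3, 4))
H = rng.standard_normal((4, 5))
assert np.allclose(vec(F @ G @ H), np.kron(H.T, F) @ vec(G))

# Lemma 2: a constant selector I~_mn with vec(D) = I~_mn diag(D)
# for an m x n diagonal matrix D, where m_hat = min(m, n).
def I_tilde(m, n):
    m_hat = min(m, n)
    I = np.zeros((m * n, m_hat))
    for k in range(m_hat):
        I[k * m + k, k] = 1.0   # index of d_kk inside vec(D)
    return I

m, n = 4, 3
D = np.zeros((m, n))
d = rng.standard_normal(min(m, n))
np.fill_diagonal(D, d)
assert np.allclose(vec(D), I_tilde(m, n) @ d)
```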
In order to solve (2), based on the real ZN method, six error functions are defined, shown below:
$$W_1(t) = \Sigma(t) - U_R^T(t)C_R(t)V_R(t) - U_I^T(t)C_I(t)V_R(t) + U_R^T(t)C_I(t)V_I(t) - U_I^T(t)C_R(t)V_I(t),$$
$$W_2(t) = U_R^T(t)C_I(t)V_R(t) - U_I^T(t)C_R(t)V_R(t) + U_R^T(t)C_R(t)V_I(t) + U_I^T(t)C_I(t)V_I(t),$$
$$W_3(t) = I_m - U_R(t)U_R^T(t) - U_I(t)U_I^T(t),\quad W_4(t) = U_I(t)U_R^T(t) - U_R(t)U_I^T(t),$$
$$W_5(t) = I_n - V_R(t)V_R^T(t) - V_I(t)V_I^T(t),\quad W_6(t) = V_I(t)V_R^T(t) - V_R(t)V_I^T(t). \qquad (3)$$
Next, the linear design formula is applied to zero out $W_1$ through $W_6$. Specifically, by substituting the previous equations into the linear design formula [10,11,12,13], i.e., $\dot W_p(t) = -\vartheta_p W_p(t)$, with $\dot W_p(t)$ denoting the time derivative of $W_p(t)$ and $p = 1,2,3,4,5,6$, we obtain
$$-\vartheta_1 W_1 = \dot\Sigma - \dot U_R^T C_R V_R - U_R^T \dot C_R V_R - U_R^T C_R \dot V_R - \dot U_I^T C_I V_R - U_I^T \dot C_I V_R - U_I^T C_I \dot V_R + \dot U_R^T C_I V_I + U_R^T \dot C_I V_I + U_R^T C_I \dot V_I - \dot U_I^T C_R V_I - U_I^T \dot C_R V_I - U_I^T C_R \dot V_I,$$
$$-\vartheta_2 W_2 = \dot U_R^T C_I V_R + U_R^T \dot C_I V_R + U_R^T C_I \dot V_R - \dot U_I^T C_R V_R - U_I^T \dot C_R V_R - U_I^T C_R \dot V_R + \dot U_R^T C_R V_I + U_R^T \dot C_R V_I + U_R^T C_R \dot V_I + \dot U_I^T C_I V_I + U_I^T \dot C_I V_I + U_I^T C_I \dot V_I,$$
$$-\vartheta_3 W_3 = -\dot U_R U_R^T - U_R \dot U_R^T - \dot U_I U_I^T - U_I \dot U_I^T,$$
$$-\vartheta_4 W_4 = \dot U_I U_R^T + U_I \dot U_R^T - \dot U_R U_I^T - U_R \dot U_I^T,$$
$$-\vartheta_5 W_5 = -\dot V_R V_R^T - V_R \dot V_R^T - \dot V_I V_I^T - V_I \dot V_I^T,$$
$$-\vartheta_6 W_6 = \dot V_I V_R^T + V_I \dot V_R^T - \dot V_R V_I^T - V_R \dot V_I^T, \qquad (4)$$
where the time argument $(t)$ is omitted for brevity.
Let $\vartheta = \vartheta_p$ for $p = 1,2,3,4,5,6$; then, we have
$$\dot\Sigma + \dot U_R^T\big(C_I V_I - C_R V_R\big) - \dot U_I^T\big(C_I V_R + C_R V_I\big) - \big(U_R^T C_R + U_I^T C_I\big)\dot V_R + \big(U_R^T C_I - U_I^T C_R\big)\dot V_I = -\vartheta W_1 + U_R^T \dot C_R V_R + U_I^T \dot C_I V_R - U_R^T \dot C_I V_I + U_I^T \dot C_R V_I,$$
$$\dot U_R^T\big(C_I V_R + C_R V_I\big) + \dot U_I^T\big(C_I V_I - C_R V_R\big) + \big(U_R^T C_I - U_I^T C_R\big)\dot V_R + \big(U_R^T C_R + U_I^T C_I\big)\dot V_I = -\vartheta W_2 - U_R^T \dot C_I V_R + U_I^T \dot C_R V_R - U_R^T \dot C_R V_I - U_I^T \dot C_I V_I,$$
$$-\dot U_R U_R^T - U_R \dot U_R^T - \dot U_I U_I^T - U_I \dot U_I^T = -\vartheta W_3,$$
$$-\dot U_R U_I^T + U_I \dot U_R^T + \dot U_I U_R^T - U_R \dot U_I^T = -\vartheta W_4,$$
$$-\dot V_R V_R^T - V_R \dot V_R^T - \dot V_I V_I^T - V_I \dot V_I^T = -\vartheta W_5,$$
$$-\dot V_R V_I^T + V_I \dot V_R^T + \dot V_I V_R^T - V_R \dot V_I^T = -\vartheta W_6, \qquad (5)$$
where the time argument $(t)$ is omitted for brevity.
Then, by using matrix vectorization, the Kronecker product, the vectorized transpose matrix, and the results of Lemma 1, we let $M_0(t) = C_I(t)V_I(t) - C_R(t)V_R(t)$, $M_1(t) = C_I(t)V_R(t) + C_R(t)V_I(t)$, $M_2(t) = U_R^T(t)C_R(t) + U_I^T(t)C_I(t)$, $M_3(t) = U_R^T(t)C_I(t) - U_I^T(t)C_R(t)$, $M_4(t) = U_R(t)\otimes I_m$, $M_5(t) = I_m\otimes U_R(t)$, $M_6(t) = U_I(t)\otimes I_m$, $M_7(t) = I_m\otimes U_I(t)$, $M_8(t) = V_R(t)\otimes I_n$, $M_9(t) = I_n\otimes V_R(t)$, $M_{10}(t) = V_I(t)\otimes I_n$, and $M_{11}(t) = I_n\otimes V_I(t)$. Meanwhile, by applying the results of Lemma 2, we have vec$(\dot\Sigma(t)) = \tilde I_{mn}\,$diag$(\dot\Sigma(t))$. Therefore, the following matrix equation is derived:
$$-\vartheta\begin{bmatrix} \mathrm{vec}(\hat W_1(t)) \\ \mathrm{vec}(\hat W_2(t)) \\ \mathrm{vec}(W_3(t)) \\ \mathrm{vec}(W_4(t)) \\ \mathrm{vec}(W_5(t)) \\ \mathrm{vec}(W_6(t)) \end{bmatrix} = \begin{bmatrix} \tilde I_{mn} & M_{12}(t) & -M_{13}(t) & -M_{14}(t) & M_{15}(t) \\ M_{21}(t) & M_{13}(t) & M_{12}(t) & M_{15}(t) & M_{14}(t) \\ M_{31}(t) & M_{32}(t) & M_{33}(t) & M_{34}(t) & M_{34}(t) \\ M_{31}(t) & M_{42}(t) & M_{43}(t) & M_{34}(t) & M_{34}(t) \\ M_{51}(t) & M_{52}(t) & M_{52}(t) & M_{54}(t) & M_{55}(t) \\ M_{51}(t) & M_{52}(t) & M_{52}(t) & M_{64}(t) & M_{65}(t) \end{bmatrix}\begin{bmatrix} \mathrm{diag}(\dot\Sigma(t)) \\ \mathrm{vec}(\dot U_R(t)) \\ \mathrm{vec}(\dot U_I(t)) \\ \mathrm{vec}(\dot V_R^T(t)) \\ \mathrm{vec}(\dot V_I^T(t)) \end{bmatrix}, \qquad (6)$$
in which
$$\hat W_1(t) = W_1(t) - \big(U_R^T(t)\dot C_R(t)V_R(t) + U_I^T(t)\dot C_I(t)V_R(t) - U_R^T(t)\dot C_I(t)V_I(t) + U_I^T(t)\dot C_R(t)V_I(t)\big)/\vartheta,$$
$$\hat W_2(t) = W_2(t) + \big(U_R^T(t)\dot C_I(t)V_R(t) - U_I^T(t)\dot C_R(t)V_R(t) + U_R^T(t)\dot C_R(t)V_I(t) + U_I^T(t)\dot C_I(t)V_I(t)\big)/\vartheta,$$
and $M_{12}(t) = (M_0^T(t)\otimes I_m)T_{mm}$, $M_{13}(t) = (M_1^T(t)\otimes I_m)T_{mm}$, $M_{14}(t) = (I_n\otimes M_2(t))T_{nn}$, $M_{15}(t) = (I_n\otimes M_3(t))T_{nn}$, $M_{21}(t) = 0_{mn\times\hat m}$, $M_{31}(t) = 0_{m^2\times\hat m}$, $M_{32}(t) = -(M_4(t) + M_5(t)T_{mm})$, $M_{33}(t) = -(M_6(t) + M_7(t)T_{mm})$, $M_{34}(t) = 0_{m^2\times n^2}$, $M_{42}(t) = M_7(t)T_{mm} - M_6(t)$, $M_{43}(t) = M_4(t) - M_5(t)T_{mm}$, $M_{51}(t) = 0_{n^2\times\hat m}$, $M_{52}(t) = 0_{n^2\times m^2}$, $M_{54}(t) = -(M_8(t)T_{nn} + M_9(t))$, $M_{55}(t) = -(M_{10}(t)T_{nn} + M_{11}(t))$, $M_{64}(t) = M_{11}(t) - M_{10}(t)T_{nn}$, and $M_{65}(t) = M_8(t)T_{nn} - M_9(t)$, with $\hat m = \min(m,n)$. Further, let
$$\dot s(t) = \begin{bmatrix} \mathrm{diag}(\dot\Sigma(t)) \\ \mathrm{vec}(\dot U_R(t)) \\ \mathrm{vec}(\dot U_I(t)) \\ \mathrm{vec}(\dot V_R^T(t)) \\ \mathrm{vec}(\dot V_I^T(t)) \end{bmatrix},$$
$$w(t) = \begin{bmatrix} \mathrm{vec}(\hat W_1(t)) \\ \mathrm{vec}(\hat W_2(t)) \\ \mathrm{vec}(W_3(t)) \\ \mathrm{vec}(W_4(t)) \\ \mathrm{vec}(W_5(t)) \\ \mathrm{vec}(W_6(t)) \end{bmatrix},$$
and
$$M(t) = \begin{bmatrix} \tilde I_{mn} & M_{12}(t) & -M_{13}(t) & -M_{14}(t) & M_{15}(t) \\ M_{21}(t) & M_{13}(t) & M_{12}(t) & M_{15}(t) & M_{14}(t) \\ M_{31}(t) & M_{32}(t) & M_{33}(t) & M_{34}(t) & M_{34}(t) \\ M_{31}(t) & M_{42}(t) & M_{43}(t) & M_{34}(t) & M_{34}(t) \\ M_{51}(t) & M_{52}(t) & M_{52}(t) & M_{54}(t) & M_{55}(t) \\ M_{51}(t) & M_{52}(t) & M_{52}(t) & M_{64}(t) & M_{65}(t) \end{bmatrix}.$$
Thus, the CTSVD model is obtained as
$$\dot s(t) = -\vartheta M^{+}(t)w(t), \qquad (7)$$
where superscript $+$ denotes the pseudoinverse of a matrix; $\dot s(t)\in\mathbb{R}^{(\hat m + 2m^2 + 2n^2)\times 1}$, $w(t)\in\mathbb{R}^{2(m^2+n^2+mn)\times 1}$, and $M(t)\in\mathbb{R}^{2(m^2+n^2+mn)\times(\hat m + 2m^2 + 2n^2)}$, in which $\hat m = m$ if $m\leq n$, and $\hat m = n$ otherwise.
Theorem 1.
With design parameter $\vartheta > 0$, as time $t\to+\infty$, for a smooth temporally variant matrix $C(t)\in\mathbb{C}^{m\times n}$, the solution of the dynamical model, i.e., the CTSVD model (7), starting from a random initial value, converges to the theoretical solution of (2).
Proof. 
Firstly, in order to obtain the solution of (2), by using the real ZN method, six error functions are defined, shown in (3). Next, the linear design formula is applied to zero out the values of these error functions, i.e., $W_1$ through $W_6$. Note that the theoretical solution of the linear design formula $\dot W(t) = -\vartheta W(t)$ is $W(t) = \exp(-\vartheta t)W(0)$, with $W(0)$ denoting the initial value of $W(t)$. Therefore, when $\vartheta > 0$ and time $t\to+\infty$, we have $W(t)\to 0$. Specifically, by employing the Laplace transformation for the $(i,j)$th element of $W(t)$, with initial condition $w_{ij}(0)$, we have
$$s\,w_{ij}(s) - w_{ij}(0) = -\vartheta w_{ij}(s),$$
i.e.,
$$w_{ij}(s) = \frac{w_{ij}(0)}{s + \vartheta}.$$
By using the final value theorem of Laplace transformation, we further have
$$\lim_{t\to+\infty} w_{ij}(t) = \lim_{s\to 0} s\,w_{ij}(s) = \lim_{s\to 0} \frac{s\,w_{ij}(0)}{s+\vartheta} = 0.$$
Evidently, the solution of (4) satisfies (2) when $\vartheta > 0$ and time $t\to+\infty$. We let $\vartheta = \vartheta_p$ with $p = 1,2,3,4,5,6$, and (5) is obtained. Then, by using matrix vectorization, the Kronecker product, the vectorized transpose matrix, the dimensionality reduction technique, and the results of Lemma 1, (6) is further obtained. Meanwhile, in (6), by using the results of Lemma 2, we zero out the non-diagonal elements of $\Sigma(t)$, i.e., $\sigma_{ij}(t) = 0$ for $i\neq j$. Thus, the solution of (6) satisfies (2). Additionally, (7) is another form of (6). The proof is therefore completed. □
Note that the larger the value of parameter ϑ is, the faster the convergence speed of the CTSVD model becomes [29]. However, the value of ϑ cannot be infinitely large, as it depends on the hardware limitations of the digital system. In practical applications, ϑ can be adjusted according to specific requirements. In the process of SVD, every element on the diagonal of Σ ( t ) must be non-negative. Therefore, if Σ ( t ) has negative elements on the diagonal, we change the sign of the corresponding columns of U ( t ) or rows of V * ( t ) , such that Σ ( t ) only has non-negative elements [7,8].
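This sign-fixing step can be sketched as follows; a minimal numpy illustration in which `fix_signs` is a hypothetical helper (not code from the paper), applied to a decomposition with a deliberately negated singular value.

```python
import numpy as np

def fix_signs(U, Sigma, V):
    """Flip signs so that every diagonal entry of Sigma is non-negative,
    leaving the product U @ Sigma @ V.conj().T unchanged."""
    U, Sigma = U.copy(), Sigma.copy()
    for i in range(min(Sigma.shape)):
        if Sigma[i, i] < 0:
            Sigma[i, :] = -Sigma[i, :]   # flip row i of Sigma ...
            U[:, i] = -U[:, i]           # ... and column i of U
    return U, Sigma, V

rng = np.random.default_rng(3)
C = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
U, s, Vh = np.linalg.svd(C)
V = Vh.conj().T
Sigma = np.diag(s)
# Introduce a sign defect, as may arise from a random initial state.
U[:, 1], Sigma[1, 1] = -U[:, 1], -Sigma[1, 1]

U2, Sigma2, V2 = fix_signs(U, Sigma, V)
assert np.all(np.diag(Sigma2) >= 0)                 # non-negative diagonal
assert np.allclose(U2 @ Sigma2 @ V2.conj().T, C)    # product preserved
```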

3.2. DTSVD Algorithms and Theoretical Analyses

In this section, a new 11-point ZeaD formula is proposed and studied. With the use of the 11-point and other ZeaD formulas, five DTSVD algorithms are further presented and investigated.

3.2.1. 11-Point and Other ZeaD Formulas

The 11-point ZeaD formula is presented in Theorem 2.
Theorem 2.
With $\doteq$ denoting the computational assignment operation, $f_{k+i}$ denoting $f((k+i)\tau)$, and $\tau\in(0,1)$ s denoting the sufficiently small sampling gap, the 11-point ZeaD formula is expressed as
$$\dot f_k \doteq \Big(\frac{140}{333}f_{k+1} + \frac{14921}{279720}f_k - \frac{42}{185}f_{k-1} - \frac{28}{111}f_{k-2} - \frac{14}{111}f_{k-3} + \frac{28}{555}f_{k-4} + \frac{56}{555}f_{k-5} + \frac{98}{1665}f_{k-6} - \frac{62}{777}f_{k-7} - \frac{49}{2664}f_{k-8} + \frac{98}{4995}f_{k-9}\Big)\Big/\tau, \qquad (8)$$
and its truncation error is $O(\tau^5)$.
Proof. 
By applying Taylor expansion [30,31], the following ten equations are derived:
$$f_{k+1} = u_1 + u_2\tau + u_3\tau^2 + u_4\tau^3 + u_5\tau^4 + u_6\tau^5 + u_{70}\tau^6, \qquad (9)$$
$$f_{k-1} = u_1 - u_2\tau + u_3\tau^2 - u_4\tau^3 + u_5\tau^4 - u_6\tau^5 + u_{71}\tau^6, \qquad (10)$$
$$f_{k-2} = u_1 - 2u_2\tau + u_3(2\tau)^2 - u_4(2\tau)^3 + u_5(2\tau)^4 - u_6(2\tau)^5 + u_{72}(2\tau)^6, \qquad (11)$$
$$f_{k-3} = u_1 - 3u_2\tau + u_3(3\tau)^2 - u_4(3\tau)^3 + u_5(3\tau)^4 - u_6(3\tau)^5 + u_{73}(3\tau)^6, \qquad (12)$$
$$f_{k-4} = u_1 - 4u_2\tau + u_3(4\tau)^2 - u_4(4\tau)^3 + u_5(4\tau)^4 - u_6(4\tau)^5 + u_{74}(4\tau)^6, \qquad (13)$$
$$f_{k-5} = u_1 - 5u_2\tau + u_3(5\tau)^2 - u_4(5\tau)^3 + u_5(5\tau)^4 - u_6(5\tau)^5 + u_{75}(5\tau)^6, \qquad (14)$$
$$f_{k-6} = u_1 - 6u_2\tau + u_3(6\tau)^2 - u_4(6\tau)^3 + u_5(6\tau)^4 - u_6(6\tau)^5 + u_{76}(6\tau)^6, \qquad (15)$$
$$f_{k-7} = u_1 - 7u_2\tau + u_3(7\tau)^2 - u_4(7\tau)^3 + u_5(7\tau)^4 - u_6(7\tau)^5 + u_{77}(7\tau)^6, \qquad (16)$$
$$f_{k-8} = u_1 - 8u_2\tau + u_3(8\tau)^2 - u_4(8\tau)^3 + u_5(8\tau)^4 - u_6(8\tau)^5 + u_{78}(8\tau)^6, \qquad (17)$$
and
$$f_{k-9} = u_1 - 9u_2\tau + u_3(9\tau)^2 - u_4(9\tau)^3 + u_5(9\tau)^4 - u_6(9\tau)^5 + u_{79}(9\tau)^6, \qquad (18)$$
therein, $u_1 = f(k\tau)$, $u_2 = \dot f(k\tau)$, $u_3 = \ddot f(k\tau)/2!$, $u_4 = \dddot f(k\tau)/3!$, $u_5 = f^{(4)}(k\tau)/4!$, $u_6 = f^{(5)}(k\tau)/5!$, and $u_{7q} = f^{(6)}(\beta_q)/6!$ with $q = 0,1,2,\ldots,9$; $\dot f$, $\ddot f$, $\dddot f$, $f^{(4)}$, $f^{(5)}$, and $f^{(6)}$ are the first- through sixth-order derivatives, separately; symbol $!$ is the factorial operator; $\beta_q$ correspondingly lies in the intervals $(k\tau,(k+1)\tau)$, $((k-1)\tau,k\tau)$, $((k-2)\tau,k\tau)$, $((k-3)\tau,k\tau)$, $((k-4)\tau,k\tau)$, $((k-5)\tau,k\tau)$, $((k-6)\tau,k\tau)$, $((k-7)\tau,k\tau)$, $((k-8)\tau,k\tau)$, and $((k-9)\tau,k\tau)$. Let (9), (10), (11), (12), (13), (14), (15), (16), (17), and (18) be separately multiplied by 1, −27/50, −3/5, −3/10, 3/25, 6/25, 7/50, −93/490, −7/160, and 7/150. By adding them together, we have
$$\dot f_k = \Big(\frac{140}{333}f_{k+1} + \frac{14921}{279720}f_k - \frac{42}{185}f_{k-1} - \frac{28}{111}f_{k-2} - \frac{14}{111}f_{k-3} + \frac{28}{555}f_{k-4} + \frac{56}{555}f_{k-5} + \frac{98}{1665}f_{k-6} - \frac{62}{777}f_{k-7} - \frac{49}{2664}f_{k-8} + \frac{98}{4995}f_{k-9}\Big)\Big/\tau - \frac{140}{333}\tau^5\Big(\frac{1}{720}f^{(6)}(\beta_0) - \frac{3}{4000}f^{(6)}(\beta_1) - \frac{4}{75}f^{(6)}(\beta_2) - \frac{243}{800}f^{(6)}(\beta_3) + \frac{256}{375}f^{(6)}(\beta_4) + \frac{125}{24}f^{(6)}(\beta_5) + \frac{1134}{125}f^{(6)}(\beta_6) - \frac{4807}{155}f^{(6)}(\beta_7) - \frac{3584}{225}f^{(6)}(\beta_8) + \frac{4719}{137}f^{(6)}(\beta_9)\Big). \qquad (19)$$
Specifically, the error term of (19) is
$$-\frac{140}{333}\tau^5\Big(\frac{1}{720}f^{(6)}(\beta_0) - \frac{3}{4000}f^{(6)}(\beta_1) - \frac{4}{75}f^{(6)}(\beta_2) - \frac{243}{800}f^{(6)}(\beta_3) + \frac{256}{375}f^{(6)}(\beta_4) + \frac{125}{24}f^{(6)}(\beta_5) + \frac{1134}{125}f^{(6)}(\beta_6) - \frac{4807}{155}f^{(6)}(\beta_7) - \frac{3584}{225}f^{(6)}(\beta_8) + \frac{4719}{137}f^{(6)}(\beta_9)\Big).$$
Then, (19) can be further rewritten as
$$\dot f_k = \Big(\frac{140}{333}f_{k+1} + \frac{14921}{279720}f_k - \frac{42}{185}f_{k-1} - \frac{28}{111}f_{k-2} - \frac{14}{111}f_{k-3} + \frac{28}{555}f_{k-4} + \frac{56}{555}f_{k-5} + \frac{98}{1665}f_{k-6} - \frac{62}{777}f_{k-7} - \frac{49}{2664}f_{k-8} + \frac{98}{4995}f_{k-9}\Big)\Big/\tau + O(\tau^5).$$
Thus, (8) is obtained as
$$\dot f_k \doteq \Big(\frac{140}{333}f_{k+1} + \frac{14921}{279720}f_k - \frac{42}{185}f_{k-1} - \frac{28}{111}f_{k-2} - \frac{14}{111}f_{k-3} + \frac{28}{555}f_{k-4} + \frac{56}{555}f_{k-5} + \frac{98}{1665}f_{k-6} - \frac{62}{777}f_{k-7} - \frac{49}{2664}f_{k-8} + \frac{98}{4995}f_{k-9}\Big)\Big/\tau,$$
and its truncation error is $O(\tau^5)$. The proof is therefore completed. □
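The truncation order stated in Theorem 2 can be double-checked with exact rational arithmetic: by the Taylor expansions (9) through (18), the coefficients $c_i$ of formula (8) must satisfy $\sum_i c_i = 0$, $\sum_i c_i\,i = 1$, and $\sum_i c_i\,i^p = 0$ for $p = 2,\ldots,5$, while the sixth moment must not vanish. A short illustrative check using Python's `fractions` module:

```python
from fractions import Fraction as Fr

# Coefficients of the 11-point ZeaD formula (8): f'_k ~ (sum_i c_i f_{k+i}) / tau.
c = {
     1: Fr(140, 333),   0: Fr(14921, 279720),
    -1: Fr(-42, 185),  -2: Fr(-28, 111),  -3: Fr(-14, 111),
    -4: Fr(28, 555),   -5: Fr(56, 555),   -6: Fr(98, 1665),
    -7: Fr(-62, 777),  -8: Fr(-49, 2664), -9: Fr(98, 4995),
}

# Moment conditions: sum c_i = 0, sum c_i * i = 1, sum c_i * i^p = 0 for p = 2..5;
# the surviving sixth moment yields the O(tau^5) truncation error.
moments = [sum(ci * Fr(i) ** p for i, ci in c.items()) for p in range(7)]
assert moments[0] == 0
assert moments[1] == 1
assert all(mp == 0 for mp in moments[2:6])
assert moments[6] != 0   # sixth moment survives: the precision is exactly O(tau^5)
```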
It is worth noting that, compared with other ZeaD formulas, the 11-point ZeaD formula proposed in this paper is the ZeaD formula with the highest computational precision known to date. Therefore, we chose this formula as the main research object of this study. For the purpose of comparison, four ZeaD formulas [23,24,26] with different numbers of points and different precisions, i.e., the two-, four-, six-, and eight-point ZeaD formulas, were also adopted; they are listed below:
$$\dot f_k = \big(f_{k+1} - f_k\big)/\tau + O(\tau), \qquad (20)$$
$$\dot f_k = \Big(\frac{7}{10}f_{k+1} - \frac{3}{5}f_k + \frac{1}{10}f_{k-1} - \frac{1}{5}f_{k-2}\Big)\Big/\tau + O(\tau^2), \qquad (21)$$
$$\dot f_k = \Big(\frac{1}{2}f_{k+1} - \frac{5}{48}f_k - \frac{1}{4}f_{k-1} - \frac{1}{8}f_{k-2} - \frac{1}{12}f_{k-3} + \frac{1}{16}f_{k-4}\Big)\Big/\tau + O(\tau^3), \qquad (22)$$
and
$$\dot f_k = \Big(\frac{100}{217}f_{k+1} - \frac{671}{13020}f_k - \frac{40}{217}f_{k-1} - \frac{40}{217}f_{k-2} - \frac{40}{217}f_{k-3} + \frac{85}{868}f_{k-4} + \frac{108}{1085}f_{k-5} - \frac{5}{93}f_{k-6}\Big)\Big/\tau + O(\tau^4). \qquad (23)$$
Note that (21) and (23) are acquired by setting parameters $a_1 = 1/5$ and $\eta_1 = \eta_2 = \eta_3 = 2/5$ in references [23,26], respectively; the Euler forward formula (20) (also termed the two-point ZeaD formula) is seen as the first and also the simplest ZeaD formula.

3.2.2. DTSVD Algorithms

By using (20), (21), (22), (23), and (8) to discretize the previous dynamical model, i.e., the CTSVD model (7), the DTSVD-1, DTSVD-2, DTSVD-3, DTSVD-4, and DTSVD-5 algorithms are respectively acquired:
$$s_{k+1} = \hat s_k + s_k + O(\tau^2), \qquad (24)$$
$$s_{k+1} = \frac{10}{7}\hat s_k + \frac{6}{7}s_k - \frac{1}{7}s_{k-1} + \frac{2}{7}s_{k-2} + O(\tau^3), \qquad (25)$$
$$s_{k+1} = 2\hat s_k + \frac{5}{24}s_k + \frac{1}{2}s_{k-1} + \frac{1}{4}s_{k-2} + \frac{1}{6}s_{k-3} - \frac{1}{8}s_{k-4} + O(\tau^4), \qquad (26)$$
$$s_{k+1} = \frac{217}{100}\hat s_k + \frac{671}{6000}s_k + \frac{2}{5}s_{k-1} + \frac{2}{5}s_{k-2} + \frac{2}{5}s_{k-3} - \frac{17}{80}s_{k-4} - \frac{27}{125}s_{k-5} + \frac{7}{60}s_{k-6} + O(\tau^5), \qquad (27)$$
and
$$s_{k+1} = \frac{333}{140}\hat s_k - \frac{14921}{117600}s_k + \frac{27}{50}s_{k-1} + \frac{3}{5}s_{k-2} + \frac{3}{10}s_{k-3} - \frac{3}{25}s_{k-4} - \frac{6}{25}s_{k-5} - \frac{7}{50}s_{k-6} + \frac{93}{490}s_{k-7} + \frac{7}{160}s_{k-8} - \frac{7}{150}s_{k-9} + O(\tau^6), \qquad (28)$$
therein, step size $h = \tau\vartheta$, $\hat s_k = -hM_k^{+}w_k$, $M_k^{+} = M^{+}(t_k)$, $w_k = w(t_k)$, and $t_k = k\tau$.
Moreover, the correctness and precision of the previous algorithms are guaranteed by the following theorem.
Theorem 3.
With sufficiently small $\tau\in(0,1)$ s, let $O(\tau^l)$ represent a vector with every entry being $O(\tau^l)$, $l\in\mathbb{Z}^{+}$. The DTSVD algorithms (24), (25), (26), (27), and (28) are 0-stable, consistent, and convergent, and they converge with the orders of their truncation errors, i.e., $O(\tau^2)$, $O(\tau^3)$, $O(\tau^4)$, $O(\tau^5)$, and $O(\tau^6)$, separately.
Proof. 
With $\|\cdot\|_F$ denoting the Frobenius norm of a matrix, let us consider the proof of (28) first. The corresponding characteristic polynomial [30,32] of (28) is expressed as below:
$$P_{10}(\varsigma) = \varsigma^{10} + \frac{14921}{117600}\varsigma^{9} - \frac{27}{50}\varsigma^{8} - \frac{3}{5}\varsigma^{7} - \frac{3}{10}\varsigma^{6} + \frac{3}{25}\varsigma^{5} + \frac{6}{25}\varsigma^{4} + \frac{7}{50}\varsigma^{3} - \frac{93}{490}\varsigma^{2} - \frac{7}{160}\varsigma + \frac{7}{150}.$$
The ten roots of the above polynomial (retaining six significant digits after the decimal point) are
$$\varsigma_1 = 1.000000,\quad \varsigma_2 = 0.530619,\quad \varsigma_3 = -0.571892,\quad \varsigma_4 = -0.810609,$$
$$\varsigma_5 = -0.066253 + 0.858405i,\quad \varsigma_6 = -0.066253 - 0.858405i,$$
$$\varsigma_7 = -0.586450 + 0.606005i,\quad \varsigma_8 = -0.586450 - 0.606005i,$$
$$\varsigma_9 = 0.515204 + 0.307328i,\quad \varsigma_{10} = 0.515204 - 0.307328i,$$
and the corresponding moduli of them are
$$|\varsigma_1| = 1.000000,\quad |\varsigma_2| = 0.530619,\quad |\varsigma_3| = 0.571892,\quad |\varsigma_4| = 0.810609,$$
$$|\varsigma_5| = |\varsigma_6| = 0.860958,\quad |\varsigma_7| = |\varsigma_8| = 0.843307,\quad |\varsigma_9| = |\varsigma_{10}| = 0.599905,$$
where $i$ is the pure imaginary unit and $|\cdot|$ is the modulus of a number. Since the moduli of these roots are all less than or equal to one, and the only root with modulus one is simple, according to Result 1 in Appendix A, the DTSVD algorithm (28) is 0-stable. Evidently, (28) has a truncation error of $O(\tau^6)$. Thus, from Results 2 through 4 in Appendix A, we know that (28) is consistent and convergent, and it converges with the order of its truncation error (i.e., $O(\tau^6)$). The proofs of (24), (25), (26), and (27) are omitted since they are similar to that of (28). The proof is therefore completed. □
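The root computation above is easy to reproduce; the sketch below checks the 0-stability of (28) numerically with numpy's `roots` (illustrative only):

```python
import numpy as np

# Characteristic polynomial P_10 of the DTSVD-5 algorithm (28), highest power first.
coeffs = [1.0, 14921/117600, -27/50, -3/5, -3/10, 3/25, 6/25, 7/50,
          -93/490, -7/160, 7/150]
roots = np.roots(coeffs)
moduli = np.sort(np.abs(roots))

# 0-stability: all roots in the closed unit disk, the modulus-one root simple.
assert np.isclose(moduli[-1], 1.0)          # the simple root at 1
assert moduli[-2] < 1.0                     # every other root strictly inside
assert abs(moduli[-2] - 0.860958) < 1e-3    # agrees with the listed moduli
assert np.any(np.isclose(roots, 1.0))
```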
Furthermore, in order to facilitate readers’ understanding and programming implementation, the pseudocode of the DTSVD-5 algorithm (28) is presented as follows.
Pseudocode of the DTSVD-5 algorithm (28).
Input: $C_R(t)$ and $C_I(t)$
1: Set task duration $t_f$, sampling gap $\tau$, design parameter $\vartheta$, and step size $h = \tau\vartheta$; generate a random initial state (i.e., initial values of $\Sigma$, $U_R$, $U_I$, $V_R$, and $V_I$).
2: For $k = 0, 1, \ldots, \lceil t_f/\tau\rceil - 1$
3:   Compute $t_k = k\tau$, $M_k^{+} = M^{+}(t_k)$, $w_k = w(t_k)$, and $\hat s_k = -hM_k^{+}w_k$.
4:   If $k + 1 < 10$
5:     Compute $s_{k+1}$ via the Euler backward formula.
6:   Else
7:     Compute $s_{k+1}$ via (28).
8:   End
Output: $\Sigma(t_k)$, $U_R(t_k)$, $U_I(t_k)$, $V_R(t_k)$, and $V_I(t_k)$.
Since the pseudocodes of the DTSVD-1 (24), -2 (25), -3 (26), and -4 (27) algorithms are similar to that of the DTSVD-5 (28) algorithm, they are omitted for brevity.
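To illustrate the structure shared by algorithms (24) through (28), i.e., computing the model-driven increment $\hat s_k$ and combining it with stored history, the toy below applies the Euler-based DTSVD-1-style update to a scalar temporally variant problem: tracking $x(t) = 1/a(t)$ by zeroing $w(t) = a(t)x(t) - 1$ with the ZN design formula. This is a hypothetical example for intuition only, not the CTSVD model.

```python
import numpy as np

# ZN design: w' = -theta * w with w(t) = a(t) x(t) - 1, which gives
# x' = (-theta * (a x - 1) - a' x) / a; the step x_{k+1} = x_k + tau * x'_k
# is the scalar analogue of the DTSVD-1 update (24).
tau, theta = 0.001, 100.0        # sampling gap and design parameter
h = tau * theta                  # step size h = tau * theta = 0.1
a = lambda t: 2.0 + np.sin(t)    # temporally variant coefficient (a toy choice)
a_dot = lambda t: np.cos(t)

x = 0.1                          # arbitrary initial state
steps = 20000                    # 20 s task duration
for k in range(steps):
    t = k * tau
    x_dot = (-theta * (a(t) * x - 1.0) - a_dot(t) * x) / a(t)
    x += tau * x_dot             # analogue of hat_s_k = tau * x'_k

residual = abs(a(steps * tau) * x - 1.0)
assert residual < 1e-3           # the residual error settles near zero
```

The residual shrinks further as the sampling gap $\tau$ decreases, mirroring the truncation-order behavior stated in Theorem 3.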

4. Numerical Experiments

In this section, numerical experiments on three examples are described to substantiate the precision and performance of the proposed dynamical model and algorithms. Additionally, the task duration for all numerical experiments was uniformly set as $t_f = 20$ s, and the initial values were randomly generated from the interval $(-1,1)$. Moreover, we observed the residual errors $\|W_p(t)\|_F$ as well as $\|W_p(t_k)\|_F$, $p = 1,2,3,4,5,6$, to measure the computational accuracy of the proposed model and algorithms, respectively. Meanwhile, in order to more intuitively reflect the overall accuracy of complex, temporally variant SVD, the residual errors of the original problem (1), i.e., $\|C(t) - U(t)\Sigma(t)V^{*}(t)\|_F$ as well as $\|C(t_k) - U(t_k)\Sigma(t_k)V^{*}(t_k)\|_F$, were also computed and displayed.
Note that the DTSVD-1 (24), -2 (25), -3 (26), -4 (27), and -5 (28) algorithms separately require one, three, five, seven, and ten initial values before they can be used. The first initial value was generated randomly, and, except for the DTSVD-1 (24) algorithm (which needs only this one initial value), the remaining initial values were computed by the Euler method.

4.1. Example Description

Three examples to be discussed are given below.
Example 1.
$$C_R(t) = \begin{bmatrix} \sin(t) & \cos(t) & \sin(t) \\ \sin(t) & 5\sin(t) & \cos(t) \\ \cos(t) & \sin(t) & 3\sin(t) \end{bmatrix},$$
$$C_I(t) = \begin{bmatrix} \cos(t)+2 & \sin(t) & \cos(t) \\ \sin(t) & \cos(t)+4 & \sin(t) \\ \sin(t) & \cos(t) & \sin(t)+1 \end{bmatrix}.$$
Example 2.
$$C_R(t) = \begin{bmatrix} 3\sin(t) & \cos(t) & \sin(t) \\ \cos(t) & 9\cos(t) & \cos(t) \\ \cos(t) & \sin(t) & 1-\sin(t) \\ \sin(t) & \sin(t) & \cos(t) \end{bmatrix},$$
$$C_I(t) = \begin{bmatrix} \sin(t) & \cos(t) & \cos(t) \\ \sin(t) & \sin(t)+8 & \sin(t) \\ \sin(t) & \cos(t) & \cos(t)+2 \\ \cos(t) & \sin(t) & \sin(t) \end{bmatrix}.$$
Example 3.
$$C_R(t) = \begin{bmatrix} 3\sin(t) & \cos(t) & \cos(t) & \sin(t) \\ \cos(t) & 9\cos(t) & \sin(t) & \sin(t) \\ \sin(t) & \cos(t) & 1-\sin(t) & \cos(t) \end{bmatrix},$$
$$C_I(t) = \begin{bmatrix} \sin(t) & \sin(t) & \sin(t) & \cos(t) \\ \cos(t) & \sin(t)+8 & \cos(t) & \sin(t) \\ \cos(t) & \sin(t) & \cos(t)+2 & \sin(t) \end{bmatrix}.$$
Evidently, by using the formula $C(t) = C_R(t) + C_I(t)i$, the original form of $C(t)$ is obtained easily. Additionally, the corresponding dimensions of $C(t)$ in these three examples are $3\times 3$, $4\times 3$, and $3\times 4$, respectively.
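For reference, Example 1 can be transcribed into numpy (assuming the matrix entries as reconstructed above) and its exact singular value trajectories obtained from a standard SVD routine; such values serve as the baseline against which the CTSVD/DTSVD outputs are compared.

```python
import numpy as np

def C_example1(t):
    # C(t) = C_R(t) + C_I(t) i of Example 1 (entries as given in the text).
    s, c = np.sin(t), np.cos(t)
    CR = np.array([[s,     c,     s],
                   [s,     5 * s, c],
                   [c,     s,     3 * s]])
    CI = np.array([[c + 2, s,     c],
                   [s,     c + 4, s],
                   [s,     c,     s + 1]])
    return CR + 1j * CI

# Baseline trajectories of sigma_11(t), sigma_22(t), sigma_33(t) over [0, 20] s.
for t in np.linspace(0.0, 20.0, 9):
    sv = np.linalg.svd(C_example1(t), compute_uv=False)
    assert sv.shape == (3,) and np.all(sv >= 0)   # non-negative singular values
    assert np.all(np.diff(sv) <= 0)               # returned in descending order
```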

4.2. Experimental Results of the CTSVD Model

The ODE (ordinary differential equation) solver [31] in MATLAB R2024a was first applied to execute the CTSVD model (7). Specifically, we used the ODE function "ode45" in this research, and the solving results are displayed in Figures 1-9. Note that in Figures 1-9, the solution trajectories and residual error trajectories corresponding to different initial values are represented in different colors.
The solution trajectories of $\Sigma(t)$ synthesized by the CTSVD model (7) with three random initial values and $\vartheta = 10$ corresponding to Example 1 are shown in Figure 1. It is worth noting that $\Sigma(t)$ was a real-valued diagonal matrix; that is, its non-diagonal elements were zero, and the imaginary parts of its diagonal elements were also zero. Therefore, in the three subgraphs (a), (b), and (c) of Figure 1, only the real part trajectories of the diagonal elements of $\Sigma(t)$, i.e., $\sigma_{11}(t)$, $\sigma_{22}(t)$, and $\sigma_{33}(t)$, are shown.
In addition, the solution trajectories of U R ( t ) , U I ( t ) , V R ( t ) , and V I ( t ) synthesized by the CTSVD model (7) with three random initial values and ϑ = 10 for Example 1 are shown in Figures 2–5. Evidently, from these figures, we can see that the real and imaginary parts of all elements of U ( t ) and V ( t ) changed over time; that is, they were temporally variant. Due to page limitations and the similarity of the results, the solution trajectories synthesized by the CTSVD model (7) for Examples 2 and 3 are omitted.
From Figure 6, we see that the residual errors of the CTSVD model (7) for Example 1, with three random initial values and ϑ = 10 , converged to near-zero within one second. Specifically, it can be seen from subgraphs (a)–(f) of Figure 6 that the residual errors ‖ W 1 ( t ) ‖ F through ‖ W 6 ( t ) ‖ F converged rapidly to near-zero, which means that the error functions W 1 ( t ) through W 6 ( t ) quickly became near-zero. This indicates that the linear design formula successfully zeroed out the error functions, and it simultaneously verifies the correctness of Theorem 1.
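The sub-one-second convergence is what the linear ZN design formula predicts: each error function is driven by a dynamic of the standard form dW/dt = −ϑ W, so ‖W(t)‖ decays like e^(−ϑt); with ϑ = 10, the error shrinks by a factor of e^(−10) ≈ 4.5 × 10^-5 within one second. A scalar forward-Euler sketch in Python (the initial value 2.5 and step size are arbitrary choices for illustration):

```python
import math

theta, tau = 10.0, 1e-4      # ZN design parameter (as in the experiments), small step
w = 2.5                      # arbitrary scalar initial error value
for _ in range(int(1.0 / tau)):
    w += tau * (-theta * w)  # forward-Euler step of the linear design formula dw/dt = -theta*w
print(w)                     # close to 2.5 * exp(-10) ~ 1.1e-4: near-zero within one second
```

The same exponential decay governs each matrix-valued error function W 1 ( t ) through W 6 ( t ) entry by entry.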
Similarly, as shown in Figure 7 and Figure 8, the residual errors of the CTSVD model (7) for Examples 2 and 3, with the same parameters as Example 1, also rapidly converged to near-zero. Moreover, as seen from Figure 9, the residual errors of the original problem (1) for Examples 1–3 also converged to near-zero rapidly. Thus, the correctness of Theorem 1 was further confirmed.

4.3. Experimental Results of DTSVD Algorithms

The results of the numerical experiments for the DTSVD-1 (24), -2 (25), -3 (26), -4 (27), and -5 (28) algorithms, with random initial values, h = 0.1 , and τ = 0.01 s or 0.001 s, are displayed in Figures 10–12, and the corresponding data are listed in Tables 1–3. Note that, due to space limitations, the solution-trajectory figures, as well as the residual-error figures for Examples 2 and 3, synthesized by the DTSVD algorithms are not provided.
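The DTSVD algorithms discretize the CTSVD model using ZeaD formulas of increasing order. Their mechanics can be illustrated on a scalar stand-in problem (not the paper's SVD system) with the simplest one-step, Euler-type discretization: tracking the temporally variant root x ( t ) of x 2 ( t ) = c ( t ) , where c ( t ) = 2 + sin ( t ) is a hypothetical test function:

```python
import math

def dtznn_residual(tau, h=0.1, t_end=10.0):
    """Steady-state residual |x_k^2 - c(t_k)| of a one-step (Euler-type)
    discrete ZND scheme tracking x(t)^2 = c(t) with c(t) = 2 + sin(t)."""
    c = lambda t: 2.0 + math.sin(t)
    cdot = lambda t: math.cos(t)
    x, res = math.sqrt(c(0.0)), 0.0            # start on the exact solution
    for k in range(int(t_end / tau)):
        t = k * tau
        e = x * x - c(t)                       # ZN-style error function
        xdot = (cdot(t) - (h / tau) * e) / (2.0 * x)
        x += tau * xdot                        # Euler-type (one-step) update
        if t > t_end - 2.0:                    # record steady-state residual
            res = max(res, abs(x * x - c(t + tau)))
    return res

r1, r2 = dtznn_residual(0.01), dtznn_residual(0.001)
print(r1 / r2)  # near 100: the one-step scheme's residual scales as O(tau^2)
```

Higher-order ZeaD formulas, such as the 11-point formula, replace the Euler step with multi-instant updates, pushing the residual law from O ( τ 2 ) up to O ( τ 6 ) .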
By comparing Figure 10 and Figure 11, we can observe that the residual-error trajectories synthesized by the DTSVD-1 (24), -2 (25), -3 (26), -4 (27), and -5 (28) algorithms for Example 1 changed in a regular manner as τ decreased, which is consistent with the results of Theorem 3. From subgraphs (a)–(f) of Figure 10 and Figure 11, we can observe that the residual errors ‖ W 1 ( t k ) ‖ F through ‖ W 6 ( t k ) ‖ F for the five DTSVD algorithms all converged rapidly to near-zero. Among these algorithms, the DTSVD-5 (28) algorithm had the fastest convergence speed and the highest computational precision. In addition, as illustrated in Figure 10 and Figure 11, the error functions W 1 ( t k ) through W 6 ( t k ) rapidly approached zero, indicating the effectiveness of the proposed DTSVD algorithms.
Furthermore, according to the data in Table 1 and Table 2, the residual errors of the DTSVD-1 (24), -2 (25), -3 (26), -4 (27), and -5 (28) algorithms changed in the manner of O ( τ 2 ) , O ( τ 3 ) , O ( τ 4 ) , O ( τ 5 ) , and O ( τ 6 ) , respectively, which verifies the results of Theorem 3. Meanwhile, as shown in Figure 12 and Table 3, the residual errors of the original problem (1) corresponding to the five algorithms were also proportional to τ 2 , τ 3 , τ 4 , τ 5 , and τ 6 , respectively. Thus, the results of Theorem 3 were reconfirmed.
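These O ( τ p ) laws can be read off directly from the tabulated data: reducing τ tenfold should divide an O ( τ p ) residual by roughly 10^p. A quick check in Python against the Example 1 column of Table 3:

```python
import math

# (residual at tau = 0.01 s, residual at tau = 0.001 s), Example 1 of Table 3
pairs = {
    "DTSVD-1": (2.369e-3, 2.401e-5),
    "DTSVD-2": (5.134e-5, 5.903e-8),
    "DTSVD-3": (2.111e-6, 2.557e-10),
    "DTSVD-4": (3.044e-7, 4.347e-12),
    "DTSVD-5": (5.075e-8, 7.519e-14),
}
# A tenfold reduction of tau divides an O(tau^p) residual by ~10^p,
# so log10 of the ratio estimates the order p.
orders = {name: math.log10(rc / rf) for name, (rc, rf) in pairs.items()}
for name, p in orders.items():
    print(f"{name}: empirical order ~ {p:.2f}")  # close to 2, 3, 4, 5, 6
```

The empirical orders come out near the nominal values 2 through 6, matching the Law column of Table 3.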

5. Conclusions

In this research, the problem of complex, temporally variant SVD was formulated and investigated. To solve this problem, it was first transformed into an equation system. The dynamical model, i.e., the CTSVD model, was then derived by applying the real ZN method, matrix vectorization, the Kronecker product, the vectorized transpose matrix, and a dimensionality reduction technique. Meanwhile, a new 11-point ZeaD formula with fifth-order precision was proposed and studied. Furthermore, five DTSVD algorithms were obtained with the use of the 11-point and other ZeaD formulas. It is worth mentioning that the proposed DTSVD-5 algorithm had the highest computational accuracy among the five DTSVD algorithms, i.e., O ( τ 6 ) precision. Finally, theoretical analyses and numerical experimental results verified the feasibility and effectiveness of the proposed dynamical model and algorithms. Our future research aims to apply the proposed SVD model and algorithms to real-world applications, such as signal processing, communications, and machine learning. Additionally, we will consider non-ideal conditions, including matrix singularities and noise interference.

Author Contributions

Conceptualization, X.K. and Y.Z.; Data curation, J.C., X.K. and Y.Z.; Formal analysis, J.C., X.K. and Y.Z.; Funding acquisition, J.C. and Y.Z.; Investigation, J.C., X.K. and Y.Z.; Methodology, J.C., X.K. and Y.Z.; Project administration, X.K. and Y.Z.; Resources, J.C., X.K. and Y.Z.; Software, J.C.; Supervision, X.K. and Y.Z.; Validation, J.C., X.K. and Y.Z.; Visualization, J.C., X.K. and Y.Z.; Writing—original draft, J.C.; Writing—review and editing, J.C., X.K. and Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under grant 62376290, the Natural Science Foundation of Guangdong Province under grant 2024A1515011016, and the School Level Scientific Research Project of Youjiang Medical University for Nationalities under grant yy2020gcky037.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors wish to express their sincere thanks to Sun Yat-sen University for its support and assistance in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this paper:
SVD: Singular value decomposition
ZN: Zeroing neurodynamics
CTSVD: Continuous-time SVD
ZeaD: Zhang et al. discretization
DTSVD: Discrete-time SVD
ODE: Ordinary differential equation

Appendix A

In the Appendix, the following four results [30,32] for an N-step method are provided.
Result 1: A linear N-step method ∑_{i=0}^{N} α_i x_{k+i} = τ ∑_{i=0}^{N} β_i ψ_{k+i} can be checked for 0-stability by determining the roots of its characteristic polynomial P_N(ς) = ∑_{i=0}^{N} α_i ς^i. If every root ς of the polynomial P_N(ς) satisfies |ς| ≤ 1, with any root of modulus |ς| = 1 being simple, then the N-step method is 0-stable (i.e., has 0-stability).
Result 2: A linear N-step method is said to be consistent (i.e., has consistency) of order w if the truncation error for the exact solution is of order O(τ^(w+1)), where w > 0.
Result 3: A linear N-step method is convergent, i.e., x_[t/τ] → x*(t) for all t ∈ [0, t_f] as τ → 0, if and only if the method is 0-stable and consistent. That is, 0-stability plus consistency means convergence, which is also known as the Dahlquist equivalence theorem.
Result 4: A 0-stable consistent method converges with the order of its truncation error.
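Result 1 is straightforward to automate: collect the α_i coefficients, compute the roots of P_N(ς), and verify the root condition. A small Python sketch follows; the two example polynomials are generic textbook cases, not the paper's ZeaD formulas:

```python
import numpy as np

def is_zero_stable(alpha):
    """Check 0-stability of a linear N-step method, given the coefficients
    alpha = [alpha_0, ..., alpha_N] of P_N(s) = sum(alpha_i * s^i)."""
    roots = np.roots(alpha[::-1])  # np.roots expects highest degree first
    for r in roots:
        m = abs(r)
        if m > 1 + 1e-10:
            return False           # a root outside the unit circle
        if abs(m - 1) <= 1e-10:
            # roots on the unit circle must be simple
            if np.sum(np.abs(roots - r) < 1e-8) > 1:
                return False
    return True

# Two-step Adams-Bashforth: P_2(s) = s^2 - s, roots {0, 1} -> 0-stable.
print(is_zero_stable([0.0, -1.0, 1.0]))  # True
# P_2(s) = s^2 - 2s + 1 has a double root at s = 1 -> not 0-stable.
print(is_zero_stable([1.0, -2.0, 1.0]))  # False
```

Combined with a truncation-error (consistency) check, this realizes the Dahlquist equivalence test of Results 3 and 4.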

References

  1. Ye, C.; Tan, S.; Wang, J.; Shi, L.; Zuo, Q.; Xiong, B. Double security level protection based on chaotic maps and SVD for medical images. Mathematics 2025, 13, 182. [Google Scholar] [CrossRef]
  2. Saifudin, I.; Widiyaningtyas, T.; Zaeni, I.; Aminuddin, A. SVD-GoRank: Recommender system algorithm using SVD and Gower’s ranking. IEEE Access 2025, 13, 19796–19827. [Google Scholar] [CrossRef]
  3. Tsyganov, A.; Tsyganova, Y. SVD-based parameter identification of discrete-time stochastic systems with unknown exogenous inputs. Mathematics 2024, 12, 1006. [Google Scholar] [CrossRef]
  4. Xie, Z.; Sun, J.; Tang, Y.; Tang, X.; Simpson, O.; Sun, Y. A k-SVD based compressive sensing method for visual chaotic image encryption. Mathematics 2023, 11, 1658. [Google Scholar] [CrossRef]
  5. Xue, Y.; Mu, K.; Wang, Y.; Chen, Y.; Zhong, P.; Wen, J. Robust speech steganography using differential SVD. IEEE Access 2019, 7, 153724–153733. [Google Scholar] [CrossRef]
  6. Wang, B.; Ding, C. An adaptive signal denoising method based on reweighted SVD for the fault diagnosis of rolling bearings. Sensors 2025, 25, 2470. [Google Scholar] [CrossRef]
  7. Baumann, M.; Helmke, U. Singular value decomposition of time-varying matrices. Future Gen. Comput. Syst. 2003, 19, 353–361. [Google Scholar] [CrossRef]
  8. Chen, J.; Zhang, Y. Online singular value decomposition of time-varying matrix via zeroing neural dynamics. Neurocomputing 2020, 383, 314–323. [Google Scholar] [CrossRef]
  9. Huang, M.; Zhang, Y. Zhang neuro-PID control for generalized bi-variable function projective synchronization of nonautonomous nonlinear systems with various perturbations. Mathematics 2024, 12, 2715. [Google Scholar] [CrossRef]
  10. Uhlig, F. Zhang neural networks: An introduction to predictive computations for discretized time-varying matrix problems. Numer. Math. 2024, 156, 691–739. [Google Scholar] [CrossRef]
  11. Guo, P.; Zhang, Y.; Li, S. Reciprocal-kind Zhang neurodynamics method for temporal-dependent Sylvester equation and robot manipulator motion planning. IEEE Trans. Neural Netw. Learn. Syst. 2025, 1–15. [Google Scholar] [CrossRef] [PubMed]
  12. Jerbi, H.; Alshammari, O.; Ben, A.; Kchaou, M.; Simos, T.; Mourtas, S.; Katsikis, V. Hermitian solutions of the quaternion algebraic Riccati equations through zeroing neural networks with application to quadrotor control. Mathematics 2024, 12, 15. [Google Scholar] [CrossRef]
  13. Jin, L.; Zhang, G.; Wang, Y.; Li, S. RNN-based quadratic programming scheme for tennis-training robots with flexible capabilities. IEEE Trans. Syst. Man Cybern. 2023, 53, 838–847. [Google Scholar] [CrossRef]
  14. Yang, Y.; Li, X.; Wang, X.; Liu, M.; Yin, J.; Li, W.; Voyles, R.; Ma, X. A strictly predefined-time convergent and anti-noise fractional-order zeroing neural network for solving time-variant quadratic programming in kinematic robot control. Neural Netw. 2025, 186, 107279. [Google Scholar] [CrossRef]
  15. Hua, C.; Cao, X.; Liao, B. Real-time solutions for dynamic complex matrix inversion and chaotic control using ODE-based neural computing methods. Comput. Intell. 2025, 41, e70042. [Google Scholar] [CrossRef]
  16. Zhang, B.; Zheng, Y.; Li, S.; Chen, X.; Mao, Y.; Pham, D. Inverse-free hybrid spatial-temporal derivative neural network for time-varying matrix Moore-Penrose inverse and its circuit schematic. IEEE Trans. Circuits-II 2025, 72, 499–503. [Google Scholar] [CrossRef]
  17. Yan, D.; Li, C.; Wu, J.; Deng, J.; Zhang, Z.; Yu, J.; Liu, P. A novel error-based adaptive feedback zeroing neural network for solving time-varying quadratic programming problems. Mathematics 2024, 12, 2090. [Google Scholar] [CrossRef]
  18. Peng, Z.; Huang, Y.; Xu, H. A novel high-efficiency variable parameter double integration ZNN model for time-varying Sylvester equations. Mathematics 2025, 13, 706. [Google Scholar] [CrossRef]
  19. Uhlig, F. Adapted AZNN methods for time-varying and static matrix problems. Electron. J. Linear Algebra 2023, 39, 164–180. [Google Scholar] [CrossRef]
  20. Guo, D.; Lin, X. Li-function activated Zhang neural network for online solution of time-varying linear matrix inequality. Neural Process. Lett. 2020, 52, 713–726. [Google Scholar] [CrossRef]
  21. Katsikis, V.N.; Mourtas, S.D.; Stanimirovic, P.S.; Zhang, Y. Solving complex-valued time-varying linear matrix equations via QR decomposition with applications to robotic motion tracking and on angle-of-arrival localization. IEEE Trans. Neural Netw. Learn. Syst. 2022, 33, 3415–3424. [Google Scholar] [CrossRef] [PubMed]
  22. Chen, J.; Kang, X.; Zhang, Y. Continuous and discrete ZND models with aid of eleven instants for complex QR decomposition of time-varying matrices. Mathematics 2023, 11, 3354. [Google Scholar] [CrossRef]
  23. Hu, C.; Kang, X.; Zhang, Y. Three-step general discrete-time Zhang neural network design and application to time-variant matrix inversion. Neurocomputing 2018, 306, 108–118. [Google Scholar] [CrossRef]
  24. Liu, K.; Liu, Y.; Zhang, Y.; Wei, L.; Sun, Z.; Jin, L. Five-step discrete-time noise-tolerant zeroing neural network model for time-varying matrix inversion with application to manipulator motion generation. Eng. Appl. Artif. Intel. 2021, 103, 104306. [Google Scholar] [CrossRef]
  25. Sun, M.; Wang, Y. General five-step discrete-time Zhang neural network for time-varying nonlinear optimization. Bull. Malays. Math. Sci. Soc. 2020, 43, 1741–1760. [Google Scholar] [CrossRef]
  26. Chen, J.; Zhang, Y. Discrete-time ZND models solving ALRMPC via eight-instant general and other formulas of ZeaD. IEEE Access 2019, 7, 125909–125918. [Google Scholar] [CrossRef]
  27. Brookes, M. The Matrix Reference Manual; Imperial College London Staff Page. 2020. Available online: http://www.ee.imperial.ac.uk/hp/staff/dmb/matrix/intro.html (accessed on 31 March 2025).
  28. Horn, R.A.; Johnson, C.R. Topics in Matrix Analysis; Cambridge University Press: New York, NY, USA, 1991. [Google Scholar]
  29. Zhang, Y.; Yang, R.; Li, S. Reciprocal Zhang dynamics (RZD) handling TVUDLMVE (time-varying under-determined linear matrix-vector equation). In Proceedings of the 37th Chinese Control and Decision Conference (CCDC), Xiamen, China, 16–19 May 2025; pp. 5794–5800. [Google Scholar]
  30. Griffiths, D.F.; Higham, D.J. Numerical Methods for Ordinary Differential Equations: Initial Value Problems; Springer: London, UK, 2010. [Google Scholar]
  31. Mathews, J.H.; Fink, K.D. Numerical Methods Using MATLAB; Prentice-Hall: Englewood Cliffs, NJ, USA, 2004. [Google Scholar]
  32. Suli, E.; Mayers, D.F. An Introduction to Numerical Analysis; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
Figure 1. Solution trajectories of Σ ( t ) synthesized by the CTSVD model (7) with three random initial values and ϑ = 10 for Example 1.
Figure 2. Solution trajectories of U R ( t ) synthesized by the CTSVD model (7) with three random initial values and ϑ = 10 for Example 1.
Figure 3. Solution trajectories of U I ( t ) synthesized by the CTSVD model (7) with three random initial values and ϑ = 10 for Example 1.
Figure 4. Solution trajectories of V R ( t ) synthesized by the CTSVD model (7) with three random initial values and ϑ = 10 for Example 1.
Figure 5. Solution trajectories of V I ( t ) synthesized by the CTSVD model (7) with three random initial values and ϑ = 10 for Example 1.
Figure 6. Residual errors of the CTSVD model (7) with three random initial values and ϑ = 10 for Example 1.
Figure 7. Residual errors of the CTSVD model (7) with three random initial values and ϑ = 10 for Example 2.
Figure 8. Residual errors of the CTSVD model (7) with three random initial values and ϑ = 10 for Example 3.
Figure 9. Residual errors of the original problem (1), i.e., ‖ C ( t ) − U ( t ) Σ ( t ) V * ( t ) ‖ F , corresponding to the CTSVD model (7) with three random initial values and ϑ = 10 for Examples 1–3.
Figure 10. Residual errors of the DTSVD-1 (24), -2 (25), -3 (26), -4 (27), and -5 (28) algorithms with random initial values, h = 0.1 , and τ = 0.01 s for Example 1.
Figure 11. Residual errors of the DTSVD-1 (24), -2 (25), -3 (26), -4 (27), and -5 (28) algorithms with random initial values, h = 0.1 , and τ = 0.001 s for Example 1.
Figure 12. Residual errors of the original problem (1), i.e., ‖ C ( t k ) − U ( t k ) Σ ( t k ) V * ( t k ) ‖ F , corresponding to the DTSVD-1 (24), -2 (25), -3 (26), -4 (27), and -5 (28) algorithms with random initial values and h = 0.1 for Examples 1–3. (a) Example 1 with τ = 0.01 s. (b) Example 2 with τ = 0.01 s. (c) Example 3 with τ = 0.01 s. (d) Example 1 with τ = 0.001 s. (e) Example 2 with τ = 0.001 s. (f) Example 3 with τ = 0.001 s.
Table 1. Residual errors of (24), (25), and (26), with h = 0.1.

Algorithm | Residual | τ (s) | Example 1 | Example 2 | Example 3 | Law
DTSVD-1 (24) | ‖W1(tk)‖F | 0.01 | 3.041 × 10^-3 | 3.183 × 10^-3 | 3.082 × 10^-3 | O(τ^2)
 | | 0.001 | 2.653 × 10^-5 | 3.405 × 10^-5 | 3.028 × 10^-5 |
 | ‖W2(tk)‖F | 0.01 | 1.791 × 10^-3 | 1.599 × 10^-3 | 1.888 × 10^-3 |
 | | 0.001 | 2.252 × 10^-5 | 1.877 × 10^-5 | 1.915 × 10^-5 |
 | ‖W3(tk)‖F | 0.01 | 7.008 × 10^-4 | 1.145 × 10^-3 | 6.417 × 10^-4 |
 | | 0.001 | 7.271 × 10^-6 | 1.158 × 10^-5 | 6.512 × 10^-6 |
 | ‖W4(tk)‖F | 0.01 | 5.660 × 10^-4 | 3.952 × 10^-4 | 3.291 × 10^-4 |
 | | 0.001 | 5.911 × 10^-6 | 4.119 × 10^-6 | 3.357 × 10^-6 |
 | ‖W5(tk)‖F | 0.01 | 8.677 × 10^-4 | 6.405 × 10^-4 | 1.146 × 10^-3 |
 | | 0.001 | 9.024 × 10^-6 | 6.546 × 10^-6 | 1.159 × 10^-5 |
 | ‖W6(tk)‖F | 0.01 | 2.216 × 10^-4 | 3.295 × 10^-4 | 3.936 × 10^-4 |
 | | 0.001 | 2.250 × 10^-6 | 3.357 × 10^-6 | 4.120 × 10^-6 |
DTSVD-2 (25) | ‖W1(tk)‖F | 0.01 | 5.127 × 10^-5 | 4.322 × 10^-5 | 4.300 × 10^-5 | O(τ^3)
 | | 0.001 | 5.306 × 10^-8 | 4.813 × 10^-8 | 4.556 × 10^-8 |
 | ‖W2(tk)‖F | 0.01 | 5.620 × 10^-5 | 3.735 × 10^-5 | 4.215 × 10^-5 |
 | | 0.001 | 6.168 × 10^-8 | 4.071 × 10^-8 | 3.931 × 10^-8 |
 | ‖W3(tk)‖F | 0.01 | 1.360 × 10^-5 | 1.333 × 10^-5 | 7.166 × 10^-6 |
 | | 0.001 | 1.564 × 10^-8 | 1.493 × 10^-8 | 9.021 × 10^-9 |
 | ‖W4(tk)‖F | 0.01 | 1.156 × 10^-5 | 1.044 × 10^-5 | 5.195 × 10^-6 |
 | | 0.001 | 1.308 × 10^-8 | 1.152 × 10^-8 | 5.605 × 10^-9 |
 | ‖W5(tk)‖F | 0.01 | 1.582 × 10^-5 | 8.336 × 10^-6 | 1.329 × 10^-5 |
 | | 0.001 | 1.823 × 10^-8 | 9.041 × 10^-9 | 1.506 × 10^-8 |
 | ‖W6(tk)‖F | 0.01 | 3.671 × 10^-6 | 5.195 × 10^-6 | 1.044 × 10^-5 |
 | | 0.001 | 3.420 × 10^-9 | 5.598 × 10^-9 | 1.153 × 10^-8 |
DTSVD-3 (26) | ‖W1(tk)‖F | 0.01 | 3.360 × 10^-6 | 1.837 × 10^-6 | 1.827 × 10^-6 | O(τ^4)
 | | 0.001 | 3.782 × 10^-10 | 2.063 × 10^-10 | 1.983 × 10^-10 |
 | ‖W2(tk)‖F | 0.01 | 2.348 × 10^-6 | 1.284 × 10^-6 | 1.414 × 10^-6 |
 | | 0.001 | 3.015 × 10^-10 | 1.526 × 10^-10 | 1.410 × 10^-10 |
 | ‖W3(tk)‖F | 0.01 | 1.257 × 10^-6 | 8.094 × 10^-7 | 4.141 × 10^-7 |
 | | 0.001 | 1.643 × 10^-10 | 9.092 × 10^-11 | 4.606 × 10^-11 |
 | ‖W4(tk)‖F | 0.01 | 9.666 × 10^-7 | 4.619 × 10^-7 | 2.576 × 10^-7 |
 | | 0.001 | 1.205 × 10^-10 | 5.549 × 10^-11 | 2.868 × 10^-11 |
 | ‖W5(tk)‖F | 0.01 | 1.532 × 10^-6 | 4.105 × 10^-7 | 8.087 × 10^-7 |
 | | 0.001 | 2.014 × 10^-10 | 4.623 × 10^-11 | 9.104 × 10^-11 |
 | ‖W6(tk)‖F | 0.01 | 2.350 × 10^-7 | 2.576 × 10^-7 | 4.613 × 10^-7 |
 | | 0.001 | 2.862 × 10^-11 | 2.854 × 10^-11 | 5.540 × 10^-11 |
Table 2. Residual errors of (27) and (28), with h = 0.1, in addition to Table 1.

Algorithm | Residual | τ (s) | Example 1 | Example 2 | Example 3 | Law
DTSVD-4 (27) | ‖W1(tk)‖F | 0.01 | 3.404 × 10^-7 | 1.308 × 10^-7 | 1.295 × 10^-7 | O(τ^5)
 | | 0.001 | 4.090 × 10^-12 | 1.485 × 10^-12 | 1.461 × 10^-12 |
 | ‖W2(tk)‖F | 0.01 | 3.403 × 10^-7 | 1.033 × 10^-7 | 1.496 × 10^-7 |
 | | 0.001 | 4.637 × 10^-12 | 1.349 × 10^-12 | 1.397 × 10^-12 |
 | ‖W3(tk)‖F | 0.01 | 1.541 × 10^-7 | 7.472 × 10^-8 | 2.744 × 10^-8 |
 | | 0.001 | 2.076 × 10^-12 | 9.294 × 10^-13 | 3.490 × 10^-13 |
 | ‖W4(tk)‖F | 0.01 | 1.044 × 10^-7 | 5.119 × 10^-8 | 1.687 × 10^-8 |
 | | 0.001 | 1.346 × 10^-12 | 6.408 × 10^-13 | 2.067 × 10^-13 |
 | ‖W5(tk)‖F | 0.01 | 1.987 × 10^-7 | 3.024 × 10^-8 | 7.492 × 10^-8 |
 | | 0.001 | 2.449 × 10^-12 | 3.502 × 10^-13 | 9.293 × 10^-13 |
 | ‖W6(tk)‖F | 0.01 | 2.333 × 10^-8 | 1.763 × 10^-8 | 5.120 × 10^-8 |
 | | 0.001 | 3.246 × 10^-13 | 2.120 × 10^-13 | 6.395 × 10^-13 |
DTSVD-5 (28) | ‖W1(tk)‖F | 0.01 | 7.144 × 10^-8 | 1.858 × 10^-8 | 1.839 × 10^-8 | O(τ^6)
 | | 0.001 | 1.005 × 10^-13 | 2.576 × 10^-14 | 2.796 × 10^-14 |
 | ‖W2(tk)‖F | 0.01 | 5.873 × 10^-8 | 1.417 × 10^-8 | 1.859 × 10^-8 |
 | | 0.001 | 8.367 × 10^-14 | 2.046 × 10^-14 | 1.882 × 10^-14 |
 | ‖W3(tk)‖F | 0.01 | 3.591 × 10^-8 | 9.548 × 10^-9 | 4.505 × 10^-9 |
 | | 0.001 | 5.795 × 10^-14 | 1.235 × 10^-14 | 4.841 × 10^-15 |
 | ‖W4(tk)‖F | 0.01 | 2.356 × 10^-8 | 6.529 × 10^-9 | 2.694 × 10^-9 |
 | | 0.001 | 3.619 × 10^-14 | 9.160 × 10^-15 | 3.370 × 10^-15 |
 | ‖W5(tk)‖F | 0.01 | 4.502 × 10^-8 | 4.568 × 10^-9 | 9.545 × 10^-9 |
 | | 0.001 | 7.083 × 10^-14 | 5.772 × 10^-15 | 1.223 × 10^-14 |
 | ‖W6(tk)‖F | 0.01 | 5.364 × 10^-9 | 2.694 × 10^-9 | 6.528 × 10^-9 |
 | | 0.001 | 7.938 × 10^-15 | 3.953 × 10^-15 | 8.195 × 10^-15 |
Table 3. Residual errors of the original problem (1), i.e., ‖ C ( t k ) − U ( t k ) Σ ( t k ) V * ( t k ) ‖ F , for (24), (25), (26), (27), and (28), with h = 0.1.

Algorithm | τ (s) | Example 1 | Example 2 | Example 3 | Law
DTSVD-1 (24) | 0.01 | 2.369 × 10^-3 | 2.473 × 10^-3 | 2.473 × 10^-3 | O(τ^2)
 | 0.001 | 2.401 × 10^-5 | 2.587 × 10^-5 | 2.588 × 10^-5 |
DTSVD-2 (25) | 0.01 | 5.134 × 10^-5 | 3.024 × 10^-5 | 3.024 × 10^-5 | O(τ^3)
 | 0.001 | 5.903 × 10^-8 | 3.253 × 10^-8 | 3.253 × 10^-8 |
DTSVD-3 (26) | 0.01 | 2.111 × 10^-6 | 1.302 × 10^-6 | 1.302 × 10^-6 | O(τ^4)
 | 0.001 | 2.557 × 10^-10 | 1.520 × 10^-10 | 1.521 × 10^-10 |
DTSVD-4 (27) | 0.01 | 3.044 × 10^-7 | 1.086 × 10^-7 | 1.082 × 10^-7 | O(τ^5)
 | 0.001 | 4.347 × 10^-12 | 1.287 × 10^-12 | 1.290 × 10^-12 |
DTSVD-5 (28) | 0.01 | 5.075 × 10^-8 | 1.506 × 10^-8 | 1.506 × 10^-8 | O(τ^6)
 | 0.001 | 7.519 × 10^-14 | 2.203 × 10^-14 | 2.225 × 10^-14 |