
Kalman Filter for Linear Discrete-Time Rectangular Singular Systems Considering Causality

1
School of Electrical and Control Engineering, Shaanxi University of Science and Technology, Xi’an 710021, China
2
School of Automation, Guangdong University of Petrochemical Technology, Maoming 525000, China
*
Author to whom correspondence should be addressed.
Mathematics 2024, 12(1), 137; https://doi.org/10.3390/math12010137
Submission received: 8 December 2023 / Revised: 24 December 2023 / Accepted: 29 December 2023 / Published: 31 December 2023
(This article belongs to the Section Engineering Mathematics)

Abstract
This paper proposes a Kalman filter for linear rectangular singular discrete-time systems, where the singular matrix in the system is a rectangular matrix without full column rank. By using two different restricted equivalent transformation methods and adding the measurement equation to the state equation, the system is transformed into a square singular system satisfying regularity and observability. During this process, the causality of the system is taken into account, and multiple matrix transformations are applied accordingly. Based on these modifications, state estimation results are obtained using the Kalman filter. Finally, a numerical example is employed to demonstrate the effectiveness of our approach.

1. Introduction

State estimation has always been an important topic in practical applications. Since the least squares method was proposed, many state estimation methods for linear systems have been developed [1,2,3,4,5]. Among these, the Kalman filter [6] is the optimal state estimator for linear Gaussian systems. Consequently, many advanced Kalman filters have been proposed, and these studies have achieved good filtering performance [7,8,9,10], providing theoretical references for singular systems.
Singular systems, also known as descriptor systems, were first proposed by Rosenbrock in 1974 in the study of power grid systems [11]. Their remarkable characteristic is that the matrix in the dynamic state-space equation may be singular. Singular systems can represent physical systems better than regular ones and have important applications in circuit systems, chemical systems and economic systems [12,13]. Thus, there are many contributions based on singular systems [14,15]. Some properties of singular systems are similar to those of regular systems, such as stability, controllability, observability, and detectability [16]. However, an important difference between singular and regular systems is that singular systems may be impulsive. As a result, research on singular systems is more complex than that on regular systems.
The state estimation problem of singular systems has attracted widespread attention due to its application background. The state estimation methods for linear discrete singular systems mainly include reduced-order estimation methods based on singular value decomposition (SVD), the maximum likelihood method, the least squares method and the minimum variance estimation method [17]. Apart from these, many state estimation methods for singular systems under different situations have also been proposed. Liu et al. considered the estimation problem of discrete-time linear fractional-order singular systems [18,19]. Yu discussed singular systems with multiplicative noise using the restricted equivalent transformation method and a recursive Riccati equation [20,21]. Zhang proposed an optimal recursive filtering method for singular stochastic discrete-time systems based on the ARMA innovation model and the time-domain innovation analysis method [22]. Zhang [23] investigated positive real lemmas for singular fractional-order linear time-invariant systems. In addition, there are many studies on the causality, controllability, stability, regularity and stabilization of singular systems, which further improve the theory of singular systems [24,25,26,27].
However, most existing research is aimed at square singular systems, and many conclusions about square systems cannot be applied when the singular matrix is rectangular. One of the facts is that only square systems possess regularity [28]. Zhang et al. [29] extended the regularity of square singular systems to non-square singular systems, proposed the concept of generalized regularity, and gave the corresponding necessary and sufficient conditions. These conditions show that when the matrix pencil is not of full rank, the solution of the system is not unique. Therefore, the state estimation problem for rectangular systems without full column rank must satisfy the premise that the whole system is observable. For the state estimation of rectangular singular systems, Tian [30] decomposed the system equation into two parts by a restricted equivalent transformation based on singular value decomposition when the system is regular and strongly controllable, and then used generalized inverse theory to reduce the singular system to a state estimation problem for normal systems. However, that work only considers the case where the singular matrix of the transformed equation has full column rank. Wen [31] proposed a method for decomposing the state using QR decomposition theory, which does not assume that the singular matrix has full rank, and extended the problem to time-varying situations, but did not consider the system's impulsive nature.
In discrete singular systems, the impulse property corresponds to the causality of the system. If a rectangular singular system is noncausal, the state of the system is related to future information, and this characteristic affects the estimation of the system state. However, to the best of our knowledge, existing state estimation methods for linear rectangular singular systems have not taken the causal properties of the system into account. Therefore, the main contribution of this paper is a state estimation method for linear rectangular singular systems that accounts for noncausal situations, providing a new approach to the state estimation problem of rectangular singular systems. The main work of this paper is as follows: two restricted equivalent transformations are applied to the system, followed by multiple matrix transformations based on the causality of the system. Additionally, a portion of the measurement equation is incorporated into the state equation to transform the system into a non-singular one. Based on these modifications, optimal state estimation results are obtained using the Kalman filter. Finally, the effectiveness of the proposed method is validated through a numerical example.
The remaining sections of this paper are arranged as follows: Section 2 presents the problems to be addressed and the corresponding assumptions, Section 3 applies a series of system transformations to convert the system into a non-singular form, Section 4 outlines the state estimation method based on the Kalman filter, Section 5 provides numerical simulations, and Section 6 concludes the paper.
Notations. The notations throughout this paper are standard unless specifically stated. $\mathbb{R}^n$ and $\mathbb{R}^{m \times n}$ denote the $n$-dimensional Euclidean space and the set of all $m \times n$ real matrices, respectively. The superscripts $X^T$ and $X^{-1}$ denote the transpose and the inverse of a matrix $X$, respectively. $I_{n \times n}$ denotes the $n \times n$ identity matrix and $O_{m \times n}$ denotes the $m \times n$ zero matrix. $E\{w\}$ is the expectation of a random variable $w$ and $\mathrm{rank}\,M$ is the rank of a matrix $M$. $\hat z(k|k-1)$ and $\hat z(k|k)$ denote the prediction and the estimate of the state $z(k)$, respectively. $P(k|k-1)$ and $P(k|k)$ denote the prediction error covariance and the estimation error covariance of the state at instant $k$, respectively. For $M \in \mathbb{R}^{m \times n}$, $M = [a_{ij}]$ means that $M$ is composed of elements $a_{ij}$, $i = 1, 2, \dots, m$, $j = 1, 2, \dots, n$, where $a_{ij}$ is the element in the $i$th row and $j$th column of $M$. $z \in \mathbb{C}$ means that $z$ is a complex number, where $\mathbb{C}$ denotes the complex field.

2. Problem Formulation

Consider a discrete-time linear singular system:
$$E x(k+1) = A x(k) + w(k) \tag{1}$$
$$y(k+1) = H x(k+1) + v(k+1) \tag{2}$$
where $x \in \mathbb{R}^n$, $y \in \mathbb{R}^m$. The matrix $E \in \mathbb{R}^{s \times n}$ is a singular matrix with $\mathrm{rank}\,E = r$, $r \le s < n$, $m > n - r$; $A \in \mathbb{R}^{s \times n}$, $H \in \mathbb{R}^{m \times n}$, and $w(k)$, $v(k)$ are zero-mean Gaussian white noises. The system satisfies the following assumptions:
Assumption 1.
$$E\{w(j) w^T(k)\} = \delta_{jk} Q$$
$$E\{v(j) v^T(k)\} = \delta_{jk} R$$
$$E\{v(k) w^T(k)\} = 0$$
where $\delta_{jk}$ is the Kronecker delta function.
Assumption 2.
There exists a complex number $z \in \mathbb{C}$ such that $\mathrm{rank}(zE - A) = s$ [32].
Assumption 3.
The system is observable and [29]:
$$\mathrm{rank}\begin{bmatrix} E \\ H \end{bmatrix} = n$$
In this paper, the estimate of $x(k+1)$ will be obtained from systems (1) and (2) based on the measurements $(y(k+1), y(k), \dots, y(1))$.

3. System Transformation Based on Constrained Equivalence Transformation

In this section, we sequentially perform two different constrained equivalence transformations on the system. In this process, a part of the observation equation is combined with the state equation, so that the state equation becomes a square singular system satisfying regularity and observability. The system is thereby transformed into one that can be handled by state estimation methods for square singular systems. We first introduce the two constrained equivalence transformation methods that will be used.

3.1. Two Constrained Equivalence Transformation Methods

3.1.1. The First Type of Restricted Equivalent Transformation

For the matrix $E$ in Equation (1), there exist orthogonal matrices $U \in \mathbb{R}^{s \times s}$, $V \in \mathbb{R}^{n \times n}$ such that $E$ can be decomposed according to its singular values as:
$$UEV = \begin{bmatrix} \Sigma & O_{r \times (n-r)} \\ O_{(s-r) \times r} & O_{(s-r) \times (n-r)} \end{bmatrix}$$
where $\mathrm{rank}\,\Sigma = r$, $\Sigma \in \mathbb{R}^{r \times r}$, and $\Sigma = \mathrm{diag}[\sigma_1, \sigma_2, \dots, \sigma_r] > 0$. Here $\sigma_i$, $i = 1, 2, \dots, r$, are the non-zero singular values of $E$. We perform the corresponding transformation on the state $x$ and let $x = VX$, $X = \begin{bmatrix} X_1 \\ X_2 \end{bmatrix}$, $X_1 \in \mathbb{R}^r$, $X_2 \in \mathbb{R}^{n-r}$. Equation (1) transforms into:
$$\begin{bmatrix} \Sigma & O_{r \times (n-r)} \\ O_{(s-r) \times r} & O_{(s-r) \times (n-r)} \end{bmatrix} \begin{bmatrix} X_1(k+1) \\ X_2(k+1) \end{bmatrix} = UAV \begin{bmatrix} X_1(k) \\ X_2(k) \end{bmatrix} + U w(k) \tag{3}$$
Equation (3) is the restricted equivalent equation transformed from Equation (1) through the first type of restricted equivalent transformation.
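The first type of restricted equivalent transformation can be carried out numerically with a standard SVD. The sketch below is illustrative only: the matrix $E$ is a made-up $2 \times 3$ example of rank 1, not taken from the paper, and `numpy.linalg.svd` supplies the orthogonal factors $U$ and $V$.

```python
import numpy as np

# Hypothetical rectangular singular matrix E (s = 2 rows, n = 3 cols, rank r = 1)
E = np.array([[1.0, 2.0, 0.0],
              [2.0, 4.0, 0.0]])

# SVD: E = U0 @ diag(sv) @ V0t, so U0.T @ E @ V0 carries the singular
# values on its leading diagonal and zeros elsewhere.
U0, sv, V0t = np.linalg.svd(E)
U = U0.T              # left transformation (orthogonal, s x s)
V = V0t.T             # right transformation (orthogonal, n x n)

r = int(np.sum(sv > 1e-10))      # numerical rank of E
UEV = U @ E @ V                  # block form [[Sigma, 0], [0, 0]]
Sigma = UEV[:r, :r]

# the off-diagonal blocks of the transformed matrix vanish, as in Eq. (3)
assert np.allclose(UEV[r:, :], 0.0) and np.allclose(UEV[:r, r:], 0.0)
print(r, Sigma)
```

Here $\Sigma$ comes out as the $1 \times 1$ block holding the single non-zero singular value of $E$.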

3.1.2. The Second Type of Restricted Equivalent Transformation

Consider a square singular system:
$$\tilde E x(k+1) = \tilde A x(k) + \tilde w(k) \tag{4}$$
where $\tilde E \in \mathbb{R}^{n \times n}$, $\tilde A \in \mathbb{R}^{n \times n}$, $x \in \mathbb{R}^n$, and $\tilde w \in \mathbb{R}^n$ is zero-mean Gaussian noise.
For the matrix pair $(\tilde E, \tilde A)$, there exist invertible matrices $F \in \mathbb{R}^{n \times n}$, $T \in \mathbb{R}^{n \times n}$ such that:
$$F \tilde E T = \begin{bmatrix} I_{a \times a} & O_{a \times (n-a)} \\ O_{(n-a) \times a} & N \end{bmatrix}, \quad F \tilde A T = \begin{bmatrix} M & O_{a \times (n-a)} \\ O_{(n-a) \times a} & I_{(n-a) \times (n-a)} \end{bmatrix}$$
where $M \in \mathbb{R}^{a \times a}$, $a = \deg(\det(z\tilde E - \tilde A))$, $\det(\cdot)$ represents the determinant of the target matrix and $\deg(\cdot)$ the degree of the target polynomial, so $a \le \mathrm{rank}\,\tilde E$. $N \in \mathbb{R}^{(n-a) \times (n-a)}$ is a nilpotent matrix, with $N^b = 0$ and $b = \min\{j \mid N^j = 0\}$. In this paper, we refer to $b$ as the degree of the nilpotent matrix.
We perform the corresponding transformation on the state $x$ and let $x = T\bar x$, $\bar x = \begin{bmatrix} \bar x_1 \\ \bar x_2 \end{bmatrix}$, $\bar x_1 \in \mathbb{R}^a$, $\bar x_2 \in \mathbb{R}^{n-a}$. Equation (4) transforms into:
$$\begin{bmatrix} I_{a \times a} & O_{a \times (n-a)} \\ O_{(n-a) \times a} & N \end{bmatrix} \begin{bmatrix} \bar x_1(k+1) \\ \bar x_2(k+1) \end{bmatrix} = \begin{bmatrix} M & O_{a \times (n-a)} \\ O_{(n-a) \times a} & I_{(n-a) \times (n-a)} \end{bmatrix} \begin{bmatrix} \bar x_1(k) \\ \bar x_2(k) \end{bmatrix} + F \tilde w(k) \tag{5}$$
Let $F = \begin{bmatrix} F_1 \\ F_2 \end{bmatrix}$; Equation (5) can be split into:
$$\bar x_1(k+1) = M \bar x_1(k) + F_1 \tilde w(k) \tag{6}$$
$$N \bar x_2(k+1) = \bar x_2(k) + F_2 \tilde w(k) \tag{7}$$
Iterating Equation (7) forward and using $N^b = 0$:
$$\bar x_2(k+1) = -\sum_{i=1}^{b} N^{i-1} F_2 \tilde w(k+i) \tag{8}$$
Equations (6) and (8) are the restricted equivalent systems transformed from (4) through the second type of restricted equivalent transformation.
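The backward substitution behind Equations (7) and (8) can be checked numerically. In this illustrative sketch (the matrices $N$, $F_2$ and the noise samples are made-up values, not from the paper), $N$ is nilpotent of degree $b = 2$, and the closed-form noncausal solution is verified against the implicit recursion $N\bar x_2(k+1) = \bar x_2(k) + F_2\tilde w(k)$:

```python
import numpy as np

N = np.array([[0.0, 1.0],
              [0.0, 0.0]])      # nilpotent: N @ N = 0, degree b = 2
F2 = np.eye(2)
b = 2

rng = np.random.default_rng(0)
w = [rng.standard_normal(2) for _ in range(6)]   # noise samples w(0), ..., w(5)

def x2(k):
    # closed-form noncausal solution: depends on current and FUTURE noise
    return -sum(np.linalg.matrix_power(N, i) @ F2 @ w[k + i] for i in range(b))

# verify N x2(k+1) = x2(k) + F2 w(k) at a few instants
for k in range(3):
    assert np.allclose(N @ x2(k + 1), x2(k) + F2 @ w[k])
print("noncausal solution satisfies the implicit recursion")
```

The check makes the noncausality concrete: $\bar x_2(k)$ is built from noise at times $k$ and later.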
If the system is noncausal, the first type of restricted equivalent transformation cannot expose its noncausality, whereas the second type can. In addition, the principle of the first type is simpler. Therefore, causal singular systems are suited to the first type of restricted equivalent transformation, and noncausal singular systems are suited to the second type.

3.2. The System Transformation Based on Constrained Equivalence Transformation

We have obtained Equation (3) through the first type of restricted equivalent transformation. Let $UAV = \begin{bmatrix} \bar A_{11} & \bar A_{12} \\ \bar A_{21} & \bar A_{22} \end{bmatrix}$, $U = \begin{bmatrix} U_1 \\ U_2 \end{bmatrix}$, $HV = \bar H = [\bar H_1 \ \bar H_2]$. Equation (3) can be decomposed into:
$$\Sigma X_1(k+1) = \bar A_{11} X_1(k) + \bar A_{12} X_2(k) + U_1 w(k) \tag{9}$$
$$O_{(s-r) \times 1} = \bar A_{21} X_1(k) + \bar A_{22} X_2(k) + U_2 w(k) \tag{10}$$
where $\bar A_{11} \in \mathbb{R}^{r \times r}$, $\bar A_{12} \in \mathbb{R}^{r \times (n-r)}$, $\bar A_{21} \in \mathbb{R}^{(s-r) \times r}$, $\bar A_{22} \in \mathbb{R}^{(s-r) \times (n-r)}$.
Theorem 1.
The matrix $[\bar A_{21} \ \bar A_{22}] \in \mathbb{R}^{(s-r) \times n}$ has full row rank.
Proof. 
According to Assumption 2, $\mathrm{rank}(zE - A) = s$, and both $U$ and $V$ are orthogonal matrices. Recall that multiplying a matrix by an invertible matrix does not change its rank. Therefore, we have:
$$s = \mathrm{rank}(zE - A) = \mathrm{rank}\,U(zE - A)V = \mathrm{rank}\left( z\begin{bmatrix} \Sigma & O_{r \times (n-r)} \\ O_{(s-r) \times r} & O_{(s-r) \times (n-r)} \end{bmatrix} - \begin{bmatrix} \bar A_{11} & \bar A_{12} \\ \bar A_{21} & \bar A_{22} \end{bmatrix} \right) = \mathrm{rank}\begin{bmatrix} z\Sigma - \bar A_{11} & -\bar A_{12} \\ -\bar A_{21} & -\bar A_{22} \end{bmatrix} \tag{11}$$
Therefore, $\begin{bmatrix} z\Sigma - \bar A_{11} & -\bar A_{12} \\ -\bar A_{21} & -\bar A_{22} \end{bmatrix}$ has full row rank, so all of its rows are linearly independent; in particular, the bottom $s-r$ rows are linearly independent, which means $\mathrm{rank}[\bar A_{21} \ \bar A_{22}] = s - r$. Thus, the matrix $[\bar A_{21} \ \bar A_{22}]$ has full row rank. □
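The rank-invariance step used in the proof can be illustrated numerically: multiplying the pencil $zE - A$ by orthogonal (more generally, invertible) matrices leaves its rank unchanged. The matrices below are arbitrary examples chosen for illustration, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
s, n = 3, 4
E = rng.standard_normal((s, 2)) @ rng.standard_normal((2, n))  # rank 2, singular
A = rng.standard_normal((s, n))

# orthogonal U (s x s) and V (n x n) from QR factorizations
U, _ = np.linalg.qr(rng.standard_normal((s, s)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))

z = 0.7                      # any test point of the pencil z*E - A
pencil = z * E - A
assert np.linalg.matrix_rank(pencil) == np.linalg.matrix_rank(U @ pencil @ V)
print("rank preserved under orthogonal transformations")
```

This is exactly why $\mathrm{rank}\,U(zE - A)V = \mathrm{rank}(zE - A)$ can be asserted without further argument.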
Equation (9) can be transformed into:
$$X_1(k+1) = \Sigma^{-1}\left( \bar A_{11} X_1(k) + \bar A_{12} X_2(k) + U_1 w(k) \right) \tag{12}$$
Substituting $x = VX = V\begin{bmatrix} X_1 \\ X_2 \end{bmatrix}$ and $HV = \bar H = [\bar H_1 \ \bar H_2]$ into Equation (2), it transforms into:
$$y(k+1) = [\bar H_1 \ \bar H_2] \begin{bmatrix} X_1(k+1) \\ X_2(k+1) \end{bmatrix} + v(k+1) \tag{13}$$
Lemma 1.
The constrained equivalence transformation method does not change the observability of the system.
Proof. 
Matrices $U$ and $V$ are both orthogonal. Therefore:
$$n = \mathrm{rank}\begin{bmatrix} E \\ H \end{bmatrix} = \mathrm{rank}\left( \begin{bmatrix} U & O_{s \times m} \\ O_{m \times s} & I_{m \times m} \end{bmatrix} \begin{bmatrix} E \\ H \end{bmatrix} V \right) = \mathrm{rank}\begin{bmatrix} UEV \\ HV \end{bmatrix} = \mathrm{rank}\begin{bmatrix} \Sigma & O_{r \times (n-r)} \\ O_{(s-r) \times r} & O_{(s-r) \times (n-r)} \\ \bar H_1 & \bar H_2 \end{bmatrix} \tag{14}$$
Therefore, the system formed by Equations (3) and (13) is still observable. □
According to Equation (14) and given $\mathrm{rank}\begin{bmatrix} \Sigma & O \\ O & O \end{bmatrix} = \mathrm{rank}\,E = r$, the matrix $\bar H$ must contribute at least $n-r$ rows that are linearly independent of the rows above, so that the stacked matrix in Equation (14) attains rank $n$. Without loss of generality, suppose these $n-r$ rows are the first $n-r$ rows of $\bar H$, denoted $[\bar H_{11} \ \bar H_{12}] \in \mathbb{R}^{(n-r) \times n}$; then we have:
$$\mathrm{rank}\begin{bmatrix} \Sigma & O_{r \times (n-r)} \\ O_{(s-r) \times r} & O_{(s-r) \times (n-r)} \\ \bar H_{11} & \bar H_{12} \end{bmatrix} = n \tag{15}$$
Let $\bar H = [\bar H_1 \ \bar H_2] = \begin{bmatrix} \bar H_{11} & \bar H_{12} \\ \bar H_{21} & \bar H_{22} \end{bmatrix}$, with $\bar H_{11} \in \mathbb{R}^{(n-r) \times r}$, $\bar H_{12} \in \mathbb{R}^{(n-r) \times (n-r)}$, $\bar H_{21} \in \mathbb{R}^{(m-(n-r)) \times r}$, $\bar H_{22} \in \mathbb{R}^{(m-(n-r)) \times (n-r)}$.
Based on $\bar H = [\bar H_1 \ \bar H_2] = \begin{bmatrix} \bar H_{11} & \bar H_{12} \\ \bar H_{21} & \bar H_{22} \end{bmatrix}$, Equation (13) can be rewritten as:
$$y(k+1) = \begin{bmatrix} y_1(k+1) \\ y_2(k+1) \end{bmatrix} = \begin{bmatrix} \bar H_{11} & \bar H_{12} \\ \bar H_{21} & \bar H_{22} \end{bmatrix} \begin{bmatrix} X_1(k+1) \\ X_2(k+1) \end{bmatrix} + \begin{bmatrix} v_1(k+1) \\ v_2(k+1) \end{bmatrix} \tag{16}$$
and Equation (16) can be divided into two parts:
$$y_1(k+1) = [\bar H_{11} \ \bar H_{12}] \begin{bmatrix} X_1(k+1) \\ X_2(k+1) \end{bmatrix} + v_1(k+1) \tag{17}$$
$$y_2(k+1) = [\bar H_{21} \ \bar H_{22}] \begin{bmatrix} X_1(k+1) \\ X_2(k+1) \end{bmatrix} + v_2(k+1) \tag{18}$$
Theorem 2.
Matrix $\bar H_{12} \in \mathbb{R}^{(n-r) \times (n-r)}$ is invertible.
Proof. 
According to Equation (15), $\mathrm{rank}\begin{bmatrix} \Sigma & O \\ O & O \\ \bar H_{11} & \bar H_{12} \end{bmatrix} = n$, where $\Sigma \in \mathbb{R}^{r \times r}$ is an invertible diagonal matrix. Thus, elementary row transformations (using $\Sigma$ to eliminate $\bar H_{11}$) can be performed:
$$\begin{bmatrix} \Sigma & O_{r \times (n-r)} \\ O_{(s-r) \times r} & O_{(s-r) \times (n-r)} \\ \bar H_{11} & \bar H_{12} \end{bmatrix} \to \begin{bmatrix} \Sigma & O_{r \times (n-r)} \\ O_{(s-r) \times r} & O_{(s-r) \times (n-r)} \\ O_{(n-r) \times r} & \bar H_{12} \end{bmatrix}$$
Therefore, $\mathrm{rank}\begin{bmatrix} \Sigma & O \\ O & O \\ O & \bar H_{12} \end{bmatrix} = n$, and since $\mathrm{rank}\,\Sigma = r$, we conclude that $\mathrm{rank}\,\bar H_{12} = n - r$.
Therefore, matrix $\bar H_{12} \in \mathbb{R}^{(n-r) \times (n-r)}$ is invertible. □
Theorem 3.
In matrix $[O_{(n-r) \times r} \ \bar H_{12}]$, there exist at least $n-s$ row vectors, denoted $[O_{(n-s) \times r} \ \bar H_b]$, such that $\mathrm{rank}\begin{bmatrix} \bar A_{21} & \bar A_{22} \\ O_{(n-s) \times r} & \bar H_b \end{bmatrix} = n - r$.
Proof. 
According to Theorem 2, $\mathrm{rank}[O \ \bar H_{12}] = \mathrm{rank}[\bar H_{11} \ \bar H_{12}] = n - r$. According to Theorem 1, $\mathrm{rank}\begin{bmatrix} \bar A_{21} & \bar A_{22} \\ O & \bar H_{12} \end{bmatrix} = \mathrm{rank}\begin{bmatrix} \bar A_{21} & \bar A_{22} \\ \bar H_{11} & \bar H_{12} \end{bmatrix} \ge n - r$.
An equivalent statement of Theorem 3 is: at least $n-s$ row vectors of $[O_{(n-r) \times r} \ \bar H_{12}]$ cannot be linearly represented by the rows of $[\bar A_{21} \ \bar A_{22}]$. Suppose, for contradiction, that at most $(n-r) - l < n-s$ (with $l > s-r$) rows of $[O \ \bar H_{12}]$ cannot be so represented; equivalently, at least $l$ rows of $[O \ \bar H_{12}]$ can be linearly represented by the rows of $[\bar A_{21} \ \bar A_{22}]$. Since $\mathrm{rank}[\bar A_{21} \ \bar A_{22}] = s - r$, we would then have:
$$\mathrm{rank}[O_{(n-r) \times r} \ \bar H_{12}] \le [(n-r) - l] + (s-r) < [(n-r) - (s-r)] + (s-r) = n - r \tag{19}$$
Equation (19) contradicts $\mathrm{rank}[O_{(n-r) \times r} \ \bar H_{12}] = n - r$. Therefore, at least $n-s$ rows of $[O_{(n-r) \times r} \ \bar H_{12}]$, denoted $[O_{(n-s) \times r} \ \bar H_b]$, cannot be linearly represented by the rows of $[\bar A_{21} \ \bar A_{22}]$, so that:
$$\mathrm{rank}\begin{bmatrix} \bar A_{21} & \bar A_{22} \\ O_{(n-s) \times r} & \bar H_b \end{bmatrix} = \mathrm{rank}[\bar A_{21} \ \bar A_{22}] + \mathrm{rank}[O_{(n-s) \times r} \ \bar H_b] = (s-r) + (n-s) = n - r \tag{20}$$
 □
Without loss of generality, $[O_{(n-s) \times r} \ \bar H_b]$ comes from the first $n-s$ rows of $[O_{(n-r) \times r} \ \bar H_{12}]$. Correspondingly, the first $n-s$ rows of $[\bar H_{11} \ \bar H_{12}]$ are denoted $[\bar H_a \ \bar H_b]$, $[\bar H_a \ \bar H_b] \in \mathbb{R}^{(n-s) \times n}$. Based on Theorem 3, we can infer:
$$\mathrm{rank}\begin{bmatrix} \bar A_{21} & \bar A_{22} \\ \bar H_a & \bar H_b \end{bmatrix} = n - r$$
Matrix $[\bar H_a \ \bar H_b]$ is a part of $[\bar H_{11} \ \bar H_{12}]$; thus, let $[\bar H_{11} \ \bar H_{12}] = \begin{bmatrix} \bar H_a & \bar H_b \\ \bar H_c & \bar H_d \end{bmatrix}$, $\bar H_a \in \mathbb{R}^{(n-s) \times r}$, $\bar H_b \in \mathbb{R}^{(n-s) \times (n-r)}$, $\bar H_c \in \mathbb{R}^{(s-r) \times r}$, $\bar H_d \in \mathbb{R}^{(s-r) \times (n-r)}$. Based on this partition, Equation (17) can be rewritten as:
$$y_1(k+1) = \begin{bmatrix} y_{ab}(k+1) \\ y_{cd}(k+1) \end{bmatrix} = \begin{bmatrix} \bar H_a & \bar H_b \\ \bar H_c & \bar H_d \end{bmatrix} \begin{bmatrix} X_1(k+1) \\ X_2(k+1) \end{bmatrix} + \begin{bmatrix} v_{ab}(k+1) \\ v_{cd}(k+1) \end{bmatrix} \tag{21}$$
and Equation (21) can be divided into two parts:
$$y_{ab}(k+1) = [\bar H_a \ \bar H_b] \begin{bmatrix} X_1(k+1) \\ X_2(k+1) \end{bmatrix} + v_{ab}(k+1) \tag{22}$$
$$y_{cd}(k+1) = [\bar H_c \ \bar H_d] \begin{bmatrix} X_1(k+1) \\ X_2(k+1) \end{bmatrix} + v_{cd}(k+1) \tag{23}$$
Substituting Equation (12) into Equation (22) and solving for $\bar H_b X_2(k+1)$:
$$\bar H_b X_2(k+1) = y_{ab}(k+1) - v_{ab}(k+1) - \bar H_a \Sigma^{-1}\left( \bar A_{11} X_1(k) + \bar A_{12} X_2(k) + U_1 w(k) \right) \tag{24}$$
Combining Equations (3) and (24):
$$\begin{bmatrix} \Sigma & O_{r \times (n-r)} \\ O_{(s-r) \times r} & O_{(s-r) \times (n-r)} \\ O_{(n-s) \times r} & \bar H_b \end{bmatrix} \begin{bmatrix} X_1(k+1) \\ X_2(k+1) \end{bmatrix} = \begin{bmatrix} \bar A_{11} & \bar A_{12} \\ \bar A_{21} & \bar A_{22} \\ -\bar H_a \Sigma^{-1} \bar A_{11} & -\bar H_a \Sigma^{-1} \bar A_{12} \end{bmatrix} \begin{bmatrix} X_1(k) \\ X_2(k) \end{bmatrix} + \begin{bmatrix} U w(k) \\ -v_{ab}(k+1) - \bar H_a \Sigma^{-1} U_1 w(k) \end{bmatrix} + \begin{bmatrix} O_{s \times 1} \\ y_{ab}(k+1) \end{bmatrix} \tag{25}$$
where $\begin{bmatrix} \Sigma & O \\ O & O \\ O & \bar H_b \end{bmatrix} \in \mathbb{R}^{n \times n}$ and $\mathrm{rank}\begin{bmatrix} \Sigma & O \\ O & O \\ O & \bar H_b \end{bmatrix} = \mathrm{rank}\,\Sigma + \mathrm{rank}\,\bar H_b = r + (n-s)$. Let $\begin{bmatrix} \Sigma & O \\ O & O \\ O & \bar H_b \end{bmatrix} = \Omega$ and $\begin{bmatrix} \bar A_{11} & \bar A_{12} \\ \bar A_{21} & \bar A_{22} \\ -\bar H_a \Sigma^{-1} \bar A_{11} & -\bar H_a \Sigma^{-1} \bar A_{12} \end{bmatrix} = \Gamma$.
Theorem 4.
Equation (25) is regular; in other words, there exists a complex number $z \in \mathbb{C}$ such that $\mathrm{rank}(z\Omega - \Gamma) = n$.
Proof. 
Since $\mathrm{rank}\,\Omega = r + (n-s)$, the matrix $\begin{bmatrix} \Sigma & O_{r \times (n-r)} \\ O_{(n-s) \times r} & \bar H_b \end{bmatrix}$ has full row rank. Therefore, regardless of the value of the corresponding rows of $\Gamma$, there exists $z \in \mathbb{C}$ such that:
$$\mathrm{rank}\left( z\begin{bmatrix} \Sigma & O_{r \times (n-r)} \\ O_{(n-s) \times r} & \bar H_b \end{bmatrix} - \begin{bmatrix} \bar A_{11} & \bar A_{12} \\ -\bar H_a \Sigma^{-1} \bar A_{11} & -\bar H_a \Sigma^{-1} \bar A_{12} \end{bmatrix} \right) = r + (n-s)$$
According to Theorem 3, $\mathrm{rank}\begin{bmatrix} \bar A_{21} & \bar A_{22} \\ O_{(n-s) \times r} & \bar H_b \end{bmatrix} = n - r$, so there exists $z \in \mathbb{C}$ such that:
$$\mathrm{rank}\left( z\begin{bmatrix} O_{(s-r) \times r} & O_{(s-r) \times (n-r)} \\ O_{(n-s) \times r} & \bar H_b \end{bmatrix} - \begin{bmatrix} \bar A_{21} & \bar A_{22} \\ -\bar H_a \Sigma^{-1} \bar A_{11} & -\bar H_a \Sigma^{-1} \bar A_{12} \end{bmatrix} \right) = n - r$$
According to Equation (11):
$$\mathrm{rank}\left( z\begin{bmatrix} \Sigma & O_{r \times (n-r)} \\ O_{(s-r) \times r} & O_{(s-r) \times (n-r)} \end{bmatrix} - \begin{bmatrix} \bar A_{11} & \bar A_{12} \\ \bar A_{21} & \bar A_{22} \end{bmatrix} \right) = s$$
In summary, each of the three row blocks of $z\Omega - \Gamma$ appears in exactly two of the three rank identities above. Therefore, regardless of the value of $\Gamma$, there exists $z \in \mathbb{C}$ such that:
$$\mathrm{rank}(z\Omega - \Gamma) = \frac{[r + (n-s)] + (n-r) + s}{2} = n$$
Therefore, $\mathrm{rank}(z\Omega - \Gamma) = n$. □
Combining Equations (18) and (23), we have:
$$\begin{bmatrix} y_{cd}(k+1) \\ y_2(k+1) \end{bmatrix} = \begin{bmatrix} \bar H_c & \bar H_d \\ \bar H_{21} & \bar H_{22} \end{bmatrix} \begin{bmatrix} X_1(k+1) \\ X_2(k+1) \end{bmatrix} + \begin{bmatrix} v_{cd}(k+1) \\ v_2(k+1) \end{bmatrix} \tag{26}$$
Let $Y(k+1) = \begin{bmatrix} y_{cd}(k+1) \\ y_2(k+1) \end{bmatrix}$, $\bar w(k) = \begin{bmatrix} U w(k) \\ -v_{ab}(k+1) - \bar H_a \Sigma^{-1} U_1 w(k) \end{bmatrix}$, $C = \begin{bmatrix} \bar H_c & \bar H_d \\ \bar H_{21} & \bar H_{22} \end{bmatrix}$, $\bar v = \begin{bmatrix} v_{cd} \\ v_2 \end{bmatrix}$, $\bar y(k+1) = \begin{bmatrix} O_{s \times 1} \\ y_{ab}(k+1) \end{bmatrix}$. Equations (25) and (26) can then be rewritten as, respectively:
$$\Omega X(k+1) = \Gamma X(k) + \bar y(k+1) + \bar w(k) \tag{27}$$
$$Y(k+1) = C X(k+1) + \bar v(k+1) \tag{28}$$
Theorem 5.
$\mathrm{rank}\begin{bmatrix} \Omega \\ C \end{bmatrix} = n$.
Proof. 
The following equation holds (replacing the rows $[O_{(n-s) \times r} \ \bar H_b]$ by $[\bar H_a \ \bar H_b]$ amounts to adding $\bar H_a \Sigma^{-1}$ times the first block row, which does not change the rank):
$$\mathrm{rank}\begin{bmatrix} \Sigma & O_{r \times (n-r)} \\ O_{(s-r) \times r} & O_{(s-r) \times (n-r)} \\ O_{(n-s) \times r} & \bar H_b \end{bmatrix} = \mathrm{rank}\begin{bmatrix} \Sigma & O_{r \times (n-r)} \\ O_{(s-r) \times r} & O_{(s-r) \times (n-r)} \\ \bar H_a & \bar H_b \end{bmatrix}$$
Therefore:
$$\mathrm{rank}\begin{bmatrix} \Omega \\ C \end{bmatrix} = \mathrm{rank}\begin{bmatrix} \Sigma & O \\ O & O \\ O_{(n-s) \times r} & \bar H_b \\ \bar H_c & \bar H_d \\ \bar H_{21} & \bar H_{22} \end{bmatrix} = \mathrm{rank}\begin{bmatrix} \Sigma & O \\ O & O \\ \bar H_a & \bar H_b \\ \bar H_c & \bar H_d \\ \bar H_{21} & \bar H_{22} \end{bmatrix} = \mathrm{rank}\begin{bmatrix} \Sigma & O \\ O & O \\ \bar H_1 & \bar H_2 \end{bmatrix} = n$$
 □
According to Theorems 4 and 5, the system composed of Equations (27) and (28) is observable and regular.
According to the second type of restricted equivalent transformation, there exist invertible matrices $G \in \mathbb{R}^{n \times n}$, $D \in \mathbb{R}^{n \times n}$ such that:
$$G \Omega D = \begin{bmatrix} I_{q \times q} & O_{q \times (n-q)} \\ O_{(n-q) \times q} & N \end{bmatrix}, \quad G \Gamma D = \begin{bmatrix} B & O_{q \times (n-q)} \\ O_{(n-q) \times q} & I_{(n-q) \times (n-q)} \end{bmatrix} \tag{29}$$
where $B \in \mathbb{R}^{q \times q}$, $q = \deg(\det(z\Omega - \Gamma)) \le \mathrm{rank}\,\Omega$, and $N \in \mathbb{R}^{(n-q) \times (n-q)}$ is a nilpotent matrix with degree $h$. Let $X = D\bar X = D\begin{bmatrix} \bar X_1 \\ \bar X_2 \end{bmatrix}$, $\bar X_1 \in \mathbb{R}^q$, $\bar X_2 \in \mathbb{R}^{n-q}$. Equation (27) can be transformed into:
$$\begin{bmatrix} I_{q \times q} & O_{q \times (n-q)} \\ O_{(n-q) \times q} & N \end{bmatrix} \begin{bmatrix} \bar X_1(k+1) \\ \bar X_2(k+1) \end{bmatrix} = \begin{bmatrix} B & O_{q \times (n-q)} \\ O_{(n-q) \times q} & I_{(n-q) \times (n-q)} \end{bmatrix} \begin{bmatrix} \bar X_1(k) \\ \bar X_2(k) \end{bmatrix} + G \bar y(k+1) + G \bar w(k) \tag{30}$$
Let $G = \begin{bmatrix} G_1 \\ G_2 \end{bmatrix}$, $CD = \bar C = [\bar C_1 \ \bar C_2]$. Equation (30) can be divided into two parts:
$$\bar X_1(k+1) = B \bar X_1(k) + G_1 \bar y(k+1) + G_1 \bar w(k) \tag{31}$$
$$N \bar X_2(k+1) = \bar X_2(k) + G_2 \bar y(k+1) + G_2 \bar w(k) \tag{32}$$
From Equation (32), iterating forward and using $N^h = 0$, we have:
$$\bar X_2(k+1) = -\sum_{i=1}^{h} N^{i-1} G_2 \left( \bar y(k+1+i) + \bar w(k+i) \right) \tag{33}$$
Substituting $X = D\bar X$ and $CD = \bar C = [\bar C_1 \ \bar C_2]$ into Equation (28):
$$Y(k+1) = \bar C \bar X(k+1) + \bar v(k+1) \tag{34}$$
Equation (33) contains terms that are unknown at the current time, $-\sum_{i=1}^{h} N^{i-1} G_2 \bar y(k+1+i)$. This prevents us from estimating the system state based only on current information. Therefore, further transformations are needed to convert the system into a known system that does not include unknown terms.

3.3. Transforming the System into a Known System

In Section 3.2, after performing a constrained equivalence transformation on the system, we obtained the system composed of Equations (3) and (13). Then, we combined a part of the observation equation with the state equation and performed a second constrained equivalence transformation, resulting in the system composed of Equations (31), (33) and (34). In this subsection, we transform the system into a known system through a series of transformations that eliminate the unknown terms.
Performing a simple row transformation on Equation (25):
$$\begin{bmatrix} O_{(n-s) \times r} & \bar H_b \\ \Sigma & O_{r \times (n-r)} \\ O_{(s-r) \times r} & O_{(s-r) \times (n-r)} \end{bmatrix} \begin{bmatrix} X_1(k+1) \\ X_2(k+1) \end{bmatrix} = \begin{bmatrix} -\bar H_a \Sigma^{-1} \bar A_{11} & -\bar H_a \Sigma^{-1} \bar A_{12} \\ \bar A_{11} & \bar A_{12} \\ \bar A_{21} & \bar A_{22} \end{bmatrix} \begin{bmatrix} X_1(k) \\ X_2(k) \end{bmatrix} + \begin{bmatrix} -v_{ab}(k+1) - \bar H_a \Sigma^{-1} U_1 w(k) \\ U w(k) \end{bmatrix} + \begin{bmatrix} y_{ab}(k+1) \\ O_{s \times 1} \end{bmatrix} \tag{35}$$
In Equation (35), the matrices $\begin{bmatrix} \Sigma & O \\ O & O \end{bmatrix}$ and $\begin{bmatrix} \bar A_{11} & \bar A_{12} \\ \bar A_{21} & \bar A_{22} \end{bmatrix}$ are both rectangular matrices with more columns than rows. We can rewrite both matrices as a combination of a square matrix and several column vectors:
$$\begin{bmatrix} \Sigma & O_{r \times (n-r)} \\ O_{(s-r) \times r} & O_{(s-r) \times (n-r)} \end{bmatrix} = \left[ \begin{array}{cc|c} \Sigma & O_{r \times (s-r)} & O_{r \times (n-s)} \\ O_{(s-r) \times r} & O_{(s-r) \times (s-r)} & O_{(s-r) \times (n-s)} \end{array} \right], \quad \begin{bmatrix} \bar A_{11} & \bar A_{12} \\ \bar A_{21} & \bar A_{22} \end{bmatrix} = \left[ \begin{array}{cc|c} \bar A_{11} & \bar A_a & \bar A_c \\ \bar A_{21} & \bar A_b & \bar A_d \end{array} \right] \tag{36}$$
If we perform the second type of constrained equivalence transformation on the matrix pair $\left( \begin{bmatrix} \Sigma & O_{r \times (s-r)} \\ O_{(s-r) \times r} & O_{(s-r) \times (s-r)} \end{bmatrix}, \begin{bmatrix} \bar A_{11} & \bar A_a \\ \bar A_{21} & \bar A_b \end{bmatrix} \right)$, the transformation matrices can be denoted $G_1 \in \mathbb{R}^{s \times s}$ and $D_1 \in \mathbb{R}^{s \times s}$, respectively. Let $\bar D_1 = \begin{bmatrix} D_1 & O_{s \times (n-s)} \\ O_{(n-s) \times s} & I_{(n-s) \times (n-s)} \end{bmatrix}$; the two matrices in Equation (36) can be transformed into:
$$G_1 \left[ \begin{array}{cc|c} \Sigma & O_{r \times (s-r)} & O_{r \times (n-s)} \\ O_{(s-r) \times r} & O_{(s-r) \times (s-r)} & O_{(s-r) \times (n-s)} \end{array} \right] \bar D_1 = \left[ \begin{array}{cc|c} I_{p_1 \times p_1} & O_{p_1 \times (s-p_1)} & O_{p_1 \times (n-s)} \\ O_{(s-p_1) \times p_1} & N_1 & O_{(s-p_1) \times (n-s)} \end{array} \right] \tag{37}$$
$$G_1 \left[ \begin{array}{cc|c} \bar A_{11} & \bar A_a & \bar A_c \\ \bar A_{21} & \bar A_b & \bar A_d \end{array} \right] \bar D_1 = \left[ \begin{array}{cc|c} B_1 & O_{p_1 \times (s-p_1)} & A_{11} \\ O_{(s-p_1) \times p_1} & I_{(s-p_1) \times (s-p_1)} & A_{12} \end{array} \right]$$
where $p_1 = \deg\det\left( z\begin{bmatrix} \Sigma & O_{r \times (s-r)} \\ O_{(s-r) \times r} & O_{(s-r) \times (s-r)} \end{bmatrix} - \begin{bmatrix} \bar A_{11} & \bar A_a \\ \bar A_{21} & \bar A_b \end{bmatrix} \right)$, $\begin{bmatrix} A_{11} \\ A_{12} \end{bmatrix} = G_1 \begin{bmatrix} \bar A_c \\ \bar A_d \end{bmatrix}$, $N_1 \in \mathbb{R}^{(s-p_1) \times (s-p_1)}$, $B_1 \in \mathbb{R}^{p_1 \times p_1}$. $N_1$ is a nilpotent matrix with degree $g_1$. Let:
$$\left[ \begin{array}{cc|c} I_{p_1 \times p_1} & O_{p_1 \times (s-p_1)} & O_{p_1 \times (n-s)} \\ O_{(s-p_1) \times p_1} & N_1 & O_{(s-p_1) \times (n-s)} \end{array} \right] = \bar \Omega_1$$
$$\left[ \begin{array}{cc|c} B_1 & O_{p_1 \times (s-p_1)} & A_{11} \\ O_{(s-p_1) \times p_1} & I_{(s-p_1) \times (s-p_1)} & A_{12} \end{array} \right] = \bar \Gamma_1$$
For the nilpotent matrix $N_1$, all of its eigenvalues are 0. Therefore, if $N_1$ is transformed into a Jordan matrix, all diagonal elements of the Jordan matrix are 0. Consequently, there exists an invertible matrix $P_1 \in \mathbb{R}^{(s-p_1) \times (s-p_1)}$ such that $N_1$ can be transformed into a Jordan matrix, denoted $\bar N_1$:
$$\bar N_1 = P_1 N_1 P_1^{-1} = \begin{bmatrix} 0 & 1 & & \\ & \ddots & \ddots & \\ & & 0 & 1 \\ & & & 0 \end{bmatrix} \tag{38}$$
The degree of matrix $N_1$ is $g_1$; thus, in Equation (38), the first $g_1 - 1$ rows contain the entries equal to 1.
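The relation between a nilpotent Jordan matrix and its degree can be illustrated with a small example. The $3 \times 3$ instance below is made up for illustration: it has zeros on the diagonal, ones on the superdiagonal, all eigenvalues equal to 0, and nilpotency degree 3.

```python
import numpy as np

# nilpotent Jordan block: zeros on the diagonal, ones on the superdiagonal
N = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])

assert np.allclose(np.linalg.eigvals(N), 0.0)     # all eigenvalues are 0

# degree b = min{ j : N^j = 0 }; here N^2 != 0 but N^3 = 0
powers = [np.linalg.matrix_power(N, j) for j in range(4)]
degree = min(j for j, P in enumerate(powers) if np.allclose(P, 0.0))
print("nilpotency degree:", degree)
```

For this matrix the ones appear in the first $g_1 - 1 = 2$ rows, matching the structure described for $\bar N_1$.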
Left-multiplying $\bar \Omega_1$ and $\bar \Gamma_1$ by $\begin{bmatrix} I_{p_1 \times p_1} & O \\ O & P_1 \end{bmatrix}$, and right-multiplying them by $\begin{bmatrix} I_{p_1 \times p_1} & O & O \\ O & P_1^{-1} & O \\ O & O & I_{(n-s) \times (n-s)} \end{bmatrix}$:
$$\begin{bmatrix} I_{p_1 \times p_1} & O \\ O & P_1 \end{bmatrix} \bar \Omega_1 \begin{bmatrix} I_{p_1 \times p_1} & O & O \\ O & P_1^{-1} & O \\ O & O & I \end{bmatrix} = \left[ \begin{array}{cc|c} I_{p_1 \times p_1} & O_{p_1 \times (s-p_1)} & O_{p_1 \times (n-s)} \\ O_{(s-p_1) \times p_1} & \bar N_1 & O_{(s-p_1) \times (n-s)} \end{array} \right] \tag{39}$$
$$\begin{bmatrix} I_{p_1 \times p_1} & O \\ O & P_1 \end{bmatrix} \bar \Gamma_1 \begin{bmatrix} I_{p_1 \times p_1} & O & O \\ O & P_1^{-1} & O \\ O & O & I \end{bmatrix} = \left[ \begin{array}{cc|c} B_1 & O_{p_1 \times (s-p_1)} & A_{11} \\ O_{(s-p_1) \times p_1} & I_{(s-p_1) \times (s-p_1)} & P_1 A_{12} \end{array} \right] \tag{40}$$
Let $P_1 A_{12} = [\bar a_{ij}]$. Expanding $[\bar N_1 \ \ O_{(s-p_1) \times (n-s)}]$ and $[I_{(s-p_1) \times (s-p_1)} \ \ P_1 A_{12}]$, we obtain:
$$[\bar N_1 \ \ O_{(s-p_1) \times (n-s)}] = \begin{bmatrix} 0 & 1 & & & 0 & \cdots & 0 \\ & \ddots & \ddots & & \vdots & & \vdots \\ & & 0 & 1 & 0 & \cdots & 0 \\ & & & 0 & 0 & \cdots & 0 \end{bmatrix} \tag{41}$$
$$[I_{(s-p_1) \times (s-p_1)} \ \ P_1 A_{12}] = \begin{bmatrix} 1 & & & \bar a_{11} & \cdots & \bar a_{1, n-s} \\ & \ddots & & \vdots & & \vdots \\ & & 1 & \bar a_{s-p_1, 1} & \cdots & \bar a_{s-p_1, n-s} \end{bmatrix} \tag{42}$$
In Equation (41), all elements in rows $g_1$ to $s - p_1$ are zeros. Therefore, we perform the same elementary column transformations on the matrix blocks $[\bar N_1 \ \ O_{(s-p_1) \times (n-s)}]$ and $[I_{(s-p_1) \times (s-p_1)} \ \ P_1 A_{12}]$, respectively: a column permutation moves the superdiagonal ones of $\bar N_1$ to the leading columns, and the resulting leading columns are then used to eliminate entries in the remaining columns. Denoting the final results as a combination of a square matrix and several column vectors:
$$[\bar N_1 \ \ O_{(s-p_1) \times (n-s)}] \to \left[ \begin{array}{cc|c} I_{(g_1-1) \times (g_1-1)} & O_{(g_1-1) \times (s-p_1-g_1+1)} & O_{(g_1-1) \times (n-s)} \\ O_{(s-p_1-g_1+1) \times (g_1-1)} & O_{(s-p_1-g_1+1) \times (s-p_1-g_1+1)} & O_{(s-p_1-g_1+1) \times (n-s)} \end{array} \right] \tag{45}$$
$$[I_{(s-p_1) \times (s-p_1)} \ \ P_1 A_{12}] \to [\bar B_1 \ \ \bar A_1] = \left[ \begin{array}{cc|c} \bar B_{11} & \bar B_{12} & \bar A_{11} \\ \bar B_{21} & \bar B_{22} & O_{(s-p_1-g_1+1) \times (n-s)} \end{array} \right] \tag{46}$$
where $\bar B_1 \in \mathbb{R}^{(s-p_1) \times (s-p_1)}$; note that matrix $\bar B_1$ is non-invertible.
If we perform the second type of constrained equivalence transformation on the matrix pair $\left( \begin{bmatrix} I_{(g_1-1) \times (g_1-1)} & O \\ O & O \end{bmatrix}, \bar B_1 \right)$, the transformation matrices can be denoted $G_2 \in \mathbb{R}^{(s-p_1) \times (s-p_1)}$ and $D_2 \in \mathbb{R}^{(s-p_1) \times (s-p_1)}$, respectively. The matrices in Equations (45) and (46) can be transformed as follows:
$$G_2 \left[ \begin{array}{cc|c} I_{(g_1-1) \times (g_1-1)} & O & O_{(g_1-1) \times (n-s)} \\ O & O & O_{(s-p_1-g_1+1) \times (n-s)} \end{array} \right] \begin{bmatrix} D_2 & O_{(s-p_1) \times (n-s)} \\ O_{(n-s) \times (s-p_1)} & I_{(n-s) \times (n-s)} \end{bmatrix} = \left[ \begin{array}{cc|c} I_{p_2 \times p_2} & O_{p_2 \times (s-p_1-p_2)} & O_{p_2 \times (n-s)} \\ O_{(s-p_1-p_2) \times p_2} & N_2 & O_{(s-p_1-p_2) \times (n-s)} \end{array} \right] \tag{47}$$
$$G_2 \left[ \begin{array}{cc|c} \bar B_{11} & \bar B_{12} & \bar A_{11} \\ \bar B_{21} & \bar B_{22} & O_{(s-p_1-g_1+1) \times (n-s)} \end{array} \right] \begin{bmatrix} D_2 & O_{(s-p_1) \times (n-s)} \\ O_{(n-s) \times (s-p_1)} & I_{(n-s) \times (n-s)} \end{bmatrix} = \left[ \begin{array}{cc|c} B_2 & O_{p_2 \times (s-p_1-p_2)} & A_{21} \\ O_{(s-p_1-p_2) \times p_2} & I_{(s-p_1-p_2) \times (s-p_1-p_2)} & A_{22} \end{array} \right] \tag{48}$$
where $p_2 = \deg\det\left( z\begin{bmatrix} I_{(g_1-1) \times (g_1-1)} & O \\ O & O \end{bmatrix} - \bar B_1 \right)$, $\begin{bmatrix} A_{21} \\ A_{22} \end{bmatrix} = G_2 \begin{bmatrix} \bar A_{11} \\ O_{(s-p_1-g_1+1) \times (n-s)} \end{bmatrix}$, $B_2 \in \mathbb{R}^{p_2 \times p_2}$, $N_2 \in \mathbb{R}^{(s-p_1-p_2) \times (s-p_1-p_2)}$. $N_2$ is a nilpotent matrix with degree $g_2$.
The matrix blocks $[N_1 \ \ O_{(s-p_1) \times (n-s)}]$ and $[I_{(s-p_1) \times (s-p_1)} \ \ A_{12}]$ in Equation (37) have thus undergone the following transformations through Equations (37)-(48), where the entire process consists of elementary transformations or multiplication by invertible matrices:
$$[N_1 \ \ O_{(s-p_1) \times (n-s)}] \to \left[ \begin{array}{cc|c} I_{p_2 \times p_2} & O & O_{p_2 \times (n-s)} \\ O & N_2 & O_{(s-p_1-p_2) \times (n-s)} \end{array} \right], \quad [I_{(s-p_1) \times (s-p_1)} \ \ A_{12}] \to \left[ \begin{array}{cc|c} B_2 & O & A_{21} \\ O & I_{(s-p_1-p_2) \times (s-p_1-p_2)} & A_{22} \end{array} \right] \tag{49}$$
Applying a similar transformation process as in Equations (37)-(48) to the matrix blocks $[N_2 \ \ O_{(s-p_1-p_2) \times (n-s)}]$ and $[I_{(s-p_1-p_2) \times (s-p_1-p_2)} \ \ A_{22}]$ in Equation (49), we obtain an expression similar to Equation (49):
$$[N_2 \ \ O_{(s-p_1-p_2) \times (n-s)}] \to \left[ \begin{array}{cc|c} I_{p_3 \times p_3} & O & O_{p_3 \times (n-s)} \\ O & N_3 & O_{(s-p_1-p_2-p_3) \times (n-s)} \end{array} \right], \quad [I_{(s-p_1-p_2) \times (s-p_1-p_2)} \ \ A_{22}] \to \left[ \begin{array}{cc|c} B_3 & O & A_{31} \\ O & I_{(s-p_1-p_2-p_3) \times (s-p_1-p_2-p_3)} & A_{32} \end{array} \right] \tag{50}$$
We refer to Equation (49) as the first transformation and Equation (50) as the second transformation, then the result of the ith transformation is:
$$\begin{bmatrix}N_i & O_{(s-\sum_{j=1}^{i}p_j)\times(n-s)}\end{bmatrix}\;\rightarrow\;\begin{bmatrix}I_{p_{i+1}\times p_{i+1}} & O_{p_{i+1}\times(s-\sum_{j=1}^{i+1}p_j)} & O_{p_{i+1}\times(n-s)}\\O_{(s-\sum_{j=1}^{i+1}p_j)\times p_{i+1}} & N_{i+1} & O_{(s-\sum_{j=1}^{i+1}p_j)\times(n-s)}\end{bmatrix}$$
$$\begin{bmatrix}I_{(s-\sum_{j=1}^{i}p_j)\times(s-\sum_{j=1}^{i}p_j)} & A_{i,2}\end{bmatrix}\;\rightarrow\;\begin{bmatrix}B_{i+1} & O_{p_{i+1}\times(s-\sum_{j=1}^{i+1}p_j)} & A_{i+1,1}\\O_{(s-\sum_{j=1}^{i+1}p_j)\times p_{i+1}} & I_{(s-\sum_{j=1}^{i+1}p_j)\times(s-\sum_{j=1}^{i+1}p_j)} & A_{i+1,2}\end{bmatrix}$$
where $N_{i+1}\in\mathbb{R}^{(s-\sum_{j=1}^{i+1}p_j)\times(s-\sum_{j=1}^{i+1}p_j)}$ and $B_{i+1}\in\mathbb{R}^{p_{i+1}\times p_{i+1}}$. When the $t$th transformation results in $N_{t+1}=O_{(s-\sum_{j=1}^{t+1}p_j)\times(s-\sum_{j=1}^{t+1}p_j)}$ or $A_{t+1,2}=O_{(s-\sum_{j=1}^{t+1}p_j)\times(n-s)}$, there are two possible situations.
If $A_{t+1,2}=O_{(s-\sum_{j=1}^{t+1}p_j)\times(n-s)}$, the nilpotency degree of matrix $N_{t+1}$ at this point is $g_{t+1}$, and Equation (36) finally transforms into:
$$\begin{bmatrix}\Sigma & O_{r\times(s-r)} & O_{r\times(n-s)}\\O_{(s-r)\times r} & O_{(s-r)\times(s-r)} & O_{(s-r)\times(n-s)}\end{bmatrix}\;\rightarrow\;\begin{bmatrix}I_{p_1\times p_1} & O_{p_1\times(s-p_1)} & O_{p_1\times(n-s)}\\O_{(s-p_1)\times p_1} & N_1 & O_{(s-p_1)\times(n-s)}\end{bmatrix}\;\rightarrow\;\cdots\;\rightarrow\;\begin{bmatrix}I_{\sum_{j=1}^{t+1}p_j\times\sum_{j=1}^{t+1}p_j} & O_{\sum_{j=1}^{t+1}p_j\times(s-\sum_{j=1}^{t+1}p_j)} & O_{\sum_{j=1}^{t+1}p_j\times(n-s)}\\O_{(s-\sum_{j=1}^{t+1}p_j)\times\sum_{j=1}^{t+1}p_j} & N_{t+1} & O_{(s-\sum_{j=1}^{t+1}p_j)\times(n-s)}\end{bmatrix}$$
$$\begin{bmatrix}\bar A_{11} & \bar A_a & \bar A_b\\\bar A_{21} & \bar A_c & \bar A_d\end{bmatrix}\;\rightarrow\;\begin{bmatrix}B_1 & O_{p_1\times(s-p_1)} & A_{11}\\O_{(s-p_1)\times p_1} & I_{(s-p_1)\times(s-p_1)} & A_{12}\end{bmatrix}\;\rightarrow\;\cdots\;\rightarrow\;\begin{bmatrix}\mathrm{diag}\{B_1,B_2,\dots,B_{t+1}\} & O_{\sum_{j=1}^{t+1}p_j\times(s-\sum_{j=1}^{t+1}p_j)} & A_{t+1,1}\\O_{(s-\sum_{j=1}^{t+1}p_j)\times\sum_{j=1}^{t+1}p_j} & I_{(s-\sum_{j=1}^{t+1}p_j)\times(s-\sum_{j=1}^{t+1}p_j)} & O_{(s-\sum_{j=1}^{t+1}p_j)\times(n-s)}\end{bmatrix}$$
If $N_{t+1}=O_{(s-\sum_{j=1}^{t+1}p_j)\times(s-\sum_{j=1}^{t+1}p_j)}$, Equation (36) finally transforms into:
$$\begin{bmatrix}I_{p_1\times p_1} & O_{p_1\times(s-p_1)} & O_{p_1\times(n-s)}\\O_{(s-p_1)\times p_1} & N_1 & O_{(s-p_1)\times(n-s)}\end{bmatrix}\;\rightarrow\;\cdots\;\rightarrow\;\begin{bmatrix}I_{\sum_{j=1}^{t+1}p_j\times\sum_{j=1}^{t+1}p_j} & O_{\sum_{j=1}^{t+1}p_j\times(s-\sum_{j=1}^{t+1}p_j)} & O_{\sum_{j=1}^{t+1}p_j\times(n-s)}\\O_{(s-\sum_{j=1}^{t+1}p_j)\times\sum_{j=1}^{t+1}p_j} & O_{(s-\sum_{j=1}^{t+1}p_j)\times(s-\sum_{j=1}^{t+1}p_j)} & O_{(s-\sum_{j=1}^{t+1}p_j)\times(n-s)}\end{bmatrix}$$
$$\begin{bmatrix}B_1 & O_{p_1\times(s-p_1)} & A_{11}\\O_{(s-p_1)\times p_1} & I_{(s-p_1)\times(s-p_1)} & A_{12}\end{bmatrix}\;\rightarrow\;\cdots\;\rightarrow\;\begin{bmatrix}\mathrm{diag}\{B_1,B_2,\dots,B_{t+1}\} & O_{\sum_{j=1}^{t+1}p_j\times(s-\sum_{j=1}^{t+1}p_j)} & A_{t+1,1}\\O_{(s-\sum_{j=1}^{t+1}p_j)\times\sum_{j=1}^{t+1}p_j} & I_{(s-\sum_{j=1}^{t+1}p_j)\times(s-\sum_{j=1}^{t+1}p_j)} & A_{t+1,2}\end{bmatrix}$$
If the transformation takes the form of Equation (52), the system is non-causal; if it takes the form of Equation (53), the system is causal. Let $\mathrm{diag}\{B_1,B_2,\dots,B_{t+1}\}=D_a$, with $D_a\in\mathbb{R}^{\sum_{j=1}^{t+1}p_j\times\sum_{j=1}^{t+1}p_j}$.
The matrix blocks $\begin{bmatrix}N_{t+1} & O_{(s-\sum_{j=1}^{t+1}p_j)\times(n-s)}\end{bmatrix}$ and $\begin{bmatrix}I_{(s-\sum_{j=1}^{t+1}p_j)\times(s-\sum_{j=1}^{t+1}p_j)} & O_{(s-\sum_{j=1}^{t+1}p_j)\times(n-s)}\end{bmatrix}$ in Equation (52) can be transformed similarly to Equation (38); we denote the transformation matrix by $P_{t+1}\in\mathbb{R}^{(s-\sum_{j=1}^{t+1}p_j)\times(s-\sum_{j=1}^{t+1}p_j)}$. The result of Equation (52) can be further transformed as follows:
$$\begin{bmatrix}I_{\sum_{j=1}^{t+1}p_j\times\sum_{j=1}^{t+1}p_j} & O\\O & P_{t+1}\end{bmatrix}\begin{bmatrix}I_{\sum_{j=1}^{t+1}p_j\times\sum_{j=1}^{t+1}p_j} & O & O\\O & N_{t+1} & O\end{bmatrix}\begin{bmatrix}I_{\sum_{j=1}^{t+1}p_j\times\sum_{j=1}^{t+1}p_j} & O & O\\O & P_{t+1}^{-1} & O\\O & O & I_{(n-s)\times(n-s)}\end{bmatrix}=\begin{bmatrix}I_{\sum_{j=1}^{t+1}p_j\times\sum_{j=1}^{t+1}p_j} & O & O\\O & \bar N_{t+1} & O\end{bmatrix}$$
$$\begin{bmatrix}I_{\sum_{j=1}^{t+1}p_j\times\sum_{j=1}^{t+1}p_j} & O\\O & P_{t+1}\end{bmatrix}\begin{bmatrix}D_a & O & A_{t+1,1}\\O & I_{(s-\sum_{j=1}^{t+1}p_j)\times(s-\sum_{j=1}^{t+1}p_j)} & O\end{bmatrix}\begin{bmatrix}I_{\sum_{j=1}^{t+1}p_j\times\sum_{j=1}^{t+1}p_j} & O & O\\O & P_{t+1}^{-1} & O\\O & O & I_{(n-s)\times(n-s)}\end{bmatrix}=\begin{bmatrix}D_a & O & A_{t+1,1}\\O & I_{(s-\sum_{j=1}^{t+1}p_j)\times(s-\sum_{j=1}^{t+1}p_j)} & O\end{bmatrix}$$
where matrix $\bar N_{t+1}$ is similar in form to Equation (38) and can be written as follows:
$$\bar N_{t+1}=P_{t+1}N_{t+1}P_{t+1}^{-1}=\begin{bmatrix}\bar N_a & O_{(g_{t+1}-1)\times(s-\sum_{j=1}^{t+1}p_j-g_{t+1}+1)}\\O_{(s-\sum_{j=1}^{t+1}p_j-g_{t+1}+1)\times(g_{t+1}-1)} & O_{(s-\sum_{j=1}^{t+1}p_j-g_{t+1}+1)\times(s-\sum_{j=1}^{t+1}p_j-g_{t+1}+1)}\end{bmatrix},\qquad \bar N_a=\begin{bmatrix}0 & 1 & & \\ & \ddots & \ddots & \\ & & 0 & 1\\ & & & 0\end{bmatrix}$$
In Equation (55), only the first $g_{t+1}-1$ row vectors of matrix $\bar N_{t+1}$ contain the entry 1.
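For intuition, $\bar N_a$ above is an upper-shift matrix. The short sketch below (illustrative only: the size 4 and the helper name `shift_matrix` are ours, not the paper's) shows why powers of such a matrix vanish, which is what makes the noise sums appearing later in Equation (66) finite:

```python
import numpy as np

def shift_matrix(g):
    """Return the g x g upper-shift matrix: ones on the first
    superdiagonal, zeros elsewhere (the form of N_bar_a in Eq. (55))."""
    return np.eye(g, k=1)

N = shift_matrix(4)
# Only the first g - 1 = 3 rows contain a 1, matching the remark above.
assert np.count_nonzero(N.sum(axis=1)) == 3
# N is nilpotent: N^3 is still nonzero, but N^4 is exactly zero.
print(np.any(np.linalg.matrix_power(N, 3)))  # True
print(np.any(np.linalg.matrix_power(N, 4)))  # False
```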
Taking the non-causal system represented by Equations (52) and (54) as an example: since the entire transformation process represented by Equations (52) and (54) consists of elementary transformations or multiplications by invertible matrices, the whole process can be viewed as left-multiplying by one invertible matrix and right-multiplying by another. Denote the left multiplication matrix by $F\in\mathbb{R}^{s\times s}$ and the right multiplication matrix by $T\in\mathbb{R}^{n\times n}$, and let $\begin{bmatrix}X_1\\X_2\end{bmatrix}=T\chi=T\begin{bmatrix}\chi_1\\\chi_2\\\chi_3\end{bmatrix}$. Then, Equation (35) can be transformed into:
$$\begin{bmatrix}I_{(n-s)\times(n-s)} & O_{(n-s)\times s}\\O_{s\times(n-s)} & F\end{bmatrix}\begin{bmatrix}O_{(n-s)\times r} & \bar H_b\\\Sigma & O_{r\times(n-r)}\\O_{(s-r)\times r} & O_{(s-r)\times(n-r)}\end{bmatrix}T\begin{bmatrix}\chi_1(k+1)\\\chi_2(k+1)\\\chi_3(k+1)\end{bmatrix}=\begin{bmatrix}I_{(n-s)\times(n-s)} & O_{(n-s)\times s}\\O_{s\times(n-s)} & F\end{bmatrix}\begin{bmatrix}\bar H_a\Sigma^{-1}\bar A_{11} & \bar H_a\Sigma^{-1}\bar A_{12}\\\bar A_{11} & \bar A_{12}\\\bar A_{21} & \bar A_{22}\end{bmatrix}T\begin{bmatrix}\chi_1(k)\\\chi_2(k)\\\chi_3(k)\end{bmatrix}+\begin{bmatrix}I_{(n-s)\times(n-s)} & O_{(n-s)\times s}\\O_{s\times(n-s)} & F\end{bmatrix}\begin{bmatrix}v_{ab}(k+1)+\bar H_a\Sigma^{-1}U_1w(k)\\Uw(k)\end{bmatrix}+\begin{bmatrix}I_{(n-s)\times(n-s)} & O_{(n-s)\times s}\\O_{s\times(n-s)} & F\end{bmatrix}\begin{bmatrix}y_{ab}(k+1)\\O_{s\times1}\end{bmatrix}$$
Let $\begin{bmatrix}O_{(n-s)\times r} & \bar H_b\end{bmatrix}T=\begin{bmatrix}\bar H_{b1} & \bar H_{b2} & \bar H_{b3}\end{bmatrix}$, where $\bar H_{b1}\in\mathbb{R}^{(n-s)\times\sum_{j=1}^{t+1}p_j}$, $\bar H_{b2}\in\mathbb{R}^{(n-s)\times(s-\sum_{j=1}^{t+1}p_j)}$, $\bar H_{b3}\in\mathbb{R}^{(n-s)\times(n-s)}$, and let $\begin{bmatrix}\bar H_a\Sigma^{-1}\bar A_{11} & \bar H_a\Sigma^{-1}\bar A_{12}\end{bmatrix}T=\begin{bmatrix}\Phi_1 & \Phi_2 & \Phi_3\end{bmatrix}$, where $\Phi_1\in\mathbb{R}^{(n-s)\times\sum_{j=1}^{t+1}p_j}$, $\Phi_2\in\mathbb{R}^{(n-s)\times(s-\sum_{j=1}^{t+1}p_j)}$, $\Phi_3\in\mathbb{R}^{(n-s)\times(n-s)}$. Substituting these into Equation (56) gives:
$$\begin{bmatrix}\bar H_{b1} & \bar H_{b2} & \bar H_{b3}\\I_{\sum_{j=1}^{t+1}p_j\times\sum_{j=1}^{t+1}p_j} & O_{\sum_{j=1}^{t+1}p_j\times(s-\sum_{j=1}^{t+1}p_j)} & O_{\sum_{j=1}^{t+1}p_j\times(n-s)}\\O_{(s-\sum_{j=1}^{t+1}p_j)\times\sum_{j=1}^{t+1}p_j} & \bar N_{t+1} & O_{(s-\sum_{j=1}^{t+1}p_j)\times(n-s)}\end{bmatrix}\begin{bmatrix}\chi_1(k+1)\\\chi_2(k+1)\\\chi_3(k+1)\end{bmatrix}=\begin{bmatrix}\Phi_1 & \Phi_2 & \Phi_3\\D_a & O_{\sum_{j=1}^{t+1}p_j\times(s-\sum_{j=1}^{t+1}p_j)} & A_{t+1,1}\\O_{(s-\sum_{j=1}^{t+1}p_j)\times\sum_{j=1}^{t+1}p_j} & I_{(s-\sum_{j=1}^{t+1}p_j)\times(s-\sum_{j=1}^{t+1}p_j)} & O_{(s-\sum_{j=1}^{t+1}p_j)\times(n-s)}\end{bmatrix}\begin{bmatrix}\chi_1(k)\\\chi_2(k)\\\chi_3(k)\end{bmatrix}+\begin{bmatrix}v_{ab}(k+1)+\bar H_a\Sigma^{-1}U_1w(k)\\FUw(k)\end{bmatrix}+\begin{bmatrix}y_{ab}(k+1)\\O_{s\times1}\end{bmatrix}$$
In Equation (57), $\bar H_{b3}\in\mathbb{R}^{(n-s)\times(n-s)}$ may not be an invertible matrix, which may lead to unknown terms similar to those in Equation (33) appearing in subsequent transformations of Equation (57). Therefore, Theorem 6 is given to solve this problem.
Theorem 6.
In matrix $\begin{bmatrix}O_{(n-r)\times r} & \bar H_{12}\end{bmatrix}$, there exist $n-s$ row vectors, denoted $\begin{bmatrix}O_{(n-s)\times r} & \bar H_b\end{bmatrix}$, such that when $\begin{bmatrix}O_{(n-s)\times r} & \bar H_b\end{bmatrix}$ undergoes the right-multiplication transformation described in Equation (58) (equivalent to a column transformation), the resulting matrix $\bar H_4$ is invertible.
$$\begin{bmatrix}O_{(n-s)\times r} & \bar H_b\end{bmatrix}T=\begin{bmatrix}\bar H_1 & \bar H_2 & \bar H_3 & \bar H_4\end{bmatrix}$$
where $\bar H_1\in\mathbb{R}^{(n-s)\times\sum_{j=1}^{t+1}p_j}$, $\bar H_2\in\mathbb{R}^{(n-s)\times(g_{t+1}-1)}$, $\bar H_3\in\mathbb{R}^{(n-s)\times(s-\sum_{j=1}^{t+1}p_j-g_{t+1}+1)}$, $\bar H_4\in\mathbb{R}^{(n-s)\times(n-s)}$.
Proof. 
Let:
$$\begin{bmatrix}O_{(n-r)\times r} & \bar H_{12}\end{bmatrix}T=\begin{bmatrix}H_1 & H_2 & H_3 & H_4\end{bmatrix}$$
$$\begin{bmatrix}\bar H_{11}\Sigma^{-1}\bar A_{11} & \bar H_{11}\Sigma^{-1}\bar A_{12}\end{bmatrix}T=\begin{bmatrix}\bar\Phi_1 & \bar\Phi_2 & \bar\Phi_3 & \bar\Phi_4\end{bmatrix}$$
Left-multiplying the matrices $\begin{bmatrix}O_{(n-r)\times r} & \bar H_{12}\\\Sigma & O_{r\times(n-r)}\\O_{(s-r)\times r} & O_{(s-r)\times(n-r)}\end{bmatrix}$ and $\begin{bmatrix}\bar H_{11}\Sigma^{-1}\bar A_{11} & \bar H_{11}\Sigma^{-1}\bar A_{12}\\\bar A_{11} & \bar A_{12}\\\bar A_{21} & \bar A_{22}\end{bmatrix}$ by the matrix $\begin{bmatrix}I_{(n-r)\times(n-r)} & O_{(n-r)\times s}\\O_{s\times(n-r)} & F\end{bmatrix}$, and right-multiplying them by $T$, we obtain:
$$\begin{bmatrix}I_{(n-r)\times(n-r)} & O_{(n-r)\times s}\\O_{s\times(n-r)} & F\end{bmatrix}\begin{bmatrix}O_{(n-r)\times r} & \bar H_{12}\\\Sigma & O_{r\times(n-r)}\\O_{(s-r)\times r} & O_{(s-r)\times(n-r)}\end{bmatrix}T=\begin{bmatrix}H_1 & H_2 & H_3 & H_4\\I_{\sum_{j=1}^{t+1}p_j\times\sum_{j=1}^{t+1}p_j} & O_{\sum_{j=1}^{t+1}p_j\times(g_{t+1}-1)} & O_{\sum_{j=1}^{t+1}p_j\times(s-\sum_{j=1}^{t+1}p_j-g_{t+1}+1)} & O_{\sum_{j=1}^{t+1}p_j\times(n-s)}\\O_{(g_{t+1}-1)\times\sum_{j=1}^{t+1}p_j} & \bar N_a & O_{(g_{t+1}-1)\times(s-\sum_{j=1}^{t+1}p_j-g_{t+1}+1)} & O_{(g_{t+1}-1)\times(n-s)}\\O_{(s-\sum_{j=1}^{t+1}p_j-g_{t+1}+1)\times\sum_{j=1}^{t+1}p_j} & O_{(s-\sum_{j=1}^{t+1}p_j-g_{t+1}+1)\times(g_{t+1}-1)} & O_{(s-\sum_{j=1}^{t+1}p_j-g_{t+1}+1)\times(s-\sum_{j=1}^{t+1}p_j-g_{t+1}+1)} & O_{(s-\sum_{j=1}^{t+1}p_j-g_{t+1}+1)\times(n-s)}\end{bmatrix}$$
$$\begin{bmatrix}I_{(n-r)\times(n-r)} & O_{(n-r)\times s}\\O_{s\times(n-r)} & F\end{bmatrix}\begin{bmatrix}\bar H_{11}\Sigma^{-1}\bar A_{11} & \bar H_{11}\Sigma^{-1}\bar A_{12}\\\bar A_{11} & \bar A_{12}\\\bar A_{21} & \bar A_{22}\end{bmatrix}T=\begin{bmatrix}\bar\Phi_1 & \bar\Phi_2 & \bar\Phi_3 & \bar\Phi_4\\D_a & O_{\sum_{j=1}^{t+1}p_j\times(g_{t+1}-1)} & O_{\sum_{j=1}^{t+1}p_j\times(s-\sum_{j=1}^{t+1}p_j-g_{t+1}+1)} & A_{t+1,1}\\O_{(g_{t+1}-1)\times\sum_{j=1}^{t+1}p_j} & I_{(g_{t+1}-1)\times(g_{t+1}-1)} & O_{(g_{t+1}-1)\times(s-\sum_{j=1}^{t+1}p_j-g_{t+1}+1)} & O_{(g_{t+1}-1)\times(n-s)}\\O_{(s-\sum_{j=1}^{t+1}p_j-g_{t+1}+1)\times\sum_{j=1}^{t+1}p_j} & O_{(s-\sum_{j=1}^{t+1}p_j-g_{t+1}+1)\times(g_{t+1}-1)} & I_{(s-\sum_{j=1}^{t+1}p_j-g_{t+1}+1)\times(s-\sum_{j=1}^{t+1}p_j-g_{t+1}+1)} & O_{(s-\sum_{j=1}^{t+1}p_j-g_{t+1}+1)\times(n-s)}\end{bmatrix}$$
From Equation (15) we know that:
$$\operatorname{rank}\begin{bmatrix}O_{(n-r)\times r} & \bar H_{12}\\\Sigma & O_{r\times(n-r)}\\O_{(s-r)\times r} & O_{(s-r)\times(n-r)}\end{bmatrix}=\operatorname{rank}\begin{bmatrix}\Sigma & O_{r\times(n-r)}\\O_{(s-r)\times r} & O_{(s-r)\times(n-r)}\\\bar H_{11} & \bar H_{12}\end{bmatrix}=n$$
The transformation in Equation (59) does not change the rank of $\begin{bmatrix}O_{(n-r)\times r} & \bar H_{12}\\\Sigma & O_{r\times(n-r)}\\O_{(s-r)\times r} & O_{(s-r)\times(n-r)}\end{bmatrix}$; thus, we can observe the following from Equation (59):
$$\sum_{j=1}^{t+1}p_j+g_{t+1}-1=\operatorname{rank}\,I_{\sum_{j=1}^{t+1}p_j\times\sum_{j=1}^{t+1}p_j}+\operatorname{rank}\,\bar N_a=\operatorname{rank}\,\Sigma=r$$
$$\operatorname{rank}\begin{bmatrix}H_3 & H_4\end{bmatrix}=n-\Big(g_{t+1}-1+\sum_{j=1}^{t+1}p_j\Big)=n-r$$
Therefore, matrix $\begin{bmatrix}H_3 & H_4\end{bmatrix}\in\mathbb{R}^{(n-r)\times(n-r)}$ is invertible, and all column vectors of matrix $H_4\in\mathbb{R}^{(n-r)\times(n-s)}$ are linearly independent. Hence, there exist at least $n-s$ linearly independent row vectors in matrix $H_4$, from which we form the matrix $\bar H_4\in\mathbb{R}^{(n-s)\times(n-s)}$, and we can infer that matrix $\bar H_4$ is invertible. Similarly, we denote by $\begin{bmatrix}\Psi_1 & \Psi_2 & \Psi_3 & \Psi_4\end{bmatrix}$, taken from matrix $\begin{bmatrix}\bar\Phi_1 & \bar\Phi_2 & \bar\Phi_3 & \bar\Phi_4\end{bmatrix}$, the matrix corresponding to $\bar H_4$. □
Let $\begin{bmatrix}\bar H_1 & \bar H_2 & \bar H_3 & \bar H_4\end{bmatrix}=\begin{bmatrix}\bar H_1 & \bar H_{23} & \bar H_4\end{bmatrix}$ and $\begin{bmatrix}\Psi_1 & \Psi_2 & \Psi_3 & \Psi_4\end{bmatrix}=\begin{bmatrix}\Psi_1 & \Psi_{23} & \Psi_4\end{bmatrix}$. Replacing $\begin{bmatrix}\bar H_{b1} & \bar H_{b2} & \bar H_{b3}\end{bmatrix}$ and $\begin{bmatrix}\Phi_1 & \Phi_2 & \Phi_3\end{bmatrix}$ with $\begin{bmatrix}\bar H_1 & \bar H_{23} & \bar H_4\end{bmatrix}$ and $\begin{bmatrix}\Psi_1 & \Psi_{23} & \Psi_4\end{bmatrix}$, respectively, Equation (57) can be rewritten as:
$$\begin{bmatrix}\bar H_1 & \bar H_{23} & \bar H_4\\I_{\sum_{j=1}^{t+1}p_j\times\sum_{j=1}^{t+1}p_j} & O_{\sum_{j=1}^{t+1}p_j\times(s-\sum_{j=1}^{t+1}p_j)} & O_{\sum_{j=1}^{t+1}p_j\times(n-s)}\\O_{(s-\sum_{j=1}^{t+1}p_j)\times\sum_{j=1}^{t+1}p_j} & \bar N_{t+1} & O_{(s-\sum_{j=1}^{t+1}p_j)\times(n-s)}\end{bmatrix}\begin{bmatrix}\chi_1(k+1)\\\chi_2(k+1)\\\chi_3(k+1)\end{bmatrix}=\begin{bmatrix}\Psi_1 & \Psi_{23} & \Psi_4\\D_a & O_{\sum_{j=1}^{t+1}p_j\times(s-\sum_{j=1}^{t+1}p_j)} & A_{t+1,1}\\O_{(s-\sum_{j=1}^{t+1}p_j)\times\sum_{j=1}^{t+1}p_j} & I_{(s-\sum_{j=1}^{t+1}p_j)\times(s-\sum_{j=1}^{t+1}p_j)} & O_{(s-\sum_{j=1}^{t+1}p_j)\times(n-s)}\end{bmatrix}\begin{bmatrix}\chi_1(k)\\\chi_2(k)\\\chi_3(k)\end{bmatrix}+\begin{bmatrix}v_{ab}(k+1)+\bar H_a\Sigma^{-1}U_1w(k)\\FUw(k)\end{bmatrix}+\begin{bmatrix}y_{ab}(k+1)\\O_{s\times1}\end{bmatrix}$$
Based on $\begin{bmatrix}O_{(n-s)\times r} & \bar H_b\end{bmatrix}$, let $\begin{bmatrix}\bar H_{11} & \bar H_{12}\end{bmatrix}=\begin{bmatrix}\bar H_c & \bar H_d\\\bar H_a & \bar H_b\end{bmatrix}$. Since $n-s$ row vectors have been selected from matrix $\begin{bmatrix}O_{(n-r)\times r} & \bar H_{12}\end{bmatrix}$ to serve as $\begin{bmatrix}O_{(n-s)\times r} & \bar H_b\end{bmatrix}$, the observation Equation (28) is correspondingly rewritten as:
$$Y(k+1)=C\chi(k+1)+\bar v(k+1)$$
where $C=\begin{bmatrix}\bar H_c & \bar H_d\\\bar H_{21} & \bar H_{22}\end{bmatrix}T$.
The system formed by Equations (61) and (62) is still regular and observable (the proof is similar to those of Theorems 4 and 5).
Equations (1) and (2) undergo a series of transformations and are ultimately transformed into Equations (61) and (62). During this process, before the measurement equation is added to the state equation, all transformations involving the measurement equation are column transformations; in other words, the measurement noises are never mixed with one another. Consequently, the noise in Equation (61) and the noise in Equation (62) are uncorrelated, and the classical Kalman filter can be applied to the system formed by (61) and (62).
Through the transformations and decompositions in Equations (36)–(61), the entire system becomes a solvable, fully known system containing no unknown terms at the current time. For the causal systems represented by Equation (53), the transformation process is similar to that of non-causal systems and likewise yields a solvable known system. Therefore, the subsequent analysis mainly focuses on the non-causal systems represented by Equation (52). Next, we perform state estimation on the system in a form similar to standard Kalman filtering.
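The overall flow just described can be outlined in code. This is a hedged sketch only: `predict`, `update`, and the matrices `V` and `T` are placeholders for the quantities constructed in Equations (36)–(61), not an implementation of them.

```python
import numpy as np

def estimate_states(y_seq, chi0, P0, predict, update, V, T):
    """Run a classical Kalman recursion on the transformed state chi and
    map every estimate back to the original coordinates via x = V T chi."""
    chi, P = chi0, P0
    x_hats = []
    for y in y_seq:
        chi, P = predict(chi, P)    # one-step prediction (Section 4.1)
        chi, P = update(chi, P, y)  # measurement update (Section 4.2)
        x_hats.append(V @ T @ chi)  # back-transformation to the original state
    return x_hats
```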

4. Filtering Algorithm

4.1. One-Step Prediction

Equation (61) can be split into two parts:
$$\begin{bmatrix}\bar H_1 & \bar H_{23} & \bar H_4\end{bmatrix}\begin{bmatrix}\chi_1(k+1)\\\chi_2(k+1)\\\chi_3(k+1)\end{bmatrix}=\begin{bmatrix}\Psi_1 & \Psi_{23} & \Psi_4\end{bmatrix}\begin{bmatrix}\chi_1(k)\\\chi_2(k)\\\chi_3(k)\end{bmatrix}+v_{ab}(k+1)+\bar H_a\Sigma^{-1}U_1w(k)+y_{ab}(k+1)$$
$$\begin{bmatrix}I_{\sum_{j=1}^{t+1}p_j\times\sum_{j=1}^{t+1}p_j} & O_{\sum_{j=1}^{t+1}p_j\times(s-\sum_{j=1}^{t+1}p_j)} & O_{\sum_{j=1}^{t+1}p_j\times(n-s)}\\O_{(s-\sum_{j=1}^{t+1}p_j)\times\sum_{j=1}^{t+1}p_j} & \bar N_{t+1} & O_{(s-\sum_{j=1}^{t+1}p_j)\times(n-s)}\end{bmatrix}\begin{bmatrix}\chi_1(k+1)\\\chi_2(k+1)\\\chi_3(k+1)\end{bmatrix}=\begin{bmatrix}D_a & O_{\sum_{j=1}^{t+1}p_j\times(s-\sum_{j=1}^{t+1}p_j)} & A_{t+1,1}\\O_{(s-\sum_{j=1}^{t+1}p_j)\times\sum_{j=1}^{t+1}p_j} & I_{(s-\sum_{j=1}^{t+1}p_j)\times(s-\sum_{j=1}^{t+1}p_j)} & O_{(s-\sum_{j=1}^{t+1}p_j)\times(n-s)}\end{bmatrix}\begin{bmatrix}\chi_1(k)\\\chi_2(k)\\\chi_3(k)\end{bmatrix}+FUw(k)$$
From Equation (30), we can obtain Equations (31) and (33). Similarly, letting $F=\begin{bmatrix}F_1\\F_2\end{bmatrix}$, from Equation (64) we can obtain the following two equations:
$$\chi_1(k+1)=D_a\chi_1(k)+A_{t+1,1}\chi_3(k)+F_1Uw(k)$$
$$\chi_2(k+1)=-\sum_{i=1}^{g_{t+1}}\bar N_{t+1}^{\,i-1}F_2Uw(k+i)$$
Substituting Equations (65) and (66) into Equation (63) gives:
$$\begin{aligned}\chi_3(k+1)=&-\bar H_4^{-1}\Big\{\bar H_1\big[D_a\chi_1(k)+A_{t+1,1}\chi_3(k)+F_1Uw(k)\big]-\bar H_{23}\sum_{i=1}^{g_{t+1}}\bar N_{t+1}^{\,i-1}F_2Uw(k+i)\Big\}\\&+\bar H_4^{-1}\Big\{\begin{bmatrix}\Psi_1 & \Psi_{23}\bar P_{t+1} & \Psi_4\end{bmatrix}\begin{bmatrix}\chi_1(k)\\\chi_2(k)\\\chi_3(k)\end{bmatrix}+v_{ab}(k+1)+\bar H_a\Sigma^{-1}U_1w(k)+y_{ab}(k+1)\Big\}\end{aligned}$$
According to Equations (65)–(67), the one-step state predictions, denoted $\hat\chi(k+1|k)=\begin{bmatrix}\hat\chi_1(k+1|k)\\\hat\chi_2(k+1|k)\\\hat\chi_3(k+1|k)\end{bmatrix}$, are, respectively:
$$\hat\chi_1(k+1|k)=D_a\hat\chi_1(k|k)+A_{t+1,1}\hat\chi_3(k|k)$$
$$\hat\chi_2(k+1|k)=0$$
$$\hat\chi_3(k+1|k)=\bar H_4^{-1}y_{ab}(k+1)+\begin{bmatrix}\bar H_4^{-1}\Psi_1-\bar H_4^{-1}\bar H_1D_a & \bar H_4^{-1}\Psi_{23}\bar P_{t+1} & \bar H_4^{-1}\Psi_4-\bar H_4^{-1}\bar H_1A_{t+1,1}\end{bmatrix}\begin{bmatrix}\hat\chi_1(k|k)\\\hat\chi_2(k|k)\\\hat\chi_3(k|k)\end{bmatrix}$$
where $\begin{bmatrix}\hat\chi_1(k|k)\\\hat\chi_2(k|k)\\\hat\chi_3(k|k)\end{bmatrix}=\hat\chi(k|k)$ is the state estimate of the system. Combining Equations (68)–(70):
$$\hat\chi(k+1|k)=\begin{bmatrix}O_{\sum_{j=1}^{t+1}p_j\times1}\\O_{(s-\sum_{j=1}^{t+1}p_j)\times1}\\\bar H_4^{-1}y_{ab}(k+1)\end{bmatrix}+\begin{bmatrix}D_a & O_{\sum_{j=1}^{t+1}p_j\times(s-\sum_{j=1}^{t+1}p_j)} & A_{t+1,1}\\O_{(s-\sum_{j=1}^{t+1}p_j)\times\sum_{j=1}^{t+1}p_j} & O_{(s-\sum_{j=1}^{t+1}p_j)\times(s-\sum_{j=1}^{t+1}p_j)} & O_{(s-\sum_{j=1}^{t+1}p_j)\times(n-s)}\\\bar H_4^{-1}\Psi_1-\bar H_4^{-1}\bar H_1D_a & \bar H_4^{-1}\Psi_{23} & \bar H_4^{-1}\Psi_4-\bar H_4^{-1}\bar H_1A_{t+1,1}\end{bmatrix}\begin{bmatrix}\hat\chi_1(k|k)\\\hat\chi_2(k|k)\\\hat\chi_3(k|k)\end{bmatrix}$$
Let:
$$\Lambda=\begin{bmatrix}D_a & O_{\sum_{j=1}^{t+1}p_j\times(s-\sum_{j=1}^{t+1}p_j)} & A_{t+1,1}\\O_{(s-\sum_{j=1}^{t+1}p_j)\times\sum_{j=1}^{t+1}p_j} & O_{(s-\sum_{j=1}^{t+1}p_j)\times(s-\sum_{j=1}^{t+1}p_j)} & O_{(s-\sum_{j=1}^{t+1}p_j)\times(n-s)}\\\bar H_4^{-1}\Psi_1-\bar H_4^{-1}\bar H_1D_a & \bar H_4^{-1}\Psi_{23} & \bar H_4^{-1}\Psi_4-\bar H_4^{-1}\bar H_1A_{t+1,1}\end{bmatrix}$$
According to Equations (65)–(71), the state prediction error of $\hat\chi(k+1|k)$, denoted $\tilde\chi(k+1|k)=\begin{bmatrix}\tilde\chi_1(k+1|k)\\\tilde\chi_2(k+1|k)\\\tilde\chi_3(k+1|k)\end{bmatrix}$, is:
$$\tilde\chi(k+1|k)=\chi(k+1)-\hat\chi(k+1|k)=\Lambda\tilde\chi(k|k)+\begin{bmatrix}F_1Uw(k)\\-\sum_{i=1}^{g_{t+1}}\bar N_{t+1}^{\,i-1}F_2Uw(k+i)\\\vartheta(k)\end{bmatrix}$$
where:
$$\vartheta(k)=\bar H_4^{-1}\Big\{v_{ab}(k+1)+\bar H_a\Sigma^{-1}U_1w(k)-F_1Uw(k)+\bar H_{23}\bar P_{t+1}\sum_{i=1}^{g_{t+1}}\bar N_{t+1}^{\,i-1}F_2Uw(k+i)\Big\}$$
According to Equation (72), the prediction error covariance of $\hat\chi(k+1|k)$, denoted $\bar P(k+1|k)$, is:
$$\bar P(k+1|k)=E\{\tilde\chi(k+1|k)\tilde\chi^{T}(k+1|k)\}=\Lambda\bar P(k|k)\Lambda^{T}+\bar Q$$
where:
$$\bar Q=E\left\{\begin{bmatrix}F_1Uw(k)\\-\sum_{i=1}^{g_{t+1}}\bar N_{t+1}^{\,i-1}F_2Uw(k+i)\\\vartheta(k)\end{bmatrix}\begin{bmatrix}F_1Uw(k)\\-\sum_{i=1}^{g_{t+1}}\bar N_{t+1}^{\,i-1}F_2Uw(k+i)\\\vartheta(k)\end{bmatrix}^{T}\right\}=\begin{bmatrix}F_1UQU^{T}F_1^{T} & O & \Pi\\O & \sum_{i=1}^{g_{t+1}}\bar N_{t+1}^{\,i-1}F_2UQU^{T}F_2^{T}(\bar N_{t+1}^{\,i-1})^{T} & O\\\Pi^{T} & O & E\{\vartheta(k)\vartheta^{T}(k)\}\end{bmatrix}$$
where:
$$\Pi=F_1UQU_1^{T}\Sigma^{-T}\bar H_a^{T}-F_1UQU^{T}F_1^{T}$$
$$E\{\vartheta(k)\vartheta^{T}(k)\}=\bar H_4^{-1}\big(R+\bar H_a\Sigma^{-1}U_1QU_1^{T}\Sigma^{-T}\bar H_a^{T}+F_1UQU^{T}F_1^{T}\big)\bar H_4^{-T}+\bar H_4^{-1}\Big(\bar H_{23}\bar P_{t+1}\sum_{i=1}^{g_{t+1}}\bar N_{t+1}^{\,i-1}F_2UQU^{T}F_2^{T}(\bar N_{t+1}^{\,i-1})^{T}\bar P_{t+1}^{T}\bar H_{23}^{T}-\Pi-\Pi^{T}\Big)\bar H_4^{-T}$$
According to Equation (62), the predicted value of the observation is:
$$\hat Y(k+1|k)=C\hat\chi(k+1|k)$$

4.2. State Update

Using the projection theorem, the state update equation is:
$$\hat\chi(k+1|k+1)=\hat\chi(k+1|k)+K(k+1)\big[Y(k+1)-\hat Y(k+1|k)\big]$$
where:
$$K(k+1)=\bar P(k+1|k)C^{T}\big[C\bar P(k+1|k)C^{T}+\bar R\big]^{-1}$$
$$\bar R=E\{\bar v(k+1)\bar v^{T}(k+1)\}$$
The estimation error covariance of $\hat\chi(k+1|k+1)$, denoted $\bar P(k+1|k+1)$, is:
$$\bar P(k+1|k+1)=\big[I-K(k+1)C\big]\bar P(k+1|k)$$
Based on $x=VX=V\begin{bmatrix}X_1\\X_2\end{bmatrix}$ and $\begin{bmatrix}X_1\\X_2\end{bmatrix}=T\chi=T\begin{bmatrix}\chi_1\\\chi_2\\\chi_3\end{bmatrix}$, we obtain the estimate of the original state of Equations (1) and (2), denoted $\hat x(k+1|k+1)$:
$$\hat x(k+1|k+1)=VT\hat\chi(k+1|k+1)$$
It should be pointed out that Equations (71), (73), (76), (77), and (79) are all classical formulas of the Kalman filter. The entire estimation method of this paper is summarized in the flowchart shown in Figure 1.
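These classical formulas can be sketched compactly in code. The function below is a hedged illustration only: `Lam`, `Qbar`, `C`, `Rbar`, and the known-input term `b` are generic stand-ins for the transformed-system quantities, not the paper's actual matrices.

```python
import numpy as np

def kf_step(chi_hat, P_bar, y, Lam, Qbar, C, Rbar, b=None):
    """One predict/update cycle in the transformed coordinates:
    prediction as in Eqs. (71) and (73), update as in Eqs. (76)-(79)."""
    # One-step prediction; b collects the known y_ab-driven offset in Eq. (71).
    chi_pred = Lam @ chi_hat + (0 if b is None else b)
    P_pred = Lam @ P_bar @ Lam.T + Qbar
    # Measurement update via the classical gain.
    S = C @ P_pred @ C.T + Rbar                      # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)              # Kalman gain, Eq. (77)
    chi_new = chi_pred + K @ (y - C @ chi_pred)      # state update, Eq. (76)
    P_new = (np.eye(len(chi_hat)) - K @ C) @ P_pred  # covariance, Eq. (79)
    return chi_new, P_new
```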

5. Numerical Simulation

Consider a linear discrete-time singular system in the form of Equations (1) and (2), with:
$$E=\begin{bmatrix}0 & 1 & 0.5 & 0.25 & 0.5 & 0\\2 & 0.5 & 1 & 0.5 & 0.2 & 0\\0 & 1 & 0 & 0 & 0.5 & 0\\1 & 0 & 0 & 0 & 0 & 0\end{bmatrix},\qquad A=\begin{bmatrix}0.5 & 0.5 & 1 & 1.5 & 2 & 4.5\\1.5 & 1.75 & 0 & 0.5 & 6 & 18\\1 & 0.5 & 0 & 1 & 2 & 2\\1.5 & 1 & 0.5 & 0.25 & 3.5 & 7\end{bmatrix}$$
$$H=\begin{bmatrix}1 & 0.5 & 1 & 0.5 & 2 & 2\\3.5 & 0 & 1 & 0.5 & 3 & 3\\3 & 0.5 & 1 & 1.5 & 2 & 3\\1 & 1 & 0 & 0 & 0.5 & 0\\0.25 & 0 & 4 & 0 & 3 & 0.25\end{bmatrix}$$
$$Q=\mathrm{diag}\{0.25,\ 0.2,\ 0.4,\ 0.1\},\qquad R=\mathrm{diag}\{0.5,\ 0.1,\ 0.3,\ 0.2,\ 0.1\}$$
The initial true state is $x(0)=\begin{bmatrix}1 & 1 & 1 & 1 & 1 & 1\end{bmatrix}^{T}$, the initial estimate is $\hat x(0)=\begin{bmatrix}0 & 0 & 0 & 0 & 0 & 0\end{bmatrix}^{T}$, and $P(0)=I_{6\times6}$. This numerical example represents a non-causal system; the matrix transformation process can be found in Appendix A.
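A quick check (a sketch; entries are transcribed from the matrix $E$ above, with signs as printed) confirms the rectangular-singular setting: with $E\in\mathbb{R}^{4\times6}$ we have $\operatorname{rank}(E)\le4<n=6$, so $E$ cannot have full column rank:

```python
import numpy as np

# E as printed in the simulation setup above (4 x 6, so s = 4, n = 6).
E = np.array([[0, 1, 0.5, 0.25, 0.5, 0],
              [2, 0.5, 1, 0.5, 0.2, 0],
              [0, 1, 0, 0, 0.5, 0],
              [1, 0, 0, 0, 0, 0]])
s, n = E.shape
# Any s x n matrix with s < n is rank-deficient in its columns.
print(np.linalg.matrix_rank(E) < n)  # True: E lacks full column rank
```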
Figure 2, Figure 3, Figure 4, Figure 5, Figure 6 and Figure 7 show the proposed filtering results for the six sub-states over 100 steps. Figure 8 shows the root mean square error (RMSE) results over 1000 steps. From Figures 2–7, we can see that each component of the state estimate effectively follows the true state component, and from Figure 8 that each component of the state estimate tends to converge.
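The RMSE curves in Figure 8 can be computed with the usual definition; this is a hedged sketch, and the array shapes (Monte Carlo runs by time steps by state components) are our assumption, not stated in the paper:

```python
import numpy as np

def rmse(x_true, x_hat):
    """Per-step, per-component root mean square error.
    x_true, x_hat: arrays of shape (runs, steps, n_states)."""
    err = np.asarray(x_true) - np.asarray(x_hat)
    return np.sqrt(np.mean(err ** 2, axis=0))
```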

6. Conclusions and Future Work

This paper takes the causality of the system into account and proposes a state estimation method based on the Kalman filter for linear discrete-time rectangular singular systems. The feasibility of the algorithm is verified through numerical simulation. Compared with existing methods, the main contribution of this paper is a state estimation method for linear rectangular singular systems that may be non-causal, providing a new approach to the state estimation problem of rectangular singular systems. Although the numerical simulations show that the proposed filter tends to converge, a theoretical analysis of its convergence and stability has not yet been conducted. Compared with general systems, such an analysis for rectangular singular systems could be more difficult and complex because the state equation carries less constraint information; this needs to be studied in the future. In addition, non-causal properties may also exist in nonlinear singular systems. For example, at a certain moment, the matrix pair $(E, A)$ of a nonlinear singular system, obtained by Taylor expansion, may be non-causal. Therefore, more research is needed to address nonlinear singular systems with non-causal properties. Furthermore, the control of rectangular singular systems is also worth studying: for example, if system (1) has an input term $Bu(k)$ on the right-hand side, how should the state estimate be obtained and the control problem solved? These problems also remain to be addressed.

Author Contributions

Conceptualization, J.Z., C.W. and W.L.; methodology, C.W.; software, J.Z.; validation, J.Z.; formal analysis, J.Z.; investigation, J.Z.; resources, J.Z.; data curation, J.Z.; writing—original draft preparation, J.Z.; writing—review and editing, C.W. and W.L.; visualization, C.W.; supervision, C.W. and W.L.; project administration, C.W.; funding acquisition, W.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grants 62125307, 61933013 and U22A2046.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. The Matrix Transformation of the Numerical Simulation

The transformation process of the matrices E and A can be summarized in two steps. The first step uses the first type of restricted equivalence transformation (RET):
$$\left(\begin{bmatrix}0 & 1 & 0.5 & 0.25 & 0.5 & 0\\2 & 0.5 & 1 & 0.5 & 0.2 & 0\\0 & 1 & 0 & 0 & 0.5 & 0\\1 & 0 & 0 & 0 & 0 & 0\end{bmatrix},\;\begin{bmatrix}0.5 & 0.5 & 1 & 1.5 & 2 & 4.5\\1.5 & 1.75 & 0 & 0.5 & 6 & 18\\1 & 0.5 & 0 & 1 & 2 & 2\\1.5 & 1 & 0.5 & 0.25 & 3.5 & 7\end{bmatrix}\right)\xrightarrow{\text{first type of RET}}\left(\begin{bmatrix}1 & 0 & 0 & 0 & 0 & 0\\0 & 1 & 0 & 0 & 0 & 0\\0 & 0 & 1 & 0 & 0 & 0\\0 & 0 & 0 & 0 & 0 & 0\end{bmatrix},\;\begin{bmatrix}0 & 0 & 0 & 1 & 2 & 0\\1 & 0 & 0 & 0 & 3 & 4\\0 & 1 & 0 & 0 & 0 & 5\\0 & 0 & 1 & 0 & 0 & 0\end{bmatrix}\right)$$
The second step: transformation through Equations (36)–(60):
Starting from the pair obtained in the first step, the following sequence of elementary operations is applied to the matrix pair: move the 4th column to the 1st column; eliminate the element 2; move the 1st column to the 6th column; apply the second type of RET; subtract the 3rd column from the 4th column and add the 4th row to the 3rd row; multiply the 3rd row by $-1$ and the 3rd column by $-1$; move the 2nd column to the 6th column; add the 6th column to the 4th column to eliminate the element 1; and finally apply the second type of RET.
The transformation process of the matrix H is related to the matrix E, and matrix H is transformed to:
$$\begin{bmatrix}1 & 0.5 & 1 & 0.5 & 2 & 2\\3.5 & 0 & 1 & 0.5 & 3 & 3\\3 & 0.5 & 1 & 1.5 & 2 & 3\\1 & 1 & 0 & 0 & 0.5 & 0\\0.25 & 0 & 4 & 0 & 3 & 0.25\end{bmatrix}\;\rightarrow\;\begin{bmatrix}0 & 0 & 0 & 1 & 1 & 0\\1 & 0 & 1 & 0 & 0 & 1\\0 & 1 & 1 & 0 & 1 & 0\\1 & 1 & 0 & 1 & 0 & 0\\1.167 & 3.283 & 0.117 & 5.117 & 2 & 2.167\end{bmatrix}$$
According to the transformations in this appendix, the system in the numerical simulation is a non-causal system. The 2nd and 3rd rows of the measurement equation are selected to be added into the state equation.

References

  1. Hanieh, M.; Yao, H. Extended Kalman filtering for state estimation of a Hill muscle model. IET Control Theory Appl. 2018, 12, 384–394. [Google Scholar]
  2. Li, Z.; Sun, M.; Duan, Q.; Mao, Y. Robust State Estimation for Uncertain Discrete Linear Systems with Delayed Measurements. Mathematics 2022, 10, 1365. [Google Scholar] [CrossRef]
  3. Mahdieh, A.; Majid, H. Distributed trust-based unscented Kalman filter for non-linear state estimation under cyber-attacks: The application of manoeuvring target tracking over wireless sensor networks. IET Control Theory Appl. 2021, 15, 1987–1998. [Google Scholar]
  4. Wang, M.; Liu, W.F.; Wen, C. A High-Order Kalman Filter Method for Fusion Estimation of Motion Trajectories of Multi-Robot Formation. Sensors 2022, 22, 5590. [Google Scholar] [CrossRef] [PubMed]
  5. Zhang, Q.; Xu, Y.; Wang, X.; Yu, Z.; Deng, T. Real-Time Wind Field Estimation and Pitot Tube Calibration Using an Extended Kalman Filter. Mathematics 2021, 9, 646. [Google Scholar] [CrossRef]
  6. Kalman, R.E. A new approach to linear filtering and prediction problems. J. Basic Eng. 1960, 82, 35–45. [Google Scholar] [CrossRef]
  7. Huang, Y.; Xue, C.; Zhu, F.; Wang, W.; Zhang, Y.; Chambers, J.A. Adaptive Recursive Decentralized Cooperative Localization for Multirobot Systems with Time-Varying Measurement Accuracy. IEEE Trans. Instrum. Meas. 2021, 70, 1–25. [Google Scholar] [CrossRef]
  8. Wang, J.; Wen, C. Real-Time Updating High-Order Extended Kalman Filtering Method Based on Fixed-Step Life Prediction for Vehicle Lithium-Ion Batteries. Sensors 2022, 22, 2574. [Google Scholar] [CrossRef]
  9. Huang, Y.; Zhang, Y.; Wu, Z.; Li, N.; Chambers, J. A Novel Adaptive Kalman Filter with Inaccurate Process and Measurement Noise Covariance Matrices. IEEE Trans. Autom. Control 2018, 63, 594–601. [Google Scholar] [CrossRef]
  10. Lu, Z.; Wang, N.; Dong, S. Improved Square-Root Cubature Kalman Filtering Algorithm for Nonlinear Systems with Dual Unknown Inputs. Mathematics 2024, 12, 99. [Google Scholar] [CrossRef]
  11. Rosenbrock, H.H. Structural properties of linear dynamical systems. Int. J. Control 1974, 20, 191–202. [Google Scholar] [CrossRef]
  12. Dai, L. Singular Control Systems; Springer: Berlin/Heidelberg, Germany, 1989. [Google Scholar]
  13. Lewis, F.L.; Mertzios, V.G. Recent advances in singular systems. Circuits Syst. Sign. Process. 1989, 8, 341–355. [Google Scholar]
  14. Feng, Y.; Yagoubi, M.; Chevrel, P. Dilated LMI characterisations for linear time-invariant singular systems. Int. J. Control 2010, 11, 2276–2284. [Google Scholar] [CrossRef]
  15. Xia, Y.Q.; Boukas, E.K.; Shi, P.; Zhang, J.H. Stability and stabilization of continuous-time singular hybrid systems. Automatica 2009, 45, 1504–1509. [Google Scholar] [CrossRef]
  16. Jin, Z.; Zhang, Q.; Zhang, Y. The Impulse Analysis of the Regular Singular System via Kronecker Indices. Int. Conf. Appl. Math. Model. Stat. Appl. 2018, 143, 62–67. [Google Scholar]
  17. Zheng, J.; Ran, C. Robust time-varying Kalman predictor for uncertain singular system with missing measurement and colored noises. In Proceedings of the 2022 34th Chinese Control and Decision Conference (CCDC), Hefei, China, 15–17 August 2022; Volume 34, pp. 3483–3488. [Google Scholar]
  18. Nosrati, K.; Belikov, J.; Tepljakov, A.; Petlenkov, E. Extended fractional singular Kalman filter. Appl. Math. Comput. 2023, 448, 127950. [Google Scholar] [CrossRef]
  19. Nosrati, K.; Shafiee, M. Kalman filtering for discrete-time linear fractional-order singular systems. IET Control Theory Appl. 2018, 12, 1254–1266. [Google Scholar] [CrossRef]
  20. Lu, X.; Wang, L.; Wang, H.; Wang, X. Kalman Filtering for Delayed Singular Systems with Multiplicative Noise. IEEE/CAA J. Autom. Sin. 2016, 3, 51–58. [Google Scholar]
  21. Yu, X.; Pu, S.; Li, J. An Optimal Filter for Singular Systems with Stochastic Multiplicative Disturbance. IEEE Trans. Circuits Syst. Express Briefs 1989, 67, 3607–3611. [Google Scholar] [CrossRef]
  22. Zhang, H.; Xie, L.; Soh, Y.C. Optimal Recursive Filtering, Prediction, and Smoothing for Singular Stochastic Discrete-Time Systems. IEEE Trans. Autom. Control 1999, 44, 2154–2158. [Google Scholar] [CrossRef]
  23. Zhang, Q.H.; Lu, J.G. Positive real lemmas for singular fractional order systems. IET Control Theory Appl. 2020, 14, 2805–2813. [Google Scholar] [CrossRef]
  24. Wang, J.X.; Wu, H.-C.; Ji, X.-F.; Liu, X.H. Robust Finite-Time Stabilization for Uncertain Discrete-Time Linear Singular Systems. IEEE Access 2020, 8, 100645–100651. [Google Scholar] [CrossRef]
  25. Cui, Y.; Shen, J.; Feng, Z.; Chen, Y. Stability Analysis for Positive Singular Systems with Time-Varying Delays. IEEE Trans. Autom. Control 2018, 63, 1487–1494. [Google Scholar] [CrossRef]
  26. Liu, H.X.; Lin, C.; Cheng, B.; Ge, W.T. Stabilization for Rectangular Descriptor Fractional Order Systems. IEEE Access 2019, 7, 177556–177561. [Google Scholar] [CrossRef]
  27. Abhinav, K.; Mamoni, P.M.K. Impulse Eliminations for Rectangular Descriptor Systems: A Unified Approach. IEEE Control Syst. Lett. 2023, 7, 1357–1362. [Google Scholar]
  28. Xu, S.-Y.; James, L. Robust Control and Filtering of Singular Systems; Springer: Berlin/Heidelberg, Germany, 2006; p. 332. [Google Scholar]
  29. Zhang, G.S. Regularizability, controllability and observability of rectangular descriptor systems by dynamic compensation. In Proceedings of the 2006 American Control Conference, Minneapolis, MN, USA, 14–16 June 2006; pp. 4393–4398. [Google Scholar]
  30. Tian, H.-W.; Shi, Y. Reduced-order Kalman recursive filter for non-square descriptor systems with correlated noise. In Proceedings of the 30th Chinese Control Conference, Yantai, China, 22–24 July 2011; pp. 1428–1431. [Google Scholar]
  31. Wen, C.-B.; Cheng, X.S. A State Space Decomposition Filtering Method for a Class of Discrete-Time Singular Systems. IEEE Access 2019, 7, 50372–50379. [Google Scholar] [CrossRef]
  32. Duan, G.R.; Chen, Y. Generalized regularity and regularizability of rectangular descriptor systems. J. Control Theory Appl. 2007, 5, 159–163. [Google Scholar] [CrossRef]
Figure 1. The entire estimation method.
Figure 2. The true value of state and proposed filter for x 1 ( k ) .
Figure 3. The true value of state and proposed filter for x 2 ( k ) .
Figure 4. The true value of state and proposed filter for x 3 ( k ) .
Figure 5. The true value of state and proposed filter for x 4 ( k ) .
Figure 6. The true value of state and proposed filter for x 5 ( k ) .
Figure 7. The true value of state and proposed filter for x 6 ( k ) .
Figure 8. The RMSE of all sub-state.

Share and Cite

MDPI and ACS Style

Zheng, J.; Wen, C.; Liu, W. Kalman Filter for Linear Discrete-Time Rectangular Singular Systems Considering Causality. Mathematics 2024, 12, 137. https://doi.org/10.3390/math12010137

