Article

Solving the Sylvester-Transpose Matrix Equation under the Semi-Tensor Product

by Janthip Jaiprasert and Pattrawut Chansangiam *,†
Department of Mathematics, School of Science, King Mongkut’s Institute of Technology Ladkrabang, Bangkok 10520, Thailand
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Symmetry 2022, 14(6), 1094; https://doi.org/10.3390/sym14061094
Submission received: 23 April 2022 / Revised: 19 May 2022 / Accepted: 21 May 2022 / Published: 26 May 2022
(This article belongs to the Special Issue Matrix Equations and Symmetry)

Abstract: This paper investigates the Sylvester-transpose matrix equation $A \ltimes X + X^T \ltimes B = C$, where all mentioned matrices are over an arbitrary field. Here, $\ltimes$ is the semi-tensor product, which is a generalization of the usual matrix product defined for matrices of arbitrary dimensions. For matrices of compatible dimensions, we investigate criteria for the equation to have a solution, a unique solution, or infinitely many solutions. These conditions rely on ranks and linear dependence. Moreover, we find suitable matrix partitions so that the matrix equation can be transformed into a linear system involving the usual matrix product. Our work includes the studies of the equation $A \ltimes X = C$, the equation $X \ltimes B = C$, and the classical Sylvester-transpose matrix equation.

1. Introduction

Linear algebraic matrix equations arise naturally in pure and applied mathematics. The Sylvester matrix equation $AX + XB = C$ and related matrix equations play a crucial role in differential equations, control and system theory, signal processing, and related application areas; see, e.g., [1]. The Sylvester-transpose matrix equation $AX + X^T B = C$ can be applied to solve certain problems in control theory, such as eigenstructure assignment [2] and fault diagnosis in dynamic systems; see the survey [3] for more information. Theoretical interest in the Sylvester-transpose matrix equation and related equations concerns the existence and uniqueness of solutions (e.g., via Moore-Penrose inverses [4]) and the structure of symmetric/Hermitian/reflexive solutions (e.g., [5,6]). Computational aspects of these matrix equations concern methods to find or approximate solutions; see, e.g., [7]. Such studies usually deal with real or complex matrices, and the matrix product appearing in the equation is the usual one. Thus, there are at least two ways to extend previous studies of linear matrix equations. The first is, instead of real/complex matrices, to consider matrices over a suitable algebraic structure, such as a field (e.g., [8]), a commutative ring (e.g., [9,10]), or a principal ideal domain (e.g., [6]). Recall that linear matrix equations are closely related to linear systems, and the theory of linear systems over a field parallels the classical theory for real/complex matrices. Hence, in this paper, we consider matrices over a field $\mathbb{F}$. Let us denote by $\mathbb{F}^{m \times n}$ the set of $m \times n$ matrices over $\mathbb{F}$.
The second way to extend the studies of matrix equations is to consider a generalization of the usual matrix product. It is well known that the usual product of matrices $A \in \mathbb{F}^{m \times n}$ and $B \in \mathbb{F}^{p \times q}$ is well-defined only if the pair $(A, B)$ satisfies the matching dimension condition, i.e., $n = p$. For matrices $A \in \mathbb{F}^{m \times n}$ and $B \in \mathbb{F}^{p \times q}$ of arbitrary dimensions, D. Cheng [11] defined their semi-tensor product (STP) by
$$A \ltimes B = (A \otimes I_{\alpha/n})(B \otimes I_{\alpha/p}),$$
where $\alpha$ is the least common multiple of $n$ and $p$, and $\otimes$ is the tensor (Kronecker) product. In particular, when $n = p$, the STP $A \ltimes B$ reduces to the usual product $AB$. It turns out that the STP possesses rich algebraic properties like the usual matrix product, such as associativity, left/right distributivity over matrix addition, and certain identity-like properties. Moreover, the STP is compatible with scalar multiplication, transposition, and inversion; see, e.g., [11,12,13,14]. Although Cheng's works concern real matrices, all results can be generalized to matrices over an arbitrary field, except those concerning eigenvalues and determinants, which require the underlying field to be algebraically closed and of characteristic not equal to two, respectively. The semi-tensor product provides a convenient way to convert higher-dimensional data or a multilinear function into a simple expression of matrices/vectors. For example, logical, fuzzy, and Boolean operations can be represented as the STP of a certain representing matrix and a data vector; see, e.g., [11]. Thus, the STP is a useful tool in several areas of pure mathematics, such as classical and fuzzy logic [15,16], multi-variable polynomials [11], lattice theory [11], finite-dimensional algebras [11], tensor fields [11], and connections in differential geometry [11]. The STP also serves as a powerful tool in applied mathematics, e.g., Boolean networks [17], game theory [18], and dynamic/nonlinear systems [19,20].
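As a quick numerical illustration of the definition above (not part of the original paper; real matrices only, so plain floating-point NumPy suffices), the following sketch computes the STP directly from its definition and checks that it reduces to the usual product when the dimensions match.

```python
# A minimal sketch of the semi-tensor product over the reals, following
# A ⋉ B = (A ⊗ I_{α/n})(B ⊗ I_{α/p}) with α = lcm(n, p).
import numpy as np
from math import lcm

def stp(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Semi-tensor product of A (m x n) and B (p x q)."""
    n, p = A.shape[1], B.shape[0]
    alpha = lcm(n, p)
    return np.kron(A, np.eye(alpha // n)) @ np.kron(B, np.eye(alpha // p))

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[0., 1.], [1., 0.]])
assert np.allclose(stp(A, B), A @ B)   # matching dimensions: STP = usual product

x = np.array([[1., 2., 3., 4.]]).T     # 4 x 1 column vector
print(stp(A, x).shape)                 # (4, 1): dimensions need not match
```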
In recent years, many authors have developed the theory of such matrix equations in which the usual matrix product is replaced by the STP. In 2015, Yao et al. [21] investigated the matrix equation
$$A \ltimes X = B, \qquad (1)$$
where $A$ and $B$ are given real matrices and $X$ is unknown. They discussed the compatibility of matrix dimensions, criteria for solvability and unique solvability, and how to transform the matrix Equation (1) into a linear system involving the usual matrix product. The same equation was investigated under the second matrix-matrix (MM-2) semi-tensor product in [22]. The matrix equation
$$A \ltimes X \ltimes B = C \qquad (2)$$
was discussed by Ji et al. [23], where $A$, $B$, and $C$ are given complex matrices. Li et al. [24] investigated the coupled matrix equations $A \ltimes X = B$ and $X \ltimes C = D$. There has also been a development for the nonlinear matrix equation $A \ltimes X \ltimes X = B$ in [25].
In this paper, we discuss the Sylvester-transpose matrix equation
$$A \ltimes X + X^T \ltimes B = C, \qquad (3)$$
in which the matrix products are STPs. All matrices are considered in full generality, namely, as matrices over an arbitrary field. Note that when the solution $X$ is required to be symmetric, the solutions of Equation (3) coincide with the symmetric solutions of the Sylvester equation $A \ltimes X + X \ltimes B = C$. When all matrix dimensions are compatible, we investigate criteria for the matrix equation to be solvable and uniquely solvable. These criteria rely on the ranks and linear dependence of associated matrices. Moreover, we transform the matrix equation into a linear system under the usual matrix product, so that both theoretical and computational tools can be applied to solve the equation.
The rest of this paper is organized as follows. In Section 2, we recall prerequisite theory for matrices over a field, involving the tensor product, the vector operator, and linear systems. In Section 3, we discuss Equation (3) when the unknown X is a vector. In Section 4, we investigate Equation (3) when the unknown is a matrix. Finally, we summarize the work in Section 5.

2. Preliminaries

Throughout this paper, let $\mathbb{F}$ be a field. Denote by $\mathbb{F}^{m \times n}$ the set of $m \times n$ matrices whose entries come from the field $\mathbb{F}$. The $j$-th column of a matrix $A$ is written as $\operatorname{col}_j(A)$.
In Section 1, we discussed the semi-tensor product of matrices, which involves the tensor product. Now, we recall the tensor product and the vector operator for matrices over a field. The tensor (Kronecker) product of matrices $A = [a_{ij}] \in \mathbb{F}^{m \times n}$ and $B \in \mathbb{F}^{a \times b}$ is defined to be $A \otimes B = [a_{ij} B] \in \mathbb{F}^{ma \times nb}$, i.e., the matrix whose $(i,j)$-th block is given by $a_{ij} B$. The vector operator $\operatorname{Vc}(\cdot)$ is an assignment that turns a matrix $A = [a_{ij}] \in \mathbb{F}^{m \times n}$ into a column vector defined as
$$\operatorname{Vc}(A) = \begin{bmatrix} a_{11} & a_{21} & \cdots & a_{m1} & \cdots & a_{1n} & a_{2n} & \cdots & a_{mn} \end{bmatrix}^T \in \mathbb{F}^{mn \times 1}.$$
It is clear that the vector operator is linear and bijective. Moreover, the vector operator relates the usual product and the tensor product as follows.
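Computationally, $\operatorname{Vc}$ is simply column-major (Fortran-order) flattening, and the tensor product is NumPy's kron. The small sketch below (an illustration only, with real entries) is reused in the later numerical checks.

```python
# Vc(A): stack the columns of A into a single column vector (column-major flattening).
import numpy as np

def vc(A: np.ndarray) -> np.ndarray:
    return A.reshape(-1, 1, order='F')

A = np.array([[1, 2, 3],
              [4, 5, 6]])
print(vc(A).ravel())                    # [1 4 2 5 3 6]
print(np.kron(A, np.eye(2)).shape)      # (4, 6): entry a_ij is replaced by the block a_ij * I_2
```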
Lemma 1.
For any $A \in \mathbb{F}^{m \times n}$, $B \in \mathbb{F}^{n \times p}$, and $C \in \mathbb{F}^{p \times q}$, we have
$$\operatorname{Vc}(ABC) = (C^T \otimes A)\operatorname{Vc}(B).$$
Proof. 
The proof is essentially the same as that for real matrices; see, e.g., [26]. Indeed, write $C = [c_{ij}]$. For convenience, let us denote the $k$-th column of any matrix $X$ by $X_k$. For each $k = 1, \dots, q$, we have
$$(ABC)_k = A(BC)_k = AB C_k = A \sum_{i=1}^{p} c_{ik} B_i = \begin{bmatrix} c_{1k} A & \cdots & c_{pk} A \end{bmatrix} \operatorname{Vc}(B) = (C_k^T \otimes A)\operatorname{Vc}(B).$$
Combining all columns, we obtain
$$\operatorname{Vc}(ABC) = \begin{bmatrix} (ABC)_1 \\ \vdots \\ (ABC)_q \end{bmatrix} = \begin{bmatrix} (C_1^T \otimes A)\operatorname{Vc}(B) \\ \vdots \\ (C_q^T \otimes A)\operatorname{Vc}(B) \end{bmatrix} = \begin{bmatrix} C_1^T \otimes A \\ \vdots \\ C_q^T \otimes A \end{bmatrix} \operatorname{Vc}(B).$$
The latter term is just $(C^T \otimes A)\operatorname{Vc}(B)$. □
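The identity of Lemma 1 can also be spot-checked numerically (an illustration only; the lemma itself is proved above for any field):

```python
# Numerical check of Lemma 1: Vc(ABC) = (C^T ⊗ A) Vc(B), for random integer matrices.
import numpy as np

def vc(M):
    return M.reshape(-1, 1, order='F')

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(2, 3)).astype(float)
B = rng.integers(-3, 4, size=(3, 4)).astype(float)
C = rng.integers(-3, 4, size=(4, 5)).astype(float)

assert np.allclose(vc(A @ B @ C), np.kron(C.T, A) @ vc(B))
```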
Let us denote the $i$-th column of the $k \times k$ identity matrix $I_k$ by $e_i^k$, for each $i = 1, 2, \dots, k$.
Lemma 2.
For any $A \in \mathbb{F}^{m \times n}$, we have
$$\operatorname{Vc}(A \otimes I_k) = \left( I_n \otimes \begin{bmatrix} I_m \otimes e_1^k \\ I_m \otimes e_2^k \\ \vdots \\ I_m \otimes e_k^k \end{bmatrix} \right) \operatorname{Vc}(A).$$
Proof. 
The proof is essentially the same as that for real matrices; see, e.g., [26]. □
Lemma 3.
For any $X \in \mathbb{F}^{m \times n}$, we have
$$\operatorname{Vc}(X^T) = W_{[n,m]} \operatorname{Vc}(X).$$
Here, $W_{[n,m]}$ is the swap matrix defined by
$$W_{[n,m]} = \sum_{i=1}^{m} \sum_{j=1}^{n} E_{ij} \otimes E_{ij}^T = \begin{bmatrix} I_m \otimes e_1^n & I_m \otimes e_2^n & \cdots & I_m \otimes e_n^n \end{bmatrix} \in \mathbb{F}^{mn \times mn},$$
where each $E_{ij} \in \mathbb{F}^{m \times n}$ has entry 1 in position $(i,j)$ and all other entries zero.
Proof. 
The proof is essentially the same as that for real matrices; see, e.g., [26]. Indeed, write $X = [x_{ij}]$. Note that for each $i, j$, we have $E_{ij}^T X E_{ij}^T = x_{ij} E_{ij}^T$. Thus, we can write
$$X^T = \sum_{i=1}^{m} \sum_{j=1}^{n} x_{ij} E_{ij}^T = \sum_{i=1}^{m} \sum_{j=1}^{n} E_{ij}^T X E_{ij}^T.$$
The desired result now follows from Lemma 1. □
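A direct way to see Lemma 3 at work is to build the swap matrix from its definition and test it on a small real matrix (again only an illustration; the statement holds over any field):

```python
# Swap matrix W_{[n,m]} = sum_{i,j} E_ij ⊗ E_ij^T with E_ij the m x n unit matrices,
# and a check that Vc(X^T) = W_{[n,m]} Vc(X) for an m x n matrix X.
import numpy as np

def vc(M):
    return M.reshape(-1, 1, order='F')

def swap_matrix(n: int, m: int) -> np.ndarray:
    W = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            E = np.zeros((m, n))
            E[i, j] = 1.0
            W += np.kron(E, E.T)
    return W

X = np.arange(6.0).reshape(2, 3)        # m = 2, n = 3
assert np.allclose(vc(X.T), swap_matrix(3, 2) @ vc(X))
```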
The theory of linear systems involving matrices over an arbitrary field works in the same way as that for real matrices. The following result is known as the Kronecker-Capelli theorem.
Theorem 1.
(See, e.g., [8].) Let $A \in \mathbb{F}^{m \times n}$ and $b \in \mathbb{F}^{m \times 1}$. Then, the linear system $Ax = b$ has a solution $x \in \mathbb{F}^{n \times 1}$ if and only if $\operatorname{rank} [A \;\; b] = \operatorname{rank} A$. Moreover,
(i) the system $Ax = b$ has a unique solution if and only if $\operatorname{rank} [A \;\; b] = \operatorname{rank} A = n$;
(ii) the system $Ax = b$ has infinitely many solutions if and only if $\operatorname{rank} [A \;\; b] = \operatorname{rank} A < n$.
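In floating-point arithmetic, Theorem 1 translates into comparing numerical ranks (over a finite field one would instead use exact row reduction). A minimal sketch of how the criterion is applied:

```python
# Classify a linear system A x = b via the Kronecker-Capelli criterion of Theorem 1.
import numpy as np

def classify(A: np.ndarray, b: np.ndarray) -> str:
    n = A.shape[1]                                  # number of unknowns
    rA = np.linalg.matrix_rank(A)
    rAb = np.linalg.matrix_rank(np.hstack([A, b.reshape(-1, 1)]))
    if rAb != rA:
        return "no solution"
    return "unique solution" if rA == n else "infinitely many solutions"

A = np.array([[1., 0.], [0., 1.], [1., 1.]])
print(classify(A, np.array([1., 2., 3.])))          # unique solution
print(classify(A, np.array([1., 2., 4.])))          # no solution
```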

3. The Sylvester-Transpose Equation in an Unknown Vector

In this section, we investigate the Sylvester-transpose equation
$$A \ltimes X + X^T \ltimes B = C, \qquad (4)$$
where $A \in \mathbb{F}^{m \times n}$, $B \in \mathbb{F}^{a \times b}$, and $C \in \mathbb{F}^{r \times s}$ are given matrices, and
$$X = \begin{bmatrix} x_1 & x_2 & \cdots & x_p \end{bmatrix}^T \in \mathbb{F}^{p \times 1} \qquad (5)$$
is an unknown. Let us denote $t_1 = \operatorname{lcm}\{n, p\}$ and $t_2 = \operatorname{lcm}\{p, a\}$; here, lcm stands for the least common multiple.

3.1. A Simple Case m = r

In this subsection, we consider the simple case $m = r$. First of all, we discuss the compatibility conditions for the matrix dimensions.
Remark 1.
From the definition of the semi-tensor product, we have
$$A \ltimes X = (A \otimes I_{t_1/n})(X \otimes I_{t_1/p}) \in \mathbb{F}^{\frac{m t_1}{n} \times \frac{t_1}{p}}, \qquad X^T \ltimes B = (X^T \otimes I_{t_2/p})(B \otimes I_{t_2/a}) \in \mathbb{F}^{\frac{t_2}{p} \times \frac{b t_2}{a}}.$$
Since $C \in \mathbb{F}^{r \times s}$, this forces $m = r = m t_1 / n = t_2 / p$ and $s = t_1 / p = b t_2 / a$. Thus, all matrix dimensions in Equation (4) are compatible if and only if
(i) $p = t_2 / m = n / s$;
(ii) $s = b t_2 / a$.
In particular, we have $p \mid n$, i.e., $t_1 = n$.
From now on, suppose that the compatibility conditions in Remark 1 hold. We have
$$A \ltimes X = A (X \otimes I_{n/p}) \quad \text{and} \quad X^T \ltimes B = (X^T \otimes I_m)(B \otimes I_{s/b}).$$
Let us partition $A$ and $B \otimes I_{s/b}$ into $p$ equal-sized blocks $\check{A}_1, \check{A}_2, \dots, \check{A}_p \in \mathbb{F}^{r \times s}$ and $\check{B}_1, \check{B}_2, \dots, \check{B}_p \in \mathbb{F}^{r \times s}$, respectively, so that
$$A \ltimes X + X^T \ltimes B = \begin{bmatrix} \check{A}_1 & \check{A}_2 & \cdots & \check{A}_p \end{bmatrix} \ltimes X + X^T \ltimes \begin{bmatrix} \check{B}_1 \\ \check{B}_2 \\ \vdots \\ \check{B}_p \end{bmatrix} = C. \qquad (6)$$
Since $X$ is the column vector (5), we obtain
$$x_1 (\check{A}_1 + \check{B}_1) + x_2 (\check{A}_2 + \check{B}_2) + \cdots + x_p (\check{A}_p + \check{B}_p) = C. \qquad (7)$$
Hence, we can deduce the following result.
Proposition 1.
Assume that all matrix dimensions in Equation (4) are compatible. Then, the following statements are equivalent:
(i) Equation (4) has a solution.
(ii) The set $\{\check{A}_1 + \check{B}_1, \check{A}_2 + \check{B}_2, \dots, \check{A}_p + \check{B}_p, C\}$ is linearly dependent in the vector space $\mathbb{F}^{r \times s}$.
Moreover, if Equation (4) has a solution, then
$$\operatorname{rank} \begin{bmatrix} \check{A}_1 + \check{B}_1 & \cdots & \check{A}_p + \check{B}_p \end{bmatrix} = \operatorname{rank} \begin{bmatrix} \check{A}_1 + \check{B}_1 & \cdots & \check{A}_p + \check{B}_p & C \end{bmatrix}. \qquad (8)$$
Proof. 
From the above discussion, we see that Equation (4) is equivalent to Equation (7). Thus, the solvability of Equation (4) means that we can write $C$ as a linear combination of $\check{A}_1 + \check{B}_1, \check{A}_2 + \check{B}_2, \dots, \check{A}_p + \check{B}_p$, or, equivalently, that condition (ii) holds. If (i) or (ii) holds, then Equation (7) says that each column of $C$ can be written as a linear combination of some columns of $\check{A}_1 + \check{B}_1, \check{A}_2 + \check{B}_2, \dots, \check{A}_p + \check{B}_p$, i.e., the rank condition (8) holds. □
Remark 2.
If the rank condition (8) holds, Equation (4) may still have no solution, even in the case of real matrices. To see this, let
$$A = \begin{bmatrix} 0 & 1 & 1 & 2 \\ 1 & 0 & 0 & 1 \end{bmatrix} \in \mathbb{R}^{2 \times 4}, \quad B = \begin{bmatrix} 2 & 0 \\ 1 & 1 \\ 1 & 2 \\ 0 & 1 \end{bmatrix} \in \mathbb{R}^{4 \times 2}, \quad \text{and} \quad C = \begin{bmatrix} 1 & 0 \\ 0 & 3 \end{bmatrix} \in \mathbb{R}^{2 \times 2}.$$
Let us write
$$\check{A}_1 = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \quad \check{A}_2 = \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}, \quad \check{B}_1 = \begin{bmatrix} 2 & 0 \\ 1 & 1 \end{bmatrix}, \quad \check{B}_2 = \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}.$$
We have $\operatorname{rank} [\check{A}_1 + \check{B}_1 \;\; \check{A}_2 + \check{B}_2 \;\; C] = \operatorname{rank} [\check{A}_1 + \check{B}_1 \;\; \check{A}_2 + \check{B}_2] = 2$. However, when we write the matrix Equation (4) as a system of linear equations, we see that it has no solution.
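The counterexample of Remark 2 is easy to verify numerically. The sketch below (real arithmetic, not part of the original text) confirms that the rank condition (8) holds while the equivalent vectorized system, introduced formally in Theorem 2 below, is inconsistent.

```python
# Remark 2: rank condition (8) holds, yet x1*(A1+B1) + x2*(A2+B2) = C has no solution.
import numpy as np

A1 = np.array([[0., 1.], [1., 0.]]);  A2 = np.array([[1., 2.], [0., 1.]])
B1 = np.array([[2., 0.], [1., 1.]]);  B2 = np.array([[1., 2.], [0., 1.]])
C  = np.array([[1., 0.], [0., 3.]])

S1, S2 = A1 + B1, A2 + B2
print(np.linalg.matrix_rank(np.hstack([S1, S2])),
      np.linalg.matrix_rank(np.hstack([S1, S2, C])))        # 2 2 -> condition (8) holds

Abar = np.column_stack([S1.flatten('F'), S2.flatten('F')])   # columns Vc(A_i + B_i)
b = C.flatten('F')                                           # Vc(C)
print(np.linalg.matrix_rank(Abar),
      np.linalg.matrix_rank(np.column_stack([Abar, b])))     # 2 3 -> no solution
```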
Theorem 2.
Assume that all matrix dimensions in Equation (4) are compatible. Then, Equation (4) is equivalent to the following linear system:
$$\bar{A} X = \operatorname{Vc}(C), \qquad (9)$$
where
$$\bar{A} = \begin{bmatrix} \operatorname{Vc}(\check{A}_1 + \check{B}_1) & \cdots & \operatorname{Vc}(\check{A}_p + \check{B}_p) \end{bmatrix} = \begin{bmatrix} \operatorname{col}_1(\check{A}_1) + \operatorname{col}_1(\check{B}_1) & \cdots & \operatorname{col}_1(\check{A}_p) + \operatorname{col}_1(\check{B}_p) \\ \operatorname{col}_2(\check{A}_1) + \operatorname{col}_2(\check{B}_1) & \cdots & \operatorname{col}_2(\check{A}_p) + \operatorname{col}_2(\check{B}_p) \\ \vdots & & \vdots \\ \operatorname{col}_s(\check{A}_1) + \operatorname{col}_s(\check{B}_1) & \cdots & \operatorname{col}_s(\check{A}_p) + \operatorname{col}_s(\check{B}_p) \end{bmatrix}.$$
Thus, the following statements are equivalent:
(i) The Sylvester-transpose Equation (4) has a solution.
(ii) The linear system (9) has a solution.
(iii) The set $\{\operatorname{Vc}(\check{A}_1 + \check{B}_1), \operatorname{Vc}(\check{A}_2 + \check{B}_2), \dots, \operatorname{Vc}(\check{A}_p + \check{B}_p), \operatorname{Vc}(C)\}$ is linearly dependent in the vector space $\mathbb{F}^{rs \times 1}$.
(iv) $\operatorname{rank} [\bar{A} \;\; \operatorname{Vc}(C)] = \operatorname{rank} \bar{A}$.
Proof. 
We can apply the vector operator to Equation (7) to obtain the following equivalent equation:
$$x_1 \operatorname{Vc}(\check{A}_1 + \check{B}_1) + x_2 \operatorname{Vc}(\check{A}_2 + \check{B}_2) + \cdots + x_p \operatorname{Vc}(\check{A}_p + \check{B}_p) = \operatorname{Vc}(C). \qquad (10)$$
A direct computation reveals that (10) can be put into the linear system (9). It follows that the solvability of the Sylvester-transpose Equation (4) means that we can write $\operatorname{Vc}(C)$ as a linear combination of $\operatorname{Vc}(\check{A}_1 + \check{B}_1), \operatorname{Vc}(\check{A}_2 + \check{B}_2), \dots, \operatorname{Vc}(\check{A}_p + \check{B}_p)$, or, equivalently, that the set $\{\operatorname{Vc}(\check{A}_1 + \check{B}_1), \operatorname{Vc}(\check{A}_2 + \check{B}_2), \dots, \operatorname{Vc}(\check{A}_p + \check{B}_p), \operatorname{Vc}(C)\}$ is linearly dependent. Now, since $\operatorname{Vc}(\check{A}_i + \check{B}_i)$ and $\operatorname{Vc}(C)$ are column vectors for all $i$, assertion (iii) is equivalent to the rank condition (iv) on the augmented matrix, namely, $\operatorname{rank} [\bar{A} \;\; \operatorname{Vc}(C)] = \operatorname{rank} \bar{A}$. □

3.2. The General Case

In this subsection, we study the solvability of the Sylvester-transpose Equation (4) when $m$ is not necessarily equal to $r$. In this case, $n$ is not necessarily equal to $t_1$, so the matrix partitionings are more complicated than in the simple case $m = r$.
Remark 3.
Let $A \in \mathbb{F}^{m \times n}$, $B \in \mathbb{F}^{a \times b}$, and $C \in \mathbb{F}^{r \times s}$ be given matrices, and let $X \in \mathbb{F}^{p \times 1}$ be an unknown. By the definition of the semi-tensor product, we have
$$A \ltimes X = (A \otimes I_{t_1/n})(X \otimes I_{t_1/p}) \in \mathbb{F}^{\frac{m t_1}{n} \times \frac{t_1}{p}}, \qquad X^T \ltimes B = (X^T \otimes I_{t_2/p})(B \otimes I_{t_2/a}) \in \mathbb{F}^{\frac{t_2}{p} \times \frac{b t_2}{a}}.$$
Thus, all matrix dimensions in Equation (4) are compatible if and only if
(i) $r = m t_1 / n = t_2 / p$;
(ii) $s = t_1 / p = b t_2 / a$.
Now, we assume that all matrix dimensions in Equation (4) are compatible according to Remark 3. Since $s = t_1/p$ and $r = t_2/p$, we obtain
$$A \ltimes X = (A \otimes I_{t_1/n})(X \otimes I_s) \quad \text{and} \quad X^T \ltimes B = (X^T \otimes I_r)(B \otimes I_{t_2/a}).$$
We split $A \otimes I_{t_1/n} = [\check{A}_i]$ and $B \otimes I_{t_2/a} = [\check{B}_i]$, where $\check{A}_i, \check{B}_i \in \mathbb{F}^{r \times s}$ for each $i = 1, \dots, p$, so that
$$A \ltimes X + X^T \ltimes B = \begin{bmatrix} \check{A}_1 & \check{A}_2 & \cdots & \check{A}_p \end{bmatrix} \ltimes X + X^T \ltimes \begin{bmatrix} \check{B}_1 \\ \check{B}_2 \\ \vdots \\ \check{B}_p \end{bmatrix} = C. \qquad (11)$$
Consequently, we have
$$x_1 (\check{A}_1 + \check{B}_1) + x_2 (\check{A}_2 + \check{B}_2) + \cdots + x_p (\check{A}_p + \check{B}_p) = C. \qquad (12)$$
Applying the vector operator to Equation (12), we obtain
$$x_1 \operatorname{Vc}(\check{A}_1 + \check{B}_1) + x_2 \operatorname{Vc}(\check{A}_2 + \check{B}_2) + \cdots + x_p \operatorname{Vc}(\check{A}_p + \check{B}_p) = \operatorname{Vc}(C). \qquad (13)$$
From Equation (13), we can deduce the following result.
Theorem 3.
Assume that all matrix dimensions in the Sylvester-transpose Equation (4) are compatible. Then, Equation (4) is equivalent to the linear system
$$\bar{A} X = \operatorname{Vc}(C), \qquad (14)$$
where
$$\bar{A} = \begin{bmatrix} \operatorname{Vc}(\check{A}_1 + \check{B}_1) & \cdots & \operatorname{Vc}(\check{A}_p + \check{B}_p) \end{bmatrix} = \begin{bmatrix} \operatorname{col}_1(\check{A}_1) + \operatorname{col}_1(\check{B}_1) & \cdots & \operatorname{col}_1(\check{A}_p) + \operatorname{col}_1(\check{B}_p) \\ \operatorname{col}_2(\check{A}_1) + \operatorname{col}_2(\check{B}_1) & \cdots & \operatorname{col}_2(\check{A}_p) + \operatorname{col}_2(\check{B}_p) \\ \vdots & & \vdots \\ \operatorname{col}_s(\check{A}_1) + \operatorname{col}_s(\check{B}_1) & \cdots & \operatorname{col}_s(\check{A}_p) + \operatorname{col}_s(\check{B}_p) \end{bmatrix}.$$
Thus, the following statements are equivalent:
(i) The Sylvester-transpose Equation (4) has a solution.
(ii) The linear system (14) has a solution.
(iii) The set $\{\operatorname{Vc}(\check{A}_1 + \check{B}_1), \operatorname{Vc}(\check{A}_2 + \check{B}_2), \dots, \operatorname{Vc}(\check{A}_p + \check{B}_p), \operatorname{Vc}(C)\}$ is linearly dependent in the vector space $\mathbb{F}^{rs \times 1}$.
(iv) $\operatorname{rank} [\bar{A} \;\; \operatorname{Vc}(C)] = \operatorname{rank} \bar{A}$.
Applying Theorem 3 together with the Kronecker-Capelli theorem (Theorem 1) yields the following:
Corollary 1.
(i) The linear system (14), and hence Equation (4), has a unique solution if and only if
$$\operatorname{rank} [\bar{A} \;\; \operatorname{Vc}(C)] = \operatorname{rank} \bar{A} = p;$$
(ii) Equation (14) has infinitely many solutions if and only if
$$\operatorname{rank} [\bar{A} \;\; \operatorname{Vc}(C)] = \operatorname{rank} \bar{A} < p.$$
Here, $p$ is the number of unknown entries of $X$, i.e., the number of columns of $\bar{A}$.
Example 1.
Consider the Sylvester-transpose Equation (4) when
$$A = \begin{bmatrix} 1 & 2 & 0 \\ 3 & 1 & 3 \end{bmatrix}, \quad B = \begin{bmatrix} 1 & 3 & 0 & 2 & 2 & 1 & 3 & 1 \\ 2 & 1 & 3 & 0 & 2 & 2 & 0 & 3 \\ 0 & 3 & 1 & 3 & 2 & 3 & 2 & 0 \end{bmatrix}^T, \quad \text{and} \quad C = \begin{bmatrix} 4 & 4 & 4 \\ 6 & 4 & 6 \\ 6 & 6 & 4 \\ 4 & 6 & 6 \end{bmatrix}.$$
Denote
$$\check{A}_1 = \begin{bmatrix} 1 & 0 & 2 \\ 0 & 1 & 0 \\ 3 & 0 & 1 \\ 0 & 3 & 0 \end{bmatrix}, \quad \check{A}_2 = \begin{bmatrix} 0 & 0 & 0 \\ 2 & 0 & 0 \\ 0 & 3 & 0 \\ 1 & 0 & 3 \end{bmatrix}, \quad \check{B}_1 = \begin{bmatrix} 1 & 2 & 0 \\ 3 & 1 & 3 \\ 0 & 3 & 1 \\ 2 & 0 & 3 \end{bmatrix}, \quad \check{B}_2 = \begin{bmatrix} 2 & 2 & 2 \\ 1 & 2 & 3 \\ 3 & 0 & 2 \\ 1 & 3 & 0 \end{bmatrix}.$$
According to Theorem 3, Equation (4) is equivalent to the linear system (14), where
$$\bar{A} = \begin{bmatrix} \operatorname{Vc}(\check{A}_1 + \check{B}_1) & \operatorname{Vc}(\check{A}_2 + \check{B}_2) \end{bmatrix} = \begin{bmatrix} 2 & 3 & 3 & 2 & 2 & 2 & 3 & 3 & 2 & 3 & 2 & 3 \\ 2 & 3 & 3 & 2 & 2 & 2 & 3 & 3 & 2 & 3 & 2 & 3 \end{bmatrix}^T, \quad \operatorname{Vc}(C) = \begin{bmatrix} 4 & 6 & 6 & 4 & 4 & 4 & 6 & 6 & 4 & 6 & 4 & 6 \end{bmatrix}^T.$$
We can solve this linear system to obtain $X = \begin{bmatrix} x_1 & 2 - x_1 \end{bmatrix}^T$, where $x_1 \in \mathbb{R}$ is arbitrary.
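The computations of Example 1 can be reproduced with a few lines of NumPy (a sketch for illustration; the blocks are typed in directly rather than generated from $A$ and $B$):

```python
# Example 1: build Abar from the blocks, check the ranks of Corollary 1, and verify
# that X = [x1, 2 - x1]^T solves the system (here with x1 = 0).
import numpy as np

def vc(M):
    return M.reshape(-1, 1, order='F')

A1 = np.array([[1., 0, 2], [0, 1, 0], [3, 0, 1], [0, 3, 0]])
A2 = np.array([[0., 0, 0], [2, 0, 0], [0, 3, 0], [1, 0, 3]])
B1 = np.array([[1., 2, 0], [3, 1, 3], [0, 3, 1], [2, 0, 3]])
B2 = np.array([[2., 2, 2], [1, 2, 3], [3, 0, 2], [1, 3, 0]])
C  = np.array([[4., 4, 4], [6, 4, 6], [6, 6, 4], [4, 6, 6]])

Abar = np.hstack([vc(A1 + B1), vc(A2 + B2)])                 # 12 x 2
print(np.linalg.matrix_rank(Abar),
      np.linalg.matrix_rank(np.hstack([Abar, vc(C)])))       # 1 1 -> infinitely many solutions

x = np.array([[0.], [2.]])                                   # x1 = 0, x2 = 2 - x1
assert np.allclose(Abar @ x, vc(C))
```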

4. The Sylvester-Transpose Equation in an Unknown Matrix

In this section, we investigate the Sylvester-transpose matrix equation
$$A \ltimes X + X^T \ltimes B = C, \qquad (15)$$
where $A \in \mathbb{F}^{m \times n}$, $B \in \mathbb{F}^{a \times b}$, and $C \in \mathbb{F}^{r \times s}$ are given matrices, and $X \in \mathbb{F}^{p \times q}$ is an unknown matrix.
Let us denote $t_1 = \operatorname{lcm}\{n, p\}$ and $t_2 = \operatorname{lcm}\{p, a\}$. By the definition of the STP, we have
$$A \ltimes X = (A \otimes I_{t_1/n})(X \otimes I_{t_1/p}) \in \mathbb{F}^{\frac{m t_1}{n} \times \frac{q t_1}{p}}, \qquad X^T \ltimes B = (X^T \otimes I_{t_2/p})(B \otimes I_{t_2/a}) \in \mathbb{F}^{\frac{q t_2}{p} \times \frac{b t_2}{a}}.$$
Thus, all matrix dimensions in Equation (15) are compatible if and only if
(i) $r = m t_1 / n = q t_2 / p$;
(ii) $s = q t_1 / p = b t_2 / a$.
From now on, assume that all matrix dimensions are compatible. To transform the Sylvester-transpose Equation (15) into a linear system, we make the following matrix partitioning:
• $A \otimes I_{t_1/n} = [\check{A}_{ij}]$, where $\check{A}_{ij} \in \mathbb{F}^{\frac{t_2}{p} \times \frac{t_1}{p}}$ for each $i = 1, \dots, q$ and $j = 1, \dots, p$;
• $X \otimes I_{t_1/p} = [\check{X}_{ij}]$, where $\check{X}_{ij} = x_{ij} I_{t_1/p}$ for each $i = 1, \dots, p$ and $j = 1, \dots, q$;
• $B \otimes I_{t_2/a} = [\check{B}_{ij}]$, where $\check{B}_{ij} \in \mathbb{F}^{\frac{t_2}{p} \times \frac{t_1}{p}}$ for each $i = 1, \dots, p$ and $j = 1, \dots, q$;
• $X^T \otimes I_{t_2/p} = [\bar{X}_{ij}]$, where $\bar{X}_{ij} = x_{ij} I_{t_2/p}$ for each $i = 1, \dots, p$ and $j = 1, \dots, q$;
• $C = [\check{C}_{ij}]$, where $\check{C}_{ij} \in \mathbb{F}^{\frac{t_2}{p} \times \frac{t_1}{p}}$ for each $i, j = 1, \dots, q$.
Let $\check{X}_j$ be the $j$-th column block of $X \otimes I_{t_1/p}$, and $\bar{X}_i$ the $i$-th row block of $X^T \otimes I_{t_2/p}$. Then, we obtain
$$A \ltimes X + X^T \ltimes B = \begin{bmatrix} \check{A}_{11} & \cdots & \check{A}_{1p} \\ \vdots & & \vdots \\ \check{A}_{q1} & \cdots & \check{A}_{qp} \end{bmatrix} \begin{bmatrix} \check{X}_{11} & \cdots & \check{X}_{1q} \\ \vdots & & \vdots \\ \check{X}_{p1} & \cdots & \check{X}_{pq} \end{bmatrix} + \begin{bmatrix} \bar{X}_{11} & \cdots & \bar{X}_{p1} \\ \vdots & & \vdots \\ \bar{X}_{1q} & \cdots & \bar{X}_{pq} \end{bmatrix} \begin{bmatrix} \check{B}_{11} & \cdots & \check{B}_{1q} \\ \vdots & & \vdots \\ \check{B}_{p1} & \cdots & \check{B}_{pq} \end{bmatrix} = \begin{bmatrix} \check{C}_{11} & \cdots & \check{C}_{1q} \\ \vdots & & \vdots \\ \check{C}_{q1} & \cdots & \check{C}_{qq} \end{bmatrix}.$$
By considering each block in the above equation, we obtain
$$\check{C}_{ij} = \check{A}_{i1}\check{X}_{1j} + \cdots + \check{A}_{ip}\check{X}_{pj} + \bar{X}_{1i}\check{B}_{1j} + \cdots + \bar{X}_{pi}\check{B}_{pj} = x_{1j}\check{A}_{i1} + \cdots + x_{pj}\check{A}_{ip} + x_{1i}\check{B}_{1j} + \cdots + x_{pi}\check{B}_{pj}.$$
Applying the vector operator to the above equation yields
$$\operatorname{Vc}(\check{C}_{ij}) = x_{1j}\operatorname{Vc}(\check{A}_{i1}) + \cdots + x_{pj}\operatorname{Vc}(\check{A}_{ip}) + x_{1i}\operatorname{Vc}(\check{B}_{1j}) + \cdots + x_{pi}\operatorname{Vc}(\check{B}_{pj}). \qquad (16)$$
From now on, denote $\beta = t_1 t_2 / p^2$. From Equation (16), we arrive at a necessary criterion for the solvability of Equation (15), as follows.
Proposition 2.
Assume that the Sylvester-transpose matrix Equation (15) has a solution. Then, for each $i = 1, \dots, q$ and $j = 1, \dots, q$, the set
$$\{\operatorname{Vc}(\check{A}_{i1}), \dots, \operatorname{Vc}(\check{A}_{ip}), \operatorname{Vc}(\check{B}_{1j}), \dots, \operatorname{Vc}(\check{B}_{pj}), \operatorname{Vc}(\check{C}_{ij})\}$$
is linearly dependent in the vector space $\mathbb{F}^{\beta \times 1}$.
Now, putting Equation (16) for all $i, j$ together yields
$$\tilde{A} \ltimes X + X^T \ltimes \tilde{B} = \tilde{C}, \qquad (17)$$
where
$$\tilde{A} = \begin{bmatrix} \operatorname{Vc}(\check{A}_{11}) & \cdots & \operatorname{Vc}(\check{A}_{1p}) \\ \vdots & & \vdots \\ \operatorname{Vc}(\check{A}_{q1}) & \cdots & \operatorname{Vc}(\check{A}_{qp}) \end{bmatrix}, \quad \tilde{B} = \begin{bmatrix} \operatorname{Vc}(\check{B}_{11}) & \cdots & \operatorname{Vc}(\check{B}_{1q}) \\ \vdots & & \vdots \\ \operatorname{Vc}(\check{B}_{p1}) & \cdots & \operatorname{Vc}(\check{B}_{pq}) \end{bmatrix}, \quad \tilde{C} = \begin{bmatrix} \operatorname{Vc}(\check{C}_{11}) & \cdots & \operatorname{Vc}(\check{C}_{1q}) \\ \vdots & & \vdots \\ \operatorname{Vc}(\check{C}_{q1}) & \cdots & \operatorname{Vc}(\check{C}_{qq}) \end{bmatrix}.$$
We apply the vector operator to Equation (17) to obtain
$$\operatorname{Vc}(\tilde{C}) = (I_q \otimes \tilde{A})\operatorname{Vc}(X) + (\tilde{B}^T \otimes I_{\beta q})\operatorname{Vc}(X^T \otimes I_\beta). \qquad (18)$$
By introducing the notation
$$K_{\beta, p, q} = I_p \otimes \begin{bmatrix} I_q \otimes e_1^\beta \\ I_q \otimes e_2^\beta \\ \vdots \\ I_q \otimes e_\beta^\beta \end{bmatrix}$$
and utilizing Lemmas 2 and 3, we can rewrite Equation (18) as
$$\operatorname{Vc}(\tilde{C}) = (I_q \otimes \tilde{A})\operatorname{Vc}(X) + (\tilde{B}^T \otimes I_{\beta q}) K_{\beta, p, q} \operatorname{Vc}(X^T) = (I_q \otimes \tilde{A})\operatorname{Vc}(X) + (\tilde{B}^T \otimes I_{\beta q}) K_{\beta, p, q} W_{[q,p]} \operatorname{Vc}(X).$$
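The auxiliary matrices $K_{\beta,p,q}$ and $W_{[q,p]}$ are straightforward to assemble from Kronecker products. The following sketch (real arithmetic, illustration only) builds $K_{\beta,p,q}$ as defined above and checks the role it plays via Lemma 2, namely $\operatorname{Vc}(X^T \otimes I_\beta) = K_{\beta,p,q}\operatorname{Vc}(X^T)$:

```python
# Construct K_{β,p,q} = I_p ⊗ [I_q ⊗ e_1^β ; ... ; I_q ⊗ e_β^β] (blocks stacked vertically)
# and verify Vc(X^T ⊗ I_β) = K_{β,p,q} Vc(X^T) for a small X in F^{p x q}.
import numpy as np

def vc(M):
    return M.reshape(-1, 1, order='F')

def K(beta: int, p: int, q: int) -> np.ndarray:
    I_beta = np.eye(beta)
    stack = np.vstack([np.kron(np.eye(q), I_beta[:, [i]]) for i in range(beta)])
    return np.kron(np.eye(p), stack)

p, q, beta = 2, 3, 2
X = np.arange(float(p * q)).reshape(p, q)
assert np.allclose(vc(np.kron(X.T, np.eye(beta))), K(beta, p, q) @ vc(X.T))
```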
Then, we can deduce the following result.
Theorem 4.
Assume that all matrix dimensions in Equation (15) are compatible. Then, the Sylvester-transpose matrix Equation (15) is equivalent to the following linear system:
$$M \operatorname{Vc}(X) = \operatorname{Vc}(\tilde{C}), \qquad (19)$$
where
$$M = (I_q \otimes \tilde{A}) + (\tilde{B}^T \otimes I_{\beta q}) K_{\beta, p, q} W_{[q,p]}.$$
Thus, the following statements are equivalent:
(i) The Sylvester-transpose matrix Equation (15) has a solution.
(ii) The linear system (19) has a solution.
(iii) $\operatorname{rank} [M \;\; \operatorname{Vc}(\tilde{C})] = \operatorname{rank} M$.
Applying Theorem 4 together with Theorem 1 yields the following:
Corollary 2.
The Sylvester-transpose Equation (15) has a unique solution if and only if
$$\operatorname{rank} [M \;\; \operatorname{Vc}(\tilde{C})] = \operatorname{rank} M = pq.$$
The matrix Equation (15) has infinitely many solutions if and only if
$$\operatorname{rank} [M \;\; \operatorname{Vc}(\tilde{C})] = \operatorname{rank} M < pq.$$
Here, $pq$ is the number of unknown entries of $X \in \mathbb{F}^{p \times q}$, i.e., the number of columns of $M$.
Example 2.
Consider the Sylvester-transpose matrix Equation (15), where
$$A = \begin{bmatrix} 1 & 0 & 1 \\ 2 & 1 & 3 \end{bmatrix}, \quad B = \begin{bmatrix} 1 & 0 & 2 & 0 & 1 & 3 \\ 0 & 2 & 3 & 1 & 1 & 0 \\ 2 & 1 & 1 & 2 & 1 & 0 \\ 1 & 1 & 3 & 0 & 1 & 1 \end{bmatrix}, \quad \text{and} \quad C = \begin{bmatrix} 4 & 0 & 4 & 1 & 3 & 6 \\ 0 & 6 & 6 & 2 & 3 & 1 \\ 7 & 1 & 5 & 4 & 5 & 6 \\ 1 & 7 & 6 & 2 & 4 & 4 \end{bmatrix}.$$
Let us write
$$\check{A}_{11} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, \quad \check{A}_{12} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad \check{A}_{21} = \begin{bmatrix} 2 & 0 & 1 \\ 0 & 2 & 0 \end{bmatrix}, \quad \check{A}_{22} = \begin{bmatrix} 0 & 3 & 0 \\ 1 & 0 & 3 \end{bmatrix},$$
$$\check{B}_{11} = \begin{bmatrix} 1 & 0 & 2 \\ 0 & 2 & 3 \end{bmatrix}, \quad \check{B}_{12} = \begin{bmatrix} 0 & 1 & 3 \\ 1 & 1 & 0 \end{bmatrix}, \quad \check{B}_{21} = \begin{bmatrix} 2 & 1 & 1 \\ 1 & 1 & 3 \end{bmatrix}, \quad \check{B}_{22} = \begin{bmatrix} 2 & 1 & 0 \\ 0 & 1 & 1 \end{bmatrix}.$$
Let us form the two matrices
$$\tilde{A} = \begin{bmatrix} \operatorname{Vc}(\check{A}_{11}) & \operatorname{Vc}(\check{A}_{12}) \\ \operatorname{Vc}(\check{A}_{21}) & \operatorname{Vc}(\check{A}_{22}) \end{bmatrix}, \quad \tilde{B} = \begin{bmatrix} \operatorname{Vc}(\check{B}_{11}) & \operatorname{Vc}(\check{B}_{12}) \\ \operatorname{Vc}(\check{B}_{21}) & \operatorname{Vc}(\check{B}_{22}) \end{bmatrix}.$$
According to Theorem 4, this matrix equation is equivalent to the linear system (19), where
$$M = (I_2 \otimes \tilde{A}) + (\tilde{B}^T \otimes I_{12}) K_{6,2,2} W_{[2,2]}, \quad \tilde{A} = \begin{bmatrix} 1 & 0 & 0 & 1 & 0 & 0 & 2 & 0 & 0 & 2 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 3 & 0 & 0 & 3 \end{bmatrix}^T, \quad \tilde{B} = \begin{bmatrix} 1 & 0 & 0 & 2 & 2 & 3 & 2 & 1 & 1 & 1 & 1 & 3 \\ 0 & 1 & 1 & 1 & 3 & 0 & 2 & 0 & 1 & 1 & 0 & 1 \end{bmatrix}^T.$$
We can solve this linear system to obtain the unique solution
$$X = \begin{bmatrix} 2 & 1 \\ 0 & 1 \end{bmatrix}.$$
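Example 2 can be reproduced end to end. The sketch below (real arithmetic, not the authors' code) assembles $\tilde{A}$, $\tilde{B}$, $\tilde{C}$, and $M$ exactly as in Theorem 4 and recovers the unique solution:

```python
# Example 2: build the blocks, form M = (I_q ⊗ Ã) + (B̃^T ⊗ I_{βq}) K_{β,p,q} W_{[q,p]},
# and solve M Vc(X) = Vc(C̃); the solution reshapes to X = [[2, 1], [0, 1]].
import numpy as np
from math import lcm

def vc(M): return M.reshape(-1, 1, order='F')

def swap_matrix(n, m):
    W = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            E = np.zeros((m, n)); E[i, j] = 1.0
            W += np.kron(E, E.T)
    return W

def K(beta, p, q):
    stack = np.vstack([np.kron(np.eye(q), np.eye(beta)[:, [i]]) for i in range(beta)])
    return np.kron(np.eye(p), stack)

A = np.array([[1., 0, 1], [2, 1, 3]])
B = np.array([[1., 0, 2, 0, 1, 3], [0, 2, 3, 1, 1, 0], [2, 1, 1, 2, 1, 0], [1, 1, 3, 0, 1, 1]])
C = np.array([[4., 0, 4, 1, 3, 6], [0, 6, 6, 2, 3, 1], [7, 1, 5, 4, 5, 6], [1, 7, 6, 2, 4, 4]])

p = q = 2
m, n = A.shape; a, b = B.shape
t1, t2 = lcm(n, p), lcm(p, a)
beta = t1 * t2 // p**2                             # = 6

AI = np.kron(A, np.eye(t1 // n))                   # A ⊗ I_{t1/n}
BI = np.kron(B, np.eye(t2 // a))                   # B ⊗ I_{t2/a}
rb, cb = t2 // p, t1 // p                          # block size of the Ǎ_ij, B̌_ij, Č_ij

Atil = np.block([[vc(AI[i*rb:(i+1)*rb, j*cb:(j+1)*cb]) for j in range(p)] for i in range(q)])
Btil = np.block([[vc(BI[i*rb:(i+1)*rb, j*cb:(j+1)*cb]) for j in range(q)] for i in range(p)])
Ctil = np.block([[vc(C[i*rb:(i+1)*rb, j*cb:(j+1)*cb]) for j in range(q)] for i in range(q)])

M = np.kron(np.eye(q), Atil) + np.kron(Btil.T, np.eye(beta * q)) @ K(beta, p, q) @ swap_matrix(q, p)
x = np.linalg.lstsq(M, vc(Ctil), rcond=None)[0]
print(x.reshape(p, q, order='F'))                  # [[2. 1.]
                                                   #  [0. 1.]]
```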
Remark 4.
Consider Equation (15) in the simple case $m = r$ or, equivalently, $t_1 = n$ or $p \mid n$. We have $t_1/p = n/p = s/q$, $t_2/p = r/q = m/q$, and $\beta = ms/q^2$. Thus, the block partitionings reduce to simple ones.
Consider Equation (15) when $n = p = a$. We have $t_1 = t_2 = p$ and $m = q = r = b$. It follows that all block matrices $\check{A}_{ij}$, $\check{B}_{ij}$, $\check{C}_{ij}$, $\check{X}_{ij}$, and $\bar{X}_{ij}$ are of dimension $1 \times 1$; that is, we do not need to partition them. Moreover, $\tilde{A} = A$, $\tilde{B} = B$, and $\tilde{C} = C$. We also have $\beta = 1$, which implies $K_{\beta, p, q} = I_p \otimes I_q = I_{pq}$. Thus, the linear system (19) reduces to $M \operatorname{Vc}(X) = \operatorname{Vc}(C)$, where
$$M = (I_q \otimes A) + (B^T \otimes I_q) W_{[q,p]}.$$
This work includes the studies of the matrix equation $A \ltimes X = C$ in [21] (by putting $B = 0$) and of the equation $X \ltimes B = C$ (by putting $A = 0$ and substituting $X^T$ with $X$).

5. Conclusions

We investigated the Sylvester-transpose equation $A \ltimes X + X^T \ltimes B = C$, where $A$, $B$, $C$, and $X$ are matrices over an arbitrary field. Here, the product $\ltimes$ is the semi-tensor product, which is a generalization of the usual matrix product. When all matrix dimensions are compatible, we found criteria for the equation to have a solution, a unique solution, or infinitely many solutions, in terms of ranks and linear dependence. Moreover, we found suitable matrix partitions for which the matrix equation under the semi-tensor product can be transformed into a linear system under the usual product. Thus, we can solve the equation via traditional methods. This work includes the studies of the equation $A \ltimes X = C$ and the equation $X \ltimes B = C$.

Author Contributions

Writing—original draft preparation, J.J.; writing—review and editing, P.C.; supervision, P.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dullerud, G.E.; Paganini, F. A Course in Robust Control Theory: A Convex Approach; Springer: New York, NY, USA, 1999.
  2. Fletcher, L.R.; Kautsky, J.; Nichols, N.K. Eigenstructure assignment in descriptor systems. IEEE Trans. Autom. Control 1986, 31, 1138–1141.
  3. Frank, P.M. Fault diagnosis in dynamic systems using analytical and knowledge-based redundancy: A survey and some new results. Automatica 1990, 26, 459–474.
  4. Piao, F.; Zhang, Q.; Wang, Z. The solution to matrix equation AX + XTC = B. J. Frankl. Inst. 2007, 344, 1056–1062.
  5. Chu, K.E. Symmetric solutions of linear matrix equations by matrix decompositions. Linear Algebra Appl. 1989, 119, 35–50.
  6. Prokip, V.M. The structure of symmetric solutions of the matrix equation AX = B over a principal ideal domain. Int. J. Anal. 2017, 2017.
  7. Song, C.; Feng, J.E. An iterative algorithm to solve the generalized coupled Sylvester-transpose matrix equations. Trans. Inst. Meas. Control 2016, 38, 1–13.
  8. Shafarevich, I.R.; Remizov, A.O. Linear Algebra and Geometry, 1st ed.; Springer: Heidelberg, Germany, 2013.
  9. Brown, W.C. Matrices over Commutative Rings; Marcel Dekker: New York, NY, USA, 1993.
  10. McDonald, B.R. Linear Algebra over Commutative Rings; Marcel Dekker: New York, NY, USA, 1984; Volume 87.
  11. Cheng, D.; Qi, H.; Zhao, Y. An Introduction to Semi-Tensor Product of Matrices and Its Application; World Scientific Publishing: Singapore, 2012.
  12. Cheng, D. Semi-tensor product of matrices and its application to Morgan's problem. Sci. China Ser. Inf. Sci. 2001, 44, 195–212.
  13. Cheng, D.; Qi, H.; Xue, A. A survey on semi-tensor product of matrices. J. Syst. Sci. Complex. 2007, 20, 304–322.
  14. Cheng, D. Semi-Tensor Product of Matrices and Its Application: A Survey; Higher Education Press; International Press: Hangzhou, China, 2007; pp. 641–668.
  15. Cheng, D.; Qi, H. Matrix expression of logic and fuzzy control. In Proceedings of the 44th IEEE Conference on Decision and Control, Seville, Spain, 12–15 December 2005; pp. 3273–3278.
  16. Fan, H.; Feng, J.E.; Meng, M.; Wang, B. General decomposition of fuzzy relations: Semi-tensor product approach. Fuzzy Sets Syst. 2020, 384, 75–90.
  17. Cheng, D. Input-state approach to Boolean networks. IEEE Trans. Neural Netw. 2009, 20, 512–521.
  18. Cheng, D.; Xu, T.; Qi, H. Evolutionarily stable strategy of networked evolutionary games. IEEE Trans. Neural Netw. Learn. Syst. 2014, 25, 1335–1345.
  19. Cheng, D. Semi-tensor product of matrices and its applications to dynamic systems. In New Directions and Applications in Control Theory; Lecture Notes in Control and Information Sciences; Springer: Heidelberg, Germany, 2005; pp. 61–79.
  20. Cheng, D.; Hu, X.; Wang, Y. Non-regular feedback linearization of nonlinear systems via a normal form algorithm. Automatica 2004, 40, 439–447.
  21. Yao, J.; Feng, J.E.; Meng, M. On solutions of the matrix equation AX = B with respect to semi-tensor product. J. Franklin Inst. 2016, 353, 1109–1131.
  22. Wang, J. On solutions of the matrix equation AlX = B with respect to MM-2 semitensor product. J. Math. 2021, 2021.
  23. Ji, Z.D.; Li, J.F.; Zhou, X.L.; Duan, F.J.; Li, T. On solutions of matrix equation AXB = C under semi-tensor product. Linear Multilinear Algebra 2019, 69, 1–29.
  24. Li, J.F.; Li, T.; Li, W.; Chen, Y.M.; Huang, R. Solvability of matrix equations AX = B, XC = D under semi-tensor product. Linear Multilinear Algebra 2016, 2016, 1705–1733.
  25. Wang, Z.; Feng, J.E.; Huang, H.L. Solvability of the matrix equation AX^2 = B with semi-tensor product. Electron. Res. Arch. 2021, 29, 2249–2267.
  26. Turkington, D.A. Matrix Calculus & Zero-One Matrices: Statistical and Econometric Applications; Cambridge University Press: Cambridge, UK, 2002.