Review

An Overview of Methods for Solving the System of Equations A1XB1 = C1 and A2XB2 = C2

1 Department of Mathematics, Shanghai University, Shanghai 200444, China
2 Collaborative Innovation Center for the Marine Artificial Intelligence, Shanghai 200444, China
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(8), 1307; https://doi.org/10.3390/sym17081307
Submission received: 30 June 2025 / Revised: 29 July 2025 / Accepted: 2 August 2025 / Published: 12 August 2025
(This article belongs to the Special Issue Mathematics: Feature Papers 2025)

Abstract

This paper primarily investigates the solutions to the system of equations $A_1XB_1 = C_1$ and $A_2XB_2 = C_2$. This system generalizes the classical equation $AXB = C$, as well as the system of equations $AX = B$ and $XC = D$, and finds broad applications in control theory, signal processing, networking, optimization, and other related fields. Various methods for solving this system are introduced, including the generalized inverse method, the vec-operator method, matrix decomposition techniques, Cramer's rule, and iterative algorithms. Based on these approaches, the paper discusses general solutions, symmetric solutions, Hermitian solutions, and other special types of solutions over different algebraic structures, such as number fields, the real field, the complex field, the quaternion division ring, principal ideal domains, regular rings, strongly *-reducible rings, and operators on Banach spaces. In addition, matrix systems related to the system $A_1XB_1 = C_1$ and $A_2XB_2 = C_2$ are also explored.
MSC:
15A03; 15A09; 15A24; 15B33; 15B57; 65F10; 65F45

1. Introduction

The solution of linear matrix equations has garnered significant attention due to its wide-ranging applications in control theory [1,2,3,4], signal processing [5], computer vision [6], network analysis [7], machine learning [8,9,10], robotics [11], and optimization problems [12,13,14,15,16,17]. Previous studies have investigated various solution methods and associated numerical algorithms for the matrix equation
$$AXB = C \qquad (1)$$
under different algebraic structures, and have discussed its applications in color image processing [18]. In contrast, Ref. [19] focused on several approaches for solving the system of matrix equations
$$AX = C, \quad XB = D, \qquad (2)$$
and provided a comprehensive overview of the current state of research. Furthermore, (1) and (2) can be regarded as special cases of the more general system
$$A_1XB_1 = C_1, \quad A_2XB_2 = C_2, \qquad (3)$$
which introduces additional analytical and computational challenges. For instance, (2) is recovered from (3) by taking $A_1 = A$, $B_1 = I$, $A_2 = I$, and $B_2 = B$. Compared to (1) and (2), the general form (3) involves an unknown matrix $X$ that is simultaneously constrained by coefficient matrices on both the left and the right. Moreover, the coupling of the two equations further complicates the analysis of solvability and the construction of explicit solutions.
On the other hand, the matrix system (3) captures the intricate interdependencies within systems governed by multiple variables and constraints, making its solution of considerable practical significance. In control systems, (3) is widely utilized for system identification, stability analysis, and optimal control design. Specifically, in the modeling and analysis of multi-input multi-output systems, it is frequently solved to extract system dynamics or determine controller design parameters [20]. Furthermore, such matrix systems play a vital role in signal processing, particularly in multidimensional signal processing, adaptive filtering, and joint estimation tasks [21]. In image processing and computer vision, the system (3) arises in applications such as image reconstruction, feature matching, and camera calibration [22,23], where the solution typically involves solving and integrating multiple matrix equations. Additionally, the matrix system (3) is fundamental in high-dimensional data analysis, machine learning, and large-scale optimization, as its efficient solution contributes significantly to reducing computational complexity and enhancing algorithmic efficiency [24].
The problem of solving the matrix system (3) has garnered considerable attention from the research community, owing to its theoretical importance and wide range of practical applications. Over the years, numerous scholars have explored this system and related formulations, yielding a variety of results concerning existence conditions, solution structures, and computational strategies. As the field has matured, increasingly refined and effective methods have been proposed for analyzing and solving such matrix equations.
This paper presents a comprehensive review of several major classes of methods for solving the system (3), including generalized inverse techniques, vec-operator approaches, matrix decomposition strategies, Cramer's rule, and numerical algorithms. Each class of methods offers distinct advantages and limitations depending on the underlying algebraic framework and the specific type of solution sought. A comparative summary of these methods is provided in Figure 1.
The structure of the paper is as follows. Section 2 introduces the notations used throughout the paper, along with preliminary concepts and definitions. In Section 3, we review various solutions to the matrix system (3) based on generalized inverse methods, which are among the earliest and most widely used approaches. Section 4 presents solutions derived using the vec-operator methods. Matrix decomposition methods for solving the system (3) over the complex field, as well as Cramer's rule-based approaches over the quaternion algebra, are discussed in Section 5 and Section 6, respectively. Section 7 focuses on iterative methods for computing symmetric-type solutions of the system (3) over the real field. Finally, Section 8 provides concluding remarks and outlines potential directions for future research on solving the system (3).

2. Preliminaries

To facilitate the discussion in the following sections, this section introduces some preliminary concepts and definitions.
Throughout the text, let $\mathbb{F}$ denote a field, $\mathbb{C}$ the complex field, and $\mathbb{R}$ the real field. The notations $\mathbb{F}^{m\times n}$, $\mathbb{C}^{m\times n}$, and $\mathbb{R}^{m\times n}$ refer to the sets of $m\times n$ matrices over $\mathbb{F}$, $\mathbb{C}$, and $\mathbb{R}$, respectively.
The collections of quaternions, commutative quaternions (also known as reduced biquaternions), and split quaternions are defined as follows [25,26,27]:
$$\mathbb{Q} = \{q = q_0 + q_1i + q_2j + q_3k : i^2 = j^2 = k^2 = ijk = -1,\ q_0, q_1, q_2, q_3 \in \mathbb{R}\},$$
$$\mathbb{CQ} = \{q = q_0 + q_1i + q_2j + q_3k : i^2 = k^2 = -1,\ j^2 = 1,\ ij = ji = k,\ q_0, q_1, q_2, q_3 \in \mathbb{R}\},$$
$$\mathbb{SQ} = \{q = q_0 + q_1i + q_2j + q_3k : i^2 = -1,\ j^2 = k^2 = 1,\ ijk = 1,\ q_0, q_1, q_2, q_3 \in \mathbb{R}\}.$$
Correspondingly, $m\times n$ quaternion, commutative quaternion, and split quaternion matrices are denoted by $\mathbb{Q}^{m\times n}$, $\mathbb{CQ}^{m\times n}$, and $\mathbb{SQ}^{m\times n}$, respectively. A matrix $A \in \mathbb{Q}^{m\times n}$ (or $\mathbb{CQ}^{m\times n}$ or $\mathbb{SQ}^{m\times n}$) can be written in real components as
$$A = A_1 + A_2i + A_3j + A_4k,$$
where $A_1, A_2, A_3, A_4 \in \mathbb{R}^{m\times n}$, or equivalently in complex form as
$$A = B_1 + B_2j,$$
where $B_1, B_2 \in \mathbb{C}^{m\times n}$.
where B 1 , B 2 C m × n . From these definitions, it is clear that quaternions, commutative quaternions, and split quaternions are all four-dimensional Clifford algebras, but they differ in their multiplication properties. While all three are associative algebras, only commutative quaternions satisfy the commutative law. The study of these three quaternion types employs different methods, but they are interrelated. Additionally, only quaternions have an inverse for every nonzero element, making their study the most extensive. Tools such as the generalized inverse, rank, and matrix decomposition can be applied to solve systems over quaternions. In contrast, the study of commutative and split quaternions requires utilizing their representations and transforming them into the real or complex fields for analysis.
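To make the representation idea concrete, the following minimal numpy sketch (our illustration, not code from the cited references) uses one common complex representation of a quaternion matrix $A = B_1 + B_2j$, namely $\chi(A) = \begin{bmatrix} B_1 & B_2 \\ -\overline{B}_2 & \overline{B}_1 \end{bmatrix}$. Since $\chi$ is multiplicative, a quaternion matrix equation can be transferred to an equation over complex matrices of doubled size and solved there:

```python
import numpy as np

def chi(B1, B2):
    # One common complex representation of A = B1 + B2*j (an assumption of
    # this sketch): chi(A) = [[B1, B2], [-conj(B2), conj(B1)]].
    return np.block([[B1, B2], [-B2.conj(), B1.conj()]])

def quat_mul(A, B):
    # Product of quaternion matrices stored as pairs (B1, B2), using j*z = conj(z)*j.
    A1, A2 = A
    B1, B2 = B
    return (A1 @ B1 - A2 @ B2.conj(), A1 @ B2 + A2 @ B1.conj())

rng = np.random.default_rng(0)
cm = lambda: rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A, B = (cm(), cm()), (cm(), cm())

# chi is multiplicative, which is what justifies transforming quaternion
# systems into complex ones.
print(np.allclose(chi(*quat_mul(A, B)), chi(*A) @ chi(*B)))  # True
```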
We use $\mathcal{R}$ to denote a ring, and $\mathcal{R}^{n\times m}$ the set of $n\times m$ matrices over $\mathcal{R}$. A principal ideal domain is a ring in which every ideal is principal. A regular ring, also known as a von Neumann regular ring, is one in which, for any element $a$, there exists an element $a^-$ such that $a = aa^-a$; such an $a^-$ is called an inner inverse of $a$, and the availability of inner inverses is what allows them to play a crucial role in solving equations over regular rings. If for any $a \in \mathcal{R}$ the condition $a^*a = 0$ implies $a = 0$, then $\mathcal{R}$ is a *-reducing ring. A ring $\mathcal{R}$ is strongly *-reducing if, for $a_i \in \mathcal{R}$, $i = 1, 2, \ldots, n$, the equation $\sum_{i=1}^n a_i^*a_i = 0$ implies $a_i = 0$ for each $i = 1, 2, \ldots, n$. A Banach space is a complete normed linear space. Let $E$ and $F$ be Banach spaces. The set of bounded linear operators from $E$ to $F$ is denoted by $\mathcal{B}(E, F)$.
The symbols $I$ and $0$ represent the identity matrix and the zero matrix, respectively. For matrices over a field, we denote the rank, image, and kernel of a matrix $A$ by $\operatorname{rank}(A)$, $\operatorname{im}(A)$, and $\ker(A)$. The rank of a quaternion matrix can be defined in a similar way: for $A \in \mathbb{Q}^{m\times n}$, $\operatorname{rank}(A)$ is the dimension of the right column space of $A$. Additionally, for a principal ideal domain $\mathcal{R}$ and $A \in \mathcal{R}^{n\times m}$, $\operatorname{rank}(A) = r$ if there exists a nonzero $r\times r$ minor of $A$ and every $(r+1)\times(r+1)$ minor of $A$ is zero. If $A = CG$, then $G$ is called a right divisor of $A$, and $A$ is referred to as a left multiple of $G$. The greatest right divisor of $A$ is the right divisor that is a left multiple of every right divisor of $A$.
For a matrix $A$, $A^T$ denotes the transpose, $\overline{A}$ the conjugate, and $A^*$ the conjugate transpose of $A$. Specifically, a complex matrix $A \in \mathbb{C}^{m\times n}$ can be expressed as $A = A_1 + A_2i$ with $A_1, A_2 \in \mathbb{R}^{m\times n}$, so that $\widehat{A} = A_1 - A_2i$ and $A^* = A_1^T - A_2^Ti$. For quaternion, commutative quaternion, and split quaternion matrices written as $A = A_1 + A_2i + A_3j + A_4k$ with $A_1, A_2, A_3, A_4 \in \mathbb{R}^{m\times n}$, we have $\widehat{A} = A_1 - A_2i - A_3j - A_4k$ and $A^* = A_1^T - A_2^Ti - A_3^Tj - A_4^Tk$. For matrices $A$ and $B$, the inner product is given by $\langle A, B\rangle = \operatorname{tr}(A^*B)$. Additionally, $P_L$ represents the orthogonal projector onto a subspace $L \subseteq \mathbb{C}^n$. The symbols $\mathcal{R}_r(A)$, $\mathcal{N}_r(A)$, $\mathcal{R}_l(A)$, and $\mathcal{N}_l(A)$ stand for the right column space, the right null space, the left row space, and the left null space of a matrix $A \in \mathbb{Q}^{m\times n}$, respectively.
Next, we introduce some special matrices and their definitions. Let $e_i$ be the $i$-th column of the identity matrix of order $n$, and $J = [e_n, e_{n-1}, \ldots, e_1]$. Some definitions of special matrices are listed in Table 1, where $\eta \in \{i, j, k\}$.
Furthermore, assume that $A = (a_{ij}) \in \mathbb{C}^{m\times n}$. If $\sum_{i=1}^m a_{ij} = 0$ for $j = 1, 2, \ldots, n$ and $\sum_{j=1}^n a_{ij} = 0$ for $i = 1, 2, \ldots, m$, then $A$ is called an indeterminate admittance matrix. If $A$ is both Hermitian and an indeterminate admittance matrix, it is referred to as a Hermitian indeterminate admittance matrix, or HIA matrix.
Finally, we introduce the Hankel matrices, which are defined in the following form:
$$H_n = \begin{bmatrix} a_1 & a_2 & a_3 & \cdots & a_n \\ a_2 & a_3 & a_4 & \cdots & a_{n+1} \\ a_3 & a_4 & a_5 & \cdots & a_{n+2} \\ \vdots & \vdots & \vdots & & \vdots \\ a_n & a_{n+1} & a_{n+2} & \cdots & a_{2n-1} \end{bmatrix}.$$
Denote $\mathrm{HAQ}^{n\times n}$ as the collection of all $n\times n$ Hankel quaternion matrices.
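As a quick numerical illustration (ours, not taken from the references), a Hankel matrix is constant along its anti-diagonals, so it is determined by the $2n-1$ generating scalars $a_1, \ldots, a_{2n-1}$; scipy provides a direct constructor:

```python
import numpy as np
from scipy.linalg import hankel

a = np.arange(1, 10)             # a_1, ..., a_{2n-1} with n = 5
n = (len(a) + 1) // 2
H = hankel(a[:n], a[n - 1:])     # first column a_1..a_n, last row a_n..a_{2n-1}
print(H)                         # constant anti-diagonals: H[i, j] == a[i + j]
```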
In this section, we have introduced various supplementary definitions for solving the system (3). In the next section, we will formally begin presenting the relevant conclusions regarding the solutions of the system (3).

3. The Generalized Inverse Methods

In 1954, Penrose introduced a generalization of the matrix inverse via the unique solution to four matrix equations:
$$AA^\dagger A = A, \quad A^\dagger AA^\dagger = A^\dagger, \quad (AA^\dagger)^* = AA^\dagger, \quad (A^\dagger A)^* = A^\dagger A,$$
where $A^\dagger$ is referred to as the generalized inverse or the Moore–Penrose inverse of $A$ [28]. In the subsequent discussion, we define the symbols $L_A = I - A^\dagger A$ and $R_A = I - AA^\dagger$.
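The defining equations are easy to check numerically; the following sketch (illustrative only, with arbitrary sizes) verifies the four Penrose equations for numpy's pinv and confirms that $L_A$ and $R_A$ are the orthogonal projectors onto $\ker(A)$ and $\ker(A^*)$, respectively:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))  # rank-deficient
Ad = np.linalg.pinv(A)                                         # A^dagger

print(np.allclose(A @ Ad @ A, A))                  # A A'A   = A
print(np.allclose(Ad @ A @ Ad, Ad))                # A'A A'  = A'
print(np.allclose((A @ Ad).conj().T, A @ Ad))      # (A A')* = A A'
print(np.allclose((Ad @ A).conj().T, Ad @ A))      # (A'A)*  = A'A

L_A = np.eye(A.shape[1]) - Ad @ A   # projector onto ker(A)
R_A = np.eye(A.shape[0]) - A @ Ad   # projector onto ker(A*)
print(np.allclose(A @ L_A, 0), np.allclose(R_A @ A, 0))
```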
Remark 1.
The Moore–Penrose inverse exists uniquely for elements in the real field, the complex field, and the quaternion division ring.
Relaxing the Moore–Penrose conditions, Rao and Mitra introduced the concept of the inner inverse (g-inverse), which satisfies the single equation
$$AA^-A = A,$$
where $A^-$ is called a g-inverse of $A$ [29]. We define the notations $F_A = I - A^-A$ and $E_A = I - AA^-$.
Remark 2.
For any regular element in a ring, the inner inverse exists but is not necessarily unique.
Both the Moore–Penrose inverse and the inner inverse are fundamental tools for solving matrix equations. They can be applied in various algebraic structures, such as fields, skew fields, rings, etc., and also play a key role in providing explicit solutions to specific matrix equations.
This section is organized into two main parts. Section 3.1 discusses the application of generalized inverse methods in general fields, with particular emphasis on related results in the fields of complex and real numbers. Section 3.2 explores relevant results in the quaternion division ring, principal ideal domains, regular rings, strongly *-reducible rings, and Banach spaces. Additionally, this section examines solutions of special types, such as Hermitian solutions over the complex field, centro-symmetric solutions, and P-anti-symmetric solutions over the quaternion division ring. The computation of these special solutions often involves solving more complex systems of equations.

3.1. The General, Complex, and Real Fields

The solution to the system (3) over general fields was first presented by Mitra in 1973 using the g-inverse [30]. Subsequently, Woude provided various characterizations of solvability conditions, including rank and null space criteria, and extended these results to more complex systems [31,32,33,34]. In 1988, John offered a more concise expression for the general solution to the system (3) [35]. Following decades of development, a least-squares solution representation for the system (3) over the complex field was introduced in 2011 and later applied to the Hermitian solution of the system $AXB = C$ [36]. Moreover, He, Wang, and others progressively investigated solutions for systems with three and four equations over the complex field [37,38]. In 2020, researchers employed projection operators to provide alternative representations of the solutions of the system (3) [39].
In 1973, Mitra first provided the necessary and sufficient conditions for the solvability of the matrix equation system (3) and the explicit expression for its solution [30].
Theorem 1
(General solutions for (3) over $\mathbb{F}$ [30]). Assume that $A_1 \in \mathbb{F}^{p\times m}$, $B_1 \in \mathbb{F}^{n\times q}$, $A_2 \in \mathbb{F}^{r\times m}$, $B_2 \in \mathbb{F}^{n\times l}$, $C_1 \in \mathbb{F}^{p\times q}$, and $C_2 \in \mathbb{F}^{r\times l}$. A necessary and sufficient condition for the system of equations (3) to be solvable is
$$A_1^*A_1(A_1^*A_1 + A_2^*A_2)^-A_2^*C_2B_2^*(B_1B_1^* + B_2B_2^*)^-B_1B_1^* = A_2^*A_2(A_1^*A_1 + A_2^*A_2)^-A_1^*C_1B_1^*(B_1B_1^* + B_2B_2^*)^-B_2B_2^*.$$
In this case, the general solution is
$$X = (A_1^*A_1 + A_2^*A_2)^-(A_1^*C_1B_1^* + Y + Z + A_2^*C_2B_2^*)(B_1B_1^* + B_2B_2^*)^- + U - (A_1^*A_1 + A_2^*A_2)^-(A_1^*A_1 + A_2^*A_2)\,U\,(B_1B_1^* + B_2B_2^*)(B_1B_1^* + B_2B_2^*)^-,$$
where U is arbitrary, Y and Z satisfy
$$A_2^*A_2(A_1^*A_1 + A_2^*A_2)^-Y = A_1^*A_1(A_1^*A_1 + A_2^*A_2)^-A_2^*C_2B_2^*, \qquad Y(B_1B_1^* + B_2B_2^*)^-B_1B_1^* = A_1^*C_1B_1^*(B_1B_1^* + B_2B_2^*)^-B_2B_2^*,$$
and
$$A_1^*A_1(A_1^*A_1 + A_2^*A_2)^-Z = A_2^*A_2(A_1^*A_1 + A_2^*A_2)^-A_1^*C_1B_1^*, \qquad Z(B_1B_1^* + B_2B_2^*)^-B_2B_2^* = A_2^*C_2B_2^*(B_1B_1^* + B_2B_2^*)^-B_1B_1^*.$$
In 1987, Woude studied measurement feedback for noninteracting control systems and characterized the solvability of the system (3) in terms of ranks, null spaces, and column spaces of matrices [31,32]. Three years later, Mitra, using block matrices, introduced an additional equivalent condition for solvability and provided an explicit expression for the solution when it exists [33]. Their conclusions are summarized in Theorem 2.
Theorem 2
(General solutions for (3) over $\mathbb{F}$ [31,32,33]). Suppose that $A_1 \in \mathbb{F}^{p\times m}$, $B_1 \in \mathbb{F}^{n\times q}$, $A_2 \in \mathbb{F}^{r\times m}$, $B_2 \in \mathbb{F}^{n\times l}$, $C_1 \in \mathbb{F}^{p\times q}$, and $C_2 \in \mathbb{F}^{r\times l}$. The system (3) is solvable if and only if any one of the following conditions holds:
(1) For $i = 1, 2$, $\operatorname{rank}(A_i) = \operatorname{rank}[A_i \; C_i]$, $\operatorname{rank}(B_i) = \operatorname{rank}\begin{bmatrix} B_i \\ C_i \end{bmatrix}$, and
$$\operatorname{rank}\begin{bmatrix} A_1 & 0 & 0 \\ A_2 & 0 & 0 \\ 0 & B_1 & B_2 \end{bmatrix} = \operatorname{rank}\begin{bmatrix} A_1 & C_1 & 0 \\ A_2 & 0 & C_2 \\ 0 & B_1 & B_2 \end{bmatrix}.$$
(2) For $i = 1, 2$, $\operatorname{im}(C_i) \subseteq \operatorname{im}(A_i)$, $\ker(B_i) \subseteq \ker(C_i)$, and
$$\begin{bmatrix} C_1 & 0 \\ 0 & C_2 \end{bmatrix} \ker[B_1 \; B_2] \subseteq \operatorname{im}\begin{bmatrix} A_1 \\ A_2 \end{bmatrix}.$$
(3) For matrices $Y$ and $Z$ properly chosen and fixed, the equation
$$\begin{bmatrix} A_1 \\ A_2 \end{bmatrix} X\, [B_1 \; B_2] = \begin{bmatrix} C_1 & Y \\ Z & C_2 \end{bmatrix}$$
is solvable.
(4) For $i = 1, 2$, $\operatorname{im}(C_i) \subseteq \operatorname{im}(A_i)$, $\ker(B_i) \subseteq \ker(C_i)$, and
$$K_1C_1L_1 = K_2C_2L_2,$$
where $L_1$, $L_2$, $K_1$, and $K_2$ satisfy the conditions
$$\operatorname{im}\begin{bmatrix} L_1 \\ L_2 \end{bmatrix} = \ker[B_1 \; B_2]$$
and
$$\operatorname{im}\begin{bmatrix} K_1^T \\ K_2^T \end{bmatrix} = \ker[A_1^T \; A_2^T].$$
When any one of the conditions (1)–(4) holds, the general solution to the system of matrix Equation (3) is given by
$$X = \begin{bmatrix} A_1 \\ A_2 \end{bmatrix}^- \begin{bmatrix} C_1 & Y \\ Z & C_2 \end{bmatrix} [B_1 \; B_2]^- + U - \begin{bmatrix} A_1 \\ A_2 \end{bmatrix}^- \begin{bmatrix} A_1 \\ A_2 \end{bmatrix} U\, [B_1 \; B_2][B_1 \; B_2]^-,$$
where the matrix U is arbitrary, and Y, Z are general solutions to
$$C_1L_1 + YL_2 = 0, \quad K_1Y + K_2C_2 = 0,$$
and
$$K_1C_1 + K_2Z = 0, \quad ZL_1 + C_2L_2 = 0.$$
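To see Theorem 2 in action, the following sketch (our illustration; the matrix sizes are arbitrary) builds a consistent instance of (3), checks the block rank equality of condition (1), and recovers a solution through the vectorized formulation that is developed later in Section 4:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, p, q, r, l = 4, 5, 3, 3, 2, 2
A1, B1 = rng.standard_normal((p, m)), rng.standard_normal((n, q))
A2, B2 = rng.standard_normal((r, m)), rng.standard_normal((n, l))
X0 = rng.standard_normal((m, n))             # consistency by construction
C1, C2 = A1 @ X0 @ B1, A2 @ X0 @ B2

Z = np.zeros                                 # block rank test, condition (1)
lhs = np.block([[A1, Z((p, q)), Z((p, l))],
                [A2, Z((r, q)), Z((r, l))],
                [Z((n, m)), B1, B2]])
rhs = np.block([[A1, C1, Z((p, l))],
                [A2, Z((r, q)), C2],
                [Z((n, m)), B1, B2]])
print(np.linalg.matrix_rank(lhs) == np.linalg.matrix_rank(rhs))   # True

K = np.vstack([np.kron(B1.T, A1), np.kron(B2.T, A2)])             # vec form
c = np.concatenate([C1.ravel(order='F'), C2.ravel(order='F')])
X = (np.linalg.pinv(K) @ c).reshape(m, n, order='F')
print(np.allclose(A1 @ X @ B1, C1), np.allclose(A2 @ X @ B2, C2))
```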
Woude extended his own result in Theorem 2 by considering the following system of matrix equations
$$A_iXB_j = C_{ij}, \quad (i, j) \in \Gamma, \qquad (4)$$
where $A_i$, $B_j$, and $C_{ij}$ are given matrices, and $\Gamma$ denotes a set of index pairs [34]. The solvability of the system (4) is treated in two cases: $\Gamma = \{(i, j) \mid i, j \in \underline{k},\ i \neq j\}$ and $\Gamma = \{(i, i) \mid i \in \underline{k}\}$, where $\underline{k} = \{1, 2, \ldots, k\}$ for an arbitrary integer $k \geq 2$. The following conclusions are derived.
Theorem 3
(General solutions for (4) over $\mathbb{F}$ [34]). For $i, j \in \underline{k} = \{1, 2, \ldots, k\}$, let $A_i \in \mathbb{F}^{p_i\times m}$, $B_j \in \mathbb{F}^{n\times q_j}$, and $C_{ij} \in \mathbb{F}^{p_i\times q_j}$.
(1) Denote
$$B = [B_1 \; B_2 \; \cdots \; B_k], \quad A = [A_1^T \; A_2^T \; \cdots \; A_k^T]^T,$$
$$\check{B}_i = [B_1 \; \cdots \; B_{i-1} \; B_{i+1} \; \cdots \; B_k], \quad \Lambda_i = [C_{i1} \; \cdots \; C_{i,i-1} \; C_{i,i+1} \; \cdots \; C_{ik}],$$
$$\check{A}_i = \begin{bmatrix} A_1 \\ \vdots \\ A_{i-1} \\ A_{i+1} \\ \vdots \\ A_k \end{bmatrix}, \quad \Delta_i = \begin{bmatrix} C_{1i} \\ \vdots \\ C_{i-1,i} \\ C_{i+1,i} \\ \vdots \\ C_{ki} \end{bmatrix}, \quad \Gamma_i = \begin{bmatrix} 0 & \cdots & 0 & C_{1i} & 0 & \cdots & 0 \\ \vdots & & \vdots & \vdots & \vdots & & \vdots \\ 0 & \cdots & 0 & C_{i-1,i} & 0 & \cdots & 0 \\ C_{i1} & \cdots & C_{i,i-1} & 0 & C_{i,i+1} & \cdots & C_{ik} \\ 0 & \cdots & 0 & C_{i+1,i} & 0 & \cdots & 0 \\ \vdots & & \vdots & \vdots & \vdots & & \vdots \\ 0 & \cdots & 0 & C_{ki} & 0 & \cdots & 0 \end{bmatrix}.$$
There exists a matrix $X$ such that $A_iXB_j = C_{ij}$ ($i, j \in \underline{k}$, $i \neq j$) if and only if
$$\operatorname{im}(\Delta_i) \subseteq \operatorname{im}(\check{A}_i), \quad \ker(\check{B}_i) \subseteq \ker(\Lambda_i), \quad \Gamma_i\ker(B) \subseteq \operatorname{im}(A) \qquad (5)$$
for any $i \in \underline{k}$.
(2) Under the assumption that
$$\sum_{i \in \underline{k}} \Bigl(\operatorname{im}(B_i) \cap \sum_{j \in \underline{k},\, j \neq i} \operatorname{im}(B_j)\Bigr) = \bigcap_{i \in \underline{k}} \operatorname{im}(B_i), \qquad (6)$$
there exists a matrix $X$ such that $A_iXB_i = C_{ii}$ for all $i \in \underline{k}$ if and only if
$$\operatorname{im}(C_{ii}) \subseteq \operatorname{im}(A_i), \quad \ker(B_i) \subseteq \ker(C_{ii})$$
for $i \in \underline{k}$, and
$$\begin{bmatrix} C_{11} & 0 & \cdots & 0 \\ 0 & C_{22} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & C_{kk} \end{bmatrix} \ker\begin{bmatrix} B_1 & B_2 & 0 & \cdots & 0 \\ 0 & B_2 & B_3 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & B_{k-1} & B_k \end{bmatrix} \subseteq \operatorname{im}\begin{bmatrix} A_1 \\ A_2 \\ \vdots \\ A_k \end{bmatrix}.$$
Remark 3.
The consistency condition (5) can be written in an equivalent form as
$$\operatorname{rank}(\check{A}_i) = \operatorname{rank}[\check{A}_i \; \Delta_i], \quad \operatorname{rank}(\check{B}_i) = \operatorname{rank}\begin{bmatrix} \check{B}_i \\ \Lambda_i \end{bmatrix}, \quad \operatorname{rank}\begin{bmatrix} 0 & A \\ B & 0 \end{bmatrix} = \operatorname{rank}\begin{bmatrix} \Gamma_i & A \\ B & 0 \end{bmatrix}.$$
Similarly, the second solvability condition in Theorem 3 can also be expressed in terms of ranks.
Remark 4.
For $k = 2$, the assumption (6) always holds. Indeed, in this case both the left-hand side and the right-hand side are equal to $\operatorname{im}(B_1) \cap \operatorname{im}(B_2)$. In this scenario, Theorem 3 can be deduced from Theorem 2.
Remark 5.
When
$$\bigcap_{i \in \underline{k}} \Bigl(\ker(A_i) + \bigcap_{j \in \underline{k},\, j \neq i} \ker(A_j)\Bigr) = \sum_{i \in \underline{k}} \ker(A_i)$$
holds, which is "dual" to the assumption (6), the consistency condition for (3) can be derived in a similar manner.
To obtain the expression for the solution of the system (3) using Theorems 1 and 2, two additional matrix equation systems need to be solved. Furthermore, Ref. [35] provides a more explicit expression for the solution of the system (3).
Theorem 4
(General solutions for (3) over $\mathbb{F}$ [35]). Let $A_1 \in \mathbb{F}^{p\times m}$, $B_1 \in \mathbb{F}^{n\times q}$, $A_2 \in \mathbb{F}^{r\times m}$, $B_2 \in \mathbb{F}^{n\times l}$, $C_1 \in \mathbb{F}^{p\times q}$, and $C_2 \in \mathbb{F}^{r\times l}$. The system (3) has a general solution $X$ if and only if the conditions
$$A_1A_1^-C_1B_1^-B_1 = C_1$$
and
$$A_2F_{A_1}\bigl(A_2F_{A_1}\bigr)^-\bigl(C_2 - A_2A_1^-C_1B_1^-B_2\bigr)\bigl(E_{B_1}B_2\bigr)^-E_{B_1}B_2 = C_2 - A_2A_1^-C_1B_1^-B_2$$
hold.
At this point, the form of the general solution X is given by
$$X = A_1^-C_1B_1^- + F_{A_1}(A_2 - A_2A_1^-A_1)^-\bigl[C_2 - A_2(A_1^-C_1B_1^-)B_2\bigr](B_2 - B_1B_1^-B_2)^-E_{B_1} + F_{A_1}F_{(A_2 - A_2A_1^-A_1)}\,U\,E_{(B_2 - B_1B_1^-B_2)}E_{B_1},$$
for arbitrary U with appropriate size.
In 2011, Liu and Yang obtained an explicit representation of the general least-squares solution to the system (3) over the complex field. Furthermore, this result was used to determine the conditions for the existence of a Hermitian least-squares solution to the matrix equation $AXB = C$ [36].
Theorem 5
(General solutions for (3) over $\mathbb{C}$ [36]). For given $A_1 \in \mathbb{C}^{p\times m}$, $B_1 \in \mathbb{C}^{n\times q}$, $A_2 \in \mathbb{C}^{r\times m}$, $B_2 \in \mathbb{C}^{n\times l}$, $C_1 \in \mathbb{C}^{p\times q}$, and $C_2 \in \mathbb{C}^{r\times l}$, denote the following symbols:
$$P_1 = A_2L_{A_1}, \quad Q_1 = R_{B_1}B_2, \quad M_1 = R_{P_1}A_2, \quad P_2 = A_1L_{A_2}, \quad Q_2 = R_{B_2}B_1, \quad M_2 = R_{P_2}A_1.$$
(1) Define
$$E_1 = A_2A_2^\dagger C_2B_2^\dagger B_2 - A_2A_1^\dagger C_1B_1^\dagger B_2, \quad E_2 = A_1A_1^\dagger C_1B_1^\dagger B_1 - A_1A_2^\dagger C_2B_2^\dagger B_1.$$
Then, the matrix equation system (3) has a least-squares solution if and only if
$$\operatorname{rank}\begin{bmatrix} A_1^*C_1B_1^* & 0 & A_1^*A_1 \\ 0 & A_2^*C_2B_2^* & A_2^*A_2 \\ B_1B_1^* & B_2B_2^* & 0 \end{bmatrix} = \operatorname{rank}\begin{bmatrix} A_1 \\ A_2 \end{bmatrix} + \operatorname{rank}[B_1 \; B_2].$$
Under this circumstance, the general least-squares solution can be expressed as
X = A 1 C 1 B 1 + P 1 E 1 B 2 P 1 A 2 M 1 R P 1 E 1 B 2 + M 1 R P 1 E 1 Q 1 P 1 A 2 L M 1 V 1 Q 1 B 2 + L M 1 V 1 R B 1 + L A 1 L P 1 U 1 + L A 1 Z 1 R B 2 + W 1 R Q 1 R B 1 ,
or equivalently,
X = A 2 C 2 B 2 + P 2 E 2 B 1 P 2 A 1 M 2 R P 2 E 2 B 1 + M 2 R P 2 E 2 Q 2 P 2 A 1 L M 2 V 2 Q 2 B 1 + L M 2 V 2 R B 2 + L A 2 L P 2 U 2 + L A 2 Z 2 R B 1 + W 2 R Q 2 R B 2 ,
where $U_i$, $V_i$, $W_i$, and $Z_i$ ($i = 1, 2$) are arbitrary matrices of appropriate sizes.
(2) Suppose that the equations $A_1XB_1 = C_1$ and $A_2XB_2 = C_2$ are each solvable. Denote
$$E_1 = C_2 - A_2A_1^\dagger C_1B_1^\dagger B_2, \quad E_2 = C_1 - A_1A_2^\dagger C_2B_2^\dagger B_1.$$
Then, (3) has a solution if and only if
$$\operatorname{rank}\begin{bmatrix} C_1 & 0 & A_1 \\ 0 & C_2 & A_2 \\ B_1 & B_2 & 0 \end{bmatrix} = \operatorname{rank}\begin{bmatrix} A_1 \\ A_2 \end{bmatrix} + \operatorname{rank}[B_1 \; B_2].$$
In this case, the general least-squares solution can be either (8) or (9), with arbitrary $U_i$, $V_i$, $W_i$, and $Z_i$ ($i = 1, 2$).
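For the inconsistent case, the least-squares solution that Theorem 5 describes in closed form can also be obtained numerically; a minimal sketch (ours, using the stacked Kronecker formulation rather than the expressions (8) and (9)) is:

```python
import numpy as np

# Minimize ||A1 X B1 - C1||_F^2 + ||A2 X B2 - C2||_F^2 over X.
rng = np.random.default_rng(3)
m, n = 4, 4
A1, B1, C1 = (rng.standard_normal(s) for s in [(3, m), (n, 3), (3, 3)])
A2, B2, C2 = (rng.standard_normal(s) for s in [(2, m), (n, 2), (2, 2)])

K = np.vstack([np.kron(B1.T, A1), np.kron(B2.T, A2)])
c = np.concatenate([C1.ravel(order='F'), C2.ravel(order='F')])
x, *_ = np.linalg.lstsq(K, c, rcond=None)     # least-squares in vec form
X = x.reshape(m, n, order='F')

# The normal equations hold at the minimizer.
print(np.allclose(K.T @ (K @ x), K.T @ c))    # True
```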
To consider the least-squares solution to $AXB = C$, Liu and Yang presented the solvability condition for the system
$$AXB = C, \quad B^*XA^* = C^*, \qquad (11)$$
building upon Theorem 5 [36].
Corollary 1
(General solutions for (11) over $\mathbb{C}$ [36]). Let $A \in \mathbb{C}^{p\times m}$, $B \in \mathbb{C}^{m\times p}$, and $C \in \mathbb{C}^{p\times p}$. Denote $P = B^*L_A$, $Q = R_BA^*$, and $M = R_PB^*$.
(1) The matrix equation system (11) has a general least-squares solution if and only if the following rank condition holds:
$$\operatorname{rank}\begin{bmatrix} A^*CB^* & 0 & A^*A \\ 0 & BC^*A & BB^* \\ BB^* & A^*A & 0 \end{bmatrix} = 2\operatorname{rank}[A^* \; B].$$
When this condition is satisfied, the general least-squares solution of (11) can be expressed as
X = A C B + P E A * P B * M R P E A * + M R P E Q P B * L M V Q A * + L M V R B + L A L P U + L A Z A + W R Q R B ,
where
E = B B C * A A B * A C B A * ,
U, V, W and Z are arbitrary matrices with appropriate sizes.
(2) If the equations $AXB = C$ and $B^*XA^* = C^*$ are solvable, then the system of matrix Equation (11) is solvable if and only if
$$\operatorname{rank}\begin{bmatrix} C & 0 & A \\ 0 & C^* & B^* \\ B & A^* & 0 \end{bmatrix} = 2\operatorname{rank}[A^* \; B].$$
In this case, the general Hermitian solution can be expressed as (12), where
$$E = C^* - B^*A^\dagger CB^\dagger A^*,$$
U, V, W, and Z are arbitrary.
Remark 6.
Corollary 1 provides the solvability conditions and explicit expressions for the least-squares solution $X$ to the system (11). Moreover, a Hermitian solution to $AXB = C$ can be derived from the general solution and is given by
$$\widetilde{X} = \frac{1}{2}(X + X^*),$$
where $X$ is a solution of (11).
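Remark 6 can be replayed numerically: solve the pair (11), then take the Hermitian part (a sketch under our own conventions, with arbitrary sizes):

```python
import numpy as np

rng = np.random.default_rng(4)
p, m = 3, 4
cm = lambda s: rng.standard_normal(s) + 1j * rng.standard_normal(s)
A, B = cm((p, m)), cm((m, p))
H0 = cm((m, m)); H0 = (H0 + H0.conj().T) / 2   # Hermitian X0 => consistent
C = A @ H0 @ B

# Solve A X B = C and B* X A* = C* simultaneously via vectorization.
K = np.vstack([np.kron(B.T, A), np.kron(A.conj(), B.conj().T)])
c = np.concatenate([C.ravel(order='F'), C.conj().T.ravel(order='F')])
X = (np.linalg.pinv(K) @ c).reshape(m, m, order='F')

Xh = (X + X.conj().T) / 2        # the Hermitian part is again a solution
print(np.allclose(A @ Xh @ B, C), np.allclose(Xh, Xh.conj().T))
```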
In 2013, He and Wang considered the more comprehensive system of matrix equations
$$A_1XB_1 = C_1, \quad A_2XB_2 = C_2, \quad A_3XB_3 = C_3, \qquad (13)$$
over $\mathbb{C}$ using the inner inverse [37]. The results are stated as follows.
Theorem 6
(General solutions for (13) over $\mathbb{C}$ [37]). Assume that $A_i \in \mathbb{C}^{p_i\times m}$, $B_i \in \mathbb{C}^{n\times q_i}$, and $C_i \in \mathbb{C}^{p_i\times q_i}$ ($i = 1, 2, 3$) are given matrices. Let
A 11 = A 2 F A 1 , B 11 = E B 1 B 2 , C 11 = C 2 A 2 A 1 C 1 B 1 B 2 , D 11 = E A 11 A 2 , ϕ = A 1 C 1 B 1 + F A 1 A 11 C 11 B 2 F A 1 A 11 A 2 D 1 E A 11 C 11 B 2 + D 11 E A 11 C 11 B 11 E B 1 , A 22 = [ F A 1 F A 11 A 3 ] , B 22 = [ ( E B 11 E B 1 ) T E B 3 T ] T , A = E A 22 F A 1 , B = E B 2 F B 22 , C = E A 22 F A 2 , D = E B 1 F B 22 , E 1 = A 3 C 3 B 3 ϕ , E = E A 22 E 1 F B 22 , M = E A C , N = D F B , S = C L M ,
and
$$S_1 = [I_m \; 0], \quad S_2 = \begin{bmatrix} I_n \\ 0 \end{bmatrix}, \quad S_3 = [0 \; I_m], \quad S_4 = \begin{bmatrix} 0 \\ I_n \end{bmatrix}.$$
Then, the system (13) has a solution $X \in \mathbb{C}^{m\times n}$ if and only if
$$E_{A_i}C_i = 0, \quad C_iF_{B_i} = 0$$
for i = 1 , 2 , 3 , and
$$E_{A_{11}}C_{11}F_{B_{11}} = 0, \quad E_ME_AE = 0, \quad EF_BL_N = 0, \quad E_AEF_D = 0, \quad E_CEF_B = 0.$$
At this point, the general solution to (13) is
$$X = \phi + F_{A_1}F_{A_{11}}U_1 + U_2E_{B_{11}}E_{B_1} + F_{A_1}U_3E_{B_2} + F_{A_2}U_4E_{B_1},$$
or equivalently,
$$X = A_3^-C_3B_3^- - F_{A_3}U_5 - U_6E_{B_3},$$
where     
U 1 = S 1 [ A 11 ( E 1 F A 1 U 3 E B 2 F A 2 U 4 E B 1 ) A 22 V 7 B 22 + F A 22 V 6 ] , U 2 = [ E A 22 ( E 1 F A 1 U 3 E B 2 F A 2 U 4 E B 1 ) B 22 + A 22 A 22 V 7 + V 8 E B 22 ] S 2 , U 3 = A E B A C M E A E B A S C E F B N D B A S V 3 E N D B + F A V 1 + V 2 E B , U 4 = M E A E D + F M S S C E F B N + F M F S V 4 + F M V 3 E N + V 5 E D , U 5 = S 3 [ A 22 ( E 1 F A 1 U 3 E B 2 F A 2 U 4 F A 1 ) A 22 V 7 B 22 + F A 22 V 6 ] , U 6 = [ E A 22 ( E 1 F A 1 U 3 E B 2 F A 2 U 4 F A 1 ) B 22 + A 22 A 22 V 7 + V 8 E B 22 ] S 4 ,
with arbitrary $V_i$ for $i = 1, 2, \ldots, 8$.
As applications of Theorem 6, Ref. [37] also derived the Hermitian solutions of the systems
$$A_1XA_1^* = C_1, \quad A_2XA_2^* = C_2, \quad A_3XA_3^* = C_3, \qquad (14)$$
and
$$A_1XA_1^* = C_1, \quad A_2XB_2 = C_2, \qquad (15)$$
respectively. The related results are summarized in the following corollaries.
Corollary 2
(Hermitian solutions for (14) over $\mathbb{C}$ [37]). Assume that $A_i \in \mathbb{C}^{p_i\times m}$ and $C_i = C_i^* \in \mathbb{C}^{p_i\times p_i}$ ($i = 1, 2, 3$) are given. Denote
A 11 = A 2 F A 1 , C 11 = C 2 A 2 A 1 C 1 A 1 * A 2 * , D 11 = E A 11 A 2 , A 22 = [ F A 1 F A 11 F A 3 ] , A = E A 22 F A 1 , B = F A 2 * F A 22 * , ϕ = A 1 C 1 A 1 * + F A 1 A 11 C 11 A 2 * F A 1 A 11 A 2 D 11 E A 11 C 11 A 2 * + D 11 E A 11 C 11 A 11 * F A 1 * , E 1 = A 3 C 3 B 3 ϕ , E = E A 22 E 1 F B 22 , M = E A C , N = D F B , S = B * F M ,
and
$$S_1 = [I_m \; 0], \quad S_2 = \begin{bmatrix} I_m \\ 0 \end{bmatrix}, \quad S_3 = [0 \; I_m], \quad S_4 = \begin{bmatrix} 0 \\ I_m \end{bmatrix}.$$
Then, the system (14) has a Hermitian solution $X \in \mathbb{C}^{m\times m}$ if and only if
$$E_{A_i}C_i = 0,$$
for i = 1 , 2 , 3 , and 
$$E_{A_{11}}C_{11}E_{A_{11}}^* = 0, \quad E_AEE_A^* = 0, \quad F_{B^*}EF_B = 0, \quad E_ME_AE = 0.$$
In this case, the Hermitian solution to (14) can be expressed as
$$X = \frac{1}{2}(\widehat{X} + \widehat{X}^*),$$
with
$$\widehat{X} = \phi + F_{A_1}F_{A_{11}}U_1 + U_2(F_{A_1}F_{A_{11}})^* + F_{A_1}U_3F_{A_2}^* + F_{A_2}U_4F_{A_1}^*,$$
or equivalently,
$$\widehat{X} = A_3^-C_3(A_3^-)^* - F_{A_3}U_5 - U_6F_{A_3}^*,$$
where
U 1 = S 1 [ A 22 ( E 1 F A 1 U 3 E B 2 F A 2 U 4 E A 1 * ) A 22 V 7 A 22 * + F A 22 V 6 ] , U 2 = [ E A 22 ( E 1 F A 1 U 3 E B 2 F A 2 U 4 F A 1 * ) B 22 + A 22 A 22 V 7 + V 8 E B 22 ] S 2 , U 3 = A E B A B * M E A E B A S B * E F B N A * B A S V 3 E N A * B + F A V 1 + V 2 E B , U 4 = M E A E A * + F M S S B * E F B N + F M F S V 4 + F M V 3 E N + V 5 E D , U 5 = S 3 [ A 22 ( E 1 F A 1 U 3 E B 2 F A 2 U 4 F A 1 ) A 22 V 7 A 22 * + F A 22 V 6 ] , U 6 = [ E A 22 ( E 1 F A 1 U 3 E B 2 F A 2 U 4 F A 1 ) B 22 + A 22 A 22 V 7 + V 8 E B 22 ] S 4 ,
and V i ( i = 1 , 2 , , 8 ) are arbitrary.
Corollary 3
(Hermitian solutions for (15) over $\mathbb{C}$ [37]). Let $A_1 \in \mathbb{C}^{p\times m}$, $C_1 = C_1^* \in \mathbb{C}^{p\times p}$, $A_2 \in \mathbb{C}^{r\times m}$, $B_2 \in \mathbb{C}^{m\times l}$, and $C_2 \in \mathbb{C}^{r\times l}$. Define the following symbols:
A 11 = A 2 F A 1 , B 11 = E A 1 * B 2 , C 11 = C 2 A 2 A 1 C 1 A 1 * B 2 , A 22 = [ F A 1 F A 11 F B 2 * ] , D 11 = E A 11 A 2 , B 2 = [ ( E B 11 E A 1 * ) T ( E A 2 * ) T ] T , A = E A 22 F A 1 , B = E B 2 F B 22 , C = E A 22 F A 2 , D = F A 1 * F B 22 , ϕ = A 1 C 1 A 1 * + F A 1 A 11 C 11 B 2 F A 1 A 11 A 2 D 11 E A 11 C 11 B 2 + D 11 E A 11 C 11 B 11 F A 1 * , E 1 = B 2 * C 2 * A 2 * ϕ , E = E A 22 E 1 F B 22 , M = E A C , N = D F B , S = C F M ,
and
$$S_1 = [I_m \; 0], \quad S_2 = \begin{bmatrix} I_m \\ 0 \end{bmatrix}, \quad S_3 = [0 \; I_m], \quad S_4 = \begin{bmatrix} 0 \\ I_m \end{bmatrix}.$$
Then, the system (15) has a Hermitian solution $X \in \mathbb{C}^{m\times m}$ if and only if
$$E_{A_1}C_1 = 0, \quad E_{A_2}C_2 = 0, \quad C_2F_{B_2} = 0, \quad E_{A_{11}}C_{11}F_{B_{11}} = 0, \quad E_ME_AE = 0, \quad E_CEF_B = 0.$$
In this case, the general solution to the system (15) can be expressed as
$$X = \frac{1}{2}(\widehat{X} + \widehat{X}^*),$$
with
$$\widehat{X} = \phi + F_{A_1}F_{A_{11}}U_1 + U_2E_{B_{11}}E_{A_1^*} + F_{A_1}U_3E_{B_2} + F_{A_2}U_4E_{A_1^*},$$
or equivalently,
X ^ = B 2 * C 2 * A 2 * E B 2 * U 5 U 6 F A 22 * ,
where
U 1 = S 1 [ A 22 ( E 1 F A 1 U 3 E B 2 F A 2 U 4 E A 1 * ) A 22 V 7 B 22 + F A 22 V 6 ] , U 2 = [ E A 22 ( E 1 F A 1 U 3 E B 2 F A 2 U 4 F A 1 * ) B 22 + A 22 A 22 V 7 + V 8 E B 22 ] S 2 , U 3 = A E B A C M E A E B A S C E F B N D B A S V 3 E N D B + F A V 1 + V 2 E B , U 4 = M E A E D + F M S S C E F B N + F M F S V 4 + F M V 3 E N + V 5 E D , U 5 = S 3 [ A 22 ( E 1 F A 1 U 3 E B 2 F A 2 U 4 F A 1 ) A 22 V 7 B 22 + F A 22 V 6 ] , U 6 = [ E A 22 ( E 1 F A 1 U 3 E B 2 F A 2 U 4 F A 1 ) B 22 + A 22 A 22 V 7 + V 8 E B 22 ] S 4 ,
and  V i ( i = 1 , 2 , , 8 ) are arbitrary matrices with appropriate sizes.
Recently, Zhang et al. introduced a simple method to solve the system (3) using orthogonal projectors [39]. Recall that for a subspace $L \subseteq \mathbb{C}^n$, $P_L$ denotes the orthogonal projector onto $L$.
Theorem 7
(General solutions for (3) over $\mathbb{C}$ [39]). For given matrices $A_1 \in \mathbb{C}^{p\times m}$, $B_1 \in \mathbb{C}^{n\times q}$, $A_2 \in \mathbb{C}^{r\times m}$, $B_2 \in \mathbb{C}^{n\times l}$, $C_1 \in \mathbb{C}^{p\times q}$, and $C_2 \in \mathbb{C}^{r\times l}$, define
L 1 = ( F A 1 F A 1 F A 2 K A 1 A 1 ) ( C 1 ˜ Z 1 E B 1 + Z 2 E B 2 ) + A 1 A 1 W 1 + F A 1 F A 2 F K W 2 , L 2 = E K A 1 A 1 C 1 ˜ E B 1 E K A 1 A 1 C 1 ˜ B 1 B 1 Q E B 2 E B 1 + Z 1 E K A 1 A 1 Z 1 E B 1 + E K A 1 A 1 Z 2 E Q E B 2 E B 1 , J 1 = K A 1 A 1 C 1 ˜ K A 1 A 1 ( Z 1 E B 1 + Z 2 E B 2 ) + F K W 2 , J 2 = E K A 1 A 1 C 1 ˜ B 1 B 1 Q + Z 2 E K A 1 A 1 Z 2 Q Q , K = A 1 A 1 F A 2 , Q = E B 2 B 1 B 1 , C 1 ˜ = A 2 C 2 B 2 A 1 C 1 B 1 .
Then, the system (3) has a general solution if and only if
$$A_1A_1^\dagger C_1B_1^\dagger B_1 = C_1, \quad A_2A_2^\dagger C_2B_2^\dagger B_2 = C_2, \quad P_T(A_1^\dagger C_1B_1^\dagger - A_2^\dagger C_2B_2^\dagger)P_S = 0,$$
where $T = \operatorname{im}(A_1^*) \cap \operatorname{im}(A_2^*)$ and $S = \ker(B_1) \cap \ker(B_2)$.
In this case, the general solution to the equations is given by
$$X = A_1^\dagger C_1B_1^\dagger + F_{A_1}L_1 + L_2E_{B_1},$$
or equivalently,
$$X = A_2^\dagger C_2B_2^\dagger + F_{A_2}J_1 + J_2E_{B_2},$$
where $Z_1$, $Z_2$, $W_1$, $W_2$ are arbitrary matrices.
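Theorem 7 is stated in terms of orthogonal projectors onto intersections such as $T = \operatorname{im}(A_1^*) \cap \operatorname{im}(A_2^*)$. One standard way to compute such a projector (a sketch under our own conventions, not code from [39]) uses the fact that $x \in \operatorname{im}(A^*)$ exactly when $(I - A^\dagger A)x = 0$:

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(5)
m = 6
A1, A2 = rng.standard_normal((3, m)), rng.standard_normal((4, m))

P1 = np.linalg.pinv(A1) @ A1   # projector onto im(A1*)
P2 = np.linalg.pinv(A2) @ A2   # projector onto im(A2*)
N = null_space(np.vstack([np.eye(m) - P1, np.eye(m) - P2]))
P_T = N @ N.T                  # orthonormal basis of the intersection
print(np.allclose(P_T @ P1, P_T), np.allclose(P_T @ P2, P_T))  # True True
```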
At the end of this subsection, we present the most general form of the result for (3) to date. In 2021, the consistency conditions and solutions for the following system of matrix equations over $\mathbb{C}$ were proposed [38]:
$$A_1XB_1 = C_1, \quad A_2XB_2 = C_2, \quad A_3XB_3 = C_3, \quad A_4XB_4 = C_4. \qquad (16)$$
As special cases of (16), the related results for (13) and the system
$$A_1X = C_1, \quad A_2X = C_2, \quad A_3XB_3 = C_3, \quad A_4XB_4 = C_4 \qquad (17)$$
are also presented in [38].
Theorem 8
(General solutions for (16) over $\mathbb{C}$ [38]). Assume that $A_i \in \mathbb{C}^{p_i\times m}$, $B_i \in \mathbb{C}^{n\times q_i}$, and $C_i \in \mathbb{C}^{p_i\times q_i}$ satisfy the condition
$$A_iA_i^\dagger C_iB_i^\dagger B_i = C_i$$
for i = 1 , 2 , 3 , 4 . Let
S 1 = A 2 L A 1 , S 2 = A 4 L A 3 , T 1 = R B 1 B 2 , T 2 = R B 3 B 4 , P = I A 1 A 1 S 1 S 1 , Q = I B 1 B 1 T 1 T 1 , D 3 = ( I S 1 A 2 ) A 1 , E 3 = B 1 ( I B 2 T 1 ) , R 1 = D 3 C 1 E 3 + S 1 C 2 T 1 , R 2 = D 3 G 1 R S 1 C 2 T 1 + S 1 C 2 L T 1 F 1 E 3 , T t = R Q B 3 Q B 4 , F t = ( Q B 3 ) Q B 4 L T t , Q t = L T t L F t , S s = A 4 P L A 3 P , G s = R S s A 4 P ( A 3 P ) , P s = R G s R S s , G 1 = R S 1 A 2 A 1 , F 1 = B 1 B 2 L T 1 , G 1 = A 1 A 2 , F 1 = B 2 B 1 , G 2 = R S 2 A 4 A 3 , F 2 = B 3 B 4 L T 2 , G 2 = A 3 A 4 , F 2 = B 4 B 3 .
The system of Equation (16) is solvable if and only if
(1)
$$R_{S_1}(C_2 - A_2A_1^\dagger C_1B_1^\dagger B_2)L_{T_1} = 0, \quad R_{S_2}(C_4 - A_4A_3^\dagger C_3B_3^\dagger B_4)L_{T_2} = 0.$$
(2) One of the following systems of matrix equations is solvable:     
(2.1)
R A 3 P A 3 D 3 ( A 1 A 1 G 1 G 1 ) U T 1 B 3 L Q B 3 + R A 3 P A 3 S 1 V ( B 1 B 1 F 1 F 1 ) E 3 B 3 L Q B 3 = R A 3 P A 3 ( A 3 C 3 B 3 R 1 R 2 ) B 3 L Q B 3 , P s A 4 D 3 ( A 1 A 1 G 1 G 1 ) U T 1 B 4 Q t + P s A 4 S 1 V ( B 1 B 1 F 1 F 1 ) E 3 B 4 Q t = P s A 4 ( A 4 C 4 B 4 R 1 R 2 ) B 4 Q t ;
(2.2)
R A 3 P A 3 X B 3 L Q B 3 = R A 3 P A 3 ( A 3 C 3 B 3 R 1 R 2 ) B 3 L Q B 3 , P s A 4 X B 4 Q t = P s A 4 ( A 4 C 4 B 4 R 1 R 2 ) B 4 Q t , L S 1 X R T 1 = 0 , S 1 A 2 X B 2 T 1 = 0 , X = ( I P 1 ) X ( I Q 1 ) ;
(2.3)
R A 3 P A 3 P 0 X Q 0 B 3 L Q B 3 = R A 3 P A 3 ( A 3 C 3 B 3 R 1 R 2 ) B 3 L Q B 3 , P s A 4 P 0 X Q 0 B 4 Q t = P s A 4 ( A 4 C 4 B 4 R 1 R 2 ) B 4 Q t , A 1 P 0 X Q 0 B 1 = 0 , S 1 A 2 P 0 X Q 0 B 2 T 1 = 0 ;
(2.4)
R A 3 P A 3 ( P 0 U Q 0 A 1 A 1 P 0 U Q 0 B 1 B 1 S 1 A 2 P 0 U Q 0 B 2 T 1 = R A 3 P A 3 ( A 3 C 3 B 3 R 1 R 2 ) B 3 L Q B 3 S 1 A 2 A 1 A 1 P 0 U Q 0 B 1 B 1 B 2 T 1 ) B 3 L Q B 3 , P s A 4 ( P 0 U Q 0 A 1 A 1 P 0 U Q 0 B 1 B 1 S 1 A 2 P 0 U Q 0 B 2 T 1 = P s A 4 ( A 4 C 4 B 4 R 1 R 2 ) B 4 Q t S 1 A 2 A 1 A 1 P 0 U Q 0 B 1 B 1 B 2 T 1 ) B 4 Q t ,
where
P 0 = I A 1 A 1 A 2 R S 1 A 2 , Q 0 = I B 2 L T 1 B 2 B 1 B 1 , P 1 = P + A 1 A 1 A 2 R S 1 A 2 , Q 1 = Q + B 2 L T 1 B 2 B 1 B 1 .
(3) For some solution X 0 of (18), the following system is solvable by Z 3 , W, Z 4 , and Z:
( A 3 A 3 G 2 G 2 ) Z 3 T 2 T 2 W + G s G s W L T t L F t + R A 3 P W L T t = R A 3 P ( A 3 ( X 0 + R 1 + R 2 ) ( B 4 B 3 F t ) + C 3 F t ) L T t + G s D L F t C 3 B 3 B 4 L T 2 G 2 R S 2 C 4 T 2 T 2 ,
S 2 S 2 Z 4 ( B 3 B 3 F 2 F 2 ) + G s W F t Z + R S s Z F t F t + R S s Z L Q B 3 = R S s ( ( A 4 G s A 3 ) ( X 0 + R 1 + R 2 ) B 3 + G s C 3 ) L Q B 3 + D F t G 2 C 3 S 2 S 2 C 4 L T 2 F 2 .
Or equivalently, for  X 0 = ( I P 1 ) X ( I Q 1 ) , where X is a solution of (19), the following two equations are separately solvable by Z 3 , W, Z 4 , and  Z 5 , respectively:
E 2 Z 3 H 2 = R A 3 P A 3 X 0 ( B 4 B 3 F t ) L T t + R A 3 P ( A 3 ( R 1 + R 2 ) ( B 4 B 3 F t ) + R 3 ) L T t , E 1 L E 2 W H 2 L F t = L 1 E 1 E 2 ( R A 3 P A 3 ( X 0 + R 1 + R 2 ) B 4 + R 3 ) L T t L F t , R S s S 2 S 2 Z 4 H 3 + E 1 L E 2 Z 5 R H 2 L F t H 2 F t = R S s ( ( A 4 G s A 3 ) ( X 0 + R 1 + R 2 ) B 3 + G s C 3 ) L Q B 3 R S s ( G 2 C 3 + S 2 S 2 C 4 L T 2 F 2 ) ( F t F t + L Q B 3 ) G s ( C 3 B 3 B 4 L T 2 + G 2 R S 2 C 4 T 2 T 2 ) F t E 1 E 2 R A 3 P A 3 ( X 0 + R 1 + R 2 ) ( B 4 B 3 F t ) L T t ( F t ( H 2 L F t ) H 2 F t ) + D F t E 1 E 2 R A 3 P R 3 L T t ( F t ( H 2 L F t ) H 2 F t ) L 1 ( H 2 L F t ) H 2 F t ,
where
R 3 = C 3 F t C 3 B 3 B 4 L T 2 G 2 R S 2 C 4 T 2 T 2 , E 1 = G s ( A 3 A 3 G 2 G 2 ) , E 2 = R A 3 P ( A 3 A 3 G 2 G 2 ) , H 2 = T 2 T 2 L T t , L 1 = G s ( G s D C 3 B 3 B 4 L T 2 G 2 R S 2 C 4 T 2 T 2 ) L T t L F t , H 3 = ( B 3 B 3 F 2 F 2 ) ( F t F t + L Q B 3 ) .
In these cases, the general solution of (16) can be expressed as
X = ( ( A 1 ) C 1 L A 1 S ( A 2 ( A 1 ) C 1 W ) ) ( B 1 ) ( I B 2 T R B 1 ) + ( ( I L A 1 S A 2 ) ( A 1 ) V + L A 1 S C 2 ) T R B 1 + Z ( I L A 1 L S ) Z ( I R T R B 1 ) + ( X 0 + R 2 ) ( I B 2 T R B 1 ) + L A 1 S R S A 2 ( X 0 + R 2 ) ( I B 2 T R B 1 ) + ( I L A 1 S A 2 ) ( X 0 + R 2 ) B 2 L T T R B 1 ,
where
A 1 = A 1 A 2 , A 2 = A 3 A 4 , B 1 = [ B 1 B 2 ] , B 2 = [ B 3 , B 4 ] , ( A 1 ) = ( I S 1 A 2 ) A 1 S 1 , ( B 1 ) = B 1 ( I B 2 T 1 ) T 1 , S = ( I S s A 4 P ) ( A 3 P ) S s , T = ( Q B 3 ) ( I Q B 4 T t ) T t , V = C 1 F + G ( 1 S S ) C 2 T T + ( A 1 ( A 1 ) G G ) Z 2 T T , W = G C 1 + S S C 2 ( 1 T T ) F + S S Z 1 ( ( B 1 ) B 1 F F ) , S = A 2 L A 1 , T = R B 1 B 2 , F = ( B 1 ) B 2 L T , G = R S A 2 ( A 1 ) , F = ( B 2 ) B 1 , G = A 1 ( A 2 ) , C 1 = C 1 0 0 C 2 , C 2 = C 3 U 2 V 2 C 4 ,
and X 0 is a solution of (18), while U 2 and V 2 are
U 2 = C 3 B 3 B 4 L T 2 + G 2 R S 2 C 4 T 2 T 2 + ( A 3 A 3 G 2 G 2 ) Z 3 T 2 T 2 , V 2 = G 2 C 3 + S 2 S 2 C 4 L T 2 F 2 + S 2 S 2 Z 4 ( B 3 B 3 F 2 F 2 ) ,
for some Z 3 and Z 4 satisfying (20) and (21), with arbitrary Z 1 , Z 2 , and Z.
Replacing some symbols in Theorem 8, the corollaries regarding (13) and (17) are provided in [38].
Corollary 4
(General solutions for (13) over C [38]). Let A i C p i × m , B i C n × q i , and  C i C p i × q i satisfy A i A i C i B i B i = C i ( i = 1 , 2 , 3 ) . The system of Equation (13) is consistent if and only if
R S 1 ( C 2 A 2 A 1 C 1 B 1 B 2 ) L T 1 = 0 , G G E ( T 1 B 3 L T t ) T 1 B 3 L T t = R P s A 3 S 1 E , C 0 C 0 E H H = E L ( I F 1 F 1 ) E 3 B 3 L T t ,
where
S 2 = A 3 , T 2 = B 3 , S s = A 3 P , P s = R S s , T t = Q B 3 , Q t = L T t , G = R P s A 3 S 1 P s A 3 A 1 ( I G 1 G 1 ) , H = T 1 B 3 L T t L ( I F 1 F 1 ) E 3 B 3 L T t , E = P s A 3 ( A 3 C 3 B 3 R 1 R 2 ) B 3 L T t , C 0 = P s A 3 D 3 ( A 1 A 1 G 1 G 1 ) ,
and S 1 , T 1 , P, Q, D 3 , E 3 , R 1 , R 2 , G 1 , F 1 are the same as Theorem 8.
Corollary 5
(General solutions for (17) over C [38]). Assume that A i C p i × m for i = 1 , 2 , 3 , 4 , C i C p i × n for i = 1 , 2 , B i C n × q i and C i C p i × q i for i = 3 , 4 , where A i A i C i = C i for i = 1 , 2 , A i A i C i B i B i = C i for i = 3 , 4 . The system of Equation (17) is consistent if and only if
R S 1 ( C 2 A 2 A 1 C 1 ) = 0 , R S 2 ( C 4 A 4 A 3 C 3 B 3 B 4 ) L T 2 = 0 , R A 3 P C 3 = R A 3 P A 3 ( R 1 + R 2 ) B 3 , P s C 4 = P s A 4 ( R 1 + R 2 ) B 4 , A 0 ( A 0 ( G s G s ( A 3 P ) ( A 3 P ) ) ) A 0 H 0 = A 0 H 0 , ( G s G s ( A 3 P ) A 3 P ) H 0 L T 2 = H 0 L T 2 , R S 2 ( R S 2 ( R S s I ) ) H 00 = R S 2 H 00 , ( R S s I ) H 00 L B 3 B 3 F 2 F 2 R S s = H 00 L B 3 B 3 F 2 F 2 ,
where S 1 , S 2 , T 2 , G 2 , F 2 , P 2 , P, S s , G s , D 3 , P s are given in Theorem 8, and 
R 1 = D 3 C 1 , R 2 = S 1 C 2 , A 0 = I A 3 A 3 + G 2 G 2 , H 0 = R A 3 P A 3 ( R 1 + R 2 ) B 4 + G s D C 3 B 3 B 4 L T 2 G 2 R S 2 C 4 T 2 T 2 , H 00 = R S s ( ( A 4 G s A 3 ) ( R 1 + R 2 ) B 3 + G s C 3 ) G 2 C 3 S 2 S 2 C 4 L T 2 F 2 .

3.2. Various Rings and the Quaternion Algebra

Research on the system (3) over rings began with principal ideal domains possessing special properties [40]. Later, Wang observed that the results related to the g-inverse could be similarly generalized to regular rings [41]. Wang progressively explored more complex systems with two unilateral constraints and two bilateral constraints, extending these results to simpler equations, including centro-symmetric and P-(anti-)symmetric forms [42,43,44]. In 2015, the general solution to the system (3) was also derived over associative rings with identity and strongly *-reducible rings, as well as for operators between Banach spaces [45].
Özgüler and Akar obtained the necessary and sufficient condition for the existence of the solution to the system (3) over a principal ideal domain [40]. The solvability condition of (3) over $\mathcal{R}$ is that each equation is individually solvable and that a constructed bilateral linear matrix equation has a solution. The specific conclusions are as follows.
Theorem 9
(General solutions for (3) over principal ideal domains [40]). For a principal ideal domain $\mathcal{R}$, assume that $A_1 \in \mathcal{R}^{p\times m}$, $A_2 \in \mathcal{R}^{r\times m}$, $B_1 \in \mathcal{R}^{n\times q}$, $B_2 \in \mathcal{R}^{n\times l}$, $C_1 \in \mathcal{R}^{p\times q}$, and $C_2 \in \mathcal{R}^{r\times l}$. Let $M_1 \in \mathcal{R}^{p\times p}$, $M_2 \in \mathcal{R}^{r\times r}$, $N_1 \in \mathcal{R}^{q\times q}$, and $N_2 \in \mathcal{R}^{l\times l}$ be unimodular matrices such that
$$M_1A_1 = \begin{bmatrix} \hat{A}_1 \\ 0 \end{bmatrix}, \quad M_2A_2 = \begin{bmatrix} \hat{A}_2 \\ 0 \end{bmatrix}, \quad B_1N_1 = [\hat{B}_1 \; 0], \quad B_2N_2 = [\hat{B}_2 \; 0],$$
where $\hat{A}_1 \in \mathcal{R}^{k_1\times m}$ and $\hat{A}_2 \in \mathcal{R}^{k_2\times m}$ are of full row rank, and $\hat{B}_1 \in \mathcal{R}^{n\times k_3}$ and $\hat{B}_2 \in \mathcal{R}^{n\times k_4}$ are of full column rank. Denote
$$\hat{C}_1 = M_1C_1N_1 = \begin{bmatrix} \hat{C}_{11} & \hat{C}_{12} \\ \hat{C}_{13} & \hat{C}_{14} \end{bmatrix}, \quad \hat{C}_2 = M_2C_2N_2 = \begin{bmatrix} \hat{C}_{21} & \hat{C}_{22} \\ \hat{C}_{23} & \hat{C}_{24} \end{bmatrix},$$
partitioned such that $\hat{C}_{11} \in \mathcal{R}^{k_1\times k_3}$ and $\hat{C}_{21} \in \mathcal{R}^{k_2\times k_4}$. Additionally, suppose that $L_1$, $L_2$ are the greatest left divisors of $\hat{A}_1$, $\hat{A}_2$, and $R_1$, $R_2$ are the greatest right divisors of $\hat{B}_1$, $\hat{B}_2$, respectively, such that
$$\hat{A}_1 = L_1U_1, \quad \hat{A}_2 = L_2U_2, \quad \hat{B}_1 = V_1R_1, \quad \hat{B}_2 = V_2R_2,$$
for some left unimodular $U_1$, $U_2$ and right unimodular $V_1$, $V_2$. Define $W_1$ and $W_2$ as
$$W_1 = L_1^{-1}\hat{C}_{11}R_1^{-1} \quad \text{and} \quad W_2 = L_2^{-1}\hat{C}_{21}R_2^{-1}.$$
Then, the system (3) has a solution X over R if and only if the following conditions hold:
(1) $\hat{C}_{i2} = 0$, $\hat{C}_{i3} = 0$, $\hat{C}_{i4} = 0$ for $i = 1, 2$.
(2) $W_i \in \mathcal{R}^{k_i\times k_{i+2}}$ for $i = 1, 2$.
(3) There exist $X_1 \in \mathcal{R}^{m\times k_3}$, $X_2 \in \mathcal{R}^{m\times k_4}$, $Y_1 \in \mathcal{R}^{k_1\times n}$, and $Y_2 \in \mathcal{R}^{k_2\times n}$ such that
$$\begin{bmatrix} U_1 \\ U_2 \end{bmatrix}[X_1 \; X_2] + \begin{bmatrix} Y_1 \\ Y_2 \end{bmatrix}[V_1 \; V_2] = \begin{bmatrix} W_1 & 0 \\ 0 & W_2 \end{bmatrix}. \qquad (22)$$
Remark 7.
When considering the system (3) over a field, the condition (22) can be replaced by
$$\begin{bmatrix} A_1 \\ A_2 \end{bmatrix}[X_1 \; X_2] + \begin{bmatrix} Y_1 \\ Y_2 \end{bmatrix}[B_1 \; B_2] = \begin{bmatrix} C_1 & 0 \\ 0 & C_2 \end{bmatrix}.$$
Under this condition, Theorem 9 reduces to Theorem 2.
From 2004 to 2008, Wang and collaborators conducted a series of studies, presenting results for the system (3) over regular rings and the quaternion skew field [41,42,43,44]. In a regular ring, where every element possesses an inner inverse, expressing the solution to the system (3) using the inner inverse is a natural consequence.
We recall the analogous relations over a regular ring $\mathcal{R}$. For $A_1 \in \mathcal{R}^{p\times m}$, $B_1 \in \mathcal{R}^{n\times q}$, and $C_1 \in \mathcal{R}^{p\times q}$, we write the similarity relation
$$\begin{bmatrix} A_1 & C_1 \\ 0 & B_1 \end{bmatrix} \sim \begin{bmatrix} A_1 & 0 \\ 0 & B_1 \end{bmatrix}$$
if there exist invertible matrices $P \in \mathcal{R}^{(p+n)\times(p+n)}$ and $Q \in \mathcal{R}^{(m+q)\times(m+q)}$ such that
$$\begin{bmatrix} A_1 & C_1 \\ 0 & B_1 \end{bmatrix} = P\begin{bmatrix} A_1 & 0 \\ 0 & B_1 \end{bmatrix}Q.$$
Using this similarity relation, Wang derived the necessary and sufficient conditions for the solvability of the system (3) in the regular ring R , as well as an explicit expression for its solution [41].
Theorem 10
(General solutions for (3) over regular rings [41]). For a given regular ring $\mathcal{R}$, let $A_1 \in \mathcal{R}^{p\times m}$, $A_2 \in \mathcal{R}^{r\times m}$, $B_1 \in \mathcal{R}^{n\times q}$, $B_2 \in \mathcal{R}^{n\times l}$, $C_1 \in \mathcal{R}^{p\times q}$, and $C_2 \in \mathcal{R}^{r\times l}$. Denote $S = A_2F_{A_1}$, $T = E_{B_1}B_2$, $F = B_2F_T$, and $G = E_SA_2$. Then the following conditions are equivalent:
(1) The system (3) is consistent.
(2) The equations
$$G(A_2^-C_2B_2^- - A_1^-C_1B_1^-)F = 0$$
and
$$A_iA_i^-C_iB_i^-B_i = C_i$$
hold for i = 1 , 2 .
(3) The following similarity relations:
$$\begin{bmatrix} A_1 & C_1 & 0 \\ A_2 & 0 & C_2 \\ 0 & B_1 & B_2 \end{bmatrix} \sim \begin{bmatrix} A_1 & 0 & 0 \\ A_2 & 0 & 0 \\ 0 & B_1 & B_2 \end{bmatrix}, \quad [A_i \; C_i] \sim [A_i \; 0], \quad \begin{bmatrix} C_i \\ B_i \end{bmatrix} \sim \begin{bmatrix} 0 \\ B_i \end{bmatrix}$$
hold for i = 1 , 2 .
In that case, the general solution of the system (3) can be expressed as
X = A 1 C 1 B 1 + F A 1 S A 2 F G ( A 2 C 2 B 2 A 1 C 1 B 1 ) B 2 B 2 + G G ( A 2 C 2 B 2 A 1 C 1 B 1 ) B 2 T E B 1 + ( W G G W T T ) E B 1 + F A 1 ( Y S S Y B 2 B 2 ) F A 1 S A 2 F G W T B 2 ,
where Y and W are any matrices over R with appropriate dimensions.
Remark 8.
Matrix similarity over a regular ring is defined analogously to matrix similarity over a field. However, unlike in a field, similarity over a regular ring cannot be characterized by rank equality; instead, it can be expressed in terms of the inner inverse.
In 2005, Wang considered (17) as an extension of the system (3) over the quaternion algebra and provided a necessary and sufficient condition for the existence of the general solution to the system (17) [42].
Theorem 11
(General solutions for (17) over $\mathbb{Q}$ [42]). Assume that $A_i \in \mathbb{Q}^{p_i\times m}$ for $i = 1, 2, 3, 4$, $C_i \in \mathbb{Q}^{p_i\times n}$ for $i = 1, 2$, and $B_i \in \mathbb{Q}^{n\times q_i}$, $C_i \in \mathbb{Q}^{p_i\times q_i}$ for $i = 3, 4$. Denote
$$S = A_2F_{A_1}, \quad K = A_3F_{A_1}, \quad T = KF_S, \quad G = E_SA_2, \quad M = A_4F_{A_1}, \quad N = E_{B_3}B_4, \quad P = E_{MF_SF_T}MF_S,$$
$$\Psi = A_3\bigl[A_3^-C_3B_3^- - A_1^-C_1 - F_{A_1}S^-A_2(A_2^-C_2 - A_1^-C_1)\bigr]B_3, \quad \Phi = S^-A_2(A_2^-C_2 - A_1^-C_1) + F_ST^-\Psi B_3^-,$$
$$Q = C_4 - A_4A_1^-C_1B_4 - M\Phi B_4.$$
Then, the system (17) is consistent if and only if
$$TT^-\Psi = \Psi, \quad E_PE_{MF_SF_T}Q = 0, \quad E_{MF_SF_T}QF_N = 0, \quad G(A_2^-C_2 - A_1^-C_1) = 0,$$
$$A_jA_j^-C_jB_j^-B_j = C_j \quad (j = 3, 4),$$
and
$$A_iA_i^-C_i = C_i \quad (i = 1, 2).$$
At this point, the general solution of (17) can be expressed as
X = A 1 C 1 + F A 1 S A 2 ( A 2 C 2 A 1 C 1 ) + F A 1 F S T Ψ B 3 + F A 1 F S P E M F S F T Q N E B 3 F A 1 F S F T ( M F S F T ) M F S P E M F S F T Q B 4 + F A 1 F S F T Z F A 1 F S F T ( M F S F T ) M F S F T Z B 4 B 4 + F A 1 F S F T ( M F S F T ) Q B 4 + F A 1 F S W E B 3 F A 1 F S F T ( M F S F T ) M F S F P W N B 4 F A 1 F S P P W N E B 3 ,
with arbitrary matrices W and Z over Q .
In the same year, Wang also studied the centro-symmetric solutions to the following system of matrix equations over Q :
$$A_1X = C_1, \quad A_3XB_3 = C_3, \qquad (23)$$
where the solutions were explicitly derived in [43]. In this context, for a given matrix $A = (a_{ij}) \in \mathbb{Q}^{m\times n}$, the notation $A^\# = (a_{m-i+1,\,n-j+1}) \in \mathbb{Q}^{m\times n}$ is used to denote the matrix obtained by reversing the row and column order of $A$.
Theorem 12
(Centro-symmetric solutions for (23) over $\mathbb{Q}$ [43]). Let $A_1 \in \mathbb{Q}^{p\times m}$, $A_3 \in \mathbb{Q}^{r\times m}$, $B_3 \in \mathbb{Q}^{n\times l}$, $C_1 \in \mathbb{Q}^{p\times n}$, and $C_3 \in \mathbb{Q}^{r\times l}$. Define the following notations:
S = A 1 # F A 1 , K = A 3 F A 1 , T = K F S , G = E S A 1 # , M = A 3 # F A 1 , N = E B 3 B 3 # , P = E M F S F T M F S , Ψ = A 3 [ A 3 C 3 B 3 A 1 C 1 F A 1 S ( A 1 A 1 C 1 ) # + F A 1 S ( A 1 # A 1 C 1 ) ] B 3 , Φ = S A 1 # [ ( A 1 C 1 ) # ( A 1 C 1 ) ] + F S T Ψ B 3 , Q = C 3 # A 3 # A 1 C 1 B 3 # M Φ B 3 # .
Then, there exists a centro-symmetric solution to the system (23) if and only if
$$TT^-\Psi = \Psi, \quad E_PE_{MF_SF_T}Q = 0, \quad E_{MF_SF_T}QF_N = 0$$
and
$$A_3A_3^-C_3B_3^-B_3 = C_3, \quad A_1A_1^-C_1 = C_1, \quad G\bigl[(A_1^-C_1)^\# - A_1^-C_1\bigr] = 0$$
hold.
Under these conditions, the centro-symmetric solution of (23) can be expressed as
$$X = \frac{1}{2}(X_1 + X_1^\#),$$
where
X 1 = A 1 C 1 + F A 1 S A 1 # ( A 1 C 1 ) # ( A 1 C 1 ) + F A 1 F S T Ψ B 3 + F A 1 F S F T ( M F S F T ) Q ( B 3 # ) + F A 1 F S P E M F S F T Q N E B 3 F A 1 F S F T ( M F S F T ) M F S P E M F S F T Q ( B 3 # ) + F A 1 F S F T Z F A 1 F S F T ( M F S F T ) M F S F T Z ( B 3 B 3 ) # + F A 1 F S W E B 3 F A 1 F S F T ( M F S F T ) M F S F P W N B 4 F A 1 F S P P W N E B 3 ,
with arbitrary matrices W , Z over Q .
Remark 9.
The consistency of the system (23) for a centro-symmetric solution is equivalent to the solvability of the following system:
$$A_1X = C_1, \quad A_1^\#X = C_1^\#, \quad A_3XB_3 = C_3, \quad A_3^\#XB_3^\# = C_3^\#,$$
which corresponds to the general solutions. Hence, the proof of Theorem 12 follows directly from the results of Theorem 11.
In 2008, Wang et al. established the solvability conditions and expressions for the P-(anti-)symmetric solutions to the system of matrix equations (23) [44].
Theorem 13
(P-(anti-)symmetric solutions for (23) over $\mathbb{Q}$ [44]). Assume that $A_1 \in \mathbb{Q}^{p\times m}$, $A_3 \in \mathbb{Q}^{r\times m}$, $B_3 \in \mathbb{Q}^{n\times l}$, $C_1 \in \mathbb{Q}^{p\times n}$, and $C_3 \in \mathbb{Q}^{r\times l}$ are given matrices. Suppose a nontrivial involution $P$ is expressed as
$$P = T\begin{bmatrix} I_k & 0 \\ 0 & -I_{n-k} \end{bmatrix}T^{-1}.$$
Define the following partitions:
A 1 T = [ A 1 A 2 ] , A 1 Q p × k , A 2 Q p × ( n k ) , C 1 T = [ C 1 C 2 ] , C 1 Q p × k , C 2 Q p × ( n k ) , A 2 T = [ A 3 A 4 ] , A 3 Q r × k , A 4 Q r × ( n k ) , T 1 B 2 = B 1 B 2 , B 1 Q k × l , B 2 Q ( n k ) × l .
Let A = A 3 L A 1 , C = A 4 L A 2 , M = R A C , N = B 2 L B 1 , S = C L M , and  E = C 2 A 3 A 1 C 1 B 1 A 4 A 2 C 2 B 2 . Then, the system (23) has a P-(anti-)symmetric solution X Q n × n if and only if
(1) Equations
$$R_{A_1}C_1 = 0, \quad R_{A_2}C_2 = 0, \quad R_AEL_{B_2} = 0, \quad R_CEL_{B_1} = 0, \quad EL_{B_1}L_N = 0, \quad R_MR_AE = 0$$
hold.
(2) Rank conditions
rank [ A i C i ] = rank ( A i ) , i = 1 , 2 , rank A 1 C 1 B 1 A 3 C b 0 B 2 = rank A 1 A 3 + rank ( B 2 ) , rank A 2 C 2 B 2 A 4 C b 0 B 1 = rank A 2 A 4 + rank ( B 1 ) , rank B 1 B 2 C b = rank B 1 B 2 , rank C 1 B 1 A 1 0 C 2 B 2 0 A 2 C b A 3 A 4 = rank A 1 0 0 A 2 A 3 A 4
are all satisfied.
Under these conditions, the P-symmetric or P-anti-symmetric solution X can be expressed as
$$X = T\begin{bmatrix} X_1 & 0 \\ 0 & X_2 \end{bmatrix}T^{-1},$$
or
$$X = T\begin{bmatrix} 0 & X_1 \\ X_2 & 0 \end{bmatrix}T^{-1},$$
where X 1 , X 2 are given by
X 1 = A 1 C 1 + A E B 1 A C M E B 1 A S C E L B 1 N B 2 B 1 A S V R N B 2 B 1 + L A 1 ( L A Y + Z R B 1 ) , X 2 = A 2 C 2 + L A 2 M E B 2 + L A 2 S S C E L B 1 N + L A 2 L M ( V S S V N N ) + L A 2 W R B 2 .
where Y, V, W, and Z are arbitrary matrices.
Remark 10.
In recent studies, many researchers have focused on solving matrix systems over dual quaternions. For example, Ref. [46] presented the general solution to (2) using the Moore–Penrose inverse. Subsequently, Jin et al. extended this result to (23) [47].
Dajić studied the system (3) over associative rings with a unit, strongly *-reducing rings, and operators between Banach spaces. The necessary and sufficient conditions for the existence of solutions, as well as expressions for the general solutions, were derived [45].
Theorem 14
(General solutions for (3) over associative rings with a unit [45]). Let $\mathcal{R}$ be an associative ring with a unit, and let $A_1 \in \mathcal{R}^{p\times m}$, $A_2 \in \mathcal{R}^{r\times m}$, $B_1 \in \mathcal{R}^{n\times q}$, $B_2 \in \mathcal{R}^{n\times l}$, $C_1 \in \mathcal{R}^{p\times q}$, and $C_2 \in \mathcal{R}^{r\times l}$. Assume that $A_1$, $A_2$, $B_1$, $B_2$ are regular. Denote
$$S = A_2(I - A_1^-A_1), \quad T = (I - B_1B_1^-)B_2, \quad F = B_1^-B_2(I - T^-T), \quad G = (I - SS^-)A_2A_1^-,$$
where $S$, $T$, $F$, $G$ are regular.
(1) The system (3) is solvable if and only if the equation
$$\begin{bmatrix} A_1 \\ A_2 \end{bmatrix}X[B_1 \; B_2] = \begin{bmatrix} A_1XB_1 & A_1XB_2 \\ A_2XB_1 & A_2XB_2 \end{bmatrix} = \begin{bmatrix} C_1 & V \\ W & C_2 \end{bmatrix}$$
is consistent for some $V$, $W$ over $\mathcal{R}$ of appropriate sizes.
(2) When $C_1 = A_1A_1^-C_1B_1^-B_1$, (3) is consistent if and only if there exist $M$, $N$ over $\mathcal{R}$ such that
$$SS^-NF + GMT^-T = C_2 - GC_1F - SS^-C_2T^-T.$$
(3) If $A_1A_1^-C_1B_1^-B_1 = C_1$ and $A_2A_2^-C_2B_2^-B_2 = C_2$, then the system (3) has a general solution if and only if
$$(I - SS^-)(C_2 - GC_1F)(I - T^-T) = 0.$$
If one of the above conditions holds for (3), then the general solution can be expressed as
$$X = \bigl[A_1^-C_1 - (I - A_1^-A_1)S^-(A_2A_1^-C_1 - W)\bigr]B_1^-\bigl[I - B_2T^-(I - B_1B_1^-)\bigr] + \bigl[(I - (I - A_1^-A_1)S^-A_2)A_1^-V + (I - A_1^-A_1)S^-C_2\bigr]T^-(I - B_1B_1^-) + Z - \bigl(A_1^-A_1 + (I - A_1^-A_1)S^-S\bigr)Z\bigl(B_1B_1^- + TT^-(I - B_1B_1^-)\bigr),$$
where
$$V = C_1F + G^-(I - SS^-)C_2T^-T + (A_1A_1^- - G^-G)Z_2T^-T, \qquad (24)$$
$$W = GC_1 + SS^-C_2(I - T^-T)F^- + SS^-Z_1(B_1^-B_1 - FF^-), \qquad (25)$$
and $Z_1$, $Z_2$, and $Z$ are arbitrary elements over $\mathcal{R}$ of appropriate sizes. It is worth noting that $F$ and $G$ can be simplified to $B_1^-B_2$ and $A_2A_1^-$, respectively.
Theorem 15
(General solutions for (3) over strongly *-reducing rings with a unit [45]). For a strongly *-reducing ring $\mathcal{R}$, assume that $A_1 \in \mathcal{R}^{p\times m}$, $A_2 \in \mathcal{R}^{r\times m}$, $B_1 \in \mathcal{R}^{n\times q}$, $B_2 \in \mathcal{R}^{n\times l}$, $C_1 \in \mathcal{R}^{p\times q}$, and $C_2 \in \mathcal{R}^{r\times l}$, where $A_1^*A_1 + A_2^*A_2$ and $B_1B_1^* + B_2B_2^*$ are regular. If the conditions of Theorem 14 hold, the general solution to (3) is given by
$$X = (A_1^*A_1 + A_2^*A_2)^-\bigl[A_1^*C_1B_1^* + A_2^*C_2B_2^* + A_2^*WB_1^* + A_1^*VB_2^*\bigr](B_1B_1^* + B_2B_2^*)^- + Z - (A_1^*A_1 + A_2^*A_2)^-(A_1^*A_1 + A_2^*A_2)Z(B_1B_1^* + B_2B_2^*)(B_1B_1^* + B_2B_2^*)^-,$$
where $Z$ over $\mathcal{R}$ is arbitrary and $V$, $W$ are as defined in (24) and (25), respectively.
Theorem 16
(General solutions for the operator system (3) between Banach spaces [45]). Let $E, F, G, D, N, M$ be Banach spaces. Given $A_1 \in \mathcal{B}(F, E)$, $A_2 \in \mathcal{B}(F, N)$, $B_1 \in \mathcal{B}(D, G)$, $B_2 \in \mathcal{B}(M, G)$, assume that
$$T = (I_G - B_1B_1^-)B_2, \quad S = A_2(I_F - A_1^-A_1)$$
are regular. If $A_1A_1^-C_1B_1^-B_1 = C_1$ and $A_2A_2^-C_2B_2^-B_2 = C_2$, then the system of Equation (3) has a general solution if and only if
$$(I_N - SS^-)C_2(I_M - T^-T) = (I_N - SS^-)A_2A_1^-C_1B_1^-B_2(I_M - T^-T).$$
In this case, the general solution is given by
$$X = \bigl(A_1^-C_1 - (I_F - A_1^-A_1)S^-(A_2A_1^-C_1 - W)\bigr)B_1^-\bigl(I_G - B_2T^-(I_G - B_1B_1^-)\bigr) + \bigl((I_F - (I_F - A_1^-A_1)S^-A_2)A_1^-V + (I_F - A_1^-A_1)S^-C_2\bigr)T^-(I_G - B_1B_1^-) + Z - \bigl(A_1^-A_1 + (I_F - A_1^-A_1)S^-S\bigr)Z\bigl(B_1B_1^- + TT^-(I_G - B_1B_1^-)\bigr),$$
where
$$V = C_1B_1^-B_2(I_M - T^-T) + A_1A_2^-(I_N - SS^-)C_2T^-T + A_1A_1^-QT^-T - A_1A_2^-(I_N - SS^-)A_2A_1^-QT^-T,$$
$$W = (I_N - SS^-)A_2A_1^-C_1 + SS^-C_2(I_M - T^-T)B_2^-B_1 + SS^-PB_1^-B_1 - SS^-PB_1^-B_2(I_M - T^-T)B_2^-B_1,$$
where $P$, $Q$, and $Z$ are arbitrary elements of $\mathcal{B}(D, N)$, $\mathcal{B}(M, E)$, and $\mathcal{B}(G, F)$, respectively.
Remark 11.
Naturally, Theorem 16 can be extended to matrices over $\mathcal{R}$. The specific conclusions are identical in form to those in Theorem 16, so they are not repeated here.
In this section, we have introduced the generalized inverse method for solving the system (3). As one of the earliest and most widely used approaches for this class of problems, the generalized inverse method provides a fundamental framework for deriving solutions. Most of the subsequent methods discussed in this paper are either extensions of, or closely related to, the generalized inverse approach.

4. The vec-Operator Methods

In this section, we present methods for solving the system (3) that involve vector operators. First, we introduce the definition of the vec-operator and its related properties.
Let $A = (a_{ij})_{m\times n}$ be an $m\times n$ matrix with columns $a_j = (a_{1j}, a_{2j}, \ldots, a_{mj})^T$ for $j = 1, 2, \ldots, n$. The vec-operator of $A$ is defined as
$$\operatorname{vec}(A) = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix},$$
where the vector consists of the columns of the matrix A stacked on top of each other.
Remark 12.
The vec-operator is both a bijection and a linear mapping.
The inverse of the vec-operator can also be defined. For an $mn$-dimensional vector $a$, we use the notation $\operatorname{Invec}_{m,n}(a)$ to denote the $m\times n$ matrix $A$ such that $\operatorname{vec}(A) = a$.
Next, we define the Kronecker product of two matrices. For an $m\times n$ matrix $A = (a_{ij})$ and a $k\times l$ matrix $B$, the Kronecker product of $A$ and $B$ is given by
$$A \otimes B = \begin{bmatrix} a_{11}B & a_{12}B & \cdots & a_{1n}B \\ a_{21}B & a_{22}B & \cdots & a_{2n}B \\ \vdots & \vdots & & \vdots \\ a_{m1}B & a_{m2}B & \cdots & a_{mn}B \end{bmatrix},$$
which results in a matrix of order $mk\times nl$.
Vector operators have broad applications in solving various systems of equations, primarily because they satisfy the following property in the complex domain. Let $A \in \mathbb{C}^{p\times m}$, $B \in \mathbb{C}^{m\times n}$, and $C \in \mathbb{C}^{n\times q}$. Then, the following identity holds:
$$\operatorname{vec}(ABC) = (C^T \otimes A)\operatorname{vec}(B).$$
Using this result, systems of linear matrix equations, including (3), can be transformed into the form $Ax = b$.
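Both the identity and the resulting reduction can be checked directly; a minimal numpy sketch (ours; note that column-major order must be used for vec):

```python
import numpy as np

rng = np.random.default_rng(6)
A, B, C = (rng.standard_normal(s) for s in [(3, 4), (4, 5), (5, 2)])

vec = lambda M: M.ravel(order='F')            # stack the columns
print(np.allclose(vec(A @ B @ C), np.kron(C.T, A) @ vec(B)))   # True

# Consequently, (3) collapses to one linear system in vec(X):
#   [ kron(B1^T, A1) ]           [ vec(C1) ]
#   [ kron(B2^T, A2) ] vec(X) =  [ vec(C2) ].
```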
For quaternions, commutative quaternions, and split quaternions, their real or complex representations can be employed to convert them into real or complex systems of equations for further solution. Additionally, specialized vector operators can be designed for different types of solutions, ensuring that when the matrix is recovered, it retains the properties of a special matrix.
Finally, for the equation $Ax = b$, the following lemma provides the solvability condition and the expression for the general or least-squares solution using the Moore–Penrose inverse.
Lemma 1
([28,48]). For the given matrix equation $Ax = b$, where $A \in \mathbb{C}^{m\times n}$ and $b \in \mathbb{C}^m$, the equation is consistent if and only if
$$AA^\dagger b = b.$$
When the equation is consistent, the general solution is given by
$$x = A^\dagger b + (I_n - A^\dagger A)y,$$
where $y \in \mathbb{C}^n$ is arbitrary. If the matrix equation $Ax = b$ is inconsistent, then a least-squares solution is given by
$$x = A^\dagger b.$$
In both cases, $x = A^\dagger b$ is the unique minimum-norm (least-squares) solution.
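A short illustration of Lemma 1 (ours, with arbitrary sizes):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 6))   # rank 3
Ad = np.linalg.pinv(A)

b = A @ rng.standard_normal(6)              # consistent right-hand side
print(np.allclose(A @ Ad @ b, b))           # A A'b = b, so solvable

y = rng.standard_normal(6)
x = Ad @ b + (np.eye(6) - Ad @ A) @ y       # the general solution family
print(np.allclose(A @ x, b))

b_bad = b + rng.standard_normal(5)          # generically inconsistent
x_ls = Ad @ b_bad                           # minimum-norm least-squares
print(np.allclose(A.T @ (A @ x_ls - b_bad), 0))   # normal equations hold
```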
This section is organized into several parts. We begin by discussing the solution of the system (3) in the complex domain using the vec-operator. Following that, we explore solutions over the quaternion ring, including Hermitian, $\eta$-bi-Hermitian, (anti-)centro-Hermitian, and Hankel solutions. Lastly, we address the $\eta$-Hermitian solution for split quaternions and the Hermitian solution for commutative quaternions.

4.1. The Complex Field

The concept of using vector operators to solve the system (3) was first introduced by Navarra in 2001 [49]. Subsequently, scholars recognized that vector operators could be designed for symmetric, anti-symmetric, and bi-symmetric matrices, which led to the exploration of special solutions of (3). As a result, from 2016 to 2018, the least-squares Hermitian solution and the Hermitian indeterminate admittance solution of (3) were extensively studied [50,51,52].
In 2001, Navarra et al. derived necessary and sufficient conditions for the existence of a general solution to (3) and presented an expression for the general solution using the vec-operator and the Invec-operator [49]. This marked the first application of the vec-operator to solving the system (3).
Theorem 17
(General solutions for (3) over $\mathbb{C}$ [49]). Suppose that $A_1, A_2 \in \mathbb{C}^{p\times m}$, $B_1, B_2 \in \mathbb{C}^{n\times q}$, and $C_1, C_2 \in \mathbb{C}^{p\times q}$ are known matrices. The system of matrix equations (3) is consistent if and only if
$$A_1A_1^\dagger C_1B_1^\dagger B_1 = C_1$$
and
$$GG^\dagger\operatorname{vec}(F) = \operatorname{vec}(F),$$
where $G = B_2^T \otimes A_2 - E^T \otimes D$, $D = A_2A_1^\dagger A_1$, $E = B_1B_1^\dagger B_2$, and $F = C_2 - A_2A_1^\dagger C_1B_1^\dagger B_2$.
If the general solution to (3) exists, the expression for the general solution is given by
$$X = A_1^\dagger C_1B_1^\dagger + \operatorname{Invec}_{m,n}\bigl[G^\dagger\operatorname{vec}(F) + (I - G^\dagger G)\operatorname{vec}(V)\bigr] - A_1^\dagger A_1\operatorname{Invec}_{m,n}\bigl[G^\dagger\operatorname{vec}(F) + (I - G^\dagger G)\operatorname{vec}(V)\bigr]B_1B_1^\dagger,$$
with arbitrary $V \in \mathbb{C}^{m\times n}$.
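The two-step construction behind Theorem 17 can be replayed numerically. The sketch below (our illustration, consistent with the reconstruction of $G$, $D$, $E$, and $F$ given above) parametrizes the solutions of the first equation by $V$ and then solves the vectorized second equation for $V$:

```python
import numpy as np

rng = np.random.default_rng(8)
m, n = 4, 5
A1, B1 = rng.standard_normal((3, m)), rng.standard_normal((n, 3))
A2, B2 = rng.standard_normal((2, m)), rng.standard_normal((n, 2))
X0 = rng.standard_normal((m, n))
C1, C2 = A1 @ X0 @ B1, A2 @ X0 @ B2         # consistent by construction

A1d, B1d = np.linalg.pinv(A1), np.linalg.pinv(B1)
vec = lambda M: M.ravel(order='F')

D = A2 @ A1d @ A1
E = B1 @ B1d @ B2
F = C2 - A2 @ A1d @ C1 @ B1d @ B2
G = np.kron(B2.T, A2) - np.kron(E.T, D)

Gd = np.linalg.pinv(G)
print(np.allclose(G @ Gd @ vec(F), vec(F)))        # consistency test

V = (Gd @ vec(F)).reshape(m, n, order='F')          # one parameter choice
X = A1d @ C1 @ B1d + V - A1d @ A1 @ V @ B1 @ B1d
print(np.allclose(A1 @ X @ B1, C1), np.allclose(A2 @ X @ B2, C2))
```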
Remark 13.
Recall that the matrix equation $AXB = C$ has a Hermitian solution if and only if the pair of matrix equations (11) has a general solution $X$. The Hermitian solution of $AXB = C$ can then be represented as
$$\frac{1}{2}(X + X^*).$$
Based on this result, Ref. [49] presents the consistency condition and expressions for the Hermitian solution of $AXB = C$.
Fifteen years later, the least-squares Hermitian solution with minimum norm for (3) was considered. Ref. [50] focused directly on the properties of complex Hermitian matrices, designing $\operatorname{vec}_s$ and $\operatorname{vec}_a$ operators for symmetric and anti-symmetric matrices to describe the real and imaginary parts of Hermitian matrices.
For a matrix $A = (a_{ij}) \in \mathbb{R}^{n\times n}$, the operator $\mathrm{vec}_s(A)$ is defined as
$$\mathrm{vec}_s(A) = (a_{11}, \dots, a_{n1}, a_{22}, \dots, a_{n2}, \dots, a_{(n-1)(n-1)}, a_{n(n-1)}, a_{nn})^{T} \in \mathbb{R}^{\frac{n(n+1)}{2}}.$$
The operator $\mathrm{vec}_a(A)$ is defined as
$$\mathrm{vec}_a(A) = (a_{21}, \dots, a_{n1}, a_{32}, \dots, a_{n2}, \dots, a_{(n-1)(n-2)}, a_{n(n-2)}, a_{n(n-1)})^{T} \in \mathbb{R}^{\frac{n(n-1)}{2}}.$$
Let $x = (x_1, x_2, \dots, x_k)^{T} \in \mathbb{C}^{k}$, $y = (y_1, y_2, \dots, y_k)^{T} \in \mathbb{C}^{k}$, and $A = (A_1, A_2, \dots, A_k)$, where $A_i \in \mathbb{C}^{m\times n}$ ($i = 1, 2, \dots, k$). Define the circle product $\odot$ by the following conditions:
$(1)$ $A \odot x = x_1 A_1 + x_2 A_2 + \cdots + x_k A_k \in \mathbb{C}^{m\times n}$;
$(2)$ $A \odot (x, y) = (A \odot x, A \odot y)$.
For $E_{ij} \in \mathbb{R}^{n\times n}$ denoting the matrix whose $(i,j)$ entry is 1 and whose other entries are zero, define
$$K_s = \big(E_{11},\ E_{21} + E_{12}, \dots, E_{n1} + E_{1n},\ E_{22},\ E_{32} + E_{23}, \dots, E_{n2} + E_{2n}, \dots, E_{(n-1)(n-1)},\ E_{n(n-1)} + E_{(n-1)n},\ E_{nn}\big) \in \mathbb{R}^{n \times \frac{n^2(n+1)}{2}},$$
and
$$K_a = \big(E_{21} - E_{12}, \dots, E_{n1} - E_{1n},\ E_{32} - E_{23}, \dots, E_{n2} - E_{2n}, \dots, E_{n(n-1)} - E_{(n-1)n}\big) \in \mathbb{R}^{n \times \frac{n^2(n-1)}{2}}.$$
With the above definitions, an equivalent characterization of Hermitian matrices is given as follows. For a square real matrix $X \in \mathbb{R}^{n\times n}$, $X$ is symmetric if and only if
$$X = K_s \odot \mathrm{vec}_s(X),$$
and $X$ is anti-symmetric if and only if
$$X = K_a \odot \mathrm{vec}_a(X).$$
Hence, for a complex square matrix $X \in \mathbb{C}^{n\times n}$, $X$ is Hermitian if and only if
$$X = K_s \odot \mathrm{vec}_s(\mathrm{Re}(X)) + i\, K_a \odot \mathrm{vec}_a(\mathrm{Im}(X)).$$
Based on the above results, the main theorem of [50] can be stated.
Theorem 18
(Hermitian solutions for (3) over C [50]). Let $A_1 \in \mathbb{C}^{p\times m}$, $B_1 \in \mathbb{C}^{n\times q}$, $C_1 \in \mathbb{C}^{p\times q}$, $A_2 \in \mathbb{C}^{r\times m}$, $B_2 \in \mathbb{C}^{n\times l}$, $C_2 \in \mathbb{C}^{r\times l}$. Define the complex matrices $F_{ij} \in \mathbb{C}^{p\times q}$, $G_{ij} \in \mathbb{C}^{p\times q}$, $H_{ij} \in \mathbb{C}^{r\times l}$, and $K_{ij} \in \mathbb{C}^{r\times l}$ for $n \ge i \ge j \ge 1$ as
$$F_{ij} = \begin{cases} A_{1i} B_{1j}, & i = j, \\ A_{1i} B_{1j} + A_{1j} B_{1i}, & i > j, \end{cases} \qquad G_{ij} = \begin{cases} 0, & i = j, \\ \sqrt{-1}\,\big(A_{1i} B_{1j} - A_{1j} B_{1i}\big), & i > j, \end{cases}$$
$$H_{ij} = \begin{cases} A_{2i} B_{2j}, & i = j, \\ A_{2i} B_{2j} + A_{2j} B_{2i}, & i > j, \end{cases} \qquad K_{ij} = \begin{cases} 0, & i = j, \\ \sqrt{-1}\,\big(A_{2i} B_{2j} - A_{2j} B_{2i}\big), & i > j, \end{cases}$$
where $A_{1i} \in \mathbb{C}^{p}$ and $A_{2i} \in \mathbb{C}^{r}$ are the $i$-th column vectors of $A_1$ and $A_2$, and $B_{1j} \in \mathbb{C}^{q}$ and $B_{2j} \in \mathbb{C}^{l}$ are the $j$-th row vectors of $B_1$ and $B_2$, respectively. For $n \ge i \ge j \ge 1$, define
$$\hat\Gamma_{ij} = \begin{pmatrix} \mathrm{Re}(F_{ij}) & \mathrm{Im}(F_{ij}) \\ \mathrm{Re}(H_{ij}) & \mathrm{Im}(H_{ij}) \end{pmatrix}, \qquad \hat\Upsilon_{ij} = \begin{pmatrix} \mathrm{Re}(G_{ij}) & \mathrm{Im}(G_{ij}) \\ \mathrm{Re}(K_{ij}) & \mathrm{Im}(K_{ij}) \end{pmatrix}, \qquad \Omega_0 = \begin{pmatrix} \mathrm{Re}(E) & \mathrm{Im}(E) \\ \mathrm{Re}(F) & \mathrm{Im}(F) \end{pmatrix},$$
$$P = \begin{pmatrix} \langle \hat\Gamma_{11}, \hat\Gamma_{11} \rangle & \langle \hat\Gamma_{11}, \hat\Gamma_{21} \rangle & \cdots & \langle \hat\Gamma_{11}, \hat\Gamma_{nn} \rangle \\ \langle \hat\Gamma_{21}, \hat\Gamma_{11} \rangle & \langle \hat\Gamma_{21}, \hat\Gamma_{21} \rangle & \cdots & \langle \hat\Gamma_{21}, \hat\Gamma_{nn} \rangle \\ \vdots & \vdots & & \vdots \\ \langle \hat\Gamma_{nn}, \hat\Gamma_{11} \rangle & \langle \hat\Gamma_{nn}, \hat\Gamma_{21} \rangle & \cdots & \langle \hat\Gamma_{nn}, \hat\Gamma_{nn} \rangle \end{pmatrix},$$
$$U = \begin{pmatrix} \langle \hat\Gamma_{11}, \hat\Upsilon_{21} \rangle & \langle \hat\Gamma_{11}, \hat\Upsilon_{31} \rangle & \cdots & \langle \hat\Gamma_{11}, \hat\Upsilon_{n(n-1)} \rangle \\ \langle \hat\Gamma_{21}, \hat\Upsilon_{21} \rangle & \langle \hat\Gamma_{21}, \hat\Upsilon_{31} \rangle & \cdots & \langle \hat\Gamma_{21}, \hat\Upsilon_{n(n-1)} \rangle \\ \vdots & \vdots & & \vdots \\ \langle \hat\Gamma_{nn}, \hat\Upsilon_{21} \rangle & \langle \hat\Gamma_{nn}, \hat\Upsilon_{31} \rangle & \cdots & \langle \hat\Gamma_{nn}, \hat\Upsilon_{n(n-1)} \rangle \end{pmatrix},$$
$$V = \begin{pmatrix} \langle \hat\Upsilon_{21}, \hat\Upsilon_{21} \rangle & \langle \hat\Upsilon_{21}, \hat\Upsilon_{31} \rangle & \cdots & \langle \hat\Upsilon_{21}, \hat\Upsilon_{n(n-1)} \rangle \\ \langle \hat\Upsilon_{31}, \hat\Upsilon_{21} \rangle & \langle \hat\Upsilon_{31}, \hat\Upsilon_{31} \rangle & \cdots & \langle \hat\Upsilon_{31}, \hat\Upsilon_{n(n-1)} \rangle \\ \vdots & \vdots & & \vdots \\ \langle \hat\Upsilon_{n(n-1)}, \hat\Upsilon_{21} \rangle & \langle \hat\Upsilon_{n(n-1)}, \hat\Upsilon_{31} \rangle & \cdots & \langle \hat\Upsilon_{n(n-1)}, \hat\Upsilon_{n(n-1)} \rangle \end{pmatrix},$$
$$e_1 = \begin{pmatrix} \langle \hat\Gamma_{11}, \Omega_0 \rangle \\ \langle \hat\Gamma_{21}, \Omega_0 \rangle \\ \vdots \\ \langle \hat\Gamma_{nn}, \Omega_0 \rangle \end{pmatrix}, \qquad e_2 = \begin{pmatrix} \langle \hat\Upsilon_{21}, \Omega_0 \rangle \\ \langle \hat\Upsilon_{31}, \Omega_0 \rangle \\ \vdots \\ \langle \hat\Upsilon_{n(n-1)}, \Omega_0 \rangle \end{pmatrix}.$$
Let
$$W = \begin{pmatrix} P & U \\ U^{T} & V \end{pmatrix}, \qquad e = \begin{pmatrix} e_1 \\ e_2 \end{pmatrix}.$$
The system (3) is consistent for Hermitian solutions when
$$W W^{\dagger} e = e.$$
Then, the Hermitian or least-squares Hermitian solution of (3) is of the form
$$X = [K_s\ \ i K_a] \odot \big(W^{\dagger} e + (I - W^{\dagger} W)\, y\big),$$
where $y \in \mathbb{R}^{n^2}$ is an arbitrary vector. The system (3) has a unique minimum-norm Hermitian or minimum-norm least-squares Hermitian solution satisfying
$$X = [K_s\ \ i K_a] \odot W^{\dagger} e.$$
Solving the system (3) using Theorem 18 involves computing many matrix inner products, which is computationally expensive. Ref. [52] used the real representation to transform the problem from the complex field into the real field, improving the efficiency of the method.
Define e i as the i-th column of I n . Let K s be
e 1 e 2 e 3 e n 1 e n 0 0 0 0 0 0 0 0 e 1 0 0 0 e 2 e 3 e n 1 e n 0 0 0 0 0 e 1 0 0 0 e 2 0 0 0 0 0 0 0 0 e 1 0 0 0 e 2 0 e n 1 e n 0 0 0 0 0 e 1 0 0 0 e 2 0 e n 1 e n R n × n 2 ( n + 1 ) 2 ,
and K a be
e 2 e 3 e n 1 e n 0 0 0 0 e 1 0 0 0 e 3 e n 1 e n 0 0 e 1 0 0 e 2 0 0 0 0 0 e 1 0 0 e 2 0 e n 0 0 0 e 1 0 0 e 2 e n 1 R n × n 2 ( n 1 ) 2 .
The matrices $K_s$ and $K_a$ provide another equivalent characterization of Hermitian matrices. For a real square matrix $X \in \mathbb{R}^{n\times n}$, $X$ is symmetric if and only if
$$\mathrm{vec}(X) = K_s\, \mathrm{vec}_s(X),$$
and $X$ is anti-symmetric if and only if
$$\mathrm{vec}(X) = K_a\, \mathrm{vec}_a(X).$$
Thus, for a complex square matrix $X \in \mathbb{C}^{n\times n}$, $X$ is Hermitian if and only if
$$\mathrm{vec}(X) = K_s\, \mathrm{vec}_s(\mathrm{Re}(X)) + i\, K_a\, \mathrm{vec}_a(\mathrm{Im}(X)).$$
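The matrices $K_s$ and $K_a$ in this second characterization are easy to form explicitly. The following sketch (our own construction, using column-major vec and the entry ordering of $\mathrm{vec}_s$ given earlier) verifies $\mathrm{vec}(X) = K_s\,\mathrm{vec}_s(X)$ for a symmetric $X$.

```python
# Duplication-style matrices: vec(X) = Ks vec_s(X) for symmetric X,
# vec(X) = Ka vec_a(X) for anti-symmetric X (column-major vec).
import numpy as np

def build_Ks_Ka(n):
    pairs_s = [(i, j) for j in range(n) for i in range(j, n)]      # i >= j
    pairs_a = [(i, j) for j in range(n) for i in range(j + 1, n)]  # i > j
    Ks = np.zeros((n * n, len(pairs_s)))
    Ka = np.zeros((n * n, len(pairs_a)))
    for c, (i, j) in enumerate(pairs_s):
        Ks[j * n + i, c] = 1.0
        Ks[i * n + j, c] = 1.0            # symmetric: X_ij = X_ji
    for c, (i, j) in enumerate(pairs_a):
        Ka[j * n + i, c] = 1.0
        Ka[i * n + j, c] = -1.0           # anti-symmetric: X_ij = -X_ji
    return Ks, Ka

n = 4
Ks, Ka = build_Ks_Ka(n)
S = np.random.randn(n, n); S = S + S.T    # symmetric test matrix
vs = np.array([S[i, j] for j in range(n) for i in range(j, n)])   # vec_s(S)
assert np.allclose(S.reshape(-1, order="F"), Ks @ vs)
```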
Additionally, the definition of the real representation of a complex matrix is presented. For a complex matrix A = A 1 + A 2 i C m × n , where A 1 , A 2 R m × n , denote
A R = A 1 A 2 A 2 A 1 R 2 m × 2 n ,
and
A r R = [ A 1 A 2 ] R m × 2 n .
Assume $X \in \mathbb{C}^{n\times n}$; then $\mathrm{vec}(X^R) = F\, \mathrm{vec}(X_r^R)$, where
F = G K L G R 4 n 2 × 2 n 2 ,
with
G = I n 0 0 0 0 0 0 0 0 I n 0 0 0 0 0 0 0 0 I n 0 0 0 0 0 0 0 0 I n 0 0 0 0 , K = 0 0 0 0 I n 0 0 0 0 0 0 0 0 I n 0 0 0 0 0 0 0 0 I n 0 0 0 0 0 0 0 0 I n , L = 0 0 0 0 I n 0 0 0 0 0 0 0 0 I n 0 0 0 0 0 0 0 0 I n 0 0 0 0 0 0 0 0 I n .
Based on the real representation of complex matrices, Ref. [52] presents alternative consistency conditions and expressions for the general and least-squares solutions of (3).
Theorem 19
(Hermitian solutions for (3) over C [52]). Let $A_1, A_2 \in \mathbb{C}^{p\times m}$, $B_1, B_2 \in \mathbb{C}^{n\times q}$, $C_1, C_2 \in \mathbb{C}^{p\times q}$. Denote
$$W = \begin{pmatrix} (B_1^R)^{T} \otimes A_1^R \\ (B_2^R)^{T} \otimes A_2^R \end{pmatrix},$$
and
$$Q = \begin{pmatrix} K_s & 0 \\ 0 & K_a \end{pmatrix}.$$
The system (3) is consistent when
$$\big[I - (WFQ)(WFQ)^{\dagger}\big] \begin{pmatrix} \mathrm{vec}(C_{1r}^R) \\ \mathrm{vec}(C_{2r}^R) \end{pmatrix} = 0.$$
Then, the general or least-squares solution of (3) is of the form
$$\begin{pmatrix} \mathrm{vec}_s(X_1) \\ \mathrm{vec}_a(X_2) \end{pmatrix} = (WFQ)^{\dagger} \begin{pmatrix} \mathrm{vec}(C_{1r}^R) \\ \mathrm{vec}(C_{2r}^R) \end{pmatrix} + \big[I - (WFQ)^{\dagger}(WFQ)\big]\, y,$$
where $y \in \mathbb{R}^{n^2}$ is an arbitrary vector. The system (3) has a unique minimal-norm general or least-squares solution satisfying
$$\begin{pmatrix} \mathrm{vec}_s(X_1) \\ \mathrm{vec}_a(X_2) \end{pmatrix} = (WFQ)^{\dagger} \begin{pmatrix} \mathrm{vec}(C_{1r}^R) \\ \mathrm{vec}(C_{2r}^R) \end{pmatrix}.$$
Remark 14.
Based on the real representation of complex numbers, the system of matrix Equation (3) is equivalent to
$$W F Q \begin{pmatrix} \mathrm{vec}_s(X_1) \\ \mathrm{vec}_a(X_2) \end{pmatrix} = \begin{pmatrix} \mathrm{vec}(C_{1r}^R) \\ \mathrm{vec}(C_{2r}^R) \end{pmatrix}.$$
Remark 15.
Compared to Theorem 18, Theorem 19 does not involve complex number operations, resulting in improved computational efficiency. This conclusion is also verified by the numerical experiments presented in [52].
Later, Hermitian indeterminate admittance (HIA) solutions of the system (3) were considered by Liang et al. [51].
To describe Hermitian indeterminate admittance matrices, Ref. [51] designed modified $\mathrm{vec}_s$ and $\mathrm{vec}_a$ operators. For $A = (a_{ij}) \in \mathbb{R}^{n\times n}$, denote
$$\mathrm{vec}_s(A) = (a_{11}, a_{21}, \dots, a_{n-1,1}, a_{22}, a_{32}, \dots, a_{n-1,2}, \dots, a_{n-1,n-1})^{T} \in \mathbb{R}^{\frac{n(n-1)}{2}},$$
and
$$\mathrm{vec}_a(A) = (a_{21}, a_{31}, \dots, a_{n-1,1}, a_{32}, a_{42}, \dots, a_{n-1,2}, \dots, a_{n-1,n-2})^{T} \in \mathbb{R}^{\frac{(n-1)(n-2)}{2}}.$$
On the other hand, let
K S = ( E 11 E n 1 E 1 n + E n n , E 21 + E 12 E n 1 E 2 n E 1 n E n 2 + E n n , , E n 1 , 1 + E 1 , n 1 E n 1 E n 1 , n E n , n 1 E 1 n + E n n , E 22 E 2 n E n 2 + E n n , , E n 1 , n 1 E n , n 1 E n 1 , n + E n n ) R n × n 2 ( n 1 ) 2 ,
and
K A = ( E 11 + E 22 + + E n n , E 21 E 12 E n 1 E 2 n + E n 2 + E 1 n , E 31 E 13 E n 1 E 3 n + E 1 n + E n 3 , , E n 1 , 1 E 1 , n 1 E n 1 E n 1 , n + E 1 n + E n , n 1 , E 32 E 23 E 3 n E n 2 + E 2 n + E n 3 , , E n 1 , n 2 E n 2 , n 1 E n 1 , n E n , n 2 + E n , n 1 + E n 2 , n ) R n × n ( n 2 ) ( n 1 ) + n 2 .
For a square real matrix $X \in \mathbb{R}^{n\times n}$, $X$ is a symmetric indeterminate admittance matrix if and only if
$$X = K_S \odot \mathrm{vec}_s(X),$$
and $X$ is an anti-symmetric indeterminate admittance matrix if and only if
$$X = K_A \odot \mathrm{vec}_a(X).$$
Hence, for a complex square matrix $X \in \mathbb{C}^{n\times n}$, $X$ is a Hermitian indeterminate admittance matrix if and only if
$$X = K_S \odot \mathrm{vec}_s(\mathrm{Re}(X)) + i\, K_A \odot \mathrm{vec}_a(\mathrm{Im}(X)).$$
The results of [51] giving the consistency conditions for (3) to have HIA solutions are as follows:
Theorem 20
(HIA solutions for (3) over C [51]). Assume that $A_1 \in \mathbb{C}^{p\times m}$, $B_1 \in \mathbb{C}^{n\times q}$, $C_1 \in \mathbb{C}^{p\times q}$, $A_2 \in \mathbb{C}^{r\times m}$, $B_2 \in \mathbb{C}^{n\times l}$, $C_2 \in \mathbb{C}^{r\times l}$. Denote
$$F_{ij} = \begin{cases} A_{1i}B_{1j} - A_{1n}B_{1j} - A_{1i}B_{1n} + A_{1n}B_{1n}, & i = j, \\ A_{1i}B_{1j} + A_{1j}B_{1i} - A_{1i}B_{1n} - A_{1n}B_{1j} - A_{1j}B_{1n} - A_{1n}B_{1i} + 2A_{1n}B_{1n}, & i > j, \end{cases}$$
$$G_{ij} = \begin{cases} 0, & i = j, \\ \sqrt{-1}\,\big(A_{1i}B_{1j} - A_{1j}B_{1i} - A_{1n}B_{1j} - A_{1i}B_{1n} + A_{1j}B_{1n} + A_{1n}B_{1i}\big), & i > j, \end{cases}$$
$$M_{ij} = \begin{cases} A_{2i}B_{2j} - A_{2n}B_{2j} - A_{2i}B_{2n} + A_{2n}B_{2n}, & i = j, \\ A_{2i}B_{2j} + A_{2j}B_{2i} - A_{2i}B_{2n} - A_{2n}B_{2j} - A_{2j}B_{2n} - A_{2n}B_{2i} + 2A_{2n}B_{2n}, & i > j, \end{cases}$$
$$N_{ij} = \begin{cases} 0, & i = j, \\ \sqrt{-1}\,\big(A_{2i}B_{2j} - A_{2j}B_{2i} - A_{2n}B_{2j} - A_{2i}B_{2n} + A_{2j}B_{2n} + A_{2n}B_{2i}\big), & i > j, \end{cases}$$
where $A_{1i} \in \mathbb{C}^{p}$ and $A_{2i} \in \mathbb{C}^{r}$ are the $i$-th column vectors of $A_1$ and $A_2$, and $B_{1j} \in \mathbb{C}^{q}$ and $B_{2j} \in \mathbb{C}^{l}$ are the $j$-th row vectors of $B_1$ and $B_2$, respectively. Let
$$S_{ij} = \begin{pmatrix} F_{ij} \\ M_{ij} \end{pmatrix}, \qquad T_{ij} = \begin{pmatrix} G_{ij} \\ N_{ij} \end{pmatrix}, \qquad R = \begin{pmatrix} E \\ F \end{pmatrix},$$
$$\hat S_{ij} = \big(\mathrm{Re}(S_{ij})\ \ \mathrm{Im}(S_{ij})\big), \qquad \hat T_{ij} = \big(\mathrm{Re}(T_{ij})\ \ \mathrm{Im}(T_{ij})\big), \qquad R_0 = \big(\mathrm{Re}(R)\ \ \mathrm{Im}(R)\big),$$
$$U = \begin{pmatrix} S & W \\ W^{T} & T \end{pmatrix}, \qquad v = \begin{pmatrix} v_1 \\ v_2 \end{pmatrix},$$
where, for $n-1 \ge i \ge j \ge 1$,
$$S = \big(\langle \hat S_{ij}, \hat S_{kl} \rangle\big), \qquad T = \big(\langle \hat T_{ij}, \hat T_{kl} \rangle\big), \qquad W = \big(\langle \hat S_{ij}, \hat T_{kl} \rangle\big),$$
with the index pairs ordered as $11, 21, \dots, (n-1)(n-1)$ for the $\hat S_{ij}$ and as $11, 21, \dots, (n-1)(n-2)$ for the $\hat T_{ij}$, and
$$v_1 = \begin{pmatrix} \langle \hat S_{11}, R_0 \rangle \\ \langle \hat S_{21}, R_0 \rangle \\ \vdots \\ \langle \hat S_{n-1,n-1}, R_0 \rangle \end{pmatrix}, \qquad v_2 = \begin{pmatrix} \langle \hat T_{11}, R_0 \rangle \\ \langle \hat T_{21}, R_0 \rangle \\ \vdots \\ \langle \hat T_{n-1,n-2}, R_0 \rangle \end{pmatrix}.$$
Then, the system (3) has a Hermitian indeterminate admittance solution X if and only if
$$U U^{\dagger} v = v.$$
At this point,
$$X = [K_S\ \ i K_A] \odot \big[U^{\dagger} v + (I - U^{\dagger} U)\, Y\big],$$
with arbitrary Y.
Liang et al. also presented another method for finding the Hermitian indeterminate admittance solutions of (3) [51].
For $\alpha_i = e_i - e_n$, $i = 1, 2, \dots, n-1$, let
K s = α 1 α 2 α n 1 0 0 0 0 0 0 α 1 0 α 2 α n 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 α n 2 α n 1 0 0 0 α 1 0 α 2 0 α n 2 α n 1 α 1 α 2 α 1 α n 1 α 1 α 2 α n 1 α 2 α n 2 α n 1 α n 2 α n 1
and
K a = α 2 α 3 α n 1 0 0 0 0 α 1 0 0 α 3 α n 1 0 0 0 α 1 0 α 2 0 α 4 0 0 0 0 0 0 α 3 0 0 0 0 0 0 0 α n 1 0 0 α 1 0 α 2 0 α n 2 α 2 α 1 α 3 α 1 α n 1 α 1 α 3 α 2 α n 1 α 2 α 4 α 3 α n 1 α n 2 .
Then, for $X \in \mathbb{R}^{n\times n}$, $X$ is a symmetric indeterminate admittance matrix if and only if
$$\mathrm{vec}(X) = K_s\, \mathrm{vec}_s(X),$$
and $X$ is an anti-symmetric indeterminate admittance matrix if and only if
$$\mathrm{vec}(X) = K_a\, \mathrm{vec}_a(X).$$
Hence, for a complex matrix $X \in \mathbb{C}^{n\times n}$, $X$ is a Hermitian indeterminate admittance matrix if and only if
$$\mathrm{vec}(X) = K_s\, \mathrm{vec}_s(\mathrm{Re}(X)) + i\, K_a\, \mathrm{vec}_a(\mathrm{Im}(X)).$$
Naturally, the following theorem holds.
Theorem 21
(HIA Solutions for (3) over C [51]). Suppose that $A_1, A_2 \in \mathbb{C}^{p\times m}$, $B_1, B_2 \in \mathbb{C}^{n\times q}$, $C_1, C_2 \in \mathbb{C}^{p\times q}$. Let
$$P = \begin{pmatrix} (B_1^{T} \otimes A_1) K_s & i (B_1^{T} \otimes A_1) K_a \\ (B_2^{T} \otimes A_2) K_s & i (B_2^{T} \otimes A_2) K_a \end{pmatrix}, \quad K = \begin{pmatrix} \mathrm{vec}(C_1) \\ \mathrm{vec}(C_2) \end{pmatrix}, \quad P_0 = \begin{pmatrix} \mathrm{Re}(P) \\ \mathrm{Im}(P) \end{pmatrix}, \quad K_0 = \begin{pmatrix} \mathrm{Re}(K) \\ \mathrm{Im}(K) \end{pmatrix}.$$
Then the existence condition for a Hermitian indeterminate admittance solution X of (3) can be expressed as
$$P_0 P_0^{\dagger} K_0 = K_0.$$
In this case, X satisfies
$$\mathrm{vec}(X) = (K_s\ \ i K_a)\big[P_0^{\dagger} K_0 + (I - P_0^{\dagger} P_0)\, Y\big],$$
with arbitrary Y.
In the process of considering special solutions using vector operators over the complex number field, it is essential to treat the real and imaginary parts separately. In the following subsection, we demonstrate that when applying these methods to find special solutions over the quaternion field, the four components of quaternions must be addressed individually.

4.2. The Quaternion Algebra

This subsection considers the system (3) over the quaternions, addressing the general solution and various Hermitian-type solutions. It is well known that the quaternion algebra is noncommutative, which means that Equation (26) does not hold over Q. However, with the help of the complex representation of quaternions, the least-squares Hermitian, η-bi-Hermitian, (anti-)centro-Hermitian, and Hankel solutions of (3) can all be derived [53,54,55,56,57]. Additionally, the (anti-)centro-Hermitian solution of (3) has also been treated via the real representation [58].
We first introduce the complex representation of quaternion matrices and its properties. Any quaternion matrix $A \in \mathbb{Q}^{m\times n}$ can be represented as $A = A_1 + A_2 j$, where $A_1, A_2 \in \mathbb{C}^{m\times n}$. Denote $\Phi(A) = [A_1\ \ A_2] \in \mathbb{C}^{m\times 2n}$, $\Psi(A) = [\mathrm{Re}(A_1)\ \ \mathrm{Im}(A_1)\ \ \mathrm{Re}(A_2)\ \ \mathrm{Im}(A_2)] \in \mathbb{R}^{m\times 4n}$, and
$$f(A) = \begin{pmatrix} A_1 & A_2 \\ -\overline{A_2} & \overline{A_1} \end{pmatrix} \in \mathbb{C}^{2m\times 2n}.$$
Remark 16.
For k R , A Q m × n , and  B Q n × r , the following conclusions can be derived:
(1) k Φ ( A ) = Φ ( k A ) ,
(2) k Ψ ( A ) = Ψ ( k A ) ,
(3) k f ( A ) = f ( k A ) ,
(4) f ( A B ) = f ( A ) f ( B ) ,
(5) Φ ( A B ) = Φ ( A ) f ( B ) .
Thus, Φ, Ψ, and f are linear maps.
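These identities are easy to sanity-check numerically. The sketch below (our own encoding of a quaternion matrix as a pair of complex blocks, under the representation $f(A)$ reconstructed above) verifies $f(AB) = f(A)f(B)$ and $\Phi(AB) = \Phi(A)f(B)$.

```python
# Quaternion matrices A = A1 + A2 j stored as pairs (A1, A2) of complex arrays.
# Since j z = conj(z) j, the product is
# (A1 + A2 j)(B1 + B2 j) = (A1 B1 - A2 conj(B2)) + (A1 B2 + A2 conj(B1)) j.
import numpy as np

def qmul(A, B):
    A1, A2 = A; B1, B2 = B
    return (A1 @ B1 - A2 @ B2.conj(), A1 @ B2 + A2 @ B1.conj())

def f(A):
    A1, A2 = A
    return np.block([[A1, A2], [-A2.conj(), A1.conj()]])

def Phi(A):
    return np.hstack(A)

rng = np.random.default_rng(1)
c = lambda m, n: rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
A = (c(3, 4), c(3, 4)); B = (c(4, 2), c(4, 2))
assert np.allclose(f(qmul(A, B)), f(A) @ f(B))        # f(AB) = f(A) f(B)
assert np.allclose(Phi(qmul(A, B)), Phi(A) @ f(B))    # Phi(AB) = Phi(A) f(B)
```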
Based on the complex representation, the Kronecker product, and (26), the vec-operator over quaternions can be described as follows.
Lemma 2
(The structure of the vec-operator over Q [53]). For $A = A_1 + A_2 j \in \mathbb{Q}^{p\times m}$, $B = B_1 + B_2 j \in \mathbb{Q}^{m\times n}$, and $C = C_1 + C_2 j \in \mathbb{Q}^{n\times q}$, the following holds:
$$\mathrm{vec}(\Phi(ABC)) = \big(f(C)^{T} \otimes A_1,\ f(Cj)^{*} \otimes A_2\big) \begin{pmatrix} \mathrm{vec}(\Phi(B)) \\ \mathrm{vec}(j\Phi(B)j) \end{pmatrix}.$$
Based on different definitions of special matrices, the expression
$$\begin{pmatrix} \mathrm{vec}(\Phi(B)) \\ \mathrm{vec}(j\Phi(B)j) \end{pmatrix}$$
has corresponding structures, which can be used to reduce the system (3) to the form $Ax = b$.
To derive the general and least-squares solutions, Ref. [53] introduced the following notation. For a matrix $X = (x_{ij}) \in \mathbb{R}^{n\times n}$, define the vectors
$$\mathrm{vec}_S(X) = (x_{11}, \sqrt{2}\,x_{21}, \dots, \sqrt{2}\,x_{n1}, x_{22}, \sqrt{2}\,x_{32}, \dots, \sqrt{2}\,x_{n2}, \dots, x_{nn})^{T} \in \mathbb{R}^{\frac{n(n+1)}{2}},$$
and
$$\mathrm{vec}_A(X) = \sqrt{2}\,(x_{21}, x_{31}, \dots, x_{n1}, x_{32}, x_{42}, \dots, x_{n2}, \dots, x_{n(n-1)})^{T} \in \mathbb{R}^{\frac{n(n-1)}{2}}.$$
The $\mathrm{vec}_S$ and $\mathrm{vec}_A$ operators can also be used to express symmetric and anti-symmetric matrices. Suppose $X \in \mathbb{R}^{n\times n}$; then:
$(1)$ $X \in \mathrm{SR}^{n\times n} \Leftrightarrow \mathrm{vec}(X) = K_S(n)\, \mathrm{vec}_S(X)$, where $K_S(n) \in \mathbb{R}^{n^2\times \frac{n(n+1)}{2}}$ is given by
1 2 2 e 1 e 2 e 3 e n 1 e n 0 0 0 0 0 0 0 0 e 1 0 0 0 2 e 2 e 3 e n 1 e n 0 0 0 0 0 e 1 0 0 0 e 2 0 0 0 0 0 0 0 0 e 1 0 0 0 e 2 0 2 e n 1 e n 0 0 0 0 0 e 1 0 0 0 e 2 0 e n 1 2 e n ,
$(2)$ $X \in \mathrm{ASR}^{n\times n} \Leftrightarrow \mathrm{vec}(X) = K_A(n)\, \mathrm{vec}_A(X)$, where the matrix $K_A(n) \in \mathbb{R}^{n^2\times \frac{n(n-1)}{2}}$ is given by
1 2 e 2 e 3 e n 1 e n 0 0 0 0 e 1 0 0 0 e 3 e n 1 e n 0 0 e 1 0 0 e 2 0 0 0 0 0 e 1 0 0 e 2 0 e n 0 0 0 e 1 0 0 e 2 e n 1 .
Remark 17.
$K_S(n)$ and $K_A(n)$ are standard column-orthogonal matrices, which satisfy
$$K_S(n)^{T} K_S(n) = I, \qquad K_A(n)^{T} K_A(n) = I,$$
respectively.
Notice that $X = X_1 + X_2 j \in \mathbb{Q}^{n\times n}$ is Hermitian if and only if
$$\mathrm{Re}(X_1) \in \mathrm{SR}^{n\times n} \quad \text{and} \quad \mathrm{Im}(X_1),\ \mathrm{Re}(X_2),\ \mathrm{Im}(X_2) \in \mathrm{ASR}^{n\times n}.$$
Hence, X satisfies
vec ( Φ ( B ) ) vec ( j Φ ( B ) j ) = K S ( n ) i K A ( n ) 0 0 0 0 K A ( n ) i K A ( n ) K S ( n ) i K A ( n ) 0 0 0 0 K A ( n ) i K A ( n ) vec S ( Re ( X 1 ) ) vec A ( Im ( X 1 ) ) vec A ( Re ( X 2 ) ) vec A ( Im ( X 2 ) ) .
The main results of [53] can be presented as follows.
Theorem 22
(Hermitian solutions for (3) over Q [53]). For $A_1 = A_{11} + A_{12}j \in \mathbb{Q}^{p\times n}$, $B_1 \in \mathbb{Q}^{n\times q}$, $A_2 = A_{21} + A_{22}j \in \mathbb{Q}^{p\times n}$, $B_2 \in \mathbb{Q}^{n\times l}$, $C_1 \in \mathbb{Q}^{p\times q}$, and $C_2 \in \mathbb{Q}^{p\times l}$, let $W_1 = \mathrm{diag}(K_S, K_A, K_A, K_A)$. Define
Q 1 = ( f ( B 1 ) T A 11 , f ( B 1 j ) * A 12 ) K S ( n ) i K A ( n ) 0 0 0 0 K A ( n ) i K A ( n ) K S ( n ) i K A ( n ) 0 0 0 0 K A ( n ) i K A ( n ) ,
Q 2 = ( f ( B 2 ) T A 21 , f ( B 2 j ) H A 22 ) K S ( n ) i K A ( n ) 0 0 0 0 K A ( n ) i K A ( n ) K S ( n ) i K A ( n ) 0 0 0 0 K A ( n ) i K A ( n ) ,
Q 0 = Q 1 Q 2 , P 1 = Re ( Q 0 ) , P 2 = Im ( Q 0 ) , e = vec ( Re ( Φ ( C 1 ) ) ) vec ( Re ( Φ ( C 2 ) ) ) vec ( Im ( Φ ( C 1 ) ) ) vec ( Im ( Φ ( C 2 ) ) ) ,
and
$$\begin{aligned} R &= (I - P_1^{\dagger} P_1) P_2^{T}, \\ H &= R^{\dagger} + (I - R^{\dagger} R)\, Z\, P_2 P_1^{\dagger} P_1^{\dagger T} (I - P_2^{T} R^{\dagger}), \\ Z &= \big(I + (I - R^{\dagger} R) P_2 P_1^{\dagger} P_1^{\dagger T} P_2^{T} (I - R^{\dagger} R)\big)^{-1}, \\ S_{11} &= I - P_1 P_1^{\dagger} + P_1^{\dagger T} P_2^{T} Z (I - R^{\dagger} R) P_2 P_1^{\dagger}, \\ S_{12} &= P_1^{\dagger T} P_2^{T} (I - R^{\dagger} R) Z, \\ S_{22} &= (I - R^{\dagger} R) Z. \end{aligned}$$
(1) The quaternion matrix system (3) has a Hermitian solution X Q n × n if and only if
$$\begin{pmatrix} S_{11} & S_{12} \\ S_{12}^{T} & S_{22} \end{pmatrix} e = 0.$$
Then the Hermitian solution of (3) can be expressed as
$$\mathrm{vec}(\Psi(X)) = W_1 \big(P_1^{\dagger} - H^{T} P_2 P_1^{\dagger},\ H^{T}\big) e + W_1 \big(I - P_1^{\dagger} P_1 - R R^{\dagger}\big) y,$$
with an arbitrary real vector y.
(2) If (28) holds, then (3) has a unique Hermitian solution X if and only if
$$\mathrm{rank}\begin{pmatrix} P_1 \\ P_2 \end{pmatrix} = 2n^2 - n.$$
The unique Hermitian solution can be expressed as
$$\mathrm{vec}(\Psi(X)) = W_1 \big(P_1^{\dagger} - H^{T} P_2 P_1^{\dagger},\ H^{T}\big) e.$$
(3) When (3) is inconsistent, the least-squares solutions satisfy
$$\mathrm{vec}(\Psi(X)) = W_1 \big(P_1^{\dagger} - H^{T} P_2 P_1^{\dagger},\ H^{T}\big) e + W_1 \big(I - P_1^{\dagger} P_1 - R R^{\dagger}\big) y,$$
where y is an arbitrary real vector.
Yuan et al. considered the more involved η-bi-Hermitian and η-anti-bi-Hermitian solutions [54].
The $\mathrm{vec}_B$ operator for $A = (a_{ij}) \in \mathbb{R}^{n\times n}$ is designed for bi-symmetric matrices, with the following definition:
$(1)$ When $n = 2p$ is even, for the given matrix $A \in \mathbb{R}^{2p\times 2p}$, define
$$\mathrm{vec}_B(A) = (a_{11}, a_{21}, \dots, a_{2p,1}, a_{22}, a_{32}, \dots, a_{2p-1,2}, \dots, a_{pp}, a_{p+1,p})^{T} \in \mathbb{R}^{p(p+1)}.$$
$(2)$ When $n = 2p+1$ is odd, for $A \in \mathbb{R}^{(2p+1)\times(2p+1)}$, define
$$\mathrm{vec}_B(A) = (a_{11}, a_{21}, \dots, a_{2p+1,1}, a_{22}, a_{32}, \dots, a_{2p,2}, \dots, a_{p+1,p+1})^{T} \in \mathbb{R}^{(p+1)^2}.$$
Similarly, the $\mathrm{vec}_{AB}$ operator for $A = (a_{ij}) \in \mathbb{R}^{n\times n}$ is designed for anti-bi-symmetric matrices, defined as follows:
$(1)$ If $n = 2p$, for $A \in \mathbb{R}^{2p\times 2p}$, define
$$\mathrm{vec}_{AB}(A) = (a_{21}, \dots, a_{2p-1,1}, a_{32}, a_{42}, \dots, a_{2p-2,2}, \dots, a_{p,p-1}, a_{p+1,p-1})^{T} \in \mathbb{R}^{p(p-1)}.$$
$(2)$ If $n = 2p+1$, for $A = (a_{ij}) \in \mathbb{R}^{(2p+1)\times(2p+1)}$, define
$$\mathrm{vec}_{AB}(A) = (a_{21}, \dots, a_{2p,1}, a_{32}, \dots, a_{2p-1,2}, \dots, a_{p+1,p})^{T} \in \mathbb{R}^{p^2}.$$
Suppose $X \in \mathbb{R}^{n\times n}$; then
$$X \in \mathrm{BSR}^{n\times n} \Leftrightarrow \mathrm{vec}(X) = K_B\, \mathrm{vec}_B(X),$$
where K B is given by
e 1 e 2 e p e p + 1 e 2 p 1 e n 0 0 0 0 0 0 0 e 1 0 0 e n 0 e 2 e p e p + 1 e n 1 0 0 0 0 e 1 e n 0 0 0 e 2 e n 1 0 e p e p + 1 0 0 e n e 1 0 0 0 e n 1 e 2 0 e p + 1 e p 0 e n 0 0 e 1 0 e n 1 e p + 1 e p 0 0 0 0 e n e n 1 e p + 1 e p e 2 e 1 0 0 0 0 0 0 ,
when n = 2 p , with order n 2 × p ( p + 1 ) ;
e 1 e 2 e p + 1 e 2 p e n 0 0 0 0 0 0 0 0 e 1 0 e n 0 e 2 e p + 1 e 2 p 0 0 0 0 0 0 0 0 0 0 0 0 e p e p + 1 e p + 2 0 0 0 e 1 + e n 0 0 0 e 2 + e n 1 e 2 p 0 e p + e p + 2 0 e p + 1 0 0 0 0 0 0 0 e 2 p e p + 2 e p + 1 e p 0 0 e n 0 e 1 0 e n 1 e p + 1 e 2 0 0 0 0 e n e n 1 e p + 1 e 2 e 1 0 0 0 0 0 0 0 ,
when n = 2 p + 1 , with order n 2 × ( p + 1 ) 2 .
Additionally,
$$X \in \mathrm{ABSR}^{n\times n} \Leftrightarrow \mathrm{vec}(X) = K_{AB}\, \mathrm{vec}_{AB}(X),$$
where K A B is given by
e 2 e p e p + 1 e n 1 0 0 0 0 0 0 e 1 0 0 e n e 3 e p e p + 1 e n 2 0 0 0 e 1 e n 0 0 e 2 e n 1 0 e p + 1 e p + 2 0 e n e 1 0 0 e n 1 e 2 0 e p + 2 e p + 1 e n 0 0 e 1 e n 2 e p + 1 e p e 3 0 0 e n 1 e p + 1 e p e 2 0 0 0 0 0 0 ,
when n = 2 p , with order n 2 × p ( p 1 ) ;
e 2 e p e p + 1 e p + 2 e 2 p 0 0 0 0 e 1 0 0 0 e n e 3 e p + 1 e 2 p 1 0 0 e 1 0 e n 0 0 0 0 e p + 1 0 0 e 1 e n 0 0 0 e 2 e n 1 0 e p e p + 2 0 e n 0 e 1 0 0 0 0 e p + 1 e n 0 0 0 e 1 e n 1 e p + 1 e 3 0 e n 1 e p + 2 e p + 1 e p e 2 0 0 0 0 ,
when n = 2 p + 1 , with order n 2 × p 2 .
Furthermore, $X = X_1 + X_2 j \in \eta\mathrm{BQ}^{n\times n}$ if and only if $\mathrm{Re}(X_1) \in \mathrm{BSR}^{n\times n}$ and
$$\begin{cases} \mathrm{Im}(X_1) \in \mathrm{BSR}^{n\times n},\ \mathrm{Re}(X_2), \mathrm{Im}(X_2) \in \mathrm{ABSR}^{n\times n}, & \eta = i, \\ \mathrm{Re}(X_2) \in \mathrm{BSR}^{n\times n},\ \mathrm{Im}(X_1), \mathrm{Im}(X_2) \in \mathrm{ABSR}^{n\times n}, & \eta = j, \\ \mathrm{Im}(X_2) \in \mathrm{BSR}^{n\times n},\ \mathrm{Im}(X_1), \mathrm{Re}(X_2) \in \mathrm{ABSR}^{n\times n}, & \eta = k. \end{cases}$$
$X = X_1 + X_2 j \in \eta\mathrm{ABQ}^{n\times n}$ if and only if $\mathrm{Re}(X_1) \in \mathrm{ABSR}^{n\times n}$ and
$$\begin{cases} \mathrm{Im}(X_1) \in \mathrm{ABSR}^{n\times n},\ \mathrm{Re}(X_2), \mathrm{Im}(X_2) \in \mathrm{BSR}^{n\times n}, & \eta = i, \\ \mathrm{Re}(X_2) \in \mathrm{ABSR}^{n\times n},\ \mathrm{Im}(X_1), \mathrm{Im}(X_2) \in \mathrm{BSR}^{n\times n}, & \eta = j, \\ \mathrm{Im}(X_2) \in \mathrm{ABSR}^{n\times n},\ \mathrm{Im}(X_1), \mathrm{Re}(X_2) \in \mathrm{BSR}^{n\times n}, & \eta = k. \end{cases}$$
Therefore, assume that X Q n × n , then
$(1)$ $X \in \eta\mathrm{BQ}^{n\times n} \Leftrightarrow \mathrm{vec}(\Psi(X)) = K_{\eta B}\, \mathrm{vec}_{\eta B}(\Psi(X))$, where
$$K_{iB} = \mathrm{diag}(K_B, K_{AB}, K_B, K_B), \qquad \mathrm{vec}_{iB}(\Psi(X)) = \begin{pmatrix} \mathrm{vec}_B(\mathrm{Re}(X_1)) \\ \mathrm{vec}_{AB}(\mathrm{Im}(X_1)) \\ \mathrm{vec}_B(\mathrm{Re}(X_2)) \\ \mathrm{vec}_B(\mathrm{Im}(X_2)) \end{pmatrix},$$
$$K_{jB} = \mathrm{diag}(K_B, K_B, K_{AB}, K_B), \qquad \mathrm{vec}_{jB}(\Psi(X)) = \begin{pmatrix} \mathrm{vec}_B(\mathrm{Re}(X_1)) \\ \mathrm{vec}_B(\mathrm{Im}(X_1)) \\ \mathrm{vec}_{AB}(\mathrm{Re}(X_2)) \\ \mathrm{vec}_B(\mathrm{Im}(X_2)) \end{pmatrix},$$
$$K_{kB} = \mathrm{diag}(K_B, K_B, K_B, K_{AB}), \qquad \mathrm{vec}_{kB}(\Psi(X)) = \begin{pmatrix} \mathrm{vec}_B(\mathrm{Re}(X_1)) \\ \mathrm{vec}_B(\mathrm{Im}(X_1)) \\ \mathrm{vec}_B(\mathrm{Re}(X_2)) \\ \mathrm{vec}_{AB}(\mathrm{Im}(X_2)) \end{pmatrix}.$$
$(2)$ $X \in \eta\mathrm{ABQ}^{n\times n} \Leftrightarrow \mathrm{vec}(\Psi(X)) = K_{\eta AB}\, \mathrm{vec}_{\eta AB}(\Psi(X))$, where
$$K_{iAB} = \mathrm{diag}(K_{AB}, K_B, K_{AB}, K_{AB}), \qquad \mathrm{vec}_{iAB}(\Psi(X)) = \begin{pmatrix} \mathrm{vec}_{AB}(\mathrm{Re}(X_1)) \\ \mathrm{vec}_B(\mathrm{Im}(X_1)) \\ \mathrm{vec}_{AB}(\mathrm{Re}(X_2)) \\ \mathrm{vec}_{AB}(\mathrm{Im}(X_2)) \end{pmatrix},$$
$$K_{jAB} = \mathrm{diag}(K_{AB}, K_{AB}, K_B, K_{AB}), \qquad \mathrm{vec}_{jAB}(\Psi(X)) = \begin{pmatrix} \mathrm{vec}_{AB}(\mathrm{Re}(X_1)) \\ \mathrm{vec}_{AB}(\mathrm{Im}(X_1)) \\ \mathrm{vec}_B(\mathrm{Re}(X_2)) \\ \mathrm{vec}_{AB}(\mathrm{Im}(X_2)) \end{pmatrix},$$
$$K_{kAB} = \mathrm{diag}(K_{AB}, K_{AB}, K_{AB}, K_B), \qquad \mathrm{vec}_{kAB}(\Psi(X)) = \begin{pmatrix} \mathrm{vec}_{AB}(\mathrm{Re}(X_1)) \\ \mathrm{vec}_{AB}(\mathrm{Im}(X_1)) \\ \mathrm{vec}_{AB}(\mathrm{Re}(X_2)) \\ \mathrm{vec}_B(\mathrm{Im}(X_2)) \end{pmatrix}.$$
Yuan et al. presented the structure of (27) in [54]. When $X \in \eta\mathrm{BQ}^{n\times n}$,
$$\begin{pmatrix} \mathrm{vec}(\Phi(X)) \\ \mathrm{vec}(j\Phi(X)j) \end{pmatrix} = W_2 K_{\eta B}\, \mathrm{vec}_{\eta B}(\Psi(X));$$
when $X \in \eta\mathrm{ABQ}^{n\times n}$,
$$\begin{pmatrix} \mathrm{vec}(\Phi(X)) \\ \mathrm{vec}(j\Phi(X)j) \end{pmatrix} = W_2 K_{\eta AB}\, \mathrm{vec}_{\eta AB}(\Psi(X)),$$
where
W 2 = I n 2 i I n 2 0 0 0 0 I n 2 i I n 2 I n 2 i I n 2 0 0 0 0 I n 2 i I n 2 .
Theorem 23
(η-bi-Hermitian solutions for (3) over Q [54]). Assume that $A_1 = A_{11} + A_{12}j \in \mathbb{Q}^{p\times n}$, $A_2 = A_{21} + A_{22}j \in \mathbb{Q}^{p\times n}$, $B_1, B_2 \in \mathbb{Q}^{n\times q}$, $C_1, C_2 \in \mathbb{Q}^{p\times q}$. Let
$$\Pi = \begin{pmatrix} \big(f(B_1)^{T} \otimes A_{11},\ f(B_1 j)^{H} \otimes A_{12}\big) \\ \big(f(B_2)^{T} \otimes A_{21},\ f(B_2 j)^{H} \otimes A_{22}\big) \end{pmatrix} W_2, \qquad e = \begin{pmatrix} \mathrm{vec}(\mathrm{Re}(\Phi(C_1))) \\ \mathrm{vec}(\mathrm{Re}(\Phi(C_2))) \\ \mathrm{vec}(\mathrm{Im}(\Phi(C_1))) \\ \mathrm{vec}(\mathrm{Im}(\Phi(C_2))) \end{pmatrix},$$
$$P = \Pi K_{\eta B}, \quad P_1 = \mathrm{Re}(P), \quad P_2 = \mathrm{Im}(P), \quad R = (I - P_1^{\dagger} P_1) P_2^{T},$$
$$Z = \big(I + (I - R^{\dagger} R) P_2 P_1^{\dagger} P_1^{\dagger T} P_2^{T} (I - R^{\dagger} R)\big)^{-1}, \qquad H_2 = R^{\dagger} + (I - R^{\dagger} R)\, Z\, P_2 P_1^{\dagger} P_1^{\dagger T} (I - P_2^{T} R^{\dagger}),$$
$$S_{11} = I - P_1 P_1^{\dagger} + P_1^{\dagger T} P_2^{T} Z (I - R^{\dagger} R) P_2 P_1^{\dagger}, \quad S_{12} = P_1^{\dagger T} P_2^{T} (I - R^{\dagger} R) Z, \quad S_{22} = (I - R^{\dagger} R) Z.$$
Additionally,
$$\begin{pmatrix} P_1 \\ P_2 \end{pmatrix}^{\dagger} = \big(P_1^{\dagger} - H_2^{T} P_2 P_1^{\dagger},\ H_2^{T}\big), \qquad \begin{pmatrix} P_1 \\ P_2 \end{pmatrix}^{\dagger}\begin{pmatrix} P_1 \\ P_2 \end{pmatrix} = P_1^{\dagger} P_1 + R R^{\dagger}, \qquad I - \begin{pmatrix} P_1 \\ P_2 \end{pmatrix}\begin{pmatrix} P_1 \\ P_2 \end{pmatrix}^{\dagger} = \begin{pmatrix} S_{11} & S_{12} \\ S_{12}^{T} & S_{22} \end{pmatrix}.$$
(1) The quaternion matrix system of Equation (3) has a solution $X \in \eta\mathrm{BQ}^{n\times n}$ if and only if
$$\begin{pmatrix} S_{11} & S_{12} \\ S_{12}^{T} & S_{22} \end{pmatrix} e = 0.$$
Under this circumstance, X is expressed as
$$\mathrm{vec}(\Psi(X)) = K_{\eta B}\Big[\big(P_1^{\dagger} - H_2^{T} P_2 P_1^{\dagger},\ H_2^{T}\big) e + \big(I - P_1^{\dagger} P_1 - R R^{\dagger}\big) y\Big],$$
where y is an arbitrary real vector of appropriate dimensions.
(2) If (29) holds, then the quaternion matrix system (3) has a unique solution $X \in \eta\mathrm{BQ}^{n\times n}$ if and only if
$$\mathrm{rank}\begin{pmatrix} P_1 \\ P_2 \end{pmatrix} = \begin{cases} 3p(p+1) + p(p-1), & n = 2p, \\ 3(p+1)^2 + p^2, & n = 2p+1. \end{cases}$$
In this situation,
$$\mathrm{vec}(\Psi(X)) = K_{\eta B}\big(P_1^{\dagger} - H_2^{T} P_2 P_1^{\dagger},\ H_2^{T}\big) e.$$
(3) Furthermore, the least-squares solutions of (3) are of the form
$$\mathrm{vec}(\Psi(X)) = K_{\eta B}\Big[\big(P_1^{\dagger} - H_2^{T} P_2 P_1^{\dagger},\ H_2^{T}\big) e + \big(I - P_1^{\dagger} P_1 - R R^{\dagger}\big) y\Big],$$
where y is an arbitrary real vector of appropriate dimensions. The unique minimum-norm least-squares solution is given by
$$\mathrm{vec}(\Psi(X)) = K_{\eta B}\big(P_1^{\dagger} - H_2^{T} P_2 P_1^{\dagger},\ H_2^{T}\big) e.$$
Theorem 24
(η-anti-bi-Hermitian solutions for (3) over Q [54]). Suppose that $A_1 = A_{11} + A_{12}j \in \mathbb{Q}^{p\times n}$, $A_2 = A_{21} + A_{22}j \in \mathbb{Q}^{p\times n}$, $B_1, B_2 \in \mathbb{Q}^{n\times q}$, $C_1, C_2 \in \mathbb{Q}^{p\times q}$. Denote
$$\Pi = \begin{pmatrix} \big(f(B_1)^{T} \otimes A_{11},\ f(B_1 j)^{H} \otimes A_{12}\big) \\ \big(f(B_2)^{T} \otimes A_{21},\ f(B_2 j)^{H} \otimes A_{22}\big) \end{pmatrix} W_2, \qquad e = \begin{pmatrix} \mathrm{vec}(\mathrm{Re}(\Phi(C_1))) \\ \mathrm{vec}(\mathrm{Re}(\Phi(C_2))) \\ \mathrm{vec}(\mathrm{Im}(\Phi(C_1))) \\ \mathrm{vec}(\mathrm{Im}(\Phi(C_2))) \end{pmatrix},$$
$$Q = \Pi K_{\eta AB}, \quad Q_1 = \mathrm{Re}(Q), \quad Q_2 = \mathrm{Im}(Q), \quad R_1 = (I - Q_1^{\dagger} Q_1) Q_2^{T},$$
$$Z_1 = \big(I + (I - R_1^{\dagger} R_1) Q_2 Q_1^{\dagger} Q_1^{\dagger T} Q_2^{T} (I - R_1^{\dagger} R_1)\big)^{-1}, \qquad H_1 = R_1^{\dagger} + (I - R_1^{\dagger} R_1)\, Z_1\, Q_2 Q_1^{\dagger} Q_1^{\dagger T} (I - Q_2^{T} R_1^{\dagger}),$$
$$\Lambda_{11} = I - Q_1 Q_1^{\dagger} + Q_1^{\dagger T} Q_2^{T} Z_1 (I - R_1^{\dagger} R_1) Q_2 Q_1^{\dagger}, \quad \Lambda_{12} = Q_1^{\dagger T} Q_2^{T} (I - R_1^{\dagger} R_1) Z_1, \quad \Lambda_{22} = (I - R_1^{\dagger} R_1) Z_1.$$
Furthermore,
$$\begin{pmatrix} Q_1 \\ Q_2 \end{pmatrix}^{\dagger} = \big(Q_1^{\dagger} - H_1^{T} Q_2 Q_1^{\dagger},\ H_1^{T}\big), \qquad \begin{pmatrix} Q_1 \\ Q_2 \end{pmatrix}^{\dagger}\begin{pmatrix} Q_1 \\ Q_2 \end{pmatrix} = Q_1^{\dagger} Q_1 + R_1 R_1^{\dagger}, \qquad I - \begin{pmatrix} Q_1 \\ Q_2 \end{pmatrix}\begin{pmatrix} Q_1 \\ Q_2 \end{pmatrix}^{\dagger} = \begin{pmatrix} \Lambda_{11} & \Lambda_{12} \\ \Lambda_{12}^{T} & \Lambda_{22} \end{pmatrix}.$$
(1) The quaternion matrix system of Equation (3) has a solution $X \in \eta\mathrm{ABQ}^{n\times n}$ if and only if
$$\begin{pmatrix} \Lambda_{11} & \Lambda_{12} \\ \Lambda_{12}^{T} & \Lambda_{22} \end{pmatrix} e = 0.$$
In this case,
$$\mathrm{vec}(\Psi(X)) = K_{\eta AB}\Big[\big(Q_1^{\dagger} - H_1^{T} Q_2 Q_1^{\dagger},\ H_1^{T}\big) e + \big(I - Q_1^{\dagger} Q_1 - R_1 R_1^{\dagger}\big) y\Big],$$
where y is an arbitrary real vector of appropriate dimensions.
(2) If (31) holds, then the system (3) has a unique solution X if and only if
$$\mathrm{rank}\begin{pmatrix} Q_1 \\ Q_2 \end{pmatrix} = \begin{cases} 3p(p-1) + p(p+1), & n = 2p, \\ 3p^2 + (p+1)^2, & n = 2p+1. \end{cases}$$
Under this circumstance,
$$\mathrm{vec}(\Psi(X)) = K_{\eta AB}\big(Q_1^{\dagger} - H_1^{T} Q_2 Q_1^{\dagger},\ H_1^{T}\big) e.$$
(3) The least-squares solution of (3) can be expressed as (32) with an arbitrary real vector y. The unique minimum-norm least-squares solution is of the form
$$\mathrm{vec}(\Psi(X)) = K_{\eta AB}\big(Q_1^{\dagger} - H_1^{T} Q_2 Q_1^{\dagger},\ H_1^{T}\big) e.$$
Remark 18.
The η-Hermitian and η-anti-Hermitian solutions of (3) can also be derived using similar methods.
Remark 19.
By precisely describing the (bi-)(anti-)symmetric matrices, the vector operators need only be designed once in a general form. Related methods can also handle (anti-)j-self-conjugate solutions of (3) and of other equations, such as $AXB + CYD = E$ and $(AX, XB) = (C, D)$ [55].
Theorem 25
((Anti-)centro-Hermitian solutions for (3) over Q [56]). Let $A_1 = A_{11} + A_{12}j \in \mathbb{Q}^{p\times m}$, $B_1 \in \mathbb{Q}^{n\times q}$, $A_2 = A_{21} + A_{22}j \in \mathbb{Q}^{r\times m}$, $B_2 \in \mathbb{Q}^{n\times l}$, $C_1 \in \mathbb{Q}^{p\times q}$, $C_2 \in \mathbb{Q}^{r\times l}$, and let $X_0 \in (\mathrm{S})\mathrm{CQ}^{m\times n}$ be a given matrix. Denote
Q 1 = ( f ( B 1 ) T A 11 , f ( B 1 j ) * A 12 ) I i I 0 0 0 0 I i I I i I 0 0 0 0 I i I W 3 , Q 2 = ( f ( B 2 ) T A 21 , f ( B 2 j ) * A 22 ) I i I 0 0 0 0 I i I I i I 0 0 0 0 I i I W 3 ,
e = vec ( Re ( Φ ( C 1 ) ) ) vec ( Re ( Φ ( C 2 ) ) ) vec ( Im ( Φ ( C 1 ) ) ) vec ( Im ( Φ ( C 2 ) ) ) , Q 0 = Q 1 Q 2 , P 1 = Re ( Q 0 ) , P 2 = Im ( Q 0 ) .
(1) The least-squares (anti-)centro-Hermitian solution of (3) can be expressed as
$$\mathrm{vec}(\Psi(X)) = W_3 \begin{pmatrix} P_1 \\ P_2 \end{pmatrix}^{\dagger} e + W_3 \Big[I - \begin{pmatrix} P_1 \\ P_2 \end{pmatrix}^{\dagger} \begin{pmatrix} P_1 \\ P_2 \end{pmatrix}\Big] h,$$
with an arbitrary vector h of suitable size.
(2) For given $X_0 = X_{01} + X_{02}j \in (\mathrm{S})\mathrm{CQ}^{m\times n}$, the optimization problem $\min \|X - X_0\|$, with $X \in (\mathrm{S})\mathrm{CQ}^{m\times n}$ satisfying (3), has a unique solution X such that
$$\mathrm{vec}(\Psi(X)) = W_3 \begin{pmatrix} P_1 \\ P_2 \end{pmatrix}^{\dagger} e + W_3 \Big[I - \begin{pmatrix} P_1 \\ P_2 \end{pmatrix}^{\dagger} \begin{pmatrix} P_1 \\ P_2 \end{pmatrix}\Big] \begin{pmatrix} \mathrm{vec}_r(\mathrm{Re}(X_{01})) \\ \mathrm{vec}_r(\mathrm{Im}(X_{01})) \\ \mathrm{vec}_r(\mathrm{Re}(X_{02})) \\ \mathrm{vec}_r(\mathrm{Im}(X_{02})) \end{pmatrix},$$
where $W_3 = \mathrm{diag}(K, K, K, K)$ for centro-Hermitian solutions or $W_3 = \mathrm{diag}(K, -K, -K, -K)$ for anti-centro-Hermitian solutions.
Remark 20.
If the matrix $X_0$ is not (anti-)centro-Hermitian, then instead of $X_0$, the expression
$$\tfrac{1}{2}\big(X_0 \pm V \overline{X_0} V\big)$$
can be used to derive the (anti-)centro-Hermitian solutions of $\min\|X - X_0\|$.
Later, Zhang et al. studied the (anti-)centro-Hermitian solutions of (3) through the real representation of quaternion matrices [57].
An isomorphic real representation of quaternion matrices is defined for $X = X_1 + X_2 i + X_3 j + X_4 k \in \mathbb{Q}^{m\times n}$ as
$$X^R = \begin{pmatrix} X_1 & -X_2 & -X_3 & -X_4 \\ X_2 & X_1 & -X_4 & X_3 \\ X_3 & X_4 & X_1 & -X_2 \\ X_4 & -X_3 & X_2 & X_1 \end{pmatrix} \in \mathbb{R}^{4m\times 4n}.$$
The first block column of $X^R$ is denoted by
$$X_c^R = \begin{pmatrix} X_1 \\ X_2 \\ X_3 \\ X_4 \end{pmatrix}.$$
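The homomorphism property of this representation can be verified numerically. The sketch below uses the standard left-multiplication sign pattern written above; the sign convention is our assumption (extraction stripped the signs, and [57] may order the blocks differently), so the block layout should be checked against the original paper.

```python
# A real representation of quaternion matrices X = X1 + X2 i + X3 j + X4 k.
import numpy as np

def real_rep(X1, X2, X3, X4):
    return np.block([
        [X1, -X2, -X3, -X4],
        [X2,  X1, -X4,  X3],
        [X3,  X4,  X1, -X2],
        [X4, -X3,  X2,  X1],
    ])

def qmat_mul(A, B):
    # quaternion matrix product in components (i^2 = j^2 = k^2 = ijk = -1)
    A1, A2, A3, A4 = A; B1, B2, B3, B4 = B
    return (A1@B1 - A2@B2 - A3@B3 - A4@B4,
            A1@B2 + A2@B1 + A3@B4 - A4@B3,
            A1@B3 - A2@B4 + A3@B1 + A4@B2,
            A1@B4 + A2@B3 - A3@B2 + A4@B1)

rng = np.random.default_rng(2)
A = tuple(rng.standard_normal((3, 3)) for _ in range(4))
B = tuple(rng.standard_normal((3, 3)) for _ in range(4))
# (XY)^R = X^R Y^R, i.e., the representation is multiplicative
assert np.allclose(real_rep(*qmat_mul(A, B)), real_rep(*A) @ real_rep(*B))
```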
To describe the structure of (anti-)centro-Hermitian matrices using the real representation, the operators $\mathrm{vec}^{CS}$ and $\mathrm{vec}^{ACS}$ are defined as follows.
( 1 ) When m = 2 p + 1 , n = 2 q + 1 ,
vec o o C S ( X ) = ( α 1 , α 2 , , α n ) T ,
where α i = ( x 1 i , x 2 i , , x ( p + 1 ) i ) for i = 1 , 2 , , q + 1 and α j = ( x 1 j , x 2 j , , x p j ) for j = q + 2 , , n .
( 2 ) If m = 2 p + 1 , n = 2 q ,
vec o e C S ( X ) = ( α 1 , α 2 , , α n ) T ,
where $\alpha_i = (x_{1i}, x_{2i}, \dots, x_{(p+1)i})$ for $i = 1, 2, \dots, q$ and $\alpha_j = (x_{1j}, x_{2j}, \dots, x_{pj})$ for $j = q+1, \dots, n$.
( 3 ) If m = 2 p , n = 2 q + 1 ,
vec e o C S ( X ) = ( α 1 , α 2 , , α n ) T ,
with α i = ( x 1 i , x 2 i , , x p i ) for i = 1 , 2 , , n ,
( 4 ) If m = 2 p , n = 2 q ,
vec e e C S ( X ) = ( α 1 , α 2 , , α n ) T ,
with α i = ( x 1 i , x 2 i , , x p i ) for i = 1 , 2 , , n .
( 5 ) When m = 2 p + 1 , n = 2 q + 1 ,
vec o o A C S ( X ) = ( α 1 , α 2 , , α n ) T ,
where α i = ( x 1 i , x 2 i , , x ( p + 1 ) i ) for i = 1 , 2 , , q and α j = ( x 1 j , x 2 j , , x p j ) for j = q + 1 , , n .
( 6 ) When m = 2 p + 1 , n = 2 q ,
vec o e A C S ( X ) = ( α 1 , α 2 , , α n ) T ,
where α i = ( x 1 i , x 2 i , , x ( p + 1 ) i ) for i = 1 , 2 , , q and α j = ( x 1 j , x 2 j , , x p j ) for j = q + 1 , , n .
( 7 ) If m = 2 p , n = 2 q + 1 ,
vec e o A C S ( X ) = ( α 1 , α 2 , , α n ) T ,
with α i = ( x 1 i , x 2 i , , x p i ) for i = 1 , 2 , , n .
( 8 ) If m = 2 p , n = 2 q ,
vec e e A C S ( X ) = ( α 1 , α 2 , , α n ) T ,
with α i = ( x 1 i , x 2 i , , x p i ) for i = 1 , 2 , , n .
The vec C S and vec A C S operators present equivalent conditions for centro-Hermitian and anti-centro-Hermitian matrices. Let X = X 1 + X 2 i + X 3 j + X 4 k Q m × n . Then
X Q C H m × n vec ( X 1 ) vec ( X 2 ) vec ( X 3 ) vec ( X 4 ) = G μ ν CH vec μ ν CS ( X 1 ) vec μ ν ACS ( X 2 ) vec μ ν ACS ( X 3 ) vec μ ν ACS ( X 4 ) ,
and
X Q S C H m × n vec ( X 1 ) vec ( X 2 ) vec ( X 3 ) vec ( X 4 ) = G μ ν SCH vec μ ν ACS ( X 1 ) vec μ ν CS ( X 2 ) vec μ ν CS ( X 3 ) vec μ ν CS ( X 4 ) ,
where
G μ ν CH = G μ ν CS 0 0 0 0 G μ ν ACS 0 0 0 0 G μ ν ACS 0 0 0 0 G μ ν ACS , G μ ν SCH = G μ ν ACS 0 0 0 0 G μ ν CS 0 0 0 0 G μ ν CS 0 0 0 0 G μ ν CS ,
with
$$\mu = \begin{cases} o, & m \text{ is odd}, \\ e, & m \text{ is even}, \end{cases} \qquad \nu = \begin{cases} o, & n \text{ is odd}, \\ e, & n \text{ is even}, \end{cases}$$
and
G o o C S = I p + 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 J p 0 I p + 1 0 0 0 0 0 0 0 0 0 0 0 0 J p 0 0 0 I p + 1 0 0 0 0 0 0 0 0 0 0 J p 0 0 0 0 0 I p 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 J p 0 0 0 0 0 0 0 0 0 I p 0 0 0 0 J p + 1 0 0 0 0 0 0 0 0 0 0 0 0 I p J p + 1 0 0 0 0 0 0 0 ,
G o e C S = I p + 1 0 0 0 0 0 0 0 0 0 0 J p 0 I p + 1 0 0 0 0 0 0 0 0 J p 0 0 0 I p + 1 0 0 0 0 0 0 J p 0 0 0 0 0 I p 0 0 0 0 J p + 1 0 0 0 0 0 0 0 0 I p J p + 1 0 0 0 0 0 ,
G e o C S = I p 0 0 0 0 0 0 0 0 0 0 0 0 J p 0 I p 0 0 0 0 0 0 0 0 0 0 J p 0 0 0 I p 0 0 0 0 0 0 0 0 J p 0 0 0 0 0 I p 0 0 0 0 0 0 J p 0 0 0 0 0 0 0 I p 0 0 0 0 J p 0 0 0 0 0 0 0 0 0 0 I p J p 0 0 0 0 0 0 ,
G e e C S = I p 0 0 0 0 0 0 0 0 0 0 J p 0 I p 0 0 0 0 0 0 0 0 J p 0 0 0 I p 0 0 0 0 0 0 J p 0 0 0 0 0 I p 0 0 0 0 J p 0 0 0 0 0 0 0 0 I p J p 0 0 0 0 0 ,
G o o A C S = I p + 1 0 0 0 0 0 0 0 0 0 0 0 0 J p 0 I p + 1 0 0 0 0 0 0 0 0 0 0 J p 0 0 0 I p + 1 0 0 0 0 0 0 0 0 J p 0 0 0 0 0 I p 0 0 0 0 0 0 0 0 0 0 0 0 0 J p 0 0 0 0 0 0 0 I p 0 0 0 0 J p + 1 0 0 0 0 0 0 0 0 0 0 I p J p + 1 0 0 0 0 0 0 ,
G o e A C S = I p + 1 0 0 0 0 0 0 0 0 0 0 J p 0 I p + 1 0 0 0 0 0 0 0 0 J p 0 0 0 I p + 1 0 0 0 0 0 0 J p 0 0 0 0 0 I p 0 0 0 0 J p + 1 0 0 0 0 0 0 0 0 I p J p + 1 0 0 0 0 0 ,
G e o A C S = I p 0 0 0 0 0 0 0 0 0 0 0 0 J p 0 I p 0 0 0 0 0 0 0 0 0 0 J p 0 0 0 I p 0 0 0 0 0 0 0 0 J p 0 0 0 0 0 I p 0 0 0 0 0 0 J p 0 0 0 0 0 0 0 I p 0 0 0 0 J p 0 0 0 0 0 0 0 0 0 0 I p J p 0 0 0 0 0 0 ,
G e e A C S = I p 0 0 0 0 0 0 0 0 0 0 J p 0 I p 0 0 0 0 0 0 0 0 J p 0 0 0 I p 0 0 0 0 0 0 J p 0 0 0 0 0 I p 0 0 0 0 J p 0 0 0 0 0 0 0 0 I p J p 0 0 0 0 0 .
This yields another result for the (anti-)centro-Hermitian solutions of (3).
Theorem 26
(Centro-Hermitian solutions for (3) over Q [57]). For $A_1 \in \mathbb{Q}^{p\times m}$, $B_1 \in \mathbb{Q}^{n\times q}$, $C_1 \in \mathbb{Q}^{p\times q}$, $A_2 \in \mathbb{Q}^{p\times m}$, $B_2 \in \mathbb{Q}^{n\times l}$, and $C_2 \in \mathbb{Q}^{p\times l}$, let
$$K = \begin{pmatrix} ((B_1)_c^R)^{T} \otimes A_1^R \\ ((B_2)_c^R)^{T} \otimes A_2^R \end{pmatrix}.$$
(1) Then, the quaternion matrix equation system (3) has a centro-Hermitian solution if and only if
$$\Big[I - K F M G_{\mu\nu}^{CH} \big(K F M G_{\mu\nu}^{CH}\big)^{\dagger}\Big] \begin{pmatrix} \mathrm{vec}((C_1)_c^R) \\ \mathrm{vec}((C_2)_c^R) \end{pmatrix} = 0.$$
When (3) is consistent with centro-Hermitian solutions, these solutions can be expressed as
$$\mathrm{vec}(\Psi(X)) = G_{\mu\nu}^{CH} \big(K F M G_{\mu\nu}^{CH}\big)^{\dagger} \begin{pmatrix} \mathrm{vec}((C_1)_c^R) \\ \mathrm{vec}((C_2)_c^R) \end{pmatrix} + G_{\mu\nu}^{CH} \Big[I - \big(K F M G_{\mu\nu}^{CH}\big)^{\dagger} \big(K F M G_{\mu\nu}^{CH}\big)\Big] y,$$
where y is arbitrary.
(2) Moreover, if (3) has a centro-Hermitian solution, then the centro-Hermitian solution is unique if and only if
$$\mathrm{rank}\big(K F M G_{\mu\nu}^{CH}\big) = \begin{cases} 8pq + 4p + 4q + 1, & m = 2p+1,\ n = 2q+1, \\ 8pq + 4p, & m = 2p+1,\ n = 2q, \\ 8pq + 4q, & m = 2p,\ n = 2q+1, \\ 8pq, & m = 2p,\ n = 2q. \end{cases}$$
The unique centro-Hermitian solution is given by
$$\mathrm{vec}(\Psi(X)) = G_{\mu\nu}^{CH} \big(K F M G_{\mu\nu}^{CH}\big)^{\dagger} \begin{pmatrix} \mathrm{vec}((C_1)_c^R) \\ \mathrm{vec}((C_2)_c^R) \end{pmatrix}.$$
(3) If (3) is inconsistent for centro-Hermitian solutions, the least-squares solutions are expressed as in (33) with arbitrary y, and the unique minimum-norm least-squares solution is expressed as in (34).
Theorem 27
(Anti-centro-Hermitian solutions for (3) over Q [57]). Assume that $A_1 \in \mathbb{Q}^{p\times m}$, $B_1 \in \mathbb{Q}^{n\times q}$, $C_1 \in \mathbb{Q}^{p\times q}$, $A_2 \in \mathbb{Q}^{p\times m}$, $B_2 \in \mathbb{Q}^{n\times l}$, and $C_2 \in \mathbb{Q}^{p\times l}$. Denote
$$K = \begin{pmatrix} ((B_1)_c^R)^{T} \otimes A_1^R \\ ((B_2)_c^R)^{T} \otimes A_2^R \end{pmatrix}.$$
(1) Then, the quaternion matrix equation system (3) has an anti-centro-Hermitian solution if and only if
$$\Big[I - K F M G_{\mu\nu}^{SCH} \big(K F M G_{\mu\nu}^{SCH}\big)^{\dagger}\Big] \begin{pmatrix} \mathrm{vec}((C_1)_c^R) \\ \mathrm{vec}((C_2)_c^R) \end{pmatrix} = 0.$$
When (3) is consistent with anti-centro-Hermitian solutions, these solutions can be expressed as
$$\mathrm{vec}(\Psi(X)) = G_{\mu\nu}^{SCH} \big(K F M G_{\mu\nu}^{SCH}\big)^{\dagger} \begin{pmatrix} \mathrm{vec}((C_1)_c^R) \\ \mathrm{vec}((C_2)_c^R) \end{pmatrix} + G_{\mu\nu}^{SCH} \Big[I - \big(K F M G_{\mu\nu}^{SCH}\big)^{\dagger} \big(K F M G_{\mu\nu}^{SCH}\big)\Big] y,$$
where y is arbitrary.
(2) Moreover, if (3) has an anti-centro-Hermitian solution, then the anti-centro-Hermitian solution is unique if and only if
$$\mathrm{rank}\big(K F M G_{\mu\nu}^{SCH}\big) = \begin{cases} 8pq + 4p + 4q + 3, & m = 2p+1,\ n = 2q+1, \\ 8pq + 4p, & m = 2p+1,\ n = 2q, \\ 8pq + 4q, & m = 2p,\ n = 2q+1, \\ 8pq, & m = 2p,\ n = 2q. \end{cases}$$
The unique anti-centro-Hermitian solution is given by
$$\mathrm{vec}(\Psi(X)) = G_{\mu\nu}^{SCH} \big(K F M G_{\mu\nu}^{SCH}\big)^{\dagger} \begin{pmatrix} \mathrm{vec}((C_1)_c^R) \\ \mathrm{vec}((C_2)_c^R) \end{pmatrix}.$$
(3) If (3) is inconsistent for the anti-centro-Hermitian solutions, the least-squares solutions are expressed as in (35) with arbitrary y, and the unique minimum-norm least-squares solution is expressed as in (36).
At the end of this subsection, we introduce the Hankel solution of the quaternion matrix system of Equation (3). Through a novel vector operator for Hankel matrices, Ref. [58] transformed the problem of finding Hankel solutions into solving for the general solutions. Then, the necessary and sufficient conditions for the system (3) with Hankel solutions, as well as the expression for the general solution, were derived.
A Hankel matrix
$$H_n = \begin{pmatrix} a_1 & a_2 & a_3 & \cdots & a_n \\ a_2 & a_3 & a_4 & \cdots & a_{n+1} \\ a_3 & a_4 & a_5 & \cdots & a_{n+2} \\ \vdots & \vdots & \vdots & & \vdots \\ a_n & a_{n+1} & a_{n+2} & \cdots & a_{2n-1} \end{pmatrix} \in \mathbb{Q}^{n\times n}$$
is uniquely determined by its $2n-1$ defining elements. Let
$$\mathrm{vec}_h(H_n) = (a_1, \dots, a_{2n-1})^{T} \in \mathbb{Q}^{2n-1},$$
and
$$K_h = \begin{pmatrix} e_1 & e_2 & \cdots & e_n & 0 & \cdots & 0 \\ 0 & e_1 & e_2 & \cdots & e_n & \cdots & 0 \\ \vdots & & \ddots & & & \ddots & \vdots \\ 0 & \cdots & 0 & e_1 & e_2 & \cdots & e_n \end{pmatrix} \in \mathbb{R}^{n^2\times(2n-1)}.$$
Then, for $X \in \mathbb{Q}^{n\times n}$,
$$X \in \mathrm{HAQ}_n \Leftrightarrow \mathrm{vec}(X) = K_h\, \mathrm{vec}_h(X).$$
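Numerically, $K_h$ is just a banded selection matrix. The sketch below (our own construction, over the reals for simplicity, with column-major vec) verifies $\mathrm{vec}(X) = K_h\,\mathrm{vec}_h(X)$ for a Hankel matrix.

```python
# Hankel vectorization: vec(H) = K_h vec_h(H), where H[i, j] = a_{i+j}.
import numpy as np

def build_Kh(n):
    Kh = np.zeros((n * n, 2 * n - 1))
    for j in range(n):                 # column j of H holds a_j, ..., a_{j+n-1}
        for i in range(n):
            Kh[j * n + i, i + j] = 1.0
    return Kh

n = 4
a = np.arange(1.0, 2 * n)                                       # a_1, ..., a_{2n-1}
H = np.array([[a[i + j] for j in range(n)] for i in range(n)])  # Hankel matrix
assert np.allclose(H.reshape(-1, order="F"), build_Kh(n) @ a)
```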
Theorem 28
(Hankel solutions for (3) over Q [58]). Assume that A 1 , B 1 , C 1 , A 2 , B 2 , C 2 Q n × n . Let
G ˜ = B 11 T A 11 B ¯ 12 T A 11 B ¯ 12 T A 12 B 11 T A 12 B 12 T A 12 B ¯ 11 T A 11 B ¯ 11 T A 12 B 12 T A 12 B 21 T B 21 B ¯ 22 T A 21 B ¯ 22 T A 22 B 21 T A 22 B 22 T A 21 B ¯ 21 T A 21 B ¯ 21 T A 21 B 22 T A 22 K h i K h 0 0 0 0 K h i K h K h i K h 0 0 0 0 K h i K h ,
L = vec ( C 11 ) vec ( C 12 ) vec ( C 21 ) vec ( C 22 ) , v = vec h ( Re ( X 1 ) ) vec h ( Im ( X 1 ) ) vec h ( Re ( X 2 ) ) vec h ( Im ( X 2 ) ) , G ^ = G ˜ 0 G ˜ 1 , L ˜ = L 0 L 1 ,
and
M = K h i K h 0 0 0 0 K h i K h .
(1) There is a Hankel matrix solution of (3) if and only if
$$\hat G \hat G^{\dagger} \tilde L = \tilde L.$$
When this condition is satisfied, the general Hankel matrix solution is
$$\mathrm{vec}(\Phi(X)) = M \big(\hat G^{\dagger} \tilde L + (I - \hat G^{\dagger} \hat G)\, Y\big),$$
with arbitrary $Y \in \mathbb{R}^{8n-4}$.
(2) If a Hankel matrix solution of (3) does not exist, the least-squares Hankel solution is also expressed as (37).
(3) For a given $\tilde M = M_1 + M_2 j \in \mathrm{HAQ}_n$, the solution of the optimization problem $\min \|X - \tilde M\|$, where X ranges over the Hankel solutions of (3), can be expressed as
$$\mathrm{vec}(\Phi(X)) = M \Big(\hat G^{\dagger} \tilde L + (I - \hat G^{\dagger} \hat G)(I - \tilde G^{\dagger} \tilde G)\big(V_{\tilde M} - \tilde G^{\dagger} L\big)\Big),$$
where
$$V_{\tilde M} = \begin{pmatrix} \mathrm{vec}_h(\mathrm{Re}(M_1)) \\ \mathrm{vec}_h(\mathrm{Im}(M_1)) \\ \mathrm{vec}_h(\mathrm{Re}(M_2)) \\ \mathrm{vec}_h(\mathrm{Im}(M_2)) \end{pmatrix}.$$

4.3. The Split Quaternion Algebra and the Commutative Quaternion Algebra

In 2020, Hermitian solutions of the system (3) over commutative quaternions and η-Hermitian solutions over split quaternions were studied [59,60]. The authors used representations and vector operators to transform the system (3), with special solutions over commutative and split quaternions, into general solutions over the real field.
The complex representation of commutative quaternions and the real representation of split quaternions are defined as follows.
( 1 ) For A = A 1 + A 2 j CQ m × n , where A 1 , A 2 C m × n , define
$$f(A) = \begin{pmatrix} A_1 & A_2 \\ A_2 & A_1 \end{pmatrix} \in \mathbb{C}^{2m\times 2n}.$$
( 2 ) Assume that A = A 1 + A 2 i + A 3 j + A 4 k SQ m × n , where A 1 , A 2 , A 3 , A 4 R m × n , denote
A R = A 1 A 2 A 3 A 4 A 2 A 1 A 4 A 3 A 3 A 4 A 1 A 2 A 4 A 3 A 2 A 1 R 4 m × 4 n .
Similarly, define $\Phi(A) = [A_1\ \ A_2] \in \mathbb{C}^{m\times 2n}$ and $\Psi(A) = [\mathrm{Re}(A_1)\ \ \mathrm{Im}(A_1)\ \ \mathrm{Re}(A_2)\ \ \mathrm{Im}(A_2)] \in \mathbb{R}^{m\times 4n}$ for $A \in \mathrm{CQ}^{m\times n}$; for $A = A_1 + A_2 i + A_3 j + A_4 k \in \mathrm{SQ}^{m\times n}$, take $\Psi(A) = [A_1\ \ A_2\ \ A_3\ \ A_4] \in \mathbb{R}^{m\times 4n}$. Then, $\Phi(AB) = \Phi(A) f(B)$ over the commutative quaternions and $\Psi(AB) = \Psi(A) B^R$ over the split quaternions.
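Note that $f$ over the commutative quaternions needs no sign changes, because $j^2 = 1$ and $j$ commutes with $i$ there. A short numerical check of $f(AB) = f(A)f(B)$ under that (Segre) convention is sketched below; the multiplication rule written in the comment is the assumption being tested.

```python
# Commutative quaternion matrices A = A1 + A2 j with j^2 = 1, i j = j i:
# (A1 + A2 j)(B1 + B2 j) = (A1 B1 + A2 B2) + (A1 B2 + A2 B1) j.
import numpy as np

def cq_mul(A, B):
    A1, A2 = A; B1, B2 = B
    return (A1 @ B1 + A2 @ B2, A1 @ B2 + A2 @ B1)

def f_cq(A):
    A1, A2 = A
    return np.block([[A1, A2], [A2, A1]])

rng = np.random.default_rng(3)
c = lambda m, n: rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
A = (c(3, 4), c(3, 4)); B = (c(4, 2), c(4, 2))
assert np.allclose(f_cq(cq_mul(A, B)), f_cq(A) @ f_cq(B))   # f(AB) = f(A) f(B)
```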
On the one hand, the structure of vector operators is given by the following representations.
$(1)$ For $A = A_1 + A_2 j \in \mathrm{CQ}^{p\times m}$, $B = B_1 + B_2 j \in \mathrm{CQ}^{m\times n}$, and $C = C_1 + C_2 j \in \mathrm{CQ}^{n\times q}$,
$$\mathrm{vec}(\Phi(ABC)) = f\big[(C_1^{T} \otimes A_1 + C_2^{T} \otimes A_2) + (C_2^{T} \otimes A_1 + C_1^{T} \otimes A_2)\, j\big] \begin{pmatrix} \mathrm{vec}(B_1) \\ \mathrm{vec}(B_2) \end{pmatrix}.$$
$(2)$ Assume that $A = A_1 + A_2 i + A_3 j + A_4 k \in \mathrm{SQ}^{p\times m}$, $B = B_1 + B_2 i + B_3 j + B_4 k \in \mathrm{SQ}^{m\times n}$, and $C = C_1 + C_2 i + C_3 j + C_4 k \in \mathrm{SQ}^{n\times q}$. Then,
$$\mathrm{vec}(\Psi(ABC)) = \big((C^R)^{T} \otimes A_1,\ ((iC)^R)^{T} \otimes A_2,\ ((jC)^R)^{T} \otimes A_3,\ ((kC)^R)^{T} \otimes A_4\big)\, W_4\, \mathrm{vec}(\Psi(B)),$$
where
W 4 = I 0 0 0 I 0 0 0 I 0 0 0 I 0 0 0 0 I 0 0 0 I 0 0 0 I 0 0 0 I 0 0 0 0 I 0 0 0 I 0 0 0 I 0 0 0 I 0 T .
On the other hand, based on the definitions of vec S , vec A , K S and K A , we describe the Hermitian matrices over commutative quaternions and the η -Hermitian matrices over split quaternions.
$(1)$ For a Hermitian matrix $X = X_1 + X_2 j \in \mathrm{CQ}^{n\times n}$, we have
$$\begin{pmatrix} \mathrm{vec}(X_1) \\ \mathrm{vec}(X_2) \end{pmatrix} = \begin{pmatrix} K_S & i K_A & 0 & 0 \\ 0 & 0 & K_A & i K_A \end{pmatrix} \begin{pmatrix} \mathrm{vec}_S(\mathrm{Re}(X_1)) \\ \mathrm{vec}_A(\mathrm{Im}(X_1)) \\ \mathrm{vec}_A(\mathrm{Re}(X_2)) \\ \mathrm{vec}_A(\mathrm{Im}(X_2)) \end{pmatrix}.$$
$(2)$ Suppose that $X = X_1 + X_2 i + X_3 j + X_4 k \in \mathrm{SQ}^{n\times n}$ is η-Hermitian; then
$$\mathrm{vec}(\Psi(X)) = L_\eta\, \mathrm{vec}_\eta(\Psi(X)),$$
where
$$L_i = \mathrm{diag}(K_S, K_A, K_S, K_S), \qquad \mathrm{vec}_i(\Psi(X)) = \begin{pmatrix} \mathrm{vec}_S(X_1) \\ \mathrm{vec}_A(X_2) \\ \mathrm{vec}_S(X_3) \\ \mathrm{vec}_S(X_4) \end{pmatrix},$$
$$L_j = \mathrm{diag}(K_S, K_S, K_A, K_S), \qquad \mathrm{vec}_j(\Psi(X)) = \begin{pmatrix} \mathrm{vec}_S(X_1) \\ \mathrm{vec}_S(X_2) \\ \mathrm{vec}_A(X_3) \\ \mathrm{vec}_S(X_4) \end{pmatrix},$$
$$L_k = \mathrm{diag}(K_S, K_S, K_S, K_A), \qquad \mathrm{vec}_k(\Psi(X)) = \begin{pmatrix} \mathrm{vec}_S(X_1) \\ \mathrm{vec}_S(X_2) \\ \mathrm{vec}_S(X_3) \\ \mathrm{vec}_A(X_4) \end{pmatrix}.$$
The consistency conditions and expressions for the Hermitian solutions of (3) over the commutative quaternions are derived as follows.
Theorem 29
(Hermitian solutions for (3) over CQ [59]). Let $A_1 = A_{11} + A_{12}j \in \mathrm{CQ}^{p\times n}$, $B_1 = B_{11} + B_{12}j \in \mathrm{CQ}^{n\times q}$, $A_2 = A_{21} + A_{22}j \in \mathrm{CQ}^{p\times n}$, $B_2 = B_{21} + B_{22}j \in \mathrm{CQ}^{n\times q}$, $C_1 = C_{11} + C_{12}j \in \mathrm{CQ}^{p\times q}$, and $C_2 = C_{21} + C_{22}j \in \mathrm{CQ}^{p\times q}$ be given. Denote $M = \mathrm{diag}(K_S, K_A, K_A, K_A)$,
$$P = \begin{pmatrix} f\big[(B_{11}^{T} \otimes A_{11} + B_{12}^{T} \otimes A_{12}) + (B_{12}^{T} \otimes A_{11} + B_{11}^{T} \otimes A_{12})\, j\big] \\ f\big[(B_{21}^{T} \otimes A_{21} + B_{22}^{T} \otimes A_{22}) + (B_{22}^{T} \otimes A_{21} + B_{21}^{T} \otimes A_{22})\, j\big] \end{pmatrix} \begin{pmatrix} K_S & i K_A & 0 & 0 \\ 0 & 0 & K_A & i K_A \end{pmatrix},$$
$$P_1 = \mathrm{Re}(P), \quad P_2 = \mathrm{Im}(P), \quad e = \big(\mathrm{vec}(\mathrm{Re}(C_{11}))^{T}, \mathrm{vec}(\mathrm{Re}(C_{12}))^{T}, \mathrm{vec}(\mathrm{Re}(C_{21}))^{T}, \mathrm{vec}(\mathrm{Re}(C_{22}))^{T}, \mathrm{vec}(\mathrm{Im}(C_{11}))^{T}, \mathrm{vec}(\mathrm{Im}(C_{12}))^{T}, \mathrm{vec}(\mathrm{Im}(C_{21}))^{T}, \mathrm{vec}(\mathrm{Im}(C_{22}))^{T}\big)^{T},$$
and
$$R = (I - P_1^{\dagger} P_1) P_2^{T}, \quad Z = \big(I + (I - R^{\dagger} R) P_2 P_1^{\dagger} P_1^{\dagger T} P_2^{T} (I - R^{\dagger} R)\big)^{-1}, \quad H = R^{\dagger} + (I - R^{\dagger} R)\, Z\, P_2 P_1^{\dagger} P_1^{\dagger T} (I - P_2^{T} R^{\dagger}),$$
$$S_{11} = I - P_1 P_1^{\dagger} + P_1^{\dagger T} P_2^{T} Z (I - R^{\dagger} R) P_2 P_1^{\dagger}, \quad S_{12} = P_1^{\dagger T} P_2^{T} (I - R^{\dagger} R) Z, \quad S_{22} = (I - R^{\dagger} R) Z.$$
Hence,
$$\begin{pmatrix} P_1 \\ P_2 \end{pmatrix}^{\dagger} = \big(P_1^{\dagger} - H^{T} P_2 P_1^{\dagger},\ H^{T}\big), \qquad \begin{pmatrix} P_1 \\ P_2 \end{pmatrix}^{\dagger}\begin{pmatrix} P_1 \\ P_2 \end{pmatrix} = P_1^{\dagger} P_1 + R R^{\dagger},$$
$$I - \begin{pmatrix} P_1 \\ P_2 \end{pmatrix}\begin{pmatrix} P_1 \\ P_2 \end{pmatrix}^{\dagger} = \begin{pmatrix} S_{11} & S_{12} \\ S_{12}^{T} & S_{22} \end{pmatrix}.$$
(1) Then the system (3) has Hermitian solutions if and only if
$$\begin{pmatrix} P_1 \\ P_2 \end{pmatrix} \begin{pmatrix} P_1 \\ P_2 \end{pmatrix}^{\dagger} e = e,$$
or equivalently,
$$\begin{pmatrix} S_{11} & S_{12} \\ S_{12}^{T} & S_{22} \end{pmatrix} e = 0.$$
(2) If the consistency condition is satisfied, then
$$\mathrm{vec}(\Psi(X)) = M \begin{pmatrix} P_1 \\ P_2 \end{pmatrix}^{\dagger} e + M \Big[I - \begin{pmatrix} P_1 \\ P_2 \end{pmatrix}^{\dagger} \begin{pmatrix} P_1 \\ P_2 \end{pmatrix}\Big] y,$$
or equivalently,
$$\mathrm{vec}(\Psi(X)) = M \big(P_1^{\dagger} - H^{T} P_2 P_1^{\dagger},\ H^{T}\big) e + M \big(I - P_1^{\dagger} P_1 - R R^{\dagger}\big) y,$$
where y is an arbitrary vector of appropriate order.
(3) When the commutative quaternion system (3) has solutions, the solution is unique if and only if
$$\mathrm{rank}\begin{pmatrix} P_1 \\ P_2 \end{pmatrix} = 2n^2 - n.$$
Under this circumstance,
$$\mathrm{vec}(\Psi(X)) = M \begin{pmatrix} P_1 \\ P_2 \end{pmatrix}^{\dagger} e,$$
or equivalently,
$$\mathrm{vec}(\Psi(X)) = M \big(P_1^{\dagger} - H^{T} P_2 P_1^{\dagger},\ H^{T}\big) e.$$
Next, we present the corresponding result on the η-Hermitian solutions of (3) over the split quaternions.
Theorem 30
(η-Hermitian solutions for (3) over SQ [60]). Assume that $A_1 = A_{11} + A_{12}i + A_{13}j + A_{14}k \in \mathrm{SQ}^{p\times n}$, $B_1 \in \mathrm{SQ}^{n\times q}$, $C_1 \in \mathrm{SQ}^{p\times q}$, $A_2 = A_{21} + A_{22}i + A_{23}j + A_{24}k \in \mathrm{SQ}^{p\times n}$, $B_2 \in \mathrm{SQ}^{n\times q}$, and $C_2 \in \mathrm{SQ}^{p\times q}$. Let
$$P_1 = \big((B_1^R)^{T} \otimes A_{11},\ ((iB_1)^R)^{T} \otimes A_{12},\ ((jB_1)^R)^{T} \otimes A_{13},\ ((kB_1)^R)^{T} \otimes A_{14}\big)\, W_4 L_\eta,$$
$$P_2 = \big((B_2^R)^{T} \otimes A_{21},\ ((iB_2)^R)^{T} \otimes A_{22},\ ((jB_2)^R)^{T} \otimes A_{23},\ ((kB_2)^R)^{T} \otimes A_{24}\big)\, W_4 L_\eta,$$
$$P = \begin{pmatrix} P_1 \\ P_2 \end{pmatrix}, \qquad e = \begin{pmatrix} \mathrm{vec}(\Psi(C_1)) \\ \mathrm{vec}(\Psi(C_2)) \end{pmatrix},$$
and
$$R = (I - P_1^{\dagger} P_1) P_2^{T}, \quad Z = \big(I + (I - R^{\dagger} R) P_2 P_1^{\dagger} P_1^{\dagger T} P_2^{T} (I - R^{\dagger} R)\big)^{-1}, \quad H = R^{\dagger} + (I - R^{\dagger} R)\, Z\, P_2 P_1^{\dagger} P_1^{\dagger T} (I - P_2^{T} R^{\dagger}),$$
$$S_{11} = I - P_1 P_1^{\dagger} + P_1^{\dagger T} P_2^{T} Z (I - R^{\dagger} R) P_2 P_1^{\dagger}, \quad S_{12} = P_1^{\dagger T} P_2^{T} (I - R^{\dagger} R) Z, \quad S_{22} = (I - R^{\dagger} R) Z,$$
where $W_4$ is defined in (38).
(1) The split quaternion system (3) is consistent for η-Hermitian solutions if and only if
$$P P^{\dagger} e = e,$$
or equivalently,
$$\begin{pmatrix} S_{11} & S_{12} \\ S_{12}^{T} & S_{22} \end{pmatrix} e = 0.$$
(2) If (3) is consistent, then
$$\mathrm{vec}(\Psi(X)) = L_\eta \big[P^{\dagger} e + (I - P^{\dagger} P)\, y\big],$$
or equivalently,
$$\mathrm{vec}(\Psi(X)) = L_\eta \big[\big(P_1^{\dagger} - H^{T} P_2 P_1^{\dagger},\ H^{T}\big) e + \big(I - P_1^{\dagger} P_1 - R R^{\dagger}\big) y\big],$$
with arbitrary y.
(3) When (3) is consistent, the η-Hermitian solution of (3) is unique if and only if
$$\mathrm{rank}(P) = 2n^2 + n.$$
In this case,
$$\mathrm{vec}(\Psi(X)) = L_\eta P^{\dagger} e,$$
or equivalently,
$$\mathrm{vec}(\Psi(X)) = L_\eta \big(P_1^{\dagger} - H^{T} P_2 P_1^{\dagger},\ H^{T}\big) e.$$
Remark 21.
Li et al. also present a method for solving the η-Hermitian solutions of the split quaternion matrix system (3) using the complex representation [60], which will not be discussed in detail here.
In this section, we outline the method for solving the system (3) using vector operators. For the general solution over the complex field, the vector operator transforms the two-sided system of equations into a one-sided equation, which can then be solved using established results from the generalized inverse. For special solutions over the complex field, different vector operators can be designed based on the specific properties of the solutions, thus converting the task of solving for special solutions into finding general solutions. For quaternions, commutative quaternions, and split quaternions, we can utilize their real and complex representations to solve the problem within the real field and obtain the final results.
In fact, the Kronecker product inflates the dimension of the computed system, and the calculation of the Moore–Penrose inverse is costly for large matrices; unfortunately, solving the system (3) via the vec-operator makes these two tools unavoidable. The methods for solving the system (3) over the complex field are well-established, while the extensions to the quaternion-type algebras are more heuristic in nature. Although solving large-scale equations with this approach is expensive, it avoids issues such as noncommutative multiplication or the presence of zero divisors in algebraic structures with special multiplication properties, such as the quaternions, split quaternions, and commutative quaternions.
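For reference, the baseline vec approach over $\mathbb{C}$ reads as follows. This is a sketch of ours: the stacked Kronecker matrix has size $(pq + rl)\times mn$, which makes the dimensionality blow-up and the pinv cost just mentioned concrete.

```python
# Baseline vec method over C: stack the two Kronecker systems and take
# the minimum-norm (least-squares) solution via the pseudoinverse.
import numpy as np

def solve_system3_vec(A1, B1, C1, A2, B2, C2):
    m, n = A1.shape[1], B1.shape[0]
    Mstack = np.vstack([np.kron(B1.T, A1),     # vec(A1 X B1) = (B1^T (x) A1) vec(X)
                        np.kron(B2.T, A2)])
    rhs = np.concatenate([C1.reshape(-1, order="F"),
                          C2.reshape(-1, order="F")])
    x = np.linalg.pinv(Mstack) @ rhs           # minimum-norm LS solution
    return x.reshape((m, n), order="F")
```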

5. The GSVD and CCD Methods

This section introduces matrix decomposition techniques for solving the system of Equation (3). Matrix decomposition is a powerful tool for addressing matrix-related problems and is commonly used to solve systems of equations without relying on generalized inverses. Several forms of matrix decomposition exist, each associated with different algorithms. However, the two most widely used decompositions for solving the linear system (3) are the generalized singular value decomposition (GSVD) and the canonical correlation decomposition (CCD). The definitions of these two decompositions are given below.
Theorem 31
(Generalized singular value decomposition [61,62,63]). Suppose that $A_1 \in \mathbb{R}^{m\times p}$, $A_2 \in \mathbb{R}^{m\times l}$, $B_1 \in \mathbb{R}^{n\times p}$, $B_2 \in \mathbb{R}^{n\times l}$. Then, the GSVDs of the matrix pairs
$$[A_1\ \ A_2] \qquad \text{and} \qquad \begin{pmatrix} B_1^{T} \\ B_2^{T} \end{pmatrix}$$
can be written as
$$A_1 = M \Sigma_{A_1} U^{T}, \quad A_2 = M \Sigma_{A_2} V^{T} \qquad \text{and} \qquad B_1^{T} = P \Sigma_{B_1} N, \quad B_2^{T} = Q \Sigma_{B_2} N,$$
where M R m × m , N R n × n are nonsingular matrices and U R p × p , V R l × l , P R p × p , and  Q R l × l are orthogonal matrices. Additionally, we have the following block diagonal matrices:
Σ A 1 = I r 0 0 0 S A 1 0 0 0 0 0 0 0 , Σ A 2 = 0 0 0 0 S A 2 0 0 0 I k r s 0 0 0 ,
Σ B 1 = I r 0 0 0 0 S B 1 0 0 0 0 0 0 , Σ B 2 = 0 0 0 0 0 S B 2 0 0 0 0 I k r s 0 ,
where
$$S_{A_1} = \mathrm{diag}(\alpha_1, \dots, \alpha_s), \quad 1 > \alpha_1 \ge \alpha_2 \ge \cdots \ge \alpha_s > 0, \qquad S_{A_2} = \mathrm{diag}(\beta_1, \dots, \beta_s), \quad 0 < \beta_1 \le \beta_2 \le \cdots \le \beta_s < 1,$$
$$S_{B_1} = \mathrm{diag}(\alpha_1, \dots, \alpha_s), \quad 1 > \alpha_1 \ge \alpha_2 \ge \cdots \ge \alpha_s > 0, \qquad S_{B_2} = \mathrm{diag}(\beta_1, \dots, \beta_s), \quad 0 < \beta_1 \le \beta_2 \le \cdots \le \beta_s < 1,$$
where $S_{A_1}^2 + S_{A_2}^2 = I_s$ with $k = \mathrm{rank}[A_1, A_2]$, $r = k - \mathrm{rank}(A_2)$, $s = \mathrm{rank}(A_1) + \mathrm{rank}(A_2) - k$ for the first pair, and $S_{B_1}^2 + S_{B_2}^2 = I_s$ with $k = \mathrm{rank}[B_1, B_2]$, $r = k - \mathrm{rank}(B_2^{T})$, $s = \mathrm{rank}(B_1) + \mathrm{rank}(B_2) - k$ for the second pair.
Theorem 32
(Canonical correlation decomposition [64]). Assume that $A_1 \in \mathbb{R}^{m\times p}$, $A_2 \in \mathbb{R}^{m\times l}$, $B_1 \in \mathbb{R}^{n\times p}$, $B_2 \in \mathbb{R}^{n\times l}$, with $\mathrm{rank}(A_1) \ge \mathrm{rank}(A_2)$ and $\mathrm{rank}(B_1) \ge \mathrm{rank}(B_2)$. The CCDs of the matrix pairs $(A_1, A_2)$ and $(B_1, B_2)$ can be written as
$$A_1 = P_1 (\bar\Sigma_{A_1}, 0)\, E_{A_1}^{-1}, \quad A_2 = P_1 (\bar\Sigma_{A_2}, 0)\, E_{A_2}^{-1} \qquad \text{and} \qquad B_1 = Q_1 (\bar\Sigma_{B_1}, 0)\, E_{B_1}^{-1}, \quad B_2 = Q_1 (\bar\Sigma_{B_2}, 0)\, E_{B_2}^{-1},$$
where P 1 R m × m and Q 1 R n × n are orthogonal matrices and E A 1 R p × p , E A 2 R l × l , E B 1 R p × p and E B 2 R l × l are nonsingular matrices. The block diagonal matrices are given by:
Σ ¯ A 1 = I r 1 0 0 0 C A 1 0 0 0 0 0 0 0 0 D A 1 0 0 0 I f 1 , Σ ¯ A 2 = I r 1 0 0 0 I s 1 0 0 0 I h 1 r 1 s 1 0 0 0 0 0 0 0 0 0 ,
Σ ¯ B 1 = I r 2 0 0 0 C B 1 0 0 0 0 0 0 0 0 D B 1 0 0 0 I f 2 , Σ ¯ B 2 = I r 2 0 0 0 I s 2 0 0 0 I h 2 r 2 s 2 0 0 0 0 0 0 0 0 0 ,
where
$$C_{A_1} = \mathrm{diag}(\lambda_1, \lambda_2, \dots, \lambda_{s_1}), \quad 1 > \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_{s_1} > 0, \qquad D_{A_1} = \mathrm{diag}(\mu_1, \mu_2, \dots, \mu_{s_1}), \quad 0 < \mu_1 \le \mu_2 \le \cdots \le \mu_{s_1} < 1,$$
$$C_{B_1} = \mathrm{diag}(\lambda_1, \lambda_2, \dots, \lambda_{s_2}), \quad 1 > \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_{s_2} > 0, \qquad D_{B_1} = \mathrm{diag}(\mu_1, \mu_2, \dots, \mu_{s_2}), \quad 0 < \mu_1 \le \mu_2 \le \cdots \le \mu_{s_2} < 1,$$
and the following conditions hold:
$$\lambda_i^2 + \mu_i^2 = 1, \quad i = 1, 2, \dots, s_1, \qquad \lambda_j^2 + \mu_j^2 = 1, \quad j = 1, 2, \dots, s_2,$$
where $s_1 = \mathrm{rank}[A_1\ A_2] + \mathrm{rank}(A_2 A_1^{T}) - \mathrm{rank}(A_1) - \mathrm{rank}(A_2)$, $h_1 = \mathrm{rank}(A_2)$, $r_1 = \mathrm{rank}(A_1) + \mathrm{rank}(A_2) - \mathrm{rank}[A_1\ A_2]$, $s_2 = \mathrm{rank}[B_1\ B_2] + \mathrm{rank}(B_2^{T} B_1) - \mathrm{rank}(B_1) - \mathrm{rank}(B_2)$, $h_2 = \mathrm{rank}(B_2)$, $r_2 = \mathrm{rank}(B_1) + \mathrm{rank}(B_2) - \mathrm{rank}[B_1\ B_2]$.
Theorem 33
(General solutions for (3) over R [61]). For given $A_1 \in \mathbb{R}^{p\times m}$, $A_2 \in \mathbb{R}^{l\times m}$, $B_1 \in \mathbb{R}^{n\times p}$, $B_2 \in \mathbb{R}^{n\times l}$, $C_1 \in \mathbb{R}^{p\times p}$, and $C_2 \in \mathbb{R}^{l\times l}$, the GSVDs of the matrix pairs
$$[A_1^{T}\ \ A_2^{T}] \qquad \text{and} \qquad \begin{pmatrix} B_1^{T} \\ B_2^{T} \end{pmatrix}$$
can be written as
$$A_1^{T} = M \Sigma_{A_1} U^{T}, \quad A_2^{T} = M \Sigma_{A_2} V^{T} \qquad \text{and} \qquad B_1^{T} = P \Sigma_{B_1} N, \quad B_2^{T} = Q \Sigma_{B_2} N,$$
where $M, N$ are nonsingular matrices and $U, V, P, Q$ are orthogonal matrices. The block diagonal matrices $\Sigma_{A_1}, \Sigma_{A_2}, \Sigma_{B_1}, \Sigma_{B_2}$ satisfy (39) and (40). Let $\hat X = M^{-1} X N^{-1}$, $\hat{C_1} = U^{T} C_1 P$, $\hat{C_2} = V^{T} C_2 Q$. Then, the system (3) is equivalent to
I r 0 0 0 0 S A 1 0 0 0 0 0 0 X ^ I r 0 0 0 S B 1 0 0 0 0 0 0 0 = C 1 ^ , 0 0 0 0 0 S A 2 0 0 0 0 I k r s 0 X ^ 0 0 0 0 S B 2 0 0 0 I k r s 0 0 0 = C 2 ^ .
Hence, partitioning $\hat{C_1}$ and $\hat{C_2}$ into appropriately sized block matrices, the system (3) is solvable if and only if the submatrices $(\hat{C}_1)_{31}, (\hat{C}_1)_{32}, (\hat{C}_1)_{33}, (\hat{C}_1)_{23}, (\hat{C}_1)_{13}, (\hat{C}_2)_{11}, (\hat{C}_2)_{12}, (\hat{C}_2)_{13}, (\hat{C}_2)_{31}, (\hat{C}_2)_{21}$ are zero matrices and the condition $S_{A_1}^{-1} (\hat{C}_1)_{22} S_{B_1}^{-1} = S_{A_2}^{-1} (\hat{C}_2)_{22} S_{B_2}^{-1}$ holds.
In this case, the general solution of (3) can be expressed as
X = M X ^ N ,
where
X ^ = C 1 ^ 11 C 1 ^ 12 S B 1 1 Z 1 Z 2 S A 1 1 C 1 ^ 21 S A 1 1 C 1 ^ 22 S B 1 1 S A 2 1 C 2 ^ 23 Z 3 Z 4 C 2 ^ 32 S B 2 1 C 2 ^ 33 Z 5 Z 6 Z 7 Z 8 Z 9 ,
with arbitrary Z 1 to Z 9 .
Theorem 34
(General solutions for (3) over R [65]). For given $A_1 \in \mathbb{R}^{p\times m}$, $A_2 \in \mathbb{R}^{l\times m}$, $B_1 \in \mathbb{R}^{n\times p}$, $B_2 \in \mathbb{R}^{n\times l}$, $C_1 \in \mathbb{R}^{p\times p}$, $C_2 \in \mathbb{R}^{l\times l}$, and assuming $\mathrm{rank}(A_1) \ge \mathrm{rank}(A_2)$ and $\mathrm{rank}(B_1) \ge \mathrm{rank}(B_2)$, suppose the CCDs of the matrix pairs $(A_1^{T}, A_2^{T})$ and $(B_1, B_2)$ are
$$A_1^{T} = P_1 (\bar\Sigma_{A_1}, 0)\, E_{A_1}^{-1}, \quad A_2^{T} = P_1 (\bar\Sigma_{A_2}, 0)\, E_{A_2}^{-1} \qquad \text{and} \qquad B_1 = Q_1 (\bar\Sigma_{B_1}, 0)\, E_{B_1}^{-1}, \quad B_2 = Q_1 (\bar\Sigma_{B_2}, 0)\, E_{B_2}^{-1},$$
where $P_1$ and $Q_1$ are orthogonal matrices and $E_{A_1}, E_{A_2}, E_{B_1}, E_{B_2}$ are nonsingular matrices. Define $\hat X = P_1^{T} X Q_1$, $\hat{C_1} = E_{A_1}^{T} C_1 E_{B_1}$, $\hat{C_2} = E_{A_2}^{T} C_2 E_{B_2}$. Then, the system (3) is equivalent to
I r 1 0 0 0 0 0 0 C A 1 0 0 D A 1 0 0 0 0 0 0 I f 1 0 0 0 0 0 0 X ^ I r 2 0 0 0 0 C B 1 0 0 0 0 0 0 0 0 0 0 0 D B 1 0 0 0 0 I f 2 0 = C 1 ^ , I r 1 0 0 0 0 0 0 I s 1 0 0 0 0 0 0 I h 1 r 1 s 1 0 0 0 0 0 0 0 0 0 X ^ I r 2 0 0 0 0 I s 2 0 0 0 0 I h 2 r 2 s 2 0 0 0 0 0 0 0 0 0 0 0 0 0 = C 2 ^ ,
Hence, partitioning $\hat{C_1}$ and $\hat{C_2}$ into suitably sized $4\times 4$ block matrices, the system (3) is solvable if and only if $(\hat{C}_1)_{41}, (\hat{C}_1)_{42}, (\hat{C}_1)_{43}, (\hat{C}_1)_{44}, (\hat{C}_1)_{14}, (\hat{C}_1)_{24}, (\hat{C}_1)_{34}, (\hat{C}_2)_{41}, (\hat{C}_2)_{42}, (\hat{C}_2)_{43}, (\hat{C}_2)_{44}, (\hat{C}_2)_{14}, (\hat{C}_2)_{24}, (\hat{C}_2)_{34}$ are zero matrices and $(\hat{C}_1)_{11} = (\hat{C}_2)_{11}$.
In this case, the general solution of (3) can be expressed as
X = P 1 X ^ Q 1 T ,
where
X ^ = C 2 ^ 11 C 2 ^ 12 C 2 ^ 13 Z 1 ( C 1 ^ 12 C 2 ^ 12 C B 1 ) D B 1 1 C 1 ^ 13 C 2 ^ 21 C 2 ^ 22 C 2 ^ 23 Z 2 Y 1 C A 1 C 1 ^ 23 C 2 ^ 31 C 2 ^ 32 C 2 ^ 33 Z 3 Z 4 Z 5 Z 6 Z 7 Z 8 Z 9 Z 10 Z 11 D A 1 1 ( C 1 ^ 21 C A 1 C 2 ^ 21 ) Y 2 Z 12 Z 13 Y 3 D A 1 C 1 ^ 23 C 1 ^ 13 C 1 ^ 32 C B 1 Z 1 4 Z 1 5 C 1 ^ 32 D B 1 C 1 ^ 33 ,
with
Y 1 = ( D A 1 2 + C A 1 2 D B 1 2 ) 1 ( C A 1 C 1 ^ 22 D B 1 C A 1 2 C 2 ^ 22 C B 1 D B 1 ) , Y 2 = ( D A 1 2 + C A 1 2 D B 1 2 ) 1 ( D A 1 C 1 ^ 22 C B 1 C A 1 D A 1 C 2 ^ 22 C B 1 2 ) , Y 3 = D A 1 1 C 1 ^ 22 D B 1 1 D A 1 1 C A 1 C 2 ^ 22 C B 1 D B 1 1 Y 2 C B 1 D B 1 1 D A 1 1 C A 1 Y 1 ,
and arbitrary Z 1 to Z 15 .
Remark 22.
Matrix decomposition is another powerful tool for solving linear systems. With the help of the GSVD and CCD, not only the general solution of the system (3) but also expressions for its least-squares solution and minimal-norm least-squares solution can be derived. Additionally, special and related cases of the system (3), such as the general and least-squares solutions of the matrix equations $(A^{T}XA, B^{T}XB) = (C_1, C_2)$ and $(A_1XB_1, A_2XB_2, A_1XB_2, A_2XB_1) = (C_1, C_2, C_3, C_4)$, can be addressed. Relevant results can be found in [66,67,68,69,70,71].
The matrix decomposition results mentioned above are valid not only over the real number field but also over general fields. For more general division rings, Wang introduced a dual matrix decomposition and, based on this, provided a solution to the system (3) over any division ring [72]. Later, in 2011, Wang extended these findings by presenting a simultaneous decomposition of three matrices over any field, which was then applied to solve the system (13). These results can be naturally extended to solving the matrix systems (3) and (13) over the quaternion field [73]. The specific results can be found in references [72,73].

6. The Cramer’s Rule Methods

Solving matrix equations over general fields using Cramer’s rule is an effective and classical approach. This section presents the main results concerning the application of Cramer’s rule to the solution of the system (3) over the quaternion algebra. These methods have significant theoretical value, although they may incur considerable computational costs when applied to large-scale linear systems.
In this section, we primarily present the results from [74,75]. In 2018, Song et al. extended the determinant-based solution representation of the matrix equation A X B = C to the more general system (3) over the quaternion algebra [74]. Four years later, Kyrchei further developed a version of Cramer’s rule applicable to the system (3), providing explicit solutions, including Hermitian and η -(anti-)Hermitian solutions over quaternions [75].
Before presenting the results on the matrix equation system (3) given in [74], we first introduce several definitions from [76], which are essential for understanding the determinant-based framework. Let $S_n$ denote the symmetric group on the index set $I_n = \{1, \dots, n\}$, and let $A \in \mathbb{Q}^{n\times n}$.
$(1)$ The $i$-th row determinant of A, denoted by $\mathrm{rdet}_i(A)$, is defined as
$$\mathrm{rdet}_i(A) = \sum_{\sigma \in S_n} (-1)^{n-r}\, a_{i\, i_{k_1}}\, a_{i_{k_1}\, i_{k_1+1}} \cdots a_{i_{k_1+l_1}\, i} \cdots a_{i_{k_r}\, i_{k_r+1}} \cdots a_{i_{k_r+l_r}\, i_{k_r}}$$
for $i = 1, \dots, n$. The permutation $\sigma \in S_n$ is expressed in left-ordered cycle form as
$$\sigma = (i\ i_{k_1}\ i_{k_1+1} \cdots i_{k_1+l_1})(i_{k_2}\ i_{k_2+1} \cdots i_{k_2+l_2}) \cdots (i_{k_r}\ i_{k_r+1} \cdots i_{k_r+l_r}).$$
The index $i$ initiates the leftmost cycle, while the remaining indices satisfy $i_{k_2} < i_{k_3} < \cdots < i_{k_r}$ and $i_{k_t} < i_{k_t+s}$, where $t = 2, \dots, r$ and $s = 1, \dots, l_t$.
$(2)$ The $j$-th column determinant of A, denoted by $\mathrm{cdet}_j(A)$, is defined as
$$\mathrm{cdet}_j(A) = \sum_{\tau \in S_n} (-1)^{n-r}\, a_{j_{k_r}\, j_{k_r+l_r}} \cdots a_{j_{k_r+1}\, j_{k_r}} \cdots a_{j_{k_1+l_1}\, j_{k_1+1}}\, a_{j_{k_1}\, j}$$
for $j = 1, \dots, n$. The permutation $\tau \in S_n$ is expressed in right-ordered cycle form as
$$\tau = (j_{k_r+l_r} \cdots j_{k_r+1}\ j_{k_r}) \cdots (j_{k_2+l_2} \cdots j_{k_2+1}\ j_{k_2})(j_{k_1+l_1} \cdots j_{k_1+1}\ j_{k_1}\ j).$$
The index $j$ initiates the rightmost cycle, while the remaining indices satisfy $j_{k_2} < j_{k_3} < \cdots < j_{k_r}$ and $j_{k_t} < j_{k_t+s}$, where $t = 2, \dots, r$ and $s = 1, \dots, l_t$.
With the above preliminaries, Ref. [74] proceeded to establish the following theorem, which provides a general solution to the system (3).
Theorem 35
(General solutions for (3) over Q [74]). For given $A_1 \in \mathbb{Q}^{p\times m}$, $B_1 \in \mathbb{Q}^{n\times q}$, $C_1 \in \mathbb{Q}^{p\times q}$, $A_2 \in \mathbb{Q}^{r\times m}$, $B_2 \in \mathbb{Q}^{n\times l}$, and $C_2 \in \mathbb{Q}^{r\times l}$, such that the system (3) is consistent, denote
$$T = A_1^{*} A_1 + A_2^{*} A_2, \qquad S = B_1 B_1^{*} + B_2 B_2^{*},$$
$$Y_{10} = A_{22} A_{11} A_2^{*} C_2 B_2^{*} + L_{A_{22}} A_1^{*} C_1 B_1^{*} B_{22} B_{11}, \qquad Y_{20} = A_{11} A_{22} A_1^{*} C_1 B_1^{*} + L_{A_{11}} A_2^{*} C_2 B_2^{*} B_{11} B_{22},$$
and
$$A_{ii} = A_i^{*} A_i T^{\dagger}, \qquad B_{ii} = S^{\dagger} B_i B_i^{*}, \qquad i = 1, 2.$$
Assume there are two full column rank matrices K * and L satisfying
$$\mathcal{N}_r(T) = \mathcal{R}_r(K^{*}), \qquad \mathcal{N}_r(S) = \mathcal{R}_r(L).$$
In this case, the general solution of (3), represented by $X = (x_{ij})_{n\times p}$, admits the determinantal representations
$$x_{ij} = \big(\det(T + K^{*}K)\det(S + LL^{*})\big)^{-1}\, \mathrm{rdet}_j\big((S + LL^{*})_{j.}(c_{i.}^{A})\big),$$
or equivalently,
$$x_{ij} = \big(\det(T + K^{*}K)\det(S + LL^{*})\big)^{-1}\, \mathrm{cdet}_i\big((T + K^{*}K)_{.i}(c_{.j}^{B})\big),$$
with
$$c_{i.}^{A} = \big[\mathrm{cdet}_i\big((T + K^{*}K)_{.i}(d_{.1})\big), \dots, \mathrm{cdet}_i\big((T + K^{*}K)_{.i}(d_{.p})\big)\big], \qquad c_{.j}^{B} = \big[\mathrm{rdet}_j\big((S + LL^{*})_{j.}(d_{1.})\big), \dots, \mathrm{rdet}_j\big((S + LL^{*})_{j.}(d_{n.})\big)\big]^{T}.$$
Here $d_{i.}$ and $d_{.j}$ are the $i$-th row and $j$-th column of D, for $i = 1, \dots, n$ and $j = 1, \dots, p$, with
$$D = (T + K^{*}K)^{-1} T \big(A_1^{*} C_1 B_1^{*} + A_2^{*} C_2 B_2^{*} + Y_{10} + Y_{20}\big) S (S + LL^{*})^{-1} + M + W,$$
$$M = (T + K^{*}K)^{-1} T \big(L_{A_{22}} V_1 R_{B_{11}} + L_{A_{11}} V_2 R_{B_{22}}\big) S (S + LL^{*})^{-1},$$
$$W = T Z L L^{*} + K^{*} K Z S + K^{*} K Z L L^{*},$$
V 1 , V 2 and Z denote arbitrary quaternion matrices with appropriate dimensions.
Building upon Theorem 35 for the system (3), [74] further derived the Hermitian solution to the following matrix system over the quaternion field:
A 1 X A 1 * = C 1 , A 2 X A 2 * = C 2 .
The corresponding corollary is presented below.
Corollary 6
(Hermitian solutions for (41) over Q [74]). Suppose that $A_i \in \mathbb{Q}^{p\times m}$ and $C_i \in \mathbb{Q}^{p\times p}$ for $i = 1, 2$, such that (41) has a Hermitian solution. Let
T = A 1 * A 1 + A 2 * A 2 , A 11 = A 1 * A 1 T , A 22 = A 2 * A 2 T , Y 10 = A 22 A 11 A 2 * C 2 A 2 + L A 22 A 1 * C 1 A 1 A 22 A 11 , Y 20 = A 11 A 22 A 1 * C 1 A 1 + L A 11 A 2 * C 2 A 2 A 11 A 22 .
Assume there exists K * as a full column rank matrix satisfying
$$\mathcal{N}_r(T) = \mathcal{R}_r(K^{*}).$$
In this case, the general Hermitian solution of (41) is represented by $X = (x_{ij})_{n\times n}$, with the determinantal representations
$$x_{ij} = \tfrac{1}{2}\,(y_{ij} + y_{ji}^{*}),$$
where
$$y_{ij} = \big(\det(T + K^{*}K)^2\big)^{-1}\, \mathrm{rdet}_j\big((T + K^{*}K)_{j.}(c_{i.}^{A})\big),$$
or equivalently,
$$y_{ij} = \big(\det(T + K^{*}K)^2\big)^{-1}\, \mathrm{cdet}_i\big((T + K^{*}K)_{.i}(c_{.j}^{B})\big).$$
Among them,
$$c_{i.}^{A} = \big[\mathrm{cdet}_i\big((T + K^{*}K)_{.i}(d_{.1})\big), \dots, \mathrm{cdet}_i\big((T + K^{*}K)_{.i}(d_{.n})\big)\big], \qquad c_{.j}^{B} = \big[\mathrm{rdet}_j\big((T + K^{*}K)_{j.}(d_{1.})\big), \dots, \mathrm{rdet}_j\big((T + K^{*}K)_{j.}(d_{n.})\big)\big]^{T}.$$
Here $d_{i.}$ and $d_{.j}$ are the $i$-th row and $j$-th column of D, for $i, j = 1, \dots, n$, with
$$D = (T + K^{*}K)^{-1} T \big(A_1^{*} C_1 A_1 + A_2^{*} C_2 A_2 + Y_{10} + Y_{20}\big) T (T + K^{*}K)^{-1} + M + W,$$
$$M = (T + K^{*}K)^{-1} T \big(L_{A_{22}} V_1 R_{A_{11}} + L_{A_{11}} V_2 R_{A_{22}}\big) T (T + K^{*}K)^{-1},$$
$$W = T Z K K^{*} + K^{*} K Z T + K^{*} K Z K K^{*},$$
and $V_1$, $V_2$, and Z denote arbitrary quaternion matrices with appropriate dimensions.
In 2005, Wang provided a general solution to the system of Equation (3) [42]. Building on this result, Kyrchei derived explicit forms of partial solutions by setting the arbitrary components in the general solution to zero matrices [75]. Moreover, Kyrchei also presented partial solutions of the system (3), as well as corresponding η -(anti-)Hermitian partial solutions, using an extension of Cramer’s rule.
Let α = { α 1 , , α k } { 1 , , m } and β = { β 1 , , β k } { 1 , , n } be subsets with 1 k min { m , n } . Assume that A β α denote the submatrix of A Q m × n formed by selecting rows indexed by α and columns indexed by β . If A is Hermitian, then | A | α α denotes a principal minor of det ( A ) . The set of strictly increasing sequences of k integers selected from { 1 , , n } is denoted by
L k , n = { α : α = ( α 1 , , α k ) , 1 α 1 < < α k n } .
For fixed indices i α and j β , define
I r , m { i } = { α L r , m : i α } , J r , n { j } = { β L r , n : j β } .
Let a . j , a . j * , a i . , and  a i . * denote the j-th column of A, the j-th column of A * , the i-th row of A, and the i-th row of A * , respectively. Denote A i . ( b ) as the matrix obtained by replacing the i-th row of A with the row vector b Q 1 × n , and  A . j ( c ) as the matrix obtained by replacing the j-th column of A with the column vector c Q m × 1 . The notations a i . k , a . j k , a i . k , g , and  a . j k , g refer to the i-th row of A k , the j-th column of A k , the i-th row of A k g , and the j-th column of A k g , respectively.
The partial general solution to the matrix equation system (3) over the quaternion algebra is given by
X = A 1 C 1 B 1 + H C 2 B 2 + T C 2 N + H A 2 T A 2 A 1 C 1 B 1 B 2 B 2 H A 2 A 1 C 1 B 1 B 2 B 2 H A 2 T C 2 B 2 T A 2 A 1 C 1 B 1 B 2 N .
Kyrchei derived this solution by applying the determinantal representations of the Moore–Penrose inverse and utilizing the structure of the system (3) [75]. The following presents the partial general solution for the system (3).
Theorem 36
(Partial solutions for (3) over Q [75]). Let A 1 = ( a i j ( 1 ) ) Q p × m , B 1 = ( b i j ( 1 ) ) Q n × q , A 2 = ( a i j ( 2 ) ) Q r × m , B 2 = ( b i j ( 2 ) ) Q n × l , C 1 = ( c i j ( 1 ) ) Q p × q , and C 2 = ( c i j ( 2 ) ) Q r × l . Assume that rank ( A 1 ) = r 1 , rank ( B 1 ) = r 2 , rank ( A 2 ) = r 3 and rank ( B 2 ) = r 4 . Denote
H = A 2 L A 1 , N = R B 1 B 2 , T = R H A 2 , F = B 2 L N .
The matrix equation system (3) is consistent if and only if
T ( A 2 X B 2 A 1 C 1 B 1 ) F = 0
and
A i A i C i B i B i = C i
for i = 1 , 2 .
Then, let
rank ( H ) = min { rank ( A 2 ) , rank ( L A 1 ) } = r 5 , rank ( N ) = min { rank ( B 2 ) , rank ( R B 1 ) } = r 6 , rank ( T ) = min { rank ( A 2 ) , rank ( R H ) } = r 7 .
The partial solution X = ( x i j ) Q m × n to the system (3) consists of seven components as detailed below:
x i j = l = 1 4 x i j ( l ) l = 5 7 x i j ( l ) ,
with
( x i j ( 1 ) ) = A 1 C 1 B 1 , ( x i j ( 2 ) ) = H C 2 B 2 , ( x i j ( 3 ) ) = T C 2 N , ( x i j ( 4 ) ) = H A 2 T A 2 A 1 C 1 B 1 B 2 B 2 , ( x i j ( 5 ) ) = H A 2 A 1 C 1 B 1 B 2 B 2 , ( x i j ( 6 ) ) = H A 2 T C 2 B 2 , ( x i j ( 7 ) ) = T A 2 A 1 C 1 B 1 B 2 N .
In which
x i j ( 1 ) = ( β J r 1 , m | A 1 * A 1 | β β α I r 2 , l | B 1 B 1 * | α α ) 1 ( β J r 1 , m { i } cdet i ( ( A 1 * A 1 ) . i ( d . j B 1 ) ) β β ) = ( β J r 1 , l | A 1 * A 1 | β β α I r 2 , s | B 1 B 1 * | α α ) 1 ( α I r 2 , s { j } rdet j ( ( B 1 B 1 * ) j . ( d i . A 1 ) ) α α ) , x i j ( 2 ) = ( β J r 5 , m | H * H | β β α I r 4 , n | B 2 B 2 * | α α ) 1 ( β J r 5 , m { i } cdet i ( ( H * H ) . i ( d . j B 2 ) ) β β ) = ( β J r 5 , m | H * H | β β α I r 4 , n | B 2 B 2 * | α α ) 1 ( α I r 4 , n { j } rdet j ( ( B 2 B 2 * ) j . ( d i . H ) ) α α ) , x i j ( 3 ) = ( β J r 7 , m | T * T | β β α I r 6 , n | N N * | α α ) 1 ( β J r 7 , m { i } cdet i ( ( T * T ) . i ( d . j N ) ) β β ) = ( β J r 7 , m | T * T | β β α I r 6 , n | N N * | α α ) 1 ( α I r 6 , n { j } rdet j ( ( N N * ) j . ( d i . T ) ) α α ) , x i j ( 4 ) = ( β J r 5 , m | H * H | β β β J r 7 , m | T * T | β β α I r 4 , n | B 2 B 2 * | α α ) 1 ( β J r 5 , m { i } cdet i ( ( H * H ) . i ( h ˜ . j ) ) β β ) , x i j ( 5 ) = ( β J r 5 , m | H * H | β β α I r 4 , n | B 2 B 2 * | α α ) 1 ( β J r 5 , m { i } cdet i ( ( H * H ) . i ( h ^ . j ) ) β β ) , x i j ( 6 ) = ( β J r 5 , m | H * H | β β β J r 7 , m | T * T | β β α J r 4 , n | B 2 B 2 * | α α ) 1 ( β J r 3 , m { i } cdet i ( ( H * H ) . i ( ϕ ˜ . j ) ) β β ) , x i j ( 7 ) = ( β J r 7 , m | T * T | β β α I r 6 , n | N N * | α α ) 1 ( β J r 7 , m { i } cdet i ( ( T * T ) . i ( ω ˜ . j ) ) β β ) , d . j B 1 = α I r 2 , l { j } rdet j ( ( B 1 B 1 * ) j . ( c ˜ s . ( 1 ) ) ) α α , d i . A 1 = β J r 1 , m { i } cdet i ( ( A 1 * A 1 ) . i ( c ˜ . t ( 1 ) ) ) β β , d . j B 2 = α I r 4 , n { j } rdet j ( ( B 2 B 2 * ) j . ( c ˜ s . ( 2 ) ) ) α α , d i . H = β J r 5 , m { i } cdet i ( ( H * H ) . i ( c ˜ . t ( 2 ) ) ) β β , d . j N = α I r 6 , n { f } rdet j ( ( N N * ) j . ( c ^ s . ( 2 ) ) ) α α , d i . T = β J r 7 , m { i } cdet i ( ( T * T ) . i ( c ^ . t ( 2 ) ) ) β β , t s j ( 1 ) = z = 1 m β J r 7 , m { s } cdet s ( ( T * T ) . s ( a . z ( 2 , T ) ) ) β β p z j ( 1 ) = β J r 7 , m { s } cdet s ( ( T * T ) . s ( t ˜ . j ) ) β β , p z j ( 1 ) = f = 1 n x z f ( 1 ) α I r 4 , n { j } rdet j ( ( B 2 B 2 * ) j . ( b ¨ f . ( 2 ) ) ) α α = α I r 4 , n { j } rdet j ( ( B 2 B 2 * ) j . ( x ˜ z . ( 1 ) ) ) α α , ω s j = f = 1 n x s f ( 1 ) α I r 6 , n { j } rdet j ( ( N N * ) j . ( b f . ( 2 , N ) ) ) α α = α I r 6 , n { j } rdet j ( ( N N * ) j . ( x ^ s . ( 1 ) ) ) α α , ϕ q j = β J r 7 , n { i } cdet q ( ( T * T ) . q ( φ . j B 2 ) ) β β = α I r 4 , r { j } rdet j ( ( B 2 B 2 * ) j . ( φ q . T ) ) α α , φ . j B 2 = α I r 4 , r { f } rdet j ( ( B 2 B 2 * ) j . ( c ˇ s . ( 2 ) ) ) α α , φ q . T = β J r 7 , n { q } cdet q ( ( T * T ) . q ( c ˇ . t ( 2 ) ) ) β β ,
where
C 1 ˜ = A 1 * C 1 B 1 * , C 2 ˜ = H * C 2 B 2 * , C 2 ^ = T * C 2 N * , H ˜ = H * A 2 T 1 , T ˜ = T * A 2 P 1 , X 1 ˜ = A 1 C 1 B 1 B 2 B 2 , H ^ = H * A 2 P 1 , Φ ˜ = H * A 2 Φ , Ω ^ = T * A 2 Ω , C ˇ 2 = T * C 2 B 2 * , X 1 ^ = A 1 C 1 B 1 B 1 N * ,
a . z ( 2 , T ) is the z-th column of the matrix T * A 2 , b ¨ f . ( 2 ) is the f-th row of B 2 B 2 * , and  b f . ( 2 , N ) is the f-th row of B 2 N * .
Remark 23.
Theorem 36 presents results established over the quaternions; analogous results also hold over the complex field.
In addition, Kyrchei presented two particular cases of the system (3), specifically the partial Hermitian solution of (41) and the partial η -(anti-)Hermitian solution of the system
A 1 X A 1 η * = C 1 , A 2 X A 2 η * = C 2 .
Next, we consider the partial Hermitian solution to the matrix system (41). The corresponding partial solution is given by
X = A 1 C 1 ( A 1 * ) + H C 2 ( A 2 * ) + T C 2 ( H * ) + H A 2 T A 2 A 1 C 1 ( A 1 * ) ( A 2 * ) A 2 * H A 2 A 1 C 1 ( A 1 * ) ( A 2 * ) A 2 * H A 2 T C 2 ( A 2 * ) T A 2 A 1 C 1 ( A 1 * ) A 2 * ( H * ) .
Let X = ( x i j ) Q n × n be the partial Hermitian solution to the system (41). Then, we have the following theorem.
Theorem 37
(Partial Hermitian solution for (41) over Q [75]). Suppose that A 1 Q p × m , A 2 Q r × m , C 1 Q p × p , and  C 2 Q r × r are given, with  rank ( A 1 ) = r 1 and rank ( A 2 ) = r 2 . Let
N = R A 1 * A 2 * = ( A 2 L A 1 ) * = H * , F = A 2 * L H * = ( R H A 2 ) * = T * .
The matrix equation system (41) is consistent if and only if
T ( A 2 C 2 ( A 2 * ) A 1 C 1 ( A 1 * ) ) T * = 0
and
( A i * ) A i * C i ( A i * ) A i * = C i , i = 1 , 2 .
Then, denote
rank ( H ) = rank ( A 2 L A 1 ) = r 3 , rank ( T ) = rank ( R H A 2 ) = r 4 .
The partial Hermitian solution X = ( x i j ) Q m × m to the system (41) can be expressed as the sum of seven components, as outlined below:
x i j = l = 1 4 x i j ( l ) l = 5 7 x i j ( l ) ,
with
( x i j ( 1 ) ) = A 1 C 1 ( A 1 * ) , ( x i j ( 2 ) ) = H C 2 ( A 2 * ) , ( x i j ( 3 ) ) = T C 2 ( H * ) , ( x i j ( 4 ) ) = H A 2 T A 2 A 1 C 1 ( A 1 * ) ( A 2 * ) A 2 * , ( x i j ( 5 ) ) = H A 2 A 1 C 1 ( A 1 * ) ( A 2 * ) A 2 * , ( x i j ( 6 ) ) = H A 2 T C 2 ( A 2 * ) , ( x i j ( 7 ) ) = T A 2 A 1 C 1 ( A 1 * ) A 2 * ( H * ) .
In which
x i j ( 1 ) = ( α I r 1 , n | A 1 * A 1 | α α ) 2 ( α I r 1 , n , { j } rdet j ( ( A 1 * A 1 ) j . ( v i . ( 1 ) ) ) α α ) = ( β J r 1 , n | A 1 * A 1 | β β ) 2 ( β J r 1 , n { i } cdet i ( ( A 1 * A 1 ) . i ( v . j ( 2 ) ) ) β β ) , x i j ( 2 ) = ( β J r 3 , n | H * H | β β α I r 2 , n | A 2 * A 2 | α α ) 1 ( β J r 3 , n { i } cdet i ( ( H * H ) . i ( d . j A 2 ) ) β β ) = ( β J r 3 , n | H * H | β β α I r 2 , n | A 2 * A 2 | α α ) 1 ( α I r 2 , n { j } rdet j ( ( A 2 * A 2 ) j . ( d i . H ) ) α α ) , x i j ( 3 ) = ( β J r 4 , n | T * T | β β α I r 3 , n | H * H | α α ) 1 ( β J r 4 , n { i } cdet i ( ( T * T ) . i ( d . j H ) ) β β ) = ( β J r 4 , n | T * T | β β α I r 3 , n | H * H | α α ) 1 ( α I r 3 , n { j } rdet j ( ( H * H ) j . ( d i . T ) ) α α ) , x i j ( 4 ) = ( β J r 3 , n | H * H | β β β J r 4 , n | T * T | β β β I r 2 , n | A 2 * A 2 | α α ) 1 ( β J r 3 , n { i } cdet i ( ( H * H ) . i ( ψ ˜ . j ) ) β β ) , x i j ( 5 ) = ( β J r 3 , n | H * H | β β β I r 2 , n | A 2 * A 2 | α α ) 1 ( β J r 3 , n { i } cdet i ( ( H * H ) . i ( ϕ ˜ . j ) ) β β ) , x i j ( 6 ) = ( β J r 3 , n | H * H | β β β J r 4 , n | T * T | β β β J r 2 , n | A 2 * A 2 | β β ) 1 ( β J r 3 , n { i } cdet i ( ( H * H ) . i ( ω ˜ . j ) ) β β ) , x i j ( 7 ) = ( β J r 4 , n | T * T | β β α I r 3 , r | H * H | α α ) 1 ( α I r 3 , n { j } rdet j ( ( H * H ) j . ( w i . ( 1 ) ) ) α α ) = ( β J r 4 , n | T * T | β β α I r 3 , r | H * H | α α ) 1 ( β J r 4 , n { i } cdet i ( ( T * T ) . i ( w . j ( 2 ) ) ) β β ) , v i . ( 1 ) = β J r 1 , n { i } cdet i ( ( A 1 * A 1 ) . i ( c . s ( 11 ) ) ) β β , v . j ( 2 ) = α I r 1 , n { j } rdet j ( ( A 1 * A 1 ) j . ( c f . ( 11 ) ) ) α α , d . j A 2 = α I r 2 , n { j } rdet j ( ( A 2 * A 2 ) j . ( c q . ( 21 ) ) ) α α , d i . H = β J r 3 , n { i } cdet i ( ( H * H ) . i ( c . l ( 21 ) ) ) β β , d . j H = α I r 3 , n { f } rdet j ( ( H * H ) j . ( c q . ( 22 ) ) ) α α , d i . T = β J r 4 , n { i } cdet i ( ( T * T ) . i ( c . l ( 22 ) ) ) β β , ψ q j = z β J r 4 , n { q } cdet q ( ( T * T ) . q ( a . z ( 2 , T ) ) ) β β ϕ z j = β J r 4 , n { q } cdet q ( ( T * T ) . q ( ϕ ˜ . j ) ) β β , ϕ z j = f x z f ( 1 ) β I r 2 , n { j } rdet j ( ( A 2 * A 2 ) j . ( a ˙ f . ( 2 ) ) ) α α = β I r 2 , n { j } rdet j ( ( A 2 * A 2 ) j . ( x ˜ z . ( 1 ) ) ) α α , ω q j = β J r 4 , n { q } cdet q ( ( T * T ) . q ( φ . j A 2 ) ) β β = α I r 2 , n { j } rdet j ( ( A 2 * A 2 ) j . ( φ q . T ) ) α α , φ . j A 2 = α I r 2 , n { f } rdet j ( ( A 2 * A 2 ) j . ( c q . ( 23 ) ) ) α α , φ q . T = β J r 4 , n { q } cdet q ( ( T * T ) . q ( c . l ( 23 ) ) ) β β , w i . ( 1 ) = β J r 4 , n { i } cdet i ( ( T * T ) . i ( x ˜ . f ( 1 ) ) ) β β , w . j ( 2 ) = α I r 3 , n { j } rdet j ( ( H * H ) j . ( x ˜ q . ( 1 ) ) ) α α ,
with
C 11 = A 1 * C 1 A 1 , C 21 = H * C 2 A 2 , C 22 = T * C 2 H , C 23 = T * C 2 A 2 , Ψ ˜ = H * A 2 Ψ , Φ ˜ = T * A 2 Φ , Ω ˜ = H * A 2 Ω , X ˜ 1 = X 1 A 2 * A 2 ,
a ˙ f . ( 2 ) is the f-th row of A 2 * A 2 and a . z ( 2 , T ) is the z-th column of the matrix T * A 2 .
We now present the η -(anti-)Hermitian solution of the matrix system (42). The corresponding partial solution to the system of Equation (42) is expressed as
X = A 1 C 1 ( A 1 η * ) + H C 2 ( A 2 η * ) + T C 2 ( H η * ) + H A 2 T A 2 A 1 C 1 ( A 1 η * ) ( A 2 η * ) A 2 η * H A 2 A 1 C 1 ( A 1 η * ) ( A 2 η * ) A 2 η * H A 2 T C 2 ( A 2 η * ) T A 2 A 1 C 1 ( A 1 η * ) A 2 η * ( H η * ) .
The following presents the partial η -(anti-)Hermitian solution (43) for the system (42) as given in [75].
Theorem 38
(Partial η -(anti-)Hermitian solution for (42) over Q [75]). Assume that A 1 Q p × m , A 2 Q r × m , C 1 Q p × p , and  C 2 Q r × r , with  rank ( A 1 ) = r 1 , rank ( A 2 ) = r 2 . Let
N = R A 1 η * A 2 η * = A 2 L A 1 η * = H η * , F = A 2 η * L H η * = R H A 2 η * = T η * .
The matrix equation system (42) is consistent if and only if
T ( A 2 C 2 ( A 2 η * ) A 1 C 1 ( A 1 η * ) ) T η * = 0
and
( A i η * ) A i η * C i ( A i η * ) A i η * = C i
for i = 1 , 2 .
Then, let
rank ( H ) = rank ( A 2 L A 1 ) = r 3 , rank ( T ) = rank ( R H A 2 ) = r 4 .
The partial η-(anti-)Hermitian solution to the system (42) is composed of seven components, as outlined below:
x i j = δ = 1 4 x i j ( δ ) δ = 5 7 x i j ( δ ) ,
with
( x i j ( 1 ) ) = A 1 C 1 ( A 1 η * ) , ( x i j ( 2 ) ) = H C 2 ( A 2 η * ) , ( x i j ( 3 ) ) = T C 2 ( H η * ) , ( x i j ( 4 ) ) = H A 2 T A 2 A 1 C 1 ( A 1 η * ) ( A 2 η * ) A 2 η * , ( x i j ( 5 ) ) = H A 2 A 1 C 1 ( A 1 η * ) ( A 2 η * ) A 2 η * , ( x i j ( 6 ) ) = H A 2 T C 2 ( A 2 η * ) , ( x i j ( 7 ) ) = T A 2 A 1 C 1 ( A 1 η * ) A 2 η * ( H η * ) .
In which
x i j ( 1 ) = ( α I r 1 , n | A 1 * A 1 | α α ) 2 ( η α I r 1 , n { j } rdet j ( ( A 1 * A 1 ) j . ( v i . ( 1 ) , η ) ) α α η ) = ( β J r 1 , n | A 1 * A 1 | β β ) 2 ( β J r 1 , n { i } cdet i ( ( A 1 * A 1 ) . i ( v . j ( 2 ) ) ) β β ) , x i j ( 2 ) = ( β J r 3 , n | H * H | β β α I r 2 , n | A 2 * A 2 | α α ) 1 ( β J r 3 , n { i } cdet i ( ( H * H ) . i ( d . j A 2 ) ) β β ) = ( β J r 3 , n | H * H | β β α I r 2 , n | A 2 * A 2 | α α ) 1 ( η α I r 2 , n { j } rdet j ( ( A 2 * A 2 ) j . ( d i . H ) ) α α η ) , x i j ( 3 ) = ( β J r 4 , n | T * T | β β α I r 3 , n | H * H | α α ) 1 ( β J r 4 , n { i } cdet i ( ( T * T ) . i ( d . j H ) ) β β ) = ( β J r 4 , n | T * T | β β α I r 3 , n | H * H | α α ) 1 ( η α I r 3 , r { j } rdet j ( ( H * H ) j . ( d i . T ) ) α α η ) , x i j ( 4 ) = ( β J r 3 , n | H * H | β β β J r 4 , n | T * T | β β α I r 2 , n | A 2 * A 2 | α α ) 1 ( β J r 3 , n { i } cdet i ( ( H * H ) . i ( h ˜ . j ) ) β β ) , x i j ( 5 ) = ( β J r 3 , n | H * H | β β α I r 2 , n | A 2 * A 2 | α α ) 1 ( β J r 3 , n { i } cdet i ( ( H * H ) . i ( h ^ . j ) ) β β ) , x i j ( 6 ) = ( β J r 3 , n | H * H | β β β J r 4 , n | T * T | β β β J r 2 , n | A 2 * A 2 | β β ) 1 ( β J r 3 , n { i } cdet i ( ( H * H ) . i ( ϕ ˜ . j ) ) β β ) , x i j ( 7 ) = ( β J r 4 , n | T * T | β β α I r 3 , r | H * H | α α ) 1 ( β J r 4 , n { i } cdet i ( ( T * T ) . i ( ω ^ . j ) ) β β ) , v i . ( 1 ) , η = η β J r 1 , n { i } cdet i ( ( A 1 * A 1 ) . i ( c . s ( 11 ) ) ) β β η , v . j ( 2 ) = η α I r 1 , n { j } rdet j ( ( A 1 * A 1 ) j . ( c f . ( 11 ) , η ) ) α α η , d . j A 2 = η α I r 2 , n { j } rdet j ( ( A 2 * A 2 ) j . ( c q . ( 21 ) , η ) ) α α η , d i . H = η β J r 3 , n { i } cdet i ( ( H * H ) . i ( c . l ( 21 ) ) ) β β η , d . j H = η α I r 3 , n { f } rdet j ( ( H * H ) j . ( c q . ( 22 ) , η ) ) α α η , d i . T = η β J r 4 , n { i } cdet i ( ( T * T ) . i ( c . l ( 22 ) ) ) β β η , t s j ( 1 ) = z = 1 n β J r 4 , n { s } cdet s ( ( T * T ) . s ( a . z ( 2 , T ) ) ) β β q z j ( 1 ) = β J r 4 , n { s } cdet s ( ( T * T ) . s ( t ˜ . j ) ) β β , q z j ( 1 ) = f = 1 n x z f ( 1 ) α I r 2 , n { j } rdet j ( ( A 2 * A 2 ) j . ( a ˙ f . ( 2 ) ) ) α α = α I r 2 , n { j } rdet j ( ( A 2 * A 2 ) j . ( x ˜ z . ( 1 ) ) ) α α , ϕ q j = β J r 4 , n { q } cdet q ( ( T * T ) . q ( φ . j A 2 ) ) β β = η α I r 2 , n { j } rdet j ( ( A 2 * A 2 ) j . ( φ q . T ) ) α α η , φ . j A 2 = η α I r 2 , n { f } rdet j ( ( A 2 * A 2 ) . j ( c q . ( 23 ) , η ) α ) α η , φ q . T = η β J r 4 , n { q } cdet q ( ( T * T ) . q ( c . l ( 23 ) ) ) β β η , ω q j = f = 1 n x q f ( 1 ) ( η α I r 3 , n { j } rdet j ( ( H * H ) j . ( a f . ( 2 , H , η * ) ) ) α α η ) = η α I r 3 , n { j } rdet j ( ( H * H ) j . ( x ^ q . ( 1 , η ) ) ) α α η ,
with
C 11 = A 1 * C 1 A 1 η , C 11 η = A 1 η * C 1 η A 1 , C 21 = H * C 2 A 2 η , C 22 = T * C 2 H η , C 23 = T * C 2 A 2 η , H ˜ = H * A 2 T 1 , T ˜ = T * A 2 Q 1 , X ˜ 1 = A 1 C 1 ( A 1 η * ) A 2 * A 2 , Φ ˜ = H * A 2 Φ , Ω ^ = T * A 2 Ω , X 1 η = ( A 1 C 1 ( A 1 η * ) ) η A 2 * H ,
a ˙ f . ( 2 ) is the f-th row of A 2 * A 2 and a f . ( 2 , H , η * ) is the f-th row of A 2 η * H .
In this section, Cramer’s rule is used to solve the system (3) over the quaternions, offering a conceptually clear and intuitive approach. However, the method relies on intricate notation, and the high computational cost of applying Cramer’s rule to high-dimensional matrix equations limits its practical applicability to large-scale instances of the system (3). Moreover, owing to the special multiplication rules of quaternions, little research has been devoted to developing numerically efficient methods in this setting. In the following section, we introduce several numerical methods for solving the system (3) over the real number field.

7. Iterative Algorithm Methods

Numerical methods for solving individual matrix equations have been extensively studied. However, developing numerical algorithms for systems of matrix equations is more challenging, as it requires the residuals of multiple equations to converge simultaneously. Consequently, research in this area remains relatively limited and technically demanding.
For the matrix system (3), existing studies on numerical algorithms have predominantly focused on the real number field. Over the years, ongoing efforts have resulted in the development of more efficient iterative methods, thereby expanding both the types of solutions and the practical applicability of these algorithms.
This section reviews a series of studies from 2006 to 2021, which investigated various solution types, including symmetric solutions, minimum-norm solutions, least-squares solutions, and the optimization problem
min X S E X X 0 ,
for a given X 0 , with S E representing the set of solutions of (3) [77,78,79,80,81,82,83,84,85,86,87,88].
Sheng and Chen were among the first to propose a finite iterative method for computing the general solution of the system of matrix Equation (3) [77]. The algorithm presented in [77] is outlined as follows.
Theorem 39
(Convergence of Algorithm 1 in [77]). For any given initial matrix X 1 , a solution to the system of matrix Equation (3) can be obtained in at most m n iterations. When the initial iteration matrix is taken as X 1 = A 1 T H B 1 T + A 2 T H ˜ B 2 T , where H and H ˜ are arbitrary, the matrix X * obtained by Algorithm 1 is the minimum-norm solution of the system of matrix Equation (3).
Algorithm 1 General solution for (3) over R [77].
  • Require: Matrices A 1 R p × m , B 1 R n × q , C 1 R p × q , A 2 R r × m , B 2 R n × l , C 2 R r × l , and the initial matrix X 1 R m × n .
  • Ensure: The solution matrix X.
  •     Step 1: Calculate
    R 0 = C 1 A 1 X 1 B 1 , r 0 = C 2 A 2 X 1 B 2 , P 0 = A 1 T R 0 B 1 T , Q 0 = A 2 T r 0 B 2 T .
        Set k = 1.
  •     Step 2: If R k = 0 , r k = 0 , then stop; else, k = k + 1 .
  •     Step 3: Calculate
    X k = X k 1 + ( R k 1 2 + r k 1 2 ) P k 1 + Q k 1 2 ( P k 1 + Q k 1 ) , R k = C 1 A 1 X k B 1 = R k 1 ( R k 1 2 + r k 1 2 ) P k 1 + Q k 1 2 A 1 ( P k 1 + Q k 1 ) B 1 , r k = C 2 A 2 X k B 2 = r k 1 ( R k 1 2 + r k 1 2 ) P k 1 + Q k 1 2 A 2 ( P k 1 + Q k 1 ) B 2 , P k = A 1 T R k B 1 T + ( R k 2 + r k 2 ) / ( R k 1 2 + r k 1 2 ) P k 1 , Q k = A 2 T r k B 2 T + ( R k 2 + r k 2 ) / ( R k 1 2 + r k 1 2 ) Q k 1 .
  •     Step 4: Go to step 2.
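For concreteness, the following is a minimal NumPy sketch of Algorithm 1 (the function name and the tolerance tol are our own choices; the source algorithm stops exactly when both residuals vanish, and Theorem 39 guarantees termination in at most m n steps in exact arithmetic):

```python
import numpy as np

def alg1_general_solution(A1, B1, C1, A2, B2, C2, X1=None, tol=1e-12):
    """Sketch of Algorithm 1: finite iterative method for A1 X B1 = C1, A2 X B2 = C2."""
    m, n = A1.shape[1], B1.shape[0]
    X = np.zeros((m, n)) if X1 is None else X1.astype(float).copy()
    R = C1 - A1 @ X @ B1                    # residual of the first equation
    r = C2 - A2 @ X @ B2                    # residual of the second equation
    P = A1.T @ R @ B1.T
    Q = A2.T @ r @ B2.T
    rho = np.linalg.norm(R) ** 2 + np.linalg.norm(r) ** 2
    for _ in range(m * n):                  # at most m*n iterations (Theorem 39)
        if rho <= tol ** 2:
            break
        D = P + Q
        alpha = rho / np.linalg.norm(D) ** 2
        X = X + alpha * D
        R = R - alpha * (A1 @ D @ B1)
        r = r - alpha * (A2 @ D @ B2)
        rho_new = np.linalg.norm(R) ** 2 + np.linalg.norm(r) ** 2
        P = A1.T @ R @ B1.T + (rho_new / rho) * P
        Q = A2.T @ r @ B2.T + (rho_new / rho) * Q
        rho = rho_new
    return X
```

With X1 = 0 (the case H = H ˜ = 0 in Theorem 39), the returned matrix is the minimum-norm solution.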
Remark 24.
The problem (44) is equivalent to finding the minimum-norm solution of the following system:
A 1 X B 1 = C 1 A 2 X B 2 = C 2 A 1 ( X X 0 ) B 1 = C 1 A 1 X 0 B 1 A 2 ( X X 0 ) B 2 = C 2 A 2 X 0 B 2 .
Let X ˜ = X X 0 , C 1 ˜ = C 1 A 1 X 0 B 1 and C 2 ˜ = C 2 A 2 X 0 B 2 . Then, the system (45) is equivalent to
A 1 X ˜ B 1 = C ˜ 1 , A 2 X ˜ B 2 = C ˜ 2 .
By using Algorithm 1, we can obtain the unique minimum-norm solution X ˜ of the system of linear matrix Equation (46).
Ding et al. considered numerical solutions to the system of matrix Equation (3) [78]. Among their contributions, they established conditions under which the system (3) has a unique solution. Gradient-based algorithms and least-squares-based algorithms were developed to iteratively generate approximate solutions, and these algorithms were later extended to a more general family of matrix equation systems.
Before presenting these results, some notation is introduced. Let
X = X 1 X 2 X p R ( m p ) × n , Y = Y 1 Y 2 Y p R ( n p ) × m ,
where X i , Y i T R m × n , for i = 1 , 2 , , p . Then, the block-matrix star product ⋆ is defined as
X Y = X 1 X 2 X p Y 1 Y 2 Y p = X 1 Y 1 X 2 Y 2 X p Y p .
Below are the expressions for the iterative solution of the matrix equations system (3). Let A 1 R p × m , B 1 R n × q , C 1 R p × q , A 2 R r × m , B 2 R n × l and C 2 R r × l . Define
S = B 1 T A 1 B 2 T A 2 R ( p q + r l ) × ( m n ) .
The matrix equations system (3) has a unique solution if and only if rank { S , vec ( [ C 1 , C 2 ] ) } = rank { S } = m n . In this case, the unique solution is given by
vec ( X ) = ( S T S ) 1 S T vec ( [ C 1 , C 2 ] ) ,
and the corresponding homogeneous matrix equations A 1 X B 1 = 0 , A 2 X B 2 = 0 have a unique solution X = 0 .
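To make this vec-operator formulation concrete, the following is a minimal NumPy sketch (the helper names vec and unique_solution are ours) that assembles S from Kronecker products under the column-stacking convention vec ( A X B ) = ( B T ⊗ A ) vec ( X ) and recovers the unique solution when the rank condition holds:

```python
import numpy as np

def vec(M):
    """Column-stacking vectorization, so that vec(A X B) = kron(B.T, A) @ vec(X)."""
    return M.flatten(order="F")

def unique_solution(A1, B1, C1, A2, B2, C2):
    """Sketch: closed-form unique solution of A1 X B1 = C1, A2 X B2 = C2 (when it exists)."""
    m, n = A1.shape[1], B1.shape[0]
    S = np.vstack([np.kron(B1.T, A1),
                   np.kron(B2.T, A2)])
    c = np.concatenate([vec(C1), vec(C2)])
    rS = np.linalg.matrix_rank(S)
    # Unique solvability: rank[S, c] = rank(S) = m*n.
    if rS != m * n or np.linalg.matrix_rank(np.column_stack([S, c])) != rS:
        raise ValueError("the system has no unique solution")
    x = np.linalg.solve(S.T @ S, S.T @ c)   # vec(X) = (S^T S)^{-1} S^T c
    return x.reshape((m, n), order="F")
```

Since S has p q + r l rows and m n columns, forming it explicitly is only feasible for small problems; the iterative methods reviewed below avoid Kronecker products for precisely this reason.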
Define
G = A 1 A 2 , H = [ B 1 , B 2 ] .
If A 1 R p × m and A 2 R r × m are nonsquare matrices with full column rank, and B 1 R n × q and B 2 R n × l are nonsquare matrices with full row rank, then [78] described the gradient-based iterative algorithm as follows:
X ( k ) = X ( k 1 ) + μ G T C 1 A 1 X ( k 1 ) B 1 C 2 A 2 X ( k 1 ) B 2 H T ,
where
0 < μ < 2 ( λ max [ G G T ] λ max [ H T H ] ) 1 .
Next, we present the convergence of the iterative formula (47).
Theorem 40
(Convergence of (47) in [78]). If the matrix equations in (3) have a unique solution X, then the iterative solution X ( k ) given by the sequence (47) converges to X.
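Unrolling the star-product notation, the update (47) can be read as adding μ ( A 1 T R 1 B 1 T + A 2 T R 2 B 2 T ) to the current iterate, where R i = C i − A i X ( k − 1 ) B i . A minimal NumPy sketch under this reading follows; the concrete step size (half of the stated upper bound) is our own choice:

```python
import numpy as np

def gradient_iterative(A1, B1, C1, A2, B2, C2, iters=10000):
    """Sketch of the gradient-based iteration (47); assumes the unique-solution conditions."""
    m, n = A1.shape[1], B1.shape[0]
    G = np.vstack([A1, A2])
    H = np.hstack([B1, B2])
    # Step-size bound: 0 < mu < 2 / (lambda_max(G G^T) * lambda_max(H^T H)); take half of it.
    mu = 1.0 / (np.linalg.eigvalsh(G @ G.T).max() * np.linalg.eigvalsh(H.T @ H).max())
    X = np.zeros((m, n))
    for _ in range(iters):
        X = X + mu * (A1.T @ (C1 - A1 @ X @ B1) @ B1.T
                      + A2.T @ (C2 - A2 @ X @ B2) @ B2.T)
    return X
```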
On the other hand, when A 1 R p × m and A 2 R r × m are nonsquare matrices with full column rank, and  B 1 R n × q and B 2 R n × l are nonsquare matrices with full row rank, the least-squares-based iterative algorithm is described as
X ( k ) = X ( k 1 ) + μ ( G T G ) 1 G T C 1 A 1 X ( k 1 ) B 1 C 2 A 2 X ( k 1 ) B 2 H T ( H H T ) 1 ,
where
0 < μ < 2 .
Theorem 41
(Convergence of (48) in [78]). If the matrix system (3) has a unique solution X, then the iterative solution X ( k ) given by the algorithm in (48) converges to X.
Remark 25.
The iterative sequences (47) and (48) can also be applied to the generalized system of matrix equations:
A 1 X B 1 = C 1 , A 2 X B 2 = C 2 , … , A p X B p = C p .
Define
G p = A 1 A 2 A p , H p = [ B 1 , B 2 , , B p ] .
The gradient-based iterative solution can be expressed as
X ( k ) = X ( k 1 ) + μ G p T C 1 A 1 X ( k 1 ) B 1 C 2 A 2 X ( k 1 ) B 2 C p A p X ( k 1 ) B p H p T ,
where
0 < μ 2 i = 1 p A i 2 B i 2 1 .
Similarly, one can give the least-squares-based iterative solution to the matrix equations in (49):
X ( k ) = X ( k 1 ) + μ ( G p T G p ) 1 G p T C 1 A 1 X ( k 1 ) B 1 C 2 A 2 X ( k 1 ) B 2 C p A p X ( k 1 ) B p H p T ( H p H p T ) 1 ,
where
0 < μ 2 .
We next present symmetric solutions of the system (3) over R [79,80,81,82]. For this problem, successive algorithms have been progressively refined, with each further reducing the computational cost per iteration.
Peng et al. were the first to apply an iterative method to obtain symmetric solutions of the system (3) [79]. The iterative algorithm from [79] for computing symmetric solutions of the system (3) over R is outlined below.
Theorem 42
(Convergence of Algorithm 2 in [79]). If the system of Equation (3) is consistent, then for any initial matrix X 1 SR m × m , a solution can be obtained within a finite number of iterations. The minimum-norm solution can be obtained by choosing the initial iteration matrix X 1 = 0 R m × m . Additionally, the problem (44) can be equivalently transformed into solving (46).
Algorithm 2 Symmetric solutions for (3) over R [79].
  • Require: Matrices A 1 R p × m , B 1 R m × q , C 1 R p × q , A 2 R r × m , B 2 R m × l , C 2 R r × l and the initial matrix X 1 SR m × m .
  • Ensure: The solution matrix X.
  •     Step 1: Compute
    R 1 = C 1 A 1 X 1 B 1 0 0 C 2 A 2 X 1 B 2 , P 1 = A 1 T ( C 1 A 1 X 1 B 1 ) B 1 T + A 2 T ( C 2 A 2 X 1 B 2 ) B 2 T , Q 1 = 1 2 ( P 1 + P 1 T ) .
        Set k = 1 .
  •     Step 2: Compute
    X k + 1 = X k + R k 2 Q k 2 Q k , R k + 1 = C 1 A 1 X k + 1 B 1 0 0 C 2 A 2 X k + 1 B 2 , P k + 1 = A 1 T ( C 1 A 1 X k + 1 B 1 ) B 1 T + A 2 T ( C 2 A 2 X k + 1 B 2 ) B 2 T , Q k + 1 = 1 2 ( P k + 1 + P k + 1 T ) tr ( P k + 1 Q k ) Q k 2 Q k .
•     if  R k + 1 = 0 , or  R k + 1 ≠ 0 and Q k + 1 = 0 ,  then
  •         Stop.
  •     else
  •          k = k + 1 , go to Step 2.
  •     end if
Later, Chen et al. proposed an LSQR iterative method for symmetric solutions to the system of matrix Equation (3) [80]. The system of matrix Equation (3) can be reformulated as
M = B 1 T A 1 B 2 T A 2 A 1 B 1 T A 2 B 2 T , f = vec ( C 1 ) vec ( C 2 ) vec ( C 1 T ) vec ( C 2 T ) , x = vec ( X ) .
Hence, the vector form β i + 1 = f M x 0 , v 1 = M T u 1 , β i + 1 u i + 1 = M v i α i u i and α i + 1 v i + 1 = M T u i + 1 β i + 1 v i in the LSQR algorithm can be rewritten as the following matrix form:
β 1 U 1 = E A X 0 B , β ˜ 1 U ˜ 1 = F C X 0 D , β 1 = 2 ( E A X 0 B , F C X 0 D ) , α 1 V 1 = A T U 1 B T + B U 1 T A + C T U ˜ 1 D T + D U ˜ 1 T C , α 1 = A T U 1 B T + B U 1 T A + C T U ˜ 1 D T + D U ˜ 1 T C , β i + 1 U i + 1 = A V i B α i U i , U ˜ i + 1 = C V i D α i U ˜ i , β i + 1 = 2 ( A V i B α i U i , C V i D α i U ˜ i ) , α i + 1 V i + 1 = A T U i + 1 B T + B U i + 1 T A + C T U ˜ i + 1 D T + D U ˜ i + 1 T C β i + 1 V i , α i + 1 = A T U i + 1 B T + B U i + 1 T A + C T U ˜ i + 1 D T + D U ˜ i + 1 T C β i + 1 V i .
Then, we obtain the matrix-form LSQR iterative method for solving the system of matrix Equation (3) and the least-squares problem (57).
Theorem 43
(Convergence of Algorithm 3 in [80]). Algorithm 3 possesses the finite termination property, and the specific stopping criteria can be found in [80]. Let the initial iteration matrix be X 1 = A 1 T G B 1 T + B 1 G T A 1 + A 2 T H B 2 T + B 2 H T A 2 , where G R p × q and H R r × l are arbitrary matrices. In particular, if  X 1 = 0 R m × m , the solution X * obtained by Algorithm 3 is the unique minimum-norm symmetric solution of the system of matrix Equation (3).
Remark 26.
For an arbitrary given matrix X 0 R m × m and X S E , the optimization problem (44) can be written as
min X S E X X 0 2 = min X S E X 1 2 ( X 0 + X 0 T ) 2 + 1 2 ( X 0 X 0 T ) 2 .
The solution can be obtained by applying Algorithm 3 to the modified equations with right-hand sides as
C 1 1 2 A 1 ( X 0 + X 0 T ) B 1
and
C 2 1 2 A 2 ( X 0 + X 0 T ) B 2 ,
respectively. The solution can be expressed as X = X * + 1 2 ( X 0 + X 0 T ) .
Li et al. proposed an efficient algorithm for computing symmetric solutions of the system of matrix Equation (3) [81]. This algorithm outperforms previous methods in terms of speed and computational cost per iteration, as it involves only matrix-matrix multiplications at each step, making it well-suited for parallel implementation. The solution is formulated as the intersection of closed convex sets and is computed via the alternating projection method.
Algorithm 3 Symmetric solution for (3) over R [80].
  • Require: Matrices A 1 R p × m , B 1 R m × q , C 1 R p × q , A 2 R r × m , B 2 R m × l , C 2 R r × l and the initial matrix X 0 SR m × m .
  • Ensure: The solution matrix X.
  •     Step 1: Compute
    β 1 U 1 = E A X 0 B , β 1 U ˜ 1 = F C X 0 D , β 1 = 2 ( E A X 0 B , F C X 0 D ) , α 1 V 1 = A T U 1 B T + B U 1 T A + C T U ˜ 1 D T + D U ˜ 1 T C , α 1 = A T U 1 B T + B U 1 T A + C T U ˜ 1 D T + D U ˜ 1 T C , W 1 = V 1 , ϕ 1 = β 1 , ρ ˜ 1 = α 1 .
        Step 2: Repeat the process until the stopping criteria are met.
  •     Step 3: Compute
    U i + 1 = A V i B α i U i , U ˜ i + 1 = C V i D α i U ˜ i , β i + 1 = 2 ( A V i B α i U i , C V i D α i U ˜ i ) , α i + 1 V i + 1 = A T U i + 1 B T + B U i + 1 T A + C T U ˜ i + 1 D T + D U ˜ i + 1 T C β i + 1 V i , α i + 1 = A T U i + 1 B T + B U i + 1 T A + C T U ˜ i + 1 D T + D U ˜ i + 1 T C β i + 1 V i , ρ i = ( ρ ˜ i 2 + β i + 1 2 ) 1 / 2 , c i = ρ ˜ i / ρ i , s i = β i + 1 / ρ i , θ i + 1 = s i α i + 1 , ρ ˜ i + 1 = c i α i + 1 , ψ i = c i ϕ i , δ i + 1 = s i ϕ i , X i = X i 1 + ( ϕ i / ρ i ) W i , W i + 1 = V i + 1 ( θ i + 1 / ρ i ) W i .
  •     Step 4: Go to step 2.
Let M be a closed convex subset of a real Hilbert space H, and  u H . The projection of u onto M, denoted by P M ( u ) , is the point in M closest to u, which satisfies the following equation
P M ( u ) u = min x M x u .
The system (3) is solved in [81] using the alternating projection method. When the sets intersect, this method finds a point in their intersection. For solving (3), two sets are defined as
Ω 1 = { X R m × m A 1 X B 1 = C 1 }
and
Ω 2 = { X R m × m A 2 X B 2 = C 2 } .
If the system (3) is consistent, then Ω 1 Ω 2 SR m × m , and the intersection point X * Ω 1 Ω 2 SR m × m is the solution of (3). Thus, solving system (3) is equivalent to finding the intersection of Ω 1 , Ω 2 , and  SR m × m . For given matrix Z R m × m , we can obtain the projections P Ω 1 ( Z ) , P Ω 2 ( Z ) , and  P SR m × m ( Z ) of matrix Z on Ω 1 , Ω 2 , and  SR m × m as
P Ω 1 ( Z ) = Z + A 1 ( C 1 A 1 Z B 1 ) B 1 , P Ω 2 ( Z ) = Z + A 2 ( C 2 A 2 Z B 2 ) B 2 , P SR m × m ( Z ) = 1 2 ( Z + Z T ) ,
respectively. Based on these preparatory results, the proposed algorithm is presented below.
Remark 27.
Compared to Algorithm 2 in [79] and Algorithm 3 in [80], Algorithm 4 requires fewer computational resources per step when solving the symmetric solution of the system of matrix equations (3).
Algorithm 4 Symmetric solution for (3) over R [81].
  • Require: Matrices A 1 R p × m , B 1 R m × q , A 2 R r × m , B 2 R m × l , C 1 R p × q , C 2 R r × l , and the initial matrix X 1 R m × m .
  • Ensure: The solution X of the (3).
  •     Step 1: Set A 1 ˜ = A 1 , B 1 ˜ = B 1 , A 2 ˜ = A 2 , B 2 ˜ = B 2 .
  •     for  k = 1 , 2 , 3 ,  do
  •         Step 2: Compute
Y k = P Ω 1 ( X k ) = X k + A 1 ˜ ( C 1 A 1 X k B 1 ) B 1 ˜ , Z k = P Ω 2 ( Y k ) = Y k + A 2 ˜ ( C 2 A 2 Y k B 2 ) B 2 ˜ , X k + 1 = P SR m × m ( Z k ) = 1 2 ( Z k + Z k T ) .
  •     end for
  •     Step 3:  X = X k + 1 .
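A minimal NumPy sketch of Algorithm 4 (the function name is ours), with the pseudoinverses A ˜ i = A i † and B ˜ i = B i † computed once before the loop:

```python
import numpy as np

def alternating_projections(A1, B1, C1, A2, B2, C2, iters=1000):
    """Sketch of Algorithm 4: alternating projections onto Omega_1, Omega_2, SR^{m x m}."""
    m = A1.shape[1]
    A1p, B1p = np.linalg.pinv(A1), np.linalg.pinv(B1)
    A2p, B2p = np.linalg.pinv(A2), np.linalg.pinv(B2)
    X = np.zeros((m, m))
    for _ in range(iters):
        Y = X + A1p @ (C1 - A1 @ X @ B1) @ B1p   # projection onto Omega_1
        Z = Y + A2p @ (C2 - A2 @ Y @ B2) @ B2p   # projection onto Omega_2
        X = 0.5 * (Z + Z.T)                      # projection onto symmetric matrices
    return X
```

Each pass costs only a few matrix-matrix products, which is the source of the per-iteration savings noted in Remark 27.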
Theorem 44
(Convergence of Algorithm 4 in [81]). If the system of matrix Equation (3) is consistent, the matrix sequence { X k } generated by Algorithm 4 converges to the solution of (3).
Wu and Zeng proposed an alternating direction method of multipliers (ADMM) to compute the symmetric solution of the optimization problem (44) [82]. They introduced two equivalent constrained optimization problems for the matrix least-squares problem (44), which are formulated as
A 1 X Y = 0 , Y B 1 C 1 = 0 , A 2 X Z = 0 , Z B 2 C 2 = 0 ,
with X SR m × m , Y R p × m , Z R r × m , and 
X B 1 Y = 0 , A 1 Y C 1 = 0 , X B 2 Z = 0 , A 2 Z C 2 = 0 ,
with X SR m × m , Y R m × q , Z R m × l . Both constrained formulations are equivalent to the matrix least-squares problem (44).
Theorem 45
(Constrained optimization problem (50) in [82]). The problem (50) admits matrices X * , Y * , Z * as solutions if and only if there exist matrices M * R p × m , N * R p × q , S * R r × m and T * R r × l that satisfy the equations below.
( X * X ¯ A 1 T M * A 2 T S * ) + ( X * X ¯ A 1 T M * A 2 T S * ) T = 0 , M * N * B 1 T = 0 , S * T * B 2 T = 0 , A 1 X * Y * = 0 , Y * B 1 C 1 = 0 , A 2 X * Z * = 0 , Z * B 2 C 2 = 0 .
Theorem 46
(Constrained optimization problem (51) in [82]). Matrices X * , Y * , Z * are solutions of the constrained optimization problem (51) if and only if there exist matrices M * R m × q , N * R p × q , S * R m × l and T * R r × l such that the following equations hold.
( X * X ¯ M * B 1 T S * B 2 T ) + ( X * X ¯ M * B 1 T S * B 2 T ) T = 0 , M * A 1 T N * = 0 , S * A 2 T T * = 0 , X * B 1 Y * = 0 , A 1 Y * C 1 = 0 , X * B 2 Z * = 0 , A 2 Z * C 2 = 0 .
The augmented Lagrangians corresponding to the constrained optimization problems (50) and (51) are given by
L α , β , γ , δ ( X , Y , Z , M , N , S , T ) = 1 2 X X ¯ 2 M , A 1 X Y N , Y B 1 C 1 S , A 2 X Z T , Z B 2 C 2 + α 2 A 1 X Y 2 + β 2 Y B 1 C 1 2 + γ 2 A 2 X Z 2 + δ 2 Z B 2 C 2 2 ,
with X SR m × m , Y R p × m , Z R r × m , M R p × m , N R p × q , S R r × m , T R r × l , and 
L ¯ α , β , γ , δ ( X , Y , Z , M , N , S , T ) = 1 2 X X ¯ 2 M , X B 1 Y N , A 1 Y C 1 S , X B 2 Z T , A 2 Z C 2 + α 2 X B 1 Y 2 + β 2 A 1 Y C 1 2 + γ 2 X B 2 Z 2 + δ 2 A 2 Z C 2 2 ,
with X SR m × m , Y R m × q , Z R m × l , M R m × q , N R p × q , S R m × l , and T R r × l , where α , β , γ , δ > 0 are penalty parameters.
Based on the ADMM approach, the variables X , Y and Z are minimized at each iteration step, after which the Lagrange multipliers M , N , S , T are updated according to the steepest ascent principle [89]. The following two iterative algorithms correspond to the Lagrange functions (52) and (53).
Remark 28.
For any initial matrices Y 0 , Z 0 , M 0 , N 0 , S 0 , and T 0 and any parameters α , β , γ , δ > 0 , the sequences { X k } generated by Algorithms 5 and 6 converge to the unique solution of the matrix least-squares problem (44).
Further work on symmetric solutions of the system (3) can be found in [83,84], where bisymmetric solutions are discussed.
Algorithm 5 Least-squares symmetric solution for (3) over R [82].
  • Require: Matrices A 1 R p × m , B 1 R m × q , C 1 R p × q , A 2 R r × m , B 2 R m × l , C 2 R r × l , and  X ¯ R m × m .
  • Ensure: The solution matrix X.
  •     Step 1: Choose the initial matrices Y 0 , Z 0 , M 0 , N 0 , S 0 , T 0 and the parameters α , β , γ , δ > 0 .
         Set k = 0 .
  •     Step 2: Exit if a stopping criterion has been met.
  •     Step 3: Compute
    X k + 1 = arg min X SR m × m L α , β , γ , δ ( X , Y k , Z k , M k , N k , S k , T k ) , Y k + 1 = arg min Y R p × m L α , β , γ , δ ( X k + 1 , Y , Z k , M k , N k , S k , T k ) , Z k + 1 = arg min Z R r × m L α , β , γ , δ ( X k + 1 , Y k + 1 , Z , M k , N k , S k , T k ) , M k + 1 = M k α ( A 1 X k + 1 Y k + 1 ) , N k + 1 = N k β ( Y k + 1 B 1 C 1 ) , S k + 1 = S k γ ( A 2 X k + 1 Z k + 1 ) , T k + 1 = T k δ ( Z k + 1 B 2 C 2 ) .
•     Step 4: Set k = k + 1 and go to Step 2.
Algorithm 6 Least-squares symmetric solution for (3) over R [82].
  • Require: Matrices A 1 R p × m , B 1 R m × q , C 1 R p × q , A 2 R r × m , B 2 R m × l , C 2 R r × l , and  X ¯ R m × m .
  • Ensure: The solution matrix X.
  •     Step 1: Choose the initial matrices Y 0 , Z 0 , M 0 , N 0 , S 0 , T 0 and the parameters α , β , γ , δ > 0 .
         Set k = 0 .
  •     Step 2: Exit if a stopping criterion has been met.
  •     Step 3: Compute
    X k + 1 = arg min X SR m × m L ¯ α , β , γ , δ ( X , Y k , Z k , M k , N k , S k , T k ) , Y k + 1 = arg min Y R m × q L ¯ α , β , γ , δ ( X k + 1 , Y , Z k , M k , N k , S k , T k ) , Z k + 1 = arg min Z R m × l L ¯ α , β , γ , δ ( X k + 1 , Y k + 1 , Z , M k , N k , S k , T k ) , M k + 1 = M k α ( X k + 1 B 1 Y k + 1 ) , N k + 1 = N k β ( A 1 Y k + 1 C 1 ) , S k + 1 = S k γ ( X k + 1 B 2 Z k + 1 ) , T k + 1 = T k δ ( A 2 Z k + 1 C 2 ) .
•     Step 4: Set k = k + 1 and go to Step 2.
Cai and Chen proposed an iterative algorithm to compute the least-squares bi-symmetric solutions of the system of matrix Equation (3) [83]. They defined a matrix function on BSR m × m as
F ( X ) = A 1 X B 1 A 2 X B 2 C 1 C 2 2 .
Before introducing the algorithm in [83], a theorem that serves as its foundation is presented.
Theorem 47
(Bi-symmetry solutions for (3) over R [83]). A matrix  X * BSR m × m  is a solution of the system (3) if and only if it satisfies the following matrix equation:    
A 1 T A 1 X B 1 B 1 T + B 1 B 1 T X A 1 T A 1 + A 2 T A 2 X B 2 B 2 T + B 2 B 2 T X A 2 T A 2 +   J n ( A 1 T A 1 X B 1 B 1 T + B 1 B 1 T X A 1 T A 1 ) J n + J n ( A 2 T A 2 X B 2 B 2 T + B 2 B 2 T X A 2 T A 2 ) J n = A 1 T C 1 B 1 T + B 1 C 1 T A 1 + A 2 T C 2 B 2 T + B 2 C 2 T A 2 +   J n ( A 1 T C 1 B 1 T + B 1 C 1 T A 1 ) J n + J n ( A 2 T C 2 B 2 T + B 2 C 2 T A 2 ) J n .
For convenience, the components of Equation (55) are denoted as follows:
G ( X ) = A 1 T A 1 X B 1 B 1 T + B 1 B 1 T X A 1 T A 1 + A 2 T A 2 X B 2 B 2 T + B 2 B 2 T X A 2 T A 2 +   J n ( A 1 T A 1 X B 1 B 1 T + B 1 B 1 T X A 1 T A 1 ) J n + J n ( A 2 T A 2 X B 2 B 2 T + B 2 B 2 T X A 2 T A 2 ) J n , H = A 1 T C 1 B 1 T + B 1 C 1 T A 1 + A 2 T C 2 B 2 T + B 2 C 2 T A 2 + J n ( A 1 T C 1 B 1 T + B 1 C 1 T A 1 ) J n +   J n ( A 2 T C 2 B 2 T + B 2 C 2 T A 2 ) J n .
The iterative algorithm for computing the least-squares bi-symmetric solutions of the system of matrix Equation (3) is given.
Remark 29.
The sequence { R k } in Algorithm 7 is orthogonal in the finite dimensional matrix space BSR m × m . Therefore, there exists a positive integer t such that R t = 0 , and the solution of (3) can be obtained in a finite number of iterations. In Algorithm 7, the initial iteration matrix is given by
X 1 = A 1 T H 1 B 1 T + ( A 1 T H 1 B 1 T ) T + J n A 1 T H 1 B 1 T + ( A 1 T H 1 B 1 T ) T J n +   A 2 T H 2 B 2 T + ( A 2 T H 2 B 2 T ) T + J n A 2 T H 2 B 2 T + ( A 2 T H 2 B 2 T ) T J n ,
where H 1 R p × q and H 2 R r × l are arbitrary matrices. In particular, if  X 1 = 0 , Algorithm 7 will compute the bisymmetric minimum-norm solution of (3).
Algorithm 7 Least-squares bi-symmetric solution for (3) over R [83].
  • Require: Matrices A 1 R p × m , B 1 R m × q , C 1 R p × q , A 2 R r × m , B 2 R m × l , C 2 R r × l and X 1 BSR m × m .
  • Ensure: The solution matrix X.
  •     Step 1: Calculate
    R 0 = H G ( X 0 ) , P 0 = R 0 .
         Set k = 1.
  •     Step 2: If R k = 0 then stop; else, k = k + 1 .
  •     Step 3: Calculate
    α k = R k 2 / R k , G ( P k ) , X k + 1 = X k + α k P k , R k + 1 = R k α k G ( P k ) , β k = R k + 1 , G ( P k ) / P k , G ( P k ) , P k + 1 = R k + 1 β k P k .
  •     Step 4: Go to step 2.
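Algorithm 7 admits a direct transcription once the operator G ( X ) and the right-hand side H of (55) are implemented. The sketch below (function names ours) uses the trace inner product ⟨ U , V ⟩ = tr ( U T V ) and starts from X 1 = 0 , which by Remark 29 yields the bisymmetric minimum-norm solution:

```python
import numpy as np

def alg7_bisymmetric_ls(A1, B1, C1, A2, B2, C2, tol=1e-12, maxit=2000):
    """Sketch of Algorithm 7: least-squares bisymmetric solution via a CG-type iteration."""
    m = A1.shape[1]
    J = np.fliplr(np.eye(m))                    # exchange matrix J_n
    inner = lambda U, V: np.sum(U * V)          # trace inner product <U, V>

    def G(X):                                   # left-hand-side operator of (55)
        W = (A1.T @ A1 @ X @ B1 @ B1.T + B1 @ B1.T @ X @ A1.T @ A1
             + A2.T @ A2 @ X @ B2 @ B2.T + B2 @ B2.T @ X @ A2.T @ A2)
        return W + J @ W @ J

    H = (A1.T @ C1 @ B1.T + B1 @ C1.T @ A1
         + A2.T @ C2 @ B2.T + B2 @ C2.T @ A2)
    H = H + J @ H @ J                           # right-hand side of (55)
    X = np.zeros((m, m))                        # X_1 = 0 gives the minimum-norm solution
    R = H - G(X)
    P = R.copy()
    for _ in range(maxit):
        if np.linalg.norm(R) <= tol:
            break
        GP = G(P)
        alpha = np.linalg.norm(R) ** 2 / inner(R, GP)
        X = X + alpha * P
        R = R - alpha * GP
        beta = inner(R, GP) / inner(P, GP)
        P = R - beta * P
    return X
```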
Remark 30.
The optimal approximation solution to (44) can be derived. Let X S E and X 0 R m × m . We have    
X X 0 2 = X 1 2 ( X 0 + X 0 T ) 2 + 1 2 ( X 0 X 0 T ) 2 = X 1 2 ( 1 2 ( X 0 + X 0 T ) + 1 2 J n ( X 0 + X 0 T ) J n ) 2 +   1 2 ( 1 2 ( X 0 + X 0 T ) 1 2 J n ( X 0 + X 0 T ) J n ) 2 + 1 2 ( X 0 X 0 T ) 2 = X 1 4 ( X 0 + X 0 T + J n ( X 0 + X 0 T ) J n ) 2 +   1 2 ( 1 2 ( X 0 + X 0 T ) 1 2 J n ( X 0 + X 0 T ) J n ) 2 + 1 2 ( X 0 X 0 T ) 2 .
Hence, the problem (44) is equivalent to
min X S E X 1 4 ( X 0 + X 0 T + J n ( X 0 + X 0 T ) J n ) .
Denote X ˜ = X 1 4 ( X 0 + X 0 T + J n ( X 0 + X 0 T ) J n ) , C ˜ 1 = C 1 1 4 A 1 ( X 0 + X 0 T + J n ( X 0 + X 0 T ) J n ) B 1 , and C ˜ 2 = C 2 1 4 A 2 ( X 0 + X 0 T + J n ( X 0 + X 0 T ) J n ) B 2 . Substituting these into Equation (55) yields
A 1 T A 1 X ˜ B 1 B 1 T + B 1 B 1 T X ˜ A 1 T A 1 + A 2 T A 2 X ˜ B 2 B 2 T + B 2 B 2 T X ˜ A 2 T A 2 +   J n ( A 1 T A 1 X ˜ B 1 B 1 T + B 1 B 1 T X ˜ A 1 T A 1 ) J n + J n ( A 2 T A 2 X ˜ B 2 B 2 T + B 2 B 2 T X ˜ A 2 T A 2 ) J n = A 1 T C ˜ 1 B 1 T + B 1 C ˜ 1 T A 1 + A 2 T C ˜ 2 B 2 T + B 2 C ˜ 2 T A 2 +   J n ( A 1 T C ˜ 1 B 1 T + B 1 C ˜ 1 T A 1 ) J n + J n ( A 2 T C ˜ 2 B 2 T + B 2 C ˜ 2 T A 2 ) J n .
This transforms the problem into solving the least-squares problem
min X ˜ BSR m × m A 1 X ˜ B 1 A 2 X ˜ B 2 C ˜ 1 C ˜ 2 .
The least-squares bisymmetric solution X ˜ to (56) can be computed using Algorithm 7, and the optimal approximation solution is given by
X ^ = X ˜ + 1 4 ( X 0 + X 0 T + J n ( X 0 + X 0 T ) J n ) .
Liu et al. [84] introduced a novel iterative method for computing the bi-symmetric minimum-norm solution of the system of matrix Equation (3). This method demonstrates improved speed and stability compared to Algorithm 7 of Cai and Chen [83].
For a given matrix X BSR m × m , the bi-symmetry condition holds if and only if
X = X T = J n X J n .
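Equivalently, the nearest bisymmetric matrix to an arbitrary X in the Frobenius norm is 1 4 ( X + X T + J n ( X + X T ) J n ) , the projection used in Remark 30. A small sketch (helper names ours):

```python
import numpy as np

def project_bisymmetric(X):
    """Orthogonal projection onto BSR^{m x m}: (X + X^T + J(X + X^T)J) / 4."""
    J = np.fliplr(np.eye(X.shape[0]))            # exchange matrix J_n
    S = 0.5 * (X + X.T)                          # symmetric part
    return 0.5 * (S + J @ S @ J)                 # average with its persymmetric image

def is_bisymmetric(X, tol=1e-12):
    """Check the characterization X = X^T = J X J."""
    J = np.fliplr(np.eye(X.shape[0]))
    return (np.linalg.norm(X - X.T) <= tol
            and np.linalg.norm(X - J @ X @ J) <= tol)
```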
The algorithm is described next.
Remark 31.
The stopping criteria for Algorithm 8 can be defined as
C 1 A 1 X i B 1 + C 2 A 2 X i B 2 ϵ , | ξ i | ϵ or X i X i 1 ϵ ,
where ϵ is a small tolerance.
Algorithm 8 Bi-symmetric minimum-norm solution for (3) over R [84].
  • Require: Matrices A 1 R p × m , B 1 R m × q , C 1 R p × q , A 2 R r × m , B 2 R m × l , C 2 R r × l , and the initial matrix X 1 R m × m .
  • Ensure: The solution matrix X.
  •     Step 1: Compute
    τ 0 = 1 , ξ 0 = 1 , θ 0 = 0 , Z 0 = 0 ( R p × n ) , W 0 = Z 0 , β 1 = 2 C 1 2 + C 2 2 , U 1 j = C j / β 1 , j = 1 , 2 , T 1 = A 1 T U 11 B 1 T + A 2 T U 12 B 2 T , V ¯ 1 = T 1 + T 1 T + J n ( T 1 + T 1 T ) J n , α 1 = V ¯ 1 , V 1 = V ¯ 1 / α 1 .
        Step 2: Repeat until the stopping criteria have been met.
  •     Step 3: For i = 1 , 2 , , compute
    ξ i = ξ i 1 β i / α i , Z i = Z i 1 + ξ i V i , θ i = ( τ i 1 β i θ i 1 ) / α i ; W i = W i 1 + θ i V i , U ¯ i + 1 , j = A j V i B j α i U i j , j = 1 , 2 , β i + 1 = 2 U ¯ i + 1 , 1 2 + U ¯ i + 1 , 2 2 , U i + 1 , j = U ¯ i + 1 , j / β i + 1 , j = 1 , 2 , τ i = τ i 1 α i / β i + 1 , T i + 1 = A 1 T U i + 1 , 1 B 1 T + A 2 T U i + 1 , 2 B 2 T , V ¯ i + 1 = T i + 1 + T i + 1 T + J n ( T i + 1 + T i + 1 T ) J n β i + 1 V i , α i + 1 = V ¯ i + 1 ; V i + 1 = V ¯ i + 1 / α i + 1 , γ i = β i + 1 ξ i / ( β i + 1 θ i τ i ) , X i = Z i γ i W i .
        Step 4: Go to step 2.
Theorem 48
(Convergence of Algorithm 8 in [84]). The solution generated by Algorithm 8 is the bi-symmetric minimum-norm solution of (3). In the absence of round-off errors, the algorithm is guaranteed to terminate in at most p q + r l iterations.
Several algorithms for computing reflexive solutions of the system (3) have been presented in [85,86,87,88]. The earlier algorithms offer specific forms of reflexive solutions, while the later ones yield more general solutions.
Peng et al. [85] proposed an efficient algorithm for computing the least-squares reflexive solution of the system of matrix Equation (3). In their work, the problem is transformed into the optimization problem
min X R r m × m ( P ) A 1 X B 1 A 2 X B 2 C 1 C 2 .
Before presenting the algorithm, the concept of the gradient matrix is introduced. Let f ( X ) : R r m × m ( P ) R be a continuous and differentiable function. The gradient of f ( X ) on R r m × m ( P ) is denoted as f ( X ) = f ( X ) x i j .
Theorem 49
(Reflexive solution for (3) over R [85]). A matrix X * R r m × m ( P ) is a solution of the system of matrix Equation (3) if and only if
F ( X * ) = A 1 T A 1 X * B 1 B 1 T + P A 1 T A 1 X * B 1 B 1 T P A 1 T C 1 B 1 T P A 1 T C 1 B 1 T P + A 2 T A 2 X * B 2 B 2 T +   P A 2 T A 2 X * B 2 B 2 T P A 2 T C 2 B 2 T P A 2 T C 2 B 2 T P = 0 .
For clarity, we define the following notations:
M ( E ) = A 1 T A 1 E B 1 B 1 T + P A 1 T A 1 E B 1 B 1 T P + A 2 T A 2 E B 2 B 2 T + P A 2 T A 2 E B 2 B 2 T P , G = A 1 T C 1 B 1 T + P A 1 T C 1 B 1 T P + A 2 T C 2 B 2 T + P A 2 T C 2 B 2 T P , S ( X ) = F ( X ) = G M ( X ) , S k = S ( X k ) .
Then the corresponding iterative algorithm in [85] is presented.
Theorem 50
(Convergence of Algorithm 9 in [85]). For an arbitrary initial matrix X 1 R r m × m ( P ) , Algorithm 9 generates a solution to (57) in a finite number of iterations. In particular, if the initial matrix is chosen as X 1 = 0 , the unique minimum-norm solution of (57) can be obtained within a finite number of iterations using Algorithm 9.
Algorithm 9 Reflexive solution for (3) over R [85].
  • Require: Matrices A 1 R p × m , B 1 R m × q , C 1 R p × q , A 2 R r × m , B 2 R m × l , C 2 R r × l , and the initial matrix X 1 R r m × m ( P ) .
  • Ensure: The solution matrix X.
  •     Step 1: Calculate
    S 1 = G M ( X 1 ) , Q 1 = M ( S 1 ) .
         Set k = 1 .
  •     while  S k 0  do
  •          k = k + 1 .
  •         Calculate
    X k + 1 = X k + S k 2 / Q k , M ( S k ) Q k , S k + 1 = S k S k 2 / Q k , M ( S k ) M ( Q k ) , Q k + 1 = S k + 1 S k + 1 , M ( Q k ) / Q k , M ( Q k ) Q k .
  •     end while
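A minimal NumPy sketch of Algorithm 9 (function names ours), with M ( E ) and G as defined above, the trace inner product, and a given reflection matrix P ( P = P T , P 2 = I ):

```python
import numpy as np

def alg9_reflexive_ls(A1, B1, C1, A2, B2, C2, P, tol=1e-12, maxit=2000):
    """Sketch of Algorithm 9: least-squares reflexive solution with respect to P."""
    m = A1.shape[1]
    inner = lambda U, V: np.sum(U * V)          # trace inner product <U, V>

    def M(E):
        W1 = A1.T @ A1 @ E @ B1 @ B1.T
        W2 = A2.T @ A2 @ E @ B2 @ B2.T
        return W1 + P @ W1 @ P + W2 + P @ W2 @ P

    G1 = A1.T @ C1 @ B1.T
    G2 = A2.T @ C2 @ B2.T
    G = G1 + P @ G1 @ P + G2 + P @ G2 @ P
    X = np.zeros((m, m))                        # X_1 = 0 yields the minimum-norm solution
    S = G - M(X)
    Q = M(S)
    for _ in range(maxit):
        if np.linalg.norm(S) <= tol:
            break
        MS, MQ = M(S), M(Q)
        alpha = np.linalg.norm(S) ** 2 / inner(Q, MS)
        X = X + alpha * Q
        S = S - alpha * MQ
        beta = inner(S, MQ) / inner(Q, MQ)
        Q = S - beta * Q
    return X
```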
Remark 32.
When (3) is consistent, then
min X R r m × m ( P ) A 1 X B 1 A 2 X B 2 C 1 C 2 min X R r m × m ( P ) A 1 ( X X 0 ) B 1 A 2 ( X X 0 ) B 2 C 1 A 1 X 0 B 1 C 2 A 2 X 0 B 2 .
Let X ˜ = X X 0 , C ˜ 1 = C 1 A 1 X 0 B 1 , and C ˜ 2 = C 2 A 2 X 0 B 2 . Then the optimal approximation solution X of (57) is equivalent to the minimum-norm reflexive solution X ˜ of the following minimum residual problem
min X ˜ R r m × m ( P ) A 1 X ˜ B 1 A 2 X ˜ B 2 C ˜ 1 C ˜ 2 .
Dehghan and Hajarian proposed an iterative algorithm for computing the reflexive solution of the system of matrix Equation (3) [86]. Their method refines and extends the algorithm originally introduced in [85]. Given a matrix P RGR m × m , the following is the iterative algorithm for solving the reflexive solution of the matrix equation system (3).
Theorem 51
(Convergence of Algorithm 10 in [86]). For any given initial matrix X 1 R r m × m ( P ) , a solution to the system of matrix Equation (3) can be obtained in a finite number of iterations in the absence of round-off errors. When (3) is consistent and the initial iteration matrix is chosen as
X 1 = A 1 T G B 1 T + A 2 T G ^ B 2 T + P A 1 T G B 1 T P + P A 2 T G ^ B 2 T P ,
where G R p × q and G ^ R r × l are arbitrary matrices. In particular, if  X 1 = 0 , then the solution obtained by Algorithm 10 is the minimum-norm reflexive solution of the system (3).
Algorithm 10 Reflexive solution for (3) over R [86].
  • Require: Matrices A 1 R p × m , B 1 R m × q , A 2 R r × m , B 2 R m × l , C 1 R p × q , C 2 R r × l , P RGR m × m , and  X 1 R r m × m ( P ) .
  • Ensure: The solution matrix X.
  •     Step 1: Calculate
    R 1 = C 1 A 1 X 1 B 1 0 0 C 2 A 2 X 1 B 2 , P 1 = 1 2 [ A 1 T ( C 1 A 1 X 1 B 1 ) B 1 T + A 2 T ( C 2 A 2 X 1 B 2 ) B 2 T +   P A 1 T ( C 1 A 1 X 1 B 1 ) B 1 T P + P A 2 T ( C 2 A 2 X 1 B 2 ) B 2 T P ] .
         Set k = 1.
  •     Step 2: If R k = 0 , then stop; else, k = k + 1 .
  •     Step 3: Calculate
    X k = X k 1 + ( R k 1 2 / P k 1 2 ) P k 1 , R k = C 1 A 1 X k B 1 0 0 C 2 A 2 X k B 2 = R k 1 ( R k 1 2 / P k 1 2 ) A 1 P k 1 B 1 0 0 A 2 P k 1 B 2 , P k = 1 2 [ A 1 T ( C 1 A 1 X k B 1 ) B 1 T + A 2 T ( C 2 A 2 X k B 2 ) B 2 T + P A 1 T ( C 1 A 1 X k B 1 ) B 1 T P +   P A 2 T ( C 2 A 2 X k B 2 ) B 2 T P ] + ( R k 2 / R k 1 2 ) P k 1 .
  •     Step 4: Go to step 2.
Remark 33.
The optimal approximation solution X ^ to (44) for a given matrix X 0 R r m × m ( P ) can be derived from the minimum-norm reflexive solution of the following system (46). Let X ˜ = X X 0 , C ˜ 1 = C 1 A 1 X 0 B 1 , and  C ˜ 2 = C 2 A 2 X 0 B 2 , where X S E . Then, using Algorithm 10 and the initial matrix X ˜ 1 from (58), the minimum-norm solution X ˜ can be obtained. In this case, the unique solution X ^ to (44) can be computed and is given by X ^ = X ˜ + X 0 .
Chen et al. proposed an iterative algorithm for computing the generalized reflexive solution of the system of matrix Equation (3). The iterative algorithm for obtaining the generalized reflexive solution is presented first, followed by an explanation of its convergence [87].
Theorem 52
(Convergence of Algorithm 11 in [87]). When the system (3) is consistent, a solution can be obtained within a finite number of iterations for any initial matrix X 1 R r m × n ( P , Q ) , assuming there are no round-off errors. If the system (3) is consistent and the initial matrix is chosen as
X 1 = A 1 T H B 1 T + A 2 T H ^ B 2 T + P A 1 T H B 1 T Q + P A 2 T H ^ B 2 T Q ,
where H R p × q and H ^ R r × l are arbitrary matrices, or in particular X 1 = 0 R r m × n ( P , Q ) , the unique minimum-norm generalized reflexive solution to (3) can be obtained within a finite number of iterations using Algorithm 11.
Algorithm 11 Generalized reflexive solution for (3) over R [87].
  • Require: Matrices A 1 R p × m , B 1 R n × q , A 2 R r × m , B 2 R n × l , C 1 R p × q , C 2 R r × l , P RGR m × m , Q RGR n × n and X 1 R r m × n ( P , Q ) .
  • Ensure: The solution matrix X.
  •     Step 1: Compute
    R 1 = C 1 A 1 X 1 B 1 0 0 C 2 A 2 X 1 B 2 , P 1 = 1 2 ( A 1 T ( C 1 A 1 X 1 B 1 ) B 1 T + A 2 T ( C 2 A 2 X 1 B 2 ) B 2 T +   P A 1 T ( C 1 A 1 X 1 B 1 ) B 1 T Q + P A 2 T ( C 2 A 2 X 1 B 2 ) B 2 T Q ) .
         Set k = 1.
  •     Step 2: If R 1 = 0 , then stop. Else go to Step 3.
  •     Step 3: Compute
    X k + 1 = X k + R k 2 / P k 2 P k , R k + 1 = C 1 A 1 X k + 1 B 1 0 0 C 2 A 2 X k + 1 B 2 = R k R k 2 / P k 2 A 1 P k B 1 0 0 A 2 P k B 2 , P k + 1 = 1 2 ( A 1 T ( C 1 A 1 X k + 1 B 1 ) B 1 T + A 2 T ( C 2 A 2 X k + 1 B 2 ) B 2 T +   P A 1 T ( C 1 A 1 X k + 1 B 1 ) B 1 T Q + P A 2 T ( C 2 A 2 X k + 1 B 2 ) B 2 T Q ) +   R k + 1 2 / R k 2 P k .
  •     Step 4: If R k + 1 = 0 , then stop. Else, let k = k + 1 . Go to Step 3.
Remark 34.
When X 0 R r m × n ( P , Q ) is a solution to (3), the solvability and solution of (3) are equivalent to
A 1 X B 1 = C 1 , A 2 X B 2 = C 2 , A 1 P X Q B 1 = C 1 , A 2 P X Q B 2 = C 2 .
Yin and Huang proposed an iterative algorithm for computing the least-squares generalized reflexive solution of the system of matrix Equation (3). In their work, Algorithm 12 is employed to find a solution X R r m × n ( P , Q ) that satisfies (57) [88].
Let F ( X ) be defined as in (54). Since the set R r m × n ( P , Q ) is unbounded, open, and convex, F ( X ) is a continuous, differentiable, and convex function on R r m × n ( P , Q ) .
Theorem 53
(Generalized reflexive solutions for (3) over R [88]).  X * R r m × n ( P , Q ) is a solution of the system of matrix Equation (3), if and only if
F ( X * ) = A 1 T A 1 X * B 1 B 1 T A 1 T C 1 B 1 T + P A 1 T A 1 X * B 1 B 1 T Q P A 1 T C 1 B 1 T Q + A 2 T A 2 X * B 2 B 2 T A 2 T C 2 B 2 T + P A 2 T A 2 X * B 2 B 2 T Q P A 2 T C 2 B 2 T Q = 0 .
For simplicity, we introduce the following notations:
M ( X ) = A 1 T A 1 X B 1 B 1 T + A 2 T A 2 X B 2 B 2 T + P A 1 T A 1 X B 1 B 1 T Q + P A 2 T A 2 X B 2 B 2 T Q , N = A 1 T C 1 B 1 T + A 2 T C 2 B 2 T + P A 1 T C 1 B 1 T Q + P A 2 T C 2 B 2 T Q , G ( X ) = F ( X ) = N M ( X ) , P k = G ( X k ) .
Then, the result in [88] is given.
Algorithm 12 Least-squares generalized reflexive solution for (3) over R [88].
  • Require: Matrices A 1 R p × m , B 1 R n × q , A 2 R r × m , B 2 R n × l , C 1 R p × q , C 2 R r × l , P R m × m , Q R n × n , and  X 1 R r m × n ( P , Q ) .
  • Ensure: The solution matrix X.
  •     Step 1: Compute
    P 1 = N M ( X 1 ) , Q 1 = M ( P 1 ) .
         Set k = 1.
•     Step 2: If P 1 = 0 , then stop. Else go to Step 3.
•     Step 3: Compute
    X k + 1 = X k + P k 2 / Q k , M ( P k ) Q k , P k + 1 = P k P k 2 / Q k , M ( P k ) M ( Q k ) , Q k + 1 = P k + 1 P k + 1 , M ( Q k ) / Q k , M ( Q k ) Q k .
•     Step 4: If P k + 1 = 0 , then stop. Else, let k = k + 1 . Go to Step 3.
Theorem 54
(Convergence of Algorithm 12 in [88]). For any initial matrix X 1 R r m × n ( P , Q ) , Algorithm 12 generates a solution to (3) within a finite number of iterations, assuming no round-off errors occur. In particular, if the initial matrix is chosen as X 1 = 0 R r m × n ( P , Q ) , Algorithm 12 produces the unique minimum-norm generalized reflexive solution to (57) within a finite number of iterations.
Remark 35.
Note that for (57), we have
min X R r m × n ( P , Q ) A 1 X B 1 A 2 X B 2 C 1 C 2 min X R r m × n ( P , Q ) A 1 ( X X 0 ) B 1 A 2 ( X X 0 ) B 2 C 1 A 1 X 0 B 1 C 2 A 2 X 0 B 2 .
Let X ˜ = X X 0 , C 1 ˜ = C 1 A 1 X 0 B 1 , C 2 ˜ = C 2 A 2 X 0 B 2 . The problem (44) is then equivalent to finding the minimum-norm generalized reflexive solution of the corresponding minimum residual problem
min X ˜ R r m × n ( P , Q ) A 1 X ˜ B 1 A 2 X ˜ B 2 C 1 ˜ C 2 ˜ .
This section has introduced iterative methods for solving the system (3). Existing algorithms have grown steadily more refined, with increasingly comprehensive methods for obtaining various types of solutions. Nevertheless, there remains significant room for advancing numerical techniques for matrix equations. Most current algorithms are designed for the real field, and future research could extend these methods to more general settings. Additionally, owing to their high computational cost, the practical application of these algorithms to large-scale matrix equations remains limited.

8. Conclusions

This paper provides a comprehensive review of the solutions to the system (3). During the course of the review, several definitions of special matrices in various algebraic structures are introduced. The paper also discusses the methods for solving (3) over different algebraic structures, including general number fields, the real field, the complex field, quaternion algebra, principal ideal domains, regular rings, strongly *-reducible rings, and operators on Banach spaces, along with the relevant conclusions. Additionally, the solutions to (4), (11), (13)–(17), (41), and (42) are also considered. Among the methods discussed, generalized inverse techniques were the first to be proposed and remain the most widely used. The vec -operator method is particularly effective in preserving the Hermitian structure when computing Hermitian-type solutions. Matrix decomposition methods have demonstrated superior performance in numerical examples, while Cramer’s rule offers conceptual clarity. Various numerical algorithms have shown good performance in computing symmetric solutions over the real field. Due to the extensive literature on solving the system (3) from various perspectives, the authors may have overlooked some works while compiling the information. However, this does not affect the core ideas of this paper.
Future research on the system (3) and related systems may involve further exploration of numerical algorithms. Although there are currently few algorithms available for solving these systems, the widespread application of system (3) in practical problems, coupled with the high computational speed requirements of real-world applications, makes the development of faster iterative algorithms a promising research direction. These iterative algorithms can be derived either by directly solving the system of equations or by more efficient matrix decomposition methods, as discussed in [90]. Additionally, with the expanding use of quaternions, dual quaternions, split quaternions, and commutative quaternions, it is worthwhile to investigate solutions to (3) within these nonstandard algebraic structures. Notably, the dual component introduced in dual quaternions, which can represent more information, makes solving (3) more challenging but also more valuable for applications. Finally, while the vec -operator method preserves the special structure of matrices when solving Hermitian type solutions, it requires the use of the Kronecker product, which increases the computational complexity of matrix multiplication and leads to higher computational costs. However, research into multi-particle states in quantum computing may provide solutions to this issue, greatly improving computational efficiency. These advancements are expected to facilitate the further application of the system (3) in fields such as control theory, signal processing, and quantum mechanics as described in [90].

Author Contributions

Methodology, Q.-W.W., Z.-H.G. and Y.-F.L.; software, Z.-H.G. and Y.-F.L.; investigation, Q.-W.W. and Z.-H.G.; writing—original draft preparation, Z.-H.G. and Y.-F.L.; writing—review and editing, Q.-W.W., Z.-H.G. and Y.-F.L.; supervision, Q.-W.W.; project administration, Q.-W.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the National Natural Science Foundation of China (No. 12371023).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Santesso, P.; Valcher, M.E. On the zero pattern properties and asymptotic behavior of continuous-time positive system trajectories. Linear Algebra Appl. 2007, 425, 283–302. [Google Scholar] [CrossRef]
  2. Hsieh, C.; Skelton, R.E. All covariance controllers for linear discrete-time systems. IEEE Trans. Autom. Control 1990, 35, 908–915. [Google Scholar] [CrossRef]
  3. Paula, A.; Acioli, G.; Barros, P. Frequency-based multivariable control design with stability margin constraints: A linear matrix inequality approach. J. Process Control 2023, 132, 103115. [Google Scholar] [CrossRef]
  4. Lei, J.Z.; Wang, C.Y. On the reducibility of compartmental matrices. Comput. Biol. Med. 2008, 38, 881–885. [Google Scholar] [CrossRef] [PubMed]
  5. Took, C.C.; Mandic, D.P. Augmented second-order statistics of quaternion random signals. Signal Process. 2011, 91, 214–224. [Google Scholar] [CrossRef]
  6. Sanches, J.M.; Marques, J.S. Image denoising using the Lyapunov equation from non-uniform samples. In Image Analysis and Recognition; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2006; Volume 4141. [Google Scholar] [CrossRef]
  7. Kirkland, S.J.; Neumann, M.; Xu, J.H. Transition matrices for well-conditioned Markov chains. Linear Algebra Appl. 2007, 424, 118–131. [Google Scholar] [CrossRef]
  8. Yang, B. Application of matrix decomposition in machine learning. In Proceedings of the IEEE International Conference on Computer Science, Electronic Information Engineering and Intelligent Control Technology, Fuzhou, China, 24–26 September 2021; pp. 133–137. [Google Scholar] [CrossRef]
  9. Stoll, M. A literature survey of matrix methods for data science. GAMM-Mitteilungen 2020, 43, e202000013. [Google Scholar] [CrossRef]
  10. Tokala, S.; Enduri, M.K.; Lakshmi, T.J.; Sharma, H. Community-based matrix factorization (CBMF) approach for enhancing quality of recommendations. Entropy 2023, 25, 1360. [Google Scholar] [CrossRef]
  11. Elhami, M.; Dashti, I. A new approach to the solution of robot kinematics based on relative transformation matrices. Int. J. Robot. Autom. 2016, 5, 213–222. [Google Scholar] [CrossRef]
  12. Rao, C.R. Estimation of variance and covariance components in linear models. J. Am. Stat. Assoc. 1972, 67, 112–115. [Google Scholar] [CrossRef]
  13. Stagg, G.W.; El-Abaid, A.H. Computer Methods in Power System Analysis; McGraw-Hill: New York, NY, USA, 1968. [Google Scholar]
  14. Chen, H.C. Generalized reflexive matrices: Special properties and applications. SIAM J. Matrix Anal. Appl. 1998, 19, 140–153. [Google Scholar] [CrossRef]
  15. Chu, M.T.; Trendafilov, N.T. On a differential equation approach to the weighted orthogonal procrustes problem. Stat. Comput. 1998, 8, 125–133. [Google Scholar] [CrossRef]
  16. Chu, M.T.; Trendafilov, N.T. The orthogonally constrained regression revisited. J. Comput. Graph. Stat. 2001, 10, 746–771. [Google Scholar] [CrossRef]
  17. Simoncini, V. Computational methods for linear matrix equations. SIAM Rev. 2016, 58, 377–441. [Google Scholar] [CrossRef]
  18. Wang, Q.-W.; Xie, L.-M.; Gao, Z.-H. A survey on solving the matrix equation AXB=C with applications. Mathematics 2025, 13, 450. [Google Scholar] [CrossRef]
  19. Wang, Q.-W.; Gao, Z.-H.; Gao, J.-L. A comprehensive review on solving the system of equations AX = C and XB = D. Symmetry 2025, 17, 625. [Google Scholar] [CrossRef]
  20. Cai, Z.Q.; Wang, G.S. Applications of generalized Sylvester matrix equations in the design of eigenstructure assignment. In Proceedings of the 2008 7th World Congress on Intelligent Control and Automation, Chongqing, China, 25–27 June 2008; Volume 9, pp. 7317–7320. [Google Scholar] [CrossRef]
  21. Liu, W.; Zhang, D.; Chen, L. Low-rank updates and a divide-and-conquer method for linear matrix equations. Eng. Appl. Artif. Intell. 2025, 56, 169–180. [Google Scholar]
  22. Chountasis, S.; Katsikis, V.N.; Pappas, D. Applications of the Moore-Penrose inverse in digital image restoration. Math. Probl. Eng. 2009, 2009, 170724. [Google Scholar] [CrossRef]
  23. Wanta, D.; Smolik, A.; Smolik, W.T.; Midura, M.; Wróblewski, P. Image reconstruction using machine-learned pseudoinverse in electrical capacitance tomography. Eng. Appl. Artif. Intell. 2025, 142, 109888. [Google Scholar] [CrossRef]
  24. Dehghan, M.; Hajarian, M. An efficient algorithm for solving general coupled matrix equations and its application. Math. Comput. Model. 2010, 51, 1118–1134. [Google Scholar] [CrossRef]
  25. Hamilton, W.R. On quaternions, or on a new system of imaginaries in algebra. Philos. Mag. 1844, 25, 10–13. [Google Scholar]
  26. Cockle, J. On systems of algebra involving more than one imaginary and on equations of the fifth degree. Philos. Mag. 1849, 35, 434–437. [Google Scholar] [CrossRef]
  27. Segre, C. The real representations of complex elements and extension to bicomplex systems. Math. Ann. 1892, 40, 413–467. [Google Scholar] [CrossRef]
  28. Penrose, R. A generalized inverse for matrices. Math. Proc. Camb. Philos. Soc. 1955, 51, 406–413. [Google Scholar] [CrossRef]
  29. Rao, C.R.; Mitra, S.K. Generalized inverse of a matrix and its applications. Berkeley Symp. on Math. Statist. Prob. 1972, 6, 601–620. [Google Scholar]
  30. Mitra, S.K. Common solutions to a pair of linear matrix equations A1XB1 = C1 and A2XB2 = C2. Proc. Camb. Philos. Soc. 1973, 74, 213–216. [Google Scholar] [CrossRef]
  31. van der Woude, J.W. Feedback Decoupling and Stabilization for Linear Systems with Multiple Exogenous Variables. Ph.D. Thesis, Technische Universiteit Eindhoven, Eindhoven, The Netherlands, 1987. [Google Scholar]
  32. van der Woude, J.W. Almost non-interacting control by measurement feedback. Syst. Control Lett. 1987, 9, 7–16. [Google Scholar] [CrossRef]
  33. Mitra, S.K. A pair of simultaneous linear matrix equations A1XB1 = C1, A2XB2 = C2 and a matrix programming problem. Linear Algebra Appl. 1990, 131, 107–123. [Google Scholar] [CrossRef]
  34. van der Woude, J.W. On the existence of a common solution X to the matrix equations AiXBj = Cij, (i,j) ∈ Γ. Linear Algebra Appl. 2003, 375, 135–145. [Google Scholar] [CrossRef]
  35. John, J.J.; Chiewchar, N. Estimation of variance and covariance components in linear models containing multiparameter matrices. Math. Comput. Model. 1988, 11, 1097–1100. [Google Scholar] [CrossRef]
  36. Liu, X.; Yang, H. An expression of the general common least-squares solution to a pair of matrix equations with applications. Comput. Math. Appl. 2011, 61, 3071–3078. [Google Scholar] [CrossRef]
  37. He, Z.-H.; Wang, Q.-W. The general solutions to some systems of matrix equations. Linear Multilinear Algebra 2014, 62, 1265–1280. [Google Scholar] [CrossRef]
  38. Cvetković-Ilić, D.S.; Radenković, N.; Wang, Q.-W. Algebraic conditions for the solvability to some systems of matrix equations. Linear Multilinear Algebra 2021, 69, 1579–1609. [Google Scholar] [CrossRef]
  39. Zhang, H.T.; Zhang, H.R.; Liu, L.N.; Yuan, Y.X. A simple method for solving matrix equations AXB = D and GXH = C. AIMS Math. 2020, 6, 2579–2589. [Google Scholar] [CrossRef]
  40. Özgüler, A.B.; Akar, N. A common solution to a pair of linear matrix equations over a principal ideal domain. Linear Algebra Appl. 1991, 144, 85–99. [Google Scholar] [CrossRef]
  41. Wang, Q.-W. A system of matrix equations and a linear matrix equation over arbitrary regular rings with identity. Linear Algebra Appl. 2004, 384, 43–54. [Google Scholar] [CrossRef]
  42. Wang, Q.-W. The general solution to a system of real quaternion matrix equations. Comput. Math. Appl. 2005, 49, 665–675. [Google Scholar] [CrossRef]
  43. Wang, Q.-W. Bisymmetric and centro-symmetric solutions to systems of real quaternion matrix equations. Comput. Math. Appl. 2005, 49, 641–650. [Google Scholar] [CrossRef]
  44. Wang, Q.-W.; Chang, H.-X.; Lin, C.-Y. P-(skew)symmetric common solutions to a pair of quaternion matrix equations. Appl. Math. Comput. 2008, 195, 721–732. [Google Scholar] [CrossRef]
  45. Dajić, A. Common solutions of linear equations in a ring, with applications. Electron. J. Linear Algebra 2015, 30, 66–79. [Google Scholar] [CrossRef]
  46. Xie, L.-M.; Wang, Q.-W. Some novel results on a classical system of matrix equations over the dual quaternion algebra. Filomat 2025, 39, 1477–1490. [Google Scholar] [CrossRef]
  47. Jin, L.-X.; Wang, Q.-W.; Xie, L.-M. Two systems of dual quaternion matrix equations with applications. Comput. Appl. Math. 2025, 44, 378. [Google Scholar] [CrossRef]
  48. Magnus, J.R. L-structured matrices and linear matrix equations. Linear Multilinear Algebra 1983, 14, 67–88. [Google Scholar] [CrossRef]
  49. Navarra, A.; Odell, P.L.; Young, D.M. A representation of the general common solution to the matrix equations A1XB1 = C1 and A2XB2 = C2 with applications. Comput. Math. Appl. 2001, 41, 929–935. [Google Scholar] [CrossRef]
  50. Wang, P.; Yuan, S.; Xie, X. Least-squares Hermitian problem of the complex matrix equation (AXB, CXD) = (E, F). J. Inequal. Appl. 2016, 2016, 296. [Google Scholar] [CrossRef]
  51. Liang, Y.; Yuan, S.; Tian, Y.; Li, M. Least squares Hermitian problem of matrix equation (AXB, CXD) = (E, F) associated with indeterminate admittance matrices. J. Appl. Math. Phys. 2018, 6, 1199–1214. [Google Scholar] [CrossRef]
  52. Zhang, F.; Wei, M.; Li, Y.; Zhao, J. An efficient method for special least squares solution of the complex matrix equation (AXB, CXD) = (E, F). Comput. Math. Appl. 2018, 76, 2001–2010. [Google Scholar] [CrossRef]
  53. Yuan, S.; Liao, A.; Lei, Y. Least squares Hermitian solution of the matrix equation (AXB, CXD) = (E, F) with the least norm over the skew field of quaternions. Math. Comput. Model. 2008, 48, 91–100. [Google Scholar] [CrossRef]
  54. Yuan, S.F.; Liao, A.; Wang, P. Least squares η-bi-Hermitian problems of the quaternion matrix equation (AXB, CXD) = (E, F). Linear Multilinear Algebra 2015, 63, 1849–1863. [Google Scholar] [CrossRef]
  55. Yuan, S.F.; Wang, Q.-W. L-structured quaternion matrices and quaternion linear matrix equations. Linear Multilinear Algebra 2016, 64, 321–339. [Google Scholar] [CrossRef]
  56. Şimşek, S.; Sarduvan, M.; Özdemir, H. Centrohermitian and skew-centrohermitian solutions to the minimum residual and matrix nearness problems of the quaternion matrix equation (AXB,DXE) = (C,F). Adv. Appl. Clifford Algebr. 2017, 27, 2201–2214. [Google Scholar] [CrossRef]
  57. Zhang, F.; Li, Y.; Zhao, J. A real representation method for special least squares solutions of the quaternion matrix equation (AXB,DXE) = (C,F). AIMS Math. 2022, 7, 14595–14613. [Google Scholar] [CrossRef]
  58. Wang, Y.; Huang, J.; Xiong, H.; Zhang, S. The Hankel matrix solution to a system of quaternion matrix equations. In Proceedings of the 2020 Chinese Control And Decision Conference, Hefei, China, 22–24 August 2020. [Google Scholar]
  59. Yuan, S.F.; Tian, Y.; Li, M.Z. On Hermitian solutions of the reduced biquaternion matrix equation (AXB, CXD) = (E,G). Linear Multilinear Algebra 2020, 68, 1355–1373. [Google Scholar] [CrossRef]
  60. Li, M.Z.; Yuan, S.F.; Jiang, H. Direct methods on η-Hermitian solutions of the split quaternion matrix equation (AXB, CXD) = (E, F). Math. Meth. Appl. Sci. 2021, 46, 15952–15971. [Google Scholar] [CrossRef]
  61. Chu, K.W. Singular value and generalized singular value decompositions and the solution of linear matrix equations. Linear Algebra Appl. 1987, 88, 83–98. [Google Scholar] [CrossRef]
  62. Stewart, G.W. Computing the CS decomposition of a partitioned orthonormal matrix. Numer. Math. 1982, 42, 297–306. [Google Scholar] [CrossRef]
  63. Paige, C.C.; Saunders, M.A. Towards a generalized singular value decomposition. SIAM J. Numer. Anal. 1981, 18, 398–409. [Google Scholar] [CrossRef]
  64. Golub, G.H.; Zha, H. Perturbation analysis of the canonical correlations of matrix pairs. Linear Algebra Appl. 1994, 210, 3–28. [Google Scholar] [CrossRef]
  65. Yuan, Y.X. The minimal norm solutions of two classes of matrix equations. J. Comput. Math. Coll. Univ. 2002, 2, 127–134. [Google Scholar]
  66. Liao, A.P. A class of generalized inverse eigenvalue problems and their applications. J. Hunan Univ. 1995, 2, 7–10. [Google Scholar]
  67. Yuan, Y.X. On the two classes of best approximation problems of matrices. Math. Numer. Sin. 2001, 4, 429–436. [Google Scholar]
  68. Yuan, Y.X. The optimal solutions to linear matrix equations using matrix decompositions. Math. Numer. Sin. 2002, 43, 147–161. [Google Scholar]
  69. Liao, A.P.; Lei, Y. Least-Squares solution with the minimum-norm for the matrix equation (AXB, GXH) = (C, D). Comput. Math. Appl. 2005, 50, 539–549. [Google Scholar] [CrossRef]
  70. Liao, A.P.; Yuan, S.F.; Shi, F. The matrix nearness problem for symmetric matrices associated with the matrix equation [ATXA, BTXB] = [C, D]. Linear Algebra Appl. 2006, 418, 939–954. [Google Scholar] [CrossRef]
  71. Shen, J.-R.; Li, Y.; Zhao, J. The minimal norm least squares solutions for a class of matrix equations. Scirea J. Math. 2022, 7, 132–145. [Google Scholar] [CrossRef]
  72. Wang, Q.-W. The decomposition of pairwise matrices and matrix equations over an arbitrary skew field. Acta Math. Sin. 1996, 39, 369–380. [Google Scholar]
  73. Wang, Q.-W.; van der Woude, J.W.; Yu, S.-W. An equivalence canonical form of a matrix triplet over an arbitrary division ring with applications. Sci. China Math. 2011, 54, 907–924. [Google Scholar] [CrossRef]
  74. Song, G.J.; Wang, Q.W.; Yu, S.W. Cramer’s rule for a system of quaternion matrix equations with applications. Appl. Math. Comput. 2018, 336, 490–499. [Google Scholar] [CrossRef]
  75. Kyrchei, I. Generalized Inverses: Algorithms and Applications; Nova Science Publishers, Inc.: Hauppauge, NY, USA, 2022. [Google Scholar]
  76. Kyrchei, I. Cramer’s rule for quaternionic systems of linear equations. J. Math. Sci. 2008, 155, 839–858. [Google Scholar] [CrossRef]
  77. Sheng, X.P.; Chen, G.L. A finite iterative method for solving a pair of linear matrix equations (AXB, CXD) = (E, F). Appl. Math. Comput. 2007, 189, 1350–1358. [Google Scholar] [CrossRef]
  78. Ding, J.; Liu, Y.J.; Ding, F. Iterative solutions to matrix equations of the form AiXBi = Fi. Comput. Math. Appl. 2010, 59, 3500–3507. [Google Scholar] [CrossRef]
  79. Peng, Y.X.; Hu, X.Y.; Zhang, L. An iterative method for symmetric solutions and optimal approximation solution of the system of matrix equations A1XB1 = C1, A2XB2 = C2. Appl. Math. Comput. 2006, 183, 1127–1137. [Google Scholar] [CrossRef]
  80. Chen, Y.B.; Peng, Z.Y.; Zhou, T.J. LSQR iterative common symmetric solutions to matrix equations AXB = E and CXD = F. Appl. Math. Comput. 2010, 217, 230–236. [Google Scholar] [CrossRef]
  81. Li, C.M.; Duan, X.F.; Li, J.; Yu, S.T. A new algorithm for the symmetric solution of the matrix equations AXB = E and CXD = F. Ann. Funct. Anal. 2018, 9, 8–16. [Google Scholar] [CrossRef]
  82. Wu, Y.N.; Zeng, M.L. On ADMM-based methods for solving the nearness symmetric solution of the system of matrix equations A1XB1 = C1 and A2XB2 = C2. J. Appl. Anal. Comput. 2021, 11, 227–241. [Google Scholar] [CrossRef]
  83. Cai, J.; Chen, G.L. An iterative algorithm for the least squares bisymmetric solutions of the matrix equations A1XB1 = C1,A2XB2 = C2. Math. Comput. Model. 2009, 50, 1237–1244. [Google Scholar] [CrossRef]
  84. Liu, A.J.; Chen, G.L.; Zhang, X.Y. A new method for the bisymmetric minimum norm solution of the consistent matrix equations A1XB1 = C1, A2XB2 = C2. J. Appl. Math. 2013, 125687. [Google Scholar] [CrossRef]
  85. Peng, Z.H.; Hu, X.Y.; Zhang, L. An efficient algorithm for the least-squares reflexive solution of the matrix equation A1XB1 = C1,A2XB2 = C2. Appl. Math. Comput. 2006, 181, 988–999. [Google Scholar] [CrossRef]
  86. Dehghan, M.; Hajarian, M. An iterative algorithm for solving a pair of matrix equations AYB=E,CYD=F over generalized centro-symmetric matrices. Comput. Math. Appl. 2008, 56, 3246–3260. [Google Scholar] [CrossRef]
  87. Chen, D.Q.; Yin, F.; Huang, G.X. An iterative algorithm for the generalized reflexive solution of the matrix equations AXB = E, CXD = F. J. Appl. Math. 2012, 492951. [Google Scholar] [CrossRef]
  88. Yin, F.; Huang, G.X. An iterative algorithm for the least squares generalized reflexive solutions of the matrix equations AXB = E,CXD = F. Abstr. Appl. Anal. 2012, 857284. [Google Scholar] [CrossRef]
  89. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122. [Google Scholar] [CrossRef]
  90. Noorizadegan, A.; Chen, C.-S.; Cavoretto, R.; De Rossi, A. Efficient truncated randomized SVD for mesh-free kernel methods. Comput. Math. Appl. 2024, 164, 12–20. [Google Scholar] [CrossRef]
Figure 1. Overview of the review framework.
Table 1. Definitions of some special matrices.

Symbols | Types of Matrices | Definitions
SR^{n×n} | symmetric real matrix | A = A^T
ASR^{n×n} | anti-symmetric real matrix | A = -A^T
BSR^{n×n} | bi-symmetric real matrix | A = A^T and A = JA^T J
ABSR^{n×n} | anti-bi-symmetric real matrix | A = -A^T and A = -JA^T J
RGR^{n×n} | real generalized reflection matrix | A = A^T = A^{-1}
R_r^{n×n}(P) | reflexive matrix | A = PAP, P ∈ RGR^{n×n}
AR_r^{n×n}(P) | anti-reflexive matrix | A = -PAP, P ∈ RGR^{n×n}
R_r^{m×n}(P, Q) | generalized reflexive matrix | A = PAQ, P ∈ RGR^{m×m} and Q ∈ RGR^{n×n}
HC^{n×n} | Hermitian complex matrix | A = A^*
HQ^{n×n} | Hermitian quaternion matrix | A = A^*
ηHQ^{n×n} | η-Hermitian quaternion matrix | A = -ηA^*η
ηAQ^{n×n} | η-anti-Hermitian quaternion matrix | A = ηA^*η
ηBQ^{n×n} | η-bi-Hermitian quaternion matrix | A = -ηA^*η and A = -ηJA^*Jη
ηABQ^{n×n} | η-anti-bi-Hermitian quaternion matrix | A = ηA^*η and A = ηJA^*Jη
JQ^{n×n} | j-self-conjugate quaternion matrix | A = -jAj
AJQ^{n×n} | anti-j-self-conjugate quaternion matrix | A = jAj
CSQ^{n×n} | centro-symmetric quaternion matrix | A = JAJ
SCSQ^{n×n} | anti-centro-symmetric quaternion matrix | A = -JAJ
P-CSQ^{n×n} | P-centro-symmetric quaternion matrix | A = PAP, P^2 = I, P ≠ I
P-SCSQ^{n×n} | P-anti-centro-symmetric quaternion matrix | A = -PAP, P^2 = I, P ≠ I
CHQ^{n×n} | centro-Hermitian quaternion matrix | A = JA^*J
SCHQ^{n×n} | anti-centro-Hermitian quaternion matrix | A = -JA^*J
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
