Article

The Right–Left WG Inverse Solutions to Quaternion Matrix Equations

Ivan Kyrchei 1, Dijana Mosić 2 and Predrag Stanimirović 2
1 Pidstryhach Institute for Applied Problems of Mechanics and Mathematics of NAS of Ukraine, 79060 Lviv, Ukraine
2 Faculty of Sciences and Mathematics, University of Niš, 18000 Niš, Serbia
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(1), 38; https://doi.org/10.3390/sym17010038
Submission received: 27 November 2024 / Revised: 20 December 2024 / Accepted: 23 December 2024 / Published: 28 December 2024
(This article belongs to the Special Issue Exploring Symmetry in Dual Quaternion Matrices and Matrix Equations)

Abstract

This paper studies new characterizations and expressions of the weak group (WG) inverse and its dual over the quaternion skew field. We introduce a dual to the weak group inverse for the first time in the literature and give some new characterizations for both the WG inverse and its dual, named the right and left weak group inverses for quaternion matrices. In particular, determinantal representations of the right and left WG inverses are given as direct methods for their constructions. Our other results are related to solving the two-sided constrained quaternion matrix equation AXB = C and the corresponding approximation problem, both of which can be expressed in terms of the right and left WG inverse solutions. Within the framework of the theory of noncommutative row–column determinants, we derive Cramer’s rules for computing these solutions based on determinantal representations of the right and left WG inverses. A numerical example is given to illustrate the obtained results.

1. Introduction. Preliminaries on Quaternion Matrices and Generalized Inverses

Let $\mathbb{H}=\{\eta_{0}+\eta_{1}i+\eta_{2}j+\eta_{3}k \mid i^{2}=j^{2}=k^{2}=ijk=-1,\ \eta_{0},\eta_{1},\eta_{2},\eta_{3}\in\mathbb{R}\}$ be the quaternion skew field. For $\eta=\eta_{0}+\eta_{1}i+\eta_{2}j+\eta_{3}k\in\mathbb{H}$, the quaternion $\bar{\eta}=\eta_{0}-\eta_{1}i-\eta_{2}j-\eta_{3}k$ and the real number $\|\eta\|=\sqrt{\eta\bar{\eta}}=\sqrt{\bar{\eta}\eta}=\sqrt{\eta_{0}^{2}+\eta_{1}^{2}+\eta_{2}^{2}+\eta_{3}^{2}}$ are the conjugate and the norm of $\eta$, respectively.
The symbols $\operatorname{rank}(A)$ and $A^{*}$, respectively, stand for the rank and the conjugate transpose of $A\in\mathbb{H}^{m\times n}$, where $\mathbb{H}^{m\times n}$ is the set of all $m\times n$ matrices over $\mathbb{H}$. The set $\mathbb{H}^{m\times n}_{r}$ denotes the subset of matrices from $\mathbb{H}^{m\times n}$ of rank $r$. Denote by
$$\mathcal{C}_{r}(A)=\{s\in\mathbb{H}^{m\times 1}: s=At,\ t\in\mathbb{H}^{n\times 1}\},\qquad \mathcal{N}_{r}(A)=\{t\in\mathbb{H}^{n\times 1}: At=0\},$$
$$\mathcal{R}_{l}(A)=\{s\in\mathbb{H}^{1\times n}: s=tA,\ t\in\mathbb{H}^{1\times m}\},\qquad \mathcal{N}_{l}(A)=\{t\in\mathbb{H}^{1\times m}: tA=0\},$$
the right column space, the right null space, the left row space, and the left null space of $A$, respectively. It is evident that $\mathcal{R}_{l}(A^{*})=(\mathcal{C}_{r}(A))^{*}$ and $\mathcal{N}_{l}(A^{*})=(\mathcal{N}_{r}(A))^{*}$.
The Frobenius norm of a quaternion matrix $A=(a_{il})\in\mathbb{H}^{m\times n}$ is defined as follows:
$$\|A\|_{F}=\sqrt{\operatorname{tr}(A^{*}A)}=\sqrt{\sum_{l}\|a_{.l}\|^{2}}=\sqrt{\sum_{l}\sum_{i}|a_{il}|^{2}}.$$
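Since the paper contains no code, a small numerical illustration may help. The following Python sketch stores a quaternion matrix as four real arrays, $A=H_{0}+H_{1}i+H_{2}j+H_{3}k$, and evaluates the entrywise conjugate and the Frobenius norm defined above; the helper names are ours, not taken from the paper.

```python
import numpy as np

# A quaternion matrix is stored as a 4-tuple of real arrays: A = H0 + H1*i + H2*j + H3*k.
def conj_q(H):
    """Entrywise quaternion conjugation: negate the three imaginary parts."""
    H0, H1, H2, H3 = H
    return (H0, -H1, -H2, -H3)

def frobenius_q(H):
    """||A||_F = sqrt(sum_{i,l} |a_{il}|^2), with |a|^2 = a0^2 + a1^2 + a2^2 + a3^2."""
    return np.sqrt(sum(np.sum(Hp ** 2) for Hp in H))

rng = np.random.default_rng(0)
A = tuple(rng.standard_normal((3, 2)) for _ in range(4))
print(np.isclose(frobenius_q(A), frobenius_q(conj_q(A))))  # conjugation preserves the norm
```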
Generalized inverses extend over the quaternion skew field in the usual way, with only minor peculiarities.
Definition 1.
For $A\in\mathbb{H}^{m\times n}$, the unique solution $X\in\mathbb{H}^{n\times m}$ to the system of the four equations
$$(1)\ AXA=A;\quad (2)\ XAX=X;\quad (3)\ (AX)^{*}=AX;\quad \text{and}\quad (4)\ (XA)^{*}=XA$$
is the Moore–Penrose (or shortly MP) inverse $A^{\dagger}$ of $A$.
The index of A H n × n (denoted by k = Ind ( A ) ) is the smallest nonnegative integer such that rank ( A k + 1 ) = rank ( A k ) .
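As a quick sanity check of Definition 1 and of the index just defined, the following sketch uses complex matrices as a stand-in for quaternion ones (numpy has no quaternion type); `index_of` is a helper name we introduce here.

```python
import numpy as np

# Verify the four Penrose equations of Definition 1 for numpy's MP inverse.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
X = np.linalg.pinv(A)
print(np.allclose(A @ X @ A, A),              # (1) AXA = A
      np.allclose(X @ A @ X, X),              # (2) XAX = X
      np.allclose((A @ X).conj().T, A @ X),   # (3) (AX)* = AX
      np.allclose((X @ A).conj().T, X @ A))   # (4) (XA)* = XA

def index_of(M, tol=1e-10):
    """Smallest nonnegative k with rank(M^(k+1)) == rank(M^k), i.e. Ind(M)."""
    k, Mk = 0, np.eye(M.shape[0], dtype=M.dtype)
    while np.linalg.matrix_rank(Mk @ M, tol=tol) < np.linalg.matrix_rank(Mk, tol=tol):
        k, Mk = k + 1, Mk @ M
    return k

N = np.diag([1.0, 1.0], k=1)   # 3x3 nilpotent Jordan block: N^2 != 0 but N^3 = 0
print(index_of(N))             # -> 3
```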
Definition 2.
The Drazin inverse $A^{D}$ of $A\in\mathbb{H}^{n\times n}$ with the index $k=\operatorname{Ind}(A)$ is the unique matrix $X$ for which
$$(1^{k})\ A^{k}XA=A^{k};\quad (2)\ XAX=X;\quad (5)\ AX=XA.$$
In particular, for $\operatorname{Ind}(A)\leq 1$, $A^{D}=A^{\#}$ reduces to the group inverse of $A$.
A matrix $X$ satisfying the conditions $(i),(j),\ldots,(k)$ is called an $\{i,j,\ldots,k\}$-inverse of $A$ and is denoted by $A^{(i,j,\ldots,k)}$. The set of matrices $A^{(i,j,\ldots,k)}$ is denoted by $A\{i,j,\ldots,k\}$. In particular, $A^{(1)}$ is an inner inverse, $A^{(2)}$ is an outer inverse, $A^{(1,2)}$ is a reflexive inverse, $A^{(1,2,3,4)}$ is the Moore–Penrose inverse, etc.
The concepts of the core inverse and core–EP inverse, introduced for complex matrices in [1,2], have been extended to quaternion matrices [3] as follows.
Definition 3.
The core–EP inverse $A^{\textcircled{\dagger}}$ of $A\in\mathbb{H}^{n\times n}$ presents the distinctive solution to
$$X=XAX,\qquad \mathcal{C}_{r}(X)=\mathcal{C}_{r}(A^{D})=\mathcal{C}_{r}(X^{*}).$$
When $\operatorname{Ind}(A)\leq 1$, $A^{\textcircled{\dagger}}=A^{\textcircled{\#}}$ is the core inverse of $A$.
Definition 4.
The dual core–EP inverse $A_{\textcircled{\dagger}}$ of $A\in\mathbb{H}^{n\times n}$ is the unique solution to
$$X=XAX,\qquad \mathcal{R}_{l}(X)=\mathcal{R}_{l}(A^{D})=\mathcal{R}_{l}(X^{*}).$$
In particular, when $\operatorname{Ind}(A)\leq 1$, $A_{\textcircled{\dagger}}=A_{\textcircled{\#}}$ is called the dual core inverse of $A$.
The following representations, obtained in [4] for a ring with involution, are also applicable to quaternion matrices.
Lemma 1.
Let A H n × n with k = Ind ( A ) . Then, for any k ,
A = A D A A , A = A A A D .
From Lemma 1 and Definition 2, it follows that
$$A^{\textcircled{\dagger}}=A^{D}A^{k+1}(A^{k+1})^{\dagger}=A^{k}(A^{k+1})^{\dagger},\qquad (1)$$
$$A_{\textcircled{\dagger}}=(A^{k+1})^{\dagger}A^{k+1}A^{D}=(A^{k+1})^{\dagger}A^{k}.\qquad (2)$$
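A minimal numerical sketch of the representations (1) and (2), again with a complex matrix standing in for a quaternion one; `core_ep_right` and `core_ep_left` are our names for the right and left core–EP inverses.

```python
import numpy as np
from numpy.linalg import pinv, matrix_power

def core_ep_right(A, k):
    """Right core-EP inverse via (1): A^k (A^{k+1})^dagger."""
    return matrix_power(A, k) @ pinv(matrix_power(A, k + 1))

def core_ep_left(A, k):
    """Left (dual) core-EP inverse via (2): (A^{k+1})^dagger A^k."""
    return pinv(matrix_power(A, k + 1)) @ matrix_power(A, k)

A = np.array([[1., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
k = 2                                   # Ind(A) = 2 for this matrix
X, Y = core_ep_right(A, k), core_ep_left(A, k)
print(np.allclose(X @ A @ X, X),                                   # outer inverse
      np.allclose(X @ matrix_power(A, k + 1), matrix_power(A, k)),  # X A^{k+1} = A^k (cf. Lemma 2)
      np.allclose(matrix_power(A, k + 1) @ Y, matrix_power(A, k)))  # A^{k+1} Y = A^k (cf. Lemma 3)
```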
Since the quaternion core–EP inverse $A^{\textcircled{\dagger}}$ is associated with the right column space of $A\in\mathbb{H}^{n\times n}$, while the quaternion dual core–EP inverse $A_{\textcircled{\dagger}}$ is linked to its left row space, it is advisable to refer to these generalized inverses as the right and left core–EP inverses, respectively.
Building on the results related to the core–EP inverse from [2], the quaternion right and left core–EP inverses are characterized by specific restricted equations.
Lemma 2.
(Lemma 6 in ref. [5]). Let A H n × n with k = Ind ( A ) . Then, X H n × n is the right core–EP inverse of A if and only if
$$XA^{k+1}=A^{k},\quad AX^{2}=X,\quad (AX)^{*}=AX,\quad \text{and}\quad \mathcal{C}_{r}(X)\subseteq\mathcal{C}_{r}(A^{k}).$$
Lemma 3.
(Lemma 8 in ref. [5]). Let A H n × n with k = Ind ( A ) . The left core–EP inverse X H n × n of A is defined as the solution to
$$A^{k+1}X=A^{k},\quad X^{2}A=X,\quad (XA)^{*}=XA,\quad \text{and}\quad \mathcal{R}_{l}(X)\subseteq\mathcal{R}_{l}(A^{k}).$$
Recently, Wang and Chen [6] introduced the weak group inverse of $A\in\mathbb{C}^{n\times n}$, which can evidently be extended to quaternion matrices.
Definition 5.
Let $A\in\mathbb{H}^{n\times n}$ with the index $k=\operatorname{Ind}(A)$. The weak group inverse $A^{\textcircled{w}}$ of $A$ is the unique matrix $X\in\mathbb{H}^{n\times n}$ satisfying $AX^{2}=X$, $AX=A^{\textcircled{\dagger}}A$.
Lemma 4.
Let A H n × n with the index k = Ind ( A ) . Then,
A w = ( A A A ) # = A 2 A = A 2 A .
The study of the weak group inverse and the generalized weak group inverse has garnered significant attention (see, for example, [7,8,9,10]). To our knowledge, the dual of the weak group (WG) inverse has not yet been introduced or studied for complex matrices. Furthermore, the WG inverse and its dual have not been explored for matrices over the quaternion skew field. Investigating the characteristic representations of the WG inverse and its dual is particularly important due to their applications in solving matrix equations with constraints on the solution space. In this paper, we extend the concepts of the WG inverse and its dual to quaternion matrices and provide their characterizations.
We pay particular attention to their determinantal representations (abbreviated as D-representations), which provide a direct method for constructing the quaternionic weak group (WG) inverse and its dual. It is well known that the D-representation of the ordinary inverse is the matrix with cofactors as its entries. However, there are numerous D-representations of generalized inverses for matrices over the complex numbers [11,12,13,14], resulting from the search for more applicable explicit expressions. Given the non-commutativity of quaternions, obtaining D-representations of quaternion generalized inverses is more involved, primarily due to the challenge of defining the determinant of a matrix with non-commutative entries, known as a noncommutative determinant (see [15,16,17] for details). In this paper, we provide D-representations of the quaternion WG inverse and its dual, based on the theory of noncommutative column–row determinants developed in [18].
Solving matrix equations is the main application of generalized inverses. Moreover, using various generalized inverses enables us to select solutions belonging to specific constrained subspaces of the solution space. For instance, the equation $AX=B$ has a solution if and only if $\mathcal{C}_{r}(B)\subseteq\mathcal{C}_{r}(A)$. However, it is often necessary to find a solution that lies within a constrained subspace of $\mathcal{C}_{r}(A)$ corresponding to the given matrix $B$. Various types of generalized inverses serve as essential tools for addressing such problems.
Recently, in [19], the minimization problem in the Frobenius norm was examined for the case when the equation $AX=C$ has no solution. It was found that $X=A^{\textcircled{w}}C$ is the uniquely determined solution to the minimization problem
$$\min\|A^{2}X-AC\|_{F}\quad\text{provided that}\quad \mathcal{R}(X)\subseteq\mathcal{R}(A^{k}).$$
This paper focuses on the two-sided quaternion matrix equation (TQME), defined as AXB = C. As a particular case of the Sylvester equation, the TQME is applicable in various fields, including the statistics of quaternion random signals [20], quaternion matrix optimization problems [21], signal and color image processing [22], face recognition [23], and more. Exploring and solving matrix equations over quaternion algebras is currently a hot topic in both pure mathematics and applied fields. Studies on matrix equations are emerging not only over Hamilton’s quaternion skew field (see, e.g., [24,25,26,27,28]) but also over the quaternion split algebra [29,30,31], the generalized commutative quaternion algebra [32,33], the algebra of dual quaternions [34], and others. Remarkably, these works in various quaternion algebras are mutually stimulating and complementary, fostering an exchange of ideas. Inspired by previous research on quaternion matrix equations, this paper aims to investigate solutions to two-sided constrained quaternion matrix equations (CQMEs) and their one-sided special cases, specifically with constraints involving the WG inverse and its dual, particularly in cases where the CQME has a solution. Additionally, we explore approximation problems related to CQMEs, expressed in terms of the WG inverse and its dual.
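For orientation, the classical Penrose criterion for the unconstrained two-sided equation AXB = C is easy to check numerically: the equation is consistent exactly when $AA^{\dagger}CB^{\dagger}B=C$, in which case $X=A^{\dagger}CB^{\dagger}$ is one particular solution. This is a standard fact about the MP inverse and is not specific to the constrained versions studied below; complex matrices are used as a stand-in for quaternion ones.

```python
import numpy as np
from numpy.linalg import pinv

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((2, 5))
C = A @ rng.standard_normal((3, 2)) @ B        # build a consistent right-hand side

consistent = np.allclose(A @ pinv(A) @ C @ pinv(B) @ B, C)   # Penrose consistency test
X = pinv(A) @ C @ pinv(B)                                    # a particular solution
print(consistent, np.allclose(A @ X @ B, C))                 # -> True True
```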
The following notation will be useful:
$$\mathbb{H}^{(n)}(k)=\left\{M\in\mathbb{H}^{n\times n}\mid \operatorname{Ind}(M)=k\right\},\qquad (M|N)\in\mathbb{H}^{(n|m)}(k|q)\Leftrightarrow M\in\mathbb{H}^{(n)}(k),\ N\in\mathbb{H}^{(m)}(q),$$
$$Q\in\mathcal{C}_{r,\subseteq}(M)\Leftrightarrow \mathcal{C}_{r}(Q)\subseteq\mathcal{C}_{r}(M),\qquad Q\in\mathcal{R}_{l,\subseteq}(N)\Leftrightarrow \mathcal{R}_{l}(Q)\subseteq\mathcal{R}_{l}(N),$$
$$Q\in\mathcal{O}(M,N)\Leftrightarrow \mathcal{C}_{r}(Q)\subseteq\mathcal{C}_{r}(M),\ \mathcal{R}_{l}(Q)\subseteq\mathcal{R}_{l}(N).$$
This paper continues the discussion of the applications of generalized inverses in solving CQME with specific constraints involving different inverses that started in [35,36].
The remainder of the article is organized as follows. Some new characterizations of the WG inverse and its dual are given in Section 2. The D-representations of the WG inverse and its dual over the quaternion skew field are derived in Section 3. The solvability of constrained quaternion two-sided matrix equations, together with their one-sided special cases whose solutions can be expressed in terms of the right and left WG inverses, is considered in Section 4, and the related approximation problem is studied in Section 5. Cramer's rules for the obtained solutions are derived in Section 6. A numerical example that illustrates our results is given in Section 7. Concluding comments are stated in Section 8.

2. Characterizations of the WG Inverse and Its Dual

Characterizations of the WG inverse based on the core–EP decomposition were derived in [6]. We give the characterization of the WG inverse based on one of the representations in (3) and the characteristic equations of the Drazin and core–EP inverses.
Theorem 1.
Let A H ( n ) ( k ) . The subsequent statements are equivalent:
(i)
$X=A^{\textcircled{w}}=(A^{\textcircled{\dagger}})^{2}A$ is the weak group inverse of $A$.
(ii)
X is the unique solution to the equations
$$AX^{2}=X,\qquad AX=A^{\textcircled{\dagger}}A.\qquad (4)$$
Proof. 
( i ) ( ii ) The first step in the proof consists of checking that X = A 2 A satisfies (4). Since, by (1), A = A k ( A k + 1 ) , then we have
A X 2 = A A 2 A 2 = A A k ( A k + 1 ) A k ( A k + 1 ) A 2 = A A k ( A k + 1 ) A k ( A k + 1 ) A k + 1 ( A k + 1 ) A k ( A k + 1 ) A = A k + 1 ( A k + 1 ) A k ( A k + 1 ) A k ( A k + 1 ) A = A k + 1 ( A k + 1 ) A k + 1 A D ( A k + 1 ) A k ( A k + 1 ) A = A k ( A k + 1 ) A k ( A k + 1 ) A = A 2 A = X ,
and
A X = A A 2 A = A A k ( A k + 1 ) A k ( A k + 1 ) A = A k + 1 ( A k + 1 ) A k + 1 A D ( A k + 1 ) A = A D A k + 1 ( A k + 1 ) A = A k ( A k + 1 ) A = A A .
From (5) and (6), it follows that X = A 2 A is a solution to (4).
Furthermore, assume that both X and Y satisfy (4). Then,
X = A X 2 = A A X = A A A = A A Y = A Y 2 = Y ,
that is, the system of Equation (4) is uniquely solvable.
Denoting $X=(A^{\textcircled{\dagger}})^{2}A=:A^{\textcircled{w}}$ completes the proof. □
Theorem 2.
Let A H ( n ) ( k ) . The following statements are equivalent:
(i)
$X=A_{\textcircled{w}}=A(A_{\textcircled{\dagger}})^{2}$ is the dual (or left) weak group inverse of $A$.
(ii)
X is the unique solution to the equations
$$X^{2}A=X,\qquad XA=AA_{\textcircled{\dagger}}.$$
Proof. 
The proof is similar to the proof of Theorem 1. □
Remark 1.
Since the WG inverse $A^{\textcircled{w}}=(A^{\textcircled{\dagger}})^{2}A$ is represented through the right core–EP inverse, it can be called the right WG inverse. Similarly, $A_{\textcircled{w}}$ is the left WG inverse.
The following representations for $A^{\textcircled{w}}$ and $A_{\textcircled{w}}$ follow from the expression for the weak group inverse of complex matrices proposed in [7].
Lemma 5.
If A H ( n ) ( k ) , then
$$A^{\textcircled{w}}=A^{k}(A^{k+2})^{\dagger}A,\qquad (8)$$
$$A_{\textcircled{w}}=A(A^{k+2})^{\dagger}A^{k}.\qquad (9)$$
Proof. 
The representation (8) of the right weak group inverse can be proved similarly to the complex case (Theorem 2.1 in ref. [7]).
We prove (9). Applying A = ( A m ) A m A D from (2) for all m k , m N , it follows that
A w = A A 2 = A ( A m ) A m A D 2 = A ( A m ) A D A m ( A m ) A m A D = A ( A m ) A D A m A D = A ( A k + 2 ) A D A k + 2 A D = A ( A k + 2 ) A D A k + 1 = A ( A k + 2 ) A k .
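The representations (8) and (9) are directly computable. The sketch below evaluates (8) for a small complex matrix (a stand-in for the quaternion case) and checks the defining equations of Theorem 1, with the right core–EP inverse obtained from (1); `wg_right` is a name we introduce.

```python
import numpy as np
from numpy.linalg import pinv, matrix_power

def wg_right(A, k):
    """Right WG inverse via (8): A^k (A^{k+2})^dagger A."""
    return matrix_power(A, k) @ pinv(matrix_power(A, k + 2)) @ A

A = np.array([[1., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
k = 2                                                         # Ind(A) = 2
X = wg_right(A, k)
core_ep = matrix_power(A, k) @ pinv(matrix_power(A, k + 1))   # right core-EP inverse, Eq. (1)
print(np.allclose(A @ X @ X, X),          # A X^2 = X
      np.allclose(A @ X, core_ep @ A))    # A X = (core-EP of A) A, cf. Theorem 1
```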
Lemma 6 presents some properties of projections involving A w and A w .
Lemma 6.
Let A H ( n ) ( k ) . Then,
C r ( A k ) = C r ( A w A ) , N r ( ( A k ) A 2 ) = N r ( A w A ) , R l ( ( A k ) A ) = R l ( A A w ) , N l ( A k ) = N l ( A A w ) , R l ( A k ) = R l ( A A w ) , N l ( A 2 ( A k ) ) = N l ( A A w ) , C r ( A ( A k ) ) = C r ( A w A ) , N r ( A k ) = N r ( A w A ) .
Proof. 
According to [8], it follows that C r ( A k ) = C r ( A D ) = C r ( A w A ) as well as
N r ( ( A k ) A 2 ) = N r ( ( A k + 1 ) A 2 ) = N r ( ( A k + 1 ) A 2 ) = N r ( A k ( A k + 1 ) A 2 ) = N r ( A A 2 ) = N r ( A w A ) .
Since A A w = A ( A ) 2 A = A A = A k ( A k + 1 ) A , we obtain
R l ( ( A k ) A ) = R l ( ( A k + 1 ) A ) = R l ( ( A k + 1 ) A ) = R l ( A k ( A k + 1 ) A ) = R l ( A A w )
and
N l ( A k ) N l ( A A w ) = N l ( A k ( A k + 1 ) A ) N l ( A k ( A k + 1 ) A k + 1 ) = N l ( A k ( A k ) A k ) = N l ( A k ) ,
i.e., N l ( A k ) = N l ( A k ( A k + 1 ) A ) = N l ( A A w ) .
The rest of the proof follows similarly. □
Necessary and sufficient conditions for a quaternion matrix, X , to be the weak group inverse of A are developed in Theorem 3. Notice that most of the presented conditions are new in the literature.
Theorem 3.
The following statements are equivalent for A H ( n ) ( k ) and X H n × n :
(i)
X = A w .
(ii)
A D A X = X , A X = A A .
(iii)
A D A X = X , A D X = ( A ) 3 A .
(iv)
A A X = X , A X = A A .
(v)
X A X = X , A X = A A , X A = A A .
(vi)
X A X = X , A X = A A , X A D = A D A D .
(vii)
X A X = X , A X = A A , X A = ( A ) 2 A 2 .
(viii)
A X A = A A 2 , X A X = X , X A = ( A ) 2 A 2 , A X = A A .
(ix)
X A A = X , X A = A A .
(x)
X A A = X , X A D = A D A D .
(xi)
X A A = X , X A = ( A ) 2 A 2 .
(xii)
X A A = X , X A k + 1 = A k .
(xiii)
X A A 2 X = X , A A 2 X = A A , X A A 2 = ( A ) 2 A 2 .
(xiv)
X A A 2 X = X , A A 2 X A A 2 = A A 2 , A A 2 X = A A , X A A 2 = ( A ) 2 A 2 .
(xv)
( A ) 2 A 2 X = X , ( A k ) A 2 X = ( A k ) * A .
Proof. 
( i ) ( ii ) The equalities X = A w = A 2 A and A = A D A k A k give this implication.
( ii ) ( iii ) Notice that ( A ) 3 = ( A D A k A k ) 3 = ( A D ) 3 A k A k = ( A D ) 2 A . The use of A X = A A leads to
A D X = ( A D ) 2 ( A X ) = ( A D ) 2 A A = ( A ) 3 A .
( iii ) ( i ) The assumptions A D A X = X and A D X = ( A ) 3 A imply that
X = A ( A D X ) = A ( A ) 3 A = ( A ) 2 A .
( i ) ( iv ) ( vi ) These equivalences follow due to [8].
( i ) ( vii ) ( viii ) This is evident.
( viii ) ( i ) It can be noticed that
X = ( X A ) X = ( A ) 2 A ( A X ) = ( A ) 2 A A A = ( A ) 2 A .
The rest of the proof follows similarly. □
Dual characterizations for A w can be proved by analogy to the above.
Theorem 4.
The subsequent statements are equivalent for A H ( n ) ( k ) and X H n × n :
(i)
X = A w .
(ii)
X A A D = X , X A = A A .
(iii)
X A A D = X , X A D = A ( A ) 3 .
(iv)
X A A = X , X A = A A .
(v)
X A X = X , X A = A A , A X = A A .
(vi)
X A X = X , X A = A A , A D X = A D A D .
(vii)
X A X = X , X A = A A , A X = A 2 ( A ) 2 .
(viii)
X A X = X , A X A = A 2 A , X A = A A , A X = A 2 ( A ) 2 .
(ix)
A A X = X , A X = A A .
(x)
A A X = X , A D X = A D A D .
(xi)
A A X = X , A X = A 2 ( A ) 2 .
(xii)
A A X = X , A k + 1 X = A k .
(xiii)
X A 2 A X = X , A 2 A X = A 2 ( A ) 2 , X A 2 A = A A .
(xiv)
X A 2 A X = X , A 2 A X A 2 A = A 2 A , A 2 A X = A 2 ( A ) 2 , X A 2 A = A A .
(xv)
X A 2 ( A ) 2 = X , X A 2 ( A k ) = A ( A k ) .
Recall that, by [37], an outer inverse of $A\in\mathbb{H}^{n\times n}$ with the predefined right column space $T_{1}$ and right null space $S_{1}$ is a solution to the constrained equation
$$XAX=X,\quad \mathcal{C}_{r}(X)=T_{1},\quad \mathcal{N}_{r}(X)=S_{1}.$$
It is unique (if it exists) and denoted by $X=A^{(2)}_{r\,T_{1},S_{1}}$. Similarly, an outer inverse of $A\in\mathbb{H}^{n\times n}$ with the predefined left row space $T_{2}$ and left null space $S_{2}$ is a solution to the constrained equation
$$XAX=X,\quad \mathcal{R}_{l}(X)=T_{2},\quad \mathcal{N}_{l}(X)=S_{2}.$$
It is unique (if it exists) and denoted by $X=A^{(2)}_{l\,T_{2},S_{2}}$. An outer inverse of $A\in\mathbb{H}^{n\times n}$ with the prescribed right column space $T_{1}$, the right null space $S_{1}$, the left row space $T_{2}$, and the left null space $S_{2}$ is a solution to the constrained equation
X A X = X , C r ( X ) = T 1 , N r ( X ) = S 1 , R l ( X ) = T 2 , N l ( X ) = S 2 .
It is unique (if it exists) and denoted by X = A ( T 1 , T 2 ) , ( S 1 , S 2 ) ( 2 ) . If some of the above mentioned outer inverses A ( 2 ) satisfy A A ( 2 ) A = A , we use the notation A ( 1 , 2 ) .
The results obtained in Theorems 3 and 4 lead to the following representations and characterizations.
Corollary 1.
The right and left WG inverses of A H ( n ) ( k ) can be represented as
A w = A r C r ( A k ) , N r ( ( A k ) A ) ( 2 ) = A l R l ( ( A k ) A ) , N l ( A k ) ( 2 ) = A ( C r ( A k ) , R l ( ( A k ) A ) ) , ( N r ( ( A k ) A ) , N l ( A k ) ) ( 2 ) = ( A A 2 ) r C r ( A k ) , N r ( ( A k ) A ) ( 1 , 2 ) = ( A A 2 ) l R l ( ( A k ) A ) , N l ( A k ) ( 2 ) = ( A A 2 ) ( C r ( A k ) , R l ( ( A k ) A ) ) , ( N r ( ( A k ) A ) , N l ( A k ) ) ( 1 , 2 ) ,
and
A w = A r C r ( A ( A k ) ) , N r ( A k ) ( 2 ) = A l R l ( A k ) , N l ( A ( A k ) ) ( 2 ) = A ( C r ( A ( A k ) ) , R l ( A k ) ) , ( N r ( A k ) , N l ( A ( A k ) ) ( 2 ) = ( A 2 A ) r C r ( A ( A k ) ) , N r ( A k ) ( 1 , 2 ) = ( A 2 A ) l R l ( A k ) , N l ( A ( A k ) ) ( 1 , 2 ) = ( A 2 A ) ( C r ( A ( A k ) ) , R l ( A k ) ) , ( N r ( A k ) , N l ( A ( A k ) ) ( 1 , 2 ) .
Proof. 
Theorems 3 and 4 imply that A w and A w are outer inverses of A , A w is both inner and outer inverse of A A 2 , and A w is both inner and outer inverse of A 2 A . The proof can be completed by Lemma 5. □

3. Determinantal Representations of the WG Inverse and Its Dual

The problem of D -representing quaternion generalized inverses is successfully resolved within the theory of noncommutative column–row determinants [18].

3.1. Preliminaries on Quaternion D -Representations

For $A=(a_{ij})\in\mathbb{H}^{n\times n}$, there exists a method to generate $n$ row (R-)determinants and $n$ column (C-)determinants by prescribing a certain order of factors in each term.
  • The ith R -determinant of A , for an arbitrary row index, i I n = { 1 , , n } , is given by
    rdet i A : = σ S n 1 n r ( a i i k 1 a i k 1 i k 1 + 1 a i k 1 + l 1 i ) ( a i k r i k r + 1 a i k r + l r i k r ) ,
    where S n denotes the symmetric group on I n , while the permutation σ is defined as a product of mutually disjoint subsets ordered from the left to right by the rules
    σ = i i k 1 i k 1 + 1 i k 1 + l 1 i k 2 i k 2 + 1 i k 2 + l 2 i k r i k r + 1 i k r + l r , i k t < i k t + s , i k 2 < i k 3 < < i k r , t = 2 , , r , s = 1 , , l t .
  • For an arbitrary column index, j I n , the jth C -determinant of A is defined as
    cdet j A = τ S n ( 1 ) n r ( a j k r j k r + l r a j k r + 1 j k r ) ( a j j k 1 + l 1 a j k 1 + 1 j k 1 a j k 1 j ) ,
    in which a permutation, τ , is ordered from the right to left in the following way:
    τ = j k r + l r j k r + 1 j k r j k 2 + l 2 j k 2 + 1 j k 2 j k 1 + l 1 j k 1 + 1 j k 1 j , j k t < j k t + s , j k 2 < j k 3 < < j k r .
The non-commutativity of quaternion operations generally results in different R - and C -determinants, except in the case where A is a Hermitian matrix; in that scenario, the following equalities hold [18]:
rdet 1 A = = rdet n A = cdet 1 A = = cdet n A = α R .
This property allows us to define the unique determinant of a Hermitian matrix $A$ by putting $\det A=\alpha$. The notation $|A|:=\det A$ will also be used.
The following symbols related to D-representations will be used. Let $a_{i.}$ and $a_{.j}$ denote the $i$th row and the $j$th column of $A$, respectively. Further, $A_{.j}(c)$ (resp. $A_{i.}(b)$) stands for the matrix formed by replacing the $j$th column (resp. $i$th row) of $A$ with the column vector $c$ (resp. the row vector $b$). Suppose $\alpha:=\{\alpha_{1},\ldots,\alpha_{k}\}\subseteq\{1,\ldots,m\}$ and $\beta:=\{\beta_{1},\ldots,\beta_{k}\}\subseteq\{1,\ldots,n\}$ are subsets with $1\leq k\leq\min\{m,n\}$. For $A\in\mathbb{H}^{m\times n}$, the notation $A^{\alpha}_{\beta}$ stands for the submatrix with rows and columns indexed by $\alpha$ and $\beta$, respectively. When $A\in\mathbb{H}^{n\times n}$ is Hermitian, $A^{\alpha}_{\alpha}$ and $|A|^{\alpha}_{\alpha}$ denote a principal submatrix and a principal minor of $A$, respectively. The usual notation $L_{k,n}:=\{\alpha:\alpha=(\alpha_{1},\ldots,\alpha_{k}),\ 1\leq\alpha_{1}<\cdots<\alpha_{k}\leq n\}$ denotes the set of strictly increasing sequences of $k$ ($1\leq k\leq n$) integers selected from $\{1,\ldots,n\}$. For fixed $i\in\alpha$ and $j\in\beta$, we write $I_{r,m}\{i\}:=\{\alpha:\alpha\in L_{r,m},\ i\in\alpha\}$ and $J_{r,n}\{j\}:=\{\beta:\beta\in L_{r,n},\ j\in\beta\}$.
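The index-set notation $L_{k,n}$, $I_{r,m}\{i\}$, $J_{r,n}\{j\}$ is plain enumeration of increasing integer sequences; the D-representations below sum minors over these sets. A tiny Python sketch (1-based indices; function names are ours):

```python
from itertools import combinations

def L(k, n):
    """L_{k,n}: all strictly increasing k-tuples chosen from {1, ..., n}."""
    return list(combinations(range(1, n + 1), k))

def I_containing(r, m, i):
    """I_{r,m}{i}: the sequences in L_{r,m} that contain the index i."""
    return [alpha for alpha in L(r, m) if i in alpha]

print(L(2, 3))                # [(1, 2), (1, 3), (2, 3)]
print(I_containing(2, 3, 2))  # [(1, 2), (2, 3)]
```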
Lemma 7.
(Theorem 4.5 in ref. [38]). If $A\in\mathbb{H}^{m\times n}_{s}$, then its Moore–Penrose inverse $A^{\dagger}=\left(a^{\dagger}_{ij}\right)\in\mathbb{H}^{n\times m}$ possesses the D-representations
$$a^{\dagger}_{ij}=\frac{\sum_{\beta\in J_{s,n}\{i\}}\operatorname{cdet}_{i}\left(\left(A^{*}A\right)_{.i}\left(a^{*}_{.j}\right)\right)^{\beta}_{\beta}}{\sum_{\beta\in J_{s,n}}\left|A^{*}A\right|^{\beta}_{\beta}}\qquad (10)$$
$$=\frac{\sum_{\alpha\in I_{s,m}\{j\}}\operatorname{rdet}_{j}\left(\left(AA^{*}\right)_{j.}\left(a^{*}_{i.}\right)\right)^{\alpha}_{\alpha}}{\sum_{\alpha\in I_{s,m}}\left|AA^{*}\right|^{\alpha}_{\alpha}},\qquad (11)$$
where $a^{*}_{.j}$ and $a^{*}_{i.}$ stand for the $j$th column and the $i$th row of $A^{*}$.
The D -representations of the right and left quaternion core–EP inverses are derived in [3].

3.2. D -Representations of Quaternion Right and Left WG Inverses

Theorem 5.
If A H ( n ) ( k ) with rank ( A k ) = s , then the right WG inverse A w = a i j w , r possesses the subsequent D -representation
a i j w , r = t = 1 n α I s , n t rdet t A k + 2 A k + 2 t . ( a ^ i . ( 2 ) ) α α a t j α I s , n A k + 2 A k + 2 α α ,
where a ^ i . ( 2 ) is the ith row of A ^ 2 = A k ( A k + 2 ) .
Proof. 
Since rank ( A k + 2 ) = rank ( A k + 1 ) = rank ( A k ) = s , then from Equation (8), it follows that
a i j w , r = t = 1 n m = 1 n a i m ( k ) a m t ( k + 2 ) a t j .
By Equation (11),
a m t ( k + 2 ) = α I s 2 , n t rdet t A k + 2 A k + 2 t . a m . ( k + 2 ) , α α α I s 2 , n A k + 2 A k + 2 α α ,
where a m . ( k + 2 ) , is the mth row of A k + 2 .
Denote A ^ 2 = A k ( A k + 2 ) . Then, Equation (12) holds because m = 1 n a i m ( k ) a m . ( k + 2 ) , = a ^ i . ( 2 ) .
Theorem 6.
If A H ( n ) ( k ) with rank ( A k ) = s , then the left WG inverse A w = a i j w , l possesses the D -representation
a i j w , l = t = 1 n a i t β J s , n t cdet t A k + 2 A k + 2 . t ( a ˇ . j ( 2 ) ) β β β J s , n A k + 2 A k + 2 β β ,
where a ˇ . j ( 2 ) is the jth column of A ˇ 2 = ( A k + 2 ) A k .
Proof. 
The proof is analogous to the proof of Theorem 5. □
Remark 2.
For a complex matrix A C ( n ) ( k ) with rank ( A k ) = s , the D -representations of the right A w = a i j w , r and left A w = a i j w , l WG inverses can be obtained by substituting non-commutative row and column determinants with an ordinary determinant such that
$$a^{w,r}_{ij}=\frac{\sum_{t=1}^{n}\left(\sum_{\alpha\in I_{s,n}\{t\}}\left|\left(A^{k+2}\left(A^{k+2}\right)^{*}\right)_{t.}\left(\hat{a}^{(2)}_{i.}\right)\right|^{\alpha}_{\alpha}\right)a_{tj}}{\sum_{\alpha\in I_{s,n}}\left|A^{k+2}\left(A^{k+2}\right)^{*}\right|^{\alpha}_{\alpha}},\qquad a^{w,l}_{ij}=\frac{\sum_{t=1}^{n}a_{it}\left(\sum_{\beta\in J_{s,n}\{t\}}\left|\left(\left(A^{k+2}\right)^{*}A^{k+2}\right)_{.t}\left(\check{a}^{(2)}_{.j}\right)\right|^{\beta}_{\beta}\right)}{\sum_{\beta\in J_{s,n}}\left|\left(A^{k+2}\right)^{*}A^{k+2}\right|^{\beta}_{\beta}},$$
where $\hat{a}^{(2)}_{i.}$ is the $i$th row of $\hat{A}_{2}=A^{k}(A^{k+2})^{*}$, and $\check{a}^{(2)}_{.j}$ is the $j$th column of $\check{A}_{2}=(A^{k+2})^{*}A^{k}$. This result is also novel for complex matrices.
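Under our reading of the garbled indices above (conjugate-transpose marks on $A^{k+2}$ and the replaced row taken from $A^{k}(A^{k+2})^{*}$), the complex-case formula for $a^{w,r}_{ij}$ can be implemented with ordinary determinants and compared with the algebraic expression (8). The following sketch does exactly that; all function names are ours.

```python
import numpy as np
from itertools import combinations
from numpy.linalg import det, matrix_power, matrix_rank, pinv

def wg_right_det(A, k):
    """Right WG inverse of a complex matrix via the determinantal formula of Remark 2."""
    n = A.shape[0]
    s = matrix_rank(matrix_power(A, k))
    M = matrix_power(A, k + 2) @ matrix_power(A, k + 2).conj().T   # A^{k+2} (A^{k+2})^*
    A_hat = matrix_power(A, k) @ matrix_power(A, k + 2).conj().T   # A-hat_2 = A^k (A^{k+2})^*
    denom = sum(det(M[np.ix_(a, a)]) for a in combinations(range(n), s))
    W = np.zeros((n, n), dtype=complex)
    for i in range(n):
        for j in range(n):
            total = 0.0
            for t in range(n):
                Mt = M.copy()
                Mt[t, :] = A_hat[i, :]      # replace the t-th row by the i-th row of A-hat_2
                total += sum(det(Mt[np.ix_(a, a)])
                             for a in combinations(range(n), s) if t in a) * A[t, j]
            W[i, j] = total / denom
    return W

A = np.array([[1., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
k = 2
W_det = wg_right_det(A, k)
W_alg = matrix_power(A, k) @ pinv(matrix_power(A, k + 2)) @ A      # Lemma 5, Eq. (8)
print(np.allclose(W_det, W_alg))                                   # -> True
```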

4. WG–Dual WG Solutions of CQMEs

In this section, we consider the solvability of CQMEs:
( A k ) A 2 X B 2 ( B q ) = ( A k ) A C B ( B q ) , X O ( A k , B q )
A k + 1 X B q + 1 = A k C B q , X O ( A ( A k ) , ( B q ) B ) .
Applying the WG inverse of A and the dual WG inverse of B , the unique solution to CQME (14) is presented in the first result of this section.
Theorem 7.
CQME (14) has a uniquely determined solution represented by
$$X=A^{\textcircled{w}}CB_{\textcircled{w}}.\qquad (16)$$
Proof. 
The equalities A = A D A k ( A k ) and B = ( B q ) B q B D give A w = ( A ) 2 A = ( A D ) 2 A k ( A k ) and B w = B ( B ) 2 = ( B q ) B q ( B D ) 2 , respectively. Hence,
X = A w C B w = ( A D ) 2 A k ( A k ) A C B ( B q ) B q ( B D ) 2 O ( A k , B q ) ,
and
( A k ) A 2 X B 2 ( B q ) = ( A k ) A 2 ( A D ) 2 A k ( A k ) A C B ( B q ) B q ( B D ) 2 B 2 ( B q ) = ( A k ) A A D A k ( A k ) A C B ( B q ) B q B D B ( B q ) = ( A k ) A k ( A k ) A C B ( B q ) B q ( B q ) = ( A k ) A C B ( B q ) ,
imply that X = A w C B w is a solution of (14).
In order to prove that (14) has unique solution, let X 1 and X be two solutions to (14). Then, ( A k ) A 2 ( X 1 X ) B 2 ( B q ) = 0 , C r ( X 1 ) C r ( A k ) and C r ( X ) C r ( A k ) yield
( X 1 X ) B 2 ( B q ) N r ( ( A k ) A 2 ) C r ( A k ) = N r ( A w A ) C r ( A w A ) = { 0 } .
Now, R l ( X 1 ) R l ( B q ) , R l ( X ) R l ( B q ) and ( X 1 X ) B 2 ( B q ) = 0 give
X 1 X N l ( B 2 ( B q ) ) R l ( B q ) = N l ( B B w ) R l ( B B w ) = { 0 } ,
i.e., X 1 = X . □
As a consequence of Theorem 7, we solve the CQMEs when A = I n or B = I m .
Corollary 2.
Let C H n × m .
(a)
If A H ( n ) ( k ) , then
X = A w C
is the unique solution to $(A^{k})^{*}A^{2}X=(A^{k})^{*}AC$, $\mathcal{C}_{r}(X)\subseteq\mathcal{C}_{r}(A^{k})$ (see the numerical sketch after this corollary).
(b)
If B H ( m ) ( q ) , then
X = C B w
is the unique solution to X B 2 ( B q ) = C B ( B q ) , R l ( X ) R l ( B q ) .
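A quick complex-matrix check of Corollary 2(a), under our reading of the constraint as $(A^{k})^{*}A^{2}X=(A^{k})^{*}AC$ with $\mathcal{C}_{r}(X)\subseteq\mathcal{C}_{r}(A^{k})$ (the conjugate-transpose marks are garbled in the extracted text): the candidate $X=A^{\textcircled{w}}C$ is built from the representation (8) and both conditions are verified numerically.

```python
import numpy as np
from numpy.linalg import pinv, matrix_power, matrix_rank

def wg_right(A, k):
    """Right WG inverse via (8): A^k (A^{k+2})^dagger A (complex stand-in)."""
    return matrix_power(A, k) @ pinv(matrix_power(A, k + 2)) @ A

A = np.array([[1., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
k = 2                                            # Ind(A) = 2
C = np.arange(9.0).reshape(3, 3)
X = wg_right(A, k) @ C                           # candidate solution X = A^w C

Ak = matrix_power(A, k)
eq_holds = np.allclose(Ak.conj().T @ A @ A @ X, Ak.conj().T @ A @ C)
in_range = matrix_rank(np.hstack([Ak, X])) == matrix_rank(Ak)   # C_r(X) inside C_r(A^k)
print(eq_holds, in_range)                        # -> True True
```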
Similarly to Theorem 7, we can prove the solvability of the next CQME.
Corollary 3.
Let ( A | B ) H ( n | m ) ( k | q ) and C H n × m . Then, (16) is the unique solution to
A 2 X B 2 = A k ( A k ) A C B ( B q ) B q , X O ( A k , B q ) .
When Ind ( A ) = Ind ( B ) = 1 in Theorem 7 and Corollary 3, we obtain the following result.
Corollary 4.
Let C H n × m , A H n × n , B H m × m with Ind ( A ) = 1 and Ind ( B ) = 1 . Then, X = A # C B # is the unique solution to
(a)
A A 2 X B 2 B = A A C B B , X O ( A , B ) ;
(b)
A 2 X B 2 = A C B , X O ( A , B ) .
Theorem 8.
CQME (15) has a uniquely determined solution that can be represented as follows:
X = A w C B w .
Proof. 
Since C r ( ( A k ) ) = C r ( ( A k ) ) and R l ( ( B q ) ) = R l ( ( B q ) ) , we have that
X = A w C B w = A ( A k ) A k ( A D ) 2 C ( B D ) 2 B q ( B q ) B
satisfies X O ( A ( A k ) , ( B q ) B ) = O ( A ( A k ) , ( B q ) B ) and
A k + 1 X B q + 1 = A k + 2 ( A k ) A k ( A D ) 2 C ( B D ) 2 B q ( B q ) B q + 2 = A k + 2 ( A D ) 2 C ( B D ) 2 B q + 2 = A k C B q .
This means that (19) is a solution to CQME (15).
To check that CQME (15) has a unique solution, assume that X 1 and X are its two solutions. Further, by A k + 1 ( X 1 X ) B q + 1 = 0 , C r ( X 1 ) C r ( A ( A k ) ) and C r ( X ) C r ( A ( A k ) ) , it follows that
( X 1 X ) B q + 1 N r ( A k + 1 ) C r ( A ( A k ) ) = N r ( A w A ) C r ( A w A ) = { 0 } .
Using R l ( X 1 ) R l ( ( B q ) B ) , R l ( X ) R l ( ( B q ) B ) and ( X 1 X ) B q + 1 = 0 , we obtain
X 1 X N l ( B q + 1 ) R l ( ( B q ) B ) = N l ( B B w ) R l ( B B w ) = { 0 } .
So, X 1 = X . □
In special cases, Theorem 8 implies the solvability of the next CQMEs.
Corollary 5.
Let C H n × m .
(a)
If A H ( n ) ( k ) , then
X = A w C
is the unique solution to A k + 1 X = A l C , C r ( X ) C r ( A ( A k ) ) .
(b)
If B H ( m ) ( q ) , then
X = C B w
is the unique solution to X B q + 1 = C B q , R l ( X ) R l ( ( B q ) B ) .
Combining the WG inverses of A and B or the dual WG inverses of A and B , we can solve some more CQMEs.
Theorem 9.
Let ( A | B ) H ( n | m ) ( k | q ) and C H n × m .
(a)
Then,
X = A w C B w
is the unique solution to ( A k ) A 2 X B q + 1 = ( A k ) A C B q , X O ( A k , ( B q ) B ) .
(b)
Then,
X = A w C B w
is the unique solution to A k + 1 X B 2 ( B q ) = A k C B ( B q ) , X O ( A ( A k ) , B q ) .
In particular, Theorem 9 gives the next result.
Corollary 6.
Let C H n × m , A H n × n , B H m × m with Ind ( A ) = 1 , and Ind ( B ) = 1 . Then, X = A # C B # is a unique solution to
(a)
A A 2 X B 2 = A A C B , X O ( A , B ) ;
(b)
A 2 X B 2 B = A C B B , X O ( A , B ) .

5. Constrained Quaternion Matrix Minimization Problems

Let $(A|B)\in\mathbb{H}^{(n|m)}(k|q)$ and $C\in\mathbb{H}^{n\times m}$. The main goal of this section is to present the solution of the constrained quaternion matrix minimization problem (CQMMP)
$$\|A^{2}XB^{2}-ACB\|_{F}=\min\quad\text{subject to}\quad X\in\mathcal{O}(A^{k},B^{q}),\qquad (24)$$
and its particular kinds. The following result regarding the decomposition of $A$ and its WG inverse $A^{\textcircled{w}}$, obtained for complex matrices in [39], is extended to quaternion matrices by an analogous argument.
Lemma 8.
If A H ( n ) ( k ) and r = rank ( A k ) , then
A = Q A 1 A 2 0 A 3 Q ,
where Q H n × n is unitary, A 1 H r × r is nonsingular and A 3 H ( n r ) × ( n r ) is nilpotent of index l. Moreover,
A w = Q A 1 1 A 1 2 A 2 0 0 Q .
Based on A w and B w , we solve now CQMMP (24).
Theorem 10.
CQMMP (24) has a uniquely determined solution represented by (16).
Proof. 
The hypothesis X O ( A k , B q ) implies that X = A k Z B q , for some Z H n × m . By Lemma 8, we can write
A = Q 1 A 1 A 2 0 A 3 Q 1 , B = Q 2 B 1 B 2 0 B 3 Q 2 ,
where rank ( A k ) = r 1 , rank ( B q ) = r 2 , A 1 H r 1 × r 1 and B 1 H r 2 × r 2 are nonsingular, A 3 H ( n r 1 ) × ( n r 1 ) and B 1 H ( m r 2 ) × ( m r 2 ) are nilpotent matrices of indices k and q, respectively. Using (Remark 1.3 in ref. [4]), we have B = [ ( B ) ] , which gives
B w = B ( B ) 2 = ( B ) [ ( B ) ] 2 = [ ( B ) ] 2 B = [ ( B ) w ] .
According to Lemma 8, it follows that
A w = Q 1 A 1 1 A 1 2 A 2 0 0 Q 1 , B w = [ ( B ) w ] = Q 2 ( B 1 1 ) 0 B 2 ( B 1 2 ) 0 Q 2 .
Set
Q 1 Z Q 2 = Z 1 Z 2 Z 3 Z 4 , Q 1 C Q 2 = C 1 C 2 C 3 C 4 ,
where Z 1 , C 1 H r 1 × r 2 , Z 2 , C 2 H r 1 × ( m r 2 ) , Z 3 , C 3 H ( n r 1 ) × r 2 , and Z 4 , C 4 H ( n r 1 ) × ( m r 2 ) . For the appropriate matrices S 1 H ( n r 1 ) × r 1 and S 2 H ( m r 2 ) × r 2 , we obtain
A 2 X B 2 = A k + 2 Z B q + 2 = Q 1 A 1 2 A 1 A 2 + A 2 A 3 0 A 3 2 A 1 k S 1 0 0 Q 1 Z Q 2 × ( B 1 ) q 0 S 2 0 ( B 1 ) 2 0 B 2 B 1 + B 3 B 2 ( B 3 ) 2 Q 2 = Q 1 A 1 k + 2 A 1 2 S 1 0 0 Z 1 Z 2 Z 3 Z 4 ( B 1 ) q + 2 0 S 2 ( B 1 ) 2 0 Q 2 = Q 1 A 1 k + 2 Z 1 ( B 1 ) q + 2 + A 1 2 S 1 Z 3 ( B 1 ) q + 2 + A 1 k + 2 Z 2 S 2 ( B 1 ) 2 + A 1 2 S 1 Z 4 S 2 ( B 1 ) 2 0 0 0 Q 2
and
ACB = Q 1 A 1 A 2 0 A 3 Q 1 C Q 2 B 1 0 B 2 B 3 Q 2 = Q 1 A 1 A 2 0 A 3 C 1 C 2 C 3 C 4 B 1 0 B 2 B 3 Q 2 = Q 1 G A 1 C 2 B 3 + A 2 C 4 B 3 A 3 C 3 B 1 + A 3 C 4 B 2 A 3 C 4 B 3 Q 2 ,
where G = A 1 C 1 B 1 + A 2 C 3 B 1 + A 1 C 2 B 2 + A 2 C 4 B 2 . Therefore,
A 2 X B 2 ACB F 2 c = A 1 k + 2 Z 1 ( B 1 ) q + 2 + A 1 2 S 1 Z 3 ( B 1 ) q + 2 + A 1 k + 2 Z 2 S 2 ( B 1 ) 2 + A 1 2 S 1 Z 4 S 2 ( B 1 ) 2 G F 2 + A 1 C 2 B 3 + A 2 C 4 B 3 F 2 +   A 3 C 3 B 1 + A 3 C 4 B 2 F 2 +   A 3 C 4 B 3 F 2 .
Because X is a solution to (24) if and only if Z is a solution to A k + 2 Z B q + 2 ACB F = min , one can see that
min Z i for i = 1 , , 4 A 1 k + 2 Z 1 ( B 1 ) q + 2 + A 1 2 S 1 Z 3 ( B 1 ) q + 2 + A 1 k + 2 Z 2 S 2 ( B 1 ) 2 + A 1 2 S 1 Z 4 S 2 ( B 1 ) 2 G F 2 = 0 ,
i.e.,
A k + 2 Z B q + 2 ACB F = min A 1 C 2 B 3 + A 2 C 4 B 3 F 2 +   A 3 C 3 B 1 + A 3 C 4 B 2 F 2 +   A 3 C 4 B 3 F 2
for arbitrary Z i for i = 2 , 3 , 4 and
Z 1 = A 1 ( k + 2 ) G ( B 1 ) ( q + 2 ) A 1 k S 1 Z 3 Z 2 S 2 ( B 1 ) q A 1 k S 1 Z 4 S 2 ( B 1 ) q .
If we substitute (27) into
X = A k Z B q = Q 1 A 1 k S 1 0 0 Z 1 Z 2 Z 3 Z 4 ( B 1 ) q 0 S 2 0 Q 2 = Q 1 A 1 k Z 1 ( B 1 ) q + S 1 Z 3 ( B 1 ) q + A 1 k Z 2 S 2 + S 1 Z 4 S 2 0 0 0 Q 2 ,
we obtain
X = Q 1 A 1 2 G ( B 1 ) 2 0 0 0 Q 2 = Q 1 A 1 1 A 1 2 A 2 0 0 C 1 C 2 C 3 C 4 ( B 1 1 ) 0 B 2 ( B 1 2 ) 0 Q 2 = A w C B w
is a uniquely determined solution of CQMMP (24). □
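Theorem 10 can be probed numerically in the complex setting: build $X^{\star}=A^{\textcircled{w}}CB_{\textcircled{w}}$ from (8) and (9) and compare its objective value in (24) with the values at randomly generated feasible points, all of which have the form $A^{k}ZB^{q}$. The matrices and sample size below are arbitrary choices of ours.

```python
import numpy as np
from numpy.linalg import pinv, matrix_power, norm

def wg_right(A, k):
    return matrix_power(A, k) @ pinv(matrix_power(A, k + 2)) @ A      # Eq. (8)

def wg_left(B, q):
    return B @ pinv(matrix_power(B, q + 2)) @ matrix_power(B, q)      # Eq. (9)

rng = np.random.default_rng(3)
A = np.array([[1., 1., 0.], [0., 0., 1.], [0., 0., 0.]]); k = 2       # Ind(A) = 2
B = np.array([[2., 0.], [1., 0.]]); q = 1                             # Ind(B) = 1
C = rng.standard_normal((3, 2))

X_star = wg_right(A, k) @ C @ wg_left(B, q)
obj = lambda X: norm(A @ A @ X @ B @ B - A @ C @ B, 'fro')

# Every feasible X in O(A^k, B^q) can be written as A^k Z B^q for some Z.
vals = [obj(matrix_power(A, k) @ rng.standard_normal((3, 2)) @ matrix_power(B, q))
        for _ in range(200)]
print(obj(X_star) <= min(vals) + 1e-9)     # X_star attains the minimum of (24)
```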
For Ind ( A ) = 1 or Ind ( B ) = 1 in Theorem 10, the next consequence follows.
Corollary 7.
Let C H n × m , A H n × n , B H m × m with Ind ( A ) = 1 and Ind ( B ) = 1 . Then, X = A # C B # is the unique solution to CQMMP
$$\|A^{2}XB^{2}-ACB\|_{F}=\min\quad\text{subject to}\quad X\in\mathcal{O}(A,B).$$
As special types of CQMMP (24), we can consider CQMMPs:
$$\|A^{2}X-AC\|_{F}=\min\quad\text{subject to}\quad \mathcal{C}_{r}(X)\subseteq\mathcal{C}_{r}(A^{k}),\qquad (29)$$
where C H n × m and A H ( n ) ( k ) ; and
$$\|XB^{2}-CB\|_{F}=\min\quad\text{subject to}\quad \mathcal{R}_{l}(X)\subseteq\mathcal{R}_{l}(B^{q}),\qquad (30)$$
where C H n × m and B H ( m ) ( q ) .
Corollary 8.
(a) CQMMP (29) has a uniquely determined solution by (17).
(b) CQMMP (30) has a uniquely determined solution by (18).

6. Cramer’s-Type Representations of Derived Solutions

Firstly, we obtain the D -representations for solutions to (16) and its special cases (17) and (18).
Theorem 11.
Let ( A | B ) H ( n | m ) ( k | q ) , rank ( A k ) = s 1 , and rank ( B q ) = s 2 . The unique solution X = ( x i j ) H n × m from (16) can be expressed componentwise by
x i j = c i j ( 1 ) α I s 1 , n A k + 2 A k + 2 α α β J s 2 , m B q + 2 B q + 2 β β ,
where C 1 = ( c i j ( 1 ) ) = Φ A C B Ψ . Here, Φ = ( ϕ i j ) and Ψ = ( ψ i j ) are determined, respectively, by
ϕ i t = α I s 1 , n t rdet t A k + 2 A k + 2 t . ( a ^ i . ( 2 ) ) α α ,
ψ f j = β J s 2 , m f cdet f B q + 2 B q + 2 . f ( b ˇ . j ( 2 ) ) β β ,
where a ^ i . ( 2 ) is the ith row of A ^ 2 = A k ( A k + 2 ) and b ˇ . j ( 2 ) is the jth column of B ˇ 2 = ( B q + 2 ) B q .
Proof. 
According to (16) and the D -representations (12) of the right WG inverse A w = a i j w , r and (13) of the left WG inverse A w = a i j w , l , respectively, we have
x i j = g = 1 n p = 1 m a i g w , r c g p b p j w , l = g = 1 n p = 1 m t = 1 n α I s 1 , n t rdet t A k + 2 A k + 2 t . ( a ^ i . ( 2 ) ) α α a t g c g p α I s 1 , n A k + 2 A k + 2 α α × f = 1 m b p f β J s 2 , m f cdet f B q + 2 B q + 2 . f ( b ˇ . j ( 2 ) ) β β β J s 2 , m B q + 2 B q + 2 β β ,
where a ^ i . ( 2 ) is the ith row of A ^ 2 = A k ( A k + 2 ) and b ˇ . j ( 2 ) is the jth column of B ˇ 2 = ( B q + 2 ) B q .
Suppose that c t f = g = 1 n p = 1 m a t g c g p b p f and C ˜ = ( c t f ) = A C B H n × m . If we construct the matrices Φ = ( ϕ i t ) and Ψ = ( ψ f j ) determined by (32) and (33), respectively, then, by putting C 1 = Φ C ˜ Ψ , (31) follows. □
The next corollaries evidently follow from Theorem 11.
Corollary 9.
Under the assumptions A H ( n ) ( k ) with rank ( A k ) = s 1 , and C H n × m , the matrix X = ( x i j ) H n × m defined in (20) is represented as
x i j = c ˜ i j ( 1 ) α I s 1 , n A k + 2 A k + 2 α α ,
where C ˜ 1 = ( c ˜ i j ( 1 ) ) = Φ A C and Φ is determined by (32).
Corollary 10.
Assume that B H ( m ) ( q ) with rank ( B q ) = s 2 , and C H n × m ; the matrix X = ( x i j ) H n × m defined in (18) is represented as
x i j = c ^ i j ( 1 ) β J s 2 , m B q + 2 B q + 2 β β ,
where C ^ 1 = ( c ^ i j ( 1 ) ) = C B Ψ , and Ψ = ( ψ i j ) is determined by (33).
Theorem 12.
Suppose that ( A | B ) H ( n | m ) ( k | q ) , rank ( A k ) = s 1 , and rank ( B q ) = s 2 . The unique solution X = ( x i j ) H n × m from (19) is expressible elementwise in two possible ways.
(1)
x i j = f = 1 m α I s 2 , m f rdet f B q + 2 B q + 2 f . ( ϕ ^ i . ) α α b f j β J s 1 , n A k + 2 A k + 2 β β α I s 2 , m B q + 2 B q + 2 α α ,
where ϕ ^ i . is the ith row of Φ ^ = Φ 1 B q ( B q + 2 ) , and Φ 1 = ( ϕ i p ( 1 ) ) is determined by
ϕ i p ( 1 ) = t = 1 n a i t β J s 1 , n t cdet t A k + 2 A k + 2 . t ( c ˇ . p ) β β ,
where c ˇ . p is the pth column of C ˇ = ( A k + 2 ) A k C .
(2)
x i j = t = 1 n a i t β J s 1 , n t cdet t A k + 2 A k + 2 . t ( ψ ˇ . j ) β β β J s 1 , n A k + 2 A k + 2 β β α I s 2 , m B q + 2 B q + 2 α α ,
where ψ ˇ . j is the jth column of Ψ ˇ = ( A k + 2 ) A k Ψ 1 , and Ψ 1 = ( ψ g j ( 1 ) ) is determined by
ψ g j ( 1 ) = f = 1 m α I s 2 , m f rdet f B q + 2 B q + 2 f . ( c ^ g . ) α α b f j ,
where c ^ g . is the gth row of C ^ = C B q ( B q + 2 ) .
Proof. 
According to (19) and the D -representations (13) and (12) of the left WG inverse A w = a i g w , l and the right WG inverse B w = b p j w , r , respectively, we have
x i j = g = 1 n p = 1 m a i g w , l c g p b p j w , r = g = 1 n p = 1 m t = 1 n a i t β J s 1 , n t cdet t A k + 2 A k + 2 . t ( a ˇ . g ( 2 ) ) β β c g p β J s 1 , n A k + 2 A k + 2 β β × f = 1 m α I s 2 , m f rdet f B q + 2 B q + 2 f . ( b ^ p . ( 2 ) ) α α b f j α I s 2 , m B q + 2 B q + 2 α α ,
where a ˇ . g ( 2 ) is the gth column of A ˇ 2 = ( A k + 2 ) A k , and b ^ p . ( 2 ) is the pth row of B ^ 2 = B q ( B q + 2 ) .
To obtain more explicit formulas, we perform some convolutions in (38).
Denote C ˇ = ( c ˇ i j ) = A ˇ 2 C . Then,
g = 1 n β J s 1 , n t cdet t A k + 2 A k + 2 . t ( a ˇ . g ( 2 ) ) β β c g p = β J s 1 , n t cdet t A k + 2 A k + 2 . t ( c ˇ . p ) β β .
Further, put
ϕ i p ( 1 ) = t = 1 n a i t β J s 1 , n t cdet t A k + 2 A k + 2 . t ( c ˇ . p ) β β
and construct the matrix Φ 1 = ( ϕ i p ( 1 ) ) H n × m . Finally, Equation (34) follows from the setting
p = 1 m ϕ i p ( 1 ) b ^ p . ( 2 ) = : ϕ ^ i .
Now, denote C ^ = ( c ^ i j ) = C B ^ 2 . Then,
p = 1 m c g p α I s 2 , m f rdet f B q + 2 B q + 2 f . ( b ^ p . ( 2 ) ) α α = α I s 2 , m f rdet f B q + 2 B q + 2 f . ( c ^ g . ) α α .
We put
ψ g j ( 1 ) = f = 1 m α I s 2 , m f rdet f B q + 2 B q + 2 f . ( c ^ g . ) α α b f j
and determine the matrix Ψ 1 = ( ψ g j ( 1 ) ) . Because of g = 1 n a ˇ . g ( 2 ) ψ g j ( 1 ) = ψ ˇ . j , Equation (36) holds. □
We obtain the next results when $A=I_{n}$ or $B=I_{m}$ in Theorem 12.
Corollary 11.
Suppose that A H ( n ) ( k ) with rank ( A k ) = s 1 , and C H n × m . The matrix X = ( x i j ) H n × m defined in (17) can be represented as
x i j = t = 1 n a i t β J s 1 , n t cdet t A k + 2 A k + 2 . t ( c ˇ . j ) β β β J s 1 , n A k + 2 A k + 2 β β ,
where c ˇ . j is the jth column of C ˇ = ( A k + 2 ) A k C .
Corollary 12.
Suppose that B H ( m ) ( q ) with rank ( B q ) = s 2 , and C H n × m . The matrix X = ( x i j ) H n × m defined in (18) is represented as
x i j = f = 1 m α I s 2 , m f rdet f B q + 2 B q + 2 f . ( c ^ i . ) α α b f j β J s 1 , n A k + 2 A k + 2 β β α I s 2 , m B q + 2 B q + 2 α α ,
where $\hat{c}_{i.}$ is the $i$th row of $\hat{C}=CB^{q}(B^{q+2})^{*}$.
The remaining two theorems can be proved similarly to the proofs of Theorems 11 and 12.
Theorem 13.
Let ( A | B ) H ( n | m ) ( k | q ) , rank ( A k ) = s 1 , and rank ( B q ) = s 2 . The unique solution X = ( x i j ) H n × m from (22) can be expressed componentwise by
x i j = f = 1 m α I s 2 , m f rdet f B q + 2 B q + 2 f . ( ϕ i . ( 2 ) ) α α b f j α I s 1 , n A k + 2 A k + 2 α α α I s 2 , m B q + 2 B q + 2 α α ,
where ϕ i . ( 2 ) is the ith row of Φ 2 = ( ϕ i j ( 2 ) ) = Φ A C B q B q + 2 , where Φ is determined by (32).
Theorem 14.
Let ( A | B ) H ( n | m ) ( k | q ) , rank ( A k ) = s 1 , and rank ( B q ) = s 2 . The unique solution X = ( x i j ) H n × m from (23) can be expressed componentwise by
x i j = t = 1 n a i t β J s 1 , n t cdet t A k + 2 A k + 2 . t ( ψ . j ( 2 ) ) β β β J s 1 , n A k + 2 A k + 2 β β β J s 2 , m B q + 2 B q + 2 β β ,
where ψ . j ( 2 ) is the jth column of Ψ 2 = ( A k + 2 ) A k C B Ψ and Ψ is determined by (33).

7. An Illustrative Example

Let us use corresponding Cramer’s rules to find the WG–dual WG solutions to CQMEs with the given matrices:
A = k j 0 i 1 j i + k j 1 + j k 0 i 0 i + k 1 j i i k , B = 4 k 4 j 5 i 2 j 2 k 3 i 1 k , C = i 0 1 0 k 0 k 0 j 0 j 0 .
We have $\operatorname{rank}(A)=3$, $\operatorname{rank}(A^{3})=\operatorname{rank}(A^{2})=2$, $\operatorname{rank}(B)=2$, and $\operatorname{rank}(B^{2})=\operatorname{rank}(B^{3})=1$. So, $k=\operatorname{Ind}(A)=2$ and $q=\operatorname{Ind}(B)=2$. By Theorem 11, Cramer's rule for the solution (31) is computed as follows.
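Computations like those below can be checked numerically through the complex adjoint representation $\chi(A)=\begin{pmatrix}A_{1}&A_{2}\\-\overline{A_{2}}&\overline{A_{1}}\end{pmatrix}$ of a quaternion matrix $A=A_{1}+A_{2}j$: the quaternion rank is half the complex rank of $\chi(A)$, $\chi$ respects products and conjugate transposes, and $\chi(A^{\dagger})=\chi(A)^{\dagger}$, so expressions such as $A^{k}(A^{k+2})^{\dagger}A$ can be evaluated with complex arithmetic. A minimal helper sketch follows (the names and this verification route are ours, not the paper's method).

```python
import numpy as np

def chi(A1, A2):
    """Complex adjoint of the quaternion matrix A = A1 + A2*j."""
    return np.block([[A1, A2], [-A2.conj(), A1.conj()]])

def quat_rank(A1, A2, tol=1e-10):
    """Rank over H equals half the rank of the complex adjoint representation."""
    return np.linalg.matrix_rank(chi(A1, A2), tol=tol) // 2

# Example: the 1x1 quaternion matrix [i + j] corresponds to A1 = [[1j]], A2 = [[1]].
print(quat_rank(np.array([[1j]]), np.array([[1.0 + 0j]])))   # -> 1
```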
1. Compute Φ by (32) and Ψ by (33).
Φ = 15 5 i 10 j 5 j 10 i 15 k 5 5 j 10 i 15 k 5 i 5 k 15 + 25 j 5 i 25 10 j 5 j 15 + 10 j 5 i + 5 k 15 10 j 5 + 5 j , Ψ = 11 11 i 11 j 11 i 11 11 k 11 j 11 k 11 .
2. Taking into account that α I 2 , 4 A 4 A 4 α α = 25 and β I 1 , 3 B 4 B 4 β β = 33 , from (31), it follows that
X = 1 15 7 5 i + 2 j 5 7 i 2 k 2 7 j 5 k 5 j 9 k 9 j + 5 k 5 + 9 i 7 5 i + 7 j 5 k 5 7 i 5 j 7 k 7 + 5 i 7 j 5 k 9 5 i 5 9 i 9 j 5 k .
Due to Theorem 12, Cramer's rule for the solution (19) can be found as follows.
1. Compute Φ 1 by (35).
Φ 1 = 25 j 25 i + 25 k 25 k 55 i + 10 k 45 65 j 55 + 10 j 25 + 25 j 50 i 25 i 25 k 10 + 55 j 65 i + 45 k 10 i 55 k .
Then,
Φ ^ = Φ 1 B q ( B q + 2 ) = 225 225 i 900 j 75 + 75 i 300 k 300 + 75 j + 75 k 90 + 1890 i 495 j 855 k 630 30 i + 285 j 165 k 165 285 i 30 j 630 k 1125 225 i 675 j 225 k 75 + 375 i + 75 j 225 k 225 75 i + 375 j + 75 k 885 495 i 1890 j 90 k 165 + 285 i + 30 j 630 k 630 30 i + 285 j + 165 k ,
and finally by (34),
X = 1 11 60 i + 15 j + 6 k 60 15 j 15 k 19 + 19 i 76 k 57 33 i 126 j + 6 k 33 + 57 i + 6 j + 126 k 159.6 7.6 i + 72.2 j 41.8 k 15 45 i + 15 j 75 k 45 + 15 i 75 j 15 k 19 + 95 i + 19 j 75 k 6 126 i + 33 j 57 k 126 + 6 i 57 j 33 k 41.8 + 72.2 i + 7.6 j 159.6 k .
Similarly, based on Theorem 13, Cramer’s rule applied to the solution (39) yields
X = 1 11 54 i + 15 j 21 k 54 21 j 15 k 19 + 26.6 i 68.4 k 27 15 i 60 j 15 + 27 i + 60 k 76 + 34.2 j 19 k 15 39 i + 15 j 81 k 39 + 15 i 81 j 15 k 19 + 102.6 i + 19 j 49.4 k 60 i + 15 j 27 k 60 27 j 15 k 19 + 34.2 i 76 k .
By utilizing Cramer’s rule from Theorem 14, the solution (40) is
X = 1 15 100 i + 15 j 35 k 100 35 j 15 k 15 + 35 i 100 k 117 33 i 206 j + 6 k 33 + 117 i + 6 j + 206 k 206 6 i + 117 j 33 k 15 65 i + 15 j 135 k 65 + 15 i 135 j 15 k 15 + 135 i + 15 j 65 k 6 206 i + 33 j 117 k 206 + 6 i 117 j 33 k 33 + 117 i + 6 j 206 k .

8. Conclusions

This research focused on characterizations and expressions of the weak group (WG) inverse and its dual over the quaternion skew field. We introduce a dual to the weak group inverse for the first time in the literature and present new characterizations for both the WG inverse and its dual, which we refer to as the right and left weak group inverses for quaternion matrices, respectively. In particular, within the framework of non-commutative row–column determinants (recently introduced by one of the authors), we provide determinantal representations of both the right and left WG inverses as direct methods for their construction. For complex matrices, substituting noncommutative determinants with ordinary determinants yields novel results concerning determinantal representations of the WG inverse and its dual. Additionally, our research addresses solving the quaternion restricted two-sided matrix equation AXB = C along with an associated approximation problem that can be expressed in terms of solutions involving the right and left WG inverses. Utilizing the theory of noncommutative row–column determinants, we derive Cramer’s rules for computing these solutions based on determinantal representations of both the right and left WG inverses. A numerical example is provided to illustrate these findings.

Author Contributions

Conceptualization, I.K. and D.M.; methodology, P.S.; software, I.K.; validation, I.K. and D.M.; formal analysis, P.S.; investigation, I.K. and D.M.; resources, P.S.; data curation, I.K.; writing—original draft preparation, I.K. and D.M.; writing—review and editing, I.K. and P.S.; visualization, I.K.; supervision, P.S.; project administration, D.M.; funding acquisition, P.S. and D.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported for Dijana Mosić and Predrag Stanimirović by the Ministry of Education, Science and Technological Development, Republic of Serbia, Grant 451-03-47/2023-01/200124; for Predrag Stanimirović by the Science Fund of the Republic of Serbia, Grant No. 7750185, Quantitative Automata Models: Fundamental Problems and Applications—QUAM.

Data Availability Statement

Data are contained within this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Baksalary, O.M.; Trenkler, G. Core inverse of matrices. Linear Multilinear Algebra 2010, 58, 681–697.
  2. Prasad, K.M.; Mohana, K.S. Core-EP inverse. Linear Multilinear Algebra 2014, 62, 792–802.
  3. Kyrchei, I.I. Determinantal representations of the quaternion core inverse and its generalizations. Adv. Appl. Clifford Algebras 2019, 29, 104.
  4. Gao, Y.; Chen, J. Pseudo core inverses in rings with involution. Commun. Algebra 2018, 46, 38–50.
  5. Kyrchei, I.I. Weighted quaternion core-EP, DMP, MPD, and CMP inverses and their determinantal representations. Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A. Matemáticas 2020, 114, 198.
  6. Wang, H.X.; Chen, J.L. Weak group inverse. Open Math. 2018, 16, 1218–1232.
  7. Mosić, D.; Stanimirović, P.S. Representations for the weak group inverse. Appl. Math. Comput. 2021, 397, 125957.
  8. Mosić, D.; Zhang, D.C. Weighted weak group inverse for Hilbert space operators. Front. Math. China 2020, 15, 709–726.
  9. Zhou, Y.K.; Chen, J.L.; Zhou, M.M. m-weak group inverses in a ring with involution. Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A. Matemáticas 2021, 115, 1–13.
  10. Zhou, M.M.; Chen, J.L.; Zhou, Y.K. Weak group inverses in proper *-rings. J. Algebra Appl. 2020, 19, 2050238.
  11. Cai, J.; Chen, G. On determinantal representation for the generalized inverse $A^{(2)}_{T,S}$ and its applications. Numer. Linear Algebra Appl. 2007, 14, 169–182.
  12. Liu, X.; Yu, Y.; Wang, H. Determinantal representation of the weighted generalized inverse. Appl. Math. Comput. 2009, 208, 556–563.
  13. Stanimirović, P.S.; Djordjević, D.S. Full-rank and determinantal representation of the Drazin inverse. Linear Algebra Appl. 2000, 311, 31–51.
  14. Purushothama, D.S. A determinantal representation of core EP inverse. Aust. J. Math. Anal. Appl. 2023, 20, 11.
  15. Aslaksen, H. Quaternionic determinants. Math. Intell. 1996, 18, 57–65.
  16. Cohen, N.; De Leo, S. The quaternionic determinant. Electron. J. Linear Algebra 2000, 7, 100–111.
  17. Zhang, F.Z. Quaternions and matrices of quaternions. Linear Algebra Appl. 1997, 251, 21–57.
  18. Kyrchei, I.I. The theory of the column and row determinants in a quaternion linear algebra. In Advances in Mathematics Research 15; Baswell, A.R., Ed.; Nova Science Publishers: New York, NY, USA, 2012; pp. 301–359.
  19. Wang, H.X.; Gao, J.; Liu, X. The WG inverse and its application in a constrained matrix approximation problem. ScienceAsia 2022, 48, 361–368.
  20. Took, C.C.; Mandic, D.P. Augmented second-order statistics of quaternion random signals. Signal Process. 2011, 91, 214–224.
  21. Qi, L.; Luo, Z.Y.; Wang, Q.W.; Zhang, X. Quaternion matrix optimization: Motivation and analysis. J. Optim. Theory Appl. 2022, 193, 621–648.
  22. Regalia, P.A.; Mitra, S.K. Kronecker products, unitary matrices and signal processing applications. SIAM Rev. 1989, 31, 586–613.
  23. Wang, Q.W.; Wang, X.X. Arnoldi method for large quaternion right eigenvalue problem. J. Sci. Comput. 2020, 58, 1–20.
  24. Şimşek, S. Least-squares solutions of generalized Sylvester-type quaternion matrix equations. Adv. Appl. Clifford Algebras 2023, 33, 28.
  25. He, Z.-H.; Tian, J.; Yu, S.-W. A system of four generalized Sylvester matrix equations over the quaternion algebra. Mathematics 2024, 12, 2341.
  26. He, Z.H.; Qin, W.L.; Tian, J.; Wang, X.X.; Zhang, Y. A new Sylvester-type quaternion matrix equation model for color image data transmission. Comp. Appl. Math. 2024, 43, 227.
  27. Zhang, C.Q.; Wang, Q.W.; Dmytryshyn, A.; He, Z.H. Investigation of some Sylvester-type quaternion matrix equations with multiple unknowns. Comp. Appl. Math. 2024, 43, 181.
  28. Xie, M.; Wang, Q.-W.; Zhang, Y. The BiCG algorithm for solving the minimal Frobenius norm solution of generalized Sylvester tensor equation over the quaternions. Symmetry 2024, 16, 1167.
  29. Si, K.W.; Wang, Q.W.; Xie, L.M. A classical system of matrix equations over the split quaternion algebra. Adv. Appl. Clifford Algebras 2024, 34, 51.
  30. Liu, X.; Shi, T.; Zhang, Y. Solution to several split quaternion matrix equations. Mathematics 2024, 12, 1707.
  31. Gao, Z.-H.; Wang, Q.-W.; Xie, L. The (anti-)η-Hermitian solution to a novel system of matrix equations over the split quaternion algebra. Math. Meth. Appl. Sci. 2024, 47, 13896–13913.
  32. Shi, L.; Wang, Q.-W.; Xie, L.-M.; Zhang, X.-F. Solving the dual generalized commutative quaternion matrix equation AXB = C. Symmetry 2024, 16, 1359.
  33. Zhang, Y.; Wang, Q.-W.; Xie, L.-M. The Hermitian solution to a new system of commutative quaternion matrix equations. Symmetry 2024, 16, 361.
  34. Chen, Y.; Wang, Q.-W.; Xie, L.-M. Dual quaternion matrix equation AXB = C with applications. Symmetry 2024, 16, 287.
  35. Kyrchei, I.I.; Mosić, D.; Stanimirović, P.S. MPD-DMP-solutions to quaternion two-sided restricted matrix equations. Comput. Appl. Math. 2021, 40, 177.
  36. Kyrchei, I.I.; Mosić, D.; Stanimirović, P.S. MPCEP-*CEPMP-solutions of some restricted quaternion matrix equations. Adv. Appl. Clifford Algebras 2022, 32, 16.
  37. Song, G.J.; Wang, Q.W.; Chang, H.X. Cramer rule for the unique solution of restricted matrix equations over the quaternion skew field. Comput. Math. Appl. 2011, 61, 1576–1589.
  38. Kyrchei, I.I. Determinantal representations of the Moore–Penrose inverse over the quaternion skew field and corresponding Cramer's rules. Linear Multilinear Algebra 2011, 59, 413–431.
  39. Wang, H. Core-EP decomposition and its applications. Linear Algebra Appl. 2016, 508, 289–300.