Article

Characterizations of Matrix Equalities for Generalized Inverses of Matrix Products

Yongge Tian
Shanghai Business School, College of Business and Economics, Shanghai 201499, China
Axioms 2022, 11(6), 291; https://doi.org/10.3390/axioms11060291
Submission received: 3 May 2022 / Revised: 29 May 2022 / Accepted: 7 June 2022 / Published: 14 June 2022

Abstract

This paper considers how to construct and describe matrix equalities that are composed of algebraic operations of matrices and their generalized inverses. We select a group of known and new reverse-order laws for generalized inverses of several matrix products and derive various necessary and sufficient conditions for them to hold using the matrix rank method and the block matrix method.

1. Introduction

Throughout, let C^{m×n} denote the collection of all m × n complex matrices; A* denote the conjugate transpose of A; r(A) denote the rank of A, i.e., the maximum order of an invertible submatrix of A; R(A) = {Ax | x ∈ C^n} and N(A) = {x ∈ C^n | Ax = 0} denote the range and the null space of a matrix A ∈ C^{m×n}, respectively; I_m denote the identity matrix of order m; and [A, B] denote a columnwise partitioned matrix consisting of the two submatrices A and B. The Moore–Penrose generalized inverse of A ∈ C^{m×n}, denoted by A†, is the unique matrix X ∈ C^{n×m} that satisfies the four Penrose equations:
(1) AXA = A,  (2) XAX = X,  (3) (AX)* = AX,  (4) (XA)* = XA,
see [1]. Following Penrose, a matrix X is called an {i, …, j}-generalized inverse of A, denoted by A^{(i,…,j)}, if it satisfies the ith, …, jth equations in (1). The collection of all {i, …, j}-generalized inverses of A is denoted by {A^{(i,…,j)}}. There are in all 15 types of {i, …, j}-generalized inverses of A by definition. In particular, a matrix X is called an inner inverse of A if it satisfies AXA = A, and it is denoted by A^{(1)} = A⁻.
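As a quick numerical aside (this sketch is ours, not part of the original paper, and assumes the numpy library), the four Penrose equations in (1) can be verified directly for the Moore–Penrose inverse returned by numpy.linalg.pinv:

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))   # a 5 x 4 matrix of rank 3
X = np.linalg.pinv(A)                                           # X = A^dagger

print(np.allclose(A @ X @ A, A))              # (1) AXA = A
print(np.allclose(X @ A @ X, X))              # (2) XAX = X
print(np.allclose((A @ X).conj().T, A @ X))   # (3) (AX)* = AX
print(np.allclose((X @ A).conj().T, X @ A))   # (4) (XA)* = XA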
In this paper, we focus our attention on {1}-generalized inverses of matrices. As usual, we denote matrix equalities composed of {1}-generalized inverses by
f(A₁, A₂, …, A_p) = g(B₁, B₂, …, B_q),  (2)
where A₁, A₂, …, A_p, B₁, B₂, …, B_q are given matrices of appropriate sizes. For a given algebraic matrix equality of this kind, a primary task is to determine clear and intrinsic identifying conditions for it to hold. However, there are no generally effective rules or techniques for characterizing a given algebraic equality composed of ordinary operations of matrices and their generalized inverses, because of the noncommutativity of matrix algebra and the singularity of the matrices involved. In view of this fact, only a few special cases of (2) with simple and tractable forms have been characterized satisfactorily in the theory of generalized inverses. As well-known examples of (2), we mention the following two matrix equalities:
(AB)⁻ = B⁻A⁻,   (ABC)⁻ = C⁻B⁻A⁻,  (3)
where A ∈ C^{m×n}, B ∈ C^{n×p}, and C ∈ C^{p×q}. Obviously, they can be viewed as direct extensions of the two ordinary reverse-order laws (AB)⁻¹ = B⁻¹A⁻¹ and (ABC)⁻¹ = C⁻¹B⁻¹A⁻¹ for products of two or three invertible matrices of the same size, and therefore, they are usually called the reverse-order laws for generalized inverses of the matrix products AB and ABC, respectively. Apparently, the two reverse-order laws in (3) and their special forms, such as (AB)† = B†A† and (ABC)† = C†B†A†, seem simple and neat in comparison with many other complicated matrix equalities that involve generalized inverses. On the other hand, since MM⁻ = I_m and M⁻M = I_n do not necessarily hold for a singular matrix M, the two reverse-order laws in (3) do not necessarily hold for singular matrices; a short numerical counterexample follows this paragraph. Therefore, it is a fundamental task to determine necessary and sufficient conditions for the two reverse-order laws in (3) to hold before we can utilize them in calculations involving matrices and their generalized inverses. In fact, these laws are classic objects in the theory of generalized inverses of matrices and have been studied by many authors since the 1960s; see, e.g., [2,3,4,5,6,7,8,9,10,11,12,13,14] for the historical perspective and development of the subject area of reverse-order laws.
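The following minimal sketch (our own illustration, assuming numpy) exhibits a pair of singular matrices for which the Moore–Penrose reverse-order law fails:

import numpy as np

A = np.array([[1., 0.],
              [0., 0.]])
B = np.array([[1., 1.],
              [1., 1.]])
lhs = np.linalg.pinv(A @ B)                    # (AB)^dagger
rhs = np.linalg.pinv(B) @ np.linalg.pinv(A)    # B^dagger A^dagger
print(np.allclose(lhs, rhs))                   # False: the reverse-order law fails here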
In addition to the ordinary reverse-order laws in (3), there are many other kinds of simple and complicated algebraic equalities that are composed of mixed reverse-order products of given matrices and their generalized inverses, such as
(AB)⁻ = B⁻(ABB⁻)⁻,   (AB)⁻ = (A⁻AB)⁻A⁻,  (4)
(ABC)⁻ = (BC)⁻B(AB)⁻,   (ABC)⁻ = C⁻(A⁻ABCC⁻)⁻A⁻,  (5)
(ABCD)⁻ = (CD)⁻C(BC)⁻B(AB)⁻,  (6)
(ABCDE)⁻ = (CDE)⁻CD(BCD)⁻BC(ABC)⁻.  (7)
These equalities are usually called the mixed or nested reverse-order laws for generalized inverses of matrices. Clearly, these reverse-order laws of special kinds are all constructed from ordinary algebraic operations of the given matrices and their generalized inverses, and each of them has a reasonable interpretation; in particular, they reduce to the reverse-order laws for standard inverses of matrix products when the given matrices are all invertible. Admittedly, knowing how to deal with a given matrix equality composed of matrices and their generalized inverses is a difficult problem. In fact, these kinds of problems have no uniformly acceptable solutions, and there is no general algebraic technique that prescribes how to handle arbitrary complicated matrix operations and matrix equalities.
The rest of this paper is organized as follows. In Section 2, the author introduces a group of known formulas, facts, and results about ranks, ranges, and generalized inverses. In Section 3, the author derives several groups of equivalent facts related to the matrix equalities in (3)–(7) and gives some of their consequences. Section 4 gives some remarks and further research problems pertaining to characterizations of matrix equalities for generalized inverses of matrix products.

2. Some Preliminaries

We begin with presentations and expositions of a series of known facts and results regarding matrices and their ordinary operations, which can be found in various reference books about linear algebra and matrix theory (cf. [2,15,16,17]).
Note from the definitions of generalized inverses of a matrix that they are in fact defined to be (common) solutions of some matrix equations. Thus, analytical expressions of generalized inverses of matrices, as shown below, can be written as certain matrix-valued functions with one or more variable matrices.
Lemma 1
([1]). Let A ∈ C^{m×n}. Then, the general expression of the {1}-generalized inverses A⁻ of A can be written as
A⁻ = A† + F_A U + V E_A,
where E_A = I_m − AA†, F_A = I_n − A†A, and U, V ∈ C^{n×m} are arbitrary.
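As a hedged numerical illustration of Lemma 1 (ours, assuming numpy; the helper name random_inner_inverse is ad hoc), every choice of U and V in the parametrization yields a solution of AXA = A:

import numpy as np

def random_inner_inverse(A, rng):
    """Return A^- = A^dagger + F_A U + V E_A for random U, V (Lemma 1)."""
    m, n = A.shape
    Ap = np.linalg.pinv(A)
    E_A = np.eye(m) - A @ Ap       # E_A = I_m - A A^dagger
    F_A = np.eye(n) - Ap @ A       # F_A = I_n - A^dagger A
    U = rng.standard_normal((n, m))
    V = rng.standard_normal((n, m))
    return Ap + F_A @ U + V @ E_A

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 5))   # a rank-2, 4 x 5 matrix
for _ in range(3):
    G = random_inner_inverse(A, rng)
    assert np.allclose(A @ G @ A, A)    # the defining equation AXA = A
print("all sampled matrices satisfy AXA = A")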
There is much good to be said about equalities and inequalities for ranks of matrices. In what follows, we present a series of well-known or established results and facts concerning ranks of matrices, which we shall use to deal with matrix equality problems and matrix set inclusion problems with regard to the generalized inverses of matrix products described above.
Lemma 2.
Let A ∈ C^{m×n}, B ∈ C^{m×k}, A₁ ∈ C^{m×n₁}, A₂ ∈ C^{m×n₂}, B₁ ∈ C^{m×p₁}, and B₂ ∈ C^{m×p₂}. Then,
R(A) ⊆ R(B) and r(A) = r(B) ⟺ R(A) = R(B),
R(A₁) = R(A₂) and R(B₁) = R(B₂) ⟹ r[A₁, B₁] = r[A₂, B₂].
Lemma 3
([18]). Let A ∈ C^{m×n}, B ∈ C^{m×k}, C ∈ C^{l×n}, and D ∈ C^{l×k}. Then,
r[A, B] = r(A) + r(E_A B) = r(B) + r(E_B A),
r[A; C] = r(A) + r(CF_A) = r(C) + r(AF_C),
r[A, B; C, 0] = r(B) + r(C) + r(E_B A F_C),
r[A, B; C, D] = r(A) + r[0, E_A B; CF_A, D − CA†B].
In particular, the following results hold:
(a) 
r[A, B] = r(A) ⟺ R(B) ⊆ R(A) ⟺ AA†B = B ⟺ E_A B = 0.
(b) 
r[A; C] = r(A) ⟺ R(C*) ⊆ R(A*) ⟺ CA†A = C ⟺ CF_A = 0.
(c) 
r[A, B; C, 0] = r(B) + r(C) ⟺ E_B A F_C = 0.
(d) 
r[A, B; C, D] = r(A) ⟺ R(B) ⊆ R(A), R(C*) ⊆ R(A*), and CA†B = D.
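As a numerical spot-check of the first formula in Lemma 3 (our own sketch, assuming numpy), the equality r[A, B] = r(A) + r(E_A B) can be tested on random matrices:

import numpy as np

rng = np.random.default_rng(2)
m, n, k = 6, 4, 3
A = rng.standard_normal((m, 2)) @ rng.standard_normal((2, n))   # a rank-2 matrix
B = rng.standard_normal((m, k))
E_A = np.eye(m) - A @ np.linalg.pinv(A)                         # E_A = I_m - A A^dagger

lhs = np.linalg.matrix_rank(np.hstack([A, B]))                  # r[A, B]
rhs = np.linalg.matrix_rank(A) + np.linalg.matrix_rank(E_A @ B) # r(A) + r(E_A B)
print(lhs, rhs)   # the two values agree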
Lemma 4
([8]). Let A ∈ C^{m×n}, B ∈ C^{m×k}, C ∈ C^{l×n}, and D ∈ C^{l×k}. Then,
r(D − CA†B) = r[A*AA*, A*B; CA*, D] − r(A).
In particular,
r(D − CA†B) = r[AA*, B; CA*, D] − r(A)  if R(B) ⊆ R(A),
r(D − CA†B) = r[A*A, A*B; C, D] − r(A)  if R(C*) ⊆ R(A*).
Lemma 5
([18]). Let A ∈ C^{m×n}, B ∈ C^{n×p}, and C ∈ C^{p×q}. Then,
r(AB) = r(A) + r(B) − n + r((I_n − BB⁻)(I_n − A⁻A)),  (18)
r(ABC) = r(AB) + r(BC) − r(B) + r((I_n − (BC)(BC)⁻)B(I_p − (AB)⁻(AB)))  (19)
hold for all A⁻, B⁻, (AB)⁻, and (BC)⁻. In particular, the following results hold:
(a) 
The rank of A B satisfies the following inequalities:
max{0, r(A) + r(B) − n} ≤ r(A) + r(B) − r[A*, B] ≤ r(AB) ≤ min{r(A), r(B)}.
(b) 
The rank of A B C satisfies the following inequalities:
r(ABC) ≤ min{r(AB), r(BC)} ≤ min{r(A), r(B), r(C)},
r(ABC) ≥ max{0, r(AB) + r(BC) − r(B)} ≥ max{0, r(A) + r(B) + r(C) − r[A*, B] − r[B*, C]} ≥ max{0, r(A) + r(B) + r(C) − n − p},
r(ABC) ≥ r(AB) + r(C) − r[(AB)*, C] ≥ max{0, r(AB) + r(C) − p} ≥ max{0, r(A) + r(B) + r(C) − n − p},
r(ABC) ≥ r(A) + r(BC) − r[A*, BC] ≥ max{0, r(A) + r(BC) − n} ≥ max{0, r(A) + r(B) + r(C) − n − p}.
(c) 
r(ABC) = r(B) ⟺ r(AB) = r(BC) = r(B).
(d) 
r(ABC) = r(A) + r(B) + r(C) − n − p ⟺ r(ABC) = r(AB) + r(C) − p and r(AB) = r(A) + r(B) − n.
(e) 
r(ABC) = r(A) + r(B) + r(C) − n − p ⟺ r(ABC) = r(A) + r(BC) − n and r(BC) = r(B) + r(C) − p.
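Formula (18) can also be spot-checked numerically; the following minimal sketch (ours, assuming numpy) uses random {1}-inverses built as in Lemma 1:

import numpy as np

rng = np.random.default_rng(3)
rank = np.linalg.matrix_rank
m, n, p = 5, 4, 6
A = rng.standard_normal((m, 2)) @ rng.standard_normal((2, n))   # rank 2
B = rng.standard_normal((n, 3)) @ rng.standard_normal((3, p))   # rank 3

for _ in range(3):
    # random {1}-inverses of A and B, constructed as in Lemma 1 (with one free term each)
    Am = np.linalg.pinv(A) + (np.eye(n) - np.linalg.pinv(A) @ A) @ rng.standard_normal((n, m))
    Bm = np.linalg.pinv(B) + rng.standard_normal((p, n)) @ (np.eye(n) - B @ np.linalg.pinv(B))
    lhs = rank(A @ B)
    rhs = rank(A) + rank(B) - n + rank((np.eye(n) - B @ Bm) @ (np.eye(n) - Am @ A))
    print(lhs, rhs)   # equal for every sampled pair of {1}-inverses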
Lemma 6
([19,20]). Let A ∈ C^{m×n}, B ∈ C^{m×k}, C ∈ C^{l×n}, and D ∈ C^{l×k} be given. Then,
max_{A⁻ ∈ {A⁻}} r(D − CA⁻B) = min{ r[C, D], r[B; D], r[A, B; C, D] − r(A) }.  (25)
Therefore,
CA⁻B = D for all A⁻ ⟺ [C, D] = 0 or [B; D] = 0 or r[A, B; C, D] = r(A).  (26)
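A sampling-based illustration of formula (25) (our own sketch, assuming numpy): the rank of D − CA⁻B over randomly generated {1}-inverses A⁻ never exceeds the stated minimum, and generically attains it.

import numpy as np

rng = np.random.default_rng(4)
rank = np.linalg.matrix_rank
m, n, k, l = 4, 5, 3, 3
A = rng.standard_normal((m, 2)) @ rng.standard_normal((2, n))   # rank 2
B = rng.standard_normal((m, k))
C = rng.standard_normal((l, 1)) @ rng.standard_normal((1, n))   # rank 1
D = rng.standard_normal((l, 1)) @ rng.standard_normal((1, k))   # rank 1

Ap, Im, In = np.linalg.pinv(A), np.eye(m), np.eye(n)
sampled = max(
    rank(D - C @ (Ap + (In - Ap @ A) @ rng.standard_normal((n, m))
                     + rng.standard_normal((n, m)) @ (Im - A @ Ap)) @ B)
    for _ in range(50)
)
bound = min(rank(np.hstack([C, D])),
            rank(np.vstack([B, D])),
            rank(np.block([[A, B], [C, D]])) - rank(A))
print(sampled, bound)   # sampled <= bound, with equality in generic cases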
There is no doubt that analytical formulas for calculating ranks of matrices can be used to establish and analyze various complicated matrix expressions and matrix equalities. Specifically, the rank equalities and their consequences in the above four lemmas are understandable in elementary linear algebra. When the matrices are given in various concrete forms, these established results can be simplified further by usual computations of matrices, so that we can employ them to describe a variety of concrete matrix equalities that involve products of matrices and their generalized inverses in matrix analysis and applications.
At the end of this section, we give a known result regarding a matrix equality composed of six matrices and their generalized inverses.
Lemma 7
([21]). Let A₁ ∈ C^{m₁×m₂}, A₂ ∈ C^{m₃×m₂}, A₃ ∈ C^{m₃×m₄}, A₄ ∈ C^{m₅×m₄}, A₅ ∈ C^{m₅×m₆}, and A ∈ C^{m₁×m₆} be given. Then, the following five statements are equivalent:
(a) 
The equality A₁A₂⁻A₃A₄⁻A₅ = A holds for all A₂⁻ and A₄⁻.
(b) 
The product A₁A₂⁻A₃A₄⁻A₅ is invariant with respect to the choices of A₂⁻ and A₄⁻, and A₁A₂†A₃A₄†A₅ = A.
(c) 
One of the following six conditions holds:
(i) 
A 1 = 0 and A = 0 .
(ii) 
A 3 = 0 and A = 0 .
(iii) 
A 5 = 0 and A = 0 .
(iv) 
A = 0, A₁A₂†A₃ = 0, R(A₁*) ⊆ R(A₂*), and R(A₃) ⊆ R(A₂).
(v) 
A = 0, A₃A₄†A₅ = 0, R(A₃*) ⊆ R(A₄*), and R(A₅) ⊆ R(A₄).
(vi) 
A = A₁A₂†A₃A₄†A₅, R(A₁*) ⊆ R(A₂*), R(A₅) ⊆ R(A₄), R((A₁A₂†A₃)*) ⊆ R(A₄*), R(A₃A₄†A₅) ⊆ R(A₂), and E_{A₂} A₃ F_{A₄} = 0.
(d) 
One of the following six conditions holds:
(i) 
A 1 = 0 and A = 0 .
(ii) 
A 3 = 0 and A = 0 .
(iii) 
A 5 = 0 and A = 0 .
(iv) 
A = 0 and r[A₂, A₃; A₁, 0] = r(A₂).
(v) 
A = 0 and r[A₄, A₅; A₃, 0] = r(A₄).
(vi) 
A = A₁A₂†A₃A₄†A₅, R([0, A₁]*) ⊆ R([A₃, A₂; A₄, 0]*), R([0; A₅]) ⊆ R([A₃, A₂; A₄, 0]), and r[A₃, A₂; A₄, 0] = r(A₂) + r(A₄).
(e) 
One of the following six conditions holds:
(i) 
A 1 = 0 and A = 0 .
(ii) 
A 3 = 0 and A = 0 .
(iii) 
A 5 = 0 and A = 0 .
(iv) 
A = 0 and r[A₂, A₃; A₁, 0] = r(A₂).
(v) 
A = 0 and r[A₄, A₅; A₃, 0] = r(A₄).
(vi) 
r[−A, 0, A₁; 0, A₃, A₂; A₅, A₄, 0] = r(A₂) + r(A₄).
Obviously, all the preceding formulas and facts belong to the standard repertoire of matrix algebra. Specifically, the rank equalities for block matrices in Lemma 7 are easy to understand and grasp, and thereby they can be utilized, in a technically transparent way, to establish and describe many kinds of concrete matrix expressions and equalities consisting of matrices and their generalized inverses. As a matter of fact, the matrix rank method has come to be regarded as one of the most effective tools for characterizing algebraic matrix equalities, in comparison with other algebraic tools in matrix theory.

3. Set Inclusions for Generalized Inverses of Matrix Products

The formulas and facts in Lemma 7 are explicit in form and easily manageable for different choices of the given matrices, and thereby they can readily be used to solve a wide range of problems concerning algebraic equalities for matrices and their generalized inverses. In this section, we propose a rich variety of matrix set inclusions that originate from the reverse-order laws in (3)–(7) and derive several groups of equivalent statements associated with these matrix set inclusions through the use of the formulas and facts prepared in Section 2.
Referring to Lemma 7, we can illustrate clearly how to describe matrix set inclusions for generalized inverses of different matrices.
Theorem 1.
Let A₁ ∈ C^{m₁×m₂}, A₂ ∈ C^{m₃×m₂}, A₃ ∈ C^{m₃×m₄}, A₄ ∈ C^{m₅×m₄}, A₅ ∈ C^{m₅×m₆}, and A ∈ C^{m₆×m₁}. Then, we have the following results:
(a)
The following five statements are equivalent:
(i)
{A₁A₂⁻A₃A₄⁻A₅} ⊆ {A⁻}, namely, AA₁A₂⁻A₃A₄⁻A₅A = A holds for all A₂⁻ and A₄⁻.
(ii)
{AA₁A₂⁻A₃A₄⁻A₅} ⊆ {AA⁻}.
(iii)
{A₁A₂⁻A₃A₄⁻A₅A} ⊆ {A⁻A}.
(iv) 
A = 0 or r[−A, 0, AA₁; 0, A₃, A₂; A₅A, A₄, 0] = r(A₂) + r(A₄).
(v) 
A = 0 or r[A₃, A₂; A₄, A₅AA₁] = r(A₂) + r(A₄) − r(A).
(b) 
The following four statements are equivalent:
(i) 
{A₂⁻A₃A₄⁻} ⊆ {A⁻}, namely, AA₂⁻A₃A₄⁻A = A holds for all A₂⁻ and A₄⁻.
(ii)
{AA₂⁻A₃A₄⁻} ⊆ {AA⁻}.
(iii)
{A₂⁻A₃A₄⁻A} ⊆ {A⁻A}.
(iv)
A = 0 or r[A₃, A₂; A₄, A] = r(A₂) + r(A₄) − r(A).
(c)
Let A ∈ C^{m×n}, B ∈ C^{p×m}, and C ∈ C^{p×n}. Then, the following four statements are equivalent:
(i)
{A⁻B⁻} ⊆ {C⁻}, namely, CA⁻B⁻C = C holds for all A⁻ and B⁻.
(ii)
{CA⁻B⁻} ⊆ {CC⁻}.
(iii)
{A⁻B⁻C} ⊆ {C⁻C}.
(iv)
C = 0 or r(BA − C) = r(A) + r(B) − r(C) − m.
Proof. 
By definition, the set inclusion {A₁A₂⁻A₃A₄⁻A₅} ⊆ {A⁻} means that AA₁A₂⁻A₃A₄⁻A₅A = A holds for all A₂⁻ and A₄⁻. In this case, replacing A₁ with AA₁ and A₅ with A₅A in Lemma 7(a) and (e), and then simplifying, leads to the equivalence of (i) and (iv) in (a) of this theorem.
Furthermore, it is easy to verify by elementary block matrix operations that the following rank equality:
r[−A, 0, AA₁; 0, A₃, A₂; A₅A, A₄, 0] = r[−A, 0, 0; 0, A₃, A₂; 0, A₄, A₅AA₁] = r(A) + r[A₃, A₂; A₄, A₅AA₁]
holds. Substituting it into (vi) of Lemma 7(e) leads to the equivalence of (iv) and (v) in (a) of this theorem.
Pre- and post-multiplying both sides of the set inclusion in (i) of (a) of this theorem by A lead to (ii) and (iii) in (a), respectively. Conversely, post- and pre-multiplying both sides of the set inclusions in (ii) and (iii) of (a) by A, respectively, lead back to (i) in (a) of this theorem.
Results (b) and (c) are direct consequences of (a) under the given assumptions. □
Mindful of the differences between the two sides of the matrix set inclusions in the above theorem, we may say that the statements in Theorem 1 provide useful strategies and techniques for describing matrix set inclusions via matrix rank equalities; thereby, they can be utilized to construct and solve various equality problems regarding products of matrices and their generalized inverses that appear in matrix theory and its applications.
In the following, we present some applications of the above results to the characterization of reverse-order laws for generalized inverses of products of two or more matrices. Recall that there have been plenty of classic discussions in the literature on the construction and characterization of reverse-order laws for generalized inverses of the matrix product AB, which have motivated, from time to time, in-depth consideration and exploration of various universal algebraic methods for dealing with reverse-order law problems. The first reverse-order law in (3) was proposed and thoroughly studied in the theory of generalized inverses of matrices; see, e.g., [7,9,14,22,23,24]. In view of this fact, we first derive from Theorem 1(c) a group of equivalent facts concerning the matrix set inclusion {(AB)⁻} ⊇ {B⁻A⁻} and its variant forms.
Theorem 2.
Let A ∈ C^{m×n} and B ∈ C^{n×p} be given. Then, the following 23 statements are equivalent:
(i)
{(AB)⁻} ⊇ {B⁻A⁻}.
(ii)
{(AB)⁻} ⊇ {B*(BB*)⁻(A*A)⁻A*}.
(iii)
{(AB)⁻} ⊇ {B†(BB†)⁻(A†A)⁻A†}.
(iv)
{AB(AB)⁻} ⊇ {ABB⁻A⁻}.
(v)
{AB(AB)⁻} ⊇ {ABB*(BB*)⁻(A*A)⁻A*}.
(vi)
{AB(AB)⁻} ⊇ {ABB†(BB†)⁻(A†A)⁻A†}.
(vii)
{(AB)⁻AB} ⊇ {B⁻A⁻AB}.
(viii)
{(AB)⁻AB} ⊇ {B*(BB*)⁻(A*A)⁻A*AB}.
(ix)
{(AB)⁻AB} ⊇ {B†(BB†)⁻(A†A)⁻A†AB}.
(x)
{B(AB)⁻A} ⊇ {BB⁻A⁻A}.
(xi)
{B(AB)⁻A} ⊇ {BB†(BB†)⁻(A†A)⁻A†A}.
(xii)
{(A†ABB†)⁻} ⊇ {(BB†)⁻(A†A)⁻}.
(xiii)
{(B*A*)⁻} ⊇ {(A*)⁻(B*)⁻}.
(xiv)
{(A*ABB*)⁻} ⊇ {(BB*)⁻(A*A)⁻}.
(xv)
{(BB*A*A)⁻} ⊇ {(A*A)⁻(BB*)⁻}.
(xvi)
{((A*A)^{1/2}(BB*)^{1/2})⁻} ⊇ {((BB*)^{1/2})⁻((A*A)^{1/2})⁻}.
(xvii)
{((BB*)^{1/2}(A*A)^{1/2})⁻} ⊇ {((A*A)^{1/2})⁻((BB*)^{1/2})⁻}.
(xviii)
{(AA*ABB*B)⁻} ⊇ {(BB*B)⁻(AA*A)⁻}.
(xix)
{(B*BB*A*AA*)⁻} ⊇ {(A*AA*)⁻(B*BB*)⁻}.
(xx)
AB = 0 or r(AB) = r(A) + r(B) − n.
(xxi)
AB = 0 or (I_n − BB⁻)(I_n − A⁻A) = 0 for some/all A⁻ and B⁻.
(xxii)
N(A) ⊆ R(B) or N(A) ⊇ R(B).
(xxiii)
R(A*) ⊇ N(B*) or R(A*) ⊆ N(B*).
Proof. 
Replacing C with A B in (i) and (iv) of Theorem 1(c), we see that (i) in this theorem holds if and only if A B = 0 or
r(A) + r(B) − r(AB) = r[I_n, B; A, AB] = r[I_n, 0; 0, 0] = n,
establishing the equivalence of (i) and (xx).
By (i) and (v) in Theorem 1(a), (ii) in this theorem holds if and only if A B = 0 or
r(A) + r(B) − r(AB) = r(A*A) + r(BB*) − r(AB) = r[I_n, BB*; A*A, A*ABB*] = r[I_n, 0; 0, 0] = n,
establishing the equivalence of (ii) and (xx).
By (i) and (v) in Theorem 1(a), (iii) in this theorem holds if and only if A B = 0 or
r(A) + r(B) − r(AB) = r(A†A) + r(BB†) − r(AB) = r[I_n, BB†; A†A, A†ABB†] = r[I_n, 0; 0, 0] = n,
establishing the equivalence of (iii) and (xx).
The equivalences of (i) and (xii)–(xx) follow from the following rank equalities:
r(A) = r(A†A) = r(AA*) = r(AA*A),  (27)
r(B) = r(BB†) = r(B*B) = r(BB*B),  (28)
r(AB) = r(B*A*) = r(A†ABB†) = r(A*ABB*) = r(BB*A*A) = r((A*A)^{1/2}(BB*)^{1/2}) = r((BB*)^{1/2}(A*A)^{1/2}) = r(AA*ABB*B) = r(B*BB*A*AA*).  (29)
The equivalences of (i)–(xi) in this theorem follow from (i), (ii), and (iii) in Theorem 1(a). The equivalences of (i) and (xx)–(xxiii) in this theorem were proven in [14]. □
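A numerical illustration of the equivalence (i) ⟺ (xx) in Theorem 2 (our own sketch, assuming numpy; the helper inner is ad hoc): when r(AB) = r(A) + r(B) − n, every product B⁻A⁻ turns out to be a {1}-inverse of AB.

import numpy as np

rng = np.random.default_rng(5)
rank = np.linalg.matrix_rank

def inner(X, rng):
    """An ad hoc random {1}-inverse of X, built as in Lemma 1."""
    Xp = np.linalg.pinv(X)
    r, c = X.shape
    return Xp + (np.eye(c) - Xp @ X) @ rng.standard_normal((c, r)) \
              + rng.standard_normal((c, r)) @ (np.eye(r) - X @ Xp)

# N(A) is spanned by e3 and lies in R(B), so r(AB) = r(A) + r(B) - n holds (n = 3).
A = np.array([[1., 0., 0.],
              [0., 1., 0.]])                    # 2 x 3, rank 2
B = np.array([[1., 0.], [0., 0.], [0., 1.]])    # 3 x 2, rank 2, R(B) contains e3
AB = A @ B
print(rank(AB) == rank(A) + rank(B) - 3)        # True
for _ in range(5):
    G = inner(B, rng) @ inner(A, rng)           # an element of {B^- A^-}
    assert np.allclose(AB @ G @ AB, AB)         # so G is a {1}-inverse of AB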
Theorem 3.
Let A ∈ C^{m×n} and B ∈ C^{n×p}. Then, the following 16 statements are equivalent:
(i) 
{(AB)⁻} ∋ B†A†, i.e., ABB†A†AB = AB.
(ii)
{(AB)⁻} ⊇ {(B*B)⁻B*A*(AA*)⁻}.
(iii)
{AB(AB)⁻} ∋ ABB†A†.
(iv)
{AB(AB)⁻} ⊇ {AB(B*B)⁻B*A*(AA*)⁻}.
(v)
{(AB)⁻AB} ∋ B†A†AB.
(vi)
{(AB)⁻AB} ⊇ {(B*B)⁻B*A*(AA*)⁻AB}.
(vii)
{B(AB)⁻A} ∋ BB†A†A.
(viii)
{B(AB)⁻A} ⊇ {B(B*B)⁻B*A*(AA*)⁻A}.
(ix) 
{ ( A A B B ) } ( B B ) ( A A ) .
(x) 
{ ( A * A B B * ) } ( B B * ) ( A * A ) .
(xi) 
{ ( B B * A * A ) } ( A * A ) ( B B * ) .
(xii) 
{ ( ( A * A ) 1 / 2 ( B B * ) 1 / 2 ) } ( ( B B * ) 1 / 2 ) ( ( A * A ) 1 / 2 ) .
(xiii) 
{ ( ( B B * ) 1 / 2 ( A * A ) 1 / 2 ) ) } ( ( A * A ) 1 / 2 ) ( ( B B * ) 1 / 2 ) .
(xiv) 
{ ( A A * A B B * B ) } ( B B * B ) ( A A * A ) .
(xv) 
{ ( B * B B * A * A A * ) } ( A * A A * ) ( B * B B * ) .
(xvi) 
r(AB) = r(A) + r(B) − r[A*, B].
Proof. 
The equivalence of (i) and (xvi) follows from the well-known rank formula:
r(AB − ABB†A†AB) = r[A*, B] − r(A) − r(B) + r(AB);
see [25,26].
By (i) and (iv) in Theorem 1(b), (ii) in this theorem holds if and only if A B = 0 or
r(A) + r(B) − r(AB) = r(AA*) + r(B*B) − r(AB) = r[B*A*, B*B; AA*, AB] = r([A*, B]*[A*, B]) = r[A*, B].
Note also that A B = 0 is a special case of the above matrix rank equality, thus establishing the equivalence of (ii) and (xvi).
The equivalences of (i) and (ix)–(xvi) in this theorem follow from (27)–(29), and
r[A*, B] = r[(A†A)*, BB†] = r[A*A, BB*] = r[(A*A)^{1/2}, (BB*)^{1/2}] = r[A*AA*, BB*B].
The equivalences of (i)–(viii) in this theorem follow from (i), (ii), and (iii) in Theorem 1(b). □
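A spot-check of the rank formula used in the proof of Theorem 3 (our own sketch, assuming numpy):

import numpy as np

rng = np.random.default_rng(6)
rank = np.linalg.matrix_rank
m, n, p = 5, 4, 6
A = rng.standard_normal((m, 2)) @ rng.standard_normal((2, n))   # rank 2
B = rng.standard_normal((n, 3)) @ rng.standard_normal((3, p))   # rank 3
AB = A @ B

lhs = rank(AB - AB @ np.linalg.pinv(B) @ np.linalg.pinv(A) @ AB)           # r(AB - ABB^dagger A^dagger AB)
rhs = rank(np.hstack([A.conj().T, B])) - rank(A) - rank(B) + rank(AB)      # r[A*, B] - r(A) - r(B) + r(AB)
print(lhs, rhs)   # the two values agree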
In the following, the author presents two groups of results on set inclusions associated with the two reverse-order laws (ABC)⁻ = (BC)⁻B(AB)⁻ and (ABC)⁻ = C⁻B⁻A⁻ and their variant forms for a triple matrix product ABC.
Theorem 4.
Let A ∈ C^{m×n}, B ∈ C^{n×p}, and C ∈ C^{p×q} be given, and denote M = ABC. Then, the following 36 statements are equivalent:
(i) 
{M⁻} ⊇ {(BC)⁻B(AB)⁻}.
(ii) 
{ M } { C * ( B C C * ) B ( A * A B ) A * } .
(iii) 
{ M } { ( B * B C ) B * B B * ( A B B * ) } .
(iv) 
{ M } { C * ( B * B C C * ) B * B B * ( A * A B B * ) A * } .
(v) 
{ M } { C ( B C C ) B ( A A B ) A } .
(vi) 
{ M } { ( B B C ) B B B ( A B B ) } .
(vii) 
{ M } { C ( B B C C ) B B B ( A A B B ) A } .
(viii) 
{ M M } { M ( B C ) B ( A B ) } .
(ix) 
{ M M } { M C * ( B C C * ) B ( A * A B ) A * } .
(x) 
{ M M } { M ( B * B C ) B * B B * ( A B B * ) } .
(xi) 
{ M M } { M C * ( B * B C C * ) B * B B * ( A * A B B * ) A * } .
(xii) 
{ M M } { M C ( B C C ) B ( A A B ) A } .
(xiii) 
{ M M } { M ( B B C ) B B B ( A B B ) } .
(xiv) 
{ M M } { M C ( B B C C ) B B B ( A A B B ) A } .
(xv) 
{ M M } { ( B C ) B ( A B ) M } .
(xvi) 
{ M M } { C * ( B C C * ) B ( A * A B ) A * M } .
(xvii) 
{ M M } { ( B * B C ) B * B B * ( A B B * ) M } .
(xviii) 
{ M M } { C * ( B * B C C * ) B * B B * ( A * A B B * ) A * M } .
(xix) 
{ M M } { C ( B C C ) B ( A A B ) A M } .
(xx) 
{ M M } { ( B B C ) B B B ( A B B ) M } .
(xxi) 
{ M M } { C ( B B C C ) B B B ( A A B B ) A M } .
(xxii) 
{ C M A } { C ( B C ) B ( A B ) A } .
(xxiii) 
{ C M A } { C C * ( B C C * ) B ( A * A B ) A * A } .
(xxiv) 
{ C M A } { C ( B * B C ) B * B B * ( A B B * ) A } .
(xxv) 
{ C M A } { C C * ( B * B C C * ) B * B B * ( A * A B B * ) A * A } .
(xxvi) 
{ C M A } { C C ( B C C ) B ( A A B ) A A } .
(xxvii) 
{ C M A } { C ( B B C ) B B B ( A B B ) A } .
(xxviii) 
{ C M A } { C C ( B B C C ) B B B ( A A B B ) A A } .
(xxix) 
{ ( A M C ) } { ( B C C ) B ( A A B ) } .
(xxx) 
{ ( A * M C * ) } { ( B C C * ) B ( A * A B ) } .
(xxxi) 
{ ( A A * M C * C ) } { ( B C C * C ) B ( A A * A B ) } .
(xxxii) 
{ ( ( A B ) M ( B C ) ) } { ( ( B C ) ( B C ) ) B ( ( A B ) ( A B ) ) } .
(xxxiii) 
{ ( ( A B ) * M ( B C ) * ) } { ( ( B C ) ( B C ) * ) B ( ( A B ) * ( A B ) ) } .
(xxxiv) 
{ ( ( ( A B ) * ( A B ) ) 1 / 2 B ( ( B C ) ( B C ) * ) 1 / 2 ) } { ( ( ( B C ) ( B C ) * ) 1 / 2 ) B ( ( ( A B ) * ( A B ) ) 1 / 2 ) } .
(xxxv) 
M = 0 or (I_n − (BC)(BC)⁻)B(I_p − (AB)⁻(AB)) = 0 for some/all (AB)⁻ and (BC)⁻.
(xxxvi) 
M = 0 or r(M) = r(AB) + r(BC) − r(B).
Proof. 
By (i) and (iv) in Theorem 1(b), (i) in this theorem holds if and only if
M = 0 or r(AB) + r(BC) − r(M) = r[B, BC; AB, ABC] = r[B, 0; 0, 0] = r(B),
establishing the equivalence of (i) and (xxxvi). The equivalences of (i)–(xxviii) in this theorem can also be shown by Theorem 1(b) and (c). The details of the proofs are omitted here due to space limitation.
The equivalences of (i), (xxix)–(xxxiv), and (xxxvi) follow from the following basic facts:
r(AB) = r(A†AB) = r(A*AB) = r(AA*AB), r(BC) = r(BCC†) = r(BCC*) = r(BCC*C), r(M) = r(A†MC†) = r(A*MC*) = r(AA*MC*C) = r((AB)†M(BC)†) = r((AB)*M(BC)*).
The equivalence of (xxxv) and (xxxvi) follows from (19). □
Theorem 5.
Let A ∈ C^{m×n}, B ∈ C^{n×p}, and C ∈ C^{p×q} be given, and denote M = ABC. Then, the following 27 statements are equivalent:
(i) 
{M⁻} ⊇ {C⁻B⁻A⁻}.
(ii) 
{M⁻} ⊇ {C*(CC*)⁻B⁻(A*A)⁻A*}.
(iii) 
{ M } { C ( C C ) B ( A A ) A } .
(iv) 
{ M } { ( B C ) A } and { ( B C ) } { C B } .
(v) 
{ M } { C ( A B ) } and { ( A B ) } { B A } .
(vi) 
{ M M } { M C B A } .
(vii) 
{ M M } { M C * ( C C * ) B ( A * A ) A * } .
(viii) 
{ M M } { M C ( C C ) B ( A A ) A } .
(ix) 
{ M M } { M ( B C ) A } and { B C ( B C ) } { B C C B } .
(x) 
{ M M } { M C ( A B ) } and { A B ( A B ) } { A B B A } .
(xi) 
{ M M } { C B A M } .
(xii) 
{ M M } { C * ( C C * ) B ( A * A ) A * M } .
(xiii) 
{ M M } { C ( C C ) B ( A A ) A M } .
(xiv) 
{ M M } { ( B C ) A M } and { ( B C ) B C } { C B B C } .
(xv) 
{ M M } { C ( A B ) M } and { ( A B ) A B } { B A A B } .
(xvi) 
{ C M A } { C C B A A } .
(xvii) 
{ C M A } { C C * ( C C * ) B ( A * A ) A * A } .
(xviii) 
{ C M A } { C C ( C C ) B ( A A ) A A } .
(xix) 
{ B C M A } { B C ( B C ) A A } and { C ( B C ) B } { C C B B } .
(xx) 
{ C M A B } { C C ( A B ) A B } and { B ( A B ) A } { B B A A } .
(xxi) 
{ ( C * B * A * ) } { ( A * ) ( B * ) ( C * ) } .
(xxii) 
{ ( A M C ) } { ( C C ) B ( A A ) } .
(xxiii) 
{ ( A * M C * ) } { ( C C * ) B ( A * A ) } .
(xxiv) 
{ ( A A * M C * C ) } { ( C C * C ) B ( A A * A ) } .
(xxv) 
M = 0 or r(M) = r(A) + r(B) + r(C) − n − p.
(xxvi) 
M = 0 or {r(M) = r(A) + r(BC) − n and r(BC) = r(B) + r(C) − p}.
(xxvii) 
M = 0 or {r(M) = r(AB) + r(C) − p and r(AB) = r(A) + r(B) − n}.
Proof. 
We first obtain from Lemma 5(b) the following inequalities:
p − r(C) + r(M) ≥ r(M) ≥ 0,  (30)
n − r(A) + r(M) ≥ r(M) ≥ 0,  (31)
r(M) − r(A) − r(B) − r(C) + n + p ≥ 0,  (32)
which we shall use in the sequel. By (i) and (iv) in Theorem 1(b), (i) in this theorem holds if and only if
M = 0 or r[B⁻, C; A, M] − r(A) − r(C) + r(M) = 0  (33)
holds for all B⁻. Applying (25) to the block matrix in (33), we obtain
max_{B⁻} r[B⁻, C; A, M] = max_{B⁻} r([I_p; 0] B⁻ [I_n, 0] + [0, C; A, M]) = min{ r[I_p, 0, C; 0, A, M], r[I_n, 0; 0, C; A, M], r[B, −I_n, 0; I_p, 0, C; 0, A, M] − r(B) } = min{ r[I_p, 0, 0; 0, A, 0], r[I_n, 0; 0, C; 0, 0], r[0, I_n, 0; I_p, 0, 0; 0, 0, 0] − r(B) } = min{ p + r(A), n + r(C), n + p − r(B) }.  (34)
Substituting this result into the second equality in (33) and simplifying by (30)–(32) lead to
min{ p − r(C) + r(M), n − r(A) + r(M), n + p − r(A) − r(B) − r(C) + r(M) } = n + p − r(A) − r(B) − r(C) + r(M) = 0.  (35)
Combining (35) with the first condition M = 0 in (33) leads to the equivalence of (i) and (xxv).
The equivalences of (i)–(xx) in this theorem can be shown from Theorem 1(a) and (b) by similar approaches, and therefore, their proofs are omitted here.
From Lemma 5(b) also, we obtain the following inequalities:
r(M) ≥ r(A) + r(BC) − n ≥ r(A) + r(B) + r(C) − n − p,   r(M) ≥ r(AB) + r(C) − p ≥ r(A) + r(B) + r(C) − n − p.
Therefore,
r(M) = r(A) + r(B) + r(C) − n − p ⟺ {r(M) = r(A) + r(BC) − n and r(BC) = r(B) + r(C) − p} ⟺ {r(M) = r(AB) + r(C) − p and r(AB) = r(A) + r(B) − n}.
These facts imply that (xxv), (xxvi), and (xxvii) are equivalent.
The equivalences of (i) and (xxi)–(xxiv) follow from the basic rank equalities r(M) = r(A†M) = r(A*M) = r(AA*M), r(M) = r(MC†) = r(MC*) = r(MC*C), and r(M) = r(A†MC†) = r(A*MC*) = r(AA*MC*C). □
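A numerical illustration of (i) ⟺ (xxv) in Theorem 5 (our own sketch, assuming numpy; the helper inner is ad hoc): if A has full column rank and C has full row rank, then r(ABC) = r(A) + r(B) + r(C) − n − p, and every product C⁻B⁻A⁻ is a {1}-inverse of ABC.

import numpy as np

rng = np.random.default_rng(7)
rank = np.linalg.matrix_rank

def inner(X, rng):
    """An ad hoc random {1}-inverse of X, built as in Lemma 1."""
    Xp = np.linalg.pinv(X)
    r, c = X.shape
    return Xp + (np.eye(c) - Xp @ X) @ rng.standard_normal((c, r)) \
              + rng.standard_normal((c, r)) @ (np.eye(r) - X @ Xp)

m, n, p, q = 6, 3, 4, 7
A = rng.standard_normal((m, n))                                  # full column rank n
B = rng.standard_normal((n, 2)) @ rng.standard_normal((2, p))    # rank 2
C = rng.standard_normal((p, q))                                  # full row rank p
M = A @ B @ C
print(rank(M) == rank(A) + rank(B) + rank(C) - n - p)            # True
for _ in range(5):
    G = inner(C, rng) @ inner(B, rng) @ inner(A, rng)            # an element of {C^- B^- A^-}
    assert np.allclose(M @ G @ M, M)                             # so G is a {1}-inverse of ABC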
Theorem 6.
Let A ∈ C^{m×n}, B ∈ C^{n×p}, C ∈ C^{p×q}, and D ∈ C^{q×s} be given, and denote N = ABCD. Then, the following 36 statements are equivalent:
(i) 
{N⁻} ⊇ {(CD)⁻C(BC)⁻B(AB)⁻}.
(ii) 
{ N } { ( C * C D ) C * C ( B C ) B B * ( A B B * ) } .
(iii) 
{ N } { ( C D ) C C * ( B * B C C * ) B * B ( A B ) } .
(iv) 
{ N } { D * ( C * C D D * ) C * C ( B C ) B B * ( A * A B B * ) A * } .
(v) 
{ N } { ( C * C D ) C * C C * ( B * B C C * ) B * B B * ( A B B * ) } .
(vi) 
{ N } { D * ( C * C D D * ) C * C C * ( B * B C C * ) B * B B * ( A * A B B * ) A * } .
(vii) 
{ N } { ( C C D ) C C ( B C ) B B ( A B B ) } .
(viii) 
{ N } { ( C D ) C C ( B B C C ) B B ( A B ) } .
(ix) 
{ N } { D ( C C D D ) C C ( B C ) B B ( A A B B ) A } .
(x) 
{ N } { ( C C D ) C C C ( B B C C ) B B B ( A B B ) } .
(xi) 
{ N } { D ( C C D D ) C C C ( B B C C ) B B B ( A A B B ) A } .
(xii) 
{ N N } { N ( C D ) C ( B C ) B ( A B ) } .
(xiii) 
{ N N } { N ( C * C D ) C * C ( B C ) B B * ( A B B * ) } .
(xiv) 
{ N N } { N ( C D ) C C * ( B * B C C * ) B * B ( A B ) } .
(xv) 
{ N N } { N D * ( C * C D D * ) C * C ( B C ) B B * ( A * A B B * ) A * } .
(xvi) 
{ N N } { N ( C * C D ) C * C C * ( B * B C C * ) B * B B * ( A B B * ) } .
(xvii) 
{ N N } { N D * ( C * C D D * ) C * C C * ( B * B C C * ) B * B B * ( A * A B B * ) A * } .
(xviii) 
{ N N } { N ( C C D ) C C ( B C ) B B ( A B B ) } .
(xix) 
{ N N } { N ( C D ) C C ( B B C C ) B B ( A B ) } .
(xx) 
{ N N } { N D ( C C D D ) C C ( B C ) B B ( A A B B ) A } .
(xxi) 
{ N N } { N ( C C D ) C C C ( B B C C ) B B B ( A B B ) } .
(xxii) 
{ N N } { N D ( C C D D ) C C C ( B B C C ) B B B ( A A B B ) A } .
(xxiii) 
{ N N } { ( C D ) C ( B C ) B ( A B ) N } .
(xxiv) 
{ N N } { ( C * C D ) C * C ( B C ) B B * ( A B B * ) N } .
(xxv) 
{ N N } { ( C D ) C C * ( B * B C C * ) B * B ( A B ) N } .
(xxvi) 
{ N N } { D * ( C * C D D * ) C * C ( B C ) B B * ( A * A B B * ) A * N } .
(xxvii) 
{ N N } { ( C * C D ) C * C C * ( B * B C C * ) B * B B * ( A B B * ) N } .
(xxviii) 
{ N N } { D * ( C * C D D * ) C * C C * ( B * B C C * ) B * B B * ( A * A B B * ) A * N } .
(xxix) 
{ N N } { ( C C D ) C C ( B C ) B B ( A B B ) N } .
(xxx) 
{ N N } { ( C D ) C C ( B B C C ) B B ( A B ) N } .
(xxxi) 
{ N N } { D ( C C D D ) C C ( B C ) B B ( A A B B ) A N } .
(xxxii) 
{ N N } { ( C C D ) C C C ( B B C C ) B B B ( A B B ) N } .
(xxxiii) 
{ N N } { D ( C C D D ) C C C ( B B C C ) B B B ( A A B B ) A N } .
(xxxiv) 
N = 0 or r(N) = r(AB) + r(BC) + r(CD) − r(B) − r(C).
(xxxv) 
N = 0 or {r(N) = r(ABC) + r(CD) − r(C) and r(ABC) = r(AB) + r(BC) − r(B)}.
(xxxvi) 
N = 0 or {r(N) = r(AB) + r(BCD) − r(B) and r(BCD) = r(BC) + r(CD) − r(C)}.
Proof. 
We first obtain from Lemma 5(b) the following inequalities:
r(N) + r(C) − r(CD) ≥ r(N) ≥ 0,  (36)
r(N) + r(B) − r(AB) ≥ r(N) ≥ 0,  (37)
r(N) − r(AB) − r(BC) − r(CD) + r(B) + r(C) ≥ 0.  (38)
By (i) and (iv) in Theorem 1(b), (i) in this theorem holds if and only if
N = 0 or r[C(BC)⁻B, CD; AB, N] − r(AB) − r(CD) + r(N) = 0  (39)
holds for all (BC)⁻, where by (25), the maximum rank of the block matrix in (39) is
max_{(BC)⁻} r[C(BC)⁻B, CD; AB, ABCD] = max_{(BC)⁻} r([C; 0](BC)⁻[B, 0] + [0, CD; AB, ABCD]) = min{ r[C, 0, CD; 0, AB, ABCD], r[B, 0; 0, CD; AB, ABCD], r[BC, −B, 0; C, 0, CD; 0, AB, ABCD] − r(BC) } = min{ r(AB) + r(C), r(CD) + r(B), r[0, B, 0; C, 0, 0; 0, 0, 0] − r(BC) } = min{ r(AB) + r(C), r(CD) + r(B), r(B) + r(C) − r(BC) }.  (40)
Substituting this result into the second equality in (39) and simplifying by (36)–(38) lead to
min{ r(N) + r(C) − r(CD), r(N) + r(B) − r(AB), r(N) − r(AB) − r(BC) − r(CD) + r(B) + r(C) } = r(N) − r(AB) − r(BC) − r(CD) + r(B) + r(C) = 0.  (41)
Combining (41) with the first condition N = 0 in (39) leads to the equivalence of (i) and (xxxiv). The equivalences of (i)–(xxxiii) can be shown by similar approaches, and therefore, the details are omitted.
The equivalences of (xxxiv), (xxxv), and (xxxvi) in this theorem follow from Lemma 5(b). □
Given the above results and their derivations, it is clear that there exist many possible variations and extensions of the matrix set inclusion problems considered here. We conclude this section with direct applications of the preceding results to some specific operations of matrices.
Corollary 1.
Let A ∈ C^{m×m} be given. Then, the following matrix set inclusions always hold:
{(A − A²)⁻} ⊇ {A⁻(I_m − A)⁻},  (42)
{(A − A²)⁻} ⊇ {(I_m − A)⁻A⁻},  (43)
{(I_m − A²)⁻} ⊇ {(I_m + A)⁻(I_m − A)⁻},  (44)
{(I_m − A²)⁻} ⊇ {(I_m − A)⁻(I_m + A)⁻},  (45)
{(A − A³)⁻} ⊇ {A⁻(I_m + A)⁻(I_m − A)⁻},  (46)
{(A − A³)⁻} ⊇ {(I_m + A)⁻A⁻(I_m − A)⁻},  (47)
{(A − A³)⁻} ⊇ {(I_m + A)⁻(I_m − A)⁻A⁻}.  (48)
Proof. 
Recall that the following three rank formulas:
r(A − A²) = r(A) + r(I_m − A) − m,  r(I_m − A²) = r(I_m + A) + r(I_m − A) − m,  r(A − A³) = r(A) + r(I_m + A) + r(I_m − A) − 2m
are well known in elementary linear algebra. In this situation, applying Theorem 2(i) and (xx), Theorem 5(i) and (xxv), and the above three rank formulas to the matrix products A − A² = A(I_m − A) = (I_m − A)A, I_m − A² = (I_m + A)(I_m − A) = (I_m − A)(I_m + A), and A − A³ = A(I_m + A)(I_m − A) = (I_m + A)A(I_m − A) = (I_m + A)(I_m − A)A leads to (42)–(48). □
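A quick check of (42) and the first rank formula above (our own sketch, assuming numpy; the helper inner is ad hoc):

import numpy as np

rng = np.random.default_rng(8)
rank = np.linalg.matrix_rank
m = 5
A = rng.standard_normal((m, 2)) @ rng.standard_normal((2, m))   # a singular (rank-2) square matrix
I = np.eye(m)
M = A - A @ A
print(rank(M) == rank(A) + rank(I - A) - m)        # True: the rank identity holds

def inner(X, rng):
    """An ad hoc random {1}-inverse of X, built as in Lemma 1."""
    Xp = np.linalg.pinv(X)
    return Xp + (np.eye(X.shape[1]) - Xp @ X) @ rng.standard_normal(X.shape[::-1]) \
              + rng.standard_normal(X.shape[::-1]) @ (np.eye(X.shape[0]) - X @ Xp)

G = inner(A, rng) @ inner(I - A, rng)              # an element of {A^- (I_m - A)^-}
print(np.allclose(M @ G @ M, M))                   # True: G is a {1}-inverse of A - A^2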
Theorem 7.
Let A, B ∈ C^{m×n} be given. Then, the following six statements are equivalent:
(i) 
{(A + B)⁻} ⊇ {[A; B]⁻ [A, 0; 0, B] [A, B]⁻}.
(ii) 
{(A + B)⁻} ⊇ {[A*A; B*B]⁻ [A*AA*, 0; 0, B*BB*] [AA*, BB*]⁻}.
(iii) 
{ ( A + B ) } A A B B A A A 0 0 B B B [ A A , B B ] .
(iv) 
{ ( A + B ) } [ I n , I n ] A A A A B B B B A A A 0 0 B B B A A B B A A B B I m I m .
(v) 
{(A + B)⁻} ⊇ {[I_n, I_n] [A*A, A*A; B*B, B*B]⁻ [A*AA*, 0; 0, B*BB*] [AA*, BB*; AA*, BB*]⁻ [I_m; I_m]}.
(vi) 
A + B = 0 or r(A + B) = r[A; B] + r[A, B] − r(A) − r(B).
Proof. 
Writing the sum A + B as A + B = [I_m, I_m] [A, 0; 0, B] [I_n; I_n] and applying Theorem 4 to this triple matrix product yield the desired results. □
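A numerical illustration of (i) ⟺ (vi) in Theorem 7 (our own sketch, assuming numpy; the helper inner is ad hoc): when the rank condition in (vi) holds, every product [A; B]⁻ [A, 0; 0, B] [A, B]⁻ is a {1}-inverse of A + B.

import numpy as np

rng = np.random.default_rng(9)
rank = np.linalg.matrix_rank

def inner(X, rng):
    """An ad hoc random {1}-inverse of X, built as in Lemma 1."""
    Xp = np.linalg.pinv(X)
    r, c = X.shape
    return Xp + (np.eye(c) - Xp @ X) @ rng.standard_normal((c, r)) \
              + rng.standard_normal((c, r)) @ (np.eye(r) - X @ Xp)

# A and B with complementary supports, so the rank condition in (vi) holds.
A = np.zeros((4, 4))
A[:2, :2] = rng.standard_normal((2, 2))
B = np.zeros((4, 4))
B[2:, 2:] = rng.standard_normal((2, 2))
S = A + B
col = np.vstack([A, B])          # the column block [A; B]
row = np.hstack([A, B])          # the row block [A, B]
print(rank(S) == rank(col) + rank(row) - rank(A) - rank(B))     # True
D = np.block([[A, np.zeros((4, 4))], [np.zeros((4, 4)), B]])    # diag(A, B)
G = inner(col, rng) @ D @ inner(row, rng)
print(np.allclose(S @ G @ S, S))                                # True: G is a {1}-inverse of A + B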

4. Concluding Remarks

The author collected and proposed a series of known and novel equalities for products of matrices and their generalized inverses, including a wide range of reverse-order laws for generalized inverses (formulated as matrix set inclusions associated with generalized inverses), and presented various necessary and sufficient conditions for these matrix equalities to hold through the skillful use of equalities and inequalities for ranks of matrices. This study demonstrates how to construct reasonable matrix equalities that involve generalized inverses and how to characterize these equalities by means of the matrix rank method.
Finally, the author gives some additional remarks about relevant research problems regarding reverse-order laws. It has been recognized that the construction and characterization of reverse-order laws for generalized inverses of multiple matrix products constitute a substantial body of algebraic work in matrix theory and its applications, which mainly includes the following research topics:
(I)
Construct and classify different types of reverse-order laws.
(II)
Establish necessary and sufficient conditions for each reverse-order law to hold through the use of various matrix analysis methods and techniques.
Furthermore, the author points out that these kinds of research problems can reasonably be proposed and studied for generalized inverses of elements in other algebraic settings in which various kinds of generalized inverses can properly be defined; see, e.g., [22,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41] for expositions of such settings.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

The author thanks two anonymous Referees for their helpful comments on an earlier version of this article.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Penrose, R. A generalized inverse for matrices. In Mathematical Proceedings of the Cambridge Philosophical Society; Cambridge University Press: Cambridge, UK, 1955; Volume 51, pp. 406–413.
  2. Bernstein, D.S. Scalar, Vector, and Matrix Mathematics: Theory, Facts, and Formulas, Revised and Expanded Edition, 3rd ed.; Princeton University Press: Princeton, NJ, USA; Oxford, UK, 2018.
  3. Erdelyi, I. On the “reverse-order law” related to the generalized inverse of matrix products. J. ACM 1966, 13, 439–443.
  4. Erdelyi, I. Partial isometries closed under multiplication on Hilbert spaces. J. Math. Anal. Appl. 1968, 22, 546–551.
  5. Greville, T.N.E. Note on the generalized inverse of a matrix product. SIAM Rev. 1966, 8, 518–521; Erratum in SIAM Rev. 1967, 9, 249.
  6. Izumino, S. The product of operators with closed range and an extension of the reverse-order law. Tôhoku Math. J. 1982, 34, 43–52.
  7. Jiang, B.; Tian, Y. Necessary and sufficient conditions for nonlinear matrix identities to always hold. Aequat. Math. 2019, 93, 587–600.
  8. Tian, Y. Reverse order laws for the generalized inverses of multiple matrix products. Linear Algebra Appl. 1994, 211, 185–200.
  9. Tian, Y. A family of 512 reverse-order laws for generalized inverses of a matrix product: A review. Heliyon 2020, 6, e04924.
  10. Tian, Y. Miscellaneous reverse-order laws for generalized inverses of matrix products with applications. Adv. Oper. Theory 2020, 5, 1889–1942.
  11. Tian, Y. Two groups of mixed reverse-order laws for generalized inverses of two and three matrix products. Comp. Appl. Math. 2020, 39, 181.
  12. Tian, Y. Classification analysis to the equalities A^{(i,…,j)} = B^{(k,…,l)} for generalized inverses of two matrices. Linear Multilinear Algebra 2021, 69, 1383–1406.
  13. Tian, Y.; Liu, Y. On a group of mixed-type reverse-order laws for generalized inverses of a triple matrix product with applications. Electron. J. Linear Algebra 2007, 16, 73–89.
  14. Werner, H.J. When is B⁻A⁻ a generalized inverse of AB? Linear Algebra Appl. 1994, 210, 255–263.
  15. Ben-Israel, A.; Greville, T.N.E. Generalized Inverses: Theory and Applications, 2nd ed.; Springer: New York, NY, USA, 2003.
  16. Campbell, S.L.; Meyer, C.D., Jr. Generalized Inverses of Linear Transformations; SIAM: Philadelphia, PA, USA, 2009.
  17. Puntanen, S.; Styan, G.P.H.; Isotalo, J. Matrix Tricks for Linear Statistical Models, Our Personal Top Twenty; Springer: Berlin/Heidelberg, Germany, 2011.
  18. Marsaglia, G.; Styan, G.P.H. Equalities and inequalities for ranks of matrices. Linear Multilinear Algebra 1974, 2, 269–292.
  19. Tian, Y. Upper and lower bounds for ranks of matrix expressions using generalized inverses. Linear Algebra Appl. 2002, 355, 187–214.
  20. Tian, Y. More on maximal and minimal ranks of Schur complements with applications. Appl. Math. Comput. 2004, 152, 675–692.
  21. Jiang, B.; Tian, Y. Invariance property of a five matrix product involving two generalized inverses. Anal. Univ. Ovid. Constanta-Ser. Mat. 2021, 29, 83–92.
  22. Jiang, B.; Tian, Y. Linear and multilinear functional identities in a prime ring with applications. J. Algebra Appl. 2021, 20, 2150212.
  23. Shinozaki, N.; Sibuya, M. The reverse order law (AB)⁻ = B⁻A⁻. Linear Algebra Appl. 1974, 9, 29–40.
  24. Shinozaki, N.; Sibuya, M. Further results on the reverse-order law. Linear Algebra Appl. 1979, 27, 9–16.
  25. Tian, Y. Using rank formulas to characterize equalities for Moore–Penrose inverses of matrix products. Appl. Math. Comput. 2004, 147, 581–600.
  26. Baksalary, J.K.; Styan, G.P.H. Around a formula for the rank of a matrix product with some statistical applications. In Graphs, Matrices, and Designs: Festschrift in Honor of N.J. Pullman on his Sixtieth Birthday; Rees, R.S., Ed.; Marcel Dekker: New York, NY, USA, 1993; pp. 1–18.
  27. Cvetković-Ilić, D.S. Reverse order laws for {1,3,4}-generalized inverses in C*-algebras. Appl. Math. Lett. 2011, 24, 210–213.
  28. Cvetković-Ilić, D.S.; Harte, R. Reverse order laws in C*-algebras. Linear Algebra Appl. 2011, 434, 1388–1394.
  29. Harte, R.E.; Mbekhta, M. On generalized inverses in C*-algebras. Studia Math. 1992, 103, 71–77.
  30. Harte, R.E.; Mbekhta, M. On generalized inverses in C*-algebras, II. Studia Math. 1993, 106, 129–138.
  31. Hartwig, R.; Patrício, P. Invariance under outer inverses. Aequat. Math. 2018, 92, 375–383.
  32. Huang, D. Generalized inverses over Banach algebras. Integral Equ. Oper. Theory 1992, 15, 454–469.
  33. Huylebrouck, D.; Puystjens, R.; Geel, J.V. The Moore–Penrose inverse of a matrix over a semi-simple Artinian ring with respect to an involution. Linear Multilinear Algebra 1988, 23, 269–276.
  34. Koliha, J.J. The Drazin and Moore–Penrose inverse in C*-algebras. Math. Proc. R. Ir. Acad. 1999, 99A, 17–27.
  35. Puystjens, R. Drazin–Moore–Penrose invertibility in rings. Linear Algebra Appl. 2004, 389, 159–173.
  36. Rakić, D.S.; Dinčić, N.Č.; Djordjević, D.S. Group, Moore–Penrose, core and dual core inverse in rings with involution. Linear Algebra Appl. 2014, 463, 115–133.
  37. Rao, B.K.P.S. On generalized inverses of matrices over integral domains. Linear Algebra Appl. 1983, 49, 179–189.
  38. Mary, X. Moore–Penrose inverse in Kreĭn spaces. Integral Equ. Oper. Theory 2008, 60, 419–433.
  39. Mosić, D.; Djordjević, D.S. Some results on the reverse-order law in rings with involution. Aequat. Math. 2012, 83, 271–282.
  40. Wang, L.; Zhang, S.; Zhang, X.; Chen, J. Mixed-type reverse-order law for Moore–Penrose inverse of products of three elements in ring with involution. Filomat 2014, 28, 1997–2008.
  41. Zhu, H.; Zhang, X.; Chen, J. Generalized inverses of a factorization in a ring with involution. Linear Algebra Appl. 2015, 472, 142–150.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
