Article

C-S and Strongly C-S Orthogonal Matrices

1 School of Education, Guangxi Vocational Normal University, Nanning 530007, China
2 School of Mathematics and Physics, Guangxi Minzu University, Nanning 530006, China
* Author to whom correspondence should be addressed.
Axioms 2024, 13(2), 110; https://doi.org/10.3390/axioms13020110
Submission received: 18 December 2023 / Revised: 24 January 2024 / Accepted: 2 February 2024 / Published: 5 February 2024
(This article belongs to the Special Issue Advances in Linear Algebra with Applications)

Abstract: In this paper, we present a new concept of generalized core orthogonality (called the C-S orthogonality) for two generalized core invertible matrices A and B. A is said to be C-S orthogonal to B if $A^{S}B = 0$ and $BA^{S} = 0$, where $A^{S}$ is the generalized core (C-S) inverse of A. Characterizations of C-S orthogonal matrices and of the C-S additivity are provided, and the connection between the C-S orthogonality and the C-S partial order is established using their canonical forms. Moreover, the concept of the strongly C-S orthogonality is defined and characterized.

1. Introduction

As is well known, orthogonality comes in two forms: one-sided and two-sided. We use $R(A)$ and $N(A)$ to denote the range and the null space of A, respectively. The ranges $R(A)$ and $R(B)$ are orthogonal if and only if $A^{*}B = 0$. If $AB = 0$ and $BA = 0$, then A and B are orthogonal, denoted $A \perp B$. Notice that, when the group inverse $A^{\#}$ of A exists and $AB = 0$, we have $A^{\#}B = A^{\#}AA^{\#}B = (A^{\#})^{2}AB = 0$; conversely, it is obvious that $A^{\#}B = 0$ implies $AB = 0$. Thus, when $A^{\#}$ exists, $A \perp B$ if and only if $A^{\#}B = 0$ and $BA^{\#} = 0$ (i.e., A and B are #-orthogonal, denoted $A \perp_{\#} B$). Hestenes [1] gave the concept of ∗-orthogonality: let $A, B \in \mathbb{C}^{m \times n}$; if $A^{*}B = 0$ and $BA^{*} = 0$, then A is ∗-orthogonal to B, denoted $A \perp_{*} B$. For matrices, Hartwig and Styan [2] showed that the dagger additivity $(A+B)^{\dagger} = A^{\dagger}+B^{\dagger}$, where $A^{\dagger}$ is the Moore–Penrose inverse of A, together with the rank additivity $\mathrm{rk}(A+B) = \mathrm{rk}(A)+\mathrm{rk}(B)$, implies that A is ∗-orthogonal to B.
Ferreyra and Malik [3] introduced core and strongly core orthogonal matrices by means of the core inverse. Let $A, B \in \mathbb{C}^{n\times n}$ with $\mathrm{Ind}(A) \leq 1$, where $\mathrm{Ind}(A)$ is the index of A. If $A^{\textcircled{\#}}B = 0$ and $BA^{\textcircled{\#}} = 0$, then A is core orthogonal to B, denoted $A \perp_{\textcircled{\#}} B$. Matrices $A, B \in \mathbb{C}^{n\times n}$ with $\mathrm{Ind}(A) \leq 1$ and $\mathrm{Ind}(B) \leq 1$ are strongly core orthogonal (denoted $A \perp_{s,\textcircled{\#}} B$) if $A \perp_{\textcircled{\#}} B$ and $B \perp_{\textcircled{\#}} A$. In [3], it is shown that $A \perp_{s,\textcircled{\#}} B$ implies $(A+B)^{\textcircled{\#}} = A^{\textcircled{\#}}+B^{\textcircled{\#}}$ (core additivity).
In [4], Liu, Wang, and Wang proved that $A, B \in \mathbb{C}^{n\times n}$ with $\mathrm{Ind}(A) \leq 1$ and $\mathrm{Ind}(B) \leq 1$ are strongly core orthogonal if and only if $(A+B)^{\textcircled{\#}} = A^{\textcircled{\#}}+B^{\textcircled{\#}}$ and $A^{\textcircled{\#}}B = 0$ (or $BA^{\textcircled{\#}} = 0$), a criterion more concise than Theorem 7.3 in [3]. Ferreyra and Malik [3] had proven that if A is strongly core orthogonal to B, then $\mathrm{rk}(A+B) = \mathrm{rk}(A)+\mathrm{rk}(B)$ and $(A+B)^{\textcircled{\#}} = A^{\textcircled{\#}}+B^{\textcircled{\#}}$, and whether the converse holds was left as an open question; in [4], Liu, Wang, and Wang solved this problem completely. Furthermore, they also gave some new equivalent conditions for the strongly core orthogonality, related to the minus partial order and certain Hermitian matrices.
On the basis of the core orthogonality, Mosić, Dolinar, Kuzma, and Marovt [5] extended the concept and presented the core-EP orthogonality: A is said to be core-EP orthogonal to B if $A^{\textcircled{\dagger}}B = 0$ and $BA^{\textcircled{\dagger}} = 0$, where $A^{\textcircled{\dagger}}$ is the core-EP inverse of A. A number of characterizations of the core-EP orthogonality were proven in [5], where the concept and characterizations of the strongly core-EP orthogonality were also introduced.
In [6], Wang and Liu introduced the generalized core inverse (called the C-S inverse) and gave some of its properties and characterizations. Via the C-S inverse, a binary relation (denoted $A \leq^{S} B$) and a partial order (called the C-S partial order and denoted $A \leq^{C\text{-}S} B$) were given.
Motivated by these ideas, in this paper we give the concepts of the C-S orthogonality and the strongly C-S orthogonality and discuss their characterizations. The connection between the C-S partial order and the C-S orthogonality is established, and we obtain further characterizations of C-S orthogonal matrices when A is EP.

2. Preliminaries

For $A, X \in \mathbb{C}^{n\times n}$, with k the index of A, we consider the following equations:
1. $AXA = A$;
2. $XAX = X$;
3. $(AX)^{*} = AX$;
4. $(XA)^{*} = XA$;
5. $AX = XA$;
6. $XA^{2} = A$;
7. $AX^{2} = X$;
8. $A^{2}X = A$;
9. $AX^{2} = X$;
10. $XA^{k+1} = A^{k}$.
The set of all $X \in \mathbb{C}^{n\times n}$ satisfying equations $(i), (j), \ldots, (k)$ among Equations (1)–(10) is denoted by $A\{i, j, \ldots, k\}$. If there exists
$A^{\dagger} \in A\{1, 2, 3, 4\},$
then it is called the Moore–Penrose inverse of A, and $A^{\dagger}$ is unique. It was introduced by Moore [7] and developed further by Bjerhammar [8] and Penrose [9]. In terms of the Moore–Penrose inverse, A is EP if and only if $AA^{\dagger} = A^{\dagger}A$. If there exists
$A^{\#} \in A\{1, 2, 5\},$
then it is called the group inverse of A, and $A^{\#}$ is unique [10]. If there exists
$A^{\textcircled{\#}} \in A\{1, 2, 3, 6, 7\},$
then $A^{\textcircled{\#}}$ is called the core inverse of A [11]. And, if there exists
$A^{\textcircled{\dagger}} \in A\{3, 9, 10\},$
then $A^{\textcircled{\dagger}}$ is called the core-EP inverse of A [12]. Moreover, $\mathbb{C}_{n}^{\textcircled{\dagger}}$ denotes the set of all core-EP invertible matrices in $\mathbb{C}^{n\times n}$. The symbols $\mathbb{C}_{n}^{GM}$ and $\mathbb{C}_{n}^{EP}$ stand for the subsets of $\mathbb{C}^{n\times n}$ consisting of group invertible matrices and EP matrices, respectively.
Drazin [13] introduced the star partial order on the set of all regular elements of semigroups with involution; applied to complex matrices, it is defined as
$A \leq^{*} B \iff AA^{*} = BA^{*}, \quad A^{*}A = A^{*}B.$
By using {1}-inverses, Hartwig and Styan [2,14] gave the definition of the minus partial order,
$A \leq^{-} B \iff AA^{(1)} = BA^{(1)}, \quad A^{(1)}A = A^{(1)}B, \quad \text{for some } A^{(1)} \in A\{1\}.$
And, Mitra [15] defined the sharp partial order as
$A \leq^{\#} B \iff AA^{\#} = BA^{\#}, \quad A^{\#}A = A^{\#}B.$
According to the core inverse and the sharp partial order, Baksalary and Trenkler [11] proposed the definition of the core partial order:
$A \leq^{\textcircled{\#}} B \iff AA^{\textcircled{\#}} = BA^{\textcircled{\#}}, \quad A^{\textcircled{\#}}A = A^{\textcircled{\#}}B.$
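These order relations are easy to experiment with numerically. The sketch below is our own toy illustration (not taken from [11]): it computes the group inverse of an index-1 matrix via $M^{\#} = M(M^{3})^{\dagger}M$ and the core inverse via $M^{\textcircled{\#}} = M^{\#}MM^{\dagger}$, then checks the two defining equalities of the core partial order for a pair with $A \leq^{\textcircled{\#}} B$:

```python
import numpy as np

def group_inv(M):
    # group inverse of an index-1 matrix: M^# = M (M^3)^+ M
    return M @ np.linalg.pinv(M @ M @ M) @ M

def core_inv(M):
    # core inverse of an index-1 matrix: M^(c#) = M^# M M^+
    return group_inv(M) @ M @ np.linalg.pinv(M)

A = np.array([[1., 1.], [0., 0.]])   # idempotent, Ind(A) = 1
B = np.array([[1., 1.], [0., 1.]])   # B = A + diag(0, 1)

cA = core_inv(A)
assert np.allclose(cA, [[1., 0.], [0., 0.]])
# core partial order: A A^(c#) = B A^(c#) and A^(c#) A = A^(c#) B
assert np.allclose(A @ cA, B @ cA)
assert np.allclose(cA @ A, cA @ B)
```

The pinv-based formulas are only valid for matrices of index at most 1, which is why A is chosen idempotent here.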
Definition 1
([6]). Let $A, X \in \mathbb{C}^{n\times n}$ with $\mathrm{Ind}(A) = k$. Then, the C-S inverse of A is defined as the solution of
$XA^{k+1} = A^{k}, \quad (A^{k}X^{k})^{*} = A^{k}X^{k}, \quad AX = A^{k}X^{k}(AX),$
and X is denoted by $A^{S}$.
Lemma 1
([16]). Let $A \in \mathbb{C}_{n}^{\textcircled{\dagger}}$, and let $A = A_{1}+A_{2}$ be the core-EP decomposition of A. Then, there exists a unitary matrix U such that
$A_{1} = U\begin{pmatrix} T & S \\ 0 & 0 \end{pmatrix}U^{*} \quad \text{and} \quad A_{2} = U\begin{pmatrix} 0 & 0 \\ 0 & N \end{pmatrix}U^{*},$
where T is non-singular and N is nilpotent.
Consequently, the core-EP decomposition of A can be written as
$A = U\begin{pmatrix} T & S \\ 0 & N \end{pmatrix}U^{*}. \quad (1)$
And, by applying Lemma 1, Wang and Liu [6] obtained the following canonical form for the C-S inverse of A:
$A^{S} = U\begin{pmatrix} T^{-1} & 0 \\ 0 & N \end{pmatrix}U^{*}. \quad (2)$
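The canonical form (2) can be checked against Definition 1 numerically. The following sketch (our own index-2 example; U is a real orthogonal matrix, so $U^{*} = U^{T}$) builds A from blocks T, S, N and verifies the three defining equations:

```python
import numpy as np

# Build A = U [[T, S], [0, N]] U* from its core-EP blocks (index k = 2 here)
T = np.array([[2., 1.], [0., 3.]])       # nonsingular
S = np.array([[1., 0.], [0., 1.]])
N = np.array([[0., 1.], [0., 0.]])       # nilpotent, N^2 = 0
U, _ = np.linalg.qr(np.random.default_rng(0).standard_normal((4, 4)))  # real unitary

Z = np.zeros((2, 2))
A = U @ np.block([[T, S], [Z, N]]) @ U.T
X = U @ np.block([[np.linalg.inv(T), Z], [Z, N]]) @ U.T   # candidate A^S, formula (2)

k = 2
Ak = np.linalg.matrix_power(A, k)
Xk = np.linalg.matrix_power(X, k)
assert np.allclose(X @ np.linalg.matrix_power(A, k + 1), Ak)   # X A^{k+1} = A^k
assert np.allclose((Ak @ Xk).T, Ak @ Xk)                       # (A^k X^k)* = A^k X^k
assert np.allclose(Ak @ Xk @ (A @ X), A @ X)                   # A^k X^k (A X) = A X
```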

3. C-S Orthogonality and Its Consequences

Firstly, we give the concept of the C-S orthogonality.
Definition 2.
Let $A, B \in \mathbb{C}^{n\times n}$ and $\mathrm{Ind}(A) = k$. If
$A^{S}B = 0, \quad BA^{S} = 0,$
then A is generalized core orthogonal (C-S orthogonal) to B, denoted as $A \perp_{S} B$.
If $A, B \in \mathbb{C}^{n\times n}$, then
$AB = 0 \iff R(B) \subseteq N(A). \quad (3)$
Remark 1.
Let $A, B \in \mathbb{C}^{n\times n}$ and $\mathrm{Ind}(A) = k$. Notice that, if $BA = 0$, then $BA^{k} = 0$ and $B(AA^{S}) = BA^{k}(A^{S})^{k}(AA^{S}) = 0$, from which $BA^{S} = 0$ follows. Conversely, if $BA^{S} = 0$, then $B(AA^{S}) = BA^{k}(A^{S})^{k}(AA^{S}) = BA^{S}A^{k+1}(A^{S})^{k}(AA^{S}) = 0$, which implies $BA = 0$. Hence,
$BA = 0 \iff BA^{S} = 0. \quad (4)$
Applying Definition 2, we can therefore also say that A is generalized core orthogonal to B if
$A^{S}B = 0, \quad BA = 0.$
Next, we study the range and null space of the matrices which are C-S orthogonal. Firstly, we give some characterizations of the C-S inverse as follows.
Lemma 2.
Let $A \in \mathbb{C}^{n\times n}$ with $\mathrm{Ind}(A) = k$. Then, $(A^{S})^{k} = (A^{k})^{S}$.
Proof. 
Let (1) be the core-EP decomposition of A, where T is nonsingular with $t := \mathrm{rk}(T) = \mathrm{rk}(A^{k})$ and N is nilpotent of index k. Then,
$A^{k} = U\begin{pmatrix} T^{k} & \tilde{S} \\ 0 & 0 \end{pmatrix}U^{*}, \quad (5)$
where $\tilde{S} = \sum_{i=1}^{k} T^{k-i}SN^{i-1}$. And, by (2), we have
$A^{S} = U\begin{pmatrix} T^{-1} & 0 \\ 0 & N \end{pmatrix}U^{*}. \quad (6)$
Then,
$(A^{S})^{k} = U\begin{pmatrix} (T^{-1})^{k} & 0 \\ 0 & 0 \end{pmatrix}U^{*}$
and
$(A^{k})^{S} = U\begin{pmatrix} (T^{k})^{-1} & 0 \\ 0 & 0 \end{pmatrix}U^{*}.$
Since $(T^{-1})^{k} = (T^{k})^{-1}$, we have $(A^{S})^{k} = (A^{k})^{S}$. □
By (5) and (6), it is easy to obtain the following lemma.
Lemma 3.
Let $A \in \mathbb{C}^{n\times n}$ with $\mathrm{Ind}(A) = k$. Then, $A^{k}$ is core invertible and, in this case, $(A^{S})^{k} = (A^{k})^{\textcircled{\#}}$.
Remark 2.
The core inverse of a square matrix of index at most 1 satisfies the following properties [3]:
$R(A^{\textcircled{\#}}) = R((A^{\textcircled{\#}})^{*}) = R(A), \quad N(A^{\textcircled{\#}}) = N((A^{\textcircled{\#}})^{*}) = N(A^{*}).$
Now let A be a square matrix with $\mathrm{Ind}(A) = k$. Since $A^{k}$ is core invertible by Lemma 3, and $(A^{k})^{S} = (A^{k})^{\textcircled{\#}}$, we have
$R((A^{k})^{S}) = R(((A^{k})^{S})^{*}) = R(A^{k}), \quad N((A^{k})^{S}) = N(((A^{k})^{S})^{*}) = N((A^{k})^{*}).$
Theorem 1.
Let $A, B \in \mathbb{C}^{n\times n}$ with $\mathrm{Ind}(A) = k$. Then, the following are equivalent:
(1) $A^{k} \perp_{S} B$;
(2) $(A^{k})^{*}B = 0$, $BA^{k} = 0$;
(3) $R(B) \subseteq N((A^{k})^{*})$, $R(A^{k}) \subseteq N(B)$;
(4) $R(B) \subseteq N((A^{k})^{S})$, $R((A^{k})^{S}) \subseteq N(B)$;
(5) $(A^{k})^{*}B^{*} = 0$, $B^{*}A^{k} = 0$;
(6) $R(B^{*}) \subseteq N((A^{k})^{*})$, $R(A^{k}) \subseteq N(B^{*})$;
(7) $R(B^{*}) \subseteq N((A^{k})^{S})$, $R((A^{k})^{S}) \subseteq N(B^{*})$.
Proof. 
$(1) \Rightarrow (2)$. From $(A^{k})^{S}B = (A^{S})^{k}B = 0$ (Lemma 2), we have
$(A^{S})^{k}B = 0 \Rightarrow A^{k}(A^{S})^{k}B = 0 \Rightarrow B^{*}(A^{k}(A^{S})^{k})^{*} = 0 \Rightarrow B^{*}A^{k}(A^{S})^{k} = 0.$
By Lemma 3, $A^{k}$ is core invertible, which implies $A^{k}(A^{S})^{k}A^{k} = A^{k}$. As a consequence, $B^{*}A^{k} = B^{*}A^{k}(A^{S})^{k}A^{k} = 0$, i.e., $(A^{k})^{*}B = 0$. By using $B(A^{k})^{S} = B(A^{S})^{k} = 0$, we obtain
$B(A^{S})^{k} = 0 \Rightarrow B(A^{S})^{k}A^{2k} = 0 \Rightarrow BA^{k} = 0.$
$(2) \Rightarrow (3)$: this is evident.
$(3) \Rightarrow (4)$: according to Remarks 1 and 2, $N((A^{k})^{S}) = N((A^{k})^{*})$ and $R((A^{k})^{S}) = R(A^{k})$, so we obtain $R(B) \subseteq N((A^{k})^{S})$, $R((A^{k})^{S}) \subseteq N(B)$.
$(4) \Rightarrow (1)$: this is evident.
Applying conjugate transposition to $(2)$, we verify that $(5)$, $(6)$, and $(7)$ are equivalent. □
In view of the equivalence of (1) and (2) in Theorem 1, $A^{k} \perp_{S} B$ also follows from (5). Using Lemma 4.4 in [3], conditions (1)–(7) in Theorem 1 are equivalent to $A^{k} \perp_{\textcircled{\#}} B$; that is, $A^{k} \perp_{S} B$ and $A^{k} \perp_{\textcircled{\#}} B$ are equivalent. And, from Lemma 2.1 in [4], it can be seen that $A^{k} \perp_{\textcircled{\#}} B$ is equivalent to $A \perp_{\textcircled{\dagger}} B$. As a consequence of the theorem, we have the following.
Corollary 1.
Let $A, B \in \mathbb{C}^{n\times n}$ with $\mathrm{Ind}(A) = k$. Then, the following are equivalent:
(1) $A^{k} \perp_{S} B$;
(2) $A^{k} \perp_{\textcircled{\#}} B$;
(3) $A \perp_{\textcircled{\dagger}} B$.
Lemma 4.
Let $A, B \in \mathbb{C}^{n\times n}$ with $\mathrm{Ind}(A) = k$, $\mathrm{Ind}(B) = l$. If $A^{k}B^{l} = 0$, then
(1) $R(A^{k}) \cap R(B^{l}) = \{0\}$;
(2) $R((A^{k})^{*}) \cap R((B^{l})^{*}) = \{0\}$;
(3) $N(A^{k}+B^{l}) = N(A^{k}) \cap N(B^{l})$;
(4) $N((A^{k})^{*}+(B^{l})^{*}) = N((A^{k})^{*}) \cap N((B^{l})^{*})$.
Proof. 
(1) By applying (3), we have $A^{k}B^{l} = 0 \iff R(B^{l}) \subseteq N(A^{k})$. Then, by using the fact that $A^{k}$ has index at most 1, we obtain
$R(A^{k}) \cap R(B^{l}) \subseteq R(A^{k}) \cap N(A^{k}) = \{0\}.$
Moreover, it is obvious that $0 \in R(A^{k}) \cap R(B^{l})$. Then, $R(A^{k}) \cap R(B^{l}) = \{0\}$.
(2) Let $A^{k}B^{l} = 0$; then $(B^{l})^{*}(A^{k})^{*} = 0$. Since $(B^{l})^{*}$ has index at most 1, $(2)$ follows from $(1)$.
(3) Let $X \in N(A^{k}+B^{l})$; then $(A^{k}+B^{l})X = 0$, i.e., $A^{k}X = -B^{l}X$. Since
$A^{k}X = (A^{S})^{k}A^{2k}X = (A^{S})^{k}A^{k}(-B^{l}X) = -(A^{S})^{k}A^{k}B^{l}X = 0$
and consequently $B^{l}X = 0$, we obtain $X \in N(A^{k}) \cap N(B^{l})$, which implies $N(A^{k}+B^{l}) \subseteq N(A^{k}) \cap N(B^{l})$.
On the other hand, it is obvious that $N(A^{k}) \cap N(B^{l}) \subseteq N(A^{k}+B^{l})$. Then, $N(A^{k}+B^{l}) = N(A^{k}) \cap N(B^{l})$.
(4) Let $A^{k}B^{l} = 0$; then $(B^{l})^{*}(A^{k})^{*} = 0$. By $(3)$, it is easy to check that $(4)$ is true. □
Theorem 2.
Let A , B C n × n , and I n d ( A ) = k , I n d ( B ) = l . If A S B , then
(1) 
R ( A k ) R ( B l ) = 0 ;
(2) 
R ( ( A k ) ) R ( ( B l ) ) = 0 ;
(3) 
N ( A k + B l ) = N ( A k ) N ( B l ) ;
(4) 
N ( ( A k ) + ( B l ) ) = N ( ( A k ) ) N ( ( B l ) ) ;
(5) 
R ( ( A k ) ) R ( B l ) = 0 ;
(6) 
R ( A k ) R ( ( B l ) ) = 0 ;
(7) 
N ( ( A k ) + B l ) = N ( ( A k ) ) N ( B l ) ;
(8) 
N ( A k + ( B l ) ) = N ( A k ) N ( ( B l ) ) .
Proof. 
By applying A S B , i.e., A S B = 0 and B A S = 0 , we obtain
( A k ) B = ( B A k ) = ( B A k ( A S ) k A k ) = ( A k ) A k ( A S ) k B = 0
and
B A k = B A S A k + 1 = 0 .
It is obvious that ( A k ) B l = 0 and B l A k = 0 . As a consequence, it is reasonable to obtain that the statements (1)–(8) are true by Lemma 4. □
Using the core-EP decomposition, we obtain the following characterization of C-S orthogonal matrices.
Theorem 3.
Let A , B C n × n , and I n d ( A ) = k , then the following are equivalent:
(1) 
A S B ;
(2) 
There exist nonsingular matrices T 1 , T 2 , nilpotent matrices 0 N 2 0 N 4 , N 5 , and a unitary matrix U, such that
A = U T 1 S 1 R 1 0 0 N 2 0 0 N 4 U , B = U 0 0 0 0 T 2 S 2 0 0 N 5 U ,
where N 2 N 5 = T 2 N 2 + S 2 N 4 = 0 and N 4 N 5 .
Proof. 
( 1 ) ( 2 ) Let the core-EP decomposition of A be
A = U T S 0 N U ,
where T is nonsingular and N is nilpotent. Then, the decomposition of A S is (2). And, write
B = U B 1 B 2 B 3 B 4 U .
Since
A S B = U T 1 0 0 N B 1 B 2 B 3 B 4 U = U T 1 B 1 T 1 B 2 N B 3 N B 4 U = 0 ,
it implies that T 1 B 1 = 0 and T 1 B 2 = 0 ; that is, B 1 = B 2 = 0 .
Since
B A S = U 0 0 B 3 B 4 T 1 0 0 N U = U 0 0 B 3 T 1 B 4 N U = 0 ,
it implies that B 3 T 1 = 0 , and we have B 3 = 0 . Therefore,
B = U 0 0 0 B 4 U ,
where N B 4 = B 4 N = 0 , i.e., B 4 N .
Now, let
B 4 = U 2 T 2 S 2 0 N 5 U 2
be the core EP decomposition of B 4 and U = U 1 I 0 0 U 2 . Partition N according to the partition of B 4 ; then,
N = U 2 N 1 N 2 N 3 N 4 U 2 .
Applying B 4 N , we obtain
N B 4 = U 2 N 1 N 2 N 3 N 4 T 2 S 2 0 N 5 U 2 = U N 1 T 2 N 1 S 2 + N 2 N 5 N 3 T 2 N 3 S 2 + N 4 N 5 U 2 = 0 ,
which leads to N 1 T 2 = N 3 T 2 = 0 . Thus, N 1 = N 3 = 0 and N 2 N 5 = N 4 N 5 = 0 . And,
B 4 N = U 2 T 2 S 2 0 N 5 0 N 2 0 N 4 U 2 = U 0 T 2 N 2 + S 2 N 4 0 N 5 N 4 U 2 = 0 ,
which implies that T 2 N 2 + S 2 N 4 = 0 and N 5 N 4 = 0 . Then,
A = U T 1 S 1 R 1 0 0 N 2 0 0 N 4 U , B = U 0 0 0 0 T 2 S 2 0 0 N 5 U ,
where N 2 N 5 = T 2 N 2 + S 2 N 4 = 0 and N 4 N 5 .
( 2 ) ( 1 ) . Let
A S = U T 1 1 0 0 0 0 N 2 0 0 N 4 U .
Using N 2 N 5 = T 2 N 2 + S 2 N 4 = 0 and N 4 N 5 , we can obtain
A S B = U T 1 1 0 0 0 0 N 2 0 0 N 4 0 0 0 0 T 2 S 2 0 0 N 5 U = U 0 0 0 0 0 N 2 N 5 0 0 N 4 N 5 U = 0
and
B A S = U 0 0 0 0 T 2 S 2 0 0 N 5 T 1 1 0 0 0 0 N 2 0 0 N 4 U = U 0 0 0 0 0 T 2 N 2 + S 2 N 4 0 0 N 5 N 4 U = 0 .
Thus, A S B . □
Example 1.
Consider the matrices
A = 1 1 1 1 0 0 0 1 0 0 0 1 0 0 0 0 , B = 0 0 0 0 0 1 1 0 0 0 0 1 0 0 0 0 .
Then,
A S = 1 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 .
By calculating the matrices, it can be obtained that A S B = 0 , B A S = 0 . Thus, A S B .
Next, based on the C-S partial order, we obtain some relation between the C-S orthogonality and the C-S partial order.
Lemma 5
([6]). Let A , B C n × n . There is a binary relation such that:
A S B : A ( A S ) = B ( A S ) , ( A S ) A = ( A S ) B .
In this case, there exists a unitary matrix U, such that
A = U T S 0 N U , B = U T S 0 B 4 U ,
where T is invertible, N is nilpotent, and N B 4 .
Lemma 6
([6]). Let A , B C n × n . The partial order on " C S " is defined as
A C S B : A S B , B A A A D = A A A A D .
We call it C-S partial order.
Theorem 4.
Let A , B C n × n , and I n d ( A ) = k ; then, the following are equivalent:
(1) 
A S B , B A A A D = 0 ;
(2) 
A C S A + B .
Proof. 
( 1 ) ( 2 ) . Let A S B , i.e., A S B = 0 and B A S = 0 . Then, B ( A S ) = 0 and ( A S ) B = 0 . Since
( A S ) ( A + B ) ( A S ) A = ( A S ) B = 0
and
( A + B ) ( A S ) A ( A S ) = B ( A S ) = 0 ,
we have A ( A S ) = B ( A S ) and ( A S ) A = ( A S ) B , which implies A S A + B .
By applying B A A A D = 0 , we have ( A + B ) A A A D = A A A A D = 0 .
Then, A C S A + B is established.
( 2 ) ( 1 ) . Let A C S A + B , i.e., ( A S ) ( A + B ) = ( A S ) A and ( A + B ) ( A S ) = A ( A S ) . It is clear that A S B = 0 and B A S = 0 . It follows that A S B . □
When A is an E P matrix, we have a more refined result, which reduces to the well-known characterizations of the orthogonality in the usual sense.
Theorem 5.
Let A C n E P ; then, the following are equivalent:
(1) 
A S B ;
(2) 
A # B ;
(3) 
A B ;
(4) 
A B ;
(5) 
There exist nonsingular matrices T 1 , T 2 , a nilpotent matrix N and a unitary matrix U, such that
A = U T 1 0 0 0 0 0 0 0 0 U , B = U 0 0 0 0 T 2 S 0 0 N U .
Proof. 
Since A C n E P , the decompositions of A and A S are
A = U T 1 0 0 0 0 0 0 0 0 U , A S = U T 1 1 0 0 0 0 0 0 0 0 U ,
where T 1 is nonsingular and U is unitary. Then, A S = A # . It is clear that A S B is equivalent to A # B . It follows from Corollary 4.8 in [3] that ( 1 ) ( 5 ) are equivalent. □

4. Strongly C-S Orthgonality and Its Consequences

The concept of strongly C-S orthogonality is considered in this section as a relation that is symmetric but unlike the C-S orthogonality.
Definition 3.
Let A , B C n × n , and I n d ( A ) = I n d ( B ) = k . If
A S B , B S A ,
then A and B are said to be strongly C-S orthogonal, denoted as
A s , S B .
Remark 3.
Applying Remark 1, we have that A S B is equivalent to A S B = 0 , B A = 0 . Since A S B = 0 and A S B S = 0 are equivalent, it is interesting to observe that A S B A S B S = 0 , B A = 0 . Then, A s , S B is equivalent to A S B S = B S A S = 0 , B A = A B = 0 . Therefore, the concept of strongly C-S orthogonality can be defined by another condition; that is,
A s , S B A S B S , A B A S B S , A B B S A S , A B .
Theorem 6.
Let A , B C n × n , and I n d ( A ) = I n d ( B ) = k . Then, the following statements are equivalent.
(1) 
A s , S B ;
(2) 
There exist nonsingular matrices T 1 , T 2 , nilpotent matrices N 4 , N 5 , and a unitary matrix U, such that
A = U T 1 0 R 1 0 0 0 0 0 N 4 U , B = U 0 0 0 0 T 2 S 2 0 0 N 5 U ,
where R 1 N 5 = S 2 N 4 = 0 and N 4 N 5 .
Proof. 
( 1 ) ( 2 ) . Let A s , S B , i.e., A S B and B S A . From Theorem 3, the core-EP decompositions of A and B are (7), respectively. And,
B S = U 0 0 0 0 T 2 1 0 0 0 N 5 U .
Since
B S A = U 0 0 0 0 T 2 1 0 0 0 N 5 T 1 S 1 R 1 0 0 N 2 0 0 N 4 U = U 0 0 0 0 0 T 2 1 N 2 0 0 0 U = 0 ,
it implies T 2 1 N 2 = 0 ; that is, N 2 = 0 . On the other hand,
A B S = U T 1 S 1 R 1 0 0 0 0 0 N 4 0 0 0 0 T 2 1 0 0 0 N 5 U = U 0 S 1 T 2 1 R 1 N 5 0 0 0 0 0 0 U = 0 ,
which yields S 1 T 2 1 = R 1 N 5 = 0 ; that is, S 1 = R 1 N 5 = 0 . According to the above results, we have
A = U T 1 0 R 1 0 0 0 0 0 N 4 U , B = U 0 0 0 0 T 2 S 2 0 0 N 5 U ,
where R 1 N 5 = S 2 N 4 = 0 and N 4 N 5 .
( 2 ) ( 1 ) . Let
A S = U T 1 1 0 0 0 0 0 0 0 N 4 U , B S = U 0 0 0 0 T 2 1 0 0 0 N 5 U .
It follows from R 1 N 5 = S 2 N 4 = 0 and N 4 N 5 that
A S B = U T 1 1 0 0 0 0 0 0 0 N 4 0 0 0 0 T 2 S 2 0 0 N 5 U = U 0 0 0 0 0 0 0 0 N 4 N 5 U = 0 ,
B A S = U 0 0 0 0 T 2 S 2 0 0 N 5 T 1 1 0 0 0 0 0 0 0 N 4 U = U 0 0 0 0 0 S 2 N 4 0 0 N 5 N 4 U = 0 ,
B S A = U 0 0 0 0 T 2 1 0 0 0 N 5 T 1 0 R 1 0 0 0 0 0 N 4 U = U 0 0 0 0 0 0 0 0 N 5 N 4 U = 0
and
A B S = U T 1 0 R 1 0 0 0 0 0 N 4 0 0 0 0 T 2 1 0 0 0 N 5 U = U 0 0 R 1 N 5 0 0 0 0 0 N 4 N 5 U = 0 .
Thus, A s , S B . □
Example 2.
Consider the matrices
A = 1 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 , B = 0 0 0 0 0 1 0 1 0 0 0 1 0 0 0 0 .
Then,
A S = 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 , B S = 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 ,
By calculating the matrices, it can be seen that A S B = 0 , B A S = 0 , B S A = 0 and A B S = 0 . Thus, A s , S B .
Lemma 7.
Let B C n × n , I n d ( B ) = k , and the forms of B and B S be
B = U 0 B 2 0 B 4 U , B S = U 0 X 2 0 X 4 U
respectively. Then,
X 4 = B 4 S , X 2 B 4 k + 1 = B 2 B 4 k 1 , B 2 B 4 k 1 B 4 k = 0 .
Proof. 
Applying
B S B k + 1 = U 0 X 2 B 4 k + 1 0 X 4 B 4 k + 1 U = U 0 B 2 B 4 k 1 0 B 4 k U = B k ,
( B k ( B S ) k ) = U 0 0 ( B 2 B 4 k 1 X 4 k ) ( B 4 k X 4 k ) U = U 0 B 2 B 4 k 1 X 4 k 0 B 4 k X 4 k U = B k ( B S ) k
and
B k ( B S ) k ( B B S ) = U 0 0 0 B 4 k X 4 k ( B 4 B 4 S ) U = U 0 0 0 B 4 B 4 S U = B B S ,
we see that X 4 B 4 k + 1 = B 4 k , ( B 4 k X 4 k ) = B 4 k X 4 k and B 4 k X 4 k ( B 4 B 4 S ) = B 4 B 4 S , which lead to X 4 = B 4 S . And, X 2 B 4 k + 1 = B 2 B 4 k 1 , B 2 B 4 k 1 B 4 k = 0 . □
Theorem 7.
Let A , B C n × n , I n d ( A ) = I n d ( B ) = k and A B = 0 , then A s , S B , if and only if ( A + B ) S = A S + B S and B A S = 0 .
Proof. 
Only if: From Theorem 6, we have the forms of A and B from (9). Since N 4 , N 5 are nilpotent matrices with Ind ( A ) = Ind ( B ) = k , we can see that ( N 4 + N 5 ) k + 1 = ( N 4 + N 5 ) k = 0 .
It follows that
A + B = U T 1 0 R 1 0 T 2 S 2 0 0 N 4 + N 5 U ,
and
( A + B ) k = U T 1 k 0 R 1 ˜ 0 T 2 k S 2 ˜ 0 0 0 U ,
where R 1 ˜ = i = 1 k T 1 i 1 R 1 ( N 4 + N 5 ) k i and S 2 ˜ = i = 1 k T 2 i 1 S 2 ( N 4 + N 5 ) k i . And, it is clear that R 1 ˜ = T 1 k 1 R 1 + T 1 1 R 1 ˜ ( N 4 + N 5 ) and S 2 ˜ = T 1 k 1 S 2 + T 1 1 S 2 ˜ ( N 4 + N 5 ) .
By (10), let
X : = A S + B S = U T 1 1 0 0 0 T 2 1 0 0 0 N 4 + N 5 U .
Since
X ( A + B ) k + 1 = U T 1 1 0 0 0 T 2 1 0 0 0 N 4 + N 5 T 1 k + 1 0 T 1 k R 1 + R 1 ˜ ( N 4 + N 5 ) 0 T 2 k + 1 T 2 k S 2 + S 2 ˜ ( N 4 + N 5 ) 0 0 0 U = U T 1 k 0 T 1 k 1 R 1 + T 1 1 R 1 ˜ ( N 4 + N 5 ) 0 T 2 k T 2 k 1 S 2 + T 2 1 S 2 ˜ ( N 4 + N 5 ) 0 0 0 U = ( A + B ) k ,
( A + B ) k X k = U T 1 k 0 R 1 ˜ 0 T 2 k S 2 ˜ 0 0 0 T 1 k 0 0 0 T 2 k 0 0 0 0 U = U I r k ( A k ) 0 0 0 I r k ( B k ) 0 0 0 0 U = ( ( A + B ) k X k )
and
( A + B ) k X k ( A + B X ) = U I r k ( A k ) 0 0 0 I r k ( B k ) 0 0 0 0 T 1 T 1 1 0 R 1 0 T 2 T 2 1 S 2 0 0 0 U = U T 1 T 1 1 0 R 1 0 T 2 T 2 1 S 2 0 0 0 U = A X ,
we can see that X : = A S + B S = ( A + B ) S .
If: Let the core-EP decomposition of A be as in (1), and the form of A S be as in (6). Partition B according to the partition of A, then the form of B is (8). Then, write
B S = U X 1 X 2 X 3 X 4 U .
Applying A B = 0 and B A S = 0 , we have
A B = U T B 1 + S B 3 T B 2 + S B 4 N B 3 N B 4 U = 0 ,
and
B A S = U B 1 T 1 B 2 N B 3 T 1 B 4 N U = 0 .
Then, the form of B is
B = U 0 B 2 0 B 4 U ,
where T B 2 + S B 4 = 0 , B 2 N = 0 and N B 4 .
Let X : = A S + B S = ( A + B ) S , then
A + B = U T 1 S + B 2 0 N + B 4 U , ( A + B ) S = U T 1 1 + X 1 X 2 X 3 N + X 4 U .
Applying N B 4 , it is clear that ( B 4 + N ) k = B 4 k + N k = B 4 k . Thus,
( A + B ) k = U T 1 k S + B 2 ˜ 0 B 4 k U ,
where S + B 2 ˜ = i = 1 k T 1 i 1 ( S + B 2 ) ( B 4 + N ) k i . Then,
X ( A + B ) k + 1 = U T 1 1 + X 1 X 2 X 3 N + X 4 T 1 k + 1 Y 0 B 4 k + 1 U = U T 1 k + X 1 T 1 k + 1 ( T 1 1 + X 1 ) Y + X 2 B 4 k + 1 X 3 T 1 k + 1 B 4 k U = ( A + B ) k ,
where Y = T 1 k ( S + B 2 ) + S + B 2 ˜ ( B 4 + N ) and ( T 1 1 + X 1 ) Y + X 2 B 4 k + 1 = S + B 2 ˜ . Then, we obtain T 1 k + X 1 T 1 k + 1 = T 1 k and X 3 T 1 k + 1 = 0 , which imply that X 1 = X 3 = 0 . It follows from Lemma 7 that
B S = U 0 X 2 0 B 4 S U
and
B 2 B 4 2 k 1 = 0 .
Therefore, we obtain
X k = U T 1 k X ˜ 2 0 ( B 4 S + N ) k U ,
where X ˜ 2 = i = 1 k T 1 1 i X 2 ( B 4 S + N ) k i and T 1 k 1 ( S + B 2 ) + T 1 1 S + B 2 ˜ ( B 4 + N ) + X 2 B 4 k + 1 = S + B 2 ˜ . According to T 1 k 1 ( S + B 2 ) + T 1 1 S + B 2 ˜ ( B 4 + N ) = T 1 1 ( S + B 2 ) B 4 k + S + B 2 ˜ , we have that
T 1 1 ( S + B 2 ) B 4 k = X 2 B 4 k + 1 .
In addition,
( A + B ) k X k = U T 1 k S + B 2 ˜ 0 B 4 k T 1 k X ˜ 2 0 ( B 4 S + N ) k U = U I r k ( A k ) T 1 k X ˜ 2 + S + B 2 ˜ ( B 4 S + N ) k 0 B 4 k ( B 4 S + N ) k U = ( ( A + B ) k X k ) ,
which implies that
T 1 k X ˜ 2 + S + B 2 ˜ ( B 4 S + N ) k = 0
and ( B 4 k ( B 4 S + N ) k ) = B 4 k ( B 4 S + N ) k . Then, we have
B 4 k ( B 4 S + N ) k = U 2 T 2 k S 2 ˜ 0 0 T 2 k N 2 ˜ 0 ( N 4 + N 5 ) k U 2 = U 2 I r k ( B 4 k ) T 2 k N 2 ˜ + S 2 ˜ ( N 4 + N 5 ) k 0 0 U 2 = ( B 4 k ( B 4 S + N ) k ) ,
which implies T 2 k N 2 ˜ + S 2 ˜ ( N 4 + N 5 ) k = 0 .
By N 4 N 5 and N 4 k = N 5 k = 0 , it is clear that ( N 4 + N 5 ) k = 0 . Then, it is obvious that T 2 k N 2 ˜ = 0 , i.e., N 2 ˜ = i = 1 k T 1 1 i N 2 ( N 4 + N 5 ) k i = 0 . Using N B 4 , we have N 2 N 5 = 0 . Thus, there is N 2 ˜ = i = 1 k T 1 1 i N 2 N 4 k i = 0 . It follows from N k = 0 and N 2 ˜ N 4 k 1 = 0 that T 1 1 k N 2 N 4 k 1 = 0 , that is N 2 N 4 k 1 = 0 . And, it implies that N 2 ˜ N 4 k 2 = T 1 1 k N 2 N 4 k 2 = 0 . It is clear that N 2 N 4 k 2 = 0 . Therefore, it follows that N 2 ˜ N 4 k 3 = N 2 ˜ N 4 k 4 = = N 2 ˜ N 4 = 0 , which leads to N 2 N 4 k 2 = N 2 N 4 k 3 = = N 2 N 4 = N 2 = 0 .
Applying (13) and (14), we have
( T 1 k X ˜ 2 + S + B 2 ˜ ( B 4 S ) k ) B 4 2 k = T 1 k X ˜ 2 B 4 2 k + i = 1 k T 1 i ( T 1 1 ( S + B 2 ) B 4 k ) ( B 4 + N ) k i = T 1 k X ˜ 2 B 4 2 k + i = 1 k T 1 i ( X 2 B 4 k + 1 ) ( B 4 + N ) k i = 2 T 1 k X ˜ 2 B 4 2 k = 0 ,
which implies that X ˜ 2 B 4 2 k = i = 1 k T 1 1 i X 2 B 4 k + i = 0 .
By applying (11) and (12), we have
( i = 1 k T 1 1 i X 2 B 4 k + i ) B 4 k 5 = ( i = 1 k T 1 1 i B 2 B 4 k + 2 + i ) B 4 k 5 = B 2 B 4 2 k 2 = 0 .
It follows that
X ˜ 2 B 4 2 k B 4 k 5 = X ˜ 2 B 4 2 k B 4 k 4 = = X ˜ 2 B 4 2 k B 4 3 k 7 = 0 ,
which leads to B 2 B 4 2 k 2 = B 2 B 4 2 k 3 = = B 2 B 4 = B 2 = 0 .
Using T B 2 + S B 4 = 0 , we have
S B 4 = U 2 S 1 R 1 T 2 S 2 0 N 5 U 2 = U 2 S 1 T 2 S 1 S 2 + R 1 N 5 U 2 = 0 ,
where U = U 1 I 0 0 U 2 . It follows that S 1 = 0 and R 1 N 5 = 0 . Therefore, we obtain
A = U T 1 0 R 1 0 0 0 0 0 N 4 U , B = U 0 0 0 0 T 2 S 2 0 0 N 5 U ,
where R 1 N 5 = S 2 N 4 = 0 and N 4 N 5 . By Theorem 6, A s , S B . □
Example 3.
Consider the matrices
A = 1 0 0 1 0 1 0 1 0 0 0 0 0 0 0 1 , B = 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 .
It is obvious that A B = 0 .
By calculating the matrices, it can be seen that
A + B = 1 0 0 1 0 1 0 1 0 0 1 0 0 0 0 1 , A S = 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 , B S = 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0
and
( A + B ) S = 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ,
that is, ( A + B ) S = A S + B S and A S B = 0 . Then, we have A S B = B A S = A B S = B S A = 0 , i.e., A s , S B .
But, we consider the matrices
C = 1 0 0 1 0 1 0 1 0 0 0 0 0 0 0 0 , D = 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 .
It is obvious that C S D = 0 and ( C + D ) S = C S + D S . However,
C D S = 1 0 0 1 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 = 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 .
Thus, we cannot see that C s , S D .
Corollary 2.
Let A , B C n × n , and I n d ( A ) = I n d ( B ) = k . Then, the following are equivalent:
(1) 
A s , S B ;
(2) 
( A + B ) S = A S + B S , B A S = 0 and A B = 0 ;
(3) 
( A + B ) S = A S + B S , A B .
Proof. 
( 1 ) ( 2 ) . This follows from Theorem 7.
( 2 ) ( 3 ) . Applying Remark 1, we have that A S B is equivalent to A S B = 0 and B A = 0 . □
Theorem 8.
Let A , B C n × n , and I n d ( A ) = I n d ( B ) = k . Then, the following are equivalent:
(1) 
A s , S B ;
(2) 
A C S A + B , B C S B + A .
Proof. 
( 1 ) ( 2 ) . Let A s , S B , i.e., A S B and B S A . By Definition 1 and A B S = 0 , we have
A B S B k + 1 = 0 A B k = 0 A B k ( B S ) k ( B B S ) = 0 A ( B B S ) = 0 ,
which implies A B = A B S = 0 . It follows that B A A A D = ( A B ) A A D = 0 . According to Theorem 4, we obtain A C S A + B . In the same way, we see that B C S B + A .
( 2 ) ( 1 ) . This is clear by Theorem 4. □

Author Contributions

Conceptualization, X.L., Y.L. and H.J.; methodology, X.L., Y.L. and H.J.; writing original draft preparation, X.L., Y.L. and H.J.; writing review and editing, X.L., Y.L. and H.J.; funding acquisition, X.L. and H.J. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (No. 12061015); Guangxi Science and Technology Base and Talents Special Project (No. GUIKE21220024) and Guangxi Natural Science Foundation (No. 2018GXNSFDA281023).

Data Availability Statement

Data will be made available on reasonable request.

Conflicts of Interest

No potential conflicts of interest was reported by the authors.

References

  1. Hestenes, M.R. Relative hermitian matrices. Pac. J. Math. 1961, 11, 225–245. [Google Scholar] [CrossRef]
  2. Hartwig, R.E.; Styan, G.P.H. On some characterizations of the “star” partial ordering for matrices and rank subtractivity. Linear Algebra Appl. 1986, 82, 145–161. [Google Scholar] [CrossRef]
  3. Ferreyra, D.E.; Malik, S.B. Core and strongly core orthogonal matrices. Linear Multilinear Algebr. 2021, 70, 5052–5067. [Google Scholar] [CrossRef]
  4. Liu, X.; Wang, C.; Wang, H. Further results on strongly core orthogonal matrix. Linear Multilinear Algebr. 2023, 71, 2543–2564. [Google Scholar] [CrossRef]
  5. Mosić, D.; Dolinar, G.; Kuzma, B.; Marovt, J. Core-EP orthogonal operators. Linear Multilinear Algebr. 2022, 1–15. [Google Scholar] [CrossRef]
  6. Wang, H.; Liu, N. The C-S inverse and its applications. Bull. Malays. Math. Sci. Soc. 2023, 46, 90. [Google Scholar] [CrossRef]
  7. Moore, E.H. On the reciprocal of the general algebraic matrix. Bull. Am. Math. Soc. 1920, 26, 394–395. [Google Scholar]
  8. Bjerhammar, A. Application of calculus of matrices to method of least squares: With special reference to geodetic calculations. Trans. R. Inst. Technol. Stock. Sweden 1951, 49, 82–84. [Google Scholar]
  9. Penrose, R. A generalized inverse for matrices. Math. Proc. Camb. Philos. Soc. 1955, 51, 406–413. [Google Scholar] [CrossRef]
  10. Ben-Israel, A.; Greville, T.N.E. Generalized Inverses: Theory and Applications, 2nd ed.; Springer: New York, NY, USA, 2003. [Google Scholar]
  11. Baksalary, O.M.; Trenkler, G. Core inverse of matrices. Linear Multilinear Algebr. 2010, 58, 681–697. [Google Scholar] [CrossRef]
  12. Manjunatha, P.K.; Mohana, K.S. Core-EP inverse. Linear Multilinear Algebr. 2014, 62, 792–802. [Google Scholar] [CrossRef]
  13. Drazin, M.P. Natural structures on semigroups with involution. Bull. Am. Math. Soc. 1978, 84, 139–141. [Google Scholar] [CrossRef]
  14. Hartwig, R.E. How to partially order regular elements. Math. Jpn. 1980, 25, 1–13. [Google Scholar]
  15. Mitra, S.K. On group inverses and the sharp order. Linear Algebra Appl. 1987, 92, 17–37. [Google Scholar] [CrossRef]
  16. Wang, H. Core-EP decomposition and its applications. Linear Algebra Appl. 2016, 508, 289–300. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Liu, X.; Liu, Y.; Jin, H. C-S and Strongly C-S Orthogonal Matrices. Axioms 2024, 13, 110. https://doi.org/10.3390/axioms13020110

AMA Style

Liu X, Liu Y, Jin H. C-S and Strongly C-S Orthogonal Matrices. Axioms. 2024; 13(2):110. https://doi.org/10.3390/axioms13020110

Chicago/Turabian Style

Liu, Xiaoji, Ying Liu, and Hongwei Jin. 2024. "C-S and Strongly C-S Orthogonal Matrices" Axioms 13, no. 2: 110. https://doi.org/10.3390/axioms13020110

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.

Article Metrics

Back to TopTop