C-S and Strongly C-S Orthogonal Matrices

In this paper, we present a new concept of generalized core orthogonality (called C-S orthogonality) for two generalized core invertible matrices A and B. A is said to be C-S orthogonal to B if A S ⃝ B = 0 and BA S ⃝ = 0, where A S ⃝ is the generalized core inverse of A. Characterizations of C-S orthogonal matrices and of the C-S additivity are provided, and the connection between the C-S orthogonality and the C-S partial order is given using their canonical forms. Moreover, the concept of strongly C-S orthogonality is defined and characterized.


Introduction
As is well known, orthogonality comes in two forms: one-sided and two-sided. We use R(A) and R(B) to denote the ranges of A and B, respectively. R(A) and R(B) are orthogonal if A * B = 0. If AB * = 0, then R(A * ) and R(B * ) are orthogonal. And R(A * ) and R(B) are orthogonal if AB = 0. If AB = 0 and BA = 0, then A and B are orthogonal, denoted as A ⊥ B. Notice that, when A # exists and AB = 0, where A # is the group inverse of A, we have A # B = A # AA # B = (A # ) 2 AB = 0; conversely, it is obvious that A # B = 0 implies AB = 0. Thus, when A # exists, A ⊥ B if and only if A # B = 0 and BA # = 0 (i.e., A and B are #-orthogonal, denoted as A ⊥ # B). Hestenes [1] gave the concept of * -orthogonality: let A, B ∈ C m×n ; if A * B = 0 and BA * = 0, then A is * -orthogonal to B, denoted by A ⊥ * B. For matrices, Hartwig and Styan [2] showed that if the dagger additivity (i.e., (A + B) † = A † + B † , where A † is the Moore-Penrose inverse of A) and the rank additivity (i.e., rk(A + B) = rk(A) + rk(B)) hold, then A is * -orthogonal to B. Ferreyra and Malik [3] introduced core and strongly core orthogonal matrices by using the core inverse: let A, B ∈ C n×n with Ind(A) ≤ 1, where Ind(A) is the index of A; if A # ⃝ B = 0 and BA # ⃝ = 0, then A is core orthogonal to B, denoted as A ⊥ # ⃝ B. Matrices A, B ∈ C n×n with Ind(A) ≤ 1 and Ind(B) ≤ 1 are strongly core orthogonal (denoted as A ⊥ s, # ⃝ B) if A ⊥ # ⃝ B and B ⊥ # ⃝ A.
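As a quick numerical sanity check of the equivalence above, here is a minimal sketch; the matrices A and B are toy choices of our own, and we use the fact that an idempotent matrix is group invertible with A # = A:

```python
import numpy as np

# Toy example: A is idempotent (A @ A == A), hence group invertible
# with group inverse A# = A; B is chosen so that AB = BA = 0.
A = np.array([[1., 1.],
              [0., 0.]])
A_sharp = A  # group inverse of an idempotent matrix is the matrix itself
B = np.array([[0.,  1.],
              [0., -1.]])

# Two-sided orthogonality: AB = 0 and BA = 0.
print(np.allclose(A @ B, 0), np.allclose(B @ A, 0))              # True True

# #-orthogonality: A# B = 0 and B A# = 0 (equivalent when A# exists).
print(np.allclose(A_sharp @ B, 0), np.allclose(B @ A_sharp, 0))  # True True

# *-orthogonality (Hestenes): A* B = 0 and B A* = 0.
# This pair is NOT *-orthogonal, so the two notions really differ.
print(np.allclose(A.conj().T @ B, 0))                            # False
```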
In [3], it is shown that A ⊥ s, # ⃝ B implies (A + B) # ⃝ = A # ⃝ + B # ⃝ (core additivity). In [4], Liu, Wang, and Wang proved that A, B ∈ C n×n with Ind(A) ≤ 1 and Ind(B) ≤ 1 are strongly core orthogonal if and only if (A + B) # ⃝ = A # ⃝ + B # ⃝ and A # ⃝ B = 0 (or BA # ⃝ = 0), instead of A ⊥ # ⃝ B, which is more concise than Theorem 7.3 in [3]. Moreover, Ferreyra and Malik [3] proved that if A is strongly core orthogonal to B, then rk(A + B) = rk(A) + rk(B) and (A + B) # ⃝ = A # ⃝ + B # ⃝ ; whether the converse holds was left as an open question, which Liu, Wang, and Wang [4] solved completely. Furthermore, they also gave some new equivalent conditions for strongly core orthogonality, related to the minus partial order and certain Hermitian matrices.
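The dagger and rank additivity mentioned above can be observed numerically; a minimal sketch with diagonal toy matrices of our own choosing, using `numpy.linalg.pinv` for the Moore-Penrose inverse:

```python
import numpy as np

A = np.diag([2., 0., 0.])
B = np.diag([0., 3., 0.])

# A and B are *-orthogonal: A* B = 0 and B A* = 0.
assert np.allclose(A.conj().T @ B, 0) and np.allclose(B @ A.conj().T, 0)

# Dagger additivity: (A + B)† = A† + B†.
print(np.allclose(np.linalg.pinv(A + B),
                  np.linalg.pinv(A) + np.linalg.pinv(B)))        # True

# Rank additivity: rk(A + B) = rk(A) + rk(B).
print(np.linalg.matrix_rank(A + B)
      == np.linalg.matrix_rank(A) + np.linalg.matrix_rank(B))    # True
```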
On the basis of core orthogonal matrices, Mosić, Dolinar, Kuzma, and Marovt [5] extended the concept of core orthogonality and presented the new concept of core-EP orthogonality: A is said to be core-EP orthogonal to B if A D ⃝ B = 0 and BA D ⃝ = 0, where A D ⃝ is the core-EP inverse of A. A number of characterizations of core-EP orthogonality were proven in [5]; applying the core-EP orthogonality, the concept and characterizations of strongly core-EP orthogonality were also introduced there.
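Core-EP orthogonality can be illustrated numerically. In the sketch below, the matrix A is built directly in the canonical form U [ T S ; 0 N ] U * with U = I, so its core-EP inverse is known from the canonical form to be U [ T −1 0 ; 0 0 ] U * ; all matrices are toy choices of our own:

```python
import numpy as np

# A = [[T, S], [0, N]] with T = [2] nonsingular and N a 2x2 nilpotent
# block, so Ind(A) = 2 and (with U = I) the core-EP inverse of A is
# X = [[T^{-1}, 0], [0, 0]].
A = np.array([[2., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
X = np.array([[0.5, 0., 0.],
              [0.,  0., 0.],
              [0.,  0., 0.]])
k = 2

# X behaves like a core-EP inverse:
print(np.allclose(X @ A @ X, X))                     # XAX = X
print(np.allclose((A @ X).conj().T, A @ X))          # (AX)* = AX
print(np.allclose(X @ np.linalg.matrix_power(A, k + 1),
                  np.linalg.matrix_power(A, k)))     # X A^{k+1} = A^k

# B supported on the nilpotent block is core-EP orthogonal to A:
B = np.array([[0., 0., 0.],
              [0., 0., 0.],
              [0., 0., 5.]])
print(np.allclose(X @ B, 0), np.allclose(B @ X, 0))  # True True
```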
In [6], Wang and Liu introduced the generalized core inverse (called the C-S inverse) and gave some of its properties and characterizations. By means of the C-S inverse, a binary relation (denoted " A ≤ S ⃝ B ") and a partial order (called the C-S partial order and denoted " A ≤ CS ⃝ B ") are given.
Motivated by these ideas, in this paper we give the concepts of C-S orthogonality and strongly C-S orthogonality and discuss their characterizations. The connection between the C-S partial order and the C-S orthogonality is given. Moreover, we obtain some characterizing properties of C-S orthogonal matrices when A is EP.

Preliminaries
For A, X ∈ C n×n with Ind(A) = k, we consider, among others, the following equations:

(1) AXA = A; (2) XAX = X; (3) (AX) * = AX; (4) (XA) * = XA.

The set of all elements X ∈ C n×n which satisfy equations i, j, . . ., k among Equations (1)-(10) is denoted by A{i, j, . . ., k}. If there exists an X satisfying (1)-(4), then it is called the Moore-Penrose inverse of A, and A † is unique. It was introduced by Moore [7] and improved by Bjerhammar [8] and Penrose [9]. Furthermore, based on the Moore-Penrose inverse, A is EP if and only if AA † = A † A. If there exists an X satisfying (1), (2) and AX = XA, then it is called the group inverse of A, and A # is unique [10]; it exists if and only if Ind(A) ≤ 1. The core inverse A # ⃝ exists when Ind(A) ≤ 1 [11], and the core-EP inverse A D ⃝ exists for every square matrix. Moreover, C d ⃝ is the set of all core-EP invertible matrices in C n×n . The symbols C n GM and C n EP will stand for the subsets of C n×n consisting of group invertible and EP matrices, respectively. Drazin [13] introduced the star partial order on the set of all regular elements of semigroups with involution and applied this definition to complex matrices; it is defined as

A ≤ * B ⇔ A * A = A * B and AA * = BA * .

By using the {1}-inverse, Hartwig and Styan [2,14] give the definition of the minus partial order,

A ≤ − B ⇔ AA (1) = BA (1) , A (1) A = A (1) B, for some A (1) ∈ A{1}.
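For a concrete feel of these defining equations, here is a minimal numerical sketch; the matrices are arbitrary toy choices of our own, and `numpy.linalg.pinv` computes A † :

```python
import numpy as np

A = np.array([[1., 2., 0.],
              [0., 1., 1.]])
X = np.linalg.pinv(A)  # Moore-Penrose inverse A†

print(np.allclose(A @ X @ A, A))                   # (1) AXA = A
print(np.allclose(X @ A @ X, X))                   # (2) XAX = X
print(np.allclose((A @ X).conj().T, A @ X))        # (3) (AX)* = AX
print(np.allclose((X @ A).conj().T, X @ A))        # (4) (XA)* = XA

# Group inverse: X with AXA = A, XAX = X and AX = XA.
# For G with G @ G = 2 G one checks that G# = G / 4.
G = np.array([[2., 2.],
              [0., 0.]])
Gs = G / 4
print(np.allclose(G @ Gs @ G, G),
      np.allclose(Gs @ G @ Gs, Gs),
      np.allclose(G @ Gs, Gs @ G))                 # True True True
```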
And, Mitra [15] defines the sharp partial order as

A ≤ # B ⇔ A # A = A # B and AA # = BA # .

According to the core inverse and the sharp partial order, Baksalary and Trenkler [11] propose the definition of the core partial order:

A ≤ # ⃝ B ⇔ A # ⃝ A = A # ⃝ B and AA # ⃝ = BA # ⃝ .

Definition 1 ([6]). Let A, X ∈ C n×n , and Ind(A) = k. Then, the C-S inverse of A is defined as the solution X of the system of equations given in [6], and X is denoted as A S ⃝ .
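These partial orders are easy to test numerically; a sketch with diagonal toy matrices of our own choosing (here A is idempotent, so A # = A):

```python
import numpy as np

A = np.diag([1., 0.])
B = np.diag([1., 2.])
Ah = A.conj().T

# Star partial order  A <=* B :  A*A = A*B  and  AA* = BA*.
print(np.allclose(Ah @ A, Ah @ B), np.allclose(A @ Ah, B @ Ah))  # True True

# Sharp partial order  A <=# B :  A#A = A#B  and  AA# = BA#.
A_sharp = A  # A is idempotent, so its group inverse is A itself
print(np.allclose(A_sharp @ A, A_sharp @ B),
      np.allclose(A @ A_sharp, B @ A_sharp))                     # True True

# The order is not symmetric: B is not below A.
Bh = B.conj().T
print(np.allclose(Bh @ B, Bh @ A))                               # False
```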
Lemma 1 ([16]). Let A ∈ C d ⃝ , and let A = A 1 + A 2 be the core-EP decomposition of A. Then, there exists a unitary matrix U such that A = U [ T S ; 0 N ] U * , where T is non-singular and N is nilpotent.
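A small numerical instance of the core-EP decomposition in Lemma 1 may help; we take U = I and build the blocks by hand (toy choices of our own), checking the defining properties Ind(A 1 ) ≤ 1, A 2 nilpotent, and A 1 * A 2 = A 2 A 1 = 0:

```python
import numpy as np

# With U = I, T = [2], S = [1, 0], N = [[0, 1], [0, 0]] (nilpotent):
A1 = np.array([[2., 1., 0.],
               [0., 0., 0.],
               [0., 0., 0.]])   # the "core" part, index <= 1
A2 = np.array([[0., 0., 0.],
               [0., 0., 1.],
               [0., 0., 0.]])   # the nilpotent part
A = A1 + A2

rank = np.linalg.matrix_rank
print(rank(A1) == rank(A1 @ A1))           # Ind(A1) <= 1
print(np.allclose(A2 @ A2, 0))             # A2 is nilpotent (index 2)
print(np.allclose(A1.conj().T @ A2, 0))    # A1* A2 = 0
print(np.allclose(A2 @ A1, 0))             # A2 A1 = 0

# Consequently Ind(A) = 2: the rank drops from A to A^2, then stabilizes.
print(rank(A), rank(A @ A), rank(A @ A @ A))   # 2 1 1
```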
Then, the core-EP decomposition of A is A 1 = U [ T S ; 0 0 ] U * , A 2 = U [ 0 0 ; 0 N ] U * . And, by applying Lemma 1, Wang and Liu in [6] obtained a canonical form for the C-S inverse A S ⃝ of A.

C-S Orthogonality and Its Consequences
Firstly, we give the concept of the C-S orthogonality.
Definition 2. Let A, B ∈ C n×n , and Ind(A) = k. If A S ⃝ B = 0 and BA S ⃝ = 0, then A is generalized core orthogonal to B; we also say A is C-S orthogonal to B, denoted as A ⊥ S ⃝ B.
Applying Definition 2, we can also state equivalent conditions under which A is generalized core orthogonal to B. Next, we study the range and null space of matrices which are C-S orthogonal. Firstly, we give some characterizations of the C-S inverse as follows.
Lemma 2. Let A ∈ C n×n and Ind(A) = k.

Proof. Let (1) be the core-EP decomposition of A, where T is nonsingular with t := rk(T) = rk(A k ) and N is nilpotent of index k. Then, by (2), we obtain (5) and (6). By (5) and (6), it is easy to obtain the following lemma.

Lemma 3. Let A ∈ C n×n and Ind(A) = k; then A k is core invertible.

Remark 2. The core inverse of a square matrix of index at most 1 satisfies the following properties [3], where A is a square matrix with Ind(A) = k. It has been proven in Lemma 3 that A k is core invertible, so these properties apply to A k .

Theorem 1. Let A, B ∈ C n×n , and Ind(A) = k; then, statements (1)-(7) below are equivalent.

Proof.
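Lemma 3 can be probed numerically. For the toy matrix below (our own choice), M = A² satisfies M² = 4M, so its group inverse is M/16, and the known representation M # ⃝ = M # M M † of the core inverse then lets us verify the core-inverse identities MXM = M, (MX) * = MX, MX² = X:

```python
import numpy as np

A = np.array([[2., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])   # Ind(A) = 2
M = A @ A                      # M = A^k with k = 2, so Ind(M) <= 1

# M @ M == 4 M, hence the group inverse of M is M / 16 (checked below).
M_sharp = M / 16.0
assert np.allclose(M @ M, 4 * M)

# Core inverse via the representation M#o = M# M M†.
X = M_sharp @ M @ np.linalg.pinv(M)

print(np.allclose(M @ X @ M, M))             # M X M = M
print(np.allclose((M @ X).conj().T, M @ X))  # (M X)* = M X
print(np.allclose(M @ X @ X, X))             # M X^2 = X
```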
In view of (1) and (2) in Theorem 1, we obtain A k ⊥ S ⃝ B * from (5). Using Lemma 4.4 in [3], we have that (1)-(7) in Theorem 1 are equivalent. And, from Lemma 2.1 in [4], the remaining equivalence can be seen. As a consequence of the theorem, we have the following.
Corollary 1. Let A, B ∈ C n×n , and Ind(A) = k; then, the following are equivalent: Proof.
(1) By applying (3) and using the fact that A k has index at most 1, we obtain the claim. Since (A k ) * also has index at most 1, we can prove (2) by (1). On the other hand, the converse is obvious.

Proof. By applying A ⊥ S ⃝ B, i.e., A S ⃝ B = 0 and BA S ⃝ = 0, we obtain that (A k ) * B l = 0 and B l A k = 0. As a consequence, statements (1)-(8) are true by Lemma 4.
Using the core-EP decomposition, we obtain the following characterization of C-S orthogonal matrices (Theorem 3).

Proof.
(1) ⇒ (2) Let the core-EP decomposition of A be as in (1), where T is nonsingular and N is nilpotent. Then, the decomposition of A S ⃝ is given by (2). Partition B conformally with A. Since A ⊥ S ⃝ B, it implies that B 3 T −1 = 0, and we have B 3 = 0. Therefore, B takes the stated form, where NB 4 = B 4 N = 0, i.e., B 4 ⊥ N.

Now, let the core-EP decomposition of B 4 be given, and set U = U 1 [ I 0 ; 0 U 2 ]. Partition N according to the partition of B 4 . Applying B 4 ⊥ N, and using N 2 N 5 = T 2 N 2 + S 2 N 4 = 0 and N 4 ⊥ N 5 , we obtain the stated forms. By calculating the matrices, it can be obtained that A S ⃝ B = 0 and BA S ⃝ = 0. Thus, A ⊥ S ⃝ B.
Next, based on the C-S partial order, we obtain some relations between the C-S orthogonality and the C-S partial order.

Lemma 5 ([6]). Let A, B ∈ C n×n satisfy the binary relation A ≤ S ⃝ B. In this case, there exists a unitary matrix U such that A and B take canonical forms in which T is invertible, N is nilpotent, and N ≤ * B 4 .

Lemma 6 ([6]). Let A, B ∈ C n×n . The relation " ≤ CS ⃝ " is a partial order; we call it the C-S partial order.
When A is an EP matrix, we have a more refined result, which reduces to the well-known characterizations of orthogonality in the usual sense. (5) There exist nonsingular matrices T 1 , T 2 , a nilpotent matrix N, and a unitary matrix U such that the stated forms hold.

Proof. Since A ∈ C n EP , the decompositions of A and A S ⃝ are as given, where T 1 is nonsingular and U is unitary. It follows from Corollary 4.8 in [3] that (1)-(5) are equivalent.
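As a numerical illustration of the EP hypothesis used here (AA † = A † A), a minimal sketch with toy matrices of our own choosing: a Hermitian singular matrix is EP, while a nonzero nilpotent Jordan block is not.

```python
import numpy as np

def is_EP(M):
    """Check AA† = A†A numerically."""
    Md = np.linalg.pinv(M)
    return np.allclose(M @ Md, Md @ M)

A = np.array([[1., 1.],
              [1., 1.]])   # Hermitian (hence EP), singular
N = np.array([[0., 1.],
              [0., 0.]])   # nilpotent Jordan block, not EP

print(is_EP(A), is_EP(N))   # True False
```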

Strongly C-S Orthogonality and Its Consequences
In this section, we consider the concept of strongly C-S orthogonality, a relation that, unlike the C-S orthogonality, is symmetric.

Definition 3. Let A, B ∈ C n×n , and Ind(A) = Ind(B) = k. If A ⊥ S ⃝ B and B ⊥ S ⃝ A, then A and B are said to be strongly C-S orthogonal, denoted as A ⊥ s, S ⃝ B.

Therefore, the concept of strongly C-S orthogonality can be defined by another equivalent condition, as follows.

Theorem 6. Let A, B ∈ C n×n , and Ind(A) = Ind(B) = k. Then, the following statements are equivalent.
(1) ⇒ (2). Let A ⊥ s, S ⃝ B, i.e., A ⊥ S ⃝ B and B ⊥ S ⃝ A. From Theorem 3, the core-EP decompositions of A and B are as in (7), respectively. On the other hand, according to the above results, the claim follows.

Example 2. Consider the matrices given above. By calculating the matrices, it can be seen that A S ⃝ B = 0, BA S ⃝ = 0, B S ⃝ A = 0, and AB S ⃝ = 0. Thus, A ⊥ s, S ⃝ B.
Lemma 7. Let B ∈ C n×n , Ind(B) = k, and let the forms of B and B S ⃝ be as given, respectively. Then, the stated identities hold.

Proof. Applying the given forms, the result follows by direct computation.

Proof. Only if: From Theorem 6, we have the forms of A and B from (9). Since N 4 , N 5 are nilpotent matrices with Ind(A) = Ind(B) = k, we can see that (N 4 + N 5 ) k+1 = (N 4 + N 5 ) k = 0.
It follows that the required equalities hold. If: Let the core-EP decomposition of A be as in (1), and the form of A S ⃝ be as in (6). Partition B according to the partition of A; then, the form of B is (8). Applying AB = 0 and BA S ⃝ = 0, we obtain the form of B. Therefore, using TB 2 + SB 4 = 0, we have the required conclusion.

Theorem 3. Let A, B ∈ C n×n , and Ind(A) = k; then, the following are equivalent: (1) A ⊥ S ⃝ B; (2) there exist nonsingular matrices T 1 , T 2 , nilpotent matrices [ 0 N 2 ; 0 N 4 ] and N 5 , and a unitary matrix U, such that A and B take the corresponding canonical forms.

Theorem 7. Let A, B ∈ C n×n , Ind(A) = Ind(B) = k, and AB = 0; then, A ⊥ s, S ⃝ B if and only if (A + B) S ⃝ = A S ⃝ + B S ⃝ and BA S ⃝ = 0.

It follows that S 1 = 0 and R 1 N 5 = 0. Therefore, we obtain the stated form, where R 1 N 5 = S 2 N 4 = 0 and N 4 ⊥ N 5 . By Theorem 6, A ⊥ s, S ⃝ B.

Example 3. Consider the matrices given above. It is obvious that AB = 0. By calculating the matrices, it can be seen that (A + B) S ⃝ = A S ⃝ + B S ⃝ and A S ⃝ B = 0. Then, we have A S ⃝ B = BA S ⃝ = AB S ⃝ = B S ⃝ A = 0, i.e., A ⊥ s, S ⃝ B. It is obvious that C S ⃝ D = 0 and (C + D) S ⃝ = C S ⃝ + D S ⃝ ; thus, we cannot conclude that C ⊥ s, S ⃝ D.

Corollary 2. Let A, B ∈ C n×n , and Ind(A) = Ind(B) = k. Then, the following are equivalent: