1. Introduction
Generalized inverses are studied in many branches of mathematics: matrix theory, operator theory, C*-algebras, rings, etc. Additionally, generalized inverses have turned out to be a powerful tool in many applications, such as Markov chains, differential equations, difference equations, electrical networks, optimal control, and so on.
Penrose [1] first defined the Moore–Penrose inverse using four matrix equations in 1955; a wave of research on generalized inverses followed. In 1958, Drazin [2] introduced the Drazin inverse, with its spectral properties, in associative rings and semigroups. The Drazin inverse of a group matrix is also known as its group inverse [3]. Using the Moore–Penrose inverse and the Drazin inverse, Malik and Thome [4] defined the DMP inverse in 2014. The core inverse was first proposed by Baksalary and Trenkler [5] in 2010 as an alternative to the group inverse. Manjunatha Prasad and Mohana [6] then extended the core inverse in 2014 by introducing the core-EP inverse. In 2016, Wang [7] introduced the core-EP decomposition, which plays a crucial role in the investigation of generalized inverses. In 2018, Mehdipour introduced the CMP inverse by applying the Moore–Penrose inverse and the Drazin inverse [18]. In 2020, Zuo [8] defined the CCE inverse based on the core-EP decomposition.
In studying polarized light, Renardy [9] investigated the singular value decomposition in Minkowski space in order to quickly verify that a Mueller matrix can map the forward light cone into itself. Subsequently, the Minkowski inverse in Minkowski space was established by Meenakshi [10], who also gave a condition for a Mueller matrix to have a singular value decomposition in Minkowski space in terms of its Minkowski inverse. Recently, Wang and Liu defined the m-core inverse [11], the m-core-EP inverse [12], the m-WG inverse [13] and the m-WG° inverse [14] in Minkowski space, which can be regarded as extensions of the core inverse [5], the core-EP inverse [6], the weak group inverse [15] and the m-weak group inverse [16], respectively. In 2023, Zuo [17] extended the DMP inverse to Minkowski space; we call it the m-DMP inverse hereafter.
Inspired by the study of the CCE inverse and of generalized inverses in Minkowski space, the aim of this paper is to introduce a new generalized inverse in Minkowski space, called the m-CCE inverse, and to discuss its properties, characterizations, representations and applications.
2. Preliminaries
In this paper, we denote the Minkowski space by 𝕄, an n-dimensional complex vector space with the metric matrix G = diag(1, −I_{n−1}), where I_{n−1} denotes the identity matrix of order n − 1. It is easy to check that G* = G and G² = I_n. The Minkowski inner product is defined by (x, y) = ⟨x, Gy⟩ for x, y ∈ 𝕄, where ⟨·,·⟩ denotes the conventional inner product; here G ∈ ℝ^{n×n}, where ℝ^{n×n} represents the set of all real n × n matrices.
Let ℂⁿ and ℂ^{m×n} be the sets of all complex n-dimensional vectors and all m × n complex matrices, respectively. The symbols A*, R(A), N(A) and rk(A) denote the conjugate transpose, range, null space and rank, respectively, of A ∈ ℂ^{m×n}. Let A ∈ ℂ^{m×n}. If XAX = X, we call X an outer inverse of A, which is denoted by A^(2). If an outer inverse X of A satisfies R(X) = T and N(X) = S, then X is called the outer inverse with the prescribed range T and null space S, which is denoted by A^(2)_{T,S}. The Minkowski adjoint of the matrix A is denoted by A~ and defined as A~ = G_n A* G_m, where G_m and G_n are the Minkowski metric matrices of orders m and n, respectively. For A ∈ ℂ^{n×n}, the index of A is the smallest non-negative integer k such that rk(A^{k+1}) = rk(A^k), denoted by Ind(A). Notice that Ind(A) = 0 if and only if A is invertible. For subspaces L and M whose direct sum is ℂⁿ, i.e., ℂⁿ = L ⊕ M, the projector onto L along M is denoted by P_{L,M}.
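The metric matrix and the Minkowski adjoint defined above are easy to experiment with numerically. The following sketch (the helper names `mink_metric` and `mink_adjoint` are ours, not from the paper) verifies that G is real symmetric and involutory, and that the adjoint is an involution that reverses products:

```python
import numpy as np

def mink_metric(n):
    """Minkowski metric matrix G = diag(1, -1, ..., -1) of order n."""
    return np.diag([1.0] + [-1.0] * (n - 1))

def mink_adjoint(A):
    """Minkowski adjoint A~ = G_n A* G_m for A of size m x n."""
    m, n = A.shape
    return mink_metric(n) @ A.conj().T @ mink_metric(m)

# G is real symmetric and involutory: G* = G and G^2 = I_n.
G = mink_metric(4)
assert np.allclose(G, G.T) and np.allclose(G @ G, np.eye(4))

# The adjoint satisfies (A~)~ = A and (AB)~ = B~ A~.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
assert np.allclose(mink_adjoint(mink_adjoint(A)), A)
assert np.allclose(mink_adjoint(A @ B), mink_adjoint(B) @ mink_adjoint(A))
```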
The Moore–Penrose inverse of A ∈ ℂ^{m×n}, denoted by A†, is the unique matrix X satisfying the following matrix equations [1]:
AXA = A,  XAX = X,  (AX)* = AX,  (XA)* = XA.
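These four Penrose equations can be checked numerically for the pseudoinverse returned by `numpy.linalg.pinv`; a minimal sketch (the test matrix is ours):

```python
import numpy as np

# Check that numpy's pinv satisfies the four Penrose equations.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
X = np.linalg.pinv(A)

assert np.allclose(A @ X @ A, A)             # (1) AXA = A
assert np.allclose(X @ A @ X, X)             # (2) XAX = X
assert np.allclose((A @ X).conj().T, A @ X)  # (3) (AX)* = AX
assert np.allclose((X @ A).conj().T, X @ A)  # (4) (XA)* = XA
```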
The Drazin inverse of A ∈ ℂ^{n×n} with Ind(A) = k, denoted by A^D, is the unique matrix X satisfying the following matrix equations [2]:
XAX = X,  AX = XA,  XA^{k+1} = A^k.
When Ind(A) ≤ 1, A^D is called the group inverse of A, denoted by A^# [3].
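For computation, one can use the known representation A^D = A^k (A^{2k+1})† A^k with k = Ind(A). The sketch below (helper names and the example matrix are ours) computes the index and the Drazin inverse this way and checks the three defining equations; since Ind(A) = 1 here, the result is also the group inverse:

```python
import numpy as np

def index_of(A, tol=1e-10):
    """Smallest non-negative k with rank(A^k) = rank(A^(k+1))."""
    n = A.shape[0]
    P = np.eye(n)
    for k in range(n + 1):
        if np.linalg.matrix_rank(P, tol) == np.linalg.matrix_rank(P @ A, tol):
            return k
        P = P @ A
    return n

def drazin(A):
    """Drazin inverse via the representation A^D = A^k (A^(2k+1))^+ A^k."""
    k = index_of(A)
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

A = np.array([[2.0, 1.0],
              [0.0, 0.0]])       # Ind(A) = 1, so A^D is the group inverse A^#
X = drazin(A)
k = index_of(A)
assert np.allclose(X @ A @ X, X)                      # XAX = X
assert np.allclose(A @ X, X @ A)                      # AX = XA
assert np.allclose(X @ np.linalg.matrix_power(A, k + 1),
                   np.linalg.matrix_power(A, k))      # XA^{k+1} = A^k
```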
The DMP inverse of A ∈ ℂ^{n×n} with Ind(A) = k, denoted by A^{D,†}, is the unique matrix X satisfying the following matrix equations [4]:
XAX = X,  XA = A^D A,  A^k X = A^k A†;
notice that A^{D,†} = A^D A A†, and A^{†,D} = A† A A^D represents the dual DMP (or MPD) inverse of A.
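The product form A^{D,†} = A^D A A† gives a direct way to compute the DMP inverse; the sketch below checks the three defining equations, computing A^D via the representation A^D = A^k (A^{2k+1})† A^k (the example matrix is ours):

```python
import numpy as np

def drazin(A, k):
    """A^D via the representation A^k (A^(2k+1))^+ A^k, k = Ind(A)."""
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

A = np.array([[2.0, 1.0],
              [0.0, 0.0]])
k = 1                                    # Ind(A) = 1 for this example
AD = drazin(A, k)
Aplus = np.linalg.pinv(A)
X = AD @ A @ Aplus                       # DMP inverse A^{D,+} = A^D A A^+

Ak = np.linalg.matrix_power(A, k)
assert np.allclose(X @ A @ X, X)         # XAX = X
assert np.allclose(X @ A, AD @ A)        # XA = A^D A
assert np.allclose(Ak @ X, Ak @ Aplus)   # A^k X = A^k A^+
```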
The core-EP inverse of A ∈ ℂ^{n×n} with Ind(A) = k, denoted by A^{†○}, is the unique matrix X satisfying the following matrix equations [6]:
XAX = X,  R(X) = R(X*) = R(A^k);
notice that A^{†○} = A^D A^k (A^k)†.
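The closed form A^D A^k (A^k)† likewise makes the core-EP inverse computable. The sketch below (example matrix ours, with index 2) checks three known core-EP properties: the result is an outer inverse, AX is Hermitian, and X A^{k+1} = A^k:

```python
import numpy as np

def drazin(A, k):
    """A^D via the representation A^k (A^(2k+1))^+ A^k, k = Ind(A)."""
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
k = 2                                        # Ind(A) = 2 for this example
Ak = np.linalg.matrix_power(A, k)
X = drazin(A, k) @ Ak @ np.linalg.pinv(Ak)   # core-EP: A^D A^k (A^k)^+

assert np.allclose(X @ A @ X, X)                              # outer inverse
assert np.allclose((A @ X).conj().T, A @ X)                   # AX Hermitian
assert np.allclose(X @ np.linalg.matrix_power(A, k + 1), Ak)  # X A^{k+1} = A^k
```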
The CMP inverse of A ∈ ℂ^{n×n}, denoted by A^{c,†}, is the unique matrix X satisfying the following matrix equations [18]:
XAX = X,  AXA = AA^DA,  AX = AA^DAA†,  XA = A†AA^DA;
notice that A^{c,†} = A†AA^DAA†.
The CCE inverse of A ∈ ℂ^{n×n} is the unique matrix satisfying the defining matrix equations of [8]. Evidently, the core-EP inverse underpins the CCE inverse by supplying essential properties and acting as its building block, whereas the CCE inverse constitutes an extension of the core-EP inverse. From Theorem 3.9 of [8], we obtain a further "if and only if" characterization of the CCE inverse.
The Minkowski inverse of A ∈ ℂ^{m×n} with rk(AA~A) = rk(A), denoted by A^⊕, is the unique matrix X satisfying the following matrix equations in Minkowski space [10]:
AXA = A,  XAX = X,  (AX)~ = AX,  (XA)~ = XA.
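Unlike the Moore–Penrose inverse, the Minkowski inverse need not exist for every matrix, so a practical first step is to verify the four ~-equations for a candidate X. The sketch below (helper names ours) does exactly that; for a nonsingular A the Minkowski inverse exists and equals the ordinary inverse, which the check confirms:

```python
import numpy as np

def mink_metric(n):
    return np.diag([1.0] + [-1.0] * (n - 1))

def mink_adjoint(A):
    m, n = A.shape
    return mink_metric(n) @ A.conj().T @ mink_metric(m)

def is_minkowski_inverse(A, X, tol=1e-9):
    """Check the four ~-equations AXA=A, XAX=X, (AX)~=AX, (XA)~=XA."""
    return (np.allclose(A @ X @ A, A, atol=tol)
            and np.allclose(X @ A @ X, X, atol=tol)
            and np.allclose(mink_adjoint(A @ X), A @ X, atol=tol)
            and np.allclose(mink_adjoint(X @ A), X @ A, atol=tol))

# For a nonsingular A, the ordinary inverse satisfies all four equations.
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
assert is_minkowski_inverse(A, np.linalg.inv(A))
```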
The m-DMP inverse of A ∈ ℂ^{n×n} with Ind(A) = k and rk(AA~A) = rk(A), denoted by A^{D,⊕}, is the unique matrix X satisfying the following matrix equations in Minkowski space [17]:
XAX = X,  XA = A^DA,  A^kX = A^kA^⊕;
notice that A^{D,⊕} = A^DAA^⊕, and A^{⊕,D} = A^⊕AA^D represents the dual m-DMP (or m-MPD) inverse of A.
The m-core inverse of A ∈ ℂ^{n×n}, introduced in [11], is the unique matrix satisfying the defining matrix equations given there in Minkowski space.
The m-core-EP inverse of A ∈ ℂ^{n×n} with Ind(A) = k and rk(AA~A) = rk(A), introduced in [12], is the unique matrix satisfying the defining matrix equations given there in Minkowski space.
Lemma 1 ([17]). Let . Then, the following conditions are equivalent:- (1)
exists;
- (2)
;
- (3)
.
Lemma 2 ([10]). Let with . Then,- (1)
and ;
- (2)
;
- (3)
.
Lemma 3 ([19]). Let with , . Then,- (1)
and ;
- (2)
.
Lemma 4 ([20]). Let with and . Then,- (1)
and ;
- (2)
.
Lemma 5 ([21]). If X is an outer inverse of A, then AX = P_{R(AX),N(X)} and XA = P_{R(X),N(XA)}.
Lemma 6 ([19]). Let L and M be complementary subspaces of ℂⁿ, i.e., ℂⁿ = L ⊕ M. Then,- (1)
;
- (2)
.
Lemma 7 ([22]). Every matrix A ∈ ℂ^{n×n} with rk(A) = r has a Hartwig–Spindelböck decomposition
A = U [ΣK ΣL; 0 0] U*,  (1)
where U ∈ ℂ^{n×n} is unitary, Σ = diag(σ₁I_{r₁}, σ₂I_{r₂}, …, σ_tI_{r_t}) is a diagonal matrix, the elements on the diagonal being the nonzero singular values of the matrix A, σ₁ > σ₂ > … > σ_t > 0 with r₁ + r₂ + … + r_t = r, and K ∈ ℂ^{r×r} and L ∈ ℂ^{r×(n−r)} satisfy KK* + LL* = I_r.
Lemma 8 ([17]). Let A be given by (1), let , and let the partition of the Minkowski metric matrix be where , and . If Δ and are nonsingular, then The m-core-EP inverse of A is given as [14]

3. The m-CCE Inverse in Minkowski Space
This section defines a new generalized inverse in Minkowski space, called the m-CCE inverse. To this end, we first recall the m-core-EP decomposition and then consider a system of matrix equations.
Lemma 9 ([12]). Let A ∈ ℂ^{n×n} with Ind(A) = k and rk(AA~A) = rk(A). Then, the matrix A has an m-core-EP decomposition, and it has the following matrix form: where , , , and is unitary. Furthermore, where and hold the same forms as in Lemma 8, is nonsingular, , and is nilpotent such that . In addition, the m-core-EP decomposition of the matrix A is unique, and , .
Let A ∈ ℂ^{n×n} with Ind(A) = k and rk(AA~A) = rk(A), having the decomposition of Lemma 9. Consider the following system of equations.
Theorem 1. The system (2) has a unique solution.
Proof. It is easy to check that
satisfies the three equations in system (
2). For uniqueness, we suppose that
X satisfies (
2). Then,
Hence,
is the unique solution of (
2). □
Definition 1. Let A ∈ ℂ^{n×n} with Ind(A) = k and rk(AA~A) = rk(A). The m-CCE inverse of A in Minkowski space, which is denoted by , is defined as
Example 1. Let Then, , and . Also, This shows that the m-CCE inverse is different from some known generalized inverses.

4. Characterizations and a Canonical Form of the m-CCE Inverse
More characterizations of the Moore–Penrose inverse, the Drazin inverse, the DMP inverse, the CCE inverse, the Minkowski inverse, the m-core-EP inverse and the m-core inverse can be found in [23,24,25,26]; we now present characterizations of the m-CCE inverse.
Theorem 2. Let with , and . Then,
- (1)
;
- (2)
;
- (3)
.
Proof. (1) From Lemmas 4 and 5, we know
(2) According to Lemma 2, we know that
Using Lemma 6, we obtain that
which implies
From Lemma 5, we have that
This implies
Furthermore,
Hence, we obtain
.
- (3)
On the one hand, according to item (1) and item (2), we obtain
on the other hand, we can obtain
from
In addition, since the Minkowski metric matrix G is nonsingular, it follows that
It is obvious that
which implies
As a consequence, we obtain
Due to
, we deduce that
In conclusion, we obtain
. □
Theorem 3. Let with , and . Then,
- (1)
;
- (2)
.
Proof. (1) Using Theorem 2, we have
According to Theorem 2(2), we know
Hence, we know
.
- (2)
According to Theorem 2(1), we know
It is obvious that
which shows that
Hence, we can infer that
furthermore,
So,
. In conclusion,
. □
Theorem 4. Let with , and . Then,
- (1)
is an outer inverse of A;
- (2)
is a reflexive g-inverse of ;
- (3)
, .
Proof. (1) This is evident.
- (2)
Using the properties of and , we can compute directly that
In conclusion, we observe that
is a reflexive g-inverse of
.
Hence, we obtain the results. □
Theorem 5. Let with , and . Then, the m-CCE inverse is the unique solution of
Proof. By Theorems 2 and 3, we observe that
satisfies (
3). To prove the uniqueness, we assume that
and
satisfy (
3). Then,
which implies
. From
we have that
Combining the above, we obtain
. So
. □
Theorem 6. Let with , and . Then, the following statements are equivalent:
- (1)
;
- (2)
, ;
- (3)
, ;
- (4)
, .
Proof. (1) ⇒ (2). This follows easily from (2) and Theorem 2.
- (2) ⇒ (3).
Post-multiplying by , we obtain
where the last equality follows from Lemmas 4 and 6.
- (3) ⇒ (4).
We know is an idempotent from
. It follows that
- (4) ⇒ (1).
By Theorem 3, we obtain
using Lemma 6, we obtain
Applying
to
, we have
This finishes the proof. □
Theorem 7. Let with , and . Then, the following statements are equivalent:
- (1)
;
- (2)
, , ;
- (3)
, , ;
- (4)
, , .
Proof. (1) ⇒ (2). This is obvious by Theorem 4.
- (2) ⇒ (3).
Applying and to , we obtain
According to Definition 1, we see that
(3) ⇒ (4). Applying
to
, we obtain
(4) ⇒ (1). This follows easily from Theorem 1.
This completes the proof. □
Theorem 8. Let with , and . Then,
- (1)
;
- (2)
.
Proof. (1) The result follows easily from Lemma 9.
From item (1), we know
. □
Theorem 9. Let with , and . Then,
- (1)
;
- (2)
;
- (3)
;
- (4)
;
- (5)
;
- (6)
.
Proof. From Lemmas 3 and 4, we obtain
Using Lemma 6, we obtain
It can be easily obtained that
Hence, we can establish the following items.
- (1)
According to the properties of and , we know
(2) According to the properties of
and
, we have
(3) According to the properties of
and
, we obtain
(4) According to the properties of
and
, we know
(5) Pre-multiplying
by
, we obtain
It follows that
from
. The converse is obvious.
- (6)
Because and the fact that , we deduce that
The converse is obvious. □
Theorem 10. Let be given by (1) with and . Then, where Δ and hold the same forms as in Lemma 8. Proof. Using (1) and Lemma 8, we obtain
This finishes the proof. □
5. Representations
In this section, we present some representations of the m-CCE inverse.
Lemma 10 ([27]). Let , and . If exists, then
Lemma 11 ([28]). Let . Then,
Lemma 12 ([19]). Let with . If is a full-rank decomposition and are also full-rank decompositions, then:- (1)
is invertible;
- (2)
;
- (3)
.
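Full-rank decompositions of the kind used in Lemmas 12 and 13 can be produced numerically from a truncated singular value decomposition; a sketch (the function name and tolerance are ours):

```python
import numpy as np

def full_rank_decomposition(A, tol=1e-10):
    """Return F (full column rank) and H (full row rank) with A = F @ H,
    obtained from a truncated SVD of A."""
    U, s, Vh = np.linalg.svd(A)
    r = int(np.sum(s > tol))      # numerical rank
    F = U[:, :r] * s[:r]          # m x r, full column rank
    H = Vh[:r, :]                 # r x n, full row rank
    return F, H

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])   # rank 2
F, H = full_rank_decomposition(A)
assert np.allclose(F @ H, A)
assert np.linalg.matrix_rank(F) == F.shape[1] == 2
assert np.linalg.matrix_rank(H) == H.shape[0] == 2
```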
Lemma 13 ([29]). Let be a rank factorization of rank r and . Then,
Theorem 11. Let with and . If A has a full-rank decomposition as in Lemma 12, then Proof. Let . We must verify that X satisfies , and
. Then,
Thus, we obtain the result. □
Theorem 12. Let with and . If A has a full-rank decomposition as in Lemma 12, then where , . Proof. Let
,
. Then,
and
. According to Theorem 11, we know
It follows from
that
This finishes the proof. □
Theorem 13. Let with , and . If A has a full-rank decomposition as in Lemma 12, then where , . Proof. Using Lemma 13, Theorem 12, and the fact that
, we have
This finishes the proof. □
Example 2. Let Then, and . In addition, Example 3. Consider the matrix A given in Example 1. Then, , and where and have been shown in Example 1. Theorem 14. Let with , and . Let A have the full-rank decomposition as in Lemma 12. Then, where , , . Proof. It is clear that
which implies
In addition, since
is of full column rank, it follows that
This shows
exists by Lemma 1. Also, from Lemma 1 and Theorem 2, we have
According to Lemma 6, we obtain
Using Lemma 11 and Theorem 13, we obtain
where
. This finishes the proof. □
Theorem 15. Let with , and . Then,
- (1)
;
- (2)
;
- (3)
;
- (4)
.
Proof. We already have from Theorem 2. Combining this with Lemmas 10 and 11(2), we have the following proof.
- (1)
Let , . Then,
(2) Let , . Then, we can obtain the result in a similar way.
- (3)
Let , . Then, we can obtain the result.
- (4)
Let , . Then, we can obtain the result. □
Example 4. Consider the matrix A given in Example 1. Then, , It is easy to check that , where has been shown in Example 1.

6. Applications
Theorem 16. Let with , and . Then, the general solution to the linear system is , where is arbitrary. Proof. On the one hand,
This implies
is a solution of (
4). On the other hand, if (
4) holds for some
x, then
which yields
Thus, the solution
x has the form
, where
is arbitrary. □
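Theorem 16 expresses the general solution as a particular solution plus an arbitrary null-space contribution. The same structure in the classical Moore–Penrose setting, x = A†b + (I − A†A)y for a consistent system Ax = b, can be checked numerically (example data ours):

```python
import numpy as np

rng = np.random.default_rng(2)
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])      # rank-deficient coefficient matrix
x0 = np.array([1.0, -1.0, 2.0])
b = A @ x0                           # consistent right-hand side by construction

Ap = np.linalg.pinv(A)
for _ in range(5):
    y = rng.standard_normal(3)
    x = Ap @ b + (np.eye(3) - Ap @ A) @ y   # particular + null-space part
    assert np.allclose(A @ x, b)            # every such x solves the system
```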
Theorem 17. Let with , and . Suppose and . If , then the restricted matrix equation has the unique solution . Proof. Since
, it follows that
, which implies
is a solution of
It is obvious that
So
is a solution of (
5). To prove the uniqueness, we assume
Y is another solution of (
5). Then, we acquire
. This implies
This completes the proof. □
Theorem 18. Let with , and . Let and be of full column rank such that and . Then, the bordered matrixis nonsingular, and its inverse is given by Proof. From
, we realize
. Since
it follows that
.
Let
then
Hence,
. □
Theorem 19. Let with , and . Let and be as in Theorem 18, and be as in Theorem 17. Then, the unique solution of (5) is given by , where denotes the j-th column of D, and and denote the matrices obtained by substituting the i-th column of A and C with and 0, respectively. Proof. Since
X is the solution of (
5), we obtain that
, which implies
. Then, (
5) can be written as
According to Theorem 18, we obtain that
Hence,
and (
6) follows from the classical Cramer rule. □
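The classical Cramer rule invoked at the end of the proof can be sketched for a nonsingular system as follows (the function name and example data are ours):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve a nonsingular system by the classical Cramer rule:
    x_j = det(A_j) / det(A), where A_j has column j replaced by b."""
    n = A.shape[0]
    detA = np.linalg.det(A)
    x = np.empty(n)
    for j in range(n):
        Aj = A.copy()
        Aj[:, j] = b                 # substitute the j-th column by b
        x[j] = np.linalg.det(Aj) / detA
    return x

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])
x = cramer_solve(A, b)
assert np.allclose(A @ x, b)
assert np.allclose(x, np.linalg.solve(A, b))
```

Determinant-based solving is numerically inferior to factorization methods for large systems, but it mirrors the column-substitution formula used in Theorem 19.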
7. A Binary Relation Based on the m-CCE Inverse
In this section, we study a binary relation based on the m-CCE inverse.
Definition 2. Let with , and . We consider that A is below B under the relation if The relation is denoted by A B. It is easy to see that the binary relation is reflexive. The following examples show that it is neither anti-symmetric nor transitive.
Example 5. Consider the following matrix A, and let B be the transpose of A. It is easy to check that By computing, we have i.e., and . But . This means the relation is not anti-symmetric. Example 6. Consider the matrices Then, we have Moreover, we have i.e., , , . But , which means we cannot obtain . Thus, the relation is not transitive. In summary, is not a matrix partial order.

8. Conclusions
This paper introduces the m-CCE inverse, which is an extension of the CCE inverse to Minkowski space. Some of its properties, characterizations, representations, and applications in solving systems of linear equations are also presented. In addition, we prove that the m-CCE ordering is not a matrix partial order. Given the wide range of research fields and application backgrounds of generalized inverses, we are convinced that further exploration of the m-CCE inverse will receive attention and interest from researchers. Some possibilities for further research are as follows:
Further properties, characterizations and representations of the m-CCE inverse.
We can further generalize the m-CCE inverse to tensors.
Perturbation analysis and iterative methods for the m-CCE inverse are two topics worth studying.
It is equally interesting to discuss the m-CCE inverse for rectangular matrices.