Article

The m-CCE Inverse in Minkowski Space and Its Applications

1
School of Mathematical, Guangxi Minzu University, Nanning 530006, China
2
School of Education, Guangxi Vocational Normal University, Nanning 530007, China
*
Author to whom correspondence should be addressed.
Axioms 2025, 14(6), 413; https://doi.org/10.3390/axioms14060413
Submission received: 26 April 2025 / Revised: 24 May 2025 / Accepted: 26 May 2025 / Published: 28 May 2025
(This article belongs to the Special Issue Advances in Linear Algebra with Applications, 2nd Edition)

Abstract

In this paper, we introduce a new generalized inverse, called the m-CCE inverse, which generalizes the CCE inverse to Minkowski space by means of the m-core-EP decomposition and the Minkowski inverse. We first establish the existence and uniqueness of this generalized inverse. We then derive a number of basic properties and diverse characterizations of the m-CCE inverse, together with its limit and integral representations. Additionally, the m-CCE inverse is applied to the solution of a system of linear equations. Finally, using this generalized inverse, we introduce a binary relation based on the m-CCE inverse.

1. Introduction

Generalized inverses are studied in many branches of mathematics: matrix theory, operator theory, C*-algebras, rings, and so on. They have also proved to be a powerful tool in many applications, such as Markov chains, differential equations, difference equations, electrical networks, and optimal control.
Penrose [1] first defined the Moore–Penrose inverse by means of four matrix equations in 1955, which initiated a wave of research on generalized inverses. In 1958, Drazin [2] introduced the Drazin inverse, which has spectral properties, in associative rings and semigroups. The Drazin inverse of a group matrix is also known as its group inverse [3]. Using the Moore–Penrose inverse and the Drazin inverse, Malik and Thome [4] defined the DMP inverse in 2014. The core inverse was first proposed by Baksalary and Trenkler [5] in 2010 as an alternative to the group inverse. Manjunatha Prasad and Mohana [6] then extended the core inverse in 2014 by introducing the core-EP inverse. In 2016, Wang [7] introduced the core-EP decomposition, which plays a crucial role in the investigation of generalized inverses. In 2018, Mehdipour and Salemi [18] introduced the CMP inverse by combining the Moore–Penrose inverse and the Drazin inverse. In 2020, Zuo [8] defined the CCE inverse based on the core-EP decomposition.
In studying polarized light, Renardy [9] investigated the singular value decomposition in Minkowski space in order to quickly verify that a Mueller matrix maps the forward light cone into itself. Subsequently, the Minkowski inverse in Minkowski space was established by Meenakshi [10], who also gave a condition, in terms of its Minkowski inverse, for a Mueller matrix to have a singular value decomposition in Minkowski space. Recently, Wang, Liu and their coauthors defined the m-core inverse [11], the m-core-EP inverse [12], the m-WG inverse [13] and the m-WG° inverse [14] in Minkowski space, which can be regarded as extensions of the core inverse [5], the core-EP inverse [6], the weak group inverse [15] and the m-weak group inverse [16], respectively. In 2023, the DMP inverse was extended to Minkowski space [17]; we call it the m-DMP inverse in what follows.
Inspired by the study of the CCE inverse and the generalized inverses in Minkowski space, the intention of this paper is to introduce a new generalized inverse in Minkowski space, called the m -CCE inverse, and discuss its properties, characterizations, representations and applications.

2. Preliminaries

In this paper, we denote the Minkowski space by $\mathcal{M}$; it is an $n$-dimensional complex vector space equipped with the metric matrix $G = \operatorname{diag}(1, -I_{n-1})$, where $I_{n-1}$ denotes the identity matrix of order $n-1$. It is easy to check that $G^* = G$ and $G^2 = I_n$. We define $A^0 = I_n$ for $A \in \mathbb{R}^{n \times n}$ (including $A = 0_n$), where $\mathbb{R}^{n \times n}$ represents the set of all real $n \times n$ matrices.
Let $\mathbb{C}^n$ and $\mathbb{C}^{m \times n}$ be the sets of all complex $n$-dimensional vectors and all complex $m \times n$ matrices, respectively. The symbols $A^*$, $R(A)$, $N(A)$ and $\operatorname{rank}(A)$ denote the conjugate transpose, range, null space, and rank of $A \in \mathbb{C}^{m \times n}$, respectively. Let $A\{2\} = \{X \in \mathbb{C}^{n \times m} \mid XAX = X\}$. If $X \in A\{2\}$, we call $X$ an outer inverse of $A$, denoted by $A^{(2)}$. If an outer inverse $X$ of $A$ satisfies $R(X) = T$ and $N(X) = S$, then $X$ is called the outer inverse with the prescribed range $T$ and null space $S$, denoted by $A^{(2)}_{T,S}$. The Minkowski adjoint of $A$ is denoted by $A^{\sim}$ and defined as $A^{\sim} = G_2 A^* G_1$, where $G_1$ and $G_2$ are Minkowski metric matrices of orders $m$ and $n$, respectively. For an $n \times n$ matrix $A$, the index of $A$, denoted $\operatorname{Ind}(A)$, is the smallest non-negative integer $k$ such that $\operatorname{rank}(A^{k+1}) = \operatorname{rank}(A^k)$. Notice that $\operatorname{Ind}(A) = 0$ if and only if $A$ is invertible. For subspaces $T$ and $S$ such that $T \oplus S = \mathbb{C}^{n \times 1}$, the projector onto $T$ along $S$ is denoted by $P_{T,S}$.
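To make the notation concrete, the following short numerical sketch (an illustration added here, not part of the original development) builds the Minkowski metric $G = \operatorname{diag}(1, -I_{n-1})$, the Minkowski adjoint $A^{\sim} = G_2A^*G_1$, and the index $\operatorname{Ind}(A)$ with NumPy; the helper names `minkowski_metric`, `minkowski_adjoint` and `matrix_index` are ours.

```python
import numpy as np

def minkowski_metric(n: int) -> np.ndarray:
    # G = diag(1, -I_{n-1}); note that G* = G and G^2 = I_n
    return np.diag([1.0] + [-1.0] * (n - 1))

def minkowski_adjoint(A: np.ndarray) -> np.ndarray:
    # A~ = G_2 A* G_1, with G_1, G_2 the Minkowski metrics of orders m and n
    m, n = A.shape
    return minkowski_metric(n) @ A.conj().T @ minkowski_metric(m)

def matrix_index(A: np.ndarray, tol: float = 1e-10) -> int:
    # Ind(A): smallest k >= 0 with rank(A^{k+1}) = rank(A^k); Ind(A) = 0 iff A is invertible
    n = A.shape[0]
    r_prev, P = n, np.eye(n)  # rank(A^0) = rank(I_n) = n
    for k in range(n + 1):
        P = P @ A
        r = np.linalg.matrix_rank(P, tol=tol)
        if r == r_prev:
            return k
        r_prev = r
    return n
```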
The Moore–Penrose inverse of $A \in \mathbb{C}^{m \times n}$, denoted by $A^{\dagger} = X \in \mathbb{C}^{n \times m}$, is the unique matrix satisfying the following matrix equations [1]:
$$(1)\ AXA = A,\quad (2)\ XAX = X,\quad (3)\ (AX)^* = AX,\quad (4)\ (XA)^* = XA.$$
The Drazin inverse of $A \in \mathbb{C}^{n \times n}$ with $\operatorname{Ind}(A) = k$, denoted by $A^{D} = X \in \mathbb{C}^{n \times n}$, is the unique matrix satisfying the following matrix equations [2]:
$$(1)\ A^{k+1}X = A^{k},\quad (2)\ XAX = X,\quad (3)\ AX = XA.$$
When $k \le 1$, $A^{D}$ is called the group inverse of $A$, denoted by $A^{\#}$ [3].
The DMP inverse of $A \in \mathbb{C}^{n \times n}$ with $\operatorname{Ind}(A) = k$, denoted by $A^{D,\dagger} = X \in \mathbb{C}^{n \times n}$, is the unique matrix satisfying the following matrix equations [4]:
$$(1)\ XAX = X,\quad (2)\ XA = A^{D}A,\quad (3)\ A^{k}X = A^{k}A^{\dagger};$$
notice that $A^{D,\dagger} = A^{D}AA^{\dagger}$, and $A^{\dagger,D} = A^{\dagger}AA^{D}$ represents the dual DMP (or MPD) inverse of $A$.
The core-EP inverse of $A \in \mathbb{C}^{n \times n}$ with $\operatorname{Ind}(A) = k$, denoted by $A^{\scriptsize\textcircled{\dagger}} = X \in \mathbb{C}^{n \times n}$, is the unique matrix satisfying the following matrix equations [6]:
$$(1)\ XAX = X,\quad (2)\ XA^{k+1} = A^{k},\quad (3)\ (AX)^* = AX,\quad (4)\ R(X) \subseteq R(A^{k});$$
notice that $A^{\scriptsize\textcircled{\dagger}} = A^{D}A^{k}(A^{k})^{\dagger}$.
The CMP inverse of $A \in \mathbb{C}^{n \times n}$, denoted by $A^{c,\dagger} = X \in \mathbb{C}^{n \times n}$, is the unique matrix satisfying the following matrix equations [18]:
$$(1)\ XAX = X,\quad (2)\ AXA = AA^{D}A,\quad (3)\ AX = AA^{D}AA^{\dagger},\quad (4)\ XA = A^{\dagger}AA^{D}A;$$
notice that $A^{c,\dagger} = A^{\dagger}AA^{D}AA^{\dagger}$.
The CCE inverse of $A \in \mathbb{C}^{n \times n}$, denoted by $A^{c,\scriptsize\textcircled{\dagger}} = X \in \mathbb{C}^{n \times n}$, is the unique matrix satisfying the following matrix equations [8]:
$$(1)\ XAX = X,\quad (2)\ XA = A^{\dagger}AA^{\scriptsize\textcircled{\dagger}}A,\quad (3)\ AX = AA^{\scriptsize\textcircled{\dagger}}AA^{\dagger};$$
notice that $A^{c,\scriptsize\textcircled{\dagger}} = A^{\dagger}AA^{\scriptsize\textcircled{\dagger}}AA^{\dagger}$. Evidently, the core-EP inverse underpins the CCE inverse by supplying its essential properties and acting as its building block, while the CCE inverse constitutes an extension of the core-EP inverse. From Theorem 3.9 of [8], we obtain that $A^{c,\scriptsize\textcircled{\dagger}} = A^{\dagger}$ if and only if $A^{k}A^{\dagger,D} = A^{\dagger,D}A^{k}$, where $k = \operatorname{Ind}(A)$.
The Minkowski inverse of $A \in \mathbb{C}^{m \times n}$ with $\operatorname{rank}(AA^{\sim}A) = \operatorname{rank}(A)$, denoted by $A^{\mathfrak{m}} = X \in \mathbb{C}^{n \times m}$, is the unique matrix in Minkowski space satisfying the following matrix equations [10]:
$$(1)\ AXA = A,\quad (2)\ XAX = X,\quad (3)\ (AX)^{\sim} = AX,\quad (4)\ (XA)^{\sim} = XA.$$
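As a numerical sanity check, one can test whether a candidate matrix X satisfies the four defining equations above; the following is a minimal sketch under the same assumptions (NumPy, plus the `minkowski_adjoint` helper from the sketch in the notation paragraph), not code from the paper.

```python
import numpy as np
# reuses minkowski_adjoint from the sketch above

def is_minkowski_inverse(A: np.ndarray, X: np.ndarray, tol: float = 1e-8) -> bool:
    """Check the four defining equations: AXA = A, XAX = X, (AX)~ = AX, (XA)~ = XA."""
    residuals = [
        A @ X @ A - A,
        X @ A @ X - X,
        minkowski_adjoint(A @ X) - A @ X,
        minkowski_adjoint(X @ A) - X @ A,
    ]
    return all(np.linalg.norm(R) < tol for R in residuals)
```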
The m-DMP inverse of $A \in \mathbb{C}^{n \times n}$ with $\operatorname{Ind}(A) = k$ and $\operatorname{rank}(AA^{\sim}A) = \operatorname{rank}(A)$, denoted by $A^{D,\mathfrak{m}} = X \in \mathbb{C}^{n \times n}$, is the unique matrix in Minkowski space satisfying the following matrix equations [17]:
$$(1)\ XAX = X,\quad (2)\ XA = A^{D}A,\quad (3)\ A^{k}X = A^{k}A^{\mathfrak{m}};$$
notice that $A^{D,\mathfrak{m}} = A^{D}AA^{\mathfrak{m}}$, and $A^{\mathfrak{m},D} = A^{\mathfrak{m}}AA^{D}$ represents the dual m-DMP (or m-MPD) inverse of $A$.
The m-core inverse of $A \in \mathbb{C}^{n \times n}$ with $\operatorname{Ind}(A) = 1$ and $\operatorname{rank}(A) = \operatorname{rank}(AA^{\sim})$, denoted by $A^{\scriptsize\textcircled{\#},\mathfrak{m}} = X \in \mathbb{C}^{n \times n}$, is the unique matrix in Minkowski space satisfying the following matrix equations [11]:
$$(1)\ AXA = A,\quad (2)\ AX^{2} = X,\quad (3)\ (AX)^{\sim} = AX.$$
The m-core-EP inverse of $A \in \mathbb{C}^{n \times n}$ with $\operatorname{Ind}(A) = k$ and $\operatorname{rank}(A^{k}) = \operatorname{rank}((A^{k})^{\sim}A^{k})$, denoted by $A^{\scriptsize\textcircled{\mathfrak{m}}} = X \in \mathbb{C}^{n \times n}$, is the unique matrix in Minkowski space satisfying the following matrix equations [12]:
$$(1)\ XAX = X,\quad (2)\ XA^{k+1} = A^{k},\quad (3)\ (AX)^{\sim} = AX,\quad (4)\ R(X) \subseteq R(A^{k});$$
notice that $A^{\scriptsize\textcircled{\mathfrak{m}}} = A^{k}A^{D}(A^{k})^{\mathfrak{m}}$.
Lemma 1 
([17]). Let $A \in \mathbb{C}^{m \times n}$. Then, the following conditions are equivalent:
 (1) $A^{\mathfrak{m}}$ exists;
 (2) $\operatorname{rank}(AA^{\sim}) = \operatorname{rank}(A^{\sim}A) = \operatorname{rank}(A)$;
 (3) $\operatorname{rank}(AA^{\sim}A) = \operatorname{rank}(A)$.
Lemma 2 
([10]). Let $A \in \mathbb{C}^{m \times n}$ with $\operatorname{rank}(AA^{\sim}A) = \operatorname{rank}(A)$. Then,
 (1) $R(A^{\mathfrak{m}}) = R(A^{\sim})$ and $N(A^{\mathfrak{m}}) = N(A^{\sim})$;
 (2) $AA^{\mathfrak{m}} = P_{R(A),\,N(A^{\sim})}$;
 (3) $A^{\mathfrak{m}}A = P_{R(A^{\sim}),\,N(A)}$.
Lemma 3 
([19]). Let $A \in \mathbb{C}^{n \times n}$ with $\operatorname{Ind}(A) = k$ and $\operatorname{rank}(AA^{\sim}A) = \operatorname{rank}(A)$. Then,
 (1) $R(A^{D}) = R(A^{k})$ and $N(A^{D}) = N(A^{k})$;
 (2) $AA^{D} = A^{D}A = P_{R(A^{k}),\,N(A^{k})}$.
Lemma 4 
([20]). Let $A \in \mathbb{C}^{n \times n}$ with $\operatorname{Ind}(A) = k$ and $\operatorname{rank}(A^{k}) = \operatorname{rank}((A^{k})^{\sim}A^{k})$. Then,
 (1) $R(A^{\scriptsize\textcircled{\mathfrak{m}}}) = R(A^{k})$ and $N(A^{\scriptsize\textcircled{\mathfrak{m}}}) = N((A^{k})^{\sim})$;
 (2) $AA^{\scriptsize\textcircled{\mathfrak{m}}} = P_{R(A^{k}),\,N((A^{k})^{\sim})}$.
Lemma 5 
([21]). If $X$ is an outer inverse of $A$, then
$$R(XA) = R(X) \quad \text{and} \quad N(AX) = N(X).$$
Lemma 6 
([19]). Let $A \in \mathbb{C}^{n \times n}$, and let $L$ and $M$ be complementary subspaces of $\mathbb{C}^{n}$, i.e., $L \oplus M = \mathbb{C}^{n}$. Then,
 (1) $P_{L,M}A = A \iff R(A) \subseteq L$;
 (2) $A = AP_{L,M} \iff M \subseteq N(A)$.
Lemma 7 
([22]). Every matrix $A \in \mathbb{C}^{n \times n}$ with $\operatorname{rank}(A) = r > 0$ has a Hartwig–Spindelböck decomposition
$$A = U \begin{pmatrix} \Sigma K & \Sigma L \\ 0 & 0 \end{pmatrix} U^{*}, \tag{1}$$
where $U \in \mathbb{C}^{n \times n}$ is unitary, $\Sigma = \operatorname{diag}(\sigma_{1}I_{k_{1}}, \sigma_{2}I_{k_{2}}, \ldots, \sigma_{t}I_{k_{t}})$ is a diagonal matrix whose diagonal entries $\sigma_{1} > \sigma_{2} > \cdots > \sigma_{t} > 0$ are the distinct singular values of $A$, $k_{1} + k_{2} + \cdots + k_{t} = r = \operatorname{rank}(A)$, and $K \in \mathbb{C}^{r \times r}$ and $L \in \mathbb{C}^{r \times (n-r)}$ satisfy $KK^{*} + LL^{*} = I_{r}$.
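The Hartwig–Spindelböck blocks can be obtained numerically from a singular value decomposition: if $A = U\operatorname{diag}(\Sigma,0)V^{*}$, then $(K\ L)$ consists of the first $r$ rows of $V^{*}U$. The sketch below is our illustration (assuming NumPy); it is not code from [22], and it does not group repeated singular values explicitly.

```python
import numpy as np

def hartwig_spindelboeck(A: np.ndarray, tol: float = 1e-10):
    """Return U, Sigma, K, L, r with A = U [[Sigma K, Sigma L], [0, 0]] U*  (Lemma 7)."""
    U, s, Vh = np.linalg.svd(A)
    r = int(np.sum(s > tol))
    Sigma = np.diag(s[:r])
    W = Vh @ U                      # W = V* U is unitary; its first r rows are (K  L)
    K, L = W[:r, :r], W[:r, r:]     # hence K K* + L L* = I_r automatically
    return U, Sigma, K, L, r

# quick self-check on a rank-deficient test matrix
A = np.random.randn(5, 3) @ np.random.randn(3, 5)
U, Sigma, K, L, r = hartwig_spindelboeck(A)
n = A.shape[0]
top = np.hstack([Sigma @ K, Sigma @ L])
print(np.allclose(U @ np.vstack([top, np.zeros((n - r, n))]) @ U.conj().T, A))  # True
```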
Lemma 8 
([17]). Let $A \in \mathbb{C}^{n \times n}$ be given by (1), let $\Delta = \begin{pmatrix} K & L \end{pmatrix} U^{*} G U \begin{pmatrix} K^{*} \\ L^{*} \end{pmatrix}$, and partition the Minkowski metric matrix $G \in \mathbb{C}^{n \times n}$ as
$$G = U \begin{pmatrix} G_{1} & G_{2} \\ G_{2}^{*} & G_{4} \end{pmatrix} U^{*},$$
where $G_{1} \in \mathbb{C}^{r \times r}$, $G_{2} \in \mathbb{C}^{r \times (n-r)}$ and $G_{4} \in \mathbb{C}^{(n-r) \times (n-r)}$. If $\Delta$ and $G_{1}$ are nonsingular, then
$$A^{\mathfrak{m}} = G U \begin{pmatrix} K^{*}(G_{1}\Sigma\Delta)^{-1} & 0 \\ L^{*}(G_{1}\Sigma\Delta)^{-1} & 0 \end{pmatrix} U^{*} G.$$
The m-core-EP inverse of $A$ is given as [14]
$$A^{\scriptsize\textcircled{\mathfrak{m}}} = U \begin{pmatrix} (\Sigma K)^{\scriptsize\textcircled{\mathfrak{m}}} & 0 \\ 0 & 0 \end{pmatrix} U^{*}.$$

3. The m -CCE Inverse in Minkowski Space

This section defines a new generalized inverse in Minkowski space, named the m-CCE inverse. To this end, we first recall the m-core-EP decomposition and then consider a system of matrix equations.
Lemma 9 
([12]). Let $A \in \mathbb{C}^{n \times n}$ with $\operatorname{Ind}(A) = k$ and $\operatorname{rank}(A^{k}) = \operatorname{rank}((A^{k})^{\sim}A^{k}) = r$. Then, $A$ has an m-core-EP decomposition $A = A_{1} + A_{2}$, and it can be written in the matrix form
$$A = U \begin{pmatrix} T & S \\ 0 & N \end{pmatrix} U^{*},$$
where $\operatorname{Ind}(A_{1}) \le 1$, $\operatorname{rank}(A_{1}) = \operatorname{rank}(A_{1}A_{1}^{\sim})$, $A_{2}^{k} = 0$, $A_{1}A_{2} = A_{2}A_{1} = 0$ and $U \in \mathbb{C}^{n \times n}$ is unitary. Furthermore,
$$A_{1} = U \begin{pmatrix} T & S + G_{1}^{-1}G_{2}N \\ 0 & 0 \end{pmatrix} U^{*}, \qquad A_{2} = U \begin{pmatrix} 0 & -G_{1}^{-1}G_{2}N \\ 0 & N \end{pmatrix} U^{*},$$
where $G_{1}$ and $G_{2}$ are as in Lemma 8, $T \in \mathbb{C}^{r \times r}$ is nonsingular, $S \in \mathbb{C}^{r \times (n-r)}$, and $N \in \mathbb{C}^{(n-r) \times (n-r)}$ is nilpotent with $N^{k} = 0$. In addition, the m-core-EP decomposition of $A$ is unique, and $A_{1} = AA^{\scriptsize\textcircled{\mathfrak{m}}}A$, $A_{2} = A - AA^{\scriptsize\textcircled{\mathfrak{m}}}A$.
Let $A \in \mathbb{C}^{n \times n}$ with $\operatorname{Ind}(A) = k$, $\operatorname{rank}(AA^{\sim}A) = \operatorname{rank}(A)$ and $\operatorname{rank}(A^{k}) = \operatorname{rank}((A^{k})^{\sim}A^{k})$ have the decomposition of Lemma 9, and let $X \in \mathbb{C}^{n \times n}$. Consider the following system of equations:
$$XAX = X, \qquad AX = A_{1}A^{\mathfrak{m}}, \qquad XA = A^{\mathfrak{m}}A_{1}. \tag{2}$$
Theorem 1. 
The system (2) has a unique solution $X = A^{\mathfrak{m}}A_{1}A^{\mathfrak{m}}$.
Proof. 
It is easy to check that $X = A^{\mathfrak{m}}A_{1}A^{\mathfrak{m}}$ satisfies the three equations of system (2). For the uniqueness, suppose that $X$ satisfies (2). Then,
$$X = XAX = XA_{1}A^{\mathfrak{m}} = XAA^{\scriptsize\textcircled{\mathfrak{m}}}AA^{\mathfrak{m}} = A^{\mathfrak{m}}A_{1}A^{\scriptsize\textcircled{\mathfrak{m}}}AA^{\mathfrak{m}} = A^{\mathfrak{m}}AA^{\scriptsize\textcircled{\mathfrak{m}}}AA^{\scriptsize\textcircled{\mathfrak{m}}}AA^{\mathfrak{m}} = A^{\mathfrak{m}}AA^{\scriptsize\textcircled{\mathfrak{m}}}AA^{\mathfrak{m}} = A^{\mathfrak{m}}A_{1}A^{\mathfrak{m}}.$$
Hence, $X = A^{\mathfrak{m}}A_{1}A^{\mathfrak{m}}$ is the unique solution of (2). □
Definition 1. 
Let $A \in \mathbb{C}^{n \times n}$ with $\operatorname{Ind}(A) = k$, $\operatorname{rank}(AA^{\sim}A) = \operatorname{rank}(A)$ and $\operatorname{rank}(A^{k}) = \operatorname{rank}((A^{k})^{\sim}A^{k})$. The m-CCE inverse of $A$ in Minkowski space, denoted by $A^{c,\scriptsize\textcircled{\mathfrak{m}}}$, is defined as
$$A^{c,\scriptsize\textcircled{\mathfrak{m}}} = A^{\mathfrak{m}}A_{1}A^{\mathfrak{m}} = A^{\mathfrak{m}}AA^{\scriptsize\textcircled{\mathfrak{m}}}AA^{\mathfrak{m}}.$$
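Definition 1 translates directly into a small numerical routine. The sketch below is our illustration (assuming NumPy, and that the Minkowski inverse and the m-core-EP inverse of A have already been obtained, e.g. via the representations of Section 5); it forms $A^{c,\scriptsize\textcircled{\mathfrak{m}}} = A^{\mathfrak{m}}AA^{\scriptsize\textcircled{\mathfrak{m}}}AA^{\mathfrak{m}}$ and checks the three equations of system (2).

```python
import numpy as np

def m_cce_inverse(A: np.ndarray, A_mink: np.ndarray, A_mcep: np.ndarray) -> np.ndarray:
    """Definition 1: A^{c,(m)} = A^m A A^{(m)} A A^m, given the Minkowski inverse
    A_mink and the m-core-EP inverse A_mcep of A."""
    return A_mink @ A @ A_mcep @ A @ A_mink

def satisfies_system_2(A, X, A_mink, A_mcep, tol=1e-8) -> bool:
    """Check XAX = X, AX = A_1 A^m and XA = A^m A_1, where A_1 = A A^{(m)} A."""
    A1 = A @ A_mcep @ A
    residuals = [X @ A @ X - X, A @ X - A1 @ A_mink, X @ A - A_mink @ A1]
    return all(np.linalg.norm(R) < tol for R in residuals)
```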
Example 1. 
Let
A = 0 4 1 1 3 1 2 2 0 .
Then, k = Ind ( A ) = 2 , rank ( A ) = rank ( A A A ) = 2 and rank ( A k ) = rank A k A k = 1 . Also,
A = 1 54 11 54 10 27 7 54 5 54 2 27 1 27 2 27 2 27 , A D = 2 27 14 27 4 27 1 27 7 27 2 27 2 27 14 27 4 27 , A D , = 2 9 2 9 0 1 9 1 9 0 2 9 2 9 0 ,
A = 4 27 2 27 4 27 2 27 1 27 2 27 4 27 2 27 4 27 , A C , = 1 6 1 6 0 1 6 1 6 0 0 0 0 , A C ,   = 1 9 1 18 1 9 1 9 1 18 1 9 0 0 0 ,
A m = 3 16 1 16 1 2 19 16 15 16 1 2 1 4 1 4 0 , A D , m = 2 14 9 8 9 1 7 9 4 9 2 14 9 8 9 , A E = 4 3 2 3 4 3 2 3 1 3 2 3 4 3 2 3 4 3 ,
A C , E = 9 8 9 16 9 8 7 8 7 16 7 8 1 2 1 4 1 2 .
This shows that the m-CCE inverse is different from the known generalized inverses listed above.

4. Characterizations and a Canonical Form of the m-CCE Inverse

More characterizations for the Moore–Penrose inverse, the Drazin inverse, the DMP inverse, the CCE inverse, the Minkowski inverse, the m -core-EP inverse and the m -core inverse can be found in [23,24,25,26], and we now present characterizations for the m -CCE inverse.
Theorem 2. 
Let $A \in \mathbb{C}^{n \times n}$ with $\operatorname{Ind}(A) = k$, $\operatorname{rank}(AA^{\sim}A) = \operatorname{rank}(A)$ and $\operatorname{rank}(A^{k}) = \operatorname{rank}((A^{k})^{\sim}A^{k})$. Then,
 (1) $R(A^{c,\scriptsize\textcircled{\mathfrak{m}}}) = R(A^{\mathfrak{m}}A^{k})$;
 (2) $N(A^{c,\scriptsize\textcircled{\mathfrak{m}}}) = N((A^{k})^{\sim})$;
 (3) $A^{c,\scriptsize\textcircled{\mathfrak{m}}} = A^{(2)}_{R(A^{\mathfrak{m}}A^{k}),\,N((A^{k})^{\sim})} = A^{(2)}_{R(A^{\mathfrak{m}}A^{k}(A^{k})^{\sim}),\,N(A^{\mathfrak{m}}A^{k}(A^{k})^{\sim})}$.
Proof. 
(1) From Lemmas 4 and 5, we know
R ( A C ,   E ) = R ( A C ,   E A ) = R ( A m A A E A A m A ) = R ( A m A A E A ) = A m A R ( A E A ) = A m A R ( A k ) = A m R ( A k + 1 ) = A m R ( A k ) = R ( A m A k ) .
(2) According to Lemma 2, we know that
N ( A A m ) = N ( A ) N A k .
Using Lemma 6, we obtain that
A k A A m = A k ,
which implies
N A k A A m = N A k .
From Lemma 5, we have that
N ( A C ,   E ) = N ( A A C ,   E ) = N ( A A m A A E A A m ) = N ( A A E A A m ) .
This implies
x N ( A C ,   E ) if and only if A A m x N ( A A E ) = N A k .
Furthermore,
x N ( A C ,   E ) if and only if x N A k A A m = N A k .
Hence, we obtain N ( A C ,   E ) = N A k .
(3)
On the one hand, according to item (1) and item (2), we obtain
A C ,   E = A R ( A m A k ) , N A k ( 2 ) ,
on the other hand, we can obtain R ( A m A k ) = R A m A k A k from
R ( A m A k ) = R A m A k A k m A k R A m A k A k m = A m A k R A k m = A m A k R A k = R A m A k A k R ( A m A k ) .
In addition, since G denoted as a Minkowski metric matrice is nonsingular, it follows that
rank ( A k ) = rank A k * = rank G A k * G = rank A k .
It is obvious that
rank ( A m A k ) rank ( A k ) = rank ( A k + 1 ) = rank ( A A k ) = rank ( A A m A A k ) rank ( A m A k + 1 ) rank ( A m A k ) ,
which implies
rank ( A m A k ) = rank ( A k ) .
As a consequence, we obtain
rank A m A k A k = rank ( A m A k ) = rank ( A k ) = rank A k .
Due to N A k N A m A k A k , we deduce that
N A k = N A m A k A k .
In conclusion, we obtain A C , E = A R ( A m A k ) , N A k ( 2 ) = A R A m A k A k , N A m A k A k ( 2 ) . □
Theorem 3. 
Let $A \in \mathbb{C}^{n \times n}$ with $\operatorname{Ind}(A) = k$, $\operatorname{rank}(AA^{\sim}A) = \operatorname{rank}(A)$ and $\operatorname{rank}(A^{k}) = \operatorname{rank}((A^{k})^{\sim}A^{k})$. Then,
 (1) $AA^{c,\scriptsize\textcircled{\mathfrak{m}}} = P_{R(A^{k}),\,N((A^{k})^{\sim})}$;
 (2) $A^{c,\scriptsize\textcircled{\mathfrak{m}}}A = P_{R(A^{\mathfrak{m}}A^{k}),\,N((A^{k})^{\sim}A)}$.
Proof. 
(1) Using Theorem 2, we have
R ( A A C , E ) = A R ( A C , E ) = A R ( A m A k ) = R ( A A m A k ) = R ( A k ) .
According to Theorem 2(2), we know
N ( A A C , E ) = N A k .
Hence, we know A A C , E = P R ( A k ) , N A k .
(2)
According to Theorem 2(1), we know
R ( A C , E A ) = R ( A m A k ) .
It is obvious that
N ( A C , E A ) = N ( A m A A E A ) N ( A A m A A E A ) = N ( A A E A ) N ( A E A A E A ) = N ( A E A ) N ( A m A A E A ) ,
which shows that
N ( A C , E A ) = N ( A m A A E A ) = N ( A E A ) .
Hence, we can infer that
x N ( A C , E A ) if and only if A x N ( A E ) = N A k ;
furthermore,
x N ( A C , E A ) if and only if x N A k A .
So, N ( A C , E A ) = N A k A . In conclusion, A C , E A = P R ( A m A k ) , N A k A . □
Theorem 4. 
Let $A \in \mathbb{C}^{n \times n}$ with $\operatorname{Ind}(A) = k$, $\operatorname{rank}(AA^{\sim}A) = \operatorname{rank}(A)$ and $\operatorname{rank}(A^{k}) = \operatorname{rank}((A^{k})^{\sim}A^{k})$. Then,
 (1) $A^{c,\scriptsize\textcircled{\mathfrak{m}}}$ is an outer inverse of $A$;
 (2) $A^{c,\scriptsize\textcircled{\mathfrak{m}}}$ is a reflexive g-inverse of $A_{1}$;
 (3) $A_{1}A^{c,\scriptsize\textcircled{\mathfrak{m}}} = A_{1}A^{\mathfrak{m}}$ and $A^{c,\scriptsize\textcircled{\mathfrak{m}}}A_{1} = A^{\mathfrak{m}}A_{1}$.
Proof. 
(1) This is evident.
(2)
Using the properties of A C , E and A 1 , we can compute directly that
A 1 A C , E A 1 = A A E A A m A A E A A m A A E A = A A E A = A 1 ,
A C , E A 1 A C , E = A m A A E A A m A A E A A m A A E A A m = A m A A E A A m = A C , E .
In conclusion, we observe that A C , E is a reflexive g-inverse of A 1 .
(3)
It is obvious that
A 1 A C , E = A A E A A m A A E A A m = A A E A A m = A 1 A m ,
A C , E A 1 = A m A A E A A m A A E A = A m A A E A = A m A 1 .
Hence, we obtain the results. □
Theorem 5. 
Let $A \in \mathbb{C}^{n \times n}$ with $\operatorname{Ind}(A) = k$, $\operatorname{rank}(AA^{\sim}A) = \operatorname{rank}(A)$ and $\operatorname{rank}(A^{k}) = \operatorname{rank}((A^{k})^{\sim}A^{k})$. Then, $A^{c,\scriptsize\textcircled{\mathfrak{m}}}$ is the unique solution of
$$AX = P_{R(A^{k}),\,N((A^{k})^{\sim})}, \qquad R(X) \subseteq R(A^{\mathfrak{m}}A). \tag{3}$$
Proof. 
Because of Theorems 2 and 3, we observe that A C , E satisfies (3). To prove the uniqueness, we assume that X 1 and X 2 satisfy (3). Then,
A X 1 = A X 2 = P R ( A k ) , N A k , i . e . , A ( X 1 X 2 ) = 0 ,
which implies R ( X 1 X 2 ) N ( A ) N ( A m A ) . From
R ( X 1 ) R ( A m A ) , R ( X 2 ) R ( A m A ) ,
we have that
R ( X 1 X 2 ) R ( A m A ) .
Above all, we obtain R ( X 1 X 2 ) R ( A m A ) N ( A m A ) = 0 . So X 1 = X 2 . □
Theorem 6. 
Let $A \in \mathbb{C}^{n \times n}$ with $\operatorname{Ind}(A) = k$, $\operatorname{rank}(AA^{\sim}A) = \operatorname{rank}(A)$ and $\operatorname{rank}(A^{k}) = \operatorname{rank}((A^{k})^{\sim}A^{k})$. Then, the following statements are equivalent:
 (1) $X = A^{c,\scriptsize\textcircled{\mathfrak{m}}}$;
 (2) $N(X) = N((A^{k})^{\sim})$ and $XA = A^{\mathfrak{m}}A_{1}$;
 (3) $N(X) = N((A^{k})^{\sim})$ and $XA^{k} = A^{\mathfrak{m}}A^{k}$;
 (4) $N(X) = N((A^{k})^{\sim})$ and $XAA^{\scriptsize\textcircled{\mathfrak{m}}} = A^{\mathfrak{m}}AA^{\scriptsize\textcircled{\mathfrak{m}}}$.
Proof. 
(1) ⇒ (2). It is easily obtained by (2) and Theorem 2.
(2) ⇒ (3).
Post-multiplying $XA = A^{\mathfrak{m}}A_{1}$ by $A^{k-1}$, we obtain
X A k = A m A 1 A k 1 = A m A A E A A k 1 = A m A A E A k = A m A k ,
where the last equality follows from Lemmas 4 and 6.
(3) ⇒ (4).
We know A A E is an idempotent from A E A A E = A E . It follows that
X A A E = X A k A E k = A m A A E .
(4) ⇒ (1).
By Theorem 3, we obtain
N ( X ) = N A k = N ( A A C , E ) = N ( A A E A A m ) ,
using Lemma 6, we obtain
X = X A A E A A m .
Applying X A A E = A m A A E to X = X A A E A A m , we have
X = X A A E A A m = A m A A E A A m = A C , E .
This finishes the proof. □
Theorem 7. 
Let $A \in \mathbb{C}^{n \times n}$ with $\operatorname{Ind}(A) = k$, $\operatorname{rank}(AA^{\sim}A) = \operatorname{rank}(A)$ and $\operatorname{rank}(A^{k}) = \operatorname{rank}((A^{k})^{\sim}A^{k})$. Then, the following statements are equivalent:
 (1) $X = A^{c,\scriptsize\textcircled{\mathfrak{m}}}$;
 (2) $XA_{1}X = X$, $A_{1}X = A_{1}A^{\mathfrak{m}}$, $XA_{1} = A^{\mathfrak{m}}A_{1}$;
 (3) $A^{\mathfrak{m}}A_{1}X = X$, $AX = A_{1}A^{\mathfrak{m}}$, $XA = A^{\mathfrak{m}}A_{1}$;
 (4) $XAX = X$, $AX = A_{1}A^{\mathfrak{m}}$, $XA = A^{\mathfrak{m}}A_{1}$.
Proof. 
(1) ⇒ (2). It is obvious by Theorem 4.
(2) ⇒ (3).
Applying $A_{1}X = A_{1}A^{\mathfrak{m}}$ and $XA_{1} = A^{\mathfrak{m}}A_{1}$ to $XA_{1}X = X$, we obtain
X = X A 1 X = A m A 1 X = A m A 1 A m = A C , E .
According to Definition 1, we realize
A X = A 1 A m a n d X A = A m A 1 .
(3) ⇒ (4). Applying $XA = A^{\mathfrak{m}}A_{1}$ to $A^{\mathfrak{m}}A_{1}X = X$, we obtain
X = A m A 1 X = X A X .
(4) ⇒ (1). It is easily obtained by Theorem 1. This completes the proof. □
Theorem 8. 
Let $A \in \mathbb{C}^{n \times n}$ with $\operatorname{Ind}(A) = k$, $\operatorname{rank}(AA^{\sim}A) = \operatorname{rank}(A)$ and $\operatorname{rank}(A^{k}) = \operatorname{rank}((A^{k})^{\sim}A^{k})$. Then,
 (1) $A_{1} = A \iff \operatorname{Ind}(A) \le 1$;
 (2) $A^{c,\scriptsize\textcircled{\mathfrak{m}}} = A^{\mathfrak{m}} \iff \operatorname{Ind}(A) \le 1$.
Proof. 
(1) The result is easily obtained by Lemma 9.
(2)
It is obvious that
A C , E = A m A m A A E A A m = A m A A E A = A A m A A 1 = A .
From item (1), we know A C , E = A m Ind ( A ) 1 . □
Theorem 9. 
Let $A \in \mathbb{C}^{n \times n}$ with $\operatorname{Ind}(A) = k$, $\operatorname{rank}(AA^{\sim}A) = \operatorname{rank}(A)$ and $\operatorname{rank}(A^{k}) = \operatorname{rank}((A^{k})^{\sim}A^{k})$. Then,
 (1) $A^{c,\scriptsize\textcircled{\mathfrak{m}}} = AA^{D,\mathfrak{m}} \iff A^{\mathfrak{m}}A^{k} = A^{k}$;
 (2) $A_{1} = A^{c,\scriptsize\textcircled{\mathfrak{m}}}A_{1} \iff A^{\mathfrak{m}}A^{k} = A^{k}$;
 (3) $A_{1} = A^{\mathfrak{m},D}A_{1} \iff A^{\mathfrak{m}}A^{k} = A^{k}$;
 (4) $A_{1} = A^{D,\mathfrak{m}}A_{1} \iff A^{k} = A^{k+1}$;
 (5) $AA^{c,\scriptsize\textcircled{\mathfrak{m}}} = AA^{\mathfrak{m}} \iff A^{c,\scriptsize\textcircled{\mathfrak{m}}} = A^{\mathfrak{m}}$;
 (6) $A^{c,\scriptsize\textcircled{\mathfrak{m}}}A = A^{\mathfrak{m}}A \iff A^{c,\scriptsize\textcircled{\mathfrak{m}}} = A^{\mathfrak{m}}$.
Proof. 
From Lemmas 3 and 4, we have
R ( A A D ) = R ( A A E ) = R ( A k ) .
Using Lemma 6, we obtain
A A D A A E = A A E .
It can be easily obtained that
R ( A 1 ) = R ( A A E A ) = A R ( A E A ) = A R ( A E ) = A R ( A k ) = R ( A k + 1 ) = R ( A k ) .
With these facts in hand, we prove each item as follows.
(1)
According to the properties of A D , m and A C , E , we know
A C , E = A A D , m A m A A E A A m = A A D A A m A m A A E A = A A D A A m A A E = A A D A A E A m A A E = A A E ( A m I ) A A E = 0 R ( A A E ) N ( A m I ) R ( A k ) N ( A m I ) A m A k = A k .
(2) According to the properties of A 1 and A C , E , we have
A 1 = A C , E A 1 A 1 = A m A A E A A m A A E A A 1 = A m A A E A A 1 = A m A 1 ( I A m ) A 1 = 0 R ( A 1 ) N ( I A m ) R ( A k ) N ( I A m ) A m A k = A k .
(3) According to the properties of A 1 and A m , D , we obtain
A 1 = A m , D A 1 A 1 = A m A A D A A E A A 1 = A m A A E A A 1 = A m A 1 A m A k = A k .
(4) According to the properties of A 1 and A D , m , we know
A 1 = A D , m A 1 A 1 = A D A A m A A E A A 1 = A D A A E A A 1 = A D A 1 ( I A D ) A 1 = 0 R ( A 1 ) N ( I A D ) R ( A k ) N ( I A D ) ( I A D ) A k = 0 ( I A D ) A k + 1 = 0 A k + 1 = A D A k + 1 = A k .
(5) Pre-multiplying $AA^{c,\scriptsize\textcircled{\mathfrak{m}}} = AA^{\mathfrak{m}}$ by $A^{\mathfrak{m}}$, we obtain
A m A A C , E = A m A A m = A m .
It follows that A C , E = A m from A m A A C , E = A C , E . The converse is obvious.
(6)
Since $A^{c,\scriptsize\textcircled{\mathfrak{m}}}A = A^{\mathfrak{m}}A$ and $A^{c,\scriptsize\textcircled{\mathfrak{m}}}AA^{\mathfrak{m}} = A^{c,\scriptsize\textcircled{\mathfrak{m}}}$, we have
A C , E = A C , E A A m = A m A A m = A m .
The converse is obvious. □
Theorem 10. 
Let $A \in \mathbb{C}^{n \times n}$ with $\operatorname{Ind}(A) = k$ be given by (1), with $\operatorname{rank}(A) = \operatorname{rank}(AA^{\sim}A)$ and $\operatorname{rank}(A^{k}) = \operatorname{rank}((A^{k})^{\sim}A^{k})$. Then,
$$A^{c,\scriptsize\textcircled{\mathfrak{m}}} = G U \begin{pmatrix} K^{*}\Delta^{-1}K(\Sigma K)^{\scriptsize\textcircled{\mathfrak{m}}}G_{1}^{-1} & 0 \\ L^{*}\Delta^{-1}K(\Sigma K)^{\scriptsize\textcircled{\mathfrak{m}}}G_{1}^{-1} & 0 \end{pmatrix} U^{*} G,$$
where $\Delta$ and $G_{1}$ are as in Lemma 8.
Proof. 
Using (1) and Lemma 8, we have
A m A A E = G U K * ( G 1 Σ Δ ) 1 0 L * ( G 1 Σ Δ ) 1 0 U * G U Σ K Σ L 0 0 ( Σ K ) E 0 0 0 U * = G U K * Δ 1 Σ 1 G 1 1 0 L * Δ 1 Σ 1 G 1 1 0 G 1 G 2 G 2 * G 4 Σ K ( Σ K ) E 0 0 0 U * = G U K * Δ 1 Σ 1 K * Δ 1 Σ 1 G 1 1 G 2 L * Δ 1 Σ 1 L * Δ 1 Σ 1 G 1 1 G 2 Σ K ( Σ K ) E 0 0 0 U * = G U K * Δ 1 K ( Σ K ) E 0 L * Δ 1 K ( Σ K ) E 0 U * ,
A A m = U Σ K Σ L 0 0 U * G U K * ( G 1 Σ Δ ) 1 0 L * ( G 1 Σ Δ ) 1 0 U * G = U Σ 0 K L U * G U K * L * ( G 1 Σ Δ ) 1 0 U * G = U Σ 0 Δ Δ 1 Σ 1 G 1 1 0 U * G = U Σ 0 Σ 1 G 1 1 0 U * G = U G 1 1 0 0 0 U * G ,
a n d A C , E = A m A A E A A m = G U K * Δ 1 K ( Σ K ) E 0 L * Δ 1 K ( Σ K ) E 0 G 1 1 0 0 0 U * G = G U K * Δ 1 K ( Σ K ) E G 1 1 0 L * Δ 1 K ( Σ K ) E G 1 1 0 U * G .
This finishes the proof. □

5. Representations

In this section, we present some representations of the m-CCE inverse.
Lemma 10 
([27]). Let $A \in \mathbb{C}^{m \times n}$, $X \in \mathbb{C}^{n \times p}$ and $Y \in \mathbb{C}^{p \times m}$. If $A^{(2)}_{R(XY),\,N(XY)}$ exists, then
$$A^{(2)}_{R(XY),\,N(XY)} = \lim_{\lambda \to 0} X(\lambda I_{p} + YAX)^{-1}Y.$$
Lemma 11 
([28]). Let $A \in \mathbb{C}^{m \times n}$. Then,
$$A^{\mathfrak{m}} = \int_{0}^{\infty} \exp(-A^{\sim}At)\,A^{\sim}\,dt,$$
$$A^{\mathfrak{m}} = \lim_{t \to 0} (tI_{n} + A^{\sim}A)^{-1}A^{\sim}.$$
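Lemma 11(2) gives a convenient way to approximate the Minkowski inverse numerically: evaluate $(tI_n + A^{\sim}A)^{-1}A^{\sim}$ for a small $t > 0$. The sketch below is our illustration (assuming NumPy and the `minkowski_adjoint` helper from Section 2); it is an approximation with a finite $t$ standing in for the limit, and it presumes that $A^{\mathfrak{m}}$ exists.

```python
import numpy as np
# reuses minkowski_adjoint from the sketch in Section 2

def minkowski_inverse_approx(A: np.ndarray, t: float = 1e-9) -> np.ndarray:
    """Approximate A^m via Lemma 11(2): A^m = lim_{t -> 0} (t I_n + A~A)^{-1} A~."""
    A_sim = minkowski_adjoint(A)
    n = A.shape[1]
    return np.linalg.solve(t * np.eye(n) + A_sim @ A, A_sim)
```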
Lemma 12 
([19]). Let $A \in \mathbb{C}^{n \times n}$ with $\operatorname{Ind}(A) = k$. If $A = B_{1}C_{1}$ is a full-rank decomposition and $C_{i}B_{i} = B_{i+1}C_{i+1}$ $(i = 1, 2, \ldots, k-1)$ are also full-rank decompositions, then
 (1) $C_{k}B_{k}$ is invertible;
 (2) $A^{k} = B_{1} \cdots B_{k}\, C_{k} \cdots C_{1}$;
 (3) $A^{D} = B_{1} \cdots B_{k}\, (C_{k}B_{k})^{-(k+1)}\, C_{k} \cdots C_{1}$.
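Lemma 12 is constructive: a full-rank decomposition can be read off from an SVD, and repeating it on $C_iB_i$ yields the Drazin inverse through the formula in item (3). The sketch below is our illustration under these assumptions (NumPy; `full_rank_decomposition` and `drazin_via_cline` are our names). Combined with Theorems 12 and 13 below, the same $B$, $C$ factors also yield $A^{\scriptsize\textcircled{\mathfrak{m}}}$ and $A^{c,\scriptsize\textcircled{\mathfrak{m}}}$.

```python
import numpy as np

def full_rank_decomposition(A: np.ndarray, tol: float = 1e-10):
    """A = B C with B of full column rank and C of full row rank (via the SVD)."""
    U, s, Vh = np.linalg.svd(A)
    r = int(np.sum(s > tol))
    return U[:, :r] * s[:r], Vh[:r, :]

def drazin_via_cline(A: np.ndarray, k: int) -> np.ndarray:
    """Lemma 12(3): A^D = B_1...B_k (C_k B_k)^{-(k+1)} C_k...C_1, with k = Ind(A)."""
    n = A.shape[0]
    B_prod, C_prod, M = np.eye(n), np.eye(n), A
    for _ in range(k):
        B, C = full_rank_decomposition(M)    # M = B C
        B_prod = B_prod @ B                  # accumulates B_1 ... B_k
        C_prod = C @ C_prod                  # accumulates C_k ... C_1
        M = C @ B                            # C_i B_i, to be factored next
    middle = np.linalg.matrix_power(np.linalg.inv(M), k + 1)   # (C_k B_k)^{-(k+1)}
    return B_prod @ middle @ C_prod
```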
Lemma 13 
([29]). Let $A = B_{1}C_{1} \in \mathbb{C}^{m \times n}$ be a full-rank decomposition with $\operatorname{rank}(A) = r$ and $\operatorname{rank}(AA^{\sim}) = \operatorname{rank}(A^{\sim}A) = \operatorname{rank}(A) = r$. Then,
$$A^{\mathfrak{m}} = C_{1}^{\sim}(C_{1}C_{1}^{\sim})^{-1}(B_{1}^{\sim}B_{1})^{-1}B_{1}^{\sim}.$$
Theorem 11. 
Let $A \in \mathbb{C}^{n \times n}$ with $\operatorname{Ind}(A) = 1$ and $\operatorname{rank}(A) = \operatorname{rank}(AA^{\sim})$. If $A$ has a full-rank decomposition as in Lemma 12, then
$$A^{\scriptsize\textcircled{\#},\mathfrak{m}} = B_{1}(C_{1}B_{1})^{-1}(B_{1}^{\sim}B_{1})^{-1}B_{1}^{\sim}.$$
Proof. 
Let X = B 1 ( C 1 B 1 ) 1 B 1 B 1 1 B 1 . We must verify that X satisfies A X A = A , A X 2 = X and
( A X ) = A X . Then,
A X A = B 1 C 1 B 1 ( C 1 B 1 ) 1 B 1 B 1 1 B 1 B 1 C 1 = B 1 C 1 = A ;
A X 2 = B 1 C 1 B 1 ( C 1 B 1 ) 1 B 1 B 1 1 B 1 B 1 ( C 1 B 1 ) 1 B 1 B 1 1 B 1 = B 1 ( C 1 B 1 ) 1 B 1 B 1 1 B 1 = X ;
A X = B 1 C 1 B 1 ( C 1 B 1 ) 1 B 1 B 1 1 B 1 = B 1 B 1 B 1 1 B 1 ,
( A X ) = B 1 B 1 B 1 1 B 1 = B 1 B 1 B 1 1 B 1 = B 1 B 1 B 1 1 B 1 = B 1 B 1 B 1 1 B 1 = A X .
Thus, we obtain the result. □
Theorem 12. 
Let $A \in \mathbb{C}^{n \times n}$ with $\operatorname{Ind}(A) = k$ and $\operatorname{rank}(A^{k}) = \operatorname{rank}((A^{k})^{\sim}A^{k})$. If $A$ has full-rank decompositions as in Lemma 12, then
$$A^{\scriptsize\textcircled{\mathfrak{m}}} = BCB(C_{k}B_{k})^{-(k+1)}(B^{\sim}B)^{-1}B^{\sim},$$
where $B = B_{1} \cdots B_{k}$ and $C = C_{k} \cdots C_{1}$.
Proof. 
Let B = B 1 B k , C = C k C 1 . Then, A k = B C and A D = B ( C k B k ) k 1 C . According to Theorem 11, we know
A k m = ( B C ) m = B ( C B ) 1 B B 1 B .
It follows from A E = A k A D A k m that
A E = B C B ( C k B k ) k 1 C B ( C B ) 1 B B 1 B = B C B ( C k B k ) k 1 B B 1 B .
This finishes the proof. □
Theorem 13. 
Let $A \in \mathbb{C}^{n \times n}$ with $\operatorname{Ind}(A) = k$, $\operatorname{rank}(A) = \operatorname{rank}(AA^{\sim}A)$ and $\operatorname{rank}(A^{k}) = \operatorname{rank}((A^{k})^{\sim}A^{k})$. If $A$ has full-rank decompositions as in Lemma 12, then
$$A^{c,\scriptsize\textcircled{\mathfrak{m}}} = C_{1}^{\sim}(C_{1}C_{1}^{\sim})^{-1}C_{1}\,BCB(C_{k}B_{k})^{-(k+1)}(B^{\sim}B)^{-1}B^{\sim},$$
where $B = B_{1} \cdots B_{k}$ and $C = C_{k} \cdots C_{1}$.
Proof. 
Using Lemma 13, Theorem 12 and the fact that A C , E = A m A A E A A m , we have
A C , E = C 1 C 1 C 1 1 B 1 B 1 1 B 1 B 1 C 1 B C B ( C k B k ) k 1 B B 1 B B 1 C 1 C 1 C 1 C 1 1 B 1 B 1 1 B 1 = C 1 C 1 C 1 1 C 1 B C B ( C k B k ) k 1 B B 1 B B 1 B 1 B 1 1 B 1 = C 1 C 1 C 1 1 C 1 B C B ( C k B k ) k 1 B B 1 B k B 2 B 1 = C 1 C 1 C 1 1 C 1 B C B ( C k B k ) k 1 B B 1 B .
This finishes the proof. □
Example 2. 
Let
A = 2 14 4 1 7 2 2 14 4
Then, Ind ( A ) = 1 and rank ( A ) = rank ( A A ) = 1 . In addition,
B 1 = 2 1 2 , C 1 = 1 7 2 , A m = 4 9 2 9 4 9 2 9 1 9 2 9 4 9 2 9 4 9 ,
B 1 ( C 1 B 1 ) 1 B 1 B 1 1 B 1 = 4 9 2 9 4 9 2 9 1 9 2 9 4 9 2 9 4 9 = A m .
Example 3. 
Let us test the matrix A given in Example 1. Then, k = Ind ( A ) = 2 , and
B 1 = 0 4 1 3 2 2 , C 1 = 1 0 1 4 0 1 1 4 , B 2 = 1 2 1 2 , C 2 = 1 7 ,
B = B 1 B 2 = 2 1 2 , C = C 2 C 1 = 1 7 2 ,
B C B ( C k B k ) k 1 B B 1 B = 4 3 2 3 4 3 2 3 1 3 2 3 4 3 2 3 4 3 = A E ,
C 1 C 1 C 1 1 C 1 B C B ( C k B k ) k 1 B B 1 B = 9 8 9 16 9 8 7 8 7 16 7 8 1 2 1 4 1 2 = A C , E ,
where A E and A C , E have been shown in Example 1.
Theorem 14. 
Let $A \in \mathbb{C}^{n \times n}$ with $\operatorname{Ind}(A) = k$, $\operatorname{rank}(A) = \operatorname{rank}(AA^{\sim}A)$ and $\operatorname{rank}(A^{k}) = \operatorname{rank}((A^{k})^{\sim}A^{k})$, and let $A$ have full-rank decompositions as in Lemma 12. Then,
$$A^{c,\scriptsize\textcircled{\mathfrak{m}}} = \int_{0}^{\infty} \exp(-C_{1}^{\sim}C_{1}t)\,C_{1}^{\sim}\,dt\; M,$$
where $B = B_{1} \cdots B_{k}$, $C = C_{k} \cdots C_{1}$ and $M = C_{1}BCB(C_{k}B_{k})^{-(k+1)}(B^{\sim}B)^{-1}B^{\sim}$.
Proof. 
It is clear that
rank ( C 1 C 1 ) rank ( C 1 ) = rank ( A ) = rank ( A A A ) = rank ( C 1 B 1 B 1 C 1 C 1 B 1 ) rank ( C 1 C 1 ) ,
which implies
rank ( C 1 ) = rank ( C 1 C 1 ) .
In addition, since C 1 is of full column rank, it follows that
rank ( C 1 ) = rank ( C 1 C 1 ) = rank ( C 1 C 1 C 1 ) .
This shows C 1 m exists by Lemma 1. Also, From Lemma 1 and Theorem 2, we have
R ( A C , E ) = R ( A m A k ) R ( A m ) = R ( A ) = R ( C 1 B 1 ) R ( C 1 ) = R ( C 1 m ) = R ( C 1 m C 1 ) .
According to Lemma 6, we obtain
A C , E = C 1 m C 1 A C , E .
Using Lemma 11 and Theorem 13, we attain
A C , E = 0 exp ( C 1 C 1 t ) C 1 d t C 1 C 1 C 1 C 1 1 C 1 B C B ( C k B k ) k 1 B B 1 B = 0 exp ( C 1 C 1 t ) C 1 d t C 1 B C B ( C k B k ) k 1 B B 1 B = 0 exp ( C 1 C 1 t ) C 1 d t M ,
where M = C 1 B C B ( C k B k ) k 1 B B 1 B . This finishes the proof. □
Theorem 15. 
Let $A \in \mathbb{C}^{n \times n}$ with $\operatorname{Ind}(A) = k$, $\operatorname{rank}(AA^{\sim}A) = \operatorname{rank}(A)$ and $\operatorname{rank}(A^{k}) = \operatorname{rank}((A^{k})^{\sim}A^{k})$. Then,
 (1) $A^{c,\scriptsize\textcircled{\mathfrak{m}}} = \lim_{\lambda \to 0} (\lambda I_{n} + A^{\sim}A)^{-1}A^{\sim}A^{k}\,\bigl(\lambda I_{n} + (A^{k})^{\sim}A^{k}\bigr)^{-1}(A^{k})^{\sim}$;
 (2) $A^{c,\scriptsize\textcircled{\mathfrak{m}}} = \lim_{\lambda \to 0} (\lambda I_{n} + A^{\sim}A)^{-1}A^{\sim}\bigl(\lambda I_{n} + A^{k}(A^{k})^{\sim}A(\lambda I_{n} + A^{\sim}A)^{-1}A^{\sim}\bigr)^{-1}A^{k}(A^{k})^{\sim}$;
 (3) $A^{c,\scriptsize\textcircled{\mathfrak{m}}} = \lim_{\lambda \to 0} (\lambda I_{n} + A^{\sim}A)^{-1}A^{\sim}A^{k}(A^{k})^{\sim}\bigl(\lambda I_{n} + A^{k}(A^{k})^{\sim}\bigr)^{-1}$;
 (4) $A^{c,\scriptsize\textcircled{\mathfrak{m}}} = \lim_{\lambda \to 0} \bigl(\lambda I_{n} + (\lambda I_{n} + A^{\sim}A)^{-1}A^{\sim}A^{k}(A^{k})^{\sim}A\bigr)^{-1}(\lambda I_{n} + A^{\sim}A)^{-1}A^{\sim}A^{k}(A^{k})^{\sim}$.
Proof. 
We have already shown in Theorem 2 that $A^{c,\scriptsize\textcircled{\mathfrak{m}}} = A^{(2)}_{R(A^{\mathfrak{m}}A^{k}(A^{k})^{\sim}),\,N(A^{\mathfrak{m}}A^{k}(A^{k})^{\sim})}$. Combining this with Lemma 10 and Lemma 11(2), we obtain the following.
(1)
Let $X = A^{\mathfrak{m}}A^{k}$ and $Y = (A^{k})^{\sim}$. Then,
$$A^{c,\scriptsize\textcircled{\mathfrak{m}}} = \lim_{\lambda \to 0} A^{\mathfrak{m}}A^{k}\bigl(\lambda I_{n} + (A^{k})^{\sim}AA^{\mathfrak{m}}A^{k}\bigr)^{-1}(A^{k})^{\sim} = \lim_{\lambda \to 0} A^{\mathfrak{m}}A^{k}\bigl(\lambda I_{n} + (A^{k})^{\sim}A^{k}\bigr)^{-1}(A^{k})^{\sim} = \lim_{\lambda \to 0} (\lambda I_{n} + A^{\sim}A)^{-1}A^{\sim}A^{k}\bigl(\lambda I_{n} + (A^{k})^{\sim}A^{k}\bigr)^{-1}(A^{k})^{\sim}.$$
(2) Let $X = A^{\mathfrak{m}}$ and $Y = A^{k}(A^{k})^{\sim}$. Then, the result is obtained in a similar way.
(3)
Let $X = A^{\mathfrak{m}}A^{k}(A^{k})^{\sim}$ and $Y = I_{n}$. Then, the result follows.
(4)
Let $X = I_{n}$ and $Y = A^{\mathfrak{m}}A^{k}(A^{k})^{\sim}$. Then, the result follows. □
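For completeness, the first limit representation in Theorem 15 can be evaluated numerically by taking a small fixed $\lambda$; the sketch below is our illustration (NumPy, the `minkowski_adjoint` helper from Section 2, and a finite $\lambda$ standing in for the limit), not an exact implementation.

```python
import numpy as np
# reuses minkowski_adjoint from the sketch in Section 2

def m_cce_inverse_limit(A: np.ndarray, k: int, lam: float = 1e-9) -> np.ndarray:
    """Approximate Theorem 15(1) with a small lambda:
    (lam I + A~A)^{-1} A~ A^k (lam I + (A^k)~ A^k)^{-1} (A^k)~."""
    n = A.shape[0]
    I = np.eye(n)
    A_sim = minkowski_adjoint(A)
    Ak = np.linalg.matrix_power(A, k)
    Ak_sim = minkowski_adjoint(Ak)
    left = np.linalg.solve(lam * I + A_sim @ A, A_sim)       # ~ A^m (Lemma 11(2))
    right = np.linalg.solve(lam * I + Ak_sim @ Ak, Ak_sim)   # ~ (A^k)^m-type factor
    return left @ Ak @ right
```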
Example 4. 
Consider the matrix A given in Example 1. Then, k = Ind ( A ) = 2 ,
B = λ I n + A A 1 A A k λ I n + A k A k 1 A k = λ I n + A A 1 A A k A k λ I n + A k A k 1 = 312 ( λ 3 ) ( λ 4 ) 2 ( λ + 52 ) 156 ( λ 3 ) ( λ 4 ) 2 ( λ + 52 ) 312 ( λ 3 ) ( λ 4 ) 2 ( λ + 52 ) ( 104 ( λ 7 ) ) ( λ 4 ) 2 ( λ + 52 ) 52 ( λ 7 ) ( λ 4 ) 2 ( λ + 52 ) 104 ( λ 7 ) ( λ 4 ) 2 ( λ + 52 ) 104 ( λ 4 ) ( λ + 52 ) 52 ( λ 4 ) ( λ + 52 ) 104 ( λ 4 ) ( λ + 52 ) ,
C = λ I n + A A 1 A λ I n + A k A k A λ I n + A A 1 A 1 A k A k = λ I n + λ I n + A A 1 A A k A k A 1 λ I n + A A 1 A A k A k = 312 ( λ 3 ) λ 3 + 8 λ 2 + 348 λ 832 156 ( λ 3 ) λ 3 + 8 λ 2 + 348 λ 832 312 ( λ 3 ) λ 3 + 8 λ 2 + 348 λ 832 104 ( λ 7 ) λ 3 + 8 λ 2 + 348 λ 832 52 ( λ 7 ) λ 3 + 8 λ 2 + 348 λ 832 104 ( λ 7 ) λ 3 + 8 λ 2 + 348 λ 832 104 ( λ 4 ) λ 3 + 8 λ 2 + 348 λ 832 52 ( λ 4 ) λ 3 + 8 λ 2 + 348 λ 832 104 ( λ 4 ) λ 3 + 8 λ 2 + 348 λ 832 .
It is easy to check that lim λ 0 B = lim λ 0 C = A C , E , where A C , E has been shown in Example 1.

6. Applications

Theorem 16. 
Let $A \in \mathbb{C}^{n \times n}$ with $\operatorname{Ind}(A) = k$, $\operatorname{rank}(AA^{\sim}A) = \operatorname{rank}(A)$ and $\operatorname{rank}(A^{k}) = \operatorname{rank}((A^{k})^{\sim}A^{k})$. Then, the general solution to the linear system
$$Ax = AA^{c,\scriptsize\textcircled{\mathfrak{m}}}b, \qquad b \in \mathbb{C}^{n}, \tag{4}$$
is $x = A^{c,\scriptsize\textcircled{\mathfrak{m}}}b + (I - A^{\mathfrak{m}}A)y$, where $y \in \mathbb{C}^{n}$ is arbitrary.
Proof. 
On the one hand,
$$A\bigl(A^{c,\scriptsize\textcircled{\mathfrak{m}}}b + (I - A^{\mathfrak{m}}A)y\bigr) = AA^{c,\scriptsize\textcircled{\mathfrak{m}}}b + (A - AA^{\mathfrak{m}}A)y = AA^{c,\scriptsize\textcircled{\mathfrak{m}}}b.$$
This implies that $A^{c,\scriptsize\textcircled{\mathfrak{m}}}b + (I - A^{\mathfrak{m}}A)y$ is a solution of (4). On the other hand, if (4) holds for some $x$, then
$$A^{\mathfrak{m}}Ax = A^{\mathfrak{m}}AA^{c,\scriptsize\textcircled{\mathfrak{m}}}b = A^{\mathfrak{m}}AA^{\mathfrak{m}}AA^{\scriptsize\textcircled{\mathfrak{m}}}AA^{\mathfrak{m}}b = A^{\mathfrak{m}}AA^{\scriptsize\textcircled{\mathfrak{m}}}AA^{\mathfrak{m}}b = A^{c,\scriptsize\textcircled{\mathfrak{m}}}b,$$
which yields
$$x = A^{c,\scriptsize\textcircled{\mathfrak{m}}}b + x - A^{c,\scriptsize\textcircled{\mathfrak{m}}}b = A^{c,\scriptsize\textcircled{\mathfrak{m}}}b + x - A^{\mathfrak{m}}Ax = A^{c,\scriptsize\textcircled{\mathfrak{m}}}b + (I - A^{\mathfrak{m}}A)x.$$
Thus, every solution $x$ has the form $A^{c,\scriptsize\textcircled{\mathfrak{m}}}b + (I - A^{\mathfrak{m}}A)y$, where $y \in \mathbb{C}^{n}$ is arbitrary. □
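The general solution in Theorem 16 is easy to generate once $A^{c,\scriptsize\textcircled{\mathfrak{m}}}$ and $A^{\mathfrak{m}}$ are available; the following sketch is our illustration (NumPy; the two generalized inverses are assumed precomputed, e.g. by the representations of Section 5).

```python
import numpy as np

def solve_theorem_16(A, A_mink, A_mcce, b, y=None):
    """General solution of A x = A A^{c,(m)} b:  x = A^{c,(m)} b + (I - A^m A) y."""
    n = A.shape[0]
    if y is None:
        y = np.zeros(n)              # y = 0 gives the particular solution A^{c,(m)} b
    return A_mcce @ b + (np.eye(n) - A_mink @ A) @ y
```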
Theorem 17. 
Let $A \in \mathbb{C}^{n \times n}$ with $\operatorname{Ind}(A) = k$, $\operatorname{rank}(AA^{\sim}A) = \operatorname{rank}(A)$ and $\operatorname{rank}(A^{k}) = \operatorname{rank}((A^{k})^{\sim}A^{k})$, and suppose $X \in \mathbb{C}^{n \times m}$ and $D \in \mathbb{C}^{n \times m}$. If $R(D) \subseteq R(A^{k})$, then the restricted matrix equation
$$AX = D, \qquad R(X) \subseteq R(A^{\mathfrak{m}}A^{k}) \tag{5}$$
has the unique solution $X = A^{c,\scriptsize\textcircled{\mathfrak{m}}}D$.
Proof. 
Since $R(D) \subseteq R(A^{k}) = R(AA^{c,\scriptsize\textcircled{\mathfrak{m}}})$, it follows that $AA^{c,\scriptsize\textcircled{\mathfrak{m}}}D = D$, which implies that $A^{c,\scriptsize\textcircled{\mathfrak{m}}}D$ is a solution of $AX = D$. It is also obvious that
$$R(X) = R(A^{c,\scriptsize\textcircled{\mathfrak{m}}}D) \subseteq R(A^{c,\scriptsize\textcircled{\mathfrak{m}}}) = R(A^{\mathfrak{m}}A^{k}).$$
So $A^{c,\scriptsize\textcircled{\mathfrak{m}}}D$ is a solution of (5). To prove the uniqueness, assume that $Y$ is another solution of (5). Then, $R(Y) \subseteq R(A^{\mathfrak{m}}A^{k}) = R(A^{c,\scriptsize\textcircled{\mathfrak{m}}}A)$, which implies
$$X = A^{c,\scriptsize\textcircled{\mathfrak{m}}}D = A^{c,\scriptsize\textcircled{\mathfrak{m}}}AY = Y.$$
This completes the proof. □
Theorem 18. 
Let $A \in \mathbb{C}^{n \times n}$ with $\operatorname{Ind}(A) = k$, $\operatorname{rank}(AA^{\sim}A) = \operatorname{rank}(A)$ and $\operatorname{rank}(A^{k}) = \operatorname{rank}((A^{k})^{\sim}A^{k})$. Let $B \in \mathbb{C}^{n \times r}$ and $C^{*} \in \mathbb{C}^{n \times r}$ be of full column rank such that $R(B) = N((A^{k})^{\sim})$ and $N(C) = R(A^{\mathfrak{m}}A^{k})$. Then, the bordered matrix
$$L = \begin{pmatrix} A & B \\ C & 0 \end{pmatrix}$$
is nonsingular, and its inverse is given by
$$L^{-1} = \begin{pmatrix} A^{c,\scriptsize\textcircled{\mathfrak{m}}} & (I_{n} - A^{c,\scriptsize\textcircled{\mathfrak{m}}}A)C^{\dagger} \\ B^{\dagger}(I_{n} - AA^{c,\scriptsize\textcircled{\mathfrak{m}}}) & B^{\dagger}(AA^{c,\scriptsize\textcircled{\mathfrak{m}}}A - A)C^{\dagger} \end{pmatrix}.$$
Proof. 
From R ( A C , E ) = R ( A m A k ) = N ( C ) , we realize C A C , E = 0 . Since
R ( I n A A C , E ) = N ( A A C , E ) = N A k = R ( B ) = R ( B B ) ,
it follows that B B ( I n A A C , E ) = I n A A C , E .
Let
T = A C , E ( I n A C , E A ) C B ( I n A A C , E ) B ( A A C , E A A ) C ,
then
L T = A A C , E + B B ( I n A A C , E ) A ( I n A C , E A ) C + B B ( A A C , E A A ) C C A C , E C ( I n A C , E A ) C = A A C , E + I n A A C , E A ( I n A C , E A ) C ( I n A A C , E ) A C 0 C C = I n 0 0 I r
Hence, T = L 1 . □
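Theorem 18 can be checked numerically by assembling the bordered matrix and comparing its inverse with the stated block form. The sketch below is our illustration (NumPy; B, C and $A^{c,\scriptsize\textcircled{\mathfrak{m}}}$ are assumed given, and we use the Moore–Penrose inverses of B and C, which is our reading of the reconstructed block formula).

```python
import numpy as np

def bordered_matrix(A: np.ndarray, B: np.ndarray, C: np.ndarray) -> np.ndarray:
    """L = [[A, B], [C, 0]] as in Theorem 18."""
    r = B.shape[1]
    return np.block([[A, B], [C, np.zeros((r, r))]])

def bordered_inverse_blocks(A, B, C, A_mcce) -> np.ndarray:
    """Candidate block form of L^{-1} from Theorem 18 (B^+, C^+: Moore-Penrose inverses)."""
    n = A.shape[0]
    Bp, Cp = np.linalg.pinv(B), np.linalg.pinv(C)
    I = np.eye(n)
    return np.block([
        [A_mcce,                (I - A_mcce @ A) @ Cp],
        [Bp @ (I - A @ A_mcce), Bp @ (A @ A_mcce @ A - A) @ Cp],
    ])

# consistency check: bordered_matrix(A, B, C) @ bordered_inverse_blocks(A, B, C, A_mcce) ~ I
```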
Theorem 19. 
Let $A \in \mathbb{C}^{n \times n}$ with $\operatorname{Ind}(A) = k$, $\operatorname{rank}(AA^{\sim}A) = \operatorname{rank}(A)$ and $\operatorname{rank}(A^{k}) = \operatorname{rank}((A^{k})^{\sim}A^{k})$. Let $B \in \mathbb{C}^{n \times r}$ and $C^{*} \in \mathbb{C}^{n \times r}$ be as in Theorem 18, and let $X, D \in \mathbb{C}^{n \times m}$ be as in Theorem 17. Then, the unique solution of (5) is given by $X = [x_{ij}]$, where
$$x_{ij} = \frac{\det\begin{pmatrix} A(i \to d_{j}) & B \\ C(i \to 0) & 0 \end{pmatrix}}{\det\begin{pmatrix} A & B \\ C & 0 \end{pmatrix}}, \qquad i = 1, 2, \ldots, n,\quad j = 1, 2, \ldots, m, \tag{6}$$
where $d_{j}$ denotes the $j$-th column of $D$, and $A(i \to d_{j})$ and $C(i \to 0)$ denote the matrices obtained by replacing the $i$-th column of $A$ and of $C$ by $d_{j}$ and $0$, respectively.
Proof. 
Since $X$ is the solution of (5), we have $R(X) \subseteq R(A^{\mathfrak{m}}A^{k}) = N(C)$, which implies $CX = 0$. Then, (5) can be written as
$$\begin{pmatrix} A & B \\ C & 0 \end{pmatrix}\begin{pmatrix} X & 0 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} AX & 0 \\ CX & 0 \end{pmatrix} = \begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix}.$$
According to Theorem 18, we obtain
$$\begin{pmatrix} X & 0 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} A & B \\ C & 0 \end{pmatrix}^{-1}\begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} A^{c,\scriptsize\textcircled{\mathfrak{m}}} & (I_{n} - A^{c,\scriptsize\textcircled{\mathfrak{m}}}A)C^{\dagger} \\ B^{\dagger}(I_{n} - AA^{c,\scriptsize\textcircled{\mathfrak{m}}}) & B^{\dagger}(AA^{c,\scriptsize\textcircled{\mathfrak{m}}}A - A)C^{\dagger} \end{pmatrix}\begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix}.$$
Hence, $X = A^{c,\scriptsize\textcircled{\mathfrak{m}}}D$, and (6) follows from the classical Cramer rule. □
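The Cramer-type formula (6) can be implemented directly by column substitution in the bordered matrix; the sketch below is our illustration (NumPy, reusing `bordered_matrix` from the previous sketch), intended for small examples since it evaluates one determinant per entry.

```python
import numpy as np
# reuses bordered_matrix from the previous sketch

def cramer_solution(A, B, C, D) -> np.ndarray:
    """Entrywise solution of AX = D with R(X) in R(A^m A^k), via formula (6)."""
    n, m = D.shape
    r = B.shape[1]
    detL = np.linalg.det(bordered_matrix(A, B, C))
    X = np.zeros((n, m), dtype=complex)
    for j in range(m):
        for i in range(n):
            Aij = np.array(A, dtype=complex)
            Aij[:, i] = D[:, j]                       # A(i -> d_j)
            Cij = np.array(C, dtype=complex)
            Cij[:, i] = 0                             # C(i -> 0)
            Lij = np.block([[Aij, B], [Cij, np.zeros((r, r))]])
            X[i, j] = np.linalg.det(Lij) / detL
    return X
```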

7. A Binary Relation Based on the m -CCE Inverse

In this section, we study the binary relation for the m -CCE inverse.
Definition 2. 
Let $A, B \in \mathbb{C}^{n \times n}$, where $A$ satisfies $\operatorname{Ind}(A) = k$, $\operatorname{rank}(A) = \operatorname{rank}(AA^{\sim}A)$ and $\operatorname{rank}(A^{k}) = \operatorname{rank}((A^{k})^{\sim}A^{k})$. We say that $A$ is below $B$ under the relation $\le^{c,\scriptsize\textcircled{\mathfrak{m}}}$ if
$$A^{c,\scriptsize\textcircled{\mathfrak{m}}}A = A^{c,\scriptsize\textcircled{\mathfrak{m}}}B \quad \text{and} \quad AA^{c,\scriptsize\textcircled{\mathfrak{m}}} = BA^{c,\scriptsize\textcircled{\mathfrak{m}}}.$$
This relation is denoted by $A \le^{c,\scriptsize\textcircled{\mathfrak{m}}} B$.
It is easy to see that the binary relation $\le^{c,\scriptsize\textcircled{\mathfrak{m}}}$ is reflexive. The following examples show that it is neither antisymmetric nor transitive.
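The relation of Definition 2 is straightforward to test numerically once the m-CCE inverse of A is available; the sketch below is our illustration (NumPy; `A_mcce` is assumed precomputed).

```python
import numpy as np

def below_cce_m(A, B, A_mcce, tol=1e-10) -> bool:
    """Definition 2:  A <= B  iff  A^{c,(m)} A = A^{c,(m)} B  and  A A^{c,(m)} = B A^{c,(m)}."""
    return (np.linalg.norm(A_mcce @ A - A_mcce @ B) < tol
            and np.linalg.norm(A @ A_mcce - B @ A_mcce) < tol)
```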
Example 5. 
Consider the matrix $A \in \mathbb{C}^{3 \times 3}$ below, and let $B$ be the transpose of $A$:
$$A = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad B = \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
It is easy to check that
$$A^{c,\scriptsize\textcircled{\mathfrak{m}}} = B^{c,\scriptsize\textcircled{\mathfrak{m}}} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
By direct computation, we have
$$A^{c,\scriptsize\textcircled{\mathfrak{m}}}A = A^{c,\scriptsize\textcircled{\mathfrak{m}}}B = AA^{c,\scriptsize\textcircled{\mathfrak{m}}} = BA^{c,\scriptsize\textcircled{\mathfrak{m}}} = B^{c,\scriptsize\textcircled{\mathfrak{m}}}B = B^{c,\scriptsize\textcircled{\mathfrak{m}}}A = BB^{c,\scriptsize\textcircled{\mathfrak{m}}} = AB^{c,\scriptsize\textcircled{\mathfrak{m}}} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix},$$
i.e., $A \le^{c,\scriptsize\textcircled{\mathfrak{m}}} B$ and $B \le^{c,\scriptsize\textcircled{\mathfrak{m}}} A$, but $A \ne B$. This means the relation $\le^{c,\scriptsize\textcircled{\mathfrak{m}}}$ is not antisymmetric.
Example 6. 
Consider the matrices
A = 1 2 3 0 0 1 0 0 0 , B = 1 2 3 0 0 0 0 0 0 , C = 1 2 3 5 1 1 0 0 0 .
Then, we have
A C , E = 1 3 0 0 2 3 0 0 0 0 0 , B C , E = 1 12 0 0 1 6 0 0 1 4 0 0 .
What is more, we have
A C , E A = A C , E B = 1 3 2 3 1 2 3 4 3 2 0 0 0 ,
A A C , E = B A C , E = B B C , E = C B C , E = 1 0 0 0 0 0 0 0 0 ,
B C , E B = B C , E C = 1 12 1 6 1 4 1 6 1 3 1 2 1 4 1 2 3 4 ,
C A C , E = 1 0 0 1 0 0 0 0 0 ,
i.e., $A \le^{c,\scriptsize\textcircled{\mathfrak{m}}} B$ and $B \le^{c,\scriptsize\textcircled{\mathfrak{m}}} C$, but $AA^{c,\scriptsize\textcircled{\mathfrak{m}}} \ne CA^{c,\scriptsize\textcircled{\mathfrak{m}}}$, so $A \le^{c,\scriptsize\textcircled{\mathfrak{m}}} C$ does not hold. Thus, the relation $\le^{c,\scriptsize\textcircled{\mathfrak{m}}}$ is not transitive. In summary, $\le^{c,\scriptsize\textcircled{\mathfrak{m}}}$ is not a matrix partial order.

8. Conclusions

This paper introduces the m-CCE inverse, an extension of the CCE inverse to Minkowski space. Some of its properties, characterizations and representations, together with applications to solving systems of linear equations, are presented. In addition, we prove that the m-CCE relation is not a matrix partial order. Given the wide range of research fields and applications of generalized inverses, we believe that further exploration of the m-CCE inverse will attract attention and interest from researchers. Some possibilities for further research are the following:
  • Further properties, characterizations and representations of the m -CCE inverse.
  • We can further generalize the m -CCE inverse to tensors.
  • The perturbation analysis and iterative methods for the m -CCE inverse will be two topics worth studying.
  • It is equally interesting to discuss the m -CCE inverse for rectangular matrices.

Author Contributions

Methodology, X.L.; formal analysis, X.T.; writing—original draft, X.T.; writing—review & editing, X.L.; Funding acquisition, X.L. All authors have read and agreed to the published version of the manuscript.

Funding

The authors were supported by the National Natural Science Foundation of China (Grant Number 12061015) and Guangxi Natural Science Foundation (Grant Number 2024GXNSFAA010503).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Penrose, R. A generalized inverse for matrices. In Mathematical Proceedings of the Cambridge Philosophical Society; Cambridge University Press: Cambridge, UK, 1955; Volume 51, pp. 406–413. [Google Scholar] [CrossRef]
  2. Drazin, M.P. Pseudo-inverses in associative rings and semigroups. Am. Math. Mon. 1958, 65, 506–514. [Google Scholar] [CrossRef]
  3. Erdélyi, I. On the matrix equation Ax = λBx. J. Math. Anal. Appl. 1967, 17, 119–132. [Google Scholar] [CrossRef]
  4. Malik, B.S.; Thome, N. On a new generalized inverse for matrices of an arbitrary index. Appl. Math. Comput. 2014, 226, 575–580. [Google Scholar] [CrossRef]
  5. Baksalary, O.M.; Trenkler, G. Core inverse of matrices. Linear Multilinear Algebra 2010, 58, 681–697. [Google Scholar] [CrossRef]
  6. Manjunatha Prasad, K.; Mohana, K.S. Core-EP inverse. Linear Multilinear Algebra 2014, 62, 792–802. [Google Scholar] [CrossRef]
  7. Wang, H. Core-EP decomposition and its applications. Linear Algebra Its Appl. 2016, 508, 289–300. [Google Scholar] [CrossRef]
  8. Zuo, K.; Li, Y.; Luo, G. A new generalized inverse of matrices from core-EP decomposition. arXiv 2020, arXiv:2007.02364. [Google Scholar] [CrossRef]
  9. Renardy, M. Singular value decomposition in Minkowski space. Linear Algebra Its Appl. 1996, 236, 53–58. [Google Scholar] [CrossRef]
  10. Meenakshi, A.R. Generalized inverses of matrices in Minkowski space. In Proceedings of the National Seminar on Algebra and Its Applications; Annamalai University: Chidambaram, India, 2000; Volume 57, pp. 1–14. [Google Scholar]
  11. Wang, H.; Li, N.; Liu, X. The 𝔪-core inverse and its applications. Linear Multilinear Algebra 2021, 69, 2491–2509. [Google Scholar] [CrossRef]
  12. Wang, H.; Wu, H.; Liu, X. The 𝔪-core-EP inverse in Minkowski space. Bull. Iran. Math. Soc. 2022, 48, 2577–2601. [Google Scholar] [CrossRef]
  13. Wu, H.; Wang, H.; Jin, H. The 𝔪-WG inverse in Minkowski space. Filomat 2022, 36, 1125–1141. [Google Scholar] [CrossRef]
  14. Liu, X.; Zhang, K.; Jin, H. The 𝔪-WG° inverse in the Minkowski space. Open Math. 2023, 21, 20230145. [Google Scholar] [CrossRef]
  15. Wang, H.; Chen, J. Weak group inverse. Open Math. 2018, 16, 1218–1232. [Google Scholar] [CrossRef]
  16. Zhou, Y.; Chen, J.; Zhou, M. m-weak group inverses in a ring with involution. Rev. Real Acad. Cienc. Exactas Fís. Nat. Ser. Mat. 2021, 115, 1–13. [Google Scholar] [CrossRef]
  17. Gao, J.; Wang, Q.; Zuo, K. The 𝔪-DMP inverse in Minkowski space and its applications. Filomat 2024, 38, 1663–1679. [Google Scholar] [CrossRef]
  18. Mehdipour, M.; Salemi, A. On a new generalized inverse of matrices. Linear Multilinear Algebra 2018, 66, 1046–1053. [Google Scholar] [CrossRef]
  19. Wang, G.; Wei, Y.; Qiao, S.; Lin, P.; Chen, Y. Generalized Inverses: Theory and Computations; Springer: Singapore, 2018. [Google Scholar] [CrossRef]
  20. Wu, H.; Wei, H.; Wang, H.; Jin, H. Characterizations and representations for the 𝔪-core-EP inverse. Filomat 2024, 38, 4893–4910. [Google Scholar] [CrossRef]
  21. Ferreyra, E.D.; Levis, E.F.; Thome, N. Characterizations of k -commutative equalities for some outer generalized inverses. Linear Multilinear Algebra 2020, 68, 177–192. [Google Scholar] [CrossRef]
  22. Hartwig, R.E.; Spindelböck, K. Matrices for which A* and A commute. Linear Multilinear Algebra 1983, 14, 241–256. [Google Scholar] [CrossRef]
  23. Kheirandish, E.; Salemi, A.; Thome, N. Properties of core-EP matrices and binary relationships. Comput. Appl. Math. 2024, 43, 316. [Google Scholar] [CrossRef]
  24. Zhu, X.; Xiong, F. Research on Generalized Inverse of Matrix in Minkowski Space. Pure Math. 2023, 13, 2396. [Google Scholar] [CrossRef]
  25. Zuo, K.; Cvetković Ilić, D.; Cheng, Y. Different characterizations of DMP-inverse of matrices. Linear Multilinear Algebra 2020, 70, 1–8. [Google Scholar] [CrossRef]
  26. Kamaraj, K.; Sivakumar, K.C. Moore-Penrose inverse in an indefinite inner product space. J. Appl. Math. Comput. 2005, 19, 297–310. [Google Scholar] [CrossRef]
  27. Yuan, Y.; Zuo, K. Compute $\lim_{\lambda \to 0} X(\lambda I_p + YAX)^{-1}Y$ by the product singular value decomposition. Linear Multilinear Algebra 2016, 64, 269–278. [Google Scholar] [CrossRef]
  28. Kılıçman, A.; Zhour, A.Z. The representation and approximation for the weighted Minkowski inverse in Minkowski space. Math. Comput. Model. 2007, 47, 363–371. [Google Scholar] [CrossRef]
  29. Zekraoui, H.; Al-Zhour, Z.; Özel, C. Some New Algebraic and Topological Properties of the Minkowski Inverse in the Minkowski Space. Sci. World J. 2013, 2013, 765732. [Google Scholar] [CrossRef]