Article

The Minimum-Norm Least Squares Solutions to Quaternion Tensor Systems

1 Department of Mathematics, Shanghai University, Shanghai 200444, China
2 Collaborative Innovation Center for the Marine Artificial Intelligence, Shanghai 200444, China
3 Department of Mathematics, University of Manitoba, Winnipeg, MB R3T 2N2, Canada
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(7), 1460; https://doi.org/10.3390/sym14071460
Submission received: 15 June 2022 / Revised: 11 July 2022 / Accepted: 14 July 2022 / Published: 17 July 2022
(This article belongs to the Section Mathematics)

Abstract

In this paper, we investigate the minimum-norm least squares solution to the quaternion tensor system $\mathcal{A}_1 *_N \mathcal{X}_1 = \mathcal{C}_1$, $\mathcal{A}_1 *_N \mathcal{X}_2 + \mathcal{A}_2 *_N \mathcal{X}_3 = \mathcal{C}_2$, $\mathcal{E}_1 *_N \mathcal{X}_1 *_M \mathcal{F}_1 + \mathcal{E}_1 *_N \mathcal{X}_2 *_M \mathcal{F}_2 + \mathcal{E}_2 *_N \mathcal{X}_3 *_M \mathcal{F}_2 = \mathcal{D}$ by using the Moore–Penrose inverses of block tensors. As an application, we discuss the minimum-norm least squares reducible solutions to the quaternion tensor system $\mathcal{A} *_N \mathcal{X} = \mathcal{C}$, $\mathcal{E} *_N \mathcal{X} *_M \mathcal{F} = \mathcal{D}$. To illustrate the results, we present an algorithm and a numerical example.

1. Introduction

Research on classic systems of matrix equations goes back at least to Cecioni [1], who studied
$$AX = C, \quad XB = D. \qquad (1)$$
During the past few decades, a great deal of research has been devoted to various generalizations of system (1) over the complex field $\mathbb{C}$ and the quaternion algebra $\mathbb{H}$ (see, for example, [2,3]). As a natural extension of a matrix, an $N$-mode tensor $\mathcal{A} = (a_{i_1 \cdots i_N})$, $1 \le i_j \le I_j$, $j = 1, \ldots, N$, is a multidimensional array. In this paper, we consider solutions to some quaternion tensor systems.
Recall that the quaternions, introduced by Hamilton [4], form an associative, non-commutative division algebra over the real field $\mathbb{R}$, generally represented as
$$\mathbb{H} = \left\{ q_0 + q_1 \mathbf{i} + q_2 \mathbf{j} + q_3 \mathbf{k} \mid \mathbf{i}^2 = \mathbf{j}^2 = \mathbf{k}^2 = \mathbf{ijk} = -1,\ q_0, q_1, q_2, q_3 \in \mathbb{R} \right\}.$$
We refer the reader to [5,6,7,8] for quaternions and some applications. Nowadays, quaternion tensors are often used to solve problems in quantum mechanics, control theory, linear modeling, etc. [9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25].
It is well known that there are several definitions of tensor products. In this paper, we focus on the so-called “Einstein product”. Given two tensors $\mathcal{A} \in \mathbb{H}^{I_1 \times \cdots \times I_P \times K_1 \times \cdots \times K_N}$ and $\mathcal{B} \in \mathbb{H}^{K_1 \times \cdots \times K_N \times J_1 \times \cdots \times J_M}$, their Einstein product $\mathcal{A} *_N \mathcal{B} \in \mathbb{H}^{I_1 \times \cdots \times I_P \times J_1 \times \cdots \times J_M}$ [26] is defined as
$$(\mathcal{A} *_N \mathcal{B})_{i_1 \cdots i_P j_1 \cdots j_M} = \sum_{k_1, \ldots, k_N = 1}^{K_1, \ldots, K_N} a_{i_1 \cdots i_P k_1 \cdots k_N} b_{k_1 \cdots k_N j_1 \cdots j_M}.$$
For simplicity, we will denote the above summation by $\sum_{k_1 \cdots k_N}$. When $N = P = M = 1$, $\mathcal{A}$ and $\mathcal{B}$ are quaternion matrices, and their Einstein product is the usual matrix product.
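For real tensors, the Einstein product is exactly a contraction of the last $N$ modes of $\mathcal{A}$ against the first $N$ modes of $\mathcal{B}$, which NumPy's tensordot computes directly. The following sketch illustrates the definition above; the helper name einstein_product is ours, and quaternion entries are omitted for simplicity.

```python
import numpy as np

def einstein_product(A, B, N):
    """Einstein product A *_N B: contract the last N modes of A
    with the first N modes of B (real tensors for illustration)."""
    return np.tensordot(A, B, axes=N)

# When N = P = M = 1, the Einstein product is the usual matrix product.
A = np.random.rand(3, 4)
B = np.random.rand(4, 5)
assert np.allclose(einstein_product(A, B, 1), A @ B)

# A genuine tensor case: A in R^{2x3x4x5}, B in R^{4x5x6}, N = 2.
A = np.random.rand(2, 3, 4, 5)
B = np.random.rand(4, 5, 6)
assert einstein_product(A, B, 2).shape == (2, 3, 6)
```

The contraction indices here play the role of $k_1, \ldots, k_N$ in the summation above.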
Solving quaternion tensor systems has recently been explored. For example, He [27] gave a necessary and sufficient condition for the existence of, and an expression for, the general solution to the quaternion tensor system
$$\mathcal{A} *_N \mathcal{X} = \mathcal{C}, \quad \mathcal{X} *_N \mathcal{B} = \mathcal{D}, \qquad (2)$$
where $\mathcal{A}, \mathcal{C} \in \mathbb{H}^{J_1 \times \cdots \times J_N \times I_1 \times \cdots \times I_N}$ and $\mathcal{B}, \mathcal{D} \in \mathbb{H}^{I_1 \times \cdots \times I_N \times L_1 \times \cdots \times L_N}$ are given. Since reducible matrices arise in many areas, such as stochastic processes, random walks on graphs, and the connectivity of directed graphs [28,29,30], researchers have begun to investigate reducible solutions of systems of matrix and tensor equations. For example, Nie et al. [2] presented a necessary and sufficient condition for a system of matrix equations to have a reducible solution and gave a representation of such a solution when it exists. Xie and Wang [31] investigated the solvability condition and the expression of the reducible solution to the classical quaternion tensor equation
$$\mathcal{A} *_N \mathcal{X} *_N \mathcal{B} = \mathcal{C}, \qquad (3)$$
where $\mathcal{A} \in \mathbb{H}^{I_1 \times \cdots \times I_N \times L_1 \times \cdots \times L_N}$, $\mathcal{B} \in \mathbb{H}^{L_1 \times \cdots \times L_N \times J_1 \times \cdots \times J_N}$, and $\mathcal{C} \in \mathbb{H}^{I_1 \times \cdots \times I_N \times J_1 \times \cdots \times J_N}$ are given. In addition, they also derived the solvability condition and the expression of the general solution to the quaternion tensor equation
$$\mathcal{A}_1 *_N \mathcal{X}_1 *_M \mathcal{B}_1 + \mathcal{A}_1 *_N \mathcal{X}_2 *_M \mathcal{B}_2 + \mathcal{A}_2 *_N \mathcal{X}_3 *_M \mathcal{B}_2 = \mathcal{C}, \qquad (4)$$
where A 1 , B 1 , A 2 , B 2 , and C are given tensors with appropriate dimensions.
Motivated by the above results and the applications of tensor equations, in this paper we consider the minimum-norm least squares solution to the system of quaternion tensor equations
$$\begin{cases} \mathcal{A}_1 *_N \mathcal{X}_1 = \mathcal{C}_1, \\ \mathcal{A}_1 *_N \mathcal{X}_2 + \mathcal{A}_2 *_N \mathcal{X}_3 = \mathcal{C}_2, \\ \mathcal{E}_1 *_N \mathcal{X}_1 *_M \mathcal{F}_1 + \mathcal{E}_1 *_N \mathcal{X}_2 *_M \mathcal{F}_2 + \mathcal{E}_2 *_N \mathcal{X}_3 *_M \mathcal{F}_2 = \mathcal{D}, \end{cases} \qquad (5)$$
where $\mathcal{A}_1, \mathcal{C}_1, \mathcal{E}_1 \in \mathbb{H}^{I_1 \times \cdots \times I_N \times K_1 \times \cdots \times K_N}$, $\mathcal{A}_2, \mathcal{C}_2, \mathcal{E}_2 \in \mathbb{H}^{I_1 \times \cdots \times I_N \times Q_1 \times \cdots \times Q_N}$, $\mathcal{F}_1 \in \mathbb{H}^{K_1 \times \cdots \times K_N \times J_1 \times \cdots \times J_N}$, $\mathcal{F}_2 \in \mathbb{H}^{Q_1 \times \cdots \times Q_N \times J_1 \times \cdots \times J_N}$, and $\mathcal{D} \in \mathbb{H}^{I_1 \times \cdots \times I_N \times J_1 \times \cdots \times J_N}$ are given, as well as the minimum-norm least squares reducible solution to the system of quaternion tensor equations
$$\mathcal{A} *_N \mathcal{X} = \mathcal{C}, \quad \mathcal{E} *_N \mathcal{X} *_M \mathcal{F} = \mathcal{D}, \qquad (6)$$
where $\mathcal{A}, \mathcal{C}, \mathcal{E} \in \mathbb{H}^{I_1 \times \cdots \times I_N \times L_1 \times \cdots \times L_N}$, $\mathcal{F} \in \mathbb{H}^{L_1 \times \cdots \times L_N \times J_1 \times \cdots \times J_N}$, and $\mathcal{D} \in \mathbb{H}^{I_1 \times \cdots \times I_N \times J_1 \times \cdots \times J_N}$ are given.
As special cases, we also give the minimum-norm least squares reducible solution to Equation (3) and the minimum-norm least squares solution to Equation (4).
This paper is organized as follows. In Section 2, we introduce some notation and review some results that will be used throughout the paper. In Section 3, we discuss the forms of the M–P inverses of block tensors. In Section 4, we present the minimum-norm least squares solutions to systems (4) and (5); the least squares reducible solutions to systems (3) and (6) are also obtained. Finally, in Section 5, we develop an algorithm and work through a numerical example to show the accuracy of our results.

2. Preliminaries

We first recall some definitions and fix some notation. It is well known that the transpose of a tensor can be defined in many ways, according to the different partitions of its indices. To avoid potential confusion, we use different index letters to indicate the transpose of a tensor. That is, given a tensor $\mathcal{A} = (a_{i_1 \cdots i_N j_1 \cdots j_M}) \in \mathbb{H}^{I_1 \times \cdots \times I_N \times J_1 \times \cdots \times J_M}$, the conjugate transpose of $\mathcal{A}$ is hereafter defined as $\mathcal{A}^* = (b_{j_1 \cdots j_M i_1 \cdots i_N}) \in \mathbb{H}^{J_1 \times \cdots \times J_M \times I_1 \times \cdots \times I_N}$, where $b_{j_1 \cdots j_M i_1 \cdots i_N} = \bar{a}_{i_1 \cdots i_N j_1 \cdots j_M}$. In particular, when $\mathcal{A}$ is a real tensor, $\mathcal{A}^*$ is called the transpose of $\mathcal{A}$ and is simply denoted by $\mathcal{A}^T$. Throughout this paper, we use the Einstein product for the tensor product and $\mathcal{I}$ for the identity tensor of appropriate dimension. Thus, the inverse of a tensor $\mathcal{A}$ is a tensor $\mathcal{B}$ satisfying $\mathcal{A} *_N \mathcal{B} = \mathcal{B} *_N \mathcal{A} = \mathcal{I}$. Finally, the Frobenius norm of a tensor $\mathcal{A}$ is defined as $\|\mathcal{A}\|_F = \left( \sum_{i_1 \cdots i_N j_1 \cdots j_M} |a_{i_1 \cdots i_N j_1 \cdots j_M}|^2 \right)^{1/2}$.
The Moore–Penrose inverses of tensors via the Einstein product have been discussed over the complex number field and quaternions (see, for example, [22,31,32]).
Definition 1.
Let $\mathcal{A} \in \mathbb{H}^{I_1 \times \cdots \times I_N \times J_1 \times \cdots \times J_N}$. A tensor $\mathcal{X} \in \mathbb{H}^{J_1 \times \cdots \times J_N \times I_1 \times \cdots \times I_N}$ satisfying
(i) $\mathcal{A} *_N \mathcal{X} *_N \mathcal{A} = \mathcal{A}$;
(ii) $\mathcal{X} *_N \mathcal{A} *_N \mathcal{X} = \mathcal{X}$;
(iii) $(\mathcal{A} *_N \mathcal{X})^* = \mathcal{A} *_N \mathcal{X}$;
(iv) $(\mathcal{X} *_N \mathcal{A})^* = \mathcal{X} *_N \mathcal{A}$
is called a Moore–Penrose inverse of $\mathcal{A}$, abbreviated as the M–P inverse and denoted by $\mathcal{A}^\dagger$.
It has been shown that the M–P inverse of a tensor $\mathcal{A}$ exists and is unique. Two projectors associated with the M–P inverse are defined as $\mathcal{L}_\mathcal{A} = \mathcal{I} - \mathcal{A}^\dagger *_N \mathcal{A}$ and $\mathcal{R}_\mathcal{A} = \mathcal{I} - \mathcal{A} *_N \mathcal{A}^\dagger$. The following properties of quaternion tensors will be used in the subsequent sections.
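For real tensors, the M–P inverse can be computed and its four defining conditions checked numerically by unfolding the tensor into a matrix (the transformation θ defined later in this section), taking the ordinary matrix pseudoinverse, and folding back. The following is a sketch under that assumption; tensor_pinv is our name, and the quaternion case is omitted.

```python
import numpy as np

def tensor_pinv(A, N):
    """M-P inverse of A in R^{I1x...xINxJ1x...xJN}: unfold to a matrix
    (Fortran-order reshape, first index fastest), apply pinv, fold back."""
    I, J = A.shape[:N], A.shape[N:]
    mat = A.reshape(np.prod(I), np.prod(J), order='F')
    return np.linalg.pinv(mat).reshape(*J, *I, order='F')

ein = lambda X, Y: np.tensordot(X, Y, axes=2)  # Einstein product, N = 2

A = np.random.rand(2, 3, 4, 2)
X = tensor_pinv(A, 2)
# The four Penrose conditions of Definition 1 (real case):
assert np.allclose(ein(ein(A, X), A), A)
assert np.allclose(ein(ein(X, A), X), X)
AX, XA = ein(A, X), ein(X, A)
assert np.allclose(AX, AX.transpose(2, 3, 0, 1))  # (A *_N X)^T = A *_N X
assert np.allclose(XA, XA.transpose(2, 3, 0, 1))  # (X *_N A)^T = X *_N A
```

This works because the unfolding is a bijection that preserves the Einstein product, so the matrix Penrose conditions transfer to the tensor ones.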
Proposition 1
(Xie and Wang [31]). Let $\mathcal{A} \in \mathbb{H}^{I_1 \times \cdots \times I_N \times K_1 \times \cdots \times K_N}$. Then,
(1) $(\mathcal{A}^\dagger)^\dagger = \mathcal{A}$;
(2) $(\mathcal{A}^*)^\dagger = (\mathcal{A}^\dagger)^*$;
(3) $(\mathcal{A} *_N \mathcal{A}^\dagger)^* = \mathcal{A} *_N \mathcal{A}^\dagger$ and $(\mathcal{A}^\dagger *_N \mathcal{A})^* = \mathcal{A}^\dagger *_N \mathcal{A}$;
(4) $\mathcal{A}^\dagger *_N \mathcal{R}_\mathcal{A} = 0$ and $\mathcal{R}_\mathcal{A} *_N \mathcal{A} = 0$;
(5) $\mathcal{A} *_N \mathcal{L}_\mathcal{A} = 0$ and $\mathcal{L}_\mathcal{A} *_N \mathcal{A}^\dagger = 0$.
Just as block matrices do in matrix theory, block techniques for tensors play an important role in tensor theory. Next, we extend the concept of row block tensors given in [22] to the quaternion case.
Definition 2.
Let $\mathcal{A} \in \mathbb{H}^{I_1 \times \cdots \times I_N \times J_1 \times \cdots \times J_M}$ and $\mathcal{B} \in \mathbb{H}^{I_1 \times \cdots \times I_N \times K_1 \times \cdots \times K_M}$. The row block tensor consisting of $\mathcal{A}$ and $\mathcal{B}$ is defined as
$$(\mathcal{A} \ \ \mathcal{B}) \in \mathbb{H}^{\alpha_N \times \beta_1 \times \cdots \times \beta_M},$$
where $\alpha_N = I_1 \times \cdots \times I_N$ and $\beta_i = J_i + K_i$ ($i = 1, \ldots, M$), with entries
$$(\mathcal{A} \ \ \mathcal{B})_{i_1 \cdots i_N l_1 \cdots l_M} = \begin{cases} a_{i_1 \cdots i_N l_1 \cdots l_M}, & i_1 \cdots i_N \in I_1 \times \cdots \times I_N,\ l_1 \cdots l_M \in J_1 \times \cdots \times J_M, \\ b_{i_1 \cdots i_N (l_1 - J_1) \cdots (l_M - J_M)}, & i_1 \cdots i_N \in I_1 \times \cdots \times I_N,\ l_1 \cdots l_M \in \gamma_1 \times \cdots \times \gamma_M, \\ 0, & \text{otherwise}, \end{cases}$$
where $\gamma_i = \{J_i + 1, \ldots, J_i + K_i\}$ ($i = 1, \ldots, M$).
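For $M = 1$, the row block tensor above is simply the concatenation of $\mathcal{A}$ and $\mathcal{B}$ along the last mode (for $M > 1$, the remaining trailing modes are padded with zeros as in the definition). A small sketch, with helper names of our choosing:

```python
import numpy as np

def row_block(A, B):
    """Row block tensor (A  B) for the case M = 1: A and B share the
    leading modes I1 x ... x IN and are stacked along the last mode."""
    return np.concatenate([A, B], axis=-1)

A = np.random.rand(2, 3, 4)   # I1 x I2 x J1
B = np.random.rand(2, 3, 5)   # I1 x I2 x K1
C = row_block(A, B)
assert C.shape == (2, 3, 9)

# Multiplying a row block from the left distributes over the blocks,
# in the spirit of Proposition 2 below (illustration only):
G = np.random.rand(6, 2, 3)   # S1 x I1 x I2
left = np.tensordot(G, C, axes=2)
right = np.concatenate([np.tensordot(G, A, axes=2),
                        np.tensordot(G, B, axes=2)], axis=-1)
assert np.allclose(left, right)
```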
The following properties of block tensors are presented in [22] for complex tensors. It is easy to check that they also hold for quaternion tensors.
Proposition 2.
The following equalities hold:
(1) $\begin{pmatrix} \mathcal{A}_1 & \mathcal{B}_1 \\ \mathcal{A}_2 & \mathcal{B}_2 \end{pmatrix} *_M \begin{pmatrix} \mathcal{C} \\ \mathcal{D} \end{pmatrix} = \begin{pmatrix} \mathcal{A}_1 *_M \mathcal{C} + \mathcal{B}_1 *_M \mathcal{D} \\ \mathcal{A}_2 *_M \mathcal{C} + \mathcal{B}_2 *_M \mathcal{D} \end{pmatrix} \in \mathbb{H}^{\rho_1 \times \cdots \times \rho_N \times \alpha_N}$;
(2) $(\mathcal{G} \ \ \mathcal{H}) *_N \begin{pmatrix} \mathcal{A}_1 & \mathcal{B}_1 \\ \mathcal{A}_2 & \mathcal{B}_2 \end{pmatrix} = (\mathcal{G} *_N \mathcal{A}_1 + \mathcal{H} *_N \mathcal{A}_2 \ \ \ \mathcal{G} *_N \mathcal{B}_1 + \mathcal{H} *_N \mathcal{B}_2)$, which belongs to $\mathbb{H}^{S_1 \times \cdots \times S_N \times \beta_1 \times \cdots \times \beta_M}$, where $\mathcal{C} \in \mathbb{H}^{J_1 \times \cdots \times J_M \times I_1 \times \cdots \times I_N}$, $\mathcal{D} \in \mathbb{H}^{K_1 \times \cdots \times K_M \times I_1 \times \cdots \times I_N}$, $\mathcal{G} \in \mathbb{H}^{S_1 \times \cdots \times S_N \times I_1 \times \cdots \times I_N}$, $\mathcal{H} \in \mathbb{H}^{S_1 \times \cdots \times S_N \times L_1 \times \cdots \times L_N}$, $\alpha_N = I_1 \times \cdots \times I_N$, $\rho_i = I_i + L_i$ ($i = 1, \ldots, N$), and $\beta_j = J_j + K_j$ ($j = 1, \ldots, M$).
Using the above notation for block tensors and their properties, we can give the complex and real representations of a quaternion tensor as follows.
Let $\mathcal{A} = \mathcal{A}_1 + \mathcal{A}_2 \mathbf{j} \in \mathbb{H}^{I_1 \times \cdots \times I_N \times J_1 \times \cdots \times J_M}$ with $\mathcal{A}_1, \mathcal{A}_2 \in \mathbb{C}^{I_1 \times \cdots \times I_N \times J_1 \times \cdots \times J_M}$. The complex vector representation of $\mathcal{A}$ is $\Phi_\mathcal{A} = \begin{pmatrix} \mathcal{A}_1 \\ \mathcal{A}_2 \end{pmatrix}$, and its complex representation is $f(\mathcal{A}) = \begin{pmatrix} \mathcal{A}_1 & \mathcal{A}_2 \\ -\bar{\mathcal{A}}_2 & \bar{\mathcal{A}}_1 \end{pmatrix} \in \mathbb{C}^{2I_1 \times \cdots \times 2I_N \times 2J_1 \times \cdots \times 2J_M}$. Furthermore, let the real part and the imaginary part of a complex tensor $\mathcal{P}$ be denoted by $\operatorname{Re}(\mathcal{P})$ and $\operatorname{Im}(\mathcal{P})$, respectively. Then, $\mathcal{A}$ can be expressed as $\mathcal{A} = \operatorname{Re}(\mathcal{A}_1) + \operatorname{Im}(\mathcal{A}_1)\mathbf{i} + \operatorname{Re}(\mathcal{A}_2)\mathbf{j} + \operatorname{Im}(\mathcal{A}_2)\mathbf{k}$, and the real vector representation of $\mathcal{A}$ is the column block tensor with blocks $\operatorname{Re}(\mathcal{A}_1)$, $\operatorname{Im}(\mathcal{A}_1)$, $\operatorname{Re}(\mathcal{A}_2)$, $\operatorname{Im}(\mathcal{A}_2)$.
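At the scalar level, the complex representation sends $q = q_1 + q_2 \mathbf{j}$ to a $2 \times 2$ complex matrix and is multiplicative, which is what makes the tensor-level reductions of this paper work; the block $-\bar{q}_2$ in the second row follows the standard convention. A minimal entrywise sketch (the function name f mirrors the text; scalars stand in for tensors):

```python
import numpy as np

def f(q1, q2):
    """Complex representation of the quaternion q = q1 + q2*j
    (q1, q2 complex scalars; the tensor case applies f blockwise)."""
    return np.array([[q1, q2],
                     [-np.conj(q2), np.conj(q1)]])

# Quaternion product, using j*z = conj(z)*j and j^2 = -1:
# (a1 + a2 j)(b1 + b2 j) = (a1 b1 - a2 conj(b2)) + (a1 b2 + a2 conj(b1)) j
a1, a2 = 1 + 2j, 3 - 1j
b1, b2 = -2 + 1j, 0.5 + 0.5j
p1 = a1 * b1 - a2 * np.conj(b2)
p2 = a1 * b2 + a2 * np.conj(b1)

# f is multiplicative: f(ab) = f(a) f(b).
assert np.allclose(f(p1, p2), f(a1, a2) @ f(b1, b2))
```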
Different papers might use different orderings of the columns for a tensor unfolding. In this paper, we will use the transformation presented in Brazell et al. [9] and He et al. [32], which transforms a quaternion tensor into a quaternion matrix.
Definition 3.
The unfolding transformation
$$\theta : \mathbb{H}^{I_1 \times \cdots \times I_N \times J_1 \times \cdots \times J_N} \to M_{(I_1 I_2 \cdots I_N) \times (J_1 J_2 \cdots J_N)}(\mathbb{H}), \quad \mathcal{A} \mapsto A,$$
is defined by
$$(\mathcal{A})_{i_1 i_2 \cdots i_N j_1 j_2 \cdots j_N} \mapsto \theta(\mathcal{A})\left[ i_1 + \sum_{k=2}^{N}(i_k - 1)\prod_{s=1}^{k-1} I_s \right]\left[ j_1 + \sum_{k=2}^{N}(j_k - 1)\prod_{s=1}^{k-1} J_s \right].$$
Using the above transformation θ, we give the definition of a permutation tensor, which is similar to [31] (Definition 4).
Definition 4.
A tensor $\mathcal{K} \in \mathbb{H}^{I_1 \times \cdots \times I_N \times I_1 \times \cdots \times I_N}$ is called a θ-permutation tensor (simply called a “permutation tensor” hereafter if no confusion arises) if the unfolding $\theta(\mathcal{K})$ of $\mathcal{K}$ is a permutation matrix.
As [32] shows, θ is a one-to-one correspondence and preserves the Einstein product, that is, $\theta(\mathcal{A} *_N \mathcal{B}) = \theta(\mathcal{A})\theta(\mathcal{B})$. Therefore, a permutation tensor $\mathcal{K}$ is invertible, since $\theta(\mathcal{K})$ is invertible.
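In 0-based terms, the index formula in Definition 3 is a Fortran-order reshape that groups the first $N$ modes into a row index and the last $N$ modes into a column index; the product-preserving property just stated can then be checked directly. A NumPy sketch for real tensors (theta is our name for the helper):

```python
import numpy as np

def theta(A, N):
    """Unfold A in R^{I1x...xINxJ1x...xJN} into a matrix, first index
    fastest, as in Definition 3 (0-based Fortran-order reshape)."""
    I, J = A.shape[:N], A.shape[N:]
    return A.reshape(np.prod(I), np.prod(J), order='F')

# theta preserves the Einstein product: theta(A *_N B) = theta(A) theta(B).
A = np.random.rand(2, 3, 4, 5)
B = np.random.rand(4, 5, 2, 3)
AB = np.tensordot(A, B, axes=2)           # A *_2 B
assert np.allclose(theta(AB, 2), theta(A, 2) @ theta(B, 2))
```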
Definition 5.
A tensor $\mathcal{A} \in \mathbb{H}^{\alpha_1 \times \cdots \times \alpha_N \times \alpha_1 \times \cdots \times \alpha_N}$ is said to be $\mathcal{K}$-reducible if there exists a permutation tensor $\mathcal{K}$ such that $\mathcal{A}$ is permutationally similar to a triangular (upper or lower) block tensor, that is,
$$\mathcal{A} = \mathcal{K} *_N \begin{pmatrix} \mathcal{B}_1 & \mathcal{B}_2 \\ 0 & \mathcal{B}_3 \end{pmatrix} *_N \mathcal{K}^{-1},$$
where $\mathcal{B}_1 \in \mathbb{H}^{I_1 \times \cdots \times I_N \times I_1 \times \cdots \times I_N}$, $\mathcal{B}_2 \in \mathbb{H}^{I_1 \times \cdots \times I_N \times J_1 \times \cdots \times J_N}$, $\mathcal{B}_3 \in \mathbb{H}^{J_1 \times \cdots \times J_N \times J_1 \times \cdots \times J_N}$, and $\mathcal{K} \in \mathbb{H}^{\alpha_1 \times \cdots \times \alpha_N \times \alpha_1 \times \cdots \times \alpha_N}$ with $\alpha_i = I_i + J_i$ ($i = 1, \ldots, N$).

3. The M–P Inverses of Block Tensors

The solutions of tensor equations in the next section will be given in terms of block tensors. In this section, we discuss the M–P inverses of block tensors. First, we recall the following lemma from [33] regarding the vec operator acting on the complex vector representation Φ.
Lemma 1.
Let A = A 1 + A 2 j H I 1 × × I N × J 1 × × J N ,   B = B 1 + B 2 j H J 1 × × J N × L 1 × × L M , C = C 1 + C 2 j H L 1 × × L M × K 1 × × K M . Set
K 4 γ N = I γ N × γ N i I γ N × γ N 0 0 0 0 I γ N × γ N i I γ N × γ N I γ N × γ N i I γ N × γ N 0 0 0 0 I γ N × γ N i I γ N × γ N ,
where
γ N = J 1 × × J N , I γ N × γ N = I 0 0 0 I 0 0 0 I , I R L 1 × × L M × L 1 × × L M .
Then, we have
vec Φ A N B M C = A 1 f ( C ) T A 2 f ( C j ) M K K 4 γ N M vec ( B ) ,
where A f ( C ) = A C 1 A C 2 A C ¯ 2 A C ¯ 1 , and f ( C j ) = C 2 C 1 C 1 ¯ C 2 ¯ .
We have obtained the following lemma about the M–P inverse of a column block tensor in [33] and will use it to prove Proposition 3.
Lemma 2.
Let $\mathcal{T}_1, \mathcal{T}_2 \in \mathbb{R}^{I_1 \times \cdots \times I_N \times J_1 \times \cdots \times J_N}$ and $\mathcal{A} = \begin{pmatrix} \mathcal{T}_1 \\ \mathcal{T}_2 \end{pmatrix}$. Then,
$$\mathcal{A}^\dagger = \begin{pmatrix} \mathcal{T}_1 \\ \mathcal{T}_2 \end{pmatrix}^\dagger = \begin{pmatrix} \mathcal{T}_1^\dagger - \mathcal{H}^T *_N \mathcal{T}_2 *_N \mathcal{T}_1^\dagger & \mathcal{H}^T \end{pmatrix},$$
where
$$\mathcal{H} = \mathcal{R}^\dagger + (\mathcal{I} - \mathcal{R}^\dagger *_N \mathcal{R}) *_N \mathcal{Z} *_N \mathcal{T}_2 *_N \mathcal{T}_1^\dagger *_N (\mathcal{T}_1^\dagger)^T *_N (\mathcal{I} - \mathcal{T}_2^T *_N \mathcal{R}^\dagger),$$
$$\mathcal{R} = (\mathcal{I} - \mathcal{T}_1^\dagger *_N \mathcal{T}_1) *_N \mathcal{T}_2^T, \quad \mathcal{Z} = \left( \mathcal{I} + (\mathcal{I} - \mathcal{R}^\dagger *_N \mathcal{R}) *_N \mathcal{T}_2 *_N \mathcal{T}_1^\dagger *_N (\mathcal{T}_1^\dagger)^T *_N \mathcal{T}_2^T *_N (\mathcal{I} - \mathcal{R}^\dagger *_N \mathcal{R}) \right)^{-1}.$$
In the following proposition, we give the general form of the M–P inverse of a tensor with three sub-blocks; we will apply it to describe the solutions in the next section. One technique in the proof is the quaternion tensor SVD ([32]): for a tensor $\mathcal{A} \in \mathbb{H}^{I_1 \times \cdots \times I_N \times J_1 \times \cdots \times J_N}$, the singular value decomposition of $\mathcal{A}$ has the form
$$\mathcal{A} = \mathcal{U} *_N \mathcal{B} *_N \mathcal{V}^*,$$
where $\mathcal{B} \in \mathbb{R}^{I_1 \times \cdots \times I_N \times J_1 \times \cdots \times J_N}$ is diagonal, and $\mathcal{U} \in \mathbb{H}^{I_1 \times \cdots \times I_N \times I_1 \times \cdots \times I_N}$ and $\mathcal{V} \in \mathbb{H}^{J_1 \times \cdots \times J_N \times J_1 \times \cdots \times J_N}$ are unitary tensors, i.e., $\mathcal{U}^* *_N \mathcal{U} = \mathcal{I}$ and $\mathcal{V}^* *_N \mathcal{V} = \mathcal{I}$.
Proposition 3.
Given $\mathcal{P}_1 \in \mathbb{R}^{I_1 \times \cdots \times I_N \times J_1 \times \cdots \times J_N}$, $\mathcal{Q}_1 \in \mathbb{R}^{I_1 \times \cdots \times I_N \times K_1 \times \cdots \times K_N}$, and $\mathcal{Y}_1 \in \mathbb{R}^{I_1 \times \cdots \times I_N \times L_1 \times \cdots \times L_N}$, let $\mathcal{A} = (\mathcal{P}_1 \ \ \mathcal{Q}_1 \ \ \mathcal{Y}_1)$. Then, the M–P inverse of the row block tensor $\mathcal{A}$ can be expressed as
$$\mathcal{A}^\dagger = (\mathcal{P}_1 \ \ \mathcal{Q}_1 \ \ \mathcal{Y}_1)^\dagger = \begin{pmatrix} \mathcal{P}_1^\dagger *_N (\mathcal{I} - \mathcal{Q}_1 *_N \mathcal{H}_1) *_N (\mathcal{I} - \mathcal{Y}_1 *_N \mathcal{H}_2) \\ \mathcal{H}_1 *_N (\mathcal{I} - \mathcal{Y}_1 *_N \mathcal{H}_2) \\ \mathcal{H}_2 \end{pmatrix}, \qquad (7)$$
where
$$\mathcal{H}_1 = \mathcal{R}_1^\dagger + (\mathcal{I} - \mathcal{R}_1^\dagger *_N \mathcal{R}_1) *_N \mathcal{Z}_1 *_N \mathcal{Q}_1^T *_N (\mathcal{P}_1^\dagger)^T *_N \mathcal{P}_1^\dagger *_N (\mathcal{I} - \mathcal{Q}_1 *_N \mathcal{R}_1^\dagger),$$
$$\mathcal{H}_2 = \mathcal{R}_2^\dagger + (\mathcal{I} - \mathcal{R}_2^\dagger *_N \mathcal{R}_2) *_N \mathcal{Z}_2 *_N \mathcal{Y}_1^T *_N (\mathcal{S}_1^\dagger)^T *_N \mathcal{S}_1^\dagger *_N (\mathcal{I} - \mathcal{Y}_1 *_N \mathcal{R}_2^\dagger),$$
$$\mathcal{R}_1 = (\mathcal{I} - \mathcal{P}_1 *_N \mathcal{P}_1^\dagger) *_N \mathcal{Q}_1, \quad \mathcal{R}_2 = (\mathcal{I} - \mathcal{S}_1 *_N \mathcal{S}_1^\dagger) *_N \mathcal{Y}_1,$$
$$\mathcal{S}_1^\dagger = \begin{pmatrix} \mathcal{P}_1^\dagger - \mathcal{P}_1^\dagger *_N \mathcal{Q}_1 *_N \mathcal{H}_1 \\ \mathcal{H}_1 \end{pmatrix} \quad \text{for } \mathcal{S}_1 = (\mathcal{P}_1 \ \ \mathcal{Q}_1),$$
$$\mathcal{Z}_1 = \left( \mathcal{I} + (\mathcal{I} - \mathcal{R}_1^\dagger *_N \mathcal{R}_1) *_N \mathcal{Q}_1^T *_N (\mathcal{P}_1^\dagger)^T *_N \mathcal{P}_1^\dagger *_N \mathcal{Q}_1 *_N (\mathcal{I} - \mathcal{R}_1^\dagger *_N \mathcal{R}_1) \right)^{-1},$$
$$\mathcal{Z}_2 = \left( \mathcal{I} + (\mathcal{I} - \mathcal{R}_2^\dagger *_N \mathcal{R}_2) *_N \mathcal{Y}_1^T *_N (\mathcal{S}_1^\dagger)^T *_N \mathcal{S}_1^\dagger *_N \mathcal{Y}_1 *_N (\mathcal{I} - \mathcal{R}_2^\dagger *_N \mathcal{R}_2) \right)^{-1}. \qquad (8)$$
Proof of Proposition 3.
Let $\mathcal{X}$ be the right-hand side of (7). We only need to show that $\mathcal{X}$ satisfies the conditions in Definition 1.
We start by proving the existence of $\mathcal{Z}_1$ and $\mathcal{Z}_2$ in (8). Let $\mathcal{K}_1 = \mathcal{P}_1^\dagger *_N \mathcal{Q}_1 *_N (\mathcal{I} - \mathcal{R}_1^\dagger *_N \mathcal{R}_1)$ and $\mathcal{K}_2 = \mathcal{S}_1^\dagger *_N \mathcal{Y}_1 *_N (\mathcal{I} - \mathcal{R}_2^\dagger *_N \mathcal{R}_2)$, so that $\mathcal{Z}_i = (\mathcal{I} + \mathcal{K}_i^T *_N \mathcal{K}_i)^{-1}$. Consider the SVD $\mathcal{K}_1 = \mathcal{U}_1 *_N \mathcal{M}_1 *_N \mathcal{V}_1^T$, where $\mathcal{U}_1 \in \mathbb{R}^{J_1 \times \cdots \times J_N \times J_1 \times \cdots \times J_N}$ and $\mathcal{V}_1 \in \mathbb{R}^{K_1 \times \cdots \times K_N \times K_1 \times \cdots \times K_N}$ are unitary tensors and $\mathcal{M}_1 \in \mathbb{R}^{J_1 \times \cdots \times J_N \times K_1 \times \cdots \times K_N}$ is diagonal. We can write $\mathcal{I} + \mathcal{K}_1^T *_N \mathcal{K}_1 = \mathcal{V}_1 *_N (\mathcal{I} + \mathcal{M}_1^T *_N \mathcal{M}_1) *_N \mathcal{V}_1^T$. Since the diagonal tensor $\mathcal{I} + \mathcal{M}_1^T *_N \mathcal{M}_1$ has nonzero diagonal entries, it is invertible, and hence $\mathcal{Z}_1$ exists. Similarly, $\mathcal{Z}_2$ exists. Next, we prove that $\mathcal{X}$ satisfies the four conditions in Definition 1.
We first prove that $(\mathcal{A} *_N \mathcal{X})^T = \mathcal{A} *_N \mathcal{X}$. It is easy to verify that $\mathcal{S}_1 *_N \mathcal{S}_1^\dagger = \mathcal{P}_1 *_N \mathcal{P}_1^\dagger + \mathcal{R}_1 *_N \mathcal{R}_1^\dagger$, and so $\mathcal{R}_2 = (\mathcal{I} - \mathcal{P}_1 *_N \mathcal{P}_1^\dagger - \mathcal{R}_1 *_N \mathcal{R}_1^\dagger) *_N \mathcal{Y}_1$. For any tensor $\mathcal{Q}$, by Proposition 1, we have
$$(\mathcal{I} - \mathcal{Q} *_N \mathcal{Q}^\dagger) *_N \mathcal{Q} = 0, \quad \mathcal{Q} *_N (\mathcal{I} - \mathcal{Q}^\dagger *_N \mathcal{Q}) = 0,$$
and it follows that
$$\mathcal{R}_1 *_N \mathcal{H}_1 = \mathcal{R}_1 *_N \mathcal{R}_1^\dagger, \quad \mathcal{R}_2 *_N \mathcal{H}_2 = \mathcal{R}_2 *_N \mathcal{R}_2^\dagger.$$
Therefore, by Proposition 2, we have
$$\mathcal{A} *_N \mathcal{X} = \mathcal{P}_1 *_N \mathcal{P}_1^\dagger *_N (\mathcal{I} - \mathcal{Q}_1 *_N \mathcal{H}_1) *_N (\mathcal{I} - \mathcal{Y}_1 *_N \mathcal{H}_2) + \mathcal{Q}_1 *_N \mathcal{H}_1 *_N (\mathcal{I} - \mathcal{Y}_1 *_N \mathcal{H}_2) + \mathcal{Y}_1 *_N \mathcal{H}_2$$
$$= \mathcal{P}_1 *_N \mathcal{P}_1^\dagger + (\mathcal{I} - \mathcal{P}_1 *_N \mathcal{P}_1^\dagger) *_N \mathcal{Q}_1 *_N \mathcal{H}_1 + \left( \mathcal{I} - \mathcal{P}_1 *_N \mathcal{P}_1^\dagger - (\mathcal{I} - \mathcal{P}_1 *_N \mathcal{P}_1^\dagger) *_N \mathcal{Q}_1 *_N \mathcal{H}_1 \right) *_N \mathcal{Y}_1 *_N \mathcal{H}_2$$
$$= \mathcal{P}_1 *_N \mathcal{P}_1^\dagger + \mathcal{R}_1 *_N \mathcal{R}_1^\dagger + \mathcal{R}_2 *_N \mathcal{R}_2^\dagger.$$
Since $\mathcal{P}_1 *_N \mathcal{P}_1^\dagger$, $\mathcal{R}_1 *_N \mathcal{R}_1^\dagger$, and $\mathcal{R}_2 *_N \mathcal{R}_2^\dagger$ are symmetric, we have $(\mathcal{A} *_N \mathcal{X})^T = \mathcal{A} *_N \mathcal{X}$.
Next, we prove that $\mathcal{A} *_N \mathcal{X} *_N \mathcal{A} = \mathcal{A}$. For any tensor $\mathcal{Q}$, it is obvious that $\mathcal{I} - \mathcal{Q} *_N \mathcal{Q}^\dagger$ is idempotent. Then, we have
$$\mathcal{R}_1^\dagger *_N \mathcal{P}_1 = 0, \quad \mathcal{R}_2^\dagger *_N \mathcal{S}_1 = 0, \quad \mathcal{R}_2^\dagger *_N \mathcal{Q}_1 = 0, \quad \mathcal{R}_1^\dagger *_N \mathcal{Q}_1 = \mathcal{R}_1^\dagger *_N \mathcal{R}_1.$$
Based on this, we can derive that $\mathcal{I} - \mathcal{P}_1 *_N \mathcal{P}_1^\dagger - \mathcal{R}_1 *_N \mathcal{R}_1^\dagger$ is idempotent and $\mathcal{R}_2^\dagger *_N \mathcal{Y}_1 = \mathcal{R}_2^\dagger *_N \mathcal{R}_2$. Thus,
$$\mathcal{A} *_N \mathcal{X} *_N \mathcal{A} = (\mathcal{P}_1 *_N \mathcal{P}_1^\dagger + \mathcal{R}_1 *_N \mathcal{R}_1^\dagger + \mathcal{R}_2 *_N \mathcal{R}_2^\dagger) *_N (\mathcal{P}_1 \ \ \mathcal{Q}_1 \ \ \mathcal{Y}_1)$$
$$= \left( \mathcal{P}_1 \quad \mathcal{P}_1 *_N \mathcal{P}_1^\dagger *_N \mathcal{Q}_1 + \mathcal{R}_1 \quad (\mathcal{P}_1 *_N \mathcal{P}_1^\dagger + \mathcal{R}_1 *_N \mathcal{R}_1^\dagger) *_N \mathcal{Y}_1 + \mathcal{R}_2 \right) = (\mathcal{P}_1 \ \ \mathcal{Q}_1 \ \ \mathcal{Y}_1) = \mathcal{A}.$$
To prove that $\mathcal{X} *_N \mathcal{A} *_N \mathcal{X} = \mathcal{X}$, we set $\mathcal{B} = \mathcal{P}_1 *_N \mathcal{P}_1^\dagger + \mathcal{R}_1 *_N \mathcal{R}_1^\dagger + \mathcal{R}_2 *_N \mathcal{R}_2^\dagger$. It is easy to verify that
$$\mathcal{R}_1^\dagger *_N \mathcal{R}_2 = 0, \quad \mathcal{P}_1^\dagger *_N \mathcal{B} = \mathcal{P}_1^\dagger, \quad \mathcal{R}_1^\dagger *_N \mathcal{B} = \mathcal{R}_1^\dagger, \quad \mathcal{R}_2^\dagger *_N \mathcal{B} = \mathcal{R}_2^\dagger, \quad \mathcal{H}_1 *_N \mathcal{B} = \mathcal{H}_1, \quad \mathcal{S}_1^\dagger *_N \mathcal{B} = \mathcal{S}_1^\dagger, \quad \mathcal{H}_2 *_N \mathcal{B} = \mathcal{H}_2.$$
It follows from Proposition 2 that
$$\mathcal{X} *_N \mathcal{A} *_N \mathcal{X} = \begin{pmatrix} \mathcal{P}_1^\dagger *_N (\mathcal{I} - \mathcal{Q}_1 *_N \mathcal{H}_1) *_N (\mathcal{I} - \mathcal{Y}_1 *_N \mathcal{H}_2) \\ \mathcal{H}_1 *_N (\mathcal{I} - \mathcal{Y}_1 *_N \mathcal{H}_2) \\ \mathcal{H}_2 \end{pmatrix} *_N \mathcal{B} = \begin{pmatrix} \mathcal{P}_1^\dagger *_N (\mathcal{I} - \mathcal{Q}_1 *_N \mathcal{H}_1) *_N (\mathcal{B} - \mathcal{Y}_1 *_N \mathcal{H}_2) \\ \mathcal{H}_1 *_N (\mathcal{B} - \mathcal{Y}_1 *_N \mathcal{H}_2) \\ \mathcal{H}_2 \end{pmatrix} = \begin{pmatrix} \mathcal{P}_1^\dagger *_N (\mathcal{I} - \mathcal{Q}_1 *_N \mathcal{H}_1) *_N (\mathcal{I} - \mathcal{Y}_1 *_N \mathcal{H}_2) \\ \mathcal{H}_1 *_N (\mathcal{I} - \mathcal{Y}_1 *_N \mathcal{H}_2) \\ \mathcal{H}_2 \end{pmatrix} = \mathcal{X}.$$
Finally, we prove that $(\mathcal{X} *_N \mathcal{A})^T = \mathcal{X} *_N \mathcal{A}$. By Proposition 2, we know that $\mathcal{X} *_N \mathcal{A}$ can be written as
$$\mathcal{X} *_N \mathcal{A} = \begin{pmatrix} \mathcal{S}_1^\dagger *_N (\mathcal{I} - \mathcal{Y}_1 *_N \mathcal{H}_2) \\ \mathcal{H}_2 \end{pmatrix} *_N (\mathcal{S}_1 \ \ \mathcal{Y}_1) = \begin{pmatrix} \mathcal{S}_1^\dagger *_N (\mathcal{I} - \mathcal{Y}_1 *_N \mathcal{H}_2) *_N \mathcal{S}_1 & \mathcal{S}_1^\dagger *_N (\mathcal{I} - \mathcal{Y}_1 *_N \mathcal{H}_2) *_N \mathcal{Y}_1 \\ \mathcal{H}_2 *_N \mathcal{S}_1 & \mathcal{H}_2 *_N \mathcal{Y}_1 \end{pmatrix} \triangleq \begin{pmatrix} a_1 & b_1 \\ a_2 & b_2 \end{pmatrix}.$$
Thus, it suffices to prove that $a_1$ and $b_2$ are symmetric and that $b_1 = a_2^T$. Set
$$\mathcal{U}_2 = \mathcal{S}_1^\dagger *_N \mathcal{Y}_1 *_N (\mathcal{I} - \mathcal{R}_2^\dagger *_N \mathcal{R}_2), \quad \mathcal{V}_2 = (\mathcal{I} - \mathcal{R}_2^\dagger *_N \mathcal{R}_2) *_N \mathcal{Z}_2 *_N \mathcal{Y}_1^T *_N (\mathcal{S}_1^\dagger)^T = \mathcal{Z}_2 *_N \mathcal{U}_2^T.$$
Here, $\mathcal{Z}_2 = (\mathcal{I} + \mathcal{U}_2^T *_N \mathcal{U}_2)^{-1} = \mathcal{Z}_2^T$. Then,
$$\mathcal{P}_1^T *_N \mathcal{P}_1 *_N \mathcal{P}_1^\dagger = \mathcal{P}_1^T, \quad \mathcal{S}_1^T *_N \mathcal{S}_1 *_N \mathcal{S}_1^\dagger = \mathcal{S}_1^T, \qquad (9)$$
$$\mathcal{Z}_2^{-1} *_N (\mathcal{I} - \mathcal{R}_2^\dagger *_N \mathcal{R}_2) = (\mathcal{I} - \mathcal{R}_2^\dagger *_N \mathcal{R}_2) *_N \mathcal{Z}_2^{-1}, \qquad (10)$$
and
$$\mathcal{V}_2 *_N \mathcal{U}_2 = (\mathcal{I} - \mathcal{R}_2^\dagger *_N \mathcal{R}_2) *_N \mathcal{Z}_2 *_N \mathcal{Y}_1^T *_N (\mathcal{S}_1^\dagger)^T *_N \mathcal{S}_1^\dagger *_N \mathcal{Y}_1 *_N (\mathcal{I} - \mathcal{R}_2^\dagger *_N \mathcal{R}_2) = \mathcal{Z}_2 *_N \mathcal{U}_2^T *_N \mathcal{U}_2 = \mathcal{I} - \mathcal{Z}_2. \qquad (11)$$
Because $\mathcal{I} - \mathcal{R}_2^\dagger *_N \mathcal{R}_2$ is idempotent, using (9) and (10), we obtain
$$a_1 = \mathcal{S}_1^\dagger *_N (\mathcal{I} - \mathcal{Y}_1 *_N \mathcal{H}_2) *_N \mathcal{S}_1 = \mathcal{S}_1^\dagger *_N \mathcal{S}_1 - \mathcal{S}_1^\dagger *_N \mathcal{Y}_1 *_N \mathcal{H}_2 *_N \mathcal{S}_1 = \mathcal{S}_1^\dagger *_N \mathcal{S}_1 - \mathcal{S}_1^\dagger *_N \mathcal{Y}_1 *_N (\mathcal{I} - \mathcal{R}_2^\dagger *_N \mathcal{R}_2) *_N \mathcal{Z}_2 *_N \mathcal{Y}_1^T *_N (\mathcal{S}_1^\dagger)^T *_N \mathcal{S}_1^\dagger *_N \mathcal{S}_1 = \mathcal{S}_1^\dagger *_N \mathcal{S}_1 - \mathcal{U}_2 *_N \mathcal{Z}_2 *_N \mathcal{U}_2^T.$$
Similarly, we have
$$a_2 = \mathcal{H}_2 *_N \mathcal{S}_1 = \mathcal{R}_2^\dagger *_N \mathcal{S}_1 + \mathcal{V}_2 *_N \mathcal{S}_1^\dagger *_N (\mathcal{S}_1 - \mathcal{Y}_1 *_N \mathcal{R}_2^\dagger *_N \mathcal{S}_1) = \mathcal{V}_2.$$
Using $\mathcal{R}_2^\dagger *_N \mathcal{Y}_1 = \mathcal{R}_2^\dagger *_N \mathcal{R}_2$ and (11),
$$b_1 = \mathcal{S}_1^\dagger *_N (\mathcal{I} - \mathcal{Y}_1 *_N \mathcal{H}_2) *_N \mathcal{Y}_1 = \mathcal{S}_1^\dagger *_N \mathcal{Y}_1 - \mathcal{S}_1^\dagger *_N \mathcal{Y}_1 *_N \mathcal{H}_2 *_N \mathcal{Y}_1 = \mathcal{S}_1^\dagger *_N \mathcal{Y}_1 - \mathcal{S}_1^\dagger *_N \mathcal{Y}_1 *_N \left( \mathcal{R}_2^\dagger *_N \mathcal{R}_2 + \mathcal{V}_2 *_N \mathcal{S}_1^\dagger *_N \mathcal{Y}_1 *_N (\mathcal{I} - \mathcal{R}_2^\dagger *_N \mathcal{R}_2) \right) = \mathcal{S}_1^\dagger *_N \mathcal{Y}_1 *_N (\mathcal{I} - \mathcal{R}_2^\dagger *_N \mathcal{R}_2) - \mathcal{U}_2 *_N \mathcal{V}_2 *_N \mathcal{U}_2 = \mathcal{U}_2 *_N (\mathcal{I} - \mathcal{V}_2 *_N \mathcal{U}_2) = \mathcal{U}_2 *_N \mathcal{Z}_2 = \mathcal{V}_2^T.$$
Similarly to the above,
$$b_2 = \mathcal{H}_2 *_N \mathcal{Y}_1 = \mathcal{R}_2^\dagger *_N \mathcal{Y}_1 + \mathcal{V}_2 *_N \mathcal{S}_1^\dagger *_N (\mathcal{I} - \mathcal{Y}_1 *_N \mathcal{R}_2^\dagger) *_N \mathcal{Y}_1 = \mathcal{R}_2^\dagger *_N \mathcal{R}_2 + \mathcal{V}_2 *_N \mathcal{S}_1^\dagger *_N \mathcal{Y}_1 *_N (\mathcal{I} - \mathcal{R}_2^\dagger *_N \mathcal{R}_2) = \mathcal{R}_2^\dagger *_N \mathcal{R}_2 + \mathcal{V}_2 *_N \mathcal{U}_2,$$
which is symmetric, since $\mathcal{V}_2 *_N \mathcal{U}_2 = \mathcal{I} - \mathcal{Z}_2$ by (11) and both $\mathcal{R}_2^\dagger *_N \mathcal{R}_2$ and $\mathcal{Z}_2$ are symmetric.
Thus, $(\mathcal{X} *_N \mathcal{A})^T = \mathcal{X} *_N \mathcal{A}$.
We have now shown that $\mathcal{X}$ satisfies all four conditions in Definition 1. Therefore, $\mathcal{X}$ is the M–P inverse of $\mathcal{A}$. □

4. The Minimum-Norm Least Squares Solutions

In the first part of this section, we consider the minimum-norm least squares solution of tensor system (5). The minimum-norm least squares solution of tensor Equation (4) will be given as a special case. For the coefficient tensors in system (5), we fix some notation as follows:
A 1 = A 11 + A 12 j H I 1 × × I N × K 1 × × K N , A 2 = A 21 + A 22 j H I 1 × × I N × Q 1 × × Q N , C 1 = C 11 + C 12 j H I 1 × × I N × K 1 × × K N , C 2 = C 21 + C 22 j H I 1 × × I N × Q 1 × × Q N , E 1 = E 11 + E 12 j H I 1 × × I N × K 1 × × K N , E 2 = E 21 + E 22 j H I 1 × × I N × Q 1 × × Q N , F 1 = F 11 + F 12 j H K 1 × × K N × J 1 × × J N , F 2 = F 21 + F 22 j H Q 1 × × Q N × J 1 × × J N , D = D 1 + D 2 j H I 1 × × I N × J 1 × × J N .
Moreover, we set
P = A 11 f T I 1 A 12 f I 1 j 0 0 E 11 f T F 1 E 12 f F 1 j N K 4 γ N , Q = 0 0 A 11 f T I 1 A 12 f I 1 j E 11 f T F 2 E 12 f F 2 j N K 4 γ N , Y = 0 0 A 21 f T I 1 A 22 f I 1 j E 21 f T F 2 E 22 f F 2 j N K 4 γ N ,
P 1 = Re P , P 2 = Im P , Q 1 = Re Q , Q 2 = Im Q , Y 1 = Re Y , Y 2 = Im Y ,
T 1 = P 1 Q 1 Y 1 , T 2 = P 2 Q 2 Y 2 , G = vec Re Φ C 1 vec Re Φ C 2 vec Re Φ D vec Im Φ C 1 vec Im Φ C 2 vec Im Φ D .
With $\mathcal{T}_1$ and $\mathcal{T}_2$ as above, let $\mathcal{R}$ and $\mathcal{Z}$ be defined as in Lemma 2, and set
$$\mathcal{H} = \mathcal{R}^\dagger + (\mathcal{I} - \mathcal{R}^\dagger *_N \mathcal{R}) *_N \mathcal{Z} *_N \mathcal{T}_2 *_N \mathcal{T}_1^\dagger *_N (\mathcal{T}_1^\dagger)^T *_N (\mathcal{I} - \mathcal{T}_2^T *_N \mathcal{R}^\dagger).$$
In general, tensor system (5) may have many solutions or no solution at all. Thus, it is useful to find the minimum-norm least squares solution [ X 1 L , X 2 L , X 3 L ] of (5) such that
X 1 L , X 2 L , X 3 L F 2 = min X 1 , X 2 , X 3 H L X 1 F 2 + X 2 F 2 + X 3 F 2 ,
where
H L = X 1 , X 2 , X 3 A 1 N X 1 L C 1 F 2 + A 1 N X 2 L + A 2 N X 3 L C 2 F 2 + E 1 N X 1 L N F 1 + E 1 N X 2 L N F 2 + E 2 N X 3 L N F 2 D F 2 = min X i A 1 N X 1 C 1 F 2 + A 1 N X 2 + A 2 N X 3 C 2 F 2 + E 1 N X 1 N F 1 + E 1 N X 2 N F 2 + E 2 N X 3 N F 2 D F 2 .
Theorem 1.
With the above notation, the minimum-norm least squares solution [ X 1 L , X 2 L , X 3 L ] of (5) is determined by
$$\mathbb{H}_L = \left\{ [\mathcal{X}_1, \mathcal{X}_2, \mathcal{X}_3] \ \middle| \ \begin{pmatrix} \operatorname{vec}(\mathcal{X}_1) \\ \operatorname{vec}(\mathcal{X}_2) \\ \operatorname{vec}(\mathcal{X}_3) \end{pmatrix} = \left( \mathcal{T}_1^\dagger - \mathcal{H}^T *_N \mathcal{T}_2 *_N \mathcal{T}_1^\dagger, \ \mathcal{H}^T \right) *_N \mathcal{G} + (\mathcal{I} - \mathcal{T}_1^\dagger *_N \mathcal{T}_1 - \mathcal{R} *_N \mathcal{R}^\dagger) *_N \mathcal{W} \right\},$$
where W is an arbitrary real tensor with appropriate dimension.
Proof of Theorem 1.
The following calculations are based on Lemmas 1 and 2, and Proposition 3.
min X 1 , X 2 , X 3 { A 1 N X 1 C 1 F 2 + A 1 N X 2 + A 2 N X 3 C 2 F 2 + E 1 N X 1 N F 1 + E 1 N X 2 N F 2 + E 2 N X 3 N F 2 D F 2 } = min X 1 , X 2 , X 3 { Φ A 1 N X 1 C 1 F 2 + Φ A 1 N X 2 + A 2 N X 3 C 2 F 2 + Φ E 1 N X 1 N F 1 + E 1 N X 2 N F 2 + E 2 N X 3 N F 2 D F 2 } = min X 1 , X 2 , X 3 { vec ( Φ A 1 N X 1 C 1 ) F 2 + vec ( Φ A 1 N X 2 + A 2 N X 3 C 2 ) F 2 + vec ( Φ E 1 N X 1 N F 1 + E 1 N X 2 N F 2 + E 2 N X 3 N F 2 D ) F 2 } = min X 1 , X 2 , X 3 { vec ( Φ A 1 N X 1 ) vec ( Φ C 1 ) F 2 + vec ( Φ A 1 N X 2 ) + vec ( Φ A 2 N X 3 ) vec ( Φ C 2 ) F 2 + vec ( Φ E 1 N X 1 N F 1 ) + vec ( Φ E 1 N X 2 N F 2 ) + vec ( Φ E 2 N X 3 N F 2 ) vec ( Φ D ) F 2 } = min X 1 , X 2 , X 3 A 11 f T I A 12 f I j N K 4 γ N N vec ( X 1 ) vec Φ C 2 F 2 + A 11 f T I A 21 f I j N K 4 γ N N vec ( X 2 ) + A 21 f T I A 22 f I j N K 4 γ N N vec ( X 3 ) vec Φ C 2 F 2 + E 11 f T F 1 E 12 f F 1 j N K 4 γ N N vec ( X 1 ) + E 11 f T F 2 E 21 f F 2 j N K 4 γ N N vec ( X 2 ) + E 21 f T F 2 E 22 f F 2 j N K 4 γ N N vec ( X 3 ) vec Φ D F 2 = min X 1 , X 2 , X 3 P N vec ( X 1 ) + Q N vec ( X 2 ) + Y N vec ( X 3 ) G F 2 = min X 1 , X 2 , X 3 P 1 Q 1 Y 1 P 2 Q 2 Y 2 N vec ( X 1 ) vec ( X 2 ) vec ( X 3 ) G F 2 = min X 1 , X 2 , X 3 T 1 T 2 N vec ( X 1 ) vec ( X 2 ) vec ( X 3 ) G F 2 .
Using the results on the real tensor equation in [22], we have
$$\begin{pmatrix} \operatorname{vec}(\mathcal{X}_1) \\ \operatorname{vec}(\mathcal{X}_2) \\ \operatorname{vec}(\mathcal{X}_3) \end{pmatrix} = \begin{pmatrix} \mathcal{T}_1 \\ \mathcal{T}_2 \end{pmatrix}^\dagger *_N \mathcal{G} + \left( \mathcal{I} - \begin{pmatrix} \mathcal{T}_1 \\ \mathcal{T}_2 \end{pmatrix}^\dagger *_N \begin{pmatrix} \mathcal{T}_1 \\ \mathcal{T}_2 \end{pmatrix} \right) *_N \mathcal{W},$$
where $\mathcal{W}$ is an arbitrary tensor over $\mathbb{R}$ of appropriate dimension. A simple calculation using Lemma 2 gives
$$\begin{pmatrix} \operatorname{vec}(\mathcal{X}_1) \\ \operatorname{vec}(\mathcal{X}_2) \\ \operatorname{vec}(\mathcal{X}_3) \end{pmatrix} = \left( \mathcal{T}_1^\dagger - \mathcal{H}^T *_N \mathcal{T}_2 *_N \mathcal{T}_1^\dagger, \ \mathcal{H}^T \right) *_N \mathcal{G} + (\mathcal{I} - \mathcal{T}_1^\dagger *_N \mathcal{T}_1 - \mathcal{R} *_N \mathcal{R}^\dagger) *_N \mathcal{W}. \quad \square$$
Using Lemma 2, we have the following solvability condition.
Corollary 1.
The quaternion tensor system (5) is solvable if and only if
I T 1 T 2 T 1 T 2 N G = S 11 S 12 S 12 T S 22 N G = 0 ,
where
S 11 = I T 1 N T 1 + T 1 T N T 2 T N Z N I R N R , S 12 = T 1 T N T 2 T N Z N I R N R , S 22 = Z N I R N R .
In this case, the solution satisfies
$$\mathbb{H}_G = \left\{ [\mathcal{X}_1, \mathcal{X}_2, \mathcal{X}_3] \ \middle| \ \begin{pmatrix} \operatorname{vec}(\mathcal{X}_1) \\ \operatorname{vec}(\mathcal{X}_2) \\ \operatorname{vec}(\mathcal{X}_3) \end{pmatrix} = \left( \mathcal{T}_1^\dagger - \mathcal{H}^T *_N \mathcal{T}_2 *_N \mathcal{T}_1^\dagger, \ \mathcal{H}^T \right) *_N \mathcal{G} + (\mathcal{I} - \mathcal{T}_1^\dagger *_N \mathcal{T}_1 - \mathcal{R} *_N \mathcal{R}^\dagger) *_N \mathcal{W} \right\},$$
where W is an arbitrary real tensor with appropriate dimension.
Furthermore, the uniqueness of the minimum-norm least squares solution of (5) can be obtained as follows.
Corollary 2.
The tensor system (5) has a unique minimum-norm least squares solution $[\mathcal{X}_1^L, \mathcal{X}_2^L, \mathcal{X}_3^L] \in \mathbb{H}_L$, which satisfies
$$\begin{pmatrix} \operatorname{vec}(\mathcal{X}_1^L) \\ \operatorname{vec}(\mathcal{X}_2^L) \\ \operatorname{vec}(\mathcal{X}_3^L) \end{pmatrix} = \left( \mathcal{T}_1^\dagger - \mathcal{H}^T *_N \mathcal{T}_2 *_N \mathcal{T}_1^\dagger, \ \mathcal{H}^T \right) *_N \mathcal{G}.$$
Proof of Corollary 2.
The solution set $\mathbb{H}_L$ is a nonempty, closed convex set, so it contains a unique element $[\mathcal{X}_1^L, \mathcal{X}_2^L, \mathcal{X}_3^L]$ of minimum norm. It follows that
$$\|[\mathcal{X}_1^L, \mathcal{X}_2^L, \mathcal{X}_3^L]\|_F^2 = \|\mathcal{X}_1^L\|_F^2 + \|\mathcal{X}_2^L\|_F^2 + \|\mathcal{X}_3^L\|_F^2 = \min_{[\mathcal{X}_1, \mathcal{X}_2, \mathcal{X}_3] \in \mathbb{H}_L} \left( \|\mathcal{X}_1\|_F^2 + \|\mathcal{X}_2\|_F^2 + \|\mathcal{X}_3\|_F^2 \right) = \min_{[\mathcal{X}_1, \mathcal{X}_2, \mathcal{X}_3] \in \mathbb{H}_L} \left\| \begin{pmatrix} \operatorname{vec}(\mathcal{X}_1) \\ \operatorname{vec}(\mathcal{X}_2) \\ \operatorname{vec}(\mathcal{X}_3) \end{pmatrix} \right\|_F^2.$$
Hence, taking $\mathcal{W} = 0$, $[\mathcal{X}_1^L, \mathcal{X}_2^L, \mathcal{X}_3^L]$ is the minimum-norm least squares solution of system (5), which satisfies
$$\begin{pmatrix} \operatorname{vec}(\mathcal{X}_1^L) \\ \operatorname{vec}(\mathcal{X}_2^L) \\ \operatorname{vec}(\mathcal{X}_3^L) \end{pmatrix} = \left( \mathcal{T}_1^\dagger - \mathcal{H}^T *_N \mathcal{T}_2 *_N \mathcal{T}_1^\dagger, \ \mathcal{H}^T \right) *_N \mathcal{G}. \quad \square$$
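The structure of Theorem 1 and Corollary 2 is that of an ordinary minimum-norm least squares problem for the vectorized real system $T x = g$: the $\mathcal{W}$-term ranges over the null space of $T$, and the choice $\mathcal{W} = 0$ minimizes the norm. The following NumPy sketch (with made-up dimensions, not the ones of the paper) illustrates this with the matrix pseudoinverse:

```python
import numpy as np

# After vectorization, the system becomes a real linear system T x = g;
# the minimum-norm least squares solution is x = pinv(T) g, i.e. the
# W = 0 member of the solution family.
rng = np.random.default_rng(0)
T = rng.standard_normal((8, 5))
T[:, 4] = T[:, 3]            # make T rank-deficient: many LS solutions
g = rng.standard_normal(8)

x_min = np.linalg.pinv(T) @ g

# Any other least squares solution x_min + (I - pinv(T) T) w has the
# same residual but a norm at least as large.
w = rng.standard_normal(5)
x_other = x_min + (np.eye(5) - np.linalg.pinv(T) @ T) @ w
assert np.allclose(T @ x_min, T @ x_other)          # same residual
assert np.linalg.norm(x_min) <= np.linalg.norm(x_other) + 1e-12
```

The minimum-norm property follows because x_min lies in the row space of T, orthogonal to the null-space correction.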
As a special case, we give the solution of tensor Equation (4). For the coefficient tensors in (4), let
A 1 = A 11 + A 12 j H I 1 × × I N × K 1 × × K N , A 2 = A 21 + A 22 j H I 1 × × I N × Q 1 × × Q N , B 1 = B 11 + B 12 j H K 1 × × K N × J 1 × × J N , B 2 = B 21 + B 22 j H Q 1 × × Q N × J 1 × × J N , C = C 1 + C 2 j H I 1 × × I N × J 1 × × J N ,
and set
P 01 = A 11 f T B 1 A 12 f B 1 j N K 4 γ N , Q 01 = A 11 f T B 2 A 12 f B 2 j N K 4 γ N , Y 01 = A 21 f T B 2 A 22 f B 2 j N K 4 γ N ,
T 01 = Re P 01 Re Q 01 Re Y 01 , T 02 = Im P 01 Im Q 01 Im Y 01 ,
$$\mathcal{R}_{01} = (\mathcal{I} - \mathcal{T}_{01}^\dagger *_N \mathcal{T}_{01}) *_N \mathcal{T}_{02}^T, \quad \mathcal{Z}_{01} = \left( \mathcal{I} + (\mathcal{I} - \mathcal{R}_{01}^\dagger *_N \mathcal{R}_{01}) *_N \mathcal{T}_{02} *_N \mathcal{T}_{01}^\dagger *_N (\mathcal{T}_{01}^\dagger)^T *_N \mathcal{T}_{02}^T *_N (\mathcal{I} - \mathcal{R}_{01}^\dagger *_N \mathcal{R}_{01}) \right)^{-1},$$
$$\mathcal{H}_{01} = \mathcal{R}_{01}^\dagger + (\mathcal{I} - \mathcal{R}_{01}^\dagger *_N \mathcal{R}_{01}) *_N \mathcal{Z}_{01} *_N \mathcal{T}_{02} *_N \mathcal{T}_{01}^\dagger *_N (\mathcal{T}_{01}^\dagger)^T *_N (\mathcal{I} - \mathcal{T}_{02}^T *_N \mathcal{R}_{01}^\dagger).$$
Theorem 2.
With the above notation, the minimum-norm least squares solution [ X 1 L , X 2 L , X 3 L ] of tensor Equation (4) is determined by
$$\mathbb{H}_{L1} = \left\{ [\mathcal{X}_1, \mathcal{X}_2, \mathcal{X}_3] \ \middle| \ \| \mathcal{A}_1 *_N \mathcal{X}_1 *_N \mathcal{B}_1 + \mathcal{A}_1 *_N \mathcal{X}_2 *_N \mathcal{B}_2 + \mathcal{A}_2 *_N \mathcal{X}_3 *_N \mathcal{B}_2 - \mathcal{C} \|_F^2 = \min_{\tilde{\mathcal{X}}_i} \| \mathcal{A}_1 *_N \tilde{\mathcal{X}}_1 *_N \mathcal{B}_1 + \mathcal{A}_1 *_N \tilde{\mathcal{X}}_2 *_N \mathcal{B}_2 + \mathcal{A}_2 *_N \tilde{\mathcal{X}}_3 *_N \mathcal{B}_2 - \mathcal{C} \|_F^2 \right\}$$
$$= \left\{ [\mathcal{X}_1, \mathcal{X}_2, \mathcal{X}_3] \ \middle| \ \begin{pmatrix} \operatorname{vec}(\mathcal{X}_1) \\ \operatorname{vec}(\mathcal{X}_2) \\ \operatorname{vec}(\mathcal{X}_3) \end{pmatrix} = \left( \mathcal{T}_{01}^\dagger - \mathcal{H}_{01}^T *_N \mathcal{T}_{02} *_N \mathcal{T}_{01}^\dagger, \ \mathcal{H}_{01}^T \right) *_N \operatorname{vec}(\Phi_\mathcal{C}) + (\mathcal{I} - \mathcal{T}_{01}^\dagger *_N \mathcal{T}_{01} - \mathcal{R}_{01} *_N \mathcal{R}_{01}^\dagger) *_N \mathcal{W}_1 \right\},$$
where W 1 is an arbitrary real tensor with appropriate dimension.
The minimum-norm least squares solution X 1 L , X 2 L , X 3 L H L 1 satisfies
$$\begin{pmatrix} \operatorname{vec}(\mathcal{X}_1^L) \\ \operatorname{vec}(\mathcal{X}_2^L) \\ \operatorname{vec}(\mathcal{X}_3^L) \end{pmatrix} = \left( \mathcal{T}_{01}^\dagger - \mathcal{H}_{01}^T *_N \mathcal{T}_{02} *_N \mathcal{T}_{01}^\dagger, \ \mathcal{H}_{01}^T \right) *_N \operatorname{vec}(\Phi_\mathcal{C}).$$
As applications of the minimum-norm least squares solutions of (5) and (4), in the second part of this section we consider the minimum-norm least squares reducible solutions of system (6) and Equation (3). We write the reducible solution $\mathcal{X} \in \mathbb{H}^{L_1 \times \cdots \times L_N \times L_1 \times \cdots \times L_N}$ in the form
$$\mathcal{X} = \mathcal{K} *_N \begin{pmatrix} \mathcal{X}_1 & \mathcal{X}_2 \\ 0 & \mathcal{X}_3 \end{pmatrix} *_N \mathcal{K}^{-1},$$
where $\mathcal{K}$ is a permutation tensor, $\mathcal{X}_1 \in \mathbb{H}^{K_1 \times \cdots \times K_N \times K_1 \times \cdots \times K_N}$, $\mathcal{X}_2 \in \mathbb{H}^{K_1 \times \cdots \times K_N \times Q_1 \times \cdots \times Q_N}$, $\mathcal{X}_3 \in \mathbb{H}^{Q_1 \times \cdots \times Q_N \times Q_1 \times \cdots \times Q_N}$, and $K_i + Q_i = L_i$ ($i = 1, \ldots, N$). Next, we partition the products of the coefficient tensors in (6) with $\mathcal{K}$ into the following block tensors:
$$\mathcal{A} *_N \mathcal{K} = (\mathcal{A}_1 \ \ \mathcal{A}_2), \quad \mathcal{A}_1 \in \mathbb{H}^{I_1 \times \cdots \times I_N \times K_1 \times \cdots \times K_N}, \quad \mathcal{A}_2 \in \mathbb{H}^{I_1 \times \cdots \times I_N \times Q_1 \times \cdots \times Q_N},$$
$$\mathcal{C} *_N \mathcal{K} = (\mathcal{C}_1 \ \ \mathcal{C}_2), \quad \mathcal{C}_1 \in \mathbb{H}^{I_1 \times \cdots \times I_N \times K_1 \times \cdots \times K_N}, \quad \mathcal{C}_2 \in \mathbb{H}^{I_1 \times \cdots \times I_N \times Q_1 \times \cdots \times Q_N},$$
$$\mathcal{E} *_N \mathcal{K} = (\mathcal{E}_1 \ \ \mathcal{E}_2), \quad \mathcal{E}_1 \in \mathbb{H}^{I_1 \times \cdots \times I_N \times K_1 \times \cdots \times K_N}, \quad \mathcal{E}_2 \in \mathbb{H}^{I_1 \times \cdots \times I_N \times Q_1 \times \cdots \times Q_N},$$
$$\mathcal{K}^{-1} *_N \mathcal{F} = \begin{pmatrix} \mathcal{F}_1 \\ \mathcal{F}_2 \end{pmatrix}, \quad \mathcal{F}_1 \in \mathbb{H}^{K_1 \times \cdots \times K_N \times J_1 \times \cdots \times J_N}, \quad \mathcal{F}_2 \in \mathbb{H}^{Q_1 \times \cdots \times Q_N \times J_1 \times \cdots \times J_N}.$$
Substituting (23) and (24) into tensor system (6), we compare the corresponding tensors on both sides, and thus obtain a system in the form of (5). Therefore, we can use our previous results on (5) to find the minimum-norm least squares reducible solution X R = K N X 1 R X 2 R O X 3 R N K 1 such that
X R F 2 = min X H R X F 2 = min X H R X 1 F 2 + X 2 F 2 + X 3 F 2 ,
where
H R = X = K N X 1 X 2 O X 3 N K 1 A 1 N X 1 R C 1 F 2 + A 1 N X 2 R + A 2 N X 3 R C 2 F 2 + E 1 N X 1 R N F 1 + E 1 N X 2 R N F 2 + E 2 N X 3 R N F 2 D F 2 = min X i A 1 N X 1 C 1 F 2 + A 1 N X 2 + A 2 N X 3 C 2 F 2 + E 1 N X 1 N F 1 + E 1 N X 2 N F 2 + E 2 N X 3 N F 2 D F 2 .
Theorem 3.
With the above notation for the coefficient tensors $\mathcal{A}, \mathcal{C}, \mathcal{E}, \mathcal{F}, \mathcal{D}$ in (6), $\mathcal{T}_1, \mathcal{T}_2, \mathcal{G}$ in (13), and the permutation tensor $\mathcal{K}$, we have
$$\mathbb{H}_R = \left\{ \mathcal{X} = \mathcal{K} *_N \begin{pmatrix} \mathcal{X}_1 & \mathcal{X}_2 \\ 0 & \mathcal{X}_3 \end{pmatrix} *_N \mathcal{K}^{-1} \ \middle| \ \begin{pmatrix} \operatorname{vec}(\mathcal{X}_1) \\ \operatorname{vec}(\mathcal{X}_2) \\ \operatorname{vec}(\mathcal{X}_3) \end{pmatrix} = \left( \mathcal{T}_1^\dagger - \mathcal{H}^T *_N \mathcal{T}_2 *_N \mathcal{T}_1^\dagger, \ \mathcal{H}^T \right) *_N \mathcal{G} + (\mathcal{I} - \mathcal{T}_1^\dagger *_N \mathcal{T}_1 - \mathcal{R} *_N \mathcal{R}^\dagger) *_N \mathcal{W} \right\},$$
where $\mathcal{W}$ is an arbitrary real tensor of appropriate dimension. The tensor system (6) has the minimum-norm least squares reducible solution $\mathcal{X}_R = \mathcal{K} *_N \begin{pmatrix} \mathcal{X}_1^R & \mathcal{X}_2^R \\ 0 & \mathcal{X}_3^R \end{pmatrix} *_N \mathcal{K}^{-1} \in \mathbb{H}_R$, which satisfies
$$\begin{pmatrix} \operatorname{vec}(\mathcal{X}_1^R) \\ \operatorname{vec}(\mathcal{X}_2^R) \\ \operatorname{vec}(\mathcal{X}_3^R) \end{pmatrix} = \left( \mathcal{T}_1^\dagger - \mathcal{H}^T *_N \mathcal{T}_2 *_N \mathcal{T}_1^\dagger, \ \mathcal{H}^T \right) *_N \mathcal{G}.$$
In particular, we can obtain the minimum-norm least squares reducible solution of tensor Equation (3). Let
$$\mathcal{A} *_N \mathcal{K} = (\mathcal{A}_1 \ \ \mathcal{A}_2), \quad \mathcal{A}_1 \in \mathbb{H}^{I_1 \times \cdots \times I_N \times K_1 \times \cdots \times K_N}, \quad \mathcal{A}_2 \in \mathbb{H}^{I_1 \times \cdots \times I_N \times Q_1 \times \cdots \times Q_N},$$
$$\mathcal{K}^{-1} *_N \mathcal{B} = \begin{pmatrix} \mathcal{B}_1 \\ \mathcal{B}_2 \end{pmatrix}, \quad \mathcal{B}_1 \in \mathbb{H}^{K_1 \times \cdots \times K_N \times J_1 \times \cdots \times J_N}, \quad \mathcal{B}_2 \in \mathbb{H}^{Q_1 \times \cdots \times Q_N \times J_1 \times \cdots \times J_N}.$$
Theorem 4.
With the above notation for the coefficient tensors in Equation (3), and $\mathcal{T}_{01}$, $\mathcal{T}_{02}$, and $\operatorname{vec}(\Phi_\mathcal{C})$ in (21), we have
$$\mathbb{H}_{R1} = \left\{ \mathcal{K} *_N \begin{pmatrix} \mathcal{X}_1 & \mathcal{X}_2 \\ 0 & \mathcal{X}_3 \end{pmatrix} *_N \mathcal{K}^{-1} \ \middle| \ \| \mathcal{A}_1 *_N \mathcal{X}_1 *_N \mathcal{B}_1 + \mathcal{A}_1 *_N \mathcal{X}_2 *_N \mathcal{B}_2 + \mathcal{A}_2 *_N \mathcal{X}_3 *_N \mathcal{B}_2 - \mathcal{C} \|_F^2 = \min_{\mathcal{X}_i} \| \mathcal{A}_1 *_N \mathcal{X}_1 *_N \mathcal{B}_1 + \mathcal{A}_1 *_N \mathcal{X}_2 *_N \mathcal{B}_2 + \mathcal{A}_2 *_N \mathcal{X}_3 *_N \mathcal{B}_2 - \mathcal{C} \|_F^2 \right\}$$
$$= \left\{ \mathcal{K} *_N \begin{pmatrix} \mathcal{X}_1 & \mathcal{X}_2 \\ 0 & \mathcal{X}_3 \end{pmatrix} *_N \mathcal{K}^{-1} \ \middle| \ \begin{pmatrix} \operatorname{vec}(\mathcal{X}_1) \\ \operatorname{vec}(\mathcal{X}_2) \\ \operatorname{vec}(\mathcal{X}_3) \end{pmatrix} = \left( \mathcal{T}_{01}^\dagger - \mathcal{H}_{01}^T *_N \mathcal{T}_{02} *_N \mathcal{T}_{01}^\dagger, \ \mathcal{H}_{01}^T \right) *_N \operatorname{vec}(\Phi_\mathcal{C}) + (\mathcal{I} - \mathcal{T}_{01}^\dagger *_N \mathcal{T}_{01} - \mathcal{R}_{01} *_N \mathcal{R}_{01}^\dagger) *_N \mathcal{W}_1 \right\},$$
where $\mathcal{W}_1$ is an arbitrary real tensor of appropriate dimension. The minimum-norm least squares reducible solution $\mathcal{X}_R = \mathcal{K} *_N \begin{pmatrix} \mathcal{X}_1^R & \mathcal{X}_2^R \\ 0 & \mathcal{X}_3^R \end{pmatrix} *_N \mathcal{K}^{-1} \in \mathbb{H}_{R1}$ satisfies
$$\begin{pmatrix} \operatorname{vec}(\mathcal{X}_1^R) \\ \operatorname{vec}(\mathcal{X}_2^R) \\ \operatorname{vec}(\mathcal{X}_3^R) \end{pmatrix} = \left( \mathcal{T}_{01}^\dagger - \mathcal{H}_{01}^T *_N \mathcal{T}_{02} *_N \mathcal{T}_{01}^\dagger, \ \mathcal{H}_{01}^T \right) *_N \operatorname{vec}(\Phi_\mathcal{C}).$$

5. Algorithm and Numerical Example

Based on the methods discussed in Section 4, we develop the following Algorithm 1 to solve tensor system (6). We also present a numerical example to show the accuracy of our method.
Algorithm 1: Finding the minimum-norm least squares reducible solution of system (6)
  • Input: the coefficient tensors A , C , E , F , and D in (6), and permutation tensor K H L 1 × × L N × L 1 × × L N .
  • Compute A 1 , A 2 , C 1 , C 2 , E 1 , E 2 , F 1 , F 2 , and D in (24).
  • Compute T 1 , T 2 , R , Z , H , S 11 , S 12 , S 22 , and G , which are defined in Section 3 and Section 4.
  • If (18) holds, we compute X 1 R , X 2 R , X 3 R H G and obtain the least squares reducible solution X R = K N X 1 R X 2 R O X 3 R N K 1 .
    Otherwise, go to next step.
  • Compute X 1 R , X 2 R , X 3 R H L .
    Then we obtain the least squares reducible solution X R = K N X 1 R X 2 R O X 3 R N K 1 .
We implemented the above algorithm in MATLAB. The codes are available upon request. The following example shows that our algorithm works well.
Example 1.
Let I 1 = I 2 = J 1 = J 2 = Q 1 = Q 2 = K 1 = K 2 = 2 . Given tensor A , E , F in system (6) and permutation tensor K H 4 × 4 × 4 × 4 :
A ( : , : , 1 , 1 ) = j + k i + j 0 i + j , A ( : , : , 2 , 1 ) = 1 + i i 1 1 + j , A ( : , : , 1 , 2 ) = j i + j k i + j + k , A ( : , : , 2 , 2 ) = j + k i + j 1 + j + k 1 + j , A ( : , : , 1 , 3 ) = i + j 1 + i + j + k 0 i + k , A ( : , : , 2 , 3 ) = k 1 + i 1 + j 1 + j , A ( : , : , 1 , 4 ) = 1 + i + j + k 1 + i + j 1 + i + j k , A ( : , : , 2 , 4 ) = 1 + j i + j 1 + j + k i , A ( : , : , 3 , 1 ) = A ( : , : , 4 , 1 ) = 0 , A ( : , : , 3 , 2 ) = A ( : , : , 4 , 2 ) = 0 , A ( : , : , 3 , 3 ) = A ( : , : , 4 , 3 ) = 0 , A ( : , : , 3 , 4 ) = A ( : , : , 4 , 4 ) = 0 , E ( : , : , 1 , 1 ) = i + j k 1 + k i + j + k , E ( : , : , 2 , 1 ) = 1 + i + j j 1 + i + j 1 + j + k , E ( : , : , 1 , 2 ) = i + j i + j i + k 1 + k , E ( : , : , 2 , 2 ) = 1 + j 0 1 + i + j 1 + i + k , E ( : , : , 1 , 3 ) = i + k 1 + j i + k j + k , E ( : , : , 2 , 3 ) = 1 + i 0 i 1 + i + k , E ( : , : , 1 , 4 ) = 1 1 + i + j + k 1 + i + j + k 1 + k , E ( : , : , 2 , 4 ) = i + k 1 + i + j + k 1 + j + k 1 + i + j , E ( : , : , 3 , 1 ) = E ( : , : , 4 , 1 ) = 0 , E ( : , : , 3 , 2 ) = E ( : , : , 4 , 2 ) = 0 , E ( : , : , 3 , 3 ) = E ( : , : , 4 , 3 ) = 0 , E ( : , : , 3 , 4 ) = E ( : , : , 4 , 4 ) = 0 , F ( : , : , 1 , 1 ) = 1 0 0 0 1 1 0 0 0 0 j 0 0 0 1 + j 1 , F ( : , : , 2 , 1 ) 1 + j 0 0 0 0 0 0 0 0 0 1 j 0 0 1 1 + j , F ( : , : , 1 , 2 ) = j 0 0 0 j 1 0 0 0 0 1 1 0 0 j 1 + j , F ( : , : , 2 , 2 ) 1 j 0 0 1 1 + j 0 0 0 0 1 1 0 0 j j ,
and
K ( : , : , 1 , 1 ) = 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 , K ( : , : , 2 , 1 ) = 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 , K ( : , : , 3 , 1 ) = 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 , K ( : , : , 4 , 1 ) = 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 , K ( : , : , 1 , 2 ) = 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 , K ( : , : , 2 , 2 ) = 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 , K ( : , : , 3 , 2 ) = 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 , K ( : , : , 4 , 2 ) = 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 , K ( : , : , 1 , 3 ) = 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 , K ( : , : , 2 , 3 ) = 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 ,
K ( : , : , 3 , 3 ) = 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 , K ( : , : , 4 , 3 ) = 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 , K ( : , : , 1 , 4 ) = 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 , K ( : , : , 2 , 4 ) = 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 , K ( : , : , 3 , 4 ) = 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 , K ( : , : , 4 , 4 ) = 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 .
We set the solution as follows:
X = K N X 1 X 2 0 X 3 N K 1 ,
where
X 1 ( : , : , 1 , 1 ) = j + k 1 0 i , X 1 ( : , : , 2 , 1 ) = 1 + j 1 + k 0 1 + i + k , X 1 ( : , : , 1 , 2 ) = 1 + i + k i + j 1 k , X 1 ( : , : , 2 , 2 ) = i + k 1 + i 1 + i + k k , X 2 ( : , : , 1 , 1 ) = 1 j i i + j , X 2 ( : , : , 2 , 1 ) = 1 i 0 0 , X 2 ( : , : , 1 , 2 ) = i + k j + k 1 + j + k 1 + j , X 2 ( : , : , 2 , 2 ) = 1 + i 1 + k j 1 + i + k , X 3 ( : , : , 1 , 1 ) = 1 + k i i 1 , X 3 ( : , : , 2 , 1 ) = 1 + i + k i + j 0 i + j + k , X 3 ( : , : , 1 , 2 ) = i + j + k k 1 + k 1 + j , X 3 ( : , : , 2 , 2 ) = j + k i + j + k 1 + j + k i .
$\mathcal{C}$ and $\mathcal{D}$ in (6) can be obtained from the above assumptions. Using Algorithm 1, we obtain $N_1 = 2.1936 \times 10^{-11}$ and the minimum-norm least squares reducible solution $\mathcal{X}_R = \mathcal{K} *_N \begin{pmatrix} \mathcal{X}_1^R & \mathcal{X}_2^R \\ 0 & \mathcal{X}_3^R \end{pmatrix} *_N \mathcal{K}^{-1} \in \mathbb{H}_R$ such that
$$\|\mathcal{X}_R - \mathcal{X}\|_F^2 = 7.6082 \times 10^{-12}.$$

6. Conclusions

We obtained the minimum-norm least squares solution for system (5) by using the Moore–Penrose inverses of block tensors. As an application, we derived the minimum-norm least squares reducible solution for system (6). In addition, we used an algorithm and a numerical example to verify the main results of this paper. It is worth noting that the main results on (5) and (6) can be used for solving other quaternion tensor systems.

Author Contributions

All authors contributed equally to the conceptualization, formal analysis, investigation, methodology, software, validation, writing of the original draft, writing of the review, and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China (11971294) and the China Scholarship Council (#202006890063).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Cecioni, F. Sopra alcune operazioni algebriche sulle matrici. Ann. Scuola Norm. Sup. Pisa Sci. Fis. Mat. 1910, 11, 1–141.
2. Nie, X.; Wang, Q.W.; Zhang, Y. A system of matrix equations over the quaternion algebra with applications. Algebra Colloq. 2017, 24, 233–253.
3. Yuan, Y.X.; Mahmoud, S.M. Least-squares solutions to the matrix equations AX = B and XC = D. Appl. Math. Comput. 2010, 216, 3120–3125.
4. Hamilton, W.R. Elements of Quaternions; Longmans, Green, & Company: London, UK, 1866.
5. Le Bihan, N.; Mars, J. Singular value decomposition of quaternion matrices: A new tool for vector-sensor signal processing. Signal Process. 2004, 84, 1177–1199.
6. Pei, S.C.; Chang, J.H.; Ding, J.J. Quaternion matrix singular value decomposition and its applications for color image processing. In Proceedings of the 2003 International Conference on Image Processing, Barcelona, Spain, 14–17 September 2003; Volume 1, p. I-805.
7. Took, C.C.; Mandic, D.P. Quaternion-valued stochastic gradient-based adaptive IIR filtering. IEEE Trans. Signal Process. 2010, 58, 3895–3901.
8. Took, C.C.; Mandic, D.P. Augmented second-order statistics of quaternion random signals. Signal Process. 2011, 91, 214–224.
9. Brazell, M.; Li, N.; Navasca, C.; Tamon, C. Solving multilinear systems via tensor inversion. SIAM J. Matrix Anal. Appl. 2013, 34, 542–570.
10. Chen, Z.; Lu, L. A projection method and Kronecker product preconditioner for solving Sylvester tensor equations. Sci. China Math. 2012, 55, 1281–1292.
11. Guan, Y.; Chu, D. Numerical computation for orthogonal low-rank approximation of tensors. SIAM J. Matrix Anal. Appl. 2019, 40, 1047–1065.
12. Guan, Y.; Chu, M.T.; Chu, D. SVD-based algorithms for the best rank-1 approximation of a symmetric tensor. SIAM J. Matrix Anal. Appl. 2018, 39, 1095–1115.
13. Guan, Y.; Chu, M.T.; Chu, D. Convergence analysis of an SVD-based algorithm for the best rank-1 tensor approximation. Linear Algebra Appl. 2018, 555, 53–69.
14. Li, T.; Wang, Q.W.; Zhang, X.F. A modified conjugate residual method and nearest Kronecker product preconditioner for the generalized coupled Sylvester tensor equations. Mathematics 2022, 10, 1730.
15. Liu, L.S.; Wang, Q.W.; Chen, J.F.; Xie, Y.Z. An exact solution to a quaternion matrix equation with an application. Symmetry 2022, 14, 375.
16. Liu, L.S.; Wang, Q.W.; Mahmoud Saad, M. A Sylvester-type matrix equation over the Hamilton quaternions with an application. Mathematics 2022, 10, 1758.
17. Mahmoud Saad, M.; Wang, Q.W. Three symmetrical systems of coupled Sylvester-like quaternion matrix equations. Symmetry 2022, 14, 550.
18. Qi, L. Eigenvalues of a real supersymmetric tensor. J. Symb. Comput. 2005, 40, 1302–1324.
19. Qi, L. Symmetric nonnegative tensors and copositive tensors. Linear Algebra Appl. 2013, 439, 228–238.
20. Qi, L.; Chen, H.; Chen, Y. Tensor Eigenvalues and Their Applications; Springer: Singapore, 2018.
21. Qi, L.; Luo, Z. Tensor Analysis: Spectral Theory and Special Tensors; SIAM: Philadelphia, PA, USA, 2017.
22. Sun, L.; Zheng, B.; Bu, C.; Wei, Y. Moore–Penrose inverse of tensors via Einstein product. Linear Multilinear Algebra 2016, 64, 686–698.
23. Wang, R.N.; Wang, Q.W.; Liu, L.S. Solving a system of Sylvester-like quaternion matrix equations. Symmetry 2022, 14, 1056.
24. Wei, M.; Li, Y.; Zhang, F.; Zhao, J. Quaternion Matrix Computations; Nova Science Publishers, Inc.: Hauppauge, NY, USA, 2018.
25. Xu, Y.F.; Wang, Q.W.; Liu, L.S.; Mahmoud Saad, M. A constrained system of matrix equations. Comput. Appl. Math. 2022, 41, 1–24.
26. Einstein, A. The formal foundation of the general theory of relativity. Sitzungsber. Preuss. Akad. Wiss. Berlin (Math. Phys.) 1914, 1914, 1030–1085.
27. He, Z.H. The general solution to a system of coupled Sylvester-type quaternion tensor equations involving η-Hermicity. Bull. Iran. Math. Soc. 2019, 45, 1407–1430.
28. Kirkland, S.; Neumann, M.; Xu, J. Transition matrices for well-conditioned Markov chains. Linear Algebra Appl. 2007, 424, 118–131.
29. Rivin, I. Walks on groups, counting reducible matrices, polynomials, and surface and free group automorphisms. Duke Math. J. 2008, 142, 353–379.
30. Wu, C.W. On bounds of extremal eigenvalues of irreducible and m-reducible matrices. Linear Algebra Appl. 2005, 402, 29–45.
31. Xie, M.; Wang, Q.W. The reducible solution to a quaternion tensor equation. Front. Math. China 2020, 15, 1047–1070.
32. He, Z.H.; Navasca, C.; Wang, Q.W. Tensor decompositions and tensor equations over quaternion algebra. arXiv 2017, arXiv:1710.07552.
33. Wang, Q.W.; Lv, R.; Zhang, Y. The least-squares solution with the least norm to a system of tensor equations over the quaternion algebra. Linear Multilinear Algebra 2022, 70, 1942–1962.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
