Article

Practical Criteria for H-Tensors and Their Application

School of Mathematics and Statistics, Beihua University, Jilin 132013, China
*
Author to whom correspondence should be addressed.
Symmetry 2022, 14(1), 155; https://doi.org/10.3390/sym14010155
Submission received: 15 December 2021 / Revised: 9 January 2022 / Accepted: 11 January 2022 / Published: 13 January 2022
(This article belongs to the Topic Applied Metaheuristic Computing)

Abstract

Identifying the positive definiteness of even-order real symmetric tensors is an important component of tensor analysis, and H-tensors have been utilized for this purpose. In this paper, some new practical criteria for identifying H-tensors are given. As an application, several sufficient conditions for the positive definiteness of an even-order real symmetric tensor are obtained. Numerical examples are given to illustrate the effectiveness of the proposed method.

1. Introduction

Tensor theory is widely used in digital signal processing, medical image processing, data mining, quantum entanglement, and other fields [1,2,3,4,5,6,7]. H-tensor theory is an integral part of tensor theory. It plays an important role in physics, for example in control theory and dynamic control systems, and in mathematics, for example in the numerical solution of partial differential equations and the discretization of nonlinear parabolic equations [8,9,10,11,12,13].
Let $\mathbb{C}$ ($\mathbb{R}$) denote the complex (real) field and $N = \{1, 2, \ldots, n\}$. A complex (real) tensor $\mathcal{A} = (a_{i_1 i_2 \cdots i_m})$ of order $m$ and dimension $n$ consists of $n^m$ complex (real) entries:
$$a_{i_1 i_2 \cdots i_m} \in \mathbb{C}\ (\mathbb{R}),$$
where $i_j = 1, 2, \ldots, n$ and $j = 1, 2, \ldots, m$. A tensor $\mathcal{A} = (a_{i_1 i_2 \cdots i_m})$ is called symmetric [14] if
$$a_{i_1 i_2 \cdots i_m} = a_{\pi(i_1 i_2 \cdots i_m)}, \quad \forall\, \pi \in \Pi_m,$$
where $\Pi_m$ is the permutation group of $m$ indices. Furthermore, a tensor $\mathcal{I} = (\delta_{i_1 i_2 \cdots i_m})$ is called the unit tensor [15] if its entries are
$$\delta_{i_1 i_2 \cdots i_m} = \begin{cases} 1, & \text{if } i_1 = i_2 = \cdots = i_m, \\ 0, & \text{otherwise.} \end{cases}$$
Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m})$ be a tensor of order $m$ and dimension $n$. If there exist a complex number $\lambda$ and a non-zero complex vector $x = (x_1, x_2, \ldots, x_n)^T$ satisfying the homogeneous polynomial equations
$$\mathcal{A} x^{m-1} = \lambda x^{[m-1]},$$
then we call $\lambda$ an eigenvalue of $\mathcal{A}$ and $x$ an eigenvector of $\mathcal{A}$ associated with $\lambda$ [14,16,17,18,19,20]. Here $\mathcal{A} x^{m-1}$ and $x^{[m-1]}$ are vectors whose $i$th components are
$$(\mathcal{A} x^{m-1})_i = \sum_{i_2, i_3, \ldots, i_m \in N} a_{i i_2 i_3 \cdots i_m} x_{i_2} x_{i_3} \cdots x_{i_m}$$
and
$$(x^{[m-1]})_i = x_i^{m-1}.$$
In particular, if  λ and x are restricted to the real field, then we call λ an H-eigenvalue of A and x an H-eigenvector of A associated with λ [14].
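To make the operations above concrete, the following is a minimal sketch (not part of the original paper) of the tensor–vector product $\mathcal{A} x^{m-1}$ and the eigenvalue relation, representing an order-$m$, dimension-$n$ tensor as a dense numpy array of shape $(n, \ldots, n)$; the function names are illustrative assumptions of this sketch.

```python
import numpy as np

def tensor_vector_product(A, x):
    """Return the vector A x^{m-1}: its i-th entry is the sum over
    i_2,...,i_m of a_{i i_2 ... i_m} x_{i_2} ... x_{i_m}."""
    B = np.asarray(A, dtype=float)
    x = np.asarray(x, dtype=float)
    for _ in range(B.ndim - 1):
        # contract the last index of B with x
        B = np.tensordot(B, x, axes=([B.ndim - 1], [0]))
    return B

def is_eigenpair(A, lam, x, tol=1e-10):
    """Check the relation A x^{m-1} = lam * x^{[m-1]} componentwise."""
    m = np.asarray(A).ndim
    rhs = lam * np.asarray(x, dtype=float) ** (m - 1)
    return np.allclose(tensor_vector_product(A, x), rhs, atol=tol)
```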
It is known that an $m$th-degree homogeneous polynomial of $n$ variables $f(x)$ can be written as
$$f(x) = \sum_{i_1, i_2, \ldots, i_m \in N} a_{i_1 i_2 \cdots i_m} x_{i_1} x_{i_2} \cdots x_{i_m},$$
where $x = (x_1, x_2, \ldots, x_n)^T \in \mathbb{R}^n$. The homogeneous polynomial $f(x)$ can be expressed as the tensor product of a symmetric tensor $\mathcal{A}$ of order $m$ and dimension $n$ with $x^m$, defined by
$$f(x) \equiv \mathcal{A} x^m = \sum_{i_1, i_2, \ldots, i_m \in N} a_{i_1 i_2 \cdots i_m} x_{i_1} x_{i_2} \cdots x_{i_m},$$
where $x = (x_1, x_2, \ldots, x_n)^T \in \mathbb{R}^n$. When $m$ is even, $f(x)$ is called positive definite if $f(x) > 0$ for any $x \in \mathbb{R}^n \setminus \{0\}$. The symmetric tensor $\mathcal{A}$ is called positive definite if $f(x)$ is positive definite [5].
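Under the same dense-array convention, $f(x) = \mathcal{A} x^m$ is simply the scalar $x^T(\mathcal{A} x^{m-1})$; a short illustrative sketch (reusing `tensor_vector_product` from above):

```python
# f(x) = A x^m = x . (A x^{m-1}); for even m, positive definiteness of A asks
# that this value be positive for every nonzero real x.
def homogeneous_form(A, x):
    return float(np.dot(x, tensor_vector_product(A, x)))
```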
It is well known that the positive definiteness of a multivariate polynomial $f(x)$ plays an important role in the stability study of non-linear autonomous systems [12,21]. However, for $n > 3$ and $m > 4$, it is a hard problem to identify the positive definiteness of such a multivariate form. To solve this problem, Qi [14] pointed out that $f(x) \equiv \mathcal{A} x^m$ is positive definite if and only if the real symmetric tensor $\mathcal{A}$ is positive definite, and provided an eigenvalue method to verify the positive definiteness of $\mathcal{A}$ when $m$ is even ([14], Theorem 1.1).
Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m})$ be a complex tensor of order $m$ and dimension $n$; we denote
$$R_i(\mathcal{A}) = \sum_{\substack{i_2, \ldots, i_m \in N \\ \delta_{i i_2 \cdots i_m} = 0}} |a_{i i_2 \cdots i_m}| = \sum_{i_2, \ldots, i_m \in N} |a_{i i_2 \cdots i_m}| - |a_{i i \cdots i}|, \quad i \in N.$$
Definition 1
([14]). Let $\mathcal{A}$ be a tensor of order $m$ and dimension $n$. $\mathcal{A}$ is called a diagonally dominant tensor if $|a_{i i \cdots i}| \geq R_i(\mathcal{A})$ for all $i \in N$, and a strictly diagonally dominant tensor if $|a_{i i \cdots i}| > R_i(\mathcal{A})$ for all $i \in N$.
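A small sketch of $R_i(\mathcal{A})$ and of the strict test in Definition 1, under the same dense-array convention as above (the helper names are illustrative, not from the paper):

```python
from itertools import product
import numpy as np

def R(A, i):
    """R_i(A): sum of |a_{i i_2 ... i_m}| over all index tuples except
    the diagonal one (i, i, ..., i)."""
    A = np.asarray(A)
    n, m = A.shape[0], A.ndim
    return sum(abs(A[(i,) + idx])
               for idx in product(range(n), repeat=m - 1)
               if idx != (i,) * (m - 1))

def is_strictly_diagonally_dominant(A):
    """Definition 1 (strict case): |a_{i...i}| > R_i(A) for every i."""
    A = np.asarray(A)
    n, m = A.shape[0], A.ndim
    return all(abs(A[(i,) * m]) > R(A, i) for i in range(n))
```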
Definition 2
([22]). Let $\mathcal{A}$ be a complex tensor of order $m$ and dimension $n$. $\mathcal{A}$ is called an H-tensor if there is a positive vector $x = (x_1, x_2, \ldots, x_n)^T \in \mathbb{R}^n$ such that
$$|a_{i i \cdots i}| x_i^{m-1} > \sum_{\substack{i_2, \ldots, i_m \in N \\ \delta_{i i_2 \cdots i_m} = 0}} |a_{i i_2 \cdots i_m}| x_{i_2} \cdots x_{i_m}, \quad i \in N.$$
Definition 3
([23]). Let $\mathcal{A}$ be a complex tensor of order $m$ and dimension $n$ and $X = \operatorname{diag}(x_1, x_2, \ldots, x_n)$. Denote $\mathcal{B} = (b_{i_1 i_2 \cdots i_m}) = \mathcal{A} X^{m-1}$, where
$$b_{i_1 i_2 \cdots i_m} = a_{i_1 i_2 \cdots i_m} x_{i_2} \cdots x_{i_m}, \quad i_j \in N,\ j = 1, 2, \ldots, m.$$
We call B the product of the tensor A and the matrix X.
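A sketch of the product in Definition 3, assuming the same dense-array representation; `scale_by_diagonal` is an illustrative name:

```python
import numpy as np

def scale_by_diagonal(A, x):
    """Definition 3: b_{i_1 i_2 ... i_m} = a_{i_1 i_2 ... i_m} x_{i_2} ... x_{i_m},
    i.e. every index position except the first is scaled by x."""
    B = np.asarray(A, dtype=float).copy()
    x = np.asarray(x, dtype=float)
    m = B.ndim
    for axis in range(1, m):
        shape = [1] * m
        shape[axis] = -1
        B = B * x.reshape(shape)   # broadcast x along one trailing index at a time
    return B
```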
Lemma 1
([14]). Let A be an even-order real symmetric tensor, then A is positive definite if and only if all of its H-eigenvalues are positive.
From Lemma 1, we can verify the positive definiteness of an even-order symmetric tensor $\mathcal{A}$ (equivalently, of the $m$th-degree homogeneous polynomial $f(x)$) by computing the H-eigenvalues of $\mathcal{A}$. In [6,24,25], algorithms were provided to compute the largest eigenvalue of a non-negative tensor. In [1,26], based on semi-definite programming approximation schemes, algorithms were also given to compute the eigenvalues of general tensors of moderate size. However, it is not easy to compute all these H-eigenvalues when $m$ and $n$ are very large. Recently, by introducing the definition of the H-tensor, Ding et al. [22] and Li et al. [27] provided practical sufficient conditions for identifying the positive definiteness of an even-order symmetric tensor (see Lemmas 2, 4, and 5).
Lemma 2
([22]). If $\mathcal{A} = (a_{i_1 i_2 \cdots i_m})$ is a strictly diagonally dominant tensor, then $\mathcal{A}$ is an H-tensor.
Lemma 3
([27]). Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m})$ be an even-order real symmetric tensor of order $m$ and dimension $n$ with $a_{i i \cdots i} > 0$ for all $i \in N$. If $\mathcal{A}$ is an H-tensor, then $\mathcal{A}$ is positive definite.
Lemma 4
([27]). Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m})$ be a complex tensor of order $m$ and dimension $n$. If there exists a positive diagonal matrix $X$ such that $\mathcal{A} X^{m-1}$ is an H-tensor, then $\mathcal{A}$ is an H-tensor.
Lemma 5.
Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m})$ be a complex tensor of order $m$ and dimension $n$. If $\mathcal{A}$ is an H-tensor, then there exists at least one index $i_0 \in N$ such that $|a_{i_0 i_0 \cdots i_0}| > R_{i_0}(\mathcal{A})$.
Proof of Lemma 5. 
According to Definition 2, there is a positive vector $x = (x_1, x_2, \ldots, x_n)^T \in \mathbb{R}^n$ such that
$$|a_{i i \cdots i}| x_i^{m-1} > \sum_{\substack{i_2, \ldots, i_m \in N \\ \delta_{i i_2 \cdots i_m} = 0}} |a_{i i_2 \cdots i_m}| x_{i_2} \cdots x_{i_m}, \quad i \in N.$$
Denote $x_{i_0} = \min_{i \in N} \{x_i\}$. Dividing both sides of the inequality for $i = i_0$ by $x_{i_0}^{m-1}$, and noting that each factor $x_{i_j}/x_{i_0} \geq 1$, we obtain
$$|a_{i_0 i_0 \cdots i_0}| > \sum_{\substack{i_2, \ldots, i_m \in N \\ \delta_{i_0 i_2 \cdots i_m} = 0}} |a_{i_0 i_2 \cdots i_m}| \frac{x_{i_2}}{x_{i_0}} \cdots \frac{x_{i_m}}{x_{i_0}} \geq \sum_{\substack{i_2, \ldots, i_m \in N \\ \delta_{i_0 i_2 \cdots i_m} = 0}} |a_{i_0 i_2 \cdots i_m}| = R_{i_0}(\mathcal{A}).$$
The proof is complete.    □

2. Practical Criteria for the H-Tensor

Throughout this paper, we use the following definitions and notation.
$$N = \{1, 2, \ldots, n\} = \bigcup_{i=1}^{k} N_i, \qquad N_i \cap N_j = \emptyset, \quad 1 \leq i \neq j \leq k.$$
For $t \in \{1, 2, \ldots, k\}$,
$$N_t^{m-1} = \{ i_2 i_3 \cdots i_m \mid i_j \in N_t,\ j = 2, \ldots, m \}, \qquad N^{m-1} \setminus N_t^{m-1} = \{ i_2 i_3 \cdots i_m \mid i_2 i_3 \cdots i_m \in N^{m-1},\ i_2 i_3 \cdots i_m \notin N_t^{m-1} \}.$$
For each $i \in N$, there exists a $t \in \{1, 2, \ldots, k\}$ such that $i \in N_t$. Denote:
$$\alpha_{N_t}(i) = \sum_{\substack{i_2 \cdots i_m \in N_t^{m-1} \\ \delta_{i i_2 \cdots i_m} = 0}} |a_{i i_2 \cdots i_m}|, \qquad \bar{\alpha}_{N_t}(i) = \sum_{i_2, \ldots, i_m \in N^{m-1} \setminus N_t^{m-1}} |a_{i i_2 \cdots i_m}| = R_i(\mathcal{A}) - \alpha_{N_t}(i),$$
$$N^* = \{ i \mid |a_{i i \cdots i}| > R_i(\mathcal{A}),\ i \in N \}, \qquad N^+ = \{ i \mid |a_{i i \cdots i}| > \alpha_{N_t}(i),\ i \in N_t \subseteq N \}, \qquad N^0 = \{ i \mid |a_{i i \cdots i}| = \alpha_{N_t}(i),\ i \in N_t \subseteq N \},$$
$$J^+ = \{ (i, j) \mid (|a_{i i \cdots i}| - \alpha_{N_t}(i))(|a_{j j \cdots j}| - \alpha_{N_s}(j)) > \bar{\alpha}_{N_t}(i) \cdot \bar{\alpha}_{N_s}(j),\ i \in N_t,\ j \in N_s,\ 1 \leq t \neq s \leq k \},$$
$$J^0 = \{ (i, j) \mid (|a_{i i \cdots i}| - \alpha_{N_t}(i))(|a_{j j \cdots j}| - \alpha_{N_s}(j)) = \bar{\alpha}_{N_t}(i) \cdot \bar{\alpha}_{N_s}(j),\ i \in N_t,\ j \in N_s,\ 1 \leq t \neq s \leq k \}.$$
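A sketch of $\alpha_{N_t}(i)$ and $\bar{\alpha}_{N_t}(i)$ for a given partition block, reusing the helper `R` from the earlier sketch; the argument `block` stands for one set $N_t$ of 0-based indices and is an assumption of this illustration.

```python
from itertools import product

def alpha(A, i, block):
    """alpha_{N_t}(i): sum of |a_{i i_2 ... i_m}| over tuples (i_2,...,i_m)
    drawn from `block`, excluding the diagonal entry a_{i...i}."""
    m = A.ndim
    return sum(abs(A[(i,) + idx])
               for idx in product(sorted(block), repeat=m - 1)
               if (i,) + idx != (i,) * m)

def alpha_bar(A, i, block):
    """bar-alpha_{N_t}(i) = R_i(A) - alpha_{N_t}(i)."""
    return R(A, i) - alpha(A, i, block)
```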
Definition 4.
Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m})$ be a complex tensor of order $m$ and dimension $n$. For all $i \in N_t$, $j \in N_s$, $1 \leq t \neq s \leq k$, if
$$(|a_{i i \cdots i}| - \alpha_{N_t}(i))(|a_{j j \cdots j}| - \alpha_{N_s}(j)) \geq \bar{\alpha}_{N_t}(i) \cdot \bar{\alpha}_{N_s}(j),$$
we call $\mathcal{A}$ a locally double-diagonally dominant tensor and write $\mathcal{A} \in LDD_0T$. If
$$(|a_{i i \cdots i}| - \alpha_{N_t}(i))(|a_{j j \cdots j}| - \alpha_{N_s}(j)) > \bar{\alpha}_{N_t}(i) \cdot \bar{\alpha}_{N_s}(j),$$
we call $\mathcal{A}$ a strictly locally double-diagonally dominant tensor and write $\mathcal{A} \in LDDT$.
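The strict condition of Definition 4 can be checked directly from these quantities; the following sketch (illustrative, reusing `alpha` and `alpha_bar` above) returns True exactly when the strict inequality holds for every pair of indices taken from two different blocks.

```python
def is_strict_LDD(A, partition):
    """Definition 4 (strict case): the product inequality must hold for all
    i in N_t, j in N_s with t != s.  `partition` is a list of disjoint
    0-based index sets covering {0, ..., n-1}."""
    m = A.ndim
    for t, Nt in enumerate(partition):
        for s, Ns in enumerate(partition):
            if t == s:
                continue
            for i in Nt:
                for j in Ns:
                    lhs = (abs(A[(i,) * m]) - alpha(A, i, Nt)) * \
                          (abs(A[(j,) * m]) - alpha(A, j, Ns))
                    if lhs <= alpha_bar(A, i, Nt) * alpha_bar(A, j, Ns):
                        return False
    return True
```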
Remark 1.
If $\mathcal{A} \in LDD_0T$ and $N^* \neq \emptyset$, then $N = N^0 \cup N^+$; if $\mathcal{A} \in LDDT$ and $N^* \neq \emptyset$, then $N = N^+$.
Lemma 6.
Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m})$ be a complex tensor of order $m$ and dimension $n$. If $\mathcal{A} \in LDDT$ and $N^* \neq \emptyset$, then there exists at most one $l_0 \in \{1, 2, \ldots, k\}$ such that
$$|a_{i i \cdots i}| - \alpha_{N_{l_0}}(i) \leq \bar{\alpha}_{N_{l_0}}(i), \quad i \in N_{l_0}.$$
Proof of Lemma 6. 
Suppose, to the contrary, that there exist $l_0^1 \neq l_0^2 \in \{1, 2, \ldots, k\}$ such that for all $i \in N_{l_0^1}$, $j \in N_{l_0^2}$,
$$|a_{i i \cdots i}| - \alpha_{N_{l_0^1}}(i) \leq \bar{\alpha}_{N_{l_0^1}}(i), \qquad |a_{j j \cdots j}| - \alpha_{N_{l_0^2}}(j) \leq \bar{\alpha}_{N_{l_0^2}}(j).$$
Notice that $\mathcal{A} \in LDDT$ and $N^* \neq \emptyset$. By Remark 1, we have
$$|a_{i i \cdots i}| - \alpha_{N_{l_0^1}}(i) > 0, \qquad |a_{j j \cdots j}| - \alpha_{N_{l_0^2}}(j) > 0.$$
These imply
$$(|a_{i i \cdots i}| - \alpha_{N_{l_0^1}}(i)) \cdot (|a_{j j \cdots j}| - \alpha_{N_{l_0^2}}(j)) \leq \bar{\alpha}_{N_{l_0^1}}(i) \cdot \bar{\alpha}_{N_{l_0^2}}(j),$$
which contradicts $\mathcal{A} \in LDDT$. The proof is complete.    □
Remark 2.
Based on Lemma 6, when $\mathcal{A} \in LDDT$ and $N^* \neq \emptyset$, we always assume that $N_1 = \{ i \mid |a_{i i \cdots i}| - \alpha_{N_t}(i) \leq \bar{\alpha}_{N_t}(i),\ i \in N \}$.
Lemma 7.
Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m})$ be a complex tensor of order $m$ and dimension $n$, $\mathcal{A} \in LDDT$, and $N^* \neq \emptyset$. If $N_1 = \emptyset$, then $\mathcal{A}$ is an H-tensor.
Proof of Lemma 7. 
Since $\mathcal{A} \in LDDT$, $N^* \neq \emptyset$, and $N_1 = \emptyset$, for all $i \in N = N_2 \cup N_3 \cup \cdots \cup N_k$ we have
$$|a_{i i \cdots i}| - \alpha_{N_t}(i) > \bar{\alpha}_{N_t}(i), \quad t \in \{2, 3, \ldots, k\}.$$
This implies
$$|a_{i i \cdots i}| > \alpha_{N_t}(i) + \bar{\alpha}_{N_t}(i) = R_i(\mathcal{A}), \quad i \in N.$$
By Lemma 2, $\mathcal{A}$ is an H-tensor.    □
Theorem 1.
Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m})$ be a complex tensor of order $m$ and dimension $n$, $\mathcal{A} \in LDDT$, and $N^* \neq \emptyset$. If for all $i \in N_t$, $t \in \{2, 3, \ldots, k\}$, and $j \in N_1$, we have
$$(|a_{i i \cdots i}| - \alpha_{N_t}(i)) \Big( |a_{j j \cdots j}| - \sum_{\substack{i_2 i_3 \cdots i_m \in N^{m-1} \setminus (N \setminus N_1)^{m-1} \\ \delta_{j i_2 \cdots i_m} = 0}} |a_{j i_2 \cdots i_m}| \Big) > \bar{\alpha}_{N_t}(i) \cdot \Big( \sum_{i_2 i_3 \cdots i_m \in (N \setminus N_1)^{m-1}} |a_{j i_2 i_3 \cdots i_m}| \Big), \tag{3}$$
then $\mathcal{A}$ is an H-tensor.
Proof of Theorem 1. 
From the inequality (3), there exists a positive constant $d$ such that
$$\min_{\substack{i \in N_t \\ t \in \{2, 3, \ldots, k\}}} \left\{ \frac{|a_{i i \cdots i}| - \alpha_{N_t}(i)}{\bar{\alpha}_{N_t}(i)} \right\} > d > \max_{j \in N_1} \left\{ \frac{\displaystyle\sum_{i_2 i_3 \cdots i_m \in (N \setminus N_1)^{m-1}} |a_{j i_2 i_3 \cdots i_m}|}{\displaystyle |a_{j j \cdots j}| - \sum_{\substack{i_2 i_3 \cdots i_m \in N^{m-1} \setminus (N \setminus N_1)^{m-1} \\ \delta_{j i_2 \cdots i_m} = 0}} |a_{j i_2 \cdots i_m}|} \right\}. \tag{4}$$
If $\bar{\alpha}_{N_t}(i) = 0$, we set $\frac{|a_{i i \cdots i}| - \alpha_{N_t}(i)}{\bar{\alpha}_{N_t}(i)} = +\infty$. It is obvious that $d > 1$. Construct a positive diagonal matrix $X = \operatorname{diag}(x_1, x_2, \ldots, x_n)$, where
$$x_i = \begin{cases} 1, & i \in N \setminus N_1, \\ d^{\frac{1}{m-1}}, & i \in N_1. \end{cases}$$
Let $\mathcal{B} = \mathcal{A} X^{m-1} = (b_{i_1 i_2 \cdots i_m})$. For $i \in N \setminus N_1 = N_2 \cup N_3 \cup \cdots \cup N_k$, by the first inequality in (4),
$$\begin{aligned}
|b_{i i \cdots i}| = |a_{i i \cdots i}| &> \alpha_{N_t}(i) + \bar{\alpha}_{N_t}(i) \cdot d \\
&= \sum_{\substack{i_2 \cdots i_m \in N_t^{m-1} \\ \delta_{i i_2 \cdots i_m} = 0}} |a_{i i_2 \cdots i_m}| + \sum_{i_2 \cdots i_m \in N^{m-1} \setminus N_t^{m-1}} |a_{i i_2 \cdots i_m}| \left( d^{\frac{1}{m-1}} \right)^{m-1} \\
&\geq \sum_{\substack{i_2 \cdots i_m \in N_t^{m-1} \\ \delta_{i i_2 \cdots i_m} = 0}} |a_{i i_2 \cdots i_m}| + \sum_{i_2 \cdots i_m \in N^{m-1} \setminus N_t^{m-1}} |a_{i i_2 \cdots i_m}| x_{i_2} \cdots x_{i_m} \\
&= \sum_{\substack{i_2 \cdots i_m \in N_t^{m-1} \\ \delta_{i i_2 \cdots i_m} = 0}} |b_{i i_2 \cdots i_m}| + \sum_{i_2 \cdots i_m \in N^{m-1} \setminus N_t^{m-1}} |b_{i i_2 \cdots i_m}| = R_i(\mathcal{B}).
\end{aligned}$$
For $i \in N_1$, by the second inequality in (4),
$$\begin{aligned}
|b_{i i \cdots i}| = |a_{i i \cdots i}| \left( d^{\frac{1}{m-1}} \right)^{m-1} = |a_{i i \cdots i}| \cdot d &> d \sum_{\substack{i_2 \cdots i_m \in N^{m-1} \setminus (N \setminus N_1)^{m-1} \\ \delta_{i i_2 \cdots i_m} = 0}} |a_{i i_2 \cdots i_m}| + \sum_{i_2 \cdots i_m \in (N \setminus N_1)^{m-1}} |a_{i i_2 \cdots i_m}| \\
&= d \Big( \sum_{\substack{i_2 \cdots i_m \in N_1^{m-1} \\ \delta_{i i_2 \cdots i_m} = 0}} |a_{i i_2 \cdots i_m}| + \sum_{i_2 \cdots i_m \in [N^{m-1} \setminus (N \setminus N_1)^{m-1}] \setminus N_1^{m-1}} |a_{i i_2 \cdots i_m}| \Big) + \sum_{i_2 \cdots i_m \in (N \setminus N_1)^{m-1}} |a_{i i_2 \cdots i_m}| \\
&\geq \sum_{\substack{i_2 \cdots i_m \in N_1^{m-1} \\ \delta_{i i_2 \cdots i_m} = 0}} |a_{i i_2 \cdots i_m}| \left( d^{\frac{1}{m-1}} \right)^{m-1} + \sum_{i_2 \cdots i_m \in [N^{m-1} \setminus (N \setminus N_1)^{m-1}] \setminus N_1^{m-1}} |a_{i i_2 \cdots i_m}| x_{i_2} \cdots x_{i_m} + \sum_{i_2 \cdots i_m \in (N \setminus N_1)^{m-1}} |a_{i i_2 \cdots i_m}| \\
&= \sum_{\substack{i_2 \cdots i_m \in N_1^{m-1} \\ \delta_{i i_2 \cdots i_m} = 0}} |b_{i i_2 \cdots i_m}| + \sum_{i_2 i_3 \cdots i_m \in N^{m-1} \setminus N_1^{m-1}} |b_{i i_2 \cdots i_m}| = R_i(\mathcal{B}).
\end{aligned}$$
Thus, we have proven that
$$|b_{i i \cdots i}| > \sum_{\substack{i_2 \cdots i_m \in N^{m-1} \\ \delta_{i i_2 \cdots i_m} = 0}} |b_{i i_2 \cdots i_m}| = R_i(\mathcal{B}), \quad i \in N,$$
i.e., $\mathcal{B}$ is a strictly diagonally dominant tensor. By Lemmas 2 and 4, $\mathcal{A}$ is an H-tensor. The proof is complete.    □
Lemma 8.
Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m})$ be a complex tensor of order $m$ and dimension $n$. If $N^* \neq \emptyset$ and, for all $i \in N_t$, $j \in N_s$, $1 \leq t \neq s \leq k$,
$$(|a_{i i \cdots i}| - \alpha_{N_t}(i))(|a_{j j \cdots j}| - \alpha_{N_s}(j)) \leq \bar{\alpha}_{N_t}(i) \cdot \bar{\alpha}_{N_s}(j), \tag{5}$$
then there exists exactly one $k_0 \in \{1, 2, \ldots, k\}$ such that $N^* \cap N_{k_0} \neq \emptyset$.
Proof of Lemma 8. 
Since $N^* \neq \emptyset$, there exists a $k_0 \in \{1, 2, \ldots, k\}$ such that $N^* \cap N_{k_0} \neq \emptyset$. If there exists another $k_1 \in \{1, 2, \ldots, k\} \setminus \{k_0\}$ such that $N^* \cap N_{k_1} \neq \emptyset$, then for $i \in N_{k_0} \cap N^*$ and $j \in N_{k_1} \cap N^*$ we have
$$(|a_{i i \cdots i}| - \alpha_{N_{k_0}}(i))(|a_{j j \cdots j}| - \alpha_{N_{k_1}}(j)) > \bar{\alpha}_{N_{k_0}}(i) \cdot \bar{\alpha}_{N_{k_1}}(j),$$
which contradicts inequality (5). Therefore, there exists only one natural number $k_0 \in \{1, 2, \ldots, k\}$ satisfying $N^* \cap N_{k_0} \neq \emptyset$.    □
Remark 3.
Based on Lemma 8, when $\mathcal{A}$ satisfies (5) and $N^* \neq \emptyset$, we always assume that $N^* \subseteq N_k$ (where $N = N_1 \cup N_2 \cup \cdots \cup N_k$).
Theorem 2.
Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m})$ be a complex tensor of order $m$ and dimension $n$ and $N^* \neq \emptyset$. For all $i \in N_k$, $j \in N_s$, $s \in \{1, 2, \ldots, k-1\}$, if
$$0 < \Big( |a_{i i \cdots i}| - \sum_{\substack{i_2 \cdots i_m \in N^{m-1} \setminus (N \setminus N_k)^{m-1} \\ \delta_{i i_2 \cdots i_m} = 0}} |a_{i i_2 \cdots i_m}| \Big) (|a_{j j \cdots j}| - \alpha_{N_s}(j)) \leq \Big( \sum_{i_2 \cdots i_m \in (N \setminus N_k)^{m-1}} |a_{i i_2 \cdots i_m}| \Big) \cdot \bar{\alpha}_{N_s}(j), \tag{6}$$
then $\mathcal{A}$ is not an H-tensor.
Proof of Theorem 2. 
From the inequality (6), there exists a positive constant $d_0$ such that
$$\max_{i \in N_k} \left\{ \frac{\displaystyle |a_{i i \cdots i}| - \sum_{\substack{i_2 \cdots i_m \in N^{m-1} \setminus (N \setminus N_k)^{m-1} \\ \delta_{i i_2 \cdots i_m} = 0}} |a_{i i_2 \cdots i_m}|}{\displaystyle \sum_{i_2 \cdots i_m \in (N \setminus N_k)^{m-1}} |a_{i i_2 \cdots i_m}|} \right\} \leq d_0 \leq \min_{j \in N \setminus N_k} \left\{ \frac{\bar{\alpha}_{N_s}(j)}{|a_{j j \cdots j}| - \alpha_{N_s}(j)} \right\}. \tag{7}$$
If $|a_{j j \cdots j}| - \alpha_{N_s}(j) = 0$ for some $j \in N \setminus N_k$, we set $\frac{\bar{\alpha}_{N_s}(j)}{|a_{j j \cdots j}| - \alpha_{N_s}(j)} = +\infty$. Obviously,
$$\max_{i \in N_k} \left\{ \frac{\displaystyle |a_{i i \cdots i}| - \sum_{\substack{i_2 \cdots i_m \in N^{m-1} \setminus (N \setminus N_k)^{m-1} \\ \delta_{i i_2 \cdots i_m} = 0}} |a_{i i_2 \cdots i_m}|}{\displaystyle \sum_{i_2 \cdots i_m \in (N \setminus N_k)^{m-1}} |a_{i i_2 \cdots i_m}|} \right\} \geq \max_{i \in N^*} \left\{ \frac{\displaystyle |a_{i i \cdots i}| - \sum_{\substack{i_2 \cdots i_m \in N^{m-1} \setminus (N \setminus N_k)^{m-1} \\ \delta_{i i_2 \cdots i_m} = 0}} |a_{i i_2 \cdots i_m}|}{\displaystyle \sum_{i_2 \cdots i_m \in (N \setminus N_k)^{m-1}} |a_{i i_2 \cdots i_m}|} \right\} > 1,$$
so $d_0 > 1$. Construct a positive diagonal matrix $X = \operatorname{diag}(x_1, x_2, \ldots, x_n)$, where
$$x_i = \begin{cases} 1, & i \in N_k, \\ d_0^{\frac{1}{m-1}}, & i \in N \setminus N_k. \end{cases}$$
Let $\mathcal{B} = \mathcal{A} X^{m-1} = (b_{i_1 i_2 \cdots i_m})$. For $i \in N^*$, by the first inequality in (7),
$$\begin{aligned}
|b_{i i \cdots i}| = |a_{i i \cdots i}| &\leq \sum_{\substack{i_2 \cdots i_m \in N^{m-1} \setminus (N \setminus N_k)^{m-1} \\ \delta_{i i_2 \cdots i_m} = 0}} |a_{i i_2 \cdots i_m}| + d_0 \sum_{i_2 \cdots i_m \in (N \setminus N_k)^{m-1}} |a_{i i_2 \cdots i_m}| \\
&\leq \sum_{\substack{i_2 \cdots i_m \in N_k^{m-1} \\ \delta_{i i_2 \cdots i_m} = 0}} |a_{i i_2 \cdots i_m}| + \sum_{i_2 \cdots i_m \in [N^{m-1} \setminus (N \setminus N_k)^{m-1}] \setminus N_k^{m-1}} |a_{i i_2 \cdots i_m}| x_{i_2} \cdots x_{i_m} + \sum_{i_2 \cdots i_m \in (N \setminus N_k)^{m-1}} |a_{i i_2 \cdots i_m}| \left( d_0^{\frac{1}{m-1}} \right)^{m-1} \\
&= \sum_{\substack{i_2 \cdots i_m \in N_k^{m-1} \\ \delta_{i i_2 \cdots i_m} = 0}} |b_{i i_2 \cdots i_m}| + \sum_{i_2 \cdots i_m \in N^{m-1} \setminus N_k^{m-1}} |b_{i i_2 \cdots i_m}| = R_i(\mathcal{B}).
\end{aligned}$$
For $i \in N_k \setminus N^*$,
$$\begin{aligned}
|b_{i i \cdots i}| = |a_{i i \cdots i}| &\leq \sum_{\substack{i_2 \cdots i_m \in N_k^{m-1} \\ \delta_{i i_2 \cdots i_m} = 0}} |a_{i i_2 \cdots i_m}| + \sum_{i_2 \cdots i_m \in N^{m-1} \setminus N_k^{m-1}} |a_{i i_2 \cdots i_m}| \\
&\leq \sum_{\substack{i_2 \cdots i_m \in N_k^{m-1} \\ \delta_{i i_2 \cdots i_m} = 0}} |a_{i i_2 \cdots i_m}| + \sum_{i_2 \cdots i_m \in N^{m-1} \setminus N_k^{m-1}} |a_{i i_2 \cdots i_m}| x_{i_2} \cdots x_{i_m} \\
&= \sum_{\substack{i_2 \cdots i_m \in N_k^{m-1} \\ \delta_{i i_2 \cdots i_m} = 0}} |b_{i i_2 \cdots i_m}| + \sum_{i_2 \cdots i_m \in N^{m-1} \setminus N_k^{m-1}} |b_{i i_2 \cdots i_m}| = R_i(\mathcal{B}).
\end{aligned}$$
For $i \in N \setminus N_k = N_1 \cup N_2 \cup \cdots \cup N_{k-1}$, say $i \in N_s$, by the second inequality in (7),
$$\begin{aligned}
|b_{i i \cdots i}| = |a_{i i \cdots i}| \left( d_0^{\frac{1}{m-1}} \right)^{m-1} = d_0 |a_{i i \cdots i}| &\leq d_0\, \alpha_{N_s}(i) + \bar{\alpha}_{N_s}(i) \\
&= \sum_{\substack{i_2 \cdots i_m \in N_s^{m-1} \\ \delta_{i i_2 \cdots i_m} = 0}} |a_{i i_2 \cdots i_m}| \left( d_0^{\frac{1}{m-1}} \right)^{m-1} + \sum_{i_2 \cdots i_m \in N^{m-1} \setminus N_s^{m-1}} |a_{i i_2 \cdots i_m}| \\
&\leq \sum_{\substack{i_2 \cdots i_m \in N_s^{m-1} \\ \delta_{i i_2 \cdots i_m} = 0}} |a_{i i_2 \cdots i_m}| x_{i_2} \cdots x_{i_m} + \sum_{i_2 \cdots i_m \in N^{m-1} \setminus N_s^{m-1}} |a_{i i_2 \cdots i_m}| x_{i_2} \cdots x_{i_m} \\
&= \sum_{\substack{i_2 \cdots i_m \in N_s^{m-1} \\ \delta_{i i_2 \cdots i_m} = 0}} |b_{i i_2 \cdots i_m}| + \sum_{i_2 \cdots i_m \in N^{m-1} \setminus N_s^{m-1}} |b_{i i_2 \cdots i_m}| = R_i(\mathcal{B}).
\end{aligned}$$
Thus, we have proven that
$$|b_{i i \cdots i}| \leq \sum_{\substack{i_2 \cdots i_m \in N^{m-1} \\ \delta_{i i_2 \cdots i_m} = 0}} |b_{i i_2 \cdots i_m}| = R_i(\mathcal{B}), \quad i \in N,$$
i.e., $\mathcal{B}$ is not strictly diagonally dominant at any $i \in N$. By Lemma 5, $\mathcal{B}$ is not an H-tensor. By Lemma 4, $\mathcal{A}$ is not an H-tensor. The proof is complete.    □

3. An Algorithm for Identifying H-Tensors

Based on the results of the previous section, an algorithm for identifying H-tensors is put forward in this section:
Remark 4. 
(a)
$s$ denotes the total number of tensors tested, $k_1$ the number of H-tensors, $k_2$ the number of tensors that are not H-tensors, and $s - k_1 - k_2$ the number of tensors that cannot be classified by Algorithm 1;
(b)
The calculations in Algorithm 1 depend only on the entries of the tensor, so Algorithm 1 stops after a finite number of steps.
Algorithm 1: An algorithm for identifying H-tensors
  • Step 1. Set $k_1 := 0$, $k_2 := 0$, $k_3 := 0$, and $s := 50$.
  • Step 2. Given a complex tensor $\mathcal{A} = (a_{i_1 \cdots i_m})$. If $k_3 = s$, then output $k_1$ and $k_2$ and stop. Otherwise,
  • Step 3. Compute $|a_{i i \cdots i}|$ and $R_i(\mathcal{A})$ for all $i \in N$.
  • Step 4. If $N^* = N$, then print "$\mathcal{A}$ is an H-tensor" and go to Step 5. Otherwise, go to Step 6.
  • Step 5. Replace $k_1$ by $k_1 + 1$ and $k_3$ by $k_3 + 1$, then go to Step 2.
  • Step 6. If $N^* = \emptyset$, then print "$\mathcal{A}$ is not an H-tensor" and go to Step 7. Otherwise, go to Step 8.
  • Step 7. Replace $k_2$ by $k_2 + 1$ and $k_3$ by $k_3 + 1$, then go to Step 2.
  • Step 8. Compute $|a_{i i \cdots i}|$, $\alpha_{N_t}(i)$, $\bar{\alpha}_{N_t}(i)$, $|a_{j j \cdots j}|$, $\alpha_{N_s}(j)$, $\bar{\alpha}_{N_s}(j)$ for all $i \in N_t$, $j \in N_s$, $1 \leq t \neq s \leq k$.
  • Step 9. If Inequality (3) holds, then print "$\mathcal{A}$ is an H-tensor" and go to Step 5. Otherwise,
  • Step 10. Compute
    $$\sum_{\substack{i_2 \cdots i_m \in N^{m-1} \setminus (N \setminus N_k)^{m-1} \\ \delta_{i i_2 \cdots i_m} = 0}} |a_{i i_2 \cdots i_m}| \quad \text{and} \quad \sum_{i_2 \cdots i_m \in (N \setminus N_k)^{m-1}} |a_{i i_2 \cdots i_m}|.$$
  • Step 11. If Inequality (6) holds, then print "$\mathcal{A}$ is not an H-tensor" and go to Step 7. Otherwise,
  • Step 12. Print "Whether $\mathcal{A}$ is an H-tensor is not checkable" and replace $k_3$ by $k_3 + 1$. Go to Step 2.
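The decision flow of Algorithm 1 for a single tensor can be summarized as follows. This is a condensed sketch, not the authors' implementation; `check_theorem1` and `check_theorem2` are placeholders for routines that evaluate inequalities (3) and (6) for a chosen partition, and `R` is the helper from the earlier sketch.

```python
def classify(A, check_theorem1, check_theorem2):
    """Decision flow of Algorithm 1 for a single tensor (condensed sketch)."""
    n, m = A.shape[0], A.ndim
    N_star = {i for i in range(n) if abs(A[(i,) * m]) > R(A, i)}
    if len(N_star) == n:
        return "H-tensor"                      # Step 4: strictly diagonally dominant, Lemma 2
    if not N_star:
        return "not an H-tensor"               # Step 6: the condition of Lemma 5 fails
    if check_theorem1(A):                      # Steps 8-9: inequality (3)
        return "H-tensor"
    if check_theorem2(A):                      # Steps 10-11: inequality (6)
        return "not an H-tensor"
    return "not checkable by Algorithm 1"      # Step 12
```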

4. Numerical Examples

Example 1.
Consider a tensor $\mathcal{A} = (a_{i_1 i_2 i_3})$ of order three and dimension six, defined as follows:
$$\begin{aligned}
&a_{111} = 3.9, \quad a_{112} = a_{121} = a_{122} = 1, \quad a_{113} = a_{131} = a_{133} = a_{144} = a_{155} = 0.2, \\
&a_{222} = 5, \quad a_{211} = a_{212} = a_{221} = a_{235} = a_{236} = 1, \quad a_{214} = a_{215} = a_{232} = a_{256} = a_{266} = 0.2, \\
&a_{333} = 4, \quad a_{334} = a_{343} = 0.5, \quad a_{344} = 1, \quad a_{311} = a_{322} = a_{331} = a_{326} = a_{352} = 0.2, \\
&a_{444} = 6.5, \quad a_{433} = a_{434} = a_{443} = 1, \quad a_{412} = a_{414} = a_{421} = a_{441} = 0.5, \\
&a_{555} = 6.3, \quad a_{556} = 1, \quad a_{512} = a_{515} = a_{521} = a_{523} = a_{551} = a_{565} = a_{566} = 0.5, \\
&a_{666} = 8, \quad a_{655} = a_{656} = a_{665} = 1, \quad a_{625} = a_{626} = a_{652} = a_{662} = 0.5,
\end{aligned}$$
and all other $a_{i_1 i_2 i_3} = 0$. By calculation, we have:
$$R_1(\mathcal{A}) = 4, \quad R_2(\mathcal{A}) = 6, \quad R_3(\mathcal{A}) = 3, \quad R_4(\mathcal{A}) = 5, \quad R_5(\mathcal{A}) = 4.5, \quad R_6(\mathcal{A}) = 5,$$
$$N_1 = \{1, 2\}, \quad N_2 = \{3, 4\}, \quad N_3 = \{5, 6\}, \quad N \setminus N_1 = \{3, 4, 5, 6\} = N_2 \cup N_3,$$
$$N_1^{3-1} = N_1^2 = \{11, 12, 21, 22\}, \quad N_2^{3-1} = N_2^2 = \{33, 34, 43, 44\}, \quad N_3^{3-1} = N_3^2 = \{55, 56, 65, 66\},$$
$$(N \setminus N_1)^{3-1} = (N \setminus N_1)^2 = \{33, 34, 35, 36, 43, 44, 45, 46, 53, 54, 55, 56, 63, 64, 65, 66\},$$
$$N^{3-1} \setminus (N \setminus N_1)^{3-1} = N^2 \setminus (N \setminus N_1)^2 = \{11, 12, 13, 14, 15, 16, 21, 22, 23, 24, 25, 26, 31, 32, 41, 42, 51, 52, 61, 62\}.$$
$$\alpha_{N_1}(1) = \sum_{\substack{i_2 i_3 \in N_1^2 \\ \delta_{1 i_2 i_3} = 0}} |a_{1 i_2 i_3}| = |a_{112}| + |a_{121}| + |a_{122}| = 1 + 1 + 1 = 3,$$
$$\alpha_{N_1}(2) = \sum_{\substack{i_2 i_3 \in N_1^2 \\ \delta_{2 i_2 i_3} = 0}} |a_{2 i_2 i_3}| = |a_{211}| + |a_{212}| + |a_{221}| = 1 + 1 + 1 = 3,$$
$$\alpha_{N_2}(3) = \sum_{\substack{i_2 i_3 \in N_2^2 \\ \delta_{3 i_2 i_3} = 0}} |a_{3 i_2 i_3}| = |a_{334}| + |a_{343}| + |a_{344}| = 0.5 + 0.5 + 1 = 2,$$
$$\alpha_{N_2}(4) = \sum_{\substack{i_2 i_3 \in N_2^2 \\ \delta_{4 i_2 i_3} = 0}} |a_{4 i_2 i_3}| = |a_{433}| + |a_{434}| + |a_{443}| = 1 + 1 + 1 = 3,$$
$$\alpha_{N_3}(5) = \sum_{\substack{i_2 i_3 \in N_3^2 \\ \delta_{5 i_2 i_3} = 0}} |a_{5 i_2 i_3}| = |a_{556}| + |a_{565}| + |a_{566}| = 1 + 0.5 + 0.5 = 2,$$
$$\alpha_{N_3}(6) = \sum_{\substack{i_2 i_3 \in N_3^2 \\ \delta_{6 i_2 i_3} = 0}} |a_{6 i_2 i_3}| = |a_{655}| + |a_{656}| + |a_{665}| = 1 + 1 + 1 = 3,$$
$$\sum_{\substack{i_2 i_3 \in N^2 \setminus (N \setminus N_1)^2 \\ \delta_{1 i_2 i_3} = 0}} |a_{1 i_2 i_3}| = |a_{112}| + |a_{121}| + |a_{122}| + |a_{113}| + |a_{131}| = 1 + 1 + 1 + 0.2 + 0.2 = 3.4,$$
$$\sum_{\substack{i_2 i_3 \in N^2 \setminus (N \setminus N_1)^2 \\ \delta_{2 i_2 i_3} = 0}} |a_{2 i_2 i_3}| = |a_{211}| + |a_{212}| + |a_{221}| + |a_{214}| + |a_{215}| + |a_{232}| = 1 + 1 + 1 + 0.2 + 0.2 + 0.2 = 3.6,$$
$$\sum_{i_2 i_3 \in (N \setminus N_1)^2} |a_{1 i_2 i_3}| = |a_{133}| + |a_{144}| + |a_{155}| = 0.2 + 0.2 + 0.2 = 0.6,$$
$$\sum_{i_2 i_3 \in (N \setminus N_1)^2} |a_{2 i_2 i_3}| = |a_{235}| + |a_{236}| + |a_{256}| + |a_{266}| = 1 + 1 + 0.2 + 0.2 = 2.4,$$
$$\bar{\alpha}_{N_1}(1) = 1, \quad \bar{\alpha}_{N_1}(2) = 3, \quad \bar{\alpha}_{N_2}(3) = 1, \quad \bar{\alpha}_{N_2}(4) = 2, \quad \bar{\alpha}_{N_3}(5) = 2.5, \quad \bar{\alpha}_{N_3}(6) = 2.$$
$$(|a_{111}| - \alpha_{N_1}(1)) \cdot (|a_{333}| - \alpha_{N_2}(3)) = (3.9 - 3)(4 - 2) = 1.8 > \bar{\alpha}_{N_1}(1) \cdot \bar{\alpha}_{N_2}(3) = 1 \times 1 = 1,$$
$$(|a_{111}| - \alpha_{N_1}(1)) \cdot (|a_{444}| - \alpha_{N_2}(4)) = (3.9 - 3)(6.5 - 3) = 3.15 > \bar{\alpha}_{N_1}(1) \cdot \bar{\alpha}_{N_2}(4) = 1 \times 2 = 2,$$
$$(|a_{111}| - \alpha_{N_1}(1)) \cdot (|a_{555}| - \alpha_{N_3}(5)) = (3.9 - 3)(6.3 - 2) = 3.87 > \bar{\alpha}_{N_1}(1) \cdot \bar{\alpha}_{N_3}(5) = 1 \times 2.5 = 2.5,$$
$$(|a_{111}| - \alpha_{N_1}(1)) \cdot (|a_{666}| - \alpha_{N_3}(6)) = (3.9 - 3)(8 - 3) = 4.5 > \bar{\alpha}_{N_1}(1) \cdot \bar{\alpha}_{N_3}(6) = 1 \times 2 = 2,$$
$$(|a_{222}| - \alpha_{N_1}(2)) \cdot (|a_{333}| - \alpha_{N_2}(3)) = (5 - 3)(4 - 2) = 4 > \bar{\alpha}_{N_1}(2) \cdot \bar{\alpha}_{N_2}(3) = 3 \times 1 = 3,$$
$$(|a_{222}| - \alpha_{N_1}(2)) \cdot (|a_{444}| - \alpha_{N_2}(4)) = (5 - 3)(6.5 - 3) = 7 > \bar{\alpha}_{N_1}(2) \cdot \bar{\alpha}_{N_2}(4) = 3 \times 2 = 6,$$
$$(|a_{222}| - \alpha_{N_1}(2)) \cdot (|a_{555}| - \alpha_{N_3}(5)) = (5 - 3)(6.3 - 2) = 8.6 > \bar{\alpha}_{N_1}(2) \cdot \bar{\alpha}_{N_3}(5) = 3 \times 2.5 = 7.5,$$
$$(|a_{222}| - \alpha_{N_1}(2)) \cdot (|a_{666}| - \alpha_{N_3}(6)) = (5 - 3)(8 - 3) = 10 > \bar{\alpha}_{N_1}(2) \cdot \bar{\alpha}_{N_3}(6) = 3 \times 2 = 6.$$
From Definition 4 (the remaining pairs, with $i \in N_2$ and $j \in N_3$, can be verified in the same way), we have $\mathcal{A} \in LDDT$. Furthermore,
$$(|a_{333}| - \alpha_{N_2}(3)) \cdot \Big( |a_{111}| - \sum_{\substack{i_2 i_3 \in N^2 \setminus (N \setminus N_1)^2 \\ \delta_{1 i_2 i_3} = 0}} |a_{1 i_2 i_3}| \Big) = (4 - 2)(3.9 - 3.4) = 1 > \bar{\alpha}_{N_2}(3) \cdot \sum_{i_2 i_3 \in (N \setminus N_1)^2} |a_{1 i_2 i_3}| = 1 \times 0.6 = 0.6,$$
$$(|a_{333}| - \alpha_{N_2}(3)) \cdot \Big( |a_{222}| - \sum_{\substack{i_2 i_3 \in N^2 \setminus (N \setminus N_1)^2 \\ \delta_{2 i_2 i_3} = 0}} |a_{2 i_2 i_3}| \Big) = (4 - 2)(5 - 3.6) = 2.8 > \bar{\alpha}_{N_2}(3) \cdot \sum_{i_2 i_3 \in (N \setminus N_1)^2} |a_{2 i_2 i_3}| = 1 \times 2.4 = 2.4,$$
$$(|a_{444}| - \alpha_{N_2}(4)) \cdot \Big( |a_{111}| - \sum_{\substack{i_2 i_3 \in N^2 \setminus (N \setminus N_1)^2 \\ \delta_{1 i_2 i_3} = 0}} |a_{1 i_2 i_3}| \Big) = (6.5 - 3)(3.9 - 3.4) = 1.75 > \bar{\alpha}_{N_2}(4) \cdot \sum_{i_2 i_3 \in (N \setminus N_1)^2} |a_{1 i_2 i_3}| = 2 \times 0.6 = 1.2,$$
$$(|a_{444}| - \alpha_{N_2}(4)) \cdot \Big( |a_{222}| - \sum_{\substack{i_2 i_3 \in N^2 \setminus (N \setminus N_1)^2 \\ \delta_{2 i_2 i_3} = 0}} |a_{2 i_2 i_3}| \Big) = (6.5 - 3)(5 - 3.6) = 4.9 > \bar{\alpha}_{N_2}(4) \cdot \sum_{i_2 i_3 \in (N \setminus N_1)^2} |a_{2 i_2 i_3}| = 2 \times 2.4 = 4.8,$$
$$(|a_{555}| - \alpha_{N_3}(5)) \cdot \Big( |a_{111}| - \sum_{\substack{i_2 i_3 \in N^2 \setminus (N \setminus N_1)^2 \\ \delta_{1 i_2 i_3} = 0}} |a_{1 i_2 i_3}| \Big) = (6.3 - 2)(3.9 - 3.4) = 2.15 > \bar{\alpha}_{N_3}(5) \cdot \sum_{i_2 i_3 \in (N \setminus N_1)^2} |a_{1 i_2 i_3}| = 2.5 \times 0.6 = 1.5,$$
$$(|a_{555}| - \alpha_{N_3}(5)) \cdot \Big( |a_{222}| - \sum_{\substack{i_2 i_3 \in N^2 \setminus (N \setminus N_1)^2 \\ \delta_{2 i_2 i_3} = 0}} |a_{2 i_2 i_3}| \Big) = (6.3 - 2)(5 - 3.6) = 6.02 > \bar{\alpha}_{N_3}(5) \cdot \sum_{i_2 i_3 \in (N \setminus N_1)^2} |a_{2 i_2 i_3}| = 2.5 \times 2.4 = 6,$$
$$(|a_{666}| - \alpha_{N_3}(6)) \cdot \Big( |a_{111}| - \sum_{\substack{i_2 i_3 \in N^2 \setminus (N \setminus N_1)^2 \\ \delta_{1 i_2 i_3} = 0}} |a_{1 i_2 i_3}| \Big) = (8 - 3)(3.9 - 3.4) = 2.5 > \bar{\alpha}_{N_3}(6) \cdot \sum_{i_2 i_3 \in (N \setminus N_1)^2} |a_{1 i_2 i_3}| = 2 \times 0.6 = 1.2,$$
$$(|a_{666}| - \alpha_{N_3}(6)) \cdot \Big( |a_{222}| - \sum_{\substack{i_2 i_3 \in N^2 \setminus (N \setminus N_1)^2 \\ \delta_{2 i_2 i_3} = 0}} |a_{2 i_2 i_3}| \Big) = (8 - 3)(5 - 3.6) = 7 > \bar{\alpha}_{N_3}(6) \cdot \sum_{i_2 i_3 \in (N \setminus N_1)^2} |a_{2 i_2 i_3}| = 2 \times 2.4 = 4.8.$$
These imply that A satisfies all the conditions of Theorem 1. Therefore, A is an H -tensor.
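As an illustrative check (not part of the original example), the row sums $R_i(\mathcal{A})$ reported above can be reproduced numerically by building the tensor from its nonzero entries and reusing the helper `R` from the sketch in Section 1:

```python
import numpy as np

A = np.zeros((6, 6, 6))

def put(val, *triples):
    for t in triples:
        A[tuple(i - 1 for i in t)] = val       # shift 1-based indices to 0-based

put(3.9, (1, 1, 1)); put(1, (1, 1, 2), (1, 2, 1), (1, 2, 2))
put(0.2, (1, 1, 3), (1, 3, 1), (1, 3, 3), (1, 4, 4), (1, 5, 5))
put(5, (2, 2, 2)); put(1, (2, 1, 1), (2, 1, 2), (2, 2, 1), (2, 3, 5), (2, 3, 6))
put(0.2, (2, 1, 4), (2, 1, 5), (2, 3, 2), (2, 5, 6), (2, 6, 6))
put(4, (3, 3, 3)); put(0.5, (3, 3, 4), (3, 4, 3)); put(1, (3, 4, 4))
put(0.2, (3, 1, 1), (3, 2, 2), (3, 3, 1), (3, 2, 6), (3, 5, 2))
put(6.5, (4, 4, 4)); put(1, (4, 3, 3), (4, 3, 4), (4, 4, 3))
put(0.5, (4, 1, 2), (4, 1, 4), (4, 2, 1), (4, 4, 1))
put(6.3, (5, 5, 5)); put(1, (5, 5, 6))
put(0.5, (5, 1, 2), (5, 1, 5), (5, 2, 1), (5, 2, 3), (5, 5, 1), (5, 6, 5), (5, 6, 6))
put(8, (6, 6, 6)); put(1, (6, 5, 5), (6, 5, 6), (6, 6, 5))
put(0.5, (6, 2, 5), (6, 2, 6), (6, 5, 2), (6, 6, 2))

print([R(A, i) for i in range(6)])             # expected: [4.0, 6.0, 3.0, 5.0, 4.5, 5.0]
```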

5. Conclusions

In this paper, several practical criteria for identifying H-tensors were given. These criteria depend only on the entries of the tensor itself and are therefore easy to implement. Based on these criteria, we also gave a corresponding algorithm for determining whether a tensor is an H-tensor.

Author Contributions

M.L. and H.S. presented the idea and algorithms as supervisors. P.L. contributed to the preparation of the paper. G.H. contributed to the implementation of the algorithm. All authors have read and agreed to the submitted version of the manuscript.

Funding

This research was funded by the Jilin Province Department of Education Science and Technology research project under Grant JJKH20220041KJ, the 13th Five-Year Plan of Educational Science in Jilin Province under Grant GH19057, and the National Natural Science Foundation of China under Grant 11171133.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to express their gratitude to the Editor for his assistance and to the anonymous reviewers for their insightful suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, G.; Qi, L.; Yu, G. The Z-eigenvalues of a symmetric tensor and its application to spectral hypergraph theory. Numer. Linear Algebra Appl. 2013, 20, 1001–1029.
  2. Liu, J.; Musialski, P.; Wonka, P.; Ye, J. Tensor completion for estimating missing values in visual data. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 208–220.
  3. Moakher, M. On the averaging of symmetric positive-definite tensors. J. Elast. 2006, 82, 273–296.
  4. Nikias, C.L.; Mendel, J.M. Signal processing with higher-order spectra. IEEE Signal Process. Mag. 1993, 10, 10–37.
  5. Ni, Q.; Qi, L.; Wang, F. An eigenvalue method for the positive definiteness identification problem. IEEE Trans. Automat. Control 2008, 53, 1096–1107.
  6. Ng, M.; Qi, L.; Zhou, G. Finding the largest eigenvalue of a non-negative tensor. SIAM J. Matrix Anal. Appl. 2010, 31, 1090–1099.
  7. Oeding, L.; Ottaviani, G. Eigenvectors of tensors and algorithms for Waring decomposition. J. Symb. Comput. 2013, 54, 9–35.
  8. Yang, Y.; Yang, Q. Singular values of nonnegative rectangular tensors. Front. Math. China 2011, 6, 363–378.
  9. Brachat, J.; Comon, P.; Mourrain, B.; Tsigaridas, E. Symmetric tensor decomposition. Linear Algebra Appl. 2010, 433, 1851–1872.
  10. Cichocki, A.; Zdunek, R.; Phan, A.H.; Amari, S.I. Nonnegative Matrix and Tensor Factorizations; John Wiley & Sons: Hoboken, NJ, USA, 2009; Volume 25, pp. 1–3.
  11. Tartar, L. Mathematical Tools for Studying Oscillations and Concentrations: From Young Measures to H-Measures and Their Variants. In Multiscale Problems in Science and Technology; Springer: Berlin/Heidelberg, Germany, 2002; pp. 1–84.
  12. Zhang, L.; Qi, L.; Zhou, G. M-tensors and some applications. SIAM J. Matrix Anal. Appl. 2014, 35, 437–452.
  13. De Lathauwer, L.; De Moor, B.; Vandewalle, J. A multilinear singular value decomposition. SIAM J. Matrix Anal. Appl. 2000, 21, 1253–1278.
  14. Qi, L. Eigenvalues of a real supersymmetric tensor. J. Symb. Comput. 2005, 40, 1302–1324.
  15. Yang, Y.; Yang, Q. Further results for Perron–Frobenius theorem for nonnegative tensors. SIAM J. Matrix Anal. Appl. 2010, 31, 2517–2530.
  16. Li, C.Q.; Li, Y.T.; Xu, K. New eigenvalue inclusion sets for tensors. Numer. Linear Algebra Appl. 2014, 21, 39–50.
  17. Kolda, T.G.; Mayo, J.R. Shifted power method for computing tensor eigenpairs. SIAM J. Matrix Anal. Appl. 2011, 32, 1095–1124.
  18. Lim, L.H. Singular values and eigenvalues of tensors: A variational approach. In Proceedings of the IEEE International Workshop on Computational Advances in Multisensor Adaptive Processing, Puerto Vallarta, Mexico, 13–15 December 2005; pp. 129–132.
  19. Qi, L. Eigenvalues and invariants of tensors. J. Math. Anal. Appl. 2007, 325, 1363–1377.
  20. Qi, L.; Wang, F.; Wang, Y. Z-eigenvalue methods for a global polynomial optimization problem. Math. Program. 2009, 118, 301–316.
  21. Ni, G.; Qi, L.; Wang, F.; Wang, Y. The degree of the E-characteristic polynomial of an even order tensor. J. Math. Anal. Appl. 2007, 329, 1218–1229.
  22. Ding, W.; Qi, L.; Wei, Y. M-tensors and nonsingular M-tensors. Linear Algebra Appl. 2013, 439, 3264–3278.
  23. Kannan, M.R.; Shaked-Monderer, N.; Berman, A. Some properties of strong H-tensors and general H-tensors. Linear Algebra Appl. 2015, 476, 42–55.
  24. Hu, S.; Li, G.; Qi, L.; Song, Y. Finding the maximum eigenvalue of essentially nonnegative symmetric tensors via sum of squares programming. J. Optim. Theory Appl. 2013, 158, 713–738.
  25. Hu, S.; Li, G.; Qi, L. A tensor analogy of Yuan's theorem of the alternative and polynomial optimization with sign structure. J. Optim. Theory Appl. 2016, 168, 446–474.
  26. Cui, C.F.; Dai, Y.H.; Nie, J.W. All real eigenvalues of symmetric tensors. SIAM J. Matrix Anal. Appl. 2014, 35, 1582–1601.
  27. Li, C.; Wang, F.; Zhao, J.; Zhu, Y.; Li, Y. Criterions for the positive definiteness of real supersymmetric tensors. J. Comput. Appl. Math. 2014, 255, 1–14.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
