Article

Approximation Hierarchies for the Copositive Tensor Cone and Their Application to the Polynomial Optimization over the Simplex

by Muhammad Faisal Iqbal 1 and Faizan Ahmed 2,*
1 Department of Applied Mathematics and Statistics, Institute of Space Technology, Islamabad 44000, Pakistan
2 Formal Methods and Tools Group, University of Twente, 7522 NB Enschede, The Netherlands
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(10), 1683; https://doi.org/10.3390/math10101683
Submission received: 1 April 2022 / Revised: 10 May 2022 / Accepted: 10 May 2022 / Published: 14 May 2022
(This article belongs to the Section E: Applied Mathematics)

Abstract:
In this paper, we discuss the cone of copositive tensors and its approximation. We describe some basic properties of copositive tensors and positive semidefinite tensors. Specifically, we show that a Z-tensor (a tensor with non-positive off-diagonal entries) is copositive if and only if it is positive semidefinite. We also describe cone hierarchies that approximate the copositive cone. These hierarchies are based on sum-of-squares conditions and on the non-negativity of polynomial coefficients. We provide a compact representation for the approximation based on the non-negativity of polynomial coefficients. As an immediate consequence of this representation, we show that this approximation is polyhedral. Furthermore, these hierarchies are used to provide approximation results for optimizing a (homogeneous) polynomial over the simplex.

1. Introduction

Multidimensional arrays or tensors arise naturally as an extension of matrices. They occur in applications where one needs to represent multidimensional data, such as signal processing [1,2,3], machine learning [2,4,5], material science [6], and speech recognition [7]. For example, any homogeneous polynomial of degree d in n variables is associated with a symmetric tensor of order d and dimension n. In polynomial optimization and, likewise, in control theory, checking the non-negativity of a polynomial is a fundamental problem [8,9]. Non-negativity of the associated polynomial over the whole space or over the non-negative orthant gives rise to positive semidefinite and copositive tensors, respectively.
The copositive and completely positive cones of matrices, which are tensors of order two, are very well explored (see, e.g., [10,11,12] and also [13] for a list of open problems). Therefore, it seems natural to study similar results for copositive tensors. The generalization from matrix to tensor is not trivial since a higher dimension usually destroys the nice structure present at a lower dimension.
Current research on copositive tensors is focused on describing properties that can be generalized from the quadratic case to the higher-order case. The area is not very well explored. Similar to the matrix analog, a characterization of copositive tensors using eigenvectors of principal sub-tensors is described in [14]. Moreover, Qi and co-authors have discussed several basic properties of copositive tensors in a series of papers (see e.g., [15] and the book [16]).
The set of copositive tensors forms the copositive cone. The copositive cone is used to reformulate hard combinatorial optimization problems as linear conic optimization problems. Reformulating a polynomial optimization problem as a copositive program does not reduce its complexity; rather, the complexity is packaged in the copositivity constraint, which is known to be NP-hard to check. To approximate the copositive cone, several tractable approximation hierarchies have been developed. These approximation hierarchies are based on sum-of-squares conditions, non-negativity of polynomial coefficients, simplicial partitions, or rational gridding of the simplex. For instance, Parrilo [17] provided a hierarchy of linear and semidefinite inner approximations for the copositive cone (see also [18]). Moreover, Bomze and de Klerk [19] developed a hierarchy for copositive matrices based on the non-negativity of polynomial coefficients.
In this paper, we describe approximation hierarchies for the cone of copositive tensors. We extend the hierarchies presented by Parrilo [17] (denoted by $K_{n,d}^{(r)}$) and by Bomze and de Klerk [19] (denoted by $C_{n,d}^{(r)}$) to higher orders. Earlier work was focused on extending polynomial approximation schemes from lower to higher degree. For example, Bomze and de Klerk [19] focused on improving the approximation result presented by Nesterov [20]. They first presented a polyhedral representation of $C_{n,2}^{(r)}$ and then used this representation to derive an approximation scheme for polynomial optimization over the simplex. The approximation scheme was extended to fixed-degree polynomials of higher degree by de Klerk, Laurent and Parrilo [21]. However, they used rational gridding of the simplex to derive this polynomial-time approximation scheme. The results were further refined using the Bernstein approximation in [22]. Since these approximation schemes relied on rational gridding, an error analysis based on the multivariate hypergeometric distribution has also been presented (see [23] and also [24] for a note on the convergence rate).
Furthermore, we develop a compact representation of $C_{n,d}^{(r)}$ with the aim of extending the initial results presented by Bomze and de Klerk [19]. To the best of our knowledge, the closest attempt in this direction is [25]. However, their results are restricted to tensors of order four and rely heavily on breaking a tensor of order four into tensors of lower order. That representation process was tedious and had no obvious generalization to higher-order cases. Our representation is general and holds for all orders and dimensions. These hierarchies can be used to develop polynomial-time approximation schemes for polynomial optimization over the simplex, and we provide results in this direction (see Section 5).
The main contributions of this paper are as follows. (a) We discuss basic properties of the copositive tensor cone; in particular, we show that a Z-tensor is copositive if and only if it is positive semidefinite. (b) We describe approximation hierarchies for the copositive cone of tensors based on sum-of-squares decompositions and on the non-negativity of polynomial coefficients. Moreover, we present the polyhedral representation of the approximation cone $C_{n,d}^{(r)}$ based on non-negative coefficients; the representation is compact and has not appeared in the literature. (c) We apply these approximation hierarchies to polynomial optimization over the simplex.
The article is arranged as follows. Section 2 contains the basic definitions and notation. In Section 3, we define the tensor cones and present related results. In Section 4, we present approximation hierarchies for the copositive cone of tensors and discuss special cases and characterizations of these hierarchies. Section 5 provides approximation results based on these hierarchies. In Section 6, we provide conclusions and future directions.

2. Preliminaries

Throughout this article, the n-dimensional Euclidean space and its non-negative orthant are denoted by $\mathbb{R}^n$ and $\mathbb{R}^n_+$, respectively. The set of natural numbers is denoted by $\mathbb{N}$ and the set of the first n natural numbers is denoted by $[1:n]$. The set of whole numbers is denoted by $\mathbb{N}_0 = \{0, 1, 2, \ldots\}$, while the set of the first $n+1$ whole numbers is denoted by $[0:n]$. For any $\alpha \in [0:d]^n$ we define $|\alpha| := \sum_{i=1}^n \alpha_i$. We also define the index set
\[
I_n(d) = \{\alpha \in [0:d]^n : |\alpha| = d\}.
\]
The cardinality of $I_n(d)$ is $|I_n(d)| = \binom{n+d-1}{d}$ (see, e.g., [22]).
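As a quick illustration (a small Python sketch of our own, not part of the paper), the index set $I_n(d)$ can be enumerated directly and its cardinality checked against the binomial formula:

```python
from itertools import product
from math import comb

def index_set(n, d):
    """All alpha in [0:d]^n with |alpha| = d."""
    return [alpha for alpha in product(range(d + 1), repeat=n) if sum(alpha) == d]

n, d = 3, 4
I = index_set(n, d)
assert len(I) == comb(n + d - 1, d)   # |I_n(d)| = C(n+d-1, d)
print(len(I))                         # 15 for n = 3, d = 4
```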
A tensor is a multi-dimensional array of real numbers. Specifically, an n-dimensional, dth-order tensor is given by $A = (a_{i_1 i_2 \cdots i_d})$, $1 \le i_1, \ldots, i_d \le n$. Moreover, a tensor A is said to be symmetric if
\[
a_{i_1 i_2 i_3 \cdots i_d} = a_{i_{\sigma(1)} i_{\sigma(2)} i_{\sigma(3)} \cdots i_{\sigma(d)}} \quad \text{for all permutations } \sigma \text{ on } \{1, 2, \ldots, d\}.
\]
The set of all symmetric tensors is denoted by $S_{n,d}$. For brevity of notation, if some index $i_j$ of an element $a_{i_1 i_2 \cdots i_d}$ of A is repeated k times, we write it as $(i_j)^k$, i.e.,
\[
a_{i_1 i_2 \cdots (i_j)^k \cdots i_{d-k}} = a_{i_1 i_2 \cdots \underbrace{i_j i_j \cdots i_j}_{k \text{ times}} \cdots i_{d-k}}.
\]
Using the above notation, the ith diagonal element of the tensor $A \in S_{n,d}$ is denoted by $a_{(i)^d}$.
The inner product of two tensors $A, B \in S_{n,d}$ is defined as follows:
\[
\langle A, B \rangle = \sum_{i_1, i_2, \ldots, i_d = 1}^{n} a_{i_1 i_2 \cdots i_d}\, b_{i_1 i_2 \cdots i_d}.
\]
For $\alpha \in I_n(d)$ and $x \in \mathbb{R}^n$, $x^\alpha := x_1^{\alpha_1} x_2^{\alpha_2} \cdots x_n^{\alpha_n} = \prod_{i=1}^n x_i^{\alpha_i}$ represents a monomial. The maximum degree among all the monomials of a polynomial $p(x)$ is called its degree. A polynomial all of whose monomials have the same degree is termed a 'form' or a 'homogeneous polynomial'. Moreover, for $x \in \mathbb{R}^n$, $x^{\otimes d}$ denotes the symmetric tensor of dimension n and order d given by
\[
x^{\otimes d} = \underbrace{x \otimes x \otimes \cdots \otimes x}_{d \text{ times}} = \left( x_{i_1} x_{i_2} \cdots x_{i_d} \right)_{1 \le i_1, \ldots, i_d \le n} \in S_{n,d}.
\]
Notice that the entries of the tensor $x^{\otimes d}$ are the monomials of degree d in n variables. Thus, for any symmetric tensor $A \in S_{n,d}$, its associated form can be written as
\[
h_A(x) := A x^{\otimes d} = \sum_{i_1, \ldots, i_d = 1}^{n} a_{i_1 \cdots i_d}\, x_{i_1} \cdots x_{i_d} = \sum_{\alpha \in I_n(d)} c(\alpha)\, A_\alpha\, x^\alpha,
\tag{3}
\]
where $A_\alpha = a_{i_1 \cdots i_d}$ denotes the coefficient of the monomial $x^\alpha$ in $h_A(x)$ and
\[
c(\alpha) =
\begin{cases}
\dfrac{|\alpha|!}{\prod_{i=1}^{n} \alpha_i!} & \text{if } \alpha \in \mathbb{N}_0^n, \\[2mm]
0 & \text{if } \alpha \in \mathbb{R}^n \setminus \mathbb{N}_0^n,
\end{cases}
\tag{4}
\]
and $\alpha_i!$ denotes the factorial of $\alpha_i$.
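To make the correspondence in (3) concrete, here is a small sympy sketch (our own illustrative code with an arbitrary symmetric tensor, not from the paper) that builds $h_A(x)$ both as the full index sum and as the monomial sum weighted by $c(\alpha)$, and confirms that the two agree:

```python
import itertools
import sympy as sp
from math import factorial

n, d = 2, 3
x = sp.symbols(f'x1:{n + 1}')

def entry(idx):
    # an arbitrary symmetric choice: the value depends only on the multiset of indices
    return sum(idx) + 1

def c(alpha):
    # multinomial coefficient c(alpha) = |alpha|! / (alpha_1! ... alpha_n!)
    den = 1
    for a in alpha:
        den *= factorial(a)
    return factorial(sum(alpha)) // den

# full index sum over all i_1, ..., i_d
full_sum = sum(entry(idx) * sp.prod([x[i] for i in idx])
               for idx in itertools.product(range(n), repeat=d))

# monomial sum over alpha in I_n(d) with multinomial coefficients
monomial_sum = 0
for alpha in itertools.product(range(d + 1), repeat=n):
    if sum(alpha) == d:
        idx = tuple(i for i, a in enumerate(alpha) for _ in range(a))
        monomial_sum += c(alpha) * entry(idx) * sp.prod([xi**a for xi, a in zip(x, alpha)])

assert sp.expand(full_sum - monomial_sum) == 0
```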
A subset $K \subseteq S_{n,d}$ is said to be a cone if for each tensor $A \in K$ the scalar multiple $\lambda A \in K$ for all $\lambda \ge 0$. Moreover, the cone K is said to be a convex cone if for $A, B \in K$ and non-negative scalars $\lambda_1, \lambda_2 \in \mathbb{R}$ we have $\lambda_1 A + \lambda_2 B \in K$. The dual $K^*$ of the cone K is defined as
\[
K^* := \left\{ U \in S_{n,d} : \langle U, V \rangle \ge 0 \ \text{for all } V \in K \right\}.
\]
A convex cone K is said to be pointed if $K \cap (-K) = \{0\}$, and K is said to be solid if its interior is nonempty. A convex cone which is closed, pointed, and solid is termed a proper cone. A convex cone K is said to be a polyhedral cone if it is finitely generated.
The cone of entry-wise non-negative tensors is denoted by $N_{n,d}$. A tensor with non-positive off-diagonal entries is termed a Z-tensor (it is also called an essentially non-positive tensor, see e.g., [26]). Finally, the standard simplex $\Delta_n$ in the n-dimensional Euclidean space $\mathbb{R}^n$ is
\[
\Delta_n := \left\{ x \in \mathbb{R}^n_+ : e^T x = 1 \right\},
\]
where $e^T = (1, 1, \ldots, 1) \in \mathbb{R}^n$.

3. Positive Semidefinite Tensors, Copositive Tensors and Their Duals

The cone of dth-order (with d even), n-dimensional positive semidefinite tensors is denoted by $S_{n,d}^+$ and is given below:
\[
S_{n,d}^+ := \left\{ A \in S_{n,d} : h_A(x) = A x^{\otimes d} \ge 0, \ \forall x \in \mathbb{R}^n \right\}.
\]
For a tensor $A \in S_{n,d}^+$, the polynomial $h_A(x)$ is called a PSD polynomial. The dual of the PSD cone $S_{n,d}^+$ is the cone of completely positive semidefinite tensors, denoted by $(S_{n,d}^+)^*$, which is defined as
\[
(S_{n,d}^+)^* := \left\{ X \in S_{n,d} : \langle A, X \rangle \ge 0, \ \forall A \in S_{n,d}^+ \right\} = \left\{ \sum_{k=1}^{N} (x^k)^{\otimes d} : x^k \in \mathbb{R}^n, \ N \in \mathbb{N} \right\}.
\]
For $d = 2$, the PSD cone is self-dual, i.e., $S_{n,2}^+ = (S_{n,2}^+)^*$ (cf. [16]). However, for $d \ge 4$ we have $S_{n,d}^+ \ne (S_{n,d}^+)^*$ in general (see [27], Example 4.5).
It is well known that a (twice differentiable) function is convex if and only if its Hessian matrix is positive semidefinite (see, e.g., [28], Theorem 4.5). Therefore, the convexity of the homogeneous polynomial defined in (3) amounts to checking whether $\nabla^2 h_A(x) \in S_{n,2}^+$. It has been shown that if a polynomial $h_A(x)$ is convex then its associated tensor A is positive semidefinite (see [27], Proposition 5.10). However, the converse need not be true in general (see [27] (Example 5.11) and [29]).
A tensor $A \in S_{n,d}$ is said to be copositive if $h_A(x) = A x^{\otimes d} \ge 0$ for all $x \in \mathbb{R}^n_+$, whereas a tensor is called strictly copositive if $h_A(x) = A x^{\otimes d} > 0$ for all $x \in \mathbb{R}^n_+ \setminus \{0\}$. The set of n-dimensional, dth-order copositive tensors defines a cone, given below:
\[
C_{n,d} := \left\{ A \in S_{n,d} : A x^{\otimes d} \ge 0, \ \forall x \in \mathbb{R}^n_+ \right\}.
\]
It is well known that a tensor $A \in \operatorname{int} C_{n,d}$ if and only if it is strictly copositive, where $\operatorname{int} C_{n,d}$ denotes the interior of the set $C_{n,d}$. The dual of the copositive cone $C_{n,d}$ is the completely positive cone, denoted by $C_{n,d}^*$ (see e.g., [16] (Theorem 6.9), [30]), which is defined below:
\[
C_{n,d}^* = \left\{ X \in S_{n,d} : \langle A, X \rangle \ge 0, \ \forall A \in C_{n,d} \right\} = \left\{ \sum_{k=1}^{N} (x^k)^{\otimes d} : x^k \in \mathbb{R}^n_+, \ N \in \mathbb{N} \right\}.
\]
It is clear that if a tensor is positive semidefinite then it is also copositive. A copositive tensor, however, need not be positive semidefinite (cf. Example 1). The question arises: in which cases is a copositive tensor also positive semidefinite? In the following theorem, we describe one such case (for d = 2 see [31], Lemma 2.6).
Theorem 1.
Let $A \in S_{n,d}$ be a Z-tensor with d even. Then A is copositive if and only if it is positive semidefinite.
Proof. 
Let $A \in C_{n,d}$. Since A is a Z-tensor, we can write $A = P - Q$, where $P, Q \in N_{n,d}$ are such that
\[
p_{i_1 \cdots i_d} :=
\begin{cases}
a_{(i)^d} & \text{if } i_1 = \cdots = i_d = i, \\
0 & \text{otherwise},
\end{cases}
\qquad
q_{i_1 \cdots i_d} :=
\begin{cases}
0 & \text{if } i_1 = \cdots = i_d = i, \\
|a_{i_1 \cdots i_d}| & \text{otherwise}.
\end{cases}
\]
To show that $A \in S_{n,d}^+$, take $x \in \mathbb{R}^n$ and consider
\[
h_A(x) = A x^{\otimes d} = (P - Q) x^{\otimes d} = P x^{\otimes d} - Q x^{\otimes d}, \quad x \in \mathbb{R}^n.
\tag{13}
\]
Since d is even, we have $h_P(x) := P x^{\otimes d} = \sum_{i=1}^{n} a_{(i)^d}\, x_i^d \ge 0$ for all $x \in \mathbb{R}^n$. However, $h_Q(x) := Q x^{\otimes d}$ can be positive or negative. Note that if $h_Q(x) \le 0$ for some $x \in \mathbb{R}^n$, then clearly from (13) we have $h_A(x) = P x^{\otimes d} + |Q x^{\otimes d}| \ge 0$.
So, the only case left is when $h_Q(x) > 0$ for some $x \in \mathbb{R}^n$. To show that $h_A(x) \ge 0$ in this case also, we define $x^+ \in \mathbb{R}^n_+$ by $x^+_i = x_i$ if $x_i \ge 0$ and $x^+_i = -x_i$ if $x_i < 0$. To show that $h_Q(x) \le h_Q(x^+)$, consider
\[
S := \left\{ \alpha \in I_n(d) : x^\alpha < 0 \right\}
\]
and note that for $\alpha \in S$ we have $x^\alpha = -(x^+)^\alpha$, while for $\alpha \notin S$ we have $x^\alpha = (x^+)^\alpha$. Clearly, we have
\[
0 \le h_Q(x) = \sum_{\alpha \in I_n(d)} c(\alpha)\, Q_\alpha\, x^\alpha
= \sum_{\alpha \in I_n(d) \setminus S} c(\alpha)\, Q_\alpha\, x^\alpha + \sum_{\alpha \in S} c(\alpha)\, Q_\alpha\, x^\alpha
= \underbrace{\sum_{\alpha \in I_n(d) \setminus S} c(\alpha)\, |A_\alpha|\, (x^+)^\alpha}_{\ge 0} - \underbrace{\sum_{\alpha \in S} c(\alpha)\, |A_\alpha|\, (x^+)^\alpha}_{\ge 0}
\le \sum_{\alpha \in I_n(d) \setminus S} c(\alpha)\, |A_\alpha|\, (x^+)^\alpha + \sum_{\alpha \in S} c(\alpha)\, |A_\alpha|\, (x^+)^\alpha = h_Q(x^+).
\]
Since d is even, we have $h_P(x) = h_P(x^+)$. Furthermore, A is copositive and $x^+ \in \mathbb{R}^n_+$, implying $h_P(x^+) \ge h_Q(x^+)$. Thus, we have
\[
h_P(x) = h_P(x^+) \ge h_Q(x^+) \ge h_Q(x).
\]
From (13) and (15) we deduce that $h_A(x) \ge 0$ in this case also. Hence, A is positive semidefinite.
The converse is obvious, since every positive semidefinite tensor is also copositive. □
Note that the above result also appeared in Zhang et al. [32] (Theorem 3.5(e) and Theorem 3.12), where the proof is based on the spectral properties of so-called M-tensors. The proof given above is self-contained and does not require any extra structure.
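To make the splitting used in the proof concrete, here is a small numpy sketch (our own illustration with assumed example data, not code from the paper) that separates a Z-tensor into the non-negative tensors P and Q of Theorem 1:

```python
import numpy as np

n, d = 2, 4
A = np.zeros((n,) * d)
A[0, 0, 0, 0] = 2.0          # diagonal entries a_{(1)^4}, a_{(2)^4}
A[1, 1, 1, 1] = 3.0
for idx in np.ndindex(A.shape):
    if len(set(idx)) > 1:    # off-diagonal entries of a Z-tensor are non-positive
        A[idx] = -0.1

P = np.zeros_like(A)
Q = np.zeros_like(A)
for idx in np.ndindex(A.shape):
    if len(set(idx)) == 1:   # keep the diagonal in P
        P[idx] = A[idx]
    else:                    # collect the absolute off-diagonal entries in Q
        Q[idx] = abs(A[idx])

assert np.allclose(A, P - Q) and (P >= 0).all() and (Q >= 0).all()
```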

4. Approximation Hierarchies for the Copositive Cone

Recall that a tensor $A \in S_{n,d}$ is copositive if $h_A(x) := \sum_{\alpha \in I_n(d)} c(\alpha) A_\alpha x^\alpha \ge 0$ for all $x \in \mathbb{R}^n_+$. Notice that any $x \in \mathbb{R}^n_+$ can be written as $x = z \circ z$ for some $z \in \mathbb{R}^n$, where $\circ$ denotes the component-wise (Hadamard) product, giving
\[
h_A(z \circ z) := \sum_{\alpha \in I_n(d)} c(\alpha)\, A_\alpha\, (z \circ z)^\alpha = \sum_{i_1, \ldots, i_d = 1}^{n} a_{i_1 i_2 \cdots i_d}\, z_{i_1}^2 z_{i_2}^2 \cdots z_{i_d}^2.
\tag{16}
\]
Thus, the copositivity condition translates to $h_A(z \circ z) \ge 0$ for all $z \in \mathbb{R}^n$, for which a sufficient condition is that (16) can be written as a sum of squares (SOS). Let us illustrate this with an example.
Example 1.
Consider a tensor $A \in S_{2,4}$ with $a_{1111} = 0$, $a_{1112} = 1/4$, $a_{1122} = 1/6$, $a_{1222} = -1/2$, $a_{2222} = 1$. Then for $x \in \mathbb{R}^2$, we have
\[
h_A(x) := (x_1 - x_2)^2 x_2^2 + x_1^3 x_2 = x_1^3 x_2 + x_1^2 x_2^2 - 2 x_1 x_2^3 + x_2^4.
\]
Clearly, A is not positive semidefinite, since $h_A([-5, 2]^T) = -54$. Let us consider $x = z \circ z \in \mathbb{R}^2_+$ where $z \in \mathbb{R}^2$, i.e., $x_1 = z_1^2$, $x_2 = z_2^2$; then we have
\[
h_A(z \circ z) = z_1^6 z_2^2 + z_1^4 z_2^4 - 2 z_1^2 z_2^6 + z_2^8 = \left( (z_1^2 - z_2^2)\, z_2^2 \right)^2 + \left( z_1^3 z_2 \right)^2.
\]
Thus, A is copositive.
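The computations in Example 1 are easy to verify symbolically; the following sympy snippet (our own check, not part of the original article) reproduces the negative value and the SOS decomposition:

```python
import sympy as sp

x1, x2, z1, z2 = sp.symbols('x1 x2 z1 z2')
hA = x1**3*x2 + x1**2*x2**2 - 2*x1*x2**3 + x2**4

# A is not positive semidefinite: h_A takes a negative value on R^2
print(hA.subs({x1: -5, x2: 2}))                      # -54

# substitute x = z∘z and compare with the SOS decomposition
hA_zz = sp.expand(hA.subs({x1: z1**2, x2: z2**2}))
sos = sp.expand(((z1**2 - z2**2)*z2**2)**2 + (z1**3*z2)**2)
print(sp.simplify(hA_zz - sos))                      # 0, so h_A(z∘z) is an SOS
```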
In the above example, $h_A(z \circ z)$ can be written as a sum of squares. However, this is not the case in general (see Example 2). Therefore, to develop higher-order sufficient conditions, the following polynomial, introduced by Parrilo [17], is most often used:
\[
P^{(r)}(z) := h_A(z \circ z) \left( \sum_{k=1}^{n} z_k^2 \right)^r \quad \text{for all } z \in \mathbb{R}^n.
\tag{18}
\]
Clearly, $P^{(r)}(z)$ is a polynomial of degree $2(r+d)$. Based on (18), one can define two cone approximations for the copositive cone (as we will see):
\[
K_{n,d}^{(r)} = \left\{ A \in S_{n,d} : P^{(r)}(z) \text{ has an SOS decomposition} \right\},
\]
\[
C_{n,d}^{(r)} = \left\{ A \in S_{n,d} : P^{(r)}(z) \text{ has non-negative coefficients} \right\}.
\]
For the tensor $A \in S_{2,4}$ given in Example 1, we have $A \in K_{2,4}^{(0)}$. It is clear from the expansion of $h_A(z \circ z)$ in Example 1 that it has a negative coefficient, implying $A \notin C_{2,4}^{(0)}$. However, one can show that $A \in C_{2,4}^{(11)}$.
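This last claim can be checked mechanically: the sketch below (our own sympy experiment, not from the paper) expands $P^{(r)}(z)$ for the tensor of Example 1 and tests the sign of its coefficients for increasing r.

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
hA_zz = z1**6*z2**2 + z1**4*z2**4 - 2*z1**2*z2**6 + z2**8   # h_A(z∘z) from Example 1

def in_C_r(r):
    """True if all coefficients of P^(r)(z) = h_A(z∘z)(z1^2+z2^2)^r are non-negative."""
    poly = sp.Poly(sp.expand(hA_zz * (z1**2 + z2**2)**r), z1, z2)
    return all(coeff >= 0 for coeff in poly.coeffs())

print([r for r in range(13) if in_C_r(r)])   # [11, 12]: the smallest such r is 11
```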
The inclusions $K_{n,d}^{(r)} \subseteq K_{n,d}^{(r+1)}$ and $C_{n,d}^{(r)} \subseteq C_{n,d}^{(r+1)}$ for $r \in \mathbb{N}_0$ are evident from the following:
\[
P^{(r+1)}(z) = h_A(z \circ z) \left( \sum_{k=1}^{n} z_k^2 \right)^{r+1} = \left( \sum_{k=1}^{n} z_k^2 \right) h_A(z \circ z) \left( \sum_{k=1}^{n} z_k^2 \right)^{r} = \left( \sum_{k=1}^{n} z_k^2 \right) P^{(r)}(z).
\]
The cone hierarchies $K_{n,d}^{(r)}$ and $C_{n,d}^{(r)}$ approximate the copositive cone from the inside. For instance, if $A \in K_{n,d}^{(r)}$, then $P^{(r)}(z)$, being a sum of squares, is non-negative, i.e., $P^{(r)}(z) \ge 0$ for all $z \in \mathbb{R}^n$, which in turn implies that $h_A(z \circ z) \ge 0$ for all $z \in \mathbb{R}^n$; hence $A \in C_{n,d}$. Similarly, one can show that if $A \in C_{n,d}^{(r)}$ for some $r \in \mathbb{N}_0$, then A is copositive. Thus, we have
\[
\bigcup_{r=0}^{\infty} K_{n,d}^{(r)} \subseteq C_{n,d} \quad \text{and} \quad \bigcup_{r=0}^{\infty} C_{n,d}^{(r)} \subseteq C_{n,d}.
\]
By Pólya's theorem (see e.g., [25], Theorem 2.1), for a tensor $A \in \operatorname{int}(C_{n,d})$ there exists a large enough r such that $A \in C_{n,d}^{(r)}$. This further implies that, for some $r \in \mathbb{N}_0$, the strictly copositive tensor $A \in \operatorname{int}(C_{n,d})$ allows $P^{(r)}(z)$ to have a sum of squares decomposition, that is, $A \in K_{n,d}^{(r)}$. Therefore, the infinite unions of these cones contain the interior of the copositive cone, i.e.,
\[
\operatorname{int}(C_{n,d}) \subseteq \bigcup_{r=0}^{\infty} C_{n,d}^{(r)} \quad \text{and} \quad \operatorname{int}(C_{n,d}) \subseteq \bigcup_{r=0}^{\infty} K_{n,d}^{(r)}.
\]
For the tensor $A \in S_{2,4}$ given in Example 1, it holds that $A \notin \operatorname{int}(C_{2,4})$, since $h_A([1, 0]^T) = 0$. Thus, neither $\bigcup_{r=0}^{\infty} K_{n,d}^{(r)} \subseteq \operatorname{int}(C_{n,d})$ nor $\bigcup_{r=0}^{\infty} C_{n,d}^{(r)} \subseteq \operatorname{int}(C_{n,d})$ holds.

4.1. The Case r = 0

The case $r = 0$ is interesting and requires further exploration. It is clear that $C_{n,d}^{(0)} = N_{n,d}$, which requires no further discussion. For $d = 2$, a tensor $A \in K_{n,2}^{(0)}$ is often characterized in terms of the decomposition $A = S + T$, where $S \in S_{n,2}^+$ and $T \in N_{n,2}$. However, for $d \ge 4$ only one direction is possible, as shown below.
Theorem 2.
Let $A \in S_{n,d}$. If $A \in K_{n,d}^{(0)}$, then $A = S + T$, where $S \in S_{n,d}^+$ and $T \in N_{n,d}$; i.e., $K_{n,d}^{(0)} \subseteq S_{n,d}^+ + N_{n,d}$.
Proof. 
The proof for the matrix case (see e.g., [19], Theorem 2.1) can easily be generalized to accommodate higher-order tensors. □
The converse of the above theorem is not true in general. The following is a counterexample.
Example 2.
Let $A \in S_{3,6}$ be such that
\[
a_{i_1 i_2 i_3 i_4 i_5 i_6} =
\begin{cases}
1 & \text{if } i_1 = i_2 = \cdots = i_6 \in \{1,2,3\}, \\
1/30 & \text{if } i_1 = i_2 = i,\ i_3 = i_4 = j,\ i_5 = i_6 = k,\ i \ne j \ne k \in \{1,2,3\}, \\
-1/15 & \text{if } i_1 = i_2 = i_3 = i_4 = i,\ i_5 = i_6 = j,\ i \ne j \in \{1,2,3\}, \\
0 & \text{otherwise}.
\end{cases}
\]
The associated polynomial is given by (cf. (3))
\[
h_A(x) = x_1^6 + x_2^6 + x_3^6 + 3 x_1^2 x_2^2 x_3^2 - \left( x_1^4 x_2^2 + x_1^4 x_3^2 + x_1^2 x_2^4 + x_1^2 x_3^4 + x_2^4 x_3^2 + x_2^2 x_3^4 \right).
\tag{26}
\]
The polynomial given in (26) is the well-known Robinson polynomial [33]. It is well known that $h_A(x) \ge 0$ for all $x \in \mathbb{R}^3$. Therefore, A can trivially be written as the sum of a positive semidefinite tensor and a non-negative (actually zero) tensor. It is also well known that the Robinson polynomial cannot be written as an SOS (see [33] for a proof), i.e., $A \notin K_{3,6}^{(0)}$.
To find special cases in which the converse of Theorem 2 holds, note that from $A = S + T$ we have
\[
h_A(z \circ z) = \langle S + T, (z \circ z)^{\otimes d} \rangle = h_S(z \circ z) + \langle T, (z \circ z)^{\otimes d} \rangle, \quad z \in \mathbb{R}^n.
\tag{27}
\]
Clearly, $\langle T, (z \circ z)^{\otimes d} \rangle$ is an SOS, since T is non-negative. Thus, if $h_S(z \circ z)$ can be written as an SOS, the converse of Theorem 2 holds. It is well known that a matrix (i.e., $d = 2$) is positive semidefinite if and only if its associated polynomial can be written as a sum of squares. Therefore, the converse of Theorem 2 holds for $d = 2$ (see [19] (Theorem 2.1) for a proof). Moreover, for $n = 2$ and $d = 4$, a tensor is positive semidefinite if and only if its associated form is a sum of squares; see, e.g., [33]. Therefore, the converse of Theorem 2 holds in this case also. We close this discussion by showing that the converse of Theorem 2 also holds for Z-tensors.
Theorem 3.
Let $A \in S_{n,d}$ be a Z-tensor with $A = S + T$, where $S \in S_{n,d}^+$ and $T \in N_{n,d}$. Then $A \in K_{n,d}^{(0)}$.
Proof. 
Note that since A is a Z-tensor and T is non-negative, S is also a Z-tensor. It is well known that a Z-tensor is positive semidefinite if and only if it is a sum of squares (see e.g., [26] (Proposition 2.1), [34] (Theorem 11)). Hence, (27) can be written as a sum of squares. □

4.2. Characterization of $K_{n,d}^{(r)}$

In this subsection, we formulate a characterization of $K_{n,d}^{(r)}$. Before presenting this characterization, we provide bounds on the values of the coefficients of the polynomial $P^{(r)}(z)$. For this, consider
\[
P^{(r)}(z) = h_A(z \circ z) \left( \sum_{k=1}^{n} z_k^2 \right)^r = \left( \sum_{i_1, \ldots, i_d = 1}^{n} a_{i_1 i_2 \cdots i_d}\, z_{i_1}^2 z_{i_2}^2 \cdots z_{i_d}^2 \right) \left( \sum_{\alpha \in I_n(r)} c(\alpha)\, z^{2\alpha} \right).
\tag{28}
\]
Note that $z_{i_j}^2$ can be written as $z^{2 e_{i_j}}$, where $e_{i_j}$ is a unit vector and $1 \le i_j \le n$. Using this notation, (28) can be written as
\[
P^{(r)}(z) = \left( \sum_{i_1, \ldots, i_d = 1}^{n} a_{i_1 \cdots i_d}\, z^{2 e_{i_1}} z^{2 e_{i_2}} \cdots z^{2 e_{i_d}} \right) \left( \sum_{\alpha \in I_n(r)} c(\alpha)\, z^{2\alpha} \right).
\tag{29}
\]
Denote $\theta := e_{i_1} + e_{i_2} + \cdots + e_{i_d} \in I_n(d)$; then $\omega := \alpha + \theta \in I_n(d+r)$, and with this notation we can write (29) as
\[
P^{(r)}(z) = \sum_{\omega \in I_n(r+d)} B_\omega(A)\, z^{2\omega}, \quad \text{where} \quad B_\omega(A) := \sum_{\theta \in I_n(d)} \frac{r!}{\prod_{i=1}^{n} (\omega_i - \theta_i)!}\, c(\theta)\, A_\theta,
\tag{30}
\]
where the sum runs over those $\theta$ with $\omega_i - \theta_i \ge 0$ for all $i = 1, \ldots, n$.
Note that the maximum value of $\prod_{i=1}^{n} (\omega_i - \theta_i)!$ is $r!$. (Observe that $\sum_{i=1}^{n} (\omega_i - \theta_i) = (r+d) - d = r$. Take $r_i = \omega_i - \theta_i$; then the maximum occurs when $r_i = r$ for some i and $r_j = 0$ for all $j \ne i$.) Moreover, the minimum value of $\prod_{i=1}^{n} (\omega_i - \theta_i)!$ occurs when $\omega_i - \theta_i \in \{0, 1\}$ for all $i \in [1:n]$, and this minimum value is 1. Thus, we have
\[
1 \le \frac{r!}{\prod_{i=1}^{n} (\omega_i - \theta_i)!} \le r!, \quad \text{where } \omega_i - \theta_i \ge 0 \ \forall i \in [1:n].
\tag{31}
\]
To show that the upper bound in (31) is sharp, take $n = r + d$ and $\omega = (1, 1, \ldots, 1)^T \in I_n(d+r)$; then for $\theta = e_1 + e_2 + \cdots + e_d$ we have $\omega_i - \theta_i \in \{0, 1\}$, so the denominator reduces to 1 and the upper bound $r!$ is attained. In the following theorem, we describe a characterization of $K_{n,d}^{(r)}$.
Theorem 4
(cf. [19] for d = 2). The tensor $A \in K_{n,d}^{(r)}$ if and only if there exists a PSD matrix $\tilde{M} \in S_{\tilde{n}_r, 2}^+$ associated with the polynomial (18),
\[
P^{(r)}(z) = \tilde{z}^T \tilde{M} \tilde{z}, \quad \text{where } \tilde{z} = [z^\alpha]_{\alpha \in I_n(d+r)} \in \mathbb{R}^{\tilde{n}_r} \text{ with } \tilde{n}_r = \binom{n + r + d - 1}{r + d},
\]
such that
\[
\sum_{\substack{(\beta_i, \beta_j) \in I_n(d+r) \times I_n(d+r): \\ \beta_i + \beta_j = 2\omega}} \tilde{m}_{i,j} = B_\omega(A) \quad \text{for all } \omega \in I_n(d+r),
\]
\[
\sum_{\substack{(\beta_i, \beta_j) \in I_n(d+r) \times I_n(d+r): \\ \beta_i + \beta_j = u}} \tilde{m}_{i,j} = 0 \quad \text{for all } u \in I_n(2d + 2r) \setminus 2 I_n(d+r).
\]
Proof. 
The proof is an easy generalization of the matrix case ([19], Theorem 2.2) and is presented here for the sake of completeness. Consider the polynomial $P^{(r)}(z)$ described in (30) and its matrix formulation
\[
P^{(r)}(z) = \tilde{z}^T \tilde{M} \tilde{z}, \quad \text{where } \tilde{z} = [z^\alpha]_{\alpha \in I_n(d+r)} \in \mathbb{R}^{\tilde{n}_r} \text{ with } \tilde{n}_r = \binom{n + d + r - 1}{d + r}.
\tag{34}
\]
If $A \in K_{n,d}^{(r)}$, then its associated polynomial $P^{(r)}(z)$ has a sum of squares decomposition, which in turn implies that the matrix $\tilde{M}$ is PSD. For the converse, assume that the matrix $\tilde{M}$ in (34) is PSD. Then the following decomposition is evident:
\[
\tilde{M} = \sum_{k=1}^{m_r} \tilde{u}_k \tilde{u}_k^T, \quad \text{where } m_r \text{ is the rank of } \tilde{M}.
\tag{35}
\]
Combining (34) and (35) gives
\[
P^{(r)}(z) = \sum_{k=1}^{m_r} \left( \tilde{u}_k^T \tilde{z} \right) \left( \tilde{u}_k^T \tilde{z} \right) = \sum_{k=1}^{m_r} \left( \tilde{u}_k^T \tilde{z} \right)^2, \quad \text{where } \tilde{z} = [z^\alpha]_{\alpha \in I_n(d+r)} \in \mathbb{R}^{\tilde{n}_r}.
\]
By comparing (30) and (34), we obtain the following:
\[
\tilde{z}^T \tilde{M} \tilde{z} = \sum_{\omega \in I_n(d+r)} B_\omega(A)\, z^{2\omega}.
\tag{38}
\]
Comparing the coefficients of $z^{2\omega}$ in (38) gives
\[
\sum_{\substack{(\beta_i, \beta_j) \in I_n(d+r) \times I_n(d+r): \\ \beta_i + \beta_j = 2\omega}} \tilde{m}_{i,j} = B_\omega(A) \quad \text{for all } \omega \in I_n(d+r).
\]
However, for $u \in I_n(2d + 2r) \setminus 2 I_n(d+r)$, the coefficient of $z^u$ on the right-hand side of (38) is zero; thus we have
\[
\sum_{\substack{(\beta_i, \beta_j) \in I_n(d+r) \times I_n(d+r): \\ \beta_i + \beta_j = u}} \tilde{m}_{i,j} = 0 \quad \text{for all } u \in I_n(2d + 2r) \setminus 2 I_n(d+r).
\]
Hence the proof is complete. □
From Theorem 4, it is clear that the matrix $\tilde{M}$ is sparse. To deal with this sparsity, Ahmadi and Majumdar [35] introduced polyhedral approximations based on the observation that a diagonally dominant matrix is positive semidefinite.
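To make Theorem 4 concrete, the following sketch (our own illustration, not code from the paper; it assumes cvxpy with an SDP-capable solver such as SCS is installed) sets up the feasibility SDP for the tensor of Example 1 with $r = 0$: it builds the monomial basis indexed by $I_n(d+r)$, matches the entries of $\tilde{M}$ against the coefficients of $P^{(r)}(z)$, and asks a solver for a PSD Gram matrix.

```python
import itertools
import cvxpy as cp
import sympy as sp

n, d, r = 2, 4, 0
z = sp.symbols(f'z1:{n + 1}')
# P^(0)(z) = h_A(z∘z) for the tensor of Example 1
P_r = sp.expand((z[0]**6*z[1]**2 + z[0]**4*z[1]**4 - 2*z[0]**2*z[1]**6 + z[1]**8)
                * (sum(zi**2 for zi in z))**r)
coeffs = sp.Poly(P_r, *z).as_dict()               # exponent tuple -> coefficient

# monomial basis indexed by I_n(d + r)
betas = [b for b in itertools.product(range(d + r + 1), repeat=n) if sum(b) == d + r]
M = cp.Variable((len(betas), len(betas)), symmetric=True)

# group the entries of M by the exponent of z^(beta_i + beta_j) they contribute to
groups = {}
for i, bi in enumerate(betas):
    for j, bj in enumerate(betas):
        groups.setdefault(tuple(a + b for a, b in zip(bi, bj)), []).append(M[i, j])

constraints = [M >> 0]
for expo, entries in groups.items():
    constraints.append(sum(entries) == float(coeffs.get(expo, 0)))

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)   # 'optimal' (feasible), consistent with A ∈ K_{2,4}^{(0)} from Example 1
```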

4.3. Polyhedral Characterization of $C_{n,d}^{(r)}$

In this section, we present a compact representation of the cone $C_{n,d}^{(r)}$. The representation is useful in deducing a polyhedral characterization of $C_{n,d}^{(r)}$. For this, recall that for any $\omega \in I_n(r+d)$, the coefficient in (30) can be rewritten as follows:
\[
\begin{aligned}
B_\omega(A) &= \sum_{\theta \in I_n(d)} c(\theta)\, c(\omega - \theta)\, A_\theta
= \sum_{\theta \in I_n(d)} c(\theta)\, \frac{|\omega - \theta|!}{(\omega_1 - \theta_1)!\, (\omega_2 - \theta_2)! \cdots (\omega_n - \theta_n)!}\, A_\theta \\
&= r! \sum_{\theta \in I_n(d)} c(\theta)\, \frac{\omega_1 (\omega_1 - 1) \cdots (\omega_1 - (\theta_1 - 1)) \;\cdots\; \omega_n (\omega_n - 1) \cdots (\omega_n - (\theta_n - 1))}{\omega_1! \cdots \omega_n!}\, A_\theta \\
&= \frac{r!}{\prod_{k=1}^{n} \omega_k!} \sum_{\theta \in I_n(d)} c(\theta)\, A_\theta \prod_{k=1}^{n} \omega_k (\omega_k - 1) \cdots (\omega_k - (\theta_k - 1)).
\end{aligned}
\tag{40}
\]
Note that the first equality follows from the definition of the polynomial coefficient (see (4)). Recognize that the product $\omega_k (\omega_k - 1) \cdots (\omega_k - (\theta_k - 1))$ is a falling factorial. Falling factorials can be represented using Stirling numbers of the first kind as follows:
\[
\omega_k (\omega_k - 1) \cdots (\omega_k - (\theta_k - 1)) = \sum_{m=0}^{\theta_k} s(\theta_k, m)\, \omega_k^m,
\tag{41}
\]
where $s(\theta_k, m) := (-1)^{\theta_k - m} \left[{\theta_k \atop m}\right]$ is the well-known (signed) Stirling number of the first kind (see e.g., [36], Chapter 6.1). Using (41) in (40) gives
\[
B_\omega(A) = \frac{r!}{\prod_{k=1}^{n} \omega_k!} \sum_{\theta \in I_n(d)} c(\theta)\, A_\theta \prod_{k=1}^{n} \sum_{m=0}^{\theta_k} s(\theta_k, m)\, \omega_k^m.
\tag{42}
\]
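Identity (41) is easy to sanity-check; the snippet below (an illustrative check of our own, not part of the paper) compares the expanded falling factorial with the signed Stirling-number expansion for $\theta_k = 4$:

```python
import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

w = sp.symbols('omega')
theta_k = 4
falling = sp.prod([w - i for i in range(theta_k)])   # ω(ω-1)(ω-2)(ω-3)
# signed Stirling numbers of the first kind: s(n, m) = (-1)^(n-m) * [n over m]
expansion = sum((-1)**(theta_k - m) * stirling(theta_k, m, kind=1) * w**m
                for m in range(theta_k + 1))
assert sp.expand(falling - expansion) == 0
```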
For $\theta \in I_n(d)$ and $\alpha \in I_n(t)$, the inequality $\alpha \le \theta$ is defined to hold element-wise, i.e., it holds if $\alpha_i \le \theta_i$ for all $1 \le i \le n$. Furthermore, $s(\theta, \alpha) := s(\theta_1, \alpha_1)\, s(\theta_2, \alpha_2) \cdots s(\theta_n, \alpha_n)$. Observe that $\left[{\theta_k \atop m}\right] = 1$ if $\theta_k = m$, and $\left[{\theta_k \atop m}\right] = 0$ if $\theta_k < m$. This observation leads to the following simplification for each $\theta \in I_n(d)$ and $\alpha \in I_n(t)$, $t \in [0:d]$:
\[
\prod_{k=1}^{n} \sum_{m=0}^{\theta_k} s(\theta_k, m)\, \omega_k^m
= \sum_{m_1, m_2, \ldots, m_n = 0}^{\theta_1, \theta_2, \ldots, \theta_n} s(\theta_1, m_1)\, s(\theta_2, m_2) \cdots s(\theta_n, m_n)\, \omega_1^{m_1} \omega_2^{m_2} \cdots \omega_n^{m_n}
= \sum_{\alpha \le \theta} s(\theta, \alpha)\, \omega^\alpha.
\tag{43}
\]
Combining (42) and (43) leads to
\[
B_\omega(A) = \frac{r!}{\prod_{k=1}^{n} \omega_k!} \sum_{\theta \in I_n(d)} c(\theta)\, A_\theta \sum_{t \in [0:d]} \sum_{\substack{\alpha \in I_n(t), \\ \alpha \le \theta}} s(\theta, \alpha)\, \omega^\alpha.
\tag{44}
\]
We define the tensors $S^{(t)}$ and $W^{(t)}$ of order $|\alpha| = t$ and dimension n as follows:
\[
S^{(t)} = \left( s(\theta, \alpha) \right)_{\alpha \in I_n(t)} \quad \text{and} \quad W^{(t)} = \left( \omega^\alpha \right)_{\alpha \in I_n(t)} \quad \text{for all } t \in [0:d].
\tag{45}
\]
Remark 1.
Interestingly, for each $\alpha \in I_n(d)$ we have
\[
\left( S^{(d)} \right)_\alpha :=
\begin{cases}
1 & \text{if } \alpha = \theta, \\
0 & \text{otherwise}.
\end{cases}
\]
Consequently, the above notation leads to the following simplified representation of (44):
\[
B_\omega(A) = \frac{r!}{\prod_{k=1}^{n} \omega_k!} \sum_{\theta \in I_n(d)} c(\theta)\, A_\theta \sum_{t=0}^{d} \left\langle S^{(t)}, W^{(t)} \right\rangle_\theta.
\]
Finally, we define the notation $Y_\theta^t := \langle S^{(t)}, W^{(t)} \rangle_\theta$. This notation leads to the tensor
\[
Y^t := \left( Y_\theta^t \right)_{\theta \in I_n(d)}
\tag{48}
\]
of order d and dimension n, whose entries are either forms of degree t or zero, that is,
\[
Y_\theta^t :=
\begin{cases}
\displaystyle \sum_{\substack{\alpha \in I_n(t), \\ \alpha \le \theta}} s(\theta, \alpha)\, \omega^\alpha & \text{if there exists } \alpha \le \theta, \\
0 & \text{otherwise}.
\end{cases}
\tag{49}
\]
Note that $Y_\theta^t \in \mathbb{R}$ and $Y^t \in S_{n,d}$. Moreover, from (49) one can easily assert that $Y^d = \omega^{\otimes d}$.
Remark 2.
Recall that $\theta \in I_n(d)$, implying that θ can be written as a linear combination of unit vectors, $\theta = \beta_1 e_1 + \cdots + \beta_n e_n$ with $\beta \in [0:d]^n$. Since $\theta \in I_n(d)$, the maximum number of non-zero elements of β is $\min\{d, n\}$. Similarly, for $\alpha \in I_n(t)$, we can have at most $\min\{n, t\}$ non-zero coefficients in the linear combination of the basis. Based on this observation, we can obtain an explicit representation of the tensor $Y^t \in S_{n,d}$ for all $t \in [1:d-1]$. Thus, for each $\theta \in I_n(d)$, the entries of the tensor $Y^t$ are described as
Y θ t : = s ( d , t ) ω i t i f t e i = α θ = d e i i [ 1 : n ] α i 1 , α i 2 = 1 s . t . α i 1 + α i 2 = t t s ( θ i 1 , α i 1 ) s ( θ i 2 , α i 2 ) ω i 1 α i 1 ω i 2 α i 2 i f α θ α = α i 1 e i 1 + α i 2 e i 2 θ = θ i 1 e i 1 + θ i 2 e i 2 i 1 i 2 [ 1 : n ] α i 1 , α i 2 , α i 3 = 1 s . t . α i 1 + α i 2 + α i 3 = t t s ( θ i 1 , α i 1 ) s ( θ i 2 , α i 2 ) s ( θ i 3 , α i 3 ) ω i 1 α i 1 ω i 2 α i 2 ω i 3 α i 3 i f α θ α = α i 1 e i 1 + α i 2 e i 2 + α i 3 e i 3 θ = θ i 1 e i 1 + θ i 2 e i 2 + θ i 3 e i 3 i 1 i 2 i 3 [ 1 : n ] α i 1 , , α i m = 1 s . t . α i 1 + + α i m = t w h e r e m : = min { t , n } t s ( θ i 1 , α i 1 ) s ( θ i m , α i m ) ω i 1 α i 1 ω i m α i m i f α θ α = α i 1 e i 1 + + α i m e i m θ = θ i 1 e i 1 + + θ i m e i m i 1 i 3 [ 1 : n ] 0 o t h e r w i s e
Thus, from (45) and (48) one obtains the following notationally convenient formulation of (44):
\[
B_\omega(A) = \frac{r!}{\prod_{k=1}^{n} \omega_k!} \left\langle A,\; Y^0 + Y^1 + \cdots + Y^{d-1} + Y^d \right\rangle
= \frac{r!}{\prod_{k=1}^{n} \omega_k!} \left( \left\langle A, \omega^{\otimes d} \right\rangle + \sum_{t=0}^{d-1} \left\langle A, Y^t \right\rangle \right).
\tag{51}
\]
Remark 3.
As a sanity check, we consider a special case: the tensor of all ones, $A = E$, that is, $a_{i_1 \cdots i_d} = 1$ for all $i_1, i_2, \ldots, i_d \in [1:n]$. Note that $\langle E, \omega^{\otimes d} \rangle = |\omega|^d$, and from (51) we have
\[
\begin{aligned}
B_\omega(E) &= \frac{r!}{\prod_{k=1}^{n} \omega_k!} \left( \left\langle E, \omega^{\otimes d} \right\rangle + \left\langle E, Y^{d-1} \right\rangle + \cdots + \left\langle E, Y^1 \right\rangle + \left\langle E, Y^0 \right\rangle \right) \\
&= \frac{r!}{\prod_{k=1}^{n} \omega_k!} \left( |\omega|^d + s(d, d-1)\, |\omega|^{d-1} + \cdots + s(d, 1)\, |\omega| \right) \\
&= \frac{r!}{\prod_{k=1}^{n} \omega_k!}\, (r+d)\,(r + (d-1)) \cdots (r+2)(r+1)
= \frac{r!}{\prod_{k=1}^{n} \omega_k!} \cdot \frac{(r+d)!}{r!}
= \frac{(r+d)!}{\prod_{k=1}^{n} \omega_k!} = c(\omega).
\end{aligned}
\]
The above representation leads to a polyhedral representation of the cone hierarchy C n , d ( r ) and is presented in the following theorem (cf. [19], Theorem 2.4).
Theorem 5.
For all $r \in \mathbb{N}_0$ and $n \in \mathbb{N}$, the polyhedral representation of the cone $C_{n,d}^{(r)}$ is given by
\[
C_{n,d}^{(r)} = \left\{ A \in S_{n,d} \;\middle|\; \left\langle A,\; \omega^{\otimes d} + \sum_{t=0}^{d-1} Y^t \right\rangle \ge 0 \ \text{for all } \omega \in I_n(r+d) \right\},
\]
where $Y^t := \left( \langle S^{(t)}, W^{(t)} \rangle_\theta \right)_{\theta \in I_n(d)}$ for all $t \in [0:d-1]$.
Proof. 
The proof follows immediately from (19) and (51). □

5. Approximating Polynomial Optimization over the Simplex

In this section, we consider homogeneous polynomial optimization over the simplex:
\[
\min\; h_A(x) \quad \text{subject to:} \quad h_B(x) = 1,\; x \in \mathbb{R}^n_+,
\tag{53}
\]
where $A, B \in S_{n,d}$. It is well known (see e.g., [18,19,25]) that (53) can be equivalently reformulated as a conic program over the cone of completely positive tensors. The reformulation is as follows:
\[
\min\; \langle A, x^{\otimes d} \rangle \quad \text{subject to:} \quad \langle B, x^{\otimes d} \rangle = 1,\; x^{\otimes d} \in C_{n,d}^*.
\tag{54}
\]
The dual formulation of (54) is a conic program over the cone of copositive tensors, which is given below:
\[
\max_{\lambda \in \mathbb{R}}\; \lambda \quad \text{subject to:} \quad A - \lambda B \in C_{n,d}.
\tag{55}
\]
We consider the special case $B = E$. Obviously, the feasible set $\{ x \in \mathbb{R}^n_+ : \langle E, x^{\otimes d} \rangle = 1 \}$ is precisely the simplex $\Delta_n$. Thus, the minimum (maximum) value of (53) in this special case, that is, the optimum of a homogeneous polynomial over the simplex $\Delta_n$, is
\[
h_A^{\min}(\Delta_n) := \min_{x \in \Delta_n} h_A(x), \qquad h_A^{\max}(\Delta_n) := \max_{x \in \Delta_n} h_A(x).
\tag{56}
\]
As mentioned before, testing whether a tensor is copositive is co-NP-hard. To find an approximate solution, we replace the cone $C_{n,d}$ (for the special case $B = E$) in (55) by its approximation $C_{n,d}^{(r)}$, where $r \in \mathbb{N}_0$:
\[
v_C^{(r)} := \max \left\{ \lambda \in \mathbb{R} \;:\; A - \lambda E \in C_{n,d}^{(r)} \right\}.
\tag{57}
\]
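Because the coefficient of $z^{2\omega}$ in $P^{(r)}(z)$ is linear in $\lambda$, the bound $v_C^{(r)}$ for the special case $B = E$ reduces to a minimum of coefficient ratios $B_\omega(A)/B_\omega(E)$. The sketch below (our own sympy experiment with the tensor of Example 1, not code from the paper) computes these lower bounds for a few values of r:

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
hA_zz = z1**6*z2**2 + z1**4*z2**4 - 2*z1**2*z2**6 + z2**8   # Example 1, d = 4
hE_zz = sp.expand((z1**2 + z2**2)**4)                       # all-ones tensor E

def v_C(r):
    mult = (z1**2 + z2**2)**r
    dA = sp.Poly(sp.expand(hA_zz * mult), z1, z2).as_dict()
    dE = sp.Poly(sp.expand(hE_zz * mult), z1, z2).as_dict()
    # every coefficient for E is positive, so λ ≤ B_ω(A)/B_ω(E) for every ω
    return min(sp.S(dA.get(m, 0)) / dE[m] for m in dE)

print([v_C(r) for r in range(4)])   # non-decreasing lower bounds on h_A^min(Δ_2) = 0
```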
We are interested in computing a bound on the difference between the approximate solution (of the dual program) $v_C^{(r)}$ and the exact solution $h_A^{\min}(\Delta_n)$. For this, we use a rational gridding of the simplex $\Delta_n$; i.e., for a non-negative integer $r \in \mathbb{N}_0$ we have
\[
\Delta_n(r) := \left\{ x \in \Delta_n : (r+d)\, x \in \mathbb{N}_0^n \right\} = \frac{I_n(r+d)}{r+d}.
\tag{58}
\]
The rational grid $\Delta_n(r)$ is a discretization of $\Delta_n$, which leads to a natural approximation of (56), i.e.,
\[
h_A^{\min}\left( \Delta_n(r) \right) := \min_{x \in \Delta_n(r)} \left\{ h_A(x) := \langle A, x^{\otimes d} \rangle \right\}.
\]
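As a small companion sketch (our own illustration, reusing the polynomial of Example 1), the grid $\Delta_n(r)$ can be enumerated directly and $h_A^{\min}(\Delta_n(r))$ evaluated by brute force:

```python
from itertools import product
from fractions import Fraction

def grid(n, d, r):
    """The rational grid Δ_n(r) = I_n(r+d) / (r+d)."""
    s = r + d
    return [tuple(Fraction(a, s) for a in alpha)
            for alpha in product(range(s + 1), repeat=n) if sum(alpha) == s]

def hA(x1, x2):
    # the form of Example 1: (x1 - x2)^2 * x2^2 + x1^3 * x2
    return (x1 - x2)**2 * x2**2 + x1**3 * x2

n, d = 2, 4
for r in range(4):
    print(r, min(hA(*x) for x in grid(n, d, r)))   # grid values upper-bound h_A^min(Δ_2)
```

For this particular tensor the grid already attains the exact minimum 0 at the vertex $(1, 0)$, so every printed value is 0; in general the grid minimum only bounds $h_A^{\min}(\Delta_n)$ from above.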
Note that $v_C^{(r)}$ approximates the dual while $h_A^{\min}(\Delta_n(r))$ approximates the primal in (56). It is interesting to investigate the connection between these two approximations. The connection is given below (cf. [19] (Theorem 3.1) for d = 2 and [25] (Theorem 3.1) for d = 4).
Theorem 6.
Let $\Delta_n(r)$ be the rational discretization of the simplex $\Delta_n$ given in (58) for any $r \in \mathbb{N}_0$. Then for $Q \in S_{n,d}$ we have
\[
v_C^{(r)} = \frac{r!\, (r+d)^d}{(r+d)!} \min_{x \in \Delta_n(r)} \left\langle Q,\; x^{\otimes d} + \sum_{t=1}^{d-1} \frac{X^t}{(r+d)^{d-t}} \right\rangle,
\]
where $X^t \in S_{n,d}$.
Proof. 
Let $A := Q - \lambda E$ be a feasible point of the program given in (57). Then for $\omega \in I_n(r+d)$ it follows, from (51) and the definition of $C_{n,d}^{(r)}$, that
0 B ω ( A ) = B ω ( Q ) λ B ω ( E )
= r ! k = 1 n ( ω k ) ! Q , ω d + t = 1 ( d 1 ) Q , Y t λ c ( ω )
= ( r + d ) ( r + d 1 ) ( r + 1 ) r ! k = 1 n ( ω k ) ! Q , ω d + t = 1 ( d 1 ) Q , Y t ( r + d ) ( r + d 1 ) ( r + 1 ) λ c ( ω )
= c ( ω ) Q , ω d + t = 1 ( d 1 ) Q , Y t ( r + d ) ( r + d 1 ) ( r + 1 ) λ .
Giving,
Q , ω d + t = 1 ( d 1 ) Q , Y t ( r + d ) ( r + d 1 ) ( r + 1 ) λ .
The above implies that the maximum value of $\lambda$ in (57) is attained at the minimum, over $\omega \in I_n(d+r)$, of $\dfrac{\langle Q, \omega^{\otimes d} \rangle + \sum_{t=1}^{d-1} \langle Q, Y^t \rangle}{(r+d)(r+d-1)\cdots(r+1)}$. Thus, (57) can be equivalently written as follows:
v C ( r ) = min ω I n ( d + r ) ( r + d ) d 1 ( r + d 1 ) ( r + 1 ) Q , ω d + t = 1 ( d 1 ) Q , Y t ( r + d ) d = r ! ( r + d ) d ( r + d ) ! min ω I n ( d + r ) Q , ω d ( r + d ) d + t = 1 ( d 1 ) Y t ( r + d ) d .
A change of variables $\omega = (r+d)\, x$, where $x \in \Delta_n(r)$, together with $Y^t = (r+d)^t X^t$, in (65) yields the required expression, i.e.,
v C ( r ) = r ! ( r + d ) d ( r + d ) ! min x Δ n ( r ) Q , x d + t = 1 ( d 1 ) X t ( r + d ) d t .
Note that, for any $r \in \mathbb{N}_0$, we have the relation $h_Q^{\min}(\Delta_n(r)) \ge h_Q^{\min}(\Delta_n)$. However, in (66) a correction term $\sum_{t=0}^{d-1} \langle Q, X^t \rangle / (r+d)^{d-t}$ is added to the actual objective $\langle Q, x^{\otimes d} \rangle$ in order to obtain a closer approximation to $h_Q^{\min}(\Delta_n)$. Clearly, as $r \in \mathbb{N}_0$ increases, the value $v_C^{(r)}$ approaches the value $h_Q^{\min}(\Delta_n(r))$. However, one has to compensate with the factor $\frac{r!\,(r+d)^d}{(r+d)!} = 1 + O\!\left(\frac{1}{r^{d-1}}\right) > 1$. It would be interesting to find bounds on the difference between the two approximations, namely $v_C^{(r)}$ and $h_Q^{\min}(\Delta_n(r))$. To compute such a bound, we introduce some notation. First recall that $\left[{\theta \atop \alpha}\right]$ denotes the (unsigned) Stirling number. In addition, for $x \in \Delta_n$ and $t \in [0:d]$, we define a function $q_t(x)$ as follows:
\[
\begin{aligned}
q_t(x) &:= \frac{1}{(r+d)^{d-t}} \sum_{\substack{\theta \in I_n(d),\, \alpha \in I_n(t): \\ \alpha \le \theta}} \left[{\theta \atop \alpha}\right] c(\theta)\, Q_\theta\, x^\alpha \\
&= \frac{1}{(r+d)^{d-t}} \sum_{\substack{\theta \in I_n(d),\, \alpha \in I_n(t): \\ \alpha \le \theta,\; Q_\theta \ge 0}} \left[{\theta \atop \alpha}\right] c(\theta)\, |Q_\theta|\, x^\alpha
\;-\; \frac{1}{(r+d)^{d-t}} \sum_{\substack{\theta \in I_n(d),\, \alpha \in I_n(t): \\ \alpha \le \theta,\; Q_\theta < 0}} \left[{\theta \atop \alpha}\right] c(\theta)\, |Q_\theta|\, x^\alpha \\
&=: q_t^+(x) - q_t^-(x).
\end{aligned}
\]
If there is no dependence on the variable x, we simply write $q_t^{\pm}$; that is,
\[
q_t^+ := \frac{1}{(r+d)^{d-t}} \sum_{\substack{\theta \in I_n(d),\, \alpha \in I_n(t): \\ \alpha \le \theta,\; Q_\theta \ge 0}} \left[{\theta \atop \alpha}\right] c(\theta)\, |Q_\theta|.
\]
One can define $q_t^-$ analogously.
Theorem 7.
Let $\Delta_n(r)$ be the rational discretization of the simplex $\Delta_n$ given in (58) for any $r \in \mathbb{N}_0$. Then for $Q \in S_{n,d}$ we have
\[
- \sum_{t \in [0:(d-1)] \cap \mathbb{N}_{\mathrm{Even}}} q_t^{-} \;-\; \sum_{t \in [0:(d-1)] \cap \mathbb{N}_{\mathrm{Odd}}} q_t^{+}
\;\le\; \frac{(r+d)!}{r!\,(r+d)^d}\, v_C^{(r)} - h_Q^{\min}(\Delta_n(r))
\;\le\; \sum_{t \in [0:(d-1)] \cap \mathbb{N}_{\mathrm{Even}}} q_t^{+} \;+\; \sum_{t \in [0:(d-1)] \cap \mathbb{N}_{\mathrm{Odd}}} q_t^{-}.
\]
Proof. 
From the expression given in Equation (66),
( r + d ) ! r ! ( r + d ) d v C ( r ) = min x Δ n ( r ) Q , x d + Q , t = 0 ( d 1 ) X t ( r + d ) d t
= min x Δ n ( r ) Q , x d + t = 0 ( d 1 ) θ I n ( d ) c ( θ ) Q θ X θ t ( r + d ) d t
= min x Δ n ( r ) Q , x d + t = 0 ( d 1 ) α θ θ I n ( d ) & α I n ( t ) θ α c ( θ ) Q θ x α ( 1 ) t r + d d t
Notice that for $x \in \Delta_n$ and all $\alpha \in I_n(t)$ we have $0 \le x^\alpha \le 1$. From this observation, we have $q_t^{\pm}(x) \le q_t^{\pm}$. This observation leads to the following:
( r + d ) ! r ! ( r + d ) d v C ( r ) = min x Δ n ( r ) Q , x d + t = 0 ( d 1 ) q t ( x )
= min x Δ n ( r ) Q , x d + t [ 0 : ( d 1 ) ] N E v e n q t ( x ) + + t [ 0 : ( d 1 ) ] N O d d q t ( x ) t [ 0 : ( d 1 ) ] N E v e n q t ( x ) + t [ 0 : ( d 1 ) ] N O d d q t ( x ) +
min x Δ n ( r ) Q , x d + t [ 0 : ( d 1 ) ] N E v e n q t ( x ) + + t [ 0 : ( d 1 ) ] N O d d q t ( x )
min x Δ n ( r ) Q , x d + t [ 0 : ( d 1 ) ] N E v e n q t + + t [ 0 : ( d 1 ) ] N O d d q t
h Q min ( Δ n ( r ) ) + t [ 0 : ( d 1 ) ] N E v e n q t + + t [ 0 : ( d 1 ) ] N O d d q t
Thus, we have
( r + d ) ! r ! ( r + d ) d v C ( r ) h Q min ( Δ n ( r ) ) t [ 0 : ( d 1 ) ] N E v e n q t + + t [ 0 : ( d 1 ) ] N O d d q t
The lower bound can be derived in a similar manner. □

6. Conclusions

This paper focused on describing the copositive tensor cone and its approximations. We have shown that a Z-tensor is copositive if and only if it is positive semidefinite. The result has already appeared in [32] (Theorem 3.5(e) and Theorem 3.12); however, the proof given by Zhang et al. relies heavily on the notion of M-tensors and on convex analysis, whereas the proof provided here is simpler and self-contained. We have discussed approximation hierarchies for the copositive cone, focusing on providing a compact representation of these hierarchies. For the Parrilo cone $K_{n,d}^{(r)}$, the proof techniques are a straightforward generalization to the higher-order case. For the cone $C_{n,d}^{(r)}$, a more involved approach is used to derive the representation; most of the notions used are new and have not appeared in the literature. We have illustrated this by applying our compact representation to polynomial optimization over the simplex. We have compared the approximation obtained from our representation of $C_{n,d}^{(r)}$ with the approximation based on rational gridding, and bounds between the two approximations have been proved. Moreover, the characterization helped to simplify the proofs and results related to approximating polynomial optimization over the simplex. In the future, it would be interesting to investigate the convergence rate of these approximations.
We will also work towards utilizing these hierarchies to provide approximation results for copositive optimization, especially to recover the approximation results for polynomial optimization over the simplex obtained by de Klerk and co-authors [21,22,23,24]. Furthermore, our aim is to use these approximation hierarchies to develop numerical algorithms for application domains such as approximating clique numbers of uniform hypergraphs (see, e.g., [37,38]).

Author Contributions

Writing—original draft, M.F.I.; Writing—review & editing, F.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Maricic, B.; Luo, Z.; Davidson, T. Blind constant modulus equalization via convex optimization. IEEE Trans. Signal Process. 2003, 51, 805–818. [Google Scholar] [CrossRef]
  2. Sidiropoulos, N.D.; De Lathauwer, L.; Fu, X.; Huang, K.; Papalexakis, E.E.; Faloutsos, C. Tensor decomposition for signal processing and machine learning. IEEE Trans. Signal Process. 2017, 65, 3551–3582. [Google Scholar] [CrossRef]
  3. Weiland, S.; van Belzen, F. Singular Value Decompositions and Low Rank Approximations of Tensors. Signal Process. IEEE Trans. 2010, 58, 1171–1182. [Google Scholar] [CrossRef]
  4. Cohen, N.; Sharir, O.; Shashua, A. On the expressive power of deep learning: A tensor analysis. In Proceedings of the Conference on Learning Theory, New York, NY, USA, 23–26 June 2016; pp. 698–728. [Google Scholar]
  5. Kolda, T.G.; Bader, B.W. Tensor decompositions and applications. SIAM Rev. 2009, 51, 455–500. [Google Scholar] [CrossRef]
  6. Soare, S.; Yoon, J.W.; Cazacu, O. On the use of homogeneous polynomials to develop anisotropic yield functions with applications to sheet forming. Int. J. Plast. 2008, 24, 915–944. [Google Scholar] [CrossRef]
  7. Micchelli, C.A.; Olsen, P. Penalized maximum-likelihood estimation, the Baum-Welch algorithm, diagonal balancing of symmetric matrices and applications to training acoustic data. J. Comput. Appl. Math. 2000, 119, 301–331. [Google Scholar] [CrossRef] [Green Version]
  8. Hamadneh, T.; Ali, M.; AL-Zoubi, H. Linear optimization of polynomial rational functions: Applications for positivity analysis. Mathematics 2020, 8, 283. [Google Scholar] [CrossRef] [Green Version]
  9. Henrion, D.; Garulli, A. Positive Polynomials in Control; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2005; Volume 312. [Google Scholar]
  10. Berman, A.; Shaked-Monderer, N. Completely Positive Matrices; World Scientific Publishing Company Pte Limited: Singapore, 2003. [Google Scholar]
  11. Dickinson, P.J.C. The Copositive Cone, the Completely Positive Cone and Their Generalisations. Ph.D. Thesis, Groningen University, Groningen, The Netherlands, 2013. [Google Scholar]
  12. Kostyukova, O.; Tchemisova, T. Structural Properties of Faces of the Cone of Copositive Matrices. Mathematics 2021, 9, 2698. [Google Scholar] [CrossRef]
  13. Berman, A.; Dür, M.; Shaked-Monderer, N. Open Problems in the Theory of Completely Positive and Copositive Matrices. Electron. J. Linear Algebra 2015, 29, 46–58. [Google Scholar] [CrossRef]
  14. Song, Y.; Qi, L. Necessary and sufficient conditions for copositive tensors. Linear Multilinear Algebra 2015, 63, 120–131. [Google Scholar] [CrossRef] [Green Version]
  15. Qi, L. Symmetric nonnegative tensors and copositive tensors. Linear Algebra Its Appl. 2013, 439, 228–238. [Google Scholar] [CrossRef]
  16. Qi, L.; Luo, Z. Tensor Analysis: Spectral Theory and Special Tensors; SIAM: Philadelphia, PA, USA, 2017. [Google Scholar]
  17. Parrilo, P.A. Structured Semidefinite Programs and Semialgebraic Geometry Methods in Robustness and Optimization. Ph.D. Thesis, California Institute of Technology, Pasadena, CA, USA, 2000. [Google Scholar]
  18. de Klerk, E.; Pasechnik, D.V. Approximation of the Stability Number of a Graph via Copositive Programming. SIAM J. Optim. 2002, 12, 875–892. [Google Scholar] [CrossRef]
  19. Bomze, I.M.; De Klerk, E. Solving standard quadratic optimization problems via linear, semidefinite and copositive programming. J. Glob. Optim. 2002, 24, 163–185. [Google Scholar] [CrossRef]
  20. Nesterov, Y. Global quadratic optimization on the sets with simplex structure. In LIDAM Discussion Papers CORE 1999015; Université Catholique de Louvain, Center for Operations Research and Econometrics (CORE): Louvain-la-Neuve, Belgium, 1999. [Google Scholar]
  21. de Klerk, E.; Laurent, M.; Parrilo, P.A. A PTAS for the minimization of polynomials of fixed degree over the simplex. Theor. Comput. Sci. 2006, 361, 210–225. [Google Scholar] [CrossRef] [Green Version]
  22. de Klerk, E.; Laurent, M.; Sun, Z. An alternative proof of a PTAS for fixed-degree polynomial optimization over the simplex. Math. Program. 2015, 151, 433–457. [Google Scholar] [CrossRef] [Green Version]
  23. de Klerk, E.; Laurent, M.; Sun, Z. An Error Analysis for Polynomial Optimization over the Simplex Based on the Multivariate Hypergeometric Distribution. SIAM J. Optim. 2015, 25, 1498–1514. [Google Scholar] [CrossRef] [Green Version]
  24. de Klerk, E.; Laurent, M.; Sun, Z.; Vera, J.C. On the convergence rate of grid search for polynomial optimization over the simplex. Optim. Lett. 2017, 11, 597–608. [Google Scholar] [CrossRef] [Green Version]
  25. Ling, C.; He, H.; Qi, L. Improved approximation results on standard quartic polynomial optimization. Optim. Lett. 2017, 11, 1767–1782. [Google Scholar] [CrossRef]
  26. Hu, S.; Li, G.; Qi, L. A tensor analogy of Yuan’s theorem of the alternative and polynomial optimization with sign structure. J. Optim. Theory Appl. 2016, 168, 446–474. [Google Scholar] [CrossRef] [Green Version]
  27. Luo, Z.; Qi, L.; Ye, Y. Linear operators and positive semidefiniteness of symmetric tensor spaces. Sci. China Math. 2015, 58, 197–212. [Google Scholar] [CrossRef] [Green Version]
  28. Rockafellar, R. Convex Analysis; Princeton Landmarks in Mathematics and Physics; Princeton University Press: Princeton, NJ, USA, 1970. [Google Scholar]
  29. Ahmadi, A.A.; Parrilo, P.A. A convex polynomial that is not SOS-convex. Math. Program. 2012, 135, 275–292. [Google Scholar] [CrossRef]
  30. Peña, J.; Vera, J.C.; Zuluaga, L.F. Completely positive reformulations for polynomial optimization. Math. Program. 2015, 151, 405–431. [Google Scholar] [CrossRef] [Green Version]
  31. Ahmed, F. Copositive Programming and Related Problems. Ph.D. Thesis, University of Twente, Twente, The Netherlands, 2014. [Google Scholar]
  32. Zhang, L.; Qi, L.; Zhou, G. M-tensors and some applications. SIAM J. Matrix Anal. Appl. 2014, 35, 437–452. [Google Scholar] [CrossRef]
  33. Reznick, B. Uniform denominators in Hilbert’s seventeenth problem. Math. Z. 1995, 220, 75–97. [Google Scholar] [CrossRef]
  34. Chen, H.; Wang, Y.; Zhou, G. High-order sum-of-squares structured tensors: Theory and applications. Front. Math. China 2020, 15, 255–284. [Google Scholar] [CrossRef]
  35. Ahmadi, A.A.; Majumdar, A. DSOS and SDSOS optimization: More tractable alternatives to sum of squares and semidefinite optimization. SIAM J. Appl. Algebra Geom. 2019, 3, 193–230. [Google Scholar] [CrossRef]
  36. Graham, R.L.; Knuth, D.E.; Patashnik, O. Concrete Mathematics: A Foundation for Computer Science; Addison-Wesley: Boston, MA, USA, 1994. [Google Scholar]
  37. Ahmed, F.; Still, G. Two methods for the maximization of homogeneous polynomials over the simplex. Comput. Optim. Appl. 2021, 80, 523–548. [Google Scholar] [CrossRef]
  38. Rota Bulò, S.; Pelillo, M. A generalization of the Motzkin–Straus theorem to hypergraphs. Optim. Lett. 2009, 3, 287–295. [Google Scholar] [CrossRef] [Green Version]