Article

Biquadratic Tensors: Eigenvalues and Structured Tensors

1 Department of Applied Mathematics, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong
2 School of Mathematical Sciences, Beihang University, Beijing 100191, China
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(7), 1158; https://doi.org/10.3390/sym17071158
Submission received: 13 June 2025 / Revised: 11 July 2025 / Accepted: 18 July 2025 / Published: 20 July 2025
(This article belongs to the Section Mathematics)

Abstract

The covariance tensors in statistics and the Riemann curvature tensor in relativity theory are both biquadratic tensors that are weakly symmetric, but not symmetric in general. Motivated by this, in this paper, we consider nonsymmetric biquadratic tensors and extend M-eigenvalues to nonsymmetric biquadratic tensors by symmetrizing these tensors. We present a Gershgorin-type theorem for biquadratic tensors, and show that (strictly) diagonally dominated biquadratic tensors are positive semi-definite (definite). We introduce Z-biquadratic tensors, M-biquadratic tensors, strong M-biquadratic tensors, $B_0$-biquadratic tensors, and B-biquadratic tensors. We show that M-biquadratic tensors and symmetric $B_0$-biquadratic tensors are positive semi-definite, and that strong M-biquadratic tensors and symmetric B-biquadratic tensors are positive definite. A Riemannian Limited-memory Broyden–Fletcher–Goldfarb–Shanno (LBFGS) method for computing the smallest M-eigenvalue of a general biquadratic tensor is presented. Numerical results are reported.

1. Introduction

Let $m, n$ be positive integers with $m, n \ge 2$. We call a real fourth-order $(m \times n \times m \times n)$-dimensional tensor $\mathcal{A} = (a_{i_1 j_1 i_2 j_2}) \in \mathbb{R}^{m \times n \times m \times n}$ a biquadratic tensor. This is different from the definition in [1]. Denote $[n] := \{1, \dots, n\}$. If for all $i_1, i_2 \in [m]$ and $j_1, j_2 \in [n]$ we have
\[ a_{i_1 j_1 i_2 j_2} = a_{i_2 j_2 i_1 j_1}, \]
then we say that $\mathcal{A}$ is weakly symmetric. If furthermore for all $i_1, i_2 \in [m]$ and $j_1, j_2 \in [n]$ we have
\[ a_{i_1 j_1 i_2 j_2} = a_{i_2 j_1 i_1 j_2} = a_{i_1 j_2 i_2 j_1}, \]
then we say that $\mathcal{A}$ is symmetric. In fact, all the biquadratic tensors in [1] are symmetric biquadratic tensors in the sense of this paper. We denote the set of all biquadratic tensors in $\mathbb{R}^{m \times n \times m \times n}$ by $BQ(m,n)$, the set of all weakly symmetric biquadratic tensors in $BQ(m,n)$ by $WBQ(m,n)$, and the set of all symmetric biquadratic tensors in $BQ(m,n)$ by $SBQ(m,n)$. Then $BQ(m,n)$, $WBQ(m,n)$, and $SBQ(m,n)$ are all linear spaces.
A biquadratic tensor $\mathcal{A} \in BQ(m,n)$ is called positive semi-definite if for any $\mathbf{x} \in \mathbb{R}^m$ and $\mathbf{y} \in \mathbb{R}^n$,
\[ f(\mathbf{x}, \mathbf{y}) \equiv \langle \mathcal{A}, \mathbf{x}\circ\mathbf{y}\circ\mathbf{x}\circ\mathbf{y} \rangle \equiv \sum_{i_1, i_2 = 1}^{m} \sum_{j_1, j_2 = 1}^{n} a_{i_1 j_1 i_2 j_2}\, x_{i_1} y_{j_1} x_{i_2} y_{j_2} \ \ge\ 0. \]
Here, $\circ$ denotes the vector outer product, so the $(i_1, j_1, i_2, j_2)$th element of $\mathbf{x}\circ\mathbf{y}\circ\mathbf{x}\circ\mathbf{y}$ is the product $x_{i_1} y_{j_1} x_{i_2} y_{j_2}$. The tensor $\mathcal{A}$ is called an SOS (sum-of-squares) biquadratic tensor if $f(\mathbf{x}, \mathbf{y})$ can be written as a sum of squares. A biquadratic tensor $\mathcal{A} \in BQ(m,n)$ is called positive definite if for any $\mathbf{x} \in \mathbb{R}^m$ with $\mathbf{x}^\top\mathbf{x} = 1$ and $\mathbf{y} \in \mathbb{R}^n$ with $\mathbf{y}^\top\mathbf{y} = 1$,
\[ f(\mathbf{x}, \mathbf{y}) \equiv \langle \mathcal{A}, \mathbf{x}\circ\mathbf{y}\circ\mathbf{x}\circ\mathbf{y} \rangle \equiv \sum_{i_1, i_2 = 1}^{m} \sum_{j_1, j_2 = 1}^{n} a_{i_1 j_1 i_2 j_2}\, x_{i_1} y_{j_1} x_{i_2} y_{j_2} \ >\ 0. \]
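The biquadratic form $f(\mathbf{x},\mathbf{y})$ is a single four-way contraction, so it is convenient to evaluate numerically. The following minimal sketch (in Python with NumPy; the helper name biquadratic_form is ours, not from the paper) illustrates the computation for a tensor stored as an $m\times n\times m\times n$ array.

```python
import numpy as np

def biquadratic_form(A, x, y):
    """Evaluate f(x, y) = <A, x o y o x o y> for A of shape (m, n, m, n)."""
    # Contract a_{i1 j1 i2 j2} x_{i1} y_{j1} x_{i2} y_{j2}.
    return np.einsum('ijkl,i,j,k,l->', A, x, y, x, y)

# Tiny illustration with a random weakly symmetric biquadratic tensor.
m, n = 3, 4
B = np.random.rand(m, n, m, n)
A = 0.5 * (B + B.transpose(2, 3, 0, 1))   # enforce a_{i1j1i2j2} = a_{i2j2i1j1}
x, y = np.random.randn(m), np.random.randn(n)
print(biquadratic_form(A, x, y))
```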
In [2], bi-block tensors were studied. A bi-block tensor may have an order higher than 4, but its dimension is uniformly n. We recall that a tensor A = ( a i 1 i m ) with i 1 , , i m [ n ] is called a cubic tensor [3]. Hence, bi-block tensors are special cubic tensors. On the other hand, biquadratic tensors are not cubic tensors. Thus, they are different.
Biquadratic tensors arise in solid mechanics [4,5], statistics [6,7], quantum physics [8], spectral graph theory [9], and polynomial theory [10,11]. In the next section, we review such application background of biquadratic tensors. In particular, we point out that the covariance tensors in statistics and Riemann curvature tensor in relativity theory are both biquadratic tensors that are weakly symmetric, but not symmetric in general. This motivates us to consider nonsymmetric biquadratic tensors and study possible conditions and algorithms for identifying positive semi-definiteness and definiteness of such biquadratic tensors.
In the study of strong ellipticity condition of the elasticity tensor in solid mechanics, in 2009, Qi, Dai and Han [12] introduced M-eigenvalues for symmetric biquadratic tensors. In Section 3, we extend this definition to nonsymmetric biquadratic tensors by symmetrizing them.
In Section 4, we present a Gershgorin-type theorem for biquadratic tensors. We introduce diagonally dominated biquadratic tensors and strictly diagonally dominated biquadratic tensors. Then, we show that diagonally dominated biquadratic tensors are positive semi-definite, and strictly diagonally dominated biquadratic tensors are positive definite.
In Section 5, we introduce Z-biquadratic tensors, M-biquadratic tensors, strong M-biquadratic tensors, B 0 -biquadratic tensors, and B-biquadratic tensors. We show that M-biquadratic tensors and symmetric B 0 -biquadratic tensors are positive semi-definite, and  that strong M-biquadratic tensors and symmetric B-biquadratic tensors are positive definite. A Riemannian LBFGS method for computing the smallest M-eigenvalue of a general biquadratic tensor is presented in Section 6. Convergence analysis of this algorithm is also given there. Numerical results are reported in Section 7. Some final remarks are made in Section 8.

2. Application Backgrounds of Biquadratic Tensors

2.1. The Covariance Tensor in Statistics

The covariance matrix is a key concept in statistics and machine learning. It is a square matrix that contains the covariances between multiple random variables. The diagonal elements represent the variances of each variable, reflecting their individual dispersion. The off-diagonal elements represent the covariances between pairs of variables, indicating their linear relationships. The covariance matrix is symmetric and positive semi-definite, providing a comprehensive view of the interdependencies among variables. However, when the variable takes the form of a matrix, the corresponding covariance matrix transforms into a covariance tensor.
We let X = ( X i j ) R m × n be a random matrix with each element being a random variable. We denote the mean and variance of the element X i j as E ( X i j ) = μ i j and var ( X i j ) = σ i j ( i [ m ] , j [ n ] ), respectively. The covariance with another element X k l is denoted as cov ( X i j , X k l ) = σ i j k l . Subsequently, it is formulated as a fourth-order covariance tensor. The fourth-order covariance tensor was proposed in [6] for portfolio selection problems and in [7] for group identification, respectively.
For any random matrix X R m × n , its fourth-order covariance tensor is defined as A = ( σ i 1 j 1 i 2 j 2 ) R m × n × m × n , where
\[ \sigma_{i_1 j_1 i_2 j_2} = E\big[(X_{i_1 j_1} - \mu_{i_1 j_1})(X_{i_2 j_2} - \mu_{i_2 j_2})\big]. \tag{2} \]
Then A is a weakly symmetric biquadratic tensor, which may be nonsymmetric.
Proposition 1.
The fourth-order covariance tensor defined in (2) is positive semi-definite.
Proof. 
For any x R m and y R n , we have
\[ \langle \mathcal{A}, \mathbf{x}\circ\mathbf{y}\circ\mathbf{x}\circ\mathbf{y}\rangle = \sum_{i_1,i_2=1}^{m}\sum_{j_1,j_2=1}^{n} E\big[(X_{i_1 j_1}-\mu_{i_1 j_1})(X_{i_2 j_2}-\mu_{i_2 j_2})\big]\, x_{i_1}y_{j_1}x_{i_2}y_{j_2} = E\Big[\sum_{i_1,i_2=1}^{m}\sum_{j_1,j_2=1}^{n}(X_{i_1 j_1}-\mu_{i_1 j_1})(X_{i_2 j_2}-\mu_{i_2 j_2})\, x_{i_1}y_{j_1}x_{i_2}y_{j_2}\Big] = E\Big[\big(\mathbf{x}^\top(X-U)\mathbf{y}\big)^2\Big] \ \ge\ 0. \]
Here, U = ( μ i j ) R m × n . This completes the proof.    □
We let $X^{(t)}$ represent the $t$th observed matrix data for $t = 1, \dots, T$. We assume that each $X^{(t)}$ is an independent and identically distributed (iid) sample of the random matrix $X = (X_{ij}) \in \mathbb{R}^{m\times n}$. Then the estimated mean is $\bar X = (\bar X_{ij}) \in \mathbb{R}^{m\times n}$, where $\bar X_{ij} = \frac{1}{T}\sum_{t=1}^{T} X_{ij}^{(t)}$. A natural estimate of the covariance can be formulated as $\hat{\mathcal{A}} = (\hat\sigma_{i_1 j_1 i_2 j_2}) \in \mathbb{R}^{m\times n\times m\times n}$, where
\[ \hat\sigma_{ijkl} = \frac{1}{T}\sum_{t=1}^{T}\big(X_{ij}^{(t)} - \bar X_{ij}\big)\big(X_{kl}^{(t)} - \bar X_{kl}\big). \tag{3} \]
Proposition 2.
The fourth-order covariance tensor defined in (3) is a positive semi-definite biquadratic tensor.
Proof. 
This result can be proven similarly to that of Proposition 1; for brevity, we omit the detailed proof here.    □
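As an illustration of the estimator (3), the following sketch (Python with NumPy; the name covariance_tensor is ours, and the $1/T$ normalization is our choice) computes the empirical fourth-order covariance tensor from $T$ matrix samples and checks positive semi-definiteness on random directions, in line with Propositions 1 and 2.

```python
import numpy as np

def covariance_tensor(X):
    """Empirical fourth-order covariance tensor of matrix samples.

    X has shape (T, m, n); the result has shape (m, n, m, n) and follows
    the estimator in (3) with a 1/T normalization (our convention)."""
    T = X.shape[0]
    Xc = X - X.mean(axis=0)                      # center each entry
    return np.einsum('tij,tkl->ijkl', Xc, Xc) / T

# Sanity check: <A, x o y o x o y> should be nonnegative for random x, y
# (cf. Propositions 1 and 2).
T, m, n = 200, 3, 4
X = np.random.randn(T, m, n)
A = covariance_tensor(X)
x, y = np.random.randn(m), np.random.randn(n)
print(np.einsum('ijkl,i,j,k,l->', A, x, y, x, y) >= -1e-12)
```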

2.2. The Elasticity Tensor in Solid Mechanics

The field equations for a homogeneous, compressible, nonlinearly elastic material in two or three dimensions, pertaining to static problems in the absence of body forces, can be formulated as follows [4]:
\[ a_{ijkl}\big(\mathbf{1} + \nabla\mathbf{u}\big)\, u_{k,lj} = 0. \tag{4} \]
Here, $u_i(X)$ (with $i = 1, 2$ or $i = 1, 2, 3$) represents the displacement field, $X$ denotes the coordinate of a material point in its reference configuration, and $u_{k,lj} = \partial^2 u_k / (\partial x_l \partial x_j)$. Given $n = 2$ or $n = 3$, $\mathcal{A} = (a_{i_1 j_1 i_2 j_2}) \in BQ(n, n)$ signifies the elasticity tensor and
\[ a_{ijkl}(F) = a_{klij}(F) = \frac{\partial^2 W(F)}{\partial F_{ij}\,\partial F_{kl}}, \qquad F = \mathbf{1} + \nabla\mathbf{u}. \]
The tensor of elastic moduli is invariant with respect to the following permutations of indices [5]:
\[ a_{ijkl} = a_{jikl} = a_{ijlk}. \]
For hyperelastic materials, $a_{ijkl}$ also has the property
\[ a_{ijkl} = a_{klij}. \]
The elasticity tensor is the most well-known tensor in solid mechanics and engineering [13]. The above equations are strongly elliptic if and only if (1) holds for any $\mathbf{x} \in \mathbb{R}^n$ and $\mathbf{y} \in \mathbb{R}^n$. Several methods were proposed for verifying strong ellipticity of the elasticity tensor [2,14,15,16].
In solid mechanics, the Eshelby tensor is also a biquadratic tensor. The Eshelby inclusion problem is one of the hottest topics in modern solid mechanics [17].
In 2009, Qi, Dai, and Han [12] proposed an optimization method for tackling the problem of the strong ellipticity and also to give an algorithm for computing the most possible directions along which the strong ellipticity can fail. Subsequently, a practical method for computing the largest M-eigenvalue of a fourth-order partially symmetric tensor was proposed by Wang et al. [18]. Later, in [15,16], bounds of M-eigenvalues and strong ellipticity conditions for elasticity tensors were provided.

2.3. The Riemann Curvature Tensor in Relativity Theory

In the domain of differential geometry, the Riemann curvature tensor, or the Riemann–Christoffel tensor (named after Bernhard Riemann and Elwin Bruno Christoffel), stands as the preeminent method for describing the curvature of Riemannian manifolds [19]. This tensor assigns a specific tensor to each point on a Riemannian manifold, thereby constituting a tensor field. Essentially, it serves as a local invariant of Riemannian metrics, quantifying the discrepancy in the commutation of second covariant derivatives. Notably, a Riemannian manifold possesses zero curvature if and only if it is flat, meaning it is locally isometric to Euclidean space.
We let ( M , g ) be a Riemannian manifold, and  X ( M )  be the space of all vector fields on M. The Riemann curvature tensor is defined as a map X ( M ) × X ( M ) × X ( M ) X ( M ) by the following formula:
\[ R(X, Y)Z = \nabla_X \nabla_Y Z - \nabla_Y \nabla_X Z - \nabla_{[X, Y]} Z, \]
where ∇ is the Levi–Civita connection, [ X , Y ] is the Lie bracket of vector fields. It turns out that the right-hand side actually only depends on the value of the vector fields  X , Y , Z  at a given point. Hence, R is a (1,3)-tensor field.  By using the tensor index notation and the Christoffel symbols, we have
\[ R^i{}_{jkl} = \partial_k \Gamma^i_{lj} - \partial_l \Gamma^i_{kj} + \Gamma^i_{kp}\Gamma^p_{lj} - \Gamma^i_{lp}\Gamma^p_{kj}, \]
where the $\Gamma^i_{lj}$ are the Christoffel symbols of the second kind. For convenience, we also write $R^i{}_{jkl}$ as $R_{ijkl}$.
We denote R = ( R i j k l ) B Q ( m , n ) as the Riemann curvature tensor. Then it is a biquadratic tensor. The curvature tensor has the following symmetry properties [8]:
\[ R_{ijkl} = -R_{jikl} = -R_{ijlk} = R_{klij}, \qquad R_{ijkl} + R_{iljk} + R_{iklj} = 0. \]
By R i j k l = R k l i j for all i , k = 1 , , m and j , l = 1 , , n , the Riemann curvature tensor is a weakly symmetric biquadratic tensor, and it may not be symmetric.

2.4. Bipartite Graphs and Graph Matching

Bipartite matching, or bipartite graph matching, is a fundamental problem in graph theory and combinatorics. It involves finding an optimal way to pair nodes from two disjoint sets in a bipartite graph, ensuring that no two pairs share nodes from the same set. This problem arises in various real-world scenarios, such as job assignment, task scheduling, and network flow optimization [9].
We consider a bipartite graph with two subgraphs G 1 = ( V 1 , E 1 ) and G 2 = ( V 2 , E 2 ) , where V 1 and V 2 are disjoint sets of points with | V 1 | = m , | V 2 | = n , and E 1 , E 2 are sets of edges. The bipartite graph matching aims to find the best correspondence (also referred to as matching) between V 1 and V 2 with the maximum matching score. Specifically, we let X = ( x i j ) R m × n be the assignment matrix between V 1 and V 2 , i.e.,  x i j = 1 if i V 1 is assigned to j V 2 and x i j = 0 otherwise. Two edges ( i 1 , i 2 ) E 1 and ( j 1 , j 2 ) E 2 are considered matched if and only if x i 1 j 1 = x i 2 j 2 = 1 . Namely, i 1 V 1 is assigned to j 1 V 2 and i 2 V 1 is assigned to j 2 V 2 .
We let a i 1 j 1 i 2 j 2 be the matching score between ( i 1 , j 1 ) and ( i 2 , j 2 ) , where higher values indicate greater matching likelihood. Then A = ( a i 1 j 1 i 2 j 2 ) B Q ( m , n ) is a biquadratic tensor. Here, A denotes the affinity tensor [9] that encodes pairwise similarity through a fusion of complementary metrics, combining edge-based geometric and attribute similarities with node-level features as well as higher-order topological relationships and local neighborhood structure. We assume that m n . We let 1 n R n denote the vector of all ones. The graph matching problem can be formulated as
\[ \max_{X \in \{0,1\}^{m\times n}} \ \sum_{(i_1, i_2)\in E_1}\ \sum_{(j_1, j_2)\in E_2} a_{i_1 j_1 i_2 j_2}\, x_{i_1 j_1} x_{i_2 j_2} \qquad \mathrm{s.t.} \quad X\mathbf{1}_n = \mathbf{1}_m. \]
It is commonly assumed that a i 1 j 1 i 2 j 2 0 and a i 1 j 1 i 2 j 2 = a i 2 j 2 i 1 j 1 , i.e., A is both weakly symmetric and nonnegative.
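For very small instances, the matching objective above can be evaluated by brute force. The sketch below (Python with NumPy; the function and variable names are ours, and the random affinity tensor is purely illustrative) enumerates injective assignments of $V_1$ into $V_2$; it sums the affinity values over all node pairs, which matches the objective under the assumption that entries corresponding to non-edges are zero.

```python
import numpy as np
from itertools import permutations

def matching_score(A, assign):
    """Score of an assignment of V1 (size m) into V2 (size n), m <= n.
    assign[i] = j means node i in V1 is matched to node j in V2."""
    m = len(assign)
    return sum(A[i1, assign[i1], i2, assign[i2]]
               for i1 in range(m) for i2 in range(m))

# Brute-force search over all injective assignments (tiny instances only).
m, n = 3, 4
A = np.random.rand(m, n, m, n)
A = 0.5 * (A + A.transpose(2, 3, 0, 1))        # weak symmetry, nonnegative
best = max(permutations(range(n), m), key=lambda p: matching_score(A, p))
print(best, matching_score(A, best))
```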
Adjacency tensors and Laplacian tensors are basic tools in spectral hypergraph theory [20,21]. Given a bipartite graph, the corresponding adjacency tensors and Laplacian tensors can also be formulated as biquadratic tensors.

2.5. Biquadratic Polynomials and Polynomial Theory

Given a biquadratic tensor $\mathcal{A} \in BQ(m,n)$, the study of the biquadratic polynomial $f(\mathbf{x},\mathbf{y}) \equiv \langle \mathcal{A}, \mathbf{x}\circ\mathbf{y}\circ\mathbf{x}\circ\mathbf{y}\rangle$ constitutes a significant branch of polynomial theory.
In 2009, Ling et al. [11] studied the biquadratic optimization over unit spheres,
\[ \min_{\mathbf{x}\in\mathbb{R}^m,\ \mathbf{y}\in\mathbb{R}^n} \ \langle \mathcal{A}, \mathbf{x}\circ\mathbf{y}\circ\mathbf{x}\circ\mathbf{y}\rangle \qquad \mathrm{s.t.} \quad \|\mathbf{x}\| = 1,\ \|\mathbf{y}\| = 1. \]
They then presented various approximation methods based on semi-definite programming relaxations. In 2012, Yang and Yang [22] studied biquadratic optimization with more general constraints. They relaxed the original problem to a semi-definite programming problem, discussed the approximation ratio between them, and showed that the relaxed problem is tight under some conditions. In 2016, O'Rourke et al. [23] subsequently leveraged the theory of biquadratic optimization to address the joint signal-beamformer optimization problem and proposed a semi-definite relaxation to formulate a more manageable version of the problem. This demonstrates the significance of investigating biquadratic optimization.
One important problem in polynomial theory is Hilbert’s 17th Problem. In 1900, German mathematician David Hilbert listed 23 unsolved mathematical challenges proposed at the International Congress of Mathematicians. Among these problems, Hilbert’s 17th Problem stands out as it relates to the representation of polynomials. Specifically, Hilbert’s 17th Problem asks whether every nonnegative polynomial can be represented as a sum of squares (SOS) of rational functions. The Motzkin polynomial is the first example of a nonnegative polynomial that is not an SOS. Hilbert proved that in several cases, every nonnegative polynomial is an SOS, including univariate polynomials, quadratic polynomials in any number of variables, bivariate quartic polynomials  (i.e., polynomials of Degree 4 in two variables) [24]. In general, a quartic nonnegative polynomial may not be represented as an SOS, such as the Robinson polynomial and the Choi–Lam polynomial. Very recently, Cui, Qi, and Xu [10] established that despite being four-variable quartic polynomials, all nonnegative biquadratic polynomials with m = n = 2 can be expressed as the sum of squares of three quadratic polynomials. The key technique is that an m × n biquadratic polynomial can be expressed as a tripartite homogeneous quartic polynomial of m + n 1  variables.
If A B Q ( m , n ) is a nonnegative diagonal biquadratic tensor, then we have
\[ f(\mathbf{x},\mathbf{y}) = \sum_{i=1}^{m}\sum_{j=1}^{n} a_{ijij}\, x_i^2 y_j^2, \]
which is an SOS expression. In the following, we present a sufficient condition for the nonnegative biquadratic polynomial to be SOS.
Proposition 3.
Given a biquadratic tensor $\mathcal{A} = (a_{i_1 j_1 i_2 j_2}) \in BQ(m,n)$ with $m, n \ge 2$ and the biquadratic polynomial $f(\mathbf{x},\mathbf{y}) \equiv \langle \mathcal{A}, \mathbf{x}\circ\mathbf{y}\circ\mathbf{x}\circ\mathbf{y}\rangle$, suppose that there exist factor matrices $A^{(k)} = \big(a_{ij}^{(k)}\big) \in \mathbb{R}^{m\times n}$ for $k = 1, \dots, K$ such that $\mathcal{A} = \sum_{k=1}^{K} A^{(k)} \circ A^{(k)}$, i.e., $a_{i_1 j_1 i_2 j_2} = \sum_{k=1}^{K} a_{i_1 j_1}^{(k)} a_{i_2 j_2}^{(k)}$. Then $f$ can be expressed as an SOS.
Proof. 
We suppose that A = k = 1 K A ( k ) A ( k ) . Then for any x R m and y R n , we have
\[ f(\mathbf{x},\mathbf{y}) = \langle \mathcal{A}, \mathbf{x}\circ\mathbf{y}\circ\mathbf{x}\circ\mathbf{y}\rangle = \sum_{k=1}^{K} \big(\mathbf{x}^\top A^{(k)} \mathbf{y}\big)^2, \]
which is an SOS. This completes the proof.    □
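Proposition 3 is easy to check numerically. The sketch below (Python with NumPy; the variable names are ours) builds $\mathcal{A}$ from random factor matrices and verifies that $f(\mathbf{x},\mathbf{y})$ agrees with the sum of squares $\sum_k (\mathbf{x}^\top A^{(k)}\mathbf{y})^2$.

```python
import numpy as np

# A minimal check of Proposition 3: if A = sum_k A^(k) o A^(k), then
# f(x, y) = sum_k (x^T A^(k) y)^2, which is manifestly an SOS.
m, n, K = 3, 4, 5
factors = [np.random.randn(m, n) for _ in range(K)]
A = sum(np.einsum('ij,kl->ijkl', Ak, Ak) for Ak in factors)

x, y = np.random.randn(m), np.random.randn(n)
lhs = np.einsum('ijkl,i,j,k,l->', A, x, y, x, y)
rhs = sum((x @ Ak @ y) ** 2 for Ak in factors)
print(np.isclose(lhs, rhs))   # True up to rounding
```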
In fact, the nonnegative diagonal biquadratic tensor A = ( a i 1 j 1 i 2 j 2 ) may be rewritten as
\[ \mathcal{A} = \sum_{i=1}^{m}\sum_{j=1}^{n} a_{ijij}\, E_{ij} \circ E_{ij}, \]
where E i j { 0 , 1 } m × n contains exactly one nonzero entry, specifically located at the ( i , j ) th position.
We consider a radar datacube collected over L pulses with N fast time samples per pulse in M spatial bins. The biquadratic tensor in [23] could be written as
\[ \mathcal{C} = \sum_{q=1}^{Q} \Gamma_q \Gamma_q^H, \]
where $\Gamma_q \in \mathbb{C}^{NML \times N}$ is the spatio-Doppler response matrix and $Q$ is the number of independent clutter patches. If each $\Gamma_q$ is real, then $\mathcal{C}$ satisfies the assumptions of Proposition 3.

3. Eigenvalues of Biquadratic Tensors

We suppose that A = ( a i 1 j 1 i 2 j 2 ) B Q ( m , n ) . A  real number λ is called an M-eigenvalue of A if there are real vectors x = ( x 1 , , x m ) R m , y = ( y 1 , , y n ) R n such that the following equations are satisfied. For i [ m ] ,
\[ \sum_{i_1=1}^{m}\sum_{j_1,j_2=1}^{n} a_{i_1 j_1 i j_2}\, x_{i_1} y_{j_1} y_{j_2} + \sum_{i_2=1}^{m}\sum_{j_1,j_2=1}^{n} a_{i j_1 i_2 j_2}\, y_{j_1} x_{i_2} y_{j_2} = 2\lambda x_i; \tag{5} \]
for j [ n ] ,
\[ \sum_{i_1,i_2=1}^{m}\sum_{j_1=1}^{n} a_{i_1 j_1 i_2 j}\, x_{i_1} y_{j_1} x_{i_2} + \sum_{i_1,i_2=1}^{m}\sum_{j_2=1}^{n} a_{i_1 j i_2 j_2}\, x_{i_1} x_{i_2} y_{j_2} = 2\lambda y_j; \tag{6} \]
and
\[ \mathbf{x}^\top\mathbf{x} = \mathbf{y}^\top\mathbf{y} = 1. \tag{7} \]
Then x and y are called the corresponding M-eigenvectors. M-eigenvalues were introduced for symmetric biquadratic tensors in 2009 by Qi, Dai and Han [12], i.e., for i [ m ] ,
\[ \sum_{i_2=1}^{m}\sum_{j_1,j_2=1}^{n} a_{i j_1 i_2 j_2}\, y_{j_1} x_{i_2} y_{j_2} = \lambda x_i; \]
for j [ n ] ,
\[ \sum_{i_1,i_2=1}^{m}\sum_{j_2=1}^{n} a_{i_1 j i_2 j_2}\, x_{i_1} x_{i_2} y_{j_2} = \lambda y_j. \]
Subsequently, several numerical methods, including the WQZ method [18], semi-definite relaxation method [11,22], and the shifted inverse power method [25,26], were proposed. It is easy to see that if A is symmetric, then (5) and (6) reduce to the definition of M-eigenvalues in Equations (8) and (9) in [12]. Hence, we extend M-eigenvalues and M-eigenvectors to nonsymmetric biquadratic tensors by symmetrizing these tensors.
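For experimentation, the defining Equations (5)–(7) can be checked directly by tensor contractions. The sketch below (Python with NumPy; the name m_eigen_residual is ours) returns the residual of (5)–(7) for a candidate M-eigenpair of a general biquadratic tensor. As a sanity check, the identity-like tensor $\mathcal{I}$ introduced later in this section gives $\lambda = 1$ for any pair of unit vectors.

```python
import numpy as np

def m_eigen_residual(A, lam, x, y):
    """Residual of the M-eigenvalue system (5)-(7) for a (possibly
    nonsymmetric) biquadratic tensor A of shape (m, n, m, n)."""
    rx = (np.einsum('pqir,p,q,r->i', A, x, y, y)        # sum over i1, j1, j2
          + np.einsum('iqpr,q,p,r->i', A, y, x, y)) - 2.0 * lam * x
    ry = (np.einsum('pqrj,p,q,r->j', A, x, y, x)        # sum over i1, j1, i2
          + np.einsum('pjqr,p,q,r->j', A, x, x, y)) - 2.0 * lam * y
    return (np.linalg.norm(rx) + np.linalg.norm(ry)
            + abs(x @ x - 1.0) + abs(y @ y - 1.0))

# For the M-identity tensor, any pair of unit vectors gives lambda = 1.
m, n = 3, 4
I = np.zeros((m, n, m, n))
for i in range(m):
    for j in range(n):
        I[i, j, i, j] = 1.0
x = np.random.randn(m); x /= np.linalg.norm(x)
y = np.random.randn(n); y /= np.linalg.norm(y)
print(m_eigen_residual(I, 1.0, x, y) < 1e-10)   # True
```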
We have the following theorem.
Theorem 1.
We suppose that $\mathcal{A} \in BQ(m,n)$. Then $\mathcal{A}$ always has M-eigenvalues. Furthermore, $\mathcal{A}$ is positive semi-definite if and only if all of its M-eigenvalues are nonnegative, and $\mathcal{A}$ is positive definite if and only if all of its M-eigenvalues are positive.
Proof. 
We consider the optimization problem
\[ \min\big\{ f(\mathbf{x},\mathbf{y}) : \mathbf{x}^\top\mathbf{x} = 1,\ \mathbf{y}^\top\mathbf{y} = 1 \big\}. \]
The feasible set is compact and the objective function is continuous, so an optimizer of this problem exists. Furthermore, the problem satisfies the linear independence constraint qualification. Hence, the first-order optimality conditions hold at the optimizer. This implies that (5) and (6) hold, i.e., an M-eigenvalue always exists. By (5) and (6), we have
\[ \lambda = f(\mathbf{x},\mathbf{y}). \]
Hence, $\mathcal{A}$ is positive semi-definite if and only if all of its M-eigenvalues are nonnegative, and $\mathcal{A}$ is positive definite if and only if all of its M-eigenvalues are positive.    □
We let I = ( I i 1 j 1 i 2 j 2 ) B Q ( m , n ) , where
\[ I_{i_1 j_1 i_2 j_2} = \begin{cases} 1, & \text{if } i_1 = i_2 \text{ and } j_1 = j_2, \\ 0, & \text{otherwise}. \end{cases} \]
I is referred to as M-identity tensor in [27,28]. In the following proposition, we see that I acts as the identity tensor for biquadratic tensors.
Proposition 4.
We suppose that $\mathcal{A} \in BQ(m,n)$ is a biquadratic tensor and let $\lambda \in \mathbb{R}$ be an arbitrary real number, which need not be an M-eigenvalue of $\mathcal{A}$. Then $\mu - \lambda$ (resp. $\lambda - \mu$) is an M-eigenvalue of $\mathcal{A} - \lambda\mathcal{I}$ (resp. $\lambda\mathcal{I} - \mathcal{A}$) if and only if $\mu$ is an M-eigenvalue of $\mathcal{A}$.
Proof. 
By the definitions of M-eigenvalues and I , we have the conclusion.    □
We suppose that A = a i 1 j 1 i 2 j 2 B Q ( m , n ) . Then we call a i j i j diagonal entries of A for i [ m ] and j [ n ] . The other entries of A are called off-diagonal entries of A . If all the off-diagonal entries are zeros, then A is called a diagonal biquadratic tensor. If  A is diagonal, then A has m n M-eigenvalues, which are its diagonal elements, with corresponding vectors x = e i ( m ) and y = e j ( n ) , where e i ( m ) is the ith unit vector in R m and  e j ( n ) is the jth unit vector in R n , as their M-eigenvectors. However, different from cubic tensors, in this case, A may have some other M-eigenvalues and M-eigenvectors. In fact, for a diagonal biquadratic tensor A , (5) and (6) have the following forms: for i [ m ] ,
\[ \sum_{j=1}^{n} a_{ijij}\, y_j^2\, x_i = \lambda x_i; \tag{11} \]
and for j [ n ] ,
\[ \sum_{i=1}^{m} a_{ijij}\, x_i^2\, y_j = \lambda y_j. \tag{12} \]
We now present an example with m = n = 2 .
Example 1.
A diagonal biquadratic tensor A may possess M-eigenvalues that are distinct from its diagonal elements. We suppose that a 1111 = 1 , a 1212 = α , a 2121 = β , and  a 2222 = γ . Then by Equations (11) and (12), we have
\[ x_1\big(y_1^2 + \alpha y_2^2 - \lambda\big) = 0, \quad x_2\big(\beta y_1^2 + \gamma y_2^2 - \lambda\big) = 0, \quad y_1\big(x_1^2 + \beta x_2^2 - \lambda\big) = 0, \quad y_2\big(\alpha x_1^2 + \gamma x_2^2 - \lambda\big) = 0. \]
We let $x_1^2 + x_2^2 = 1$ and $y_1^2 + y_2^2 = 1$. We suppose that $x_1, x_2, y_1, y_2$ are nonzero; otherwise, if any of these numbers is zero, the eigenvalue equals one of the diagonal elements. Then we have an M-eigenvalue
\[ \lambda = \frac{\gamma - \alpha\beta}{\gamma + 1 - \alpha - \beta}, \]
and the corresponding M-eigenvectors satisfy
\[ x_1^2 = \frac{\gamma - \beta}{\gamma + 1 - \alpha - \beta}, \qquad x_2^2 = \frac{1 - \alpha}{\gamma + 1 - \alpha - \beta}, \]
and
\[ y_1^2 = \frac{\gamma - \alpha}{\gamma + 1 - \alpha - \beta}, \qquad y_2^2 = \frac{1 - \beta}{\gamma + 1 - \alpha - \beta}. \]
For instance, if $\alpha = \beta = -1$ and $\gamma = 1$, we have $\lambda = 0$, which is not a diagonal element of $\mathcal{A}$, and the corresponding M-eigenvectors satisfy $x_1^2 = x_2^2 = y_1^2 = y_2^2 = \frac{1}{2}$.
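Example 1 can be verified numerically from (11) and (12). A minimal check in Python/NumPy (our own script, using $\alpha = \beta = -1$ and $\gamma = 1$) is given below.

```python
import numpy as np

# Numerical check of Example 1 with alpha = beta = -1, gamma = 1:
# the diagonal entries are 1, -1, -1, 1, yet lambda = 0 is an M-eigenvalue.
D = np.array([[1.0, -1.0],        # D[i, j] = a_{ijij}
              [-1.0, 1.0]])
lam = 0.0
x = np.array([1.0, 1.0]) / np.sqrt(2.0)
y = np.array([1.0, 1.0]) / np.sqrt(2.0)
# Equations (11) and (12) for a diagonal biquadratic tensor.
res_x = (D @ (y ** 2)) * x - lam * x
res_y = (D.T @ (x ** 2)) * y - lam * y
print(np.allclose(res_x, 0.0), np.allclose(res_y, 0.0))   # True True
```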
We let A = a i 1 j 1 i 2 j 2 B Q ( m , n ) . Then its diagonal entries form an m × n rectangular matrix D = a i j i j R m × n .
By (11) and (12), we have the following proposition.
Proposition 5.
We suppose that A = a i 1 j 1 i 2 j 2 B Q ( m , n ) is a diagonal biquadratic tensor, then A has m n M-eigenvalues, which are its diagonal elements a i j i j , with corresponding vectors x = e i ( m ) and y = e j ( n ) as their M-eigenvectors. Furthermore, though  A may have some other M-eigenvalues, they are still in the convex hull of some diagonal entries.
Proof. 
When x = e i ( m ) and y = e j ( n ) , we may verify that (11) and (12) hold with λ = a i j i j . This shows the first conclusion.
Furthermore, we let $I_x = \{i : x_i \neq 0\}$, $I_y = \{j : y_j \neq 0\}$, and $D_I \in \mathbb{R}^{|I_x|\times|I_y|}$. Here, $D_I$ is composed of the rows $I_x$ and columns $I_y$ of $D = (a_{ijij}) \in \mathbb{R}^{m\times n}$. Then Equations (11) and (12) can be reformulated as
\[ D_I\, \mathbf{y}_{I_y}^{[2]} = \lambda \mathbf{1}_{|I_x|}, \qquad D_I^\top\, \mathbf{x}_{I_x}^{[2]} = \lambda \mathbf{1}_{|I_y|}. \]
Here, $\mathbf{x}_{I_x}^{[2]} = (x_i^2)_{i\in I_x}$ and $\mathbf{y}_{I_y}^{[2]} = (y_j^2)_{j\in I_y}$. Therefore, given the index sets $I_x$ and $I_y$, as long as $\lambda\mathbf{1}_{|I_x|}$ and $\lambda\mathbf{1}_{|I_y|}$ lie in the convex hulls of the columns of $D_I$ and of $D_I^\top$, respectively, there is an M-eigenvalue, and it lies in the convex hull of the entries of $D_I$.
This completes the proof.    □
We have the following theorem for biquadratic tensors.
Theorem 2.
A necessary condition for a biquadratic tensor A B Q ( m , n ) to be positive semi-definite is that all of its diagonal entries are nonnegative. A necessary condition for A to be positive definite is that all of its diagonal entries are positive. If  A is a diagonal biquadratic tensor, then these conditions are sufficient.
Proof. 
We let A = ( a i 1 j 1 i 2 j 2 ) and
\[ f(\mathbf{x},\mathbf{y}) = \langle \mathcal{A}, \mathbf{x}\circ\mathbf{y}\circ\mathbf{x}\circ\mathbf{y}\rangle \equiv \sum_{i_1,i_2=1}^{m}\sum_{j_1,j_2=1}^{n} a_{i_1 j_1 i_2 j_2}\, x_{i_1} y_{j_1} x_{i_2} y_{j_2}. \]
Then f ( e i ( m ) , e j ( n ) ) = a i j i j for i [ m ] and j [ n ] . This leads to the first two conclusions. If  A is a diagonal biquadratic tensor, then
\[ f(\mathbf{x},\mathbf{y}) = \sum_{i=1}^{m}\sum_{j=1}^{n} a_{ijij}\, x_i^2 y_j^2. \]
The last conclusion follows.    □

4. Gershgorin-Type Theorem and Diagonally Dominated Biquadratic Tensors

We recall that for square matrices, there is the Gershgorin theorem, from which one may show that (strictly) diagonally dominated matrices are positive semi-definite (definite). These results have been successfully generalized to cubic tensors [21,29]. Can they also be generalized to nonsymmetric biquadratic tensors? To answer this question, we have to understand the "rows" and "columns" of a biquadratic tensor.
We suppose that $\mathcal{A} = (a_{i_1 j_1 i_2 j_2}) \in BQ(m,n)$. Then $\mathcal{A}$ has $m$ rows. In the $i$th row of $\mathcal{A}$, there are $n$ diagonal entries $a_{ijij}$ for $j \in [n]$, and a total of $2mn^2$ entries $a_{i_1 j_1 i j_2}$ and $a_{i j_1 i_2 j_2}$ for $i_1, i_2 \in [m]$ and $j_1, j_2 \in [n]$. We use the notation $\bar a_{i_1 j_1 i_2 j_2}$: we let $\bar a_{i_1 j_1 i_2 j_2} = a_{i_1 j_1 i_2 j_2}$ if $a_{i_1 j_1 i_2 j_2}$ is an off-diagonal entry, and $\bar a_{i_1 j_1 i_2 j_2} = 0$ if $a_{i_1 j_1 i_2 j_2}$ is a diagonal entry. Then, in the $i$th row, the diagonal entry $a_{ijij}$ is responsible for dominating the $4mn$ entries $\bar a_{i_1 j i j_2}$, $\bar a_{i_1 j_1 i j}$, $\bar a_{i j i_2 j_2}$, and $\bar a_{i j_1 i_2 j}$ for $i_1, i_2 \in [m]$ and $j_1, j_2 \in [n]$. Therefore, for $i \in [m]$ and $j \in [n]$, we define
\[ r_{ij} = \frac{1}{4}\sum_{i_1=1}^{m}\left(\sum_{j_2=1}^{n} \big|\bar a_{i_1 j i j_2}\big| + \sum_{j_1=1}^{n} \big|\bar a_{i_1 j_1 i j}\big|\right) + \frac{1}{4}\sum_{i_2=1}^{m}\left(\sum_{j_2=1}^{n} \big|\bar a_{i j i_2 j_2}\big| + \sum_{j_1=1}^{n} \big|\bar a_{i j_1 i_2 j}\big|\right). \tag{13} \]
If A is weakly symmetric, then (13) reduces to
\[ r_{ij} = \frac{1}{2}\sum_{i_2=1}^{m}\left(\sum_{j_2=1}^{n} \big|\bar a_{i j i_2 j_2}\big| + \sum_{j_1=1}^{n} \big|\bar a_{i j_1 i_2 j}\big|\right). \]
If A is symmetric, then (13) reduces to
\[ r_{ij} = \sum_{i_2=1}^{m}\sum_{j_2=1}^{n} \big|\bar a_{i j i_2 j_2}\big|. \]
For M-eigenvalues, here we have the following theorem, which generalizes the Gershgorin theorem of matrices to biquadratic tensors.
Theorem 3.
We suppose that A = a i 1 j 1 i 2 j 2 B Q ( m , n ) . We let r i j for i [ m ] and j [ n ] be defined by Equation (13). Then any M-eigenvalue λ of A lies in one of the following m intervals:
\[ \left[\ \min_{j=1,\dots,n}\big\{a_{ijij} - r_{ij}\big\},\ \ \max_{j=1,\dots,n}\big\{a_{ijij} + r_{ij}\big\}\ \right] \tag{14} \]
for i [ m ] , and one of the following n intervals:
\[ \left[\ \min_{i=1,\dots,m}\big\{a_{ijij} - r_{ij}\big\},\ \ \max_{i=1,\dots,m}\big\{a_{ijij} + r_{ij}\big\}\ \right], \tag{15} \]
for j [ n ] .
Proof. 
We suppose that $\lambda$ is an M-eigenvalue of $\mathcal{A}$, with M-eigenvectors $\mathbf{x}$ and $\mathbf{y}$. We assume that $x_i \neq 0$ is the component of $\mathbf{x}$ with the largest absolute value. From (5), we have
\[
\begin{aligned}
\lambda &= \sum_{j=1}^{n} a_{ijij}\, y_j^2 + \frac{1}{2}\sum_{i_1=1}^{m}\sum_{j_1,j_2=1}^{n} \bar a_{i_1 j_1 i j_2}\,\frac{x_{i_1}}{x_i}\, y_{j_1} y_{j_2} + \frac{1}{2}\sum_{i_2=1}^{m}\sum_{j_1,j_2=1}^{n} \bar a_{i j_1 i_2 j_2}\, y_{j_1}\,\frac{x_{i_2}}{x_i}\, y_{j_2} \\
&\le \sum_{j=1}^{n} a_{ijij}\, y_j^2 + \frac{1}{2}\sum_{i_1=1}^{m}\sum_{j_1,j_2=1}^{n} \big|\bar a_{i_1 j_1 i j_2}\big|\, |y_{j_1}|\,|y_{j_2}| + \frac{1}{2}\sum_{i_2=1}^{m}\sum_{j_1,j_2=1}^{n} \big|\bar a_{i j_1 i_2 j_2}\big|\, |y_{j_1}|\,|y_{j_2}| \\
&\le \sum_{j=1}^{n} a_{ijij}\, y_j^2 + \frac{1}{4}\sum_{i_1=1}^{m}\sum_{j_1,j_2=1}^{n} \big|\bar a_{i_1 j_1 i j_2}\big|\,\big(y_{j_1}^2 + y_{j_2}^2\big) + \frac{1}{4}\sum_{i_2=1}^{m}\sum_{j_1,j_2=1}^{n} \big|\bar a_{i j_1 i_2 j_2}\big|\,\big(y_{j_1}^2 + y_{j_2}^2\big) \\
&= \sum_{j=1}^{n} y_j^2 \left[ a_{ijij} + \frac{1}{4}\sum_{i_1=1}^{m}\Big(\sum_{j_2=1}^{n}\big|\bar a_{i_1 j i j_2}\big| + \sum_{j_1=1}^{n}\big|\bar a_{i_1 j_1 i j}\big|\Big) + \frac{1}{4}\sum_{i_2=1}^{m}\Big(\sum_{j_2=1}^{n}\big|\bar a_{i j i_2 j_2}\big| + \sum_{j_1=1}^{n}\big|\bar a_{i j_1 i_2 j}\big|\Big) \right] \\
&\le \max_{j=1,\dots,n}\big\{ a_{ijij} + r_{ij} \big\},
\end{aligned}
\]
for i [ m ] . The other inequalities of (14) and (15) can be derived similarly.
This completes the proof.    □
In the symmetric case, inclusion intervals and bounds for M-eigenvalues are presented in [15,16,28,30]. Here, Theorem 3 covers both the symmetric and nonsymmetric cases.
If the entries of A satisfy
\[ a_{ijij} \ \ge\ r_{ij} \equiv \frac{1}{4}\sum_{i_1=1}^{m}\left(\sum_{j_2=1}^{n} \big|\bar a_{i_1 j i j_2}\big| + \sum_{j_1=1}^{n} \big|\bar a_{i_1 j_1 i j}\big|\right) + \frac{1}{4}\sum_{i_2=1}^{m}\left(\sum_{j_2=1}^{n} \big|\bar a_{i j i_2 j_2}\big| + \sum_{j_1=1}^{n} \big|\bar a_{i j_1 i_2 j}\big|\right), \tag{16} \]
for all i [ m ] and j [ n ] , then A is called a diagonally dominated biquadratic tensor. If the strict inequality holds for all these m n inequalities, then A is called a strictly diagonally dominated biquadratic tensor.
We suppose that $\mathcal{A} = (a_{i_1 j_1 i_2 j_2}) \in SBQ(m,n)$. Then Equation (16) can be simplified as
\[ a_{ijij} \ \ge\ \sum_{i_2=1}^{m}\sum_{j_2=1}^{n} \big|\bar a_{i j i_2 j_2}\big|, \]
for all $i \in [m]$ and $j \in [n]$.
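The quantities in (13) and the diagonal dominance condition (16) are straightforward to evaluate numerically. The following sketch (Python with NumPy; the function names are ours) computes the radii $r_{ij}$ and checks (16) for a general biquadratic tensor stored as an $m\times n\times m\times n$ array.

```python
import numpy as np

def gershgorin_radii(A):
    """Radii r_{ij} from (13) for a general biquadratic tensor A (m, n, m, n)."""
    m, n = A.shape[0], A.shape[1]
    Abar = np.abs(A).copy()
    for i in range(m):
        for j in range(n):
            Abar[i, j, i, j] = 0.0            # zero out the diagonal entries
    r = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            r[i, j] = 0.25 * (Abar[:, j, i, :].sum() + Abar[:, :, i, j].sum()
                              + Abar[i, j, :, :].sum() + Abar[i, :, :, j].sum())
    return r

def is_diagonally_dominated(A, strict=False):
    """Check condition (16): a_{ijij} >= r_{ij} (or > for the strict version)."""
    diag = np.einsum('ijij->ij', A)
    r = gershgorin_radii(A)
    return bool(np.all(diag > r)) if strict else bool(np.all(diag >= r))
```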
Corollary 1.
A diagonally dominated biquadratic tensor is positive semi-definite. A strictly diagonally dominated biquadratic tensor is positive definite.
Proof. 
This result follows directly from Theorems 1 and 3.    □

5. B-Biquadratic Tensors and M-Biquadratic Tensors

Several classes of structured tensors, including $B_0$-tensors, B-tensors, Z-tensors, M-tensors, and strong M-tensors, have been studied; see [21]. It has been established that even-order symmetric $B_0$-tensors and even-order symmetric M-tensors are positive semi-definite, while even-order symmetric B-tensors and even-order symmetric strong M-tensors are positive definite [21,31]. We now extend such structured cubic tensors and the above properties to biquadratic tensors.
We let A = a i 1 j 1 i 2 j 2 B Q ( m , n ) . We suppose that for i [ m ] and j [ n ] , we have
\[ \sum_{i_1=1}^{m}\left(\sum_{j_2=1}^{n} a_{i_1 j i j_2} + \sum_{j_1=1}^{n} a_{i_1 j_1 i j}\right) + \sum_{i_2=1}^{m}\left(\sum_{j_2=1}^{n} a_{i j i_2 j_2} + \sum_{j_1=1}^{n} a_{i j_1 i_2 j}\right) \ \ge\ 0, \tag{18} \]
and for all $i, i_1, i_2 \in [m]$ and $j, j_1, j_2 \in [n]$, we have
\[ \frac{1}{4mn}\left[\sum_{i_1=1}^{m}\left(\sum_{j_2=1}^{n} a_{i_1 j i j_2} + \sum_{j_1=1}^{n} a_{i_1 j_1 i j}\right) + \sum_{i_2=1}^{m}\left(\sum_{j_2=1}^{n} a_{i j i_2 j_2} + \sum_{j_1=1}^{n} a_{i j_1 i_2 j}\right)\right] \ \ge\ \max\big\{\bar a_{i j i_2 j_2},\ \bar a_{i_1 j i j_2},\ \bar a_{i j_1 i_2 j},\ \bar a_{i_1 j_1 i j}\big\}. \tag{19} \]
Then we say that A is a B 0 -biquadratic tensor. If all strict inequalities hold in (18) and (19), then we say that A is a B-biquadratic tensor.
A biquadratic tensor $\mathcal{A}$ in $BQ(m,n)$ is called a Z-biquadratic tensor if all of its off-diagonal entries are nonpositive. If $\mathcal{A}$ is a Z-biquadratic tensor, then it can be written as $\mathcal{A} = \alpha\mathcal{I} - \mathcal{B}$, where $\mathcal{I}$ is the identity tensor in $BQ(m,n)$ and $\mathcal{B}$ is a nonnegative biquadratic tensor. By the discussion in the last section, $\mathcal{B}$ has an M-eigenvalue. We denote by $\lambda_{\max}(\mathcal{B})$ the largest M-eigenvalue of $\mathcal{B}$. If $\alpha \ge \lambda_{\max}(\mathcal{B})$, then $\mathcal{A}$ is called an M-biquadratic tensor. If $\alpha > \lambda_{\max}(\mathcal{B})$, then $\mathcal{A}$ is called a strong M-biquadratic tensor.
If B S B Q ( 3 , 3 ) and is nonnegative, it follows from Theorem 6 of [27] that λ max ( B ) = ρ M ( B ) , where ρ M ( B ) is the M-spectral radius of B , i.e., the largest absolute value of the M-eigenvalues of B . Checking the proof of Theorem 6 of [27], we may find that this is true for B S B Q ( m , n ) , m , n 2 and B being nonnegative. If  B is nonnegative but not symmetric, this conclusion is still an open problem at this moment.
The following proposition is a direct generalization of Proposition 5.37 of [21] for cubic tensors to biquadratic tensors.
Proposition 6.
We suppose A B Q ( m , n ) is a Z-biquadratic tensor. Then
(i) 
A is diagonally dominated if and only if A is a B 0 -biquadratic tensor;
(ii) 
$\mathcal{A}$ is strictly diagonally dominated if and only if $\mathcal{A}$ is a B-biquadratic tensor.
Proof. 
By the fact that all off-diagonal entries of A are nonpositive, for all i [ m ] and j [ n ] , we have
\[ \frac{1}{4}\left[\sum_{i_1=1}^{m}\left(\sum_{j_2=1}^{n} a_{i_1 j i j_2} + \sum_{j_1=1}^{n} a_{i_1 j_1 i j}\right) + \sum_{i_2=1}^{m}\left(\sum_{j_2=1}^{n} a_{i j i_2 j_2} + \sum_{j_1=1}^{n} a_{i j_1 i_2 j}\right)\right] = a_{ijij} - \frac{1}{4}\left[\sum_{i_1=1}^{m}\left(\sum_{j_2=1}^{n}\big|\bar a_{i_1 j i j_2}\big| + \sum_{j_1=1}^{n}\big|\bar a_{i_1 j_1 i j}\big|\right) + \sum_{i_2=1}^{m}\left(\sum_{j_2=1}^{n}\big|\bar a_{i j i_2 j_2}\big| + \sum_{j_1=1}^{n}\big|\bar a_{i j_1 i_2 j}\big|\right)\right]. \]
Therefore, (16) and (18) are equivalent.
If A is diagonally dominated, then (16) holds true. Hence, (18) also holds. Since the left hand side of (19) is nonnegative, and the right hand side is nonpositive, we have that (19) holds.
On the other hand, we suppose that A is a B 0 -biquadratic tensor. Then (18) holds, thus (16) holds true.
Similarly, we could show the last statement of this proposition. This completes the proof.    □
By Proposition 4, we have the following proposition.
Proposition 7.
An M-biquadratic tensor is positive semi-definite. A strong M-biquadratic tensor is positive definite.
Proof. 
We suppose that $\mathcal{A} = \alpha\mathcal{I} - \mathcal{B}$, where $\alpha \ge \lambda_{\max}(\mathcal{B})$, $\mathcal{I}$ is the identity tensor in $BQ(m,n)$, and $\mathcal{B}$ is a nonnegative biquadratic tensor. It follows from Proposition 4 that every M-eigenvalue of $\mathcal{A}$ satisfies
\[ \alpha - \mu \ \ge\ \lambda_{\max}(\mathcal{B}) - \mu \ \ge\ 0, \]
where μ is an M-eigenvalue of B . This combined with Theorem 1 shows the first statement of this proposition.
Similarly, we could show the last statement of this proposition. This completes the proof.    □
Proposition 8. 
We let A = ( a i 1 j 1 i 2 j 2 ) B Q ( m , n ) be a Z-biquadratic tensor. A is positive semi-definite if and only if it is an M-biquadratic tensor. Similarly, A is positive definite if and only if it is a strong M-biquadratic tensor.
Proof. 
The necessity part follows from Proposition 7. We now demonstrate the sufficiency aspect.
Let
\[ \mathcal{B} = \gamma\mathcal{I} - \mathcal{A}, \qquad \text{where } \gamma = \max\Big\{\max_{i,j} a_{ijij},\ \lambda_M(\mathcal{A})\Big\}, \]
and $\lambda_M(\mathcal{A})$ denotes the largest M-eigenvalue of $\mathcal{A}$.
Then B is nonnegative. Furthermore, by Proposition 4, we have
\[ \lambda_{\max}(\mathcal{B}) = \max\big\{\gamma - \lambda : \lambda \text{ is an M-eigenvalue of } \mathcal{A}\big\} \ \le\ \gamma. \]
Here, the last inequality follows from the fact that $\mathcal{A}$ is positive semi-definite and all of its M-eigenvalues are nonnegative. This shows that $\mathcal{A} = \gamma\mathcal{I} - \mathcal{B}$ is an M-biquadratic tensor.
Similarly, we could show the last statement of this proposition. This completes the proof.    □
We let A = ( a i 1 j 1 i 2 j 2 ) B Q ( m , n ) be a biquadratic tensor and let D = ( d i ) R m × m and F = ( f j ) R n × n be two positive diagonal matrices. The transformed tensor C = A × 1 D × 2 F × 3 D × 4 F is defined componentwise by c i 1 j 1 i 2 j 2 = a i 1 j 1 i 2 j 2 d i 1 f j 1 d i 2 f j 2 . The following corollary is an immediate consequence of Proposition 8.
Corollary 2.
We let A = ( a i 1 j 1 i 2 j 2 ) B Q ( m , n ) be a Z-biquadratic tensor. If all diagonal entries of A are nonnegative and there exist two positive diagonal matrices D R m × m and F R n × n such that C : = A × 1 D × 2 F × 3 D × 4 F is diagonally dominated, then A is an M-biquadratic tensor. Similarly, if all diagonal entries of A are positive and there exist two positive diagonal matrices D R m × m and F R n × n such that C : = A × 1 D × 2 F × 3 D × 4 F is strictly diagonally dominated, then A is a strong M-biquadratic tensor.
Proof. 
By Corollary 1, C is positive semi-definite. By Theorem 7 in [32], an M-eigenvalue of C is also an M-eigenvalue of A . Therefore, A is also positive semi-definite. This result follows directly from Proposition 8.
Similarly, we could show the last statement of this corollary. This completes the proof.    □
We let $[m]\times[n] = \{(i,j) : i \in [m],\ j \in [n]\}$. For any $J = (J_x, J_y)$ with $J_x \subseteq [m]$ and $J_y \subseteq [n]$, we denote by $\mathcal{I}_J = (I_{i_1 j_1 i_2 j_2})$ the biquadratic tensor in $BQ(m,n)$ with $I_{i_1 j_1 i_2 j_2} = 1$ if $i_1, i_2 \in J_x$ and $j_1, j_2 \in J_y$, and $I_{i_1 j_1 i_2 j_2} = 0$ otherwise. Then for any vectors $\mathbf{x} \in \mathbb{R}^m$ and $\mathbf{y} \in \mathbb{R}^n$, we have
\[ \langle \mathcal{I}_J, \mathbf{x}\circ\mathbf{y}\circ\mathbf{x}\circ\mathbf{y}\rangle = \big(\mathbf{x}_{J_x}^\top \mathbf{x}\big)^2 \big(\mathbf{y}_{J_y}^\top \mathbf{y}\big)^2 \ \ge\ 0, \]
where $\mathbf{x}_{J_x} = (\tilde x_i) \in \mathbb{R}^m$ with $\tilde x_i = 1$ if $i \in J_x$ and $\tilde x_i = 0$ otherwise, and $\mathbf{y}_{J_y}$ is defined analogously.
Similarly to Theorem 5.38 in [21] for cubic tensors, we can establish the following decomposition theorem.
Theorem 4.
We suppose $\mathcal{B} = (b_{i_1 j_1 i_2 j_2}) \in SBQ(m,n)$ is a symmetric $B_0$-biquadratic tensor, i.e., for all $i, i_2 \in [m]$ and $j, j_2 \in [n]$, we have
\[ \sum_{i_2=1}^{m}\sum_{j_2=1}^{n} b_{i j i_2 j_2} \ \ge\ 0 \qquad\text{and}\qquad \frac{1}{mn}\sum_{i_2=1}^{m}\sum_{j_2=1}^{n} b_{i j i_2 j_2} \ \ge\ \bar b_{i j i_2 j_2}. \tag{21} \]
Then either $\mathcal{B}$ is a diagonally dominated symmetric M-biquadratic tensor itself, or it can be decomposed as
\[ \mathcal{B} = \mathcal{M} + \sum_{k=1}^{s} h_k\, \mathcal{I}_{J_k}, \tag{22} \]
where $\mathcal{M}$ is a diagonally dominated symmetric M-biquadratic tensor, $s$ is a nonnegative integer, $h_k > 0$, $J_k = \{(i_k, j_k)\}$ with $i_k \in [m]$, $j_k \in [n]$, and $J_s \subseteq J_{s-1} \subseteq \cdots \subseteq J_1$. If $\mathcal{B}$ is a symmetric B-biquadratic tensor, then either $\mathcal{B}$ is a strictly diagonally dominated symmetric M-biquadratic tensor itself, or it can be decomposed as (22) with $\mathcal{M}$ being a strictly diagonally dominated symmetric M-biquadratic tensor.
Proof. 
For any given symmetric B 0 -quadratic tensor B = ( b i 1 j 1 i 2 j 2 ) S B Q ( m , n ) , we define
\[ J_1 = \big\{ (i,j) \in [m]\times[n] : \exists\, (i_2, j_2) \neq (i, j) \text{ such that } b_{i j i_2 j_2} > 0 \big\}. \tag{23} \]
If $J_1 = \emptyset$, then $\mathcal{B}$ itself is already a Z-biquadratic tensor. It follows from Proposition 6 that $\mathcal{B}$ is a diagonally dominated symmetric Z-biquadratic tensor. Furthermore, it follows from (21) that
\[ b_{ijij} \ \ge\ mn \max_{i_2\in[m],\, j_2\in[n]} \bar b_{i j i_2 j_2} - \sum_{i_2=1}^{m}\sum_{j_2=1}^{n} \bar b_{i j i_2 j_2} \ \ge\ 0. \]
Namely, all diagonal elements of B are nonnegative. By virtue of Corollary 2, B is a symmetric M-biquadratic tensor itself.
If $J_1 \neq \emptyset$, we denote $\mathcal{B}_1 := \mathcal{B}$,
\[ d_{ij} := \max_{(i_2, j_2)\neq(i,j)} b_{i j i_2 j_2} \ \text{ for } (i,j)\in J_1, \qquad\text{and}\qquad h_1 = \min_{(i,j)\in J_1} d_{ij}. \]
By (23), we have $d_{ij} > 0$ and $h_1 > 0$. We let $\mathcal{B}_2 := \mathcal{B}_1 - h_1 \mathcal{I}_{J_1}$. Now we claim that $\mathcal{B}_2$ is still a $B_0$-biquadratic tensor. In fact, we have
\[ \sum_{i_2=1}^{m}\sum_{j_2=1}^{n} (\mathcal{B}_2)_{i j i_2 j_2} = \sum_{i_2=1}^{m}\sum_{j_2=1}^{n} b_{i j i_2 j_2} - k_{ij} h_1 \ \ge\ mn\, d_{ij} - k_{ij} h_1 \ \ge\ mn\,(d_{ij} - h_1) \ \ge\ 0, \]
where $k_{ij}$ is the number of nonzero elements in $\mathcal{I}_{J_1}(i,j,:,:)$, and the first inequality follows from (21). By the symmetry of $\mathcal{B}_1$, we know that if $b_{i j i_2 j_2} > 0$ for some $(i_2, j_2) \neq (i,j)$, then $(i_2, j_2)$, $(i, j_2)$, and $(i_2, j)$ are also in $J_1$. Thus, for any $(i,j)\in J_1$,
\[ \max_{(i_2,j_2)\neq(i,j)} (\mathcal{B}_2)_{i j i_2 j_2} = \max_{(i_2,j_2)\in J_1} (\mathcal{B}_2)_{i j i_2 j_2} = d_{ij} - h_1 \ \ge\ 0. \]
Therefore, we have
\[ \frac{1}{mn}\sum_{i_2=1}^{m}\sum_{j_2=1}^{n} (\mathcal{B}_2)_{i j i_2 j_2} \ \ge\ \max_{(i_2,j_2)\neq(i,j)} (\mathcal{B}_2)_{i j i_2 j_2} \ \ge\ 0. \]
Thus, we complete the proof of the claim that B 2 is still a B 0 -biquadratic tensor.
We continue the above procedure until the remaining part $\mathcal{B}_s$ is an M-biquadratic tensor. It is not hard to see that $J_{k+1} = J_k \setminus \hat J_k$ with $\hat J_k = \{(i,j) \in J_k : d_{ij} = h_k\}$ for any $k \in \{1, \dots, s-1\}$. Similarly, we can show the second part of the theorem for symmetric B-biquadratic tensors.    □
Based on the above theorem, we show the following result.
Corollary 3.
A symmetric B 0 -biquadratic tensor is positive semi-definite, and a symmetric B-biquadratic tensor is positive definite.
Proof. 
The results follow directly from Theorem 4 and the fact that the sum of positive semi-definite biquadratic tensors is positive semi-definite. Furthermore, the sum of a positive definite biquadratic tensor and several positive semi-definite biquadratic tensors is positive definite.    □

6. A Riemannian LBFGS Method for Computing the Smallest M-Eigenvalue of a Biquadratic Tensor

We let $\mathcal{A}\mathbf{x}\mathbf{y}\mathbf{x}\mathbf{y} = \sum_{i_1,i_2=1}^{m}\sum_{j_1,j_2=1}^{n} a_{i_1 j_1 i_2 j_2}\, x_{i_1} y_{j_1} x_{i_2} y_{j_2}$ denote the multilinear form. Similarly, we define the contracted forms $\mathcal{A}\cdot\mathbf{y}\mathbf{x}\mathbf{y} = \big(\sum_{i_2=1}^{m}\sum_{j_1,j_2=1}^{n} a_{i_1 j_1 i_2 j_2}\, y_{j_1} x_{i_2} y_{j_2}\big)_{i_1=1}^{m}$ and $\mathcal{A}\mathbf{x}\cdot\mathbf{x}\mathbf{y} = \big(\sum_{i_1,i_2=1}^{m}\sum_{j_2=1}^{n} a_{i_1 j_1 i_2 j_2}\, x_{i_1} x_{i_2} y_{j_2}\big)_{j_1=1}^{n}$, and analogously $\mathcal{A}\mathbf{x}\mathbf{y}\cdot\mathbf{y}$ and $\mathcal{A}\mathbf{x}\mathbf{y}\mathbf{x}\cdot$, where the dot marks the free index. We consider the following two-unit-sphere constrained optimization problem:
\[ \min_{\mathbf{x}\in\mathbb{R}^m,\ \mathbf{y}\in\mathbb{R}^n} \ \frac{\mathcal{A}\mathbf{x}\mathbf{y}\mathbf{x}\mathbf{y}}{(\mathbf{x}^\top\mathbf{x})(\mathbf{y}^\top\mathbf{y})} \qquad \mathrm{s.t.} \quad \mathbf{x}^\top\mathbf{x} = 1,\ \mathbf{y}^\top\mathbf{y} = 1. \tag{24} \]
Since the linear independence constraint qualification is satisfied, every local optimal solution must satisfy the KKT conditions:
\[ \mathcal{A}\cdot\mathbf{y}\mathbf{x}\mathbf{y} + \mathcal{A}\mathbf{x}\mathbf{y}\cdot\mathbf{y} - 2(\mathcal{A}\mathbf{x}\mathbf{y}\mathbf{x}\mathbf{y})\,\mathbf{x} = 2\mu_1\mathbf{x}, \qquad \mathcal{A}\mathbf{x}\cdot\mathbf{x}\mathbf{y} + \mathcal{A}\mathbf{x}\mathbf{y}\mathbf{x}\cdot - 2(\mathcal{A}\mathbf{x}\mathbf{y}\mathbf{x}\mathbf{y})\,\mathbf{y} = 2\mu_2\mathbf{y}, \qquad \mathbf{x}^\top\mathbf{x} = 1, \quad \mathbf{y}^\top\mathbf{y} = 1. \tag{25} \]
Here, we have simplified the stationarity equations by taking the constraints $\mathbf{x}^\top\mathbf{x} = 1$ and $\mathbf{y}^\top\mathbf{y} = 1$ into account. Multiplying the first two equations of (25) by $\mathbf{x}^\top$ and $\mathbf{y}^\top$, respectively, we obtain
\[ \mu_1 = \mu_2 = 0. \]
Thus, (25) is equivalent to the definitions of M-eigenvalues in (5)–(7) with λ = A x y x y . In other words, every KKT point of (24) corresponds to an M-eigenvector of A with the associated M-eigenvalue given by λ = A x y x y . Moreover, the smallest M-eigenvalue and its corresponding M-eigenvectors of  A are associated with the global optimal solution of (24). As established in Theorem 1, the smallest M-eigenvalue can be utilized to verify the positive semidefiniteness (definiteness) of  A .
For convenience, we let $\mathbf{z} = [\mathbf{x}^\top, \mathbf{y}^\top]^\top \in \mathbb{R}^{m+n}$ and denote
\[ f(\mathbf{z}) = \frac{\mathcal{A}\mathbf{x}\mathbf{y}\mathbf{x}\mathbf{y}}{(\mathbf{x}^\top\mathbf{x})(\mathbf{y}^\top\mathbf{y})}. \]
Then the gradient of f ( z ) is given by
\[ \nabla f(\mathbf{z}) = \begin{pmatrix} \dfrac{\big(\mathcal{A}\cdot\mathbf{y}\mathbf{x}\mathbf{y} + \mathcal{A}\mathbf{x}\mathbf{y}\cdot\mathbf{y}\big)(\mathbf{x}^\top\mathbf{x})(\mathbf{y}^\top\mathbf{y}) - 2(\mathcal{A}\mathbf{x}\mathbf{y}\mathbf{x}\mathbf{y})(\mathbf{y}^\top\mathbf{y})\,\mathbf{x}}{(\mathbf{x}^\top\mathbf{x})^2(\mathbf{y}^\top\mathbf{y})^2} \\[2ex] \dfrac{\big(\mathcal{A}\mathbf{x}\cdot\mathbf{x}\mathbf{y} + \mathcal{A}\mathbf{x}\mathbf{y}\mathbf{x}\cdot\big)(\mathbf{x}^\top\mathbf{x})(\mathbf{y}^\top\mathbf{y}) - 2(\mathcal{A}\mathbf{x}\mathbf{y}\mathbf{x}\mathbf{y})(\mathbf{x}^\top\mathbf{x})\,\mathbf{y}}{(\mathbf{x}^\top\mathbf{x})^2(\mathbf{y}^\top\mathbf{y})^2} \end{pmatrix}. \]
Under the constraints x x = 1 and y y = 1 , the gradient simplifies to
\[ \nabla f(\mathbf{z}) = \begin{pmatrix} \mathcal{A}\cdot\mathbf{y}\mathbf{x}\mathbf{y} + \mathcal{A}\mathbf{x}\mathbf{y}\cdot\mathbf{y} - 2(\mathcal{A}\mathbf{x}\mathbf{y}\mathbf{x}\mathbf{y})\,\mathbf{x} \\ \mathcal{A}\mathbf{x}\cdot\mathbf{x}\mathbf{y} + \mathcal{A}\mathbf{x}\mathbf{y}\mathbf{x}\cdot - 2(\mathcal{A}\mathbf{x}\mathbf{y}\mathbf{x}\mathbf{y})\,\mathbf{y} \end{pmatrix}. \tag{26} \]
Therefore, the definitions of M-eigenvalues in (5)–(7) are also equivalent to
\[ \nabla f(\mathbf{z}) = \mathbf{0}, \qquad \mathbf{x}^\top\mathbf{x} = 1, \qquad \mathbf{y}^\top\mathbf{y} = 1. \]
Furthermore, it can be validated that
\[ \nabla_{\mathbf{x}} f(\mathbf{z})^\top \mathbf{x} = 0 \qquad\text{and}\qquad \nabla_{\mathbf{y}} f(\mathbf{z})^\top \mathbf{y} = 0. \]
Consequently, the partial gradients x f ( z ) and y f ( z ) inherently reside in the tangent spaces of their respective unit spheres. This geometric property provides fundamental motivation for our selection of the optimization model (24), as it naturally aligns with the underlying manifold structure of the problem.
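For completeness, a small sketch of the simplified gradient (26) in Python/NumPy is given below (the name grad_f is ours; $\mathcal{A}$ is stored as an $m\times n\times m\times n$ array and $\|\mathbf{x}\| = \|\mathbf{y}\| = 1$ is assumed). At an M-eigenpair the returned gradient vanishes and the returned function value equals the M-eigenvalue.

```python
import numpy as np

def grad_f(A, x, y):
    """Gradient (26) of f on the product of unit spheres (||x|| = ||y|| = 1).

    Returns (gradient of length m+n, current value A x y x y)."""
    fval = np.einsum('ijkl,i,j,k,l->', A, x, y, x, y)
    gx = (np.einsum('ijkl,j,k,l->i', A, y, x, y)        # A . y x y
          + np.einsum('ijkl,i,j,l->k', A, x, y, y)      # A x y . y
          ) - 2.0 * fval * x
    gy = (np.einsum('ijkl,i,k,l->j', A, x, x, y)        # A x . x y
          + np.einsum('ijkl,i,j,k->l', A, x, y, x)      # A x y x .
          ) - 2.0 * fval * y
    return np.concatenate([gx, gy]), fval
```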

6.1. Algorithm Framework

Next, we present a Riemannian LBFGS (limited-memory BFGS) method for solving (24). At the $k$th step, the LBFGS method generates a search direction
\[ \mathbf{p}^{(k)} = -H^{(k)} \nabla f(\mathbf{z}^{(k)}), \]
where $H^{(k)} \in \mathbb{R}^{(m+n)\times(m+n)}$ is the quasi-Newton matrix. Notably, the LBFGS method approximates the BFGS method without explicitly storing the dense matrix $H^{(k)}$. Instead, it uses a limited history of updates to represent the inverse Hessian approximation implicitly. This can be done via the two-loop recursion [33]. To establish theoretical convergence guarantees, we impose the following essential conditions:
\[ \mathbf{p}_{\mathbf{x}}^{(k)\top} \nabla_{\mathbf{x}} f(\mathbf{z}^{(k)}) \le -C_L \big\|\nabla_{\mathbf{x}} f(\mathbf{z}^{(k)})\big\|^2, \qquad \mathbf{p}_{\mathbf{y}}^{(k)\top} \nabla_{\mathbf{y}} f(\mathbf{z}^{(k)}) \le -C_L \big\|\nabla_{\mathbf{y}} f(\mathbf{z}^{(k)})\big\|^2, \tag{27} \]
and
\[ \big\|\mathbf{p}_{\mathbf{x}}^{(k)}\big\| \le C_U \big\|\nabla_{\mathbf{x}} f(\mathbf{z}^{(k)})\big\|, \qquad \big\|\mathbf{p}_{\mathbf{y}}^{(k)}\big\| \le C_U \big\|\nabla_{\mathbf{y}} f(\mathbf{z}^{(k)})\big\|. \tag{28} \]
Here, $0 < C_L \le 1 \le C_U$ are predefined parameters. These conditions may not always hold for the classic LBFGS direction. Consequently, when the descent Condition (27) or (28) is not satisfied, we adopt a safeguard strategy by setting the search direction to $\mathbf{p}^{(k)} = -\nabla f(\mathbf{z}^{(k)})$. Namely,
\[ \mathbf{p}^{(k)} = \begin{cases} -H^{(k)} \nabla f(\mathbf{z}^{(k)}), & \text{if (27) and (28) hold}, \\ -\nabla f(\mathbf{z}^{(k)}), & \text{otherwise}. \end{cases} \tag{29} \]
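To make the two-loop recursion mentioned above concrete, the following sketch (Python with NumPy; the name lbfgs_direction is ours) shows one standard textbook way to apply the implicit inverse-Hessian approximation to the current gradient; it is not the authors' code. In Algorithm 1 below, the resulting direction would be accepted only if the safeguard conditions (27) and (28) hold, per (29).

```python
import numpy as np

def lbfgs_direction(grad, s_hist, y_hist):
    """Two-loop recursion: apply the implicit inverse-Hessian approximation
    to grad, using stored pairs s_k = z_{k+1} - z_k, y_k = g_{k+1} - g_k
    (curvature pairs with y^T s > 0 are assumed)."""
    q = grad.copy()
    stack = []
    for s, yv in reversed(list(zip(s_hist, y_hist))):   # newest pair first
        rho = 1.0 / (yv @ s)
        a = rho * (s @ q)
        stack.append((rho, a, s, yv))
        q -= a * yv
    if s_hist:                                           # initial scaling H0
        s, yv = s_hist[-1], y_hist[-1]
        q *= (s @ yv) / (yv @ yv)
    for rho, a, s, yv in reversed(stack):                # oldest pair first
        b = rho * (yv @ q)
        q += (a - b) * s
    return -q       # p = -H g; fall back to -grad if (27)/(28) fail, cf. (29)
```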
The classic LBFGS method was originally developed for unconstrained optimization problems. However, our optimization Model (24) involves additional two unit sphere constraints. To address this challenge while maintaining the convergence properties of LBFGS, we employ a modified approach inspired by Chang et al. [34]. Specifically, we incorporate the following Cayley transform, which inherently preserves the spherical constraints:
\[ \mathbf{x}^{(k+1)}(\alpha) = \frac{\Big[\big(1 - \alpha\, (\mathbf{x}^{(k)})^\top \mathbf{p}_{\mathbf{x}}^{(k)}\big)^2 - \alpha^2 \big\|\mathbf{p}_{\mathbf{x}}^{(k)}\big\|^2\Big]\, \mathbf{x}^{(k)} + 2\alpha\, \mathbf{p}_{\mathbf{x}}^{(k)}}{1 + \alpha^2 \big\|\mathbf{p}_{\mathbf{x}}^{(k)}\big\|^2 - \big(\alpha\, (\mathbf{x}^{(k)})^\top \mathbf{p}_{\mathbf{x}}^{(k)}\big)^2}, \tag{30} \]
\[ \mathbf{y}^{(k+1)}(\alpha) = \frac{\Big[\big(1 - \alpha\, (\mathbf{y}^{(k)})^\top \mathbf{p}_{\mathbf{y}}^{(k)}\big)^2 - \alpha^2 \big\|\mathbf{p}_{\mathbf{y}}^{(k)}\big\|^2\Big]\, \mathbf{y}^{(k)} + 2\alpha\, \mathbf{p}_{\mathbf{y}}^{(k)}}{1 + \alpha^2 \big\|\mathbf{p}_{\mathbf{y}}^{(k)}\big\|^2 - \big(\alpha\, (\mathbf{y}^{(k)})^\top \mathbf{p}_{\mathbf{y}}^{(k)}\big)^2}. \tag{31} \]
Here, α > 0 represents the step size, which is determined through a backtracking line search procedure to ensure the Armijo condition is satisfied, i.e.,
\[ f\big(\mathbf{z}^{(k+1)}(\alpha^{(k)})\big) \le f\big(\mathbf{z}^{(k)}\big) + \eta\, \alpha^{(k)}\, \mathbf{p}^{(k)\top} \nabla f\big(\mathbf{z}^{(k)}\big), \tag{32} \]
where $\eta \in (0, 2)$ is a predetermined constant that controls the descent amount. The stepsize $\alpha^{(k)}$ is determined through an iterative reduction process, starting from the initial value $\alpha = 1$ and gradually decreasing via $\alpha \leftarrow \beta\alpha$ until the condition in Equation (32) is satisfied. Here, $\beta \in (0, 1)$ is the decrease ratio.
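As a numerical illustration of the updates (30) and (31), the sketch below (Python/NumPy; the name cayley_step is ours) applies the Cayley-transform step to one block variable under our reading of (30)–(31); it is a sketch rather than the authors' implementation. Since the underlying transform is orthogonal, the iterate stays exactly on the unit sphere for any step size.

```python
import numpy as np

def cayley_step(v, p, alpha):
    """Cayley-transform update (30)/(31): move v on the unit sphere along p."""
    c = v @ p                       # v^T p
    nrm2 = p @ p                    # ||p||^2
    num = ((1.0 - alpha * c) ** 2 - alpha ** 2 * nrm2) * v + 2.0 * alpha * p
    den = 1.0 + alpha ** 2 * nrm2 - (alpha * c) ** 2
    return num / den                # ||result|| = ||v|| exactly

# The update preserves the constraint: ||x(alpha)|| = 1 for any alpha.
x = np.random.randn(5); x /= np.linalg.norm(x)
p = np.random.randn(5)
print(np.isclose(np.linalg.norm(cayley_step(x, p, 0.3)), 1.0))   # True
```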
The Riemannian LBFGS algorithm terminates when the following stopping condition is satisfied:
\[ \frac{\big\|\mathbf{z}^{(k+1)} - \mathbf{z}^{(k)}\big\|}{\big\|\mathbf{z}^{(k)}\big\|} \le \epsilon_1, \qquad \big\|\nabla f(\mathbf{z}^{(k+1)})\big\| \le \epsilon_2, \qquad \frac{\big|f(\mathbf{z}^{(k+1)}) - f(\mathbf{z}^{(k)})\big|}{\big|f(\mathbf{z}^{(k)})\big| + 1} \le \epsilon_3, \tag{33} \]
where ϵ 1 , ϵ 2 , ϵ 3 are small positive tolerance parameters approaching zero.
We present the Riemannian LBFGS method for computing M-eigenvalues of biquadratic tensors in Algorithm 1.
Algorithm 1 A Riemannian LBFGS method for computing M-eigenvalues of biquadratic tensors
Require: $\mathcal{A} \in BQ(m,n)$, $\mathbf{x}^{(0)} \in \mathbb{R}^m$, $\mathbf{y}^{(0)} \in \mathbb{R}^n$; the parameters $\eta \in (0,2)$, $k_{\max}$, $\epsilon_1, \epsilon_2, \epsilon_3 \in (0,1)$, and $0 < C_L \le 1 \le C_U$ required in Equations (27)–(33).
1: for $k = 0, \dots, k_{\max}$ do
2:  Compute $\nabla f(\mathbf{z}^{(k)})$ using (26) and then determine the descent direction $\mathbf{p}^{(k)}$ by (29).
3:  Compute the stepsize $\alpha^{(k)}$ through a backtracking procedure until (32) is satisfied.
4:  Update $\mathbf{x}^{(k+1)}$ and $\mathbf{y}^{(k+1)}$ by (30) and (31), respectively.
5:  if the stopping criterion (33) is reached then
6:   Stop.
7:  end if
8: end for
9: Output: $\mathbf{x}^{(k+1)}$ and $\mathbf{y}^{(k+1)}$

6.2. Convergence Analysis

We now establish the convergence analysis of Algorithm 1. Our Riemannian LBFGS method is specifically designed for solving optimization Problem (24) that is constrained on the product of two unit spheres. This setting differs from existing Riemannian LBFGS methods that primarily address optimization problems constrained to a single unit sphere [34], as well as classic LBFGS methods developed for unconstrained optimization Problems [33].
We begin with several lemmas.
Lemma 1.
We let $\{\mathbf{z}^{(k)}\}$ be a sequence generated by Algorithm 1, and consider the descent direction $\mathbf{p}^{(k)}$ defined by (29). Then Conditions (27) and (28) hold for some constants $0 < C_L \le 1$ and $C_U \ge 1$.
Proof. 
We consider the two choices of the descent direction p ( k ) as defined in (29).
If p ( k ) = f ( z ( k ) ) , the conclusions of this lemma follow directly by setting C L = 1 and C U = 1 .
If the descent direction p ( k ) is selected according to the LBFGS method, then both the conditions (27) and (28) must be satisfied.
Combining the both cases, the proof is completed. □
Lemma 2.
The objective function in (24), its gradient vector and Hessian matrix, are bounded on any feasible set of (24). Namely, there exists a positive constant M such that
\[ |f(\mathbf{z})| \le M, \qquad \big\|\nabla f(\mathbf{z})\big\| \le M, \qquad \big\|\nabla^2 f(\mathbf{z})\big\| \le M, \tag{34} \]
for any z = ( x , y ) satisfying x x = 1 and y y = 1 .
Proof. 
Under the conditions $\mathbf{x}^\top\mathbf{x} = 1$ and $\mathbf{y}^\top\mathbf{y} = 1$, the functions $f(\mathbf{z})$, $\nabla f(\mathbf{z})$, and $\nabla^2 f(\mathbf{z})$ are all polynomial in $\mathbf{z}$. Since the feasible set $\{\mathbf{z} = (\mathbf{x},\mathbf{y}) : \mathbf{x}^\top\mathbf{x} = \mathbf{y}^\top\mathbf{y} = 1\}$ (on which $\|\mathbf{z}\| = \sqrt{2}$) is compact, (34) holds. □
Lemma 3.
We let $\{\mathbf{z}^{(k)}\}$ be a sequence generated by Algorithm 1. We let $\hat\alpha = \frac{(2-\eta)\, C_L}{(2+\eta)\, M\, C_U^2}$ and $\alpha_{\min} = \beta\hat\alpha$, where $\beta < 1$ is the decrease ratio in the Armijo line search. Then for all $k \ge 0$, the stepsize $\alpha^{(k)}$ that satisfies (32) is bounded below; i.e.,
\[ \alpha_{\min} \le \alpha^{(k)} \le 1. \]
Proof. 
By (34), we have
\[ f\big(\mathbf{z}^{(k+1)}(\alpha)\big) - f\big(\mathbf{z}^{(k)}\big) \le \nabla_{\mathbf{x}} f(\mathbf{z}^{(k)})^\top \big(\mathbf{x}^{(k+1)}(\alpha) - \mathbf{x}^{(k)}\big) + \frac{M}{2}\big\|\mathbf{x}^{(k+1)}(\alpha) - \mathbf{x}^{(k)}\big\|^2 + \nabla_{\mathbf{y}} f(\mathbf{z}^{(k)})^\top \big(\mathbf{y}^{(k+1)}(\alpha) - \mathbf{y}^{(k)}\big) + \frac{M}{2}\big\|\mathbf{y}^{(k+1)}(\alpha) - \mathbf{y}^{(k)}\big\|^2. \]
Following from Lemma 4.3 in [34], for any α satisfying 0 < α α ^ , it holds that
\[ \nabla_{\mathbf{x}} f(\mathbf{z}^{(k)})^\top \big(\mathbf{x}^{(k+1)}(\alpha) - \mathbf{x}^{(k)}\big) + \frac{M}{2}\big\|\mathbf{x}^{(k+1)}(\alpha) - \mathbf{x}^{(k)}\big\|^2 \le \eta\,\alpha\, \mathbf{p}_{\mathbf{x}}^{(k)\top} \nabla_{\mathbf{x}} f(\mathbf{z}^{(k)}) \]
and
\[ \nabla_{\mathbf{y}} f(\mathbf{z}^{(k)})^\top \big(\mathbf{y}^{(k+1)}(\alpha) - \mathbf{y}^{(k)}\big) + \frac{M}{2}\big\|\mathbf{y}^{(k+1)}(\alpha) - \mathbf{y}^{(k)}\big\|^2 \le \eta\,\alpha\, \mathbf{p}_{\mathbf{y}}^{(k)\top} \nabla_{\mathbf{y}} f(\mathbf{z}^{(k)}). \]
Therefore, we have
\[ f\big(\mathbf{z}^{(k+1)}(\alpha)\big) - f\big(\mathbf{z}^{(k)}\big) \le \eta\,\alpha\, \mathbf{p}^{(k)\top} \nabla f\big(\mathbf{z}^{(k)}\big) \]
for any $0 < \alpha \le \hat\alpha$. By the backtracking line search rule, we have $\alpha^{(k)} \ge \beta\hat\alpha$ for all $k$. This completes the proof. □
We now prove that the objective function in (24) converges to a constant value, and any limit point is a KKT point.
Theorem 5.
We let z ( k ) be a sequence generated by Algorithm 1. Then we have
\[ f\big(\mathbf{z}^{(k+1)}\big) - f\big(\mathbf{z}^{(k)}\big) \le -\eta\,\alpha_{\min}\, C_L \big\|\nabla f\big(\mathbf{z}^{(k)}\big)\big\|^2. \tag{35} \]
Furthermore, the objective function $f(\mathbf{z}^{(k)})$ converges monotonically to a constant value and
\[ \lim_{k\to+\infty} \big\|\nabla f\big(\mathbf{z}^{(k)}\big)\big\| = 0. \]
Namely, any limit point of Algorithm 1 is a KKT point of Problem (24), which is also an M-eigenpair of the biquadratic tensor A .
Proof. 
It follows from Lemma 3 that
\[ f\big(\mathbf{z}^{(k+1)}\big) - f\big(\mathbf{z}^{(k)}\big) \le \eta\,\alpha_{\min}\, \mathbf{p}^{(k)\top} \nabla f\big(\mathbf{z}^{(k)}\big). \]
Combining this with Lemma 1 yields (35). In other words, the objective function $f(\mathbf{z}^{(k)})$ is monotonically decreasing. Lemma 2 shows that $f(\mathbf{z})$ is bounded below. Therefore, $f(\mathbf{z}^{(k)})$ converges monotonically to a constant value $\lambda^*$.
Furthermore, summing both sides of (35) over $k$ gives
\[ \lambda^* - f(\mathbf{z}^{(0)}) = \sum_{k=0}^{\infty}\Big(f(\mathbf{z}^{(k+1)}) - f(\mathbf{z}^{(k)})\Big) \le -\eta\,\alpha_{\min}\, C_L \sum_{k=0}^{\infty}\big\|\nabla f(\mathbf{z}^{(k)})\big\|^2. \]
Therefore, we have
\[ \sum_{k=0}^{\infty}\big\|\nabla f(\mathbf{z}^{(k)})\big\|^2 \le \frac{f(\mathbf{z}^{(0)}) - \lambda^*}{\eta\,\alpha_{\min}\, C_L}, \]
which implies $\lim_{k\to+\infty} \|\nabla f(\mathbf{z}^{(k)})\| = 0$.
This completes the proof. □
Before demonstrating the sequential global convergence, we first present the following lemma.
Lemma 4.
We let $\{\mathbf{z}^{(k)}\}$ be a sequence generated by Algorithm 1. Then there exist two positive constants $l \le u$ such that
\[ l\,\big\|\mathbf{z}^{(k+1)} - \mathbf{z}^{(k)}\big\| \le \big\|\nabla f(\mathbf{z}^{(k)})\big\| \le u\,\big\|\mathbf{z}^{(k+1)} - \mathbf{z}^{(k)}\big\|. \tag{37} \]
Here, $l = \frac{1}{2 C_U}$ and $u = \frac{C_U (1 + C_U M)}{2\,\alpha_{\min} C_L^2}$.
Proof. 
By Theorem 4.6 in [34], we have
\[ \big\|\mathbf{x}^{(k+1)} - \mathbf{x}^{(k)}\big\| \le 2 C_U\, \alpha^{(k)} \big\|\nabla_{\mathbf{x}} f(\mathbf{z}^{(k)})\big\|, \qquad \big\|\mathbf{y}^{(k+1)} - \mathbf{y}^{(k)}\big\| \le 2 C_U\, \alpha^{(k)} \big\|\nabla_{\mathbf{y}} f(\mathbf{z}^{(k)})\big\|. \]
Therefore, we have
\[ \big\|\mathbf{z}^{(k+1)} - \mathbf{z}^{(k)}\big\| \le 2 C_U\, \alpha^{(k)} \big\|\nabla f(\mathbf{z}^{(k)})\big\| \le 2 C_U \big\|\nabla f(\mathbf{z}^{(k)})\big\|. \]
Here, the last inequality follows from $\alpha^{(k)} \le 1$. This shows that the left inequality in (37) holds.
Furthermore, by Lemma 4.7 in [34], we have
\[ \big\|\mathbf{x}^{(k+1)} - \mathbf{x}^{(k)}\big\| \ge \frac{2\alpha_{\min} C_L}{C_U(1 + C_U M)}\big\|\mathbf{p}_{\mathbf{x}}^{(k)}\big\| \ge \frac{2\alpha_{\min} C_L^2}{C_U(1 + C_U M)}\big\|\nabla_{\mathbf{x}} f(\mathbf{z}^{(k)})\big\|, \qquad \big\|\mathbf{y}^{(k+1)} - \mathbf{y}^{(k)}\big\| \ge \frac{2\alpha_{\min} C_L}{C_U(1 + C_U M)}\big\|\mathbf{p}_{\mathbf{y}}^{(k)}\big\| \ge \frac{2\alpha_{\min} C_L^2}{C_U(1 + C_U M)}\big\|\nabla_{\mathbf{y}} f(\mathbf{z}^{(k)})\big\|. \]
Therefore, we have
\[ \big\|\mathbf{z}^{(k+1)} - \mathbf{z}^{(k)}\big\| \ge \frac{2\alpha_{\min} C_L^2}{C_U(1 + C_U M)}\big\|\nabla f(\mathbf{z}^{(k)})\big\|. \]
This completes the proof. □
Now we are ready to present the global convergence of the sequence produced by Algorithm 1.
Theorem 6.
We let z ( k ) be a sequence generated by Algorithm 1. Then we have the following result:
\[ \sum_{k=0}^{\infty} \big\|\mathbf{z}^{(k+1)} - \mathbf{z}^{(k)}\big\| < +\infty. \tag{38} \]
Namely, the sequence $\{\mathbf{z}^{(k)}\}$ converges globally to a limit point $\mathbf{z}^* = (\mathbf{x}^*, \mathbf{y}^*)$. Furthermore, $(\mathbf{x}^*, \mathbf{y}^*)$ is a pair of M-eigenvectors of $\mathcal{A}$ with the corresponding M-eigenvalue $\lambda^* = \mathcal{A}\mathbf{x}^*\mathbf{y}^*\mathbf{x}^*\mathbf{y}^*$.
Proof. 
By Theorem 5 and Lemma 4, we may deduce the sufficient decrease property
\[ f(\mathbf{z}^{(k+1)}) - f(\mathbf{z}^{(k)}) \le -\eta\,\alpha_{\min}\, C_L\, l^2\, \big\|\mathbf{z}^{(k+1)} - \mathbf{z}^{(k)}\big\|^2, \]
and the gradient lower bound for the iterates gap
\[ \big\|\nabla f(\mathbf{z}^{(k)})\big\| \le u\, \big\|\mathbf{z}^{(k+1)} - \mathbf{z}^{(k)}\big\|. \]
Both the objective function and the constraints in (24) are semi-algebraic functions that satisfy the KL property [35]. Therefore, it follows from Theorem 1 in [35] that (38) holds. In other words, z ( k ) is a Cauchy sequence, and it converges globally to a limit point z * .
This completes the proof. □

7. Numerical Experiments

7.1. Inclusion Intervals of M-Eigenvalues

In this subsection, we show the bounds of M-eigenvalues presented in Theorem 3 by several examples. We begin with the elastic moduli tensor taken from Example 4.2 in Zhao [30].
Example 2.
We consider the elastic moduli tensor $\mathcal{C} = (c_{i_1 j_1 i_2 j_2}) \in BQ(3,3)$ for the tetragonal system with 7 elasticities in the equilibrium Equation (4), where
\[ c_{1123} = c_{1131} = c_{2223} = c_{2231} = c_{3323} = c_{3331} = c_{3312} = c_{2331} = c_{2312} = c_{3112} = 0, \qquad c_{1112} = -c_{2212}, \quad c_{2222} = c_{1111}, \quad c_{2233} = c_{1133}, \quad c_{3131} = c_{2323}, \]
with
c 1111 = 4 , c 1122 = 4 , c 1133 = 2 , c 1112 = 1 , c 3333 = 3 , c 2323 = 4 , c 1212 = 4 .
We then transform C into a symmetric biquadratic tensor A by
\[ a_{i_1 j_1 i_2 j_2} = \frac{1}{4}\big( c_{i_1 j_1 i_2 j_2} + c_{i_2 j_1 i_1 j_2} + c_{i_1 j_2 i_2 j_1} + c_{i_2 j_2 i_1 j_1} \big) \]
and list all the entries of $\mathcal{A}$ as follows:
a 1111 = 4 , a 1112 = a 1211 = 1 , a 1121 = a 2111 = 1 , a 1212 = 4 , a 1222 = a 2212 = 1 , a 1133 = a 1331 = a 3113 = a 3311 = 1 , a 1313 = 4 , a 2121 = 4 , a 2122 = a 2221 = 1 , a 2222 = 4 , a 2233 = a 2332 = a 3223 = a 3322 = 1 , a 2323 = 4 , a 3131 = 4 , a 3232 = 4 , a 3333 = 3 .
The lower and upper bounds for the M-eigenvalues, as computed by He, Li and Wei [36], by Zhao [30], and by Theorem 3, are shown in Table 1. Notably, since $\mathcal{C}$ is nonsymmetric, the results of [30,36] are not directly applicable to it. We also kept most entries of $\mathcal{A}$ unchanged and adjusted the values of $a_{1212}$, of $a_{1312}$ and $a_{1213}$, and of $a_{1313}$, respectively, to generate additional examples. From this table, we see that Theorem 3 generates tighter intervals than the other methods. In particular, for the instances with $a_{1212} = 2$ and with $a_{1312} = a_{1213} = 2$, Theorem 3 certifies positive semi-definiteness, and for the instance with $a_{1313} = 2$, Theorem 3 certifies positive definiteness, while the other methods may not reach the same conclusions.
We continue with larger scale examples.
Example 3.
We suppose that A = B + ( m + n ) I S B Q ( m , n ) , where B is a random symmetric biquadratic tensor whose entries are uniformly distributed random integers ranging from 1 to m + n , and I denotes the identity biquadratic tensor.
We compare the lower and upper bounds of M-eigenvalues obtained by He, Li and Wei [36], Zhao [30], and Theorem 3. We repeat all experiments 100 times with different random tensors B and show the average results in Table 2. It follows from this table that Theorem 3 could return a smaller interval on average in most cases.
We also compare the quality of the intervals obtained by Zhao [30] and by Theorem 3 in the following way. We denote by $l_z$ and $u_z$ the lower and upper bounds of the M-eigenvalues obtained by Zhao [30], and by $l$ and $u$ those obtained by Theorem 3, respectively. We then report in Table 3 the number of cases in which $l_z =, >, < l$ and in which $u_z =, >, < u$, respectively. It follows from Table 3 that Theorem 3 returns a tighter interval in most cases. This phenomenon becomes more pronounced when $m$ and $n$ are large.

7.2. Computing the Smallest M-Eigenvalue

In this subsection, we present the numerical results for computing the smallest M-eigenvalues using Algorithm 1. For all experiments, we generate $\mathbf{x} \in \mathbb{R}^m$ and $\mathbf{y} \in \mathbb{R}^n$ randomly from the normal distribution and then normalize them to obtain the initial points $\mathbf{x}^{(0)} = \mathbf{x}/\|\mathbf{x}\|$ and $\mathbf{y}^{(0)} = \mathbf{y}/\|\mathbf{y}\|$, respectively. To ensure robustness, each experiment is repeated 20 times with different initial points, and the average results are reported. The parameters are set as follows: $\eta = 10^{-3}$, $k_{\max} = 1000$, $\epsilon_1 = 10^{-6}$, $\epsilon_2 = 10^{-6}$, $\epsilon_3 = 10^{-16}$, $C_L = 10^{-16}$, and $C_U = 10^{16}$.
We first consider the biquadratic tensor in Example 2. The average iteration count of Algorithm 1 is 18.55, the average CPU time consumed is $1.08 \times 10^{-1}$ seconds, and the average value of $\|\nabla f(\mathbf{z})\|$ at termination is $3.72 \times 10^{-7}$. Among all 20 repeated trials, an M-eigenvalue of 2.5 was achieved in 19 cases. We also show the values of the M-eigenpairs in Table 4. Since for any M-eigenpair $(\lambda, \mathbf{x}, \mathbf{y})$, the variants $(\lambda, -\mathbf{x}, \mathbf{y})$, $(\lambda, \mathbf{x}, -\mathbf{y})$, and $(\lambda, -\mathbf{x}, -\mathbf{y})$ are also M-eigenpairs of $\mathcal{A}$, we exclude these redundant M-eigenpairs from our results. Furthermore, we present the iterative behavior of Algorithm 1 for computing M-eigenvalues in Figure 1. It is evident that our Riemannian LBFGS algorithm exhibits rapid convergence.
At last, we present an example from statistics.
Example 4.
We generate a sequence of 10,000 independent and identically distributed random matrices X ( t ) R m × n uniformly from the interval [ 0 , 10 ] . Subsequently, we compute the covariance tensor by (3).
We show the numerical results for computing M-eigenvalues of the covariance biquadratic tensors by Algorithm 1 in Table 5. Here, ‘ λ ’ represents the smallest M-eigenvalue obtained from 20 repeated trials, and ’Rate’ denotes the success rate of returning the smallest M-eigenvalue. Additionally, ’Iteration’, ’Time (s)’, and ’Res’ correspond to the average values of the number of iterations, the CPU time consumed, and the norm of the gradient f ( z ) at the final iteration, respectively.
It is confirmed that all the smallest M-eigenvalues obtained are positive, which aligns with Proposition 2, indicating that the covariance tensor is positive definite. The success rate remains high for small-dimensional problems but gradually decreases as the values of $m$ and $n$ increase. This observation is consistent with the theoretical expectation that the number of M-eigenvalues of a higher-dimensional biquadratic tensor generally increases, leading to more KKT points and potentially greater difficulty in converging to the global optimal solution. Furthermore, the number of iterations is consistently below 100, and the gradient norm is smaller than the specified tolerance of $10^{-6}$ in all cases. Finally, we present the iterative behavior of Algorithm 1 for computing M-eigenvalues of the covariance tensor with $m = 10$ and $n = 30$ in Figure 2. It is evident that our Riemannian LBFGS algorithm exhibits rapid convergence.

8. Final Remarks

Motivated by applications, in this paper, we extend the definition of M-eigenvalues from symmetric biquadratic tensors to nonsymmetric biquadratic tensors, show that M-eigenvalues always exist, and show that a biquadratic tensor is positive semi-definite (definite) if and only if all of its M-eigenvalues are nonnegative (positive). We present a Gershgorin-type theorem, several classes of structured biquadratic tensors, and an algorithm for computing the smallest M-eigenvalue of a general biquadratic tensor. We may explore the following three topics further.
  • Study more precise bounds for M-eigenvalues of biquadratic tensors and construct more efficient algorithms for computing the smallest M-eigenvalue of a biquadratic tensor.
  • Identify more classes of structured biquadratic tensors.
  • Study the SOS problem of biquadratic tensors.

Author Contributions

Conceptualization, L.Q. and C.C.; methodology, L.Q. and C.C.; software, C.C.; validation, L.Q. and C.C.; formal analysis, L.Q. and C.C.; investigation, L.Q. and C.C.; writing—original draft preparation, L.Q. and C.C.; writing—review and editing, L.Q. and C.C.; supervision, L.Q.; project administration, L.Q.; funding acquisition, L.Q. and C.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by Research Center for Intelligent Operations Research, The Hong Kong Polytechnic University (4-ZZT8), the National Natural Science Foundation of China (Nos. 12471282 and 12131004), the R&D project of Pazhou Lab (Huangpu) (Grant no. 2023K0603), and the Fundamental Research Funds for the Central Universities (Grant No. YWF-22-T-204).

Data Availability Statement

Data will be made available on reasonable request.

Acknowledgments

We thank the editor and the reviewers for their comments, which have helped to improve our paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Qi, L.; Hu, S.; Zhang, X.; Xu, Y. Biquadratic tensors, biquadratic decomposition and norms of biquadratic tensors. Front. Math. China 2021, 16, 171–185.
  2. Huang, Z.H.; Li, X.; Wang, Y. Bi-block positive semidefiniteness of bi-block symmetric tensors. Front. Math. China 2021, 16, 141–169.
  3. Kolda, T.G.; Bader, B.W. Tensor decompositions and applications. SIAM Rev. 2009, 51, 455–500.
  4. Knowles, J.K.; Sternberg, E. On the failure of ellipticity of the equations for finite elastostatic plane strain. Arch. Ration. Mech. Anal. 1976, 63, 321–336.
  5. Zubov, L.M.; Rudev, A.N. On necessary and sufficient conditions of strong ellipticity of equilibrium equations for certain classes of anisotropic linearly elastic materials. J. Appl. Math. Mech. 2016, 9, 1096–1102.
  6. Bomze, I.M.; Ling, C.; Qi, L.; Zhang, X. Standard bi-quadratic optimization problems and unconstrained polynomial reformulations. J. Glob. Optim. 2012, 52, 663–687.
  7. Chen, Y.; Hu, Z.; Hu, J.; Shu, L. Block structure-based covariance tensor decomposition for group identification in matrix variables. Stat. Probab. Lett. 2025, 216, 110251.
  8. Xiang, H.; Qi, L.; Wei, Y. M-eigenvalues of the Riemann curvature tensor. Commun. Math. Sci. 2018, 16, 2301–2315.
  9. Yan, J.; Yin, X.; Lin, W.; Deng, C.; Zha, H.; Yang, X. A short survey of recent advances in graph matching. In Proceedings of the 2016 ACM on International Conference on Multimedia Retrieval, New York, NY, USA, 6–9 June 2016; pp. 167–174.
  10. Cui, C.; Qi, L.; Xu, Y. PSD and SOS biquadratic polynomials. Mathematics 2025, 13, 2294.
  11. Ling, C.; Nie, J.; Qi, L.; Ye, Y. Biquadratic optimization over unit spheres and semidefinite programming relaxations. SIAM J. Optim. 2010, 20, 1286–1310.
  12. Qi, L.; Dai, H.H.; Han, D. Conditions for strong ellipticity and M-eigenvalues. Front. Math. China 2009, 4, 349–364.
  13. Nye, J.F. Physical Properties of Crystals: Their Representation by Tensors and Matrices, 2nd ed.; Clarendon Press: Oxford, UK, 1985.
  14. Che, H.; Chen, H.; Zhou, G. New M-eigenvalue intervals and application to the strong ellipticity of fourth-order partially symmetric tensors. J. Ind. Manag. Optim. 2021, 17, 3685–3694.
  15. Li, S.; Li, C.; Li, Y. M-eigenvalue inclusion intervals for a fourth-order partially symmetric tensor. J. Comput. Appl. Math. 2019, 356, 391–401.
  16. Li, S.; Chen, Z.; Liu, Q.; Lu, L. Bounds of M-eigenvalues and strong ellipticity conditions for elasticity tensors. Linear Multilinear Algebra 2022, 70, 4544–4557.
  17. Zou, W.; He, Q.; Huang, M.; Zheng, Q. Eshelby’s problem of non-elliptical inclusions. J. Mech. Phys. Solids 2010, 58, 346–372.
  18. Wang, Y.; Qi, L.; Zhang, X. A practical method for computing the largest M-eigenvalue of a fourth-order partially symmetric tensor. Numer. Linear Algebra Appl. 2009, 16, 589–601.
  19. Rindler, W. Relativity: Special, General and Cosmological, 2nd ed.; Oxford University Press: Oxford, UK, 2006.
  20. Cooper, J.; Dutle, A. Spectra of uniform hypergraphs. Linear Algebra Appl. 2012, 436, 3268–3292.
  21. Qi, L.; Luo, Z. Tensor Analysis: Spectral Theory and Special Tensors; SIAM: Philadelphia, PA, USA, 2017.
  22. Yang, Y.; Yang, Q. On solving biquadratic optimization via semidefinite relaxation. Comput. Optim. Appl. 2012, 53, 845–867.
  23. O’Rourke, S.M.; Setlur, P.; Rangaswamy, M.; Swindlehurst, A.L. Relaxed biquadratic optimization for joint filter-signal design in signal-dependent STAP. IEEE Trans. Signal Process. 2018, 66, 1300–1315.
  24. Hilbert, D. Über die Darstellung definiter Formen als Summe von Formenquadraten. Math. Ann. 1888, 32, 342–350.
  25. Wang, C.; Chen, H.; Wang, Y.; Yan, H. An alternating shifted inverse power method for the extremal eigenvalues of fourth-order partially symmetric tensors. Appl. Math. Lett. 2023, 141, 108601.
  26. Zhao, J.; Liu, P.; Sang, C. Shifted inverse power method for computing the smallest M-eigenvalue of a fourth-order partially symmetric tensor. J. Optim. Theory Appl. 2024, 200, 1131–1159.
  27. Ding, W.; Liu, J.; Qi, L.; Yan, H. Elasticity M-tensors and the strong ellipticity condition. Appl. Math. Comput. 2020, 373, 124982.
  28. Wang, G.; Sun, L.; Liu, L. M-eigenvalues-based sufficient conditions for the positive definiteness of fourth-order partially symmetric tensors. Complexity 2020, 2020, 2474278.
  29. Qi, L. Eigenvalues of a real supersymmetric tensor. J. Symb. Comput. 2005, 40, 1302–1324.
  30. Zhao, J. Conditions of strong ellipticity and calculation of M-eigenvalues for a partially symmetric tensor. Appl. Math. Comput. 2023, 458, 128245.
  31. Qi, L.; Song, Y. An even order symmetric B tensor is positive definite. Linear Algebra Appl. 2014, 457, 303–312.
  32. Jiang, B.; Yang, F.; Zhang, S. Tensor and its Tucker core: The invariance relationships. Numer. Linear Algebra Appl. 2017, 24, e2086.
  33. Nocedal, J.; Wright, S.J. Numerical Optimization, 2nd ed.; Springer Series in Operations Research and Financial Engineering; Springer: New York, NY, USA, 2006.
  34. Chang, J.; Chen, Y.; Qi, L. Computing eigenvalues of large scale sparse tensors arising from a hypergraph. SIAM J. Sci. Comput. 2016, 38, A3618–A3643.
  35. Bolte, J.; Sabach, S.; Teboulle, M. Proximal alternating linearized minimization for nonconvex and nonsmooth problems. Math. Program. Ser. A 2014, 146, 459–494.
  36. He, J.; Li, C.; Wei, Y. M-eigenvalue intervals and checkable sufficient conditions for the strong ellipticity. Appl. Math. Lett. 2020, 102, 106137.
Figure 1. The iterative procedure of Algorithm 1 for computing M-eigenvalues of the biquadratic tensor A in Example 2. The left panel shows the objective function, while the right panel displays the norm of the gradient.
Figure 2. The iterative procedure of Algorithm 1 for computing M-eigenvalues of a nonsymmetric covariance tensor with m = 10 and n = 30 in Example 4. The left panel shows the objective function, while the right panel displays the norm of the gradient.
Table 1. Interval of M-eigenvalues in Example 2. Here, ‘−’ denotes that the corresponding algorithm is not applicable. Numbers in bold are relatively better.

Tensor | He, Li and Wei [36] | Zhao [30] | Theorem 3
C | [−5, 13] | − | −
A | [−1, 9] | [1, 7] | [1, 7]
A with a_{1212} = 2 | [−3, 9] | [−1, 7] | [0, 7]
A with a_{1312} = a_{1213} = 2 | [−3, 11] | [−0.8637, 8.8637] | [0, 8]
A with a_{1313} = 2 | [−2, 9] | [0, 7] | [1, 7]
Table 2. Interval of M-eigenvalues in Example 3. Numbers in bold are relatively better.

m | n | He, Li and Wei [36] | Zhao [30] | Theorem 3
2 | 2 | [−8.10, 21.36] | [−2.37, 15.65] | [−2.68, 16.07]
2 | 5 | [−60.73, 82.29] | [−34.43, 56.00] | [−33.60, 55.79]
2 | 10 | [−224.41, 261.26] | [−135.97, 172.98] | [−131.85, 168.69]
2 | 20 | [−827.48, 893.86] | [−518.89, 584.25] | [−496.26, 561.33]
5 | 2 | [−60.75, 82.43] | [−34.32, 56.14] | [−33.52, 55.84]
5 | 5 | [−598.99, 630.01] | [−174.55, 205.50] | [−143.27, 174.20]
5 | 10 | [−1821.69, 1867.11] | [−498.30, 544.05] | [−435.28, 481.81]
5 | 20 | [−5942.73, 6018.45] | [−1584.09, 1660.27] | [−1423.54, 1501.01]
Table 3. A summary of the number of instances in Example 3 for the relationships between the lower and upper bounds obtained by Zhao [30] and Theorem 3. Here, l_z and u_z denote the lower and upper bounds from Zhao [30], and l and u the bounds from Theorem 3.

m | n | l_z = l | l_z < l | l_z > l | u_z = u | u_z < u | u_z > u
2 | 2 | 27 | 9 | 64 | 18 | 75 | 7
2 | 5 | 4 | 51 | 45 | 3 | 43 | 54
2 | 10 | 0 | 78 | 22 | 0 | 18 | 82
2 | 20 | 0 | 95 | 5 | 0 | 6 | 94
5 | 2 | 3 | 65 | 32 | 3 | 45 | 52
5 | 5 | 0 | 100 | 0 | 0 | 0 | 100
5 | 10 | 0 | 100 | 0 | 0 | 0 | 100
Table 4. M-eigenpairs computed from Example 2.

λ | x_1 | x_2 | x_3 | y_1 | y_2 | y_3
2.5000 | 0.6533 | −0.2706 | 0.7071 | −0.6533 | 0.2706 | 0.7071
2.5000 | −0.6533 | 0.2706 | 0.7071 | 0.6533 | −0.2706 | 0.7071
2.5000 | 0.2706 | 0.6533 | 0.7071 | 0.2706 | 0.6533 | −0.7071
2.5000 | 0.2706 | 0.6533 | −0.7071 | 0.2706 | 0.6533 | 0.7071
3.0000 | 0.6088 | −0.7933 | 0.0003 | −0.9915 | −0.1304 | 0.0003
Table 5. Numerical results for computing M-eigenvalues of the covariance biquadratic tensors in Example 4 by Algorithm 1.

m | n | λ | Rate | Iteration | Time (s) | Res
5 | 5 | 7.8142 | 35% | 30.75 | 0.331 | 2.15 × 10^{−7}
5 | 10 | 7.7351 | 25% | 44.80 | 0.295 | 2.22 × 10^{−7}
5 | 20 | 7.4446 | 40% | 54.90 | 0.442 | 2.99 × 10^{−7}
5 | 30 | 7.3370 | 20% | 57.35 | 0.527 | 5.26 × 10^{−7}
10 | 5 | 7.7211 | 45% | 43.05 | 0.319 | 2.64 × 10^{−7}
10 | 10 | 7.5963 | 20% | 47.65 | 0.352 | 4.17 × 10^{−7}
10 | 20 | 7.3332 | 15% | 64.70 | 0.643 | 5.42 × 10^{−7}
10 | 30 | 7.2051 | 5% | 80.70 | 1.102 | 5.17 × 10^{−7}
