Abstract
The homogeneous polynomial defined by a tensor, for , has been used in many recent problems in the context of tensor analysis and optimization, including the tensor eigenvalue problem, tensor equation, tensor complementarity problem, tensor eigenvalue complementarity problem, tensor variational inequality problem, and least element problem of polynomial inequalities defined by a tensor, among others. However, conventional computation methods use the definition directly and neglect the structural characteristics of homogeneous polynomials involving tensors, leading to a high computational burden (especially when considering iterative algorithms or large-scale problems). This motivates the need for efficient methods to reduce the complexity of relevant algorithms. First, considering the symmetry of each monomial in the canonical basis of homogeneous polynomials, we propose a calculation method using the merge tensor of the involved tensor to replace the original tensor, thus reducing the computational cost. Second, we propose a calculation method that combines sparsity to further reduce the computational cost. Finally, a simplified algorithm that avoids duplicate calculations is proposed. Extensive numerical experiments demonstrate the effectiveness of the proposed methods, which can be embedded into algorithms for use by the tensor optimization community, improving computational efficiency in magnetic resonance imaging, n-person non-cooperative games, the calculation of molecular orbitals, and so on.
1. Introduction
For any positive integer n, we use to denote the set . Throughout this paper, we assume that l, m, and n are positive integers with , unless otherwise specified. We denote the set of all n-dimensional real vectors by , the set of all n-dimensional non-negative vectors by , and the set of all -dimensional real matrices by . We use the bold lowercase letter to denote an n-dimensional vector with components , where , and the uppercase letter to denote an -dimensional matrix of entries , where and .
A real tensor of order m and dimension means
We use to denote the set of m-order -dimensional real tensors with . In particular, if with , then is called an m-th order n-dimensional real tensor, and the set of all m-th order n-dimensional real tensors is denoted by . A tensor is said to be non-negative if all its entries are non-negative. For any , is said to be a diagonal entry when and is an off-diagonal entry otherwise. If , then it is said to be an identity tensor if all its diagonal entries are one and all its off-diagonal entries are zero, and we denote it by .
Tensors and homogeneous polynomials are two closely related concepts in mathematics, particularly in algebraic geometry, differential geometry, and physics. A tensor can define a homogeneous polynomial; conversely, a homogeneous polynomial can be defined by a tensor but not uniquely (i.e., it can be defined by different tensors). If the tensor is required to be symmetric, then a homogeneous polynomial is defined by a unique symmetric tensor. Given , for any , the vector of -degree homogeneous polynomials defined by tensor , denoted by , can be defined with its i-th component being given by
when , the m-degree homogeneous polynomial defined by the tensor , denoted by , can be defined as
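Since the components of these homogeneous polynomials are obtained by summing tensor entries against products of the coordinates of x, the following sketch (hypothetical helper name, written for the cubical third-order case m = 3 with a dense n-by-n-by-n array A) shows the direct, definition-based evaluation that the later sections aim to avoid.

```matlab
% A minimal sketch (hypothetical helper name), assuming the third-order case
% m = 3 with the tensor A stored as a dense n-by-n-by-n array: it evaluates
% (A x^{m-1})_i = sum_{j,k} A(i,j,k) x(j) x(k) directly from the definition,
% and the m-degree polynomial A x^m as x' * (A x^{m-1}).
function [Axm1, Axm] = naive_homogeneous_poly(A, x)
    n = numel(x);
    Axm1 = zeros(n, 1);
    for i = 1:n
        for j = 1:n
            for k = 1:n
                Axm1(i) = Axm1(i) + A(i, j, k) * x(j) * x(k);
            end
        end
    end
    Axm = x(:)' * Axm1;    % value of the m-degree homogeneous polynomial
end
```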
Over the past decade, tensors and their related problems have been studied extensively, many of which involve the calculation of . Some of these problems are given as follows.
- (a)
- Tensor eigenvalue problem [1,2,3,4,5]. Given an m-th order n-dimensional real tensor $\mathcal{A}$, an H-eigenvalue of $\mathcal{A}$ is a real number $\lambda$ for which $\mathcal{A}x^{m-1} = \lambda x^{[m-1]}$ admits a non-zero solution $x \in \mathbb{R}^n$, where $x^{[m-1]} = (x_1^{m-1}, \ldots, x_n^{m-1})^\top$. Meanwhile, a Z-eigenvalue of $\mathcal{A}$ is a real number $\lambda$ for which $\mathcal{A}x^{m-1} = \lambda x$ with $x^\top x = 1$ admits a non-zero solution $x \in \mathbb{R}^n$.
- (b)
- Tensor equation [6,7,8,9,10,11,12,13,14]. Given an m-th order n-dimensional real tensor $\mathcal{A}$ and a vector $b \in \mathbb{R}^n$, this problem involves finding a vector $x \in \mathbb{R}^n$ such that $\mathcal{A}x^{m-1} = b$.
- (c)
- Tensor complementarity problem [15,16,17,18,19,20,21]. Given an m-th order n-dimensional real tensor $\mathcal{A}$ and a vector $q \in \mathbb{R}^n$, the tensor complementarity problem involves finding a vector $x \in \mathbb{R}^n$ such that $x \geq 0$, $\mathcal{A}x^{m-1} + q \geq 0$, and $x^\top (\mathcal{A}x^{m-1} + q) = 0$. See also the survey papers [22,23,24].
- (d)
- Tensor eigenvalue complementarity problem [25,26,27,28,29,30]. Given an m-th order n-dimensional real tensor $\mathcal{A}$, the tensor eigenvalue complementarity problem involves finding a real number $\lambda$ and a vector $x \in \mathbb{R}^n \setminus \{0\}$ such that $x \geq 0$, $(\lambda \mathcal{I} - \mathcal{A})x^{m-1} \geq 0$, and $x^\top (\lambda \mathcal{I} - \mathcal{A})x^{m-1} = 0$, where $\mathcal{I}$ is an identity tensor.
- (e)
- Tensor variational inequality problem [31]. Given an m-th order n-dimensional real tensor $\mathcal{A}$, a vector $q \in \mathbb{R}^n$, and a non-empty set $K \subseteq \mathbb{R}^n$, the tensor variational inequality problem involves finding a vector $x \in K$ such that $(y - x)^\top (\mathcal{A}x^{m-1} + q) \geq 0$ for all $y \in K$.
- (f)
- Least element problem for the set defined by polynomial inequalities [32]. Let be a Z-tensor (i.e., all its off-diagonal entries are non-positive) and . Then, there exists an element such that for all . This problem is related to the design of algorithms to find the least element .
An m-th order n-dimensional tensor $\mathcal{A}$ has $n^m$ entries and, hence, as n or m becomes large, the computational complexity of evaluating $\mathcal{A}x^{m-1}$ for $x \in \mathbb{R}^n$ grows rapidly; therefore, large-scale problems pose huge computational challenges. Moreover, in the iterative algorithms used to solve the above problems (a)–(f), it is necessary to repeatedly calculate the values of $\mathcal{A}x^{m-1}$ at different points $x$. Therefore, determining how to calculate $\mathcal{A}x^{m-1}$ effectively is an important problem in the context of tensor analysis and optimization.
The purpose of this study is to determine how to effectively calculate the vector of -degree homogeneous polynomials for and . We aim to reduce the associated computational cost by dissecting the symmetry of each monomial in the canonical basis of homogeneous polynomials and leveraging the sparsity of the involved tensors. Our main contributions are as follows:
- Firstly, taking into account the symmetry of the monomials composing homogeneous polynomials, we replace the original tensors with their merge tensors, significantly reducing computational costs.
- Secondly, considering sparsity, we design algorithms for the calculation of by finding non-zero elements and corresponding positions. When searching for non-zero elements and their corresponding positions in the merge tensor, we also utilize symmetry.
- Thirdly, a simplified algorithm is proposed to avoid duplicate calculations, which further reduces the computational cost.
The rest of this paper is organized as follows. In Section 2, we introduce some symbols, concepts, and results that will be used in the subsequent analyses. In Section 3, we determine the computational complexity of calculating and without considering sparsity. In Section 4, we design algorithms for calculating and when taking sparsity into consideration and present a simplified algorithm for the calculation of . Preliminary numerical results are reported in Section 5, and the concluding remarks are given in Section 6.
2. Preliminaries
2.1. Notations
Let the index set be non-empty. We use to denote the cardinality of the set T. For any , denotes a sub-vector of containing components corresponding to the indices in T. For any , denotes a sub-matrix of A composed of its rows corresponding to the indices in T; that is, the entries of satisfy and .
2.2. Merge Tensor
For any and , we define
For any given indices , we use to denote the set of all permutations of the index arrangement . For any , denotes the i-th row-subtensor of , with entries for any .
The following concept was introduced in [32].
Definition 1.
For an arbitrary given tensor , we call the merge tensor under the permutation of if, for any and ,
In the following, we use the term merge tensor instead of merge tensor under permutation for the sake of simplicity.
It is easy to see that, by utilizing the symmetry of the monomials, each entry of the merge tensor is the sum of the entries of the corresponding original tensor over all permutations of its last m − 1 indices. From Definition 1, for any given tensor , it is obvious that
We use the following simple example to illustrate this point.
Example 1.
Let , where , , , , , , , and .
Let , where , , , , , , , and .
It is easy to see that for all . Specifically, we have
For any given tensor and , it follows from (2) that we may obtain the value of by computing for any satisfying (2). As the merge tensor given in Definition 1 is one of the sparsest tensors among all the tensors satisfying (2), we compute instead of , where is the merge tensor of , as this greatly reduces the cost of calculation. In addition, we combine the sparsity of to further reduce the computational cost. These aspects are further investigated in the following sections.
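As an illustration of Definition 1, the following sketch (hypothetical helper, third-order case m = 3, dense storage) accumulates the entries of a tensor A over all permutations of its last two indices into the position with non-decreasing indices; by construction, the resulting tensor M satisfies M x^{m-1} = A x^{m-1} while typically having far fewer non-zero entries.

```matlab
% A minimal sketch (hypothetical helper, third-order case m = 3): builds the
% merge tensor M of a dense n-by-n-by-n tensor A by accumulating, for each
% first index i and each non-decreasing index pair (j <= k), the entries of A
% over all permutations of (j, k).
function M = merge_tensor_order3(A)
    n = size(A, 1);
    M = zeros(n, n, n);
    for i = 1:n
        for j = 1:n
            for k = 1:n
                jk = sort([j, k]);                      % canonical index arrangement
                M(i, jk(1), jk(2)) = M(i, jk(1), jk(2)) + A(i, j, k);
            end
        end
    end
end
```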
3. Calculation of and Without Considering Sparsity
For any given and , let be the merge tensor of . By (2), we have for any . In this section, we show the difference between and when they are computed directly, according to their definitions.
On one hand, for any , as there are at most non-zero entries in sub-row tensor , the computational complexity of is when we compute directly using (1).
On the other hand, we define
and, for any , we define the monomial vector as
which represents the canonical basis of the vector space of -degree homogeneous polynomials in with real coefficients. For any , can be written as
where denotes the coefficient vector corresponding to the basis vector . For any , it is worth noting that there are at most non-zero entries in the sub-row tensor , such that has at most non-zero elements. Hence, the computational complexity of is when we compute directly using (4).
The difference between and when they are computed using the above-mentioned methods is presented intuitively in Figure 1.
Figure 1.
Evolution of the difference with respect to n with different m.
It can be seen that, while the computational cost of and increases with n, the gap between the above two gradually widens; in particular, when m is larger, this phenomenon can be clearly observed. From the above analysis, we can see that, for any given tensor , if its merge tensor is not itself, then calculating to obtain the value of for any can greatly reduce the computational cost, when compared to calculating directly.
4. Calculation of and When Considering Sparsity
In many practical calculations, the tensors considered are usually sparse. In this section, let the tensor be given, and be the merge tensor of . For , we propose methods to calculate and by combining the sparsity of the tensors and . It is obvious that the calculation of both and is trivial when is a zero vector or a vector of all ones. Thus, in the following, we assume that is neither a zero vector nor a vector of all ones.
4.1. Calculation of
For any , the i-th component of is equivalent to
where . We divide this section into two parts.
Part 1. Identification of non-zero elements and their positions in tensor .
Suppose that has p non-zero entries with . We denote the vector of all non-zero entries of and the corresponding index matrix by
where is the index corresponding to the non-zero entry of (i.e., for any ).
The above discussion can be summarized according to the following procedure:
Procedure 1
(Calculation of vector and matrix S).
- (S0)
- Input: tensor .
- (S1)
- Compute vector and matrix S by (6) (for any given tensor , we can easily obtain the vector of all its non-zero entries and the corresponding index matrix S using MATLAB R2023b's command ‘find’; a sketch is given after this procedure).
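The following is a minimal sketch of Procedure 1 (the function name is ours), assuming the tensor is stored as a dense m-way MATLAB array: 'find' returns the linear indices of the non-zero entries, and ind2sub converts them into the p-by-m index matrix S.

```matlab
% A minimal sketch of Procedure 1 (hypothetical function name), assuming a
% dense m-way array A: v collects the p non-zero values and S is the p-by-m
% matrix of their multi-indices.
function [v, S] = nonzero_entries_and_index(A)
    sz   = size(A);
    idx  = find(A);                      % linear indices of non-zero entries
    v    = A(idx);  v = v(:);            % vector of the p non-zero values
    subs = cell(1, numel(sz));
    [subs{:}] = ind2sub(sz, idx);        % convert linear indices to multi-indices
    S = [subs{:}];                       % p-by-m index matrix
end
```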
Part 2. Calculation of based on vector and matrix S.
For any fixed , we select the rows of the index matrix S satisfying the first component being equal to i and denote the set of numbers corresponding to these rows as ; that is,
If for some , then obviously; hence, we have . Next, we consider the case of for . In this case, for with and , we use to denote a sub-vector of by discarding its first component.
Then, we have
Now, for any , (5) turns to
We can obtain the following algorithm:
Algorithm 1. Calculation of based on vector and matrix S.
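A minimal sketch of the idea behind Algorithm 1 (the function name is ours): every stored non-zero entry contributes to the output component selected by the first column of S, multiplied by the monomial formed from its remaining indices.

```matlab
% A minimal sketch of the idea behind Algorithm 1 (hypothetical function name):
% for every stored non-zero entry, the first index of its row in S selects the
% output component, and the product of the remaining x-coordinates gives the
% monomial it multiplies.
function y = eval_poly_sparse(v, S, x, n)
    m = size(S, 2);
    y = zeros(n, 1);
    for j = 1:numel(v)
        i    = S(j, 1);
        y(i) = y(i) + v(j) * prod(x(S(j, 2:m)));
    end
end
```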
4.2. Calculation of
In this section, based on the vector and matrix S defined according to (6) in Section 4.1, we find the vector of all non-zero entries of and the corresponding index matrix and denote them by and , respectively. Then, we provide a method to compute . We divide this section into two parts.
Part 1. Identification of non-zero elements and their positions in the tensor .
Suppose that has q non-zero entries with , where r is defined by (3). Then, and . To obtain and , we execute a simple operation on vector and matrix , as defined by (6).
For any given , is defined by (7); that is, it is the set of the row numbers of matrix S whose first element is equal to i.
First, for any given , for simplicity, we use
to denote the sub-matrix composed of rows of matrix S and sub-vector composed of elements of the vector corresponding to the set (to reduce storage, we delete all the rows corresponding to the index set from matrix S and vector after obtaining the sub-matrix and sub-vector ). Here, we have .
Second, for each row of the matrix with , we rearrange the last elements in increasing order. We denote the matrix obtained through this partial rearrangement as .
Third, for any , we assume that there are different rows in and denote the matrix composed of these different rows as . Moreover, we denote , with its t-th component being defined by
where with being the j-th row of matrix and being the t-th row of matrix (note that we can add these together utilizing symmetry).
Fourth, for any , we use to denote the sub-vector of obtained by deleting all zero elements of and to denote the sub-matrix of obtained by deleting all rows of corresponding to the zero elements of .
In summary, we can obtain the matrix and vector as
as summarized in Procedure 2:
Procedure 2
(Compute vector and matrix for merge tensor ).
- (S0)
- Input: vector and matrix S for tensor .
- (S1)
- Compute vector and matrix for merge tensor by (10).
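A minimal sketch of Procedure 2 (the function name is ours): the last m − 1 indices of every row of S are sorted, duplicate rows are merged by summing their values with 'unique' and accumarray, and entries that cancel to zero are dropped.

```matlab
% A minimal sketch of Procedure 2 (hypothetical function name): produces the
% non-zero values and index matrix of the merge tensor from those of the
% original tensor.
function [vbar, Sbar] = merge_nonzero_entries(v, S)
    m = size(S, 2);
    T = [S(:, 1), sort(S(:, 2:m), 2)];        % sort the last m-1 indices of each row
    [Sbar, ~, grp] = unique(T, 'rows');       % distinct sorted index patterns
    vbar = accumarray(grp, v(:));             % sum the values of merged entries
    keep = (vbar ~= 0);                       % drop entries that cancel to zero
    vbar = vbar(keep);
    Sbar = Sbar(keep, :);
end
```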
Part 2. Calculation of based on vector and matrix .
Based on the above discussions, we provide the following algorithm to compute for any given tensor and vector .
Algorithm 2. Calculation of based on vector and matrix .
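A hypothetical usage example combining the sketches above (all helper names are ours): Procedures 1 and 2 are executed once, and the merged data are then reused whenever the polynomial has to be evaluated at a new point x in R^n.

```matlab
% Hypothetical usage (helper names from the earlier sketches):
[v, S]       = nonzero_entries_and_index(A);   % Procedure 1: non-zero values and index matrix
[vbar, Sbar] = merge_nonzero_entries(v, S);    % Procedure 2: data of the merge tensor
y = eval_poly_sparse(vbar, Sbar, x, n);        % Algorithm 2: evaluates M(A) x^{m-1} = A x^{m-1}
```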
4.3. Simplification of Calculating
To further reduce computational costs, we propose a simplified algorithm for calculating in this section. From (8), for any and , instead of calculating directly, we can also calculate for all first and then calculate . These two calculation methods are equivalent, but the difference lies in the order of multiplication.
Now, we further consider the latter case. For any with and , and are defined by (9). Let (or ) be the sub-matrix of (or ) obtained by removing its first column; thus, for , defined in Section 4.1 is a row of the matrix or .
Generally, the matrices and are considered as sets of row vectors, and their intersection may not be empty, which means that duplicate rows may appear. Let (or ) be a vector whose j-th element is defined as (or ), and (or ) is defined by (9). For any with and , we first calculate and obtain . Then, we calculate and obtain . However, when we calculate , some of its elements may have already been calculated in . Therefore, we consider utilizing symmetry, such that each monomial in the canonical basis is computed at most once.
We use the following simple example to illustrate the above description.
Example 2.
Let , where , , , , , , , , , , , , , , , , and .
It is easy to calculate that
for a given . When we calculate the first component of , we need to calculate the vector first and then take the inner product of the vectors and . However, when we calculate the second component of , the elements in vector have already been calculated in , and the same applies to the third component of .
According to the above analysis, there is a similar discussion regarding the calculation of . In the following, we introduce the details of a process for calculating in order to avoid duplicate calculations. Based on the matrix defined by (10) in Section 4.2, we can obtain by removing its first column and identify its different rows, denoted as (for any given matrix A, we can easily obtain the matrix C containing all of its different rows using MATLAB R2023b's command ‘unique’). Then, for any , we can find and the corresponding index matrix . Based on and , we can obtain the intersection of these two sets and the index of in with respect to the rows. Finally, we calculate the vector whose t-th element is defined as , where is the t-th row of ; thus, for any , can be obtained from and the index of in directly. Then, can be calculated. We divide this section into three parts.
Part 1. Identification of the different rows of the sub-matrix obtained from by removing its first column.
Let be the sub-matrix of obtained by removing its first column. Then, we can find all the different rows of , and we denote the matrix composed of these different rows by . Suppose that there are s different rows of . Then, .
Part 2. Identification of the intersection of and with respect to rows and the corresponding row index vector of in for any .
For any given , and are defined by (10), which represent the sub-matrix composed of rows of matrix and the sub-vector composed of elements of vector , respectively. Let be the sub-matrix of obtained by removing its first column. Then, we compute the intersection of and to determine the index set of in with respect to rows. We denote this index set as , which is characterized by the following relationship:
Then, we can obtain the index set J as follows
The above two parts can be summarized through the following procedure:
Procedure 3
(Compute index set for tensor ).
- (S0)
- Input: matrix for tensor .
- (S1)
- Compute by (11).
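A minimal sketch of Procedure 3 (the function name is ours): 'unique' collects the distinct trailing index patterns of the merged index matrix into C, and its third output maps every row of the merged index matrix to the row of C carrying the same monomial, so that each monomial in the canonical basis needs to be evaluated only once.

```matlab
% A minimal sketch of Procedure 3 (hypothetical function name): C collects the
% distinct trailing index patterns of Sbar, and J(j) gives the row of C that
% carries the same monomial as row j of Sbar.
function [C, J] = distinct_monomial_index(Sbar)
    m = size(Sbar, 2);
    [C, ~, J] = unique(Sbar(:, 2:m), 'rows');   % J: mapping from rows of Sbar to rows of C
end
```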
Part 3. Calculation of based on matrix , vector , and index set for any .
Let be the t-th () row of . We calculate the vector based on and first, as follows:
For any , we can obtain the corresponding and the index set . Thus, can be obtained from and directly; that is,
then can be calculated.
Now, the i-th component of is equivalent to
Based on the above discussions, we give the following simplified algorithm to compute for any given tensor and vector .
Algorithm 3. Simplified calculation of based on vector and index set .
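A minimal sketch of the idea behind Algorithm 3 (names are ours): each distinct monomial is evaluated exactly once and stored in w, and every component of the product then reduces to a short weighted sum through the mapping J, avoiding the duplicate monomial evaluations of Algorithm 2.

```matlab
% A minimal sketch of the idea behind Algorithm 3 (hypothetical function name):
% w(t) = prod(x(C(t,:))) is computed once per distinct index pattern t, and the
% output components are accumulated through the mapping J.
function y = eval_poly_sparse_simplified(vbar, Sbar, C, J, x, n)
    w = zeros(size(C, 1), 1);
    for t = 1:size(C, 1)
        w(t) = prod(x(C(t, :)));                % each monomial evaluated exactly once
    end
    y = zeros(n, 1);
    for j = 1:numel(vbar)
        y(Sbar(j, 1)) = y(Sbar(j, 1)) + vbar(j) * w(J(j));
    end
end
```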
5. Numerical Experiments
In this section, we implement the methods proposed in Section 4 to calculate and and conduct a series of experiments to verify the effectiveness of the proposed methods.
All the experiments were performed using MATLAB R2023b on a laptop computer with an Intel Core i7-10710U at 1.1 GHz and 16 GB of RAM.
5.1. Performance of Algorithm 1 (or Algorithm 2) for Computing (or ) When and S (or and ) Are Known
According to the description in Section 4, we can easily see that the difference between Algorithms 1 and 2 lies in their inputs. In this section, we verify the efficiency of Algorithms 1 and 2 through a comparison with ‘ttv’ in the tensor toolbox [33]. In addition, and are sometimes easily obtained in practical applications, and the tensor may even be given directly by and ; in this case, we can use Algorithm 2 to compute directly without conducting Procedures 1 and 2. In the following example, to demonstrate the performance of Algorithm 1 (or Algorithm 2), we construct a tensor whose merge tensor is the tensor itself, so that Algorithms 1 and 2 coincide.
Example 3.
Let with and be a tensor generated with entries such that, for any ,
with all other entries being 0.
Obviously, given in Example 3 is equal to and, so, can be obtained easily via Procedure 1. Next, we apply both Algorithm 2 and ‘ttv’ to compute . Note that, before executing Algorithm 2, we need to use Procedure 1 to obtain and , while ‘ttv’ can be directly applied to the tensor . Moreover, we use , , and to denote the CPU time cost of Procedure 1, Algorithm 2, and ‘ttv’ in the tensor toolbox, respectively. The numerical experimental results for different values of n are presented in Table 1, where p is the number of non-zero elements and represents the sparsity ratio.
Table 1.
Numerical results for computing in Example 3.
From Table 1, we can see that our Algorithm 2 significantly outperforms ‘ttv’ in the tensor toolbox for instances with low sparsity. However, our algorithm requires additional time to find and via Procedure 1, which can be implemented as an off-line process. Moreover, as n increases, Procedure 1 and Algorithm 2 present certain advantages. In the case , the MATLAB function ‘tensor’ ran out of memory.
Furthermore, the tensors involved in Table 1 are stored in a general way. When the tensors in Example 3 are stored by and , Procedure 1 does not need to be executed. Therefore, we only need to directly compare Algorithm 2 and ‘ttv’. It is worth noting that ‘ttv’ in the tensor toolbox is an overloaded function: the function ‘ttv’ invoked with general tensor arguments differs from that invoked with sparse tensor arguments. When the tensor is stored by and (that is, when the tensor is stored in the sparse structure), it is fairer to use ‘ttv’ with sparse tensors as parameters for comparison. The numerical experimental results for different n are shown in Table 2, where we use to denote the CPU time cost of ‘ttv’ with sparse tensors as parameters, and the other notation is the same as in Table 1. Without loss of generality, in the following description, if the tensors involved have the general structure, we execute the function ‘ttv’ in the tensor toolbox with general tensors as parameters (denoted by ‘ttv’), while, if the tensors involved have the sparse structure, we execute the function ‘ttv’ in the tensor toolbox with sparse tensors as parameters (denoted by ‘s-ttv’).
Table 2.
Numerical results for computing in Example 3 with sparse structure.
From Table 2, we can see that, when the tensors are stored using and (i.e., in the sparse structure), both Algorithm 2 and ‘s-ttv’ can solve problems of larger scale, and the two perform equally well. When , Algorithm 2 performed better.
5.2. Verification of the Advantages of Calculating After Merging
In order to verify the advantages of merging, we compared the numerical performance of Algorithm 2 and ‘s-ttv’ based on .
Example 4.
Let with and be a tensor generated with entries satisfying
with all other entries being 0.
Obviously, given in Example 4 is very sparse and its merge tensor can reduce the number of non-zero elements by more than half. Suppose that the tensors in Example 4 are stored in the sparse structure; then, we can directly apply Procedure 2 and Algorithm 2 to compute , while ‘s-ttv’ is used to compute . Moreover, we use , , and to denote the CPU time cost of Procedure 2, Algorithm 2, and ‘s-ttv’ in the tensor toolbox, respectively. The numerical experimental results for different values of n are shown in Table 3, where p is the number of non-zero elements of , and q is the number of non-zero elements of .
Table 3.
Numerical results for computing in Example 4.
From Table 3, it can be seen that both Algorithm 2 and ‘s-ttv’ can solve problems of different scales and have remarkable computational efficiency. For small-scale problems, Algorithm 2 and ‘s-ttv’ take almost no CPU time, while, for large-scale problems, Algorithm 2 takes less CPU time than ‘s-ttv’. In addition, although Procedure 2 may take additional CPU time for large-scale problems, it can be performed in an offline manner in practical applications.
5.3. Comparison of Algorithms 1 and 2 with s-ttv
For any given and with different , we used ‘s-ttv’ and Algorithm 1 to compute tensor–vector products and used Algorithm 2 to compute , where ‘s-ttv’ was implemented using the tensor toolbox [33], consistent with the description in Section 5.1.
Example 5.
Let with and be a sparse tensor generated by sptenrand with sparsity ratio ; that is, has non-zero entries uniformly distributed in .
To demonstrate the performance of the different algorithms, we calculated the average CPU time cost from 10 random experiments. In particular, we use , and to denote the average CPU time cost of Algorithms 1 and 2 and ‘s-ttv’, respectively. Moreover, to present the computation cost of the pre-processing procedures clearly, we use to denote the average CPU time cost for and S in Procedure 1, and we use to denote the average CPU time cost for computation of the vector and matrix for the merge tensor in Procedure 2. Here, p is the average number of non-zero elements of original tensors, and q is the average number of non-zero elements of corresponding merge tensors. The numerical results are shown in Table 4.
Table 4.
Numerical results for computing in Example 5 with different values of .
From Table 4, we can see that when we calculate , Algorithm 1 is faster than ‘s-ttv’ in the tensor toolbox for high-order examples while, for low-order examples, Algorithm 1 and ‘s-ttv’ perform equally well. In general, Algorithm 2 is faster than Algorithm 1 and ‘s-ttv’. In our examples, we generated a series of sparse tensors, such that Procedure 1 used to calculate is basically not time-consuming due to the special storage structure of sparse tensors, while Procedure 2 to calculate takes a relatively long time. As discussed in Section 4, we need to implement Procedures 1 and 2 to obtain before executing Algorithm 2.
In many practical applications, we need to compute repeatedly if we use ‘s-ttv’ in the tensor toolbox. However, in our proposed methods, we only use Procedures 1 and 2 once to obtain and then call Algorithm 2 repeatedly, thus greatly shortening the computation time. Therefore, our proposed methods may not have advantages in a single calculation but instead have significant advantages in repeated calculations.
5.4. Comparison of Algorithms 2 and 3
In this section, in order to demonstrate the effectiveness of the simplified strategy presented in Section 4.3, we demonstrate the numerical performance of Algorithms 2 and 3 based on .
Example 6.
Let with and be a sparse tensor generated by sptenrand with sparsity ratio . Then, we can use Procedure 2 to obtain the vector of all non-zero entries and the corresponding index matrix .
In the following, for any given and with different , we use Algorithms 2 and 3 to compute tensor–vector products based on . In addition, we use ‘s-ttv’ to compute as a baseline for comparison.
To demonstrate the performance of the different algorithms, we show the average CPU time cost for 10 random experiments. We use , , and to denote the average CPU time cost for Algorithms 2, 3, and ‘s-ttv’, respectively. As calculating with Algorithm 3 requires an additional procedure, in order to determine the computation cost associated with the pre-processing procedure clearly, we use to denote the average CPU time cost for computing the index set based on and for the merge tensor via Procedure 3. Here, q is the average number of non-zero elements of corresponding merge tensors (i.e., the average number of rows of ), and is the average number of different rows for obtained by discarding the first column. The numerical results are presented in Table 5.
Table 5.
Numerical results for computing in Example 6 based on and .
The total number of multiplications is when we calculate using Algorithm 2, while the total number is when we calculate with Algorithm 3. From Table 5, we can see that Algorithm 3 can avoid a significant number of multiplications. Regarding the CPU time cost, Algorithm 3 is faster than Algorithm 2 with different and sparse ratio .
Compared to Algorithm 2, Algorithm 3 takes additional time to find the index set using Procedure 3 before execution. Similar to the analysis in Section 5.3, when we need to compute iteratively, Procedure 3 only needs to be executed once to obtain , following which Algorithm 3 is called repeatedly, thus greatly reducing the computational time.
5.5. Finding the Least Element for the Set Defined by Polynomial Inequalities
In this section, we consider the least element problem for the set defined by polynomial inequalities [32] in which the proposed iterative algorithm for finding the least element of the considered set involves the calculation of . In order to verify the advantage of our Algorithm 2 over ttv and s-ttv in the tensor toolbox [33], we conducted the numerical experiments using Example 5.3 from [32]:
Example 7
(Example 5.3 in [32]). Consider the set , where
with , and with diagonal entries
and off-diagonal entries
while all other entries are 0. Here, we use to denote the floor operation of .
From the definition of the tensor in Example 7, the merge tensor can be easily obtained. We used the algorithm proposed in [32] to find the least element of the set involving the merge tensor , comparing the original algorithm in [32] (which uses ttv in its fixed point iteration process) with the variants in which ttv is replaced by s-ttv and by our Algorithm 2. The numerical results are shown in Table 6, where It represents the number of iterations, and It(fp) represents the number of iterations in the fixed point method. , , and represent the CPU times for the respective algorithms. , , and represent the CPU times for the fixed point methods used in the respective algorithms. Here, is the least element, Val, and the symbol means . It is worth noting that and include the CPU time associated with Procedure 1 for calculating the vector and matrix .
Table 6.
Numerical results of Example 7 for different values of n.
From Table 6, it can be seen that replacing ttv with s-ttv and Algorithm 2 greatly reduced the CPU time of the iterative algorithm due to the sparsity of the example. Moreover, Algorithm 2 always performed better than s-ttv for different values of n, especially when n is relatively small.
5.6. Tensor Complementarity Problems with the Implicit Z-Tensors
To further validate the effectiveness of embedding our Algorithm 2 in iterative algorithms, we consider tensor complementarity problems with the implicit Z-tensors [20] in which the proposed fixed point iterative algorithm involves the calculation of . We conducted numerical experiments using Example 4 in [20]:
Example 8
(Example 4 in [20]). Consider TCP, where with diagonal entries for any and for any , non-diagonal entries
with all other entries being 0; furthermore, with for , for and for .
The tensor in Example 8 is an implicit Z-tensor, and we can easily obtain the merge tensor . We embedded our Algorithm 2 into the fixed point iterative algorithm in [20] and compared it with the algorithm presented in [20], where the calculation of uses the tensor toolbox [33]. The numerical results are shown in Table 7, where represents the initial solution, Iteration represents the number of iterations, and and represent the CPU time for the whole algorithm embedded with our Algorithm 2 and that presented in [20], respectively. Furthermore, Res represents the natural residual.
Table 7.
Numerical results for computing in Example 8.
It can be seen that Algorithm 2 outperformed the algorithm in [20] for different initial solutions and values of n. Algorithm 2 greatly reduced the computational time for the fixed point iteration algorithm, especially when n is larger.
In order to demonstrate more intuitively the difference in calculating between the fixed point iterative algorithm in [20] using the tensor toolbox and that embedded with our Algorithm 2, we took the initial solution as an example and determined the total computational time when using the iterative algorithm with each of the above two techniques, as shown in Figure 2. The bar chart shows the computational time for n equal to and 50, while the line chart shows the difference in computational time between the two.
Figure 2.
Comparison of the iterative algorithm in [20] using different techniques.
From Figure 2, we can see that our Algorithm 2 and the tensor toolbox have similarly short computational times when n is relatively small. The computational time of our algorithm does not increase significantly with increasing n, while that of the tensor toolbox increases rapidly. Therefore, the proposed algorithm has better application value for large-scale problems.
6. Concluding Remarks
Many tensor-related problems involve the calculation of the vector of -degree homogeneous polynomials defined by a tensor . In this study, taking symmetry and sparsity properties into consideration, we proposed efficient algorithms that avoid the large amount of computation inherent to existing approaches. Specifically, utilizing the symmetry of the monomials in the canonical basis of homogeneous polynomials, we proposed a method that calculates using the merge tensor of the involved tensor in place of the original tensor, thus reducing the computational cost. Then, algorithms were designed that combine sparsity to further reduce the computational cost. Moreover, through an analysis of the calculation details, a simplified algorithm that avoids duplicate calculations was further proposed. Finally, the results of extensive numerical experiments verified the effectiveness of the proposed methods.
There are still some aspects worthy of further study. First, although Procedure 2 or Procedure 3 only need to be executed once (or offline) for iterative algorithms, they take a relatively long time; as such, further accelerating the speed of these procedures would be desirable. Second, it is clear that more entries can be merged for sparser tensors, thus making our algorithms faster; therefore, can we determine the range of sparsity that is suitable for our algorithms? Finally, in addition to considering computation time, storage space may also be taken into account. Future investigations may help to address these outstanding questions.
Funding
This research received no external funding.
Data Availability Statement
Data are contained within the article.
Conflicts of Interest
The author declares no conflicts of interest.
References
- Hu, S.; Huang, Z.H.; Ling, C.; Qi, L. On determinants and eigenvalue theory of tensors. J. Symb. Comput. 2013, 50, 508–531. [Google Scholar] [CrossRef]
- Qi, L. Eigenvalues of a real supersymmetric tensor. J. Symb. Comput. 2005, 40, 1302–1324. [Google Scholar] [CrossRef]
- Liu, P.; Liu, G.; Lv, H. Power function method for finding the spectral radius of weakly irreducible nonnegative tensors. Symmetry 2022, 14, 2157. [Google Scholar] [CrossRef]
- Lin, H.; Zheng, L.; Zhou, B. Largest and least H-eigenvalues of symmetric tensors and hypergraphs. Linear Multilinear Algebr. 2025, 1–27. [Google Scholar] [CrossRef]
- Pakmanesh, M.; Afshin, H.; Hajarian, M. Normalized Newton method to solve generalized tensor eigenvalue problems. Numer. Linear Algebra Appl. 2024, 31, e2547. [Google Scholar] [CrossRef]
- Bai, X.L.; He, H.J.; Ling, C.; Zhou, G. A nonnegativity preserving algorithm for multilinear systems with nonsingular M-tensors. Numer. Algor. 2021, 87, 1301–1320. [Google Scholar] [CrossRef]
- Ning, J.; Xie, Y.; Yao, J. Efficient splitting methods for solving tensor absolute value equation. Symmetry 2022, 14, 387. [Google Scholar] [CrossRef]
- Jiang, Z.; Li, J. Solving tensor absolute value equation. Appl. Numer. Math. 2021, 170, 255–268. [Google Scholar] [CrossRef]
- Ding, W.; Wei, Y. Solving multi-linear systems with M-tensors. J. Sci. Comput. 2016, 68, 689–715. [Google Scholar] [CrossRef]
- Han, L. A homotopy method for solving multilinear systems with M-tensors. Appl. Math. Lett. 2017, 69, 49–54. [Google Scholar] [CrossRef]
- He, H.; Ling, C.; Qi, L.; Zhou, G. A globally and quadratically convergent algorithm for solving multilinear systems with M-tensors. J. Sci. Comput. 2018, 76, 1718–1741. [Google Scholar] [CrossRef]
- Li, D.H.; Guan, H.B.; Wang, X.Z. Finding a nonnegative solution to an M-tensor equation. Pac. J. Optim. 2020, 16, 419–440. [Google Scholar]
- Liu, D.; Li, W.; Vong, S.W. The tensor splitting with application to solve multi-linear systems. J. Comput. Appl. Math. 2018, 330, 75–94. [Google Scholar] [CrossRef]
- Xie, Z.J.; Jin, X.Q.; Wei, Y.M. Tensor methods for solving symmetric M-tensor systems. J. Sci. Comput. 2018, 74, 412–425. [Google Scholar] [CrossRef]
- Bai, X.L.; Huang, Z.H.; Wang, Y. Global uniqueness and solvability for tensor complementarity problems. J. Optim. Theory Appl. 2016, 170, 72–84. [Google Scholar] [CrossRef]
- Che, M.; Qi, L.; Wei, Y. Positive-definite tensors to nonlinear complementarity problems. J. Optim. Theory Appl. 2016, 168, 475–487. [Google Scholar] [CrossRef]
- Huang, Z.H.; Qi, L. Formulating an n-person noncooperative game as a tensor complementarity problem. Comput. Optim. Appl. 2017, 66, 557–576. [Google Scholar] [CrossRef]
- Song, Y.; Qi, L. Properties of some classes of structured tensors. J. Optim. Theory Appl. 2015, 165, 854–873. [Google Scholar] [CrossRef]
- Song, Y.; Qi, L. Tensor complementarity problem and semi-positive tensors. J. Optim. Theory Appl. 2016, 169, 1069–1078. [Google Scholar] [CrossRef]
- Huang, Z.H.; Li, Y.F.; Wang, Y. A fixed point iterative method for tensor complementarity problems with the implicit Z-tensors. J. Glob. Optim. 2023, 86, 495–520. [Google Scholar] [CrossRef]
- Jia, Q.; Huang, Z.H.; Wang, Y. Generalized multilinear games and vertical tensor complementarity problems. J. Optim. Theory Appl. 2024, 200, 602–633. [Google Scholar] [CrossRef]
- Huang, Z.H.; Qi, L. Tensor complementarity problems, Part I: Basic theory. J. Optim. Theory Appl. 2019, 183, 1–23. [Google Scholar] [CrossRef]
- Huang, Z.H.; Qi, L. Tensor complementarity problems, Part III: Applications. J. Optim. Theory Appl. 2019, 183, 771–791. [Google Scholar] [CrossRef]
- Qi, L.; Huang, Z.H. Tensor complementarity problems, Part II: Solution methods. J. Optim. Theory Appl. 2019, 183, 365–385. [Google Scholar] [CrossRef]
- Fan, J.; Nie, J.; Zhou, A. Tensor eigenvalue complementarity problems. Math. Program. 2018, 170, 507–539. [Google Scholar] [CrossRef]
- Ling, C.; He, H.; Qi, L. On the cone eigenvalue complementarity problem for higher-order tensors. Comput. Optim. Appl. 2016, 63, 143–168. [Google Scholar] [CrossRef]
- Ling, C.; He, H.; Qi, L. Higher-degree eigenvalue complementarity problems for tensors. Comput. Optim. Appl. 2016, 64, 149–176. [Google Scholar] [CrossRef]
- Song, Y.; Qi, L. Eigenvalue analysis of constrained minimization problem for homogeneous polynomial. J. Glob. Optim. 2016, 64, 563–575. [Google Scholar] [CrossRef]
- Xu, Y.; Huang, Z.H. Pareto eigenvalue inclusion intervals for tensors. J. Ind. Manag. Optim. 2023, 19, 2123–2139. [Google Scholar] [CrossRef]
- Zhang, L.; Chen, C. A Newton-type algorithm for the tensor eigenvalue complementarity problem and some applications. Math. Comput. 2021, 90, 215–231. [Google Scholar] [CrossRef]
- Wang, Y.; Huang, Z.H.; Qi, L. Global uniqueness and solvability of tensor variational inequalities. J. Optim. Theory Appl. 2018, 177, 137–152. [Google Scholar] [CrossRef]
- Huang, Z.H.; Li, Y.F.; Miao, X.H. Finding the least element of a nonnegative solution set of a class of polynomial inequalities. SIAM J. Matrix Anal. Appl. 2023, 44, 530–558. [Google Scholar] [CrossRef]
- Bader, B.W.; Kolda, T.G.; Dunlavy, D.M. Tensor Toolbox for MATLAB, Version 3.6. 28 September 2023. Available online: www.tensortoolbox.org (accessed on 15 February 2025).