Article

Constructing Dixon Matrix for Sparse Polynomial Equations Based on Hybrid and Heuristics Scheme

1 School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin 541004, China
2 School of Mathematics and Computing Science, Guangxi Colleges and Universities Key Laboratory of Data Analysis and Computation, Guilin University of Electronic Technology, Guilin 541004, China
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(6), 1174; https://doi.org/10.3390/sym14061174
Submission received: 8 May 2022 / Revised: 29 May 2022 / Accepted: 1 June 2022 / Published: 7 June 2022

Abstract:
Solving polynomial equations inevitably faces severe challenges, such as excessive storage consumption and prohibitively expensive computation. There has been considerable interest in exploiting sparsity to improve computational efficiency, since asymmetry phenomena are prevalent in scientific and engineering fields and most systems in real applications have sparse representations. In this paper, we propose an efficient parallel hybrid algorithm for constructing a Dixon matrix. The approach takes advantage of the asymmetry (i.e., sparsity) in the variables of the system and introduces a heuristic strategy. Our method supports parallel computation and has been implemented on a multi-core system. Through time-complexity analysis and extensive benchmarks, we show that our new algorithm significantly reduces computation and memory overhead. In addition, a performance evaluation on the Fermat–Torricelli point problem demonstrates its effectiveness for combinatorial geometry optimization.

1. Introduction

As a basic tool in computer algebra and a built-in function of most computer algebra systems, the notion of a resultant is widely used in mathematical theory. The resultant is not only of significance to computer algebra [1], but also plays an important role in biomedicine [2], image processing [3], geographic information [4], satellite trajectory control [5], information security [6] and other scientific and engineering fields [7,8]. The most widely used techniques for solving polynomial systems are the Sylvester and Dixon resultants. For example, applying the Dixon resultant to algebraic attacks can quickly solve multivariate quadratic equations over finite fields. Although these resultant techniques have been extensively studied and improved, they still face severe challenges: they easily exhaust storage space and demand prohibitively expensive computational resources.
Specifically, the successive Sylvester elimination technique has a shortcoming relative to a multivariate resultant: the performance of successive resultant computations is very sensitive to the ordering of the variables. Human intervention is required to determine the most efficient ordering, so these are not automatic methods [9]. An inappropriate choice may cause extreme intermediate expression swell and consequently exhaust memory before completion [10]. In fact, this method becomes very inefficient as the number of variables increases. Moreover, the technique eliminates variables one by one, which is inherently sequential.
In contrast, the Dixon method is in most cases more efficient, since it computes the resultant directly without eliminating variables one at a time. However, it is generally known that the entries of the Dixon matrix are more complicated than those of other resultant matrices. As a promising scheme, the fast recursive algorithm for the Dixon matrix (FRDixon for short) [11,12,13,14,15,16] can greatly improve the efficiency of constructing the Dixon matrix; its effectiveness was recently analyzed in detail and proven in [16]. Nevertheless, the size of the Dixon matrix and the computational complexity of FRDixon explode exponentially as the number of variables increases. If the Dixon matrix is relatively large, the final resultant becomes difficult to compute.
To sum up, these difficulties are hard to overcome whichever resultant is chosen. Notice that conventional resultants resort to general-purpose methods for solving polynomial systems. From another point of view, if we exploit the features of the given systems to design customized algorithms for constructing resultant matrices or computing resultants [10], instead of general-purpose methods, we can expect a targeted method that solves some intractable problems efficiently. Therefore, the main objective of this article is to exploit the sparsity of systems, together with the advantages of the prevailing resultants, to design a scheme suited to the features of different systems.
Symmetry/asymmetry phenomena are prevalent in scientific and engineering fields. In computer algebra, and especially in problems arising from geometry, the variables of real polynomial systems often do not appear uniformly (i.e., they exhibit asymmetry features); examples include the three combinatorial geometry problems posed in [10] for Heron's formula in three and four dimensions [17], the problem of mapping a topographical point to the standard ellipsoid [4], the Fermat–Torricelli point problem on a sphere with a Euclidean metric [18] and the bifurcation points of the logistic map [17,19,20]. In the last decade, there has been considerable interest in employing sparsity to find solutions in various fields, since most systems in real applications have sparse representations; examples include finding sparse and low-rank solutions [21], a parallel GCD algorithm for sparse multivariate polynomials [22], estimating the greatest common divisor (GCD) of noise-corrupted polynomials with sparse coefficients [23] and sparse multivariate interpolation [24].
However, researchers in symbolic computation have mainly considered designing methods that solve an arbitrary given polynomial system and have paid little attention to such sparse scenarios. In fact, studies on sparsity analogous to those in other fields can be carried out in the field of solving polynomial systems. In this paper, we propose a new approach to constructing a Dixon matrix for solving polynomial equations with sparsity. Through time-complexity analysis and extensive benchmarks, we show that our new algorithm significantly reduces computation and memory overhead.

1.1. Contributions

Our method combines the Sylvester resultant with the fast recursive algorithm FRDixon and can be partitioned into two phases. In the first phase, we exploit the sparsity of the system to obtain a smaller polynomial system in fewer variables via the Sylvester resultant at little computational cost. In the second phase, we apply the multivariate algorithm FRDixon. Since the computational complexity of FRDixon depends exponentially on the number of variables, running FRDixon on the exported system, which has fewer variables after Sylvester elimination, is more effective than running FRDixon alone on the original system. This observation is the genesis of our algorithm, which employs the Sylvester resultant and FRDixon together.
The main contributions of this paper are as follows.
  • We take advantage of the sparsity of the system and present a heuristic strategy to determine the most effective elimination ordering and remove part of the variables via the Sylvester resultant.
  • We propose a method to improve the fast recursive algorithm for the Dixon matrix, which reduces computation time and exposes parallelism.
  • We present a hybrid algorithm that combines the two methods above to overcome computational problems arising in successive Sylvester resultant computations and in FRDixon separately. Meanwhile, we apply parallel computation to speed up both elimination processes.
  • We implement our hybrid algorithm and its parallel version in Maple. Through time-complexity analysis and extensive random benchmarks, we show that our new algorithm significantly reduces computation and memory overhead for systems with sparsity. In addition, a performance evaluation on the Fermat–Torricelli point problem on a sphere with a Euclidean metric demonstrates our algorithm's effectiveness on real combinatorial geometry optimization problems.

1.2. Related Work

It is well known that successive Sylvester resultant computations can be used to eliminate variables from a multivariate polynomial system and obtain a smaller polynomial system in fewer variables. Implementations of Sylvester resultants are available in most computer algebra systems. With respect to so-called multivariate resultants, A. L. Dixon [25] proposed a method to eliminate variables from multivariate systems simultaneously, aiming to solve polynomial equation systems by constructing a Dixon matrix and computing its determinant.
Since then, the Sylvester and Dixon approaches have been generalized and improved (see [9,11,12,16,17,20,26,27,28,29,30,31,32,33,34]). In [26], using the Cholesky decomposition, Zhi et al. proposed a method to compute the Sylvester resultant and reduced the time complexity to $O(r(m+n))$, where $m+n$ and $r$ represent the size and numerical rank of the Sylvester matrix, respectively. Kapur et al. [29] extended Dixon's method to the case where the Dixon matrix is singular and successfully proved many non-trivial algebraic and geometric identities. To improve the efficiency of computing the Dixon resultant, several methods came into existence, such as the Unknown-Order-Change method [30], the Fast-Matrix-Construction method [9,31] and the Corner-Cutting method [32]. Zhao et al. [11,12] extended Chionh's algorithm [31] to the general case of $n+1$ polynomial equations in $n$ variables via the $n$-degree Sylvester resultant and proposed the FRDixon algorithm, which first constructed the Dixon matrix of the cyclic-9 equations. In 2017, Qin et al. [16] gave a detailed analysis of the computational complexity of Zhao's recurrence formula and applied parallel computation to speed up the recursive procedure [33,34]. To deal with Dixon matrices whose determinants are too large to compute or factor, heuristic acceleration techniques were proposed to speed up the computation in certain specific cases [17,20].

1.3. Organization

The rest of this article is organized as follows. Section 2 reviews the successive Sylvester resultant computations method and the FRDixon algorithm. In Section 3, a parallel hybrid algorithm which combines the Sylvester resultant and modified FRDixon is developed. Section 4 analyzes the time complexity of our proposed algorithm and conducts a series of numerical experiments. Three sets of random instances and one detailed example are presented to illustrate the application of our method. Finally, a conclusion is reported.

2. Review of Elimination Techniques

In this section, we first review the definition of a Sylvester resultant, which serves as the basis of successive Sylvester resultant computations for solving a system of polynomial equations. Then, we describe the fast recursive algorithm for construction of a Dixon matrix (FRDixon) [11].
All the discussions are stated for a general field $K[X, A]$, where $X = \{x_1, \dots, x_n\}$ denotes the set of variables and $A$ denotes the set of parameters not belonging to $X$. Consider a system of $n + 1$ polynomial equations

$$\mathrm{Sys}(f_1, \dots, f_{n+1}) = \Big\{\, f_j(x_1, \dots, x_n) = \sum_{i_1=0}^{m_{j1}} \cdots \sum_{i_n=0}^{m_{jn}} a_{j,i_1,\dots,i_n}\, x_1^{i_1} \cdots x_n^{i_n},\quad j = 1, \dots, n+1 \,\Big\} \tag{1}$$

in the $n$ variables $x_i$ with coefficients $a_{j,i_1,\dots,i_n} \in K[A]$, where $m_{ji}$ is the degree of the polynomial $f_j(x_1, \dots, x_n)$ with respect to $x_i$. The objective is to construct the resultant matrix of the polynomial equation system (1).

2.1. Elimination via Sylvester Resultant

In this subsection, we introduce the variable-by-variable elimination process based on the Sylvester resultant. The classical Sylvester resultant eliminates one variable from a system of two polynomials. Consider the polynomials $f$ and $g$:

$$f = a_m x^m + a_{m-1} x^{m-1} + \cdots + a_0, \qquad g = b_l x^l + b_{l-1} x^{l-1} + \cdots + b_0,$$

where $m\,(>0)$ and $l\,(>0)$ are the degrees of $f$ and $g$ in $x$, respectively. Recall that the Sylvester matrix of $f$ and $g$ in $x$ is the $(m+l) \times (m+l)$ matrix

$$S = \begin{bmatrix}
a_m & a_{m-1} & \cdots & a_0 & & \\
 & \ddots & & & \ddots & \\
 & & a_m & a_{m-1} & \cdots & a_0 \\
b_l & b_{l-1} & \cdots & b_0 & & \\
 & \ddots & & & \ddots & \\
 & & b_l & b_{l-1} & \cdots & b_0
\end{bmatrix}$$

(with $l$ rows of $a$-coefficients and $m$ rows of $b$-coefficients), and the resultant of $f$ and $g$ in $x$ is defined as the determinant of the matrix $S$.
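As a concrete illustration, the construction above can be sketched in a few lines of Python. The helper names `sylvester_matrix`, `det` and `resultant` are illustrative, not from the paper, and the exact determinant uses fraction-based Gaussian elimination for simplicity rather than efficiency.

```python
from fractions import Fraction

def sylvester_matrix(f, g):
    """Sylvester matrix of f and g in x.

    f and g are coefficient lists from highest to lowest degree:
    f = [a_m, ..., a_0], g = [b_l, ..., b_0].  The matrix stacks
    l shifted copies of f over m shifted copies of g, order (m + l).
    """
    m, l = len(f) - 1, len(g) - 1
    rows = [[0] * i + f + [0] * (l - 1 - i) for i in range(l)]
    rows += [[0] * i + g + [0] * (m - 1 - i) for i in range(m)]
    return rows

def det(matrix):
    """Exact determinant via fraction-based Gaussian elimination."""
    a = [[Fraction(x) for x in row] for row in matrix]
    n, sign, d = len(a), 1, Fraction(1)
    for c in range(n):
        p = next((r for r in range(c, n) if a[r][c] != 0), None)
        if p is None:
            return Fraction(0)          # a zero column: determinant is 0
        if p != c:
            a[c], a[p] = a[p], a[c]     # partial pivot row swap
            sign = -sign
        d *= a[c][c]
        for r in range(c + 1, n):
            factor = a[r][c] / a[c][c]
            for k in range(c, n):
                a[r][k] -= factor * a[c][k]
    return sign * d

def resultant(f, g):
    """res(f, g, x): the determinant of the Sylvester matrix."""
    return det(sylvester_matrix(f, g))
```

For instance, `resultant([1, 0, -1], [1, -2])`, i.e. $\mathrm{res}(x^2 - 1,\ x - 2,\ x)$, equals $g(1) \cdot g(-1) = 3$, while the resultant of two polynomials sharing a root is 0.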
Let $\mathrm{res}(f_i, f_j, x_k)$ denote the Sylvester resultant of the polynomials $f_i$ and $f_j$ in $x_k$. By computing the Sylvester resultant of $f_1$ with each of the other polynomials $f_2, \dots, f_{n+1}$ with respect to $x_1$, we obtain the system $\mathrm{Sys}(x_2, \dots, x_n)$ denoted by

$$f_{1,2} = \mathrm{res}(f_1, f_2, x_1), \quad f_{1,3} = \mathrm{res}(f_1, f_3, x_1), \quad \dots, \quad f_{1,n+1} = \mathrm{res}(f_1, f_{n+1}, x_1),$$

which contains $n$ equations in the $n-1$ variables $x_2, \dots, x_n$; i.e., the variable $x_1$ has been removed from the original system (1). This procedure may be repeated until we have determined the sequence of polynomial systems $\mathrm{Sys}(x_3, \dots, x_n)$, $\mathrm{Sys}(x_4, \dots, x_n)$, $\dots$, $\mathrm{Sys}(x_n)$. Obviously, the above elimination procedure yields the final resultant $\mathrm{Sys}(x_n)$ for system (1).
Similarly, we can eliminate $x_1, \dots, x_{i-1}, x_{i+1}, \dots, x_n$ in any order by successive Sylvester resultant computations and finally obtain a resultant in the single variable $x_i$ $(i \in \{1, \dots, n\})$. In the worst case, this method requires $O(n^2)$ Sylvester resultant computations.

2.2. Fast Recursive Algorithm of the Dixon Matrix (FRDixon)

We now give the key technique of the fast recursive algorithm for constructing the Dixon matrix in [11]. To avoid the computation of polynomial division, the technique of truncated formal power series (see [31]) is employed in the FRDixon algorithm.
Let $a_i(y)$ $(i = 0, 1, \dots, n)$ be polynomials in $y$ such that $\sum_{i=0}^{n} a_i(y)\, x^i$ vanishes when $x = y$. Then the quotient $\big(\sum_{i=0}^{n} a_i(y)\, x^i\big)/(x - y)$ is a polynomial in $x$ and $y$. We expand $\frac{1}{x - y}$ as the formal power series $\sum_{u=1}^{\infty} x^{-u} y^{u-1}$. Hence,

$$\frac{\sum_{i=0}^{n} a_i(y)\, x^i}{x - y} = \sum_{i=0}^{n} a_i(y)\, x^i \sum_{u=1}^{i} x^{-u} y^{u-1} + \sum_{i=0}^{n} a_i(y)\, x^i \sum_{u=i+1}^{\infty} x^{-u} y^{u-1}, \tag{2}$$

where $\sum_{u=1}^{0} x^{-u} y^{u-1}$ is taken to be zero. Since the powers of $x$ in the second term are all negative, the left side of Equation (2) is a polynomial if, and only if, the second term on the right side of (2) equals zero. Hence, the identity

$$\frac{\sum_{i=0}^{n} a_i(y)\, x^i}{x - y} = \sum_{i=1}^{n} a_i(y) \sum_{k=1}^{i} x^{i-k} y^{k-1} \tag{3}$$

holds.
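Identity (3) can be spot-checked numerically. The sketch below uses an assumed toy polynomial $f(x, y) = (x - y)(x^2 + 3xy + y + 5)$, which vanishes at $x = y$ by construction; the lambdas `a[i]` are its coefficients $a_i(y)$ as a polynomial in $x$.

```python
def quotient_via_series(a, x, y):
    """Right-hand side of (3): sum_{i>=1} a_i(y) * sum_{k=1}^{i} x^(i-k) y^(k-1)."""
    n = len(a) - 1
    return sum(a[i](y) * sum(x**(i - k) * y**(k - 1) for k in range(1, i + 1))
               for i in range(1, n + 1))

# Coefficients a_i(y) of f(x, y) = (x - y) * (x**2 + 3*x*y + y + 5),
# which vanishes at x = y by construction (an assumed toy example).
a = [lambda y: -y**2 - 5*y,      # a_0(y)
     lambda y: 5 + y - 3*y**2,   # a_1(y)
     lambda y: 2*y,              # a_2(y)
     lambda y: 1]                # a_3(y)

# Check f(x, y) == (x - y) * quotient at several sample points.
for x, y in [(2, 5), (-1, 3), (7, -4)]:
    f = sum(a[i](y) * x**i for i in range(4))
    assert f == (x - y) * quotient_via_series(a, x, y)
```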
The FRDixon algorithm is based on the following ideas: employ the technique of truncated formal power series to reduce the Dixon matrix construction problem to a set of sub-Dixon matrix construction problems with fewer variables, and use the Sylvester resultant matrix and the Dixon matrix with $k-1$ $(3 \le k \le n)$ variables to represent the Dixon matrix with $k$ variables via block matrix computation. A recursive process for constructing the Dixon matrix is then obtained. For more details, one can refer to [11,12,16].

3. A Hybrid Algorithm for Constructing a Dixon Matrix

In this section, we propose a new method to construct the Dixon matrix. Our scheme is a parallel hybrid approach utilizing both the Sylvester resultant and the modified FRDixon algorithm, making it applicable both to random polynomial systems and to problems arising in practice.
The time complexity of our algorithm for constructing the Dixon matrix of $\mathrm{Sys}(f_1, \dots, f_{n+1})$ defined by (1) is $O\big(\bar m^3 (n^2 - t^2 + n + t) + \tilde m_1^2 (t!)^3 \prod_{i=2}^{t} \tilde m_i^3\big)$ for numerical determinants, or $O\big(\bar m!\, (n^2 - t^2 + n + t) + \tilde m_1^2 (t!)^3 \prod_{i=2}^{t} \tilde m_i^3\big)$ for symbolic determinants, where $\bar m$ is the degree bound of the polynomials, $\tilde m_i$ denotes the maximum degree of $x_i$ over all polynomials $f_1, \dots, f_{n+1}$, $n$ denotes the number of variables and $t$ denotes the number of variables remaining after the Sylvester resultant eliminates some variables. See Section 4.1 for the proof. Consequently, our method is more efficient than existing methods for solving sparse polynomial systems.
Our method can be partitioned into two phases. In the first phase, we eliminate part of variables by Sylvester resultant from the original system, and then in the second phase, we construct the resultant matrix of the system derived from the first phase by a variant version of FRDixon.

3.1. Sylvester Elimination by Heuristic Strategy

A successive Sylvester elimination technique requires $n-1$ resultant computations to remove a variable from a non-sparse system. If the sparsity condition is satisfied, we can obtain a smaller system at minimal computational cost.
In this subsection, we describe a heuristic strategy to determine which variable should be eliminated first. According to the degrees of this variable in each polynomial of the system, we can give the optimal combinations of polynomials for removing it via the Sylvester resultant.
If some variables appear in only a few equations of system (1), the cost of eliminating them from (1) is relatively small. In such a case, our approach of deciding the elimination ordering and then removing the variables via the Sylvester resultant in the first phase is fairly straightforward.
Let $d_{ji}$ indicate whether the variable $x_i$ appears in the polynomial $f_j$; that is,

$$d_{ji} = \begin{cases} 1 & \text{if } x_i \text{ appears in } f_j, \\ 0 & \text{otherwise.} \end{cases}$$

For each $x_i$, we compute the sum of the $d_{ji}$ over $j = 1, \dots, n+1$, denoted by

$$e_i = \sum_{j=1}^{n+1} d_{ji}, \qquad i = 1, 2, \dots, n. \tag{4}$$

Let $x_k$ denote the first variable to be eliminated from system (1). Then, for $j = 1, \dots, k-1, k+1, \dots, n$, it satisfies

$$(e_k < e_j) \quad \text{or} \quad (e_k = e_j \ \text{and} \ m_k \le m_j), \tag{5}$$
where $m_i$ denotes the maximum degree of $x_i$ over all polynomials $f_1, \dots, f_{n+1}$. Once $x_k$ is chosen, we adjust the ordering of $f_1, \dots, f_{n+1}$. Let $\mathrm{ord}(f_j)$ $(j = 1, 2, \dots, n+1)$ be the position of $f_j$ after rearrangement. We require that if $m_{jk} < m_{ik}$, then $\mathrm{ord}(f_j) < \mathrm{ord}(f_i)$; that is, the polynomials are rearranged according to their degrees $m_{jk}$ in $x_k$. We denote the rearranged system by $\mathrm{Sys}(p_1, \dots, p_{n+1})$.
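The selection rule (5) and the subsequent rearrangement can be sketched as follows. The degree-profile dictionaries mirror Example 1 below, and the representation (variable mapped to its maximum degree) is an assumption made for illustration.

```python
def choose_variable(polys, variables):
    """Heuristic (5): pick the variable contained in the fewest
    polynomials (e_i); break ties by smaller maximum degree (m_i)."""
    def key(v):
        e = sum(1 for p in polys.values() if v in p)   # e_i from (4)
        m = max(p.get(v, 0) for p in polys.values())   # m_i
        return (e, m)
    return min(variables, key=key)

# Degree profiles (variable -> maximum degree) of f1..f4 in Example 1.
polys = {
    "f1": {"x1": 2, "x2": 1, "x3": 2},
    "f2": {"x2": 2, "x3": 2},
    "f3": {"x1": 1, "x2": 2, "x3": 2},
    "f4": {"x2": 2, "x3": 2},
}

xk = choose_variable(polys, ["x1", "x2", "x3"])
# Rearrange by ascending degree in x_k, so x_k-free polynomials lead.
order = sorted(polys, key=lambda f: polys[f].get(xk, 0))
assert xk == "x1" and order == ["f2", "f4", "f3", "f1"]
```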
From now on, the technique of successive Sylvester resultant computations operates on the newly adjusted system $\mathrm{Sys}(p_1, \dots, p_{n+1})$. Assume the first $l$ polynomials do not contain the variable $x_k$. In that case, we only need to eliminate $x_k$ from the last $(n+1-l)$ polynomials $p_{l+1}, \dots, p_{n+1}$ by computing the resultants $\mathrm{res}(p_{l+1}, p_{l+2}, x_k), \dots, \mathrm{res}(p_{l+1}, p_{n+1}, x_k)$. Let

$$p_{l+1,j} = \mathrm{res}(p_{l+1}, p_j, x_k), \qquad l+2 \le j \le n+1.$$

Then, a new polynomial system

$$\{\, p_1, \dots, p_l,\ p_{l+1,l+2}, \dots, p_{l+1,n+1} \,\} \tag{6}$$

with $n-1$ variables is obtained.
We can proceed to eliminate a variable from (6) in a way similar to that shown above. The first phase of this approach is illustrated in Example 1.
Example 1.
Given a system
$$\begin{aligned}
f_1 &= 3x_3^2 - x_1 x_2 x_3 + x_1^2 x_2 + 9 x_2 x_3, \\
f_2 &= x_2 x_3^2 - 3 x_2^2 x_3 + 5 x_2 + 11, \\
f_3 &= x_2^2 x_3 - 7 x_1 x_2 x_3^2 + 2 x_2 - 9, \\
f_4 &= x_2 x_3^2 - x_2 x_3 - x_2^2 + x_3,
\end{aligned}$$
we want to eliminate a variable at the least computational cost. For each variable $x_i$, count the number of polynomials of $\mathrm{Sys}(f_1, \dots, f_4)$ in which $x_i$ appears, according to (4). The minimum of $\{e_1, e_2, e_3\}$ is 2, attained by $x_1$. Therefore, $x_1$ is chosen to be eliminated first, and $\{f_1, f_2, f_3, f_4\}$ is rearranged by $m_{j1}$ $(j = 1, \dots, 4)$. The rearranged system is
{ p 1 = f 2 , p 2 = f 4 , p 3 = f 3 , p 4 = f 1 } .
Since $x_1$ does not appear in $p_1$ or $p_2$, we eliminate $x_1$ by computing $\mathrm{res}(p_3, p_4, x_1)$, obtaining the following system of 3 equations in 2 variables:
$$\begin{aligned}
p_1 = f_2 &= x_2 x_3^2 - 3 x_2^2 x_3 + 5 x_2 + 11, \\
p_2 = f_4 &= x_2 x_3^2 - x_2 x_3 - x_2^2 + x_3, \\
p_{3,4} &= x_2^5 x_3^2 + 7 x_2^4 x_3^2 + 2 x_2^3 x_3^3 - 4 x_2^4 x_3 + 7 x_2^2 x_3^3 + x_2 x_3^4 + 445 x_2^3 x_3 \\
&\quad - 151 x_2^2 x_3^2 + 4 x_2^3 + 63 x_2^2 x_3 + 18 x_2 x_3^2 - 36 x_2^2 + 81 x_2.
\end{aligned}$$
Note that if $x_3$ were chosen to be eliminated first instead of $x_1$, a much larger system would result:
$$\begin{aligned}
f_{1,2} = \mathrm{res}(f_1, f_2, x_3) &= x_1^4 x_2^4 - 3 x_1^2 x_2^5 - x_1^3 x_2^3 - 13 x_1^2 x_2^4 + 54 x_1 x_2^5 + 50 x_1^2 x_2^3 - 90 x_1 x_2^4 \\
&\quad - 243 x_2^5 + 66 x_1^2 x_2^2 - 207 x_1 x_2^3 + 486 x_2^4 - 3 x_1^2 x_2 + 15 x_1 x_2^2 \\
&\quad + 702 x_2^3 + 33 x_1 x_2 - 504 x_2^2 + 693 x_2 + 1089, \\
f_{1,3} = \mathrm{res}(f_1, f_3, x_3) &= x_1^3 x_2^4 - 3 x_1^2 x_2^5 + x_1^4 x_2^2 + 7 x_1^3 x_2^3 - 30 x_1^2 x_2^4 + 42 x_1^3 x_2^2 - 128 x_1^2 x_2^3 \\
&\quad + 195 x_1 x_2^4 + 438 x_1^2 x_2^2 + 576 x_1 x_2^3 - 54 x_2^4 + 54 x_1^2 x_2 - 414 x_1 x_2^2 \\
&\quad + 81 x_2^3 + 1134 x_1 x_2 + 765 x_2^2 - 324 x_2 + 729, \\
f_{1,4} = \mathrm{res}(f_1, f_4, x_3) &= x_2 \big( x_1^4 x_2^3 + x_1^3 x_2^3 + x_1^2 x_2^4 - x_1^3 x_2^2 - 3 x_1^2 x_2^3 - 18 x_1 x_2^4 + 6 x_1^2 x_2^2 \\
&\quad - 3 x_1 x_2^3 + 81 x_2^4 + 6 x_1^2 x_2 + 3 x_1 x_2^2 + 36 x_2^3 - 3 x_1^2 - 27 x_2^2 \big).
\end{aligned}$$
Remark 1.
An attractive feature of the hybrid algorithm is that it exploits the polynomial sparsity of the system. Hence, our overall algorithm is sensitive to the sparsity of the variables appearing in the original representation of a given system.

3.2. Construction of Dixon Matrix by Improved FRDixon

At this stage, assume that $n - t$ variables have been eliminated via the successive Sylvester elimination technique with the heuristic strategy. The derived system is denoted by $\mathrm{Sys}(q_1, \dots, q_{t+1})$,

$$\mathrm{Sys}(q_1, \dots, q_{t+1}) = \Big\{\, q_j(\tilde x_1, \dots, \tilde x_t) = \sum_{i_1=0}^{\tilde m_{j1}} \cdots \sum_{i_t=0}^{\tilde m_{jt}} \tilde a_{j,i_1,\dots,i_t}\, \tilde x_1^{i_1} \cdots \tilde x_t^{i_t},\quad j = 1, \dots, t+1 \,\Big\}, \tag{7}$$

where $\tilde m_{ji}$ denotes the degree of the variable $\tilde x_i$ in $q_j$, and $\tilde a_{j,i_1,\dots,i_t}$ denotes the coefficient of the corresponding monomial.
The rest of the work constructs a Dixon matrix using an improved FRDixon algorithm. In our new version, we replace the product of the Sylvester matrix $S_i$ $(i = 0, \dots, \tilde m_1 - 1)$ and the block matrix $F_j$ $(j = 0, \dots, t\tilde m_1 - 1)$ with a sum of products of a set of smaller matrices, which reduces the computation time and exposes parallelism.
Specifically, $S_i$ is the block matrix

$$S_i = \begin{bmatrix}
\tilde a_{1,i,0,\dots,0} & \cdots & \tilde a_{t+1,i,0,\dots,0} & & & \\
\tilde a_{1,i,0,\dots,1} & \cdots & \tilde a_{t+1,i,0,\dots,1} & \tilde a_{1,i,0,\dots,0} & \cdots & \tilde a_{t+1,i,0,\dots,0} \\
\vdots & & \vdots & \vdots & \ddots & \vdots \\
\tilde a_{1,i,\tilde m_2,\dots,\tilde m_t} & \cdots & \tilde a_{t+1,i,\tilde m_2,\dots,\tilde m_t} & \tilde a_{1,i,\tilde m_2,\dots,\tilde m_t - 1} & \cdots & \tilde a_{t+1,i,\tilde m_2,\dots,\tilde m_t - 1} \\
 & & & \tilde a_{1,i,\tilde m_2,\dots,\tilde m_t} & \cdots & \tilde a_{t+1,i,\tilde m_2,\dots,\tilde m_t}
\end{bmatrix},$$

where $\tilde a_{j,i,i_2,\dots,i_t}$ is extracted from the coefficients of the polynomials $q_1, \dots, q_{t+1}$. The order of $S_i$ is $t! \prod_{l=2}^{t} \tilde m_l \times (t+1)(t-1)! \prod_{l=2}^{t} \tilde m_l$.
The block matrix $F_j$ is constructed as

$$F_j = \begin{bmatrix}
DD_{1,j+1}(1,:) \\
\vdots \\
DD_{t+1,j+1}(1,:) \\
\vdots \\
DD_{1,j+1}\big((t-1)!\, \prod_{l=2}^{t} \tilde m_l,\, :\big) \\
\vdots \\
DD_{t+1,j+1}\big((t-1)!\, \prod_{l=2}^{t} \tilde m_l,\, :\big)
\end{bmatrix},$$

where $DD_{k,j+1}(l,:)$ denotes the $l$-th row of $DD_{k,j+1}$ defined by (14). The order of $F_j$ is $(t+1)(t-1)! \prod_{l=2}^{t} \tilde m_l \times (t-1)! \prod_{l=2}^{t} \tilde m_l$.
In the original version, $S_i$ and $F_j$ are computed separately. The improved algorithm expresses $S_i \cdot F_j$ as a sum of products $P_{k,i} \cdot DD_{k,j+1}$; that is,

$$S_i \cdot F_j = \sum_{k=1}^{t+1} P_{k,i} \cdot DD_{k,j+1}, \qquad 0 \le i \le \tilde m_1 - 1, \quad 0 \le j \le t\tilde m_1 - 1, \tag{8}$$

where $P_{k,i}$ is the $t! \prod_{l=2}^{t} \tilde m_l \times (t-1)! \prod_{l=2}^{t} \tilde m_l$ matrix defined by (13).
From the theoretical analysis of the improved FRDixon algorithm in Theorem 1, the key advantages of this matrix decomposition are as follows.
  • There is no need to construct the matrices $F_j$ $(j = 0, \dots, t\tilde m_1 - 1)$ explicitly. This is in contrast to the original FRDixon, which computes each $F_j$ from the $DD_{k,j+1}$ one by one.
  • Compared with $S_i$, the smaller matrices $P_{k,i}$ can be computed independently and, consequently, in parallel.
  • The decomposition (8) reduces the computation time.
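The decomposition (8) is, in essence, a block matrix product: $S_i$ gathers the blocks $P_{k,i}$ side by side while $F_j$ stacks the blocks $DD_{k,j+1}$. Ignoring the row/column interleaving of the paper's exact layout, the identity can be checked on assumed toy $2 \times 2$ blocks:

```python
def matmul(A, B):
    """Plain list-of-lists matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def matadd(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

# Toy blocks: t + 1 = 3 blocks, each 2 x 2 (assumed small sizes; the
# paper's blocks have orders involving t! and the degree products).
P = [[[1, 2], [3, 4]], [[0, 1], [1, 0]], [[2, 0], [0, 2]]]
DD = [[[1, 0], [0, 1]], [[2, 1], [1, 2]], [[1, 1], [1, 1]]]

# S: the P-blocks placed side by side; F: the DD-blocks stacked.
S = [sum((P[k][r] for k in range(3)), []) for r in range(2)]
F = [row for k in range(3) for row in DD[k]]

lhs = matmul(S, F)                      # full product S . F
rhs = [[0, 0], [0, 0]]
for k in range(3):                      # sum of smaller products (8)
    rhs = matadd(rhs, matmul(P[k], DD[k]))
assert lhs == rhs
```

Each summand `matmul(P[k], DD[k])` touches only one pair of blocks, which is what makes the terms of (8) independent and parallelizable.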
Based on the above analysis, we now give a description of Algorithm 1.
Algorithm 1: Improved FRDixon algorithm.
Input:     Sys ( q 1 , , q t + 1 ) : multivariate polynomial system with t + 1
      equations and t variables x ˜ 1 , , x ˜ t .
Output:   Dixon ( q 1 , , q t + 1 ) : Dixon matrix of Sys ( q 1 , , q t + 1 ) .
Step 1. (Decompose Dixon polynomial δ ( q 1 , , q t + 1 ) into a set of sub-Dixon polynomials.)
By introducing the new variables $\bar x_1, \dots, \bar x_t$ into $q_j(\tilde x_1, \dots, \tilde x_t)$, we form the Dixon polynomial $\delta(q_1, \dots, q_{t+1})$ defined as

$$\delta(q_1, \dots, q_{t+1}) = \prod_{i=1}^{t} \frac{1}{\bar x_i - \tilde x_i}
\begin{vmatrix}
q_1(\tilde x_1, \dots, \tilde x_t) & q_2(\tilde x_1, \dots, \tilde x_t) & \cdots & q_{t+1}(\tilde x_1, \dots, \tilde x_t) \\
q_1(\bar x_1, \tilde x_2, \dots, \tilde x_t) & q_2(\bar x_1, \tilde x_2, \dots, \tilde x_t) & \cdots & q_{t+1}(\bar x_1, \tilde x_2, \dots, \tilde x_t) \\
\vdots & \vdots & & \vdots \\
q_1(\bar x_1, \dots, \bar x_t) & q_2(\bar x_1, \dots, \bar x_t) & \cdots & q_{t+1}(\bar x_1, \dots, \bar x_t)
\end{vmatrix}. \tag{9}$$
Comparing with the left side of Equation (3), we see that the Dixon polynomial $\delta(q_1, \dots, q_{t+1})$ has a structure identical to (3). Hence, we can use the technique of truncated formal power series to split (9) into several sub-Dixon polynomials:

$$\begin{aligned}
\delta(q_1, \dots, q_{t+1}) = \sum_{u_1=0}^{t\tilde m_1 - 1} \sum_{i=u_1+1}^{t\tilde m_1} \tilde x_1^{\,i-1-u_1} \bar x_1^{\,u_1} \Big( & (-1)^0\, q_1 \!\!\sum_{i_1+\dots+i_t=i}\!\! \delta(q_{2,i_1}, q_{3,i_2}, \dots, q_{t+1,i_t}) \\
+\, & (-1)^1\, q_2 \!\!\sum_{i_1+\dots+i_t=i}\!\! \delta(q_{1,i_1}, q_{3,i_2}, \dots, q_{t+1,i_t}) + \cdots \\
+\, & (-1)^t\, q_{t+1} \!\!\sum_{i_1+\dots+i_t=i}\!\! \delta(q_{1,i_1}, q_{2,i_2}, \dots, q_{t,i_t}) \Big), \tag{10}
\end{aligned}$$
where $\tilde m_i = \max\{\tilde m_{ji},\ j = 1, \dots, t+1\}$ for $i = 1, \dots, t$. If the terms of $q_j$ are collected with respect to $\tilde x_k$, then $q_j$ can be written as a univariate polynomial in $\tilde x_k$,

$$q_j = \sum_{i=0}^{\tilde m_{jk}} q_{j,i}(\tilde x_1, \dots, \tilde x_{k-1}, \tilde x_{k+1}, \dots, \tilde x_t)\, \tilde x_k^{\,i},$$

where $q_{j,i}(\tilde x_1, \dots, \tilde x_{k-1}, \tilde x_{k+1}, \dots, \tilde x_t)$ is the coefficient polynomial from $K[\tilde x_1, \dots, \tilde x_{k-1}, \tilde x_{k+1}, \dots, \tilde x_t]$ of $\tilde x_k^{\,i}$. This procedure reduces the original Dixon polynomial $\delta(q_1, \dots, q_{t+1})$ to several sub-Dixon polynomials $\delta(q_{1,i_1}, \dots, q_{k-1,i_{k-1}}, q_{k+1,i_k}, \dots, q_{t+1,i_t})$ with fewer variables and fewer polynomials.
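The coefficient collection in this step can be sketched with a dictionary encoding of polynomials (exponent tuple mapped to coefficient, an assumed representation for illustration):

```python
def collect(poly, k):
    """Write a multivariate polynomial (dict: exponent tuple -> coeff)
    as a univariate polynomial in variable position k: the result maps
    each power of x_k to the coefficient polynomial in the others."""
    out = {}
    for exps, c in poly.items():
        i = exps[k]                       # power of the chosen variable
        rest = exps[:k] + exps[k + 1:]    # remaining exponents
        bucket = out.setdefault(i, {})
        bucket[rest] = bucket.get(rest, 0) + c
    return out

# q = x1^2*x2 + 3*x1*x2 - x2^2 + 5, variables ordered (x1, x2).
q = {(2, 1): 1, (1, 1): 3, (0, 2): -1, (0, 0): 5}
by_x1 = collect(q, 0)
# q = (x2) * x1^2 + (3*x2) * x1 + (-x2^2 + 5) * x1^0
assert by_x1 == {2: {(1,): 1}, 1: {(1,): 3}, 0: {(2,): -1, (0,): 5}}
```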
Step 2. (Express the Dixon polynomial in terms of Dixon matrix.)
Deduce the recursive formula for the Dixon matrix and express the sub-Dixon polynomials in terms of Dixon matrices:

$$\sum_{i_1+\dots+i_t=i} \delta(q_{1,i_1}, \dots, q_{k-1,i_{k-1}}, q_{k+1,i_k}, \dots, q_{t+1,i_t}) = \Big[1, \dots, \prod_{l=2}^{t} \tilde x_l^{(l-1)\tilde m_l - 1}\Big] \times \sum_{i_1+\dots+i_t=i} \mathrm{Dixon}(q_{1,i_1}, \dots, q_{k-1,i_{k-1}}, q_{k+1,i_k}, \dots, q_{t+1,i_t}) \times \Big[1, \dots, \prod_{l=2}^{t} \tilde x_l^{(t-l+1)\tilde m_l - 1}\Big]^T, \tag{11}$$

for $k = 1, \dots, t+1$ and $i = 1, \dots, t\tilde m_1$. Substituting (11) into (10), we obtain the Dixon matrix representation of (10) as follows:

$$\begin{aligned}
\delta(q_1, \dots, q_{t+1}) = \sum_{u_1=0}^{t\tilde m_1 - 1} \sum_{i=u_1+1}^{t\tilde m_1} \tilde x_1^{\,i-1-u_1} \bar x_1^{\,u_1} \Big( & q_1 \cdot \Big[1, \dots, \prod_{l=2}^{t} \tilde x_l^{(l-1)\tilde m_l - 1}\Big] \cdot \!\!\sum_{i_1+\dots+i_t=i}\!\! \mathrm{Dixon}(q_{2,i_1}, \dots, q_{t+1,i_t}) + \cdots \\
+\, & (-1)^t\, q_{t+1} \cdot \Big[1, \dots, \prod_{l=2}^{t} \tilde x_l^{(l-1)\tilde m_l - 1}\Big] \cdot \!\!\sum_{i_1+\dots+i_t=i}\!\! \mathrm{Dixon}(q_{1,i_1}, \dots, q_{t,i_t}) \Big) \\
& \times \Big[1, \dots, \prod_{l=2}^{t} \tilde x_l^{(t-l+1)\tilde m_l - 1}\Big]^T. \tag{12}
\end{aligned}$$
Step 3. (Construct the matrix P k , i .)
Extract the coefficients of $q_1, \dots, q_{t+1}$ to construct the matrix

$$P_{k,i} = \begin{bmatrix}
\tilde a_{k,i,0,\dots,0} & & \\
\tilde a_{k,i,0,\dots,1} & \tilde a_{k,i,0,\dots,0} & \\
\vdots & \vdots & \ddots \\
\tilde a_{k,i,\tilde m_2,\dots,\tilde m_t} & \tilde a_{k,i,\tilde m_2,\dots,\tilde m_t - 1} & \cdots \\
 & \tilde a_{k,i,\tilde m_2,\dots,\tilde m_t} & \cdots
\end{bmatrix}, \tag{13}$$

where $k = 1, \dots, t+1$ and $i = 0, \dots, \tilde m_1 - 1$. From (13), it is easy to see that $P_{k,i}$ is a block matrix. The order of $P_{k,i}$ is $t! \prod_{l=2}^{t} \tilde m_l \times (t-1)! \prod_{l=2}^{t} \tilde m_l$.
Step 4. (Construct the matrix D D k , j + 1 .)
Compute the sum of the sub-Dixon matrices corresponding to the sub-Dixon polynomials in (12), denoted by

$$DD_{k,j+1} = \sum_{i_1+\dots+i_t=j+1} \mathrm{Dixon}(q_{1,i_1}, \dots, q_{k-1,i_{k-1}}, q_{k+1,i_k}, \dots, q_{t+1,i_t}), \tag{14}$$

where $k = 1, \dots, t+1$ and $j = 0, \dots, t\tilde m_1 - 1$. The order of $DD_{k,j+1}$ is $(t-1)! \prod_{l=2}^{t} \tilde m_l \times (t-1)! \prod_{l=2}^{t} \tilde m_l$.
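The index tuples $i_1 + \dots + i_t = j + 1$ in (14) are weak compositions. A small generator sketch (illustrative only; it ignores the degree bounds $0 \le i_l \le \tilde m_l$ that restrict which summands actually occur) is:

```python
from math import comb

def compositions(s, t):
    """Yield all tuples (i_1, ..., i_t) of nonnegative integers with
    i_1 + ... + i_t = s, the index set of the sum in (14)."""
    if t == 1:
        yield (s,)
        return
    for head in range(s + 1):
        for tail in compositions(s - head, t - 1):
            yield (head,) + tail

parts = list(compositions(3, 3))
# There are C(s + t - 1, t - 1) weak compositions of s into t parts.
assert len(parts) == comb(3 + 3 - 1, 3 - 1)   # C(5, 2) = 10 tuples
assert all(sum(p) == 3 for p in parts)
```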
Step 5. (Compute S i · F j .)
From (13) and (14), compute the product of S i · F j by (8).
Step 6. (Construct Dixon ( q 1 , , q t + 1 ) using S i · F j .)
From the evaluations of $S_i \cdot F_j$ for $0 \le i \le \tilde m_1 - 1$ and $0 \le j \le t\tilde m_1 - 1$, construct the Dixon matrix

$$\mathrm{Dixon}(q_1, \dots, q_{t+1}) = \begin{bmatrix}
D_{0,0} & \cdots & D_{0,t\tilde m_1 - 1} \\
\vdots & & \vdots \\
D_{\tilde m_1 - 1,0} & \cdots & D_{\tilde m_1 - 1,t\tilde m_1 - 1}
\end{bmatrix} = \begin{bmatrix}
S_0 & & \\
\vdots & \ddots & \\
S_{\tilde m_1 - 1} & \cdots & S_0
\end{bmatrix} \cdot \begin{bmatrix}
F_0 & F_1 & \cdots & F_{t\tilde m_1 - 1} \\
F_1 & F_2 & \iddots & \\
\vdots & \iddots & & \\
F_{\tilde m_1 - 1} & \cdots & F_{t\tilde m_1 - 1} &
\end{bmatrix}, \tag{15}$$

where

$$D_{i,j} = \sum_{k=0}^{\min\{i,\ t\tilde m_1 - 1 - j\}} S_{i-k} \cdot F_{j+k}$$

is a block matrix of order $t! \prod_{l=2}^{t} \tilde m_l \times (t-1)! \prod_{l=2}^{t} \tilde m_l$.

3.3. The Parallel Hybrid Algorithm

In Section 3.1 and Section 3.2, we describe the two phases involved in our hybrid algorithm. Now, the overall algorithm is presented.
Remark 2.
Algorithm 2 as presented corresponds to our sequential implementation. Further parallelism is available. In particular,
  • In step 2, once $x_k$ is chosen for elimination, the computations of $p_{l+1,l+2}, \dots, p_{l+1,n+1}$ are independent. Hence, the resultants $\mathrm{res}(p_{l+1}, p_j, x_k)$ can be obtained in parallel.
  • In steps 3 and 4, each $P_{k,i}$ and each $DD_{k,j+1}$ can be obtained independently. Hence, the computations of $P_{k,i}$ and $DD_{k,j+1}$ can be carried out in parallel.
  • In step 5, once $P_{k,i}$ and $DD_{k,j+1}$ are available, $D_{i,j}$ can be computed immediately. Hence, the initialization of the $D_{i,j}$ can be performed in parallel.
  • In step 6, the recursive operation along each anti-diagonal of $\mathrm{Dixon}(q_1, \dots, q_{t+1})$ can also be performed in parallel.
Algorithm 2: Hybrid algorithm.
Input:    Sys ( f 1 , , f n + 1 ) : multivariate polynomial system with n + 1
           equations and n variables x 1 , , x n over K [ X , A ] .
Output:   Dixon ( f 1 , , f n + 1 ) : Dixon matrix of Sys ( f 1 , , f n + 1 ) .
Step 1. (Select variable x k to be eliminated from Sys ( f 1 , , f n + 1 ) by applying heuristic scheme.)
Select the variable $x_k$ to be eliminated according to (5). Then, rearrange the polynomials $f_j$ $(j = 1, \dots, n+1)$ in terms of their degrees in $x_k$. Denote the rearranged polynomial system by $\mathrm{Sys}(p_1, \dots, p_{n+1})$.
Step 2. (Eliminate x k from Sys ( p 1 , , p n + 1 ) .)
Assume the polynomials $p_1, \dots, p_l$ do not contain the variable $x_k$. Eliminate $x_k$ from $\{p_{l+1}, \dots, p_{n+1}\}$ by Sylvester resultants:

$$p_{l+1,j} = \mathrm{res}(p_{l+1}, p_j, x_k), \qquad j = l+2, \dots, n+1.$$
According to the size and features of the given system (1), the selection and elimination process is repeated until $n - t$ variables have been removed from $\mathrm{Sys}(f_1, \dots, f_{n+1})$. Denote the derived system by $\mathrm{Sys}(q_1, \dots, q_{t+1})$.
Step 3. (Construct the matrix P k , i .)
for $i = 0, \dots, \tilde m_1 - 1$ do
   for $k = 1, \dots, t+1$ do
      Construct $P_{k,i}$ by (13) in Algorithm 1.
   end for
end for
Step 4. (Construct the matrix D D k , j + 1 .)
for $j = 0, \dots, t\tilde m_1 - 1$ do
   for $k = 1, \dots, t+1$ do
      Construct $DD_{k,j+1}$ by (14) recursively in Algorithm 1.
   end for
end for
Step 5. (Initialize the elements D i , j of Dixon ( q 1 , , q t + 1 ) .)
From steps 3 and 4, compute the products $S_i \cdot F_j$ by (8) and then initialize the elements $D_{i,j} = S_i \cdot F_j$ of $\mathrm{Dixon}(q_1, \dots, q_{t+1})$.
Step 6. (Construct Dixon(q_1, …, q_{t+1}).)
Observing (15), we find that the following relationship holds:
D_{i,j} = D_{i−1,j+1} + S_i·F_j,  1 ≤ i ≤ m̃_1 − 1,  0 ≤ j ≤ t·m̃_1 − 2,
so the recursion can be carried out along each anti-diagonal. Hence, constructing Dixon(q_1, …, q_{t+1}) as

Dixon(q_1, …, q_{t+1}) =
⎡ S_0·F_0   ⋯   S_0·F_{m̃_1−1}   S_0·F_{m̃_1}   ⋯   S_0·F_{t·m̃_1−1}   ⎤
⎢                                                   S_1·F_{t·m̃_1−1}   ⎥
⎢                   ⋱                                      ⋮           ⎥
⎢                                               S_{m̃_1−2}·F_{t·m̃_1−1} ⎥
⎣                                               S_{m̃_1−1}·F_{t·m̃_1−1} ⎦

with the first row and last column initialized in Step 5 and the remaining entries filled by the recursion avoids a large amount of repeated computation.
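For illustration, the initialization and anti-diagonal recursion of Step 6 can be mimicked with scalars standing in for the block products S_i·F_j (a simplification of ours; in the algorithm these entries are matrix products):

```python
import numpy as np

def fill_dixon_entries(S, F):
    """Initialize the first row (S_0*F_j) and the last column
    (S_i*F_last), then fill the rest via
        D[i, j] = D[i-1, j+1] + S[i]*F[j],
    so every anti-diagonal reuses the sums computed above it."""
    m, n = len(S), len(F)
    D = np.zeros((m, n))
    D[0, :] = S[0] * np.asarray(F, dtype=float)       # first row
    D[:, n - 1] = np.asarray(S, dtype=float) * F[n - 1]  # last column
    for i in range(1, m):
        for j in range(n - 2, -1, -1):
            D[i, j] = D[i - 1, j + 1] + S[i] * F[j]
    return D

D = fill_dixon_entries([1, 2, 3], [1, 10, 100, 1000])
# D[2, 1] accumulates S2*F1 + S1*F2 + S0*F3 = 30 + 200 + 1000 = 1230
```

Each interior entry thus costs one addition on top of the already-initialized product, which is the saving the recursion exploits.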
Our method exploits the sparsity in the variables of the system and introduces a heuristic strategy. Some of the variables are eliminated from the original system via successive Sylvester resultant computations; the resulting system, with fewer variables, is then processed by the improved FRDixon. Moreover, these two elimination processes can be executed in parallel.

4. Analysis and Evaluation

We first analyze the time complexity of the hybrid algorithm in Section 4.1 and then evaluate its performance: Section 4.2 reports experiments on random instances, and Section 4.3 applies our approach to a real problem. These examples illustrate the effectiveness and practicality of our method.

4.1. Time Complexity Analysis

We now give the sequential complexity of the hybrid algorithm in terms of the number of arithmetic operations, using big-O notation to simplify expressions and to estimate asymptotically how the number of operations grows with the input.
Theorem 1.
The time complexity of the hybrid algorithm (Algorithm 2) for constructing the Dixon matrix of Sys(f_1, …, f_{n+1}) defined by (1) is O(m̄³·(n² − t² + n + t) + m̃_1²·t!³·∏_{i=2}^t m̃_i³) (numerical-type determinants) or O(m̄!·(n² − t² + n + t) + m̃_1²·t!³·∏_{i=2}^t m̃_i³) (symbolic-type determinants).
Proof. 
Following the two-phase framework of the hybrid algorithm, we analyze the sequential complexity of each phase in turn.
In the first phase, we consider the successive Sylvester resultant computations, whose most expensive component is the evaluation of a set of Sylvester resultants. Evaluating an m × m numerical determinant by row operations requires about 2m³/3 arithmetic operations [35], while a symbolic determinant computed by cofactor expansion requires more than m! multiplications in general. Suppose that n − t variables are eliminated over the whole sequence of Sylvester resultant computations. For the ith elimination step, let l_i (≤ n + 1 − i), 1 ≤ i ≤ n − t, denote the number of Sylvester resultant computations, and let m̄ denote the degree bound of the polynomials.
In terms of the complexity of determinant computation, the first phase requires (2/3)·m̄³·Σ_{i=1}^{n−t} l_i arithmetic operations for numerical determinants, or about m̄!·Σ_{i=1}^{n−t} l_i arithmetic operations for symbolic determinants. Hence, the complexity of this part is O(m̄³·(n² − t² + n + t)) or O(m̄!·(n² − t² + n + t)) in big-O notation.
In the second phase, we consider the improved FRDixon applied to system (7). The main cost is the calculation of (8). Each of the products P_{k,i}·DD_{k,j+1} requires

((t−1)!·∏_{l=2}^t m̃_l)³ + (t−1)·((t−1)!·∏_{l=2}^t m̃_l)²

multiplications and

((t−1)!·∏_{l=2}^t m̃_l)³ − (t−2)·((t−1)!·∏_{l=2}^t m̃_l)² + (t−2)·(t−1)!·∏_{l=2}^t m̃_l

additions. The t·m̃_1² calls needed to compute the S_i·F_j cost

(t² − 1)·t·m̃_1²·[((t−1)!·∏_{l=2}^t m̃_l)³ + (t−1)·((t−1)!·∏_{l=2}^t m̃_l)²]

multiplications and

(t² − 1)·t·m̃_1²·[T̃³ − (t−2)·T̃² + (t−2)·T̃ + (t!·∏_{l=2}^t m̃_l)²]

additions, where T̃ = (t−1)!·∏_{l=2}^t m̃_l. The matrix Dixon(q_1, …, q_{t+1}) can then be assembled in a further (t·m̃_1 − 1)² additions using the method given in (16). Hence, the cost of the improved FRDixon is

(t² − 1)·t·m̃_1²·[((t−1)!·∏_{l=2}^t m̃_l)³ + (t−1)·((t−1)!·∏_{l=2}^t m̃_l)²]

multiplications and

(t² − 1)·t·m̃_1²·[T̃³ − (t−2)·T̃² + (t−2)·T̃ + (t!·∏_{l=2}^t m̃_l)²] + (t·m̃_1 − 1)²

additions; in big-O notation, the complexity of the improved FRDixon is O(m̃_1²·t!³·∏_{l=2}^t m̃_l³).
Hence, the complexity of the hybrid algorithm is O(m̄³·(n² − t² + n + t) + m̃_1²·t!³·∏_{i=2}^t m̃_i³) for the numerical type, or O(m̄!·(n² − t² + n + t) + m̃_1²·t!³·∏_{i=2}^t m̃_i³) for the symbolic type. □
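The ≈2m³/3 operation count used in the proof corresponds to determinant evaluation by row reduction. A minimal numerical sketch (partial pivoting added by us for stability):

```python
def det_by_elimination(A):
    """Determinant of a numeric m x m matrix by Gaussian elimination:
    about (2/3) m^3 arithmetic operations, versus roughly m! for
    cofactor expansion of a symbolic determinant."""
    A = [row[:] for row in A]          # work on a copy
    m, sign, det = len(A), 1, 1.0
    for c in range(m):
        p = max(range(c, m), key=lambda r: abs(A[r][c]))  # partial pivot
        if A[p][c] == 0:
            return 0.0                 # singular matrix
        if p != c:
            A[c], A[p] = A[p], A[c]
            sign = -sign
        for r in range(c + 1, m):
            f = A[r][c] / A[c][c]
            for k in range(c, m):
                A[r][k] -= f * A[c][k]
        det *= A[c][c]
    return sign * det
```

The triple loop performs Σ_c (m − c)² ≈ m³/3 multiply–subtract pairs, which is where the 2m³/3 figure comes from.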

4.2. Random Systems

To compare the performance of our hybrid algorithm with the successive Sylvester resultant elimination method and FRDixon ([11,16]), we implemented all three algorithms on three benchmark sets of different sizes. All timings are reported in CPU seconds and were obtained with Maple 18 on an Intel Core i5-3470 @ 3.20 GHz running Windows 10; the workloads involve basic operations such as matrix multiplication and determinant calculation.

4.2.1. Timings

Each polynomial of systems S1–S30 in Table 1, Table 2 and Table 3 is generated at random using the Maple command 'randpoly'. To guarantee the sparsity of each system, one randomly chosen variable is removed from each polynomial. Each system comprises one hundred instances, and the average running time is reported. Columns 4, 5 and 6 of Table 1, Table 2 and Table 3 give the timings for the successive Sylvester resultant elimination method (SylRes for short), FRDixon and the hybrid algorithm (Hybrid for short), respectively. To assess the parallel implementation of our algorithm, column 7 reports timings and speed-ups on four cores. '—' indicates that the program ran for more than 2000 s or ran out of memory.
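For readers without Maple, the generation procedure can be approximated as follows; the term generator and names are ours and do not reproduce the exact 'randpoly' settings:

```python
import math
import random
import sympy as sp

def random_sparse_system(n_vars, terms, degree, seed=0):
    """Build n_vars + 1 random polynomials in n_vars variables, then
    enforce sparsity by dropping one randomly chosen variable from
    each polynomial, as in the S1-S30 setup described above."""
    rng = random.Random(seed)
    xs = sp.symbols(f'x1:{n_vars + 1}')
    system = []
    for _ in range(n_vars + 1):
        dropped = rng.choice(xs)
        keep = [v for v in xs if v != dropped]
        poly = sp.Integer(0)
        for _ in range(terms):
            exps = [rng.randint(0, degree) for _ in keep]
            while sum(exps) > degree:          # cap the total degree
                exps[exps.index(max(exps))] -= 1
            poly += rng.randint(1, 9) * math.prod(
                v**e for v, e in zip(keep, exps))
        system.append(sp.expand(poly))
    return system

sys1 = random_sparse_system(n_vars=4, terms=5, degree=2)
```

Every polynomial produced this way misses at least one of the n variables, which is exactly the asymmetry the hybrid algorithm exploits.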
Benchmark #1
This set of benchmarks consists of 10 groups of systems. Every system of S1–S10 contains five polynomial equations in four variables.
The data in Table 1 show that for systems S1–S10, our new algorithm performs better than FRDixon. For systems S1–S4, SylRes is faster than the hybrid algorithm; as the terms and degrees increase and the systems become more complex, our hybrid algorithm is superior to SylRes. For test instances S9 and S10, we tried successive resultant computations using the existing Maple implementation of Sylvester's resultant under various variable orderings, but most of the time the computations ran out of memory. We also tried to compute the Dixon matrix by FRDixon; the program ran for more than 2000 s on these examples.
Table 1. Benchmark #1: variables = 4. Average time in seconds.

System | Term | Degree | SylRes | FRDixon | Hybrid | Hybrid (in Parallel)
S1 | 4 | 2 | 0.43 | 11.575 | 2.283 | 0.771
S2 | 5 | 2 | 0.93 | 19.274 | 2.673 | 0.879
S3 | 6 | 3 | 11.40 | 216.115 | 40.398 | 11.481
S4 | 7 | 3 | 36.02 | 234.975 | 49.641 | 13.264
S5 | 8 | 4 | 234.51 | 558.974 | 94.813 | 24.813
S6 | 9 | 4 | 265.42 | 568.757 | 105.221 | 28.952
S7 | 10 | 5 | 1469.08 | 906.224 | 287.381 | 73.475
S8 | 11 | 5 | 1564.15 | 921.901 | 297.517 | 76.837
S9 | 12 | 6 | — | — | 764.242 | 195.073
S10 | 13 | 6 | — | — | 773.361 | 197.019
Benchmark #2
This set of systems differs from the first benchmark in that the degree of each polynomial is fixed at four, while the numbers of terms and variables vary from small to large.
In the experiments of benchmark #2 in Table 2, we found that our new hybrid algorithm is faster than FRDixon at computing the Dixon matrix. Notice that once the number of variables reaches 7 and the polynomials have 12 or more terms, FRDixon did not terminate within 2000 s; benchmark #1 showed similar behavior. Successive Sylvester resultant computations cost less than our method on the simple systems S11 and S12; for the complicated systems, however, the successive technique shows its inefficiency.
Table 2. Benchmark #2: degrees = 4. Average time in seconds.

System | Term | Variable | SylRes | FRDixon | Hybrid | Hybrid (in Parallel)
S11 | 4 | 3 | 0.23 | 6.391 | 0.507 | 0.187
S12 | 5 | 3 | 0.21 | 5.330 | 0.847 | 0.223
S13 | 6 | 4 | 203.74 | 524.672 | 89.325 | 23.492
S14 | 7 | 4 | 219.53 | 533.013 | 98.321 | 26.398
S15 | 8 | 5 | 564.12 | 760.180 | 279.945 | 70.447
S16 | 9 | 5 | 596.09 | 781.112 | 299.864 | 76.106
S17 | 10 | 6 | 1759.25 | 1265.803 | 594.381 | 151.093
S18 | 11 | 6 | — | 1301.021 | 617.829 | 155.479
S19 | 12 | 7 | — | — | 1359.986 | 346.983
S20 | 13 | 7 | — | — | 1505.042 | 382.271
Benchmark #3
This set of benchmarks consists of 10 groups of systems of polynomial equations with six terms each and varying numbers of variables and degrees; see Table 3. As the complexity analysis of the hybrid algorithm shows, the number of arithmetic operations is mainly affected by the number of variables and the degrees. As these two parameters increase, all three algorithms need more computation time. Facing the intractable systems S27–S30, SylRes and FRDixon are both powerless.
In all experiments listed in Table 1, Table 2 and Table 3, our hybrid algorithm takes advantage of the sparsity of the given system and combines successive Sylvester resultant computations with the improved FRDixon. Its average running time never exceeds that of the original FRDixon, and except for some "simple" instances, the new algorithm also outperforms successive Sylvester resultant computations.
Table 3. Benchmark #3: terms = 6. Average time in seconds.

System | Degree | Variable | SylRes | FRDixon | Hybrid | Hybrid (in Parallel)
S21 | 2 | 3 | 0.02 | 2.640 | 0.207 | 0.072
S22 | 3 | 3 | 0.12 | 5.639 | 0.440 | 0.134
S23 | 3 | 4 | 11.01 | 223.527 | 34.568 | 9.007
S24 | 4 | 4 | 125.03 | 516.969 | 87.380 | 22.043
S25 | 4 | 5 | 532.52 | 727.617 | 271.239 | 69.971
S26 | 5 | 5 | 844.64 | 1106.289 | 478.947 | 122.307
S27 | 5 | 6 | — | — | 851.086 | 215.152
S28 | 6 | 6 | — | — | 987.602 | 249.005
S29 | 6 | 7 | — | — | 1823.056 | 460.764
S30 | 7 | 7 | — | — | 1909.443 | 483.125

4.2.2. Matrix Dimension

According to the definition of the resultant (the determinant of a resultant matrix), the dimension of the resultant matrix is the key factor affecting the computational cost of the target resultant. To compare the matrix dimensions generated by FRDixon and the hybrid algorithm, we record the matrix sizes for systems S1–S30 in Table 4, Table 5 and Table 6.
From Table 4, Table 5 and Table 6, we can see that the Dixon matrix produced by the hybrid algorithm is much smaller than that produced by FRDixon for every system.

4.3. Real Problems

In order to measure the overall performance of the hybrid algorithm on real problems, we apply our new algorithm to an open optimization problem in combinatorial geometry.
Given a spherical triangle ABC whose side lengths are a, b and c, respectively, the problem is to find a point P on the sphere such that the sum L of the distances between P and the vertices of ABC reaches its minimum.
As illustrated in Figure 1, let AB = c, BC = a, AC = b, PA = u, PB = v, PC = w and L = u + v + w, where all distances are measured in the Euclidean metric. If we can find the relationship between a, b, c and L, the original minimization problem is solved.
By applying Lemma 42.1 in [36] and the compactness of the sphere, we can prove that a point P on the sphere is characterized by the vanishing of a Cayley–Menger-type determinant; that is, the distances determined by the center point O of the sphere and the points A, B, C, P should satisfy

V(a, b, c, u, v, w) =
| 0    1    1    1    1    1 |
| 1    0    u²   v²   w²   1 |
| 1    u²   0    c²   b²   1 |
| 1    v²   c²   0    a²   1 |
| 1    w²   b²   a²   0    1 |
| 1    1    1    1    1    0 | = 0.
Now, we transform the relationship between a, b, c and L into an optimization problem of the following form:

min L = u + v + w
s.t. V(a, b, c, u, v, w) = 0.
Consequently, the Fermat–Torricelli point P on the sphere satisfies the polynomial system

L − u − v − w = 0,  G_λ = 0,  G_u = 0,  G_v = 0,  G_w = 0,
where
G = u + v + w + λ · V ( a , b , c , u , v , w ) .
From (17)–(20), we obtain the polynomial system

f_1(u, v, w, λ) = L − u − v − w,
f_2(u, v, w, λ) = a⁴u⁴ + 2a²b²u²v² + ⋯ + 4c²v²w² + 4c²w⁴ (28 terms),
f_3(u, v, w, λ) = (4a⁴u³ + 4a²b²uv² + ⋯ + 8c²uv² − 8c²uw²)λ + 1 (14 terms),
f_4(u, v, w, λ) = (4a²b²u²v − 4b⁴v³ + ⋯ + 8c²u²v − 8c²vw²)λ + 1 (14 terms),
f_5(u, v, w, λ) = (4a²c²u²w + 4b²c²v²w + ⋯ + 8c²v²w + 16c²w³)λ + 1 (14 terms).
It follows that this geometry problem is equivalent to solving the polynomial system (21) with five equations and four variables.
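The construction of system (21) can be reproduced symbolically. The following SymPy sketch builds the Lagrangian G from the determinant displayed in (17) and performs the first elimination step used below (the symbol names are ours):

```python
import sympy as sp

a, b, c, u, v, w, lam, L = sp.symbols('a b c u v w lambda L')
V = sp.Matrix([
    [0, 1,    1,    1,    1,    1],
    [1, 0,    u**2, v**2, w**2, 1],
    [1, u**2, 0,    c**2, b**2, 1],
    [1, v**2, c**2, 0,    a**2, 1],
    [1, w**2, b**2, a**2, 0,    1],
    [1, 1,    1,    1,    1,    0],
]).det()                                  # V(a, b, c, u, v, w) from (17)
G = u + v + w + lam * V                   # Lagrangian from (20)
f1 = L - u - v - w
f2, f3, f4, f5 = (sp.diff(G, s) for s in (lam, u, v, w))
# first elimination step of Section 4.3: remove lambda
f35 = sp.resultant(f3, f5, lam)
f45 = sp.resultant(f4, f5, lam)
```

Since f_3, f_4 and f_5 are linear in λ, each resultant reduces to a 2 × 2 determinant, which is why this first step is so cheap.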
We first report the result of solving (21) by successive Sylvester resultant computation. The elimination order is λ, u, v and w. Starting with the computation of res(f_3, f_5, λ) and res(f_4, f_5, λ), the variable λ is eliminated from (21), giving
f_1(u, v, w) = L − u − v − w,
f_2(u, v, w) = a⁴u⁴ + 2a²b²u²v² + ⋯ + 4c²v²w² + 4c²w⁴ (28 terms),
f_{3,5}(u, v, w) = 4a⁴u³ − 4a²b²u²v + ⋯ + 8c²uw² + 8c²vw² (26 terms),
f_{4,5}(u, v, w) = 4a⁴u³ + 4a²b²uv² + ⋯ + 8c²v²w − 16c²w³ (26 terms).
This is followed by computing
p 1 = res ( f 1 , f 2 , u ) , p 2 = res ( f 1 , f 3 , 5 , u ) , p 3 = res ( f 1 , f 4 , 5 , u ) ;
and
p 4 = res ( p 1 , p 2 , v ) , p 5 = res ( p 1 , p 3 , v ) ;
to eliminate u and v, respectively. Finally, w is eliminated by computing
p 6 = res ( p 4 , p 5 , w ) .
It was found that the elimination process res(p_4, p_5, w) could not be completed: memory overflowed after 7961.3 s.
Now, we discuss the trace of our algorithm on this system. First, λ was eliminated by computing res(f_3, f_5, λ) and res(f_4, f_5, λ), which took 0.85 s. This was followed by constructing a Dixon matrix of system (22), which turned out to be 384 × 384 and took 40.127 s; the total computation took 40.977 s. An alternative scheme eliminates the two variables λ and u by proceeding with (22) and (23), which took 1.48 s; FRDixon was then used to compute the target Dixon matrix, and after 1.626 s a 32 × 32 Dixon matrix was obtained, for a total of 3.106 s. The second scheme thus works better than the first.
Lastly, let us look at FRDixon alone: it took 895.013 s to compute a 1536 × 1536 Dixon matrix. The results of these methods are summarized in Table 7.

5. Conclusions

In this paper, we proposed a hybrid algorithm that combines Sylvester resultant elimination guided by a heuristic strategy with an improved fast recursive Dixon matrix construction algorithm. Our hybrid algorithm constructs a Dixon matrix efficiently, and its dimension is smaller than those produced by existing algorithms. Moreover, the algorithm shows reasonable robustness across randomized test cases of different sizes and a real problem.
To conclude, we point out that polynomial systems are sparse in many applications, and our hybrid method is well suited to such scenarios. We have demonstrated the prospects of our approach through a concrete example. These preliminary findings are encouraging and suggest that methods based on hybrid algorithms merit further study. We therefore plan to explore different, specific applications; for instance, we plan to apply the Dixon resultant as an algebraic attack method for solving multivariate quadratic polynomial systems over finite fields.

Author Contributions

G.D. contributed to conceptualization, methodology and writing—original draft preparation. N.Q. contributed to formal analysis, resources, software and writing—review and editing. M.T. contributed to validation and writing—review and editing. X.D. contributed to project administration, supervision and writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Guangxi Science and Technology project under Grant (No. Guike AD18281024), the Guangxi Key Laboratory of Cryptography and Information Security under Grant (No. GCIS201821), the Guilin University of Electronic Technology Graduate Student Excellent Degree Thesis Cultivation Project under Grant (No. 2020YJSPYB02), and Innovation Project of GUET Graduate Education (No. 2022YCXS144).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors wish to thank the referee for his or her very helpful comments and useful suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mohammadi, A.; Horn, J.; Gregg, R.D. Removing phase variables from biped robot parametric gaits. In Proceedings of the IEEE Conference on Control Technology and Applications—CCTA 2017, Waimea, HI, USA, 27–30 August 2017; pp. 834–840. [Google Scholar]
  2. Jaubert, O.; Cruz, G.; Bustin, A.; Schneider, T.; Lavin, B.; Koken, P.; Hajhosseiny, R.; Doneva, M.; Rueckert, D.; René, M.B. Water-fat Dixon cardiac magnetic resonance fingerprinting. Magn. Reson. Med. 2020, 83, 2107–2123. [Google Scholar] [CrossRef] [Green Version]
  3. Winkler, J.R.; Halawani, H. The Sylvester and Bézout Resultant Matrices for Blind Image Deconvolution. J. Math. Imaging Vis. 2018, 60, 1284–1305. [Google Scholar] [CrossRef] [Green Version]
  4. Lewis, R.H.; Paláncz, B.; Awange, J.L. Solving geoinformatics parametric polynomial systems using the improved Dixon resultant. Earth Sci. Inform. 2019, 12, 229–239. [Google Scholar] [CrossRef] [Green Version]
  5. Paláncz, B. Application of Dixon resultant to satellite trajectory control by pole placement. J. Symb. Comput. 2013, 50, 79–99. [Google Scholar] [CrossRef]
  6. Tang, X.J.; Feng, Y. Applying Dixon Resultants in Cryptography. J. Softw. 2007, 18, 1738–1745. [Google Scholar]
  7. Gao, Q.; Olgac, N. Dixon Resultant for Cluster Treatment of LTI Systems with Multiple Delays. IFAC-PapersOnLine 2015, 48, 21–26. [Google Scholar] [CrossRef]
  8. Han, P.K.; Horng, D.E.; Gong, K.; Petibon, Y.; Kim, K.; Li, Q.; Johnson, K.A.; Georges, E.F.; Ouyang, J.; Ma, C. MR-Based PET Attenuation Correction using a Combined Ultrashort Echo Time/Multi-Echo Dixon Acquisition. Math. Phys. 2020, 47, 3064–3077. [Google Scholar] [CrossRef] [Green Version]
  9. Yang, L.; Zhang, J.; Hou, X. Nonlinear Algebraic Equation System and Automated Theorem Proving; Shanghai Scientific and Technological Education Publishing House: Shanghai, China, 1996; ISBN 7-5428-1379-X. [Google Scholar]
  10. Tang, M.; Yang, Z.; Zeng, Z. Resultant elimination via implicit equation interpolation. J. Syst. Sci. Complex. 2016, 29, 1411–1435. [Google Scholar] [CrossRef]
  11. Fu, H.; Zhao, S. Fast algorithm for constructing general Dixon resultant matrix. Sci. China Math. 2005, 35, 1–14. (In Chinese) [Google Scholar] [CrossRef]
  12. Zhao, S. Dixon Resultant Research and New Algorithms. Ph.D. Thesis, Graduate School of Chinese Academy of Sciences, Chengdu Institute of Computer Applications, Chengdu, China, 2006. [Google Scholar]
  13. Zhao, S.; Fu, H. An extended fast algorithm for constructing the Dixon resultant matrix. Sci. China Math. 2005, 48, 131–143. [Google Scholar] [CrossRef]
  14. Zhao, S.; Fu, H. Three kinds of extraneous factors in Dixon resultants. Sci. China Math. 2009, 52, 160–172. [Google Scholar] [CrossRef]
  15. Fu, H.; Wang, Y.; Zhao, S.; Wang, Q. A recursive algorithm for constructing complicated Dixon matrices. Appl. Math. Comput. 2010, 217, 2595–2601. [Google Scholar] [CrossRef]
  16. Qin, X.; Wu, D.; Tang, L.; Ji, Z. Complexity of constructing Dixon resultant matrix. Int. J. Comput. Math. 2017, 94, 2074–2088. [Google Scholar] [CrossRef]
  17. Lewis, R.H. Heuristics to accelerate the Dixon resultant. Math. Comput. Simul. 2008, 77, 400–407. [Google Scholar] [CrossRef]
  18. Guo, X.; Leng, T.; Zeng, Z. The Fermat-Torricelli problem on sphere with euclidean metric. J. Syst. Sci. Math. Sci. 2018, 38, 1376–1392. [Google Scholar] [CrossRef]
  19. Kotsireas, I.S.; Karamanos, K. Exact Computation of the Bifurcation Point B4 of the Logistic Map and the Bailey–Broadhurst Conjectures. Int. J. Bifurc. Chaos 2004, 14, 2417–2423. [Google Scholar] [CrossRef] [Green Version]
  20. Lewis, R.H. Comparing acceleration techniques for the Dixon and Macaulay resultants. Math. Comput. Simul. 2010, 80, 1146–1152. [Google Scholar] [CrossRef]
  21. Candes, E.J. Mathematics of sparsity (and a few other things). In Proceedings of the International Congress of Mathematicians 2014, Seoul, Korea, 13–21 August 2014; pp. 1–27. [Google Scholar]
  22. Hu, J.; Monagan, M.B. A Fast Parallel Sparse Polynomial GCD Algorithm. In Proceedings of the ACM on International Symposium on Symbolic and Algebraic Computation—ISSAC 2016, Waterloo, ON, Canada, 19–22 July 2016; Abramov, S.A., Zima, E.V., Gao, X., Eds.; ACM: New York, NY, USA, 2016; pp. 271–278. [Google Scholar]
  23. Qiu, W.; Skafidas, E. Robust estimation of GCD with sparse coefficients. Signal Process. 2010, 90, 972–976. [Google Scholar] [CrossRef]
  24. Cuyt, A.A.M.; Lee, W. Sparse interpolation of multivariate rational functions. Theor. Comput. Sci. 2011, 412, 1445–1456. [Google Scholar]
  25. Dixon, A. The eliminant of three quantics in two independent variables. Proc. Lond. Math. Soc. 1909, s2-7, 49–69. [Google Scholar]
  26. Li, B.; Liu, Z.; Zhi, L. A structured rank-revealing method for Sylvester matrix. J. Comput. Appl. Math. 2008, 213, 212–223. [Google Scholar] [CrossRef] [Green Version]
  27. Zhao, S.; Fu, H. Multivariate Sylvester resultant and extraneous factors. Sci. China Math. 2010, 40, 649–660. [Google Scholar] [CrossRef]
  28. Minimair, M. Computing the Dixon Resultant with the Maple Package DR. In Proceedings of the Applications of Computer Algebra (ACA), Kalamata, Greece, 20–23 July 2015; Kotsireas, I., MartinezMoro, E., Eds.; ACA: Kalamata, Greece, 2017; pp. 273–287. [Google Scholar]
  29. Kapur, D.; Saxena, T.; Yang, L. Algebraic and Geometric Reasoning Using Dixon Resultants. In Proceedings of the International Symposium on Symbolic and Algebraic Computation, ISSAC ’94, Oxford, UK, 20–22 July 1994; MacCallum, M.A.H., Ed.; ACM: New York, NY, USA; pp. 99–107. [Google Scholar]
  30. Lu, Z. The Software of Gather2and2sift Based on Dixon Resultant. Ph.D. Thesis, Graduate School of Chinese Academy of Sciences, Beijing, China, 2003. [Google Scholar]
  31. Chionh, E.; Zhang, M.; Goldman, R.N. Fast Computation of the Bézout and Dixon Resultant Matrices. J. Symb. Comput. 2002, 33, 13–29. [Google Scholar] [CrossRef] [Green Version]
  32. Foo, M.; Chionh, E. Corner edge cutting and Dixon A-resultant quotients. J. Symb. Comput. 2004, 37, 101–119. [Google Scholar]
  33. Qin, X.; Feng, Y.; Chen, J.; Zhang, J. Parallel computation of real solving bivariate polynomial systems by zero-matching method. Appl. Math. Comput. 2013, 219, 7533–7541. [Google Scholar] [CrossRef] [Green Version]
  34. Qin, X.; Yang, L.; Feng, Y.; Bachmann, B.; Fritzson, P. Index reduction of differential algebraic equations by differential Dixon resultant. Appl. Math. Comput. 2018, 328, 189–202. [Google Scholar] [CrossRef]
  35. Lay, D.C. Linear Algebra and Its Applications; Addison-Wesley: Boston, MA, USA, 2013; ISBN 0321385178. [Google Scholar]
  36. Blumenthal, L.M. Theory and Applications of Distance Geometry, 2nd ed.; Chelsea House Pub: New York, NY, USA, 1970; ISBN 978-0828402422. [Google Scholar]
Figure 1. A triangle ABC on the sphere and its Fermat–Torricelli point P.
Table 4. Matrix dimensions corresponding to systems in Table 1.

Algorithm | S1 | S2 | S3 | S4 | S5
Hybrid | 48 × 48 | 48 × 48 | 480 × 480 | 720 × 720 | 1008 × 1008
FRDixon | 192 × 192 | 192 × 192 | 864 × 864 | 864 × 864 | 1944 × 1944

Algorithm | S6 | S7 | S8 | S9 | S10
Hybrid | 1440 × 1440 | 1260 × 1260 | 1296 × 1296 | 2592 × 2592 | 3402 × 3402
FRDixon | 2304 × 2304 | 2800 × 2800 | 2800 × 2800 | — | —
Table 5. Matrix dimensions corresponding to systems in Table 2.

Algorithm | S11 | S12 | S13 | S14 | S15
Hybrid | 72 × 72 | 96 × 96 | 864 × 864 | 1008 × 1008 | 1296 × 1296
FRDixon | 144 × 144 | 144 × 144 | 1944 × 1944 | 1944 × 1944 | 5760 × 5760

Algorithm | S16 | S17 | S18 | S19 | S20
Hybrid | 1944 × 1944 | 2970 × 2970 | 3168 × 3168 | 6336 × 6336 | 7128 × 7128
FRDixon | 5760 × 5760 | 8640 × 8640 | 8640 × 8640 | — | —
Table 6. Matrix dimensions corresponding to systems in Table 3.

Algorithm | S21 | S22 | S23 | S24 | S25
Hybrid | 12 × 12 | 18 × 18 | 480 × 480 | 840 × 840 | 1440 × 1440
FRDixon | 24 × 24 | 48 × 48 | 864 × 864 | 1944 × 1944 | 5760 × 5760

Algorithm | S26 | S27 | S28 | S29 | S30
Hybrid | 2160 × 2160 | 5760 × 5760 | 7128 × 7128 | 8640 × 8640 | 9072 × 9072
FRDixon | 6480 × 6480 | — | — | — | —
Table 7. Comparing the timings and the matrix dimensions for the Fermat–Torricelli problem on the sphere with the Euclidean metric.

 | SylRes | FRDixon | Hybrid (Scheme 1) | Hybrid (Scheme 2)
Timings (s) | >7961.3 | 895.013 | 40.977 | 3.106
Dimension of matrix | — | 1536 × 1536 | 384 × 384 | 32 × 32
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Deng, G.; Qi, N.; Tang, M.; Duan, X. Constructing Dixon Matrix for Sparse Polynomial Equations Based on Hybrid and Heuristics Scheme. Symmetry 2022, 14, 1174. https://doi.org/10.3390/sym14061174