Abstract
Multivariate polynomial interpolation plays a crucial role in both scientific computation and engineering applications. Exploring the structure of D-invariant (closed under differentiation) polynomial subspaces is of great significance for multivariate Hermite-type interpolation (especially ideal interpolation). We analyze the structure of a D-invariant polynomial subspace in terms of Cartesian tensors, where is a subspace with maximal total degree equal to . An arbitrary homogeneous polynomial of total degree k in can be rewritten as the inner products of a kth order symmetric Cartesian tensor and k column vectors of indeterminates. We show that can be determined by all polynomials of total degree one in . Namely, if we collect all linear polynomials in the basis of into a column vector, then this vector can be written as the product of a coefficient matrix and a column vector of indeterminates; our main result shows that the kth order symmetric Cartesian tensor corresponding to is a product of certain so-called relational matrices and .
1. Introduction
Multivariate polynomial interpolation is widely used in many application domains, such as image processing, electronic communication, and control theory. The theory of multivariate polynomial interpolation is far from complete, since the interpolation conditions can be complicated and the multiplicity structure of each interpolation site admits many different descriptions. Note that most related practical problems can be converted into ideal interpolation problems, whose interpolation conditions are determined by a D-invariant polynomial subspace. Throughout the paper, denotes the polynomial ring in d variables over . For simplicity, we work with the ground field or . Ideal interpolation, a special class of polynomial interpolation problems, can be defined by a linear projector (idempotent operator) of finite rank on . The kernel of the projector forms a polynomial ideal. Carl de Boor has carried out a great deal of important work on ideal interpolation [1].
The interpolation conditions of an ideal interpolant correspond to a D-invariant subspace [1,2]. Lagrange interpolation is a standard example, in which the interpolation conditions consist only of evaluation functionals. In the one-variable case, every ideal projector is the pointwise limit of Lagrange projectors when . However, this is not true in the multivariate case [3,4]. Thus, it is natural to ask which ideal interpolation problems can be written as limits of Lagrange interpolation problems; we call this the discrete approximation problem for ideal interpolation [5]. This amounts to asking how to discretize the differential operators in the D-invariant subspace attached to each interpolation site. In [5], by analyzing the structure of second-order D-invariant subspaces, we gave a sufficient condition for solving the discrete approximation problem in that case. This indicates that analyzing the structure of D-invariant subspaces helps us learn more about multivariate polynomial interpolation.
In polynomial system solving, the multiplicity structure at an isolated zero is identified with the dual space consisting of all linear functionals supported at that vanish on the entire polynomial ideal generated by the given polynomial system [2]. All of these functionals correspond to a vector space that is D-invariant (some references use the term “closed”). In 2005, Dayton and Zeng used dual space theory to compute the multiplicity structure in polynomial system solving [6]. Later, Zeng presented a new algorithm that substantially reduces the computation by means of a closedness subspace strategy [7]. In [8], Li and Zhi described the explicit structure of the D-invariant subspace in the breadth-one case, which helps to compute the multiplicity structure of an isolated singular solution more efficiently.
We have seen that analyzing the structure of D-invariant subspaces helps us to study a class of interpolation problems and to improve computational efficiency in polynomial system solving. The aim of this paper is to describe the structure of D-invariant subspaces in terms of Cartesian tensors.
2. Preliminaries
In this section, we will recall some basic definitions in symbolic computation [9] and the concept of tensors in multilinear algebra [10].
A monomial order on the set of monomials $x^{\alpha}$, $\alpha\in\mathbb{Z}_{\geq 0}^{d}$, is any relation ≻ on $\mathbb{Z}_{\geq 0}^{d}$ satisfying the following:
(i) ≻ is a total (or linear) order on $\mathbb{Z}_{\geq 0}^{d}$.
(ii) If $\alpha \succ \beta$ and $\gamma \in \mathbb{Z}_{\geq 0}^{d}$, then $\alpha+\gamma \succ \beta+\gamma$.
(iii) ≻ is a well-ordering on $\mathbb{Z}_{\geq 0}^{d}$.
Let $f=\sum_{\alpha}a_{\alpha}x^{\alpha}$ be a nonzero polynomial and ≻ be a monomial order. The multidegree of f is the following:
$\operatorname{multideg}(f)=\max_{\succ}\{\alpha\in\mathbb{Z}_{\geq 0}^{d}:a_{\alpha}\neq 0\}.$
The leading monomial of f is the following:
$\operatorname{LM}(f)=x^{\operatorname{multideg}(f)}.$
The total degree of f, denoted by $\deg(f)$, is the maximum $|\alpha|=\alpha_{1}+\cdots+\alpha_{d}$ such that the coefficient $a_{\alpha}$ is nonzero.
Definition 1.
A polynomial subspace is said to be D-invariant if it is closed under differentiation, i.e., $\partial p/\partial x_{j}$ belongs to the subspace for every polynomial p in it and every $j=1,\dots,d$,
in which $\partial p/\partial x_{j}$ is the partial derivative of p with respect to the jth argument.
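To make the definition concrete, the following minimal Python/SymPy sketch (with hypothetical example subspaces; it is not part of the original development) checks D-invariance of a finite-dimensional span by verifying that every partial derivative of every basis polynomial stays in the span:

```python
# Minimal sketch: test whether span(basis) is closed under differentiation.
# The example subspaces below are hypothetical illustrations.
from itertools import product
import sympy as sp

x, y = sp.symbols('x y')
gens = (x, y)

def coord(p, exps, gens):
    """Coefficient row vector of p with respect to the exponent tuples in `exps`."""
    coeffs = sp.Poly(p, *gens).as_dict()
    return sp.Matrix([[coeffs.get(e, 0) for e in exps]])

def is_D_invariant(basis, gens, max_deg):
    """True iff every partial derivative of every basis polynomial lies in span(basis)."""
    exps = [e for e in product(range(max_deg + 1), repeat=len(gens)) if sum(e) <= max_deg]
    B = sp.Matrix.vstack(*[coord(p, exps, gens) for p in basis])
    for p in basis:
        for v in gens:
            row = coord(sp.diff(p, v), exps, gens)
            # The derivative lies in the span iff appending its row does not raise the rank.
            if sp.Matrix.vstack(B, row).rank() != B.rank():
                return False
    return True

print(is_D_invariant([sp.Integer(1), x, y, x*y + x**2], gens, 2))  # True
print(is_D_invariant([sp.Integer(1), y, x*y + x**2], gens, 2))     # False: d/dx gives y + 2x, but x is missing
```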
In this paper, we denote by a D-invariant polynomial subspace of degree n, i.e., satisfies the following:
(i) .
(ii) There exists at least one polynomial of total degree n in .
(iii) , .
For any given subspace , we also define
Since is a linear space, we only need to study a special basis of , which is obtained as follows: fix a term order (for example, the graded reverse lexicographic order, ); write the polynomials of a given basis of in matrix form, where the columns are indexed by the monomials in increasing order and the rows are indexed by the basis polynomials; then, using Gauss–Jordan elimination, compute the reduced row echelon form of this matrix, whose nonzero rows give another basis for . We call this new basis the “reduced basis” of . In the discussion that follows, by a basis of a subspace we always mean its reduced basis.
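As a small illustration of this construction (a sketch with a hypothetical spanning set, not an example from the paper), one can compute such a reduced basis with SymPy by listing the monomial columns in graded reverse lexicographic order and taking the reduced row echelon form of the coefficient matrix:

```python
# Sketch: compute a "reduced basis" of a polynomial subspace via Gauss-Jordan elimination.
# The spanning set `basis` is a hypothetical example.
from itertools import product
import sympy as sp
from sympy.polys.orderings import grevlex

x, y = sp.symbols('x y')
gens = (x, y)
basis = [x**2 + x, x**2 + y, x + y + 1]
max_deg = max(sp.Poly(p, *gens).total_degree() for p in basis)

# Columns: exponent tuples of all monomials of degree <= max_deg, in increasing grevlex order.
exps = sorted((e for e in product(range(max_deg + 1), repeat=len(gens)) if sum(e) <= max_deg),
              key=grevlex)
A = sp.Matrix([[sp.Poly(p, *gens).as_dict().get(e, 0) for e in exps] for p in basis])

R, _ = A.rref()   # reduced row echelon form (Gauss-Jordan elimination)
reduced_basis = [sum(c * sp.Mul(*(g**k for g, k in zip(gens, e))) for c, e in zip(R.row(i), exps))
                 for i in range(R.rank())]
print(reduced_basis)
```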
Einstein Convention. Einstein introduced a convention whereby, if a particular subscript (e.g., i) appears twice in a single term of an expression, then that term is implicitly summed over the subscript, and i is called a dummy index. For example, in traditional notation, we have the following:
$y=\sum_{i=1}^{d}a_{i}x_{i}=a_{1}x_{1}+a_{2}x_{2}+\cdots+a_{d}x_{d},$
and using the summation convention, we can simply write the following:
$y=a_{i}x_{i}.$
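As a quick numerical illustration (assumed data, using NumPy's einsum, whose subscript strings follow the same convention):

```python
# The repeated (dummy) index i is summed over: y = a_i x_i.
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(4)
x = rng.standard_normal(4)

y = np.einsum('i,i->', a, x)   # implicit sum over the dummy index i
assert np.isclose(y, sum(a[k] * x[k] for k in range(4)))
```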
Let $V=\mathbb{R}^{d}$. Recall that a tensor of real numbers is simply an element in a tensor product of the vector spaces $V$ [11]. Let
$\{e_{1},e_{2},\dots,e_{d}\}$
be the standard basis of $\mathbb{R}^{d}$; the elementary tensors $e_{i_{1}}\otimes e_{i_{2}}\otimes\cdots\otimes e_{i_{n}}$ then form a standard basis for the n-fold tensor product $\mathbb{R}^{d}\otimes\cdots\otimes\mathbb{R}^{d}$. Note that $\langle e_{i},e_{j}\rangle=\delta_{ij}$, with $\delta_{ij}$ the Kronecker delta.
We denote by
$T=T_{i_{1}i_{2}\cdots i_{n}}\,e_{i_{1}}\otimes e_{i_{2}}\otimes\cdots\otimes e_{i_{n}}$
an nth order Cartesian tensor. Here, the sum over $i_{1},i_{2},\dots,i_{n}$ is implicit, and $T_{i_{1}i_{2}\cdots i_{n}}$ denotes the weight of $e_{i_{1}}\otimes e_{i_{2}}\otimes\cdots\otimes e_{i_{n}}$ in the sum. Throughout the paper, d denotes the number of indeterminates, so if there is no confusion, we will simply write $e_{i}$ (i is a subscript) if $e_{i}$ is in the standard basis of $\mathbb{R}^{d}$, and write $e^{i}$ (i is a superscript) if $e^{i}$ is in the standard basis of $\mathbb{R}^{q}$, where q need not be equal to d. In particular, for a second order tensor on $\mathbb{R}^{d}$, $T=T_{ij}\,e_{i}\otimes e_{j}$, representing each basis tensor $e_{i}\otimes e_{j}$ as the $d\times d$ matrix with a 1 in position $(i,j)$ and zeros elsewhere, T can be written in the form of a square matrix, i.e., $T=(T_{ij})_{d\times d}$.
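The identification of a second order tensor with the square matrix of its components can be checked directly (assumed data):

```python
# A second order Cartesian tensor T = T_ij e_i (x) e_j, assembled from outer
# products of the standard basis vectors, is just the matrix (T_ij).
import numpy as np

d = 3
Tij = np.arange(d * d, dtype=float).reshape(d, d)   # arbitrary components
E = np.eye(d)                                       # rows are the standard basis vectors e_i

T = sum(Tij[i, j] * np.outer(E[i], E[j]) for i in range(d) for j in range(d))
assert np.allclose(T, Tij)
```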
Definition 2
([11]). A tensor is called symmetric if it is invariant under any permutation σ of its n indices, i.e., the following:
$T_{i_{\sigma(1)}i_{\sigma(2)}\cdots i_{\sigma(n)}}=T_{i_{1}i_{2}\cdots i_{n}}.$
The space of symmetric tensors of order n on is naturally isomorphic to the space of homogeneous polynomials of total degree n in d variables [11]. We will use this fact to represent a homogeneous polynomial in .
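The following minimal sketch (with a hypothetical cubic, not an example from the paper) illustrates this correspondence for n = 3: the homogeneous polynomial is recovered by contracting a symmetric order-3 tensor with three copies of the vector of indeterminates:

```python
# p(x) = T_{ijk} x_i x_j x_k for a symmetric order-3 tensor T on R^2,
# encoding the (hypothetical) polynomial p(x1, x2) = x1**3 + 3*x1*x2**2.
import numpy as np

d = 2
T = np.zeros((d, d, d))
T[0, 0, 0] = 1.0                                 # coefficient of x1**3
for idx in [(0, 1, 1), (1, 0, 1), (1, 1, 0)]:    # 3*x1*x2**2 spread over the 3 permutations
    T[idx] = 1.0

def p(x):
    return np.einsum('ijk,i,j,k->', T, x, x, x)

x = np.array([2.0, -1.0])
assert np.isclose(p(x), x[0]**3 + 3 * x[0] * x[1]**2)   # both equal 14
```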
Definition 3
([12]). The inner product (also known as contraction) of two Cartesian tensors A and B is defined by summing over a shared (dummy) index, i.e., for tensors of orders m and n,
$(A\cdot B)_{i_{1}\cdots i_{m-1}\,j_{2}\cdots j_{n}}=A_{i_{1}\cdots i_{m-1}k}\,B_{k\,j_{2}\cdots j_{n}}.$
For example, if A and B are matrices, we obtain the ordinary matrix multiplication
$(A\cdot B)_{ij}=A_{ik}B_{kj},$
etc. The crucial requirement here is that the vectors being contracted in the inner product must have the same dimension.
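A short numerical check of this contraction rule (assumed data):

```python
# Single-index contraction: for second order tensors it is ordinary matrix
# multiplication, (A . B)_{ij} = A_{ik} B_{kj}; contracting with a first order
# tensor gives the matrix-vector product. The contracted dimensions must agree.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
v = rng.standard_normal(3)

AB = np.einsum('ik,kj->ij', A, B)   # order 2 . order 2 -> order 2
Av = np.einsum('ik,k->i', A, v)     # order 2 . order 1 -> order 1

assert np.allclose(AB, A @ B)
assert np.allclose(Av, A @ v)
```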
3. The Structure of the D-Invariant Subspace
The structure of was discussed in our paper [5]; we list the results in this section for completeness.
We first consider a special second-degree D-invariant subspace as follows:
in which the superscript indicates the total degree of the polynomial. Without loss of generality, the polynomials of total degree one can be rewritten as follows:
With all , given, we have the following result.
Theorem 1
([5]). With the above notation, in has the following form:
where E is a symmetric matrix and L is a column matrix.
Note that E has free parameters, and L has free parameters. The proof of Theorem 1 also establishes the following result.
Corollary 1
([5]). Suppose that , where . Then, each , has the following form:
where is a symmetric matrix, and is a column matrix.
4. The Structure of the D-Invariant Subspace
We first assume that there is only one polynomial of total degree n in . In the general case, since we are considering reduced bases, all polynomials of total degree n in the basis have this form.
4.1. The Structure of the Homogeneous D-Invariant Subspace
We will discuss the structure of in the D-invariant subspace as follows:
with given in this part. The partial derivative of the highest homogeneous part of can be represented as a linear combination of the quadratic homogeneous part of ; hence, it is natural to begin with the case when and are all homogeneous polynomials. The general situation of is covered at the end of the next section.
An arbitrary homogeneous polynomial has the following form:
where are dummy indices and
Here, denotes any permutation of .
Lemma 1.
Suppose that is of the form (5), then .
Proof.
,
Thus,
The proof is completed. □
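As a numerical sanity check of this kind of differentiation identity (the exact statement of Lemma 1 is not reproduced here, so the identity below is the standard one for a symmetric form and the data are assumed): for a symmetric order-3 tensor T and p(x) = T_{ijk} x_i x_j x_k, one has ∂p/∂x_m = 3 T_{mjk} x_j x_k, which the following sketch verifies by finite differences:

```python
# Verify d/dx_m [ T_{ijk} x_i x_j x_k ] = 3 * T_{mjk} x_j x_k for a symmetric T (assumed data).
from itertools import permutations
import numpy as np

d, n = 3, 3
rng = np.random.default_rng(2)
T = rng.standard_normal((d,) * n)
T = sum(T.transpose(s) for s in permutations(range(n))) / 6.0   # symmetrize over all index permutations

def p(x):
    return np.einsum('ijk,i,j,k->', T, x, x, x)

x0 = rng.standard_normal(d)
grad_closed = n * np.einsum('mjk,j,k->m', T, x0, x0)
h = 1e-6
grad_fd = np.array([(p(x0 + h * e) - p(x0 - h * e)) / (2 * h) for e in np.eye(d)])
assert np.allclose(grad_closed, grad_fd, atol=1e-5)
```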
Proposition 1.
Suppose that is of the form (5). Let where with , . Let be the coefficients satisfying the following:
namely, is the relational matrix between and all . Then
where the right side is a sum w.r.t. k from 1 to .
Proof.
Since
comparing the above equation with (7), we have the following:
which is equivalent to the following:
Thus, the proposition is proved. □
Next, we focus on formulating the relational expression between and all in “matrix form”.
Supposing that is a sequence of matrices of the same size, the following notation is useful:
Theorem 2.
Let , be the relational matrices between and all linear polynomials in the graded basis of (i.e., ). With the above notation,
4.2. The Degrees of Freedom of
We have proved that has the form (11). Corollary 1 shows that each has free parameters with given. Now we turn to the following question: given the space , how many degrees of freedom does have? According to Equation (8), the constraints on are derived from the symmetry of , i.e., Equation (6).
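For orientation (a general counting fact, not the specific bound derived in the text, whose formula is not reproduced here): before any further constraints coming from the structure of the subspace, a symmetric Cartesian tensor of order n on R^d has C(d + n - 1, n) independent entries, since independent entries correspond to non-decreasing index tuples:

```python
# Count the independent entries of a symmetric order-n tensor on R^d.
from math import comb
from itertools import combinations_with_replacement

def symmetric_dof(d, n):
    return comb(d + n - 1, n)

d, n = 3, 4
# Independent entries <-> multisets of n indices drawn from {0, ..., d-1}.
assert symmetric_dof(d, n) == sum(1 for _ in combinations_with_replacement(range(d), n))
print(symmetric_dof(d, n), "independent entries out of", d**n, "total")   # 15 out of 81
```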
Lemma 2.
The number of equality constraints contained in (6) is the following:
Proof.
There are three cases to consider. First, do not lead to any constraint. Second, are of the form with . Notice that holds naturally by (8), so there are equality constraints. Third, , are of the form Each leads to five equations, for which three pairs naturally hold, so there are equality constraints in this case. □
Similar to (2), with the term order , we can write the following:
in which Q is a column permutation matrix. Since the basis is reduced, . We denote by and the following sets:
Theorem 3.
Let χ denote the number of the degrees of freedom of . Then, we have the following:
Proof.
With given, is equal to the number of degrees of freedom of , so the second inequality obviously holds. To verify the first one, note that some of the linear equations derived from (6) may not be linearly independent; combining this with Lemma 2, the theorem is proved.
Example 1.
Choose in (13). Then
which indicates that . Thus, by the above theorem, the following holds:
so that .
Next, let us show that the estimate (14) is sharp through the following example.
Example 2.
Let , and
By (14),
We can verify the following:
, if . The free parameters in are .
, if . The free parameters can be chosen as .
5. The Structure of the D-Invariant Subspace
Consider the following D-invariant subspace:
in which for . We will discuss the special case where all polynomials in the reduced basis of are restricted to be homogeneous. Let
with , .
Similar to the case when , let ; then the following can be verified:
Note that if , is a first order tensor which can be written as a vector as in Section 3; if , is a second order tensor which we write as a matrix in (3). Namely, has two equivalent forms:
Finally, assuming that is given, we set forth a general form of Theorem 2.
Theorem 4.
For any fixed , , let be the relational matrix between and all polynomials of total degree in the basis of . Then,
where .
Unlike for general linear polynomial subspaces, the “closed” (D-invariance) property makes it possible to read off, from the above formula, the relation between the polynomial of higher degree (i.e., ) and all linear polynomials in . For the proof of this theorem, we need the following lemmas.
Lemma 3.
Proof.
This completes the proof. □
In Theorem 2, we have actually proved the following:
This can be easily generalized to an arbitrary , as follows.
Lemma 4.
Proof
(Proof of Theorem 4). We will use induction on n. If , the theorem is true by Theorem 2. Now assume that the theorem holds for all . If we let in (15), we have the following:
on the other hand, using Lemma 3, we obtain the following:
Thus,
By our inductive assumption and Lemma 4, the theorem is proved. □
6. Conclusions
Li and Zhi demonstrated the structure of the breadth-one D-invariant polynomial subspace in [8]. We analyzed the structure of a second-degree D-invariant polynomial subspace in our previous work [5]. As an application to ideal interpolation, we solved the discrete approximation problem for under certain conditions. In this work, we discuss the structure of in a special case, where all polynomials in the reduced basis of are restricted to be homogeneous. In the future, we will consider a more general case of . For any fixed s and t, we can decompose into its homogeneous components and write the following:
with , a jth order symmetric Cartesian tensor for . Since 1 is always in , the constant term in can be omitted after reduction. We can now analyze the structure of the following:
with given. In view of the D-invariance of , only relates to the highest homogeneous components of all , i.e., . In addition, note that each can be expressed in the same way; hence, has the form (17). relates to all the homogeneous components of total degree of the polynomials in . With linear reduction, can be written in the form .
Author Contributions
Conceptualization, X.J.; methodology, X.J. and K.C.; software, X.J. and K.C.; validation, X.J. and K.C.; formal analysis, X.J. and K.C.; investigation, X.J. and K.C.; resources, X.J. and K.C.; data curation, X.J. and K.C.; writing–original draft preparation, X.J.; writing–review and editing, K.C.; visualization, X.J. and K.C.; supervision, X.J. and K.C.; project administration, X.J. and K.C.; funding acquisition, X.J. All authors have read and agreed to the published version of the manuscript.
Funding
This work was supported by the National Natural Science Foundation of China under Grant No. 11901402.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
References
- de Boor, C. Ideal interpolation. In Approximation Theory XI: Gatlinburg; Nashboro Press: Brentwood TN, USA, 2004; pp. 59–91. [Google Scholar]
- Marinari, M.G.; Möller, H.M.; Mora, T. Gröbner bases of ideals defined by functionals with an application to ideals of projective points. Appl. Algebra Engrg. Comm. Comput. 1993, 4, 103–145. [Google Scholar] [CrossRef]
- de Boor, C.; Shekhtman, B. On the pointwise limits of bivariate Lagrange projectors. Linear Algebra Appl. 2008, 429, 311–325. [Google Scholar] [CrossRef] [Green Version]
- Shekhtman, B. On a conjecture of Carl de Boor regarding the limits of Lagrange interpolants. Constr. Approx. 2006, 24, 365–370. [Google Scholar] [CrossRef]
- Jiang, X.; Zhang, S. The structure of a second-degree D-invariant subspace and its application in ideal interpolation. J. Approx. Theory 2016, 207, 232–240. [Google Scholar] [CrossRef]
- Dayton, B.H.; Zeng, Z. Computing the multiplicity structure in solving polynomial systems. In Proceedings of the 2005 International Symposium on Symbolic and Algebraic Computation, Beijing, China, 24–27 July 2005; ACM: New York, NY, USA, 2005; pp. 116–123. [Google Scholar]
- Zeng, Z. The Closedness Subspace Method for Computing the Multiplicity Structure of a Polynomial System. 2009. Available online: http://orion.neiu.edu/~zzeng/Papers/csdual.pdf (accessed on 1 May 2021).
- Li, N.; Zhi, L. Computing the multiplicity structure of an isolated singular solution: Case of breadth one. J. Symb. Comput. 2012, 47, 700–710. [Google Scholar] [CrossRef] [Green Version]
- Cox, D.; Little, J.; O’Shea, D. Ideals, Varieties, and Algorithms; Springer: New York, NY, USA, 1992. [Google Scholar]
- Northcott, D.G. Multilinear Algebra; Cambridge University Press: Cambridge, UK, 1984. [Google Scholar]
- Comon, P.; Golub, G.; Lim, L.H.; Mourrain, B. Symmetric tensors and symmetric tensor rank. SIAM J. Matrix Anal. Appl. 2008, 30, 1254–1279. [Google Scholar] [CrossRef] [Green Version]
- Zhang, R. A Brief Tutorial on Tensor Analysis; Tongji University Press: Shanghai, China, 2010. [Google Scholar]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).