Mathematics 2018, 6(9), 166; doi:10.3390/math6090166

Article
A Heuristic Method for Certifying Isolated Zeros of Polynomial Systems
Xiaojie Dou  1 and Jin-San Cheng  2,*
1 College of Science, Civil Aviation University of China, Tianjin 300300, China
2 KLMM, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China
* Author to whom correspondence should be addressed.
Received: 12 July 2018 / Accepted: 3 September 2018 / Published: 11 September 2018

## Abstract

In this paper, by transforming the given over-determined system into a square system, we prove a necessary and sufficient condition for certifying the simple real zeros of the over-determined system by certifying the simple real zeros of the square system. After certifying a simple real zero of the related square system with the interval methods, we assert that the certified zero is a local minimum of the sum of squares of the input polynomials. If the value of the sum of squares of the input polynomials at the certified zero is zero, it is a zero of the input system. As an application, we also consider the heuristic verification of isolated zeros of polynomial systems and their multiplicity structures.
Keywords:
over-determined polynomial system; isolated zeros; minimum point; sum of squares; interval methods

## 1. Introduction

Finding zeros of polynomial systems is a fundamental problem in scientific computing. Newton’s method is widely used to solve this problem. For a fixed approximate solution of a system, we can use the $α$-theory [1,2,3], the interval methods or the optimization methods [4,5,6,7,8,9] to completely determine whether it is related to a zero of the system. However, the $α$-theory and the interval methods focus mainly on a simple zero of a square system, that is, a system with n equations and n unknowns.
Some special certifications of a rational solution of rational polynomials with certified sum of squares decompositions are considered [10,11,12,13,14,15,16].
What about singular zeros of a well-constrained polynomial system? Usually, an over-determined system that contains the same zero, now as a simple one, is constructed by introducing new equations. The basic idea is the deflation technique [17,18,19,20,21,22,23,24]. In some studies [25,26,27,28,29,30], new variables are also introduced. Moreover, some authors verify that a perturbed system possesses an isolated singular solution within a narrow, computed error bound. The multiplicity structures of singular zeros of a polynomial system have also been studied [18,21,29]. Although it works only in a theoretical and global sense, the method in [17] provides a sufficient condition for a point to be exactly a zero of a zero-dimensional polynomial system with rational coefficients.
For the deflation methods mentioned above, on the one hand, being a zero of the perturbed system does not mean being a zero of the input system, considering the difference between the two systems; on the other hand, although the over-determined systems built without introducing new variables have the same zeros as the input systems, the verification methods, such as the $α$-theory or the interval methods, cannot in general be used directly on over-determined systems.
In [31], the authors extended the $α$-theory from well-constrained systems to over-determined systems. A main result about Newton’s method in their paper is Theorem 4 [31], which states that if $2\alpha_1(g,\zeta) < 1$, where $g = (g_1, \ldots, g_m) \in (\mathbb{C}[x_1, \ldots, x_n])^m$ ($m \ge n$), $J(g)(x)^{\dagger}$ is the Moore–Penrose inverse of the Jacobian matrix $J(g)(x)$ of $g$ and
$\alpha_1(g,x) = \beta_1(g,x)\,\gamma_1(g,x), \quad \beta_1(g,x) = \| J(g)(x)^{\dagger} \| \, \| g(x) \|, \quad \gamma_1(g,x) = \sup_{k \ge 2} \left\| J(g)(x)^{\dagger} \frac{J^k(g)(x)}{k!} \right\|^{\frac{1}{k-1}},$
then $\zeta$ is an attractive fixed point for Newton’s method and, simultaneously, a strict local minimum of $\| g \|^2 = \sum_{j=1}^m \| g_j \|^2$. However, as they stated, whether the attractive fixed points of Newton’s method are always local minima of $\| g \|^2$, or zeros of the input system, is unknown.
In this paper, we consider the problem of certifying the simple real zeros of an over-determined polynomial system. Given $Σ = { f 1 , … , f m } ∈ ( R [ x 1 , … , x n ] ) m ( m ≥ n )$, we construct a new square system $Σ ′ = { ∂ f ∂ x 1 , … , ∂ f ∂ x n }$ with $f = ∑ i = 1 m f i 2$. After transforming the input over-determined system into a square one, we can use both the $α$-theory and the interval methods to certify its simple zeros. In this paper, we only consider using the interval methods to certify the simple real zeros of the over-determined system. We prove that the simple real zeros of the input system are local minima of sum of squares of the input polynomials. We also give the condition that the local minimum is a simple zero of the input system.
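The transformation $Σ ↦ Σ'$ described above can be sketched numerically. The following pure-Python snippet is an illustration of the construction, not the paper’s implementation: it builds $f = \sum_i f_i^2$ for the three polynomials used later in Example 1 and approximates the square system $Σ' = \{\partial f/\partial x_1, \ldots, \partial f/\partial x_n\}$ by central differences.

```python
# Sketch: turn an over-determined system Sigma = {f1,...,fm} into the
# square system Sigma' = {df/dx1,...,df/dxn} with f = sum fi^2.
# Gradients are approximated by central differences; the polynomials
# below are the ones from Example 1 of the paper.

def make_f(polys):
    """f(x) = sum of squares of the input polynomials."""
    return lambda x: sum(p(x) ** 2 for p in polys)

def gradient(f, n, h=1e-6):
    """Central-difference approximation of the gradient of f."""
    def grad(x):
        g = []
        for i in range(n):
            xp, xm = list(x), list(x)
            xp[i] += h
            xm[i] -= h
            g.append((f(xp) - f(xm)) / (2 * h))
        return g
    return grad

# Example 1: Sigma = {x^2 - 2y, y^2 - x, x^2 - 2x + y^2 - 2y}
sigma = [lambda v: v[0]**2 - 2*v[1],
         lambda v: v[1]**2 - v[0],
         lambda v: v[0]**2 - 2*v[0] + v[1]**2 - 2*v[1]]
f = make_f(sigma)
sigma_prime = gradient(f, 2)        # the square system Sigma'
print(sigma_prime([0.0, 0.0]))      # ~ [0, 0]: p = (0, 0) is a stationary point
```

For polynomial input one would of course differentiate symbolically; finite differences are used here only to keep the sketch dependency-free.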
Let $R$ be the field of real numbers. Denote $R [ x ] = R [ x 1 , … , x n ]$ as the polynomial ring. Let $F = { f 1 , … , f m } ⊂ R [ x ]$ be a polynomial system. Let $p = ( p 1 , … , p n ) ∈ R n$.
The following theorem is the main result of this paper.
Theorem 1.
Let $Σ = { f 1 , … , f m } ⊂ R [ x ]$ ($m ≥ n$) and $f = ∑ i = 1 m f i 2$. Then, we have:
1. If $p ∈ R n$ is an isolated simple real zero of Σ, then $p$ is a local minimum of f.
2. $p$ is a simple real zero of Σ if and only if $( p , 0 )$ is a simple real zero of the square system $Σ r = { J 1 ( f ) , … , J n ( f ) , f − r }$, where $J i ( f ) = ∂ f ∂ x i$ and r is a new variable.
In the above theorem, we get a necessary and sufficient condition to certify the simple real zeros of the input system $Σ$ by certifying the simple real zeros of the square system $Σ r$. Therefore, to certify that $p$ is a simple real zero of $Σ$, the key point is verifying that $f ( p ) = 0$.
However, it is difficult to decide numerically if a point is a zero of a polynomial. Thus, we cannot use the necessary and sufficient condition to certify the simple real zeros of $Σ$ by certifying the simple real zeros of $Σ r$.
As an alternative, we refine and certify the simple real zeros of $Σ$ by refining and certifying a new square system $Σ ′ = { J 1 ( f ) , …$, $J n ( f ) }$ with the interval methods and get a verified inclusion $X$, which contains a unique simple real zero $x ^$ of $Σ ′$. In fact, $x ^$ is a local minimum of f, which also is a necessary condition for the certification. On the one hand, if $f ( x ^ ) = 0$, by Theorem 1, $( x ^ , 0 )$ is a simple real zero of $Σ r$, and then $x ^$ is a simple real zero of $Σ$. Thus, we certified the input system $Σ$. On the other hand, if $f ( x ^ ) ≠ 0$, we can only assert that $Σ r$ has a unique zero in the verified inclusion $X × [ 0 , f ( x ^ ) ]$, which means we certified the system $Σ r$.
A big difference between this paper and our previous work [32] is that we do not merely consider certifying simple zeros of over-determined polynomial systems, but also consider the certification of general isolated zeros. Specifically, as an application of our method, we give a heuristic method for certifying not only the isolated singular zeros of polynomial systems, but also the multiplicity structures of these zeros.
This paper is an extended version of the CASC’17 conference paper [32].
The paper is organized as follows. We introduce some notations and preliminaries in the next section. In Section 3, we give a method to show how to transform an over-determined system into a square one. The interval verification method on the obtained square system is considered in Section 4. We give two applications of our method in Section 5 and draw conclusions in Section 6.

## 2. Preliminaries

Let $C$ be the field of complex numbers. Denote $C [ x ] = C [ x 1 , … , x n ]$ as the polynomial ring. Let $F = { f 1 , … , f m } ⊂ C [ x ]$ be a polynomial system. Let $p = ( p 1 , … , p n ) ∈ C n$. $F ( p ) = 0$ denote that $p$ is a zero of $F ( x ) = 0$.
Let A be a matrix. Denote $A T$ as the transpose of A and $rank ( A )$ as the rank of A. Let $M a t ( a i , j )$ denote the matrix whose i-th row j-th column element is $a i , j$.
Let $Σ = { f 1 , … , f m } ⊂ C [ x ]$ be a polynomial system. Denote $J ( Σ )$ as the Jacobian matrix of $Σ$. That is,
$J(Σ) = \begin{pmatrix} \frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial f_m}{\partial x_1} & \cdots & \frac{\partial f_m}{\partial x_n} \end{pmatrix}.$
For a polynomial $f ∈ C [ x ]$, let $J ( f )$ denote $( ∂ f ∂ x 1 , ∂ f ∂ x 2 , … , ∂ f ∂ x n )$, $J i ( f ) = ∂ f ∂ x i$ and $J i , j ( f ) = J j ( J i ( f ) ) = ∂ 2 f ∂ x j ∂ x i$. Denote $Σ r = { J 1 ( f ) , …$, $J n ( f ) , f − r }$ with $f = ∑ j = 1 m f j 2$.
We denote the value of a function matrix $A ∈ C [ x ] n × n$ at a point $p ∈ C n$ as $A ( p )$. Let $J ( F ) ( p )$ denote the value of a function matrix $J ( F )$ at a point $p$, similarly for $J ( f ) ( p )$.
Definition 1.
An isolated solution of $F ( x ) = 0$ is a point $p ∈ C n$ which satisfies:
$\exists\, \varepsilon > 0 : \{ y \in \mathbb{C}^n : \| y - p \| < \varepsilon \} \cap F^{-1}(0) = \{ p \}.$
Definition 2.
We call an isolated solution $p ∈ C n$ of $F ( x ) = 0$ a singular solution if and only if
$rank ( J ( F ) ( p ) ) < n .$
Otherwise, we call $p$ a simple solution.
Definition 3.
A stationary point of a polynomial function $f ( x ) ∈ C [ x ]$ is a point $p ∈ C n$ which satisfies:
$∂ f ∂ x i ( p ) = 0 , ∀ i = 1 , … , n .$
We can find the following lemma in many undergraduate textbooks about linear algebra (see Example 7 on page 224 in [33] for example).
Lemma 1.
Let $A ∈ R m × n$ be a real matrix with $m ≥ n$ and $B = A T A$. Then, the ranks of A and B are the same, especially for the case that A is of full rank.
In the following, we consider the real zeros of the systems with real coefficients. It is reasonable since, for a system (m equations and n unknowns) with complex coefficients, we can rewrite the system into a new one with $2 m$ equations and $2 n$ unknowns by splitting the unknowns $x i = x i , 1 + i x i , 2$ and equations $f j ( x 1 , … , x n ) = g j , 1 ( x 1 , 1 , x 1 , 2 , … , x n , 1 , x n , 2 ) + i g j , 2 ( x 1 , 1 , x 1 , 2 , … , x n , 1 , x n , 2 )$, where $i 2 = − 1$, $f j ∈ C [ x ] , g j , 1 , g j , 2 ∈ R [ x ]$, $j = 1 , … , m$, and find the complex zeros of the original system by finding out the real zeros of the new system.
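The real-splitting trick above can be sketched in a few lines. The equation used below, $x^2 + i = 0$, is our own illustration (not from the paper); its complex zero $(1 - i)/\sqrt{2}$ becomes a real zero of the split system.

```python
# Sketch of the real-splitting trick: a system with complex coefficients in
# unknowns x_k is rewritten over the reals via x_k = x_{k,1} + i*x_{k,2} and
# f_j = g_{j,1} + i*g_{j,2}.  A single illustrative equation f(x) = x^2 + i
# (an assumption, not taken from the paper) becomes two real equations.
from math import sqrt

def split_real(f, n):
    """Turn f: C^n -> C into the pair (Re f, Im f): R^{2n} -> R^2."""
    def g(real_vars):                 # real_vars = (x11, x12, ..., xn1, xn2)
        z = [complex(real_vars[2*k], real_vars[2*k+1]) for k in range(n)]
        w = f(z)
        return (w.real, w.imag)
    return g

f = lambda z: z[0]**2 + 1j            # f(x) = x^2 + i
g = split_real(f, 1)

# x = (1 - i)/sqrt(2) is a complex zero of x^2 + i; its split is a real zero of g.
x11, x12 = 1 / sqrt(2), -1 / sqrt(2)
print(g((x11, x12)))                  # ~ (0.0, 0.0)
```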

## 3. Transforming Over-determined Polynomial Systems into Square Ones

In this section, we show how to transform an over-determined polynomial system into a square one with their zeros having a one-to-one correspondence, especially for the simple zeros.
By Definition 3, we have the following lemma:
Lemma 2.
Given a polynomial system $Σ = { f 1 , … , f m } ⊂ R [ x ]$ ($m ≥ n$). Let $f = ∑ i = 1 m f i 2$ and $Σ ′ = { J 1 ( f ) , J 2 ( f ) , … , J n ( f ) }$. If $p ∈ R n$ is an isolated real zero of $Σ ′$, then $p$ is a stationary point of f.
Lemma 3.
Let $Σ = { f 1 , … , f m } ⊂ R [ x ]$ ($m ≥ n$), $Σ ′ = { J 1 ( f ) , J 2 ( f ) , … , J n ( f ) }$ with $f = ∑ i = 1 m f i 2$. If $p ∈ R n$ is an isolated real zero of Σ, then we have:
1. $p$ is an isolated real zero of $Σ ′$.
2. $rank ( J ( Σ ) ( p ) ) = rank ( J ( Σ ′ ) ( p ) )$.
Proof.
It is clear that $p$ is an isolated real zero of $Σ ′$ provided that $p$ is an isolated real zero of $Σ$, since $J i ( f ) = 2 ∑ k = 1 m f k J i ( f k )$.
To prove the second part of this lemma, we rewrite $J i ( f )$ as follows.
$J i ( f ) = 2 〈 f 1 , … , f m 〉 〈 J i ( f 1 ) , … , J i ( f m ) 〉 T ,$
where $〈 · 〉 T$ is the transpose of a vector or a matrix $〈 · 〉$. Then,
$J_{i,j}(f) = J_j(J_i(f)) = J_j\Big( 2 \sum_{k=1}^m f_k J_i(f_k) \Big) = 2 \sum_{k=1}^m \big( J_j(f_k) J_i(f_k) + f_k J_{i,j}(f_k) \big) = 2 \langle J_j(f_1), \ldots, J_j(f_m) \rangle \langle J_i(f_1), \ldots, J_i(f_m) \rangle^T + 2 \sum_{k=1}^m f_k J_{i,j}(f_k).$
Then, the Jacobian matrix of $Σ ′$ is
$J(Σ') = \begin{pmatrix} J_{1,1}(f) & \cdots & J_{1,n}(f) \\ \vdots & \ddots & \vdots \\ J_{n,1}(f) & \cdots & J_{n,n}(f) \end{pmatrix} = \mathrm{Mat}(J_{i,j}(f)).$
We rewrite
$\mathrm{Mat}(J_{i,j}(f)) = 2 A^T A + 2\, \mathrm{Mat}\Big( \sum_{k=1}^m f_k J_{i,j}(f_k) \Big),$
where
$A = \begin{pmatrix} J_1(f_1) & \cdots & J_n(f_1) \\ \vdots & \ddots & \vdots \\ J_1(f_m) & \cdots & J_n(f_m) \end{pmatrix}$
is an $m × n$ matrix which is exactly the Jacobian matrix of $Σ$, that is, $J ( Σ ) = A$. Then, we have
$J ( Σ ′ ) ( p ) = 2 A ( p ) T A ( p ) .$
By Lemma 1, the second part of the lemma is true. This ends the proof. □
Remark 1.
In our construction of f and $Σ ′$, the degrees of the polynomials are almost doubled compared to the original ones. However, to evaluate the Jacobian matrix of $Σ ′$, we only evaluate the Jacobian matrix of the original system, with $m 2 n$ numerical products. One can see this from Equation (4) in the above proof. In fact, to get $J ( Σ ′ ) ( p )$, we only need to compute $A ( p )$, which does not increase our actual computing degree.
As a byproduct, thanks to the doubled degree of the polynomials, our final certified accuracy is also improved in Lemma 4.
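The identity $J(Σ')(p) = 2 A(p)^T A(p)$ at a zero $p$ is easy to check numerically. The sketch below uses the polynomials of Example 1 later in the paper; the matrix $A = J(Σ)(p)$ is entered by hand, and the Hessian of $f$ is approximated by second-order central differences.

```python
# Numeric check of J(Sigma')(p) = 2 A(p)^T A(p) from the proof of Lemma 3,
# on the system of Example 1.  The Hessian of f is approximated by central
# differences and compared with 2 A^T A, where A = J(Sigma)(p).

def f(v):
    x, y = v
    f1, f2, f3 = x*x - 2*y, y*y - x, x*x - 2*x + y*y - 2*y
    return f1*f1 + f2*f2 + f3*f3

def hessian(f, x, h=1e-4):
    """Second-order central-difference Hessian of f at x."""
    n = len(x)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            def shift(si, sj):
                v = list(x)
                v[i] += si * h
                v[j] += sj * h
                return f(v)
            H[i][j] = (shift(1, 1) - shift(1, -1) - shift(-1, 1) + shift(-1, -1)) / (4 * h * h)
    return H

p = (0.0, 0.0)
A = [[0.0, -2.0], [-1.0, 0.0], [-2.0, -2.0]]    # J(Sigma)(p), computed by hand
twoAtA = [[2 * sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)]
          for i in range(2)]
print(twoAtA)            # [[10.0, 8.0], [8.0, 16.0]]
print(hessian(f, p))     # matches 2 A^T A up to O(h^2)
```

The second term $2\,\mathrm{Mat}(\sum_k f_k J_{i,j}(f_k))$ vanishes here because every $f_k(p) = 0$, which is exactly why only $A(p)$ is needed at a zero.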
The following is the proof of Theorem 1:
Proof.
In fact, by fixing the real zero $p$ as an isolated simple zero in Lemma 3, we have $p$ is an isolated simple real zero of $Σ ′ = { J 1 ( f ) , … , J n ( f ) }$. Since $p$ is an isolated simple zero of $Σ$, $A ( p )$ is a column full rank matrix. Therefore, it is easy to verify that $J ( Σ ′ ) ( p ) = 2 A ( p ) T A ( p )$ is a positive definite matrix. Thus, $p$ is a local minimum of f and the first part of the theorem is true. Now, we consider the second part.
First, it is easy to verify that $p$ is the real zero of $Σ$ if and only if $( p , 0 )$ is the real zero of $Σ r$. Notice that $Σ r = { Σ ′ , f − r }$. Thus, with the same method as proving Lemma 3, we can compute easily that
$J(Σ_r)(p, 0) = \begin{pmatrix} J(Σ')(p) & 0 \\ 0 & -1 \end{pmatrix} = \begin{pmatrix} 2 J(Σ)(p)^T J(Σ)(p) & 0 \\ 0 & -1 \end{pmatrix},$
which implies that
$rank ( J ( Σ ) ( p ) ) = rank ( J ( Σ ′ ) ( p ) ) = rank ( J ( Σ r ) ( p , 0 ) ) − 1 ,$
which means that $J ( Σ r ) ( p , 0 )$ is of full rank if and only if $J ( Σ ) ( p )$ is of full rank. Thus, $p$ is an isolated simple zero of $Σ$ if and only if $( p , 0 )$ is an isolated simple zero of $Σ r$. The second part is true. We have finished the proof. □
From Theorem 1, we know that the simple real zeros of $Σ$ and $Σ r$ are in one-to-one correspondence with the constraint that the value of the sum of squares of the polynomials in $Σ$ at the simple real zeros is identically zero. Thus, we can transform an over-determined polynomial system into a square system $Σ r$.
We show a simple example to illustrate the theorem below.
Example 1.
The simple zero $p = ( 0 , 0 )$ of the over-determined system $Σ = { f 1 , f 2 , f 3 }$ corresponds to a simple zero of a square system $Σ r = { J 1 ( f ) , J 2 ( f ) , f − r }$, where $f = f 1 2 + f 2 2 + f 3 2$ with
$f_1 = x^2 - 2y, \quad f_2 = y^2 - x, \quad f_3 = x^2 - 2x + y^2 - 2y.$
We can verify simply that $( p , 0 )$ is a simple zero of $Σ r$.
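That verification can be mirrored numerically. In the sketch below the gradients $J_1(f), J_2(f)$ are hand-derived from the three polynomials above, and $J(Σ_r)(p, 0)$ is the block matrix $\mathrm{diag}(2A^TA, -1)$ with entries computed by hand; a nonzero determinant confirms the zero is simple.

```python
# Example 1, checked numerically: (p, 0) = (0, 0, 0) is a simple zero of
# Sigma_r = {J1(f), J2(f), f - r}.

def sigma_r(x, y, r):
    f1, f2, f3 = x*x - 2*y, y*y - x, x*x - 2*x + y*y - 2*y
    f = f1*f1 + f2*f2 + f3*f3
    J1 = 2 * (f1 * 2*x + f2 * (-1) + f3 * (2*x - 2))   # df/dx
    J2 = 2 * (f1 * (-2) + f2 * 2*y + f3 * (2*y - 2))   # df/dy
    return (J1, J2, f - r)

print(sigma_r(0.0, 0.0, 0.0))        # (0.0, 0.0, 0.0): a zero of Sigma_r

# J(Sigma_r)(0, 0, 0), assembled from 2*A^T*A (by hand) and the -1 block:
J = [[10.0, 8.0, 0.0], [8.0, 16.0, 0.0], [0.0, 0.0, -1.0]]
det = (J[0][0] * (J[1][1]*J[2][2] - J[1][2]*J[2][1])
       - J[0][1] * (J[1][0]*J[2][2] - J[1][2]*J[2][0])
       + J[0][2] * (J[1][0]*J[2][1] - J[1][1]*J[2][0]))
print(det)                           # -96.0: nonsingular, so the zero is simple
```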
Although the simple real zeros of $Σ$ and $Σ r$ have a one-to-one correspondence, this cannot be used directly to certify the simple zeros of $Σ$, since we cannot certify $r = 0$ numerically. However, we can certify the zeros of $Σ ′ = { J 1 ( f ) , J 2 ( f ) , … , J n ( f ) }$ as an alternative, which is a necessary condition for the certification.
We discuss this in the next section.

## 4. Certifying Simple Zeros of Over-determined Systems

In this section, we consider certifying the over-determined system with the interval methods. We prove the same local minimum result as [31].
The classical interval verification methods are based on the following theorem:
Theorem 2
([5,6,8,30]). Let $f = ( f 1 , … , f n ) ∈ ( R [ x ] ) n$ be a polynomial system, $x ˜ ∈ R n$, real interval vector $X ∈ IR n$ with $0 ∈ X$ and real matrix $R ∈ R n × n$ be given. Let an interval matrix $M ∈ IR n × n$ be given whose i-th row $M i$ satisfies
${ ∇ f i ( ζ ) : ζ ∈ x ˜ + X } ⊆ M i .$
Denote by I the $n × n$ identity matrix and assume
$− R f ( x ˜ ) + ( I − R M ) X ⊆ i n t ( X ) ,$
where $i n t ( X )$ denotes the interior of X. Then, there is a unique $x ^ ∈ x ˜ + X$ with $f ( x ^ ) = 0$. Moreover, every matrix $M ˜ ∈ M$ is nonsingular. In particular, the Jacobian $J ( f ) ( x ^ )$ is nonsingular.
About interval matrices, there is an important property in the following theorem.
Theorem 3
([34]). A symmetric interval matrix $A I$ is positive definite if and only if it is regular and contains at least one positive definite matrix.
Given an over-determined polynomial system $Σ = { f 1 , … , f m } ⊂ R [ x ]$ with an isolated simple real zero, we can compute a related square system
$Σ' = \Big\{ \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, \ldots, \frac{\partial f}{\partial x_n} \Big\} \quad \text{with} \quad f = \sum_{j=1}^m f_j^2.$
Based on Lemma 3, a simple zero of $Σ$ is a simple zero of $Σ ′$. Thus, we can compute the approximate simple zero of $Σ$ by computing the approximate simple zero of $Σ ′$. Using Newton’s method, we can refine these approximate simple zeros with quadratic convergence to a relative higher accuracy. Then, we can certify them with the interval method mentioned before and get a verified inclusion $X$, which possesses a unique certified simple zero of the system $Σ ′$ by Theorem 2, denoting as $x ^ ∈ X$.
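The refinement step can be sketched as follows. This is plain Newton iteration on $Σ'$ for Example 1 with a finite-difference Jacobian and a hand-derived gradient; the interval verification itself (the role played by verifynlss in INTLAB in the paper) is not attempted here.

```python
# Sketch of the refinement step: Newton's method on the square system
# Sigma' = grad f, started from a rough approximation, before handing the
# refined point to an interval verifier.

def grad_f(v):
    x, y = v
    f1, f2, f3 = x*x - 2*y, y*y - x, x*x - 2*x + y*y - 2*y
    return [2 * (f1 * 2*x - f2 + f3 * (2*x - 2)),
            2 * (-2*f1 + f2 * 2*y + f3 * (2*y - 2))]

def jac(F, v, h=1e-7):
    """Forward-difference Jacobian of F at v."""
    n, F0 = len(v), F(v)
    J = [[0.0] * n for _ in range(len(F0))]
    for j in range(n):
        vp = list(v)
        vp[j] += h
        Fp = F(vp)
        for i in range(len(F0)):
            J[i][j] = (Fp[i] - F0[i]) / h
    return J

def newton(F, v, steps=20):
    for _ in range(steps):
        J, Fv = jac(F, v), F(v)
        # Solve the 2x2 system J d = -F(v) by Cramer's rule.
        det = J[0][0]*J[1][1] - J[0][1]*J[1][0]
        d0 = (-Fv[0]*J[1][1] + Fv[1]*J[0][1]) / det
        d1 = (-Fv[1]*J[0][0] + Fv[0]*J[1][0]) / det
        v = [v[0] + d0, v[1] + d1]
    return v

print(newton(grad_f, [0.0003528, 0.0008131]))   # converges toward (0, 0)
```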
However, even though we get a certified zero $x ^$ of the system $Σ ′$, considering Lemma 2, we cannot say that $x ^$ is a zero of the input system $Σ$, because the certified zero $x ^$ is only a stationary point of f. Considering Theorem 1 and the difference between $Σ ′$ and $Σ r$, we have the following theorem.
Theorem 4.
Let Σ, $Σ ′$, $Σ r$, f, $x ^$ and the interval $X$ be given as above. Then, we have:
1. $x ^$ is a local minimum of f.
2. There exists a verified inclusion $X × [ 0 , f ( x ^ ) ]$ that possesses a unique simple zero of the system $Σ r$. In particular, if $f ( x ^ ) = 0$, the verified inclusion $X$ possesses a unique simple zero of the input system Σ.
Proof.
First, it is easy to see that computing the value of the matrix $J ( Σ ′ )$ at the interval $X$ will give a symmetric interval matrix, denoting as $J ( Σ ′ ) ( X )$. By Theorem 2, we know that, for every matrix $M ∈ J ( Σ ′ ) ( X )$, M is nonsingular. Therefore, the interval matrix $J ( Σ ′ ) ( X )$ is regular. Especially, the matrix $J ( Σ ′ ) ( x ^ )$, which is the Hessian matrix of f, is full rank and, therefore, is positive definite. Thus, $x ^$ is a local minimum of f. By Theorem 1, we know that $J ( Σ ′ ) ( X )$ is positive definite. Thus, for every point $q ∈ X$, $J ( Σ ′ ) ( q )$ is a positive definite matrix. Considering Theorem 2, it is trivial that, for the verified inclusion $X × [ 0 , f ( x ^ ) ]$, there exists a unique simple zero of the system $Σ r$. If $f ( x ^ ) = 0$, by Theorem 1, the verified inclusion $X$ of the system $Σ ′$ is a verified inclusion of the original system $Σ$. □
Remark 2.
1. In the above proof, we know that, for every point $q ∈ X$, $J ( Σ ′ ) ( q )$ is a positive definite matrix.
2. By Theorem 2, we know that there is a unique $x ^ ∈ X$ with $Σ ′ ( x ^ ) = 0$. However, we do not know the exact value of $x ^$. Following the usual practice, in actual computation, we take the midpoint $p ^$ of the inclusion $X$ as $x ^$ and verify whether $f ( p ^ ) = 0$. Considering the uniqueness of $x ^$ in $X$, if $f ( p ^ ) = 0$, we are sure that the verified inclusion $X$ possesses a unique simple zero of the input system Σ. If $f ( p ^ ) ≠ 0$, we can only claim that there is a local minimum of f in the inclusion $X$ and that $X × [ 0 , f ( p ^ ) ]$ is a verified inclusion for the system $Σ r$.
Considering the expression of $Σ$ and f and for the midpoint $p ^$ of $X$, we have a trivial result below.
Lemma 4.
Denote $ϵ = max j = 1 m | f j ( p ^ ) |$. Under the conditions of Theorem 4, we have $| f ( p ^ ) | ≤ m ϵ 2$.
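The bound of Lemma 4 is immediate to check numerically. A small sketch on the system of Example 1 ($m = 3$), at a test point of our own choosing:

```python
# Lemma 4 in one line: with eps = max_j |f_j(p_hat)|, a sum of m squares
# obeys |f(p_hat)| <= m * eps^2.  Checked on Example 1 (m = 3) at a point
# slightly off the zero.

def bound_check(polys, p):
    vals = [g(p) for g in polys]
    f = sum(v * v for v in vals)          # f(p_hat)
    eps = max(abs(v) for v in vals)       # eps = max_j |f_j(p_hat)|
    return f, len(polys) * eps ** 2

sigma = [lambda v: v[0]**2 - 2*v[1],
         lambda v: v[1]**2 - v[0],
         lambda v: v[0]**2 - 2*v[0] + v[1]**2 - 2*v[1]]
f_val, bnd = bound_check(sigma, (1e-3, 1e-3))
print(f_val <= bnd)                       # True
```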
Based on the above idea, we give an algorithm below. In the verification steps, we apply the algorithm $verifynlss$ in INTLAB [30], which is based on Theorem 2, to compute a verified inclusion $X$ for the related square system $Σ ′$. For simplicity, denote the interval as $X = ([\underline{x}_1, \overline{x}_1], \ldots, [\underline{x}_n, \overline{x}_n])$ and the midpoint of $X$ as $\hat{p} = ((\underline{x}_1 + \overline{x}_1)/2, \ldots, (\underline{x}_n + \overline{x}_n)/2)$.
The correctness and the termination of the algorithm is obvious by the above analysis.
We give two examples to illustrate our algorithm.
Example 2.
Continuing Example 1, given an approximate zero $p ˜ = ( 0.0003528 , 0.0008131 )$ and using Newton’s method, we get a higher accuracy approximate zero
$p ˜ ′ = 10 − 11 · ( − 0.104224090958505 , − 0.005858368844383 ) .$
Compute $f = f 1 2 + f 2 2 + f 3 2$ and $Σ ′ = { J 1 ( f ) , J 2 ( f ) }$. After applying algorithm $verifynlss$ on $Σ ′$, we have a verified inclusion:
$X = \begin{pmatrix} [-1.29668721603974, 0.11330049261083] \\ [-0.08866995073891, 0.08866995073891] \end{pmatrix} \cdot 10^{-321}.$
Based on Theorem 2, we know that there exists a unique $x ^ ∈ X$, such that $Σ ′ ( x ^ ) = 0$.
Let $Σ r = { J 1 ( f ) , J 2 ( f ) , f − r }$. By Theorem 1, we can certify the simple zero of Σ by certifying the simple zero of $Σ r$ theoretically. Considering the difference between $Σ ′$ and $Σ r$, we check first whether the value of f at some point in the interval $X$ is zero. According to the usual practice, we consider the midpoint $p ^$ of $X$, which equals $( 0 , 0 )$ and, further, $f ( p ^ )$ is zero. Therefore, we are sure that there exists a unique $x ^ = ( x ^ , y ^ ) ∈ X$, s.t. $Σ r ( ( x ^ , 0 ) ) = 0$ and, then, there exists a unique simple zero $( x ^ , y ^ ) ∈ X$ of the input system Σ, which means we certified the input system Σ.
Example 3.
Let $Σ = { f 1 = x 1 2 + 3 x 1 x 2 + 3 x 1 x 3 − 3 x 3 2 + 2 x 2 + 2 x 3 , f 2 = − 3 x 1 x 2 + x 1 x 3 − 2 x 2 2 + x 3 2 + 3 x 1 + x 2 , f 3 = 2 x 2 x 3 + 3 x 1 − 3 x 3 + 2 , f 4 = − 6 x 2 2 x 3 + 2 x 2 x 3 2 + 6 x 2 2 + 15 x 2 x 3 − 6 x 3 2 − 9 x 2 − 7 x 3 + 6 }$ be an over-determined system. Consider an approximate zero
$p ˜ = ( − 1.29655 , 0.47055 , − 0.91761 ) .$
Using Newton’s method, we get a higher accuracy zero
$p ˜ ′ = ( − 1.296687216045438 , 0.470344502045004 , − 0.917812633399457 ) .$
Compute
$f = f 1 2 + f 2 2 + f 3 2 + f 4 2 and Σ ′ = { J 1 ( f ) , J 2 ( f ) , J 3 ( f ) } .$
After applying algorithm $verifynlss$ on $Σ ′$, we have a verified inclusion:
$X = \begin{pmatrix} [-1.29668721603974, -1.29668721603967] \\ [0.47034450205107, 0.47034450205114] \\ [-0.91781263339256, -0.91781263339247] \end{pmatrix}.$
Similarly, based on Theorem 2, we know that there exists a unique $x ^ ∈ X$, such that $Σ ′ ( x ^ ) = 0$.
Proceeding as in the above example, we consider the midpoint $p ^$ of $X$ and compute $f ( p ^ ) = 3.94 · 10 − 31 ≠ 0$. Thus, by Theorem 4, we get a verified inclusion $X × [ 0 , f ( p ^ ) ]$, which contains a unique simple zero of the system $Σ r$. It means that $X$ may contain a zero of Σ. Even if $X$ does not contain a zero of Σ, it contains a local minimum of f, which has a minimum value no larger than $f ( p ^ )$.

## 5. Two Applications

As an application, we consider certifying isolated singular zeros of over-determined systems heuristically. Generally, dealing with the multiple zeros of polynomial systems directly is difficult. The classical way to deal with the isolated singular zeros of polynomial systems is the deflation technique, which constructs a new system owning the same singular zero as an isolated simple one. Although the deflation method can be used to refine or verify the isolated zero of the original system, the multiplicity information of the isolated zero is lost. In this section, as an application of the method of converting an over-determined system into a square system in the previous section, we give a heuristic method for certifying isolated singular zeros of polynomial systems and their multiplicity structures.

#### 5.1. Certifying Isolated Singular Zeros of Polynomial Systems

Recently, Cheng et al. [35] proposed a new deflation method to reduce the multiplicity of an isolated singular zero of a polynomial system to get a final system, which owns the isolated singular zero of the input system as a simple one. Different from the previous deflation methods, they considered the deflation of isolated singular zeros of polynomial systems from the perspective of linear combination.
In this section, we first give a brief introduction of their deflation method and, then, show how our method is applied to certify the isolated singular zeros of the input system in a heuristic way.
Definition 4.
Let $f ∈ C [ x ]$, $p ˜ ∈ C n$ and a tolerance $θ > 0$ be given, s.t. $| f ( p ˜ ) | < θ$. We say f is $θ$-singular at $p ˜$ if
$∂ f ( p ˜ ) ∂ x j < θ , ∀ 1 ≤ j ≤ n .$
Otherwise, we say f is $θ$-regular at $p ˜$.
Let $F = { f 1 , … , f n } ⊂ C [ x ]$ be a polynomial system and $p ˜ ∈ C n$ an approximate isolated zero of $F = 0$. Consider a tolerance $θ$. First, for each $f i ( i = 1 , … , n )$, we compute the derivatives of $f i$ that are $θ$-regular at the approximate zero $p ˜$. That is, we compute a polynomial set
$G = \{ d_x^{\gamma}(f) \mid d_x^{\gamma}(f) \text{ is } \theta\text{-regular at } \tilde{p},\ f \in F \}.$
Then, put $G$ and $F$ together and compute a subsystem $H = { h 1 , … , h s } ⊂ G ∪ F$, whose Jacobian matrix at $p ˜$ has a maximal rank s. If $s = n$, we get the final system $F ˜ ′ = H$. Otherwise, we choose a new polynomial $h ∈ G ∪ F \ H$ and compute
$g = h + \sum_{i=1}^s \alpha_i h_i, \qquad g_j = \frac{\partial g}{\partial x_j}, \quad j = 1, \ldots, n,$
where $\alpha_i$, $i = 1, \ldots, s$, are newly introduced variables. Next, we check whether
$rank ( J ( H , g 1 , … , g n ) ( p ˜ ) ) = n + s .$
If Equation (6) holds, we get the final system $F ˜ ′ = H ∪ { g 1 , … , g n }$. Otherwise, let $H : = H ∪ { g 1 , … , g n } ⊂ C [ x , α ]$ and repeat again until Equation (6) holds.
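The rank-growing selection of the subsystem H can be sketched generically. In the snippet below, Jacobian rows at $p ˜$ are represented as plain lists; the tolerance-based rank routine and the toy rows are our own illustration, not the implementation of [35].

```python
# Sketch of the subsystem-selection step in the deflation of Section 5.1:
# greedily keep polynomials from G ∪ F whose Jacobian rows at p~ make the
# rank grow (rank computed by Gaussian elimination with a tolerance).

def num_rank(rows, tol=1e-8):
    """Numerical rank of a small matrix, via elimination with partial pivoting."""
    M = [list(r) for r in rows]
    rank, col, nrows = 0, 0, len(M)
    ncols = len(M[0]) if M else 0
    while rank < nrows and col < ncols:
        piv = max(range(rank, nrows), key=lambda i: abs(M[i][col]))
        if abs(M[piv][col]) < tol:
            col += 1
            continue
        M[rank], M[piv] = M[piv], M[rank]
        for i in range(rank + 1, nrows):
            c = M[i][col] / M[rank][col]
            M[i] = [a - c * b for a, b in zip(M[i], M[rank])]
        rank += 1
        col += 1
    return rank

def select_subsystem(candidate_rows):
    """Keep the rows that increase the Jacobian rank at p~."""
    chosen = []
    for row in candidate_rows:
        if num_rank(chosen + [row]) > num_rank(chosen):
            chosen.append(row)
    return chosen

# Toy data (assumed, not from the paper): three gradient rows, one dependent.
rows = [[1.0, 0.0, 0.0], [2.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
picked = select_subsystem(rows)
print(len(picked), num_rank(picked))      # 2 2
```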
Now, we give an example to illustrate the above idea.
Example 4.
Consider a polynomial system $F = { f 1 = − 9 4 + 3 2 x 1 + 2 x 2 + 3 x 3 + 4 x 4 − 1 4 x 1 2 , f 2 = x 1 − 2 x 2 − 2 x 3 − 4 x 4 + 2 x 1 x 2 + 3 x 1 x 3 + 4 x 1 x 4 , f 3 = 8 − 4 x 1 − 8 x 4 + 2 x 4 2 + 4 x 1 x 4 − x 1 x 4 2 , f 4 = − 3 + 3 x 1 + 2 x 2 + 4 x 3 + 4 x 4 }$. Consider an approximate singular zero
$p ˜ = ( p ˜ 1 , p ˜ 2 , p ˜ 3 , p ˜ 4 ) = ( 1.00004659 , − 1.99995813 , − 0.99991547 , 2.00005261 )$
of $F = 0$ and the tolerance $ε = 0.005$.
First, we have the Taylor expansion of $f 3$ at $p ˜$:
$f_3 = 3 \times 10^{-9} - 3 \times 10^{-9} (x_1 - \tilde{p}_1) + 0.00010522 (x_4 - \tilde{p}_4) + 0.99995341 (x_4 - \tilde{p}_4)^2 - 0.00010522 (x_1 - \tilde{p}_1)(x_4 - \tilde{p}_4) - (x_1 - \tilde{p}_1)(x_4 - \tilde{p}_4)^2.$
Consider the tolerance $θ = 0.05$. Since
$| f_3(\tilde{p}) | < \theta, \quad \Big| \frac{\partial f_3}{\partial x_i}(\tilde{p}) \Big| < \theta \ (i = 1, 2, 3, 4), \quad \Big| \frac{\partial^2 f_3}{\partial x_4^2}(\tilde{p}) \Big| > \theta,$
we get a polynomial
$∂ f 3 ∂ x 4 = − 8 + 4 x 1 + 4 x 4 − 2 x 1 x 4 ,$
which is θ-regular at $p ˜$. Similarly, by the Taylor expansion of $f 1 , f 2 , f 4$ at $p ˜$, we have that $f 1 , f 2 , f 4$ are all θ-regular at $p ˜$.
Thus, we have
$G = { f 1 , f 2 , − 8 + 4 x 1 + 4 x 4 − 2 x 1 x 4 , f 4 } .$
Compute
$r = rank ( J ( G ) ( p ˜ ) , ε ) = 3 .$
We can choose
$H = { h 1 = f 1 , h 2 = f 2 , h 3 = − 8 + 4 x 1 + 4 x 4 − 2 x 1 x 4 }$
from $G ∪ F$. To $h = f 4 ∈ G ∪ F \ H$, let
$g = h + α 1 h 1 + α 2 h 2 + α 3 h 3 .$
By solving a Least Square problem:
$L e a s t S q u a r e s ( ( J ( H , h ) ( p ˜ ) ) T [ α 1 , α 2 , α 3 , − 1 ] T = 0 ) ,$
we get an approximate value:
$( α ˜ 1 , α ˜ 2 , α ˜ 3 ) = ( − 1.000006509 , − 0.9997557989 , 0.000106178711 ) .$
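The least-squares step can be sketched with the normal equations: the coefficients $\alpha$ make $\sum_i \alpha_i \nabla h_i$ best approximate $\nabla h$ at $p ˜$. The gradient data below is a made-up toy ($\nabla h = 2\nabla h_1 + 3\nabla h_2$ exactly), not the data of Example 4.

```python
# Sketch of the least-squares step: solve (M^T M) alpha = M^T b, where the
# columns of M are the gradients of the h_i at p~ and b is the gradient of h.

def solve(A, b):
    """Solve A x = b (small dense system) by Gaussian elimination."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        piv = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[piv] = M[piv], M[k]
        for i in range(k + 1, n):
            c = M[i][k] / M[k][k]
            M[i] = [a - c * t for a, t in zip(M[i], M[k])]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def least_squares(cols, b):
    """alpha minimizing ||M alpha - b||, M built from the given columns."""
    AtA = [[sum(ci[k] * cj[k] for k in range(len(b))) for cj in cols] for ci in cols]
    Atb = [sum(ci[k] * b[k] for k in range(len(b))) for ci in cols]
    return solve(AtA, Atb)

grads = [[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]    # grad h1, grad h2 (toy data)
target = [2.0, 3.0, 5.0]                      # grad h = 2*h1 + 3*h2 exactly
print(least_squares(grads, target))           # [2.0, 3.0]
```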
Then, compute
$g_1 = \frac{\partial g}{\partial x_1} = 3 + \frac{3}{2}\alpha_1 + \alpha_2 + 4\alpha_3 - \frac{1}{2}\alpha_1 x_1 + 2\alpha_2 x_2 + 3\alpha_2 x_3 + 4\alpha_2 x_4 - 2\alpha_3 x_4,$
$g_2 = \frac{\partial g}{\partial x_2} = 2 + 2\alpha_1 - 2\alpha_2 + 2\alpha_2 x_1,$
$g_3 = \frac{\partial g}{\partial x_3} = 4 + 3\alpha_1 - 2\alpha_2 + 3\alpha_2 x_1,$
$g_4 = \frac{\partial g}{\partial x_4} = 4 + 4\alpha_1 - 4\alpha_2 + 4\alpha_3 + 4\alpha_2 x_1 - 2\alpha_3 x_1,$
and we get a polynomial set
$H ′ = { h 1 , h 2 , h 3 , g 1 , g 2 , g 3 , g 4 } ,$
which satisfies
$rank ( J ( H ′ ) ( p ˜ , α ˜ 1 , α ˜ 2 , α ˜ 3 ) , ε ) = 7 .$
Thus, we get the final system $F ˜ ′ ( x , α ) = H ′$.
In the above example, given a polynomial system $F$ with an isolated singular zero $p$, by computing the derivatives of the input polynomials directly or the linear combinations of the related polynomials, we compute a new system $F ˜ ′$, which has a simple zero. However, generally, the final system $F ˜ ′$ does not contain all $f i ( i = 1 , … , n )$. Thus, to ensure that the simple zero or parts of the simple zero of the square system $F ˜ ′$ really correspond to the isolated singular zero of the original system, we put $F$ and $F ˜ ′$ together and consider certifying the over-determined system $F ∪ F ˜ ′$ in the following.
Example 5.
Continuing with Example 4, we put $F$ and $F ˜ ′$ together and get the over-determined system $Σ = F ∪ F ˜ ′$. According to our method in Section 4, let
$f = ∑ j = 1 4 f j 2 + h 3 2 + ∑ j = 1 4 g j 2 .$
Then, we compute
$Σ ′ = { ∂ f ∂ x 1 , … , ∂ f ∂ x 4 , ∂ f ∂ α 1 , … , ∂ f ∂ α 3 } and Σ r = { Σ ′ , f − r } .$
After applying algorithm $verifynlss$ on $Σ ′$ at $( p ˜ , α ˜ 1 , α ˜ 2 , α ˜ 3 )$, we have a verified inclusion:
$X = \begin{pmatrix} [0.99999999999979, 1.00000000000019] \\ [-2.00000000000060, -1.99999999999945] \\ [-1.00000000000040, -0.99999999999956] \\ [1.99999999999998, 2.00000000000002] \\ [-1.00000000000026, -0.99999999999976] \\ [-1.00000000000022, -0.99999999999975] \\ [-0.00000000000012, -0.00000000000010] \end{pmatrix}.$
By Theorem 2, we affirm that there is a unique isolated simple zero $x ^ ∈ X$, s.t. $Σ ′ ( x ^ ) = 0$.
Next, as in Examples 2 and 3, we consider the midpoint $( p ^ , α ^ )$ of $X$ and compute $f ( p ^ , α ^ ) = 4.0133 × 10 − 28$. Thus, by Theorem 4, we get a verified inclusion $X × [ 0 , f ( p ^ , α ^ ) ]$, which contains a unique simple zero of the system $Σ r$. It means that $X$ may contain a zero of Σ. Even if $X$ does not contain a zero of Σ, it contains a local minimum of f, which has a minimum value no larger than $f ( p ^ , α ^ )$.
In the above example, we get the verified inclusion $X × [ 0 , f ( p ^ , α ^ ) ]$ of the system $Σ r$. Noticing that $f ( p ^ , α ^ ) ≠ 0$, according to Theorem 4, we are not sure whether the verified inclusion $X$ contains a unique simple zero of the system $Σ$. However, since the value of $f ( p ^ , α ^ )$ is very small, under a certain numerical tolerance (for example, $10^{-25}$), we can deem that the verified inclusion $X$ contains a simple zero of the system $Σ$. That is, we certified the over-determined system $Σ$ and, further, the original system $F$.

#### 5.2. Certifying the Multiplicity Structures of Isolated Singular Zeros of Polynomial Systems

In recent years, Mourrain et al. [21,36] proposed a new deflation method, which can be used to refine the accuracy of an isolated singular zero and the parameters introduced simultaneously and, moreover, the parameters can describe the multiplicity structure at the zero. They also proved that the number of equations and variables in this deflation method depend polynomially on the number of variables and equations of the input system and the multiplicity of the singular zero. However, although they also showed that the isolated simple zeros of the extended polynomial system correspond to zeros of the input system, the extended system is usually an over-determined system. Therefore, the problem of knowing the multiplicity structure of the isolated singular zero exactly becomes the problem of solving or certifying the isolated simple zero of the over-determined system.
In this section, we first give a brief introduction of their deflation method and, then, show how our method is applied to certify the multiplicity structure of the isolated singular zero of the input system heuristically.
Let $F = { f 1 , … , f m } ⊂ C [ x ]$. Let $p = ( p 1 , … , p n ) ∈ C n$ be an isolated multiple zero of $F$. Let $I = 〈 f 1 , … , f m 〉$, $m p$ be the maximal ideal at $p$ and Q be the primary component of I at $p$, so that $\sqrt{Q} = m_p$.
Consider the ring of power series $\mathbb{C}[[∂_p]] := \mathbb{C}[[∂_{1,p}, …, ∂_{n,p}]]$ and, for $β = (β_1, …, β_n) ∈ \mathbb{N}^n$, use the notation:
$∂_p^{β}(f) := ∂_{1,p}^{β_1} ⋯ ∂_{n,p}^{β_n}(f) = \frac{∂^{|β|} f}{∂ x_1^{β_1} ⋯ ∂ x_n^{β_n}}(p), \quad \text{for } f ∈ \mathbb{C}[x].$
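For concreteness, this derivation can be evaluated directly with a computer algebra system. The following is a minimal sympy sketch; the helper name `dual_diff` is ours, not from the paper:

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2")

def dual_diff(f, beta, p, vars_):
    """Evaluate the mixed partial d^|beta| f / dx^beta at the point p."""
    g = f
    for v, b in zip(vars_, beta):
        g = sp.diff(g, v, b)
    return g.subs(dict(zip(vars_, p)))

# Example: f = x1 + x2 + x1^2 at p = (0, 0)
f = x1 + x2 + x1**2
print(dual_diff(f, (2, 0), (0, 0), (x1, x2)))  # d^2 f / dx1^2 at p, which is 2
```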
The deflation method is based on an orthogonal primal-dual pair of bases for the space $\mathbb{C}[x]/Q$ and its dual $D ⊂ \mathbb{C}[∂]$, as described in the following lemma.
Lemma 5.
Let $F , p , Q , D$ be as in the above and δ be the multiplicity of $F$ at $p$. Then, there exists a primal-dual basis pair of the local ring $C [ x ] / Q$ with the following properties:
1.
The primal basis of the local ring $\mathbb{C}[x]/Q$ has the form
$B := \{ (x − p)^{α_0}, (x − p)^{α_1}, …, (x − p)^{α_{δ−1}} \}.$
We can assume that $α 0 = 0$ and that the monomials in B are connected to 1. Define the set of exponents in B
$E := \{ α_0, …, α_{δ−1} \}.$
2.
The unique dual basis $Λ = { Λ 0 , Λ 1 , … , Λ δ − 1 } ⊂ D$ orthogonal to B has the form:
$Λ_0 = ∂_p^{α_0} = 1_p, \quad Λ_1 = \frac{1}{α_1!} ∂_p^{α_1} + \sum_{\substack{|β| < |α_1| \\ β ∉ E}} ν_{α_1, β} \frac{1}{β!} ∂_p^{β}, \quad ⋮ \quad Λ_{δ−1} = \frac{1}{α_{δ−1}!} ∂_p^{α_{δ−1}} + \sum_{\substack{|β| < |α_{δ−1}| \\ β ∉ E}} ν_{α_{δ−1}, β} \frac{1}{β!} ∂_p^{β}.$
The above lemma says that, given a primal basis B of the local ring $\mathbb{C}[x]/Q$, there exists a unique dual basis $Λ$ orthogonal to B, which can be used to determine the multiplicity structure of $F$ at $p$ and, further, the multiplicity $δ$ of $p$. Based on the known primal basis B, Mourrain et al. constructed the following parametric multiplication matrices, which can be used to determine the coefficients of the dual basis $Λ$.
Definition 5.
Let B be as defined in Lemma 5 and denote the exponents in B by $E := \{ α_0, …, α_{δ−1} \}$ as above. Let
$E^+ := \bigcup_{i=1}^{n} (E + e_i)$
with $E + e_i = \{ (γ_1, …, γ_i + 1, …, γ_n) : γ ∈ E \}$, and we denote $∂(E) = E^+ \setminus E$. We define an array μ of length $nδ(δ−1)/2$ consisting of 0s, 1s and the variables $μ_{α_i, β}$ as follows: for all $α_i, α_k ∈ E$ and $j ∈ \{1, …, n\}$, the corresponding entry is
$μ_{α_i, α_k + e_j} = \begin{cases} 1, & \text{if } α_i = α_k + e_j, \\ 0, & \text{if } α_k + e_j ∈ E \text{ and } α_i ≠ α_k + e_j, \\ μ_{α_i, α_k + e_j}, & \text{if } α_k + e_j ∉ E. \end{cases}$
The parametric multiplication matrices corresponding to E are defined for $i = 1 , … , n$ by
$M_i^t(μ) := \begin{pmatrix} 0 & μ_{α_1, e_i} & μ_{α_2, e_i} & ⋯ & μ_{α_{δ−1}, e_i} \\ 0 & 0 & μ_{α_2, α_1 + e_i} & ⋯ & μ_{α_{δ−1}, α_1 + e_i} \\ ⋮ & ⋮ & & & ⋮ \\ 0 & 0 & 0 & ⋯ & μ_{α_{δ−1}, α_{δ−2} + e_i} \\ 0 & 0 & 0 & ⋯ & 0 \end{pmatrix}.$
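The entry rule above can be made concrete in code. The sketch below (our own illustration, not the authors' implementation) builds $M_i^t(μ)$ for the exponent set of Example 6 and recovers, from the commutator, the single constraint $μ_1 − μ_3$ that appears there:

```python
import sympy as sp

def mult_matrix_t(E, i):
    """Transposed parametric multiplication matrix M_i^t(mu) for the
    exponent set E, following the entry rule of Definition 5."""
    delta = len(E)
    M = sp.zeros(delta, delta)
    for r in range(delta):
        for c in range(r + 1, delta):
            target = tuple(a + (1 if j == i else 0) for j, a in enumerate(E[r]))
            if E[c] == target:
                M[r, c] = 1                               # alpha_c = alpha_r + e_i
            elif target in E:
                M[r, c] = 0                               # in E, but not alpha_c
            else:
                M[r, c] = sp.Symbol(f"mu_{c}_{r}_{i}")    # free parameter
    return M

# Primal basis {1, x1, x1^2}, i.e., E = {(0,0), (1,0), (2,0)}
E = [(0, 0), (1, 0), (2, 0)]
M1t, M2t = mult_matrix_t(E, 0), mult_matrix_t(E, 1)

# The commutator M1*M2 - M2*M1 yields the polynomial constraints on mu
M1, M2 = M1t.T, M2t.T
C = (M1 * M2 - M2 * M1).expand()
print(M1t)  # strictly upper triangular, 1s on the first superdiagonal
print(C)    # a single nonzero entry: the difference of two parameters
```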
Definition 6.
(Parametric normal form). Let $K ⊂ C$ be a field. We define
$N_{z,μ} : K[x] ⟶ K[z, μ]^δ, \quad f ⟼ N_{z,μ}(f) := f(z + M(μ))[1] = \sum_{γ ∈ \mathbb{N}^n} \frac{1}{γ!} ∂_z^{γ}(f)\, M(μ)^{γ} [1],$
where $[1] = [1, 0, …, 0]$ is the coefficient vector of 1 in the basis B.
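Definition 6 can be read as a truncated Taylor expansion of f evaluated at the matrix argument $z + M(μ)$. The following sympy sketch (our own illustration, using the multiplication matrices that appear in Example 6 below) computes $N_{z,μ}(f_1)$ and reproduces its three component polynomials:

```python
import itertools
from math import factorial

import sympy as sp

x1, x2, m1, m2, m3 = sp.symbols("x1 x2 mu1 mu2 mu3")

# Multiplication matrices of Example 6, M_i = (M_i^t)^T
M1 = sp.Matrix([[0, 0, 0], [1, 0, 0], [0, 1, 0]])
M2 = sp.Matrix([[0, 0, 0], [m1, 0, 0], [m2, m3, 0]])
e0 = sp.Matrix([1, 0, 0])  # [1]: coefficient vector of 1 in B

def normal_form(f, zvars, Ms, e0, max_order=3):
    """N_{z,mu}(f) = sum_gamma (1/gamma!) d_z^gamma(f) * M(mu)^gamma [1]."""
    out = sp.zeros(e0.rows, 1)
    for gamma in itertools.product(range(max_order + 1), repeat=len(zvars)):
        d = f
        for v, g in zip(zvars, gamma):
            d = sp.diff(d, v, g)
        if d == 0:
            continue                      # higher derivatives of a polynomial vanish
        Mg = sp.eye(e0.rows)
        for M, g in zip(Ms, gamma):
            Mg = Mg * M**g
        denom = 1
        for g in gamma:
            denom *= factorial(g)
        out += (Mg * e0) * (d * sp.Rational(1, denom))
    return out.applyfunc(sp.expand)

f1 = x1 + x2 + x1**2
Nf1 = normal_form(f1, (x1, x2), (M1, M2), e0)
print(list(Nf1))  # the three polynomials of item 1 in Example 6
```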
Based on the above lemma and definitions, the multiplicity structure is characterized by polynomial equations in the following theorem.
Theorem 5
([36]). Let $K ⊂ C$ be any field, $F ⊂ K [ x ]$, and let $p ∈ C n$ be an isolated zero of $F$. Let Q be the primary ideal at $p$ and assume that B is a basis for $K [ x ] / Q$ satisfying the conditions of Lemma 5. Let $E ⊂ N n$ be as in Lemma 5 and $M i ( μ )$ for $i = 1 , … , n$ be the parametric multiplication matrices corresponding to E as in Definition 5 and $N z , μ$ be the parametric form as in Definition 6. Then, $( z , μ ) = ( p , ν )$ is an isolated zero with multiplicity one of the polynomial system in $K [ z , μ ]$:
$\begin{cases} N_{z,μ}(f_k) = 0, & \text{for } k = 1, …, m, \\ M_i(μ) · M_j(μ) − M_j(μ) · M_i(μ) = 0, & \text{for } i, j = 1, …, n. \end{cases}$ (7)
The second part of Equation (7) expresses the pairwise commutation of the parametric multiplication matrices. Moreover, Theorem 5 guarantees that Equation (7) has an isolated zero $(p, ν)$ of multiplicity one. Thus, it can be used to deflate the isolated zero $p$ of the input system $F$ and simultaneously determine the multiplicity structure at $p$.
Now, we show an example to illustrate how their method works.
Example 6.
Let $F = \{ f_1 = x_1 + x_2 + x_1^2, f_2 = x_1 + x_2 + x_2^2 \}$ be a polynomial system with a three-fold isolated zero $p = (0, 0)$. Given the primal basis $B = \{ 1, x_1, x_1^2 \}$, which satisfies the properties of Lemma 5, we can compute the parametric multiplication matrices:
$M_1^t(μ) = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}, \quad M_2^t(μ) = \begin{pmatrix} 0 & μ_1 & μ_2 \\ 0 & 0 & μ_3 \\ 0 & 0 & 0 \end{pmatrix}.$
Thus, Equation (7) generates the following polynomials:
1.
$N(f_1) = 0$ gives the polynomials $x_1 + x_2 + x_1^2$, $1 + 2x_1 + μ_1$, $1 + μ_2$.
2.
$N(f_2) = 0$ gives the polynomials $x_1 + x_2 + x_2^2$, $1 + (1 + 2x_2)μ_1$, $(1 + 2x_2)μ_2 + μ_1 μ_3$.
3.
$M_1 M_2 − M_2 M_1 = 0$ gives the polynomial $μ_3 − μ_1$.
Furthermore, Theorem 5 promises that $(p, ν_1, ν_2, ν_3)$ is an isolated zero with multiplicity one of the system $F′ = \{ f_1, f_2, 1 + 2x_1 + μ_1, 1 + μ_2, 1 + (1 + 2x_2)μ_1, (1 + 2x_2)μ_2 + μ_1 μ_3, μ_3 − μ_1 \}$.
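As a quick consistency check (our own illustration, not part of [36]), one can substitute the exact zero $(p, ν) = (0, 0, −1, −1, −1)$, which matches the refined numerical zero appearing in Example 7, into $F′$ and confirm that every polynomial vanishes:

```python
import sympy as sp

x1, x2, m1, m2, m3 = sp.symbols("x1 x2 mu1 mu2 mu3")

# Extended system F' from Example 6
Fp = [x1 + x2 + x1**2,
      x1 + x2 + x2**2,
      1 + 2*x1 + m1,
      1 + m2,
      1 + (1 + 2*x2)*m1,
      (1 + 2*x2)*m2 + m1*m3,
      m3 - m1]

zero = {x1: 0, x2: 0, m1: -1, m2: -1, m3: -1}
residuals = [sp.simplify(g.subs(zero)) for g in Fp]
print(residuals)  # all seven residuals are 0
```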
On the one hand, from the above example, we can see that, given a polynomial system $F$ with an isolated zero $p$, by Theorem 5 we get an extended system $F′ ⊂ \mathbb{C}[x, μ]$, which has an isolated zero $(p, ν)$ of multiplicity one. Moreover, by Lemma 5, we have the dual basis
$Λ = \{ 1,\; ∂_1 + ν_1 ∂_2,\; \tfrac{1}{2} ∂_1^2 + ν_2 ∂_2 + ν_3 ∂_1 ∂_2 + \tfrac{1}{2} ν_1 ν_3 ∂_2^2 \},$
which corresponds to the primal basis $B = { 1 , x 1 , x 1 2 }$.
On the other hand, it is not hard to see that Equation (7) in Theorem 5 usually gives an over-determined extended system $F′$. Given an approximate zero $(\tilde{p}, \tilde{ν})$, similarly to Corollary 4.12 in [29], we can use random linear combinations of the polynomials in $F′$ to produce a square system, which has a simple zero at $(p, ν)$ with high probability. Furthermore, Newton's method can be used on this square system to refine $(\tilde{p}, \tilde{ν})$ to higher accuracy. However, this operation only returns an approximate multiplicity structure of the input system $F$, albeit one of higher accuracy. Next, we employ our certification method to certify the multiplicity structure of $F$.
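The randomization step can be sketched as follows (our own illustration in Python/numpy, following the idea of [29]): a random $k × m$ matrix A compresses the m polynomials of $F′$ into k combinations, where k is the number of unknowns. Every zero of $F′$ is automatically a zero of $A · F′$, and for generic A a simple zero stays simple:

```python
import numpy as np

def Fprime(z):
    """Extended system F' of Example 6 at z = (x1, x2, mu1, mu2, mu3)."""
    x1, x2, m1, m2, m3 = z
    return np.array([x1 + x2 + x1**2,
                     x1 + x2 + x2**2,
                     1 + 2*x1 + m1,
                     1 + m2,
                     1 + (1 + 2*x2)*m1,
                     (1 + 2*x2)*m2 + m1*m3,
                     m3 - m1])

def randomized_square(F, m, k, rng):
    """Compress m equations into k random linear combinations.
    Any zero of F is a zero of A @ F; for generic A, a simple zero stays simple."""
    A = rng.standard_normal((k, m))
    return lambda z: A @ F(z)

rng = np.random.default_rng(2018)
G = randomized_square(Fprime, 7, 5, rng)           # square 5x5 system
print(G(np.array([0.0, 0.0, -1.0, -1.0, -1.0])))   # all zeros at the exact zero
```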
Example 7.
Continue to consider Example 6. Let $Σ = F′ = \{ f_1, f_2, g_1 = 1 + 2x_1 + μ_1, g_2 = 1 + μ_2, g_3 = 1 + (1 + 2x_2)μ_1, g_4 = (1 + 2x_2)μ_2 + μ_1 μ_3, g_5 = μ_3 − μ_1 \}$. Given an approximate zero
$(\tilde{p}, \tilde{ν}) = (0.15, 0.12, −1.13, −1.32, −1.47).$
By Algorithm 1, with Newton’s method, we get a higher accuracy zero
$(\tilde{p}′, \tilde{ν}′) = (0.000000771, 0.000001256, −1.000002523, −1.000000587, −1.000001940).$
Then, let
$f = f_1^2 + f_2^2 + \sum_{j=1}^{5} g_j^2$
and compute
$Σ′ = \{ J_1(f), J_2(f), J_{μ_1}(f), J_{μ_2}(f), J_{μ_3}(f) \}.$
After applying algorithm $verifynlss$ on $Σ ′$ at $( p ˜ ′ , ν ˜ ′ )$, we have a verified inclusion:
$X = \begin{pmatrix} [−0.00000000000001, 0.00000000000001] \\ [−0.00000000000001, 0.00000000000001] \\ [−1.00000000000001, −0.99999999999999] \\ [−1.00000000000001, −0.99999999999999] \\ [−1.00000000000001, −0.99999999999999] \end{pmatrix}.$
Based on Theorem 2, we know that there exists a unique $(\hat{x}, \hat{μ}) ∈ X$ such that $Σ′(\hat{x}, \hat{μ}) = 0$.
Similarly, as in Examples 2 and 3, we consider the midpoint $(\hat{p}, \hat{ν})$ of $X$ and compute $f(\hat{p}, \hat{ν}) = 0$. Thus, by Theorem 4, we are sure that there exists a unique simple zero $(\hat{x}_1, \hat{x}_2, \hat{ν}_1, \hat{ν}_2, \hat{ν}_3)$ of the input system Σ in the interval $X$, which means we certified the input system Σ.
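The refinement step of this example can be reproduced with ordinary floating-point Newton iteration on the gradient system $Σ′$. The sketch below is our own, in Python with sympy/numpy; it stands in for the refinement only, not for the verified step, since the INTLAB routine verifynlss is not reproduced here:

```python
import numpy as np
import sympy as sp

x1, x2, m1, m2, m3 = sp.symbols("x1 x2 mu1 mu2 mu3")
V = (x1, x2, m1, m2, m3)

polys = [x1 + x2 + x1**2, x1 + x2 + x2**2,
         1 + 2*x1 + m1, 1 + m2, 1 + (1 + 2*x2)*m1,
         (1 + 2*x2)*m2 + m1*m3, m3 - m1]
f = sum(q**2 for q in polys)                      # sum of squares of Sigma
grad = sp.Matrix([sp.diff(f, v) for v in V])      # Sigma': the gradient system
hess = grad.jacobian(V)                           # Jacobian of Sigma'

G = sp.lambdify(V, grad, "numpy")
H = sp.lambdify(V, hess, "numpy")

z = np.array([0.15, 0.12, -1.13, -1.32, -1.47])   # approximate zero from Example 7
for _ in range(50):                               # plain Newton iteration on Sigma'
    z = z - np.linalg.solve(H(*z), G(*z)).ravel()
print(z)
```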
According to the analysis in the above example, after applying Algorithm 1 to the extended system $Σ = F′$, we get a verified inclusion $X$ that possesses a unique simple zero of $F′$. Noticing that the values of the variables $μ_1, μ_2, μ_3$ in $F′$ determine the coefficients of the dual basis $Λ$, certifying the extended system $F′$ means certifying the multiplicity structure of the input system $F$ at $p$. Thus, by Theorem 4, as long as $f(\hat{x}, \hat{μ}) = 0$, we are sure that we certified not only the isolated singular zero of the input system $F$, but also its multiplicity structure.
Algorithm 1 $VSPS$: verifying a simple zero of a polynomial system.
Input: an over-determined polynomial system $Σ := \{ f_1, ⋯, f_m \} ⊂ \mathbb{R}[x]$ and an approximate simple zero $\tilde{p} = (\tilde{p}_1, ⋯, \tilde{p}_n) ∈ \mathbb{R}^n$.
Output: a verified inclusion $X$ and a small non-negative number.
1: Compute f and $Σ′$;
2: Compute $\tilde{p}′ := \mathrm{Newton}(Σ′, \tilde{p})$;
3: Compute $X := \mathrm{verifynlss}(Σ′, \tilde{p}′)$ and $f(\hat{p})$;
4: if $f(\hat{p}) = 0$ then
5:  return $(X, 0)$;
6: else
7:  return $(X, f(\hat{p}))$;
8: end if

## 6. Conclusions

In this paper, we make two main contributions. First, we consider certifying the simple zeros of over-determined systems. By transforming the given over-determined system into a square one, we prove a necessary and sufficient condition to certify the simple real zeros of the over-determined system $Σ$ by certifying the simple real zeros of the square system $Σ_r$. However, since deciding numerically whether a point is a zero of a polynomial is difficult, we refine and certify the simple real zeros of $Σ$ by refining and certifying a new square system $Σ′$ with the interval methods, and we get a verified inclusion $X$, which contains a unique simple real zero $\hat{x}$ of $Σ′$. In fact, $\hat{x}$ is a local minimum of f, which is also a necessary condition for the certification. By the necessary and sufficient condition in Theorem 1, we know that, as long as $f(\hat{x}) = 0$, $\hat{x}$ is a simple real zero of $Σ$ and we have certified the input system $Σ$.
Second, based on our work [35] and the work of Mourrain et al. [21,36], as an application of our method, we give a heuristic method for certifying not only the isolated singular zeros of polynomial systems, but also the multiplicity structures of the isolated singular zeros of polynomial systems.
In the future, our effort will be directed toward giving a sufficient condition for the certification of the computed zero $\hat{x}$.

## Author Contributions

J.C. contributed supervision, project administration and funding, and conceived the presented idea. X.D. developed the theory, performed the computations and wrote the initial draft of the paper. J.C. verified the analytical methods. Both authors read and approved the final version of the paper.

## Funding

The work was partially supported by NSFC Grant 11471327.

## Acknowledgments

The authors would like to thank the anonymous referees very much for their useful suggestions that improved this paper.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

1. Blum, L.; Cucker, F.; Shub, M.; Smale, S. Complexity and Real Computation; Springer: New York, NY, USA, 1998. [Google Scholar]
2. Hauenstein, J.D.; Sottile, F. Algorithm 921: AlphaCertified: Certifying Solutions to Polynomial Systems. ACM Trans. Math. Softw. 2012, 38, 28. [Google Scholar] [CrossRef]
3. Smale, S. Newton’s Method Estimates from Data at One Point. In The Disciplines: New Directions in Pure, Applied and Computational Mathematics; Ewing, R., Gross, K., Martin, C., Eds.; Springer: New York, NY, USA, 1986. [Google Scholar]
4. Kanzawa, Y.; Kashiwagi, M.; Oishi, S. An algorithm for finding all solutions of parameter-dependent nonlinear equations with guaranteed accuracy. Electr. Commun. Jpn. 1999, 82, 33–39. [Google Scholar] [CrossRef]
5. Krawczyk, R. Newton-Algorithmen zur Bestimmung von Nullstellen mit Fehlherschranken. Computing 1969, 4, 247–293. [Google Scholar] [CrossRef]
6. Moore, R.E. A test for existence of solutions to nonlinear systems. SIAM J. Numer. Anal. 1977, 14, 611–615. [Google Scholar] [CrossRef]
7. Nakaya, Y.; Oishi, S.; Kashiwagi, M.; Kanzawa, Y. Numerical verification of nonexistence of solutions for separable nonlinear equations and its application to all solutions algorithm. Electr. Commun. Jpn. 2003, 86, 45–53. [Google Scholar] [CrossRef]
8. Rump, S.M. Solving algebraic problems with high accuracy. Proceedings of the Symposium on A New Approach to Scientific Computation; Academic Press Professional, Inc.: San Diego, CA, USA, 1983; pp. 51–120. [Google Scholar]
9. Yamamura, K.; Kawata, H.; Tokue, A. Interval solution of nonlinear equations using linear programming. BIT Numer. Math. 1998, 38, 186–199. [Google Scholar] [CrossRef]
10. Allamigeon, X.; Gaubert, S.; Magron, V.; Werner, B. Formal proofs for nonlinear optimization. J. Form. Reason. 2015, 8, 1–24. [Google Scholar]
11. Kaltofen, E.; Li, B.; Yang, Z.; Zhi, L. Exact certification of global optimality of approximate factorizations via rationalizing sums-of-squares with floating point scalars. In Proceedings of the Twenty-first International Symposium on Symbolic and Algebraic Computation, ISSAC 08, Hagenberg, Austria, 20–23 July 2008; ACM: New York, NY, USA; pp. 155–164. [Google Scholar]
12. Kaltofen, E.L.; Li, B.; Yang, Z.; Zhi, L. Exact certification in global polynomial optimization via sums-of-squares of rational functions with rational coefficients. J. Symb. Comput. 2012, 47, 1–15. [Google Scholar] [CrossRef]
13. Monniaux, D.; Corbineau, P. On the generation of positivstellensatz witnesses in degenerate cases. In Interactive Theorem Proving; LNCS, 6898; van Eekelen, M., Geuvers, H., Schmaltz, J., Wiedijk, F., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 249–264. [Google Scholar]
14. Peyrl, H.; Parrilo, P.A. A Macaulay2 package for computing sum of squares decompositions of polynomials with rational coefficients. In Proceedings of the SNC 2007, Waterloo, ON, Canada; 2007; pp. 207–208. [Google Scholar]
15. Peyrl, H.; Parrilo, P.A. Computing sum of squares decompositions with rational coefficients. Theor. Comput. Sci. 2008, 409, 269–281. [Google Scholar] [CrossRef]
16. Safey El Din, M.; Zhi, L. Computing rational points in convex semialgebraic sets and sum of squares decompositions. SIAM J. Optim. 2010, 20, 2876–2889. [Google Scholar]
17. Akoglu, T.A.; Hauenstein, J.D.; Szanto, A. Certifying solutions to overdetermined and singular polynomial systems over $Q$. J. Symb. Comput. 2018, 84, 147–171. [Google Scholar]
18. Dayton, B.; Zeng, Z. Computing the multiplicity structure in solving polynomial systems. In Proceedings of the 2005 International Symposium on Symbolic and Algebraic Computation, Beijing, China, 24–27 July 2005; Kauers, M., Ed.; ACM: New York, NY, USA, 2005; pp. 116–123. [Google Scholar]
19. Giusti, M.; Lecerf, G.; Salvy, B.; Yakoubsohn, J.-C. On location and approximation of clusters of zeros: case of embedding dimension one. Found. Comput. Math. 2007, 7, 1–58. [Google Scholar] [CrossRef]
20. Hauenstein, J.D.; Wampler, C.W. Isosingular sets and deflation. Found. Comput. Math. 2013, 13, 371–403. [Google Scholar] [CrossRef]
21. Hauenstein, J.D.; Mourrain, B.; Szanto, A. Certifying isolated singular points and their multiplicity structure. In Proceedings of the Twenty-first International Symposium on Symbolic and Algebraic Computation, ISSAC '15, Bath, UK, 6–9 July 2015; pp. 213–220. [Google Scholar]
22. Ojika, T. A numerical method for branch points of a system of nonlinear algebraic equations. Appl. Numer. Math. 1988, 4, 419–430. [Google Scholar] [CrossRef]
23. Ojika, T.; Watanabe, S.; Mitsui, T. Deflation algorithm for the multiple roots of a system of nonlinear equations. J. Math. Anal. Appl. 1983, 96, 463–479. [Google Scholar] [CrossRef]
24. Zeng, Z. Computing multiple roots of inexact polynomials. Math. Comput. 2005, 74, 869–903. [Google Scholar] [CrossRef]
25. Dayton, B.; Li, T.; Zeng, Z. Multiple zeros of nonlinear systems. Math. Comput. 2011, 80, 2143–2168. [Google Scholar] [CrossRef]
26. Kanzawa, Y.; Oishi, S. Approximate singular solutions of nonlinear equations and a numerical method of proving their existence. Theory and application of numerical calculation in science and technology, II (Japanese) (Kyoto, 1996). Sūrikaisekikenkyūsho Kōkyūroku 1997, 990, 216–223. [Google Scholar]
27. Leykin, A.; Verschelde, J.; Zhao, A. Newton’s method with deflation for isolated singularities of polynomial systems. Theor. Comput. Sci. 2006, 359, 111–122. [Google Scholar] [CrossRef]
28. Li, N.; Zhi, L. Verified Error Bounds for Isolated Singular Solutions of Polynomial Systems. SIAM J. Numer. Anal. 2014, 52, 1623–1640. [Google Scholar] [CrossRef]
29. Mantzaflaris, A.; Mourrain, B. Deflation and certified isolation of singular zeros of polynomial systems. In Proceedings of the ISSAC 2011, San Jose, CA, USA, 17 January 2011; pp. 249–256. [Google Scholar]
30. Rump, S.M.; Graillat, S. Verified error bounds for multiple roots of systems of nonlinear equations. Numer. Algorithms 2010, 54, 359–377. [Google Scholar] [CrossRef]
31. Dedieu, J.P.; Shub, M. Newton’s method for overdetermined systems of equations. Math. Comput. 1999, 69, 1099–1115. [Google Scholar] [CrossRef]
32. Cheng, J.S.; Dou, X. Certifying simple zeros of over-determined polynomial systems. In Computer Algebra in Scientific Computing; CASC’17 Lecture Notes in Computer, Science; Gerdt, V., Koepf, W., Seiler, W., Vorozhtsov, E., Eds.; Springer: Cham, Switzerland, 2017; pp. 55–76. [Google Scholar]
33. Li, S. Linear Algebra; Higher Education Press: Beijing, China, 2006; ISBN 978-7-04-019870-6. [Google Scholar]
34. Rohn, J. Positive definiteness and stability of interval matrices. SIAM J. Matrix Anal. Appl. 1994, 15, 175–184. [Google Scholar] [CrossRef]
35. Cheng, J.S.; Dou, X.; Wen, J. A new deflation method for verifying the isolated singular zeros of polynomial systems. preprint 2018.
36. Hauenstein, J.D.; Mourrain, B.; Szanto, A. On deflation and multiplicity structure. J. Symb. Comput. 2017, 83, 228–253. [Google Scholar] [CrossRef]