Article

A General Approach to Error Analysis for Roots of Polynomial Equations

by
Imme van den Berg
1,* and
João Carlos Lopes Horta
2
1
Centro de Investigação em Matemática e Aplicações (CIMA), Universidade de Évora, Rua Romão Ramalho 59, 7000-671 Évora, Portugal
2
Faculdade de Ciências e Tecnologia, Universidade de Cabo Verde, Campus de Palmarejo Grande, Zona K 379 C, Praia 7943-010, Cape Verde
*
Author to whom correspondence should be addressed.
AppliedMath 2025, 5(3), 120; https://doi.org/10.3390/appliedmath5030120
Submission received: 30 June 2025 / Revised: 15 August 2025 / Accepted: 26 August 2025 / Published: 4 September 2025

Abstract

We study equations with real polynomials of arbitrary degree, such that each coefficient has a small, individual error; this may originate, for example, from imperfect measuring. In particular, we study the influence of the errors on the roots of the polynomials. The errors are modeled by imprecisions of Sorites type: they are supposed to be stable to small shifts. We argue that such imprecisions are appropriately reflected by (scalar) neutrices, which are convex subgroups of the nonstandard real line; examples are the set of infinitesimals, or the set of numbers of order ε , where ε is a fixed infinitesimal. The Main Theorem states that the imprecisions of the roots are neutrices, and determines their shape.

1. Introduction

1.1. Imprecisions and the Sorites Property, Scalar Neutrices

Imprecisions may result from inaccurate measuring or lack of information. Typically, instead of dealing with a concrete real number, we have to consider a small interval. To do justice to the imprecision, such an interval should not have crisp boundaries. Then we are in the presence of the Sorites property: the interval is stable under at least some shifts, which should not be too large. The apparent paradox is reasonably resolved in the context of Nonstandard Analysis; a thorough description is given by Dinis and Jacinto in [1]. In particular, the external convex subsets of the real line (i.e., intervals which cannot be defined in classical Zermelo–Fraenkel set theory) may have the Sorites property. Obvious examples are the set of infinitesimals, i.e., the real numbers smaller in absolute value than all standard non-zero real numbers, and the set of limited numbers, i.e., the numbers which in absolute value are less than some standard real number. The set of infinitesimals is stable under addition or subtraction of a fixed infinitesimal, and the set of limited numbers is stable under addition or subtraction of a fixed standard number. In fact, they have the group property. More generally, convex subsets of the nonstandard real line which are groups for addition have been called (scalar) neutrices by Koudjeti and Van den Berg in [2]. There are many neutrices of different sizes and natures. An important subclass of the neutrices is given by the idempotent neutrices which, being stable under multiplications, are very useful in studying non-linear problems.

1.2. Error Analysis and External Numbers

An external number is the (pointwise, or Minkowski) sum of a (nonstandard) real number and a neutrix; in view of the observations made in Section 1.1, it may be argued that it represents a real number with a small, individual error. As will be seen in Section 3, the Minkowski operations of addition and multiplication applied to the external numbers correspond exactly to the algebraic rules of informal Error Analysis [3]. However, in the present setting, they give rise to a formal algebraic and ordered structure, called a Completely Arithmetical Solid in [4]. Many rules of ordinary calculus continue to hold, sometimes in adapted form. However, we have sub-distributivity instead of distributivity, though the distributive law holds in most cases, which can be characterized.

1.3. Polynomials and Main Results

We adopt the usual definition of a polynomial as the result of a (standard) finite number of additions and multiplications of elements of the ground set and a variable x. In the present subdistributive setting, the structure of the polynomials is much richer than the structure of real polynomials, which are sums of monomials. Because imprecisions (neutrices) vary in size, equalities are not always realizable, and we study inclusions instead. The Main Theorem (Theorem 1) states, firstly, that a polynomial inclusion can be reduced to a standard finite set of inclusions for real polynomials, and secondly, that roots, if any, are external numbers, while the size of their neutrices can always be determined. An important tool in the proof is the real version of the Fundamental Theorem of Algebra. In a sense, we extend this theorem to an algebraic structure that is sub-distributive, admitting a larger class of polynomials. Other steps in the proof are multi-scale asymptotic methods, which, firstly, localize sets of near roots, and, secondly, show that within such a set a real polynomial, on a sufficiently restricted area, behaves like a monomial, noting that roots of monomial inclusions have a simple form in terms of an external number.

1.4. Relation to Existing Literature

The term neutrix is borrowed from Van der Corput, who wished to develop an ars negligendi. In [5], a neutrix is a ring without unity of functions, which in a particular problem should be neglected. An overview of classical, mostly asymptotic, methods to study perturbations of polynomials is given by Murdock in the opening chapter of [6]. In particular, he focused on how to calculate or approximate the roots of polynomials with perturbations in their coefficients, especially when dealing with equations that are slightly perturbed from a known solvable form. In [7], fine asymptotics is used to determine domains on which asymptotic approximations of roots are reasonably valid.
The study of how small changes in a polynomial’s coefficients affect its roots has also been carried out with the use of notions of continuous or differentiable dependence. In the framework of Nonstandard Analysis, one may express continuous and differentiable dependence rather easily with the help of infinitesimals. Lutz and Goze [8] formulated general nonstandard theorems on the infinite closeness of the roots of standard polynomials and of their infinitesimal perturbations. These results have been recently extended by Remm in [9]. It has been observed that within classical analysis, the formulation of general theorems is hard; recently, notable attempts have been made by Nathanson and Ross [10].
A general method of sensitivity analysis is based on the condition number [11], which in our one-dimensional case corresponds to the absolute value of the derivative. Indeed, the tangent of the polynomial at the root locally approximates the polynomial, and represents a linear function which is a homomorphism of groups, and in particular neutrices. In the special case of a standard polynomial with the imprecision around a zero given by the infinitesimals, one may expect to be able to derive that the imprecision of the root is again infinitesimal. Note, however, that such an approach is not conclusive in the case of multiple roots (corresponding to a condition number equal to zero), and that our context is much more general, for the coefficients of a polynomial may have different orders of magnitude, and then the polynomial could locally be highly oscillating. In addition, the size of imprecision in the zero is prescribed by our approach, so it can be too large for the approximation by the tangent line to be significant.
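As a rough numerical illustration of the role of the derivative (not taken from the article, with concrete numbers in place of neutrices and ad hoc polynomials), one may compare the displacement of a simple root and of a double root under a small perturbation of the constant term.

```python
import numpy as np

# Hypothetical illustration: perturbing the constant term by delta moves a
# simple root r by roughly delta/|P'(r)| (condition-number behaviour), while a
# double root (P'(r) = 0) moves by roughly sqrt(delta), i.e. much further.
delta = 1e-8
simple = np.roots([1.0, 0.0, -1.0, delta])    # x^3 - x + delta, simple root near 1
double = np.roots([1.0, -2.0, 1.0 + delta])   # (x - 1)^2 + delta, double root at 1
print(min(abs(simple - 1.0)))   # about delta/|P'(1)| = delta/2 = 5e-9
print(min(abs(double - 1.0)))   # about sqrt(delta) = 1e-4
```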
A comprehensive framework in classical asymptotics for localizing roots approximately is provided by Nafisah et al. in [12]. They estimated the order of magnitude of polynomial roots based on the structure and magnitude of their coefficients without the need for calculation of exact root values or the use of explicit root-finding algorithms.
Perturbations of polynomials and their roots have also been studied from a numerical point of view; see, for example, [13,14].
The study of error analysis in the present non-linear setting has been preceded by similar studies in the context of systems of linear equations; see, e.g., Van Tran and Van den Berg [15] and the references mentioned in this article.

1.5. Structure of the Article

The present article has the following structure. Section 2 contains some background on Nonstandard Analysis with some relevant references. In Section 3, we recall the Minkowski operations on subsets of the real line, define neutrices—in particular idempotent neutrices—and external numbers, and give the rules for the algebraic operations. In Section 4, we define polynomials of various types, and present the Main Theorem, which says that a polynomial inclusion may be reduced to a finite system of inclusions for real polynomials which, if consistent, is solved by external numbers. Section 5 recalls preliminary properties. They concern the algebraic structure of external numbers, the idempotent neutrices, and a Substitution theorem which provides a general method for simplifying a given inclusion. In Section 6, we prove the Main Theorem on the roots of a polynomial by External induction, using the Substitution Theorem in the induction step, to lower, locally, the degree of the polynomial in question. Section 7 studies the multiplicity of the roots, and the behavior of the polynomials close to the roots. In Section 8, we study the influence of the neutrices occurring in the coefficients of the polynomials on the roots. We will see that in a typical case, the imprecision of the roots depends much on the imprecision of the constant term, while the imprecisions in the remaining coefficients have an impact on the existence of the roots; we illustrate this numerically. Throughout this article, we present many examples illustrating our approach.

2. On Nonstandard Analysis

Nonstandard Analysis distinguishes between standard and nonstandard sets. We adopt the axiomatic approach of the Internal Set Theory IST of Nelson [16]. It is built upon classical Zermelo–Fraenkel set theory ZFC, and extends its language {∈} with a new unary predicate “st”, which means “standard”. All uniquely definable sets of ZFC are standard, while infinite sets always have nonstandard elements; this is a consequence of the Idealization Axiom, which is part of IST. In particular, there exist nonstandard numbers within the set of natural numbers ℕ. They are all larger than the standard natural numbers, and are called unlimited. An important feature is External induction, which states that the induction property holds over the standard natural numbers for every formula of IST. Real numbers which in absolute value are less than or equal to a standard natural number are called limited, and the remaining real numbers are again unlimited. In this setting, ℝ also contains non-zero infinitesimals, which are the multiplicative inverses of the unlimited real numbers. So, within the real number system, there exists more than one order of magnitude, and it is possible to define many more orders of magnitude. This enables working with errors of multiple sizes simultaneously within the real number system.
Formulas of set theory which contain only the symbol ∈ are called internal. If such a formula defines a set, it is called internal. Formulas of set theory which contain the symbol st, or both symbols, are called external. We restrict ourselves to bounded formulas, i.e., formulas in which every variable ranges over a standard set. This gives rise to the axiomatic system BST of Kanovei and Reeken [17]. They constructed an extension HST of BST, in which external sets may be defined, which live outside the universe of ZFC. All results and arguments of the present article are contained in HST. In particular, the error sets are formally defined as external sets of this system.
For terminology and introductions to axiomatic Nonstandard Analysis we refer to Nelson’s original article and, e.g., [18,19].

3. Minkowski Operations, Neutrices, External Numbers

Neutrices and external numbers were introduced in [2]. Here, and also in [4], formal proofs, and many examples can be found. They enable a calculus on a family of subsets of R , where we often identify real numbers with their singletons.
The algebraic operations are the pointwise Minkowski operations. Let A, B ⊆ ℝ. Then, with some abuse of language,
A + B = {x + y : x ∈ A, y ∈ B},  A · B = {x · y : x ∈ A, y ∈ B},  −A = {−x : x ∈ A}.
We will usually omit the point when dealing with multiplications.
Definition 1. 
A neutrix is a convex subset of ℝ which is idempotent for addition and (additive) inversion.
A neutrix is a convex subgroup of ℝ. Some obvious neutrices are {0}, ℝ, the set of infinitesimals, which will be denoted by ⊘, and the set of limited numbers, which will be denoted by £. The neutrices are ordered by inclusion, and if M, N are neutrices such that M ⊆ N, we may write M = min(M, N) and N = max(M, N).
Proposition 1 gives a useful criterion to recognize neutrices.
Proposition 1. 
A set N ⊆ ℝ is a neutrix if
1. 
0 ∈ N,
2. 
∀x (x ∈ N ⇒ 2x ∈ N),
3. 
∀x ∀y (x ∈ N ∧ |y| ≤ |x| ⇒ y ∈ N),
or alternatively, if £N = N.
The last part of Proposition 1 expresses that, in an algebraic sense, neutrices are modules over the ring £, i.e., multiplication by £ leaves the neutrix invariant.
A neutrix N shares with the neutral element {0} the properties of convexity, idempotency for addition, i.e., N + N = N, and equality to its inverse, i.e., −N = N. Idempotent neutrices share with the neutral element also idempotency for multiplication.
Definition 2. 
A neutrix I is called idempotent if
I · I = I.
An idempotent neutrix is a convex subring of ℝ. We may extend Proposition 1 to recognize idempotent neutrices.
Proposition 2. 
A set I ⊆ ℝ is an idempotent neutrix if
1. 
£I = I,
2. 
∀x (x ∈ I ⇒ x² ∈ I).
Example 1. 
The following neutrices are idempotent: {0}, ℝ, ⊘, and £. We define two other idempotent neutrices. Let ε ≃ 0, ε > 0. The external set of all positive unlimited numbers is denoted by Appliedmath 05 00120 i001, and the external set of all positive appreciable numbers, i.e., numbers which are limited, but not infinitesimal, is denoted by @. The ε-microhalo M_ε is the set of real numbers which are less in absolute value than all standard powers of ε, and it holds that M_ε = Appliedmath 05 00120 i002. The ε-microgalaxy m_ε is the set of real numbers which are less in absolute value than all standard fractional powers of e^(−1/ε), and we have m_ε = £e^(−@/ε). Then M_ε, m_ε are idempotent neutrices. The neutrices ε⊘ and ε£ are not idempotent. One could interpret informally the neutrix ε⊘ as the numbers which are small with respect to ε, the neutrix ε£ as the numbers which are of order ε, the ε-microhalo as the numbers which are small with respect to all accessible powers of ε and the ε-microgalaxy as the numbers which are exponentially small with respect to ε. The microhalo and the microgalaxy are of some importance in singular perturbations [20].
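For a purely numerical impression of these scales (with a fixed concrete ε instead of an infinitesimal, so the sketch below is only suggestive), one may compare a few powers of ε with e^(−1/ε):

```python
import math

# Illustration only: for a concrete small eps, exp(-1/eps) lies far below every
# power eps**n that we can reasonably display, mimicking the fact that the
# eps-microgalaxy is contained in the eps-microhalo.
eps = 1e-2
print([eps**n for n in (1, 2, 5, 10)])   # 1e-2, 1e-4, 1e-10, 1e-20
print(math.exp(-1 / eps))                # about 3.7e-44
```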
The construction of polynomials depends on both addition and multiplication. Neutrices are invariant under addition, and idempotent neutrices are also invariant under multiplication; as such, they are more convenient when studying polynomials. In particular, they share, in a slightly modified form, a property with the zero of an integral domain. There it holds that xy = 0 implies that x = 0 or y = 0. For idempotent neutrices, we have xy ∈ I ⇒ x ∈ I or y ∈ I. This property will be extended by Proposition 12 below and is very useful in the localization of possible roots of polynomials. Idempotency is also useful in the context of multiple roots, and will be studied in more detail in Section 5.2.
Definition 3. 
Let N be a neutrix.
1. 
We say that t ∈ ℝ is limited with respect to N if tN ⊆ N. The set of all real numbers that are limited with respect to N is denoted by £_N.
2. 
We say that t ∈ ℝ is an absorber of N if tN ⊊ N. The set of all absorbers of N is denoted by ⊘_N.
3. 
We say that t ∈ ℝ is appreciable with respect to N if tN = N. The set of all numbers which are appreciable with respect to N is denoted by @_N.
4. 
We say that t ∈ ℝ is an exploder of N if tN ⊋ N. The set of all exploders of N is denoted by Appliedmath 05 00120 i003.
A fixed infinitesimal ε > 0 is an absorber of £, and its inverse 1 / ε is an exploder of £. All appreciable numbers are appreciable with respect to £.
Definition 4. 
An external number α is the (Minkowski) sum of a real number a and a neutrix A. An external number is called zeroless if 0 ∉ α.
Remark 1. 
As argued in [4], the collection of neutrices is formally not an external set, but a definable class. The class of all neutrices is denoted by N . As a consequence, the collection of all external numbers is a class, which will be denoted by E .
The rules for addition, subtraction, multiplication, and division of external numbers are defined by the following Minkowski operations.
Definition 5. 
Let a, b ∈ ℝ, A, B be neutrices and α = a + A, β = b + B be external numbers.
1. 
α ± β = a ± b + A + B = a ± b + max{A, B}.
2. 
αβ = ab + Ab + Ba + AB = ab + max{aB, bA, AB}.
3. 
1/α = 1/a + A/a², if α is zeroless.
One may recognize several properties of informal error analysis, as present in for instance [3], where the neutrices model the errors. Indeed, if α ≡ a + A, where {0} ⊊ A, contains zero, the external number α reduces to its neutrix N(α) ≡ A, which contains positive and negative numbers, which is in line with the usual assumption that errors do not have a sign (or are always thought to be non-negative, see Definition 6 below). If α is zeroless, its relative imprecision
R(α) ≡ A/a
satisfies R(α) ⊆ ⊘, which reflects that errors should be relatively small. The sum of two errors should be the largest of them. If α or β are zeroless, in Definition 5 (item 2), we may neglect the neutrix product AB. This reflects that when multiplying numbers with relatively small errors, the product of the errors could be neglected. The rule of the division is in line with a Taylor expansion; indeed, one shows that
1/(a + A) = (1/a) · 1/(1 + A/a) = (1/a)(1 − A/a) = (1/a)(1 + A/a) = 1/a + A/a².
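The calculus of Definition 5 can be mimicked numerically by representing an external number by a midpoint and an error radius standing in for the neutrix; the following sketch (an informal model chosen here, not the formalism of the article, with the class name Ext invented for the illustration) shows how the maximum rule reproduces the bookkeeping of informal error analysis.

```python
from dataclasses import dataclass

@dataclass
class Ext:
    a: float   # representative real number
    A: float   # size of the neutrix (error radius), A >= 0

    def __add__(self, other):
        # alpha + beta = (a + b) + max(A, B): the larger error dominates
        return Ext(self.a + other.a, max(self.A, other.A))

    def __mul__(self, other):
        # alpha * beta = a*b + max(|a|B, |b|A, A*B)
        return Ext(self.a * other.a,
                   max(abs(self.a) * other.A,
                       abs(other.a) * self.A,
                       self.A * other.A))

x = Ext(2.0, 1e-6)   # 2 with an imprecision of order 1e-6
y = Ext(3.0, 1e-9)   # 3 with a much smaller imprecision
print(x + y)         # Ext(a=5.0, A=1e-06)
print(x * y)         # Ext(a=6.0, A=3e-06): |b|A dominates, A*B is negligible
```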
Let α = a + A, β = b + B be external numbers. If they are not disjoint, their intersection is equal to α if A ⊆ B and to β if B ⊆ A. Then, it follows from External induction that, if n ∈ ℕ is standard and α₁, …, α_n are external numbers,
α = ⋃_{1 ≤ i ≤ n} α_i ⇒ ∃i (1 ≤ i ≤ n) (α = α_i).
An order relation is given as follows.
Definition 6. 
Let α, β ∈ E. We define
α ≤ β ⇔ ∀a ∈ α ∃b ∈ β (a ≤ b).
If α ∩ β = ∅ and α ≤ β, then ∀a ∈ α ∀b ∈ β (a < b), and we write α < β.
For neutrices, the order relation corresponds to inclusion. Hence, the maximum of two neutrices, as present in Definition 5, is a maximum both in the sense of set theory and of the order relation. Note that 0 ≤ A for every neutrix A, so neutrices are non-negative, in line with their interpretation in error analysis.
The pointwise Minkowski operations induce a close relation to operations on the real numbers, so practical calculations with external numbers tend to be quite straightforward.
Some care is needed with distributivity. The multiplication rule of Definition 5 (item 2) corresponds to a distributivity property and is often used in the present article.
Note, however, that
0 = 0 · ⊘ = (1 − 1)⊘ ⊊ 1 · ⊘ − 1 · ⊘ = ⊘ − ⊘ = ⊘ + ⊘ = ⊘.
Remark 2. 
An external set is said to be definable if it is defined by a formula of BST . This is the case for the neutrices of Example 1. From now on, it will be assumed that every neutrix is a (definable) external set.

4. Polynomials, Main Theorem

We study polynomial equations and inclusions in the structure of external numbers E . Because this structure has somewhat less rigidity than a field, and in particular, the distributivity law does not hold in full generality, not every polynomial—result of a finite number of algebraic operations on external numbers and a free variable—can be reduced to a sum of monomials. Also, it is not always possible to reduce an equation to zero. However, the right-hand member of a polynomial equation can be reduced to a neutrix; in many cases, we may even assume that the neutrix is idempotent for multiplication. In addition, it will be shown that a polynomial equation with an external unknown may be seen as a family of equations of polynomials that are structured as sums of monomials with a real variable. This leads to further simplifications, for the distributive law does hold when the common multiplier is real. It may happen that there is an inconsistency if we speak about polynomial equations, for the set of all elements satisfying the equation does not fill up the whole neutrix figuring in the right-hand member. So, in general, we should study inclusions instead of equations, although, as we will see, a significant class of inclusions reduce to equations.
Putting these arguments together, we will study polynomial inclusions with real variables, with a neutrix on the right-hand side.
We recall that the structure of external numbers (E, +, ·, ≤) contains a copy of the real numbers, in the form of singletons. For convenience, we may identify the singletons with their elements, and vice versa.
Definition 7 
(Polynomial). A polynomial R ( x ) over E is the result of a standard finite number of operations on elements of E and an arbitrary real number x. The class of all polynomials will be denoted by P . We call
G ≡ ℝ ∪ N ∪ {x}
the generator class of P .
We may identify R(x) with a polynomial function R : ℝ → E. Two polynomials P(x) and Q(x) are equivalent if (with abuse of language) the associated polynomial functions satisfy P(x) = Q(x) for all x ∈ ℝ; again with abuse of language, we may simply write P(x) = Q(x) if the polynomials are equivalent.
Assume the polynomial R ( x ) in Definition 7 is the result of a standard finite number of operations on real numbers. Then, the polynomial is both real and internal.
Remark 3. 
From now on, we will always assume that a polynomial contains the variable x non-trivially, i.e., it cannot be reduced to a constant polynomial.
Definition 8. 
Let x be a free variable ranging over ℝ, R(x) be a non-zero polynomial, and N be a neutrix.
1. 
We call the relation
R(x) ⊆ N
a polynomial inclusion.
2. 
The solution set S of the inclusion (4) is defined by
S = {x ∈ ℝ : R(x) ⊆ N}.
3. 
A root ρ of the inclusion (4) is a maximal convex subset of the solution set S.
Definition 9. 
Let x be a free variable ranging over ℝ, P(x) be an (internal) real polynomial, and N be a neutrix. Then, the relation
P(x) ⊆ N
will be called a real polynomial inclusion.
In line with the identification of a real number and its singleton, we may as well write the relation (5) in the form P(x) ∈ N.
Theorem 1 
(Main Theorem). Consider the structure of external numbers (E, +, ·, ≤). Let N be a neutrix. Let R be a polynomial.
1. 
The polynomial neutrix inclusion (4) is equivalent to a set of real polynomial inclusions P₁(x) ∈ I₁, …, P_m(x) ∈ I_m, where m ∈ ℕ is standard, P₁, …, P_m are real polynomials, and I₁, …, I_m are idempotent neutrices.
2. 
Assume the polynomial neutrix inclusion (4) admits a root ρ. Then, ρ is an external number of the form
ρ = r + tI,
with r ∈ ℝ, I = I_k for some k with 1 ≤ k ≤ m, and t ∈ £_{I_k}.
The reason that it is possible to reduce the neutrix inclusion (4) to a system of neutrix inclusions for polynomials with real coefficients lies in the fact that a polynomial over external numbers in the sense of Definition 7 may be given a convenient structure in terms of neutrices and real polynomials. To see this, we introduce some terminology and notation.
Definition 10. 
Let m be a standard natural number. A polynomial P(x) is called a polynomial represented by monomials if
P(x) = α_m x^m + ⋯ + α₁x + α₀,
where α₀, α₁, …, α_m are external numbers. The degree of P(x) is the maximal index k with 0 ≤ k ≤ m such that α_k is zeroless. If α₀, α₁, …, α_m are real, the degree of P(x) is defined as usual.
Observe that a real polynomial satisfying Definition 7 has standard degree and is always equivalent to a polynomial represented by monomials, so without restriction of generality, we may always assume that real polynomials are of this form.
Definition 11. 
Let m be a standard natural number. Let N₁, N₂, …, N_m be neutrices and R₁, R₂, …, R_m be real polynomials. Then, the polynomial
Q(x) = N₁R₁(x) + N₂R₂(x) + ⋯ + N_mR_m(x)
will be called a neutricial polynomial.
Definition 12. 
Let n be a standard natural number. A polynomial R(x) is called a structured polynomial if
R ( x ) = P ( x ) + Q ( x ) ,
where P ( x ) is a real polynomial of degree n and Q ( x ) is a neutricial polynomial. The class of all structured polynomials will be denoted by S .
Theorem 2 
(Equivalence with structured polynomials). Every polynomial over E is equivalent to a structured polynomial.
By Theorem 2, the polynomial inclusion (4) reduces to the system
P(x) ∈ N, Q(x) ⊆ N,
where P , Q are given by (8).
The Main Theorem and Theorem 2 will be proved in Section 6. In particular, we will see (Theorem 14) that the neutrix inclusion Q ( x ) N may be reduced to a system of real polynomial neutrix inclusions.

5. Background on Neutrices and External Numbers

5.1. Algebraic Properties of External Numbers

The next theorems recall the basic algebraic properties of the external numbers.
If one restricts operations to one specific neutrix, we obtain structures similar to quotient groups, or even rings and fields. Taking all possible neutrices leads to a structure called quotient class in [21]. A full list of axioms for the operations on the external numbers has been given in [4], leading to a stronger structure called a Completely Arithmetical Solid ( CAS ) . Proofs can be found in the latter two references.
Theorem 3. 
The structures (E, +) and (E ∖ N, ·) are regular commutative semigroups.
Note that every external number α = a + A carries its own neutral element A, for a + A + A = a + A and a + A − (a + A) = A − A = A. If α is zeroless, its individual unity is given by 1 + R(α), for (a + A)(1 + A/a) = a + A + A = a + A and (a + A)/(a + A) = (1 + A/a)(1 + A/a) = 1 + A/a + A/a = 1 + A/a.
In order to present a criterion for distributivity, we introduce first some notions.
To start with, we define more formally the relative precision of Formula (1).
Definition 13. 
Let α, β be external numbers. The relative precision R(α) of α is the set of all real numbers that leave α invariant, i.e., R(α)α = α. We say that α is relatively more precise than β if R(α) ⊆ R(β). If R(α) = {1}, we say that α is precise.
Note that if α ∈ ℝ, α ≠ 0, then R(α) = {1}; hence, non-zero real numbers are precise. The relative imprecision of a zeroless external number α = a + A is indeed given by (1). The relative imprecision of a neutrix N is the set @_N of Definition 3 (item 3). We return to the relative imprecision of neutrices in Section 5.2.
Definition 14. 
Let N be a neutrix, and α , β be external numbers. We say α and β are opposite with respect to N if ( α β ) N N .
Theorem 4. 
Let α, β, γ ∈ E. Then, α(β + γ) = αβ + αγ if and only if α is relatively more precise than β or than γ, or β and γ are not opposite with respect to N(α).
We present now some important special cases of distributivity.
Theorem 5. 
Let α, β, γ ∈ E. Then, α(β + γ) = αβ + αγ in each of the following cases:
1. 
If β and γ have the same sign.
2. 
If α is real.
3. 
If β or γ is a neutrix.
4. 
If α N , and β and γ are not opposite with respect to α.
Theorem 6. 
The order relation ≤ on E is total, and compatible with addition and multiplication, in the following sense:
1. 
α , β , γ E α β α + γ β + γ .
2. 
α , β E N ( α ) α N ( β ) β N ( α β ) α β .
3. 
α , β , γ E N ( β ) β α N ( γ ) γ β γ α γ .
4. 
Whenever α , β , γ E , if N ( α ) < α and β γ , then α β α γ .
5. 
Whenever α , β , γ E , if N ( β ) β γ , then N ( α ) β N ( α ) γ .
The order relation also satisfies a form of completeness. In the next theorem, we use the common notation for open and closed intervals.
Theorem 7 
(Generalized Dedekind Completeness). Let L ⊆ ℝ be a definable lower half-line. Then, there exists a unique external number α such that either L = (−∞, α] or L = (−∞, α).

5.2. Idempotent Neutrices

In this subsection, we study idempotent neutrices in more detail. Most of the notions and properties originate from [2], which also contains complete proofs (see also ([4] Section 4.5)). An important property states that every neutrix is proportional to an idempotent neutrix. We add some properties on the division of neutrices, which are relevant for the solution of polynomial equations of the lowest degree, i.e., when all coefficients of the polynomial are neutrices.
Theorem 8. 
Let N ∈ N. Then there exists a real number p ≠ 0 and a unique idempotent neutrix I such that I = pN.
Theorem 8 expresses a form of completeness. To illustrate this, assume that N is an external neutrix containing 1. For large q it may be expected that the product of q N with itself is bigger than q N , and for small q it may be expected that the product of q N with itself is less than q N . Indeed, for q N , we have
q 2 ( q N ) ( q N ) , q 2 q N
and, since N / q ,
1 q 2 N q 2 , 1 q 2 N q 2 N q 2 = N q N q 1 q 2 q 2 = q 2 .
Theorem 8 expresses that, for q = p somewhere in between, products of p N with itself are equal to p N . Its proof depends on the General Dedekind Completeness Theorem 7, which, as we saw, expresses a completeness property relative to addition: at the top of a definable external lower-half line L, or just beyond, one finds a sum r + M of a real number r and a neutrix M (which satisfies M + M = M ). Otherwise said, the lower half-line K r + L is itself idempotent for addition.
Definition 15. 
Let N ⊆ ℝ be a neutrix. Let I be the neutrix which is idempotent for multiplication, as given by Theorem 8, i.e., such that N = pI for p ∈ ℝ, p ≠ 0. Then, we call I associated to N by p.
We now consider powers of a neutrix. We define first powers of arbitrary sets by standard natural numbers.
Definition 16. 
Let A ⊆ ℝ and n ∈ ℕ, n ≥ 1 be standard. The n-th power A^n of A is defined by External induction, as follows:
A¹ = A, A^(n+1) = A^n · A.
Observe that A² = A · A. If A is a non-zero neutrix, by choosing a positive, respectively, a negative element of A, its square contains negative elements. So A² ⊋ {x² : x ∈ A}.
Proposition 3. 
Let I be an idempotent neutrix and n N be standard. Then I n = I . In general, if A is a neutrix, also A n is a neutrix.
Proof. 
One proves by External induction that I n = I for all standard n N . Let A be a neutrix, I be the idempotent neutrix associated to A and p > 0 be such that I = p A . Then A n = I / p n , which is a neutrix. □
We now turn to fractional powers.
Definition 17. 
Let A ⊆ ℝ and m ∈ ℕ, m ≥ 1 be standard. Then,
A^(1/m) = {x ∈ ℝ : x^m ∈ A}.
Proposition 4. 
Let A be a neutrix and m ∈ ℕ, m ≥ 1 be standard. Then A^(1/m) is a neutrix.
Proof. 
The proposition is a consequence of the following three properties:
  • 0 ∈ A^(1/m), because 0^m ∈ A.
  • Let x ∈ A^(1/m). Then x^m ∈ A. Then also (2x)^m = 2^m x^m ∈ A. Hence 2x ∈ A^(1/m). From this, we derive that 2A^(1/m) = A^(1/m).
  • Let x ∈ A^(1/m) and |y| ≤ |x|. Then |y^m| ≤ |x|^m ∈ A. Hence, |y^m| ∈ A and also y^m ∈ A. From this, we derive that y ∈ A^(1/m).
As expected, idempotent neutrices are invariant for standard fractional powers.
Proposition 5. 
Let I be an idempotent neutrix and m ∈ ℕ, m ≥ 1 be standard. Then, I^(1/m) = I.
Proof. 
Let x ∈ I. Then, x^m ∈ I^m = I. Hence x ∈ I^(1/m), which implies that I ⊆ I^(1/m). Conversely, let x ∈ I^(1/m). Suppose that x ∉ I. Then, |x| > I, so |x|^m > I^m = I. Hence |x| ∉ I^(1/m), and also x ∉ I^(1/m), since I^(1/m) is a neutrix. So, we have a contradiction; hence, x ∈ I. Then, I^(1/m) ⊆ I. We conclude that I^(1/m) = I. □
We cannot as such define multiplicative inverses of neutrices, because they contain 0. It is sometimes useful to distinguish a symmetric inverse, which we define for general symmetric sets, i.e., sets A ⊆ ℝ, A ≠ ∅, such that for all x ∈ A it holds that −x ∈ A.
Definition 18. 
Let A ⊆ ℝ be symmetric. Then, its symmetric inverse Aₛ⁻¹ is defined by
Aₛ⁻¹ = {x : 1/x ∉ A} ∪ {0}.
Observe that, when A, B are neutrices,
A ⊆ B ⇒ Aₛ⁻¹ ⊇ Bₛ⁻¹.
Examples of symmetric inverses are given by £ₛ⁻¹ = ⊘ and ⊘ₛ⁻¹ = £.
Proposition 6. 
Let N be a neutrix. Then the following hold:
1. 
Nₛ⁻¹ is a neutrix.
2. 
If N is idempotent, also Nₛ⁻¹ is idempotent.
3. 
If I is an idempotent neutrix and p ∈ ℝ is such that N is associated to I by p, then Nₛ⁻¹ is associated to Iₛ⁻¹ by 1/p.
Proof. 
  • Clearly, 0 N s 1 . Let x N s 1 , x 0 . Then 1 x N , hence also 1 2 x N . So, 2 x N s 1 . Let y be such that | y | | x | . By symmetry, 1 | x | N , so 1 | y | 1 | x | N . Hence y N s 1 . It follows that N s 1 is a neutrix.
  • Assume that N is idempotent. Let x N s 1 , x 0 . Then, 1 x 2 N , else 1 x N 1 / 2 = N , a contradiction. Hence, x 2 = 1 1 / x 2 N s 1 , which implies that N s 1 is idempotent.
  • It holds that
    N s 1 = x R : 1 x N { 0 } = x R : 1 p x I { 0 } = y p R : 1 / y I { 0 } = I s 1 p .
    Hence, N s 1 is associated to I s 1 by 1 / p .
We now study the effect on a given neutrix by multiplying it by some factor or some other neutrix. Due to Theorem 8, we may as well restrict ourselves to idempotent neutrices.
Observe that the set of all real numbers that are appreciable with respect to I corresponds to R ( I ) .
The next proposition gives more detailed information on the sets £_I and ⊘_I, and a fortiori on the relative precision R(I) = @_I.
Proposition 7. 
Let I be an idempotent neutrix.
1. 
£ I is an idempotent neutrix containing I and containing £.
2. 
I is an idempotent neutrix contained in I and contained in ⊘.
3. 
Appliedmath 05 00120 i006  = 1 I { 0 } .
4. 
If 1 I , one has £ I = I and I = I s 1 .
5. 
If I < 1 , one has £ I = I s 1 and I = I .
6 
I = I I = £ I .
Observe that, if I , J are idempotent neutrices
J I J s 1 £ I .
The relative imprecision of any neutrix N is equal to the relative imprecision of the unique idempotent neutrix I associated to N. Indeed, if p ∈ ℝ is such that N = pI,
R(N) = {x ∈ ℝ : Nx = N} = {x ∈ ℝ : pIx = pI} = {y ∈ ℝ : Iy = I} = R(I).
As a consequence, it suffices to study the properties of the relative imprecision of idempotent neutrices.
Proposition 8. 
Let I be an idempotent neutrix.
1. 
R ( I ) = 1 R ( I ) = @ I @ .
2. 
R ( I ) = R I s 1 .
3. 
If 1 I , one has R ( I ) = I I s 1 .
4. 
If 1 I , one has R ( I ) = I s 1 I .
We now recall Koudjeti’s theorem, which says that the product of two idempotent neutrices is equal to one of them. It gives additional conditions to determine which one, which we present in a slightly modified form, using the relative imprecision.
Theorem 9 
(Koudjeti’s theorem). Let I , J R be idempotent neutrices. Then, I J = I or I J = J . More precisely, the following:
1. 
If R ( I ) R ( J ) , then I J = J .
2. 
If R ( I ) = R ( J ) , then I J = min ( I , J ) .
By commutativity, Theorem 9 covers all possible cases. We see that the product of two idempotent neutrices is equal to the neutrix with the largest relative precision. If the relative precision of the two neutrices is equal, the product is equal to the smallest of them.
Of course = £ = £ = and £ £ = £ . The following table illustrates Theorem 9 for some other common idempotent neutrices, where we always assume that ε > 0 , ε 0 .
Appliedmath 05 00120 i004
Theorem 10 below is a direct consequence of Theorems 8 and 9.
Theorem 10. 
Let M , N be neutrices. Let I be the idempotent neutrix associated to M by p and J be the idempotent neutrix associated to N by q, where p , q R . Then
M N = p q I J ,
where I J is given by Theorem 9. Also
M N = q M M N = p N .
The division of neutrices is given in terms of the usual division operator of groups [22]. The division of neutrices is relevant for the transformation of a general polynomial inclusion into a set of real polynomial inclusions, and for the solution of the latter.
Definition 19. 
Let M, N be neutrices. Then,
M : N = {x ∈ ℝ : xN ⊆ M}.
As a consequence (M : N)N ⊆ M.
Proposition 9. 
Let I , J be idempotent neutrices. Then, I:J is an idempotent neutrix.
Proof. 
Let x, y ∈ ℝ be such that xJ ⊆ I and y is limited. Then yxJ = xyJ ⊆ xJ ⊆ I. Hence, £x ⊆ I : J. Moreover, x²J = x²J² ⊆ I² = I. Then, it follows from Proposition 2 that I : J is an idempotent neutrix. □
The next theorem determines the result of the division of an idempotent neutrix I and J in terms of the neutrices themselves and their symmetric inverses; it mentions conditions with respect to I and £ I , which, due to Proposition 7, may be rewritten in terms of conditions on I and I s 1 .
Theorem 11. 
Let I , J be an idempotent neutrix. Then, the following:
1. 
If R ( I ) R ( J ) , then I : J = J s 1 .
2. 
If R ( J ) R ( I ) , then I : J = I .
3. 
If R ( I ) = R ( J ) , then
I : J = J s 1 I , J < 1 I e l s e .
Proof. 
We repeatedly apply Koudjeti’s Theorem.
  • Observe that J I or J £ I . Assume first that J I . Then, J s 1 £ I . By Theorem 9 (item 1) it holds that
    J s 1 J = J I I .
    It follows from Proposition 9 that we must show that J s 1 is the maximal idempotent neutrix with this property. Let K J s 1 be an idempotent neutrix. Then, R ( K ) R ( J s 1 ) = R ( J ) . Hence, by Theorem 9 (item 1)
    K J = K J s 1 I .
    Hence, I : J = J s 1 .
    Secondly, assume that J £ I . Then, J s 1 J = J s 1 I I . To see that J s 1 is the maximal idempotent neutrix with this property, let again K J s 1 be an idempotent neutrix. such that K J I . It certainly holds that K J , hence R ( K ) R ( J ) . Then
    K J = J £ I I .
    Hence, I : J = J s 1 .
  • By Theorem 9 (item 1) it holds that I J = I . It also holds that I J £ I . To see that I is the maximal idempotent neutrix with this property, let K I be an idempotent neutrix. If I = £ I , we have K J = K I . If I = I , we have K J min ( K , J ) I . We see that in both circumstances I : J = I .
  • Assume first that I = J = I . Then £ I J = I , and if K £ I is idempotent, we have K J = K I . Hence, I : J = £ I = £ J = J s 1 . Secondly, assume that I = I , J = £ I . Then I J = I . As in the first case, we derive that J is maximal with this property. Hence I : J = I . Finally, assume I = £ I . Then I J I if J = I and also if J = £ I . As above, we derive that I is maximal with this property.
We obviously have : = £ : = £ : £ = £ and : £ = . The following table illustrates Theorem 11 for the idempotent neutrices of (11).
Appliedmath 05 00120 i005
Corollary 1. 
Let M , N be neutrices. Let I be the idempotent neutrix associated to M by p and J be the idempotent neutrix associated to N by q, where p , q R ; we assume that q 0 . Then
M : N = p q I : J ,
where I : J is given by Theorem 11. As a consequence,
M : N = M q M N = p N s 1 .
Theorem 12 gives necessary and sufficient conditions to ensure that the result of the product of idempotent neutrix I by its division by another idempotent neutrix J is exactly equal to I.
Theorem 12. 
Let I , J be an idempotent neutrix. Then, the following properties hold:
1. 
If R ( J ) R ( I ) , then J ( I : J ) = I .
2. 
If R ( I ) R ( J ) , then J ( I : J ) I .
3. 
If R ( I ) = R ( J ) , then J ( I : J ) = I I < 1 < J .
Proof. 
1. The equality is a consequence of Theorem 11 (item 2)
2.
The strict inclusion is a consequence of Theorem 11 (item 1) and (12).
3.
We apply Theorem 11 (item 3), Assume first that J < 1 < I . Then J ( I : J ) = I I = I £ I = I . In all remaining cases, we have J ( I : J ) = J I = I . Indeed, if I , J < 1 , then J ( I : J ) = J J s 1 = J = I , and if 1 < J , then J ( I : J ) = J I = I .
To illustrate Theorem 12 (item 3), we observe that R ( ) = R ( £ ) = @ , and £ ( : £ ) = £ = .

5.3. External Equations

We study equations, or more properly inclusions, of the type
f(x) ∈ α,
where f : R R is internal and α = a + A , with A a neutrix. The second member allows for some imprecision and, in a sense, we search for approximate solutions of f ( x ) = a . A classical theory of approximate equations is given by asymptotic equations. It is shown in ([4] Theorem 9.2.1) that with respect to the inclusion (13) often simplifications are possible, i.e., instead of trying to resolve the inclusion (13) for a possibly complex given function f, one may as well use an approximation g of f which could be simpler. The Substitution theorem in question concerns both additive and multiplicative approximations; in the latter case, it is supposed that f is non-zero. In the context of polynomials, we study the influence of imprecisions of multiplicative type, but due to the existence of roots, sometimes the polynomial becomes zero. Theorem 13 extends the Substitution theorem to multiplicative approximations by functions having common zeros.
Theorem 13 
(Substitution theorem). Let A be a neutrix. Consider the inclusion
f(x) ∈ A.
Let S ⊆ ℝ.
Let g : S → ℝ be internal such that
f(x)/g(x) ∈ R(A) for all x ∈ S with f(x) ≠ 0, and g(x) = 0 for all x ∈ S with f(x) = 0.
Then, on S, the inclusion (14) and the inclusion g(x) ∈ A have the same solutions.
Proof. 
Let x ∈ S. Assume f(x) ∈ A. If f(x) = 0, also g(x) = 0 ∈ A. If f(x) ≠ 0, it holds that g(x) ∈ R(A)f(x) ⊆ R(A)A = A.
Conversely, let g(x) ∈ A. Assume first that g(x) ≠ 0. Then, f(x) ∈ g(x)R(A) ⊆ AR(A) = A. If g(x) = 0, also f(x) = 0, else, as we saw above, g(x) ∈ R(A)f(x), which does not contain 0, a contradiction. Hence, f(x) = 0 ∈ A indeed.
Combining, we derive that, whenever x ∈ S, it holds that f(x) ∈ A ⇔ g(x) ∈ A.
It follows from (10) that if A is associated to some idempotent neutrix I by some real number p, one has R ( A ) = R ( I ) , while R ( I ) may be calculated from Theorem 8. Anyhow, R ( A ) @ , and in many practical cases approximations are multiplicative near-equalities, and then we have approximations within 1 + @ . In a sense, the Substitution theorem comes close to Van der Corput’s purpose when designing his functional neutrices: a step forward in neglecting unnecessary complications.
Example 2. 
The Main Theorem states that the roots of polynomials of limited degree are external numbers, i.e., they have the form of a real number which is symmetrically surrounded by a neutrix. This example illustrates that the root of an inclusion of a polynomial of unlimited degree does not need to be an external number; in fact, it has the form of an external interval, which is symmetric with respect to one of its elements.
Let ω N be unlimited. We will show that the solution of the inclusion
x ω ω ! .
is given by the open external interval
ω e log ( ω ) 2 ω + £ ω , ω e + log ( ω ) 2 ω + £ ω .
Note that the external interval is symmetric with respect to the origin.
To see this, we simplify the inclusion (16), where we approximate ω! with the help of the nonstandard form of Stirling’s formula, which says that
ω! = ω^ω e^(−ω) √ω √(2π) (1 + ε)
for some ε 0 . Note that R ( ) = @ . Then, by the Substitution theorem, instead of (16) we could solve
x ω ω ω e ω ω 2 π ,
or even
x ω ω ω e ω ω .
This reduces to x ω ω ω e ω ω 1 / 2 , or
x ω e ω 1 / ( 2 ω ) 1 / ω .
Using Taylor expansions, we see that
ω^(1/(2ω)) = exp(log(ω)/(2ω)) ∈ 1 + log(ω)/(2ω) + ⊘/ω
and
@^(1/ω) = exp(£/ω) = 1 + £/ω,
hence 1 / ω is given by the open external interval
1 / ω = 1 + £ ω , 1 + £ ω .
Because
1 + log ( ω ) 2 ω + ω ± 1 + £ ω = ± 1 + log ( ω ) 2 ω + £ ω ,
Formula (18) reduces to
x ω e log ( ω ) 2 ω + £ ω , ω e + log ( ω ) 2 ω + £ ω .
We conclude that the root of (16) is the open external interval given by (17).
See also [23], where the equation (16) was used to determine the domain where the remainder of certain Taylor polynomials of degree ω is infinitesimal, with ad hoc methods.
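A numerical impression of the simplification used above (with a concrete large ω instead of an unlimited one, so the sketch is only indicative): the ω-th root of ω! agrees with (ω/e)·ω^(1/(2ω)) up to a relative correction of order 1/ω, i.e. of the size of the £/ω terms appearing in the solution.

```python
import math

# Illustration only: compare (omega!)**(1/omega) with (omega/e)*omega**(1/(2*omega)).
omega = 10_000
boundary = math.exp(math.lgamma(omega + 1) / omega)     # (omega!)**(1/omega)
approx = (omega / math.e) * omega ** (1 / (2 * omega))
print(boundary, approx, boundary / approx - 1)          # relative difference ~1e-4
```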

6. Proof of the Main Theorem

Remark 4. 
The study of real polynomial equations P(x) = 0 is part of classical analysis. The solution of a real polynomial inclusion P(x) ∈ ℝ is trivially equal to ℝ. As a consequence, we may restrict our study to polynomial inclusions with respect to external neutrices. We recall that every external neutrix is supposed to be definable in the sense of Remark 2.
Part 1 of the Main Theorem concerns the transformation of a polynomial inclusion into a system of real polynomial inclusions. This will be realized in Section 6.1. The principal tools of the proof are the multiplication rules for the external numbers.
Part 2 of the Main Theorem concerns the shape of the roots of such a real polynomial inclusion. The principal step of the proof consists in replacing the original polynomial inclusion by a simpler inclusion, concerning a polynomial of lower degree which is locally close to the original polynomial; this enables an induction step. This step is based upon the general method of substitution of an inclusion by an inclusion of lower complexity of Section 5.3, using a notion of near-root and the Fundamental Theorem of Algebra.

6.1. Structuring Polynomials

First, we prove Theorem 2 on the transformation from a general polynomial into a structured polynomial. Then, Theorem 14 shows how a neutrix inclusion concerning a structured polynomial may be transformed into a finite set of real polynomial inclusions. We start with some notation and state some preliminary results on the class of general polynomials P and its generator class G of Definition 7, and the class of structured polynomials S of Definition 12.
Notation 1. 
If C is a class, C ¯ will indicate the class of all expressions which result from applying a standard finite number of additions and multiplications to elements of C, up to equivalence.
Observe that
G ⊆ S
S ⊆ P
Ḡ = P
P = P̄.
Proposition 10. 
S = S ¯ .
Proof. 
Clearly S S ¯ .
Let k , m N be standard. For all 1 i k , i + 1 j m , let A i , B j be non-zero neutrices. Let P 0 ( x ) , P 1 ( x ) , , P k ( x ) , Q 0 ( x ) , Q k + 1 ( x ) , , Q n ( x ) be real polynomials, and S ( x ) = P 0 ( x ) + i = 1 k A i P i ( x ) , T ( x ) = Q 0 ( x ) + j = i + 1 m B j Q j ( x ) . Then S ( x ) + T ( x ) is equivalent to the structured polynomial
( P 0 ( x ) + Q 0 ( x ) ) + i = 1 k A i P i ( x ) + j = i + 1 m B j Q j ( x ) .
It follows from the rule for multiplication of external numbers that S ( x ) T ( x ) is equivalent to the structured polynomial
P 0 ( x ) Q 0 ( x ) + j = i + 1 m B j P 0 ( x ) Q j ( x ) + i = 1 k A i Q 0 ( x ) P i ( x ) + i = 1 k j = i + 1 m A i B j P i ( x ) Q j ( x ) .
Then one shows with External induction that S is closed under a standard finite number of additions and multiplications. Hence, S ¯ S . Combining, we obtain that S = S ¯ . □
Proof of Theorem 2. 
It follows from (22), (21), (19), respectively (20) that
P = Ḡ ⊆ S̄ = S ⊆ P.
Hence, S = P . □
Example 3. 
Let P(x) = (x − (1 + ⊘))(x − (−1 + ⊘)). Then, P(x) is equivalent to S(x) ≡ x² − 1 + ⊘(x − 1) + ⊘(x + 1) + ⊘. Indeed,
P(x) = (x − (1 + ⊘))(x − (−1 + ⊘)) = x² − 1 + ⊘(x − 1) + ⊘(x + 1) + ⊘² = x² − 1 + ⊘(x − 1) + ⊘(x + 1) + ⊘ = S(x).
Theorem 14. 
Let R(x) = P(x) + Q(x) be a structured polynomial over E, where m ∈ ℕ is standard and Q(x) = N₁P₁(x) + N₂P₂(x) + ⋯ + N_mP_m(x) with N₁, …, N_m neutrices and P₁, …, P_m real polynomials. Let N be a neutrix. Then, the neutrix inclusion (4) is equivalent to the set of neutrix inclusions
P(x) ∈ N, P₁(x) ∈ N : N₁, …, P_m(x) ∈ N : N_m.
Proof of Theorem 14. 
Let x R be such that R ( x ) N . Then P ( x ) N and Q ( x ) N by (9), hence
P ( x ) N P 1 ( x ) N 1 N P m ( x ) N m N .
By Definition 19, we have
P ( x ) N P 1 ( x ) N : N 1 P m ( x ) N : N m ,
that is, x satisfies (23).
On the other hand, let x R be such that
P ( x ) N P 1 ( x ) N : N 1 P m ( x ) N : N m .
Then,
P ( x ) N P 1 ( x ) N 1 N P m ( x ) N m N ,
so P ( x ) + P 1 ( x ) N 1 + + P m ( x ) N m N , hence R ( x ) = P ( x ) + Q ( x ) N . □
Example 4. 
Let S(x) ≡ x² − 1 + ⊘(x − 1) + ⊘(x + 1) + ⊘ be as given by Example 3. Consider the polynomial inclusion S(x) ⊆ ⊘. It may be reduced to the system of real polynomial inclusions
x² − 1 ∈ ⊘, ⊘(x − 1) ⊆ ⊘, ⊘(x + 1) ⊆ ⊘, ⊘ ⊆ ⊘,
which leads to
x² − 1 ∈ ⊘, x − 1 ∈ ⊘ : ⊘ = £, x + 1 ∈ ⊘ : ⊘ = £, 1 ∈ ⊘ : ⊘ = £.
If ε > 0, ε ≃ 0, the polynomial inclusion S(x) ⊆ ε⊘ is inconsistent, and the above procedure would also lead to an inconsistent system, where the last line would be 1 ∈ ε⊘ : ⊘ = ε£.
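Replacing ⊘ by a small tolerance and £ by a large bound, the reduced system can be checked numerically; the sketch below is purely illustrative, with ad hoc values for the tolerance and the bound and a helper name chosen here.

```python
# Illustration only: model ⊘ by a small tolerance and £ by a large bound, and
# test the reduced system x**2 - 1 in ⊘, x - 1 in £, x + 1 in £, 1 in £.
tol, bound = 1e-9, 1e6

def satisfies_system(x):
    return (abs(x**2 - 1) <= tol and abs(x - 1) <= bound
            and abs(x + 1) <= bound and 1 <= bound)

print(satisfies_system(1 + 1e-10))   # True: close to the root 1 + ⊘
print(satisfies_system(-1 - 1e-10))  # True: close to the root -1 + ⊘
print(satisfies_system(1.001))       # False: fails x**2 - 1 in ⊘
```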

6.2. From I-Near Roots to Roots

Proposition 11 
(Equivalence to reduced-idempotent case). Let P ( x ) be a real polynomial and N be a neutrix. Then there exists a reduced real polynomial Q ( y ) and an idempotent neutrix I such that
P(x) ∈ N ⇔ Q(y) ∈ I,
where y is a multiple of x.
Proof. 
Let P(x) = a_n x^n + ⋯ + a₁x + a₀, where a_i ∈ ℝ, i = 0, 1, …, n, and a_n ≠ 0. By Theorem 8, we have N = pI, where 0 ≠ p ∈ ℝ and I is idempotent. We may assume that p > 0. Put
x = (p/|a_n|)^(1/n) y.
Then P(x) ∈ N ⇔ Q(y) ∈ I, where y = (|a_n|/p)^(1/n) x. □
By the Fundamental Theorem of Algebra [22] a reduced real polynomial P can be written in the form
P(x) = (x − r₁)⋯(x − r_l)((x − s₁)² + c₁)⋯((x − s_m)² + c_m),
where r₁, …, r_l, s₁, …, s_m, c₁, …, c_m ∈ ℝ, c₁, …, c_m > 0.
Definition 20. 
Let P be a real reduced polynomial and I be an idempotent neutrix. Consider the real polynomial inclusion
P(x) ∈ I.
A real number r is called an I-near root of P if P(r) ∈ I. The multiplicity of an I-near root r is the sum of the degrees of all the factors in the product representation (24) of P
P(r) = (r − r₁)⋯(r − r_l)((r − s₁)² + c₁)⋯((r − s_m)² + c_m)
that belong to I.
Every root r_i of P is an I-near root of (25), and all the elements of a root ρ of (25) are I-near roots.
Notice that x² + 1 has no roots, but if r is limited, then r is an £-near root, with multiplicity 2, with respect to the inclusion x² + 1 ∈ £.
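As a numerical analogue of Definition 20 (illustrative only, with a tolerance standing in for the idempotent neutrix I and ad hoc factors chosen here), one can check where a factored polynomial falls below the tolerance; such points cluster around the real roots r_i and, when c_j lies below the tolerance, also around s_j.

```python
import numpy as np

# Illustration only: tol plays the role of I; points with |P(x)| <= tol are the
# analogue of I-near roots of the factored polynomial.
r1, r2 = 0.0, 5.0        # linear factors (x - r1), (x - r2)
s, c = 2.0, 1e-8         # quadratic factor (x - s)**2 + c with c below tol
tol = 1e-6

def P(x):
    return (x - r1) * (x - r2) * ((x - s) ** 2 + c)

for t in (r1, r2, s, 3.0):
    xs = t + np.linspace(-0.05, 0.05, 100_001)
    print(t, bool(np.any(np.abs(P(xs)) <= tol)))   # True near r1, r2, s; False near 3.0
```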
From now on, it is assumed that P is a reduced polynomial of standard degree n, I is an idempotent neutrix, and the inclusion P(x) ∈ I has at least one I-near root.
In (26), we let J = {j ∈ ℕ : c_j ∈ I} and
G ≡ ⋃_{1 ≤ i ≤ l} (r_i + I) ∪ ⋃_{j ∈ J} (s_j + I).
Observe that if j J ,
c j I
is a necessary condition for s j to be a I-near root. Indeed, if c j > I , it holds that ( r s j ) 2 + c j > I , and if c j < I , we have ( r s j ) 2 c j + I 2 = c j + I , while also c j > I . Using a Taylor expansion, we see that
r s j = ± c j + I = ± c j 1 + I c j = ± c j + I c j ,
hence (25) has two distinct roots, so the multiplicity of s j cannot be equal to 2.
We use the following notation.
L = { i { 1 , , l } : r i r + I } M = { j { 1 , , m } : s j r + I c j I } L = { 1 , , l } L M = { 1 , , m } M .
All I-near roots are confined to G, as follows from the next theorem.
Proposition 12. 
Let r be a I-near root of (25). Then r G . Moreover its multiplicity is at least equal to 1.
Proof. 
We denote the k t h factor in (24) by f k . So
P ( r ) = f 1 f l ( f l + 1 ) 2 ( f m ) 2 I .
If f k > I for all these k, it holds that | P ( r ) | = | f 1 f l ( f l + 1 ) 2 ( f m ) 2 | > I n = I , a contradiction. Hence f k I for some k. Then r r i I for some i { 1 , . . . , m } or r s j I for some j { 1 , . . . , l } . In the first case the multiplicity of r is at least equal to 1, and r r i + I G . In the first case the multiplicity of r is at least equal to 2, and ( r s j ) 2 + c j I , while c j I by (28). Hence ( r s j ) 2 I = I 2 . This implies that r s j I , hence r s j + I G . □
Theorem 15. 
Assume r is an I-near root of multiplicity n. Then ρ = r + I is a root of P.
Proof. 
We saw that c j I is necessary condition for s j to be I-near root of P. Then every factor of (24) is contained in I or I 2 ( = I ) , and we have for every r ρ
P ( r ) = ( r r 1 ) ( r r l ) ( r s 1 ) 2 + c 1 ( r s m ) 2 + c m = ( r r + r r 1 ) ( r r + r r l ) × ( r r + r s 1 ) 2 + c 1 ( r r + r s m ) 2 + c m ( I + I ) ( I + I ) ( I + I ) 2 + c 1 ( I + I ) 2 + c m = I n = I .
Hence
P ( ρ ) I .
Theorem 16 
(Main Lemma). Let r be an I-near root of P with multiplicity less than n. Then there exists a reduced polynomial Q, with 1 ≤ g ≡ deg Q < deg P, and d > I such that, for x ∈ r + I, we have
P(x) ∈ I ⇔ Q(y) ∈ I,
where y = x/d^(1/g). Moreover, r/d^(1/g) is an I-near root of Q.
Proof. 
We use the factorization (24) of P and the notation of (29).
Let
Q ( x ) = i L x r i j M ( ( x s j ) 2 + c j ) d = i L | r r i | j M ( ( r s j ) 2 + c j ) g = # L + # M h = # L + # M .
Note that g = deg ( Q ) , hence g n 1 and h 1 . Hence L or M , and we have 1 deg ( Q ) < deg ( P ) . For i L we have | r r i | > I , and for j M we have | r s j | > I or c j > I , which implies that ( r s j ) 2 + c j > I . Hence
d > I # L + # M = I h = I .
Let x r + I . For i L , we have
x r i r r i r r i + I r r i = 1 + I r r i 1 + .
Now let j M . If s j r + I , we have
x s j r s j r s j + I r s j = 1 + I r s j
and
1 + I r s j 2 = 1 + I r s j .
Due to the fact that c j > 0 , we have
( x s j ) 2 + c j ( r s j ) 2 + c j ( 1 + I r s j ) 2 ( r s j ) 2 + c j ( r s j ) 2 + c j = ( r s j ) 2 + c j + I ( r s j ) ( r s j ) 2 + c j = 1 + I ( r s j ) ( r s j ) 2 + c j 1 + I r s j 1 + .
If s j r + I , we have ( r s j ) 2 I and c j > I . Also x s j I , hence ( x s j ) 2 I . Again, we have
( x s j ) 2 + c j ( r s j ) 2 + c j ( r s j ) 2 + c j + I ( r s j ) 2 + c j = 1 + I ( r s j ) 2 + c j 1 + .
Combining these results, we derive that
P ( x ) Q ( x ) d = i L x r i j M ( ( x s j ) 2 + c j ) d ( 1 + ) h = 1 + @ I .
By Theorem 13
P ( x ) I Q ( x ) d I .
Put y = x d 1 / g and
Q y i L y r i d 1 g j M y s j d 1 g 2 + c j d 2 g = i L x d 1 g r i d 1 g j M x d 1 g s j d 1 g 2 + c j d 2 g = Q ( x ) d .
Then Q is reduced and deg Q = deg Q = g . By (31) we have for x r + I
P ( x ) I Q ( y ) I .
In particular
Q r d 1 / g = i L r r i j M ( r s j ) 2 + c j d = Q ( r ) d = P ( r ) I ,
hence r d 1 / g is an I-near root of Q. □
Theorem 17 
(Roots as external numbers; reduced case). Let P be a reduced real polynomial, I be an idempotent neutrix and ρ be a root of the polynomial neutrix inclusion P(x) ∈ I. Then ρ is an external number of the form
ρ = r + tI,
with r ∈ ℝ and t ∈ £_I.
Proof. 
The proof of the theorem is by External induction on the degree n. We distinguish the case n = 1 , the case where n = 2 and P has double I-near roots, and the induction step.
For n = 1 , we have P ( x ) = x + a , where a R . The root of the equation x + a I is the external number ρ a + I .
For n = 2 , let r ρ . By assumption r is an I-near root of multiplicity 2. Now P ( x ) = ( x r 1 ) ( x r 2 ) for some r 1 , r 2 R , or P ( x ) = ( x s ) 2 + c , where s , c R . In the first case r r 1 , r r 2 I . Then r 1 , r 2 r + I . Hence ρ = r + I is a root by Theorem 15. In the second case it holds that c I , hence ( r s ) 2 I . Then s r + I . Again it follows from Theorem 15 that ρ = r + I is a root.
Suppose the theorem is proved for all polynomials of degree n 1 1 . Assume deg P = n . Due to Theorem 15 we need only to consider the case where the multiplicity of any I-near root is at most n 1 . By Theorem 12 every I-near root r ˜ of P is contained in the I-admissible set G given by (27), hence r ˜ r ¯ + I , with r ¯ = r i for some i { 1 , , l } or r ¯ = s j for some j { 1 , , m } , and we choose a particular r ¯ . Because the multiplicity of r ¯ is at most n 1 , by Theorem 16 there exists a polynomial Q with deg Q < n such that for every x r ¯ + I
P ( x ) I Q ( y ) I ,
where for some u > I
y = x u .
By the induction hypothesis every root σ of Q is of the form σ = s + v I with v £ I . By (33) and (34) the root σ corresponds to a root ρ = σ u = r + t I of P, where r = s u and t = v u £ I , noting that u > I . Also
x ρ y = x u σ .
The theorem follows from the fact that r ¯ G is arbitrary. □
Corollary 2. 
Let P be a real polynomial such that deg P ≥ 1, N be a neutrix and ρ be a root of the polynomial neutrix inclusion P(x) ∈ N. Then N(ρ) is proportional to N.
Example 5. 
Let I be an idempotent neutrix and b , c R . Consider the reduced quadratic polynomial
P ( x ) = x 2 + b x + c .
Let D = b 2 4 c be the discriminant. We show that the root formula may be extended to the inclusion P ( x ) I , with a neutrix term equal to I / D . Then P ( x ) = x + b 2 2 D 4 , and the inclusion P ( x ) I becomes
x + b 2 2 D 4 + I .
If D < I , Equation (35) does not have a solution. If D I , the right-hand side of (35) is equal to I, and the external number ρ b 2 + I is a double root. Finally, assume that D > I . Then also D > I . We may write P ( x ) = ( x r 1 ) ( x r 2 ) , with
r 1 = b 2 + D 2 r 2 = b 2 D 2 .
Then $r_1 - r_2 = \sqrt{D} > I$. The method of the proof of Theorem 16 suggests to put $x = r_1 + y$, with $y \in I$. Then, using the Substitution theorem, we derive that
$$P(x) \subseteq I \Longleftrightarrow y\,(r_1 - r_2 + y) \subseteq I \Longleftrightarrow y\,\bigl(\sqrt{D} + y\bigr) \subseteq I \Longleftrightarrow y \subseteq \frac{I}{\sqrt{D}}.$$
Hence the first root $\rho_1$ is given by
$$\rho_1 = -\frac{b}{2} + \frac{\sqrt{D}}{2} + \frac{I}{\sqrt{D}}.$$
In an analogous way we see that the second root $\rho_2$ is given by
$$\rho_2 = -\frac{b}{2} - \frac{\sqrt{D}}{2} + \frac{I}{\sqrt{D}}.$$
As a special case, let $\omega \in \mathbb{N}$ be unlimited. Then the roots $\rho_{1,2}$ of
$$x^2 - 2\omega x + \omega^2 - \omega \subseteq \oslash$$
are $\rho_{1,2} = \omega \pm \sqrt{\omega} + \oslash/\sqrt{\omega}$.
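The following small numerical sketch is ours and is not part of the article: with a large value of ω standing in for an unlimited number, a small perturbation δ of the right-hand side of (38) shifts the roots ω ± √ω by roughly δ/(2√ω), in line with the neutrix term ⊘/√ω.

```python
import math

# Sketch (ours): for large omega, the roots of
#   x^2 - 2*omega*x + omega^2 - omega = delta
# are omega +/- sqrt(omega + delta), so a small delta on the right-hand side
# shifts the roots by about delta / (2*sqrt(omega)).
omega = 10**8
for delta in [0.0, 1e-3, -1e-3, 1e-2]:
    shift = math.sqrt(omega + delta) - math.sqrt(omega)
    print(f"delta = {delta:8.0e}   root shift = {shift: .3e}   "
          f"delta/(2*sqrt(omega)) = {delta / (2 * math.sqrt(omega)): .3e}")
```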
Example 6. 
Let $\varepsilon > 0$, $\varepsilon \simeq 0$. Consider the inclusion
$$x^3 + x \subseteq \pounds\, e^{@/\varepsilon}.$$
Noting that $x^3 + x = x(x^2 + 1)$, that $1 \in \pounds\, e^{@/\varepsilon}$ and that $\pounds\, e^{@/\varepsilon}$ is idempotent, we see that $\pounds\, e^{@/\varepsilon}$ is a triple root of the inclusion (39).
Example 7. 
This example shows that the neutrices in the roots may be of different size. Let ω be positive unlimited and $P(x) = (x - \omega)(x - (\omega+1))(x - 2\omega)$. Consider the inclusion $P(x) \subseteq \pounds$. Then $r_1 = \omega$, $r_2 = \omega + 1$ and $r_3 = 2\omega$ are £-near roots. In fact $r_1, r_2$ are a double £-near root, because $r_2 - r_1 \in \pounds$.
The method sketched in the proof of Theorem 16 suggests to study the inclusion $P(x) \subseteq \pounds$ separately for $x \in \omega + \pounds$ and $x \in 2\omega + \pounds$. In the first case, by the Substitution Theorem, we may simplify this inclusion into $(x - \omega)(x - (\omega+1))(-\omega) \subseteq \pounds$, or
$$(x - \omega)\bigl(x - (\omega+1)\bigr) \subseteq \frac{\pounds}{\omega}.$$
The method suggests now to put $y = x\sqrt{\omega}$. Then (40) is transformed into
$$Q(y) \equiv \bigl(y - \omega\sqrt{\omega}\bigr)\bigl(y - \omega\sqrt{\omega} - \sqrt{\omega}\bigr) \subseteq \pounds.$$
This inclusion has two single £-near roots $s_1 = \omega\sqrt{\omega}$ and $s_2 = \omega\sqrt{\omega} + \sqrt{\omega}$. For $y \in \omega\sqrt{\omega} + \pounds$, with the help of the Substitution Theorem, we may transform (40) into
$$y - \omega\sqrt{\omega} \subseteq \frac{\pounds}{\sqrt{\omega}},$$
with root $\sigma_1 = \omega\sqrt{\omega} + \pounds/\sqrt{\omega}$. For $y \in \omega\sqrt{\omega} + \sqrt{\omega} + \pounds$, we find in a similar way the root $\sigma_2 = \omega\sqrt{\omega} + \sqrt{\omega} + \pounds/\sqrt{\omega}$. In the original coordinates the roots become
$$\rho_1 = \omega + \frac{\pounds}{\omega}, \qquad \rho_2 = \omega + 1 + \frac{\pounds}{\omega}.$$
To determine the third root, we study $P(x) \subseteq \pounds$ for $x \in 2\omega + \pounds$. The Substitution Theorem simplifies this inclusion into $\omega(\omega - 1)(x - 2\omega) \subseteq \pounds$, or even $\omega^{2}(x - 2\omega) \subseteq \pounds$. Hence the third root is given by
$$\rho_3 = 2\omega + \frac{\pounds}{\omega^2}.$$
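A small numerical sketch (ours, not part of the article) of the different sizes of these neutrices: with a large value of ω standing in for an unlimited number, a shift of order 1/ω around the near root ω keeps P limited, whereas around 2ω a shift of order 1/ω² is needed.

```python
# Sketch (ours): around omega a perturbation of size c/omega keeps P limited,
# while around 2*omega a perturbation of size c/omega**2 is needed.
omega = 10**4

def P(x, w=omega):
    return (x - w) * (x - (w + 1)) * (x - 2 * w)

for c in [1.0, 5.0]:
    print(f"c = {c}")
    print("  P(omega   + c/omega)    =", P(omega + c / omega))        # limited, about c
    print("  P(2*omega + c/omega)    =", P(2 * omega + c / omega))     # unlimited, about c*omega
    print("  P(2*omega + c/omega**2) =", P(2 * omega + c / omega**2))  # limited, about c
```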
Example 8. 
Due to the theorem of Galois, there does not exist a general formula in radicals for the roots of polynomials of degree 5 or higher. However, in certain cases solutions still may be found if the polynomial equations are “relaxed”, i.e., we look for near-solutions. An example is given by ([6] Exercise 1.5.6), where it is asked to find approximate roots of $x^5 + \varepsilon x^2 - \varepsilon^2$ for $\varepsilon \simeq 0$. The somewhat similar, but “singular” problem
$$\varepsilon x^5 + x^2 \simeq 1$$
for $\varepsilon \simeq 0$, $\varepsilon > 0$ has been considered in ([4] Example 9.2.1), and solved using the Substitution theorem. In a classical asymptotic setting, approximate solutions of $\varepsilon x^5 + x^2 - 1 = 0$ were studied in ([7] Section 3), with particular emphasis on the domain of validity of the solutions under numerical interpretations. Here we study (41) again, now trying to incorporate the general solution methods of Theorems 1 and 16.
To compare the influence of the variable terms of the polynomial occurring in (41) we distinguish
1. 
$$S_1 \equiv \bigl\{x \in \mathbb{R} : \varepsilon x^5 \in \oslash\, x^2\bigr\} = \oslash/\varepsilon^{1/3}.$$
2. 
$$S_2 \equiv \bigl\{x \in \mathbb{R} : x^2 \in @\,\varepsilon x^5\bigr\} = @/\varepsilon^{1/3}.$$
3. 
$$S_3 \equiv \bigl\{x \in \mathbb{R} : x^2 \in \oslash\,\varepsilon x^5\bigr\}.$$
On S 1 the term x 2 is dominant, on S 3 the term ε x 5 is dominant, and on S 2 the terms have comparable influence. We treat all cases separately, starting with the extreme cases. Observe that we may rewrite (41) into
$$\varepsilon x^5 + x^2 \subseteq 1 + \oslash$$
and that $\sqrt{1 + \oslash} = 1 + \oslash$.
1. 
On $S_1$ one has $x^2 + \varepsilon x^5 \subseteq x^2(1 + \oslash)$. By the Substitution theorem the simpler equation $x^2 \subseteq 1 + \oslash$ has the same roots; since $\sqrt{1+\oslash} = 1+\oslash$, these are $\rho_1 = 1 + \oslash$ and $\rho_2 = -1 + \oslash$. Both are admissible, for they are subsets of $\oslash/\varepsilon^{1/3}$.
2. 
On $S_3$ it holds that $x^2 + \varepsilon x^5 \subseteq \varepsilon x^5(1 + \oslash)$. Again by the Substitution theorem one may as well solve $\varepsilon x^5 \subseteq 1 + \oslash$. Its solution is $(1 + \oslash)/\varepsilon^{1/5}$. However, the solution must be rejected, for it is not contained in $S_3$.
3. 
On $S_2$ the Substitution theorem provides no simplification. Put $a = \varepsilon x^3$. Then a is appreciable. The inclusion (42) becomes $a^{2/3} + a^{5/3} = a^{2/3}(a + 1) \subseteq (1 + \oslash)\,\varepsilon^{2/3}$, which we may transform into the polynomial inclusion
$$a^2 (a + 1)^3 \subseteq (1 + \oslash)\,\varepsilon^2.$$
This is again an inclusion with respect to a polynomial of degree 5. However, we may try to localize the roots by relaxing the inclusion to an inclusion with respect to the smallest idempotent neutrix containing the right-hand side, i.e., to ⊘. Using the method of Theorem 16 we see that the roots of
$$a^2 (a + 1)^3 \subseteq \oslash$$
are $\alpha_1 = \oslash$ and $\alpha_2 = -1 + \oslash$. So we may study (43) separately for $a \in \alpha_1$ and $a \in \alpha_2$. The first case must be rejected, for a must be appreciable. In the second case we may put $a = -1 + y$, where y satisfies
$$(-1 + y)^2\, y^3 \subseteq (1 + \oslash)\,\varepsilon^2.$$
By the Substitution theorem we may as well solve $y^3 \subseteq (1 + \oslash)\,\varepsilon^2$, and we find the root $\eta = (1 + \oslash)\,\varepsilon^{2/3}$. Then the root of (43) is given by $-1 + \eta$, and the unique root $\rho_3$ of (41) on $S_2$ satisfies
$$\rho_3 = \frac{\bigl(-1 + (1 + \oslash)\,\varepsilon^{2/3}\bigr)^{1/3}}{\varepsilon^{1/3}} = -\frac{1}{\varepsilon^{1/3}} + (1 + \oslash)\,\frac{1}{3}\,\varepsilon^{1/3}.$$
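The asymptotic form of ρ₃ may be checked numerically; the sketch below is ours (it assumes NumPy is available) and compares the real root of εx⁵ + x² − 1 = 0 near −ε^{−1/3} with the two-term approximation above.

```python
import numpy as np

# Sketch (ours): compare the real root of eps*x^5 + x^2 - 1 = 0 near
# -eps**(-1/3) with the approximation -eps**(-1/3) - eps**(1/3)/3.
for eps in [1e-3, 1e-6, 1e-9]:
    roots = np.roots([eps, 0.0, 0.0, 1.0, 0.0, -1.0])   # eps*x^5 + x^2 - 1
    target = -eps ** (-1.0 / 3.0)
    rho3 = roots[np.argmin(np.abs(roots - target))].real
    approx = target - eps ** (1.0 / 3.0) / 3.0
    print(f"eps = {eps:.0e}   rho3 = {rho3:.9f}   approx = {approx:.9f}")
```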

6.3. Roots of Systems of Polynomial Inclusions

The last part of the Main Theorem concerns the shape of the root of a system of polynomial inclusions; in fact, it says that the root is equal to the root of one of these inclusions. This is a consequence of Theorem 18 below.
Theorem 18. 
Let $m \in \mathbb{N}$ be standard and $P_1, \ldots, P_m$ be real polynomials such that $\deg P_1, \ldots, \deg P_m \geq 1$ and $I_1, \ldots, I_m$ are idempotent neutrices. If the system of neutrix inclusions
$$P_1(x) \subseteq I_1, \quad \ldots, \quad P_m(x) \subseteq I_m$$
has a root ρ, there exists k with $1 \leq k \leq m$ such that $\rho = r + t I_k$, where $r \in \mathbb{R}$ and $t \in \pounds$.
Proof. 
Let $1 \leq i \leq m$. Then, the polynomial $P_i$ has a non-empty standard finite set of roots, which by Theorem 1 (item 2) are all external numbers, with neutrices of the form $t_i I_i$, with $t_i \in \pounds$. Because the intersection of these sets of external numbers is non-empty, by (2), the intersection ρ is equal to one of them, hence there exists $k$, $1 \leq k \leq m$, such that $\rho = \rho_k = r + t_k I_k$ with $t_k \in \pounds$. □
Corollary 3. 
Let $m \in \mathbb{N}$ be standard and $P_1, \ldots, P_m$ be real polynomials such that $\deg P_1, \ldots, \deg P_m \geq 1$ and $N_1, \ldots, N_m$ be neutrices. If the system of neutrix inclusions
$$P_1(x) \subseteq N_1, \quad \ldots, \quad P_m(x) \subseteq N_m$$
has a root ρ, there exists k with $1 \leq k \leq m$ such that $N(\rho)$ is proportional to $N_k$.
Proof. 
The corollary is a consequence of Corollary 2 and Theorem 18. □

7. Multiplicity and Exactness of Roots

Now that we know that a root is an external number, we may use it in calculations, and we study two important properties. In Section 7.1, we define its multiplicity, and relate it to the multiplicity of I-near roots. In Section 7.2, we study whether a root of a polynomial inclusion is exact, or semi-exact in the sense of Definition 22 below.
We use the factorization (24), which we repeat here:
$$P(x) = (x - r_1)\cdots(x - r_l)\,\bigl((x - s_1)^2 + c_1\bigr)\cdots\bigl((x - s_m)^2 + c_m\bigr).$$

7.1. Multiplicities

Let R be a polynomial and ρ be a root of R. The image of ρ by R is denoted by
$$R(\rho) \equiv \{R(x) : x \in \rho\}.$$
Definition 21. 
Let ρ be a root of P and $r \in \rho$, $t \in \pounds$ be such that $\rho = r + tI$.
We define $\widehat{P}(\rho)$ by
$$\widehat{P}(\rho) = (\rho - r_1)\cdots(\rho - r_l)\,\bigl((\rho - s_1)^2 + c_1\bigr)\cdots\bigl((\rho - s_m)^2 + c_m\bigr).$$
The multiplicity of ρ is the sum of the degrees of the first l factors in (45) that are equal to t I and the degrees of the remaining m factors in (45) that are equal to t 2 I .
It may happen that $P(\rho) \neq \widehat{P}(\rho)$, as shown by the following example.
Example 9. 
Put $P(x) = x^2$. The double root ρ of the inclusion $x^2 \subseteq \oslash$ is given by $\rho = \oslash$. Then $P(\rho) = \oslash^{+}$, i.e., the set of all positive infinitesimals, and $\widehat{P}(\rho) = \oslash \cdot \oslash = \oslash$.
Proposition 13. 
Let ρ be a root of P and $r \in \rho$, $t \in \pounds$ be such that $\rho = r + tI$. Then, the multiplicity of ρ is less than or equal to the multiplicity of the I-near root r.
Proof. 
Let ρ be a root of P and $r \in \rho$, $t \in \pounds$ be such that $\rho = r + tI$. It holds that $P(\rho) \subseteq I$. Then, $P(r) \in I$, hence r is an I-near root. Now, the multiplicity $M_\rho$ of ρ equals $M_\rho = \#\{i : 1 \leq i \leq l,\ r_i \in r + tI\} + 2\,\#\{j : 1 \leq j \leq m,\ s_j \in r + tI,\ c_j \in t^2 I\}$, while the multiplicity $M_r$ of r is given by $M_r = \#\{i : 1 \leq i \leq l,\ r_i \in r + I\} + 2\,\#\{j : 1 \leq j \leq m,\ s_j \in r + I,\ c_j \in I\}$. Because $tI, t^2 I \subseteq I$, we see $M_\rho \leq M_r$. □
Example 10 
(Continuation of Example 7). Let ω be positive unlimited. We saw that the inclusion $(x - \omega)(x - (\omega+1))(x - 2\omega) \subseteq \pounds$ had two £-near roots $r_1 = \omega$, $r_2 = \omega + 1$. In fact, each of them is a double £-near root. However, the multiplicity of the roots $\rho_1 = \omega + \pounds/\omega$, $\rho_2 = \omega + 1 + \pounds/\omega$ is strictly less, being single.

7.2. Exactness of Roots

Definition 22. 
Let R be a polynomial, N be a neutrix, and ρ be a root of the inclusion $R(x) \subseteq N$. Let $R(\rho)$ be given by (44). The root ρ is said to be exact if $R(\rho) = N$ and semi-exact if for some $a \in N$ it holds that $R(\rho) = \{y \in N : y \geq a\}$ or $R(\rho) = \{y \in N : y \leq a\}$.
Example 11. 
A simple example of a root which is not exact is given by the inclusion $\oslash x \subseteq \pounds$, which is solved by the neutrix £, while $\oslash\pounds = \oslash \neq \pounds$. The triple root ⊘ of the inclusion $x^3 \subseteq \oslash$ is exact. Example 9 shows that ⊘ is a semi-exact double root of the inclusion $x^2 \subseteq \oslash$, since $\{x^2 : x \in \oslash\} = \{y \in \oslash : y \geq 0\}$. Let $\varepsilon > 0$, $\varepsilon \simeq 0$. Then it is obvious that ⊘ is also a semi-exact root of the inclusion $x^2 - \varepsilon \subseteq \oslash$. In addition, it is a semi-exact root of the inclusion $(x - \varepsilon)(x + \varepsilon) \subseteq \oslash$, the image of the root now being $\{y \in \oslash : y \geq -\varepsilon^2\}$.
Example 11 suggests that the root of a polynomial is exact if its multiplicity is odd, and semi-exact if its multiplicity is even. This is attested by Theorems 19 and 21 below.
Theorem 19. 
Let P be a reduced real polynomial of degree $n \geq 1$ and I be an idempotent neutrix. Assume ρ is a root of multiplicity n. Then ρ is exact if n is odd, and semi-exact if n is even.
Proof. 
Assume that n is odd. It follows from Proposition 12 that
$$x \in r + I \Longleftrightarrow P(x) \in I.$$
Hence, P ( x ) can neither change sign for x < r + I , nor for x > r + I . Because the degree of P is odd, we derive that
$$x < r + I \Rightarrow P(x) < I, \qquad x > r + I \Rightarrow P(x) > I.$$
Hence, $\mathrm{Im}(P)$ contains values less than I and larger than I. By continuity and (47), for every $y \in I$ there exists some $x \in r + I$, such that $y = P(x)$. Hence it holds that $P(\rho) \supseteq I$. Then, it follows from (30) that $P(\rho) = I$, i.e., ρ is exact.
Finally, assume that n is even. In a similar way as above, we derive that both
$$x < r + I \Rightarrow P(x) > I, \qquad x > r + I \Rightarrow P(x) > I.$$
Hence, $\mathrm{Im}(P)$ does not contain values less than I. Now P has a minimum, say M, where $M \in I$. Because $\mathrm{Im}(P)$ contains values larger than I, by continuity and (48), for every $y \in I$, $y \geq M$, there exists some $x \in r + I$, such that $y = P(x)$. Hence, it holds that $P(\rho) \supseteq \{y \in I : y \geq M\}$. Combining with (30), we conclude that $P(\rho) = \{y \in I : y \geq M\}$, i.e., ρ is semi-exact. □
Definition 23. 
Let L, M be as given by (29) and $\rho = r + tI$ be a root of P, where $r \in \mathbb{R}$ and $t \in \pounds$. We define
$$\begin{aligned}
L_\rho &= \{i \in L : r_i \in \rho\}, & M_\rho &= \{j \in M : s_j \in \rho,\ c_j \in t^2 I\},\\
L_\rho' &= \{1, \ldots, l\} \setminus L_\rho, & M_\rho' &= \{1, \ldots, m\} \setminus M_\rho,\\
g_\rho &= \#L_\rho + 2\,\#M_\rho, & h_\rho &= \#L_\rho' + \#M_\rho',\\
d_\rho &= \prod_{i \in L_\rho'} (r - r_i)\, \prod_{j \in M_\rho'} \bigl((r - s_j)^2 + c_j\bigr). &&
\end{aligned}$$
If the multiplicity of ρ is less than n, we call the polynomial $P_\rho$ defined by
$$P_\rho(x) = \prod_{i \in L_\rho} (x - r_i)\, \prod_{j \in M_\rho} \bigl((x - s_j)^2 + c_j\bigr)\; d_\rho$$
the local approximation of P in ρ.
Observe that in the classical case, i.e., $I = \{0\}$ and $\rho = r$, the local approximation would reduce to a (shifted) monomial $d_r (x - r)^{g_r}$. In a sense, a local approximation is a generalization of a shifted monomial, involving a product of factors $x - r_i$, $(x - s_j)^2 + c_j$ with $r_i, s_j$ all taken in ρ, and $c_j$ negligible.
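As a small illustration of Definition 23 (ours, not taken from the article), let $I = \oslash$, $\varepsilon \simeq 0$ and $P(x) = (x - 1)(x - 1 - \varepsilon)(x - 3)$. Then $\rho = 1 + \oslash$ is a root of $P(x) \subseteq \oslash$ of multiplicity 2, with $t = 1$ and $d_\rho = 1 - 3 = -2$, and
$$P_\rho(x) = -2\,(x - 1)(x - 1 - \varepsilon), \qquad \frac{P(x)}{P_\rho(x)} = \frac{x - 3}{-2} \subseteq 1 + \oslash \quad \text{for } x \in \rho,$$
in line with Theorem 20 below.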
Theorem 20. 
Let $\rho = r + tI$ be a root of P of multiplicity $g_\rho$ less than n, where $r \in \mathbb{R}$ and $t \in \pounds$. Let $P_\rho$ be the local approximation of P in ρ. Then, ρ is a root of $P_\rho$ of multiplicity $g_\rho$, which is equal to the degree of $P_\rho$. Moreover, for all $x \in \rho$
$$\frac{P(x)}{P_\rho(x)} \subseteq 1 + I.$$
Proof. 
Note that necessarily $n \geq 2$. We use the factorization (24) of P and the notations of (49), and imitate the proof of Theorem 16.
Note that $h_\rho \geq 1$, so $L_\rho' \neq \emptyset$ or $M_\rho' \neq \emptyset$.
Let $x \in r + tI$. First, let $i \in L_\rho'$. Then $tI/(r - r_i) \subseteq I$ and
$$\frac{x - r_i}{r - r_i} \subseteq \frac{r - r_i + tI}{r - r_i} = 1 + \frac{tI}{r - r_i} \subseteq 1 + I.$$
Secondly, let $j \in M_\rho'$. If $s_j \notin r + tI$, we have
$$\frac{x - s_j}{r - s_j} \subseteq \frac{r - s_j + tI}{r - s_j} = 1 + \frac{tI}{r - s_j}$$
and
$$\left(1 + \frac{tI}{r - s_j}\right)^{2} = 1 + \frac{tI}{r - s_j}.$$
Due to the fact that $c_j > 0$, we have
$$\frac{(x - s_j)^2 + c_j}{(r - s_j)^2 + c_j} \subseteq \frac{\left(1 + \frac{tI}{r - s_j}\right)^{2}(r - s_j)^2 + c_j}{(r - s_j)^2 + c_j} = \frac{(r - s_j)^2 + c_j + tI(r - s_j)}{(r - s_j)^2 + c_j} = 1 + \frac{tI(r - s_j)}{(r - s_j)^2 + c_j} \subseteq 1 + \frac{tI}{r - s_j} \subseteq 1 + I.$$
If $s_j \in r + tI$, we have $c_j > t^2 I$. Also, $x - s_j \in tI$, hence $(x - s_j)^2 \in t^2 I$. Hence
$$\frac{(x - s_j)^2 + c_j}{(r - s_j)^2 + c_j} \subseteq \frac{(r - s_j)^2 + c_j + t^2 I}{(r - s_j)^2 + c_j} = 1 + \frac{t^2 I}{(r - s_j)^2 + c_j} \subseteq 1 + I.$$
It follows that
$$\frac{P(x)}{P_\rho(x)} = \frac{\prod_{i \in L_\rho'} (x - r_i)\, \prod_{j \in M_\rho'} \bigl((x - s_j)^2 + c_j\bigr)}{d_\rho} \subseteq (1 + I)^{h_\rho} = 1 + I.$$
By Theorem 13
$$P(x) \subseteq I \Longleftrightarrow P_\rho(x) \subseteq I,$$
hence ρ is a root of $P_\rho$ of multiplicity $g_\rho$. This is equal to the degree of $P_\rho$. □
Theorem 21. 
Let ρ be a root of P.
1. 
If the degree of P ρ is odd, the root ρ is exact.
2. 
If the degree of P ρ is even, the root ρ is semi-exact.
Proof. 
We only prove Part 1; the proof of Part 2 is similar. Let $r \in \mathbb{R}$ and $t \in \pounds$ be such that $\rho = r + tI$. Assume first that the degree of $P_\rho$ is odd. By Theorem 15 the external number ρ is an exact root of $P_\rho$. Let $z \in P(r) + I$ and $w \equiv z - P(r)$, where we assume that $w > \max(0, -P(r)/2)$. Let $x' \in \rho$ be such that $P_\rho(x') = P_\rho(r) + 2w$. It follows from (50) and the fact that $P(r) \in I$ that
$$P(x') = \frac{P(x')}{P_\rho(x')}\,\frac{P_\rho(r)}{P(r)}\,\bigl(P(r) + 2w\bigr) \subseteq (1 + I)(1 + I)\bigl(P(r) + 2w\bigr) = (1 + I)\bigl(P(r) + 2w\bigr).$$
Then, there exists some $\varepsilon \simeq 0$ such that
$$P(x') = (1 + \varepsilon)\bigl(P(r) + 2w\bigr) = P(r) + 2w + 2\varepsilon w + \varepsilon P(r).$$
If $\varepsilon P(r) \geq 0$, (51) implies that
$$P(x') > P(r) + 2w - w = P(r) + w = z.$$
If $\varepsilon P(r) < 0$, the condition $w > -P(r)/2$ implies that $2\varepsilon w + \varepsilon P(r) > 0$, hence again
$$P(x') > P(r) + 2w > P(r) + w = z.$$
It follows from (52) and (53) and the continuity of P that there exists some $x''$ between r and $x'$ such that $P(x'') = z$.
We may apply an analogous reasoning to $y \equiv P(r) - w$. Again, by continuity, $[y, z] \subseteq P(\rho)$. We conclude that $P(\rho) = P(r) + I$. Hence, ρ is an exact root of P.

8. Roots of Polynomials Represented by Monomials

Generally speaking, the imprecisions of the roots of polynomials represented by monomials of Definition 10 are strongly related to the imprecision of the constant term. To see this, we put such a polynomial inclusion into the reduced form
$$P(x) = (1 + A_n)x^n + (a_{n-1} + A_{n-1})x^{n-1} + \cdots + (a_1 + A_1)x + a_0 + A_0 \subseteq I,$$
where $a_0, \ldots, a_{n-1}$ are real, $A_0, \ldots, A_{n-1}$ and $A_n$ are neutrices, and I is an idempotent neutrix. The inclusion (54) is consistent only if $A_0 \subseteq I$, and it is natural to study the case $A_0 = I$, i.e., the imprecision of the right-hand member is equal to the imprecision of the constant term. An alternative interpretation is the following. Consider the polynomial $x^n + a_{n-1}x^{n-1} + \cdots + a_1 x + a_0$, and assume the coefficients are perturbed respectively by the neutrices $A_0, \ldots, A_{n-1}, A_n$. Then, it is natural to look for the smallest neutrix at the right-hand side such that (54) makes sense. If $A_0$ is not idempotent, it is of the form $A_0 = pI$ with $p \in \mathbb{R}$ and I idempotent. We may again put the real part of the equation $x^n + a_{n-1}x^{n-1} + \cdots + a_1 x + a_0 \subseteq pI$ in reduced form using the method of Proposition 11, i.e., by the change of variables $x \mapsto x/p^{1/n}$.
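For instance (our illustration, not from the article), for $n = 2$ and $A_0 = \varepsilon\oslash$ with $\varepsilon \simeq 0$, $\varepsilon > 0$, one has $p = \varepsilon$ and $I = \oslash$, and the change of variables $x = \sqrt{\varepsilon}\, y$ gives
$$x^2 + bx + c \subseteq \varepsilon\oslash \quad \Longleftrightarrow \quad y^2 + \frac{b}{\sqrt{\varepsilon}}\, y + \frac{c}{\varepsilon} \subseteq \oslash.$$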
Definition 24. 
Consider the inclusion (54). Then, the neutrix
$$F \equiv (I : A_1) \cap \cdots \cap (I : A_{n-1})^{1/(n-1)} \cap (I : A_n)^{1/n}$$
is called its feasibility space F .
Observe that F is equal to one of the elements of the intersection in (55), hence F is indeed a neutrix.
Theorem 22 extends Theorem 1 (item 2) from real polynomials to inclusions with respect to general polynomials represented by monomials.
Theorem 22. 
Assume the polynomial represented by monomials in (54) admits a zeroless root ρ. Then $\rho = r + tI$, where $r \in \mathbb{R}$ and $t \in \pounds$.
Proof. 
Let $P(x) = x^n + a_{n-1}x^{n-1} + \cdots + a_1 x + a_0$ and F be the feasibility space of (54). It follows from Theorems 1 (item 1) and 14 that ρ is a solution of the system
$$P(x) \subseteq I, \qquad x \in F.$$
By Theorem 1 (item 2) any root ρ of $P(x) \subseteq I$ is of the form $\rho = r + tI$, where $r \in \mathbb{R}$ and $t \in \pounds$. Assume that ρ is zeroless. Now, the intersection of a zeroless external number with a neutrix, when non-empty, is equal to the external number itself. Hence, if the system (56) is consistent, the external number ρ satisfies the whole system (56), i.e., ρ is still a root. □
Example 12. 
Suppose we modify (38) into
$$\Bigl(1 + \frac{\pounds}{\omega^4}\Bigr)x^2 - \Bigl(2 + \frac{\pounds}{\omega^3}\Bigr)\omega x + \omega^2 - \omega \subseteq \oslash.$$
Then the feasibility space F is given by $F = \bigl(\oslash : (\pounds/\omega^4)\bigr)^{1/2} \cap \bigl(\oslash : (\pounds/\omega^2)\bigr) = \oslash\,\omega^2$. Hence, the roots $\rho_{1,2} = \omega \pm \sqrt{\omega} + \oslash/\sqrt{\omega}$ of (38) are feasible, and are proportional to the right-hand side ⊘. For bigger imprecisions in the quadratic and linear term, the roots may no longer be feasible; for example, the feasibility space F of the inclusion
$$\Bigl(1 + \frac{\pounds}{\omega^3}\Bigr)x^2 - \Bigl(2 + \frac{\pounds}{\omega^2}\Bigr)\omega x + \omega^2 - \omega \subseteq \oslash$$
is reduced to $F = \bigl(\oslash : (\pounds/\omega^3)\bigr)^{1/2} \cap \bigl(\oslash : (\pounds/\omega)\bigr) = \oslash\,\omega^{3/2} \cap \oslash\,\omega = \oslash\,\omega$, which does not contain the roots $\rho_{1,2}$.
In a sense, Theorem 22 implies that if the constant term is significant, i.e., the inclusion (54) has only zeroless roots, and its neutrix is equal to the neutrix at the right-hand side, then the imprecisions of these roots are a function of the imprecision of the constant term; in fact, they are a multiple of it.
Example 13. 
We will verify the properties of feasibility and dependence of the imprecision in the roots on the imprecision in the constant term (taken to be equal to the right-hand side) in a more direct way. This approach also permits a numerical interpretation.
Let M , N be neutrices. Again, starting from (38), we study the perturbed inclusion
$$(1 + M)x^2 - (2 + N)\omega x + \omega^2 - \omega \subseteq \oslash.$$
Being perturbations, we assume that $M, N \subseteq \oslash$. The roots $\varrho_{1,2}$ of the unperturbed inclusion are contained in $(1 + \oslash)\omega$, and if we put values $\bar{x} = d\omega$ with $d\omega \notin (1 + \oslash)\omega$ into (59), we see that the difference of $P(\bar{x})$ and $P(\varrho_1)$ or $P(\varrho_2)$ is at least of the order $@\,\omega^2$, so the roots of the perturbed inclusion are still contained in $(1 + \oslash)\omega$. Now $M(1 + \oslash)\omega^2 \subseteq \oslash$ and $N\omega(1 + \oslash)\omega \subseteq \oslash$ imply that both $M, N \subseteq \oslash/\omega^2$.
To verify that in this feasible case the influence of the neutrices M, N on the roots is marginal with respect to the influence of the neutrix of the right-hand side, we take representatives $\mu \in M$, $\nu \in N$ and two infinitesimals $\varepsilon_1, \varepsilon_2$. We may as well choose one infinitesimal $\varepsilon \equiv \varepsilon_2 - \varepsilon_1$ at the right-hand side, which we suppose to be non-zero, and then investigate the influence on the roots, say $r_{1,2}$, of
$$(1 + \mu)x^2 - (2 + \nu)\omega x + \omega^2 - \omega - \varepsilon = 0.$$
Then,
$$r_{1,2} = \frac{\omega + \frac{\nu\omega}{2} \pm \sqrt{\omega}\,\sqrt{1 + \frac{\varepsilon}{\omega} + \nu\omega + \frac{\nu^2\omega}{4} - \mu\omega + \mu + \frac{\varepsilon\mu}{\omega}}}{1 + \mu}\,.$$
Taking into account that $\mu, \nu \in \oslash/\omega^2$, we see that, under the second root-sign in (61), the leading term involving ε is $\varepsilon/\omega$, the leading term involving μ is $\mu\omega$, and the leading term involving ν is $\nu\omega$. Using a first-order Taylor expansion of the root, we find that the latter absorbs the term $\nu\omega/2$, and we derive that
$$r_1 \in \omega + \sqrt{\omega} + (1 + \oslash)\frac{\varepsilon}{2\sqrt{\omega}} - (1 + \oslash)\frac{\mu\,\omega\sqrt{\omega}}{2} + (1 + \oslash)\frac{\nu\,\omega\sqrt{\omega}}{2},$$
$$r_2 \in \omega - \sqrt{\omega} - (1 + \oslash)\frac{\varepsilon}{2\sqrt{\omega}} + (1 + \oslash)\frac{\mu\,\omega\sqrt{\omega}}{2} - (1 + \oslash)\frac{\nu\,\omega\sqrt{\omega}}{2}.$$
We analyse the expression (62) for increasing μ and ν; the case of (63) is similar. If they are significantly smaller than just small with respect to $1/\omega^2$, the sole leading term in the error of $r_1$ is of the order of $\varepsilon/\sqrt{\omega}$, and the error depends almost linearly on ε. If at least one of them is of the same order of magnitude as $\varepsilon/\omega$, the error term is of the order of magnitude of $\oslash/\sqrt{\omega}$, which is still proportional to the order of magnitude at the right-hand side. If at least one of them is of order of magnitude significantly larger than $\varepsilon/\omega$, we may choose two representatives, sufficiently far apart, for which the difference in the values found for $r_1$ is no longer small with respect to ε, so the inclusion (59) fails to hold, i.e., is inconsistent.
We illustrate the above findings numerically, as follows.
Let $\omega = 100$, $\varepsilon = \omega^{-2}$, and let $\mu, \nu \in \mathbb{R}$ be such that $|\mu| \leq 1/\omega^5$ and $|\nu| \leq 1/\omega^6$. We use the uniform random number generator of Python 3 to generate some possible values of μ and ν, as shown in Table 1.
With the values of ω, ε, μ, and ν, we calculate the roots of (60) by applying the quadratic formula, and present the results in Table 2. We observe an imprecision of order $10^{-7}$ in the values of $r_1$, and may conclude that the difference between two values is still of order $10^{-7}$. So we may defend that these differences are small with respect to $\varepsilon = 10^{-4}$, respecting (59). The case of $r_2$ is similar. In Table 3 we highlight the negligible influence of μ and ν with respect to the impact of ε on the value of $r_1$, by showing that a linear progression in the values of ε corresponds to an almost linear progression in the errors of the root.
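A minimal Python sketch of the kind of computation behind Tables 1 and 2 (ours; the seed, and hence the exact random values, differ from those of the article):

```python
import math
import random

omega = 100
eps = omega ** (-2)          # 1e-4
random.seed(0)               # arbitrary seed (assumption, not from the article)

def roots(mu, nu, e):
    """Roots of (1 + mu)x^2 - (2 + nu)*omega*x + omega^2 - omega - e = 0."""
    a = 1 + mu
    b = -(2 + nu) * omega
    c = omega ** 2 - omega - e
    d = math.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

for _ in range(5):
    mu = random.uniform(-omega ** -5, omega ** -5)   # feasible size, |mu| <= 1/omega^5
    nu = random.uniform(-omega ** -6, omega ** -6)   # feasible size, |nu| <= 1/omega^6
    r1, r2 = roots(mu, nu, eps)
    print(f"mu = {mu: .3e}   nu = {nu: .3e}   r1 = {r1:.10f}   r2 = {r2:.10f}")
```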
Now, let $\mu, \nu \in \mathbb{R}$ be such that $1/\omega^2 \leq \mu, \nu \leq 1/(\omega\sqrt{\omega})$. We will use again the randomized method of Python 3 based on the uniform distribution to find some values of μ and ν. The results are shown in Table 4. The associated roots of (60) are presented in Table 5.
The table illustrates the unfeasibility of the roots. Indeed, for each root, the differences in values tend to be at least of order 10 1 , which lie outside the tolerance given by ε = 10 4 .

9. Conclusions

In the context of Nonstandard Analysis, we studied the roots of inclusions of type $P(x) \subseteq I$, where P is a polynomial of standard degree, with coefficients possibly of different sizes, and I is usually, but not necessarily, a small error-set around zero, typically an order of magnitude. We showed how the errors in the roots, which may be multiple, depend on I. We also studied the case where the coefficients contain errors, which may also be of different sizes. In general, those errors have an influence on whether the roots continue to exist. The results are no longer valid if the degree of the polynomial is nonstandard, for the errors seem to be more like intervals. An example suggests that the extremities of such an interval again depend on I. It would be interesting to consider more general cases. Indeed, polynomials of nonstandard degree yield local or uniform infinitesimal approximations of other functions, and then could be used to study zeros, or more generally, approximate values of such functions. Examples are approximations by Taylor polynomials, and by polynomials of Bernstein [24] or of Szász type [25]. The terms of such polynomials, when of high order, tend to be close to a Gaussian curve, which may help to control the remainder; however, it seems that additional information on the rate of convergence is strongly needed.
The study of error sets for roots of polynomials is formally simpler in a complex setting. Indeed, here roots always exist and the polynomials have a decomposition in only linear factors, avoiding the complications of irreducible quadratic terms. Some difficulties arise from the fact that the error sets are two-dimensional. The study has been initiated in [26], and we intend it to be the subject of a second article.

Author Contributions

All authors made equal contributions and have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Dinis, B.; Jacinto, B. A theory of marginal and large difference. Erkenntnis 2025, 90, 517–544. [Google Scholar] [CrossRef]
  2. Koudjeti, F.; Van den Berg, I. Neutrices, external numbers, and external calculus. In Nonstandard Analysis in Practice; Springer: Berlin/Heidelberg, Germany, 1995; pp. 145–170. [Google Scholar]
  3. Taylor, J. Introduction to Error Analysis. The Study of Uncertainties in Physical Measurements, 3rd ed.; University Science Books: Mill Valley, CA, USA, 2025. [Google Scholar]
  4. Dinis, B.; Van den Berg, I. Neutrices and External Numbers: A Flexible Number System; Chapman and Hall/CRC: Boca Raton, FL, USA, 2019. [Google Scholar]
  5. Van der Corput, J. Introduction to the neutrix calculus. J. d’Analyse Math. 1959, 7, 281–399. [Google Scholar] [CrossRef]
  6. Murdock, J.A. Perturbations: Theory and Methods; Society for Industrial and Applied Mathematics: University City, PA, USA, 1999. [Google Scholar]
  7. Saptaingyas, Y.; Horssen, W.; Adi-Kusumo, F.; Aryati, L. On accurate asymptotic approximations of roots for polynomial equations containing a small, but fixed parameter. AIMS Math. 2024, 9, 28542–28559. [Google Scholar] [CrossRef]
  8. Lutz, R.; Goze, M. Nonstandard Analysis: A Practical Guide with Applications; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 1982; Volume 881. [Google Scholar]
  9. Remm, E. Perturbations of polynomials and applications. arXiv 2022, arXiv:2208.09186. [Google Scholar] [CrossRef]
  10. Nathanson, M.; Ross, D. Continuity of the roots of a polynomial. Commun. Algebra 2024, 52, 2509–2518. [Google Scholar] [CrossRef]
  11. Wilkinson, J. Rounding Errors in Algebraic Processes; Prentice-Hall: Englewood Cliffs, NJ, USA, 1963. [Google Scholar]
  12. Nafisah, A.; Sheikh, S.; Alshahrani, M.; Almazah, M.; Alnssyan, B.; Dar, J. Perturbation Approach to Polynomial Root Estimation and Expected Maximum Modulus of Zeros with Uniform Perturbations. Mathematics 2024, 12, 2993. [Google Scholar] [CrossRef]
  13. Pakdemirli, M.; Sari, G. A comprehensive perturbation theorem for estimating magnitudes of roots of polynomials. LMS J. Comput. Math. 2013, 16, 1–8. [Google Scholar] [CrossRef]
  14. Pakdemirli, M.; Yurtsever, H. Estimating roots of polynomials using perturbation theory. Appl. Math. Comput. 2007, 188, 2025–2028. [Google Scholar] [CrossRef]
  15. Van Tran, N.; Van den Berg, I. Gaussian elimination for flexible systems of linear inclusions. Port. Math. 2024, 81, 265–305. [Google Scholar] [CrossRef] [PubMed]
  16. Nelson, E. Internal set theory: A new approach to nonstandard analysis. Bull. Amer. Math. Soc. 1977, 83, 1165–1198. [Google Scholar] [CrossRef]
  17. Kanovei, V.; Reeken, M. Nonstandard Analysis, Axiomatically; Springer Monographs in Mathematics; Springer: Berlin/Heidelberg, Germany, 2004. [Google Scholar] [CrossRef]
  18. Robert, A. Nonstandard Analysis; Dover Publications: Garden City, NY, USA, 2011. [Google Scholar]
  19. Diener, F.; Reeb, G. Analyse Non Standard; Hermann: Paris, France, 1989. [Google Scholar]
  20. Diener, F. Sauts des Solutions des Équations εẍ = f(t, x, ). SIAM J. Math. Anal. 1986, 17, 533–559. [Google Scholar] [CrossRef]
  21. Dinis, B.; van den Berg, I. On the quotient class of non-archimedean fields. Indag. Math. 2017, 28, 784–795. [Google Scholar] [CrossRef]
  22. Van der Waerden, B. Algebra. Volume I, II; Springer: Berlin/Heidelberg, Germany, 1991. [Google Scholar]
  23. Van den Berg, I. Un point de vue nonstandard sur les développements en série de Taylor. In IIIe Rencontre de Géométrie du Schnepfenried; Astérisque 109-110; Numdam: Grenoble, France, 1983; Volume 2, pp. 209–223. [Google Scholar]
  24. Alamer, A.; Nasiruzzaman, M. Approximation by Stancu variant of λ-Bernstein shifted knots operators associated by Bézier basis function. J. King Saud Univ. Sci. 2024, 36, 103333. [Google Scholar] [CrossRef]
  25. Ayman-Mursaleen, M.; Nasiruzzaman, M.; Rao, N. On the Approximation of Szász-Jakimovski-Leviatan Beta Type Integral Operators Enhanced by Appell Polynomials. Iran. J. Sci. 2025, 49, 784–795. [Google Scholar] [CrossRef]
  26. Horta, J.L. Números Externos Complexos. Raízes de Polinómios e Aplicações. Ph.D. Thesis, University of Évora, Évora, Portugal, 2023. [Google Scholar]
Table 1. Randomized values of μ, ν such that $\mu \in [-1/\omega^5, 1/\omega^5]$ and $\nu \in [-1/\omega^6, 1/\omega^6]$.
μ ν
1 6.16816134938787 × 10 11 6.490371029260664 × 10 13
2 8.314980866448598 × 10 11 4.645965930628269 × 10 13
3 9.001647900349496 × 10 11 7.489624343264043 × 10 13
4 2.9241171554531896 × 10 11 2.0262917424621737 × 10 13
5 9.540798109567693 × 10 12 8.497802800906751 × 10 13
6 5.540392467986064 × 10 11 3.3998607329776005 × 10 13
7 7.067635593935561 × 10 12 2.083598692965336 × 10 14
8 7.608355608789118 × 10 11 1.545761457454435 × 10 13
9 4.0752179022103377 × 10 11 5.80780396377376 × 10 14
10 6.750290059554738 × 10 11 5.079085278645448 × 10 13
Table 2. Feasible roots $r_1$, $r_2$.
r 1 r 2
1 110.00000496303844 89.99999502469016
2 110.00000505055976 89.99999496611663
3 110.00000494595086 89.99999503612077
4 110.000004982419 89.99999501175299
5 110.00000499469401 89.99999500348284
6 110.00000496666648 89.99999502228677
7 110.0000050427465 89.99999497138671
8 110.00000495405321 89.99999503074552
9 110.00000502462176 89.99999498352285
10 110.00000495943901 89.99999502711124
Table 3. The root $r_1$ for the values ε, 2ε, 3ε. In the second and third columns, we used the same randomizations of μ and ν. The progression in the errors of the root seems fairly linear.
ε 2 ε 3 ε
1 110.00000496303844 110.00000996303476 110.00001496302849
2 110.00000505055976 110.00001005055606 110.00001505054975
3 110.00000494595086 110.00000994594718 110.00001494594092
4 110.000004982419 110.00000998241532 110.00001498240904
5 110.00000499469401 110.00000999469032 110.00001499468404
6 110.00000496666648 110.0000099666628 110.00001496665654
7 110.0000050427465 110.00001004274279 110.00001504273648
8 110.00000495405321 110.00000995404955 110.00001495404328
9 110.00000502462176 110.00001002461805 110.00001502461176
10 110.00000495943901 110.00000995943533 110.00001495942907
Table 4. Randomized values of μ, ν such that $\mu, \nu \in [1/\omega^2, 1/(\omega\sqrt{\omega})]$.
μ ν
1 0.0009486342373741334 0.0007799208660884681
2 0.0009584854856079895 0.00021740435531815595
3 0.00016826054200809844 0.00045528338513826786
4 0.0002048101418587952 0.0006367456370331195
5 0.00018801209019496414 0.00016299015884466272
6 0.00019479036064838474 0.0009217406178601308
7 0.00013633169047803684 0.0006160205686031223
8 0.0007515857463528371 0.0009094029935426791
9 0.0001949211029637104 0.000891765459563486
10 0.00012011504031013572 0.0008545615968917112
Table 5. The associated values of $r_1$, $r_2$.
r 1 r 2
1 109.85493260654815 90.03343852754995
2 109.53320242157982 90.29700366183502
3 110.14758676370813 89.8642874684393
4 110.22400406631981 89.79870381820668
5 109.97590343249935 90.0027971698904
6 110.38273236334847 89.6704732623649
7 110.25352220665178 89.78080883170443
8 110.04519690330046 89.89547084006159
9 110.36667368341062 89.68350886031622
10 110.39085847928942 89.67056729417847