Improving the Convergence of Interval Single-Step Method for Simultaneous Approximation of Polynomial Zeros

Abstract: This paper describes an extension of the single-step method for the simultaneous inclusion of real polynomial zeros, namely, the interval trio midpoint symmetric single-step (ITMSS) method, which updates the midpoint at each forward-backward-forward step. The proposed algorithm constantly updates the midpoint of each interval of the previous roots before proceeding to the subsequent steps; hence, it always generates intervals that decrease toward the polynomial zeros. Theoretically, the proposed method possesses a superior rate of convergence of at least 16, while the existing methods are known to have, at most, 9. To validate its efficiency, we performed numerical experiments on 52 polynomials and present the results using performance profiles. The numerical results indicate that the proposed method surpasses the other three methods by fine-tuning the midpoint, which reduces the final interval width upon convergence with fewer iterations.


Introduction
The widespread application of interval arithmetic was inhibited in the past by a lack of hardware and software support. Nonetheless, more real-world applications have appeared in recent years. In the twenty-first century, interval arithmetic was found to contribute substantially to the steganography field, notably in improving the quality of watermarked images [1]. Interval arithmetic also plays a significant role in enhancing the power of high-performance computing hardware such as GPUs [2]. Furthermore, in the era of data science and artificial intelligence, many scholars and practitioners from various backgrounds have incorporated interval analysis concepts into their existing models or methods in order to investigate uncertainty propagation in specific data or systems [3][4][5]. These applications have led to increased attention and more rigorous studies of interval methods in recent years.
The history of the simultaneous inclusion of polynomial zeros can be traced back to Weierstrass' function [6], where the iterative procedure for finding polynomial zeros is guaranteed to be of quadratic convergence. Numerous studies by many scholars have contributed to the development of polynomial inclusion studies over the decades. In recent years, the approach for bounding polynomial zeros simultaneously has been studied via different strategies, such as the Chebyshev-like root-finding method [7], the Weierstrass root-finding method [8,9], Halley's method [10], the Ehrlich method [11,12], and by considering various types of initial conditions for this problem [13]. In 2014, Proinov and Petcova [14] obtained an important result on the semi-local convergence of the Weierstrass root-finding method. Later, Cholakov and Vasileva [15] introduced and

Materials and Method
In this section, we discuss the standard formulation of the root-finding problem.

Interval Single-Step Method
We begin this section with the class of simultaneous methods that leads to our proposed method. Let p : R → R be a polynomial of degree n,

p(x) = a_n x^n + a_{n−1} x^{n−1} + · · · + a_1 x + a_0, (1)

where a_i ∈ R, i = 0, · · · , n, are given and a_n ≠ 0. A zero of the polynomial is equivalent to a root of the equation p(x) = 0.
If polynomial (1) has n distinct zeros x*_i, i = 1, 2, · · · , n, then, by letting a_n = 1, it can be written as follows:

p(x) = ∏_{j=1}^{n} (x − x*_j). (2)

It follows from (2) that any zero can be expressed as follows:

x*_i = x − p(x) / ∏_{j=1, j≠i}^{n} (x − x*_j). (3)

If x_j ≈ x*_j (j = 1, · · · , n), then by (3) we obtain the point total-step iterative procedure defined by the following:

x_i^{(k+1)} = x_i^{(k)} − p(x_i^{(k)}) / ∏_{j=1, j≠i}^{n} (x_i^{(k)} − x_j^{(k)}), i = 1, · · · , n; k = 0, 1, 2, . . . , (4)

where the approximations can be updated one at a time or simultaneously. It was first mentioned by Weierstrass [6], Durand [27] and Kerner [28], and is also known as the WDK method. Compared to Newton's method, the WDK method is much more robust, i.e., it converges to the zeros for almost any choice of initial approximations.
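As an illustration, the total-step iteration (4) can be sketched in a few lines of Python. This is a generic sketch of the classical method; the polynomial, starting values, and helper names are illustrative and not taken from the paper:

```python
import numpy as np

def wdk_step(p, roots):
    """One total-step WDK sweep: every approximation is corrected using
    only the values from the previous iteration, as in (4)."""
    new = roots.copy()
    for i in range(len(roots)):
        # Weierstrass correction W_i = p(x_i) / prod_{j != i} (x_i - x_j)
        denom = np.prod([roots[i] - roots[j]
                         for j in range(len(roots)) if j != i])
        new[i] = roots[i] - np.polyval(p, roots[i]) / denom
    return new

# p(x) = x^3 - 6x^2 + 11x - 6, whose zeros are 1, 2 and 3 (monic, a_n = 1)
p = [1.0, -6.0, 11.0, -6.0]
roots = np.array([0.5, 1.7, 3.4])
for _ in range(20):
    roots = wdk_step(p, roots)
print(np.sort(roots))  # approximately [1. 2. 3.]
```

Note that all three approximations converge even though the starting values are fairly rough, which illustrates the robustness mentioned above.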
The simultaneous computation of the polynomial zeros in the sense of (4) was proposed in [21] and is known as the point single-step (PS1) method:

x_i^{(k+1)} = x_i^{(k)} − p(x_i^{(k)}) / [ ∏_{j=1}^{i−1} (x_i^{(k)} − x_j^{(k+1)}) · ∏_{j=i+1}^{n} (x_i^{(k)} − x_j^{(k)}) ], i = 1, · · · , n; k = 0, 1, 2, . . . (5)

Due to its simplicity of implementation, PS1 has been studied and extended in various ways. For example, Alefeld and Herzberger [23] improved PS1 using the interval approach and named it the interval single-step (IS1) method. Later, modifications of these variants were tested on five polynomials, producing fewer iterations and quicker CPU times [29]. In addition, Chen et al. [30] demonstrated that adding a scaling function to IS1 outperforms the existing procedures, leading to a more significant reduction in the final interval width with fewer iterations.
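A single-step (Gauss-Seidel-like) sweep in the spirit of (5) reuses each corrected approximation immediately within the same sweep. A minimal Python sketch of this idea follows; the names and data are illustrative, not the paper's code:

```python
import numpy as np

def ps1_step(p, x):
    """One single-step Weierstrass sweep in the spirit of (5): the
    update of x_i already uses the fresh values x_1, ..., x_{i-1}
    computed earlier in the same sweep."""
    x = x.copy()
    n = len(x)
    for i in range(n):
        denom = np.prod([x[i] - x[j] for j in range(n) if j != i])
        x[i] = x[i] - np.polyval(p, x[i]) / denom
    return x

p = [1.0, -6.0, 11.0, -6.0]          # zeros 1, 2, 3
x = np.array([0.5, 1.7, 3.4])
for _ in range(15):
    x = ps1_step(p, x)
```

Because fresher information enters the sweep earlier, the single-step variant typically needs fewer sweeps than the total-step method from the same starting values.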
An alternative expression of (5) is as follows. We first differentiate (2) with respect to x to give the following:

p′(x) = ∑_{i=1}^{n} ∏_{j=1, j≠i}^{n} (x − x*_j). (6)

From (2) and (6), we have the following:

p′(x) / p(x) = ∑_{j=1}^{n} 1 / (x − x*_j). (7)

Rearranging Equation (7), we obtain the following:

1 / (x − x*_i) = p′(x) / p(x) − ∑_{j=1, j≠i}^{n} 1 / (x − x*_j). (8)

Finally, we have the following equation:

x*_i = x − [ p′(x) / p(x) − ∑_{j=1, j≠i}^{n} 1 / (x − x*_j) ]^{−1}. (9)

By implementing (4) with (6) and (9), a revised point single-step (PS2) method [21] can be expressed as follows:

x_i^{(k+1)} = x_i^{(k)} − [ p′(x_i^{(k)}) / p(x_i^{(k)}) − ∑_{j=1}^{i−1} 1 / (x_i^{(k)} − x_j^{(k+1)}) − ∑_{j=i+1}^{n} 1 / (x_i^{(k)} − x_j^{(k)}) ]^{−1}, i = 1, · · · , n; k = 0, 1, 2, . . . (10)

Incorporating interval computation into (10), Salim [24] proposed the following iterative formula:

X_i^{(k+1)} = { x_i^{(k)} − [ p′(x_i^{(k)}) / p(x_i^{(k)}) − ∑_{j=1}^{i−1} 1 / (x_i^{(k)} − X_j^{(k+1)}) − ∑_{j=i+1}^{n} 1 / (x_i^{(k)} − X_j^{(k)}) ]^{−1} } ∩ X_i^{(k)}, (11)

where x_i^{(k)} is the midpoint of the interval X_i^{(k)}. Known as the alternative interval single-step procedure (IS2), the iterative procedure (11) has an R-order of convergence of more than 3; the proof is almost identical to the corresponding proof for the PS2 method. Later, Salim et al. [25] improved the procedure further with the interval symmetric single-step (ISS2) method, whose R-order of convergence is at least 9.

Interval Trio Midpoint Symmetric Single-Step (ITMSS) Method
This study proposes the interval trio midpoint symmetric single-step (ITMSS) method for bounding the real zeros of a polynomial simultaneously. The proposed method updates each interval's midpoint, denoted by mid(X), and revises the value of g_i^{(k)} before entering the next step. Enforcing this strategy allows us to narrow the computed bounds rigorously. The process is repeated until smaller intervals with guaranteed roots are generated and the stopping condition imposed on the interval width is satisfied. We denote the width of an interval X by w(X). Table 1 describes the proposed algorithm in detail.

Table 1. Interval trio midpoint symmetric single-step (ITMSS).
Step 0: Given initial intervals X
The significance of the ITMSS algorithm is that the values of g_i^{(k)} in Step 1, which are computed for use in Step 2.1, are renewed every time the inner loop of Step 2 completes. This means that new midpoint values are computed from the final interval widths for use in the next internal loop of Step 2, generating updated values of g for every root i, where i = 1, . . . , n. The algorithm also has an attractive forward-backward-forward feature, whereby the values of the summations for i = 1, . . . , n computed in Step 2.1 are used in Step 2.2.
Hence, the algorithm constantly updates the midpoint of each interval of the previous roots before entering the following steps, which always generates intervals that decrease toward the polynomial zeros.
Moreover, the values of the summations for i = 1, . . . , n computed in Step 2.2 are used in Step 2.3. The iteration stops when w(X_i^{(k+1)}) < ε for the fixed stopping criterion ε = 10^{−16}; otherwise, we set k = k + 1, take X_i^{(k)} to be the interval produced by Step 2.3, and repeat until the stopping condition is satisfied. The renewal of the midpoint values before computing the new g_i^{(k)} is repeated three times in the internal loop to accelerate the simultaneous convergence to the zeros in each iteration, hence the name interval trio midpoint symmetric single-step (ITMSS) method.
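Since the exact Step 1-2.3 formulas are given in Table 1 (and involve the sums g_i^{(k)} referred to above), the following Python sketch illustrates only the control flow of ITMSS: three forward-backward-forward sweeps per iteration, with midpoints renewed after every update, and a width-based stopping test. The contraction used here is a hypothetical stand-in (a Weierstrass-style correction at the midpoint, intersected with the old interval), not the paper's actual Step 2 update:

```python
import numpy as np

def mid(X):
    return 0.5 * (X[0] + X[1])

def width(X):
    return X[1] - X[0]

def contract(p, Xs, i):
    """Hypothetical stand-in for one interval update of X_i: centre the
    interval on a Weierstrass-corrected midpoint, halve its width, and
    intersect with the previous interval."""
    m = [mid(X) for X in Xs]
    denom = np.prod([m[i] - m[j] for j in range(len(Xs)) if j != i])
    c = m[i] - np.polyval(p, m[i]) / denom      # corrected midpoint
    h = 0.25 * width(Xs[i])                     # half of the new width
    lo, hi = max(Xs[i][0], c - h), min(Xs[i][1], c + h)
    return (min(lo, hi), max(lo, hi))

def itmss_sketch(p, Xs, eps=1e-12, kmax=60):
    n = len(Xs)
    for _ in range(kmax):
        # trio of sweeps: forward, backward, forward; midpoints are
        # renewed implicitly because contract() rereads every interval
        for order in (range(n), reversed(range(n)), range(n)):
            for i in order:
                Xs[i] = contract(p, Xs, i)
        if max(width(X) for X in Xs) < eps:     # w(X_i) < epsilon
            break
    return Xs

p = [1.0, -6.0, 11.0, -6.0]                     # zeros 1, 2, 3
Xs = itmss_sketch(p, [(0.5, 1.5), (1.6, 2.5), (2.6, 3.5)])
```

Because each sweep at least halves every width and the corrected midpoint homes in on the zero much faster than the width shrinks, the sketch mimics the qualitative behaviour claimed for ITMSS: nested intervals decreasing toward the zeros.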
The following shows that the proposed method, by updating the midpoints of the enclosing intervals, always generates intervals that decrease toward the polynomial zeros. Theorem 1. Let p(x) = a_n x^n + a_{n−1} x^{n−1} + · · · + a_0 be a polynomial with n simple roots x*_i, 1 ≤ i ≤ n. Then each sequence of intervals X_i^{(k)}, 1 ≤ i ≤ n, generated by the ITMSS algorithm satisfies w(X_i^{(k+1)}) ≤ (1/2) w(X_i^{(k)}), k = 0, 1, 2, . . . , or the sequence comes to rest at [x*_i, x*_i] after a finite number of steps.
Proof. By the midpoint updating in this ITMSS method and considering the construction of (11), it follows immediately that the width of the inclusion for each zero is at least halved at each new iteration.
Theorem 1 partially holds when the polynomial has multiple roots. If we collect these multiple roots together as x * 1 , x * 2 , · · · , x * n , this approach must be altered so that the new calculations of the included intervals are only done for the indices 1 ≤ i ≤ n. Theorem 1 is then only valid for simple zeros, where the included intervals are recomputed at each step. Meanwhile, the other intervals remain unchanged [21].

Results
In this section, we present the theoretical convergence results of the proposed method, which is then followed by numerical results on real polynomials.

Convergence Analysis
The following theorem concerns the inclusion of the generated intervals X_i^{(k)}, i = 1, . . . , n, and their convergence to x*_i, i = 1, . . . , n. We then establish the R-order of convergence of the proposed method.

Theorem 2.
Let I(R) be the set of all closed intervals on the real line and let D_i be a subset of I(R) for i = 1, . . . , n. If the assumptions of Theorem 1 are valid and 0 ∉ D_i ∈ I(R) is such that p′(x) ∈ D_i, i = 1, . . . , n, then the inclusions are satisfied, and the R-order of convergence of ITMSS satisfies O_R(ITMSS, x*_i) ≥ 16, i = 1, . . . , n. Proof of Theorem 2. The proof that w(X_i^{(k)}) → 0 as k → ∞, i = 1, . . . , n, is almost identical to the corresponding proofs in [21] and is therefore omitted. It remains to be proven that the R-order of convergence is at least 16.
As in the proof of the corresponding theorem in [21], it may be shown that there exists α > 0 such that for every k ≥ 0 the corresponding bounds hold for the widths w_i^{(k,s)}, where β = 1/(n − 1).
Let the corresponding quantities be defined for r = 1, 2, 3. Then, by (16)–(19), for every k ≥ 0 the stated bounds hold. Suppose, without loss of generality, that the stated ordering holds. Then, by an inductive argument, it follows from (12)–(23) that the bounds hold for i = 1, . . . , n and k ≥ 0. Hence, by (18) and Step 2.4, for every k ≥ 0 the stated estimate is true. Then, for every k ≥ 0, by (12)–(23) and (24), we obtain the required estimate. Therefore, it follows from [21] and [31] that O_R(ITMSS, x*_i) ≥ 16, i = 1, . . . , n.

Numerical Experiments
We analyze the efficiency of the proposed method by comparing the computational results for 52 test problems against the interval single-step (IS2) method and its variants, in terms of the number of iterations and the largest final interval width generated [26,30,32]. The selected test examples range from a real polynomial p(x) of degree n = 3 up to n = 12. Furthermore, we only consider real and simple zeros in this experiment. The algorithms are implemented in MATLAB R2017b together with the Intlab V12.1 toolbox for interval arithmetic developed by Rump [33]. The stopping criterion used is w(X_i^{(k)}) ≤ 10^{−16}, i = 1, . . . , n. We then use performance profiles to assess the efficiency of the algorithms. The algorithms considered are IS2, ISS2, IZSS2, and the proposed ITMSS method.

We begin by supplying the initial intervals for each n in each test example to all four algorithms in order to compute the number of iterations, k, and the largest final interval width, w(X_i^{(k)}). The 52 test examples consist of 343 starting points; with four algorithms and two output categories (k and w), 416 output findings must be assessed using a performance profile. According to Dolan and Moré [34], when a large number of test examples is employed, such as 100, 250, 500, or 1000, the output analysis becomes extremely difficult to evaluate, motivating researchers to apply performance profile comparisons. In a nutshell, a performance profile is a visualization-based analytical tool used to evaluate the results of a benchmark experiment, allowing the user to compare the performance of each solver. It can be viewed as a (cumulative) distribution function of a performance indicator, where ρ(τ) is the fraction of problems for which a solver's performance ratio is at most τ, with τ = 1 being the best possible ratio.
The likelihood that a solver prevails over the other solvers is represented by ρ(1). If we are mainly concerned with the number of wins, we can compare the values of ρ(1) for each solver; the preferable solver is the one with the highest number of winning results.
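The construction of ρ(τ) described above can be sketched in a few lines of Python; the solver-by-problem cost matrix below is illustrative, not the paper's data:

```python
import numpy as np

def performance_profile(T, taus):
    """Dolan-More performance profile: T[s, p] is the cost (e.g. number
    of iterations) of solver s on problem p. For each tau, rho[s] is
    the fraction of problems on which solver s is within a factor tau
    of the best solver."""
    ratios = T / T.min(axis=0)                  # performance ratios
    return np.array([[np.mean(ratios[s] <= tau) for tau in taus]
                     for s in range(T.shape[0])])

# Illustrative iteration counts for 4 hypothetical solvers on 5 problems
T = np.array([[4., 5., 3., 6., 4.],
              [3., 4., 3., 5., 3.],
              [2., 3., 2., 4., 3.],
              [2., 2., 2., 3., 2.]])
rho = performance_profile(T, taus=[1.0, 1.5, 2.0])
# The last solver matches the best cost on every problem, so its rho(1) = 1
```

Comparing the first column of `rho`, i.e., ρ(1), across solvers implements the win-count comparison described above.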
The performance profiles for the number of iterations, k, and the maximum final interval width, w, are shown in Figures 1 and 2, respectively, with one line per method. From Figure 1, it is noticeable that the ITMSS method performs better than IS2, ISS2, and IZSS2. In other words, the midpoint-renewing procedure requires fewer iterations to converge to the zeros. As previously stated, the ITMSS method has the best chance of winning and is the most preferred method of all. Note that the order of convergence for IS2 is at least 3 [24], and the graph shows that this method is the least efficient in terms of the number of iterations. Meanwhile, the orders of convergence for ISS2 and IZSS2 are at least 9 and 13, respectively [25,32]. Therefore, from an overall view of Figure 1, each method's efficiency mirrors its order of convergence.

Discussion
In this study, we considered the polynomial zero inclusion problem, specifically by using a single-step procedure. The idea is to propose a more dynamic approach to finding the zeros. Since adjusting the midpoints of the intervals at every inner loop of the algorithm encapsulates all potential real results, the precision of the interval arithmetic method is guaranteed, and the stopping condition is fulfilled faster without neglecting the other preliminary concepts of interval arithmetic. Based on this idea, we name the proposed method the interval trio midpoint symmetric single-step (ITMSS) method. According to the convergence analysis, the ITMSS method has a high order of convergence of at least 16. The results show that the ITMSS method converges faster than the preceding single-step (IS2) procedures.
Furthermore, to evaluate the performance of the ITMSS method, we conducted a numerical experiment using 52 test examples, comparing it with IS2, ISS2 and IZSS2. We visualized the numerical results using performance profiles for the number of iterations and the largest final interval width. From the numerical results explained above, it is evident that the ITMSS method performs better than the IS2, ISS2, and IZSS2 methods. Although the proposed method is highly likely to win in Figure 1, this is not entirely the case for the largest final interval width, as shown in Figure 2. Figure 2 compares all four methods in terms of the largest final interval width generated upon reaching the stopping condition. From the graph, the ITMSS method has a high probability of winning in most scenarios, but shows a less satisfactory outcome in some circumstances. In certain situations, the number of iterations for the ITMSS method is equal to that of the IZSS2 method. Furthermore, there are also situations where the ITMSS method has the fewest iterations of the four methods, yet its largest final interval width is similar to the others. Next, we provide the details of these situations in the tables.
From Table 2, we selected one of the test examples, p(y) = y^4 + (40/3)y^3 − 0.02y^2 − 0.4y, which happened to have a less satisfactory outcome for ITMSS in terms of the number of iterations generated. The table displays the interval width at every iteration for all of the methods under consideration. At iteration k, the largest final interval width is highlighted in grey. As shown in Table 2, the polynomial required four iterations to complete for the IS2 method, whereas the ISS2 method needed only three. However, both the IZSS2 and ITMSS methods stopped at the second iteration. The table shows that the ITMSS method yields smaller intervals than the other methods. For example, the width for i = 2 under the ITMSS method reached the stopping condition at k = 1, while the other indices i did not; these continue to run until the next iteration. Even though ITMSS iterates twice compared to the IZSS2 method, the table shows that it produces intervals that essentially pin down the roots for the first and fourth intervals, respectively. It can be seen that the largest final interval width for ITMSS is significantly smaller. This shows that the ITMSS method converges faster than the other methods and reduces the final interval width at every iteration.
In Table 3, the polynomial p(y) = 20000y^8 + 16080000y^7 + 551830000y^6 + 10534093200y^5 + 122028205260y^4 + 875779839648y^3 + 3789351757513y^2 + 8998687954893y + 8930298867308 gives a final interval width of 8.88178419700125 × 10^{−16} for all methods. However, the ITMSS method fulfilled the stopping condition at k = 1. Judging the algorithm by the largest final interval width alone is therefore not very informative; indeed, the largest final interval width and the number of iterations are correlated. From an overall view, the proposed algorithm converges to the roots simultaneously and fulfills the inclusion theorem better than the other three methods, meaning that the final widths of the generated intervals are minimal. Hence, the ITMSS algorithm almost always converges to the zeros, irrespective of the initial estimates. Overall, renewing the value of g(x) and the midpoint at every inner loop reduces the number of iterations and lessens the final interval width upon convergence. That is, the proposed method ensures that each zero is contained within a suitably narrow final interval.

Conclusions
In this study, we investigated interval iterative methods for the inclusion of polynomial zeros, specifically for solving simple-root problems simultaneously. We provided numerical results, using performance profiles to validate the efficiency of all four methods: IS2, ISS2, IZSS2 and ITMSS. Theoretically, the proposed ITMSS method has an R-order of convergence of at least 16, which means it can bound the zeros rigorously at a superior rate. The numerical results indicate that the ITMSS method surpassed the other three methods by fine-tuning the midpoint and decreasing the final interval width with fewer iterations. Further investigations could address polynomials with complex roots or complex coefficients, as well as the computational CPU time incurred by implementing the interval arithmetic technique.

Abbreviations
The following abbreviations are used in this manuscript:

IS2    Interval single-step method
ISS2   Interval symmetric single-step method
IZSS2  Interval zoro-symmetric single-step method
ITMSS  Interval trio midpoint symmetric single-step method