## 1. Introduction

In this study, inspired by previous papers such as [1], we want to approximate a locally unique solution ${z}^{\ast}$ of the equation $G\left(z\right)=0$, where $G:I\subset T\to T$, $T=\mathbb{R}$ or $T=\mathbb{C}$, $I$ is convex and $G$ is a differentiable function.

As society changes, the way in which we understand science also changes, as can be seen in [2,3,4,5,6]. In the case of Mathematical Modelling [2,3,4,5], it is important to note that it enables us to solve problems of the form of Equation (1). Based on these results, researchers are working under the same assumptions to enlarge the convergence domain of the iterative method, since it is usually small; the final purpose is to estimate $\Vert {z}_{n+1}-{z}_{n}\Vert $ and $\Vert {z}_{n}-{z}^{\ast}\Vert $.

The dynamic behavior of iterative methods provides key information about their reliability and stability. In the past ten years, several studies related to dynamic characteristics have been presented [3,4,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25]. The main limitation of this research is that those families or methods have only one parameter, or even no parameters. In our case, we have more free parameters, so we need another tool to study them. The tools in [19,21] are very useful for studying the dynamic behavior of families with two different parameters, or the real dynamics in other iterative applications. In this article, we will study the convergence of method (2), including a computable convergence radius and error estimates. In addition, we will show the behaviors related to the family by using convergence planes.

We distinguish local, semi-local and global convergence. In the local case (the convergence ball is centered at the solution) and the semi-local case (the convergence ball is centered at the starting point ${x}_{0}$), convergence is faster and less expensive than global convergence, but global convergence provides all roots inside a domain. It is worth noticing that local convergence results are important because they demonstrate the degree of difficulty in choosing starters ${x}_{0}$. There is a plethora of literature on solvers; see for example [26,27,28,29,30] and the references therein. In those references other techniques are used, on polynomials or not, considering proper intervals containing a root or all roots. Those references do not provide computable bounds on $\Vert {x}_{n}-{x}_{\ast}\Vert $ or uniqueness results for the solution ${x}_{\ast}$, but we do. Moreover, in those works the starter ${x}_{0}$ is really a shot in the dark, whereas in our case it is picked from a predetermined ball of convergence.

Consider the motivational example for $T=\mathbb{R}$ and $I=[-\frac{1}{2},\frac{3}{2}]$, where

Then, we have ${x}_{\ast}=0$, and

Then, obviously, the function ${f}^{\prime\prime\prime}\left(t\right)$ is unbounded on $I$. Hence, earlier results using the third (or higher-order) derivatives, which do not appear in (2) but are used to compute the order of convergence, cannot guarantee convergence of (2) to ${z}^{\ast}$.

The convergence studies so far have shown convergence using high-order derivatives that do not appear in the method, hence limiting its applicability. In contrast, we show convergence using only the first derivative, which actually appears in the method; hence, we expand its applicability. A radius of convergence, estimates on $\Vert {x}_{n}-{x}_{\ast}\Vert $ and uniqueness results not given before are also provided. Our technique can be used to expand the applicability of other methods along the same lines. The convergence order is determined using the (COC) or the (ACOC), made precise in Remark 1.

This work is organized as follows. In Section 2, we display the method and the theorems that guarantee its local convergence. In Section 3, we study the convergence planes of the method applied to a quadratic polynomial. In Section 4, we introduce some chemical applications that reinforce the applicability of the method, and in Section 5 we write the conclusions that complete the article.

## 2. Local Convergence Analysis

We begin this section by introducing the method that we will study throughout the article. We analyze the local convergence of the two-step King-like method defined for each $n=0,1,\dots $ by

where ${z}_{0}$ is an initial point and $\alpha \in T$, $\beta \in T$ are parameters. Some well-known methods, such as King's family of methods or the Traub-Ostrowski method, are special cases of this family.

We consider $N\ge 1$, ${L}_{0},L>0$ and $\alpha ,\beta \in T$. We now want to demonstrate the local convergence of method (2), for which we will use the following auxiliary functions and parameters.

The first auxiliary function, ${k}_{1}$, is defined on the interval $[0,\frac{1}{{L}_{0}})$ by

and

Next, taking into account the definitions of $N$ and $\alpha $, we can write

Based on the above, we obtain as a consequence

Now, consider the interval $[0,\frac{1}{{L}_{0}})$ and define a new auxiliary function

and another auxiliary function

It clearly follows that

and, consequently,

when

Using now the Intermediate Value Theorem, it is clear that there exists at least one root of ${p}_{0}$ in the interval $(0,\frac{1}{{L}_{0}})$; we will call ${r}_{0}$ the smallest zero in that interval.
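The root-location step just described (a sign change of ${p}_{0}$ on $(0,\frac{1}{{L}_{0}})$ guaranteed by the Intermediate Value Theorem, then taking the smallest such zero) can be sketched generically. Since ${p}_{0}$ itself is not displayed in this excerpt, the helper below is hypothetical and works for any continuous function on an interval.

```python
def smallest_zero(p, a, b, steps=10_000, tol=1e-12):
    """Locate the smallest sign change of a continuous p on (a, b)
    by a uniform scan, then refine it by bisection (IVT)."""
    h = (b - a) / steps
    lo = a
    while lo < b:
        hi = lo + h
        if p(lo) * p(hi) <= 0.0:        # sign change: a zero lies in [lo, hi]
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if p(lo) * p(mid) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            return 0.5 * (lo + hi)
        lo = hi
    return None                          # no sign change detected
```

Applied to ${p}_{0}$ on $(0,\frac{1}{{L}_{0}})$ (and later to ${p}_{2}$ on $(0,{r}_{0})$), this returns the smallest zero, i.e., the radius candidates ${r}_{0}$ and ${r}_{2}$.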

Now we can apply the same technique. To do so, we define on the interval $[0,{r}_{0})$ a new auxiliary function

and

Similarly, supposing

it is easy to see that

and

when

So, it is clear that there exists at least one root of ${p}_{2}$ in the interval $(0,{r}_{0})$; we will call ${r}_{2}$ the smallest zero in that interval.

We also define the value $r$ as the minimum of ${r}_{1}$ and ${r}_{2}$. For $t\in [0,r)$, the following conditions are fulfilled by definition of the functions

and

From now on, we will denote by $U(v,\rho )$ the open ball in $T$ with center $v\in T$ and radius $\rho >0$, and by $\overline{U}(v,\rho )$ the corresponding closed ball; we also set ${I}_{0}=I\cap U({z}^{\ast},\frac{1}{{L}_{0}})$, where ${L}_{0}$ is positive. We can now present the following result.

**Theorem** **1.** Let $G:I\subset T\to T$ be a differentiable function. Suppose that there exist ${z}^{\ast}\in I$, ${L}_{0},L>0$, $N\ge 1$ and $\alpha ,\beta \in T$ such that the following requirements are true

and

Then, method (2) generates a well-defined sequence $\left\{{z}_{n}\right\}$ for ${z}_{0}\in U({z}^{\ast},r)\backslash \left\{{z}^{\ast}\right\}$; the sequence $\left\{{z}_{n}\right\}$ remains in $U({z}^{\ast},r)$ for each $n=0,1,2,\dots $ and converges to ${z}^{\ast}$. Furthermore, the inequalities written below are also fulfilled

and

Moreover, if we take $R$ in the form

then the limit point ${z}^{\ast}$ is the only solution of Equation (1) in the domain

**Proof.** To prove the theorem, we use mathematical induction to show estimates (16) and (17). First of all, using

and (12), we get

Using this condition and the Banach lemma on invertible functions [2,3,5], it follows that

is not equal to zero and satisfies

As a consequence, ${w}_{0}$ is well defined. Now, using (11), we obtain

Notice that $\theta ({z}_{0}-{z}^{\ast})+{z}^{\ast}\in U({z}^{\ast},r)$, since $\Vert \theta ({z}_{0}-{z}^{\ast})+{z}^{\ast}-{z}^{\ast}\Vert =\theta \Vert {z}_{0}-{z}^{\ast}\Vert <r.$

Now, from (14) and (19), we obtain

From method (2) for $n=0$, and Equations (7), (8), (13), (19) and (22), we get

As a result, (16) is satisfied for $n=0$, and ${w}_{0}\in U({z}^{\ast},r)$.

Now, replacing ${z}_{0}$ by ${w}_{0}$ in the previous inequality (22), we get

Next, we must prove that $G\left({z}_{0}\right)+(\beta -2)G\left({w}_{0}\right)\ne 0$ for ${z}_{0}\ne {z}^{\ast}$.

From conditions (7), (9), (11), (12) and (22)–(23), it follows that

or, equivalently, from (24) we obtain

Consequently, ${z}_{1}$ is well defined by the second substep of method (2) for $n=0$.

Now, from (2) for $n=0$, together with (7), (10), (19), (22) and (25), we get

which demonstrates Equation (17) with $n=0$ and ${z}_{1}\in U({z}^{\ast},r)$.

Now, substituting ${z}_{0}$, ${w}_{0}$, ${z}_{1}$ by ${z}_{k}$, ${w}_{k}$, ${z}_{k+1}$ in the previous estimates, we obtain (16)–(18).

Next, from $\Vert {z}_{k+1}-{z}^{\ast}\Vert \le c\Vert {z}_{k}-{z}^{\ast}\Vert <r$, where $c={k}_{2}(\Vert {z}_{0}-{z}^{\ast}\Vert )\in [0,1)$, we obtain $\underset{k\to \infty}{lim}{z}_{k}={z}^{\ast}$ and ${z}_{k+1}\in U({z}^{\ast},r)$.

Finally, we set

for some ${w}^{\ast}\in \overline{U}({z}^{\ast},R)$ with $G\left({w}^{\ast}\right)=0$. From (12), we obtain

It is clear from (26) that ${M}^{-1}\in L(T,T)$. Finally, using $0=G\left({w}^{\ast}\right)-G\left({z}^{\ast}\right)=M({w}^{\ast}-{z}^{\ast})$, we get ${w}^{\ast}={z}^{\ast}$. □

**Remark** **1.** Moreover, for the error bounds in practice we can use the computational order of convergence (COC) [31], given by $\xi =\frac{\mathrm{ln}\left(\Vert {x}_{n+1}-{x}^{\ast}\Vert /\Vert {x}_{n}-{x}^{\ast}\Vert \right)}{\mathrm{ln}\left(\Vert {x}_{n}-{x}^{\ast}\Vert /\Vert {x}_{n-1}-{x}^{\ast}\Vert \right)}$, or the approximate computational order of convergence (ACOC) [31], ${\xi }^{\ast}=\frac{\mathrm{ln}\left(\Vert {x}_{n+1}-{x}_{n}\Vert /\Vert {x}_{n}-{x}_{n-1}\Vert \right)}{\mathrm{ln}\left(\Vert {x}_{n}-{x}_{n-1}\Vert /\Vert {x}_{n-1}-{x}_{n-2}\Vert \right)}$, which uses iterates instead of the exact solution.
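As a minimal illustration of the ACOC in practice, the sketch below estimates the order from the last four iterates of a sequence; the Newton example used to exercise it is only a demonstration, not taken from the paper.

```python
import math

def acoc(xs):
    """Approximate computational order of convergence from the last four
    iterates, using differences of consecutive iterates instead of the
    (generally unknown) exact solution."""
    e2 = abs(xs[-1] - xs[-2])
    e1 = abs(xs[-2] - xs[-3])
    e0 = abs(xs[-3] - xs[-4])
    return math.log(e2 / e1) / math.log(e1 / e0)

# Check on Newton's method for f(x) = x^2 - 2, which has order 2
xs = [2.0]
for _ in range(5):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))
```

For this sequence `acoc(xs)` is close to 2, as expected for a quadratically convergent method.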

**Remark** **2.**
- (a) Condition (11) and estimates (12)–(14) are natural Lipschitz-type conditions, standard in the study of iterative methods. In fact, (12)–(14) can be condensed into a single condition, but keeping them separate is better for reasons of precision.
- (b) Concerning the second and the third conditions in Theorem 1, these were left as uncluttered as possible. One can easily see that there are infinitely many pairs of values $(\alpha ,\beta )$ satisfying these inequalities (but not as weak). Indeed, suppose that

  Then, the second inequality is satisfied provided that

  or

  Then, the original inequalities certainly hold if (27) and (28) hold. Moreover, (27) (in view of (28)) can even be replaced by

  However, as noted earlier, conditions (27) and (28), or (28) and (29), are stronger than the two inequalities appearing in the statement of Theorem 1 (and can certainly replace them). Either way, as claimed above, there are infinitely many choices of $(\alpha ,\beta )$.

## 3. The Convergence Planes Applied to the Quadratic Polynomial $p\left(z\right)={z}^{2}-1$ Using Method (2)

Once the convergence of the method has been demonstrated in the previous section, we will study the convergence planes. One should first study the parameter planes of method (2); the details of these parameter planes can be seen in [24]. We will show the convergence planes of the method applied to the degree-two polynomial $p\left(z\right)={z}^{2}-1$, taking different values of the parameters $\alpha $ and $\beta $. In this article, to draw the points, we iterate a maximum of 200 times with a tolerance of ${10}^{-3}$. We use different colors to differentiate the numerical behavior of the points when they are iterated:

- in yellow, if the iteration of the initial point diverges to ∞;
- in magenta, if the starting point converges to the first root of the quadratic polynomial, $z=1$;
- in cyan, if the iteration of the starting point converges to the other root of the quadratic polynomial, $z=-1$;
- in red, if the iteration of the initial point converges to any strange fixed point;
- in other colors, if the iteration of the initial point converges to different n-cycles ($n\le 8$);
- in black, if the iteration of the initial point has any other behavior.

Consequently, we are only interested in points painted in cyan or magenta, since this means that the selection of parameters $\alpha $ and $\beta $ for that initial point ${z}_{0}$ is, in numerical terms, a good one.
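The classification behind these colors can be sketched as follows. This is a hypothetical helper, and since the display of method (2) is elided in this excerpt, it assumes the King-type iteration with a damped Newton first substep (parameters $\alpha $, $\beta $); only the yellow/magenta/cyan/black cases are shown, and cycle detection is omitted.

```python
def classify(alpha, beta, z0, max_iter=200, tol=1e-3):
    """Color one point of the convergence plane for p(z) = z^2 - 1."""
    p = lambda z: z * z - 1.0
    dp = lambda z: 2.0 * z
    z = complex(z0)
    for _ in range(max_iter):
        try:
            w = z - alpha * p(z) / dp(z)                 # assumed first substep
            denom = p(z) + (beta - 2.0) * p(w)
            z = w - (p(z) + beta * p(w)) / denom * p(w) / dp(z)
        except ZeroDivisionError:
            return "black"
        if abs(z) > 1e8:
            return "yellow"       # diverges to infinity
        if abs(z - 1.0) < tol:
            return "magenta"      # converges to the root z = 1
        if abs(z + 1.0) < tol:
            return "cyan"         # converges to the root z = -1
    return "black"                 # any other behavior (cycles, wandering, ...)
```

Sweeping `classify` over a grid of $(\alpha ,\beta )\in [-5,5]\times [-5,5]$ for a fixed ${z}_{0}$ produces a convergence plane like those in Figures 1 and 2.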

In Figure 1 we see the convergence plane associated to method (2) applied to the degree-two polynomial $p\left(z\right)={z}^{2}-1$ with the initial point ${z}_{0}=0.5$, in the region of the plane $(\alpha ,\beta )\in [-5,5]\times [-5,5]$; Figure 2 shows the same with the starting point ${z}_{0}=-0.75$ and the same region $(\alpha ,\beta )\in [-5,5]\times [-5,5]$.

## 4. Application Examples

To show the applicability of the theorems and results presented in this article, let us look at Planck's radiation law problem, which can be obtained from reference [32]:

which estimates the energy density within an isothermal black body.

If we use the change of variable

the expression which estimates the energy density within an isothermal black body can be written in the following terms:

If we now define the function $h(x)$ as

then finding the roots of Equation (32) provides us the maximum wavelength of radiation ($\lambda $) through the expression:

Let us consider the interval $D=[4,6]$ and the solution ${x}^{\ast}=4.965114\dots $. Then, we will show the applicability of the three special cases introduced in the previous Section 3.
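As a quick numerical check, this root can be reproduced in a few lines. The displayed formula for $h$ is elided in this excerpt, so the sketch below assumes the standard reduced form of Planck's problem, $h(x)={e}^{-x}+x/5-1$, whose zero in $[4,6]$ agrees with the stated ${x}^{\ast}=4.965114\dots $; plain Newton iteration is used here as a stand-in for method (2).

```python
import math

# Assumed reduced form of Planck's problem (the paper's display is elided):
# h(x) = exp(-x) + x/5 - 1, with a zero in D = [4, 6].
def h(x):
    return math.exp(-x) + x / 5.0 - 1.0

def dh(x):
    return -math.exp(-x) + 0.2

x = 5.0                      # starting point inside D = [4, 6]
for _ in range(20):
    x -= h(x) / dh(x)        # plain Newton step, stand-in for method (2)

# x is now close to the stated solution 4.965114...
```

Any of the three $(\alpha ,\beta )$ cases below, applied to this $h$, should reach the same limit.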

**Application** **1.** First, we consider the case $\alpha =0.9$ and $\beta =2.5$.

For this case we obtain the values

and

Moreover, we can conclude that the following conditions

are satisfied. Then, by the definition of the "k" functions, we can derive

and

Therefore, taking $\alpha =0.9$, $\beta =2.5$ and $r={r}_{2}=2.64413\dots $, we can assure by Theorem 1 that the method described in (2) converges to the unique solution ${x}^{\ast}$ of the function $h\left(x\right)$.

**Application** **2.** Secondly, we consider the case $\alpha =0.75$ and $\beta =2.01$.

For this case we obtain the values

and

Moreover, we can conclude that the following conditions

are satisfied. Now, applying the definition of the "k" functions, we get

and

So, taking the values $\alpha =0.75$, $\beta =2.01$ and $r={r}_{2}=1.18924\dots $, we can assure by Theorem 1 the convergence of the method described in (2) to the solution ${x}^{\ast}$ of $h\left(x\right)$.

**Application** **3.** Finally, we consider the case $\alpha =\beta =1$.

For this case we obtain the values

and

Moreover, we can conclude that the following conditions

are satisfied. Now, applying the definition of the "k" functions, we get

and

So, taking the values $\alpha =\beta =1$ and the corresponding radius $r={r}_{2}$, we can assure by Theorem 1 the convergence of the method described in (2) to the solution ${x}^{\ast}$ of $h\left(x\right)$.

We obtain the values for this caseand Moreover, we can conclude the following conditionsare satisfied. Now, applying the definition of the “k” functions we getand So, taking the values: $\alpha =0.9$, $\beta =2.5$ and $r={r}_{2}=2.64413\dots $ we can assure the convergence of the method described in (2) to the solution ${x}^{\ast}$ of $f\left(x\right)$ by Theorem 1.