Special Issue "Iterative Methods for Solving Nonlinear Equations and Systems"

A special issue of Mathematics (ISSN 2227-7390).

Deadline for manuscript submissions: closed (30 September 2019).

Printed Edition Available!
A printed edition of this Special Issue is available.

Special Issue Editors

Dr. Fazlollah Soleymani
Website
Guest Editor
Institute for Advanced Studies in Basic Sciences, Zanjan, Iran
Interests: Numerical linear algebra; option pricing PDEs; computational methods for SDEs; iterative methods

Special Issue Information

Dear Colleagues,

Solving nonlinear equations and systems is a non-trivial task that arises in many areas of science and technology. These problems can rarely be solved directly, so iterative algorithms play a fundamental role in their treatment. This is an area of research that has grown rapidly in recent years.

The main theme of this Special Issue, although not the only one, is the design, convergence analysis, stability, and application to practical problems of new iterative schemes for solving nonlinear problems. This includes methods with and without memory, with derivatives or derivative-free, the real or complex dynamics associated with them, and convergence analyses that can be local, semilocal, or global.

Prof. Dr. Juan R. Torregrosa
Prof. Dr. Alicia Cordero
Dr. Fazlollah Soleymani
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • nonlinear problems
  • iterative methods
  • convergence
  • efficiency
  • chaotic behavior
  • complex or real dynamics

Published Papers (30 papers)

Research

Open Access Article
Higher-Order Derivative-Free Iterative Methods for Solving Nonlinear Equations and Their Basins of Attraction
Mathematics 2019, 7(11), 1052; https://doi.org/10.3390/math7111052 - 04 Nov 2019
Abstract
Based on Steffensen-type methods, we develop fourth-, eighth-, and sixteenth-order algorithms for solving one-variable equations. The new methods converge with fourth, eighth, and sixteenth order and require three, four, and five function evaluations per iteration, respectively. Therefore, all these algorithms are optimal in the sense of the Kung–Traub conjecture, and the new schemes have efficiency indices of 1.587, 1.682, and 1.741, respectively. We give convergence analyses of the proposed methods and compare them numerically with established schemes of the same convergence order, demonstrating the efficiency of the present techniques. We also study basins of attraction to illustrate their dynamical behavior in the complex plane. Full article
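As background, the classical second-order Steffensen iteration that such schemes extend, together with the Kung–Traub efficiency index p^(1/d) quoted above, can be sketched as follows (a minimal illustration only, not the paper's higher-order methods):

```python
import math

def steffensen(f, x0, tol=1e-12, max_iter=50):
    """Classical second-order Steffensen method: derivative-free,
    replacing f'(x) by the divided difference f[x, x + f(x)]."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        g = (f(x + fx) - fx) / fx   # first-order divided difference
        x -= fx / g
    return x

def efficiency_index(order, evals):
    """Kung-Traub efficiency index p**(1/d): order p, d evaluations per step."""
    return order ** (1.0 / evals)

root = steffensen(lambda x: x * x - 2.0, 1.5)   # approximates sqrt(2)
```

With (p, d) = (4, 3), (8, 4), and (16, 5), `efficiency_index` evaluates to 1.587, 1.682, and 1.741, matching the figures in the abstract.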
Open Access Article
Design and Complex Dynamics of Potra–Pták-Type Optimal Methods for Solving Nonlinear Equations and Its Applications
Mathematics 2019, 7(10), 942; https://doi.org/10.3390/math7100942 - 11 Oct 2019
Cited by 1
Abstract
In this paper, using the idea of weight functions on the Potra–Pták method, an optimal fourth-order method, a non-optimal sixth-order method, and a family of optimal eighth-order methods are proposed. These methods are tested on some numerical examples, and the results are compared with some known methods of the corresponding order. It is shown that the results obtained from the proposed methods are competitive with those of other methods. The proposed methods are also tested on some problems from engineering and science. Furthermore, by applying these methods to quadratic and cubic polynomials, their stability is analyzed by means of their basins of attraction. Full article
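For reference, the underlying third-order Potra–Pták iteration that the weight-function constructions above build on can be sketched as follows (a minimal sketch with an analytic derivative; the paper's optimal variants add weight functions on top of this):

```python
import math

def potra_ptak(f, df, x0, tol=1e-12, max_iter=50):
    """Third-order Potra-Ptak method: a Newton step followed by a
    correction that reuses the same derivative value.
        y     = x - f(x)/f'(x)
        x_new = x - (f(x) + f(y)) / f'(x)
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        d = df(x)
        y = x - fx / d
        x = x - (fx + f(y)) / d
    return x

# solve cos(x) - x = 0, root near 0.739
root = potra_ptak(lambda x: math.cos(x) - x,
                  lambda x: -math.sin(x) - 1.0, 1.0)
```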
Open Access Article
Higher-Order Iteration Schemes for Solving Nonlinear Systems of Equations
Mathematics 2019, 7(10), 937; https://doi.org/10.3390/math7100937 - 10 Oct 2019
Abstract
We present a three-step family of iterative methods to solve systems of nonlinear equations. This family is a generalization of the well-known fourth-order King's family to the multidimensional case. The convergence analysis of the methods is provided under mild conditions. The analytical discussion of the work is supported by numerical experiments on some application-oriented problems. Finally, the numerical results demonstrate the validity and reliability of the suggested methods. Full article
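For readers unfamiliar with the scalar case, the fourth-order King family being generalized here can be sketched in one dimension as follows (a background sketch; β = 0 recovers Ostrowski's method):

```python
def king(f, df, x0, beta=0.0, tol=1e-12, max_iter=50):
    """Scalar fourth-order King family (1973):
        y     = x - f(x)/f'(x)
        x_new = y - f(y)/f'(x) * (f(x) + beta*f(y)) / (f(x) + (beta - 2)*f(y))
    beta = 0 gives Ostrowski's method."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        dfx = df(x)
        y = x - fx / dfx
        fy = f(y)
        x = y - fy / dfx * (fx + beta * fy) / (fx + (beta - 2.0) * fy)
    return x

# real root of x^3 - x - 2, for two members of the family
r0 = king(lambda x: x**3 - x - 2, lambda x: 3 * x**2 - 1, 1.5, beta=0.0)
r1 = king(lambda x: x**3 - x - 2, lambda x: 3 * x**2 - 1, 1.5, beta=1.0)
```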
Open Access Article
An Improved Curvature Circle Algorithm for Orthogonal Projection onto a Planar Algebraic Curve
Mathematics 2019, 7(10), 912; https://doi.org/10.3390/math7100912 - 01 Oct 2019
Cited by 1
Abstract
Point orthogonal projection onto a planar algebraic curve plays an important role in computer graphics, computer-aided design, computer-aided geometric design, and other fields. For the case where the test point p is very far from the planar algebraic curve, we propose an improved curvature circle algorithm to find the footpoint. Concretely, the first step is to iterate Newton's steepest gradient descent method repeatedly until the iterated point falls on the planar algebraic curve. The footpoint q is then sought with an algorithm whose core technique is the curvature circle method. The next step is to orthogonally project the footpoint q onto the planar algebraic curve using the hybrid tangent vertical foot algorithm. The footpoint-computing algorithm and the hybrid tangent vertical foot algorithm are run repeatedly until the distance between the current footpoint and the previous one is near 0. Furthermore, we propose a Second Remedial Algorithm based on Comprehensive Algorithm B; in particular, its robustness is greatly improved over that of Comprehensive Algorithm B, and it achieves our expected result. Numerical examples demonstrate that the Second Remedial Algorithm converges accurately and efficiently no matter how far the test point is from the planar algebraic curve and wherever the initial iteration point lies. Full article
Open Access Article
Nonlinear Operators as Concerns Convex Programming and Applied to Signal Processing
Mathematics 2019, 7(9), 866; https://doi.org/10.3390/math7090866 - 19 Sep 2019
Cited by 4
Abstract
Splitting methods have received a lot of attention lately because many nonlinear problems that arise in applied areas, such as signal processing and image restoration, are modeled mathematically as a nonlinear equation whose operator is decomposed as the sum of two nonlinear operators. Most investigations of splitting methods are carried out in Hilbert spaces. This work develops an iterative scheme in Banach spaces. We prove a convergence theorem for our iterative scheme and give applications to common zeros of accretive operators, the convexly constrained least-squares problem, the convex minimization problem, and signal processing. Full article
Open Access Feature Paper Article
An Efficient Iterative Method Based on Two-Stage Splitting Methods to Solve Weakly Nonlinear Systems
Mathematics 2019, 7(9), 815; https://doi.org/10.3390/math7090815 - 03 Sep 2019
Cited by 2
Abstract
In this paper, an iterative method for solving large, sparse systems of weakly nonlinear equations is presented. This method is based on the Hermitian/skew-Hermitian splitting (HSS) scheme. Under suitable assumptions, we establish a convergence theorem for this method. In addition, it is shown that any faster and less time-consuming two-stage splitting method that satisfies the convergence theorem can be used in place of the HSS inner iterations. Numerical results, such as CPU times, show the robustness of the new method, which is simple, fast, and convenient while producing accurate solutions. Full article
Open Access Article
A New Class of Iterative Processes for Solving Nonlinear Systems by Using One Divided Differences Operator
Mathematics 2019, 7(9), 776; https://doi.org/10.3390/math7090776 - 23 Aug 2019
Cited by 2
Abstract
In this manuscript, a new family of Jacobian-free iterative methods for solving nonlinear systems is presented. The fourth-order convergence for all the elements of the class is established, proving, in addition, that one element of this family has order five. The proposed methods have four steps and, in all of them, the same divided difference operator appears. Numerical problems, including systems of academic interest and the system resulting from the discretization of the boundary problem described by Fisher’s equation, are shown to compare the performance of the proposed schemes with other known ones. The numerical tests are in concordance with the theoretical results. Full article
Open Access Article
An Efficient Conjugate Gradient Method for Convex Constrained Monotone Nonlinear Equations with Applications
Mathematics 2019, 7(9), 767; https://doi.org/10.3390/math7090767 - 21 Aug 2019
Cited by 2
Abstract
This research paper proposes a derivative-free method for solving systems of nonlinear equations with closed and convex constraints, where the functions under consideration are continuous and monotone. Given an initial iterate, the process first generates a specific direction and then employs a line search strategy along that direction to calculate a new iterate. If the new iterate solves the problem, the process stops. Otherwise, the projection of the new iterate onto the closed convex set (the constraint set) determines the next iterate. In addition, the direction satisfies the sufficient descent condition, and the global convergence of the method is established under suitable assumptions. Finally, some numerical experiments are presented to show the performance of the proposed method in solving nonlinear equations and its application to image recovery problems. Full article
Open Access Article
A Modified Fletcher–Reeves Conjugate Gradient Method for Monotone Nonlinear Equations with Some Applications
Mathematics 2019, 7(8), 745; https://doi.org/10.3390/math7080745 - 15 Aug 2019
Cited by 3
Abstract
One of the fastest growing and most efficient methods for solving unconstrained minimization problems is the conjugate gradient (CG) method. Recently, considerable efforts have been made to extend the CG method to solving monotone nonlinear equations. In this research article, we present a modification of the Fletcher–Reeves (FR) conjugate gradient projection method for constrained monotone nonlinear equations. The method possesses the sufficient descent property, and its global convergence is proved under some appropriate assumptions. Two sets of numerical experiments are carried out to show the good performance of the proposed method compared with some existing ones. The first experiment solves monotone constrained nonlinear equations using some benchmark test problems, while the second applies the method to signal and image recovery problems arising from compressive sensing. Full article
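As background, the classical Fletcher–Reeves update β_k = ‖g_{k+1}‖² / ‖g_k‖² that the paper modifies can be sketched in its original unconstrained-minimization setting (a minimal sketch with Armijo backtracking and a steepest-descent restart safeguard; the paper's method is a derivative-free projection variant for constrained monotone equations):

```python
import numpy as np

def fr_cg(f, grad, x0, tol=1e-8, max_iter=500):
    """Fletcher-Reeves nonlinear conjugate gradient with Armijo backtracking."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d) and t > 1e-16:
            t *= 0.5                       # Armijo backtracking line search
        x_new = x + t * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        if g_new @ d >= 0:                 # safeguard: restart with steepest descent
            d = -g_new
        x, g = x_new, g_new
    return x

# convex quadratic test problem with minimizer (1, -2)
f = lambda x: 0.5 * (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 2.0) ** 2
grad = lambda x: np.array([x[0] - 1.0, 4.0 * (x[1] + 2.0)])
xmin = fr_cg(f, grad, np.zeros(2))
```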
Open Access Article
Calculating the Weighted Moore–Penrose Inverse by a High Order Iteration Scheme
Mathematics 2019, 7(8), 731; https://doi.org/10.3390/math7080731 - 10 Aug 2019
Abstract
The goal of this research is to extend and investigate an improved approach for calculating the weighted Moore–Penrose (WMP) inverses of singular or rectangular matrices. The scheme is constructed based on a hyperpower method of order ten. It is shown that the improved scheme converges with this rate using only six matrix products per cycle. Several tests are conducted to reveal the applicability and efficiency of the discussed method, in contrast with its well-known competitors. Full article
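The hyperpower family mentioned above generalizes the classical second-order Newton–Schulz iteration X ← X(2I − AX); this low-order member is easy to sketch (background only; the paper's tenth-order member rearranges the hyperpower polynomial so that each cycle needs just six products):

```python
import numpy as np

def newton_schulz_pinv(A, iters=50):
    """Second-order hyperpower (Newton-Schulz) iteration converging to the
    Moore-Penrose inverse of A. The starting guess
    X0 = A^T / (||A||_1 * ||A||_inf) guarantees convergence."""
    A = np.asarray(A, dtype=float)
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(A.shape[0])
    for _ in range(iters):
        X = X @ (2.0 * I - A @ X)   # one hyperpower step, two matrix products
    return X

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])   # rectangular, full column rank
X = newton_schulz_pinv(A)
```

Higher-order members of the family replace 2I − AX with a higher-degree polynomial in AX, trading more products per cycle for a higher convergence rate.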
Open Access Article
Numerical Solution of Heston-Hull-White Three-Dimensional PDE with a High Order FD Scheme
Mathematics 2019, 7(8), 704; https://doi.org/10.3390/math7080704 - 06 Aug 2019
Cited by 2
Abstract
A new numerical method for tackling the three-dimensional Heston–Hull–White partial differential equation (PDE) is proposed. This PDE has an application in pricing options when not only the asset price and the volatility but also the risk-free interest rate are stochastic. To solve this time-dependent three-dimensional PDE as efficiently as possible, high-order adaptive finite difference (FD) methods are applied within the method of lines. It is derived that the new estimates have fourth-order convergence on non-uniform grids. In addition, it is proved that the overall procedure is conditionally time-stable. The results are upheld via several numerical tests. Full article
Open Access Feature Paper Article
A Unified Convergence Analysis for Some Two-Point Type Methods for Nonsmooth Operators
Mathematics 2019, 7(8), 701; https://doi.org/10.3390/math7080701 - 03 Aug 2019
Abstract
The aim of this paper is the approximation of nonlinear equations using iterative methods. We present a unified convergence analysis for some two-point type methods. This way we compare specializations of our method using not necessarily the same convergence criteria. We consider both semilocal and local analysis. In the first one, the hypotheses are imposed on the initial guess and in the second on the solution. The results can be applied for smooth and nonsmooth operators. Full article
Open Access Article
An Optimal Eighth-Order Family of Iterative Methods for Multiple Roots
Mathematics 2019, 7(8), 672; https://doi.org/10.3390/math7080672 - 27 Jul 2019
Cited by 1
Abstract
In this paper, we introduce a new family of efficient and optimal iterative methods for finding multiple roots of nonlinear equations with known multiplicity (m ≥ 1). We use the weight function approach involving one and two parameters to develop the new family. A comprehensive convergence analysis is given to demonstrate the optimal eighth-order convergence of the suggested scheme. Finally, numerical and dynamical tests are presented, which validate the theoretical results formulated in this paper and illustrate that the suggested family is competitive among multiple-root-finding methods. Full article
Open Access Article
A Fast Derivative-Free Iteration Scheme for Nonlinear Systems and Integral Equations
Mathematics 2019, 7(7), 637; https://doi.org/10.3390/math7070637 - 18 Jul 2019
Cited by 2
Abstract
Derivative-free schemes are a competitive class of methods, since they are one remedy in cases where the computation of the Jacobian or higher-order derivatives of multi-dimensional functions is difficult. This article studies a variant of Steffensen's method with memory for tackling a nonlinear system of equations, which is not only independent of Jacobian calculations but also improves computational efficiency. The analytical parts of the work are supported by several tests, including an application to mixed integral equations. Full article
Open Access Article
A Seventh-Order Scheme for Computing the Generalized Drazin Inverse
Mathematics 2019, 7(7), 622; https://doi.org/10.3390/math7070622 - 12 Jul 2019
Abstract
One of the most important generalized inverses is the Drazin inverse, which is defined for square matrices having an index. The objective of this work is to investigate and present an iterative method as a computational tool for this task. The scheme reaches seventh-order convergence, provided a suitable initial matrix is chosen, and employs only five matrix products per cycle. After some analytical discussion, several tests are provided to show the efficiency of the presented formulation. Full article
Open Access Article
A Generic Family of Optimal Sixteenth-Order Multiple-Root Finders and Their Dynamics Underlying Purely Imaginary Extraneous Fixed Points
Mathematics 2019, 7(6), 562; https://doi.org/10.3390/math7060562 - 20 Jun 2019
Abstract
A generic family of optimal sixteenth-order multiple-root finders is theoretically developed from general settings of weight functions under known multiplicity. Special cases of rational weight functions are considered, and relevant coefficient relations are derived in such a way that all the extraneous fixed points are purely imaginary. A number of schemes are constructed based on the selection of desired free parameters among the coefficient relations. Numerical and dynamical aspects of the convergence of such schemes are explored with tabulated computational results and illustrated attractor basins. An overall conclusion is drawn, along with future work on a different family of optimal root-finders. Full article
Open Access Article
The Modified Inertial Iterative Algorithm for Solving Split Variational Inclusion Problem for Multi-Valued Quasi Nonexpansive Mappings with Some Applications
Mathematics 2019, 7(6), 560; https://doi.org/10.3390/math7060560 - 19 Jun 2019
Abstract
Based on the very recent work by Shehu and Agbebaku in Comput. Appl. Math. 2017, we introduce an extension of their iterative algorithm by combining it with inertial extrapolation for solving split inclusion problems and fixed point problems. Under suitable conditions, we prove that the proposed algorithm converges strongly to common elements of the solution set of the split inclusion problems and fixed point problems. Full article
Open Access Article
How to Obtain Global Convergence Domains via Newton’s Method for Nonlinear Integral Equations
Mathematics 2019, 7(6), 553; https://doi.org/10.3390/math7060553 - 17 Jun 2019
Abstract
We use the theoretical significance of Newton's method to draw conclusions about the existence and uniqueness of solutions of a particular type of nonlinear Fredholm integral equation. In addition, we obtain a domain of global convergence for Newton's method. Full article
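The setting can be illustrated by discretizing a simple nonlinear Fredholm equation with a Nyström quadrature rule and applying Newton's method to the resulting system (a toy sketch with an invented separable kernel, not an equation from the paper):

```python
import numpy as np

def newton_system(F, x0, tol=1e-12, max_iter=50, h=1e-7):
    """Newton's method for F(x) = 0 with a forward-difference Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        J = np.empty((x.size, x.size))
        for j in range(x.size):
            e = np.zeros(x.size)
            e[j] = h
            J[:, j] = (F(x + e) - Fx) / h   # j-th Jacobian column
        x = x - np.linalg.solve(J, Fx)
    return x

# Nystrom discretization of  x(s) = 1 + (1/4) * int_0^1 s*t*x(t)^2 dt
n = 16
t, w = np.polynomial.legendre.leggauss(n)
t, w = 0.5 * (t + 1.0), 0.5 * w          # Gauss nodes/weights mapped to [0, 1]

def F(x):
    return x - 1.0 - 0.25 * t * (w @ (t * x ** 2))

sol = newton_system(F, np.ones(n))       # solution is x(s) = 1 + c*s, c > 0
```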
Open Access Article
On the Semilocal Convergence of the Multi–Point Variant of Jarratt Method: Unbounded Third Derivative Case
Mathematics 2019, 7(6), 540; https://doi.org/10.3390/math7060540 - 13 Jun 2019
Cited by 1
Abstract
In this paper, we study the semilocal convergence of the multi-point variant of the Jarratt method under two different mild situations. The first is the assumption that just the second-order Fréchet derivative, instead of the third-order one, is bounded. In the second, the bound on the norm of the third-order Fréchet derivative is assumed at the initial iterate rather than on the domain of the nonlinear operator, and it also satisfies a local ω-continuity condition, in order to prove convergence and existence-uniqueness, followed by an a priori error bound. During the study, it is noted that some norms and functions have to be recalculated, and their significance can also be seen in the numerical section. Full article
Open Access Article
A Higher Order Chebyshev-Halley-Type Family of Iterative Methods for Multiple Roots
Mathematics 2019, 7(4), 339; https://doi.org/10.3390/math7040339 - 09 Apr 2019
Abstract
The aim of this paper is to introduce new high-order iterative methods for multiple roots of nonlinear scalar equations; this is a demanding task in the area of computational mathematics and numerical analysis. Specifically, we present a new Chebyshev–Halley-type iteration function having at least sixth-order convergence, and eighth-order convergence for a particular parameter value, in the case of multiple roots. With regard to computational cost, each member of our scheme needs four functional evaluations per step. Therefore, the maximum efficiency index of our scheme is 1.6818 for α = 2, which corresponds to an optimal method in the sense of Kung and Traub's conjecture. We obtain the theoretical convergence order using Taylor expansions. Finally, we consider some real-life situations to establish numerical experiments that corroborate the theoretical results. Full article
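For orientation, the classical third-order Chebyshev–Halley family for simple roots, from which such schemes descend, can be sketched as follows (a background sketch only; the paper's multiple-root scheme is more involved):

```python
import math

def chebyshev_halley(f, df, d2f, x0, alpha=0.5, tol=1e-12, max_iter=50):
    """Classical Chebyshev-Halley family (third order for simple roots):
        L     = f(x) * f''(x) / f'(x)**2
        x_new = x - (1 + L / (2 * (1 - alpha*L))) * f(x) / f'(x)
    alpha = 0: Chebyshev, alpha = 1/2: Halley, alpha = 1: super-Halley."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        dfx = df(x)
        L = fx * d2f(x) / dfx ** 2
        x = x - (1.0 + 0.5 * L / (1.0 - alpha * L)) * fx / dfx
    return x

# solve exp(x) - 2 = 0 with Halley's method (alpha = 1/2); root is ln 2
root = chebyshev_halley(lambda x: math.exp(x) - 2.0, math.exp, math.exp, 1.0)
```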
Open Access Article
Optimal Fourth, Eighth and Sixteenth Order Methods by Using Divided Difference Techniques and Their Basins of Attraction and Its Application
Mathematics 2019, 7(4), 322; https://doi.org/10.3390/math7040322 - 30 Mar 2019
Cited by 1
Abstract
The principal objective of this work is to propose fourth-, eighth-, and sixteenth-order schemes for solving a nonlinear equation. In terms of computational cost per iteration, the fourth-order method uses two evaluations of the function and one evaluation of the first derivative; the eighth-order method uses three evaluations of the function and one evaluation of the first derivative; and the sixteenth-order method uses four evaluations of the function and one evaluation of the first derivative. Thus, all these methods satisfy the Kung–Traub optimality conjecture. In addition, the theoretical convergence properties of our schemes are fully explored with the help of a main theorem that establishes the convergence order. The performance and effectiveness of our optimal iteration functions are compared with existing competitors on some standard academic problems. The conjugacy maps of the presented method and other existing eighth-order methods are discussed, and their basins of attraction are given to demonstrate their dynamical behavior in the complex plane. As an application, we apply the new scheme to find the optimal launch angle in a projectile motion problem and to Planck's radiation law problem. Full article
Open Access Article
Improving the Computational Efficiency of a Variant of Steffensen’s Method for Nonlinear Equations
Mathematics 2019, 7(3), 306; https://doi.org/10.3390/math7030306 - 26 Mar 2019
Cited by 3
Abstract
Steffensen-type methods with memory were originally designed to solve nonlinear equations without the use of additional functional evaluations per computing step. In this paper, a variant of Steffensen’s method is proposed which is derivative-free and with memory. In fact, using an acceleration technique via interpolation polynomials of appropriate degrees, the computational efficiency index of this scheme is improved. It is discussed that the new scheme is quite fast and has a high efficiency index. Finally, numerical investigations are brought forward to uphold the theoretical discussions. Full article
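The idea of adding memory at no extra evaluation cost can be sketched with the simplest such acceleration, where the free parameter γ is refreshed from a first-degree interpolation (a divided difference) of stored iterates, raising the R-order from 2 to 1 + √2 ≈ 2.414 (a minimal sketch; the paper uses interpolation polynomials of higher degree):

```python
def steffensen_memory(f, x0, gamma=0.01, tol=1e-12, max_iter=100):
    """Steffensen-type method with memory:
        w     = x + gamma * f(x)
        x_new = x - f(x) / f[x, w]
    where gamma is refreshed as -1 / f[x_prev, x] from stored values,
    costing no additional function evaluations per step."""
    x = x0
    x_prev = f_prev = None
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        if x_prev is not None and fx != f_prev:
            gamma = -(x - x_prev) / (fx - f_prev)   # -1 / f[x_prev, x]
        w = x + gamma * fx
        dd = (f(w) - fx) / (w - x)                  # divided difference f[x, w]
        x_prev, f_prev = x, fx
        x = x - fx / dd
    return x

# root of x^3 + 4x^2 - 10 near 1.365
root = steffensen_memory(lambda x: x ** 3 + 4.0 * x ** 2 - 10.0, 1.3)
```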
Open Access Article
Advances in the Semilocal Convergence of Newton’s Method with Real-World Applications
Mathematics 2019, 7(3), 299; https://doi.org/10.3390/math7030299 - 24 Mar 2019
Cited by 2
Abstract
The aim of this paper is to present a new semilocal convergence analysis for Newton's method in a Banach space setting. The novelty of this paper is that, by using more precise Lipschitz constants than in earlier studies and our new idea of restricted convergence domains, we extend the applicability of Newton's method as follows: the convergence domain is extended; the error estimates are tighter; and the information on the location of the solution is at least as precise as before. These advantages are obtained using the same information as before, since the new Lipschitz constants are tighter than, and special cases of, the ones used before. Numerical examples and applications are used to compare the theoretical results favorably with earlier ones. Full article
Open Access Article
Study of a High Order Family: Local Convergence and Dynamics
Mathematics 2019, 7(3), 225; https://doi.org/10.3390/math7030225 - 28 Feb 2019
Cited by 3
Abstract
The study of the dynamics and the analysis of local convergence of an iterative method, when approximating a locally unique solution of a nonlinear equation, is presented in this article. We obtain convergence using a center-Lipschitz condition where the ball radii are greater than in previous studies. We investigate the dynamics of the method. To validate the theoretical results obtained, a real-world application related to chemistry is provided. Full article
Open Access Article
Extended Local Convergence for the Combined Newton-Kurchatov Method Under the Generalized Lipschitz Conditions
Mathematics 2019, 7(2), 207; https://doi.org/10.3390/math7020207 - 23 Feb 2019
Cited by 1
Abstract
We present a local convergence analysis of the combined Newton-Kurchatov method for solving Banach space valued equations. The convergence criteria involve derivatives up to the second order satisfying Lipschitz-type conditions, as well as a new center-Lipschitz-type condition and the notion of the restricted convergence region. These modifications of earlier conditions result in a tighter convergence analysis and more precise information on the location of the solution. These advantages are obtained under the same computational effort. Using illuminating examples, we further justify the superiority of our new results over earlier ones.
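In such combined methods the derivative of the nondifferentiable part of the operator is replaced by the Kurchatov divided difference. A hedged scalar sketch of one common form (the paper works in Banach spaces; the splitting f = F + G and the nodes 2x_n − x_{n−1}, x_{n−1} are the usual choices, assumed here):

```python
def newton_kurchatov(F, Fp, G, x0, x1, tol=1e-12, max_iter=50):
    """Combined Newton-Kurchatov iteration for f = F + G, where F is
    differentiable and G is merely continuous: the derivative of G is
    replaced by the Kurchatov divided difference on the nodes
    (2*x_n - x_{n-1}, x_{n-1})."""
    x_prev, x = x0, x1
    for _ in range(max_iter):
        fx = F(x) + G(x)
        if abs(fx) < tol:
            return x
        u, v = 2 * x - x_prev, x_prev
        dd = (G(u) - G(v)) / (u - v) if u != v else 0.0
        x_prev, x = x, x - fx / (Fp(x) + dd)
    return x

# Example: x + |x|/5 - 1 = 0 with smooth part F(x) = x - 1 and
# nonsmooth part G(x) = |x|/5; the root is x = 5/6.
root = newton_kurchatov(lambda x: x - 1.0, lambda x: 1.0,
                        lambda x: abs(x) / 5.0, 0.5, 0.7)
```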
Open Access Article
Ball Comparison for Some Efficient Fourth Order Iterative Methods Under Weak Conditions
Mathematics 2019, 7(1), 89; https://doi.org/10.3390/math7010089 - 16 Jan 2019
Abstract
We provide a ball comparison between some fourth-order methods for solving nonlinear equations involving Banach space valued operators. We only use hypotheses on the first derivative, whereas earlier works considered conditions reaching up to the fifth-order derivative, although these derivatives do not appear in the methods. Hence, we expand the applicability of these methods. Numerical experiments are used to compare the radii of convergence of these methods.
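As one concrete example of an optimal fourth-order scheme of the kind compared in such ball analyses, Ostrowski’s method is shown here for illustration (the paper’s specific methods may differ):

```python
def ostrowski(f, fp, x0, tol=1e-13, max_iter=30):
    """Ostrowski's optimal fourth-order method:
         y = x - f(x)/f'(x)
         x <- y - f(y)/f'(x) * f(x)/(f(x) - 2*f(y))"""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        y = x - fx / fp(x)
        fy = f(y)
        x = y - fy / fp(x) * fx / (fx - 2 * fy)
    return x

# Example: the real cube root of 2 as the root of x^3 - 2 = 0
root = ostrowski(lambda x: x ** 3 - 2.0, lambda x: 3.0 * x ** 2, 1.5)
```

The method is "optimal" in the Kung-Traub sense: order four from only three function/derivative evaluations per step.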
Open Access Article
A Few Iterative Methods by Using [1,n]-Order Padé Approximation of Function and the Improvements
Mathematics 2019, 7(1), 55; https://doi.org/10.3390/math7010055 - 07 Jan 2019
Cited by 1
Abstract
In this paper, a few single-step iterative methods, including the classical Newton’s method and Halley’s method, are first derived by applying the [1, n]-order Padé approximation of a function for finding the roots of nonlinear equations. In order to avoid computing high-order derivatives of the function, we modify the presented methods by using approximants of the second and third derivatives, respectively. Thus, several modified two-step iterative methods are obtained for solving nonlinear equations, and the variants are shown to be of fourth-order convergence. Finally, numerical experiments are given to illustrate the practicability of the suggested variants. The fourth-order variants can therefore be regarded as valuable improvements for finding the roots of nonlinear equations.
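Halley’s method, one of the classical schemes the abstract mentions, reads x_{n+1} = x_n − 2ff′/(2f′² − ff″). A minimal scalar sketch:

```python
def halley(f, fp, fpp, x0, tol=1e-13, max_iter=30):
    """Halley's method (cubically convergent):
         x <- x - 2*f*f' / (2*f'^2 - f*f'')"""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        d1, d2 = fp(x), fpp(x)
        x -= 2.0 * fx * d1 / (2.0 * d1 * d1 - fx * d2)
    return x

# Example: sqrt(3) as the positive root of x^2 - 3 = 0
root = halley(lambda x: x * x - 3.0, lambda x: 2.0 * x, lambda x: 2.0, 1.0)
```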
Open Access Article
A Third Order Newton-Like Method and Its Applications
Mathematics 2019, 7(1), 31; https://doi.org/10.3390/math7010031 - 30 Dec 2018
Cited by 2
Abstract
In this paper, we design a new third order Newton-like method and establish its convergence theory for finding approximate solutions of nonlinear operator equations in the setting of Banach spaces. First, we discuss the convergence analysis of our third order Newton-like method under the ω-continuity condition. Then we apply our approach to solve nonlinear fixed point problems and Fredholm integral equations, where the first derivative of the involved operator does not necessarily satisfy the Hölder and Lipschitz continuity conditions. Several numerical examples are given, which compare the applicability of our convergence theory with those in the literature.
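The paper’s specific scheme is not reproduced here; as an illustration of a classical third-order Newton-like iteration, Traub’s two-step method reuses the derivative evaluated at x_n:

```python
import math

def traub(f, fp, x0, tol=1e-13, max_iter=30):
    """Traub's two-step method (third order for simple roots):
         y = x - f(x)/f'(x)
         x <- y - f(y)/f'(x)   (the derivative at x is reused)"""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        d = fp(x)
        y = x - fx / d
        x = y - f(y) / d
    return x

# Example: the fixed point of cos, i.e., the root of cos(x) - x = 0
root = traub(lambda x: math.cos(x) - x, lambda x: -math.sin(x) - 1.0, 1.0)
```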
Open Access Article
An Efficient Family of Optimal Eighth-Order Multiple Root Finders
Mathematics 2018, 6(12), 310; https://doi.org/10.3390/math6120310 - 07 Dec 2018
Cited by 5
Abstract
Finding a repeated zero of a nonlinear equation f(x) = 0, f : I ⊆ ℝ → ℝ, has always attracted much interest and attention due to its wide applications in many fields of science and engineering. Modified Newton’s method is usually applied to solve this kind of problem. Keeping in view that very few optimal higher-order convergent methods exist for multiple roots, we present a new family of optimal eighth-order convergent iterative methods for multiple roots with known multiplicity, involving a multivariate weight function. The numerical performance of the proposed methods is analyzed extensively along with their basins of attraction. Real-life models from life science, engineering, and physics are considered for the sake of comparison. The numerical experiments and dynamical analysis show that our proposed methods are efficient for determining multiple roots of nonlinear equations.
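The modified Newton’s method the abstract refers to is x_{n+1} = x_n − m·f(x_n)/f′(x_n) for a root of known multiplicity m; a minimal sketch:

```python
def modified_newton(f, fp, m, x0, tol=1e-12, max_iter=100):
    """Modified Newton's method for a root of known multiplicity m:
         x <- x - m*f(x)/f'(x),
       which restores quadratic convergence at multiple roots."""
    x = x0
    for _ in range(max_iter):
        fpx = fp(x)
        if fpx == 0.0:          # numerically at the root
            return x
        step = m * f(x) / fpx
        x -= step
        if abs(step) < tol:
            return x
    return x

# Example: (x - 1)^3 = 0 has a triple root (m = 3) at x = 1
root = modified_newton(lambda x: (x - 1.0) ** 3,
                       lambda x: 3.0 * (x - 1.0) ** 2, 3, 2.0)
```

Plain Newton converges only linearly at a multiple root; the factor m compensates for the flatness of f near the zero.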
Open Access Article
Hybrid Second Order Method for Orthogonal Projection onto Parametric Curve in n-Dimensional Euclidean Space
Mathematics 2018, 6(12), 306; https://doi.org/10.3390/math6120306 - 05 Dec 2018
Cited by 2
Abstract
For the orthogonal projection of a point onto a parametric curve, three classic first order algorithms have been presented by Hartmann (1999), Hoschek, et al. (1993) and Hu, et al. (2000) (hereafter, the H-H-H method). In this research, we give a proof of the approach’s first order convergence and its non-dependence on the initial value. For some special cases in which the H-H-H method diverges, we combine it with Newton’s second order method (hereafter, Newton’s method) to create a hybrid second order method for orthogonal projection onto a parametric curve in an n-dimensional Euclidean space (hereafter, our method). Our method essentially utilizes hybrid iteration, so it converges faster than current methods while achieving second order convergence, and it remains independent of the initial value. We provide some numerical examples to confirm the robustness and high efficiency of the method.
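The Newton step in such foot-point problems typically solves g(t) = ⟨c(t) − p, c′(t)⟩ = 0, i.e., the condition that the residual vector is orthogonal to the curve tangent. A sketch under that assumption (the paper’s exact hybrid logic is not reproduced):

```python
import math

def foot_point_newton(c, cp, cpp, p, t0, tol=1e-12, max_iter=50):
    """Newton iteration for the foot point of p on a parametric curve c(t):
    solves g(t) = <c(t) - p, c'(t)> = 0, with
    g'(t) = <c'(t), c'(t)> + <c(t) - p, c''(t)>."""
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    t = t0
    for _ in range(max_iter):
        d = [ci - pi for ci, pi in zip(c(t), p)]
        g = dot(d, cp(t))
        if abs(g) < tol:
            return t
        t -= g / (dot(cp(t), cp(t)) + dot(d, cpp(t)))
    return t

# Example: project p = (2, 0) onto the unit circle c(t) = (cos t, sin t);
# the closest point is c(0) = (1, 0), so the foot parameter is t = 0.
c = lambda t: (math.cos(t), math.sin(t))
cp = lambda t: (-math.sin(t), math.cos(t))
cpp = lambda t: (-math.cos(t), -math.sin(t))
t_star = foot_point_newton(c, cp, cpp, (2.0, 0.0), 0.3)
```

The scheme works unchanged in n dimensions, since only inner products of the curve derivatives appear.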