Abstract
We prove logarithmic convergence rates for the families of usual and modified iterative Runge-Kutta methods for nonlinear ill-posed problems in Hilbert spaces under a logarithmic source condition, and we verify the obtained results numerically. The iterative regularization is terminated by the a posteriori discrepancy principle.
1. Introduction
Let X and Y be infinite-dimensional real Hilbert spaces with inner products ⟨·,·⟩ and norms ‖·‖. Let us consider a nonlinear ill-posed operator equation
F(x) = y, (1)
where F: D(F) ⊂ X → Y is a nonlinear operator between the Hilbert spaces X and Y. We assume that (1) has a solution for exact data y (which need not be unique). We have approximate data y^δ with
‖y − y^δ‖ ≤ δ. (2)
Besides the classical Tikhonov–Phillips regularization, a plethora of interesting variational and iterative approaches for ill-posed problems can be found, e.g., in Morozov [1], Tikhonov and Arsenin [2], Bakushinsky and Kokurin [3], and Kaltenbacher et al. [4]. We focus here on iterative methods, as they are also very popular and effective in applications. The simplest iterative regularization is the Landweber method; see, e.g., Hanke et al. [5], where the convergence rate analysis is carried out under a Hölder-type source condition. A more effective method often used in applications is the Levenberg–Marquardt method
This was investigated in [6,7,8] under the Hölder-type source condition (HSC) and a posteriori discrepancy principle (DP). Jin [7] proved optimal convergence rates for an a priori chosen geometric step size sequence , whereas Hochbruck and Hönig [6] showed convergence with the optimal rate for quite general step size sequences including the geometric sequence. Later, Hanke [8] avoided any constraints on the rate of decay of the regularization parameter to show the optimal convergence rate.
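In finite dimensions, a Levenberg–Marquardt sweep with the a priori geometric parameter sequence analyzed by Jin [7] can be sketched as follows; this is a minimal illustration, and the toy operator, the parameter values, and all names are hypothetical rather than taken from the paper:

```python
import numpy as np

def levenberg_marquardt(F, J, y_delta, x0, alpha0=1.0, q=0.5, steps=12):
    """Regularizing Levenberg-Marquardt sketch: each step solves
    (J_k^T J_k + alpha_k I) h = J_k^T (y_delta - F(x_k)) and sets
    x_{k+1} = x_k + h, with a geometric sequence alpha_k = alpha0 * q**k."""
    x = np.asarray(x0, dtype=float).copy()
    for k in range(steps):
        Jk = J(x)
        h = np.linalg.solve(Jk.T @ Jk + alpha0 * q**k * np.eye(x.size),
                            Jk.T @ (y_delta - F(x)))
        x = x + h
    return x

# toy two-dimensional problem (hypothetical, for illustration only)
F = lambda x: np.array([x[0]**2 + x[1], x[0] + x[1]**2])
J = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])
y = F(np.array([1.0, 2.0]))            # exact data for the solution (1, 2)
x = levenberg_marquardt(F, J, y, np.array([0.8, 1.8]))
```

As the regularization parameter decays geometrically, the iteration approaches a Gauss–Newton step near the solution, which is why few iterations suffice on well-behaved toy problems.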
Tautenhahn [9] proved that asymptotic regularization, i.e., the approximation of problem (1) by a solution of the Showalter differential equation (SDE)
x′(t) = F′(x(t))* (y^δ − F(x(t))), 0 < t ≤ T,
where the regularization parameter T is chosen according to the DP under the HSC, is a stable method to solve nonlinear ill-posed problems.
Solving the SDE with the family of Runge-Kutta (RK) methods yields a family of RK-type iterative regularization methods
where denotes the vector of identity operators, while is the diagonal matrix of bounded linear operators with identity operator on the entire diagonal and zero operator outside of the main diagonal with respect to the appropriate spaces. The parameter in (5) is the step-length, also called the relaxation parameter. The matrix A and the vector b are the given parameters that correspond to the specific RK method, building the so-called Butcher tableau (succession of stages). Different choices of the RK parameters generate various iterative methods.
Böckmann and Pornsawad [10] showed convergence for the whole RK-type family (including the well-known Landweber and Levenberg–Marquardt methods). That paper also emphasized the advantages of certain members of the family, e.g., implicit A-stable Butcher tableaux: while the Landweber method requires many iteration steps, those implicit methods need only a few, thus minimizing rounding errors.
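To make the family concrete, the following minimal sketch applies one explicit Runge-Kutta step to a finite-dimensional discretization of the Showalter equation; it is a schematic illustration (not the paper's exact scheme), and the implicit A-stable members discussed above would additionally require a linear solve per stage:

```python
import numpy as np

def rk_step(F, J, y_delta, x, tau, A, b):
    """One step of an explicit Runge-Kutta method applied to the ODE
    x'(t) = F'(x)^T (y_delta - F(x)).  The Butcher tableau (A, b)
    selects the method: A = [[0]], b = [1] recovers the Landweber
    iteration (explicit Euler)."""
    stages = []
    for i in range(len(b)):
        xi = x + tau * sum(A[i][j] * stages[j] for j in range(i))
        stages.append(J(xi).T @ (y_delta - F(xi)))
    return x + tau * sum(b[i] * stages[i] for i in range(len(b)))

# linear toy problem: the Landweber member contracts the error
M = np.array([[2.0, 0.0], [0.0, 1.0]])
F = lambda x: M @ x
J = lambda x: M
y = F(np.array([1.0, 1.0]))
x = np.zeros(2)
for _ in range(80):
    x = rk_step(F, J, y, x, 0.1, [[0.0]], [1.0])
```

Swapping in another tableau, e.g., Heun's method with A = [[0, 0], [1, 0]] and b = [0.5, 0.5], changes the member of the family without touching the driver loop.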
Later, Pornsawad and Böckmann [11] filled in the missing results on optimal convergence rates, but only for particular first-stage methods under HSC and using DP.
Our current study considers further the unifying RK-framework described above, as well as a modified version presented below, showing optimality of the RK-regularization schemes under logarithmic source conditions.
An additional term , as in the iteratively regularized Gauss–Newton method (see, e.g., [4]),
was added to a modified Landweber method. Thus, Scherzer [12] proved a convergence rate result under HSC without particular assumptions on the nonlinearity of the operator F. Moreover, in Pornsawad and Böckmann [13], the additional term was included in the whole family of iterative RK-type methods (which contains the modified Landweber iteration),
where and . Using a priori and a posteriori stopping rules, convergence rate results for the RK-type family were obtained under HSC, provided that the Fréchet derivative is properly scaled.
Due to the minimal assumptions needed for the convergence analysis of the modified iterative RK-type methods, an additional term was also added to the SDE
Pornsawad et al. [14] investigated this continuous version of the modified iterative RK-type methods for nonlinear inverse ill-posed problems. The convergence analysis yields the optimal rate of convergence under a modified DP and an exponential source condition.
Recently, a second-order asymptotic regularization for the linear problem was investigated in [15]
under HSC using DP.
Define
with and the usual logarithmic sourcewise representation
where is sufficiently small and is an initial guess that may incorporate a priori knowledge on the solution.
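For intuition, the logarithmic source function decays extremely slowly near zero, which is what makes the resulting rates logarithmic rather than Hölder-type. A small sketch, using one common normalization (the paper's (8) may carry an equivalent scaling):

```python
import math

def phi(lam, p):
    """Logarithmic source function phi_p(lam) = (-ln lam)^(-p) for
    0 < lam < 1 (one common normalization; the exact scaling in the
    paper's (8) is assumed, not reproduced)."""
    assert 0.0 < lam < 1.0
    return (-math.log(lam)) ** (-p)

# logarithmic rates decay far more slowly than any Hoelder rate lam**nu:
# phi(1e-6, 1) is about 0.072, while (1e-6)**0.5 is 0.001
```

This gap between 0.072 and 0.001 at the same noise level is exactly why Hölder source conditions are too strong for severely ill-posed problems.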
In numerous applications, e.g., heat conduction and scattering theory, which lead to severely ill-posed problems, the Hölder source condition is far too strong. Therefore, Hohage [16] proved convergence and logarithmic convergence rates for the iteratively regularized Gauss–Newton method in a Hilbert space setting, provided that the logarithmic source condition (9) is satisfied and DP is used as the stopping rule. Deuflhard et al. [17] showed a convergence rate result for the Landweber iteration using DP and (9) under a Newton–Mysovskii condition on the nonlinear operator; sufficient conditions are given for the convergence rate, which is logarithmic in the data noise level.
In Hohage [18], a systematic study of convergence rates for regularization methods under (9) including the case of operator approximations for a priori and a posteriori stopping rules is provided. A logarithmic source condition is considered by Pereverzyev et al. [19] for a derivative-free method, by Mahale and Nair [20] for a simplified generalized Gauss–Newton method, and by Böckmann et al. [21] for the Levenberg–Marquardt method using DP as stopping rule. Pornsawad et al. [22] solved the inverse potential problem, which is exponentially ill-posed, employing the modified Landweber method and proved convergence rate under the logarithmic source condition via DP for this method.
To the best of our knowledge, this is the first time that convergence rates are established both for the whole family of RK methods and for its modified version when applied to severely ill-posed problems (i.e., under the logarithmic source condition).
The structure of this article is as follows. Section 2 provides assumptions and technical estimations. We derive the convergence rate of the RK-type method (5) in Section 3 and of the modified RK-type method (6) in Section 4 under the logarithmic source conditions (8) and (9). In Section 5, the performed numerical experiments confirm the theoretical results.
2. Preliminary Results
Lemma 1.
Let K be a linear operator with . For with φ given by (8) and , there exist positive constants and such that
and
Proof.
The proof is given in [22] using the mean value theorem. □
Assumption 1.
There exist positive constants , and and a linear bounded operator such that for , the following conditions hold
where is the exact solution of (1).
Let be the error of the kth iteration , and .
Assumption 2.
Let K be a linear operator and τ be a positive number. There exist positive constants and such that
and
with .
We note that the explicit Euler method provides and . Thus, the conditions (19) and (20) hold if is bounded. For the implicit Euler method, we have
and
for some positive number . We observe from Figure 1 that the conditions (19) and (20) hold for .
Figure 1.
Plots of (a) and (b) for and .
Finally, we need a technical result for the next two sections.
Lemma 2.
Let Assumptions 1 and 2 hold for the operator . Then, there exists a positive number such that
and
Proof.
(i) Using Assumption 2, the estimates in Equations (15), (16), (19), (20), and (24), and the triangle inequality, we obtain
with positive number .
(ii) Denote . We have
Part (i) ensures an upper bound for the second term of the last formula. Hence, a similar upper bound for remains to be determined. To this end, we will use the inequality applied to and . Thus, by using (19) and (20), we obtain
for some positive c, where the last inequality follows as in (24) and (25). Now, (26) combined with part (i) and the last inequality yield (22). □
3. Convergence Rate for the Iterative RK-Type Regularization
To investigate the convergence rate of the RK-type regularization method (5) under the logarithmic source condition, the nonlinear operator F has to satisfy the local property in an open ball of radius around
with ∈. In addition, the regularization parameter is chosen according to the generalized discrepancy principle, i.e., the iteration is stopped after steps with
where is a positive number. Note that the triangle inequality yields
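Operationally, the discrepancy principle is a first-crossing rule on the sequence of residual norms; a minimal sketch, with tau standing in for the positive constant in (28):

```python
def stop_index(residual_norms, delta, tau):
    """Discrepancy principle: return the first index k* with
    ||y_delta - F(x_{k*})|| <= tau * delta.  Here tau > 1 is a tuning
    constant whose admissible range depends on the method's assumptions."""
    for k, r in enumerate(residual_norms):
        if r <= tau * delta:
            return k
    return None  # tolerance never reached within the computed iterates

k_star = stop_index([1.0, 0.5, 0.1, 0.05], delta=0.04, tau=2.0)
```

In the example call, the threshold is tau * delta = 0.08, so the iteration stops at the first residual not exceeding it.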
In the sequel, we establish an error estimate that will be useful in deriving the logarithmic convergence rate.
Theorem 1.
Let Assumptions 1 and 2 be valid. Assume that problem (1) has a solution in and fulfills (2). Furthermore, assume that the Fréchet derivative of F is scaled such that for and that the source conditions (8) and (9) are fulfilled. The iterative RK-type regularization is stopped according to the discrepancy principle (28). If is sufficiently small, then there exists a constant c depending only on p and such that for any ,
and
Proof.
Using (5), we can show that
Using the spectral theory and (14), we have
By recurrence and Equation (33), we obtain
Moreover, it holds that
We will prove by mathematical induction that
and
hold for all with a positive constant c independent of k. Using the discrepancy principle (28), the triangle inequality, and , we can show that
By assumption (see Vainikko and Veretennikov [23], as cited in Hanke et al. [5]), we have
and
Therefore,
and
Employing the induction hypothesis, i.e., Equations (37) and (38), in the third term of (45), we obtain
Similar to Equation (45) in [22], we have
with a generic constant , which does not depend on . Using (47), we can estimate (46) as
The sum is bounded because the integral
is bounded from above by a positive constant independent of k. Thus, (45) becomes
with .
By assumption (see Vainikko and Veretennikov [23], as cited in Hanke et al. [5]), we have
and
Thus,
and
Using (47) and the induction hypothesis, i.e., Equations (37) and (38), in the third term of (54), we obtain
The summation in (55) is bounded because, with , the integral
is bounded from above by some positive constant independent of k. Thus, (54) becomes
with . Setting , we have
and
Using (58), we obtain
Setting , (59) leads to
We choose a sufficiently small such that . Thus, the induction is completed. Using (37), we can show that
The second assertion is obtained by using (39) as follows:
□
We are now in a position to show the logarithmic convergence rate for the iterative RK-type regularization under a logarithmic source condition, when the iteration is stopped according to the discrepancy principle (28).
Theorem 2.
Under the assumptions of Theorem 1 and for , one has
4. Convergence Rate for the Modified Version of the Iterative RK-Type Regularization
The paper [13] contains a study of the modified iterative Runge-Kutta regularization method
where and . More precisely, it presents a detailed convergence analysis and derives Hölder-type convergence rates.
The aim in this section is to show convergence rates for (68) with the natural choice . We consider here the logarithmic source condition (9) with defined by (8), where is small enough. That is, we deal with the following method:
We work further under assumptions (27) and (28) with an appropriately chosen constant —compare inequality (2.10) in [13]. For the sake of completeness, we recall below the convergence result adapted to the choice (compare to Proposition 2.1 and Theorem 2.1 in [13]).
Theorem 3.
We state below a result on essential upper bounds for the errors in (69).
Theorem 4.
Let Assumptions 1 and 2 hold for the operator . Assume that problem (1) has a solution in and fulfills (2). Assume that the Fréchet derivative of F is scaled such that for and that the parameters with are small enough. Furthermore, assume that the source condition (9) is fulfilled and that the modified RK-type regularization method (69) is stopped according to (28). If is sufficiently small, then there exists a constant c depending only on p and such that for any ,
and
Proof.
First, we deduce an explicit formula for . We proceed similarly to the proof of Theorem 1, but this time we need to take into account the additional term . The proof steps are as follows.
I. We establish an explicit formula for the error :
where the last equality follows from (32). We denoted .
Therefore, we obtain the following closed formula:
From Proposition 1, (34), Lemma 2, (39), and (74), it follows that
for any , where is a positive constant.
II. Following the technical steps of the proof of Theorem 1 in [22], one can similarly show by induction that there is a positive number c, such that the following inequalities hold for any :
Then, one can eventually obtain (70) and (71) as in the mentioned proof. Note that Theorem 1 in [22] is based on Proposition 1 in [22], which requires small enough parameters , such that and , for all (compare to (16) on page 4 in [22]). Since the smallest value of is about (when ), one can clearly find small enough so as to satisfy the imposed inequalities, e.g., a harmonic-type sequence such as for some . □
One can show convergence rates for the modified Runge-Kutta regularization method, as done in the previous section for the unmodified version.
Theorem 5.
Under the assumptions of Theorem 4 and for , one has
5. Numerical Example
The purpose of the following numerical example is to verify the error estimates shown above. Define the nonlinear operator as
with the kernel function
The noisy data are given by and the exact solution is . In order to demonstrate the results of Theorems 1 and 4, we consider the Landweber, Levenberg–Marquardt (LM), Lobatto IIIC, and Radau IIA methods; see Table 1 for the corresponding Butcher tableaux.
Table 1.
Butcher tableau for (a) explicit Euler or Landweber, (b) implicit Euler or Levenberg–Marquardt, (c) Lobatto IIIC, and (d) Radau IIA methods.
The implementation in this section is the same as the one reported in [10,13]. The number of basis functions is 65 and the number of equidistant grid points is 150, while the parameter is the harmonic sequence term . As expected, the results in Figure 2 show that the curve of lies below a straight line with slope , as suggested by (30) in Theorem 1 and (70) in Theorem 4.
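For orientation only, a self-contained stand-in for such an experiment is sketched below. The kernel, exact solution, noise model, grid size, and step size are hypothetical placeholders rather than the paper's actual choices, and only the explicit-Euler (Landweber) member of the family is run, stopped by the discrepancy principle:

```python
import numpy as np

# Hypothetical stand-in experiment: Landweber iteration for a discretized
# nonlinear integral operator F(x)(s) = int_0^1 k(s,t) x(t)^3 dt, with an
# assumed smooth kernel, stopped once ||y_delta - F(x_k)|| <= tau * delta.
n = 80
t = np.linspace(0.0, 1.0, n)
K = np.exp(-np.abs(t[:, None] - t[None, :])) / n    # assumed kernel matrix

def F(x):  return K @ x**3
def JT(x): return (K * (3.0 * x**2)).T              # transpose of F'(x)

x_true = 1.0 + 0.5 * np.sin(2.0 * np.pi * t)        # assumed exact solution
rng = np.random.default_rng(1)
noise = rng.standard_normal(n)
delta, tau = 1e-2, 2.0
y_delta = F(x_true) + delta * noise / np.linalg.norm(noise)

x, omega = np.ones(n), 1e-2                          # initial guess, step size
r0 = np.linalg.norm(y_delta - F(x))
for k in range(5000):
    r = y_delta - F(x)
    if np.linalg.norm(r) <= tau * delta:             # discrepancy principle
        break
    x = x + omega * JT(x) @ r                        # Landweber step
```

The step size omega must be small relative to the squared norm of the Jacobian for the residual to decrease; replacing the Landweber update by an implicit member of the RK family would allow far fewer iterations at the price of a linear solve per step.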
Figure 2.
The plot of versus for (a) one-step RK methods and (b) for two-step methods with . (c,d) Results with . The parameters are (a) , (b) 25, (c) , and (d) 100. For (a–d), a harmonic sequence and are used.
6. Summary and Outlook
Up to now, the logarithmic convergence rate under a logarithmic source condition had only been investigated for particular methods, namely, the Levenberg–Marquardt method (Böckmann et al. [21]) and the modified Landweber method (Pornsawad et al. [22]). Here, we extended these results to the whole family of Runge-Kutta-type methods, with and without modification. It remains an open problem to prove the optimal convergence rate under a Hölder source condition for the whole family without modification.
Author Contributions
The authors P.P., E.R., and C.B. jointly carried out this research work and drafted the manuscript together. All authors validated the article and read the final version. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Acknowledgments
The authors are grateful to the referees for the interesting comments and suggestions that helped to improve the quality of the manuscript.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Morozov, V.A. On the solution of functional equations by the method of regularization. Sov. Math. Dokl. 1966, 7, 414–417.
- Tikhonov, A.N.; Arsenin, V.Y. Solutions of Ill-Posed Problems; V. H. Winston and Sons: New York, NY, USA, 1977.
- Bakushinsky, A.B.; Kokurin, M.Y. Iterative Methods for Approximate Solution of Inverse Problems; Springer: Berlin/Heidelberg, Germany, 2004.
- Kaltenbacher, B.; Neubauer, A.; Scherzer, O. Iterative Regularization Methods for Nonlinear Ill-Posed Problems; Walter de Gruyter: Berlin, Germany, 2008.
- Hanke, M.; Neubauer, A.; Scherzer, O. A convergence analysis of the Landweber iteration for nonlinear ill-posed problems. Numer. Math. 1995, 72, 21–37.
- Hochbruck, M.; Hönig, M. On the convergence of a regularizing Levenberg-Marquardt scheme for nonlinear ill-posed problems. Numer. Math. 2010, 115, 71–79.
- Jin, Q. On a regularized Levenberg-Marquardt method for solving nonlinear inverse problems. Numer. Math. 2010, 115, 229–259.
- Hanke, M. The regularizing Levenberg-Marquardt scheme is of optimal order. J. Integral Equ. Appl. 2010, 22, 259–283.
- Tautenhahn, U. On the asymptotical regularization of nonlinear ill-posed problems. Inverse Probl. 1994, 10, 1405–1418.
- Böckmann, C.; Pornsawad, P. Iterative Runge-Kutta-type methods for nonlinear ill-posed problems. Inverse Probl. 2008, 24, 025002.
- Pornsawad, P.; Böckmann, C. Convergence rate analysis of the first-stage Runge-Kutta-type regularizations. Inverse Probl. 2010, 26, 035005.
- Scherzer, O. A modified Landweber iteration for solving parameter estimation problems. Appl. Math. Optim. 1998, 38, 45–68.
- Pornsawad, P.; Böckmann, C. Modified iterative Runge-Kutta-type methods for nonlinear ill-posed problems. Numer. Funct. Anal. Optim. 2016, 37, 1562–1589.
- Pornsawad, P.; Sapsakul, N.; Böckmann, C. A modified asymptotical regularization of nonlinear ill-posed problems. Mathematics 2019, 7, 419.
- Zhang, Y.; Hofmann, B. On the second order asymptotical regularization of linear ill-posed inverse problems. Appl. Anal. 2018, 1–26.
- Hohage, T. Logarithmic convergence rates of the iteratively regularized Gauss-Newton method for an inverse potential and an inverse scattering problem. Inverse Probl. 1997, 13, 1279–1299.
- Deuflhard, P.; Engl, H.W.; Scherzer, O. A convergence analysis of iterative methods for the solution of nonlinear ill-posed problems under affinely invariant conditions. Inverse Probl. 1998, 14, 1081–1106.
- Hohage, T. Regularization of exponentially ill-posed problems. Numer. Funct. Anal. Optim. 2000, 21, 439–464.
- Pereverzyev, S.S.; Pinnau, R.; Siedow, N. Regularized fixed-point iterations for nonlinear inverse problems. Inverse Probl. 2005, 22, 1–22.
- Mahale, P.; Nair, M.T. A simplified generalized Gauss-Newton method for nonlinear ill-posed problems. Math. Comput. 2009, 78, 171–184.
- Böckmann, C.; Kammanee, A.; Braunß, A. Logarithmic convergence rate of Levenberg–Marquardt method with application to an inverse potential problem. J. Inverse Ill-Posed Probl. 2011, 19, 345–367.
- Pornsawad, P.; Sungcharoen, P.; Böckmann, C. Convergence rate of the modified Landweber method for solving inverse potential problems. Mathematics 2020, 8, 608.
- Vainikko, G.; Veretennikov, A.Y. Iteration Procedures in Ill-Posed Problems; Nauka: Moscow, Russia, 1986.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).