Abstract
In this paper, we investigate the continuous version of the modified iterative Runge–Kutta-type methods for nonlinear inverse ill-posed problems proposed in a previous work. Convergence is proved under the tangential cone condition, a modified discrepancy principle, i.e., the stopping time T is chosen as the solution of a certain nonlinear equation for some , and an appropriate source condition. We obtain the optimal rate of convergence.
1. Introduction
Let X and Y be infinite-dimensional real Hilbert spaces with inner products and norms . Let us consider the nonlinear operator equation:
where is a nonlinear operator between the Hilbert spaces X and Y. If the operator F is not continuously invertible, then (1) may not have a solution, and if a solution exists, arbitrarily small perturbations of the data may lead to unacceptable results. In other words, problems of the form (1) do not depend continuously on the data. It was shown in Tautenhahn (1994) [] that asymptotic regularization, i.e., the approximation of Equation (1) by the solution of the Showalter differential equation:
where the regularization parameter T is chosen according to the discrepancy principle, is a suitable approximation to the unknown solution , and are the available noisy data with:
is a stable method for solving nonlinear ill-posed problems. Under the Hölder-type source condition for the regularized solution in X, the optimal rate is obtained using the assumption that a bounded linear operator exists such that:
and:
are satisfied, see [,]. Detailed studies of inverse ill-posed problems may be found, e.g., in [] and [,,,].
It is well known that asymptotic regularization is a continuous version of the Landweber iteration. A forward Euler discretization of (2) yields the damped Landweber iteration:
for some relaxation parameter , which is convergent for exact data and stable with respect to data error []. Later, Scherzer [] observed that the term appears in a regularized Gauss–Newton method, i.e.:
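To make the iteration concrete, the following is a minimal numerical sketch of a damped Landweber iteration with discrepancy-principle stopping for a linear toy problem F(x) = Ax; the operator, noise, and parameter values are illustrative assumptions, not taken from this paper.

```python
import numpy as np

def landweber(op, adjoint, x0, y_delta, delta, omega=1.0, tau=2.0, max_iter=1000):
    """Damped Landweber iteration x_{k+1} = x_k + omega * F'(x_k)^*(y^delta - F(x_k)),
    stopped as soon as the residual satisfies ||y^delta - F(x_k)|| <= tau * delta."""
    x = x0.copy()
    for k in range(max_iter):
        residual = y_delta - op(x)
        if np.linalg.norm(residual) <= tau * delta:
            break
        x = x + omega * adjoint(x, residual)
    return x, k

# Illustrative linear toy problem: A is diagonal with decaying singular values,
# so naive inversion amplifies noise; early stopping stabilizes the iteration.
A = np.diag([1.0, 0.5, 0.2])
x_true = np.ones(3)
delta = 0.05
noise = np.array([1.0, -1.0, 1.0]) / np.sqrt(3.0)   # unit-norm perturbation
y_delta = A @ x_true + delta * noise

x_rec, stop_index = landweber(lambda x: A @ x,
                              lambda x, r: A.T @ r,
                              np.zeros(3), y_delta, delta)
```

By construction, the returned iterate satisfies the discrepancy inequality with tau = 2, and the iteration terminates well before the safeguard `max_iter` for this mildly ill-conditioned example.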
To highlight the importance of this term for iterative regularization, Scherzer [] included the term in the Landweber method and proved a convergence rate result under the usual Hölder-type sourcewise representation without assumptions on the nonlinearity of the operator F such as (4) and (5). Moreover, in [], the additional term was included in the whole family of iterative Runge–Kutta-type methods (RKTM):
where stands for , the vector and matrix A are defined by the Runge–Kutta method, and is a relaxation parameter; this family includes the modified Landweber iteration. Using a priori and a posteriori stopping rules, convergence rate results for the RKTM are obtained under a Hölder-type sourcewise condition if the Fréchet derivative is properly scaled. However, References [,] must assume that the nonlinear operator F is properly scaled with a Lipschitz-continuous Fréchet derivative in , i.e.:
with instead of (4) and (5).
Due to the minimal assumptions needed for the convergence analysis of the modified iterative RKTM, we study in detail the additional term in the continuous version, written as:
for the noisy case and as:
for the noise-free case.
Recently, a second order asymptotic regularization for the linear problem was investigated in []:
Under a Hölder-type source condition and Morozov’s discrepancy principle, this method has the same power-type convergence rate as (2) in the linear case. Furthermore, a discrete second-order iterative regularization for the nonlinear case was proposed in [].
The paper is organized as follows: In Section 2, the assumptions and preliminary results are given. We show that if the stopping time T is chosen to be a solution of for some , then a unique solution exists. Section 3 is devoted to the convergence analysis of the proposed method under the tangential cone condition together with the modified discrepancy principle for the noisy case. Finally, in Section 4, we show that the rate is obtained under the modified source condition. Section 5 provides the conclusion.
2. Preliminaries
For an ill-posed problem, a local property of the nonlinear operator is usually used to ensure at least local convergence of the regularization method, instead of nonexpansivity of the fixed point operator []. For the presented work, we can ensure local convergence if the nonlinear operator fulfills the following tangential cone condition, i.e., for all :
Equation (9) immediately implies that for all , we have:
A stronger condition was used in [] to provide the local convergence of Tikhonov regularization, i.e.:
This condition implies (9) if is sufficiently small. In addition to the local condition (Equation (9)), we assume that the Fréchet derivative of F is bounded, i.e., for all :
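As a purely illustrative experiment (not part of the paper's analysis), the tangential cone condition can be probed numerically for a concrete smooth operator. The toy operator F(x) = x + 0.5·sin(x) (componentwise), the sampling radius, and the trial count below are assumptions chosen for demonstration; the estimated constant plays the role of the bound in Equation (9).

```python
import numpy as np

def F(x):
    # Toy nonlinear operator, applied componentwise.
    return x + 0.5 * np.sin(x)

def Fprime(x, h):
    # Frechet derivative of F at x applied to h: diagonal with 1 + 0.5*cos(x_i).
    return (1.0 + 0.5 * np.cos(x)) * h

def eta_estimate(radius=0.5, trials=200, seed=0):
    """Estimate the tangential-cone constant
    ||F(x) - F(xt) - F'(x)(x - xt)|| <= eta * ||F(x) - F(xt)||
    over random pairs (x, xt) in a small cube around the origin."""
    rng = np.random.default_rng(seed)
    worst = 0.0
    for _ in range(trials):
        x = radius * rng.uniform(-1.0, 1.0, size=3)
        xt = radius * rng.uniform(-1.0, 1.0, size=3)
        lhs = np.linalg.norm(F(x) - F(xt) - Fprime(x, x - xt))
        rhs = np.linalg.norm(F(x) - F(xt))
        if rhs > 1e-12:
            worst = max(worst, lhs / rhs)
    return worst

eta = eta_estimate()
```

For this operator a short Taylor estimate bounds the ratio by roughly half the maximal componentwise distance, so the estimated constant stays well below 1 on a small ball, consistent with the local nature of the condition.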
Adding the term to the Showalter differential equation requires a more involved proof. To prove the convergence of the presented method, the following assumption is needed; however, it is not necessary for the convergence rate result in Section 4 or for the discretized version [].
Assumption 1.
For and , the following properties hold:
- (i)
- converges;
- (ii)
- converges.
The following lemma will be useful.
Lemma 1.
For any continuous function f on and , if converges, then:
- (i)
- converges for all ;
- (ii)
Corollary 1.
Let Assumption 1 be satisfied. Then:
- (i)
- ;
- (ii)
Proof.
The proof follows directly from Lemma 1. □
To prove the existence and uniqueness of a solution of the nonlinear equation in Lemma 3, we first prepare Lemma 2.
Proof.
Using (7), we obtain:
In [], the stopping time T serves as a regularization parameter and is chosen such that the discrepancy principle is satisfied, i.e.:
with some . However, in our work, we use a variation of the discrepancy principle. Let be defined by:
Note that . In the presented work, the regularization parameter fulfills the following rule:
where is a solution of the following nonlinear equation:
If , Tautenhahn [] shows that a unique solution of exists, which is .
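Once the residual along the trajectory is known to be continuous and decreasing across the target value, the stopping time can be computed numerically by standard bisection. The sketch below is a generic illustration under that assumption; the exponentially decaying residual model is purely a stand-in, not the residual of the method analyzed here.

```python
import math

def stopping_time(residual, target, t_lo, t_hi, tol=1e-10):
    """Bisection for the unique T with residual(T) = target, assuming residual
    is continuous and non-increasing with residual(t_lo) > target > residual(t_hi)."""
    assert residual(t_lo) > target > residual(t_hi)
    while t_hi - t_lo > tol:
        mid = 0.5 * (t_lo + t_hi)
        if residual(mid) > target:
            t_lo = mid   # still above the target: T lies to the right
        else:
            t_hi = mid   # at or below the target: T lies to the left
    return 0.5 * (t_lo + t_hi)

# Stand-in residual decaying exponentially; the exact crossing is at T = ln 2.
T = stopping_time(lambda t: math.exp(-t), 0.5, 0.0, 10.0)
```

Monotonicity of the residual, established in part (a) of the proof below, is exactly what makes such a one-dimensional search well defined.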
Lemma 3.
Proof.
(a) Observe that is continuous with . Using (7), we have:
Moreover, (11) together with the fact that yield:
The variation of the discrepancy principle (Equation (16)) ensures that the right hand side of (19) is negative. Thus, is non-increasing.
(b) Next, we show that . Suppose that . Under this supposition, we have for all . Applying (11) to (12) and using the fact that , we get:
Rearranging (20), we obtain:
Integrating (22) on both sides and using and , we obtain:
It follows that for all . This means that or , which contradicts the assumption. Consequently, there is a solution with .
(c) Finally, we show by contradiction that the solution of is unique. From (a), there is with for all for some . Thus, for . By (12) and (20), we have:
Similarly, by (19), we obtain:
This means that , and thus, . Consequently, , which implies that is a constant. For all , we have . Therefore, , which contradicts (b). □
Remark 1.
Due to the discrepancy principle and , we have:
Arguing by contradiction, we can show that . This means that , and thus, . In the same manner, for the noise-free case, we obtain .
3. Convergence Results
In this section, we first show for exact data that the solution of (8) tends to a solution of as , and that it also tends to the unique solution of minimal distance to under the conventional condition. At the end of this section, we show that the proposed method provides a stable approximation of if the regularization parameter is chosen by the discrepancy principle (16). Note that the following result is used to prove that the solution of (8) converges to a solution provided the tangential cone condition holds.
Lemma 4.
Remark 2.
Because of Lemma 4, Equation (1) has a unique solution of minimal distance to . It holds . If , we get , see [].
Next, we prove the convergence of the solution of (8) for the noise-free case.
Theorem 1.
Proof.
Let be any solution of (1) in and put:
We show that for . Let s be an arbitrary real number with . Thus, it holds that:
Through (27), we have:
Obviously, for and , fulfills (30). Therefore, is negative. This means that is non-increasing. It follows that and converge (for ), to some , and consequently, . Next, we show that also tends to zero as . Through (8), we have:
and through (10) together with the inequality for , we have:
The right hand side of (31) becomes zero as because of Corollary 1, which implies that as , and thus:
This means that exists. Consequently, for , the solution of (8) converges, say, to some . Due to the continuity of F, we have . By Corollary 1 we have , and thus, is a solution of (1).
Using Lemma 4 and the additional assumption for all , we know that . Therefore:
This means and . □
For the noisy case, the regularization parameter , chosen by the discrepancy principle (16), provides the solution of (7), which converges to as ; see the next theorem.
Theorem 2.
Proof.
Due to the results of Theorem 1 and Corollary 1, the proof follows the method of the proof of Theorem 2.4 in []. □
4. Convergence Rates
In this section, we prove an order optimal error bound under a particular sourcewise representation. The Hölder-type source condition is commonly used to analyze the convergence rate results for many regularization methods, e.g., [,,,]. An analysis of ill-posed problems under general source conditions of the form:
with an index function , i.e., is continuous, strictly increasing, and , was reported in [,,]. For the presented work, the following source condition (Equation (33)) is necessary. However, the usual assumptions on the nonlinearity of the operator F are still required.
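For orientation, a general source condition with an index function, together with its Hölder-type special case, can be written as follows; the symbols $x^\dagger$, $\bar{x}$, $v$, $E$, and $\nu$ are standard notation assumed here for illustration:

```latex
x^{\dagger} - \bar{x} \;=\; \varphi\!\left(F'(x^{\dagger})^{*}\,F'(x^{\dagger})\right) v ,
\qquad \|v\| \le E ,
```

Choosing the index function $\varphi(\lambda) = \lambda^{\nu}$ with $\nu > 0$ recovers the classical Hölder-type sourcewise representation mentioned above.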
Assumption 2.
Let be the unique solution of minimal distance to . There exist an element and constants and such that:
with:
The sum is absolutely convergent, since are bounded linear operators.
Assumption 3.
For all , there exists a linear bounded operator and a constant such that:
- (i)
- ;
- (ii)
Proposition 1.
Proof.
By Assumption 3 and (35), the assertion is obtained. □
Proposition 2.
Let and Assumption 3 be satisfied. Then, for all , we have:
Proof.
The proof is similar to that in []. □
Proposition 3.
Proof.
Integration by parts yields:
and the following integration results in:
Combining both equations yields:
Integration by parts again yields:
In the next theorem, we estimate the functions:
Theorem 3.
Let (3), (9), Assumption 2 with , , , , and be satisfied. Let , and denote the unique solution of minimal distance to . If is the solution of (7) with , where is chosen according to the discrepancy principle (Equation (16)) with , then the functions and of (42) satisfy the following system of integral inequalities of the second kind:
and
where the constants , and are given by:
Proof.
Let the terms on the right hand side of (37) be denoted by and , respectively. Thus:
We set:
We note that Proposition 3 yields:
Let the terms on the right hand side of (55) be denoted by and , respectively. Thus:
Note that by direct integration, we get:
We remark that constants and exist for . It might be that does not hold for all problems.
Proposition 4.
Let the assumptions of Theorem 3 be satisfied. If the constant E is sufficiently small, then there exists a constant such that the following estimates hold:
Proof.
We use the estimates (A2), (A3), (A6), and (A7) to show that:
hold with and , which are defined by (43) and (44), respectively. The definition of in (43) provides:
Due to the assumption, we have and . If is sufficiently small, , , , and , there exists such that:
and:
are smaller than . Our assertion is obtained via (63). □
Next, we provide the main result of this section.
Theorem 4.
Let the assumptions of Theorem 3 be satisfied. If the constant E is sufficiently small, then there exists a constant such that:
Proof.
Thus:
5. Conclusions
In this article, an additional term was included in the Showalter differential equation in order to study the impact of this term on the classical asymptotic regularization proposed by []. In the presented work, the regularization parameter was chosen according to an a posteriori choice rule (Equation (16)), where is needed instead of . This rule includes not only the noise level but also information on the local properties of the nonlinear operator F; see [] for the analysis of Tikhonov regularization using the modified discrepancy principle. This may cause a slightly larger residual norm than the conventional discrepancy principle, but it still allows a stable approximation of . To ensure the convergence of the proposed method, the additional Assumption 1 is required.
Apart from the convergence result, the proposed method achieves the optimal convergence rate under the source condition (33), i.e., , and the assumptions on the nonlinearity of the operator F. Although the exponential term in the source condition was not necessary for the classical asymptotic regularization to obtain the optimal rate [], we discovered that the exponential term is the key to obtaining the optimal rate for the presented method, and probably also for the modified iterative RKTM studied in []. The modified iterative RKTM achieved the rate under a Hölder-type source condition, where was chosen in accordance with the discrepancy principle and was fixed. Obtaining the optimal rate of the modified iterative RKTM under the source condition (Equation (33)) requires a detailed analysis.
Furthermore, numerical integration methods for solving (2) or (7), such as Runge–Kutta-type methods, can be written in the following form:
where is a relaxation parameter and is an increment function []. Another discretization technique is based on Padé approximation in the following form []:
The effects of Padé integration in the study of the chaotic behavior of conservative nonlinear systems have been reported by Butusov et al. []. Their comparative study of Runge–Kutta methods versus Padé methods shows that chaotic behavior appears in models obtained by nonlinear integration techniques where it does not appear with conventional methods. A regularized algorithm for computing Padé approximations in floating-point arithmetic, or for problems with noise, has been reported by Gonnet et al. []. However, the role and effects of Padé integration for solving (2) or (7) require a detailed study; this is an interesting task for future investigations.
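As a concrete instance of the Runge–Kutta-type discretization of (2) mentioned above, the sketch below applies the classical fourth-order Runge–Kutta scheme to the Showalter flow for a linear toy problem F(x) = Ax, where the right-hand side is A^T(y − Ax); the operator, step size, and number of steps are illustrative assumptions.

```python
import numpy as np

def rhs(A, y_delta, x):
    # Showalter right-hand side for the linear toy problem: x'(t) = A^T (y - A x).
    return A.T @ (y_delta - A @ x)

def rk4_step(A, y_delta, x, h):
    """One classical 4th-order Runge-Kutta step; h plays the role of the
    relaxation parameter of the discrete iteration."""
    k1 = rhs(A, y_delta, x)
    k2 = rhs(A, y_delta, x + 0.5 * h * k1)
    k3 = rhs(A, y_delta, x + 0.5 * h * k2)
    k4 = rhs(A, y_delta, x + h * k3)
    return x + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

A = np.diag([1.0, 0.5, 0.2])          # illustrative ill-conditioned operator
y = A @ np.ones(3)                    # exact data for demonstration
x = np.zeros(3)
res0 = np.linalg.norm(y - A @ x)
for _ in range(200):                  # integrate the flow up to t = 100
    x = rk4_step(A, y, x, 0.5)
res_end = np.linalg.norm(y - A @ x)
```

With exact data the residual decays monotonically along the flow; with noisy data the same integration would be stopped at the regularization time T chosen by the discrepancy rule discussed in Section 2.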
Author Contributions
The authors P.P., N.S., and C.B. jointly carried out this research work and drafted the manuscript together. All authors validated the article and read the final version.
Funding
This research received no external funding.
Acknowledgments
This work was supported by the Faculty of Science of Silpakorn University and by Centre of Excellence in Mathematics of Mahidol University. We would like to express special thanks to Assistant Professor Jittisak Rakbud for their valuable help. The authors would like to thank the reviewers for valuable hints and improvements.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Tautenhahn, U. On the asymptotical regularization of nonlinear ill-posed problems. Inverse Probl. 1994, 10, 1405–1418.
- Hanke, M.; Neubauer, A.; Scherzer, O. A convergence analysis of the Landweber iteration for nonlinear ill-posed problems. Numer. Math. 1995, 72, 21–37.
- Engl, H.W.; Hanke, M.; Neubauer, A. Regularization of Inverse Problems; Kluwer Academic Publishers: Norwell, MA, USA, 1996.
- Kabanikhin, S. Inverse and Ill-Posed Problems: Theory and Applications; Inverse and Ill-Posed Problems Series; De Gruyter: Berlin, Germany, 2011.
- Hansen, P. Regularization Tools: A Matlab Package for Analysis and Solution of Discrete Ill-Posed Problems; IMM-REP, Institut for Matematisk Modellering, Danmarks Tekniske Universitet: Lyngby, Denmark, 1994.
- Tikhonov, A.; Goncharsky, A.; Stepanov, V.; Yagola, A. Numerical Methods for the Solution of Ill-Posed Problems; Mathematics and Its Applications; Springer: Dordrecht, The Netherlands, 2013.
- Kaltenbacher, B.; Neubauer, A.; Scherzer, O. Iterative Regularization Methods for Nonlinear Ill-Posed Problems; Radon Series on Computational and Applied Mathematics; Walter de Gruyter: Berlin, Germany, 2008.
- Scherzer, O. A Modified Landweber Iteration for Solving Parameter Estimation Problems. Appl. Math. Optim. 1998, 38, 45–68.
- Pornsawad, P.; Böckmann, C. Modified Iterative Runge-Kutta-Type Methods for Nonlinear Ill-Posed Problems. Numer. Funct. Anal. Optim. 2016, 37, 1562–1589.
- Zhang, Y.; Hofmann, B. On the second order asymptotical regularization of linear ill-posed inverse problems. Appl. Anal. 2018, 1–26.
- Hubmer, S.; Ramlau, R. Convergence Analysis of a Two-Point Gradient Method for Nonlinear Ill-Posed Problems. Inverse Probl. 2017, 33, 095004.
- Qi-Nian, J. Applications of the modified discrepancy principle to Tikhonov regularization of nonlinear ill-posed problems. SIAM J. Numer. Anal. 1999, 36, 475–490.
- Hanke, M. A regularizing Levenberg-Marquardt scheme with applications to inverse groundwater filtration problems. Inverse Probl. 1997, 13, 79–95.
- Mathé, P.; Hofmann, B. How general are general source conditions? Inverse Probl. 2008, 24, 015009.
- Hofmann, B.; Mathé, P. Analysis of profile functions for general linear regularization methods. SIAM J. Numer. Anal. 2007, 45, 1122–1141.
- Tautenhahn, U. Optimality for ill-posed problems under general source conditions. Numer. Funct. Anal. Optim. 1998, 19, 377–398.
- Böckmann, C.; Pornsawad, P. Iterative Runge–Kutta-type methods for nonlinear ill-posed problems. Inverse Probl. 2008, 24, 025002.
- Butusov, D.; Karimov, A.; Tutueva, A.; Kaplun, D.; Nepomuceno, E.G. The effects of Padé numerical integration in simulation of conservative chaotic systems. Entropy 2019, 21, 362.
- Gonnet, P.; Guttel, S.; Trefethen, L.N. Robust Padé approximation via SVD. SIAM Rev. 2013, 55, 101–117.