Article

Convergence Rate of the Modified Landweber Method for Solving Inverse Potential Problems

by Pornsarp Pornsawad 1,2,*,†, Parada Sungcharoen 1,2,† and Christine Böckmann 3,†
1 Department of Mathematics, Faculty of Science, Silpakorn University, 6 Rachamakka Nai Rd., Nakhon Pathom 73000, Thailand
2 Centre of Excellence in Mathematics, Mahidol University, Rama 6 Rd., Bangkok 10400, Thailand
3 Institut für Mathematik, Universität Potsdam, Karl-Liebknecht-Str. 24-25, D-14476 Potsdam OT Golm, Germany
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2020, 8(4), 608; https://doi.org/10.3390/math8040608
Submission received: 25 March 2020 / Revised: 9 April 2020 / Accepted: 10 April 2020 / Published: 16 April 2020
(This article belongs to the Special Issue Numerical Analysis: Inverse Problems – Theory and Applications)

Abstract: In this paper, we present a convergence rate analysis of the modified Landweber method under a logarithmic source condition for nonlinear ill-posed problems. The regularization parameter is chosen according to the discrepancy principle. Reconstructions of the shape of an unknown domain for an inverse potential problem by the modified Landweber method are exhibited.


1. Introduction

An inverse potential problem consists in determining the shape of an unknown domain $D$ from measurements of the Neumann boundary values of $u$ on $\partial\Omega_R$, where the solution $u$ of the homogeneous Dirichlet problem fulfills
$$\Delta u = \chi_D \quad \text{in } \Omega_R, \tag{1}$$
$$u = 0 \quad \text{on } \partial\Omega_R, \tag{2}$$
where $\chi_D$ is the characteristic function of the domain $D \subset \Omega_R = \{x \in \mathbb{R}^2 : |x| < R\}$. This inverse problem is a nonlinear, severely ill-posed problem; see [1,2]. If a classical difference method is used for solving the inverse problem, the errors can grow exponentially fast as the mesh size goes to zero. Many regularization methods have been adopted to provide a stable solution of inverse potential problems, e.g., a second-degree method with frozen derivatives [3], level set regularization [4], the iteratively regularized Gauss–Newton method [5] and the Levenberg–Marquardt method [1]. In this work, we consider a discrete version analogous to the modified asymptotic regularization proposed by Pornsawad et al. [6] to recover the starlike shape of the unknown domain $D$.
In a general setting, an inverse potential problem can be formulated via a nonlinear operator equation
$$F(x) = y, \tag{3}$$
where $y = \frac{\partial u}{\partial\nu}\big|_{\partial\Omega_R}$ is the normal derivative of $u$ on the boundary, $\nu$ is the outer normal vector on $\partial\Omega_R$, the operator $F : \mathcal{D}(F) \subset X \to Y$ is a nonlinear operator with domain $\mathcal{D}(F) \subset X$, $X$ and $Y$ are Hilbert spaces, and the unknown $x$ encodes the information of the domain $D \subset \Omega_R$. For convenience, the indices of inner products $\langle\cdot,\cdot\rangle$ and norms $\|\cdot\|$ are omitted in this article; they can always be identified from the context in which they appear. Due to the nonlinearity of Equation (3), we assume throughout that Equation (3) has a solution $x^+$, which need not be unique. We have the disturbed data $y^\delta$ with
$$\|y^\delta - y\| \le \delta, \tag{4}$$
where $\delta > 0$ is the noise level. If one solves Equation (3) by a traditional numerical method, highly oscillating solutions may occur. Thus, one needs a regularization to minimize the approximation and data errors.
One well-known continuous regularization is Showalter's method, or asymptotic regularization [7], where an approximate solution is obtained by solving an initial value problem. Later, a second-order asymptotic regularization for the linear problem $Ax = y$ was investigated by Zhang and Hofmann [8], where the optimal order is obtained under a Hölder-type source condition together with a conventional discrepancy principle as well as a total energy discrepancy principle. Recently, a modified asymptotic regularization was studied in Pornsawad et al. [6], where the term $\bar{x} - x^\delta(t)$, with $x^\delta(0) = \bar{x}$, is added to the method proposed by Tautenhahn [7], i.e.,
$$\dot{x}^\delta(t) = F'(x^\delta(t))^*\big[y^\delta - F(x^\delta(t))\big] - \big(x^\delta(t) - \bar{x}\big), \quad 0 < t \le T. \tag{5}$$
A discrete version analogous to Equation (5) was successfully developed in Pornsawad and Böckmann [9], where the whole family of Runge–Kutta methods is applied and one obtains an optimal convergence rate under a Hölder-type sourcewise condition if the Fréchet derivative is properly scaled and locally Lipschitz continuous.
It is well known that, for many applications such as the inverse potential problem and the inverse scattering problem [5], the Hölder-type source condition in general is not fulfilled even if the solution is very smooth. It is applicable only for mildly ill-posed problems [1,10,11]. Therefore, the convergence rate analysis of the explicit Euler method given by
$$x_{n+1}^\delta = x_n^\delta + F'(x_n^\delta)^*\big(y^\delta - F(x_n^\delta)\big) - \alpha_n\big(x_n^\delta - x_0\big) \tag{6}$$
is considered in this article under the logarithmic source condition in Equation (7) and the properly scaled Fréchet derivative, $\|F'(x^+)\| \le 1$. The method in Equation (6) is a particular case of the iterative Runge–Kutta-type methods [9], where the relaxation parameter $\tau_n$ is obtained by discretization of the conventional asymptotic regularization [7]. We define
$$f = f_p, \qquad f_p(\lambda) := \begin{cases} \left(\ln\frac{e}{\lambda}\right)^{-p}, & 0 < \lambda \le 1, \\ 0, & \lambda = 0, \end{cases} \tag{7}$$
with $p > 0$ and the usual sourcewise representation
$$x^+ - x_0 = f\big(F'(x^+)^*F'(x^+)\big)\,w, \qquad w \in X, \tag{8}$$
where $\|w\|$ is sufficiently small. The method in Equation (6) is also known as the modified Landweber method [12], which has the rate $O(\sqrt{\delta})$ under the Hölder-type source condition and a general discrepancy principle. The convergence rate analysis under the logarithmic source condition in Equation (7) has been successfully carried out by Hohage [5] for the iteratively regularized Gauss–Newton method and by Deuflhard et al. [13] for Landweber's iteration. Current studies of source conditions may be found, e.g., in Romanov et al. [11], Bakushinsky et al. [14], Schuster et al. [15] and Albani et al. [16].
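As a quick sanity check of the scheme in Equation (6), the sketch below runs the modified Landweber step on a small linear toy problem in Python (the paper's own experiments use MATLAB). The diagonal operator, the noise level, and $\tau$ are illustrative choices of ours, not the paper's inverse potential setup; the step sizes follow the choice $\alpha_n = \frac{1}{2}(n + l_0)^{-\psi}$ used later in Proposition 1, and the loop stops by the discrepancy principle.

```python
import numpy as np

# Toy linear illustration of the modified Landweber step in Equation (6):
#   x_{n+1} = x_n + F'(x_n)^* (y_delta - F(x_n)) - alpha_n (x_n - x_0),
# here with F(x) = A x for a diagonal, properly scaled operator (||A|| <= 1).
# Spectrum, noise level, and tau are illustrative, not from the paper.
rng = np.random.default_rng(0)
s = np.array([1.0, 0.8, 0.5, 0.3])          # decaying spectrum: mild ill-posedness
A = np.diag(s)
x_true = np.ones(4)
delta = 0.02
y_delta = A @ x_true + delta * rng.standard_normal(4) / 2.0   # ||noise|| ~ delta

x0 = np.zeros(4)
x = x0.copy()
l0, psi, tau = 2, 0.9, 2.5
for n in range(300):
    if np.linalg.norm(y_delta - A @ x) <= tau * delta:   # discrepancy principle
        break
    alpha = 0.5 * (n + l0) ** (-psi)                     # step size, Proposition 1
    x = x + A.T @ (y_delta - A @ x) - alpha * (x - x0)   # Equation (6)

err0 = np.linalg.norm(x_true - x0)   # initial error, here 2.0
err = np.linalg.norm(x_true - x)     # final error
```

The damping term $-\alpha_n(x_n - x_0)$ biases the iterates toward the initial guess, which is why $\alpha_n \to 0$ is essential for convergence.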
The purpose of this work is to present the convergence rate analysis of the iterative scheme of Equation (6) under the logarithmic source condition in Equation (7) with $1 \le p \le 2$ and to recover the shape of an unknown domain $D$ for an inverse potential problem (Equations (1) and (2)). Thus, in Section 2, preliminary results are prepared. As usual, the Fréchet derivative of $F$ needs to be scaled. Furthermore, we assume a nonlinearity condition on $F$ in a ball $B_\rho(x_0) = \{x \in X : \|x - x_0\| \le \rho\}$, $\rho > 0$, which is given in Assumption 1. It is well known that, without an additional assumption on the nonlinear operator, a convergence rate cannot be provided. The following assumption has been used in many works [5,17], i.e., there exist bounded linear operators $R(\tilde{x},x) : Y \to Y$ and $Q(\tilde{x},x) : X \to Y$ such that
$$F'(\tilde{x}) = R(\tilde{x},x)\,F'(x) + Q(\tilde{x},x),$$
$$\|I - R(\tilde{x},x)\| \le C_R, \qquad \|Q(\tilde{x},x)\| \le C_Q\,\|F'(x)(\tilde{x} - x)\|,$$
with nonnegative constants $C_R$ and $C_Q$. However, a weaker condition will be used in this work; it is given in Assumption 1. In Section 3, the convergence rate of the modified Landweber method under the logarithmic source condition is presented. An application of the modified Landweber method to an inverse potential problem is provided in Section 4.

2. Preliminary Results

In this section, preliminary results are prepared to provide the convergence analysis of the modified Landweber method.
Lemma 1.
Let $A$ be a linear operator with $\|A\| \le 1$. For $n \in \mathbb{N}$ with $n > 1$, $e_0 := f(A^*A)w$ with $f$ given by Equation (7) and $p > 0$, there exist positive constants $c_1$ and $c_2$ such that
$$\Big\|\prod_{i=0}^{n-1}(1-\alpha_i)\,(I - A^*A)^n e_0\Big\| \le c_1\,(\ln(n+e))^{-p}\,\|w\|$$
and
$$\Big\|A\prod_{i=0}^{n-1}(1-\alpha_i)\,(I - A^*A)^n e_0\Big\| \le c_2\,(n+1)^{-1/2}(\ln(n+e))^{-p}\,\|w\|$$
with $0 < \alpha_i \le 1$, $i = 0, 1, 2, \ldots, n$.
Proof. 
By spectral theory and Equations (7), (A1), and (A2), we have
$$\Big\|\prod_{i=0}^{n-1}(1-\alpha_i)(I - A^*A)^n e_0\Big\| \le \big\|(I - A^*A)^n f(A^*A)\big\|\,\|w\| \le \sup_{\lambda\in(0,1]}\big|(1-\lambda)^n(1-\ln\lambda)^{-p}\big|\,\|w\| \le c_1\,(\ln(n+e))^{-p}\,\|w\|$$
for some constant $c_1 > 0$. Similarly, spectral theory and Equations (7), (A3), and (A4) provide
$$\Big\|A\prod_{i=0}^{n-1}(1-\alpha_i)(I - A^*A)^n e_0\Big\| \le \big\|(I - A^*A)^n(A^*A)^{1/2} f(A^*A)\big\|\,\|w\| \le \sup_{\lambda\in(0,1]}\big|(1-\lambda)^n\lambda^{1/2}(1-\ln\lambda)^{-p}\big|\,\|w\| \le c_2\,(n+1)^{-1/2}(\ln(n+e))^{-p}\,\|w\|$$
for some constant $c_2 > 0$.  □
Proposition 1.
Let $A$ be a linear operator with $\|A\| \le 1$. For $n \in \mathbb{N}$ with $n > 1$, $e_0 := f(A^*A)w$ with $f$ given by Equation (7) and $p = 2\psi$ for some $\psi \in [1/2, 1]$, there exist positive constants $\tilde{c}_1$ and $\tilde{c}_2$ such that
$$\Big\|\sum_{j=0}^{n-1}\alpha_{n-j-1}(I - A^*A)^j\prod_{i=1}^{j}(1-\alpha_{n-i})\,e_0\Big\| \le \tilde{c}_1\,(\ln(n+e))^{-p}\,\|w\| \tag{13}$$
and
$$\Big\|A\sum_{j=0}^{n-1}\alpha_{n-j-1}(I - A^*A)^j\prod_{i=1}^{j}(1-\alpha_{n-i})\,e_0\Big\| \le \tilde{c}_2\,(n+1)^{-1/2}(\ln(n+e))^{-p}\,\|w\| \tag{14}$$
for $\alpha_i = \frac{1}{2}(i + l_0)^{-\psi}$, $i = 0, 1, 2, \ldots, n$, and $l_0 \ge 2$.
Proof. 
We will prove by induction that, for some $c \in \mathbb{R}$, the inequality
$$\Big\|\sum_{j=0}^{k-1}\alpha_{n-j-1}(I - A^*A)^j\prod_{i=1}^{j}(1-\alpha_{n-i})\,e_0\Big\| \le c\,(\ln(k+e))^{-p}\,\|w\| \tag{15}$$
is true. Note that $\frac{p}{\psi} = 2 \le \frac{\ln 2}{\ln(\ln(1+e))}$ and $l_0 \ge 2$ provide
$$\frac{p}{\psi}\,\ln(\ln(1+e)) \le \ln 2 \le \ln l_0 \le \ln(l_0 + n - 1).$$
This means that $(\ln(1+e))^p \le (l_0 + n - 1)^\psi$. Using Equation (7), we have
$$\big\|\alpha_{n-1}(I - A^*A)^0 e_0\big\| \le \frac{1}{2}(n-1+l_0)^{-\psi}\sup_{\lambda\in(0,1]}\big|(1-\ln\lambda)^{-p}\big|\,\|w\| \le \frac{1}{2}\,(\ln(1+e))^{-p}\,\|w\|.$$
Thus, Equation (15) holds for $k = 1$. Next, we assume that Equation (15) holds for $k - 1 < n$ for some constant $c$. Applying Lemma 1, we obtain
$$\Big\|\sum_{j=0}^{k}\alpha_{n-j-1}(I - A^*A)^j\prod_{i=1}^{j}(1-\alpha_{n-i})\,e_0\Big\| \le \Big\|\sum_{j=0}^{k-1}\alpha_{n-j-1}(I - A^*A)^j\prod_{i=1}^{j}(1-\alpha_{n-i})\,e_0\Big\| + \Big\|\alpha_{n-k-1}(I - A^*A)^k\prod_{i=1}^{k}(1-\alpha_{n-i})\,e_0\Big\| \le c\,(\ln(k+e))^{-p}\,\|w\| + \frac{1}{2}(n-k-1+l_0)^{-\psi}\big\|(I - A^*A)^k f(A^*A)\big\|\,\|w\| \le c\,(\ln(k+e))^{-p}\,\|w\| + \frac{c_1}{2}(n-k-1+l_0)^{-\psi}(\ln(k+e))^{-p}\,\|w\|. \tag{16}$$
By Figure 1, we observe that
$$\frac{1}{2}(n-k-1+l_0)^{-\psi}(\ln(k+e))^{-p} \le (\ln(k+1+e))^{-p}.$$
Moreover, in Figure 2, the graph of $\big(\frac{\ln(k+1+e)}{\ln(k+e)}\big)^{p} + 1$ has its maximum at $k = 1$ and $p = 2$, and the maximum value is $\big(\frac{\ln(2+e)}{\ln(1+e)}\big)^{2} + 1 = 2.3956$. Thus, Equation (16) becomes
$$\Big\|\sum_{j=0}^{k}\alpha_{n-j-1}(I - A^*A)^j\prod_{i=1}^{j}(1-\alpha_{n-i})\,e_0\Big\| \le \max\{c, c_1\}\,(\ln(k+1+e))^{-p}\,\|w\|\,\Big[\Big(\frac{\ln(k+1+e)}{\ln(k+e)}\Big)^{p} + 1\Big] \le \tilde{c}_1\,(\ln(k+1+e))^{-p}\,\|w\|$$
for some constant $\tilde{c}_1$. Thus, the induction is complete.
We prove Equation (14) by induction in the same manner as Equation (13).  □
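The numerical constant quoted in the proof can be verified directly; the snippet below evaluates $\big(\frac{\ln(2+e)}{\ln(1+e)}\big)^{2} + 1$, the claimed maximum of the Figure 2 expression at $k = 1$, $p = 2$.

```python
import math

# Check of the numerical claim in the proof of Proposition 1:
# (ln(2+e)/ln(1+e))^2 + 1 should equal approximately 2.3956.
val = (math.log(2 + math.e) / math.log(1 + math.e)) ** 2 + 1
```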
Assumption 1.
There exist positive constants $c_L$, $c_R$, and $c_r$ and a linear bounded operator $R_x : Y \to Y$ such that, for $x \in B_\rho(x_0)$, the following conditions hold:
$$F'(x) = R_x\,F'(x^+), \tag{17}$$
$$\|R_x - I\| \le c_L\,\|x - x^+\|, \tag{18}$$
$$\|R_x - I\| \le c_R, \tag{19}$$
$$\|R_x\| \le c_r, \tag{20}$$
where $x^+$ is the exact solution of Equation (3).
Lemma 2.
Let Assumption 1 hold. Then, we have
$$\big\|(1-\alpha_n)I - R_{x_n^\delta}^*\big\| \le \frac{1}{2}K_R\,\|e_n\|$$
for some constant $K_R > 0$, where $e_n = x^+ - x_n^\delta$.
Proof. 
We note that the reverse triangle inequality and Equation (19) guarantee the estimates
1 R x I R x I c R 1 R x I
and
I + R x * 1 R x I × I R x * I + R x * c R 1 I R x * I + R x * .
Using the estimates in Equations (18), (20), (22), and (23) and the triangle inequality, we now have
( 1 α n ) I R x n δ * = 1 2 ( 1 ( 1 + α n ) ) ( I + R x n δ * ) + 1 2 ( 1 + ( 1 α n ) ) ( I R x n δ * ) 1 2 α n c R 1 I + R x n δ * + 2 α n I R x n δ * 1 2 K R e n
with the positive constant K R = α n c R 1 I + c r + 2 α n c L .  □
Proposition 2.
Let the conditions in Equations (17) and (18) of Assumption 1 hold. Then,
$$\big\|F(x_n^\delta) - F(x^+) - F'(x^+)(x_n^\delta - x^+)\big\| \le \frac{1}{2}c_L\,\|e_n\|\,\|Ke_n\|$$
for $x \in B_\rho(x_0)$ with $K = F'(x^+)$ and $e_n = x^+ - x_n^\delta$.
Proof. 
Define $w_t = x^+ + t(x_n^\delta - x^+)$ for $0 \le t \le 1$. Using the mean value theorem with Equations (17) and (18), we obtain
$$\big\|F(x_n^\delta) - F(x^+) - F'(x^+)(x_n^\delta - x^+)\big\| = \Big\|\int_0^1\big[F'(x^+ + t(x_n^\delta - x^+)) - F'(x^+)\big](x_n^\delta - x^+)\,dt\Big\| \le \int_0^1\|R_{w_t} - I\|\,\big\|F'(x^+)(x_n^\delta - x^+)\big\|\,dt \le \frac{1}{2}c_L\,\|e_n\|\,\|Ke_n\|.$$
 □

3. Convergence Analysis

To investigate the convergence rate of the modified Landweber method under the logarithmic source condition, we choose the regularization parameter $N$ according to the generalized discrepancy principle, i.e., the iteration is stopped after $N = N(y^\delta, \delta)$ steps with
$$\|y^\delta - F(x_N^\delta)\| \le \tau\delta < \|y^\delta - F(x_n^\delta)\|, \qquad 0 \le n < N, \tag{26}$$
where $\tau > \frac{2-\eta}{1-\eta}$ is a positive number. In addition to the discrepancy principle, $F$ satisfies the local property in the open ball $B_\rho(x_0)$ of radius $\rho$ around $x_0$,
$$\|F(x) - F(\tilde{x}) - F'(x)(x - \tilde{x})\| \le \eta\,\|F(x) - F(\tilde{x})\|, \qquad \eta < \frac{1}{2}, \tag{27}$$
with $x, \tilde{x} \in B_\rho(x_0) \subset \mathcal{D}(F)$. Utilizing the triangle inequality yields
$$\frac{1}{1+\eta}\|F'(x)(x - \tilde{x})\| \le \|F(x) - F(\tilde{x})\| \le \frac{1}{1-\eta}\|F'(x)(x - \tilde{x})\| \tag{28}$$
to ensure at least local convergence to a solution $x^+$ of Equation (3) in $B_{\rho/2}(x_0)$.
Theorem 1.
Assume that the problem in Equation (3) has a solution $x^+$ in $B_{\rho/2}(x_0)$, that $y^\delta$ fulfills Equation (4), and that $F$ satisfies Equations (17) and (18). Assume that the Fréchet derivative of $F$ is scaled such that $\|F'(x)\| \le 1$ for $x \in B_{\rho/2}(x_0)$. Furthermore, assume that the source condition in Equations (7) and (8) is fulfilled and that the modified Landweber method is stopped according to Equation (26). If $\|w\|$ is sufficiently small, then there exists a constant $K_2$ depending only on $p$ and $\|w\|$ with
$$\|e_n\| \le K_2\,(\ln n)^{-p} \tag{29}$$
and
$$\|y^\delta - F(x_n^\delta)\| \le 4K_2\,(n+1)^{-1/2}(\ln n)^{-p}.$$
Proof. 
We use the abbreviations $e_n := x^+ - x_n^\delta$ for the error of the $n$th iterate $x_n^\delta$ of Equation (6) and $K := F'(x^+)$. We can rewrite Equation (6) in the form
$$x^+ - x_{n+1}^\delta = (1-\alpha_n)(x^+ - x_n^\delta) + F'(x_n^\delta)^*\big(F(x_n^\delta) - y^\delta\big) - \alpha_n(x_0 - x^+),$$
that is,
$$e_{n+1} = (1-\alpha_n)e_n + F'(x_n^\delta)^*\big(F(x_n^\delta) - y^\delta\big) - \alpha_n(x_0 - x^+).$$
Adding and subtracting $(1-\alpha_n)K^*Ke_n$, using $F'(x_n^\delta)^* = K^*R_{x_n^\delta}^*$ from Equation (17), and splitting $F(x_n^\delta) - F(x^+) = \big(F(x_n^\delta) - y^\delta\big) + \big(y^\delta - y\big)$, we arrive at
$$e_{n+1} = (1-\alpha_n)(I - K^*K)e_n + (1-\alpha_n)K^*\big[F(x_n^\delta) - F(x^+) - K(x_n^\delta - x^+)\big] + K^*\big[(1-\alpha_n)I - R_{x_n^\delta}^*\big]\big(y^\delta - F(x_n^\delta)\big) + (1-\alpha_n)K^*(y - y^\delta) - \alpha_n(x_0 - x^+). \tag{30}$$
Rewriting Equation (30), we have
$$e_{n+1} = (1-\alpha_n)(I - K^*K)e_n + (1-\alpha_n)K^*(y - y^\delta) - \alpha_n(x_0 - x^+) + K^*z_n, \tag{31}$$
where
$$z_n = (1-\alpha_n)\big(F(x_n^\delta) - F(x^+) - K(x_n^\delta - x^+)\big) + \big[(1-\alpha_n)I - R_{x_n^\delta}^*\big]\big(y^\delta - F(x_n^\delta)\big).$$
By recurrence and Equation (31), we obtain the closed expression for the error
$$e_n = \Big[\prod_{i=0}^{n-1}(1-\alpha_i)(I - K^*K)^n + \sum_{j=0}^{n-1}\alpha_{n-j-1}(I - K^*K)^j\prod_{i=1}^{j}(1-\alpha_{n-i})\Big]e_0 + \sum_{j=1}^{n}(I - K^*K)^{j-1}\prod_{i=1}^{j}(1-\alpha_{n-i})\,K^*(y - y^\delta) + \sum_{j=0}^{n-1}\prod_{i=n-j}^{n-1}(1-\alpha_i)(I - K^*K)^j K^*z_{n-j-1}. \tag{32}$$
Moreover, it holds that
$$Ke_n = K\Big[\prod_{i=0}^{n-1}(1-\alpha_i)(I - K^*K)^n + \sum_{j=0}^{n-1}\alpha_{n-j-1}(I - K^*K)^j\prod_{i=1}^{j}(1-\alpha_{n-i})\Big]e_0 + K\sum_{j=1}^{n}(I - K^*K)^{j-1}\prod_{i=1}^{j}(1-\alpha_{n-i})\,K^*(y - y^\delta) + K\sum_{j=0}^{n-1}\prod_{i=n-j}^{n-1}(1-\alpha_i)(I - K^*K)^j K^*z_{n-j-1}. \tag{33}$$
Next, for $0 \le n < N$, using the discrepancy principle, the triangle inequality, Equation (28), and $\tau > \frac{2-\eta}{1-\eta} \ge 2$, we get
$$\|y^\delta - F(x_n^\delta)\| \le 2\|y^\delta - F(x_n^\delta)\| - \tau\delta \le 2\|y - F(x_n^\delta)\| \le \frac{2}{1-\eta}\|Ke_n\|. \tag{34}$$
Using Lemma 2, Proposition 2, and Equation (34), we obtain
$$\|z_n\| \le (1-\alpha_n)\big\|F(x_n^\delta) - F(x^+) - K(x_n^\delta - x^+)\big\| + \big\|(1-\alpha_n)I - R_{x_n^\delta}^*\big\|\,\|y^\delta - F(x_n^\delta)\| \le \frac{1}{2}(1-\alpha_n)c_L\|e_n\|\,\|Ke_n\| + \frac{1}{2}K_R\|e_n\|\cdot\frac{2}{1-\eta}\|Ke_n\| \le \hat{c}_1\,\|Ke_n\|\,\|e_n\|, \tag{35}$$
where $\hat{c}_1 = \frac{c_L}{2} + \frac{K_R}{1-\eta}$, and we use the fact that $1 - \alpha_n \le 1$.
It holds that $\|e_n\|$ is decreasing, independently of the source condition, for $0 \le n < N$; see Proposition 2.2 in Scherzer [12].
Next, we show by induction that
$$\|e_n\| \le \hat{K}_2\,(\ln(n+e))^{-p} \tag{36}$$
and
$$\|Ke_n\| \le \hat{K}_2\,(n+1)^{-1/2}(\ln(n+e))^{-p} \tag{37}$$
hold for all $0 \le n < N$, with $\hat{K}_2$ a positive constant which does not depend on $n$. This is obvious for $n = 0$. Assuming that Equations (36) and (37) are true for all $k$ with $0 \le k < n < N$, we have to show that they are true for $k = n$. We rewrite Equation (32) as follows:
$$\|e_n\| \le \Big\|\prod_{i=0}^{n-1}(1-\alpha_i)(I - K^*K)^n e_0\Big\| + \Big\|\sum_{j=0}^{n-1}\alpha_{n-j-1}(I - K^*K)^j\prod_{i=1}^{j}(1-\alpha_{n-i})\,e_0\Big\| + \Big\|\sum_{j=1}^{n}(I - K^*K)^{j-1}\prod_{i=1}^{j}(1-\alpha_{n-i})\,K^*(y - y^\delta)\Big\| + \Big\|\sum_{j=0}^{n-1}\prod_{i=n-j}^{n-1}(1-\alpha_i)(I - K^*K)^j K^*z_{n-j-1}\Big\|. \tag{38}$$
By the assumption $\|K\| \le 1$ (see, e.g., Louis [18] or Vainikko and Veterennikov [19], cited in Hanke et al. [20]), we have
$$\Big\|\sum_{k=0}^{n-1}(I - K^*K)^k K^*\Big\| \le \sqrt{n} \tag{39}$$
and
$$\big\|(I - K^*K)^j K^*\big\| \le (j+1)^{-1/2}, \qquad j \ge 1. \tag{40}$$
Consequently,
$$\Big\|\sum_{j=1}^{n}(I - K^*K)^{j-1}\prod_{i=1}^{j}(1-\alpha_{n-i})\,K^*(y - y^\delta)\Big\| \le \Big\|\sum_{j=1}^{n}(I - K^*K)^{j-1}K^*\Big\|\,\|y - y^\delta\| \le \sqrt{n}\,\delta$$
and
$$\Big\|\sum_{j=0}^{n-1}\prod_{i=n-j}^{n-1}(1-\alpha_i)(I - K^*K)^j K^*z_{n-j-1}\Big\| \le \sum_{j=0}^{n-1}\big\|(I - K^*K)^j K^*\big\|\,\|z_{n-j-1}\| \le \sum_{j=0}^{n-1}(j+1)^{-1/2}\|z_{n-j-1}\|.$$
Using Lemma 1 for $n > 1$, Proposition 1, and Equations (39) and (40) in Equation (38), we obtain
$$\|e_n\| \le c_1(\ln(n+e))^{-p}\|w\| + \tilde{c}_1(\ln(n+e))^{-p}\|w\| + \sqrt{n}\,\delta + \sum_{j=0}^{n-1}(j+1)^{-1/2}\|z_{n-j-1}\|. \tag{41}$$
Then, using Equation (35) to estimate the last term of Equation (41), we obtain
$$\sum_{j=0}^{n-1}(j+1)^{-1/2}\|z_{n-j-1}\| \le \hat{c}_1\sum_{j=0}^{n-1}(j+1)^{-1/2}\|Ke_{n-j-1}\|\,\|e_{n-j-1}\|. \tag{42}$$
We apply the induction hypotheses in Equations (36) and (37) to Equation (42):
$$\sum_{j=0}^{n-1}(j+1)^{-1/2}\|z_{n-j-1}\| \le \hat{c}_1\hat{K}_2^2\sum_{j=0}^{n-1}(j+1)^{-1/2}(n-j)^{-1/2}(\ln(n-j-1+e))^{-2p} = \hat{c}_1\hat{K}_2^2\sum_{j=0}^{n-1}\Big(\frac{j+1}{n+1}\Big)^{-1/2}\Big(\frac{n-j}{n+1}\Big)^{-1/2}(\ln(n-j-1+e))^{-2p}\,\frac{1}{n+1}. \tag{43}$$
Rewriting Equation (43), we have
$$\sum_{j=0}^{n-1}(j+1)^{-1/2}\|z_{n-j-1}\| \le \hat{c}_1\hat{K}_2^2(\ln(n+e))^{-2p}\sum_{j=0}^{n-1}\Big(\frac{j+1}{n+1}\Big)^{-1/2}\Big(\frac{n-j}{n+1}\Big)^{-1/2}\frac{1}{n+1}\Big(\frac{\ln(n+e)}{\ln(n-j-1+e)}\Big)^{2p} \le \hat{c}_1\hat{K}_2^2(\ln(n+e))^{-p}\sum_{j=0}^{n-1}\Big(\frac{j+1}{n+1}\Big)^{-1/2}\Big(\frac{n-j}{n+1}\Big)^{-1/2}\frac{1}{n+1}\Big(\frac{\ln(n+e)}{\ln(n-j-1+e)}\Big)^{2p}. \tag{44}$$
The next idea is similar to the proof of Lemma A.5 in Deuflhard et al. [13]. Firstly, $n - j - 1 \ge 0$ gives $\ln(n-j-1+e) \ge 1$, which provides
$$\frac{\ln\frac{n+1}{n-j-1+e}}{\ln(n-j-1+e)} \le \ln\frac{n+1}{n-j-1+e}.$$
For $0 \le j \le n-1$, the properties of the logarithm provide
$$\frac{\ln(n+e)}{\ln(n-j-1+e)} = \frac{\ln(n+e)}{\ln(n+1)}\Big(1 + \frac{\ln\frac{n+1}{n-j-1+e}}{\ln(n-j-1+e)}\Big) \le E\Big(1 + \ln\frac{n+1}{n-j-1+e}\Big) \tag{45}$$
with a generic constant $E < 2$ which does not depend on $n \ge 1$.
Accordingly, Equation (44) can be estimated as follows:
$$\sum_{j=0}^{n-1}(j+1)^{-1/2}\|z_{n-j-1}\| \le \hat{c}_1E^{2p}\hat{K}_2^2(\ln(n+e))^{-p}\sum_{j=0}^{n-1}\Big(\frac{j+1}{n+1}\Big)^{-1/2}\Big(\frac{n-j}{n+1}\Big)^{-1/2}\frac{1}{n+1}\Big(1 + \ln\frac{n+1}{n-j-1+e}\Big)^{2p} \le \hat{c}_1E^{2p}\hat{K}_2^2(\ln(n+e))^{-p}\sum_{j=0}^{n-1}\Big(\frac{j+1}{n+1}\Big)^{-1/2}\Big(\frac{n-j}{n+1}\Big)^{-1/2}\frac{1}{n+1}\Big(1 - \ln\frac{n-j}{n+1}\Big)^{2p}.$$
The last summation is bounded since, with $s := \frac{1}{2(n+1)}$, the integral
$$\int_s^{1-s} x^{-1/2}(1-x)^{-1/2}\big(1 - \ln(1-x)\big)^{2p}\,dx$$
is bounded from above by a positive constant $E_p$ independently of $n$. Substituting the above estimate into Equation (41) yields
$$\|e_n\| \le c_1(\ln(n+e))^{-p}\|w\| + \tilde{c}_1(\ln(n+e))^{-p}\|w\| + \sqrt{n}\,\delta + c_p\hat{K}_2^2(\ln(n+e))^{-p} = \big[(c_1 + \tilde{c}_1)\|w\| + c_p\hat{K}_2^2\big](\ln(n+e))^{-p} + \sqrt{n}\,\delta \tag{47}$$
with $c_p = \hat{c}_1E^{2p}E_p$.
Similarly, Equation (33) can be rewritten as
$$\|Ke_n\| \le \Big\|K\prod_{i=0}^{n-1}(1-\alpha_i)(I - K^*K)^n e_0\Big\| + \Big\|K\sum_{j=0}^{n-1}\alpha_{n-j-1}(I - K^*K)^j\prod_{i=1}^{j}(1-\alpha_{n-i})\,e_0\Big\| + \Big\|K\sum_{j=1}^{n}(I - K^*K)^{j-1}\prod_{i=1}^{j}(1-\alpha_{n-i})\,K^*(y - y^\delta)\Big\| + \Big\|K\sum_{j=0}^{n-1}\prod_{i=n-j}^{n-1}(1-\alpha_i)(I - K^*K)^j K^*z_{n-j-1}\Big\|. \tag{48}$$
By the assumption $\|K\| \le 1$ (see, e.g., Louis [18] or Vainikko and Veterennikov [19], cited in Hanke et al. [20]), we have
$$\big\|(I - KK^*)^j KK^*\big\| \le (j+1)^{-1} \tag{49}$$
and
$$\Big\|K\sum_{k=0}^{n-1}(I - K^*K)^k K^*\Big\| = \big\|I - (I - KK^*)^n\big\| \le 1. \tag{50}$$
Consequently,
$$\Big\|K\sum_{j=1}^{n}(I - K^*K)^{j-1}\prod_{i=1}^{j}(1-\alpha_{n-i})\,K^*(y - y^\delta)\Big\| \le \big\|I - (I - KK^*)^n\big\|\,\delta \le \delta$$
and
$$\Big\|K\sum_{j=0}^{n-1}\prod_{i=n-j}^{n-1}(1-\alpha_i)(I - K^*K)^j K^*z_{n-j-1}\Big\| \le \sum_{j=0}^{n-1}(j+1)^{-1}\|z_{n-j-1}\|.$$
Using Lemma 1 for $n > 1$ and Proposition 1 and applying Equations (49) and (50) to Equation (48), we get
$$\|Ke_n\| \le c_2(n+1)^{-1/2}(\ln(n+e))^{-p}\|w\| + \tilde{c}_2(n+1)^{-1/2}(\ln(n+e))^{-p}\|w\| + \delta + \sum_{j=0}^{n-1}(j+1)^{-1}\|z_{n-j-1}\|. \tag{51}$$
We may estimate the last term of Equation (51) by using Equations (35) and (45) and the fact that $(\ln(n+e))^{-p} \le 1$ as follows:
$$\sum_{j=0}^{n-1}(j+1)^{-1}\|z_{n-j-1}\| \le \hat{c}_1\sum_{j=0}^{n-1}(j+1)^{-1}\|Ke_{n-j-1}\|\,\|e_{n-j-1}\| \le \hat{c}_1\hat{K}_2^2\sum_{j=0}^{n-1}(j+1)^{-1}(n-j)^{-1/2}(\ln(n-j-1+e))^{-2p} = \hat{c}_1\hat{K}_2^2(n+1)^{-1/2}(\ln(n+e))^{-p}\sum_{j=0}^{n-1}\Big(\frac{j+1}{n+1}\Big)^{-1}\Big(\frac{n-j}{n+1}\Big)^{-1/2}\Big(\frac{\ln(n+e)}{\ln(n-j-1+e)}\Big)^{2p}(\ln(n+e))^{-p}\,\frac{1}{n+1} \le \hat{c}_1E^{2p}\hat{K}_2^2(n+1)^{-1/2}(\ln(n+e))^{-p}\sum_{j=0}^{n-1}\Big(\frac{j+1}{n+1}\Big)^{-1}\Big(\frac{n-j}{n+1}\Big)^{-1/2}\Big(1 - \ln\frac{n-j}{n+1}\Big)^{2p}\,\frac{1}{n+1}.$$
The last summation is bounded because, with $s := \frac{1}{2(n+1)}$, the integral
$$\int_s^{1-s} x^{-1}(1-x)^{-1/2}\big(1 - \ln(1-x)\big)^{2p}\,dx \le \tilde{E}_p$$
with a positive constant $\tilde{E}_p$ independent of $n$. Substituting the above information into Equation (51) yields
$$\|Ke_n\| \le c_2(n+1)^{-1/2}(\ln(n+e))^{-p}\|w\| + \tilde{c}_2(n+1)^{-1/2}(\ln(n+e))^{-p}\|w\| + \delta + \tilde{c}_p\hat{K}_2^2(n+1)^{-1/2}(\ln(n+e))^{-p} \le \big[(c_2 + \tilde{c}_2)\|w\| + \tilde{c}_p\hat{K}_2^2\big](n+1)^{-1/2}(\ln(n+e))^{-p} + \delta \tag{54}$$
with $\tilde{c}_p = \hat{c}_1E^{2p}\tilde{E}_p$.
Setting $c^* := \max\{c_1 + \tilde{c}_1,\, c_2 + \tilde{c}_2\}$, Equations (47) and (54) become
$$\|e_n\| \le \big[c^*\|w\| + c_p\hat{K}_2^2\big](\ln(n+e))^{-p} + \sqrt{n}\,\delta \tag{55}$$
and
$$\|Ke_n\| \le \big[c^*\|w\| + \tilde{c}_p\hat{K}_2^2\big](n+1)^{-1/2}(\ln(n+e))^{-p} + \delta. \tag{56}$$
Because of Equations (26) and (28), we have
$$\tau\delta \le \|y^\delta - F(x_n^\delta)\| \le \delta + \frac{1}{1-\eta}\|Ke_n\|.$$
Moreover,
$$(1-\eta)(\tau - 1)\delta \le \|Ke_n\| \le \big[c^*\|w\| + \tilde{c}_p\hat{K}_2^2\big](n+1)^{-1/2}(\ln(n+e))^{-p} + \delta. \tag{57}$$
Due to $\tau > \frac{2-\eta}{1-\eta}$, we have $\Theta = (1-\eta)(\tau - 1) - 1 > 0$. We can rewrite Equation (57) as follows:
$$\delta \le \frac{1}{\Theta}\big[c^*\|w\| + \tilde{c}_p\hat{K}_2^2\big](n+1)^{-1/2}(\ln(n+e))^{-p}. \tag{58}$$
Applying Equation (58) to Equation (55), we get
$$\|e_n\| \le \Big(1 + \frac{1}{\Theta}\Big)\big[c^*\|w\| + \hat{c}_p\hat{K}_2^2\big](\ln(n+e))^{-p} \tag{59}$$
with $\hat{c}_p = \max\{c_p, \tilde{c}_p\}$.
In a similar manner, Equation (56) can be written as
$$\|Ke_n\| \le \Big(1 + \frac{1}{\Theta}\Big)\big[c^*\|w\| + \tilde{c}_p\hat{K}_2^2\big](n+1)^{-1/2}(\ln(n+e))^{-p}. \tag{60}$$
Finally, we select $\|w\|$ such that $\big(1 + \frac{1}{\Theta}\big)\big[c^*\|w\| + \hat{c}_p\hat{K}_2^2\big] \le \hat{K}_2$. This is always possible for sufficiently small $\|w\|$; cf. [13]. Therefore, the induction is complete. Using Equation (36), we have
$$\|e_n\| \le \hat{K}_2\Big(\frac{\ln n}{\ln(n+e)}\Big)^{p}(\ln n)^{-p} \le K_2\,(\ln n)^{-p},$$
and similarly, by using Equation (34) together with Equation (37), we have
$$\|y^\delta - F(x_n^\delta)\| \le \frac{2}{1-\eta}\hat{K}_2\,(n+1)^{-1/2}(\ln(n+e))^{-p} \le 4K_2\,(n+1)^{-1/2}(\ln n)^{-p}.$$
Thus, the assertion is obtained. □
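The spectral estimates (39) and (40) used in the proof reduce, by the spectral theorem, to scalar inequalities in $\lambda \in (0,1]$. The snippet below approximates the suprema on a fine grid and confirms them for several $n$ and $j$; this is a numerical illustration under our own grid choices, not a proof.

```python
import numpy as np

# Scalar form of the estimates used with ||K|| <= 1:
#   sum_{k<n} (1-lam)^k sqrt(lam) = (1 - (1-lam)^n)/sqrt(lam) <= sqrt(n)   (39)
#   (1-lam)^j sqrt(lam) <= (j+1)^(-1/2)                                    (40)
lam = np.linspace(1e-8, 1.0, 200001)
ok_sum = all(((1 - (1 - lam)**n) / np.sqrt(lam)).max() <= np.sqrt(n)
             for n in (2, 10, 100, 1000))
ok_single = all(((1 - lam)**j * np.sqrt(lam)).max() <= 1 / np.sqrt(j + 1)
                for j in (1, 10, 100))
```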
Theorem 2.
Under the assumptions of Theorem 1 and $1 \le p \le 2$, we have
$$N^{1/2}(\ln N)^{-p} \le \frac{c}{\delta}$$
and
$$\|x^+ - x_N^\delta\| \le C\,(-\ln\delta)^{-p}$$
with some constants $c, C > 0$.
Proof. 
We recall Equation (32) and $e_0 = x^+ - x_0 = f(K^*K)w$, selected from the source condition in Equation (7). Therefore,
$$e_N = \Big[\prod_{i=0}^{N-1}(1-\alpha_i)(I - K^*K)^N + \sum_{j=0}^{N-1}\alpha_{N-j-1}(I - K^*K)^j\prod_{i=1}^{j}(1-\alpha_{N-i})\Big]f(K^*K)w + \sum_{j=1}^{N}(I - K^*K)^{j-1}\prod_{i=1}^{j}(1-\alpha_{N-i})\,K^*(y - y^\delta) + \sum_{j=0}^{N-1}\prod_{i=N-j}^{N-1}(1-\alpha_i)(I - K^*K)^j K^*z_{N-j-1}.$$
Then,
$$e_N = f(K^*K)w_N + \sum_{j=1}^{N}(I - K^*K)^{j-1}\prod_{i=1}^{j}(1-\alpha_{N-i})\,K^*(y - y^\delta), \tag{62}$$
where
$$w_N = \Big[\prod_{i=0}^{N-1}(1-\alpha_i)(I - K^*K)^N + \sum_{j=0}^{N-1}\alpha_{N-j-1}(I - K^*K)^j\prod_{i=1}^{j}(1-\alpha_{N-i})\Big]w + \sum_{j=0}^{N-1}\prod_{i=N-j}^{N-1}(1-\alpha_i)(I - K^*K)^j\tilde{f}(K^*K)\,\tilde{z}_{N-j-1} \tag{63}$$
and
$$\tilde{f}(K^*K) := \int_0^1 \lambda^{1/2}(1 - \ln\lambda)^{p}\,dE_\lambda.$$
Here $f(\lambda)\tilde{f}(\lambda) = \lambda^{1/2}$, so that $K^*z_{N-j-1} = f(K^*K)\tilde{f}(K^*K)\,\tilde{z}_{N-j-1}$ with $\|\tilde{z}_{N-j-1}\| \le \|z_{N-j-1}\|$, $j = 0, 1, 2, \ldots, N-1$.
Applying Equation (A4) with $q = p$, we have
$$\big\|(I - K^*K)^j\tilde{f}(K^*K)\big\| \le c_3\,(j+1)^{-1/2}(\ln(j+1))^{p} \tag{64}$$
for some constant $c_3 > 0$. Using Equation (A9) and setting $N - 1 = k$, we have
$$\sum_{j=0}^{N-1}(j+1)^{-1/2}(\ln(j+1))^{p}(N-j)^{-1/2}(\ln(N-j-1+e))^{-2p} = \sum_{j=0}^{k-1}(j+1)^{-1/2}(\ln(j+1))^{p}(k+1-j)^{-1/2}(\ln(k-j+e))^{-2p} + (k+1)^{-1/2}(\ln(k+1))^{p} \le D + N^{-1/2}(\ln N)^{p}.$$
From Equations (35), (36), (37), (63), and (64), we obtain
$$\|w_N\| \le \Big\|\prod_{i=0}^{N-1}(1-\alpha_i)(I - K^*K)^N\Big\|\,\|w\| + \sum_{j=0}^{N-1}\alpha_{N-j-1}\prod_{i=1}^{j}(1-\alpha_{N-i})\big\|(I - K^*K)^j\big\|\,\|w\| + \sum_{j=0}^{N-1}\big\|(I - K^*K)^j\tilde{f}(K^*K)\big\|\,\|\tilde{z}_{N-j-1}\| \le (N+1)\|w\| + c_3\hat{c}_1\sum_{j=0}^{N-1}(j+1)^{-1/2}(\ln(j+1))^{p}\,\|Ke_{N-j-1}\|\,\|e_{N-j-1}\| \le (N+1)\|w\| + c_3\hat{c}_1\hat{K}_2^2\sum_{j=0}^{N-1}(j+1)^{-1/2}(\ln(j+1))^{p}(N-j)^{-1/2}(\ln(N-j-1+e))^{-2p} \le (N+1)\|w\| + D + N^{-1/2}(\ln N)^{p}.$$
From Equation (62), we conclude that
$$\|e_N\| \le \|f(K^*K)w_N\| + \Big\|\sum_{k=0}^{N-1}(I - K^*K)^k K^*\Big\|\,\delta \le \|f(K^*K)w_N\| + \sqrt{N}\,\delta. \tag{66}$$
From Equation (A8) in Lemma A2 and Equation (29), for some $c_4 > 0$, we have
$$\|f(K^*K)w_N\| \le c_4(-\ln\delta)^{-p}\big[(N+1)\|w\| + D + N^{-1/2}(\ln N)^{p}\big]. \tag{67}$$
Thus,
$$\|e_N\| \le c_4(-\ln\delta)^{-p}\big[(N+1)\|w\| + D + N^{-1/2}(\ln N)^{p}\big] + \sqrt{N}\,\delta.$$
We apply Equation (58); then,
$$(N+1)^{1/2}(\ln(N+e))^{-p} \le \frac{1}{\Theta\delta}\big[c^*\|w\| + \tilde{c}_p\hat{K}_2^2\big] = \frac{c_5}{\delta},$$
or
$$(N+1)(\ln(N+e))^{-2p} \le \frac{c_5^2}{\delta^2}$$
for some positive $c_5$. By the fact that
$$N(\ln N)^{-2p} \le (N+1)(\ln(N+e))^{-2p} \le \frac{c_5^2}{\delta^2},$$
we have
$$N(\ln N)^{-2p} \le \frac{c_5^2}{\delta^2}.$$
By Lemma A4, we have
$$N \le c_6^2\,\frac{(-\ln\delta)^{2p}}{\delta^2}. \tag{68}$$
Applying Equation (68) to Equation (66), we get
$$\|e_N\| \le c_4(-\ln\delta)^{-p}\big[(N+1)\|w\| + D + N^{-1/2}(\ln N)^{p}\big] + c_6(-\ln\delta)^{-p} = (-\ln\delta)^{-p}\Big\{c_4\big[(N+1)\|w\| + D + N^{-1/2}(\ln N)^{p}\big] + c_6\Big\}.$$
For $1 \le p \le 2$, we know that $N^{-1/2}(\ln N)^{p} \le c_7$ for some $c_7 > 0$; see Figure 3.
Thus, the assertion can be obtained. □
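The boundedness of $N^{-1/2}(\ln N)^{p}$ for $1 \le p \le 2$, illustrated by Figure 3, is easy to verify numerically: the maximum over integers $N$ occurs near $N = e^{2p}$ and stays below about $2.2$. A quick check:

```python
import numpy as np

# For p in [1, 2], N^{-1/2} (ln N)^p is bounded over the integers N >= 2;
# calculus gives the maximizer near N = e^{2p}, with value e^{-p} (2p)^p.
N = np.arange(2, 10**6, dtype=float)
sup_p1 = (N**-0.5 * np.log(N)).max()        # p = 1, max near N = e^2
sup_p2 = (N**-0.5 * np.log(N)**2).max()     # p = 2, max near N = e^4
```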

4. Application to an Inverse Potential Problem

It is well known that an inverse potential problem is severely ill-posed. It is the problem of determining the shape of an unknown domain $D$ from measurements of the Neumann boundary values of $u$ on $\partial\Omega_R$, where the solution $u$ fulfills Equations (1) and (2). In this work, Assumption 1 cannot be verified for the inverse potential problem; it fails even in the case of two concentric circles [2]. However, if we implement the method by representing the curve with a collocation basis, as will be seen in Proposition 3, the Fréchet derivative is reformulated. Even without the verification of Assumption 1, the approximated potential shows quite good performance.
The nonlinear operator for an inverse potential problem is defined in the following form:
$$F(x)(t) = \frac{1}{4\pi R}\int_0^{2\pi} x^2(s)\,ds + \sum_{i=1}^{\infty}\frac{1}{\pi R^{i+1}(i+2)}\Big[\int_0^{2\pi} x^{i+2}(s)\cos(is)\,ds\,\cos(it) + \int_0^{2\pi} x^{i+2}(s)\sin(is)\,ds\,\sin(it)\Big],$$
where $F : L^2[0, 2\pi] \to L^2[0, 2\pi]$. Moreover, the Fréchet derivative of the operator $F$ is
$$F'(x)h(t) = \frac{1}{2\pi R}\int_0^{2\pi} x(s)h(s)\,ds + \sum_{i=1}^{\infty}\frac{1}{\pi R^{i+1}}\Big[\int_0^{2\pi} x^{i+1}(s)h(s)\cos(is)\,ds\,\cos(it) + \int_0^{2\pi} x^{i+1}(s)h(s)\sin(is)\,ds\,\sin(it)\Big].$$
See Reference [1] for more details. In the present work, we use $R = 1$ and $x_0 = \delta x^+(s)$. Since $X = L^2[0, 2\pi]$ and $Y = L^2[0, 2\pi]$, we discretize $[0, 2\pi]$ into $m$ intervals with the grid points $0 = t_0, t_1, \ldots, t_m = 2\pi$ and $0 = s_0, s_1, \ldots, s_m = 2\pi$. Note that $X_m = \operatorname{span}\{\varphi_j^{(m)}\}$ and $Y_m = \operatorname{span}\{\psi_j^{(m)}\}$, where the sets $\{\varphi_1^{(m)}, \ldots, \varphi_m^{(m)}\}$ and $\{\psi_1^{(m)}, \ldots, \psi_m^{(m)}\}$ are orthogonal bases. The orthogonal bases are defined with respect to the step length $h^{(m)} := 2\pi/m$, $m \in \mathbb{N}$, by piecewise constant functions with $\varphi_j^{(m)}(s) = 1$ for $s \in [s_{j-1}, s_j]$, $\psi_j^{(m)}(t) = 1$ for $t \in [t_{j-1}, t_j]$, and $\varphi_j^{(m)}(s) = 0$, $\psi_j^{(m)}(t) = 0$ otherwise. The result in the next proposition provides the formula for the calculation of $x_{n+1}^\delta$.
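The series defining $F$ can be sketched numerically by truncating it at $i_{\max}$, as done in the experiments below ($i_{\max} = 3$ or $6$), and approximating the periodic integrals by the rectangle rule on a uniform grid. Function names and the grid size in this Python sketch are our own illustrative choices; $(5+\sin(3s))/6$ is our reading of the paper's first test curve. For a constant radius $x \equiv r$ all Fourier moments vanish and $F(x) \equiv r^2/2$ (with $R = 1$), which gives a simple consistency check.

```python
import numpy as np

def forward_operator(x, s, t, R=1.0, i_max=6):
    """Truncated series for F(x)(t); rectangle rule on the periodic grid s."""
    h = s[1] - s[0]                       # uniform spacing on [0, 2*pi)
    F = np.full_like(t, h * np.sum(x**2) / (4 * np.pi * R))   # constant term
    for i in range(1, i_max + 1):
        c = 1.0 / (np.pi * R**(i + 1) * (i + 2))
        a_i = h * np.sum(x**(i + 2) * np.cos(i * s))   # cosine moment of x^{i+2}
        b_i = h * np.sum(x**(i + 2) * np.sin(i * s))   # sine moment of x^{i+2}
        F = F + c * (a_i * np.cos(i * t) + b_i * np.sin(i * t))
    return F

m = 200
s = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
y_const = forward_operator(np.full(m, 0.5), s, s)        # constant radius r = 0.5
y_test = forward_operator((5.0 + np.sin(3.0 * s)) / 6.0, s, s)
```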
Proposition 3.
Let $x_n^\delta(s) = \sum_{j=1}^{m} u_j^{(m)}\varphi_j^{(m)}(s)$, $x_{n+1}^\delta(s) = \sum_{j=1}^{m} v_j^{(m)}\varphi_j^{(m)}(s)$, and $x_0 = \sum_{j=1}^{m} z_j^{(m)}\varphi_j^{(m)}(s)$. The coefficient vector $V^{(m)}$ is given by
$$v_r^{(m)} = (1-\alpha_n)u_r^{(m)} + \frac{1}{h^{(m)}}q_r + \alpha_n z_r^{(m)}$$
for $r = 1, \ldots, m$, where
$$q_r = h^{(m)}\sum_{k=1}^{m}\Big[y^\delta(t_k) - C - \sum_{i=1}^{\infty}(D_{ki} + E_{ki})\Big]\Big[\frac{h^{(m)}}{4\pi}\big(x_n^\delta(s_{r-1}) + x_n^\delta(s_r)\big) + \sum_{i=1}^{\infty}\frac{h^{(m)}}{2\pi}\big(A_{ir}\cos(it_k) + B_{ir}\sin(it_k)\big)\Big],$$
$$A_{ir} = \big(x_n^\delta(s_{r-1})\big)^{i+1}\cos(is_{r-1}) + \big(x_n^\delta(s_r)\big)^{i+1}\cos(is_r), \qquad B_{ir} = \big(x_n^\delta(s_{r-1})\big)^{i+1}\sin(is_{r-1}) + \big(x_n^\delta(s_r)\big)^{i+1}\sin(is_r),$$
$$C = \frac{h^{(m)}}{4\pi}\sum_{l=1}^{m}\big(u_l^{(m)}\big)^2, \qquad D_{ki} = \frac{h^{(m)}}{\pi(i+2)}\sum_{l=1}^{m}\big(u_l^{(m)}\big)^{i+2}\cos(is_l)\cos(it_k), \qquad E_{ki} = \frac{h^{(m)}}{\pi(i+2)}\sum_{l=1}^{m}\big(u_l^{(m)}\big)^{i+2}\sin(is_l)\sin(it_k),$$
and
$$z_r^{(m)} = \frac{h^{(m)}}{2}\big(x_0(s_{r-1}) + x_0(s_r)\big).$$
Proof. 
The idea of the proof is analogous to Proposition 5 in Reference [21].  □
The numerical examples for recovering the potential $x^+$ are demonstrated in Figure 4, Figure 5 and Figure 6. We obtain the data $y^\delta$ by solving the direct problem for the test curves. The program was written in MATLAB R2018a. The results are demonstrated in Figure 4 and Figure 5 for the first test curve $x^+ = (5 + \sin(3s))/6$ and in Figure 6 for the second test curve $x^+ = 1 - \sin(s)$. For both examples, the number of basis functions is 65 and the number of equidistant grid points is 200. In Figure 4, $\alpha_n = \frac{1}{2}(100+n)^{-0.9}$, $\delta = 0.01$, and $\tau = 120$ provide the error 0.4197 with the residual norm 1.0792 after 8 iterations for $i_{\max} = 3$ and the error 0.3911 with the residual norm 1.1171 after 8 iterations for $i_{\max} = 6$. In Figure 5, $\alpha_n = \frac{1}{2}(100+n)^{-0.9}$, $\delta = 0.001$, and $\tau = 1100$ provide the error 0.6282 with the residual norm 1.0607 after 8 iterations for $i_{\max} = 3$ and the error 0.5869 with the residual norm 1.0993 after 8 iterations for $i_{\max} = 6$. For the second example, $\alpha_n = \frac{1}{2}(1000+n)^{-0.9}$, $\delta = 0.01$, and $\tau = 3000$ provide the error 0.5720 with the residual norm 28.2597 after 13 iterations, and $\alpha_n = \frac{1}{2}(1000+n)^{-0.9}$, $\delta = 0.001$, and $\tau = 1700$ provide the error 0.5925 with the residual norm 15.4140 after 14 iterations. Figure 4, Figure 5b,d, and Figure 6b show that the curve of $\ln\|x^+ - x_n^\delta\|$ lies below a straight line with slope $-p$, as suggested by Equation (29).

5. Conclusions

In this article, we show that the rate $O((-\ln\delta)^{-p})$ of the modified Landweber method in Equation (6) is obtained under the logarithmic source condition in Equation (7) with $1 \le p \le 2$. The regularization parameter was chosen according to the discrepancy principle. The nonlinearity properties in Equations (17) and (18) of the nonlinear operator are needed, although their verification for the inverse potential problem is not possible [2]. The test examples are used to illustrate the results in Theorem 1. For the modified Landweber regularization, the initial guess $x_0$ is an important piece of information. With a good choice of initial guess, the shapes of the unknown domains $D$ are reconstructed quite well. The curves in Figure 4, Figure 5b,d, and Figure 6b confirm the result in Theorem 1, where the curve of $\ln\|x^+ - x_n^\delta\|$ lies below a straight line with slope $-p$.

Author Contributions

The authors P.P., P.S., and C.B. jointly carried out this research work and drafted the manuscript together. All authors validated the article and read the final version. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Faculty of Science of Silpakorn University under grant number SRF-JRG-2561-07 and by the Centre of Excellence in Mathematics of Mahidol University.

Acknowledgments

This work was supported by the Faculty of Science of Silpakorn University under the grant number SRF-JRG-2561-07 and by Centre of Excellence in Mathematics of Mahidol University. The authors would like to thank the reviewers for valuable hints and improvements.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Lemma A1.
Similar to Deuflhard et al. [13]. Let $p > 0$ and $k \in \mathbb{N}_0$. The real-valued function
$$\hat{f}(\lambda) = (1-\lambda)^k\Big(\ln\frac{e}{\lambda}\Big)^{-p}, \tag{A1}$$
defined on $(0, 1]$, satisfies
$$\hat{f}(\lambda) \le C\,(\ln(k+e))^{-p} \tag{A2}$$
with $C$ independent of $k$.
Moreover, for each $q \in \mathbb{R}$, the real-valued function
$$\hat{g}(\lambda) = (1-\lambda)^k\lambda^{1/2}\Big(\ln\frac{e}{\lambda}\Big)^{q}, \tag{A3}$$
defined on $(0, 1]$, satisfies
$$\hat{g}(\lambda) \le C\,(k+1)^{-1/2}(\ln(k+e))^{q} \le C\,(k+1)^{-1/2}(\ln(k+1))^{q} \tag{A4}$$
with $C$ independent of $k$.
with C independent of k.
Proof. 
Following the proof of Deuflhard et al. [13] for 1 b , we have
f ^ ( k b ) = 1 1 k b k 1 b ln 1 k p ( ln ( k + e ) ) p
for k k 0 . Therefore, for any λ [ 0 , 1 ] (independent of k), we have f ^ ( λ ) C ( ln ( k + e ) ) p . Similarly, for a > 1 , we have
g ^ 2 ( λ ) 1 1 k a 2 k k 1 1 + a ln k 2 q ( k + 1 ) 1 ( ln ( k + e ) ) 2 q .
Therefore, it follows that g ^ ( λ ) C ( k + 1 ) 1 / 2 ( ln ( k + e ) ) q
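The first bound of Lemma A1 can be checked numerically. The sketch below is an illustration only (not part of the proof), assuming the reading $\hat{f}(\lambda)=(1-\lambda)^{k}(\ln(e/\lambda))^{-p}$; the constant $C=1.1$ and the evaluation grid are ad hoc choices:

```python
import math

import numpy as np

def f_hat(lam, k, p):
    # f_hat(lambda) = (1 - lambda)^k * (ln(e/lambda))^(-p), for lambda in (0, 1]
    return (1.0 - lam) ** k * (1.0 - np.log(lam)) ** (-p)

# Check sup f_hat <= C * (ln(k + e))^(-p) for several k and p; C = 1.1 is an ad hoc choice.
lam = np.logspace(-8, 0, 4000)  # logarithmic grid covering (0, 1]
for k in (1, 10, 100, 1000):
    for p in (1.0, 2.0):
        sup = f_hat(lam, k, p).max()
        bound = math.log(k + math.e) ** (-p)
        assert sup <= 1.1 * bound, (k, p, sup, bound)
print("Lemma A1 bound holds on the test grid")
```

The supremum of $\hat{f}$ is attained near $\lambda\approx 1/k$, which is why the bound scales like $(\ln(k+e))^{-p}$.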
Lemma A2
([13]). Let $p\ge 1$, $C>0$, and $\delta>0$ be sufficiently small such that $\frac{1}{(\ln(\delta C))^{2p}}\ge\delta$. Let
$$\int_0^1\exp\!\Big(-\big((1-\ln\lambda)^{-2p}\big)^{-1/(2p)}\Big)\,(1-\ln\lambda)^{-2p}\,d\|E_\lambda w\|^2=C\delta^2.$$
Then,
$$\int_0^1(1-\ln\lambda)^{-2p}\,d\|E_\lambda w\|^2\le C(-\ln\delta)^{-2p}$$
with a generic constant $C$.
Lemma A3
([13]). Let $p\ge 1$, $k\in\mathbb{N}$, $k\ge 2$. Then, there exists a constant $D$, which is independent of $k$, such that
$$\sum_{j=0}^{k-1}\Big(\frac{j+1}{k+1}\Big)^{-1/2}\Big(\frac{k-j}{k+1}\Big)^{-1/2}\frac{1}{k+1}\Big(\frac{\ln(k+2)}{\ln(k-j+1)}\Big)^{2p}\le D(\ln(k+2))^{p},$$
$$\sum_{j=0}^{k-1}\Big(\frac{j+1}{k+1}\Big)^{-1}\Big(\frac{k-j}{k+1}\Big)^{-1/2}\frac{1}{k+1}\Big(\frac{\ln(k+2)}{\ln(k-j+1)}\Big)^{2p}\le D.$$
Moreover, there exists a constant $D$ (independent of $k$) such that
$$\sum_{j=0}^{k-1}(j+1)^{-1/2}(\ln(j+1))^{p}\,(k-j+1)^{-1/2}(\ln(k-j+1))^{-2p}\le D.$$
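The uniform boundedness of the last sum can be illustrated numerically. This is a sketch under the reading $\sum_{j=0}^{k-1}(j+1)^{-1/2}(\ln(j+1))^{p}(k-j+1)^{-1/2}(\ln(k-j+1))^{-2p}$; the test constant $D=5$ is an ad hoc choice, not the constant from [13]:

```python
import math

def third_sum(k, p=1.0):
    # S(k) = sum_{j=0}^{k-1} (j+1)^(-1/2) (ln(j+1))^p (k-j+1)^(-1/2) (ln(k-j+1))^(-2p)
    total = 0.0
    for j in range(k):
        total += ((j + 1) ** -0.5 * math.log(j + 1) ** p
                  * (k - j + 1) ** -0.5 * math.log(k - j + 1) ** (-2 * p))
    return total

# The partial sums stay uniformly bounded; D = 5 is an ad hoc test constant.
values = [third_sum(k) for k in (2, 10, 100, 1000, 2000)]
assert all(0.0 <= v < 5.0 for v in values)
print([round(v, 3) for v in values])
```

The sum is dominated by the terms with $j$ close to $k$, where the factor $(\ln(k-j+1))^{-2p}$ compensates the growth of $(\ln(j+1))^{p}$.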
Lemma A4
([13]). Let $\hat{k}$ be a solution of
$$k(\ln k)^{2p}=C\delta^{-2}.$$
Then, $\hat{k}$ satisfies
$$\hat{k}=O\big((-\ln\delta)^{-2p}\,\delta^{-2}\big).$$
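Lemma A4 can be illustrated numerically: solving $k(\ln k)^{2p}=C\delta^{-2}$ by fixed-point iteration and comparing $\hat{k}$ with $(-\ln\delta)^{-2p}\delta^{-2}$ shows that the ratio stays bounded. A minimal sketch, with the ad hoc choices $C=1$ and $p=1$:

```python
import math

def solve_k(delta, p=1.0, C=1.0, iters=200):
    # Fixed-point iteration for k (ln k)^(2p) = C / delta^2,
    # rewritten as k = (C / delta^2) * (ln k)^(-2p).
    rhs = C / delta ** 2
    k = rhs  # initial guess
    for _ in range(iters):
        k = rhs * math.log(k) ** (-2 * p)
    return k

p = 1.0
for delta in (1e-2, 1e-4, 1e-6):
    k_hat = solve_k(delta, p)
    # Residual check: k_hat really solves the equation (up to iteration tolerance).
    assert abs(k_hat * math.log(k_hat) ** (2 * p) * delta ** 2 - 1.0) < 1e-6
    bound = (-math.log(delta)) ** (-2 * p) / delta ** 2
    # k_hat = O((-ln delta)^(-2p) delta^(-2)): the ratio stays bounded.
    assert 0.1 < k_hat / bound <= 1.0
print("Lemma A4 rate confirmed numerically")
```

Heuristically, $\ln\hat{k}\approx 2(-\ln\delta)$, so $\hat{k}\approx 4^{-p}(-\ln\delta)^{-2p}\delta^{-2}$, which explains why the computed ratios are below one.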

References

  1. Böckmann, C.; Kammanee, A.; Braunß, A. Logarithmic convergence rate of Levenberg–Marquardt method with application to an inverse potential problem. J. Inv. Ill-Posed Probl. 2011, 19, 345–367.
  2. Hettlich, F.; Rundell, W. Iterative methods for the reconstruction of an inverse potential problem. Inverse Probl. 1996, 12, 251–266.
  3. Hettlich, F.; Rundell, W. A second degree method for nonlinear inverse problems. SIAM J. Numer. Anal. 1999, 37, 587–620.
  4. Van den Doel, K.; Ascher, U. On level set regularization for highly ill-posed distributed parameter estimation problems. J. Comput. Phys. 2006, 216, 707–723.
  5. Hohage, T. Logarithmic convergence rates of the iteratively regularized Gauss–Newton method for an inverse potential and an inverse scattering problem. Inverse Probl. 1997, 13, 1279.
  6. Pornsawad, P.; Sapsakul, N.; Böckmann, C. A modified asymptotical regularization of nonlinear ill-posed problems. Mathematics 2019, 7, 419.
  7. Tautenhahn, U. On the asymptotical regularization of nonlinear ill-posed problems. Inverse Probl. 1994, 10, 1405–1418.
  8. Zhang, Y.; Hofmann, B. On the second order asymptotical regularization of linear ill-posed inverse problems. Appl. Anal. 2018.
  9. Pornsawad, P.; Böckmann, C. Modified iterative Runge-Kutta-type methods for nonlinear ill-posed problems. Numer. Funct. Anal. Optim. 2016, 37, 1562–1589.
  10. Mahale, P.; Nair, M. Tikhonov regularization of nonlinear ill-posed equations under general source condition. J. Inv. Ill-Posed Probl. 2007, 15, 813–829.
  11. Romanov, V.; Kabanikhin, S.; Anikonov, Y.; Bukhgeim, A. Ill-Posed and Inverse Problems: Dedicated to Academician Mikhail Mikhailovich Lavrentiev on the Occasion of his 70th Birthday; De Gruyter: Berlin, Germany, 2018.
  12. Scherzer, O. A modified Landweber iteration for solving parameter estimation problems. Appl. Math. Optim. 1998, 38, 45–68.
  13. Deuflhard, P.; Engl, H.W.; Scherzer, O. A convergence analysis of iterative methods for the solution of nonlinear ill-posed problems under affinely invariant conditions. Inverse Probl. 1998, 14, 1081–1106.
  14. Bakushinsky, A.; Kokurin, M.; Kokurin, M. Regularization Algorithms for Ill-Posed Problems; Inverse and Ill-Posed Problems Series; De Gruyter: Berlin, Germany, 2018.
  15. Schuster, T.; Kaltenbacher, B.; Hofmann, B.; Kazimierski, K. Regularization Methods in Banach Spaces; Radon Series on Computational and Applied Mathematics; De Gruyter: Berlin, Germany, 2012.
  16. Albani, V.; Elbau, P.; de Hoop, M.V.; Scherzer, O. Optimal convergence rates results for linear inverse problems in Hilbert spaces. Numer. Funct. Anal. Optim. 2016, 37, 521–540.
  17. Kaltenbacher, B.; Neubauer, A.; Scherzer, O. Iterative Regularization Methods for Nonlinear Ill-Posed Problems; De Gruyter: Berlin, Germany; Boston, MA, USA, 2008.
  18. Louis, A.K. Inverse und Schlecht Gestellte Probleme; Teubner Studienbücher Mathematik, B. G. Teubner: Stuttgart, Germany, 1989.
  19. Vainikko, G.; Veretennikov, A.Y. Iteration Procedures in Ill-Posed Problems; Nauka: Moscow, Russia, 1986.
  20. Hanke, M.; Neubauer, A.; Scherzer, O. A convergence analysis of the Landweber iteration for nonlinear ill-posed problems. Numer. Math. 1995, 72, 21–37.
  21. Böckmann, C.; Pornsawad, P. Iterative Runge-Kutta-type methods for nonlinear ill-posed problems. Inverse Probl. 2008, 24, 025002.
Figure 1. Plot of $(\ln(k+1+e))^{-p}$ and $\frac{1}{2}(n-k-1+l_0)^{-\psi}(\ln(k+e))^{-p}$ with $n=1000$, $l_0=10$, and (left) $\psi=1$ and (right) $\psi=0.5$.
Figure 2. Plot of $\Big(\frac{\ln(k+1+e)}{\ln(k+e)}\Big)^{p+1}$.
Figure 3. Graph of $y=N^{-1/2}(\ln N)^{p}$ for $1\le p\le 2$.
Figure 4. The polar plot shows the exact solution (dotted line) and the computed solution (solid line) for (a) $i_{\max}=3$ and (c) $i_{\max}=6$ with $\delta=0.01$. In (a), the thin curve is the initial value. In (c), the thin curves are the curves of $x_n^{\delta}$ for $n=1,\dots,8$. The error $\|x^{+}-x_n^{\delta}\|$ versus the logarithm of the number of iteration steps, on a double logarithmic scale, is shown for (b) $i_{\max}=3$ and (d) $i_{\max}=6$. The initial value is $x_0=0.1+\frac{1}{5}\sin(3s)$. The parameter $\alpha_n$ in Equation (71) is $\frac{1}{2}(100+n)^{-0.9}$.
Figure 5. The polar plot shows the exact solution (dotted line) and the computed solution (solid line) for (a) $i_{\max}=3$ and (c) $i_{\max}=6$ with $\delta=0.001$. The error $\|x^{+}-x_n^{\delta}\|$ versus the logarithm of the number of iteration steps, on a double logarithmic scale, is shown for (b) $i_{\max}=3$ and (d) $i_{\max}=6$. The initial value is $x_0=0.1+\frac{1}{5}\sin(3s)$. The parameter $\alpha_n$ in Equation (71) is $\frac{1}{2}(100+n)^{-0.9}$.
Figure 6. The polar plot shows the exact solution (dotted line) and the computed solution (solid line) for Example 2 with (a,b) $\delta=0.01$ and (c,d) $\delta=0.001$. In (a), the thin curve is the initial value. The error $\|x^{+}-x_n^{\delta}\|$ versus the logarithm of the number of iteration steps, on a double logarithmic scale, is shown in (b) and (d). The initial value is $x_0=1.6\sin(s)$. The parameter $\alpha_n$ in Equation (71) is $\frac{1}{2}(1000+n)^{-0.9}$.
