Parameter Choice Strategy That Computes Regularization Parameter before Computing the Regularized Solution

The modeling of many problems of practical interest leads to nonlinear ill-posed equations (for example, the parameter identification problem; see the Numerical section). In this article, we introduce a new source condition (SC) and a new parameter choice strategy (PCS) for the Tikhonov regularization (TR) method for nonlinear ill-posed problems. The new PCS uses the new SC to compute the regularization parameter (RP) before computing the regularized solution. The theoretical results are verified using a numerical example.


Introduction
Many problems of practical interest lead to nonlinear ill-posed equations. For example, consider the inverse problem of identifying the distributed growth law x(t), t ∈ (0, 1), in the initial value problem dy/dt = x(t)y(t), y(0) = c, t ∈ (0, 1), from the noisy data y^δ(t) ∈ L^2(0, 1).
In the exact-data case, we can use separation of variables and obtain x(t) = (d/dt) ln y. Assume a noise term δ sin(t/δ^2) is added to ln y, so that

ln y^δ = ln y + δ sin(t/δ^2). (2)

Taking the derivative with respect to t to find the new x^δ, we obtain

x^δ(t) = x(t) + (1/δ) cos(t/δ^2). (3)

Note that the magnitude of the noise is small (if δ is small) in (2), but it is large in (3). This is typical of an ill-posed problem (the violation of Hadamard's criterion [1]). One can reformulate the above problem as an ill-posed operator equation L(x) = y with

[L(x)](t) = c e^{∫_0^t x(θ) dθ}, x ∈ L^2(0, 1), t ∈ (0, 1). (4)

The problem is to find x for a given y, when y is not exactly known. The modeling of problems in acoustics, electrodynamics, gravimetry, phase retrieval, etc., that leads to the solving of ill-posed equations can be found in [2]. Another real-life application occurs in the parameter identification problem: mathematical models used in biology, physics, economics, etc., are often defined by a Partial Differential Equation (PDE) (see Example 1) [3,4]. It is known that, in general, the solution of such a PDE need not be an elementary function. So, based on the experimental data, one needs to obtain the parameters of the mathematical model. This type of problem is known as the parameter identification problem [5].
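A small numerical illustration (not part of the paper's computations) makes the amplification explicit: the data perturbation δ sin(t/δ^2) stays of size δ, while its derivative (1/δ) cos(t/δ^2) grows like 1/δ.

```python
import numpy as np

# Illustration of the ill-posedness of differentiation: a perturbation of
# size delta in the data becomes a perturbation of size 1/delta in the
# derivative, as in equations (2) and (3).
delta = 1e-2
t = np.linspace(0.0, 1.0, 200001)
data_noise = delta * np.sin(t / delta**2)           # added to ln y, eq. (2)
deriv_noise = (1.0 / delta) * np.cos(t / delta**2)  # appears in x^delta, eq. (3)

print(np.max(np.abs(data_noise)))    # about delta = 0.01
print(np.max(np.abs(deriv_noise)))   # 1/delta = 100, attained at t = 0
```

Shrinking δ makes the data perturbation vanish while the derivative perturbation blows up, so differentiation violates continuous dependence on the data.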
In this paper, we consider the abstract nonlinear ill-posed equation

L(f) = g, (5)

where L : D(L) ⊂ U → V is a nonlinear operator and U, V are Hilbert spaces. Throughout the paper, it is assumed that L is weakly sequentially closed and continuous, D(L) is a subset of U, L has a Fréchet derivative at every f ∈ D(L), denoted by L′(f), and L′(f)* is the adjoint of the linear operator L′(f). We are interested in an f_0-minimum norm solution (f_0-MNS) of (5) (see [5,6]); here, f_0 is an a priori estimate in the interior of D(L) (see [5,7,8]). Recall that a solution f of (5) is called an f_0-MNS of (5) if

∥f − f_0∥ = min{∥h − f_0∥ : h ∈ D(L), L(h) = g}.

We assume that f does not depend continuously on the data g, and that the available data are g^δ with ∥g − g^δ∥ ≤ δ.
In this paper, we introduce a new SC (11). It is known [5,8,14,16] that, under the SCs (9) and (10), one obtains the convergence rate O(δ^{2ν/(2ν+1)}). We shall prove that the SC (11) also gives the convergence rate O(δ^{2ν/(2ν+1)}) (hereafter, we call ν the Hölder-type parameter). We formulate the new SC in order to introduce a new PCS for choosing α; this strategy is a priori in the sense that the RP α is chosen depending on δ and g^δ before computing the regularized solution f^δ_α. Note that most a priori PCSs depend on the unknown ν in the SC. The advantages of our proposed PCS are that (i) it is independent of the parameter ν, (ii) it provides the order O(δ^{2ν/(2ν+1)}) for 0 < ν ≤ 1/2, and (iii) it is a priori in the sense that it is computed before computing the regularized solution f^δ_α. In earlier studies such as [10,11,20,26-28], the regularization parameter α = α(n, δ), depending on the iteration step, is computed in each iteration, and the stopping index is determined using some stopping criteria [11,20,26-28]. This approach is computationally very expensive, whereas our approach requires the computation of α = α(δ) only once (here, α is independent of the iteration step); hence, one can also fix the stopping index for a given tolerance level at the beginning of the computation (see the comparison table in Example 1).
The above-mentioned advantages are obtained without actually using the operator L for computing α and f^δ_α (or the iteratively regularized solution). Another class of regularization methods is the so-called iterative regularization methods [26-36] (and the references therein). Since our aim in this paper is to introduce a new PCS that allows us to compute the RP α (depending on g^δ and δ) before computing the regularized solution f^δ_α, we leave the details of the above-mentioned source conditions (except (11)) and iterative regularization methods to motivated readers.
The rest of the paper is arranged as follows. An error analysis under the new SC is given in Section 2, a new PCS is given in Section 3, the numerical results are given in Section 4, and the paper ends with a conclusion in Section 5, followed by the Appendix.

Error Analysis
The proofs of our results are based on the following assumptions (cf. [5,9]).
(i) ∃ a constant k_0 > 0 and a continuous function φ.

Remark 1. (a) Note that, by (ii) above, we have the corresponding bound. (b) Using the above assumptions, one can prove the identities (17)-(20) (the proof of which is given in Appendix A). (c) We will be using the following estimates. Let r := δ/√α + 2r_0, where r_0 = ∥f_0 − f∥. Then, since f^δ_α is the minimizer of (7), we have ∥f^δ_α − f_0∥ ≤ δ/√α + r_0, so that ∥f^δ_α − f∥ ≤ r. Similarly, we have ∥f_α − f∥ ≤ 2r_0.

First, we shall prove the following.

Proposition 1. Suppose (i) and (iii) hold. Then, (P_1) and (P_2) hold.

Proof. The proof is given in Appendix B.
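The estimate r = δ/√α + 2r_0 can be checked in a finite-dimensional setting; the sketch below uses an assumed random matrix A in place of L and the normal equations of the Tikhonov functional, so it illustrates the bound rather than reproducing the paper's setting.

```python
import numpy as np

# Linear sketch of the estimate ||f_alpha^d - f|| <= delta/sqrt(alpha) + 2 r0,
# which follows from the minimizing property of the Tikhonov functional
# ||A f - g_d||^2 + alpha ||f - f0||^2 (A is an assumed stand-in for L).
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))
f = rng.standard_normal(4)                     # exact solution
f0 = f + 0.1 * rng.standard_normal(4)          # a priori estimate
delta = 1e-2
noise = rng.standard_normal(6)
noise = delta * noise / np.linalg.norm(noise)  # ||g - g_d|| = delta
g_d = A @ f + noise
alpha = 1e-3
# Tikhonov minimizer via its normal equations.
f_a = np.linalg.solve(A.T @ A + alpha * np.eye(4), A.T @ g_d + alpha * f0)
r0 = np.linalg.norm(f0 - f)
r = delta / np.sqrt(alpha) + 2.0 * r0
print(np.linalg.norm(f_a - f) <= r)  # the bound holds by the minimizing property
```

The bound is guaranteed here: the minimizer's functional value is at most δ^2 + α r_0^2, which gives ∥f_α^δ − f_0∥ ≤ δ/√α + r_0, and the triangle inequality adds another r_0.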
Remark 2. Similarly, one can prove the analogous estimates.

Remark 3. Proposition 1 shows that SC (11) is not a severe restriction; it almost follows from SC (9) or SC (10). But the advantage of using SC (11), as mentioned in the Introduction, is that one can compute the regularization parameter α (depending on g^δ and δ) before computing the regularized solution f^δ_α (see Section 3).
Lemma 1. Suppose k_0 r < 2 and that assumptions (i) and (iii) hold. Let f^δ_α be as in (8) and let f_α be the solution of (8) with g in place of g^δ. Then,

Proof. The proof is given in Appendix C.
Next, we prove the main result of this section using Lemmas 1 and 2.

Theorem 1. Let the assumptions in Lemmas 1 and 2 hold. Then, the result follows from Lemmas 1 and 2.

The a priori choice α ∼ δ^{2/(2ν+1)} gives the order O(δ^{2ν/(2ν+1)}), for 0 ≤ ν ≤ 1. But ν is unknown, so such a choice is impossible in practical cases. So, we consider a new PCS that does not require knowledge of the unknown parameter ν and provides the order O(δ^{2ν/(2ν+1)}) for 0 ≤ ν ≤ 1/2 and O(δ^{1/2}) for 1/2 ≤ ν ≤ 1.

New Parameter Choice Strategy
Theorem 2. The function α ↦ d(α, g^δ) for α > 0, defined in (24), is monotonically increasing and continuous, where P is the orthogonal projection onto the null space N(L_0^*) of L_0^*.
Further, we assume that (25) holds for some c > 1.
The application of the intermediate value theorem gives the following theorem.
Theorem 3. If g^δ satisfies (6) and (25), then there exists a unique α such that (26) holds.

We will be using the following moment inequality:

∥B^ν x∥ ≤ ∥Bx∥^ν ∥x∥^{1−ν}, 0 ≤ ν ≤ 1,

where B is a positive self-adjoint operator (see [37]).
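The moment inequality is easy to verify numerically. The sketch below uses a randomly generated positive definite matrix as a stand-in for B and checks ∥B^ν x∥ ≤ ∥Bx∥^ν ∥x∥^{1−ν} for several values of ν:

```python
import numpy as np

# Numerical check of the moment inequality for a positive self-adjoint B.
rng = np.random.default_rng(0)
C = rng.standard_normal((4, 4))
B = C @ C.T + np.eye(4)            # positive definite, hence self-adjoint
w, V = np.linalg.eigh(B)           # spectral decomposition for B^nu
x = rng.standard_normal(4)
for nu in [0.1, 0.25, 0.5, 0.75, 0.9]:
    B_nu = V @ np.diag(w**nu) @ V.T
    lhs = np.linalg.norm(B_nu @ x)
    rhs = np.linalg.norm(B @ x)**nu * np.linalg.norm(x)**(1 - nu)
    assert lhs <= rhs + 1e-12
print("moment inequality verified")
```

In the eigenbasis of B, the inequality is exactly Hölder's inequality applied to the spectral coefficients, which is why every check passes.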
Remark 5. Note that α = α(δ) satisfies (26), is independent of ν, and gives the order O(δ^{1/2}) for 1/2 < ν ≤ 1. Also, observe that the PCS does not depend on the operator L and that the regularization parameter α is computed before computing f^δ_α.
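Because d(·, g^δ) is continuous and monotonically increasing (Theorem 2), the unique α of Theorem 3 can be computed by bisection before any regularized solution is formed. The following sketch assumes a discrete stand-in for d(α, g^δ), namely α ↦ ∥α(AA^T + αI)^{-1} g^δ∥ with an assumed matrix A, since the precise definition (24) is not reproduced here:

```python
import numpy as np

def choose_alpha(d, target, lo=1e-12, hi=1e2, tol=1e-10):
    """Bisection for the unique alpha with d(alpha) = target, valid
    because d is continuous and monotonically increasing."""
    while d(hi) < target:          # expand until the target is bracketed
        hi *= 10.0
    while hi - lo > tol * max(1.0, hi):
        mid = 0.5 * (lo + hi)
        if d(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Assumed discrete analogue of d(alpha, g_delta): increasing in alpha,
# with limits ||P g|| as alpha -> 0 and ||g|| as alpha -> infinity.
A = np.array([[1.0, 0.2], [0.0, 0.5]])
g = np.array([1.0, 1.0])
M = A @ A.T
def d(alpha):
    return np.linalg.norm(alpha * np.linalg.solve(M + alpha * np.eye(2), g))

c, delta = 4.0, 1e-2
alpha = choose_alpha(d, c * delta)   # solve d(alpha) = c * delta, as in (26)
print(abs(d(alpha) - c * delta))     # residual of the parameter equation
```

Note that only d and the data enter the computation; the regularized solution is never evaluated while α is being chosen, which is the point of the strategy.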

Numerical Example
Next, we provide an example satisfying the assumptions (i)-(iv).
We estimate the parameter α using PCS (26). To compute f^δ_α in (8), we use the Gauss-Newton method, which defines the iterates {f^δ_{k,α}} for k = 1, 2, . . . . Since we are estimating q, we will use the notation q^δ_{k,α} for f^δ_{k,α}, q for f, and u^δ for g^δ in the example.
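A minimal finite-dimensional sketch of a Gauss-Newton iteration for the Tikhonov functional ∥L(f) − g^δ∥^2 + α∥f − f_0∥^2 reads as follows; the toy nonlinear map L below is an assumption for illustration and is not the parameter-to-solution operator of the example.

```python
import numpy as np

def gauss_newton_tikhonov(L, Lprime, g_delta, f0, alpha, iters=20):
    """Gauss-Newton iteration for ||L(f) - g_delta||^2 + alpha ||f - f0||^2:
    each step solves the linearized normal equations around the current f."""
    f = f0.copy()
    n = f.size
    for _ in range(iters):
        J = Lprime(f)
        rhs = J.T @ (g_delta - L(f)) + alpha * (f0 - f)
        step = np.linalg.solve(J.T @ J + alpha * np.eye(n), rhs)
        f = f + step
    return f

# Toy nonlinear forward map (an assumption for illustration only).
L = lambda x: x + 0.1 * x**2
Lp = lambda x: np.diag(1.0 + 0.2 * x)
f_true = np.array([0.5, -0.3])
g = L(f_true)                       # exact data
f0 = np.zeros(2)
f_rec = gauss_newton_tikhonov(L, Lp, g, f0, alpha=1e-8)
print(np.linalg.norm(f_rec - f_true))
```

With exact data and a tiny α, the iterates converge to (a small perturbation of) the exact solution; with noisy data, α is instead set beforehand by the PCS.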
We take f = 100e^{−10(t−0.5)^2} and g_0 = 1, g_1 = 2, as in [28]. Then, q = 5t^2(1 − t) + sin(2πt). For our computation, we use random noisy data u^δ with ∥u − u^δ∥ ≤ δ. Further, we take the initial approximation q_0 = 0. We use a finite difference method for solving the differential equations involved in the computation, dividing [0, 1] into 100 subintervals of equal length, and the resulting tridiagonal system is solved by the Thomas algorithm [38].
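For reference, a compact implementation of the Thomas algorithm (the O(n) elimination used for the tridiagonal systems above) might look as follows; the test system is the standard finite-difference discretization of −u″ = 1 on [0, 1] with zero boundary values, an assumed stand-in for the systems arising in the example.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c, and right-hand side d (Thomas algorithm, O(n))."""
    n = len(d)
    cp = np.zeros(n)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                     # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Finite-difference system for -u'' = 1, u(0) = u(1) = 0, on 100 subintervals.
n = 99                                        # interior grid points
a = -np.ones(n); b = 2.0 * np.ones(n); c = -np.ones(n)
d = np.full(n, (1.0 / (n + 1)) ** 2)          # h^2 * f with f = 1
u = thomas(a, b, c, d)
print(u[(n - 1) // 2])                        # value at t = 0.5
```

The exact solution t(1 − t)/2 is quadratic, so the three-point scheme reproduces it exactly at the grid points, and the midpoint value is 0.125 up to rounding.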
We have taken c = 4 in (26) to compute α. Table 1 gives the values of δ, the parameter α computed using (26), the error ∥q^δ_{k,α} − q∥, and the time taken to compute α for different values of δ. The corresponding figures are provided in Figure 1 (panel (h): method (30), δ = 0.005). We compare our method with the most widely used iterative method [26] for (5), the iteratively regularized Gauss-Newton method, in which the iterates x^δ_{α,k} are defined for k = 0, 1, 2, . . . by (30), where f^δ_{0,α} := x_0. Here, (α_k) is a given sequence of positive numbers such that α_k → 0 and α_k/α_{k+1} ≤ λ for some λ > 1. Stopping index: choose k_δ as the first positive integer that satisfies the discrepancy condition ∥L(x^δ_{α,k_δ}) − g^δ∥ ≤ τδ, where τ > 1 is a sufficiently large constant not depending on δ. We have taken λ = 1.05 and α_k = 1/k in our computations. We use a 4-core 64-bit Windows machine with an 11th Gen Intel(R) Core(TM) i5-1135G7 CPU @ 2.40 GHz for all our computations (using MATLAB).
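For comparison, the iteratively regularized Gauss-Newton method with the discrepancy stopping rule can be sketched as follows; the toy forward map is an assumption, and the point is that a new α_k and a linearized solve are needed at every step until the stopping index k_δ is reached.

```python
import numpy as np

# Sketch of the iteratively regularized Gauss-Newton method: stop at the
# first k with ||L(x_k) - g_d|| <= tau * delta (discrepancy principle).
# The forward map L is a toy assumption, not the operator of Example 1.
L = lambda x: x + 0.1 * x**2
Lp = lambda x: np.diag(1.0 + 0.2 * x)
x_true = np.array([0.5, -0.3])
delta = 1e-3
g_d = L(x_true) + delta * np.array([1.0, 0.0])   # noise of norm delta
x0 = np.zeros(2)
x = x0.copy()
tau = 2.0
k = 0
while np.linalg.norm(L(x) - g_d) > tau * delta and k < 2000:
    k += 1
    alpha_k = 1.0 / k                            # a new alpha at every step
    J = Lp(x)
    rhs = J.T @ (g_d - L(x)) + alpha_k * (x0 - x)
    x = x + np.linalg.solve(J.T @ J + alpha_k * np.eye(2), rhs)
print(k)
```

Every iteration recomputes α_k and solves a regularized linear system, whereas the proposed PCS fixes α = α(δ) once, before the Gauss-Newton loop even starts; this is the source of the timing difference reported in the table.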
Clearly, the table shows that our approach requires less computational time than method (30).

Conclusions
We introduced a new SC and a new PCS for the TR of nonlinear ill-posed problems. Our PCS does not require knowledge of ν, and it gives the error estimate O(δ^{2ν/(2ν+1)}) for 0 < ν ≤ 1/2. The advantage of our method is that one can compute the RP α before computing the regularized solution f^δ_α. We also applied the method to the parameter identification problem modeled as in Example 1 and obtained favourable numerical results.

Appendix A. Proof of the Identities (17)-(20)
Using assumption (i), we have (17); using (iii), we have (18); by (i) and (iii), we have (19); and further, using (ii), (iv), and Remark 1(a) and (c), we obtain (20).

Appendix B. Proof of Proposition 1
Suppose that SC (9) holds. Then, by (i) and (iii), we have (P_1). This proves (P_1). To prove (P_2), we use the formula ([37], p. 287) for the fractional power of a positive self-adjoint operator B, in which ϱ is a complex number such that 0 < Re ϱ < n. Suppose that f_0 − f = (L*L)^ν z, 0 ≤ ν < 1. Then, by using the above formula together with (i) and (iii) (and again using (iii) where needed), and further, by (i) and (iii), splitting the limits of integration, and rearranging the terms, we obtain (P_2). This proves (P_2).

Appendix C. Proof of Lemma 1
Observe that, by (A1), we have (A7), where we have used the relation (L*L)^{1/2} = UL, with U a unitary operator, and (A8). The result now follows from (A6), (A7), and (A8).