Updating the Landweber Iteration Method for Solving Inverse Problems

Abstract: The Landweber iteration method is one of the most popular methods for the solution of linear discrete ill-posed problems. The diversity of physical problems and of the operators that result from them leads us to consider updating the main methods and algorithms to achieve the best results. In this work, we considered the linear operator equation and used a new version of the Landweber iterative method as an iterative solver. The main goal of updating the Landweber iteration method is to make the iteration process faster and more accurate. We used a polar decomposition to obtain a symmetric positive definite operator in place of the identity operator in the classical Landweber method. We carried out the convergence and other necessary analyses to prove the usability of the new iteration method. The residual method was used as an analysis tool to rate the convergence of the iteration. The modified iterative method was compared to the classical Landweber method. A numerical experiment illustrates the effectiveness of this method by applying it to solve the inverse boundary value problem (IBVP) of the heat equation.


Introduction
In the early twentieth century, Hadamard [1] formulated the conditions for well-posedness of linear operator equations, stating that a problem is well-posed when it fulfills the following points:
1. Existence (a solution exists);
2. Uniqueness (the solution is unique);
3. Stability (the solution depends continuously on the given data).
If at least one of the above conditions is not fulfilled, the problem is considered to be an ill-posed problem. Violations of conditions 1 and 2 can often be remedied with a small reformulation of the problem. Violations of stability are much harder to remedy because they imply that a small perturbation in the data leads to a large perturbation in the estimated solution [2][3][4][5]. The inverse problem for the heat equation studied here can be solved by many methods; for example, the regularization method of Tikhonov A.N. [6], the method of Lavrentiev M.M. [7], the quasi-solutions method of Ivanov V.K. [8], and many others.
The Landweber iteration is a basic method for solving inverse problems posed as linear operator equations. Due to its ease of implementation and relatively low cost per iteration, the Landweber method has attracted a lot of attention in the study of inverse problems. Effective methods for solving inverse problems depend on profound insight into the problems underlying the algorithms that solve them [9][10][11][12][13][14].
Regarding Landweber iterations in Hilbert spaces, see [15,16]. In recent years, the Landweber method has been extended to solve inverse problems in other settings, such as Banach spaces [17][18][19]. Many works have modified the Landweber iteration method. For example, [20] used a revision of the residual principle for the Landweber iteration in order to solve linear and non-linear inverse problems in Banach spaces, and the author proved new convergence results. In [21], the author estimated the error between the exact and approximate solutions by considering the regularization level in the approximate solution, using the initial data to select the regularization parameter. In [22], the author presents the Landweber iterative method and accelerates it by using a sparse Jacobian-based reconstruction method.
This article presents an update of the classical Landweber iteration. We use a new invertible operator to accelerate the convergence of the iteration. We also compare the proposed iterative method with the classical Landweber iteration. A numerical experiment illustrates the effectiveness of this method through an application solving the inverse boundary value problem for the heat equation.

Problem Statement
In general, the numerical solution of ill-posed problems is explained for the linear operator equation of the first kind

Au = f, (1)

where A : H → H is a linear, compact, self-adjoint, positive definite operator acting from a Hilbert space H into itself, and u, f ∈ H. In particular, the operator A is severely ill-conditioned and may be rank-deficient. Operators of this kind arise from the discretization of linear ill-posed problems, such as Fredholm or Volterra integral equations of the first kind. The right-hand side data are given with some error or measurement noise, which is typical of practical studies. To depict this general situation, assume that the right-hand side of problem (1) is given with some error level δ: instead of f, we have f^δ with

‖f^δ − f‖ ≤ δ. (2)

We pose the problem of finding an approximate solution of (1) and (2) from the input data f^δ. We call u_α the approximate solution; the parameter α is the regularization parameter, which is linked to the error level δ, i.e., α = α(δ), and we estimate the error ‖u^δ − u‖. The definition of stable methods for solving ill-posed problems depends on the use of a priori data concerning the inexactness of the input data. Once the right-hand side is given with some error, we attempt to solve the problem

Au_α = f^δ. (3)

Variational methods, as an alternative way of solving problem (3), minimize the norm of the residual r = Au_α − f^δ, i.e., the discrepancy functional

J(u_α) = ‖Au_α − f^δ‖² → min. (4)

There is a variety of possible solutions satisfying Equation (3) that are exact up to the discrepancy level δ.
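For illustration, a small NumPy sketch of this setup (the kernel below is a hypothetical smooth choice, not the operator of the numerical section) shows the severe ill-conditioning of A and a noisy right-hand side satisfying (2) with equality:

```python
import numpy as np

# Hypothetical discretization of a first-kind integral equation on [0, 1]:
# a smooth symmetric kernel k(s, t) = exp(-(s - t)^2) sampled on an n-point grid
# gives a symmetric positive definite but severely ill-conditioned matrix A.
n = 100
t = np.linspace(0.0, 1.0, n)
A = np.exp(-(t[:, None] - t[None, :]) ** 2) / n

u_true = np.sin(np.pi / 2 * t)   # test solution, as in the numerical section
f = A @ u_true                   # exact right-hand side of A u = f

rng = np.random.default_rng(0)
delta = 1e-3
noise = rng.standard_normal(n)
f_delta = f + delta * noise / np.linalg.norm(noise)   # ||f_delta - f|| = delta

print(np.linalg.cond(A))   # huge condition number: the discrete problem is ill-posed
```

Because the singular values of such a kernel decay rapidly toward zero, solving A u = f^δ directly amplifies the noise, which is exactly the instability that regularization addresses.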
Treating the problem as well-posed requires identifying the class of desired solutions and clearly stating a priori restrictions on the solution. We are interested in bounded solutions of problem (1) and (2); hence, the a priori constraint is

‖u‖ ≤ M, (5)

where M = const > 0.

Method of Iterative Regularization
Iteration methods have been used successfully for solving ill-posed problems. Recalling problem (1), we discuss below the specific features of the updated Landweber iteration method. The ill-posed behavior of inverse problems is connected to the fact that the eigenvalues of the operator A, arranged in decreasing order, accumulate at zero. The general form of the iteration method for solving Equation (3) with approximately given input data can be written as

u_{k+1} = u_k + α_k A^T E (f^δ − A u_k), (6)

where, in this so-called two-layer iteration method, α_k is a relaxation parameter and E is a positive definite operator. When E is the identity operator, (6) is the classical Landweber iteration method [23]. Several well-known methods can be obtained from (6) by selecting the operator E. Cimmino's method [24] takes E = (1/m) diag(1/‖a_i‖²), where a_i is the ith row of A. The CAV method [25] uses

E = diag(1 / Σ_{j=1}^n N_j a_{ij}²), (7)

where N_j is the number of non-zeros in the jth column of A. We define the new type of Landweber iteration method by choosing E through the polar decomposition of the operator A,

A = PQ, P = √(AA^T), (8)

where Q is a unitary operator, and setting E = Q^T. Since A is self-adjoint, the new version of the iteration method takes the form

u_{k+1} = u_k + α_k A Q^T (f^δ − A u_k), (9)

in which A Q^T = P is symmetric positive definite, replacing the identity operator of the classical method. Depending on the iteration parameters, iteration method (9) solves the variational problem by minimizing the discrepancy functional (4). We define the MLI algorithm (Algorithm 1), which represents the Modified Landweber Iterative (MLI) method. The classical Landweber method is given by Algorithm 2, and we call it the Classical Landweber Iterative (CLI) method.
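A minimal NumPy sketch of the two methods follows (assuming update (9) has the form u_{k+1} = u_k + α_k A Qᵀ(f^δ − A u_k), where Q is the unitary factor of the polar decomposition A = PQ; the authors' own MATLAB implementation is in [31], and the names here are illustrative):

```python
import numpy as np

def polar_factors(A):
    # Polar decomposition A = P Q with P = sqrt(A A^T) symmetric positive
    # semidefinite and Q unitary, computed from the SVD A = U S V^T.
    U, s, Vt = np.linalg.svd(A)
    P = U @ np.diag(s) @ U.T
    Q = U @ Vt
    return P, Q

def mli(A, f_delta, alpha, n_iter):
    # Modified Landweber iteration (MLI): u <- u + alpha * A Q^T (f_delta - A u),
    # where A Q^T = P = sqrt(A A^T) replaces the identity of the classical scheme.
    _, Q = polar_factors(A)
    u = np.zeros_like(f_delta)
    for _ in range(n_iter):
        u = u + alpha * A @ Q.T @ (f_delta - A @ u)
    return u

def cli(A, f_delta, alpha, n_iter):
    # Classical Landweber iteration (CLI): u <- u + alpha * A^T (f_delta - A u).
    u = np.zeros_like(f_delta)
    for _ in range(n_iter):
        u = u + alpha * A.T @ (f_delta - A @ u)
    return u

# Small well-conditioned demo; for a symmetric positive definite A the polar
# factor Q is the identity, so MLI and CLI coincide here.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
f = A @ np.array([1.0, 1.0])
print(mli(A, f, alpha=0.1, n_iter=500))   # approaches the exact solution [1, 1]
```

The interesting case is a non-symmetric, ill-conditioned A, where Q differs from the identity and the two updates genuinely differ.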
In order to make the iteration method (9) effective, it is critical to devise reliable stopping criteria. The method reduces the given problem to the variational problem (4). With noisy input data f^δ, running the iteration too long yields an iterative solution u_k with a loss of resolution: the error first decreases and then grows again. This phenomenon is called semi-convergence and was described by Natterer [26].
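A common concrete stopping rule (Morozov's discrepancy principle, shown here for illustration; it is not necessarily the rule used by the authors) terminates the iteration at the first k with ‖A u_k − f^δ‖ ≤ τδ for some τ > 1. A minimal Python sketch with the classical Landweber update:

```python
import numpy as np

def landweber_with_discrepancy(A, f_delta, alpha, delta, tau=1.1, max_iter=100000):
    # Classical Landweber iteration u <- u + alpha * A^T (f_delta - A u),
    # stopped by Morozov's discrepancy principle: terminate at the first k with
    # ||A u_k - f_delta|| <= tau * delta. Stopping early counteracts semi-convergence.
    u = np.zeros(A.shape[1])
    for k in range(max_iter):
        r = f_delta - A @ u
        if np.linalg.norm(r) <= tau * delta:
            return u, k
        u = u + alpha * A.T @ r
    return u, max_iter
```

Increasing τ stops the iteration earlier (more regularization); taking τ very close to 1 risks iterating into the noise.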

Analysis of Convergence of Iterations
In this section, we consider Theorem 1 in [27]. To understand the convergence behavior of the iteration technique, we take a closer look at the errors between the estimated and exact solutions, using the constant parameter α_k = α.
Theorem 1. Let u* be the unique solution of Equation (4). Then the iterates of (9) converge to a solution u_α of (4) if and only if 0 < α < 2/σ₁², where σ₁ is the largest singular value of Q^T A.
Proof of Theorem 1. Assume u_0 = 0. Let B = AQ^T A and c = AQ^T f^δ. Then, using (9), we obtain

u_{k+1} = (I − αB) u_k + αc.

Let Q^T A = UΣV^T be the singular value decomposition (SVD) of Q^T A, with singular values σ_1 ≥ σ_2 ≥ . . . > 0, and represent B through this SVD; since A is self-adjoint, the eigenvalues of B coincide with the squared singular values σ_i². It follows that the iterates converge if and only if |1 − ασ_i²| < 1 for all i, i.e., if and only if 0 < α < 2/σ₁². This completes the proof.
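The step-size bound of Theorem 1 can be checked numerically on a small example (illustration only; for a symmetric positive definite A the polar factor Q is the identity, so σ₁ is just the largest eigenvalue of A):

```python
import numpy as np

# For A below, sigma_1 = 3, so Theorem 1 predicts convergence iff 0 < alpha < 2/9.
A = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric positive definite, Q = I
f = A @ np.array([1.0, 1.0])             # exact data for the solution [1, 1]

def error_after(alpha, n_iter=300):
    # Run the modified update u <- u + alpha * A Q^T (f - A u) with Q = I
    # and return the distance to the exact solution.
    u = np.zeros(2)
    for _ in range(n_iter):
        u = u + alpha * A @ (f - A @ u)
    return np.linalg.norm(u - np.array([1.0, 1.0]))

print(error_after(0.20))   # alpha below 2/9: the iteration converges
print(error_after(0.25))   # alpha above 2/9: the iteration diverges
```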

Number of Iterations
We need to formulate the conditions under which MLI (9) provides the approximate solution of problem (1) and (2) after n(δ) iterations.

Theorem 2. Let the number of iterations n(δ) in MLI (9) be chosen so that n(δ) → ∞ and n(δ)δ → 0 as δ → 0, with 0 < α < 2/γ. Then the approximate solution u_{n(δ)} converges to the exact solution u as δ → 0.
Proof of Theorem 2. We denote the error at the nth iteration by z_n = u_n − u, where u_0 is some initial estimate. By (9), the error obeys the same iteration applied to exact data; this representation corresponds to the iterative solution of problem (1) in which the initial approximation coincides with the exact solution.
With equality (17) taken into account, the error splits as z_n = z_n^{(1)} + z_n^{(2)}, where the first term z_n^{(1)} in (18) is the error of the iteration method itself and the second term z_n^{(2)} is related to the inexactness of the input data of (1).
Under 0 < α < 2/γ, where γ > 0, the transition operator of the iteration is non-expansive. To verify this, we pass from difference (19) to the equivalent inequality; with A self-adjoint and positive definite and the estimate ‖AQ^T‖ ≤ γ taken into account, we obtain the required bound provided 0 < α < 2/γ is satisfied. Taking inequality (19) into account, we obtain ‖z_n^{(2)}‖ ≤ nαδ. The estimate of ‖z_n^{(1)}‖ deserves a more detailed study. To begin with, assume that z_0 ∈ H. Such a situation is met, for instance, with the initial approximation u_0 = 0 in the solution of problems (1) and (2) in class (5). Let us show that s(n) = ‖z_n^{(1)}‖ → 0 as n → ∞. Using the spectral representation of the error, for any small ε > 0 we can find N such that the contribution of the components with index above N is less than ε; since |1 − ασ_i| < 1, the remaining components decay with n, and for sufficiently large n the first term also becomes arbitrarily small. The substitution of (20) into (18) yields the estimate ‖z_n‖ ≤ nαδ + s(n).
In the updated iteration method (9), the number of iterations is matched to the inexactness of the input data; in other words, the iteration number serves as the regularization parameter, chosen from the error level of the right-hand side.

Estimating the Rate of Convergence
The preceding analysis establishes convergence without revealing its rate. By narrowing the class of a priori limitations on the solution, we obtain the following theorem, which estimates the accuracy of the approximate solution as an explicit function of the inexactness of the input data.
Consider the iterative method (9) under the more stringent constraint on the iteration parameter

0 < ασ_i ≤ 1 for all i, i.e., 0 < α ≤ 1/σ₁. (22)

Theorem 3. Suppose that the exact solution of problem (3) belongs to the source class

u = A^p v, ‖v‖ ≤ M₁, p > 0. (23)

Then, for the error of the iteration method (9) with u_0 = 0, there holds the estimate

‖z_n‖ ≤ nαδ + C_p M₁ (αn)^{−p}, (24)

where C_p depends only on p.

Proof of Theorem 3. By condition (23), it remains to estimate the quantity s(n) in (21); the outcome is determined by the number of iterations n. With u_0 = 0 we find z_0 = −u and, in view of (23),

s(n) ≤ M₁ max_i |1 − ασ_i|^n σ_i^p.

Under constraint (22) on the iteration parameter we have 0 < ασ_i ≤ 1 and, hence, s(n) ≤ C_p M₁ (αn)^{−p}. Minimizing the right-hand side of (24) over n lets us formulate the stopping criterion n(δ) = O(δ^{−1/(p+1)}). For the error of the approximate solution we then obtain the estimate

‖z_{n_opt}‖ ≤ M₂ δ^{p/(p+1)}, (27)

with a constant M₂ depending on M₁, α, and p.
Estimate (27) establishes explicitly the dependence of the rate at which the approximate solution converges to the exact solution on the inexactness δ and on the smoothness of the exact solution (the parameter p).
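The order of the stopping rule follows from balancing the two error terms; with a generic constant C standing in for the constants of (24), the minimization reads:

```latex
g(n) = n\alpha\delta + C(\alpha n)^{-p}, \qquad
g'(n) = \alpha\delta - pC\alpha^{-p} n^{-p-1} = 0
\;\Longrightarrow\;
n_{\mathrm{opt}} = \left(\frac{pC}{\alpha^{p+1}\delta}\right)^{1/(p+1)} = O\!\left(\delta^{-1/(p+1)}\right),
```

and substituting n_opt back gives g(n_opt) = O(δ^{p/(p+1)}), the rate stated in the theorem: the first term penalizes iterating too long with noisy data, the second penalizes stopping too early.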

Numerical Results
We considered the inverse boundary value problem found in [28], supposing that u(t) is a function such that u(0) = u′(0) = 0 and ∫₀^T |u″(t)|² dt ≤ r², where r is a given number. Applying the method of separation of variables and integrating the right-hand side of (34) by parts twice, using u(0) = u′(0) = 0 and (32) for the coefficients a_n(t), we obtain from (33) and (37) an integral equation of the first kind in which u″(τ) = g(τ), and we define the operator P that maps g back to u. We used the discretization algorithm in [29], applied as in [30]; by specifying the derivative of the kernel, we obtain the discrete form of the problem. The operator A is an infinite-dimensional operator. The next step applies the discretization algorithm to the integral Equation (40) and converts the operator A to the finite-dimensional operator A_n, where A_n → A as n → ∞; likewise, for the operator P there is an operator P_n with P_n → P as n → ∞. In this way, we convert the system of differential Equations (28)-(32) to the linear operator equation A_n u = f_n, where u = P_n g: the differential problem is reformulated as an integral equation by separation of variables, and the discretization algorithm converts the integral equation into a system of linear algebraic equations, i.e., a finite-dimensional linear operator equation.
Here Ã_n = A_n P_n. The operator Ã_n in (43) is a non-injective operator, because it has the triangular property [28].
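The discretization step can be sketched generically as follows (a left-rectangle quadrature for a first-kind Volterra equation; this is an illustrative scheme, not necessarily the algorithm of [29], but it reproduces the triangular property of A_n noted above):

```python
import numpy as np

def volterra_matrix(kernel, T, n):
    # Discretize the first-kind Volterra equation  int_0^t k(t, tau) u(tau) dtau = f(t)
    # on [0, T] by the left-rectangle rule: A_n is lower triangular by construction.
    h = T / n
    t = h * np.arange(1, n + 1)          # collocation points t_i
    tau = h * np.arange(n)               # quadrature nodes tau_j (left endpoints)
    A = np.array([[h * kernel(t[i], tau[j]) if tau[j] < t[i] else 0.0
                   for j in range(n)] for i in range(n)])
    return A, t, tau

# Example with the integration kernel k = 1, so that A_n u approximates int_0^t u.
A, t, tau = volterra_matrix(lambda t_, tau_: 1.0, T=1.0, n=200)
u = np.cos(tau)
f_approx = A @ u
print(np.max(np.abs(f_approx - np.sin(t))))   # O(h) quadrature error
```

As n grows, A_n approaches the continuous operator, while its smallest singular values approach zero, reproducing the ill-conditioning of the original problem.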
Considering the above problem, we need to find the function u(t) ∈ H⁴[0, T], the real solution, where u(x₀, t) = f(t) is the input function, with x₀ ∈ (0, 1), t ∈ [0, T], x₀ = 0.5, and T = 1. In [31], we prepared a case study using MATLAB code. First, the discretization algorithm was applied to obtain the operator, and by using u(t) = sin(πt/2) we obtained the vector f. We then added some noise, as explained in the code, to obtain the vector f^δ. Finally, we computed the SVD of the operator A and used it in the MLI and CLI algorithms to obtain the approximate solutions u_α.
Algorithm 1 MLI (number of iterations): the value of the regularization parameter that was used with 1000 iterations (see Figure 1). Algorithm 2 CLI (number of iterations): the value of the regularization parameter with 1000 iterations (see Figure 2). In Figure 3, the iterations increased to 10,000; in Figure 4, to 20,000.

Concluding Remarks and Observations
This paper defined algorithms that solve ill-posed inverse problems by using an iterative regularization method (of Landweber iterative type). The regularization parameter was chosen by considering the iteration method that was most consistent with the real solution. The residual method was used as an analysis method to rate the last iteration's convergence. We observed that the minimum of the discrepancy is extremely unstable with respect to data perturbations on the right-hand side of the linear operator equation. It is clear that the updated Landweber iteration algorithm obtained a good approximate solution compared with the classical Landweber method. That means that our updated Landweber method successfully fixed this instability in the discrepancy method. The suggested algorithms successfully solve the inverse boundary value problem for the heat-conducting problem.


Data Availability Statement:
The datasets used to support this study are included within the code on GitHub. We prepared the case study by using MATLAB code. First, the discretization algorithm was applied to obtain the operator and, by using it, we obtained the vector. We then added some noise as explained in the code to obtain the vector. Finally, we computed the SVD for the operator and used it in the MLI and CLI algorithms to obtain approximation solutions [31].

Conflicts of Interest:
The authors declare no conflict of interest.