Article

Updating the Landweber Iteration Method for Solving Inverse Problems

by Hassan K. Ibrahim Al-Mahdawi 1, Hussein Alkattan 1, Mostafa Abotaleb 1,*, Ammar Kadi 2 and El-Sayed M. El-kenawy 3

1 Department of System Programming, South Ural State University, 454080 Chelyabinsk, Russia
2 Department of Food and Biotechnology, South Ural State University, 454080 Chelyabinsk, Russia
3 Department of Communications and Electronics, Delta Higher Institute of Engineering and Technology, Mansoura 35111, Egypt
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(15), 2798; https://doi.org/10.3390/math10152798
Submission received: 22 June 2022 / Revised: 13 July 2022 / Accepted: 5 August 2022 / Published: 7 August 2022
(This article belongs to the Section Computational and Applied Mathematics)

Abstract: The Landweber iteration method is one of the most popular methods for solving linear discrete ill-posed problems. The variety of physical problems, and of the operators they give rise to, motivates updating the main methods and algorithms to achieve the best results. In this work, we consider the linear operator equation and use a new version of the Landweber iterative method as an iterative solver. The main goal of updating the Landweber iteration method is to make the iteration process faster and more accurate. We use a polar decomposition to obtain a symmetric positive definite operator in place of the identity operator in the classical Landweber method. We carry out the convergence and other necessary analyses to prove the usability of the new iteration method, and we use the residual method to rate the convergence of the iteration. The modified iterative method is compared with the classical Landweber method, and a numerical experiment illustrates its effectiveness by applying it to the inverse boundary value problem (IBVP) of the heat equation.
MSC:
65F22; 15A29; 65F10; 90C20

1. Introduction

In the early twentieth century, Hadamard [1] formulated the conditions under which a problem for a linear operator equation is well-posed, stating that a problem is well-posed when it fulfils the following points:
  • Existence: a solution exists;
  • Uniqueness: the solution is unique;
  • Stability: the solution depends continuously on the given data.
If at least one of the above conditions is not fulfilled, the problem is considered ill-posed. Violations of 1 and 2 can often be remedied by a small re-formulation of the problem. Violations of stability are much harder to remedy because they imply that a small disturbance in the data leads to a large disturbance in the estimated solution [2,3,4,5]. The inverse problem for the heat equation studied here can be solved by many methods, for example, the regularization method of Tikhonov [6], the method of Lavrentiev [7], the quasi-solution method of Ivanov [8], and many others.
The Landweber iteration is a basic method for solving inverse problems, or linear operator equations. Due to its ease of implementation and relatively low cost per iteration, the Landweber method has attracted a lot of attention in the study of inverse problems. Effective methods for solving inverse problems depend on a deep insight into the problems and into the algorithms used to solve them [9,10,11,12,13,14].
Regarding Landweber iterations in Hilbert spaces, see [15,16]. In recent years, the Landweber method has been extended to inverse problems in other settings, such as Banach spaces [17,18,19]. Many works have modified the Landweber iteration method. For example, [20] used a revised residual principle for the Landweber iteration to solve linear and non-linear inverse problems in Banach spaces and proved new convergence results. In [21], the authors estimated the error between the exact and approximate solutions by considering the regularization level in the approximate solution, choosing the regularization parameter from the initial data. In [22], the author presents the Landweber iterative method and accelerates it by using a sparse Jacobian-based reconstruction method.
This article presents an update of the classical Landweber iteration. We use a new invertible operator to accelerate the convergence of the iteration, and we compare the proposed iterative method with the classical Landweber iteration. A numerical experiment illustrates the effectiveness of the method by applying it to the inverse boundary value problem of the heat equation.

2. Problem Statement

In general, numerical solutions of ill-posed problems are discussed for the following linear operator equation of the first kind:
$$A u = f, \qquad (1)$$
where $A$ is a linear positive compact operator acting from a Hilbert space $H$ into the same space $H$. We also assume that $u, f \in H$ and that the operator $A : H \to H$ is compact, self-adjoint, and positive definite. In particular, the operator $A$ is severely ill-conditioned and may be rank-deficient. Operators of this kind arise from the discretization of linear ill-posed problems, such as Fredholm or Volterra integral equations of the first kind.
In practical studies, the right-hand side data are typically given with some error or measurement noise. To depict this general situation, assume the right-hand side of problem (1) is given with some error level $\delta$: instead of $f$, we have $f_\delta$ with
$$\|f_\delta - f\| \le \delta. \qquad (2)$$
We pose the problem of finding an approximate solution of (1) and (2) from the input data $f_\delta$. We call $u_\alpha$ an approximate solution, where the regularization parameter $\alpha$ is linked to the error level $\delta$, i.e., $\alpha = \alpha(\delta)$, and we estimate the error $\|u_\delta - u\|$.
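As a concrete illustration of the noise model (2), the following sketch (Python/NumPy; the clean right-hand side and the noise level are assumptions for illustration, not data from the paper) builds a perturbed right-hand side $f_\delta$ with $\|f_\delta - f\| = \delta$:

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.sin(np.linspace(0, np.pi, 50))    # illustrative clean right-hand side
delta = 1e-2                             # assumed noise level

# Scale a random perturbation so that ||f_delta - f|| = delta, satisfying (2).
noise = rng.standard_normal(f.size)
f_delta = f + delta * noise / np.linalg.norm(noise)
```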
Stable methods for ill-posed problems rely on a priori information concerning the inexactness of the input data. Since the right-hand side is given with some error, we attempt to solve the following problem:
$$A u_\alpha = f_\delta. \qquad (3)$$
Variational methods, as an alternative way of solving problem (3), minimize the norm of the residual $r = A u_\alpha - f_\delta$, i.e., the discrepancy functional
$$J_0(u_\alpha) = \|A u_\alpha - f_\delta\|^2. \qquad (4)$$
There is a variety of possible solutions satisfying Equation (3) to within the discrepancy $\delta$. Posing the problem in a well-posed manner requires identifying the class of admissible solutions, i.e., explicitly imposing a priori restrictions on the solution. We are interested in bounded solutions of the given problem (1) and (2); hence, the a priori constraint is
$$\|u\| \le M, \qquad (5)$$
where $M = \mathrm{const} > 0$.

3. Method of Iterative Regularization

Iteration methods have been used successfully to solve ill-posed problems. Recalling problem (1), we discuss below the specific features of the updated Landweber iteration method. The ill-posedness of inverse problems is connected to the fact that the eigenvalues of the operator $A$, written in decreasing order ($\lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_k \ge \dots \ge 0$), tend to zero as $k \to \infty$.
The general form of the iteration method for solving Equation (3) with approximately given input data can be written as
$$u_{k+1} = u_k + \alpha_k A^T E (f_\delta - A u_k), \qquad (6)$$
in this so-called two-layer iteration method, $\alpha_k$ is a relaxation parameter and $E$ is a positive definite operator. When $E$ is the identity operator, (6) is the classical Landweber iteration method [23].
Several well-known methods can be derived from the form (6) by selecting the operator $E$. Cimmino's method [24] uses $E = \frac{1}{m}\,\mathrm{diag}(1/\|a_i\|^2)$, where $a_i$ is the $i$th row of $A$. The CAV method [25] uses $E = \mathrm{diag}\big(1/\sum_{j=1}^{n} N_j a_{ij}^2\big)$, where $N_j$ is the number of non-zeros in the $j$th column of $A$. We define the new type of Landweber iteration method by choosing $E$ as
$$E = Q = A P^{-1}, \qquad (7)$$
where $Q$ is a unitary operator and $P = (A^T A)^{1/2}$ is the symmetric positive semidefinite factor in the polar decomposition of the operator $A$:
$$A = Q P. \qquad (8)$$
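The polar factor in (8) can be computed directly from the SVD: if $A = U \Sigma V^T$, then $Q = U V^T$ and $P = V \Sigma V^T$. A minimal sketch (Python/NumPy; the random matrix is an illustrative stand-in for the discretized operator, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))    # stand-in for the discretized operator

# Polar decomposition A = Q P via the SVD A = U diag(s) Vt:
# Q = U Vt is orthogonal, P = Vt.T diag(s) Vt is symmetric positive semidefinite.
U, s, Vt = np.linalg.svd(A)
Q = U @ Vt
P = Vt.T @ np.diag(s) @ Vt
```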
The new version of the iteration method is then
$$u_{k+1} = u_k + \alpha_k A Q^T (f_\delta - A u_k). \qquad (9)$$
Like the original method, iteration (9) solves the variational problem of minimizing the discrepancy functional $J_0(u_\alpha) = \|A u_\alpha - f_\delta\|^2$. We define the corresponding Modified Landweber Iterative (MLI) algorithm (Algorithm 1).
Algorithm 1 MLI (number of iterations)
  • u_0 = 0 (zero vector)
  • [U, Σ, V] = SVD(A)
  • Q = U V^T
  • u_k = u_0
  • for k = 1, 2, 3, …, iteration times
    • u_k = u_k + α A Q^T (f_δ − A u_k)
    • variational = ‖A u_k − f_δ‖²
  • end loop
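The steps of Algorithm 1 can be sketched in Python/NumPy as follows. The test problem (a small symmetric positive definite matrix, matching the setting of Section 2), the step size, and the iteration count are illustrative assumptions, not values from the paper:

```python
import numpy as np

def mli(A, f_delta, alpha, n_iter):
    """Modified Landweber iteration (Algorithm 1 / Equation (9)):
    u <- u + alpha * A Q^T (f_delta - A u), Q = U V^T from the SVD of A."""
    U, _, Vt = np.linalg.svd(A)
    Q = U @ Vt                       # unitary polar factor of A
    u = np.zeros(A.shape[1])
    for _ in range(n_iter):
        u = u + alpha * A @ Q.T @ (f_delta - A @ u)
    return u

# Illustrative symmetric positive definite test problem (an assumption).
rng = np.random.default_rng(2)
M = rng.standard_normal((8, 8))
A = M @ M.T + 8 * np.eye(8)
u_true = rng.standard_normal(8)
f = A @ u_true

sigma1 = np.linalg.norm(A, 2)
alpha = 1.0 / sigma1 ** 2            # satisfies 0 < alpha < 2 / sigma1^2 (Theorem 1)
u_est = mli(A, f, alpha, 2000)
```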
The classical Landweber method is given by the following algorithm (Algorithm 2), which we call the Classical Landweber Iterative (CLI) method.
Algorithm 2 CLI (number of iterations)
  • u_0 = 0 (zero vector)
  • u_k = u_0
  • for k = 1, 2, 3, …, iteration times
    • u_k = u_k + α A^T (f_δ − A u_k)
    • variational = ‖A u_k − f_δ‖²
  • end loop
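For comparison, a sketch of Algorithm 2 in the same form, again on an assumed illustrative test problem:

```python
import numpy as np

def cli(A, f_delta, alpha, n_iter):
    """Classical Landweber iteration (Algorithm 2):
    u <- u + alpha * A^T (f_delta - A u)."""
    u = np.zeros(A.shape[1])
    for _ in range(n_iter):
        u = u + alpha * A.T @ (f_delta - A @ u)
    return u

# Same kind of illustrative symmetric positive definite test problem as for MLI.
rng = np.random.default_rng(3)
M = rng.standard_normal((8, 8))
A = M @ M.T + 8 * np.eye(8)
u_true = rng.standard_normal(8)
f = A @ u_true

alpha = 1.0 / np.linalg.norm(A, 2) ** 2
u_est = cli(A, f, alpha, 5000)
residual = np.linalg.norm(A @ u_est - f)
```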
To make the iteration method (9) effective, it is critical to establish a reliable stopping criterion. The discrepancy principle reduces the given problem to the variational condition
$$\|A u_{n(\delta)} - f_\delta\|^2 \le \delta^2. \qquad (10)$$
If the iteration runs too long with noisy input data $f_\delta$, the iterate $u_k$ loses resolution. This phenomenon is called semi-convergence and was described by Natterer [26].
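Semi-convergence can be observed numerically. The sketch below (Python/NumPy) runs the classical iteration on a noisy discretized integration operator, a model problem chosen here purely for illustration; the error to the exact solution first decreases and then grows as the noise is amplified, so the iteration index itself acts as the regularization parameter:

```python
import numpy as np

# Model problem (an assumption for illustration): discretized integration
# operator, lower triangular like the matrix (43) of Section 7.
n = 40
A = np.tril(np.ones((n, n))) / n
t = np.linspace(0, 1, n)
u_true = np.sin(np.pi * t)
f = A @ u_true

rng = np.random.default_rng(4)
delta = 1e-3
noise = rng.standard_normal(n)
f_delta = f + delta * noise / np.linalg.norm(noise)

alpha = 1.0 / np.linalg.norm(A, 2) ** 2
u = np.zeros(n)
errors = []
for k in range(20000):
    u = u + alpha * A.T @ (f_delta - A @ u)
    errors.append(np.linalg.norm(u - u_true))

k_best = int(np.argmin(errors))   # the best iterate lies strictly inside the run
```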

4. Analysis of Convergence of Iterations

In this section, we follow Theorem 1 in [27]. To understand the convergence behavior of the iteration technique, we examine the errors between the approximate and exact solutions using the constant parameter $\alpha_k = \alpha$. We denote by $u^*$ the unique solution of Equation (4).
Theorem 1.
Let $\alpha_k = \alpha$ for $k \ge 0$. Then the iterates of (9) converge to a solution of (4) (called $u_\alpha$) if and only if $0 < \alpha < 2/\sigma_1^2$, where $\sigma_1$ is the largest singular value of $Q^T A$.
Proof of Theorem 1.
Assume $u_0 = 0$. Let $B = A Q^T A$ and $c = A Q^T f_\delta$.
Then, using (9), we obtain:
$$u_k = (I - \alpha B) u_{k-1} + \alpha c = \alpha \sum_{j=0}^{k-1} (I - \alpha B)^{k-j-1} c. \qquad (11)$$
Let $Q^T A = U \Sigma V^T$ be the singular value decomposition (SVD) of $Q^T A$. We represent $B$ in the following form:
$$B = (Q^T A)^T (Q^T A) = V \Sigma^T \Sigma V^T, \qquad (12)$$
where $\Sigma^T \Sigma = \mathrm{diag}(\sigma_1^2, \sigma_2^2, \dots, \sigma_p^2, 0, \dots, 0)$, $\sigma_1 \ge \sigma_2 \ge \dots \ge \sigma_p > 0$, and $p = \mathrm{rank}(A)$.
Using (11), we have
$$\sum_{j=0}^{k-1} (I - \alpha B)^{k-j-1} = V E_k V^T, \qquad (13)$$
where
$$E_k = \mathrm{diag}\left(\frac{1 - (1 - \alpha\sigma_1^2)^k}{\alpha\sigma_1^2}, \dots, \frac{1 - (1 - \alpha\sigma_p^2)^k}{\alpha\sigma_p^2}, k, \dots, k\right). \qquad (14)$$
It follows that:
$$u_k = V (\alpha E_k) V^T c = V (\alpha E_k) \Sigma^T U^T Q^T f_\delta = \sum_{i=1}^{p} \left\{1 - (1 - \alpha\sigma_i^2)^k\right\} \frac{u_i^T Q^T f_\delta}{\sigma_i}\, v_i,$$
while the SVD gives
$$u^* = V \bar{E}\, U^T Q^T f_\delta, \qquad (15)$$
where
$$\bar{E} = \mathrm{diag}\left(\frac{1}{\sigma_1}, \dots, \frac{1}{\sigma_p}, 0, \dots, 0\right).$$
Note that if $|1 - \alpha\sigma_i^2| < 1$ for $i = 1, 2, \dots, p$, i.e., $0 < \alpha < 2/\sigma_1^2$, we find
$$\lim_{k \to \infty} (\alpha E_k \Sigma^T) = \bar{E}.$$
This completes the proof. □

5. Number of Iterations

We now formulate the conditions under which MLI (9) provides an approximate solution of problem (1) and (2) after $n(\delta)$ iterations.
Theorem 2.
Let the number of iterations in MLI (9) satisfy $n(\delta) \to \infty$ and $n(\delta)\,\delta \to 0$ as $\delta \to 0$. Then $\|u_{n(\delta)} - u\| \to 0$ as $\delta \to 0$.
Proof of Theorem 2.
We denote the inexactness at the $n$th iteration by $z_n = u_n - u$. By (9), we obtain:
$$u_n = (I - \alpha A Q^T)^n u_0 + \sum_{k=0}^{n-1} (I - \alpha A Q^T)^k \alpha f_\delta, \qquad (16)$$
where $u_0$ is some initial estimate.
For the exact solution, we can use the following representation:
$$u = (I - \alpha A Q^T)^n u + \sum_{k=0}^{n-1} (I - \alpha A Q^T)^k \alpha f. \qquad (17)$$
This representation corresponds to the iterative solution of problem (1) in which the initial approximation coincides with the exact solution.
Taking equality (17) into account, for the inexactness we obtain:
$$z_n = z_n^{(1)} + z_n^{(2)}, \qquad (18)$$
where $z_n^{(1)} = (I - \alpha A Q^T)^n z_0$ and $z_n^{(2)} = \sum_{k=0}^{n-1} (I - \alpha A Q^T)^k \alpha (f_\delta - f)$, with $z_0 = u_0 - u$ the initial inexactness. The first term $z_n^{(1)}$ in (18) is inherent to the iteration method, while the term $z_n^{(2)}$ is related to the inexactness of the input data of (1).
Under $0 < \alpha < 2/\gamma$, where $\gamma \ge \|A Q^T\|$, we have
$$\|I - \alpha A Q^T\| \le 1. \qquad (19)$$
To verify this bound, we pass from (19) to the equivalent inequality
$$(I - \alpha Q A^T)(I - \alpha A Q^T) \le I.$$
Since $A$ is self-adjoint and positive definite, the operator $A Q^T$ is symmetric positive semidefinite, and we obtain:
$$(I - \alpha Q A^T)(I - \alpha A Q^T) - I = \alpha A Q^T (\alpha A Q^T - 2 I) \le 0,$$
provided $0 < \alpha < 2/\gamma$ is satisfied, since the spectrum of $\alpha A Q^T$ then lies in $[0, 2]$.
Taking inequality (19) into account, we obtain:
$$\|z_n^{(2)}\| = \left\|\sum_{k=0}^{n-1} (I - \alpha A Q^T)^k \alpha (f_\delta - f)\right\| \le \sum_{k=0}^{n-1} \|I - \alpha A Q^T\|^k\, \alpha\, \|f_\delta - f\| \le n \alpha \delta. \qquad (20)$$
The estimate of $z_n^{(1)}$ deserves a more detailed study. To begin with, assume that $z_0 \in H$. Such a situation is met, for instance, with the initial approximation $u_0 = 0$ in the solution of problems (1) and (2) in class (5). Let us show that $s(n) = \|z_n^{(1)}\| \to 0$ as $n \to \infty$. We use the representation:
$$s^2(n) = \sum_{i=1}^{\infty} (1 - \alpha\sigma_i)^{2n} (z_0, w_i)^2.$$
For any small $\varepsilon > 0$, one can find $N$ such that
$$\sum_{i=N+1}^{\infty} (z_0, w_i)^2 \le \varepsilon^2.$$
Since $|1 - \alpha\sigma_i| < 1$, we obtain:
$$s^2(n) \le \sum_{i=1}^{N} (1 - \alpha\sigma_i)^{2n} (z_0, w_i)^2 + \sum_{i=N+1}^{\infty} (z_0, w_i)^2.$$
For sufficiently large $n$, the first term satisfies
$$\sum_{i=1}^{N} (1 - \alpha\sigma_i)^{2n} (z_0, w_i)^2 \le \varepsilon^2.$$
The substitution of (20) into (18) yields the estimate
$$\|z_n\| \le n \alpha \delta + s(n), \qquad (21)$$
where $s(n) \to 0$ as $n \to \infty$. Estimate (21) completes the proof. □
In the updated iteration method (9), the number of iterations is matched to the inexactness of the input data; thus, the number of iterations serves as the regularization parameter.

6. Estimating the Rate of Convergence

The convergence established above says nothing about its rate. By narrowing the class of a priori constraints on the solution, we can estimate the accuracy of the approximate solution as an explicit function of the inexactness of the input data. Consider the iterative method (9) under the more stringent constraint on the iteration parameter
$$0 < \alpha < 2/\sigma_1^2. \qquad (22)$$
Theorem 3.
Suppose that the exact solution of problem (3) belongs to the class
$$\|A^{-p} u\| \le M, \quad 0 < p < \infty. \qquad (23)$$
Then, for the inexactness of the iteration method (9) with $u_0 = 0$, there holds the estimate
$$\|z_n\| \le n \alpha \delta + M_1 n^{-p}, \qquad M_1 = M_1(\alpha, p, M). \qquad (24)$$
Proof of Theorem 3.
Under condition (23), it remains to estimate the quantity $s(n)$ in (21); the outcome is determined by the number of iterations $n$. With $u_0 = 0$, we have $z_0 = u$, and using the above, we find:
$$z_n^{(1)} = \sum_{i=1}^{\infty} \sigma_i^p (1 - \alpha\sigma_i)^n\, \frac{(u, w_i)}{\sigma_i^p}\, w_i.$$
Then, in view of (23), we find:
$$s(n) \le M \max_i\; \sigma_i^p\, |1 - \alpha\sigma_i|^n.$$
Under constraint (22) on the iteration parameter, we have $0 < \alpha\sigma_i \le 1$ and, hence,
$$s(n) \le M \max_i\; \sigma_i^p\, |1 - \alpha\sigma_i|^n \le \frac{M}{\alpha^p} \max_{0 < \eta < 1} \chi(\eta),$$
where $\chi(\eta) = \eta^p (1 - \eta)^n$. The function $\chi(\eta)$ attains its maximum at the point
$$\eta = \eta^* = \frac{p}{p + n},$$
and
$$\chi(\eta^*) = \left(\frac{p}{n}\right)^p \left(1 - \frac{p}{p+n}\right)^{p+n} < \frac{p^p}{n^p} \exp(-p). \qquad (25)$$
Substituting (25) into estimate (24), the constant becomes
$$M_1 = \frac{p^p}{\alpha^p} \exp(-p)\, M,$$
which depends on $p$, $\alpha$, and $M$, but does not depend on $n$. Minimizing the right-hand side of (24) lets us formulate the stopping criterion:
$$n_{\mathrm{opt}} = \left(\frac{p M_1}{\alpha}\right)^{1/(p+1)} \delta^{-1/(p+1)}, \qquad (26)$$
i.e., $n(\delta) = O(\delta^{-1/(p+1)})$. For the inexactness of the approximate solution, we then obtain the estimate
$$\|z_{n_{\mathrm{opt}}}\| \le M_2\, \delta^{p/(p+1)}, \qquad (27)$$
with
$$M_2 = \alpha \left(\frac{p M_1}{\alpha}\right)^{1/(p+1)} + M_1 \left(\frac{p M_1}{\alpha}\right)^{-p/(p+1)}.$$
Estimate (27) expresses explicitly how fast the approximate solution converges to the exact one, in terms of the inexactness $\delta$ and the smoothness of the exact solution (the parameter $p$). □
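The stopping rule (26) and accuracy bound (27) can be checked numerically by minimizing the bound (24) by brute force; the constants below are hypothetical values chosen only for illustration:

```python
import numpy as np

# Hypothetical values of the constants in (24)-(27); assumptions for illustration.
p, M1, alpha, delta = 1.0, 2.0, 0.5, 1e-4

# Optimal stopping index (26): minimizer of n*alpha*delta + M1*n^{-p} over n.
n_opt = (p * M1 / (alpha * delta)) ** (1 / (p + 1))

# Brute-force minimization of the bound (24) over integer n.
n = np.arange(1, 200000, dtype=float)
bound = n * alpha * delta + M1 * n ** (-p)
n_brute = n[np.argmin(bound)]

# The resulting accuracy matches (27): min bound = M2 * delta^{p/(p+1)}.
M2 = alpha * (p * M1 / alpha) ** (1 / (p + 1)) + M1 * (p * M1 / alpha) ** (-p / (p + 1))
```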

7. Numerical Results

We considered the inverse boundary value problem from [28]:
$$\frac{\partial u(x,t)}{\partial t} = \frac{\partial^2 u(x,t)}{\partial x^2}, \quad 0 < x < 1, \; 0 < t \le T, \qquad (28)$$
$$u(x, 0) = 0, \quad 0 \le x \le 1, \qquad (29)$$
$$\frac{\partial u(0,t)}{\partial x} = 0, \quad 0 < t \le T, \qquad (30)$$
$$u(1, t) = u(t), \quad 0 < t \le T, \qquad (31)$$
supposing that $u(t)$ is a function such that
$$u(t) \in H^4[0, T], \quad u(0) = u'(0) = u''(0) = u(T) = u'(T) = 0, \qquad (32)$$
and $\int_0^T |u'''(t)|^2\, dt \le r^2$, where $r$ is a known number. Applying the separation of variables method, we obtain:
$$u(x_0, t) = \sum_{n=0}^{\infty} a_n(t) \cos((n + 0.5)\pi x_0) + u(t), \qquad (33)$$
where $x_0 \in (0, 1)$, $t \in [0, T]$, and
$$a_n(t) = \frac{2\, e^{-(n+0.5)^2\pi^2 t}}{(n + 0.5)\pi} \int_0^t h(\tau)\, e^{(n+0.5)^2\pi^2 \tau}\, d\tau. \qquad (34)$$
Integrating the right-hand side of (34) by parts twice, we obtain:
$$a_n(t) = \frac{2}{(n+0.5)\pi} \int_0^t u'(\tau)\, e^{-(n+0.5)^2\pi^2 (t-\tau)}\, d\tau + \frac{2}{(n+0.5)^3\pi^3}\, u'(t) - \left[\frac{2}{(n+0.5)^3\pi^3}\, u'(0) + \frac{2}{(n+0.5)^5\pi^5}\, u''(t) + \frac{2}{(n+0.5)^5\pi^5}\, u''(0)\right], \qquad (35)$$
$$a_n(t) = \frac{2}{(n+0.5)^5\pi^5} \int_0^t u'''(\tau)\, e^{-(n+0.5)^2\pi^2 (t-\tau)}\, d\tau - \left[\frac{2}{(n+0.5)^3\pi^3}\, u'(0) + \frac{2}{(n+0.5)^5\pi^5}\, u''(0)\right], \qquad (36)$$
where $u'(0) = u''(0) = 0$ by (32), and hence
$$a_n(t) = \frac{2}{(n+0.5)^5\pi^5} \int_0^t u'''(\tau)\, e^{-(n+0.5)^2\pi^2 (t-\tau)}\, d\tau. \qquad (37)$$
From (33) and (37), we obtain the following integral equation of the first kind:
$$f(t) = u(x_0, t) = u(t) + \int_0^t \sum_{n=0}^{\infty} \frac{2\cos((n+0.5)\pi x_0)}{(n+0.5)^5 \pi^5}\, e^{-(n+0.5)^2\pi^2 (t-\tau)}\, g(\tau)\, d\tau, \qquad (38)$$
where $u'''(\tau) = g(\tau)$. We define the operator $P : L_2[0, T] \to L_2[0, T]$ by:
$$u(t) = P g(\tau) = \int_0^t \frac{(t - \tau)^2}{2}\, g(\tau)\, d\tau, \quad g(\tau),\, P g(\tau) \in L_2[0, T]. \qquad (39)$$
We used the discretization algorithm from [29] and applied it as in [30]. Specifying the kernel, we obtain the following:
$$A u(t) = \int_0^t K(t, \tau)\, g(\tau)\, d\tau = f(t), \quad t \in [0, T], \qquad (40)$$
where
$$K(t, \tau) = \frac{(t - \tau)^2}{2} + \sum_{n=0}^{\infty} \frac{2\cos((n+0.5)\pi x_0)}{(n+0.5)^5 \pi^5}\, e^{-(n+0.5)^2\pi^2 (t-\tau)}.$$
The operator $A$ is infinite-dimensional. The next step applies the discretization algorithm to the integral Equation (40) and converts $A$ into a finite-dimensional operator $A_n$, where $A_n \to A$ as $n \to \infty$; similarly, the operator $P$ is replaced by $P_n$, where $P_n \to P$ as $n \to \infty$:
$$\bar{K}_i(t) = K(\tau_i, t), \quad \tau_i \le \tau \le \tau_{i+1}, \; t \in [0, T], \; i = 0, 1, \dots, n-1,$$
$$\bar{K}_i(t_j), \quad \tau_i \le \tau \le \tau_{i+1}, \; t_j \le t \le t_{j+1}, \; i = 0, 1, \dots, n-1, \; j = 0, 1, \dots, n-1,$$
$$K_n(t, \tau) = \bar{K}_i(t_j),$$
$$A_n[u(t)] = \int_0^t K_n(t, \tau)\, g(\tau)\, d\tau = f(t), \quad t \in [0, T], \qquad (41)$$
$$\bar{K}_i(t_j) = \begin{cases} K(\tau_i, t_j), & i \le j, \\ 0, & i > j. \end{cases}$$
From the above, we convert the system of differential Equations (28)–(32) into the linear operator equation $A_n u = f_n$, where $u = P_n g$: the differential problem is reformulated as an integral equation by separation of variables, and the discretization algorithm converts the integral equation into a system of linear algebraic equations, i.e., a linear operator equation:
$$\bar{A}_n \begin{bmatrix} g(\tau_0) \\ g(\tau_1) \\ \vdots \\ g(\tau_{n-1}) \end{bmatrix} = \begin{bmatrix} f(t_0) \\ f(t_1) \\ \vdots \\ f(t_{n-1}) \end{bmatrix}, \qquad (42)$$
where
$$\bar{A}_n = \frac{1}{n} \begin{bmatrix} K(\tau_0, t_0) & & & 0 \\ K(\tau_0, t_1) & K(\tau_1, t_1) & & \\ \vdots & \vdots & \ddots & \\ K(\tau_0, t_{n-1}) & K(\tau_1, t_{n-1}) & \cdots & K(\tau_{n-1}, t_{n-1}) \end{bmatrix}, \qquad (43)$$
and $\bar{A}_n = A_n P_n$.
The operator A ¯ n in (43) is a non-injective operator, because it has the triangular property [28].
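A sketch of assembling the matrix (43) from the kernel of Equation (40) follows (Python/NumPy; the series truncation N and the grid size are assumptions, since the paper's series is infinite):

```python
import numpy as np

def K(t, tau, x0=0.5, N=50):
    """Kernel of Equation (40); the infinite series is truncated at N terms
    (N is an assumption for illustration)."""
    s = (t - tau) ** 2 / 2.0
    for m in range(N):
        c = (m + 0.5) * np.pi
        s += 2.0 * np.cos(c * x0) / c ** 5 * np.exp(-c ** 2 * (t - tau))
    return s

def build_A(n, T=1.0):
    """Assemble the lower-triangular matrix (43): entries K(tau_i, t_j)/n for i <= j."""
    tau = np.linspace(0, T, n, endpoint=False)
    t = np.linspace(0, T, n, endpoint=False)
    A = np.zeros((n, n))
    for j in range(n):
        for i in range(j + 1):
            A[j, i] = K(t[j], tau[i]) / n
    return A

A = build_A(32)
```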
For the numerical experiment, we seek the function $u(t) \in H^4[0, T]$, the real solution, while $u(x_0, t) = f(t)$ is the input function, with $x_0 \in (0, 1)$, $t \in [0, T]$, $x_0 = 0.5$, and $T = 1$. In [31], we prepared a case study using MATLAB code. First, the discretization algorithm was applied to obtain the operator; using $u(t) = \sin(\pi t / 2)$, we obtained the vector $f$. We then added some noise, as explained in the code, to obtain the vector $f_\delta$. Finally, we computed the SVD of the operator $A$ and used it in the MLI and CLI algorithms to obtain the approximate solutions $u_\alpha$.
Figure 1 shows the MLI approximation (Algorithm 1) with the chosen regularization parameter after 1000 iterations, and Figure 2 shows the corresponding CLI result (Algorithm 2). In Figure 3, the number of iterations is increased to 10,000, and in Figure 4 to 20,000.

8. Concluding Remarks and Observations

This paper defined algorithms that solve ill-posed inverse problems by an iterative regularization method of Landweber type. The regularization parameter, the number of iterations, was chosen so that the iterate was most consistent with the real solution. The residual method was used to rate the convergence of the last iteration. We observed that the minimum of the discrepancy is extremely unstable with respect to perturbations of the right-hand side of the linear operator equation. The updated Landweber iteration algorithm obtained a good approximate solution compared with the classical Landweber method, which means that our update successfully mitigates this instability in the discrepancy method. The suggested algorithms successfully solve the inverse boundary value problem for the heat conduction equation.

Author Contributions

Data curation, H.K.I.A.-M., M.A., H.A. and E.-S.M.E.-k.; Formal analysis, M.A., H.A. and A.K.; Funding acquisition, H.A. and M.A.; Investigation, H.K.I.A.-M., M.A. and H.A.; Methodology, H.K.I.A.-M., M.A. and E.-S.M.E.-k.; Project administration, M.A. and H.A.; Resources, M.A. and A.K.; Software, H.K.I.A.-M. and M.A.; Visualization, M.A. All authors have read and agreed to the published version of the manuscript.

Funding

The work was supported by Act 211 Government of the Russian Federation, contract No. 02.A03.21.0011. The work was supported by the Ministry of Science and Higher Education of the Russian Federation (government order FENU-2020-0022).

Data Availability Statement

The datasets used to support this study are included within the code on GitHub. We prepared the case study by using MATLAB code. First, the discretization algorithm was applied to obtain the operator and, by using it, we obtained the vector. We then added some noise as explained in the code to obtain the vector. Finally, we computed the SVD for the operator and used it in the MLI and CLI algorithms to obtain approximation solutions [31].

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Daniell, P.J.; Hadamard, J. Lectures on Cauchy's Problem in Linear Partial Differential Equations. Math. Gaz. 1924, 12, 173.
  2. Al-Mahdawi, H.K. Studying the Picard's Method for Solving the Inverse Cauchy Problem for Heat Conductivity Equations. Bull. South Ural State Univ. Ser. Comput. Math. Softw. Eng. 2019, 8, 5–14.
  3. Al-Mahdawi, H.K. Development the Regularization Computing Method for Solving Boundary Value Problem to Heat Equation in the Composite Materials. J. Phys. Conf. Ser. 2021, 1999, 12136.
  4. Al-Mahdawi, H.K. Development the Numerical Method to Solve the Inverse Initial Value Problem for the Thermal Conductivity Equation of Composite Materials. J. Phys. Conf. Ser. 2021, 1879, 32016.
  5. Al-Mahdawi, H.K.; Sidikova, A.I. An Approximate Solution of Fredholm Integral Equation of the First Kind by the Regularization Method with Parallel Computing. Turkish J. Comput. Math. Educ. 2021, 12, 4582–4591.
  6. Tikhonov, A.N. On the Regularization of Ill-Posed Problems. Proc. USSR Acad. Sci. 1963, 153, 49–52.
  7. Lavrentiev, M.M. The Inverse Problem in Potential Theory. Dokl. Akad. Nauk SSSR 1956, 106, 389–390.
  8. Ivanov, V.K. The application of Picard's method to the solution of integral equations of the first kind. Bull. Inst. Politehn. Iasi 1968, 14, 71–78.
  9. Glasko, V.B.; Kulik, N.I.; Shklyarov, I.N.; Tikhonov, A.N. An inverse problem of heat conductivity. Zhurnal Vychislitel'noi Mat. i Mat. Fiz. 1979, 19, 768–774.
  10. Belonosov, A.S.; Shishlenin, M.A. Continuation problem for the parabolic equation with the data on the part of the boundary. Siber. Electron. Math. Rep. 2014, 11, 22–34.
  11. Kabanikhin, S.I.; Hasanov, A.; Penenko, A.V. A gradient descent method for solving an inverse coefficient heat conduction problem. Numer. Anal. Appl. 2008, 1, 34–45.
  12. Yagola, A.G.; Stepanova, I.E.; Van, Y.; Titarenko, V.N. Obratnye zadachi i metody ikh resheniya: Prilozheniya k geofizike (Inverse Problems and Methods for Their Solution: Applications to Geophysics); Binom. Laboratoriya Znanii: Moscow, Russia, 2014.
  13. Kabanikhin, S.I.; Krivorot'ko, O.I.; Shishlenin, M.A. A numerical method for solving an inverse thermoacoustic problem. Numer. Anal. Appl. 2013, 6, 34–39.
  14. Tanana, V.P. On the order-optimality of the projection regularization method in solving inverse problems. Sib. Zhurnal Ind. Mat. 2004, 7, 117–132.
  15. Clason, C.; Nhu, V.H. Bouligand-Levenberg-Marquardt iteration for a non-smooth ill-posed inverse problem. arXiv 2019, arXiv:1902.10596.
  16. Clason, C.; Nhu, V.H. Bouligand-Landweber iteration for a non-smooth ill-posed problem. Numer. Math. 2019, 142, 789–832.
  17. Jin, Q. Landweber-Kaczmarz method in Banach spaces with inexact inner solvers. Inverse Probl. 2016, 32, 104005.
  18. Kaltenbacher, B.; Schöpfer, F.; Schuster, T. Convergence of some iterative methods for the regularization of nonlinear ill-posed problems in Banach spaces. Inverse Probl. 2009, 25, 19.
  19. Schöpfer, F.; Louis, A.K.; Schuster, T. Nonlinear iterative methods for linear ill-posed problems in Banach spaces. Inverse Probl. 2006, 22, 311.
  20. Real, R.; Jin, Q. A revisit on Landweber iteration. Inverse Probl. 2020, 36, 75011.
  21. Li, D.-G.; Fu, J.-L.; Yang, F.; Li, X.-X. Landweber Iterative Regularization Method for Identifying the Initial Value Problem of the Rayleigh–Stokes Equation. Fractal Fract. 2021, 5, 193.
  22. Wang, J. A two-step accelerated Landweber-type iteration regularization algorithm for sparse reconstruction of electrical impedance tomography. Math. Methods Appl. Sci. 2021, 1–12.
  23. Landweber, L. An Iteration Formula for Fredholm Integral Equations of the First Kind. Am. J. Math. 1951, 73, 615.
  24. Cimmino, G. Calcolo approssimato per le soluzioni dei sistemi di equazioni lineari. La Ric. Sci. 1938, 1, 326–333.
  25. Censor, Y.; Gordon, D.; Gordon, R. Component averaging: An efficient iterative parallel algorithm for large and sparse unstructured problems. Parallel Comput. 2001, 27, 777–808.
  26. Natterer, F. Computerized tomography. In The Mathematics of Computerized Tomography; Springer: Berlin/Heidelberg, Germany, 1986; pp. 1–8.
  27. Mesgarani, H.; Azari, Y. Numerical investigation of Fredholm integral equation of the first kind with noisy data. Math. Sci. 2019, 13, 267–268.
  28. Al-Mahdawi, H.K. Solving of an Inverse Boundary Value Problem for the Heat Conduction Equation by Using Lavrentiev Regularization Method. J. Phys. Conf. Ser. 2021, 1715, 12032.
  29. Tanana, V.P.; Vishnyakov, E.Y.; Sidikova, A.I. An approximate solution of a Fredholm integral equation of the first kind by the residual method. Numer. Anal. Appl. 2016, 9, 74–81.
  30. Al-Mahdawi, H.K. Development of a numerical method for solving the inverse Cauchy problem for the heat equation. Bull. South Ural State Univ. Ser. Comput. Math. Softw. Eng. 2019, 8, 22–31.
  31. Al-Mahdawi, H.K. BVP_PDE_Heat; GitHub Inc.: San Francisco, CA, USA, 2022. Available online: https://github.com/hssnkd1/BVP_PDE_Heat/blob/main/Landweber (accessed on 25 May 2022).
Figure 1. Approximation solution by MLI.
Figure 2. Approximation solution by CLI.
Figure 3. Approximation solution with 10,000 iterations: (a) MLI algorithm; (b) CLI algorithm.
Figure 4. Approximation solution with 20,000 iterations: (a) MLI algorithm; (b) CLI algorithm.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
