Article

A Two-Stage Method for Piecewise-Constant Solution for Fredholm Integral Equations of the First Kind

Department of Mathematics, Shantou University, Shantou 515063, China
*
Author to whom correspondence should be addressed.
Mathematics 2017, 5(2), 28; https://doi.org/10.3390/math5020028
Submission received: 18 December 2016 / Revised: 12 May 2017 / Accepted: 16 May 2017 / Published: 22 May 2017

Abstract

A numerical method is proposed for estimating piecewise-constant solutions of Fredholm integral equations of the first kind. Two functionals, namely the weighted total variation (WTV) functional and the simplified Modica-Mortola (MM) functional, are introduced. The solution procedure consists of two stages. In the first stage, the WTV functional is minimized to obtain an approximate solution $f_{\mathrm{TV}}^*$. In the second stage, the simplified MM functional is minimized by the damped Newton (DN) method with $f_{\mathrm{TV}}^*$ as the initial guess to obtain the final result. The numerical implementation is given in detail, and numerical results for two examples are presented to illustrate the efficiency of the proposed approach.

1. Introduction

In many physical problems, the relation between the quantity observed and the quantity to be measured can be formulated as a Fredholm integral equation of the first kind:
$$\int_0^1 k(x,t)\,f(t)\,dt = g(x), \qquad 0 \leq x \leq 1, \tag{1}$$
where the kernel function k and the right-hand side g are known, while f is the unknown to be determined. The Fredholm integral equation of the first kind is ill-posed; see for instance [1], Chapter 2.
In practical applications, there is noise in the right-hand side; therefore, Equation (1) should be revised as:
$$\int_0^1 k(x,t)\,f(t)\,dt = g(x) + \eta(x), \qquad 0 \leq x \leq 1,$$
where η represents the error. The above equation can be written in the operator form:
$$Kf = h, \tag{2}$$

where $(Kf)(x) = \int_0^1 k(x,t)\,f(t)\,dt$, $0 \leq x \leq 1$, and $h = g + \eta$.
Numerical methods for obtaining a reasonable approximate solution to the Fredholm integral equation of the first kind have attracted many researchers, and a rich body of results is available; see, for instance, ([2], Chapter 12) and [3,4,5,6,7]. Due to the ill-posed nature of the problem, numerical solutions are extremely sensitive to perturbations caused by observation and rounding errors. Therefore, regularization is required to obtain a reasonable approximate solution. Many regularization methods have been proposed; see, for instance, ([8], Chapters 2 and 8). The Tikhonov regularization method [9], the truncated singular value decomposition (TSVD) method [10], the modified TSVD (MTSVD) method [11], the Chebyshev interpolation method [12], the collocation method [13,14], the projected Tikhonov regularization method [15], and so on, have been applied to obtain approximate continuous solutions of Equation (2). The total variation (TV) regularization method [16,17,18,19,20,21,22], adaptive TV methods [23,24,25,26,27], the piecewise-polynomial TSVD (PP-TSVD) method [28], and so on, have been applied to obtain approximate piecewise-continuous solutions.
In this paper, we focus on the case where the solution of Equation (1) is piecewise-constant and the possible function values are known, that is:
$$\prod_{p=1}^{m} \big(f(x) - c_p\big) = 0, \qquad x \in [0,1],$$

where $c_1 < c_2 < \cdots < c_m$ are given constants. This kind of problem arises in many applications, for instance barcode reading, where $m = 2$, $c_1 = 0$, $c_2 = 1$, and image restoration, where $m = 256$, $c_p = p - 1$, $p = 1, 2, \ldots, 256$.
This paper is organized as follows. In Section 2, two objective functionals are introduced; one is based on a weighted TV functional to allow the TV regularization to be spatially varying, and the other one is based on a simplified Modica-Mortola (MM) functional to make use of the a priori knowledge of the solution. In Section 3, the implementation of numerical methods for solving Equation (2) is presented in detail. In Section 4, numerical examples are presented to illustrate the effectiveness of the proposed approach. Finally, concluding remarks are given in Section 5.

2. The Objective Functionals

The TV regularization method has been shown to be an effective way to estimate piecewise-constant solutions. It looks for a numerical solution of small total variation, a criterion that favors neither continuous nor discontinuous solutions. The weighted TV regularization scheme [23] is more effective: it allows the TV regularization to be spatially varying, that is, a small weight is used where there is a possible edge and a large weight where there is none.
A commonly-used weighted TV functional is defined by:
$$\mathrm{TV}_{\omega,\beta}(f) = \int_0^1 \omega(x)\sqrt{f'(x)^2 + \beta}\;dx, \tag{3}$$
where $\beta > 0$ is a parameter and $\omega(x) \geq 0$ is a weighting function. An example of $\omega(x)$ is:

$$\omega(x) = \frac{\theta}{|f'(x)| + \theta},$$

where $\theta > 0$ is a parameter; see [23]. We can see that $0 < \omega(x) \leq 1$, and $\omega(x)$ decreases as $|f'(x)|$ increases. That is, if $|f'(x)|$ is large, the corresponding weight is small. The above weighting function is not smooth, so we modify it as:

$$\omega(x) = \frac{\sqrt{\beta+\theta}}{\sqrt{f'(x)^2 + \beta + \theta}} \tag{4}$$

so that $\omega(x)$ is smooth and $0 < \omega(x) \leq 1$.
One can use the weighted TV functional for Equation (2) to obtain a piecewise-constant solution. In this paper, we consider solving the following minimization problem:
$$\min_f\;\Phi_{\mathrm{TV}}(f) = \frac{1}{2}J(f) + \alpha\,\mathrm{TV}_{\omega,\beta}(f), \tag{5}$$

where:

$$J(f) = \int_0^1 \big|(Kf)(x) - h(x)\big|^2\,dx$$

and $\alpha > 0$ is a small regularization parameter.
To impose the constraint $\prod_{p=1}^{m}(f(x) - c_p) = 0$ on the numerical solutions, we consider the following simplified MM functional:

$$\Phi_M(f) = \frac{1}{2}J(f) + \frac{\gamma}{2}M(f), \tag{6}$$

where $\gamma > 0$ is a regularization parameter and:

$$M(f) = \int_0^1 \prod_{p=1}^{m}\big(f(x) - c_p\big)^2\,dx.$$

Note that the MM functional, which is given by:

$$\frac{1}{2}J(f) + \frac{\alpha}{2}\int_0^1 \big(f'(x)\big)^2\,dx + \frac{\gamma}{2}M(f)$$
(see for instance [29]) is a useful tool for solving constant constraint problems, e.g., Bogosel and Oudet applied the functional to analyze a spectral problem with the perimeter constraint [30].
It must be pointed out that the MM functional $M(f)$ is not convex and has many local minimizers. In general, we can only obtain a local minimizer of $\Phi_M(f)$. In other words, the numerical solution obtained by minimizing $\Phi_M(f)$ depends on the initial guess. This observation motivates us to estimate piecewise-constant solutions of (2) by a two-stage method: first compute the minimizer $f_{\mathrm{TV}}^*$ of $\Phi_{\mathrm{TV}}(f)$, and then use $f_{\mathrm{TV}}^*$ as the initial guess in the minimization of $\Phi_M(f)$ to obtain the final result.

2.1. Discretization

To solve the minimization problems (5) and (6) numerically, we need to discretize the relevant functionals. We use the midpoint quadrature rule and the central divided difference to discretize integrals and first-order derivatives, respectively. Let the interval $[0,1]$ be partitioned uniformly into $n$ subintervals $[(i-1)\Delta x,\, i\Delta x]$, $i = 1, 2, \ldots, n$, where $\Delta x = 1/n$; the quadrature points are then $(i-\tfrac{1}{2})\Delta x$, $i = 1, 2, \ldots, n$. Let $f = \big[f((i-\tfrac{1}{2})\Delta x)\big]_{i=1}^{n}$, $h = \big[h((i-\tfrac{1}{2})\Delta x)\big]_{i=1}^{n}$, and $K = \Delta x\,\big[k((i-\tfrac{1}{2})\Delta x,\,(j-\tfrac{1}{2})\Delta x)\big]_{i,j=1}^{n}$. Then, the discretization form of (2) is given by:

$$Kf = h.$$
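As an illustration, the midpoint-rule discretization above can be sketched as follows. This is a minimal sketch, not the authors' code; the arguments `kernel` and `f_exact` are placeholder callables introduced here for demonstration:

```python
import numpy as np

def discretize(kernel, f_exact, n=128):
    """Midpoint-rule discretization of the first-kind integral equation on [0, 1]."""
    dx = 1.0 / n
    # Quadrature points t_i = (i - 1/2) * dx, i = 1, ..., n.
    t = (np.arange(1, n + 1) - 0.5) * dx
    # K[i, j] = dx * k(t_i, t_j); f and h are samples at the quadrature points.
    K = dx * kernel(t[:, None], t[None, :])
    f = f_exact(t)
    h = K @ f  # noise-free right-hand side h = K f
    return K, f, h
```

For instance, with $k(x,t) = xt$ and $f \equiv 1$, the product $Kf$ reproduces $\int_0^1 xt\,dt = x/2$ exactly, since the midpoint rule is exact for integrands linear in $t$.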
The discretizations of the functionals $J(f)$, $\mathrm{TV}_{\omega,\beta}(f)$ and $M(f)$ are as follows.
(1)
The discretization of J ( f ) is given by:
$$J(f) = \|Kf - h\|^2,$$

where $\|\cdot\|$ denotes the vector two-norm.
(2)
Let $e_i$ be the $i$-th column of the $n \times n$ identity matrix, and let:

$$d_i = e_{i+1} - e_i, \qquad i = 1, 2, \ldots, n-1,$$

and:

$$D = [d_1, d_2, \ldots, d_{n-1}].$$

We approximate the weighted TV functional $\mathrm{TV}_{\omega,\beta}(f)$ (cf. (3)) by:

$$\mathrm{TV}_{\omega,\beta}(f) = \sum_{i=1}^{n-1} \omega_i\,\psi(d_i^Tf),$$

where $\omega_i = \omega((i-1/2)\Delta x)$ and $\psi(x) = \sqrt{x^2+\beta}$ (we replace $\beta(\Delta x)^2$ by $\beta$ for the sake of simplicity). Obviously, if the weighting factors are set to $\omega_i = 1$, $i = 1, 2, \ldots, n-1$, the weighted TV functional reduces to the traditional TV functional. In this paper, we choose smooth factors by approximating (4):

$$\omega_i = \frac{\sqrt{\beta+\theta}}{\sqrt{(d_i^Tf)^2 + \beta + \theta}}, \qquad i = 1, 2, \ldots, n-1. \tag{7}$$
(3)
As for $M(f)$, we simply approximate it by:

$$M(f) = \sum_{i=1}^{n}\prod_{p=1}^{m}(f_i - c_p)^2.$$

Here, we omit the factor $\Delta x$ for the sake of simplicity.
Therefore, the discretization of $\Phi_{\mathrm{TV}}(f)$ is:

$$\Phi_{\mathrm{TV}}(f) = \frac{1}{2}\|Kf - h\|^2 + \alpha\sum_{i=1}^{n-1}\omega_i\,\psi(d_i^Tf) \tag{8}$$

and the discretization of $\Phi_M(f)$ is:

$$\Phi_M(f) = \frac{1}{2}\|Kf - h\|^2 + \frac{\gamma}{2}\sum_{i=1}^{n}\prod_{p=1}^{m}(f_i - c_p)^2. \tag{9}$$
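For reference, the two discrete objective functions (8) and (9) can be evaluated directly. The sketch below is illustrative, not the authors' code; it treats `omega` as a fixed weight vector (in the WTV method the weights are recomputed from the current iterate via (7)):

```python
import numpy as np

def phi_tv(f, K, h, alpha, beta, omega=None):
    # Discrete WTV objective (8): 0.5*||Kf - h||^2 + alpha * sum_i omega_i * sqrt((f_{i+1}-f_i)^2 + beta)
    d = np.diff(f)                       # d_i^T f = f_{i+1} - f_i
    if omega is None:
        omega = np.ones_like(d)          # omega_i = 1 recovers the plain TV functional
    return 0.5 * np.sum((K @ f - h) ** 2) + alpha * np.sum(omega * np.sqrt(d ** 2 + beta))

def phi_m(f, K, h, gamma, c):
    # Discrete simplified MM objective (9): 0.5*||Kf - h||^2 + (gamma/2) * sum_i prod_p (f_i - c_p)^2
    c = np.asarray(c, dtype=float)
    penalty = np.sum(np.prod((f[:, None] - c[None, :]) ** 2, axis=1))
    return 0.5 * np.sum((K @ f - h) ** 2) + 0.5 * gamma * penalty
```

The MM penalty vanishes exactly when every $f_i$ equals one of the admissible values $c_p$, which is how the constraint is enforced.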

3. Numerical Implementation

In this section, the implementation of the damped Newton (DN) method for the minimization of the functions $\Phi_{\mathrm{TV}}(f)$ and $\Phi_M(f)$ (cf. (8) and (9)) is given. We first derive the gradients and the Hessians of $J(f)$, $\mathrm{TV}_{\omega,\beta}(f)$ and $M(f)$.
Lemma 1.
Let $\beta$ and $\theta$ be positive constants, and define:

$$\phi(x) = \frac{1}{\sqrt{x^2+\beta}}, \qquad \rho(x) = \frac{1}{\sqrt{x^2+\beta+\theta}}. \tag{10}$$
(1) The gradient and the Hessian of $J(f) = \|Kf - h\|^2$ are given by:

$$\mathrm{grad}\,J(f) = 2K^T(Kf - h) \tag{11}$$

and:

$$\mathrm{Hess}\,J(f) = 2K^TK, \tag{12}$$
respectively.
(2) Let $\omega_i = 1$, $i = 1, 2, \ldots, n-1$, and let $\phi(x)$ be defined by (10). Let $\mathrm{diag}(\phi(d_1^Tf), \ldots, \phi(d_{n-1}^Tf))$ be denoted by $\mathrm{diag}(\phi(D^Tf))$. Then, the gradient and the Hessian of $\mathrm{TV}_{\omega,\beta}(f)$ are given by:

$$\mathrm{grad}\,\mathrm{TV}_{\omega,\beta}(f) = L(f)\,f \tag{13}$$

and:

$$\mathrm{Hess}\,\mathrm{TV}_{\omega,\beta}(f) = L(f) + L'(f)\,f, \tag{14}$$

respectively, where:

$$L(f) = D\,\mathrm{diag}\big(\phi(D^Tf)\big)\,D^T \tag{15}$$

and:

$$L'(f)\,f = -D\,\big(\mathrm{diag}(D^Tf)\big)^2\,\mathrm{diag}\big(\phi(D^Tf)\big)^3\,D^T. \tag{16}$$
Here, we use the symbols $L(f)$ and $L'(f)\,f$ in the same way as they have been used in [8].
(3) Let $\omega_i$, $i = 1, 2, \ldots, n-1$, be given by (7), and let $\phi(x)$ and $\rho(x)$ be defined by (10). Let $\mathrm{diag}(\rho(d_1^Tf), \ldots, \rho(d_{n-1}^Tf))$ be denoted by $\mathrm{diag}(\rho(D^Tf))$. Then, the gradient and the Hessian of $\mathrm{TV}_{\omega,\beta}(f)$ are given by:

$$\mathrm{grad}\,\mathrm{TV}_{\omega,\beta}(f) = L(f)\,f \tag{17}$$

and:

$$\mathrm{Hess}\,\mathrm{TV}_{\omega,\beta}(f) = L(f) + L'(f)\,f,$$

respectively, where:

$$L(f) = \tilde{\theta}\,D\,\mathrm{diag}\big(\phi(D^Tf)\big)\,\mathrm{diag}\big(\rho(D^Tf)\big)^3\,D^T \tag{18}$$

and:

$$L'(f)\,f = -\tilde{\theta}\,D\,\big(\mathrm{diag}(D^Tf)\big)^2\,\Big(\mathrm{diag}\big(\phi(D^Tf)\big)^2 + 3\,\mathrm{diag}\big(\rho(D^Tf)\big)^2\Big)\,\mathrm{diag}\big(\phi(D^Tf)\big)\,\mathrm{diag}\big(\rho(D^Tf)\big)^3\,D^T \tag{19}$$

with $\tilde{\theta} = \theta\sqrt{\beta+\theta}$.
(4) The gradient of $M(f)$ is given by:

$$\mathrm{grad}\,M(f) = \left[\frac{\partial M}{\partial f_1}, \frac{\partial M}{\partial f_2}, \ldots, \frac{\partial M}{\partial f_n}\right]^T, \tag{20}$$

where the partial derivatives $\partial M/\partial f_i$, $i = 1, 2, \ldots, n$, are given by:

$$\frac{\partial M}{\partial f_i} = \begin{cases} \displaystyle 2\prod_{p=1}^{m}(f_i - c_p)^2\sum_{p=1}^{m}\frac{1}{f_i - c_p}, & \displaystyle\prod_{p=1}^{m}(f_i - c_p) \neq 0,\\[6pt] 0, & \displaystyle\prod_{p=1}^{m}(f_i - c_p) = 0. \end{cases} \tag{21}$$

The Hessian of $M(f)$ is a diagonal matrix given by:

$$\mathrm{Hess}\,M(f) = \mathrm{diag}\left(\frac{\partial^2 M}{\partial f_1^2}, \frac{\partial^2 M}{\partial f_2^2}, \ldots, \frac{\partial^2 M}{\partial f_n^2}\right), \tag{22}$$

where:

$$\frac{\partial^2 M}{\partial f_i^2} = \begin{cases} \displaystyle 2\prod_{p=1}^{m}(f_i - c_p)^2\left[2\left(\sum_{p=1}^{m}\frac{1}{f_i - c_p}\right)^2 - \sum_{p=1}^{m}\frac{1}{(f_i - c_p)^2}\right], & \displaystyle\prod_{p=1}^{m}(f_i - c_p) \neq 0,\\[6pt] \displaystyle 2\prod_{p=1,\,p\neq p_0}^{m}(f_i - c_p)^2, & f_i = c_{p_0}. \end{cases} \tag{23}$$
Proof. 
Results (1) and (2) of the lemma are well known; see [8], Section 8.2.
(3) Let $\zeta(x) = \sqrt{x^2+\beta}\,\big/\sqrt{x^2+\beta+\theta}$, so that $\mathrm{TV}_{\omega,\beta}(f) = \sqrt{\beta+\theta}\,\sum_{i=1}^{n-1}\zeta(d_i^Tf)$; then we have:

$$\zeta'(x) = \frac{\theta x}{\sqrt{x^2+\beta}\,(x^2+\beta+\theta)^{3/2}} = \theta x\,\phi(x)\,\rho(x)^3, \qquad \zeta''(x) = \theta\,\phi(x)\,\rho(x)^3 - \theta x^2\big(\phi(x)^2 + 3\rho(x)^2\big)\,\phi(x)\,\rho(x)^3,$$
where $\phi(x)$ and $\rho(x)$ are defined by (10). Let $\tilde{\theta} = \theta\sqrt{\beta+\theta}$; it can be easily checked that for any $v$,

$$\frac{d}{d\tau}\mathrm{TV}_{\omega,\beta}(f+\tau v)\Big|_{\tau=0} = \sum_{i=1}^{n-1}\tilde{\theta}\,(d_i^Tf)\,\phi(d_i^Tf)\,\rho(d_i^Tf)^3\,d_i^Tv = \Big\langle \tilde{\theta}\,D\,\mathrm{diag}\big(\phi(D^Tf)\big)\,\mathrm{diag}\big(\rho(D^Tf)\big)^3\,D^Tf,\; v\Big\rangle.$$
It follows that:
$$\mathrm{grad}\,\mathrm{TV}_{\omega,\beta}(f) = L(f)\,f,$$
where $L(f)$ is given by (18).
To obtain the Hessian of $\mathrm{TV}_{\omega,\beta}$, we consider $\mathrm{TV}_{\omega,\beta}(f+\tau v+\xi w)$. From the expression of $\zeta''(x)$, we have that for any $v$, $w$:

$$\frac{\partial^2}{\partial\tau\,\partial\xi}\mathrm{TV}_{\omega,\beta}(f+\tau v+\xi w)\Big|_{\tau,\xi=0} = \sum_{i=1}^{n-1}\tilde{\theta}\,\phi(d_i^Tf)\,\rho(d_i^Tf)^3\,(d_i^Tw)(d_i^Tv) - \sum_{i=1}^{n-1}\tilde{\theta}\,(d_i^Tf)^2\big(\phi(d_i^Tf)^2 + 3\rho(d_i^Tf)^2\big)\,\phi(d_i^Tf)\,\rho(d_i^Tf)^3\,(d_i^Tw)(d_i^Tv) = \big\langle \big(L(f) + L'(f)\,f\big)w,\; v\big\rangle,$$
where $L(f)$ and $L'(f)\,f$ are given by (18) and (19), respectively. Consequently:
$$\mathrm{Hess}\,\mathrm{TV}_{\omega,\beta}(f) = L(f) + L'(f)\,f.$$
(4) It is easy to see that the partial derivatives $\partial M/\partial f_i$, $i = 1, 2, \ldots, n$, are given by:

$$\frac{\partial M}{\partial f_i} = 2\prod_{p=1}^{m}(f_i - c_p)\sum_{l=1}^{m}\;\prod_{p=1,\,p\neq l}^{m}(f_i - c_p),$$

which can be rewritten in the form of (21). For the second-order partial derivatives, we have:

$$\frac{\partial^2 M}{\partial f_i^2} = \begin{cases} \displaystyle 2\prod_{p=1}^{m}(f_i - c_p)^2\left[2\left(\sum_{p=1}^{m}\frac{1}{f_i - c_p}\right)^2 - \sum_{p=1}^{m}\frac{1}{(f_i - c_p)^2}\right], & \displaystyle\prod_{p=1}^{m}(f_i - c_p) \neq 0,\\[6pt] \displaystyle 2\prod_{p=1,\,p\neq p_0}^{m}(f_i - c_p)^2, & f_i = c_{p_0}, \end{cases}$$

and

$$\frac{\partial^2 M}{\partial f_i\,\partial f_j} = 0, \qquad i \neq j.$$

Therefore, the Hessian of $M$ is a diagonal matrix given by:

$$\mathrm{Hess}\,M(f) = \mathrm{diag}\left(\frac{\partial^2 M}{\partial f_1^2}, \frac{\partial^2 M}{\partial f_2^2}, \ldots, \frac{\partial^2 M}{\partial f_n^2}\right),$$

where $\partial^2 M/\partial f_i^2$, $i = 1, 2, \ldots, n$, are given by (23). ☐
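The closed-form gradient of Lemma 1(4) can be checked numerically against central finite differences. The following is a hypothetical verification script, not part of the paper; it assumes the nondegenerate branch of (21), i.e., no $f_i$ coincides with any $c_p$:

```python
import numpy as np

def M(f, c):
    # M(f) = sum_i prod_p (f_i - c_p)^2
    return np.sum(np.prod((f[:, None] - c[None, :]) ** 2, axis=1))

def grad_M(f, c):
    # Lemma 1(4): dM/df_i = 2 * prod_p (f_i - c_p)^2 * sum_p 1/(f_i - c_p),
    # valid on the branch where prod_p (f_i - c_p) != 0.
    diff = f[:, None] - c[None, :]
    return 2.0 * np.prod(diff ** 2, axis=1) * np.sum(1.0 / diff, axis=1)

def fd_grad(fun, f, c, eps=1e-6):
    # Central finite-difference gradient for comparison.
    g = np.zeros_like(f)
    for i in range(f.size):
        e = np.zeros_like(f)
        e[i] = eps
        g[i] = (fun(f + e, c) - fun(f - e, c)) / (2 * eps)
    return g
```

Because the Hessian of $M$ is diagonal (the summands decouple over $i$), the same componentwise check applies to (23).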
From Formulas (11)–(23), we can obtain the gradients and the Hessians of $\Phi_{\mathrm{TV}}(f)$ and $\Phi_M(f)$. We introduce the following DN method for the minimization of $\Psi(f) = \Phi_{\mathrm{TV}}(f)$ or $\Psi(f) = \Phi_M(f)$.
Algorithm 1: Damped Newton (DN) method for the minimization of $\Psi(f)$.
  1. Input an initial guess $f_0$;
  2. $\nu = 0$;
  3. Begin iterations
  4.   Compute $g = \mathrm{grad}\,\Psi(f_\nu)$ and $H = \mathrm{Hess}\,\Psi(f_\nu)$;
  5.   Solve $Hs = -g$ to obtain $s$;
  6.   Obtain the minimum point of the one-dimensional nonnegative function $\Psi(f_\nu + \tau s)$ by using line search to get $\tau_\nu = \arg\min_\tau \Psi(f_\nu + \tau s)$;
  7.   Update the approximate solution: $f_{\nu+1} = f_\nu + \tau_\nu s$;
  8.   Check the termination condition: if $\|f_{\nu+1} - f_\nu\| / \|f_{\nu+1}\| < \epsilon$, break;
  9.   $\nu = \nu + 1$;
  10. End iterations
  11. Output $f_{\nu+1}$.
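In code, Algorithm 1 might look as follows. This is an illustrative sketch, not the authors' implementation; in particular, the exact line search of Step 6 is replaced here by a simple grid scan over the step size:

```python
import numpy as np

def damped_newton(f0, grad, hess, obj, eps=1e-5, max_iter=100):
    """Damped Newton iteration: s solves H s = -g, then a line search picks the step."""
    f = f0.astype(float)
    for _ in range(max_iter):
        g, H = grad(f), hess(f)
        s = np.linalg.solve(H, -g)                    # Newton direction
        # Crude stand-in for the exact line search of Step 6: scan tau on a grid.
        taus = np.linspace(0.0, 2.0, 201)
        tau = taus[np.argmin([obj(f + t * s) for t in taus])]
        f_new = f + tau * s
        # Termination test of Step 8.
        if np.linalg.norm(f_new - f) < eps * np.linalg.norm(f_new):
            return f_new
        f = f_new
    return f
```

On a strictly convex quadratic the Newton step with $\tau = 1$ lands on the minimizer in one iteration, which gives a quick sanity check of the implementation.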
Remark 1.
(1) Since we carry out an exact line search (Step 6) in the iterative process, the DN method converges to a local minimum point if the Hessian of $\Psi(f)$ is invertible. In the case that $\mathrm{Hess}\,\Psi(f)$ is singular, modification is required; in our numerical tests in Section 4, no modification was needed. It must be pointed out that for large-scale systems, the most expensive part of the algorithm is solving $Hs = -g$ to obtain $s$. If the matrix $H$ has a Toeplitz structure, then the conjugate gradient method with the fast Fourier transform (FFT) can be applied to solve the system efficiently; see [31] for details.
(2) For the minimization of $\Phi_{\mathrm{TV}}(f)$, if we approximate the Hessian of $\Phi_{\mathrm{TV}}(f)$ by the positive definite matrix $K^TK + \alpha L(f)$ (assuming $K\mathbf{1} \neq 0$, where $\mathbf{1}$ is the vector with all elements equal to one) and set the step size $\tau_\nu$ to 1, we get the Gauss–Newton method. If instead we obtain $\tau_\nu$ by a line search, we get a modified Gauss–Newton (MGN) method. In this paper, we also consider the MGN method for the minimization of $\Phi_{\mathrm{TV}}(f)$. Obviously, the MGN method converges to a local minimum point.
To end this section, we state the overall process for solving Equation (2). We first obtain the minimizer $f_{\mathrm{TV}}^*$ of $\Phi_{\mathrm{TV}}(f)$, and then we obtain a minimizer $f_M^*$ of $\Phi_M(f)$ by using the DN method with $f_{\mathrm{TV}}^*$ as the initial guess. The approach is summarized as Algorithm 2.
Algorithm 2: Weighted total variation Modica-Mortola (WTVMM) method for estimating a piecewise-constant solution of Equation (2).
  1. Obtain the minimizer $f_{\mathrm{TV}}^*$ of $\Phi_{\mathrm{TV}}(f)$ by using the MGN method or the DN method;
  2. Obtain a minimizer $f_M^*$ of $\Phi_M(f)$ by using the DN method with $f_{\mathrm{TV}}^*$ as the initial guess;
  3. Output $f_M^*$.
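As a toy illustration of the two-stage idea (not the experiments of Section 4), the sketch below runs plain gradient descent on the discretized TV and MM objectives for a small denoising problem with $K = I$, $m = 2$, $c_1 = 0$, $c_2 = 1$. The problem size, step size and parameter values here are illustrative assumptions:

```python
import numpy as np

# Toy problem: K = I (denoising), exact solution piecewise-constant with values {0, 1}.
rng = np.random.default_rng(0)
n = 16
t = (np.arange(1, n + 1) - 0.5) / n
f_exact = (t >= 0.5).astype(float)
h = f_exact + 0.01 * rng.standard_normal(n)      # noisy right-hand side

alpha, beta, gamma = 1e-3, 1e-6, 1.0

def grad_phi_tv(f):
    # Gradient of 0.5*||f - h||^2 + alpha * sum_i sqrt((f_{i+1} - f_i)^2 + beta)
    d = np.diff(f)
    w = d / np.sqrt(d ** 2 + beta)
    g = f - h
    g[:-1] -= alpha * w
    g[1:] += alpha * w
    return g

def grad_phi_m(f):
    # Gradient of 0.5*||f - h||^2 + (gamma/2) * sum_i f_i^2 (f_i - 1)^2   (c = {0, 1})
    return (f - h) + gamma * f * (f - 1.0) * (2.0 * f - 1.0)

# Stage 1: minimize the TV objective from a flat initial guess.
f = 0.5 * np.ones(n)
for _ in range(2000):
    f = f - 0.1 * grad_phi_tv(f)
f_tv = f.copy()

# Stage 2: minimize the MM objective, started from the stage-1 result.
for _ in range(2000):
    f = f - 0.1 * grad_phi_m(f)
f_m = f
```

Starting stage 2 from the stage-1 result rather than from the flat guess is the point of the two-stage design: the MM objective is nonconvex, and the stage-1 solution places the iterate in the basin of the desired local minimizer.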

4. Numerical Examples

In this section, we present numerical results for two examples to illustrate that our approach is indeed capable of estimating piecewise-constant solutions to Equation (2).
Besides the numerical solutions obtained by using our approach, approximate solutions obtained by using the TV regularization method and the weighted TV regularization method with weighting factors $\omega_i = \sqrt{\beta+\theta}\big/\sqrt{(f_{i+1}-f_i)^2+\beta+\theta}$ (cf. (7)) are presented. In the following tables and figures, we use the symbols WTVMM (weighted total variation Modica-Mortola), TV and WTV to denote these three methods.
We set the number of subintervals to $n = 128$. All tests were carried out using MATLAB, and the termination parameter is set to $\epsilon = 10^{-5}$; see Step 8 of Algorithm 1. In the tables, the column "$(\alpha, \beta, \theta, \gamma)$" gives the relevant parameters; the column "#it" gives the number of iterations (for the WTVMM method, $i_1 + i_2$ denotes that the WTV method and the MM method require $i_1$ and $i_2$ iterations, respectively). The column "error" gives the relative error defined by:

$$\mathrm{error} = \frac{\|f_{\mathrm{exact}} - f_{\mathrm{app}}\|_2}{\|f_{\mathrm{exact}}\|_2},$$

where $f_{\mathrm{exact}}$ and $f_{\mathrm{app}}$ are the vectors of the exact solution and the approximate solution at the quadrature points, respectively. Here, one iteration refers to Steps 4–8 of Algorithm 1.
There are several methods for choosing the regularization parameter, e.g., the discrepancy principle, the generalized discrepancy principle, generalized cross-validation, the L-curve and the normalized cumulative periodogram; see, for instance, [32,33], ([1], Chapters 5–6), ([8], Chapter 7), ([34], Chapters 5, 6 and 9). However, we choose the parameters by experiment, since there are two additional parameters in the regularization term $\mathrm{TV}_{\omega,\beta}(f)$. We consider the following rules in choosing the parameters:
(1)
Since we use $\sqrt{x^2+\beta}$ as an approximation of $|x|$, we set $\beta$ to a very small positive number. We note that if $\beta$ is significantly larger than $x^2$, then:

$$\sqrt{x^2+\beta} \approx \sqrt{\beta} + \frac{1}{2\sqrt{\beta}}\,x^2.$$

As a result, the regularization term $\sqrt{x^2+\beta}$ leads to an $H^1$-like regularization.
(2)
If $\theta$ is much larger than $x^2$, then the weighting function $\omega(x)$ is close to one, i.e.,

$$\omega(x) = \frac{\sqrt{\beta+\theta}}{\sqrt{x^2+\beta+\theta}} \approx 1.$$

It follows that the weighted TV regularization behaves about the same as the traditional TV regularization. Therefore, we should not choose a large value for $\theta$.
(3)
We choose the value of $\alpha$ with the help of the discrepancy principle and observe whether the numerical solution is nearly piecewise-constant. After choosing a value for $\alpha$, we consider adjusting $\beta$ and $\theta$: if the numerical solution is over-smoothed, we choose a smaller value for $\beta$ or $\theta$; on the other hand, if the numerical solution is oscillatory, we choose a larger value for $\beta$ or $\theta$.
We have tried both the DN method and the MGN method for the minimization of the functions corresponding to the TV functional ($\omega_i = 1$) and the WTV functional (the weighting factors are given by (7)) in our numerical tests. Numerical results show that the DN method performs better when $\omega_i = 1$, while the MGN method performs better when $\omega_i$ is given by (7). In the following, we show the numerical results obtained by the DN method for the function corresponding to the TV functional and by the MGN method for the function corresponding to the WTV functional.
Example 1.
The kernel of Equation (2) is given by:
$$k(x,t) = \frac{1}{\sigma\sqrt{2\pi}}\,e^{-\frac{(x-t)^2}{2\sigma^2}},$$

where $\sigma = 0.05$. In this case, the condition number of the matrix $K$ is $7.8654 \times 10^{18}$. The right-hand side $g(x)$ is obtained by setting the exact solution to:

$$f_{\mathrm{exact}}(t) = \begin{cases} c_1, & t \in [0, 0.2) \cup [0.4, 0.6) \cup [0.8, 1],\\ c_2, & t \in [0.6, 0.8),\\ c_3, & t \in [0.2, 0.4), \end{cases}$$

where $c_1 = 1$, $c_2 = 2$, $c_3 = 3$. Two noisy right-hand sides, which contain 1% and 10% of white noise, respectively, are tested.
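The setup of Example 1 can be reproduced as follows. This is a sketch under stated assumptions: the half-open interval convention at the breakpoints is an assumption, and the computed condition number depends on $n$ and floating-point details, though it is in any case astronomically large, consistent with the ill-posedness discussed in Section 1:

```python
import numpy as np

def example1(n=128, sigma=0.05):
    dx = 1.0 / n
    t = (np.arange(1, n + 1) - 0.5) * dx
    # Midpoint-rule discretization of the Gaussian kernel of Example 1.
    K = dx * np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    # Exact piecewise-constant solution with c1 = 1, c2 = 2, c3 = 3.
    f = np.ones(n)
    f[(t >= 0.2) & (t < 0.4)] = 3.0
    f[(t >= 0.6) & (t < 0.8)] = 2.0
    g = K @ f                                # noise-free right-hand side
    return K, f, g

K, f_exact, g = example1()
cond_K = np.linalg.cond(K)  # enormous: the discretized problem is severely ill-conditioned
```

Adding white noise to `g` at the 1% or 10% level then produces the two noisy right-hand sides used in the tests.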
Since the average of $c_1$, $c_2$ and $c_3$ is two, we choose $f_0 = (2, 2, \ldots, 2)^T$ as the initial guess for the minimization of the functions corresponding to the TV and WTV functionals, which is not too far from the exact solution. The exact solution and the right-hand side are shown in Figure 1a,b, and the two noisy right-hand sides are shown in Figure 1c,d, respectively. Numerical solutions obtained by the different methods, as well as the point-wise errors of all numerical solutions, are shown in Figure 2. The relevant parameters, the numbers of iterations and the relative errors are given in Table 1 and Table 2.
We have also tried a one-stage method for solving Example 1. The best numerical solution we obtained by minimizing the function $\frac{1}{2}J(f) + \alpha\,\mathrm{TV}_{1,\beta}(f) + \frac{\gamma}{2}M(f)$ for Example 1 is shown in Figure 3. We observe from Figure 2 and Figure 3 that the numerical solution obtained via the WTVMM method is the best.
We make the following remarks based on the above numerical results.
(1)
For the WTVMM method, the main cost lies in obtaining the numerical solution $f_{\mathrm{TV}}^*$, i.e., in the minimization of $\Phi_{\mathrm{TV}}(f)$ defined by (8).
(2)
The numerical solutions obtained by using the WTV method are better than those obtained by the TV method. The numerical solutions obtained by using the WTVMM method can be quite accurate, as can be clearly seen from the relative errors in Table 1 and Table 2, the numerical solutions in Figure 2c and the point-wise errors in Figure 2d.
(3)
If the positions of the break points in the solution can be identified in the first stage, the approximate solution obtained by the WTVMM method can be very accurate. On the other hand, since the local minimizer of $\Phi_M$ obtained by the DN method is sensitive to the initial guess, we may not be able to obtain a good numerical solution if the approximate solution obtained in the first stage does not identify the break points well.
To demonstrate the influence of the parameter values on the numerical solutions, we present the relative errors of the numerical solutions of the three methods for a considerably wider set of parameter combinations $(\beta, \theta)$ in Table 3 and Table 4. We can see from the tables that if the values of the parameters are not too far from the optimal ones, we obtain satisfactory numerical solutions. Both $\beta$ and $\theta$ affect the quality of the numerical solutions, and a correct choice of $\beta$ seems more important (see the column corresponding to $\beta = 2.5 \times 10^{-3}$ in Table 3 and the one for $\beta = 2.5 \times 10^{-2}$ in Table 4). Moreover, satisfactory numerical solutions can be obtained with several different parameter combinations.
Example 2.
(Barcode reading) The Fredholm integral equation can serve as a good approximation of the blurring inside a barcode reader ([1], p. 135). The kernel of the equation is given by:
$$k(x,t) = e^{-\frac{(x-t)^2}{\sigma^2}}.$$
In our numerical tests, we choose σ = 0.01 . The intensity of the printed barcode and the exact right-hand side are shown in Figure 4a,b, and two noisy right-hand sides are shown in Figure 4c,d, respectively.
Since the average of $c_1$ and $c_2$ is 0.5, we choose $f_0 = (0.5, 0.5, \ldots, 0.5)^T$ as the initial guess for the minimization of the functions corresponding to the TV and WTV functionals. Numerical solutions obtained by the different methods, as well as the point-wise errors of all numerical solutions, are shown in Figure 5. The relevant parameters, the numbers of iterations and the relative errors are given in Table 5 and Table 6.
Again, from Figure 5 and Table 5 and Table 6, one can observe that the numerical solutions obtained by using the WTVMM method are much better than those obtained by using the TV and WTV methods. In fact, we can obtain an approximate barcode of very high quality by using the WTVMM method.

5. Concluding Remarks

We have presented a two-stage numerical method for estimating piecewise-constant solutions of Fredholm integral equations of the first kind. The main work of the two-stage method is the minimization of $\Phi_{\mathrm{TV}}(f)$, which may require many iterations. Numerical results showed that if the relevant parameters are chosen suitably, the proposed method obtains satisfactory numerical solutions. In future work, we will study methods for choosing the regularization parameters, as well as efficient algorithms for the relevant minimization problems of weighted total variation (WTV) methods, including iterative schemes for parameter determination.

Acknowledgments

We thank the anonymous referees for carefully reading the manuscript and providing valuable comments and suggestions, which helped us to improve the contents and the presentation of the paper. The research was supported by the NSF of China No. 11271238.

Author Contributions

F.R. Lin put forward basic ideas and deduced theoretical results; S.W. Yang designed and carried out numerical experiments; F.R. Lin and S.W. Yang analyzed numerical data; S.W. Yang wrote a draft and F.R. Lin wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hansen, P.C. Discrete Inverse Problems: Insight and Algorithms; SIAM: Philadelphia, PA, USA, 2010.
  2. Delves, L.M.; Mohamed, J.L. Computational Methods for Integral Equations; Cambridge University Press: Cambridge, UK, 1985.
  3. Dehghan, M.; Saadatmandi, A. Chebyshev finite difference method for Fredholm integro-differential equation. Int. J. Comput. Math. 2008, 85, 123–130.
  4. Ghasemi, M.; Babolian, E.; Kajani, M.T. Numerical solution of linear Fredholm integral equations using sine-cosine wavelets. Int. J. Comput. Math. 2007, 84, 979–987.
  5. Koshev, N.; Beilina, L. An adaptive finite element method for Fredholm integral equations of the first kind and its verification on experimental data. CEJM 2013, 11, 1489–1509.
  6. Koshev, N.; Beilina, L. A posteriori error estimates for Fredholm integral equations of the first kind. In Applied Inverse Problems; Springer Proceedings in Mathematics and Statistics; Springer: New York, NY, USA, 2013; Volume 48, pp. 75–93.
  7. Zhang, Y.; Lukyanenko, D.V.; Yagola, A.G. Using Lagrange principle for solving linear ill-posed problems with a priori information. Numer. Methods Progr. 2013, 14, 468–482. (In Russian)
  8. Vogel, C.R. Computational Methods for Inverse Problems; SIAM: Philadelphia, PA, USA, 2002.
  9. Tikhonov, A.N. Regularization of incorrectly posed problems. Sov. Math. Dokl. 1964, 4, 1624–1627.
  10. Hansen, P.C. The truncated SVD as a method for regularization. BIT Numer. Math. 1987, 27, 534–553.
  11. Hansen, P.C.; Sekii, T.; Shibahashi, H. The modified truncated SVD method for regularization in general form. SIAM J. Sci. Stat. Comput. 1992, 13, 1142–1150.
  12. Rashed, M.T. Numerical solutions of the integral equations of the first kind. Appl. Math. Comput. 2003, 145, 413–420.
  13. Lin, S.; Cao, F.; Xu, Z. A convergence rate for approximate solutions of Fredholm integral equations of the first kind. Positivity 2012, 16, 641–652.
  14. Wang, K.; Wang, Q. Taylor collocation method and convergence analysis for the Volterra-Fredholm integral equations. J. Comput. Appl. Math. 2014, 260, 294–300.
  15. Neggal, B.; Boussetila, N.; Rebbani, F. Projected Tikhonov regularization method for Fredholm integral equations of the first kind. J. Inequal. Appl. 2016.
  16. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Physica D 1992, 60, 259–268.
  17. Acar, R.; Vogel, C.R. Analysis of bounded variation penalty methods for ill-posed problems. Inverse Probl. 1994, 10, 1217–1229.
  18. Wang, Y.; Yang, J.; Yin, W.; Zhang, Y. A new alternating minimization algorithm for total variation image reconstruction. SIAM J. Imaging Sci. 2008, 1, 248–272.
  19. Defrise, M.; Vanhove, C.; Liu, X. An algorithm for total variation regularization in high-dimensional linear problems. Inverse Probl. 2011, 27, 065002.
  20. Fadili, J.M.; Peyré, G. Total variation projection with first order schemes. IEEE Trans. Image Process. 2011, 20, 657–669.
  21. Micchelli, C.A.; Shen, L.; Xu, Y. Proximity algorithms for image models: Denoising. Inverse Probl. 2011, 27, 045009.
  22. Chan, R.H.; Liang, H.X.; Ma, J. Positively constrained total variation penalized image restoration. Adv. Adapt. Data Anal. 2011, 3, 1–15.
  23. Strong, D.M.; Blomgren, P.; Chan, T.F. Spatially adaptive local feature-driven total variation minimizing image restoration. Proc. SPIE 1997, 222–233.
  24. Chantas, G.; Galatsanos, N.P.; Molina, R.; Katsaggelos, A.K. Variational Bayesian image restoration with a product of spatially weighted total variation image priors. IEEE Trans. Image Process. 2010, 19, 351–362.
  25. Chen, Q.; Montesinos, P.; Sun, Q.S.; Heng, P.A.; Xia, D.S. Adaptive total variation denoising based on difference curvature. Image Vision Comput. 2010, 28, 298–306.
  26. Chopra, A.; Lian, H. Total variation, adaptive total variation and nonconvex smoothly clipped absolute deviation penalty for denoising blocky images. Pattern Recognit. 2010, 43, 2609–2619.
  27. El Hamidi, A.; Ménard, M.; Lugiez, M.; Ghannam, C. Weighted and extended total variation for image restoration and decomposition. Pattern Recognit. 2010, 43, 1564–1576.
  28. Hansen, P.C.; Mosegaard, K. Piecewise polynomial solutions to linear inverse problems. Lect. Notes Earth Sci. 1996, 63, 284–294.
  29. Jin, B.T.; Zou, J. Numerical estimation of piecewise constant Robin coefficient. SIAM J. Control Optim. 2009, 48, 1977–2002.
  30. Bogosel, B.; Oudet, É. Qualitative and numerical analysis of a spectral problem with perimeter constraint. SIAM J. Control Optim. 2016, 54, 317–340.
  31. Chan, R.H.; Ng, M.K. Conjugate gradient method for Toeplitz systems. SIAM Rev. 1996, 38, 427–482.
  32. Groetsch, C.W. Integral equations of the first kind, inverse problems and regularization: A crash course. J. Phys. Conf. Ser. 2007, 73, 012001.
  33. Goncharskii, A.V.; Leonov, A.S.; Yagola, A.G. A generalized discrepancy principle. USSR Comput. Math. Math. Phys. 1973, 13, 25–37.
  34. Bakushinsky, A.; Kokurin, M.Y.; Smirnova, A. Iterative Methods for Ill-Posed Problems: An Introduction; Inverse and Ill-Posed Problems Series 54; De Gruyter: Berlin, Germany; New York, NY, USA, 2011.
Figure 1. The exact solution, the right-hand side and noisy right-hand sides for Example 1. (a) The exact solution; (b) the right-hand side; (c) a noisy right-hand side containing 1 % of white noise; and (d) a noisy right-hand side containing 10 % of white noise.
Figure 2. The exact solution (dot), numerical solutions (x) and point-wise errors for the three methods for Example 1 with noisy right-hand sides containing 1 % (left) and 10 % (right) of white noise. (a) Numerical solutions of TV; (b) numerical solutions of WTV; (c) numerical solutions of WTVMM; and (d) point-wise errors of the numerical solutions.
Figure 3. The exact solution (circle) and the numerical solution obtained by minimizing 1 2 J ( f ) + α TV 1 , β ( f ) + γ M ( f ) (dot) for Example 1 where the noisy right-hand side contains 10 % of white noise.
Figure 4. The exact solution, the right-hand side, and noisy right-hand sides for Example 2. (a) The exact solution; (b) the right-hand side; (c) a noisy right-hand side containing 1 % of white noise; and (d) a noisy right-hand side containing 10 % of white noise.
Figure 5. The exact solution (dot), numerical solutions (x) and point-wise errors for the three methods for Example 2 with noisy right-hand sides containing 1 % (left) and 10 % (right) of white noise. (a) Numerical solutions of TV; (b) numerical solutions of WTV; (c) numerical solutions of WTVMM; and (d) point-wise errors of the numerical solutions.
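The figure captions above refer to right-hand sides contaminated with 1% and 10% of white noise. A minimal sketch of one common way to generate such data, under the assumption that the noise level is measured as a relative 2-norm (the function name and the exact scaling convention are ours, not necessarily the paper's):

```python
import numpy as np

def add_white_noise(g, level, rng=None):
    """Return a copy of g perturbed by white noise.

    The Gaussian noise vector is rescaled so that its 2-norm equals
    `level` times the 2-norm of g, i.e. ||e|| / ||g|| == level.
    """
    rng = np.random.default_rng() if rng is None else rng
    e = rng.standard_normal(g.shape)
    e *= level * np.linalg.norm(g) / np.linalg.norm(e)
    return g + e
```

For example, `add_white_noise(g, 0.01)` produces a right-hand side with 1% relative noise, and `add_white_noise(g, 0.10)` one with 10%.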
Table 1. Parameters used, number of iterations and relative errors for different methods for Example 1 where the right-hand side contains 1% of white noise. TV: total variation; WTV: weighted total variation; WTVMM: weighted total variation Modica-Mortola.
| Method | (α, β, θ, γ) | # it | Error |
| --- | --- | --- | --- |
| TV | (5.0 × 10⁻², 2.5 × 10⁻⁵, -, -) | 24 | 4.5 × 10⁻² |
| WTV | (5.0 × 10⁻², 2.5 × 10⁻⁵, 5.0 × 10⁻², -) | 11 | 8.61 × 10⁻³ |
| WTVMM | (5.0 × 10⁻², 2.5 × 10⁻⁵, 5.0 × 10⁻², 1) | 11 + 4 | 1.98 × 10⁻³ |
Table 2. Parameters used, number of iterations and relative errors for different methods for Example 1 where the noisy right-hand side contains 10% of white noise.
| Method | (α, β, θ, γ) | # it | Error |
| --- | --- | --- | --- |
| TV | (1.0 × 10⁻¹, 5.0 × 10⁻⁵, -, -) | 10 | 9.48 × 10⁻² |
| WTV | (1.0 × 10⁻¹, 5.0 × 10⁻³, 5.0 × 10⁻², -) | 13 | 7.18 × 10⁻² |
| WTVMM | (1.0 × 10⁻¹, 5.0 × 10⁻², 5.0 × 10⁻², 1) | 10 + 11 | 5.03 × 10⁻² |
Table 3. Relative errors for TV (1st element), WTV (2nd element) and WTVMM (3rd element) with different β and θ for Example 1 with the right-hand side containing 1% of white noise.
| θ \ β | 2.5 × 10⁻³ | 2.5 × 10⁻⁴ | 2.5 × 10⁻⁶ |
| --- | --- | --- | --- |
| 5.0 | (1.05 × 10⁻¹, 4.82 × 10⁻², 4.88 × 10⁻²) | (5.74 × 10⁻², 4.70 × 10⁻², 4.89 × 10⁻²) | (4.67 × 10⁻², 4.88 × 10⁻², 4.89 × 10⁻²) |
| 5.0 × 10⁻¹ | (1.05 × 10⁻¹, 1.73 × 10⁻², 1.98 × 10⁻³) | (5.74 × 10⁻², 1.30 × 10⁻², 1.98 × 10⁻³) | (4.67 × 10⁻², 9.51 × 10⁻³, 1.98 × 10⁻³) |
| 5.0 × 10⁻³ | (1.05 × 10⁻¹, 1.05 × 10⁻², 1.98 × 10⁻³) | (5.74 × 10⁻², 3.28 × 10⁻², 4.94 × 10⁻²) | (4.67 × 10⁻², 1.42 × 10⁻¹, 1.30 × 10⁻¹) |
Table 4. Relative errors for TV (1st element), WTV (2nd element) and WTVMM (3rd element) with different β and θ for Example 1 with the right-hand side containing 10% of white noise.
| θ \ β | 5.0 × 10⁻¹ | 5.0 × 10⁻² | 5.0 × 10⁻⁴ |
| --- | --- | --- | --- |
| 5.0 × 10⁻¹ | (1.48 × 10⁻¹, 1.46 × 10⁻¹, 1.49 × 10⁻¹) | (1.49 × 10⁻¹, 8.93 × 10⁻², 7.09 × 10⁻²) | (9.48 × 10⁻², 8.37 × 10⁻², 8.59 × 10⁻²) |
| 5.0 × 10⁻³ | (1.48 × 10⁻¹, 1.46 × 10⁻¹, 1.93 × 10⁻¹) | (1.49 × 10⁻¹, 8.56 × 10⁻², 5.03 × 10⁻²) | (9.48 × 10⁻², 1.21 × 10⁻¹, 1.22 × 10⁻¹) |
| 5.0 × 10⁻⁴ | (1.48 × 10⁻¹, 1.47 × 10⁻¹, 1.79 × 10⁻¹) | (1.49 × 10⁻¹, 8.56 × 10⁻², 5.03 × 10⁻²) | (9.48 × 10⁻², 1.31 × 10⁻¹, 1.31 × 10⁻¹) |
Table 5. Parameters used, number of iterations and relative errors for different methods for Example 2 where the right-hand side contains 1% of white noise.
| Method | (α, β, θ, γ) | # it | Error |
| --- | --- | --- | --- |
| TV | (5.0 × 10⁻⁶, 2.5 × 10⁻⁵, -, -) | 14 | 1.13 × 10⁻² |
| WTV | (5.0 × 10⁻⁶, 2.5 × 10⁻⁵, 5.0 × 10⁻², -) | 8 | 3.79 × 10⁻³ |
| WTVMM | (5.0 × 10⁻⁶, 2.5 × 10⁻⁵, 5.0 × 10⁻², 1) | 8 + 3 | 1.65 × 10⁻⁶ |
Table 6. Parameters used, number of iterations and relative errors for different methods for Example 2 where the right-hand side contains 10% of white noise.
| Method | (α, β, θ, γ) | # it | Error |
| --- | --- | --- | --- |
| TV | (4.0 × 10⁻⁵, 1.0 × 10⁻⁴, -, -) | 16 | 1.29 × 10⁻¹ |
| WTV | (4.0 × 10⁻⁵, 2.5 × 10⁻³, 5.0 × 10⁻², -) | 11 | 4.14 × 10⁻² |
| WTVMM | (4.0 × 10⁻⁵, 2.5 × 10⁻³, 5.0 × 10⁻², 1) | 11 + 4 | 1.65 × 10⁻⁵ |
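The "Error" columns in the tables above report relative errors of the numerical solutions. Assuming the standard definition ‖f_num − f*‖₂ / ‖f*‖₂, where f* is the exact solution (this excerpt does not show the paper's exact norm choice), a minimal sketch:

```python
import numpy as np

def relative_error(f_num, f_exact):
    """Relative 2-norm error of a numerical solution against the exact one:
    ||f_num - f_exact||_2 / ||f_exact||_2."""
    return np.linalg.norm(f_num - f_exact) / np.linalg.norm(f_exact)
```

With this convention, an entry such as 1.65 × 10⁻⁵ means the numerical solution deviates from the exact one by about 0.00165% in the 2-norm.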

Share and Cite

MDPI and ACS Style

Lin, F.-R.; Yang, S.-W. A Two-Stage Method for Piecewise-Constant Solution for Fredholm Integral Equations of the First Kind. Mathematics 2017, 5, 28. https://doi.org/10.3390/math5020028

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
