A Multilevel Iteration Method for Solving a Coupled Integral Equation Model in Image Restoration

Abstract: The problem of out-of-focus image restoration can be modeled as an ill-posed integral equation, which can be regularized as a well-posed equation of the second kind using the Tikhonov method. The multiscale collocation method with a compression strategy has already been developed to discretize this well-posed equation. However, computing the integrals and solving the resulting large multiscale collocation system are two time-consuming processes. To overcome these difficulties, we propose a fully discrete multiscale collocation method that uses an integral approximation strategy to compute the integrals, which efficiently converts the integral operations into matrix operations and reduces the cost. In addition, we propose a multilevel iteration method (MIM) to solve the fully discrete integral equation obtained from the integral approximation strategy. The stopping criterion and the computational complexity of the MIM are given. Furthermore, an a posteriori parameter choice strategy is developed for this method, and the final convergence order is established. We present three numerical experiments to demonstrate the performance and computational efficiency of the proposed methods.


Introduction
Continuous integral equations are often used to model practical problems in image processing. However, discrete models are usually used instead of continuous models because they are much easier and more convenient to implement. Discrete models are piecewise constant approximations of integral equation models, and they introduce a bottleneck model error that cannot be removed by any image processing method. To overcome this accuracy deficiency of conventional discrete reconstruction models, we use continuous models, which are more in line with the underlying physics, to restore images directly. This idea first appeared in [1] and was widely used later, for example in [2-4]. Besides making more sense physically, continuous models can be discretized with higher-order accuracy. This means that the model error is significantly decreased compared with piecewise constant discretization, especially in the field of image enlargement.
Many researchers have made great contributions to the solution of integral equations; however, many difficulties remain. First, integral operators are compact in Banach spaces since integral kernels are normally smooth. As a consequence, the solutions of the relevant integral equations do not depend continuously on the known data. To overcome this problem, the Tikhonov [5] and Lavrentiev [6] regularization methods were proposed to regularize the ill-posed integral equation.

The Integral Equation Model for Image Restoration
In this section, we first depict the overall process of reconstructing out-of-focus images and then introduce some common notation used throughout the paper. Finally, in the second subsection, we describe how our image restoration problem is formulated as an integral equation.

System Overview
We present an overview flowchart for reconstructing an out-of-focus image in Figure 1. This process includes four main parts, that is, modeling, discretization, the parameter choice rule, and solving the equation.
For the input out-of-focus image, we formulate it into a Tikhonov-regularized integral equation, described in the next subsection. Solving this integral equation necessitates discretization; we propose a fully discrete multiscale collocation method based on an integral approximation strategy, which converts the calculation of the integral into a matrix operation. The last two parts are the parameter choice rule and the solution of the equation. We note that these two parts actually form a whole when executed in practice; however, we describe them separately because the combined process is too complicated to present at once. For a clearer presentation, we first display the multilevel iteration method in Section 4 under the assumption that a good regularization parameter has already been selected, and then describe how this parameter is chosen in Section 5.

Some notation is needed. Let R^d denote d-dimensional Euclidean space, and let Ω ⊂ R² and E ⊂ R be subsets. L^∞ is the L^p space with p = ∞, which is a Banach space. We use x, including x with upper and lower indices, to denote the solution of the equation; similarly, we use y, including y with upper and lower indices, to denote the known data of the equation. The operator K represents the blurring operator, and K* represents the adjoint operator of K. The notation A ⊕⊥ B denotes the direct sum of spaces A and B when A ⊥ B. The notation a ∼ b means that a and b are of the same order. R(·) denotes the range of an operator. Finally, extended and new notation not declared here is defined in the context in which it first appears.

Model Definition
In this subsection, we describe an integral equation model obtained from a reformulation of the Tikhonov regularization equation for image restoration. In addition, an equivalent coupled system is developed for solving this integral equation quickly and efficiently.
Assume that the whole image is the support set. In general, images are rectangular, so let Ω ⊂ R² denote the image domain, which is also the support set. Image restoration can be modeled by a continuous integral equation of the first kind

Kx = y, (1)

where y is the blurred image and we aim to reconstruct it into a clearer image x : Ω → R; thus, x can also be called the reconstructed image. K is the linear compact operator defined in terms of a kernel g by

(Kx)(u) := ∫_Ω g(u, u′) x(u′) du′, u ∈ Ω. (2)

Let Ω := [0, 1] × [0, 1]. Images may not lie in [0, 1] × [0, 1], but they can be brought there by scaling and shifting. According to [20], an out-of-focus image is usually modeled with the Gaussian kernel

g(u, u′) := (1/(2πσ²)) exp(−‖u − u′‖² / (2σ²)), (3)

where u := (u, v), u′ := (u′, v′) ∈ Ω, and σ is the parameter characterizing the level of ambiguity in the kernel. The blurring kernel (3) is two-dimensional and symmetric, and it factors into a product of two univariate kernels, so K can be written as a composition of two univariate integral operators. That is to say, solving Equation (1) is equivalent to solving the system

K₁ x₁ = y, K₂ x₂ = x₁, (4)

where K₁ and K₂ are the univariate integral operators acting on the two coordinates separately. In this system, x₂ is the final solution of the reconstruction, and x₁ is an intermediate result. These two equations mean that we can first deal with all the rows of the image (first equation) and then all the columns (second equation). Hence, the problem changes significantly: processing a two-dimensional image becomes processing the rows of the image first, and then the columns. Furthermore, as can be seen from the two equations in (4), the rows and columns of the image are processed in the same way, and both processes can be formulated as a one-dimensional integral equation

Kx = y. (5)

Here, for x ∈ X := L^∞(E), K is the linear compact operator defined by

(Kx)(s) := ∫_E g(s, t) x(t) dt, s ∈ E,

and y ∈ X is the observed function. Hence, we focus on Equation (5) in what follows. Ill-posedness makes Equation (5) difficult to solve, but the Tikhonov regularization method can handle this problem.
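The separability used in system (4) can be checked numerically: blurring a 2D image with a product kernel is the same as applying a 1D blur along each axis in turn. A minimal sketch, where the matrices A and B are hypothetical discrete 1D Gaussian blur operators (our own illustration, not the paper's collocation matrices):

```python
import numpy as np

def gauss_blur_matrix(n, sigma):
    """Discrete 1D Gaussian blur operator on n equispaced samples of [0, 1]."""
    t = np.linspace(0.0, 1.0, n)
    G = np.exp(-(t[:, None] - t[None, :]) ** 2 / (2.0 * sigma ** 2))
    return G / G.sum(axis=1, keepdims=True)  # normalize rows

rng = np.random.default_rng(0)
A = gauss_blur_matrix(16, 0.05)   # acts along the first image index
B = gauss_blur_matrix(12, 0.05)   # acts along the second image index
X = rng.random((16, 12))          # a small test "image"

# Full 2D blur with the product kernel g = a(u1, u1') * b(u2, u2'):
Y_full = np.einsum('ik,jl,kl->ij', A, B, X)

# Separable evaluation: two 1D operations, one per axis, as in system (4):
Y_sep = A @ (X @ B.T)

assert np.allclose(Y_full, Y_sep)
```

Inverting the blur inherits the same structure, one 1D deconvolution per axis, which is exactly why system (4) replaces the 2D problem with two one-dimensional ones.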
By using the Tikhonov regularization method, we get the following equation:

(αI + K*K) x^δ_α = K* y^δ, (6)

where α > 0 is the regularization parameter and K* is the adjoint operator of K. It is easy to prove that the operator αI + K*K is invertible. Here, y^δ is the noisy data, with

‖y − y^δ‖ ≤ δ (7)

for a noise level δ, and x^δ_α denotes the solution of Equation (6) when y is replaced by y^δ. K is self-adjoint; that is, K* = K. Note that Equation (6) involves K*K, which is defined by a 2-fold integral, and this 2-fold integral is tremendously expensive to compute. By letting v^δ_α := (y^δ − K x^δ_α)/√α, the authors of [7] split Equation (6) into the coupled system

√α x^δ_α − K* v^δ_α = 0,
K x^δ_α + √α v^δ_α = y^δ. (8)

The two formulations were proven to be equivalent in [7], and the advantage is that system (8) involves only single integrals. Thus, we do not solve Equation (6) but instead solve system (8) to reduce the computational difficulties.
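The equivalence between the Tikhonov equation (6) and the coupled system (8) is easy to verify numerically on a small discretization. In the sketch below, a generic matrix K stands in for the discretized blur operator (an illustrative assumption, not the paper's collocation matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
n, alpha = 20, 1e-2
K = rng.random((n, n)) / n        # stand-in for the discretized blur operator
y = rng.random(n)                 # stand-in for the noisy data y^delta

# Tikhonov normal equation (6): (alpha*I + K^T K) x = K^T y
x_tik = np.linalg.solve(alpha * np.eye(n) + K.T @ K, K.T @ y)

# Coupled system (8): sqrt(alpha)*x - K^T v = 0,  K x + sqrt(alpha)*v = y
sa = np.sqrt(alpha)
M = np.block([[sa * np.eye(n), -K.T],
              [K,              sa * np.eye(n)]])
z = np.linalg.solve(M, np.concatenate([np.zeros(n), y]))
x_cpl, v_cpl = z[:n], z[n:]

assert np.allclose(x_tik, x_cpl)                 # same regularized solution
assert np.allclose(v_cpl, (y - K @ x_cpl) / sa)  # v = (y - Kx)/sqrt(alpha)
```

Eliminating v from the block system recovers (6) exactly; the payoff is that the blocks of M involve only K and K^T, never the product K^T K.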
In the next section, we apply a fully discrete method to solve system (8).

Fully Discrete Multiscale Collocation Method
In this section, a multiscale method is reviewed first, and then we develop a fully discrete formulation of the multiscale collocation method using an integral approximation strategy. This strategy turns the computation of the integral into a matrix operation, which greatly reduces the calculation time.
As in the multiscale collocation framework, the collocation functionals and the collocation points are important. Suppose that ℓ_ij, (i, j) ∈ U_n, are the collocation functionals, and let {v_ij : (i, j) ∈ U_n} be a sequence of collocation points in E. We define an interpolation projection P_n : L^∞(E) → X_n, n ∈ N, and an orthogonal projection Q_n : L²(E) → X_n, n ∈ N.
For n ∈ N, let K_n := P_n K|_{X_n}. The multiscale collocation method for system (8) is to find the solutions v^δ_{α,n} := Σ_{(i,j)∈U_n} v^{α,n}_{ij} e_ij and x^δ_{α,n} := Σ_{(i,j)∈U_n} x^{α,n}_{ij} e_ij of the system

√α x^δ_{α,n} − K*_n v^δ_{α,n} = 0,
K_n x^δ_{α,n} + √α v^δ_{α,n} = P_n y^δ. (9)
For (i, j), (i′, j′) ∈ U_n, after introducing the matrix representations of the basis functions, the collocation functionals, and the operator, system (9) can be written in the matrix form (10), with coefficient matrix K_n. Note that K_n is a dense matrix, so compressing K_n is an important ingredient of our method. Following the compression strategy of [7], we obtain the compressed matrix K̃_n. Thus, substituting K̃_n for K_n, we finally need to solve Equation (11).

Integral Approximation Strategy
The computational cost of system (11) mainly lies in the integral operator K_n. Next, we focus on this problem and develop an integral approximation strategy using the Gaussian quadrature formula to solve this coupled system.
Note that the integral operator K has a smooth kernel, and the orthogonal basis functions {e_ij : (i, j) ∈ U_n} are all piecewise polynomial functions. Thus, a piecewise Gauss-Legendre quadrature is used here.

Integral approximation strategy: In accordance with the supports of these basis functions, we divide E equally into µ_n parts. Let E_q := [q/µ_n, (q+1)/µ_n] for q ∈ Z_{µ_n}; then E = ∪_{q∈Z_{µ_n}} E_q, and each basis function is continuous on every E_q, q ∈ Z_{µ_n}. We then choose m > 1 Gaussian points in each part E_q and let γ := 2m − 1. All Gaussian points, taken in order, form a set of piecewise Gaussian points G. Let g(n) := |G|; then g(n) = m × µ_n. Thus, the integral with accuracy γ on E_q can be written as

∫_{E_q} f(t) dt ≈ Σ_{t=qm+1}^{(q+1)m} ρ_t f(g_t), (12)

where {g_t : t = qm + 1, . . . , (q + 1)m} are the Gaussian points in E_q and ρ_t is the weight corresponding to g_t. Summing over q gives the integral with accuracy γ on E, which can be written as a vector multiplication. Furthermore, as shown in Figure 2, we can use this strategy to approximate K_n in the matrix form (13), where W_{g(n)×s(n)} is a basis function matrix with weights: it represents the values of all the basis functions {e_ij : (i, j) ∈ U_n} at the piecewise Gaussian points of G, weighted by the ρ_t. L_n denotes the matrix representation of the point evaluation functionals; for the details of L_n, we refer to [26]. In order to combine this with the matrix compression strategy, we write Equation (13) in blocks. This yields the matrix representation with blocks K̄_n^{i,i′}, i, i′ ∈ Z_{n+1}, which approximates K_n using the integral approximation strategy.
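The strategy above is a composite Gauss-Legendre rule: the m standard Legendre nodes are mapped into each subinterval E_q and the weights are scaled accordingly. A minimal sketch (the function name is ours):

```python
import numpy as np

def piecewise_gauss(mu_n, m):
    """m-point Gauss-Legendre nodes and weights on each of the mu_n
    equal subintervals E_q = [q/mu_n, (q+1)/mu_n] of E = [0, 1]."""
    t, w = np.polynomial.legendre.leggauss(m)   # nodes/weights on [-1, 1]
    h = 1.0 / mu_n
    pts = np.concatenate([q * h + h * (t + 1.0) / 2.0 for q in range(mu_n)])
    wts = np.tile(w * h / 2.0, mu_n)
    return pts, wts

# m points are exact for polynomials of degree gamma = 2m - 1 on each piece:
g, rho = piecewise_gauss(mu_n=8, m=2)           # g(n) = m * mu_n = 16 points
assert abs(np.dot(rho, g ** 3) - 0.25) < 1e-12  # integral of t^3 over [0,1]

# For smooth non-polynomial integrands the error decays rapidly with m, mu_n:
val = np.dot(rho, np.exp(g))
assert abs(val - (np.e - 1.0)) < 1e-6
```

With the nodes g and weights rho precomputed once, every entry of the approximated operator matrix reduces to weighted sums of kernel and basis-function values, which is exactly the matrix product structure of Equation (13).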
Then, Equation (10) becomes the fully discrete multiscale collocation equation (15), where K̄_n is as given in Equation (14). Let A := K*K and A_n := K*_n K_n. Let K̃_n be the operator corresponding to the matrix representation K̃_n obtained with the compression strategy, and define Ã_n := K̃*_n K̃_n. Additionally, let K̄_n be the operator corresponding to the matrix representation K̄_n obtained with both the integral approximation strategy and the compression strategy, and define Ā_n := K̄*_n K̄_n. Therefore, corresponding to Equation (11), system (9) becomes system (16). By adopting the integral approximation strategy, the computation of the integral becomes the matrix multiplication (14), which greatly reduces the calculation.
Next, we estimate the convergence order of the fast multiscale collocation method with the integral approximation strategy. Note that the constant c below may differ between occurrences, unless explicitly stated otherwise.
We assume that there exists a constant M such that ‖K‖_{L²(E)→L^∞(E)} ≤ M. According to [5,12], for any α > 0, αI + K*K is invertible, and we have the inequalities

‖(αI + K*K)^{-1}‖ ≤ 1/α, ‖(αI + K*K)^{-1} K*‖ ≤ 1/(2√α). (17)

Assume that x̄ is the exact solution of Equation (1) and that y ∈ R(K).
Following [5,27], if hypothesis (H1) holds, then we have the convergence rate (18). Suppose that c₀ > 0. In order to estimate the convergence rate of the integral approximation strategy, we propose the following hypothesis:
(H2) For all n ∈ N and some positive constants c and r, ‖K − K_n‖ ≤ c µ^{−rn}. When K is a Fredholm integral operator whose kernel has an rth derivative, this hypothesis holds true.
Following [28], we can state the remark below.

Remark 1. The remainder of the m-point Gauss-Legendre quadrature on an interval [a, b] is given by

R_m(f) = ((b − a)^{2m+1} (m!)⁴)/((2m + 1)((2m)!)³) f^{(2m)}(ξ), ξ ∈ (a, b).

In particular, when m Gaussian points are used, the Gauss-Legendre quadrature is exact for polynomials of degree up to 2m − 1. Corresponding to Remark 1, we give the following proposition.

Proposition 1.
Assume that the integral accuracy m satisfies condition (19). Then we obtain conclusion (20). Proof. Assume that r > 2m and that the kernel of K belongs to H^{2m}[0, 1]. Because polynomial functions are infinitely differentiable, we obtain Ke_ij ∈ H^{2m}[0, 1] for (i, j) ∈ U_n. From Remark 1, we obtain inequality (21). Furthermore, we can infer from inequality (21), by summing over all µ_n subintervals, that the claimed bound holds. This completes the proof.
Note that the Gaussian kernel is infinitely differentiable; thus, assumption (19) is easy to satisfy.

Lemma 1.
Assume that hypothesis (H2) holds, and let c₀ be the same parameter as in Theorem 1. If the parameters n and γ are chosen so that condition (24) holds, then αI + Ā_n is invertible and estimate (25) holds. Proof. According to a known result in [29], αI + Ā_n is invertible and satisfies the corresponding bound. Estimate (25) then follows from this bound, condition (24), estimates (17), and Theorem 1.
Next, we give two lemmas. The proofs are similar to those in [22], so they are omitted here.

Lemma 3.
Suppose that hypothesis (H2) holds and inequality (24) is satisfied. Then, for n ∈ N and c > 0, the estimate of this lemma holds. In the remainder of this section, we estimate the error bound for ‖x̄ − x^δ_{α,n}‖_∞.
Theorem 2. Suppose that hypotheses (H1) and (H2) and assumption (19) hold, x̄ ∈ R(K*), and inequality (24) is satisfied. Then, for c₁ > 0, the stated estimate holds. Proof. The triangle inequality splits the error into the regularization error and the discretization error, and the estimate in this theorem follows directly from this splitting, inequality (18), and Lemmas 2 and 3.

Multilevel Iteration Method
In general, we solve Equation (16) while choosing the regularization parameter; once the parameter selection is finished, the equation is solved. Note that in practice these two processes form a whole and occur simultaneously; however, we split them into two processes for clarity of presentation. In this section, we present the multilevel iteration method (MIM) for a fixed α, assuming that this parameter has already been well selected. In the next section, we present the regularization parameter choice rule.
We first describe the multilevel iteration method, then present the computational complexity of the algorithm, and finally prove the error estimate.

Multilevel Iteration Method
After obtaining the matrices E_n, K̄_n, and y^δ, we begin to solve Equation (15). Inverting this equation directly would require considerable time; thus, we use the MIM instead of direct inversion to obtain a fast algorithm.
First, we introduce the MIM for the coupled system (16). We now assume that the fixed parameter α has already been selected according to the rule in Section 5.
With the decomposition of the solution domain, for n = k + m and k, m ∈ N₊, we can write the solutions x^δ_{α,n} ∈ X and v^δ_{α,n} ∈ X of system (16) as two block vectors, where (x^δ_{α,n})₀ ∈ X_k, (v^δ_{α,n})₀ ∈ X_k, and (x^δ_{α,n})_j ∈ W_{k+j}, (v^δ_{α,n})_j ∈ W_{k+j}, for j = 1, 2, . . . , m. The operator K̄_{k+m} also has a corresponding block matrix form. Let K̄^L_{k+m} := P_k K̄_{k+m} Q_{k+m} and K̄^H_{k+m} := (P_{k+m} − P_k) K̄_{k+m} Q_{k+m}. The MIM for solving the coupled system (16) is presented as Algorithm 1.
We split the operator K̄_{k+m} into two parts, K̄_{k+m} = K̄^L_{k+m} + K̄^H_{k+m}, which correspond to the lower and higher frequencies, respectively. Similarly, we obtain K̄^{*L}_{k+m} and K̄^{*H}_{k+m} by splitting K̄*_{k+m} in an analogous way. Accordingly, the coupled system (16) can be written in the split form (30).

Algorithm 1: Multilevel Iteration Method (MIM).
Step 4 : Stopping criterion. Stop the iteration when
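The essence of Algorithm 1 can be sketched on a toy problem: keep only the coarse-level part K_L on the left-hand side, so that only a small coarse system is ever inverted, and move the high-frequency part K_H to the right-hand side evaluated at the previous iterate. The splitting below (zeroing all but the first k rows) is an illustrative stand-in for the multiscale decomposition, not the paper's wavelet split:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, alpha = 12, 8, 1e-2
sa = np.sqrt(alpha)

# Toy splitting K = K_L + K_H: K_L lives on the coarse block (first k rows),
# K_H is a small high-frequency remainder (illustrative only).
K_L = np.zeros((n, n)); K_L[:k, :] = rng.random((k, n)) / n
K_H = 0.005 * rng.random((n, n))
K = K_L + K_H
y = rng.random(n)

# Direct solution of the full coupled system, for reference:
M_full = np.block([[sa * np.eye(n), -K.T], [K, sa * np.eye(n)]])
z_ref = np.linalg.solve(M_full, np.concatenate([np.zeros(n), y]))

# MIM-style iteration: invert only the coarse-level block matrix, once.
M_low = np.block([[sa * np.eye(n), -K_L.T], [K_L, sa * np.eye(n)]])
M_low_inv = np.linalg.inv(M_low)          # plays the role of the coarse inverse

x, v = np.zeros(n), np.zeros(n)
for _ in range(200):
    rhs = np.concatenate([K_H.T @ v, y - K_H @ x])
    x, v = np.split(M_low_inv @ rhs, 2)

assert np.allclose(np.concatenate([x, v]), z_ref, atol=1e-8)
```

The fixed point of the iteration is exactly the solution of the full coupled system, and the expensive inverse is computed once and reused in every sweep, mirroring the single computation of R_k^{-1} in the complexity analysis below.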

Computation Complexity
We now study the computational complexity of this algorithm; specifically, we estimate the number of multiplications used in the method. To this end, we write the operator Equation (30) in matrix representation form. First, we introduce the block partition of K̄_{k+m}: for a fixed k ∈ N, we define the block K̄^k_{0,0} := K̄_k, and for s, s′ ∈ N we define the blocks K̄^k_{0,s} and K̄^k_{s,s′} accordingly. We also partition the matrix E_n in the same way, which we omit here. Then, the matrix representations of the operators K̄^L_{k+m} and K̄^H_{k+m} follow. We also write the matrix representations of x^{δ,ℓ}_{α,k+m} and v^{δ,ℓ}_{α,k+m} at the ℓth iteration as x^ℓ_{k,m} := [x^ℓ_i : i ∈ Z_{m+1}] and v^ℓ_{k,m} := [v^ℓ_i : i ∈ Z_{m+1}], and define y^δ_{k+m} := [y^δ_i : i ∈ Z_{m+1}]. Using this block form of the matrix, the solutions of Equation (30) become Equation (32). For a matrix A, we denote by N(A) the number of nonzero entries of A. For ℓ > 0, we need 2N(K̄_{k,m}) + 2N(E_{k+m}) multiplications together with an inverse operation R^{-1}_k to obtain x^ℓ_i and v^ℓ_i, i = m, m − 1, . . . , 1, 0, from Equation (32). However, over all iterations the inverse operation R^{-1}_k needs to be computed only once; we assume that it requires M(k) multiplications. In addition, only the same inverse operation R^{-1}_k is needed to obtain x⁰_{k,m} and v⁰_{k,m} in the first step of Algorithm 1. We are now ready to summarize the above discussion in a proposition.

Proposition 2.
The total number of multiplications required to obtain x^ℓ_{k,m} and v^ℓ_{k,m} follows from the counts above. Note that solving the coupled system (16) directly requires computing the inverse operation R^{-1}_{k+m}, whereas the MIM only requires the coarse-level inverse operation R^{-1}_k. This is the key factor that makes the algorithm fast.
In the next subsection, we estimate the error ‖x̄ − x^{δ,ℓ}_{α,k+m}‖_∞.

Error Estimation
By the triangle inequality, we only need to estimate ‖x^δ_{α,k+m} − x^{δ,ℓ}_{α,k+m}‖_∞. Let D_{k,m} := K̄*_n K̄^H_{k+m} + K̄^{*H}_{k+m} K̄^L_{k+m} and F_{k,m}(α) := αI + K̄^{*L}_{k+m} K̄^L_{k+m}.

Lemma 4.
Suppose that hypotheses (H1) and (H2) and condition (24) hold. Then ‖D_{k,m}‖_∞ → 0 as k → ∞ for m ∈ N₊, and for N ∈ N₊, k > N, and m ∈ N₊, the bounds in (37) hold. Proof. If the two hypotheses hold, then there exists a constant c₂ > 0 such that bound (38) holds. From the definition of F_{k,m}(α), estimates (17), and Theorem 1, for any u ∈ X_{k+m} we obtain the required lower bound. This, together with Equation (38), proves the first part of (37); the second part then follows.
A corollary (cf. Corollary 9.9, p. 337 of [10]) confirms that the condition numbers cond(αI + K̄^{*L}_n K̄^L_n) and cond(αI + K̄_n) have the same order. In other words, the MIM does not destroy the well-conditioning of the original multiscale method. However, selecting an appropriate k is important, as it influences the result of the iteration process. It follows from Equation (30) that, in order to make the iteration process converge, we should generally choose k such that conditions (41) and (42) hold.

Theorem 3. Suppose that hypothesis (H2) and condition (24) hold, and let k be chosen so that (41) and (42) hold, where t_k(α) = µ^{−rk}. Then, for N₀ ∈ N₊, k > N₀, and m ∈ N₊, we obtain result (44). Proof. We prove this result by induction on ℓ. First, we prove that Equation (44) holds when ℓ = 0. It is apparent that x^{δ,0}_{α,k+m} = x^δ_{α,k}; therefore, from Lemmas 2 and 3, there exists a constant c such that Equation (44) holds when ℓ = 0. Meanwhile, from Equation (29), we obtain Equation (46). Subtracting (40) from (46) gives Equation (47). On the basis of Lemma 4, for N₀ ∈ N₊ and k > N₀, the bounds (48) and (49) hold, and then we obtain (51). We now assume that Equation (44) holds for ℓ = q − 1, q ∈ N₊, and prove that it also holds for ℓ = q. From Equations (41) and (42), we obtain (52). From Equation (51) and inequality (52), we obtain the claimed bound, which completes the proof of this theorem.

Theorem 4.
If hypotheses (H1) and (H2) and assumption (19) hold, the parameters n and γ are chosen to satisfy condition (24), and the integer k is chosen to satisfy Equations (41) and (42), then for N₀ ∈ N₊, k > N₀, and m ∈ N₊, estimate (54) holds. Moreover, if the parameters k and m are chosen on the basis of n := k + m, and the number of iterations ℓ satisfies condition (55), then the convergence order (56) follows. Proof. From the triangle inequality, we split the error into ‖x̄ − x^δ_{α,k+m}‖_∞ and ‖x^δ_{α,k+m} − x^{δ,ℓ}_{α,k+m}‖_∞. The combination of Theorems 2 and 3 implies inequality (54). If the three parameters n, γ, and k are chosen to satisfy conditions (41), (42), and (55), together with m := n − k, then inequality (54) yields (56), which completes the proof.
Note that γ_{α,k,m,ℓ} is exponentially decreasing in ℓ; therefore, the stopping criterion is reached in a finite number of steps. In the next remark, we show that the stopping criterion of Algorithm 1 guarantees the convergence rate (56).

Remark 2.
Suppose that hypotheses (H1) and (H2) and assumption (19) hold, the parameters n and γ are selected to satisfy condition (24), and n satisfies nµ^{−rn}α^{3/2} ≤ δα. If the iteration stops when criterion (57) is met, then the stated estimation error holds. Proof. We start from the triangle inequality. On the one hand, from system (30) we can write x^{δ,ℓ}_{α,k+m} in the form (60); on the other hand, we have (61). Combining Equations (60) and (61), and using the right inequality of (17) together with the error bound (58), we obtain an estimate for the iteration error. According to Theorem 2, this inequality, and nµ^{−rn}α^{3/2} ≤ δα, we obtain the final result.

Regularization Parameter Choice Strategies
Choosing an appropriate regularization parameter α is an important step in solving the ill-posed integral equation. We present an a posteriori parameter choice strategy [14] for our proposed method.
For any given α > 0, we choose the parameters k(α), m(α), n(α), and γ according to Theorem 4. Following [14], we assume that there exist two increasing continuous functions ϕ and ψ, with ϕ(0) = 0, such that Equation (54) can be written in the form (65). Then α = α_opt := (ϕψ)^{-1}(δ) would be the best choice. For constants q₀ > 1 and ρ > 0, we let the positive integer N be determined by condition (66). Obviously, α* := max{α_i : α_i ∈ M(∆_N)} can serve as an approximation of the regularization parameter, but the function ϕ(α) involves the unknown smoothness order ν of the integral operator. Therefore, it is infeasible to use M(∆_N) directly, and a small modification is necessary. We next present a rule for choosing the regularization parameter α.

Rule 5.
As suggested in [14], we choose the parameter α := α₊ = max{α_j : α_j ∈ M₊(∆_N)} as an approximation of α_opt. This is an a posteriori parameter choice strategy, which does not involve the smoothness order ν. We next present a crucial lemma, which can be found in Theorem 2.1 of [14].
Proof. This lemma can be obtained from [7] with a slight modification. Thus, we omit the details of the proof.
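To make the rule above concrete, the following toy sketch selects a parameter from a geometric grid α_i = ρ q₀^{−i} with a balancing-principle test in the spirit of [14]. The diagonal operator, the constant 4, and the exact membership test for M₊(∆_N) are illustrative assumptions, not the paper's definitions:

```python
import numpy as np

rng = np.random.default_rng(3)
n, delta = 50, 1e-3
k = 1.0 / (np.arange(1, n + 1) ** 2)    # toy compact operator (diagonal)
x_true = 1.0 / np.arange(1, n + 1)
noise = rng.standard_normal(n)
y_delta = k * x_true + delta * noise / np.linalg.norm(noise)

def tikhonov(alpha):
    # componentwise Tikhonov solution for the diagonal operator
    return k * y_delta / (alpha + k ** 2)

# Geometric candidate grid: alpha_i = rho * q0**(-i), i = 0, ..., N
q0, rho, N = 2.0, 1.0, 25
alphas = rho * q0 ** (-np.arange(N + 1.0))   # decreasing in i
sols = [tikhonov(a) for a in alphas]

# Balancing-style test (assumed form): alpha_i is admissible if its solution
# stays within 4*delta/sqrt(alpha_j) of every solution with alpha_j <= alpha_i.
def admissible(i):
    return all(np.linalg.norm(sols[i] - sols[j]) <= 4 * delta / np.sqrt(alphas[j])
               for j in range(i, N + 1))

# alpha_plus = largest admissible candidate (first admissible index)
i_star = next(i for i in range(N + 1) if admissible(i))
alpha_plus = alphas[i_star]
assert admissible(i_star) and alpha_plus < alphas[0]
```

The selection never evaluates the unknown smoothness order ν; it only compares computable solutions against the known noise level δ, which is the point of the a posteriori rule.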

Numerical Experiments
In this section, three numerical examples are presented on the restoration of out-of-focus images to verify the performance of the fully discrete multiscale collocation method and the multilevel iteration method. Matlab was used to conduct our simulations, and all examples below were run on a computer with a 3.00 GHz CPU and 8 GB of memory.
Using the integral equation necessitates transforming the discrete matrix (the observed image) into a continuous function; we use the method in [1] directly. Assume that the size of the image is ro × co and that the pixels of the image lie on the grid {(i/ro, j/co) : i = 0, 1, . . . , ro; j = 0, 1, . . . , co}. The function formulating the image is given by Equation (70), where h_ij is the old pixel value. Assume that s is a positive integer; then, for l = 0, 1, . . . , s, Equation (5) becomes Equation (71). The noise level is defined as δ = ‖y‖_∞ · e/100, where e is the noise percentage. Note that for simplicity we employ the piecewise linear basis functions of [26] in the following examples.
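A continuous intensity function like that of Equation (70) can be sketched with bilinear interpolation of the pixel grid; since the exact formula of [1] is piecewise polynomial, treat this as an illustrative stand-in rather than the paper's definition:

```python
import numpy as np

def image_to_function(h):
    """Continuous intensity function on [0,1]x[0,1] that bilinearly
    interpolates samples h[i, j] placed at grid points (i/ro, j/co)."""
    ro, co = h.shape[0] - 1, h.shape[1] - 1
    def f(u, v):
        x = min(u * ro, ro - 1e-12)   # clamp so the last cell is closed
        y = min(v * co, co - 1e-12)
        i, j = int(x), int(y)
        s, t = x - i, y - j
        return ((1 - s) * (1 - t) * h[i, j]     + s * (1 - t) * h[i + 1, j]
              + (1 - s) * t       * h[i, j + 1] + s * t       * h[i + 1, j + 1])
    return f

h = np.arange(12.0).reshape(3, 4)      # toy 3x4 sample grid (ro = 2, co = 3)
f = image_to_function(h)
assert abs(f(0.0, 0.0) - h[0, 0]) < 1e-9   # reproduces samples ...
assert abs(f(1.0, 1.0) - h[2, 3]) < 1e-8   # ... at every grid point
mid = 0.25 * (h[0, 1] + h[1, 1] + h[0, 2] + h[1, 2])
assert abs(f(0.25, 0.5) - mid) < 1e-9      # cell-center average
```

Once such a function is in hand, the quadrature points of the integral approximation strategy can be evaluated anywhere in [0, 1] × [0, 1], not just at pixel centers.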

Example 1.
In our first experiment, we verified the effectiveness of the integral approximation strategy for the coupled integral equation. We set n = 0 as the initial level and measured the computing time in seconds to generate the coefficient matrix K̄_n of the coupled operator Equation (9) for n = 5 to 10 in Table 1. For comparison, we repeated the experiment with the numerical quadrature scheme in [1]. In this example, both integral methods were set to the same accuracy, γ = 3. For the different values of n, T_N denotes the computing time of the numerical quadrature scheme in [1], and T_I denotes the computing time of the proposed integral approximation strategy. As we can see in Table 1, the proposed integral approximation strategy uses less time to generate the coefficient matrix K̄_n, which shows that the integral approximation strategy is an efficient fast algorithm.

Example 2. The second simulation was executed to verify the efficiency of the proposed multilevel iteration method. Figure 3a is the original clear 'building' image of size 256 × 384; Figure 3b shows the blurred image with σ = 0.01 in the blurring kernel and noise level δ = 0.01, which we recover using the MIM. For comparison, we also conducted experiments with the Gaussian elimination method (GEM) in [28] and the multilevel augmentation method (MAM) in [10]. T_GEM, T_MAM, and T_MIM represent the computing times of the Gaussian elimination method, the multilevel augmentation method, and the multilevel iteration method, respectively. The value n listed in Table 2 ranges from 5 to 10, and in this case the continuous intensity function (70) is needed. As shown in Section 2, two one-dimensional integral equations are solved to recover the blurred image; therefore, the computing time here is the total time needed to solve the coupled Equation (8) twice.
In our MIM experiments, all processes stopped at the second iteration, which is very fast because of the good selection of the initial values x^δ_{α,k} and v^δ_{α,k}; these initial values are set in Step 1 of Algorithm 1. Figure 3c-e show the reconstructions of (b) using the GEM, the MAM, and the MIM, while Table 2 and Figure 4 report the computing times of the three methods. On the whole, the computing time of the MIM is the smallest among the three methods. All results show that the proposed multilevel iteration method requires fewer computational resources; the difference is obvious, especially when the dimension indicator n is large.

Example 3. In this example, we demonstrate that the performance of the MIM is as good as that of the alternative methods. As shown in Figure 5b-d, we consider the restoration of the 'Lena' image, which has the size 257 × 257.
We use the image with σ = 0.01 in the blurring kernel and different noise levels δ; when δ = 0, the image is noise free. We introduce the peak signal-to-noise ratio (PSNR) to evaluate the restored and blurred images. For comparison, we solved the corresponding integral equation using the proposed multilevel iteration method and the Gaussian elimination method with piecewise linear polynomial basis functions at n = 8. Tables 3 and 4 list the α₊ obtained from the parameter choice rule, the PSNR value of the blurred image (PB), the PSNR value of the reconstructed image (PR), and the corresponding time to solve the equation using the GEM and the MIM separately, where the noise level δ ranges from 0 to 0.15. Figure 5e-j show the reconstructed images corresponding to the different methods and the noise levels 0, 0.03, and 0.1. In general, by comparing the numerical results in Tables 3 and 4, we conclude that there is almost no difference between the PSNR values of the images reconstructed with the GEM and the MIM; more specifically, the MIM performs better in most cases.
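The PSNR reported in Tables 3 and 4 is the standard measure; a minimal implementation, assuming 8-bit images with peak value 255:

```python
import numpy as np

def psnr(reference, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    diff = np.asarray(reference, float) - np.asarray(restored, float)
    mse = np.mean(diff ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

clean = np.zeros((8, 8))
worst = np.full((8, 8), 255.0)                  # maximal per-pixel error
assert abs(psnr(clean, worst)) < 1e-12          # MSE = 255^2 gives 0 dB
assert psnr(clean, clean) == np.inf             # identical images
noisy = clean + 1.0                             # unit error everywhere
assert abs(psnr(clean, noisy) - 20 * np.log10(255.0)) < 1e-9
```

Higher PSNR means a restoration closer to the reference, which is how the PB and PR columns of the tables are compared.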
Example 2, combined with Example 3, shows that the MIM uses less computation time than the alternative methods while performing equally well. Therefore, we conclude that the multilevel iteration method is an effective fast algorithm for solving the integral equation in image restoration.

Conclusions
In this paper, we formulated the problem of image restoration as an integral equation and proposed two fast algorithms for its solution. The first is the fully discrete multiscale collocation method, which converts the calculation of the integral into a matrix operation. The second is the multilevel iteration method, which guarantees that the solution attains the optimal convergence order. All examples verify that the proposed methods are accurate and efficient compared with alternative strategies. In the future, we will continue to look for faster and more efficient methods.