Solving Inverse Conductivity Problems in Doubly Connected Domains by the Homogenization Functions of Two Parameters

Abstract: In this paper, we make the first attempt to derive a family of two-parameter homogenization functions in a doubly connected domain, which are then applied as the bases of the trial solutions for inverse conductivity problems. The expansion coefficients are obtained by imposing an extra boundary condition on the inner boundary, which results in a linear system for the interpolation of the solution in a weighted Sobolev space. We then retrieve the space-dependent or temperature-dependent conductivity function by solving a linear system, which is obtained from the collocation method applied to the nonlinear elliptic equation after inserting the solution. Although the required data are quite economical, very accurate solutions of the space-dependent and temperature-dependent conductivity functions, the Robin coefficient function and also the source function are obtained. Significantly, these nonlinear inverse problems can be solved directly, without iterations and without solving nonlinear equations. The proposed method achieves accurate results with high efficiency, even when large noise is imposed on the input data.


Introduction
In recent decades, a large number of inverse problems for nonlinear elliptic-type partial differential equations (PDEs) have been investigated, involving the inverse source problem, the inverse conductivity problem and the inverse Robin problem, which arise in several branches of science and engineering. Analytical solutions to inverse problems are difficult to obtain, since some information is missing, such as the boundary conditions or sources, compared with the forward problems. Therefore, many numerical approaches have been developed to resolve inverse problems [1]. For linear elliptic-type PDEs, regularization methods for identifying unknown sources were advocated in [2,3]. Klose [4] solved an inverse source problem based on the radiative transfer equation arising in optical molecular imaging. In Ref. [5], Hon et al. applied Green's function for inverse source identification. Then, Li et al. [6] proposed a modified regularization method for the Poisson equation to determine an unknown source. Ahmadabadi and co-workers proposed the method of fundamental solutions for the inverse space-dependent heat source problem by using a new transformation [7]. The source function for a seawater intrusion problem in an unconfined aquifer was studied by Slimani [8]. Inverse source problems were examined by Alahyane et al. [9] using a regularized optimal control method. Some new regularization methods were proposed for inverse source problems governed by fractional PDEs [10,11]. Nguyen [12] investigated inverse source problems for fractional diffusion equations based on the Tikhonov regularization method. Recently, Liu [13] has proposed a new procedure of boundary functions, which preserves the energy identity, to identify the sources of 2D elliptic-type nonlinear PDEs. However, the method proposed in [13] requires extra boundary conditions on the source function on a rectangle.
We will extend that work to any 2D nonlinear elliptic equation in a doubly connected domain, without using extra boundary data of the source function.
On the other hand, linear and nonlinear inverse conductivity problems have been studied by many authors. Kwon considered the anisotropic inverse conductivity and scattering problems [14]. The inverse problem of time-dependent thermal conductivity was studied by Huntul and Lesnic by recasting the original problem into a nonlinear least-squares minimization [15]. Isakov and Sever provided an integral equation method for inverse conductivity problems using the linearization method [16]. Based on Calderón's linearization method, a new direct algorithm was suggested for anisotropic conductivities [17]. Liu et al. [18] constructed two-parameter homogenization functions for solving the bending problem of a thin plate in a rectangular domain, where the boundary conditions can be satisfied exactly. Using the Lie-group iterative method, Liu and Atluri [19] solved the linear Calderón inverse problem in a rectangular domain, where the unknown conductivity function is effectively recovered. Linear and nonlinear inverse conductivity problems have also been studied by meshless methods, such as the meshless local Petrov-Galerkin method [20], the singular boundary method [21], the local radial point interpolation method [22] and the method of fundamental solutions [23].
In this paper, based on the previous work in [18,19], we focus on the construction of two-parameter 2D homogenization functions in a doubly connected domain, and solve only linear systems to identify the space-dependent and temperature-dependent conductivity functions, the Robin coefficient function and also the source function in 2D nonlinear elliptic equations. The derived homogenization functions are used as the bases, and the undetermined expansion coefficients are solved by imposing the extra boundary conditions. In this way, the nonlinear inverse problems can be solved directly with high accuracy and efficiency, even when twenty percent noise is added to the known data.
We arrange the rest of this paper as follows. Section 2 describes some nonlinear inverse problems in a doubly connected domain for a 2D nonlinear elliptic equation, including the recovery of the conductivity functions α(x, y) and α(u), the inverse Robin problem and the inverse source problem. In Section 3, we develop the homogenization functions with two parameters. In Section 4, the two-parameter homogenization functions act as the bases of the solution. In Section 5, the space-dependent inverse conductivity problems are considered. In Section 6, we solve the temperature-dependent inverse conductivity problems. The inverse Robin problem and one example are given in Section 7, and the inverse source problem is solved in Section 8, where two examples are given. Section 9 gives the conclusions.

Nonlinear Inverse Problems
In this section, we briefly sketch the problems to be considered, which require the retrieval of unknown functions in doubly connected domains.

Space-Dependent Inverse Conductivity Problem
First, a space-dependent conductivity function α(x, y) is to be recovered from

∇ · [α(x, y)∇u(x, y)] = Q(u, u_x, u_y) + S(x, y), (x, y) ∈ Ω, (1)

u(x, y) = h1(x, y), (x, y) ∈ Γo, (2)

u_n(x, y) = g(x, y), (x, y) ∈ Γo, (3)

where n is an outward unit normal on Γo. Besides the unknown conductivity function α(x, y) and the unknown solution u(x, y), the other functions are given. Ω is a doubly connected domain with boundary Γ = Γi ∪ Γo, where ρi(θ) and ρo(θ) are, respectively, the radius functions of the inner boundary and the outer boundary. In order to recover α(x, y), we over-specify the data on the inner boundary:

u(x, y) = h2(x, y), (x, y) ∈ Γi, (4)

where h2(x, y) is a given function. In Figure 1, we sketch the inverse conductivity problem.
In the polar coordinates (r, θ), Equation (1) is recast to

α(r, θ)[u_rr + (1/r)u_r + (1/r²)u_θθ] + α_r u_r + (1/r²)α_θ u_θ = Q + S. (5)

Equation (5) is a first-order PDE for the function α(r, θ) = α(x, y) with respect to r and θ, in which u_r, u_θ, u_rr and u_θθ appear as coefficient functions. It is a nontrivial task to determine α even with u prescribed inside Ω, unless the boundary information of α on Γ is given in advance. Indeed, the inverse conductivity problem considered in this paper is more difficult and troublesome, since the information of u is not given inside the solution domain and only the boundary information of Equations (2)-(4) is available.

Temperature-Dependent Inverse Conductivity Problem
Secondly, we attempt to retrieve α(u) in

α(u)∆u + α′(u)(u_r² + u_θ²/r²) = Q + S, (6)

when S and Q are given. The temperature-dependent inverse conductivity problem is to determine the unknown conductivity function α(u) from the above governing equation together with the information in Equations (2)-(4). The problem becomes harder because Equation (6) is nonlinear in u, while being a linear ODE for α(u) with respect to u.

Inverse Robin Problem to Determine γ(θ)
On the inner boundary Γi, which is an inaccessible part of the boundary Γ, we cannot directly detect the transfer coefficient γ(θ) in the Robin condition

u_n(x, y) + γ(θ)u(x, y) = h3(x, y), (x, y) ∈ Γi, (7)

where the information of u(x, y) and u_n(x, y) on Γi is unknown, while h3(x, y) is given. Taking Equations (1)-(3) into consideration, the unknown Robin coefficient γ(θ) is to be recovered; this is known as the inverse Robin problem.

Inverse Problem for S(x, y)
When the function u(x, y) is known in advance, we can recover S(x, y) by

S(x, y) = ∇ · [α(x, y)∇u(x, y)] − Q(u, u_x, u_y), (8)

where α(x, y) and Q(u, u_x, u_y) are given functions. We will show that S(x, y) is recoverable from Equations (2)-(4) and (8), without solving nonlinear equations.

Two-Parameter Basis Functions
First, we demonstrate the basic idea of a homogenization function by starting from a 2D boundary value problem (BVP):

L u(x, y) = F(x, y), 0 < x < a, 0 < y < b, (9)

u(0, y) = h1(y), u(a, y) = h2(y), u(x, 0) = f1(x), u(x, b) = f2(x), (10)

where L is a second-order linear differential operator. Let

B0(x, y) = (1 − x/a)h1(y) + (x/a)h2(y), (11)

and then

B0(0, y) = h1(y), B0(a, y) = h2(y) (12)

are apparent. Upon letting

B1(x, y) = (1 − y/b)[f1(x) − B0(x, 0)] + (y/b)[f2(x) − B0(x, b)], (13)

and according to the following compatibility conditions:

f1(0) = h1(0), f1(a) = h2(0), f2(0) = h1(b), f2(a) = h2(b), (14)

it is easy to verify

B1(x, 0) = f1(x) − B0(x, 0), B1(x, b) = f2(x) − B0(x, b), B1(0, y) = B1(a, y) = 0. (15)

Therefore, we can produce the 2D homogenization function for the 2D BVP:

B(x, y) = B0(x, y) + B1(x, y), (16)

which satisfies all the boundary conditions in Equation (10). Due to B(x, y), we can transform the original 2D BVP with non-homogeneous boundary conditions into one with homogeneous boundary conditions:

L v(x, y) = F(x, y) − L B(x, y), (17)

v(0, y) = v(a, y) = v(x, 0) = v(x, b) = 0, (18)

with the help of the variable transformation v(x, y) = u(x, y) − B(x, y).
Obviously, Equations (17) and (18) are easier to tackle than Equations (9) and (10). As an extension of B(x, y) to a two-parameter family, we have B(x, y, j, k), which is indeed a family of 2D polynomials; these are complete bases and satisfy the boundary conditions automatically. A function is a so-called homogenization function if it satisfies the boundary conditions on the boundary of a domain. Since the solution u(x, y) must satisfy the prescribed boundary conditions, it is itself a member of the homogenization functions.
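As a minimal illustration of the first homogenization step, the sketch below (with arbitrarily chosen boundary data h1 and h2, purely for illustration) builds B0(x, y) = (1 − x/a)h1(y) + (x/a)h2(y) and verifies that it reproduces the two x-boundary conditions exactly:

```python
import numpy as np

# Minimal sketch of the first homogenization step: B0 interpolates the two
# x-boundary conditions linearly, so B0(0, y) = h1(y) and B0(a, y) = h2(y)
# hold identically. h1, h2 below are illustrative choices, not from the paper.
a = 2.0
h1 = lambda y: np.sin(y)          # data prescribed on x = 0
h2 = lambda y: y**2               # data prescribed on x = a

def B0(x, y):
    return (1.0 - x / a) * h1(y) + (x / a) * h2(y)

y = np.linspace(0.0, 1.0, 11)
assert np.allclose(B0(0.0, y), h1(y))   # left boundary reproduced exactly
assert np.allclose(B0(a, y), h2(y))     # right boundary reproduced exactly
```

The same device applied in the y-direction, after subtracting what B0 already contributes on y = 0 and y = b, yields the full homogenization function.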
Next, the two-parameter homogenization functions are constructed for developing the present method to solve the inverse problems in Equations (1)-(4).
A function B0(r, θ) is a homogenization function if it satisfies the following conditions:

B0(ρo(θ), θ) = h1(θ), B0_n(ρo(θ), θ) = g(θ), (21)

where h1(θ) and g(θ) are, respectively, the Dirichlet and Neumann data on Γo, and B0_n signifies the normal derivative of B0(r, θ) on Γo. The following homogenization function has been derived [18]:

Theorem 1. For the given Cauchy data h1(θ) and g(θ) on Γo, there exist homogenization functions B(j, k, r, θ) in Ω, given explicitly by Equation (26) and satisfying Equation (21), where j + 1, k ∈ N are parameters.
Proof. By Equation (26), B(j, k, ρo, θ) = h1(θ), which satisfies the first condition in Equation (21). Next, we consider the second condition in Equation (21), for which we need to prove Equation (27). It follows from Equations (26), (28) and (30) that the first part of Equation (27) holds when B(j, k, r, θ) is differentiated with respect to r. The second part of Equation (27) is proven as follows: from Equation (2) and Equations (26) and (28)-(32), it follows that the second part also holds. By Equation (27), the second condition in Equation (21) is satisfied, which ends the proof of this theorem.
In Theorem 1, the numbers (j, k) are parameters, so that B(j, k, r, θ) is a two-parameter function. In addition to Theorem 1, we also have the following result for another two-parameter function E(j, k, r, θ).

A Novel Two-Parameter Homogenization Function Method
Since the set E(j, k, r, θ) is generated from the Pascal polynomials x^(j−k) y^k, it forms a complete basis for the problem. By the same token, B(j, k, r, θ) is a complete basis. All the homogenization functions constitute a weighted Sobolev space, denoted by B := {v(x, y) ∈ C²(Ω) | v(x, y) = h1(x, y), v_n(x, y) = g(x, y), (x, y) ∈ Γo}. It is a weighted space because, for any two functions v1(x, y), v2(x, y) ∈ B, a weighted linear combination w1 v1 + w2 v2 with w1 + w2 = 1 again belongs to B. More importantly, the approximate solution satisfies u(x, y) ∈ B.
In terms of the bases B(j, k, x, y), u(x, y) can be expanded as

u(x, y) = Σ_{j=1}^{m} Σ_{k=1}^{m} a_jk B(j, k, x, y), (39)

where a_jk satisfies

Σ_{j=1}^{m} Σ_{k=1}^{m} a_jk = 1, (40)

which guarantees that conditions (2) and (3) are satisfied by u(x, y). The number of the coefficients a_jk is n1 = m². As stated in Equation (4), we suppose that there are N data of u(x, y) available on the inner boundary Γi, and then we can solve a linear system, including Equation (40), to determine a_jk:

Σ_{j=1}^{m} Σ_{k=1}^{m} a_jk B(j, k, x_q, y_q) = h2(x_q, y_q), q = 1, ..., N, (41)

where θ_q = 2qπ/N, x_q = ρi(θ_q) cos θ_q and y_q = ρi(θ_q) sin θ_q.
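The linear-algebra structure of this interpolation step can be sketched as follows. The stand-in bases phi_k and the synthetic inner data are illustrative assumptions (the true bases are the homogenization functions B(j, k, x, y) of Theorem 1); the point is that the N inner-boundary equations are stacked on top of the constraint that the coefficients sum to one, and the over-determined system is solved in the least-squares sense:

```python
import numpy as np

# Sketch of the interpolation step: N inner-boundary equations plus the
# sum-to-one constraint row, solved by least squares. phi are stand-in bases
# and h2_data is synthetic, chosen so the exact coefficients are known.
N, n1 = 40, 4
theta = 2 * np.pi * np.arange(N) / N
rho_i = 0.2                                 # circular inner boundary (illustrative)
xq, yq = rho_i * np.cos(theta), rho_i * np.sin(theta)

phi = [lambda x, y: np.ones_like(x),        # stand-ins for B(j, k, x, y); each
       lambda x, y: 1 + x,                  # would satisfy the outer data in
       lambda x, y: 1 + y,                  # the actual method
       lambda x, y: 1 + x * y]
h2_data = 1 + 0.5 * xq + 0.25 * yq          # synthetic inner data

A = np.column_stack([p(xq, yq) for p in phi])
A = np.vstack([A, np.ones(n1)])             # append constraint row: sum(a) = 1
b = np.append(h2_data, 1.0)
a_jk, *_ = np.linalg.lstsq(A, b, rcond=None)

assert abs(a_jk.sum() - 1.0) < 1e-8         # constraint met (data are consistent)
assert np.allclose(a_jk, [0.25, 0.5, 0.25, 0.0], atol=1e-8)
```

Because the synthetic data are exactly representable in these bases, the least-squares solution reproduces the known coefficients; with noisy data the same stacked system gives the best fit.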

Numerical Algorithm
Next, when u(x, y) is obtained from Equation (39), we recover α(x, y) by supposing

α(x, y) = Σ_{i=0}^{m0} Σ_{j=0}^{m0−i} b_ij x^i y^j, (42)

where b_ij are n := (m0 + 1)(m0 + 2)/2 unknown weighting parameters to be determined by the proposed numerical algorithm. In order to solve this problem, m1 × m2 points (x, y) inside the solution domain Ω are collocated by x_pq = r_p cos θ_q and y_pq = r_p sin θ_q. Since u(x, y) can be approximated by Equation (39), its derivatives appearing in Equation (5) are available. Then, inserting Equation (42) into Equation (5) and collocating at the points (x_pq, y_pq), a linear system, Equation (45), is obtained, from which we can determine b_ij easily; correspondingly, α(x, y) is determined from Equation (42). Therefore, the proposed algorithm for recovering α(x, y) consists of two linear systems of equations, Equations (41) and (45). We impose a noise of level s on the given data, where R(j) are random numbers in [−1, 1], which are used to check the stability of the numerical solution.
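The second linear system can be sketched in Cartesian form: once u is known, the divergence-form equation ∇ · [α∇u] = Q + S is linear in the expansion coefficients b_ij of α. The manufactured case below (u = x² + y², true α = 1 + x, Q = 0) is an illustrative assumption, not one of the paper's examples:

```python
import numpy as np

# Sketch of the collocation system for alpha: each monomial x^i y^j of the
# expansion contributes the column p*lap(u) + p_x*u_x + p_y*u_y, and the
# right-hand side is Q + S. Manufactured data: u = x^2 + y^2, alpha = 1 + x.
terms = [(0, 0), (1, 0), (0, 1)]            # m0 = 1: basis 1, x, y

# collocation points inside an annulus-like region
r = np.linspace(0.3, 1.0, 10)
t = np.linspace(0.0, 2 * np.pi, 20, endpoint=False)
R, T = np.meshgrid(r, t)
x, y = (R * np.cos(T)).ravel(), (R * np.sin(T)).ravel()

ux, uy = 2 * x, 2 * y                       # derivatives of u = x^2 + y^2
lap_u = 4.0 * np.ones_like(x)
S = (1 + x) * 4 + 2 * x                     # exact RHS for alpha = 1 + x, Q = 0

def column(i, j):
    p = x**i * y**j
    px = i * x**(i - 1) * y**j if i > 0 else 0.0
    py = j * x**i * y**(j - 1) if j > 0 else 0.0
    return p * lap_u + px * ux + py * uy

A = np.column_stack([column(i, j) for (i, j) in terms])
b_ij, *_ = np.linalg.lstsq(A, S, rcond=None)
assert np.allclose(b_ij, [1.0, 1.0, 0.0], atol=1e-8)   # alpha = 1 + 1*x + 0*y
```

Since the exact α lies in the trial space, the least-squares solution recovers its coefficients to machine precision; under noisy u the same system gives a stable fit because it is over-determined.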
To evaluate the accuracy, we consider the maximum error (ME) and a relative error e, with ME(α) := max |α_n(x_pq, y_pq) − α(x_pq, y_pq)|, upon comparing the numerical solution α_n to the exact one α at N1 × N2 grid points (x_pq, y_pq) inside the domain, where x_pq = r_p cos θ_q and y_pq = r_p sin θ_q. We take N1 = 50 and N2 = 10 in all computations. We must emphasize that all the linear systems we consider are over-determined; that is, the number of linear equations is much larger than the number of unknown coefficients. Therefore, we apply the conjugate gradient method (CGM) to solve the corresponding normal linear system, whose solution is unique in the least-squares sense.
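The error measures and the noise model can be sketched as follows. Since the formula for the relative error e(α) and the exact form of the noise equation are not reproduced in the text, a root-mean-square relative norm and a multiplicative perturbation h(1 + sR) are assumed here for illustration:

```python
import numpy as np

# Sketch of the accuracy measures and the noise model. The relative-error
# formula and the noise equation are assumed forms, not quoted from the paper.
rng = np.random.default_rng(0)

def add_noise(h, s):
    """Perturb data h by relative noise of level s, with R(j) in [-1, 1]."""
    R = rng.uniform(-1.0, 1.0, size=h.shape)
    return h * (1.0 + s * R)

def max_error(alpha_n, alpha):
    return np.max(np.abs(alpha_n - alpha))

def rel_error(alpha_n, alpha):
    return np.linalg.norm(alpha_n - alpha) / np.linalg.norm(alpha)

alpha = np.linspace(10.0, 15.0, 500)            # stand-in exact values
alpha_n = alpha + 0.01 * np.sin(np.arange(500))  # stand-in numerical values
assert max_error(alpha_n, alpha) <= 0.01 + 1e-12
assert rel_error(alpha_n, alpha) < 1e-3

noisy = add_noise(alpha, 0.05)                   # s = 5% noise
assert np.max(np.abs(noisy / alpha - 1.0)) <= 0.05
```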

Example 1
For Equation (5) with Q = 0, we consider

u = r² + r cos θ = x² + y² + x, α = 10 + r² + r⁴ cos² θ = 10 + (x² + y²)(x² + 1). (50)

The outer boundary of the domain Ω is an ellipse:

ρo(θ) = ab/√(b² cos² θ + a² sin² θ), (51)

where we take a = 1.5 and b = 0.5, and the inner boundary is ρi = 0.2. With m = 2, N = 40, m0 = 4, m1 = 30, m2 = 10 and a noise s = 5% added to the given data, Figure 2a reveals that the maximum absolute error of u is 2.11 × 10⁻², which is much smaller than max(u) = 3.29. The maximum absolute error of the coefficient α(x, y), denoted as ME(α), is 0.18, which is much smaller than max(α) = 15.56. The value e(α) = 3.8 × 10⁻³ is smaller than the 7.65 × 10⁻² obtained in previous studies. For this problem, the dimension of the normal matrix of the first linear system, from Equations (41) and (40), is n1 × n1 = 4 × 4, and the condition number is small, with COND = 570.046. The dimension of the normal matrix of the second linear system (45) is n × n = 15 × 15, and the condition number is COND = 41,996.23. These values show that the two linear systems are stable. The convergence rate is a central issue in numerical methods and algorithms. In Table 1, we consider how different mesh parameters m1 × m2 used in the collocation method influence the convergence rate, as reflected in ME(α) and e(α). It can be seen that more collocation points lead to a more accurate solution of α. For the nonlinear elliptic equation (52), where the exact value of S(r, θ) is obtained by inserting Equation (50) into the equation, as shown in Figure 2b, the ME of u is 2.11 × 10⁻², the ME of α(x, y) is 0.18 and e(α) = 3.74 × 10⁻³. For the nonlinear problem, we have the same condition numbers, because the same parameters are used.
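As a quick sanity check, the polar and Cartesian expressions for the manufactured pair in Equation (50) can be verified to agree identically at random points:

```python
import numpy as np

# Numerical check of Equation (50): the polar and Cartesian forms of u and
# alpha coincide, since x = r cos(theta), y = r sin(theta).
rng = np.random.default_rng(2)
r = rng.uniform(0.2, 1.5, 100)
t = rng.uniform(0.0, 2 * np.pi, 100)
x, y = r * np.cos(t), r * np.sin(t)

u_polar = r**2 + r * np.cos(t)
u_cart = x**2 + y**2 + x
a_polar = 10 + r**2 + r**4 * np.cos(t)**2
a_cart = 10 + (x**2 + y**2) * (x**2 + 1)

assert np.allclose(u_polar, u_cart)
assert np.allclose(a_polar, a_cart)
```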

Example 2
Consider

α[u_rr + (1/r)u_r + (1/r²)u_θθ] + α_r u_r + (1/r²)α_θ u_θ = sin u(r, θ) + u²(r, θ) + S(r, θ). (53)

For this problem, the outer boundary is given by Equation (51). As shown in Figure 3, the ME of u is 1.44 × 10⁻², which is much smaller than max(u) = 18.79. The ME of α(x, y) is 2.47, which is much smaller than max(α) = 32.96. The value e(α) = 3.85 × 10⁻² is small. For this problem, the dimension of the normal matrix of the first linear system, from Equations (41) and (40), is n1 × n1 = 4 × 4, and the condition number is small, with COND = 1844.258. The dimension of the normal matrix of the second linear system (45) is n × n = 6 × 6, and the condition number is COND = 1013.03. These values show that the two linear systems are stable.

Numerical Algorithm
With u(x, y) already recovered from Equation (39) in the last section, the coefficient ∆u preceding α(u) and the coefficient u_r² + u_θ²/r² preceding α′(u) in Equation (6) can be obtained numerically from Equation (39), provided that Q and S are prescribed in advance.
Expanding

α(u) = Σ_{i=0}^{m0} c_i u^i, (56)

inserting it into Equation (6) and collocating at the points (x_pq, y_pq), we obtain a linear system, Equation (57), from which we can obtain c_i; then, α(u) is recovered from Equation (56). It should be noted that, even for this highly nonlinear inverse problem for the coefficient α(u), solving nonlinear equations is not needed.
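The temperature-dependent step can be sketched as follows: expanding α(u) as a polynomial in u turns Equation (6), α(u)∆u + α′(u)|∇u|² = Q + S, into a linear system for the coefficients c_i once u is known. The manufactured case (u = x² + y², true α(u) = 10 + u + u², Q = 0) is an illustrative assumption:

```python
import numpy as np

# Sketch of the alpha(u) recovery: each term c_i u^i contributes the column
# u^i * lap(u) + i * u^(i-1) * |grad u|^2, so the system is linear in c_i.
# For u = x^2 + y^2: lap(u) = 4 and |grad u|^2 = 4(x^2 + y^2) = 4u.
pts = np.random.default_rng(1).uniform(0.3, 1.5, size=(200, 2))
u = pts[:, 0]**2 + pts[:, 1]**2
lap_u = 4.0
grad_sq = 4.0 * u

m0 = 2
cols = [u**i * lap_u + i * u**(i - 1) * grad_sq if i > 0
        else lap_u * np.ones_like(u)
        for i in range(m0 + 1)]
A = np.column_stack(cols)
rhs = (10 + u + u**2) * lap_u + (1 + 2 * u) * grad_sq   # exact Q + S data

c, *_ = np.linalg.lstsq(A, rhs, rcond=None)
assert np.allclose(c, [10.0, 1.0, 1.0], atol=1e-7)      # alpha(u) = 10 + u + u^2
```

The quadratic conductivity is recovered exactly because it lies in the trial space; no Newton iteration or nonlinear solve is involved.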

Example 3
For a quadratic nonlinear Poisson equation, the conductivity function α(u) = 10 + u² + u is to be recovered. The outer boundary is Equation (51) with a = 3.5 and b = 2.5, and the inner boundary ρi = 1 is a unit circle. With m = 2, N = 40, m0 = 2, m1 = 20, m2 = 5 and s = 5%, as shown in Figure 4a, the ME of u is 5.68 × 10⁻³, which is much smaller than max(u) = 10.7. The ME of α(u) is 0.082, which is much smaller than max(α) = 135.43. The value e(α) = 6.03 × 10⁻⁴ is quite small. Figure 4b compares the numerical and exact α(u) in the range u ≤ 11; the two curves almost coincide. For this problem, the dimension of the normal matrix of the first linear system, from Equations (41) and (40), is n1 × n1 = 4 × 4, and the condition number is very small, with COND = 33.959. The dimension of the normal matrix of the second linear system (57) is n × n = 3 × 3, and the condition number is COND = 177,571.79. These values show that the two linear systems are stable.
Example 4
With m = 2, N = 40, m0 = 3, m1 = 15, m2 = 10 and s = 5%, as shown in Figure 5a, the ME of u is 2.78 × 10⁻², which is much smaller than max(u) = 18.6. The ME of α(u) is 2.23, which is much smaller than max(α) = 356.45. The value e(α) = 9.35 × 10⁻³ is quite small. Figure 5b compares the numerical and exact α(u) in the range u ≤ 19; the two curves almost coincide. For this problem, the dimension of the normal matrix of the first linear system, from Equations (41) and (40), is n1 × n1 = 4 × 4, and the condition number is very small, with COND = 6.3185. The dimension of the normal matrix of the second linear system (57) is n × n = 4 × 4, and the condition number is COND = 2.15 × 10⁸.
To reduce the condition number of the second linear system (57), we choose m0 = 2, m1 = 10 and m2 = 10, so that the dimension of the normal matrix reduces to n × n = 3 × 3 and the condition number reduces to COND = 297,651.317. Meanwhile, ME(α) and e(α) slightly increase to 2.38 and 1.08 × 10⁻², respectively.

Numerical Method
Now, we detect γ(θ) by using the data in Equations (2) and (3). Basically, we need to solve Equations (1)-(3) in Ω. For this purpose, we take

u(x, y) = Σ a_jk E(j, k, x, y), (64)

where the number of the coefficients a_jk, j, k = 1, ..., m, is n1 = (m + 1)(m + 2)/2, and they are to be determined. Instead of the bases B(j, k, x, y) used in Equation (39), we employ E(j, k, x, y) from Equation (36) as the bases of u(x, y), because the order r^(j+2) in E(j, k, x, y) is much lower than the order r^(2j) in B(j, k, x, y). As in Equation (43), we arrange m1 × m2 points (x_pq, y_pq) and collocate the governing equation at them, which leads to a linear system, Equation (65), in the coefficients a_jk multiplying E(j, k, x_pq, y_pq) and its derivatives, together with S(x_pq, y_pq), for p = 1, ..., m2, q = 1, ..., m1.
From Equation (65), together with Equation (40), we can compute a_jk; then, u(x, y) is obtained from Equation (64) and inserted into Equation (7) to find γ(θ) along the inner boundary Γi.
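The last step is pointwise. Assuming the Robin condition in the standard form u_n + γ(θ)u = h3 on the inner boundary, γ follows directly from the recovered boundary traces; the traces below are stand-in data chosen for illustration:

```python
import numpy as np

# Sketch of the pointwise Robin-coefficient recovery: once u and u_n on the
# inner boundary are known, gamma(theta) = (h3 - u_n) / u. The traces here
# are illustrative stand-ins, not the paper's Example 5 data.
theta = np.linspace(0.0, 2 * np.pi, 60, endpoint=False)

u_i = 2.0 + np.cos(theta)                # u on Gamma_i (bounded away from 0)
un_i = 0.5 * np.sin(theta)               # normal derivative on Gamma_i
gamma_true = 1.0 + 0.3 * np.cos(theta)
h3 = un_i + gamma_true * u_i             # consistent over-specified data

gamma = (h3 - un_i) / u_i                # pointwise recovery
assert np.allclose(gamma, gamma_true)
```

Note the division by u on Γi: the recovery is well-posed only where u does not vanish on the inner boundary.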

Example 5
We give a solution of a linear diffusion-convection equation with α = 1 + x² + y², defined in a domain Ω given by Equation (51) with a = 2.5 and b = 1.5, and by ρi(θ) = 1.3 + 0.1 cos θ.
With m = 2, m1 = 20, m2 = 20 and s = 20%, as shown in Figure 6a, the ME of u is 4.58 × 10⁻³, which is much smaller than max(u) = 8.16. Figure 6b compares the numerical and exact γ(θ); the two curves almost coincide, with the ME being 1.28 × 10⁻². For this problem, we merely solve the linear system of Equations (65) and (40), whose normal matrix has dimension n1 × n1 = 6 × 6, and the condition number is small, with COND = 446.16.

Numerical Method to Recover S(x, y)
The numerical method to recover S(x, y) is very simple: we merely insert the numerical solution u(r, θ) from Equation (39) into

S(r, θ) = α[u_rr + (1/r)u_r + (1/r²)u_θθ] + α_r u_r + (1/r²)α_θ u_θ − Q(u, u_r, u_θ), (69)

where α(r, θ) and Q(u(r, θ), u_r(r, θ), u_θ(r, θ)) are given functions.
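This direct substitution can be sketched in Cartesian form, S = ∇ · [α∇u] − Q, using the Example-6 data u = x² + y² + x, α = 10 + (x² + y²)(x² + 1) and Q = u². Here central finite differences stand in for differentiating the series (39), and the result is checked against hand-computed derivatives:

```python
import numpy as np

# Sketch of the direct source recovery: evaluate S = div(alpha grad u) - Q
# pointwise. Finite differences are a stand-in for the analytic derivatives
# of the series solution (39).
h = 1e-4
X, Y = np.meshgrid(np.linspace(0.5, 1.0, 6), np.linspace(0.5, 1.0, 6))

u = lambda x, y: x**2 + y**2 + x
alpha = lambda x, y: 10 + (x**2 + y**2) * (x**2 + 1)

ux = (u(X + h, Y) - u(X - h, Y)) / (2 * h)
uy = (u(X, Y + h) - u(X, Y - h)) / (2 * h)
uxx = (u(X + h, Y) - 2 * u(X, Y) + u(X - h, Y)) / h**2
uyy = (u(X, Y + h) - 2 * u(X, Y) + u(X, Y - h)) / h**2
ax = (alpha(X + h, Y) - alpha(X - h, Y)) / (2 * h)
ay = (alpha(X, Y + h) - alpha(X, Y - h)) / (2 * h)

Q = u(X, Y)**2
S = alpha(X, Y) * (uxx + uyy) + ax * ux + ay * uy - Q

# exact S from hand differentiation: lap(u) = 4, u_x = 2x + 1, u_y = 2y
alpha_x = 2 * X * (X**2 + 1) + (X**2 + Y**2) * 2 * X
alpha_y = 2 * Y * (X**2 + 1)
S_exact = alpha(X, Y) * 4 + alpha_x * (2 * X + 1) + alpha_y * 2 * Y - Q
assert np.allclose(S, S_exact, atol=1e-4)
```

No linear system is needed at this stage; the accuracy of S is inherited directly from the accuracy of the recovered u.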

Example 6
For Equation (69) with Q = u², we consider

u = r² + r cos θ = x² + y² + x, α = 10 + r² + r⁴ cos² θ = 10 + (x² + y²)(x² + 1), (70)

with a = 1.5 and b = 0.5 in Equation (51), and we take ρi = 0.2. In this case, with m = 2, N = 30 and a noise s = 5% added to the given data, as shown in Figure 7a, the maximum absolute error of u is 1.35 × 10⁻², which is much smaller than max(u) = 3.29. The maximum absolute error of S(x, y), denoted as ME(S), is 1.22, which is much smaller than max(S) = 101.5. The value e(S) = 5.2 × 10⁻³ is small. For this problem, we merely solve the linear system of Equations (41) and (40), whose normal matrix has dimension n1 × n1 = 4 × 4, and the condition number is small, with COND = 476.623. In Table 2, we consider how different mesh parameters N used in the collocation method for u(x, y) influence the convergence rate, as reflected in ME(S) and e(S). It can be seen that more collocation points lead to a more accurate solution of S.
Example 7
The outer and inner boundaries are given by Equations (62) and (63), respectively. For this example, with m = 2, N = 20 and a noise s = 5%, as shown in Figure 7b, the maximum absolute error of u is 2.03 × 10⁻², which is much smaller than max(u) = 18.62. The ME of S(x, y) is 1.04, which is much smaller than max(S) = 4001.42. The value e(S) = 2.99 × 10⁻⁴ is quite small. The dimension of the normal matrix of the linear system, from Equations (41) and (40), is n1 × n1 = 4 × 4, and the condition number is small, with COND = 19.858.
In Table 3, we consider how different mesh parameters N used in the collocation method for u(x, y) influence the convergence rate, as reflected in ME(S) and e(S). It can be seen that more collocation points lead to a more accurate solution of S; even for a small N = 3, the accuracy is good.

Conclusions
In this paper, we have constructed a class of two-parameter homogenization functions in a 2D doubly connected domain that automatically satisfy the outer Dirichlet and Neumann boundary conditions of the nonlinear elliptic equation. A new numerical method was developed for solving the inverse problems through the technique of two-parameter homogenization functions, including the recovery of the space-dependent and temperature-dependent conductivity functions and also the source function. We first determine u(x, y) in terms of the bases, and then solve a linear system, obtained by the collocation method, to satisfy the inner boundary condition. Back-substituting the solution into the nonlinear elliptic equation, we recover the unknown space-dependent and temperature-dependent conductivity functions by collocating points inside the domain and solving the derived linear equations. The basis B(j, k, x, y) behaves well in the interpolation of u(x, y) in a weighted Sobolev space, so that u(x, y) is recovered very accurately; hence, after the back substitution of u(x, y) into the governing equation, the source function is directly recovered with high accuracy. The method maintains the same advantages of accuracy and efficiency for solving the inverse conductivity problems and inverse Robin problems, even for large noise.

Conflicts of Interest:
The authors declare no conflict of interest.