Article

Two-Dimensional Reproducing Kernel-Based Interpolation Approximation for Best Regularization Parameter in Electrical Tomography Algorithm

Fanpeng Dong and Shihong Yue *
School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(8), 1242; https://doi.org/10.3390/sym17081242
Submission received: 22 May 2025 / Revised: 26 June 2025 / Accepted: 15 July 2025 / Published: 5 August 2025
(This article belongs to the Section Computer)

Abstract

The regularization parameter plays an important role in regularization-based electrical tomography (ET) algorithms, but existing methods generally cannot determine this parameter reliably. Moreover, these methods are not real-time, since an exhaustive search must be performed to find the best parameter. To address this issue, a reproducing kernel-based interpolation approximation method is proposed to efficiently estimate the best regularization parameter from a group of representative samples. The optimality and generalization of the new method have been verified by theoretical analysis and experimental demonstration. The theoretical evaluation is conducted in a Hilbert space with a known reproducing kernel, whose symmetry ensures the uniqueness of the interpolation. Experimental validation is carried out using both simulated and physical models, each with a range of distinct features. Results indicate that the new method can approximately find the best regularization parameter. Consequently, when using this parameter, the new method can effectively improve both the spatial resolution and the stability of the ET imaging process.

1. Introduction

Electrical tomography (ET) [1,2,3] is an advanced visualization and detection technique characterized by its nonradiative nature, low cost, and fast response. However, ET imaging is a nonlinear process that is generally simplified to a linear problem. Due to this nonlinearity, together with inherent ill-posedness and the soft-field effect [4,5], the quality of ET imaging is often insufficient for practical applications. Various mathematical regularization methods have been proposed to enhance imaging quality, contingent upon the correct selection of their regularization parameters [6,7]. Several effective algorithms have been proposed to determine the regularization parameter; the two most commonly used are the L-curve method [8] and generalized cross validation (GCV) [9].
The L-curve method uses the relationship between the residual norm and the regularization norm, which can be plotted as a curve resembling the letter ‘L’. The inflection point (corner) of the L-curve serves as a global indicator for computing the regularization parameter. However, the L-curve’s inflection point may not exist or may fail to converge in some cases [10]. The GCV method is suitable for large-scale data with white noise; when the data size is small or the noise is correlated, the method becomes unstable. In practice, the GCV function may be too flat near its minimum, making the identification of this minimum, and thus the determination of the regularization parameter, challenging [11]. Over the past decades, efforts have been made to overcome these problems. Research on the L-curve method for inverse problems falls into two categories: the first involves intuitive analysis or theoretical justification of the L-curve’s existence and convergence [12,13]; the second focuses on automatically computing the global corner to determine the appropriate regularization parameter [14,15]. Since these methods depend on an exhaustive search for the best parameter, they cannot work in real time. In summary, the existing algorithms suffer from inaccuracy, instability, and nonlinearity in practice, as explained further in the next section. Different from the existing methods, in this paper, we use the best interpolation approximation to address these issues after selecting a group of representative samples with optimal regularization parameters.
There are many interpolation methods, such as polynomial-based and spline-based approximation [15,16]. These methods have their own applicable ranges and limitations: their optimality and generalization cannot be guaranteed, different methods can yield vastly different approximation errors, and each is applicable only in certain situations. Furthermore, the methodology for selecting representative samples remains unaddressed; so far, few studies have analyzed the criteria for representative samples. In recent years, the reproducing kernel-based interpolation approximation of a given function in an arbitrary data space has received ever-increasing attention [17]. Unlike existing methods, the symmetry and reproducing properties of the kernel ensure optimality and generalization. In any two-dimensional Hilbert space, the best interpolation approximation model [18] has been constructed, but it cannot be applied directly in high-dimensional data spaces. Considering these advances and their limitations, in this paper, we first map the high-dimensional ET measurements onto a two-dimensional dataset. Then, representative samples of the measurements are found by a typical clustering algorithm combined with a kernel trick. Consequently, the method can provide a fast response for any new ET measurement.

2. Related Work

This section introduces the ET principle and the corresponding imaging algorithm, followed by a review of the optimal interpolation approximation in a two-dimensional data space.

2.1. ET Principle and Related Imaging Algorithm

Let Ω be a round ET detection field with boundary Γ on which M measurement electrodes are attached. A typical ET measuring process involves sequentially injecting current or voltage into Ω through each electrode on Γ and obtaining measurements from the other electrodes. After exciting all electrodes in turn, m measurements are obtained. After discretizing Ω into n units/pixels, the nonlinear map f from Ω to Γ can be approximately expressed as
U = Sσ, s.t. U ∈ Γ, σ ∈ Ω
where U is the vector of m boundary measurements, σ denotes the conductivity within the n partitioned pixels in Ω, and S is the sensitivity matrix, which linearizes the nonlinear map f.
The most commonly used algorithm for solving σ in ET is linear back projection (LBP):
σ = SᵀU
where Sᵀ is the transpose of S and U is given. Due to the ill-posed nature of the problem and the “soft field” effect [19], the imaging quality of LBP is too low to satisfy the requirements of many engineering applications. To address this issue, the inverse problem is often solved by optimizing the following objective function:
J̃ = ||Sσ − U|| → min
where ||·|| is a chosen norm used to measure the residual error between two comparable entities. But the solution of this optimization problem is often non-unique and unstable. A variety of regularization-based algorithms have been introduced, among which Tikhonov regularization (TR) is the most commonly used [20], with the following objective function:
σ = argmin_{σ∈R^{n×1}} { ||Sσ − U||₂² + λR(σ) }
where R(σ) is a regularization function and λ is the regularization parameter that balances the two terms. If R(σ) is taken as ||σ||₂², this problem has the analytic solution
σ = (SᵀS + λI)⁻¹SᵀU
where I is an identity matrix. Over the past decades, significant efforts have been dedicated to determining the optimal regularization parameter λ. The L-curve method and generalized cross validation are the two most representative algorithms, as explained below:
(1)
L-curve (LC): The LC method selects the optimal λ by identifying the corner of the L-curve, characterized by the pair (||σ||, ||Sσ − U||). This corner can be pinpointed using a second-order difference index [21]. Since the L-curve is commonly represented in a log-log coordinate system, the corner is computed as follows:
λ = argmax_{λk} [ρ″(λk)η′(λk) − ρ′(λk)η″(λk)] / [(ρ′(λk))² + (η′(λk))²]^{3/2}
where ρ′(λk) and ρ″(λk) are first- and second-order differences of log(||Sσ − U||), and η′(λk) and η″(λk) are first- and second-order differences of log(||σ||), respectively.
(2)
Generalized cross validation (GCV): There has been substantial interest in estimating a good value of λ directly from the data. The GCV estimate [19], originally developed for ridge regression, is the minimizer of V(λ), given by
V(λ) = (1/m)||(I − A(λ))y||² / [(1/m)Trace(I − A(λ))]², s.t. A(λ) = X(XᵀX + mλI)⁻¹Xᵀ
This estimate is a rotation-invariant version of Allen’s PRESS, i.e., ordinary cross-validation.
Both LC and GCV have their respective limitations [22,23] as follows:
(1)
Extreme points on their curves are often uncertain, and the presence of multiple such points may prevent the algorithms from correctly identifying them. Efforts have been made to overcome these problems, but the resulting methods are only conditionally effective and efficient.
(2)
Both algorithms are time-consuming, since numerous candidate values must be evaluated by the objective function to find the extreme point. Hence, they are not suitable for engineering applications requiring real-time, rapid responses.
Therefore, in this paper, we depart from these traditional methods and instead use interpolation to approximate the optimal parameter, after a set of prior optimal parameters is constructed and representative samples are selected.
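To make the baseline concrete, the following is a minimal sketch of TR reconstruction for a given λ and of the exhaustive parameter search that LC- and GCV-style methods implicitly rely on. All names are illustrative, and the search grid over [10⁻⁸, 10⁻¹] follows the experimental setup described later in Section 4.

```python
import numpy as np

def tikhonov_reconstruct(S, U, lam):
    """Analytic TR solution: sigma = (S^T S + lam * I)^{-1} S^T U."""
    n = S.shape[1]
    return np.linalg.solve(S.T @ S + lam * np.eye(n), S.T @ U)

def exhaustive_search(S, U, sigma_ref, candidates):
    """Brute-force baseline: reconstruct with every candidate lambda and keep
    the one whose image is closest to a reference (hence not real-time)."""
    errors = [np.linalg.norm(tikhonov_reconstruct(S, U, lam) - sigma_ref)
              for lam in candidates]
    return candidates[int(np.argmin(errors))]

# Candidate values spanning the interval [1e-8, 1e-1] used in the experiments.
candidates = np.logspace(-8, -1, 8)
```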

2.2. Interpolation Approximation in Reproducing Kernel Space

Let λk be the best hyperparameter for the ET image σk when using the TR algorithm, k = 1, 2, …, L. Since the sensitivity matrix S in any ET algorithm is fixed, σk is uniquely determined by Uk. Consequently, each Uk corresponds to a unique λk, k = 1, 2, …, L.
Define the map function f(·) from U to λ as
λ = f(U)
The function f(·) is unknown but can be approximated by interpolation.
Let {(Uk, λk)}(0≦k≦L) be a set of samples of f(·). According to numerical approximation theory [24], the hyperparameter λopt for any new measurement U can be determined from its nearest neighbors as follows:
λopt = Σ_{k=1}^{m} pk λk, s.t. pk = (dk)⁻¹ / Σ_{s=1}^{m} (ds)⁻¹; dk = ||U − Uk||
But there are at least two problems with this determination method:
(1)
The optimality of pk cannot be assured in any mathematical sense; thus, the generalization of the approximation form in Equation (9) is also not guaranteed.
(2)
The weighting scheme applied in Equation (9) renders it sensitive to noise and unable to reflect the typicality or compatibility of each interpolated image [25].
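For reference, a minimal sketch of the inverse-distance weighting in Equation (9) is given below; the function and variable names are illustrative.

```python
import numpy as np

def idw_lambda(U, samples, eps=1e-12):
    """Equation (9): lambda_opt as a weighted mean of sample parameters,
    with weights proportional to 1 / ||U - U_k||."""
    d = np.array([np.linalg.norm(U - Uk) for Uk, _ in samples]) + eps
    w = (1.0 / d) / np.sum(1.0 / d)
    lams = np.array([lam for _, lam in samples])
    return float(w @ lams)
```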
In this paper, based on the solved reproducing kernel, we aim to find the best interpolation approximation for any boundary measurement U within the detection field Ω when employing the TR algorithm. Let E ⊆ Rⁿ, and let H be a Hilbert space of real-valued functions on E. A binary function R(·, ·) on E×E is a reproducing kernel if the following conditions are met:
(1) For tE, R(·, t) ∈ H; (2) and for fH, t, rE, it holds
f(t) = <f(r), R(r, t)>
where ⟨·,·⟩ is the inner product in H. Define a group of functionals {Fk}(0≦k≦L) as
Fk(f) = f(Uk) = λk, s.t. f ∈ H
Assume that Hn is a subspace of H; the operator from H to Hn is defined as
(Hn f)(r) = Σ_{i=1}^{L} αi(r) λi, s.t. r ∈ E, {αi(r)}(1≦i≦L) ⊂ Hn
For any f(r)∈H, the following discrepancy criterion is defined as
dE(r) = inf_{{αi(r)}⊂Hn} sup_{f∈H} |f(r) − (Hn f)(r)|
where sup and inf denote the supremum and infimum, respectively. If there is a set of functionals {αi(r)}(1≦i≦L) ⊂ Hn such that Equation (13) attains its optimum, then the resulting operator is called the best approximation operator [26].
Let W be a two-dimensional data vector space such that
W = { f | f(x, y) ∈ C; fx, fy, fxy ∈ L² }
where C is the set of continuous functions defined on E and L² is the set of square-integrable functions. The inner product in W is defined as
⟨f, g⟩ = ∫_E (fg + fx gx + fy gy + fxy gxy) dσ, s.t. f, g ∈ W
In particular, the reproducing kernel in W₂¹ has been derived [26]:
R(s,t)(x, y) = Rs(x) Rt(y), s.t. ⟨f(x, y), R(s,t)(x, y)⟩ = f(s, t)
where
Rs(x) = [cosh(s + x − b − a) + cosh(|s − x| − b + a)] / [2 sinh(b − a)], s, x ∈ [a, b]
Rt(y) = [cosh(t + y − c − d) + cosh(|t − y| − d + c)] / [2 sinh(d − c)], t, y ∈ [c, d]
This formula shows that the reproducing kernel is symmetric, which ensures the uniqueness of the interpolation. Given a set of samples {((xk, yk), λk*)}(0≦k≦L) of a two-dimensional function f(x, y), the best interpolation operator [18] in the sense of uniform approximation is
(Hnᵂ f)(x, y) = Σ_{j=1}^{n} Rj*(x, y) λj*, s.t. λj* = Σ_{k=1}^{j} βjk λk and Rj*(x, y) = Σ_{t=1}^{j} βjt R(xt,yt)(x, y)
where {βjt} are the Gram–Schmidt coefficients that orthonormalize R(x1,y1), R(x2,y2), …, R(xL,yL), j, t = 1, 2, …, L. Furthermore, Hnᵂ f satisfies
Fj(Hnᵂ f) = λj, j = 1, 2, …, L
Moreover, if {(xj, yj)}(1≦j≦L) is dense in [a, b] × [c, d], the following conclusion holds:
lim_{n→∞} (Hnᵂ f)(x, y) = f(x, y) (u.a.), s.t. f(x, y) ∈ W, (x, y) ∈ [a, b] × [c, d]
where “u. a.” means uniform approximation.
Hereafter, the best interpolation approximation operator for f(x, y) defined above is referred to as BIAO in this paper.
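As an illustration, the following sketch evaluates the two-dimensional kernel under the reconstructed cosh/sinh form given above; the interval endpoints a, b, c, d and the function names are assumptions for illustration only.

```python
import numpy as np

def rk_1d(s, x, a=0.0, b=1.0):
    """One-dimensional W_2^1 reproducing kernel on [a, b], as reconstructed
    from the cosh/sinh form above (Ref. [26])."""
    return (np.cosh(s + x - b - a) + np.cosh(abs(s - x) - b + a)) / (2.0 * np.sinh(b - a))

def rk_2d(p, q, a=0.0, b=1.0, c=0.0, d=1.0):
    """Two-dimensional kernel as a product of 1D kernels; note the symmetry
    rk_2d(p, q) == rk_2d(q, p), which guarantees a unique interpolant."""
    (s, t), (x, y) = p, q
    return rk_1d(s, x, a, b) * rk_1d(t, y, c, d)
```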

3. The Best Interpolation Approximation for the Hyperparameter in TR

In this section, any m-dimensional ET measurement is mapped to a two-dimensional vector by multidimensional scaling, and the BIAO is then constructed from a set of representative samples.

3.1. Feature Reduction by Multidimensional Scaling

Let {Gk}(0≦kL) be a set of ET images, where each ET image Gk responds to a unique measurement Uk and a parameter λk that yields the best ET imaging quality when using the TR algorithm, UkRn, k = 1, 2, …, L. To effectively approximate f(•), it is necessary to determine the BIAO from Equation (7) for the set of pairs {(Uk, λk)}(0≦kL). But subject to the dimensionality curse [27] associated with large dimension m, it is impossible to interpolate in every dimension within Rn. To address the issue, the vector of Uk is first transformed into a two-dimensional vector using the multidimensional scaling (MDS) mapping [28]. The use of MDS aims to obtain the following two advantages at least:
(1)
Feasible two-dimensional BIAO methods exist in practice, and the upper bound of their approximation errors is estimable [29]. In particular, the generalization of the BIAO within a two-dimensional Hilbert space based on a reproducing kernel is both theoretically demonstrated and practically validated. By contrast, these properties cannot be assured for other typical interpolation approximations.
(2)
The topological structure of the original data remains essentially unchanged after MDS mapping. MDS preserves the pairwise distances between points as faithfully as possible when mapping from a high-dimensional space to the selected low-dimensional space; in particular, if L is small, the mapped distances are nearly unchanged. Hence, the distribution of all data points can be visually observed and further interpolated and computed in the two-dimensional data space.
The typical MDS map refers to a set of related ordination techniques used in information visualization to display the information contained within a distance matrix.
In the case of {(Uk, λk)}(0≦k≦L), the data to be analyzed are a set of L vectors U = {U1, U2, …, UL} on which a distance function is defined, dij = ||Ui − Uj|| for the i-th and j-th points. These distances form a dissimilarity matrix D = {dij} ∈ R^{L×L}. Given D, MDS aims to find L two-dimensional vectors Y1, Y2, …, YL in R² such that
dij = ||Ui − Uj|| ≈ ||Yi − Yj|| for all Ui, Uj ∈ U
where ||·|| is a vector norm. In classical MDS the norm is the Euclidean distance, but the formulation has since been extended to metric or arbitrary distance functions. In other words, MDS attempts to find an embedding of the L points into R^d such that distances are preserved. If the dimension d is reduced to two, all points can be plotted to visualize the similarities among the L points. Usually, MDS is formulated as an optimization problem in which Y1, Y2, …, YL are solved from the following typical cost function:
min_{Y1, Y2, …, YL} Σ_{i<j} (dij − ||Yi − Yj||)²
In this paper, the minimization is solved via the matrix eigenvalue decomposition method [30]. Consequently, λ = f(U) is rewritten as λ = f(Y) after U is mapped to Y by MDS.
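A minimal sketch of classical MDS via eigenvalue decomposition is shown below, assuming Euclidean distances; the variable names are illustrative.

```python
import numpy as np

def classical_mds(U_vectors, dim=2):
    """Classical MDS: double-center the squared-distance matrix, then embed
    with the top eigenpairs of the resulting Gram matrix."""
    L = len(U_vectors)
    D2 = np.array([[np.linalg.norm(u - v) ** 2 for v in U_vectors]
                   for u in U_vectors])
    J = np.eye(L) - np.ones((L, L)) / L      # centering matrix
    B = -0.5 * J @ D2 @ J                    # Gram matrix of the embedding
    w, V = np.linalg.eigh(B)                 # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]          # keep the top `dim` components
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```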
Figure 1a shows five groups of ET images, where each group contains eight images with similar characteristics such as shape and distribution. The reconstructed images with various target object distributions and their corresponding MDS data vectors are shown in Figure 1b. As seen, similar images lie closer together than dissimilar ones, so the low-dimensional vectors reflect the similarity among the high-dimensional ET measurements.

3.2. Representative Sample and Best Interpolation Approximation

Let X = {(Uk, λk)}(0≦k≦L) be a set of samples, where each sample consists of the input measurement Uk and the output best hyperparameter λk under the TR algorithm, and let Yk be the corresponding two-dimensional vector of Uk after the MDS map, giving S = {(Yk, λk)}(0≦k≦L).
To implement BIAO for λ = f(Y), a set of representative samples must be selected from S. This set must satisfy at least the following conditions:
(1)
If Yp and Yq in S are similar/dissimilar, λp and λq must be nearly the same/different, satisfying the principle that “a similar problem has a similar solution” [31]. In particular, the same value of Y must not correspond to two different values of λ; otherwise, the determined output would be contradictory.
(2)
Each representative sample Yp is centralized in a neighborhood, so that, when any Y falls in the neighborhood of Yp, the corresponding λ falls into the neighborhood of λp. Hence, the neighborhoods of different representative samples in Y should be separated from each other.
But λ = f(Y) is uncertain and may be nonlinear except at the samples λk = f(Yk). Therefore, when two vectors Ys and Yt in S are similar, λs and λt may still be dissimilar, and vice versa.
For example, the least squares method uses the inner product as a distance metric between samples for regression. Other metrics, such as the Euclidean distance, can also be used; however, the Euclidean distance does not capture nonlinear relationships. To solve this problem, we use a kernel trick [32] such that ||λs − λt|| can be calculated through kernel evaluations on the Y vectors in S as follows:
dst² = ||λs − λt||² = || (1/|Cs|) Σ_{i∈Cs} f(Yi) − (1/|Ct|) Σ_{j∈Ct} f(Yj) ||²
= (1/|Cs|²) Σ_{i,i′∈Cs} R(Yi, Yi′) − (2/(|Cs||Ct|)) Σ_{i∈Cs} Σ_{j∈Ct} R(Yi, Yj) + (1/|Ct|²) Σ_{j,j′∈Ct} R(Yj, Yj′)
= tr(K Mst)
where R(·,·) is the reproducing kernel defined in Section 2.2 and
K = [Kss  Kst; Kts  Ktt], (Mst)pq = { 1/|Cs|² if Yp, Yq ∈ Cs; 1/|Ct|² if Yp, Yq ∈ Ct; −1/(|Cs||Ct|) otherwise }
with K the Gram matrix of all points in Cs ∪ Ct. Hence, the distance between (Ys, λs) and (Yt, λt) in S can be computed entirely from kernel evaluations on the Y vectors. The use of R(·,·) can handle nonlinear problems and is mathematically well founded.
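The expansion above is the standard kernel-trick form of the squared distance between two cluster means in feature space; a short sketch, with illustrative names, follows.

```python
import numpy as np

def kernel_cluster_distance(Ys, Yt, kernel):
    """Squared feature-space distance between the means of two clusters,
    computed purely through kernel evaluations (the expansion above)."""
    Kss = np.mean([[kernel(a, b) for b in Ys] for a in Ys])
    Ktt = np.mean([[kernel(a, b) for b in Yt] for a in Yt])
    Kst = np.mean([[kernel(a, b) for b in Yt] for a in Ys])
    return Kss - 2.0 * Kst + Ktt
```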
In the sense of clustering analysis, each neighborhood refers to a cluster, and each representative sample is a clustering center. The two conditions mentioned above align precisely with the fundamental clustering principle of maximizing the distance between clusters while minimizing the distance within clusters. Therefore, the typical C-means (CM) clustering algorithm [33] is used to partition all samples in {Yk}(0≦k≦L) into c clusters C1, C2, …, Cc with individual clustering centers v1, v2, …, vc, which are regarded as the c representative samples for BIAO. CM uses alternating optimization iterations [34] with the following two steps: one assigns each sample to its nearest center, and the other recomputes the center of each cluster by
vi = Σ_{Yj∈Ci} Yj / |Ci|, i = 1, 2, …, c
The number of clusters is determined by the widely used Davies–Bouldin (DB) validity index [35]. Let Δi be the within-cluster distance of the i-th cluster, computed as the mean of all distances in Ci; let δij = ||vi − vj|| be the between-cluster distance between Ci and Cj; and let c be the number of clusters. DB seeks the number of clusters that maximizes the between-cluster distances while minimizing the within-cluster distances. At the same time, the pairs λs and λt in the same cluster must have as small a distance as possible. Hence, for a set of candidate cluster numbers, DB finds the optimal number of clusters c* by minimizing DB(c) over all possible values of c as follows:
c* = argmin_c DB(c), DB(c) = (1/c) Σ_{i=1}^{c} Ri, s.t. Ri = max_{j≠i} (Δi + Δj)/δij
where δij = ||vivj||, △i = ∇iξi, is the mean of all distances among points in Ci, and ξi is the mean of all different pairs of |zszt| in Ci. Usually, c is taken as the interval [2, n1/2], and n1/2 is commonly regarded as the maximal number of clusters for a dataset containing n points.
Figure 2a shows how DB finds c* in the two-dimensional data space of the 40 samples shown in Figure 1, and Figure 2b shows the five centers/representative samples found after determining the optimal number of clusters c*. Different clusters are shown in different colors, and the representative samples are the corresponding cluster centers.
Let {(Yk, λk)}(0≦kc) be a set of representative samples and { a i ( Y ) } 1 c X n be a set of orthogonal functions from {(Yk, λk)}(0≦kc). According to Equation (23), the BIAO for any Y from input U is calculated as
(Hcᵂ f)(Y) = Σ_{j=1}^{c} Rj*(Y) λj*, s.t. λj* = Σ_{k=1}^{j} βjk λk and Rj*(Y) = Σ_{t=1}^{j} βjt R_{Yt}(Y)
where {βjt} are the Gram–Schmidt coefficients that orthonormalize R_{Y1}, R_{Y2}, …, R_{Yc}, j, t = 1, 2, …, c. Hc is a functional on W, and Equation (24) satisfies (Hc f)(Yi) = λi, i = 1, 2, …, c. The detailed implementation steps of BIAO for f(·) are shown in Table 1.
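Since the Gram–Schmidt orthonormalization of the kernel sections can be absorbed into a single linear solve (the coefficients β correspond to a Cholesky factor of the Gram matrix), the interpolant of Equation (24) reduces to a standard kernel interpolation; a minimal sketch under this assumption, with illustrative names, follows.

```python
import numpy as np

def build_biao(Y_nodes, lams, kernel):
    """Construct the BIAO of Equation (24). With Gram matrix G_ij =
    kernel(Y_i, Y_j), the orthonormalized expansion collapses to
    H(Y) = r(Y)^T G^{-1} lambda, which satisfies H(Y_i) = lambda_i."""
    G = np.array([[kernel(a, b) for b in Y_nodes] for a in Y_nodes])
    coef = np.linalg.solve(G, np.asarray(lams, dtype=float))
    def H(Y):
        r = np.array([kernel(Yt, Y) for Yt in Y_nodes])
        return float(r @ coef)
    return H
```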
Figure 3 illustrates the flow of the two-dimensional reproducing kernel-based interpolation approximation for determining the best regularization parameter. The time required to construct the reproducing kernel model is not part of the real-time application process, so it does not affect the estimation of the optimal parameter. The time for optimal parameter estimation comprises the interpolation computation time and the incremental MDS computation time.

4. Experiment

This section includes two parts: the first uses a set of simulated cross-sections of lung tissue, each constructed from a real CT image; the second uses physical models in which target objects have various shapes and positions in the detection field. After selecting a group of representative samples, BIAO is used to determine the optimal parameter for the TR algorithm. The reconstructed images based on BIAO are compared with those based on the LC and GCV methods. The detection field Ω was divided into 812 pixels, and all experimental measurements were obtained using a 16-electrode ERT system; hence, the number of measurements per frame of an ERT image is 240.
All reconstructed images were assessed using two indices: time resolution and imaging quality. The time resolution is defined as the time required to reconstruct a single frame of an ERT image. Imaging quality is evaluated by the relative error (RE) between the original and reconstructed images:
RE = Σ_{j=1}^{n} |σj − σj*| / σj
where σj is the calculated conductivity at jth pixel and σj* is the actual one in simulation or experiment, j = 1, 2, …, n. Currently, RE is widely used to evaluate imaging quality in Ω. On the other hand, considering that selecting the best parameter is crucial, the correctness of the selected parameters from BIAO, LC, and GCV methods is evaluated by the following error criteria:
HE = ||log(λopt) − log(λ*)||
where λopt is the parameter selected by a given method and λ* is the best value among all candidates of λ.
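For clarity, the two evaluation criteria can be computed as in the following sketch (illustrative names; RE is written exactly as defined above).

```python
import numpy as np

def relative_error(sigma, sigma_true):
    """RE: absolute pixel errors normalized by the reconstructed
    conductivity and summed over all pixels, as in the text."""
    return float(np.sum(np.abs(sigma - sigma_true) / sigma))

def parameter_error(lam_opt, lam_best):
    """HE: distance between the log-parameters."""
    return float(abs(np.log(lam_opt) - np.log(lam_best)))
```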

4.1. CT-Based ET Simulation

We retrospectively collected 436 CT samples from patients at Tianjin General Hospital. Representative CT images are shown in the second row of Table 2. According to the shapes and positions of the tissues in the CT images, the corresponding models were built in COMSOL 6.0, where the average conductivities of the lung lobes were set to the prior values shown in Table 3. The study was conducted in accordance with the Declaration of Helsinki, and all experiments were approved by the ethics committee of General Hospital of Tianjin Medical University.
All samples are partitioned into two groups: one group is the representative samples, and the other group is the testing samples that are used to quantitatively evaluate the quality of any ET image.
The rows of Table 2 labeled λ* show the reconstructed images with the best parameter under the TR algorithm. The remaining rows give the reconstructed images, RE, HE, and runtime for LC, GCV, and BIAO in turn. These results are summarized as follows:
(1)
Imaging quality. The images from BIAO are very close to the best ones in terms of shape and position, especially for M13, M14, and M15. Because BIAO is based on the principle that similar measurements yield similar outputs, it is data-driven and stable. In contrast, the images from LC have large errors in M13, M14, and M15, where some tissues cannot be found at all; this is because the corner of the L-curve can become blurred or even disappear. The images from GCV are the worst, because GCV assumes Gaussian noise, while the noise in ERT measurements, particularly model noise, is often non-Gaussian.
(2)
RE value. The RE value of each reconstructed image from BIAO is lower than those from LC and GCV. LC and GCV have very similar RE values since they rely on similar mechanisms, which is consistent with the reconstructed images. Neither the L-curve nor GCV performs well in this context: the L-curve relies on the existence of a clear corner, which is not theoretically guaranteed, while GCV assumes a specific type of noise, whereas in practice the noise includes both model noise and measurement-system noise, which do not satisfy the GCV assumptions. The reproducing kernel method is data-driven and remains stable when applied to data from the same distribution.
(3)
HE value. The HE value of each reconstructed image differs from its RE value; some HE values from BIAO are closer to the optimal value λ* than those from LC and GCV, for example in M11, M13, and M16. We attribute this to the nonlinear relation of f(·): the distribution of objects affects the optimal parameter differently. Adding an object at the center causes little change in the measured values but a large change in the optimal regularization parameter, whereas adding an object at the boundary leads to large changes in the measured values but only minor changes in the optimal parameter.
(4)
Runtime. Excluding the time to find the representative samples and build the BIAO, the runtime of BIAO is significantly shorter than those of LC and GCV, each of which evaluates a group of discrete values in the interval [10⁻⁸, 10⁻¹]. Although BIAO requires prior calibration, this does not affect its online implementation or application.

4.1.1. Experiments Under Various Noise Levels

The ERT process is subject to various measurement noises. In this study, we focus on typical white Gaussian noise, whose level is assessed as
NL = 100 × Nmax / Δφmax
where NL is the noise level (in percent), Δφmax is the maximal measurement value, and Nmax is the maximum value of the noise.
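A sketch of how measurements can be perturbed at a prescribed noise level, under the definition above, is shown below (illustrative names).

```python
import numpy as np

def add_noise(U, nl_percent, rng=None):
    """Add white Gaussian noise scaled so that its maximum magnitude equals
    nl_percent / 100 of the largest measurement, per the NL definition."""
    rng = rng or np.random.default_rng(0)
    noise = rng.standard_normal(U.shape)
    noise *= (nl_percent / 100.0) * np.max(np.abs(U)) / np.max(np.abs(noise))
    return U + noise
```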
As shown in Table 4, experiments at noise levels from 1% to 10% were conducted for four models. As the noise level increases, the RE values also increase. Among the methods, the RE of BIAO and the L-curve remain relatively stable compared to GCV. The RE of BIAO is closest to the minimal RE under different noise conditions, with less variation than GCV and the L-curve, demonstrating better stability. On average, the RE of BIAO deviates from the minimal RE by 0.0069, while that of the L-curve deviates by 0.0527 and GCV by 0.2145. In fact, the L-curve and GCV methods are quite unstable, especially when the noise level varies; this instability is evident in their results, which fluctuate significantly with changes in noise. The average fluctuation of the CC value for GCV is 0.075, while for BIAO and the L-curve the average fluctuations are 0.002 and 0.004, respectively.

4.1.2. Sensitivity Analysis of Cluster Number

The number of clusters affects the interpolation error. When all samples are used as interpolation nodes, the error is naturally low; however, this leads to a significant increase in computational cost. Conversely, using fewer clusters results in greater interpolation error. To analyze the relationship between the number of clusters and interpolation error, we conducted a sensitivity analysis using 3706 samples. The sample distribution is shown in Figure 4. The interpolation error is defined as follows:
ER = (1/N_M) Σ_{i=1}^{N_M} |log(λ*) − log(λ)|
Here, NM denotes the number of samples.
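This metric is the mean absolute difference of log-parameters over the test set; a one-line sketch with illustrative names:

```python
import numpy as np

def interpolation_error(lams_best, lams_pred):
    """ER: mean |log(lambda*) - log(lambda)| over all test samples."""
    return float(np.mean(np.abs(np.log(lams_best) - np.log(lams_pred))))
```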
Figure 5 presents the two-dimensional distribution of samples after MDS, with color representing the optimal λ. The results indicate a nonlinear yet monotonic relationship between the 2D coordinates and the optimal parameter: samples on the left correspond to lower λ values, while those on the right correspond to higher values. The error as a function of the number of clusters is depicted in Figure 6. As the number of clusters increases, the interpolation error decreases. Notably, increasing the number of clusters from 2 to 50 results in a reduction of 0.2460 in error, while increasing from 50 to 100 yields only a marginal decrease of 0.0131. Considering computational complexity, 50 clusters represent a balanced and practical choice.

4.2. Real Models with Various Features

Unlike the above experiments, in which the target objects have similar features, the models in this experiment have very different features, such as sizes and positions. This set of experiments was carried out using the TJU-ERT system, which has 32 electrodes and was developed at Tianjin University, China. The TJU-ERT system consists of a PC for data processing and image reconstruction, an FPGA-based data acquisition system, and a drum with 32 electrodes. The diameter of each electrode is 10 mm, and the diameter of the cylindrical tank is 320 mm.
The representative and testing samples in the experiments are denoted V21–V26 and M21–M26, respectively, and are shown in Table 5. In all models, the drum is filled with salt water with a conductivity of 0.01 S/m; the Plexiglas rods represent objects whose conductivity is close to 0 S/m. The optimal reconstructed image of each model and the reconstructions under GCV, LC, and BIAO are also shown in Table 5. When using TR, the values from BIAO, LC, and GCV were compared along the same group of discrete values of λ, where the search interval of λ is [10⁻⁸, 10⁻¹], discretized into eight subintervals within each of which λ is regarded as constant. The rows of Table 5 labeled λ* show the reconstructed images with the best parameter λ* under the TR algorithm.
The remaining rows of Table 5 give the reconstructed images, RE, HE, and runtime for LC, GCV, and BIAO in turn. These results are summarized as follows:
(1)
Imaging quality. The imaging results of the three methods can be visually evaluated by comparing the shapes and positions of the target objects with those of the real models. Even though M21–M26 contain objects of different sizes and shapes, the images from BIAO are closer to the best ones. In contrast, the images from GCV exhibit large errors, with numerous artifacts in all models. The images from LC and BIAO show similar shapes and positions of the target objects, but those from LC have slightly more artifacts. However, for models with small or central objects, the reconstructed images from both BIAO and LC are unclear and their edge information is weak; in these cases, the edge information from LC and GCV is comparably similar to the target objects.
(2)
RE value. The RE value of each reconstructed image from BIAO is lower than those from LC and GCV, consistent with the imaging quality. LC and GCV have very similar RE values since they rely on similar mechanisms. The RE values of all images are shown below those images in Table 5. The reconstructed images show the shape and position of the objects, with varying quality. The LC reconstructions of the multi-object distribution models contain large artifacts, so only the approximate position and shape of the targets can be confirmed: in M25, the number and shape of the targets can be distinguished, while in M22, only the number of targets, not their shape, can be roughly distinguished.
(3)
HE value. The HE value of each reconstructed image differs from its RE value; some HE values from BIAO are closer to the optimal value λ* than those from LC and GCV. We conclude that the optimal value usually lies within a small interval and is not positively correlated with the distance in the measurement space, due to the nonlinear relation of f(·).
(4)
Runtime. Excluding the time to find the representative samples and build the BIAO, the runtime of BIAO is an order of magnitude less than those of LC and GCV, each of which evaluates a group of discrete values in the interval [10⁻⁸, 10⁻¹]. Thus, once a set of representative samples is available, BIAO can find the correct parameter in real time when using the TR algorithm.

5. Conclusions

A method for selecting the regularization parameter based on the best interpolation approximation is proposed in this paper. The optimality and generalization of the proposed method were demonstrated theoretically and experimentally. Once a group of representative samples has been found, the proposed method can effectively and quickly find the best regularization parameter, which is valuable in practice. Note that the proposed method applies not only to the TR algorithm discussed in this paper but can also be easily extended to other regularization methods. Based on the promising results of this work, we conclude that the proposed approximation method is reasonable and effective.
However, the method cannot significantly improve ERT quality when the representative samples are insufficient, since different noise levels and various geometric boundaries of the detection field both affect the data distribution and, in turn, the results. In addition, how to update or replace representative samples during application is an issue that requires further investigation.

Author Contributions

Methodology, S.Y.; Software, F.D.; Validation, F.D.; Writing—original draft, S.Y.; Writing—review & editing, S.Y.; Funding acquisition, S.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 61973232.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Peng, L.; Yang, Y.; Li, Y.; Zhang, M.; Wang, H.; Yang, W. Deep learning-based image reconstruction for electrical capacitance tomography. Meas. Sci. Technol. 2025, 36, 062003. [Google Scholar] [CrossRef]
  2. York, T. Status of electrical tomography in industrial applications. J. Electron. Imaging 2001, 10, 608. [Google Scholar] [CrossRef]
  3. Tan, Y.; Yue, S.; Cui, Z.; Wang, H. Measurement of Flow Velocity Using Electrical Resistance Tomography and Cross-Correlation Technique. IEEE Sens. J. 2021, 21, 20714–20721. [Google Scholar] [CrossRef]
  4. Wang, J.; Deng, J.; Liu, D. Deep prior embedding method for electrical impedance tomography. Neural Netw. 2025, 188, 107419. [Google Scholar] [CrossRef] [PubMed]
  5. Wang, Z.; Zhang, T.; Wang, Q. DELTA: Delving into high-quality reconstruction for electrical impedance tomography. IEEE Sens. J. 2025, 25, 13618–13631. [Google Scholar] [CrossRef]
  6. Sun, B.; Yue, S.; Hao, Z.; Cui, Z.; Wang, H. An Improved Tikhonov Regularization Method for Lung Cancer Monitoring Using Electrical Impedance Tomography. IEEE Sens. J. 2019, 19, 3049–3057. [Google Scholar] [CrossRef]
  7. Vauhkonen, M. Electrical Impedance Tomography and Prior Information. Ph.D. Thesis, Department of Physics, University of Kuopio, Kuopio, Finland, 1997. [Google Scholar]
  8. Antoni, J.; Idier, J.; Bourguignon, S. A Bayesian interpretation of the L-curve. Inverse Probl. 2023, 39, 065016. [Google Scholar] [CrossRef]
  9. Bellec, P.C.; Du, H.; Koriyama, T.; Patil, P.; Tan, K. Corrected generalized cross-validation for finite ensembles of penalized estimators. J. R. Stat. Soc. Ser. B Stat. Methodol. 2025, 87, 289–318. [Google Scholar] [CrossRef]
  10. Vogel, C.R. Non-convergence of the l-curve regularization parameter selection method. Inverse Probl. 1996, 12, 535. [Google Scholar] [CrossRef]
  11. Buccini, A.; Reichel, L. Generalized cross validation for lplq minimization. Numer. Algorithms 2021, 88, 1595–1616. [Google Scholar] [CrossRef]
  12. Hanke, M. Limitations of the L-curve method in ill-posed problems. BIT Numer. Math. 1996, 36, 287–301. [Google Scholar] [CrossRef]
  13. Calvetti, D.; Reichel, L.; Shuibi, A. L-curve and curvature bounds for Tikhonov regularization. Numer. Algorithms 2004, 35, 301–314. [Google Scholar] [CrossRef]
  14. Mc Carthy, P.J. Direct analytic model of the L-curve for Tikhonov regularization parameter selection. Inverse Prob. 2003, 19, 643–663. [Google Scholar] [CrossRef]
  15. Amiri-Simkooei, A.; Esmaeili, F.; Lindenbergh, R. Least squares B-spline approximation with applications to geospatial point clouds. Measurement 2025, 221, 116887. [Google Scholar] [CrossRef]
  16. Gasca, M.; Sauer, T. Polynomial interpolation in several variables. Adv. Comput. Math. 2000, 12, 377–410. [Google Scholar] [CrossRef]
  17. Berlinet, A.; Thomas-Agnan, C. Reproducing Kernel Hilbert Spaces in Probability and Statistics; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  18. Li, F.; Cui, M. A best approximation for the solution of one-dimensional variable-coefficient burgers’ equation. Numer. Methods Partial Differ. Equ. 2009, 25, 1353–1365. [Google Scholar] [CrossRef]
  19. Polydorides, N. Image Reconstruction Algorithm for Soft-Field Tomography. Ph.D. Thesis, Department of Electrical Engineering and Electronics, UMIST, Manchester, UK, 2002. [Google Scholar]
  20. Huang, P.; Bao, Z.; Guo, J. Detection of magnetic samples by electromagnetic tomography with PID controlled iterative L1 regularization method. IEEE Sens. J. 2025, 25, 15477–15488. [Google Scholar] [CrossRef]
  21. Li, J.; Yue, S.; Ding, M.; Wang, H. Choquet Integral-Based Fusion of Multiple Patterns for Improving EIT Spatial Resolution. IEEE Trans. Appl. Supercond. 2019, 29, 0603005. [Google Scholar] [CrossRef]
  22. Wang, X.; Chen, J.; Richard, C. Tuning-free plug-and-play hyperspectral image deconvolution with deep priors. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5506413. [Google Scholar] [CrossRef]
  23. Rezghi, M.; Hosseini, S.M. A new variant L-curve for Tikhonov regularization. J. Comput. Appl. Math. 2009, 231, 914–924. [Google Scholar] [CrossRef]
  24. Barbour, A.D.; Holst, L.; Janson, S. Poisson Approximation; Oxford Studies in Probability; Clarendon Press: Oxford, UK, 1992; Volume 2. [Google Scholar]
  25. Chen, L.H.Y.; Goldstein, L.; Shao, Q.-M. Normal Approximation by Stein’s Method: Probability and Its Applications; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  26. Cui, M.G.; Deng, Z.X. On the best operator of interpolation in W21. Math. Numer. Sin. 1986, 8, 209–216. [Google Scholar]
  27. Fasshauer, G.E.; Hickernell, F.J.; Ye, Q. Solving support vector machines in reproducing kernel Banach spaces with positive definite functions. Appl. Comput. Harmon. Anal. 2015, 38, 115–139. [Google Scholar] [CrossRef]
  28. Borg, I.; Groenen, P. Modern Multidimensional Scaling: Theory and Applications; Springer Series in Statistics; Springer: Berlin/Heidelberg, Germany, 2005. [Google Scholar]
  29. Paulsen, V.I.; Raghupathi, M. An Introduction to the Theory of Reproducing Kernel Hilbert Spaces; Cambridge Studies in Advanced Mathematics; Cambridge University Press: Cambridge, UK, 2016; Volume 152. [Google Scholar]
  30. Wang, Q.; Heng, Z. Near MDS codes from oval polynomials. Discrete Math. 2021, 344, 112277. [Google Scholar] [CrossRef]
  31. Girosi, F.; Jones, M.; Poggio, T. Regularization Theory and Neural Networks Architectures. Neural Comput. 1995, 7, 219–269. [Google Scholar] [CrossRef]
  32. Liu, W.K.; Jun, S.; Zhang, Y.F. Reproducing kernel particle methods. Int. J. Numer. Methods Fluids 1995, 20, 1081–1106. [Google Scholar] [CrossRef]
  33. Wang, Z.; Yue, S.; Li, Q. Unsupervised Evaluation and Optimization for Electrical Impedance Tomography. IEEE Trans. Instr. Meas. 2021, 70, 4506312. [Google Scholar] [CrossRef]
  34. Bandyopadhyay, S.; Maulik, U. An evolutionary technique based on k-means algorithm for optimal clustering in RN. Inf. Sci. 2022, 146, 221–237. [Google Scholar] [CrossRef]
  35. Yue, S.; Wu, T.; Cui, L.; Wang, H. Clustering mechanism for electric tomography imaging. Sci. China Inf. Sci. 2012, 55, 2849–2864. [Google Scholar] [CrossRef]
Figure 1. A set of ET images and their distributions by MDS map. (a) ‘Similar samples’ distribution. (b) ‘Similar samples’ measurement visualization by MDS map.
Figure 2. Determination of the representative samples by CM and DB. (a) The number of clusters determined by DB. (b) Five representative samples in the set of 40 samples.
Figure 3. The flow of the two-dimensional reproducing kernel-based interpolation approximation for the best regularization parameter. (a) Construction of the two-dimensional reproducing kernel interpolation model. (b) Obtaining the optimal parameter based on the measured values.
Figure 4. Model variations.
Figure 5. Two-dimensional sample distribution.
Figure 6. Error vs. number of clusters curve.
Table 1. The implementation steps to solve BIAO.
Input: A set of samples of ET measurements and their best parameters, {(Uk, λk)}(0≦k≦L).
Output: BIAO for f(·) from U to λ.
Method:
1. Solve {(Yk, λk)}(0≦k≦L) from {(Uk, λk)}(0≦k≦L) by the MDS map;
2. Calculate the distance between any two samples in {(Yk, λk)}(0≦k≦L) using the kernel trick;
3. Determine the number of clusters c* by DB over c ∈ [2, n^{1/2}];
4. Cluster {(Yk, λk)}(0≦k≦L) by the CM algorithm to obtain c* cluster centers v1, v2, …, vc;
5. Take v1, v2, …, vc as the representative samples;
6. Construct BIAO from v1, v2, …, vc.
Table 2. Test on the CT-based lung imaging model (CT images and reconstructed images omitted; quantitative results below).

Representative samples (V11–V16):

Method | Metric | V11    | V12       | V13       | V14       | V15       | V16
λ*     | RE     | 0.5751 | 0.6115    | 0.5642    | 0.5882    | 0.5756    | 0.6214
LC     | RE     | 0.7189 | 0.7640    | 0.5642    | 0.7329    | 0.7373    | 0.6417
LC     | HE/T   | 3/4.44 | 3/4.21    | 0/4.30    | 4/4.18    | 3/4.56    | 1/4.33
GCV    | RE     | 0.8025 | 0.8190    | 0.6906    | 0.7484    | 0.8300    | 0.6868
GCV    | HE/T   | 5/4.31 | 7/4.35    | 6/4.12    | 8/4.19    | 8/3.99    | 6/4.27
BIAO   | RE     | 0.5751 | 0.6115    | 0.5642    | 0.5882    | 0.5756    | 0.6214
BIAO   | HE/T   | 0/0.77 | 0/0.59    | 0/0.66    | 0/0.72    | 0/0.68    | 0/0.75

Testing samples (M11–M16):

Method | Metric | M11    | M12       | M13       | M14       | M15       | M16
λ*     | RE     | 0.6150 | 0.5721    | 0.5932    | 0.6096    | 0.5799    | 0.6719
LC     | RE     | 0.6554 | 0.6193    | 0.6434    | 0.7218    | 0.6009    | 0.6856
LC     | HE/T   | 2/4.05 | 1/4.39    | 2/4.18    | 4/4.35    | 2/4.21    | 2/4.23
GCV    | RE     | 0.6622 | 0.6588    | 0.6821    | 0.7370    | 0.6045    | 0.8555
GCV    | HE/T   | 6/4.12 | 6/4.28    | 6/4.61    | 8/4.45    | 5/4.38    | 9/4.31
BIAO   | RE     | 0.6125 | 0.5716    | 0.5953    | 0.6108    | 0.5771    | 0.6850
BIAO   | HE/T   | 0/0.72 | 0.30/0.77 | 0.24/0.79 | 0.57/0.70 | 1.21/0.68 | 0.04/0.73

Note: Hereafter, the sign “T” is the runtime (seconds) of each algorithm.
Table 3. Conductivity distribution of lung tissue (S/m).

Conductivity | Right Lung: Upper Lobe | Right Lung: Middle Lobe | Right Lung: Lower Lobe | Left Lung: Upper Lobe | Left Lung: Lower Lobe
Range        | 0.265~0.32             | 0.26~0.30               | 0.24~0.28              | 0.197~0.275           | 0.22~0.267
Mean         | 0.30                   | 0.28                    | 0.26                   | 0.25                  | 0.24
Table 4. The RE curves of the maximal CC, BIAO, L-curve, and GCV methods for the four models under varying noise levels (model images and curve plots omitted).
Table 5. Test on the continuously distributed models (model photographs and reconstructed images omitted; quantitative results below).

Representative samples (V21–V26):

Method | Metric | V21          | V22        | V23        | V24        | V25        | V26
λ*     | RE     | 0.3011       | 0.2412     | 0.2022     | 0.2522     | 0.2023     | 0.2340
LC     | RE     | 0.3549       | 0.2870     | 0.2289     | 0.3033     | 0.2582     | 0.3023
LC     | HE/T   | 1/0.15       | 2/0.22     | 1/0.18     | 2/0.19     | 2/0.21     | 2/0.20
GCV    | RE     | 0.4249       | 0.3532     | 0.2822     | 0.3522     | 0.3028     | 0.3840
GCV    | HE/T   | 2/0.12       | 3/0.09     | 3/0.10     | 4/0.14     | 3/0.12     | 3/0.14
BIAO   | RE     | 0.3011       | 0.2412     | 0.2022     | 0.2522     | 0.2023     | 0.2340
BIAO   | HE/T   | 0/0.058      | 0/0.060    | 0/0.053    | 0/0.062    | 0/0.055    | 0/0.071

Testing samples (M21–M26):

Method | Metric | M21          | M22        | M23        | M24        | M25        | M26
λ*     | RE     | 0.1836       | 0.2072     | 0.1980     | 0.2238     | 0.2122     | 0.2123
LC     | RE     | 0.2304       | 0.2802     | 0.2502     | 0.2809     | 0.2333     | 0.2345
LC     | HE/T   | 2/0.21       | 2/0.19     | 2/0.18     | 2/0.17     | 1/0.15     | 1/0.16
GCV    | RE     | 0.2702       | 0.3230     | 0.3201     | 0.3149     | 0.3182     | 0.2823
GCV    | HE/T   | 3/0.11       | 4/0.10     | 5/0.12     | 3/0.16     | 3/0.12     | 3/0.13
BIAO   | RE     | 0.2102       | 0.2102     | 0.1980     | 0.2308     | 0.2233     | 0.2123
BIAO   | HE/T   | 0.4304/0.059 | 0.49/0.060 | 0.15/0.061 | 1.17/0.056 | 1.07/0.058 | 0.10/0.061

