Article

Solving Fredholm Integral Equations of the First Kind Using a Gaussian Process Model Based on Sequential Design

School of Computer and Information Engineering, Guizhou University of Commerce, Guiyang 550014, China
*
Author to whom correspondence should be addressed.
Mathematics 2025, 13(15), 2407; https://doi.org/10.3390/math13152407
Submission received: 3 July 2025 / Revised: 24 July 2025 / Accepted: 24 July 2025 / Published: 26 July 2025

Abstract

In this study, a Gaussian process model is utilized to study Fredholm integral equations of the first kind (FIEFKs). Based on the $H$–$H_k$ formulation, two cases of FIEFKs are considered with respect to the right-hand term: discrete data and analytical expressions. In the former case, explicit approximate solutions with minimum norm are obtained via a Gaussian process model. In the latter case, exact solutions with minimum norm are given in operator form, which can also be evaluated numerically via Gaussian process interpolation. The interpolation points are selected sequentially by minimizing the posterior variance of the right-hand term, i.e., minimizing the maximum uncertainty. Compared with uniform interpolation points, the approximate solutions converge faster at sequential points. In particular, for solvable degenerate kernel equations, the exact solutions with minimum norm can be easily obtained using the proposed sequential method. Finally, the efficacy and feasibility of the proposed method are demonstrated through illustrative examples.

1. Introduction

In recent years, mathematical models have been extensively employed to solve complex systems. Many inverse problems in these systems can be reduced to solving specific FIEFKs, such as image restoration [1,2], the backward heat conduction problem [3,4,5], nuclear magnetic resonance [6], and so forth. Additionally, some mathematical inverse problems—such as numerical differentiation, the numerical inversion of integral transformations, and numerical solutions of differential equations—can be translated into FIEFKs.
The solvability and construction of FIEFKs have been considered in [7,8]. However, this type of equation has a significant property, referred to as ill-posedness, in contrast to well-posedness as defined by Jacques Hadamard [9] in the 1920s. Because exact solutions for such equations are difficult to obtain, researchers often resort to numerical methods. Numerically, ill-posedness manifests as a relatively large condition number of the coefficient matrix in the resulting linear system. It is well known that this condition number can be effectively controlled through regularization, which has given rise to various regularization methods [10,11,12,13,14,15,16,17]. Among them, finite-dimensional regularized approximation methods [15,16,17] are formulated within specific subspaces. These subspaces are typically obtained in two ways: by selecting basis functions to perform an orthogonal decomposition of the space or by meshing the domain of definition. In addition, other finite-rank approximate projection methods [18,19] have been developed based on these subspaces, yielding least squares solutions with minimum norm through operator inversion. In recent years, machine learning methods, such as support vector regression [20,21], have also been used to study Fredholm integral equations.
In addition to regularization methods, wavelet techniques are also used to solve FIEFKs. S. Yousefi and A. Banifatemi [22] proposed a method for estimating the coefficients of numerical solutions by means of the orthogonal properties of CAS wavelets. The wavelet–Galerkin method proposed by K. Maleknejad et al. [23] can transform FIEFKs into linear algebraic equations that possess sparse coefficient matrices. F. Fattahzadeh [24] proposed a numerical direct method for two-dimensional linear and nonlinear FIEFKs via the Haar wavelet; notably, this method does not require numerical integration. In addition, block-pulse functions are popular. For example, Z. Masouri et al. [17,25] proposed a regularization–direct method to overcome the ill-posedness of integral equations based on vector forms of block-pulse functions without employing any projection techniques. K. Maleknejad and E. Saeedipoor [26] enhanced block-pulse functions and proposed a numerical direct method based on orthogonal hybrid block-pulse functions and Legendre polynomials. Furthermore, numerous other methods have been developed for solving integral equations, for example, iteration methods [2], Taylor expansion methods [27], and maximum entropy regularization [28]; additional methods can be found in [29].
In practice, most methods pursue numerical solutions by transforming integral equations into linear algebraic equations whose condition numbers are controlled by certain regularization parameters. In this paper, we consider FIEFKs of the form
$$ \int_E h(x,t)\, f(t)\, dt = g(x), \quad x \in D. \tag{1} $$
For convenience, the above equation can be rewritten as
$$ Lf = g, \tag{2} $$
where the linear operator $L$ is defined as
$$ (Lf)(x) := \langle f, h_x \rangle_H = \int_E h(x,t)\, f(t)\, dt, \tag{3} $$
and its adjoint operator is defined as
$$ (L^* g)(t) := \langle g, h_t^* \rangle_{L^2(D)} = \int_D h(x,t)\, g(x)\, dx. \tag{4} $$
Here, $h(x,t)$ is a known continuous function on $D \times E$, and the unknown function $f(t)$ lies in $H$, which is either the square-integrable space $L^2(E)$ or a reproducing kernel Hilbert space (RKHS) $H_Q$.
It is common for the right-hand term $g(x)$ to be unknown while discrete data on it are given, such as
$$ y_i = g(x_i) + \varepsilon_i, \quad 1 \le i \le n. $$
Assume that the noise $\varepsilon_i$ is bounded, i.e.,
$$ \frac{1}{n} \sum_{i=1}^n \varepsilon_i^2 \le \delta, $$
where $\delta$ is the noise level. Under this assumption, one considers the regularized minimization problem
$$ \min_{f \in H}\; \frac{1}{n} \sum_{i=1}^n \big( L(f)(x_i) - y_i \big)^2 + \lambda \| f \|_H^2, $$
which has been widely studied in [10,11,12], where the norm $\| \cdot \|_H$ is taken in the solution space $H$ ($L^2(E)$ or the RKHS $H_Q$). This problem has a unique solution:
$$ f_{n,\lambda}(t) = \big( \eta_{x_1}(t), \ldots, \eta_{x_n}(t) \big) (K_{XX} + n\lambda I_n)^{-1} Y, \tag{5} $$
where $X = (x_1, \ldots, x_n)^T$, $Y = (y_1, \ldots, y_n)^T$,
$$ \eta_x(t) = \begin{cases} h_x(t), & H = L^2(E), \\ \int_E h_x(s)\, Q(t,s)\, ds, & H = H_Q, \end{cases} \tag{6} $$
and the $ij$-th entry $k(x_i, x_j)$ of the matrix $K_{XX}$ is given by
$$ k(x,x') = \begin{cases} \int_E \int_E h_x(t)\, h_{x'}(s)\, Q(s,t)\, ds\, dt, & H = H_Q, \\ \int_E h_x(t)\, h_{x'}(t)\, dt, & H = L^2(E). \end{cases} $$
However, in practical problems it may be difficult to determine the noise level $\delta$. In that case, one instead assumes that the $\varepsilon_i \sim N(0, \sigma^2)$ are i.i.d., which leads to the following solution based on Gaussian process regression (GPR) [30,31,32]:
$$ \bar f_n(t) = \big( \eta_{x_1}(t), \ldots, \eta_{x_n}(t) \big) (K_{XX} + \sigma^2 I_n)^{-1} Y. \tag{7} $$
Notice that Equations (5) and (7) are based on the $H$–$H_k$ formulation, which was suggested by Qian [33] and was used to study Equation (1) by R. Qiu et al. [34,35,36]. More specifically, this formulation transforms the range space $R(L)$ of $L$ (a non-closed subspace of $L^2(D)$) into an RKHS, which is closed under its own topology; that is, it turns ill-posed problems into well-posed problems in the mathematical sense. Based on this formulation, the results in [34,35] are extended in this paper using GPR. Moreover, an adaptive RKHS regularization with additive Gaussian noise was proposed in [37], which is also associated with the $H$–$H_k$ formulation.
Beyond this, we are more concerned with an essential question about Equation (1): when explicit analytical expressions for $h(x,t)$ and $g(x)$ are available, can the exact solution be obtained accordingly? Exact solutions with minimum norm are rarely found, except in [10], where the analytical solution in operator form is presented. Based on the $H$–$H_k$ formulation, we also present exact solutions in $L^2(D)$ in operator form, regardless of whether $H = H_Q$ or $L^2(E)$, in Theorems 1 and 2. We further analyze the potential challenges in computing these solutions. For this reason, we provide an approximate solution with minimum norm based on Gaussian process interpolation:
$$ \bar f_n(t) = \big( \eta_{x_1}(t), \ldots, \eta_{x_n}(t) \big)\, K_{XX}^{\dagger}\, Y. \tag{8} $$
Herein, $K_{XX}^{\dagger}$ is the Moore–Penrose pseudoinverse of $K_{XX}$, and its minimum positive eigenvalue acts as a regularization parameter (see [38,39]). Our proposed method sequentially selects interpolation points by minimizing the posterior variance $V[\bar g(x)]$ of $g(x)$, defined in Equation (27); that is, each new point is placed at the maximum uncertainty of $\bar g(x)$, the Gaussian process interpolation of $g(x)$ in Equation (1). Compared with uniform points, the solution in Equation (8) generally approximates better at sequential points. For convenience, we refer to this method as sequential Gaussian process interpolation (SGPI).
The remainder of this paper is structured as follows. We introduce the $H$–$H_k$ formulation and present the exact solutions in operator form for a given $g(x)$ in Section 2. In Section 3, we utilize a Gaussian process model to solve FIEFKs and present our strategy for selecting the sequential points of the interpolation model. In Section 4, we demonstrate that the exact solutions of degenerate kernel equations can be found using the SGPI method for any initial point. Several examples illustrate the efficacy and feasibility of the SGPI method in Section 5. Finally, we present a brief conclusion in Section 6.

2. Minimal-Norm Solution

2.1. Moore–Penrose Pseudoinverse

Let $N(L)$ and $R(L)$ denote the null space and range space of $L$, respectively. It can be shown that $N(L)$ is a closed linear subspace of $H$, whereas $R(L)$ is a non-closed subspace of $L^2(D)$ unless $R(L)$ is finite-dimensional.
Definition 1.
For $g(x) \in L^2(D)$, a function $f \in H$ is called an ordinary least squares (OLS) solution of Equation (2) if
$$ \| Lf - g \|_{L^2(D)} = \inf \big\{ \| Lv - g \|_{L^2(D)} : v \in H \big\}. $$
A function $u \in H$ is called the minimal-norm solution (or the best approximate solution), denoted by $f^{\dagger}$, if $\| u \|_H = \inf_{f \in S} \| f \|_H$, where $S$ denotes the set of all OLS solutions. Additionally, $L^{\dagger}$ is called the Moore–Penrose pseudoinverse of $L$, defined as in Equation (3), if $L^{\dagger} g = f^{\dagger}$.
In general, $L^{\dagger}$ is a closed operator on $D(L^{\dagger}) := R(L) \oplus R(L)^{\perp} \subset L^2(D)$. A focal point that warrants attention is that $D(L^{\dagger}) = L^2(D)$ if and only if the subspace $R(L)$ is closed in $L^2(D)$ [19]. Moreover, there exists the orthogonal decomposition $H = N(L) \oplus N(L)^{\perp}$; thus,
$$ f^{\dagger} \in N(L)^{\perp}, \qquad S = f^{\dagger} + N(L). $$
The null space of a special integral equation has been studied in [40], and the uniqueness of solutions is considered in [41,42]. In fact, the solution $f^{\dagger}$ is also unique in the sense of the Moore–Penrose pseudoinverse (see [10,38,43,44,45]).

2.2. $H$–$H_k$ Formulation

For the linear integral operator $L$ defined in Equation (3), it is known that $L$ is a compact linear operator from $H$ to $L^2(D)$ and that $R(L)$ is a non-closed subspace of $L^2(D)$. This means that Equation (1) is ill-posed. Nevertheless, $R(L)$ can be transformed into an RKHS $H_k$, whose norm is endowed with a reproducing kernel topology, through the $H$–$H_k$ formulation. Henceforth, $R(L)$ is denoted as $H_k$. Based on this formulation, Equation (1) is transformed, in the mathematical sense, into a well-posed equation.
For any positive definite kernel $k$, by the Moore–Aronszajn theorem [46], there exists a unique RKHS $H_k$ whose reproducing kernel equals $k$. This means that there is a one-to-one correspondence between RKHSs and positive definite kernels. More properties of RKHSs and their applications are summarized in [1].
For any $g_1 = L(f_1)$ and $g_2 = L(f_2)$ in $H_k$, the inner product is defined as
$$ \langle g_1, g_2 \rangle_{H_k} := \big\langle P_{N(L)^{\perp}} f_1,\; P_{N(L)^{\perp}} f_2 \big\rangle_H, \tag{9} $$
and
$$ k(x, x') := \langle h_x, h_{x'} \rangle_H, \tag{10} $$
where $P_{N(L)^{\perp}}$ denotes the orthogonal projection onto $N(L)^{\perp}$.
Hereafter, Equations (9) and (10), which encode the following isometric isomorphism, are referred to as the $H$–$H_k$ formulation according to [33], and $H_k$ represents the canonical range space. Based on Equations (9) and (10), we obtain the following lemma.
Lemma 1
(Lemma 1 and Proposition 2 in [35]). Let $L$ be defined as in Equation (3); then,
(1) If $H = L^2(E)$, then $N(L)^{\perp} = \overline{\mathrm{span}}\{ h_x : x \in D \}$;
(2) If $H = H_Q$, then $N(L)^{\perp} = \overline{\mathrm{span}}\{ \eta_x : x \in D \}$, where $\eta_x(t)$ is defined as in Equation (6);
(3) $R(L)$ is an RKHS.
Apart from this, there exist isometric isomorphisms between $N(L)^{\perp}$ and $H_k$, which yield the following correspondences:
$$ N(L)^{\perp} \longrightarrow H_k, \tag{11} $$
$$ h_x(t),\ \eta_x(t) \longmapsto k_x(x'), \tag{12} $$
$$ P_{N(L)^{\perp}} f(t) \longmapsto g(x), \tag{13} $$
$$ P_{N(L)^{\perp}} Q_t(t') \longmapsto \eta_t^*(x), \tag{14} $$
where $a \mapsto b$ means $La = b$ and $a \in N(L)^{\perp}$. The definitions of $\eta_t^*$ and $\eta_x$ can be found in Equations (4) and (6); herein, $f(t)$ and $g(x)$ satisfy Equation (1).

2.3. Minimal-Norm Solution in $H_Q$

Let $H$ be an RKHS $H_Q$, and let the continuous function $g(x) \in H_k$ be given, so that Equation (1) is solvable. In this case, the following conclusion was obtained by the authors of [10].
Theorem 1
(Theorem 4.1 in [10]). Let $L$ satisfy Equation (14) and let the continuous function $g(x) \in H_k$; then,
$$ f^{\dagger}(t) = \langle \eta_t^*, g \rangle_{H_k}, \quad t \in E. \tag{15} $$

2.4. Minimal-Norm Solution in $L^2(E)$

In this subsection, we assume that $H = L^2(E)$. Generally speaking, $L^{\dagger}$ is an unbounded operator from $R(L)$ (with the topology of $L^2(D)$) to $L^2(E)$, unless $h(x,t)$ in Equation (1) is a degenerate integral kernel. However, based on the $H$–$H_k$ formulation, we can draw the following conclusion.
Lemma 2.
Let $H = L^2(E)$ and $R(L) = H_k$; then, $L^{\dagger}$ is bounded.
Proof. 
Let $g \in R(L)$ ($g \neq 0$) and $Lf = g$; then,
$$ \| L^{\dagger} \| = \sup_{g \neq 0} \frac{\| L^{\dagger} g \|_{L^2(E)}}{\| g \|_{H_k}} = \sup_{g \neq 0} \frac{\| P_{N(L)^{\perp}} f \|_{L^2(E)}}{\| g \|_{H_k}} = \sup_{g \neq 0} \frac{\| P_{N(L)^{\perp}} f \|_{L^2(E)}}{\| P_{N(L)^{\perp}} f \|_{L^2(E)}} = 1. $$
Hence, $L^{\dagger}$ is bounded. □
Remark 1.
Based on the $H$–$H_k$ formulation, whether the integral kernel of Equation (1) is degenerate or not, $R(L)$ with the $H_k$-topology is always closed. Hence, solving Equation (1) is mathematically a well-conditioned problem.
Theorem 2.
Let $H = L^2(E)$ and $g(x), h_t^* \in R(L)$ for fixed $t \in E$; then,
$$ f^{\dagger}(t) = \langle g, h_t^* \rangle_{H_k}. \tag{16} $$
Proof. 
Based on Equation (10),
$$ g(x) = \langle g(y), k_x(y) \rangle_{H_k} = \big\langle g(y), \langle h_x, h_y \rangle_H \big\rangle_{H_k} = \big\langle h_x, \langle g(y), h_y \rangle_{H_k} \big\rangle_H. $$
By Equation (3), this shows that $\langle g(y), h_y \rangle_{H_k}$ is a solution of Equation (2). Since
$$ h_t^*(y) = h_y(t), \qquad \langle g, h_t^* \rangle_{H_k} \in N(L)^{\perp}, $$
Equation (16) is proven. □
In this section, we have presented two operator-form expressions for $f^{\dagger}$, namely the solutions in Equations (15) and (16), regardless of whether $H = H_Q$ or $L^2(E)$. Among them, Equation (15) also depends on the RKHS $H_Q$, which can usually be chosen as an absolutely continuous function space, for example, $W_2^1[a,b]$ in [4] and $W$ in [38]. These operator-form solutions may sometimes be calculated analytically. However, some of them can only be computed numerically, which is implemented through the Gaussian process model discussed below.

3. Gaussian Process Model

For a Gaussian process (GP), the positive definite kernel involved is often called the covariance kernel, as it represents the covariance function of the underlying random variables. Accordingly, every RKHS has an associated covariance function (its reproducing kernel).
Definition 2.
Let $m: D \to \mathbb{R}$ be a real-valued function and $k: D \times D \to \mathbb{R}$ be a positive definite kernel; then, $g: D \to \mathbb{R}$ is called a Gaussian process if $g_X = (g(x_1), g(x_2), \ldots, g(x_n))^T$ follows a joint Gaussian distribution for any $X = (x_1, \ldots, x_n)^T$, where $x_i \in D$, $1 \le i \le n$, and $n \in \mathbb{N}$.
Usually, a Gaussian process is written as
$$ g \sim \mathcal{GP}(m, k), \tag{17} $$
where $m(x) = E[g(x)]$ and $k(x, x') = E[(g(x) - m(x))(g(x') - m(x'))]$.
The above definition is taken from Ref. [31], which surveys many applications of Gaussian processes in machine learning. Furthermore, connections and equivalences between Gaussian processes and kernel methods are established in [32]; for example, the estimator of kernel ridge regression (KRR) coincides with the posterior mean of GPR. Inspired by this equivalence, we employ the Gaussian process model (regression and interpolation) to study the numerical approximation of $g(x)$ in Equation (1) within $R(L)$ and then obtain numerical forms of the solutions in Equations (15) and (16).

3.1. Gaussian Process Regression

To distinguish it from the vector $X = (x_1, \ldots, x_n)^T$, the set $X_n := \{x_1, \ldots, x_n\}$ is customarily referred to as an experimental design on $D$. We assume that the observed values $y_i$ and the function values $g(x_i)$ satisfy
$$ y_i = g(x_i) + \varepsilon_i, \quad x_i \in X_n, \tag{18} $$
where the $\varepsilon_i \sim N(0, \sigma^2)$ are independent and identically distributed. In the Gaussian process of Equation (17), without loss of generality, we let the mean function $m(x) \equiv 0$ throughout this paper, i.e., $g \sim \mathcal{GP}(0, k)$. Based on the $H$–$H_k$ formulation, $k(x, x') = \langle h_x, h_{x'} \rangle_H$, and the posterior mean and variance at $x$ are
$$ \bar g_n(x) := E[g(x) \mid X, Y] = K_{xX}^T (K_{XX} + \sigma^2 I_n)^{-1} Y, \tag{19} $$
$$ V[\bar g_n(x)] := V[g(x) \mid X, Y] = k(x,x) - K_{xX}^T (K_{XX} + \sigma^2 I_n)^{-1} K_{xX}, \tag{20} $$
where $Y = (y_1, \ldots, y_n)^T$, $K_{xX} = (k(x_1, x), \ldots, k(x_n, x))^T$, $K_{XX} = (K_{x_1 X}, \ldots, K_{x_n X})$ is the covariance matrix, and $\bar g_n(x)$ is the Gaussian process regression of $g(x)$.
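As a concrete illustration (not from the paper), Equations (19) and (20) take only a few lines to implement. The sketch below assumes $H = L^2(E)$ with $E = D = [0,1]$ and the illustrative kernel $h_x(t) = e^{xt}$, for which $k(x,x') = \langle h_x, h_{x'} \rangle_{L^2(E)} = (e^{x+x'} - 1)/(x + x')$; the design, noise level, and test function $g = k(\cdot, 1)$ are arbitrary choices made for this demonstration.

```python
import numpy as np

def k(x, xp):
    # k(x,x') = <h_x, h_{x'}>_{L^2[0,1]} for h_x(t) = exp(x t):
    # integral_0^1 exp((x+x')t) dt = (exp(x+x') - 1)/(x+x'), with k = 1 when x+x' = 0.
    s = np.asarray(x + xp, dtype=float)
    safe = np.where(np.abs(s) < 1e-12, 1.0, s)
    return np.where(np.abs(s) < 1e-12, 1.0, (np.exp(s) - 1.0) / safe)

def gpr_posterior(X, Y, xs, sigma2):
    """Posterior mean (Eq. 19) and variance (Eq. 20) of g ~ GP(0, k) given noisy data."""
    KXX = k(X[:, None], X[None, :]) + sigma2 * np.eye(len(X))
    KxX = k(xs[:, None], X[None, :])
    mean = KxX @ np.linalg.solve(KXX, Y)
    var = k(xs, xs) - np.sum(KxX * np.linalg.solve(KXX, KxX.T).T, axis=1)
    return mean, var

g = lambda x: (np.exp(x + 1.0) - 1.0) / (x + 1.0)   # g = k(., 1) lies in H_k
rng = np.random.default_rng(0)
X = np.linspace(0.1, 1.0, 6)
Y = g(X) + 1e-3 * rng.standard_normal(X.size)       # y_i = g(x_i) + eps_i (Eq. 18)
xs = np.linspace(0.0, 1.0, 5)
mean, var = gpr_posterior(X, Y, xs, sigma2=1e-6)
```

With only six noisy observations, the posterior mean already tracks $g$ closely, and the posterior variance is largest away from the design points.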
For certain complex reproducing kernels, deriving analytical expressions for the solutions in Equations (15) and (16) may be impractical, in which case they can be approximated separately by the minimal-norm solution of the following equation:
$$ Uf = \bar w, \tag{21} $$
where $U = L^* L$, $\bar w = L^* \bar g_n$, and $\bar g_n$ is defined as in Equation (19) for $H = L^2(E)$ or $H = H_Q$. Likewise, let $\bar f_{D,n} := U^{\dagger} \bar w$. Its analytical expression is provided in Proposition 1, which extends the results of [35].
Proposition 1.
Let $z_i = w(t_i) + \varepsilon_i$, $t_i \in E$, $\varepsilon_i \sim N(0, \sigma^2)$, $i = 1, \ldots, n$.
If $H = H_Q$ and $g \in D(L^{\dagger})$, then
$$ \bar f_{D,n}(t) = \big( (U Q_{t_1})(t), \ldots, (U Q_{t_n})(t) \big) (P + \sigma^2 I_n)^{-1} Z. \tag{22} $$
If $H = L^2(E)$ and $g \in D(L^{\dagger})$, then
$$ \bar f_{D,n}(t) = \big( (L^* h_{t_1}^*)(t), \ldots, (L^* h_{t_n}^*)(t) \big) (K + \sigma^2 I_n)^{-1} Z, \tag{23} $$
where
$$ Z = (z_1, \ldots, z_n)^T, \quad P(i,j) = \langle U Q_{t_i}, U Q_{t_j} \rangle_{H_Q}, \quad K(i,j) = \langle L^* h_{t_i}^*, L^* h_{t_j}^* \rangle_{L^2(D)}. $$
Furthermore, with probability 1, we have
$$ \lim_{n \to \infty} \lim_{\sigma \to 0} \| \bar f_{D,n} - f^{\dagger} \|_H = 0. \tag{24} $$
Proof. 
This is a repetition of Theorems 1 and 2 in [35] with $L$ replaced by $U$, $g$ by $w$, and $k$ by $k_U$. □
Meanwhile, it is important to emphasize the role of $\sigma^2$, which functions as a double-edged sword. On the one hand, the ill-conditioning of $K_{XX}$ worsens as $n$ increases; adding the variance $\sigma^2$ to the diagonal of $K_{XX}$ effectively improves its stability. In this case, $\sigma^2$ is referred to as a "nugget" and acts as a regularization parameter. On the other hand, if the problem at hand is itself an interpolation model (for example, given certain points on $g(x)$ of Equation (1), determining its optimal interpolation function), then $\sigma^2$ turns the problem into a non-interpolation model. Therefore, it is necessary to discuss the interpolation model separately. In that case, the minimum positive eigenvalue of $K_{XX}$ acts as a regularization parameter, as in truncated eigendecomposition (TEIG), discussed in [38], and kriging with pseudoinverse regularization (PI kriging), proposed in [39].

3.2. Gaussian Process Interpolation

In this subsection, we focus on the noise-free scenario (interpolation). Suppose that the interpolation points $\{(x_i, y_i)\}_{i=1}^n$ satisfy
$$ y_i = g(x_i), \quad x_i \in X_n. \tag{25} $$
With the help of the Moore–Penrose pseudoinverse, the regression model in Equation (18) can be transformed into the interpolation model in Equation (25), as shown in the following Lemma 3, which can be found in [47,48].
Lemma 3
(Theorem 3.4 in [48]). Let $g \sim \mathcal{GP}(0, k)$ satisfy Equation (25); then,
$$ \bar g_n(x) = K_{xX}^T K_{XX}^{\dagger} Y, \tag{26} $$
$$ V[\bar g_n(x)] = k(x,x) - K_{xX}^T K_{XX}^{\dagger} K_{xX}. \tag{27} $$
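A direct numerical transcription of Lemma 3 with hypothetical data, assuming the illustrative kernel $k(x,x') = (e^{x+x'} - 1)/(x + x')$ induced by $h_x(t) = e^{xt}$ on $[0,1]$: because Equations (26) and (27) use the Moore–Penrose pseudoinverse, they remain well-defined even when $K_{XX}$ is singular, e.g., when a design point is duplicated.

```python
import numpy as np

def k(x, xp):
    # Illustrative kernel k(x,x') = (exp(x+x') - 1)/(x+x') on [0,1] (k = 1 at x+x' = 0).
    s = np.asarray(x + xp, dtype=float)
    safe = np.where(np.abs(s) < 1e-12, 1.0, s)
    return np.where(np.abs(s) < 1e-12, 1.0, (np.exp(s) - 1.0) / safe)

def gp_interp(X, Y, xs, rcond=1e-10):
    """Noise-free posterior mean (Eq. 26) and variance (Eq. 27) via the pseudoinverse."""
    Kdag = np.linalg.pinv(k(X[:, None], X[None, :]), rcond=rcond)
    KxX = k(xs[:, None], X[None, :])
    mean = KxX @ Kdag @ Y
    var = k(xs, xs) - np.sum((KxX @ Kdag) * KxX, axis=1)
    return mean, var

g = lambda x: (np.exp(x + 1.0) - 1.0) / (x + 1.0)   # g = k(., 1) is in H_k
X = np.array([0.25, 0.5, 0.5, 1.0])                 # duplicated point: K_XX is singular
mean, var = gp_interp(X, g(X), X)                   # evaluate back at the design
```

At the design points the posterior mean reproduces the data and the posterior variance vanishes, exactly as Equation (27) predicts.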
Firstly, let $g(x) \in R(L)$, and let $\bar f_n$ be the posterior mean of $f^{\dagger}$. Then, $\bar f_n = L^{\dagger} \bar g_n$; i.e., it is also the minimal-norm solution of the following equation:
$$ Lf = \bar g_n. \tag{28} $$
In order to describe the approximation error between $\bar f_n$ and $f^{\dagger}$, we also need to introduce the fill distance, defined as
$$ h_{X_n} := \sup_{x \in D} \inf_{x_i \in X_n} \| x - x_i \|_2, \tag{29} $$
for a given $X_n = \{x_1, \ldots, x_n\}$. Obviously, if $\lim_{n \to \infty} h_{X_n} = 0$, then $X_n$ becomes dense in $D$.
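The fill distance in Equation (29) is straightforward to approximate on a dense grid of $D$; the two-point design below is a hypothetical example on $D = [0,1]$.

```python
import numpy as np

def fill_distance(X, grid):
    """h_{X_n} = sup_{x in D} inf_i ||x - x_i||_2 (Eq. 29), approximated over a 1-D grid."""
    dists = np.abs(np.asarray(grid)[:, None] - np.asarray(X)[None, :])
    return dists.min(axis=1).max()   # nearest design point per grid point, then worst case

grid = np.linspace(0.0, 1.0, 10001)
h = fill_distance([0.25, 0.75], grid)   # farthest points of [0,1] lie at 0, 0.5, and 1
```

For this design the fill distance is $0.25$, attained at both endpoints and at the midpoint of the interval.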
Theorem 3.
Let $g \sim \mathcal{GP}(0, k)$ and $g(x) \in H_k$ satisfy Equation (25).
If $H = L^2(E)$, then
$$ \bar f_n(t) = H_{tX}^T K_{XX}^{\dagger} Y, \tag{30} $$
$$ V[\bar f_n(t)] = \langle h_t^*, h_t^* \rangle_{H_k} - H_{tX}^T K_{XX}^{\dagger} H_{tX}, \tag{31} $$
where $H_{tX}^T = (h_{x_1}(t), \ldots, h_{x_n}(t))$.
If $H = H_Q$, then
$$ \bar f_n(t) = \eta_{tX}^T K_{XX}^{\dagger} Y, \tag{32} $$
$$ V[\bar f_n(t)] = \langle \eta_t^*, \eta_t^* \rangle_{H_k} - \eta_{tX}^T K_{XX}^{\dagger} \eta_{tX}, \tag{33} $$
where $\eta_{tX}^T = (\eta_{x_1}(t), \ldots, \eta_{x_n}(t))$ and $\eta_t^*(x) = \eta(x,t) = \eta_x(t)$.
If $\lim_{h_{X_n} \to 0} V[\bar g_n(x)] = 0$, then
$$ \lim_{h_{X_n} \to 0} \| \bar f_n - f^{\dagger} \|_H = 0. \tag{34} $$
Proof. 
We only need to prove Equations (30) and (31); similar arguments establish Equations (32) and (33). Based on Equation (12), we have
$$ L h_x = k_x, \qquad L H_{tX}^T = K_{xX}^T. $$
Therefore, in view of Equation (26) in Lemma 3, we have
$$ \bar f_n(t) = L^{\dagger} \bar g_n(x) = H_{tX}^T K_{XX}^{\dagger} Y. $$
Since $g \sim \mathcal{GP}(0, k)$, then
$$ E[g\, g^T] = k. $$
In accordance with Lemma 2, we obtain $E[f^{\dagger}] = E[L^{\dagger} g] = L^{\dagger} E[g] = 0$. Moreover,
$$ E\big[ [(L^{\dagger} g)(t)]\,[(L^{\dagger} g)(t)]^T \big] = L_x^{\dagger}\, k(x, x')\, (L_{x'}^{\dagger})^T, $$
$$ f^{\dagger}(t) \sim \mathcal{GP}\big( 0,\; L_x^{\dagger}\, k(x, x')\, (L_{x'}^{\dagger})^T \big). $$
Based on $E[Y (L^{\dagger} g)^T] = E[Y g^T](L_{x'}^{\dagger})^T = K_{x'X} (L_{x'}^{\dagger})^T$,
$$ \begin{pmatrix} Y \\ (L^{\dagger} g)(t) \end{pmatrix} \sim N\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix},\; \begin{pmatrix} K_{XX} & K_{x'X}(L_{x'}^{\dagger})^T \\ L_x^{\dagger} K_{xX}^T & L_x^{\dagger}\, k(x,x')\,(L_{x'}^{\dagger})^T \end{pmatrix} \right) \Bigg|_{x' = x}, $$
and, therefore,
$$ V[\bar f_n(t)] = \Big( L_x^{\dagger}\, k(x,x')\, (L_{x'}^{\dagger})^T - L_x^{\dagger} K_{xX}^T K_{XX}^{\dagger} K_{x'X} (L_{x'}^{\dagger})^T \Big) \Big|_{x'=x}. $$
On account of $k(x, x') = \langle k_x, k_{x'} \rangle_{H_k}$ and Lemma 2, we have
$$ L_x^{\dagger}\, k(x,x')\, (L_{x'}^{\dagger})^T = \big\langle L_x^{\dagger} k_x,\; k_{x'} (L_{x'}^{\dagger})^T \big\rangle_{H_k} = \langle h_t^*, h_t^* \rangle_{H_k}, $$
$$ L_x^{\dagger} K_{xX}^T K_{XX}^{\dagger} K_{x'X} (L_{x'}^{\dagger})^T = H_{tX}^T K_{XX}^{\dagger} H_{tX}. $$
Consequently,
$$ V[\bar f_n(t)] = \langle h_t^*, h_t^* \rangle_{H_k} - H_{tX}^T K_{XX}^{\dagger} H_{tX}. $$
Next, we show Equation (34). Based on the $H$–$H_k$ formulation, we get
$$ \| \bar f_n - f^{\dagger} \|_H = \| K_{xX}^T K_{XX}^{\dagger} Y - g \|_{H_k}, \tag{36} $$
and then we simply need to show that
$$ \lim_{h_{X_n} \to 0} \| K_{xX}^T K_{XX}^{\dagger} Y - g \|_{H_k} = 0. \tag{35} $$
For any $c_1, \ldots, c_n$, we have
$$ \sup_{\|m\|_{H_k} \le 1} \sum_{i=1}^n c_i m(x_i) = \sup_{\|m\|_{H_k} \le 1} \Big\langle \sum_{i=1}^n c_i k(\cdot, x_i),\; m \Big\rangle_{H_k}. $$
According to the Cauchy–Schwarz inequality, we get
$$ \sup_{\|m\|_{H_k} \le 1} \Big\langle \sum_{i=1}^n c_i k(\cdot, x_i),\; m \Big\rangle_{H_k} \le \sup_{\|m\|_{H_k} \le 1} \Big\| \sum_{i=1}^n c_i k(\cdot, x_i) \Big\|_{H_k} \| m \|_{H_k} = \Big\| \sum_{i=1}^n c_i k(\cdot, x_i) \Big\|_{H_k}. $$
Let $u := \sum_{i=1}^n c_i k(\cdot, x_i) \big/ \big\| \sum_{i=1}^n c_i k(\cdot, x_i) \big\|_{H_k}$; then, $\| u \|_{H_k} = 1$ and, further,
$$ \sup_{\|m\|_{H_k} \le 1} \Big\langle \sum_{i=1}^n c_i k(\cdot, x_i),\; m \Big\rangle_{H_k} \ge \Big\langle \sum_{i=1}^n c_i k(\cdot, x_i),\; u \Big\rangle_{H_k} = \Big\| \sum_{i=1}^n c_i k(\cdot, x_i) \Big\|_{H_k}. $$
That is,
$$ \sup_{\|m\|_{H_k} \le 1} \sum_{i=1}^n c_i m(x_i) = \Big\| \sum_{i=1}^n c_i k(\cdot, x_i) \Big\|_{H_k}. $$
We set $c_i = (K_{xX}^T K_{XX}^{\dagger})_i$ and apply the above equality to the functional $m \mapsto m(x) - \sum_{i=1}^n c_i m(x_i)$, yielding
$$ \sup_{\|m\|_{H_k} \le 1} \Big( m(x) - \sum_{i=1}^n c_i m(x_i) \Big) = \Big\| k(\cdot, x) - \sum_{i=1}^n c_i k(\cdot, x_i) \Big\|_{H_k}, $$
$$ \Big\| k(\cdot, x) - \sum_{i=1}^n c_i k(\cdot, x_i) \Big\|_{H_k}^2 = k(x,x) - 2 \sum_{i=1}^n c_i k(x, x_i) + \sum_{i=1}^n \sum_{j=1}^n c_i c_j k(x_i, x_j) = k(x,x) - 2 K_{xX}^T K_{XX}^{\dagger} K_{xX} + K_{xX}^T K_{XX}^{\dagger} K_{XX} K_{XX}^{\dagger} K_{xX} = k(x,x) - K_{xX}^T K_{XX}^{\dagger} K_{xX} = V[\bar g_n(x)]. $$
As a result of $g(x) - \sum_{i=1}^n c_i g(x_i) = g(x) - K_{xX}^T K_{XX}^{\dagger} Y$ (taking $m = g$), we have
$$ \big( g(x) - K_{xX}^T K_{XX}^{\dagger} Y \big)^2 \le V[\bar g_n(x)]. $$
Since $\lim_{h_{X_n} \to 0} V[\bar g_n(x)] = 0$, then
$$ \lim_{h_{X_n} \to 0} K_{xX}^T K_{XX}^{\dagger} Y = g(x). $$
Thus, Equation (35) is proven; that is, we have completed the proof of Equation (34). □
Theorem 4.
Suppose that $g(x) \in \overline{R(L)} \setminus R(L)$ satisfies Equation (25); then,
$$ \lim_{h_{X_n} \to 0} \| \bar f_n \|_H = +\infty. $$
Proof. 
In fact, we simply show that, if $\lim_{h_{X_n} \to 0} \| \bar f_n \|_H < +\infty$, then $g(x) \in R(L)$. Let $H_n$ be the set of all functions in $R(L)$ satisfying Equation (25); that is,
$$ H_n := \big\{ m(x) \in R(L) \;\big|\; m(x_i) = g(x_i) = y_i,\; x_i \in X_n \subset D,\; 1 \le i \le n \big\}, $$
and then $H_1 \supseteq \cdots \supseteq H_n \supseteq \cdots \supseteq H_{\infty} = \{ g(x) \}$. Because $L \bar f_n(t) = K_{xX}^T K_{XX}^{\dagger} Y \in H_n$, we get
$$ \| K_{xX}^T K_{XX}^{\dagger} Y \|_{H_k} = \| \bar f_n(t) \|_H. $$
As a result of $\lim_{h_{X_n} \to 0} \| \bar f_n \|_H < +\infty$, we know that $\{ K_{xX}^T K_{XX}^{\dagger} Y \}_{n=1}^{\infty}$ is a Cauchy sequence in $R(L)$ $(= H_k)$. Therefore,
$$ \lim_{h_{X_n} \to 0} K_{xX}^T K_{XX}^{\dagger} Y = g(x). $$
Based on $K_{xX}^T K_{XX}^{\dagger} Y \in H_n \subset R(L)$, we obtain $g(x) \in R(L)$. This conflicts with $g(x) \in \overline{R(L)} \setminus R(L)$, resulting in
$$ \lim_{h_{X_n} \to 0} \| \bar f_n \|_H = +\infty. $$
Theorem 4 has thus been proven. □
Secondly, we consider the case where $g(x) \in D(L^{\dagger})$ via the following remark.
Remark 2.
Let $g \in D(L^{\dagger})$; then, $w = L^* g \in H_{k_U}$. Hence, we can assume that $w \sim \mathcal{GP}(0, k_U)$, where $k_U(t, t') = \langle L^* h_t^*, L^* h_{t'}^* \rangle_{L^2(D)}$ if $H = L^2(E)$ and $k_U(t, t') = \langle U Q_t, U Q_{t'} \rangle_{H_Q}$ if $H = H_Q$. With a derivation similar to that of Lemma 3 and Theorem 3, conclusions analogous to Theorems 3 and 4 can also be obtained.

3.3. Sequential Design

It is common knowledge that the regularization parameter $\sigma^2$ in Equations (22) and (23) can be estimated using maximum likelihood or cross-validation methods. Additionally, calculating the pseudoinverse $K_{XX}^{\dagger}$ involves a parameter, namely the positive threshold $\eta$ below which an eigenvalue is treated as zero. One suggestion, given in [39], is to set $\eta = \lambda_1 / \kappa_{\max}$, where $\lambda_1$ is the maximum eigenvalue of $K_{XX}$ and $\kappa_{\max}$ is a reasonable condition number, such as $\kappa_{\max} = 10^8$. Beyond these methods, this paper considers the regularization problem from another perspective: selecting as few points in Equation (25) as possible so that $\bar f_n(t)$ converges quickly while the condition number of $K_{XX}$ remains acceptable.
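The threshold rule $\eta = \lambda_1 / \kappa_{\max}$ from [39] maps directly onto the `rcond` argument of `numpy.linalg.pinv`, which zeroes out singular values below `rcond` times the largest one. A small sanity check with a synthetic covariance matrix whose spectrum is chosen by hand (the matrix and eigenvalues are assumptions of this sketch):

```python
import numpy as np

# Synthetic symmetric PSD matrix with eigenvalues 1, 1e-2, 1e-4, and 1e-12.
Q, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((4, 4)))
K = Q @ np.diag([1.0, 1e-2, 1e-4, 1e-12]) @ Q.T

kappa_max = 1e8
# Eigenvalues below eta = lambda_1 / kappa_max = 1e-8 are treated as zero,
# so the pseudoinverse is computed from the three retained eigenpairs only.
K_dag = np.linalg.pinv(K, rcond=1.0 / kappa_max)

numerical_rank = np.linalg.matrix_rank(K, tol=1.0 / kappa_max)  # lambda_1 = 1 here
```

Truncating at $\eta$ caps the effective condition number of the inversion at $\kappa_{\max}$, which is precisely the regularizing effect described above.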
In practice, the selection of an experimental design $X_n = \{x_1, \ldots, x_n\}$ has also been widely studied. For example, Wahba proposed a collocation–projection method based on certain asymptotically optimal sequences in a specific RKHS to solve FIEFKs on $[0,1]$ [49]. However, validating such optimal sequences in this RKHS may be inconvenient. In [50], an optimal design of discrete points is determined by maximizing the smallest singular value of the semi-discrete or fully discrete matrix, which facilitates control of the condition number. Bardow proposed minimizing the expected total error between the exact and estimated solutions, which is useful in addressing the bias–variance trade-off that is crucial for ill-posed problems [51]. Furthermore, the concept of a redundant point set was proposed by Mohammadi et al. to exploit linearly dependent information that makes $K_{XX}$ non-invertible [39]. Despite these examples, there remains no systematic strategy for an optimal experimental design for FIEFKs.
In this paper, an optimal experimental design is determined via the posterior variance $V[\bar g_n(x)]$ of $g(x)$. Based on the $H$–$H_k$ formulation and Equation (36), we have
$$ \| \bar f_n - f^{\dagger} \|_H = \| K_{xX}^T K_{XX}^{\dagger} Y - g \|_{H_k}, $$
and
$$ \big| g(x) - K_{xX}^T K_{XX}^{\dagger} Y \big| \le \sqrt{ V[\bar g_n(x)] }. \tag{39} $$
Therefore, we simply need to minimize the maximum of $V[\bar g_n(x)]$ at each step to obtain the next point, which constitutes a greedy algorithm. We notice that Equation (39) becomes an equality at certain points; that is, the bound $\sqrt{V[\bar g_n(x)]}$ is a precise supremum.
For a given design $X_n = \{x_1, \ldots, x_n\}$, the candidate points $(x_{n+1}^{(1)}, \ldots, x_{n+1}^{(l)})$, denoted as $D_{n+1}$, are located where $V[\bar g_n(x)]$ attains its maximum, so that sampling there minimizes the updated maximum posterior variance. The next point $x_{n+1}$ is then incorporated into the design $X_n$ through the following:
$$ \max_{x_{n+1}^{(j)} \in D_{n+1}}\; \min_{x_i \in X_n} \big\| x_{n+1}^{(j)} - x_i \big\|_2, \tag{40} $$
where $\| \cdot \|_2$ denotes the Euclidean distance. It should be noted that more than one point may satisfy Equation (40), in which case only one needs to be selected. In this paper, the initial point of the design $X_n$ is usually chosen as the center of $D$. More selection strategies can be found in [52].
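The greedy loop can be sketched as follows, assuming (for this sketch only) the illustrative kernel $k(x,x') = (e^{x+x'} - 1)/(x + x')$ on $D = [0,1]$ and a grid search over $D$: at each step, the candidate set $D_{n+1}$ collects the grid points of maximum posterior variance, and Equation (40) breaks ties by the max–min distance to the current design.

```python
import numpy as np

def k(x, xp):
    # Illustrative kernel (exp(x+x') - 1)/(x+x') on [0,1], with k = 1 at x+x' = 0.
    s = np.asarray(x + xp, dtype=float)
    safe = np.where(np.abs(s) < 1e-12, 1.0, s)
    return np.where(np.abs(s) < 1e-12, 1.0, (np.exp(s) - 1.0) / safe)

def posterior_var(X, grid, rcond=1e-10):
    """Posterior variance of the interpolation model (Eq. 27) over a grid."""
    Kdag = np.linalg.pinv(k(X[:, None], X[None, :]), rcond=rcond)
    KxX = k(grid[:, None], X[None, :])
    return k(grid, grid) - np.sum((KxX @ Kdag) * KxX, axis=1)

grid = np.linspace(0.0, 1.0, 201)
X = np.array([0.5])                     # initial point: the center of D
max_var = []                            # maximum uncertainty before each new point
for _ in range(4):
    v = posterior_var(X, grid)
    max_var.append(v.max())
    cands = grid[np.isclose(v, v.max())]          # candidate set D_{n+1}
    # Eq. (40): among the candidates, pick the one farthest from the design.
    nxt = max(cands, key=lambda c: np.min(np.abs(c - X)))
    X = np.append(X, nxt)
```

Each added point can only shrink the posterior variance, so the recorded maximum uncertainty decreases monotonically, mirroring the convergence behavior reported in Section 5.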

4. Degenerate Kernel Equation

It should be emphasized that there is a special integral kernel that warrants a separate discussion, i.e., the degenerate kernel:
$$ h(x,t) = \sum_{i=1}^m a_i(x)\, b_i(t). \tag{41} $$
In this case, if Equation (1) with the integral kernel in Equation (41) is solvable, i.e., $g(x) \in R(L)$, then we obtain the following corollary.
Corollary 1.
Let Equation (1) be solvable with the integral kernel $h(x,t)$ as in Equation (41) and satisfy Equation (25). We then have
$$ \bar f_n(t) = B_m^T(t)\, A^{\dagger} A\, B^{-1} A^{\dagger}\, Y, \tag{42} $$
$$ \bar g_n(x) = A_m^T(x)\, B\, A^{\dagger} A\, B^{-1} A^{\dagger}\, Y, \tag{43} $$
$$ V[\bar f_n(t)] = L^{\dagger} L B_m^T(t)\; B^{-1}\; L^{\dagger} L B_m(t) - \big( A^{\dagger} A B_m(t) \big)^T B^{-1} A^{\dagger} A B_m(t), \tag{44} $$
$$ V[\bar g_n(x)] = A_m^T(x)\, B \big[ B^{-1} - A^{\dagger} A B^{-1} A^{\dagger} A \big] B\, A_m(x), \tag{45} $$
where $B_m(t) = (b_1(t), \ldots, b_m(t))^T$, $A_m(x) = (a_1(x), \ldots, a_m(x))^T$, $b_{ij} = \int_E b_i(t) b_j(t)\, dt$, $B = (b_{ij})_{1 \le i,j \le m}$, and $A^T$ is the conjugate transpose of $A = (A_m(x_1), \ldots, A_m(x_n))^T$.
Proof. 
According to Equation (10), we have $k(x, x') = A_m^T(x) B A_m(x')$, so $K_{XX} = A B A^T$. Because $\{b_i(t)\}_{i=1}^m$ is linearly independent, $B$ is a positive definite matrix. Thus, we claim that
$$ K_{XX}^{\dagger} = (A^{\dagger})^T B^{-1} A^{\dagger}. \tag{46} $$
According to Corollary 3.2 in [53], Equation (46) holds if and only if the rank of a certain block matrix built from $A$, $B$, and $K_{XX}$ equals $\mathrm{rank}(K_{XX}) + \mathrm{rank}(B)$. Evidently, $\mathrm{rank}(K_{XX}) \le \mathrm{rank}(B)$, and direct calculation reduces this condition to
$$ \mathrm{rank} \begin{pmatrix} B & 0 & 0 \\ 0 & A B^{-1} A^T & A A^T K_{XX} \\ 0 & K_{XX} A A^T & K_{XX}^3 \end{pmatrix} = \mathrm{rank}(K_{XX}) + \mathrm{rank}(B). $$
At this point, we simply need to prove that
$$ \mathrm{rank} \begin{pmatrix} A B^{-1} A^T & A A^T K_{XX} \\ K_{XX} A A^T & K_{XX}^3 \end{pmatrix} = \mathrm{rank}(K_{XX}). \tag{47} $$
We notice that
$$ \begin{pmatrix} A B^{-1} A^T & A A^T K_{XX} \\ K_{XX} A A^T & K_{XX}^3 \end{pmatrix} = \begin{pmatrix} A B^{-1/2} \\ K_{XX} A B^{1/2} \end{pmatrix} \begin{pmatrix} A B^{-1/2} \\ K_{XX} A B^{1/2} \end{pmatrix}^T. $$
Therefore, we simply need to show that
$$ \mathrm{rank} \begin{pmatrix} A B^{-1/2} \\ K_{XX} A B^{1/2} \end{pmatrix} = \mathrm{rank}(K_{XX}). $$
Based on $\mathrm{rank}(K_{XX}) = \mathrm{rank}(A)$ and
$$ \mathrm{rank} \begin{pmatrix} A B^{-1/2} \\ K_{XX} A B^{1/2} \end{pmatrix} = \mathrm{rank} \begin{pmatrix} A \\ K_{XX} A B \end{pmatrix} = \mathrm{rank} \begin{pmatrix} A \\ A B A^T A B \end{pmatrix} = \mathrm{rank}(A), $$
this completes the proof of Equation (47); according to Theorem 1.1.2 in [54], Equation (46) thus holds.
Since $h(x,t) = B_m^T(t) A_m(x)$, we obtain $K_{xX}^T = L H_{tX}^T = A_m^T(x) B A^T$. Thus,
$$ L^{\dagger} K_{xX}^T = H_{tX}^T = B_m^T(t) A^T, \qquad L^{\dagger} A_m^T(x) = L^{\dagger} L\; B_m^T(t)\, B^{-1}. $$
Therefore, based on Equations (30) and (31), we have
$$ \bar f_n(t) = B_m^T(t)\, A^{\dagger} A\, B^{-1} A^{\dagger}\, Y, $$
$$ V[\bar f_n(t)] = L^{\dagger} L B_m^T(t)\; B^{-1}\; L^{\dagger} L B_m(t) - \big( A^{\dagger} A B_m(t) \big)^T B^{-1} A^{\dagger} A B_m(t). $$
Similarly, based on Lemma 3 and Equation (46), we obtain
$$ \bar g_n(x) = A_m^T(x)\, B\, A^{\dagger} A\, B^{-1} A^{\dagger}\, Y, $$
$$ V[\bar g_n(x)] = A_m^T(x) \big[ B - B A^{\dagger} A B^{-1} A^{\dagger} A B \big] A_m(x). \qquad \square $$
Let us analyze the minimal-norm solution in Equation (42) from another point of view. Under the assumption of Equation (25), we seek a solution $f(t) = B_m^T(t)\, C$ that minimizes
$$ \sum_{i=1}^n \Big( \int_E h(x_i, t)\, f(t)\, dt - y_i \Big)^2 + \| C \|_2^2, \tag{48} $$
where $h(x_i, t)$ is defined as in Equation (41) and $C$ is to be determined. Obviously, by minimizing the first part of Equation (48), we obtain $C = B^{-1} A^{\dagger} Y$. In conjunction with the second part, $C$ is unique and given by
$$ C = A^{\dagger} A\, B^{-1} A^{\dagger}\, Y. $$
Namely, the solution $B_m^T(t)\, C$, obtained by optimizing Equation (48), is exactly the minimal-norm solution.
Corollary 2.
Let Equation (1) be solvable with the integral kernel $h(x,t)$ as defined in Equation (41). Suppose that there are $m$ distinct points $\{(x_i, y_i)\}_{i=1}^m$ such that $|K_{XX}| \neq 0$. Then,
$$ \bar f_m = B_m^T\, B^{-1} A^{-1}\, g_X, $$
$$ \bar g_m = g = A_m^T\, A^{-1}\, Y, $$
$$ V[\bar f_m] = V[\bar g_m] = 0, $$
where $|K_{XX}|$ is the determinant of $K_{XX}$.
Proof. 
According to $K_{XX}^{\dagger} = (A^{\dagger})^T B^{-1} A^{\dagger}$ and $\det(K_{XX}) \neq 0$, we know that $A$ is an invertible matrix; that is, $A^{\dagger} = A^{-1}$. In this case, $L^{\dagger} L = I$ and $A^{\dagger} A = I_m$, where $I$ and $I_m$ denote the identity operator and the identity matrix of order $m$, respectively. With this notation, the conclusions to be proven immediately follow. □
Remark 3.
Corollary 1 offers two improvements over Corollary 2. On the one hand, we do not require $|K_{XX}|\neq 0$, which is difficult to guarantee in practical problems; the exact solution can be obtained by simply taking the points sequentially such that $\operatorname{rank}(K_{XX})=m$. On the other hand, we present an analytical representation of the minimum-norm solution regardless of whether $n$ and $m$ are equal. That is to say, the conclusions in Corollary 1 are valid for any solvable degenerate kernel equation.
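As a concrete illustration of this remark, the following sketch (ours, not the authors' code; the rank-2 kernel $h(x,t)=xt+x^{2}t^{2}$ and the test solution $f(t)=t$ are hypothetical) recovers the exact solution of a solvable degenerate kernel equation from $m=2$ distinct points, using the Moore–Penrose pseudoinverse of the covariance matrix:

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical rank-2 degenerate kernel (not from the paper):
# h(x, t) = x t + x^2 t^2, with right-hand term generated by f(t) = t.
def h(x, t):
    return x * t + x**2 * t**2

def g(x):
    return x / 3 + x**2 / 4  # g(x) = int_0^1 h(x, t) * t dt

# Two distinct nonzero points suffice: rank(K_XX) = 2 = m.
X = [0.3, 0.8]
K = np.array([[quad(lambda s: h(a, s) * h(b, s), 0, 1)[0] for b in X] for a in X])
Y = np.array([g(x) for x in X])
c = np.linalg.pinv(K) @ Y  # pseudoinverse also covers the rank-deficient case

def f_bar(t):
    # the minimal-norm interpolant lies in span{h(x_i, .)}
    return sum(ci * h(xi, t) for ci, xi in zip(c, X))

for t in [0.2, 0.5, 0.9]:
    assert abs(f_bar(t) - t) < 1e-8  # the exact solution f(t) = t is recovered
```

Since $f(t)=t$ lies in the span of $h(x_1,\cdot)$ and $h(x_2,\cdot)$, the interpolant coincides with the exact solution, as the corollary predicts.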

5. Illustrative Examples

Example 1.
Let us consider the classical FIEFKs below [29]:
$$\int_{0}^{1}e^{xt}f(t)\,dt=g(x),$$
where $g(x)=\dfrac{e^{x+1}-1}{x+1}$, which has the exact solution $f(t)=e^{t}$.
Let $h_x(t)=e^{xt}$ and $H=L^{2}([0,1])$ according to Equation (10). Then,
$$K(x,x')=\frac{e^{x+x'}-1}{x+x'},\qquad g(x)=K(x,1).$$
According to Equation (16), then
$$f(t)=\langle g,h_t^{*}\rangle_{H_K}=h_t^{*}(1)=h_1(t)=e^{t}.$$
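The closed form is easy to verify numerically; the quick check below (ours, using scipy.integrate.quad with arbitrarily chosen test abscissae) confirms that $f(t)=e^{t}$ reproduces $g$:

```python
import numpy as np
from scipy.integrate import quad

def g(x):
    # right-hand term g(x) = (e^{x+1} - 1) / (x + 1)
    return (np.exp(x + 1) - 1) / (x + 1)

def lhs(x):
    # left-hand side: int_0^1 e^{x t} f(t) dt with f(t) = e^t
    val, _ = quad(lambda t: np.exp(x * t) * np.exp(t), 0, 1)
    return val

for x in [0.1, 0.5, 0.9]:
    assert abs(lhs(x) - g(x)) < 1e-10
```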
Example 2.
Let us consider
$$\int_{0}^{1}h(x,t)f(t)\,dt=g(x),$$
with integral kernel
$$h(x,t)=\begin{cases}x(1-t), & 0\le x\le t\le 1,\\ t(1-x), & 0\le t<x\le 1.\end{cases}$$
Similarly to [55], assume that $g(x)$ in Equation (49) is chosen as
$$g(x)=\frac{25}{1008}x(x-1)\left(17-17x+11x^{2}+11x^{3}-3x^{4}-3x^{5}+x^{6}\right),$$
and then
$$f(t)=\frac{25}{18}t(t-1)\left(3+3t-2t^{2}-2t^{3}+t^{4}\right)$$
is the minimal-norm solution of Equation (1).
Our aim is to obtain an optimal approximate solution with as few sequentially selected points as possible. According to Equation (10), if $x\le x'$, we get
$$k(x,x')=\frac{x^{3}}{3}(1-x)(1-x')+\frac{xx'(1-x')^{3}}{3}+(1-x')x\left[\frac{x'^{2}-x^{2}}{2}-\frac{x'^{3}-x^{3}}{3}\right].$$
If $x'<x$, then $x$ and $x'$ are interchanged in the above equality. At this point, we can apply Equations (27) and (30) to this example.
In comparison with uniform points, each new point is obtained by sequentially minimizing $V[\bar{g}_n(x)]$; in this numerical experiment the initial point is chosen as 0.5 because it is the midpoint of the interval $[0,1]$, as shown in Figure 1. Under the same number of points, the approximate solution obtained via sequential sampling therefore yields a lower absolute error than uniform sampling. Once the number of points is large enough, the two approaches are equally accurate, as shown in Figure 2.
It is emphasized that the covariance matrix $K_{XX}$ remains invertible throughout sequential sampling, with a maximum condition number of $2.0193\times 10^{4}$.
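The sequential design for this example can be sketched as follows (our illustration, not the authors' implementation; the grid resolution, jitter level, and number of iterations are arbitrary choices). Starting from the midpoint 0.5, each new point is placed at the grid site where the posterior variance of $g$ is largest, so that the maximum uncertainty is minimized:

```python
import numpy as np
from scipy.integrate import quad

def h(x, t):
    # Green's-function kernel of Example 2
    return x * (1 - t) if x <= t else t * (1 - x)

def k(x, xp):
    # k(x, x') = int_0^1 h(x, t) h(x', t) dt, computed numerically for simplicity
    return quad(lambda t: h(x, t) * h(xp, t), 0, 1)[0]

def next_point(X, grid, jitter=1e-12):
    # posterior variance of g at each grid site given the current design X;
    # the next point is the site of maximum remaining uncertainty
    KXX = np.array([[k(a, b) for b in X] for a in X]) + jitter * np.eye(len(X))
    variances = []
    for x in grid:
        kxX = np.array([k(x, b) for b in X])
        variances.append(k(x, x) - kxX @ np.linalg.solve(KXX, kxX))
    return grid[int(np.argmax(variances))]

grid = np.linspace(0.01, 0.99, 99)
X = [0.5]  # initial point: the midpoint of [0, 1]
for _ in range(4):
    X.append(next_point(X, grid))
```

The variance at already-selected sites is (numerically) zero, so each iteration adds a new point, and the resulting design is not uniform on $[0,1]$, consistent with Figure 1.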
Example 3
(Phillips Equation [56]). Let us consider the following FIEFKs:
$$\int_{-6}^{6}h(x,t)f(t)\,dt=g(x),\qquad -6\le x\le 6,$$
where $f(t)$, $h(x,t)$, and $g(x)$ are given by
$$h(x,t)=f(x-t),\qquad f(t)=\begin{cases}1+\cos\dfrac{\pi t}{3}, & |t|<3,\\[2pt] 0, & |t|\ge 3,\end{cases}$$
$$g(x)=(6-|x|)\left(1+\frac{1}{2}\cos\frac{\pi x}{3}\right)+\frac{9}{2\pi}\sin\frac{\pi|x|}{3}.$$
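The triple $(f,h,g)$ can be checked numerically before running the method; the sketch below (ours, with arbitrarily chosen test abscissae) verifies that the convolution of $f$ with itself reproduces $g$:

```python
import numpy as np
from scipy.integrate import quad

def f(t):
    # exact solution: a raised cosine supported on |t| < 3
    return 1 + np.cos(np.pi * t / 3) if abs(t) < 3 else 0.0

def g(x):
    return ((6 - abs(x)) * (1 + 0.5 * np.cos(np.pi * x / 3))
            + 9 / (2 * np.pi) * np.sin(np.pi * abs(x) / 3))

for x in [0.0, 1.0, 2.5]:
    # break points mark the edges of the supports of f(t) and f(x - t)
    val, _ = quad(lambda t: f(x - t) * f(t), -6, 6,
                  points=sorted({-3.0, 3.0, x - 3, x + 3}), limit=200)
    assert abs(val - g(x)) < 1e-6
```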
This integral equation has also been studied in Refs. [57,58], in which the right-hand term $g(x)$ is assumed to be unknown, so the observations are treated as noisy. Our starting point, by contrast, is to make full use of the information in $g(x)$ and to approximate the exact solution with as few points as possible, so as to save observation cost. In this example, we cannot take the initial point to be the midpoint 0 of $[-6,6]$; otherwise, the exact solution would be obtained directly from Equation (30), i.e.,
$$\bar{f}_1(t)=h(0,t)\,k(0,0)^{-1}g(0)=1+\cos\frac{\pi t}{3}.$$
Strategically choosing $\pm 1$ as the initial points exploits the symmetry of the domain and does not require the midpoint of the interval. As Figure 3 demonstrates, sequential point generation via minimization of the posterior variance $V[\bar{g}_n(x)]$ responds dynamically to the regularization threshold $\eta$. This parameter changes the nature of the solution: when $\eta>0$, the Gaussian process passes from exact interpolation to approximate smoothing, a deliberate regularization mechanism for ill-posed problems. The threshold $\eta$ balances two objectives: (1) numerical stability, through control of the condition number of $K_{XX}$, and (2) approximation fidelity, through error minimization. The experiments in Figure 4 indicate that $\eta=10^{-4}$ is a good configuration: it maintains solution stability, preventing the exponential error growth observed for $\eta<10^{-6}$, while achieving accuracy comparable to the lower-threshold configurations. At this threshold, the sequential design achieves an absolute error comparable to that of uniform discretization. The threshold $\eta$ and the number of interpolation points $n$ are closely related, although a precise mathematical relationship is not yet available. The sensitivity analysis confirms the role of $\eta$ as a bias-variance trade-off controller: larger values make the solution too smooth (higher bias at $\eta=10^{-2}$, as shown in Figure 4), while smaller values ($\eta<10^{-6}$) lead to instability. This identifies $\eta=10^{-4}$ as being close to the stability-accuracy threshold for this problem.
Remark 4.
The core computational workflow employs a greedy algorithm that sequentially determines the next interpolation point by minimizing the posterior variance
$$V[\bar{g}_n(x)]=k(x,x)-K_{xX}^{T}K_{XX}^{\dagger}K_{xX}.$$
This process may generate multiple candidate points at each step. The final selection among these candidates follows the maximin criterion: the point that maximizes the minimum distance to the existing nodes is preferred. This two-stage approach ensures a good spatial distribution of the design. Furthermore, as the expression for $V[\bar{g}_n(x)]$ shows, the point generation mechanism is independent of the specific form of $g(x)$, the right-hand term of Equation (1). This independence distinguishes the method from uniform discretization, which cannot adapt to the problem and requires more information about $g(x)$, limiting its scope of application.
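The maximin tie-break described above can be sketched in a few lines (our illustration; the candidate and design coordinates are arbitrary):

```python
import numpy as np

def maximin_select(candidates, existing):
    # among candidates (e.g., grid sites attaining the variance maximum),
    # pick the one whose minimum distance to the existing design is largest
    dists = [min(abs(c - x) for x in existing) for c in candidates]
    return candidates[int(np.argmax(dists))]

# 0.51 sits too close to the node 0.5 and is rejected in favor of 0.25
assert maximin_select([0.25, 0.75, 0.51], [0.5, 1.0]) == 0.25
```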

6. Conclusions

This paper sought to address a fundamental question for integral equations: if the right-hand term possesses an analytical expression, can an analytical expression of the solution likewise be derived? It was shown that various analytical solutions can be presented in operator form. Given the computational complexity of inner products in the reproducing kernel space, numerical solutions were also investigated. Utilizing the $H$-$H_k$ formulation, a novel point selection strategy was proposed that minimizes the posterior variance $V[\bar{g}_n(x)]$ of the integral equation's right-hand term. This greedy point selection algorithm yielded enhanced approximation accuracy: it achieved absolute errors comparable to those of other regularization methods while requiring fewer discretization points. The condition number of $K_{XX}$ was effectively maintained within acceptable bounds, ensuring solution stability, and increasing the threshold $\eta$ can serve as a supplementary measure. Future research will investigate methodologies for the precise selection of $\eta$ when solving FIEFKs.

Author Contributions

Conceptualization, R.Q.; Validation, J.X. and M.X.; Formal analysis, J.X.; Writing—original draft, R.Q.; Writing—review & editing, R.Q., J.X. and M.X.; Supervision, R.Q.; Funding acquisition, R.Q. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Guizhou University of Commerce Natural Science Projects (No. 2023ZKYB003 and No. 2024BAXM035).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Notations

The following are the main notations used throughout this manuscript:
FIEFKs: Fredholm integral equations of the first kind
RKHS: reproducing kernel Hilbert space
GPR: Gaussian process regression
SGPI: sequential Gaussian process interpolation
GP: Gaussian process
KRR: kernel ridge regression
TEIG: truncated eigendecomposition
PI kriging: kriging with pseudoinverse
$\|\cdot\|_{H}$: norm in $H$
$\dagger$: Moore–Penrose pseudoinverse
$X_n$: experimental design
$h_{X_n}$: fill distance
$K_{XX}$: covariance matrix
$E[\cdot]$: posterior mean
$V[\cdot]$: posterior variance
$GP(m,k)$: GP with mean function $m$ and covariance function $k$

References

1. Saitoh, S.; Sawano, Y. Theory of Reproducing Kernels and Applications; Springer Science & Business Media: Singapore, 2016.
2. Mesgarani, H.; Parmour, P. Application of numerical solution of linear Fredholm integral equation of the first kind for image restoration. Math. Sci. 2023, 17, 371–378.
3. Du, H.; Cui, M. Representation of the exact solution and a stability analysis on the Fredholm integral equation of the first kind in reproducing kernel space. Appl. Math. Comput. 2006, 182, 1608–1614.
4. Du, H.; Cui, M. Approximate solution of the Fredholm integral equation of the first kind in a reproducing kernel Hilbert space. Appl. Math. Lett. 2008, 21, 617–623.
5. Chang, C.W.; Liu, C.S.; Chang, J.R. A quasi-boundary semi-analytical approach for two-dimensional backward heat conduction problems. Comput. Mater. Con. 2010, 15, 45–66.
6. Venkataramanan, L.; Song, Y.Q.; Hurlimann, M.D. Solving Fredholm integrals of the first kind with tensor product structure in 2 and 2.5 dimensions. IEEE Trans. Signal. Process. 2002, 50, 1017–1026.
7. Serikbai, A.; Dias, N.; Ilya, S. Solvability and Construction of a Solution to the Fredholm Integral Equation of the First Kind. J. Appl. Math. Phys. 2024, 12, 720–735.
8. Kashirin, A.A.; Smagin, S.I. On the Solvability of Fredholm Boundary Integral Equations of the First Kind for the Three-Dimensional Transmission Problem on the Spectrum. Differ. Equ. 2024, 60, 204–214.
9. Hadamard, J. Lectures on Cauchy's Problem in Linear Partial Differential Equations; Courier Corporation: North Chelmsford, MA, USA, 2003.
10. Nashed, M.Z.; Wahba, G. Generalized inverses in reproducing kernel spaces: An approach to regularization of linear operator equations. SIAM J. Math. Anal. 1974, 5, 974–987.
11. Wahba, G. Practical approximate solutions to linear operator equations when the data are noisy. SIAM J. Numer. Anal. 1977, 14, 651–667.
12. Wen, J.; Wei, T. Regularized solution to the Fredholm integral equation of the first kind with noisy data. J. Appl. Math. Inform. 2011, 29, 23–37.
13. Tikhonov, A.N.; Goncharsky, A.V.; Stepanov, V.V.; Yagola, A.G. Numerical Methods for the Solution of Ill-Posed Problems; Springer Science & Business Media: Dordrecht, The Netherlands, 1995.
14. Wazwaz, A.M. The regularization method for Fredholm integral equations of the first kind. Comput. Math. Appl. 2011, 61, 2981–2986.
15. Neggal, B.; Boussetila, N.; Rebbani, F. Projected Tikhonov regularization method for Fredholm integral equations of the first kind. J. Inequal. Appl. 2016, 2016, 195.
16. Tanana, V.P.; Vishnyakov, E.Y.; Sidikova, A.I. An approximate solution of a Fredholm integral equation of the first kind by the residual method. Numer. Anal. Appl. 2016, 9, 74–81.
17. Masouri, Z.; Hatamzadeh, S. A regularization-direct method to numerically solve first kind Fredholm integral equation. Kyungpook Math. J. 2020, 60, 869–881.
18. Lee, J.W.; Prenter, P.M. An analysis of the numerical solution of Fredholm integral equations of the first kind. Numer. Math. 1978, 30, 1–23.
19. Du, N. Finite-dimensional approximation settings for infinite-dimensional Moore–Penrose inverses. SIAM J. Numer. Anal. 2008, 46, 1454–1482.
20. Mohammadi, A.; Tari, A. A new approach to numerical solution of the time-fractional KdV-Burgers equations using least squares support vector regression. J. Math. Model. 2024, 12, 583–602.
21. Dehestani, H.; Ordokhani, Y.; Razzaghi, M. Ritz-least squares support vector regression technique for the system of fractional Fredholm-Volterra integro-differential equations. J. Appl. Math. Comput. 2025, 71, 3477–3508.
22. Yousefi, S.; Banifatemi, A. Numerical solution of Fredholm integral equations by using CAS wavelets. Appl. Math. Comput. 2006, 183, 458–463.
23. Maleknejad, K.; Lotfi, T.; Mahdiani, K. Numerical solution of first kind Fredholm integral equations with wavelets-Galerkin method (WGM) and wavelets precondition. Appl. Math. Comput. 2007, 186, 794–800.
24. Fattahzadeh, F. Approximate solution of two-dimensional Fredholm integral equation of the first kind using wavelet base method. Int. J. Appl. Comput. Math. 2019, 5, 138.
25. Hatamzadeh, V.S.; Masouri, Z. Numerical method for analysis of one- and two-dimensional electromagnetic scattering based on using linear Fredholm integral equation models. Math. Comput. Model. 2011, 54, 2199–2210.
26. Maleknejad, K.; Saeedipoor, E. An efficient method based on hybrid functions for Fredholm integral equation of the first kind with convergence analysis. Appl. Math. Comput. 2017, 304, 93–102.
27. Didgar, M.; Vahidi, A. Application of Taylor expansion for Fredholm integral equations of the first kind. Punjab. Univ. J. Math. 2020, 51, 1–14.
28. Eggermont, P.B. Maximum entropy regularization for Fredholm integral equations of the first kind. SIAM J. Math. Anal. 1993, 24, 1557–1576.
29. Yuan, D.; Zhang, X. An overview of numerical methods for the first kind Fredholm integral equation. SN Appl. Sci. 2019, 1, 1178.
30. Yan, L.; Duan, X.; Liu, B.; Xu, J. Gaussian processes and polynomial chaos expansion for regression problem: Linkage via the RKHS and comparison via the KL divergence. Entropy 2018, 20, 191.
31. Williams, C.K.; Rasmussen, C.E. Gaussian Processes for Machine Learning; MIT Press: Cambridge, MA, USA, 2006.
32. Kanagawa, M.; Hennig, P.; Sejdinovic, D.; Sriperumbudur, B.K. Gaussian processes and kernel methods: A review on connections and equivalences. arXiv 2018, arXiv:1807.02582.
33. Qian, T. Reproducing kernel sparse representations in relation to operator equations. Complex. Anal. Oper. Theory 2020, 14, 36.
34. Qiu, R.; Yan, L.; Duan, X. Solving Fredholm integral equation of the first kind using Gaussian process regression. Appl. Math. Comput. 2022, 425, 127032.
35. Qiu, R.; Duan, X.; Huangpeng, Q.; Yan, L. The best approximate solution of Fredholm integral equations of the first kind via Gaussian process regression. Appl. Math. Lett. 2022, 133, 108272.
36. Qiu, R.; Xu, M.; Zhu, P. Reproducing kernel Hilbert space method for high-order linear Fredholm integro-differential equations with variable coefficients. Appl. Math. Comput. 2025, 489, 129161.
37. Lu, F.; Ou, M.J.Y. An adaptive RKHS regularization for the Fredholm integral equations. Math. Methods. Appl. Sci. 2025, 48, 11124–11140.
38. De Alba, P.D.; Fermo, L.; Pes, F.; Rodriguez, G. Regularized minimal-norm solution of an overdetermined system of first kind integral equations. Numer. Algorithms 2023, 92, 471–502.
39. Mohammadi, H.; Riche, R.L.; Durrande, N.; Touboul, E.; Bay, X. An analytic comparison of regularization methods for Gaussian Processes. arXiv 2016, arXiv:1602.00853.
40. Michel, V.; Orzlowski, S. On the null space of a class of Fredholm integral equations of the first kind. J. Inverse. Ill. Posed. Probl. 2016, 24, 687–710.
41. Ayupova, N.B. On the uniqueness of solutions to integral equations of the first kind. J. Inverse. Ill. Posed. Probl. 2002, 10, 13–22.
42. Hosseinzadeh, H.; Dehghan, M.; Sedaghatjoo, Z. The stability study of numerical solution of Fredholm integral equations of the first kind with emphasis on its application in boundary elements method. Appl. Numer. Math. 2020, 158, 134–151.
43. Nashed, M.Z. On moment-discretization and least-squares solutions of linear integral equations of the first kind. J. Math. Anal. Appl. 1976, 53, 359–366.
44. Lukas, M.A. Convergence rates for regularized solutions. Math. Comput. 1988, 51, 107–131.
45. Nashed, M.Z.; Wahba, G. Convergence rates of approximate least squares solutions of linear integral and operator equations of the first kind. Math. Comput. 1974, 28, 69–80.
46. Aronszajn, N. Theory of reproducing kernels. Trans. Am. Math. Soc. 1950, 68, 337–404.
47. Ye, Q. Optimal designs of positive definite kernels for scattered data approximation. Appl. Comput. Harmon. Anal. 2016, 41, 214–236.
48. Ye, Q. Kernel-based approximation methods for generalized interpolations: A deterministic or stochastic problem? arXiv 2017, arXiv:1710.05192.
49. Wahba, G. On the optimal choice of nodes in the collocation-projection method for solving linear operator equations. J. Approx. Theory 1976, 16, 175–186.
50. Liu, J. Optimal experimental designs for linear inverse problems. Inverse Probl. Eng. 2001, 9, 287–314.
51. Bardow, A. Optimal experimental design of ill-posed problems: The METER approach. Comput. Chem. Eng. 2008, 32, 115–124.
52. Santner, T.J.; Williams, B.J.; Notz, W.I. The Design and Analysis of Computer Experiments; Springer Science & Business Media: New York, NY, USA, 2003.
53. Sun, W.; Wei, Y. Triple reverse-order law for weighted generalized inverses. Appl. Math. Comput. 2002, 125, 221–229.
54. Wang, G.; Wei, Y.; Qiao, S. Generalized Inverses: Theory and Computations; Springer Science & Business Media: Singapore, 2018.
55. Chen, Z.; Xu, Y.; Yang, H. Fast collocation methods for solving ill-posed integral equations of the first kind. Inverse Probl. 2008, 24, 065007.
56. Phillips, D.L. A technique for the numerical solution of certain integral equations of the first kind. J. Assoc. Comput. Mach. 1962, 9, 84–97.
57. Ramlau, R.; Reichel, L. Error estimates for Arnoldi–Tikhonov regularization for ill-posed operator equations. Inverse Probl. 2019, 35, 055002.
58. Reichel, L.; Sadok, H.; Zhang, W.H. Simple stopping criteria for the LSQR method applied to discrete ill-posed problems. Numer. Algorithms 2020, 84, 1381–1395.
Figure 1. Minimizing the posterior variance V [ g ¯ n ( x ) ] of g ( x ) also minimizes the maximum uncertainty of g ¯ n ( x ) . The subfigure on the right was obtained by minimizing the left subfigure. It can be seen that the generated points (red stars) are not uniformly distributed on [0,1].
Figure 2. The absolute error was plotted under the uniform and sequential points, which are denoted as Uni and Seq in this figure, respectively. In most cases, the approximate solutions under sequential points have smaller absolute error than uniform points. When h X n is less than a certain value, that is, when n is greater than a positive integer, the two have almost the same absolute error.
Figure 3. The new points obtained by sequentially minimizing V[ḡₙ(x)] for different η. In this case, the Gaussian process interpolation model becomes a non-interpolating model.
Figure 4. The absolute error between the exact solution and the approximate solution is plotted under uniform and sequential points for different η .
Qiu, R.; Xu, J.; Xu, M. Solving Fredholm Integral Equations of the First Kind Using a Gaussian Process Model Based on Sequential Design. Mathematics 2025, 13, 2407. https://doi.org/10.3390/math13152407
