Abstract
In this study, a Gaussian process model is utilized to study the Fredholm integral equations of the first kind (FIEFKs). Based on the H–H_K formulation, two cases of FIEFKs are considered with respect to the right-hand term: discrete data and analytical expressions. In the former case, explicit approximate solutions with minimum norm are obtained via a Gaussian process model. In the latter case, the exact solutions with minimum norm are given in operator form, and they can also be solved numerically via Gaussian process interpolation. The interpolation points are selected sequentially by minimizing the posterior variance of the right-hand term, i.e., minimizing the maximum uncertainty. Compared with uniform interpolation points, the approximate solutions converge faster at sequentially selected points. In particular, for solvable degenerate kernel equations, the exact solutions with minimum norm can be easily obtained using our proposed sequential method. Finally, the efficacy and feasibility of the proposed method are demonstrated through illustrative examples.
Keywords:
Fredholm integral equations; Tikhonov regularization; Gaussian process model; ill-posed problem; Moore–Penrose pseudoinverse; H–H_K formulation
MSC:
45B05
1. Introduction
In recent years, mathematical models have been extensively employed to solve complex systems. Many inverse problems in these systems can be reduced to solving specific FIEFKs, such as image restoration [1,2], the backward heat conduction problem [3,4,5], nuclear magnetic resonance [6], and so forth. Additionally, some mathematical inverse problems—such as numerical differentiation, the numerical inversion of integral transformations, and numerical solutions of differential equations—can be translated into FIEFKs.
The solvability and construction of FIEFKs have been considered in [7,8]. However, this type of equation has a significant property, referred to as ill-posedness, in contrast to well-posedness as defined by Jacques Hadamard [9] in the 1920s. Because exact solutions for such equations are difficult to obtain, researchers often resort to numerical methods. Numerically, ill-posedness manifests as a relatively large condition number of the coefficient matrix in the resulting linear system. It is well known that this condition number can be effectively controlled through regularization, which has given rise to various regularization methods [10,11,12,13,14,15,16,17]. Among these, finite-dimensional regularized approximation methods [15,16,17] are formulated within specific subspaces. These subspaces are typically obtained in one of two ways: by selecting basis functions to perform an orthogonal decomposition of the space or by meshing the domain of definition. In addition, other finite-rank approximate projection methods [18,19] have been developed based on such subspaces, yielding least squares solutions with minimum norm through operator inversion. In recent years, machine learning methods, such as support vector regression [20,21], have also been used to study Fredholm integral equations.
In addition to regularization methods, wavelet techniques are also used to solve FIEFKs. S. Yousefi and A. Banifatemi [22] proposed a method for estimating the coefficients of numerical solutions by means of the orthogonal properties of CAS wavelets. The wavelet–Galerkin method proposed by K. Maleknejad et al. [23] transforms FIEFKs into linear algebraic equations with sparse coefficient matrices. F. Fattahzadeh [24] proposed a direct numerical method for two-dimensional linear and nonlinear FIEFKs via the Haar wavelet, notably without resorting to numerical integration. Block-pulse functions are also popular. For example, Z. Masouri et al. [17,25] proposed a regularization–direct method to overcome the ill-posedness of integral equations based on vector forms of block-pulse functions without employing any projection techniques. K. Maleknejad and E. Saeedipoor [26] enhanced block-pulse functions and proposed a direct numerical method based on orthogonal hybrid block-pulse functions and Legendre polynomials. Furthermore, numerous other methods have been developed for solving integral equations, for example, iteration methods [2], Taylor expansion methods [27], and maximum entropy regularization [28]; additional methods can be found in [29].
In practice, the majority of methods are devoted to numerical solutions obtained by transforming integral equations into linear algebraic equations. For FIEFKs, the condition number is controlled by certain regularization parameters as follows:
For convenience, the above equation can be rewritten as
whereby the linear operator L is herein referred to as
and its adjoint operator is defined as
Here, the integral kernel is an explicit continuous function on its domain, and the unknown function is sought in H, which is either a square-integrable space or a reproducing kernel Hilbert space (RKHS).
It is common for the right-hand term to be unknown, yet discrete data with respect to it are typically given, such as
Assume that is bounded, i.e.,
where is a noise level. According to this assumption,
which has been widely studied in [10,11,12], and the norm is considered in the solution space H ( or RKHS ). This problem has a unique solution:
where , , and
and the entries of the matrix are given by
Notice that Equations (5) and (7) are based on the H–H_K formulation, which was suggested by Qian [33]; this formulation was used to study Equation (1) by R. Qiu et al. [34,35,36]. More specifically, the formulation transforms the range space of L (a non-closed subspace) into an RKHS, which is closed under its own topology; that is, it turns ill-posed problems into well-posed problems in the mathematical sense. Based on this formulation, the results in [34,35] are extended in this paper using GPR. Moreover, an adaptive RKHS regularization with additive Gaussian noise was proposed in [37], which is also associated with the H–H_K formulation.
Beyond this, we are more concerned with the essential question of Equation (1): when there are explicit analytical expressions for the kernel and the right-hand term, can the exact solution be obtained accordingly? Exact solutions with minimum norm are rarely found, except in [10], where an analytical solution in operator form is presented. Based on the H–H_K formulation, we also present exact solutions in operator form in Theorems 1 and 2, regardless of the choice of solution space. We further analyze the potential challenges in the computation of these solutions. For this reason, we provide an approximate solution with minimum norm based on Gaussian process interpolation as
Herein, the pseudoinverse appearing above is the Moore–Penrose pseudoinverse, and its minimum positive eigenvalue acts as a regularization parameter (see [38,39]). Our proposed method sequentially selects interpolation points by minimizing the posterior variance of the right-hand term, defined in Equation (27); that is, each new point is placed at the location of maximum uncertainty of the Gaussian process interpolation of the right-hand term in Equation (1). Compared with uniform points, the solution in Equation (8) generally gives a better approximation at sequentially selected points. For convenience, we refer to this method as sequential Gaussian process interpolation (SGPI).
The remainder of this paper is structured as follows. In Section 2, we introduce the H–H_K formulation and present the exact solutions in operator form for a given right-hand term. In Section 3, we utilize a Gaussian process model to solve FIEFKs and present our strategy for selecting the sequential points of the interpolation model. In Section 4, we demonstrate that the exact solutions of degenerate kernel equations can be found using the SGPI method for any initial point. Several examples illustrate the efficacy and feasibility of the SGPI method in Section 5. Finally, we present a brief conclusion in Section 6.
2. Minimal-Norm Solution
2.1. Moore–Penrose Pseudoinverse
Let and denote the null space and range space of L, respectively. It can be demonstrated that the null space is a closed linear subspace of H, while the range is a non-closed subspace unless it is finite-dimensional.
Definition 1.
For , a function is called an ordinary least squares (OLS) solution of Equation (2) if
A function is called the minimal-norm solution (or the best approximate solution), denoted by , if . Here, S denotes the set of all OLS solutions. Additionally, is called the Moore–Penrose pseudoinverse of L, defined as in Equation (3), if .
In general, the Moore–Penrose pseudoinverse is a closed operator. A key point is that it is bounded if and only if the range subspace is closed [19]. Moreover, there exists another orthogonal decomposition on E; thus,
The null space of a special integral equation has been studied in [40], and the uniqueness of solutions is considered in [41,42]. In fact, the solution is also unique in the sense of the Moore–Penrose pseudoinverse (see [10,38,43,44,45]).
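To make Definition 1 concrete, the following is a minimal finite-dimensional sketch in Python (our own illustration; the paper itself works with operators between infinite-dimensional spaces): among all OLS solutions of a rank-deficient system, the Moore–Penrose pseudoinverse returns the unique one of minimum norm.

```python
import numpy as np

# Finite-dimensional illustration: for a rank-deficient matrix A, the
# Moore-Penrose pseudoinverse picks, among all OLS solutions of A u = g,
# the unique solution of minimum norm.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))  # rank <= 3 < 4
g = rng.standard_normal(5)

u_dagger = np.linalg.pinv(A) @ g           # minimal-norm OLS solution
v_null = np.linalg.svd(A)[2][-1]           # right singular vector with sigma ~ 0
u_other = u_dagger + v_null                # another OLS solution

residual = lambda u: np.linalg.norm(A @ u - g)
print(residual(u_dagger), residual(u_other))               # equal residuals
print(np.linalg.norm(u_dagger) < np.linalg.norm(u_other))  # True
```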
2.2. H–H_K Formulation
For a linear integral operator L, defined as in Equation (3), it is known that L is a compact linear operator from H and that its range is a non-closed subspace, which means that Equation (1) is ill-posed. Nevertheless, the range can be transformed into an RKHS whose norm is endowed with a reproducing kernel topology through the H–H_K formulation. Henceforth, is denoted as . Based on this formulation, Equation (1) is transformed mathematically into a well-posed equation.
For any positive definite kernel k, by the Moore–Aronszajn theorem [46], there exists a unique RKHS whose reproducing kernel equals k. This means that there is a one-to-one correspondence between an RKHS and a positive definite kernel. Further properties of RKHSs and their applications are summarized in [1].
For any and in , its inner product
and
where represents an orthogonal projection operator.
Hereafter, Equations (9) and (10), which contain the following isometric isomorphism relationship, are referred to as the H–H_K formulation according to [33], and represents the canonical range space. Based on Equations (9) and (10), we obtain the following lemma.
2.3. Minimal-Norm Solution in
Let H be an RKHS, and suppose that, for the given continuous function, Equation (1) is solvable. In this case, the following conclusion was obtained by the authors of [10].
Theorem 1
2.4. Minimal-Norm Solution in
In this subsection, we assume that . Generally speaking, the resulting operator is unbounded unless the integral kernel of Equation (1) is degenerate. However, based on the H–H_K formulation, we can draw the following conclusion.
Lemma 2.
Let and ; then, is bounded.
Proof.
Let and ; then,
Hence, is bounded. □
Remark 1.
Theorem 2.
Let and for fixed ; then,
Proof.
In this section, we presented two operator-form expressions for the minimal-norm solution, namely the solutions in Equations (15) and (16), regardless of the space in which the right-hand term lies. Among them, Equation (15) also depends on the RKHS, which can usually be chosen as an absolutely continuous function space, for example in [4] and W in [38]. These operator-form solutions can sometimes be calculated analytically; otherwise, they can only be computed numerically, which is implemented through the Gaussian process model described below.
3. Gaussian Process Model
For a Gaussian process (GP), a positive definite kernel, often called the covariance kernel, represents the covariance function of the underlying random variables. Consequently, every RKHS is associated with a covariance function (its reproducing kernel).
Definition 2.
Let be a real-valued function and be a positive definite kernel; then, is called a Gaussian process if the function values at any finite set of points follow a joint Gaussian distribution; herein, , , and .
Usually, a Gaussian process is written as
where and .
The above definition is taken from Ref. [31], which discusses many applications of Gaussian processes in machine learning. Furthermore, it should be mentioned that connections and equivalences between Gaussian processes and kernel methods are established in [32]; for example, the estimator of kernel ridge regression (KRR) coincides with the posterior mean of GPR. Inspired by this equivalence, we can employ the Gaussian process model (regression and interpolation) to study the numerical approximation of the right-hand term of Equation (1) and then obtain numerical forms of the solutions in Equations (15) and (16).
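As a small illustration of Definition 2 and the notation GP(m, k), the following Python sketch draws sample paths of a zero-mean GP; the squared-exponential kernel, the grid, and all names are illustrative assumptions, since the definition only requires a positive definite kernel.

```python
import numpy as np

def k_se(a, b, ell=0.2):
    # assumed squared-exponential covariance kernel; Definition 2 only
    # requires k to be positive definite
    return np.exp(-np.subtract.outer(a, b) ** 2 / (2 * ell ** 2))

x = np.linspace(0.0, 1.0, 200)
K = k_se(x, x) + 1e-10 * np.eye(x.size)   # tiny jitter for numerical stability
rng = np.random.default_rng(1)
# three draws from GP(0, k): every finite restriction is jointly Gaussian
samples = rng.multivariate_normal(np.zeros(x.size), K, size=3)
```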
3.1. Gaussian Process Regression
To distinguish it from the corresponding vector, the set of points is customarily referred to as an experimental design on D. We assume that the observed values and the function values satisfy
where the noise terms are independent and identically distributed normal random variables. In the Gaussian process of Equation (17), without loss of generality, the mean function is taken to be zero throughout this paper. Based on the H–H_K formulation, , and then the posterior mean and variance at x are
where , , is known as the covariance matrix, and is the Gaussian process regression of .
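For concreteness, the following is a minimal sketch of the posterior mean and variance in Equations (18) and (19); the kernel, the synthetic right-hand term, and the noise level are illustrative assumptions, not the paper's specific choices.

```python
import numpy as np

def k_se(a, b, ell=0.2):
    # assumed squared-exponential covariance kernel (illustrative choice)
    return np.exp(-np.subtract.outer(a, b) ** 2 / (2 * ell ** 2))

def gpr_posterior(x_new, X, y, sigma2=1e-4):
    """Posterior mean and variance of a zero-mean GP, cf. Equations (18)
    and (19); sigma2 is the noise variance added to the diagonal."""
    K = k_se(X, X) + sigma2 * np.eye(X.size)   # covariance matrix of the design
    k_star = k_se(x_new, X)                    # cross-covariances
    mean = k_star @ np.linalg.solve(K, y)      # posterior mean
    var = k_se(x_new, x_new).diagonal() - np.einsum(
        "ij,ji->i", k_star, np.linalg.solve(K, k_star.T))  # posterior variance
    return mean, var

# toy usage: noisy observations of an assumed right-hand term g(x) = sin(2*pi*x)
X = np.linspace(0.0, 1.0, 8)
y = np.sin(2 * np.pi * X) + 1e-2 * np.random.default_rng(2).standard_normal(X.size)
mean, var = gpr_posterior(np.linspace(0.0, 1.0, 100), X, y)
```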
For certain complex reproducing kernels, deriving analytical expressions for the solutions in Equations (15) and (16) may be impractical, in which case they can be approximated separately by the minimal-norm solution of the following equation:
where , , and is defined as in Equation (19) when or . Likewise, let . Its analytical expression is provided in Proposition 1, which extends the results found in [35].
Proposition 1.
Let .
If and , then
If and , then
where
Furthermore, with probability 1, we have
Proof.
This repeats Theorems 1 and 2 in [35] with L replaced by U, g by w, and k by . □
Meanwhile, it is important to emphasize the role of the noise variance, which functions as a double-edged sword. On the one hand, the ill-posedness of the covariance matrix worsens as n increases; adding the variance to its diagonal can effectively improve stability. In this case, the variance is referred to as the "nugget", which is a regularization parameter. On the other hand, if the problem at hand is itself an interpolation model (for example, given certain points on the right-hand term of Equation (1), determining its optimal interpolation function), the nugget turns the problem into a non-interpolation model. Therefore, it is necessary to discuss the interpolation model separately. In that case, the minimum positive eigenvalue of the covariance matrix acts as a regularization parameter, as in truncated eigendecomposition (TEIG), discussed in [38], and kriging with pseudoinverse (PI kriging) regularization, proposed in [39].
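The stabilizing side of this trade-off is easy to see numerically: adding a nugget to the diagonal of a nearly singular covariance matrix sharply reduces its condition number. A minimal sketch, assuming a squared-exponential kernel:

```python
import numpy as np

# Conditioning of the covariance matrix with and without a nugget:
# clustered design points make K nearly singular; adding sigma2 * I caps the
# condition number at roughly (lambda_max + sigma2) / sigma2.
X = np.sort(np.random.default_rng(3).uniform(0.0, 1.0, 40))
K = np.exp(-np.subtract.outer(X, X) ** 2 / (2 * 0.2 ** 2))
for sigma2 in (0.0, 1e-8, 1e-4):
    print(sigma2, np.linalg.cond(K + sigma2 * np.eye(X.size)))
```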
3.2. Gaussian Process Interpolation
In this subsection, we focus on the noise-free scenario (interpolation). Suppose that there are interpolation points that satisfy
With the help of the Moore–Penrose pseudoinverse, the regression model in Equation (18) can be transformed into the interpolation model in Equation (25), as shown in the following Lemma 3, which can be found in [47,48].
Lemma 3
(Theorem 3.4 in [48]). Let satisfy Equation (25); then,
Firstly, let , and let be the posterior mean of . Then, ; i.e., it is also the minimal-norm solution of the following equation:
In order to describe the approximation error between and , we also need to introduce the fill distance, defined as
for a given . Obviously, if , then is dense in D.
Proof.
Therefore, in view of Equation (26) in Lemma 3, we have
Since , then
In accordance with Lemma 2, we can obtain . Moreover,
Based on ,
and, therefore,
On account of and Lemma 2, we have
We only need to prove Equations (30) and (31), and similar methods can prove Equations (32) and (33). Based on Equation (12), we have
Based on the H–H_K formulation, we get
and then we can simply show that
For any , we have
According to the Cauchy–Schwarz inequality, we get
Let ; then, , and, further,
That is,
We set and based on the above equality, yielding
As a result of , we have
Since , then
Thus, this proves Equation (35); that is, we have completed the proof of Equation (34).
□
Theorem 4.
Suppose that satisfies Equation (25); then,
Proof.
As a result of , we know that is a Cauchy sequence in . Therefore,
Based on , . This contradicts , resulting in
Theorem 4 has thus been proven. □
In fact, we can simply show that, if , then . Let be the set of all functions in satisfying Equation (25); that is,
and then . Because , we get
Secondly, we consider the case where via the following remark.
Remark 2.
Let ; then, . Hence, we can assume that , where if and if . By a derivation similar to those of Lemma 3 and Theorem 3, conclusions analogous to Theorems 3 and 4 can also be obtained.
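To make the interpolation model of this subsection concrete, here is a minimal sketch in which the Moore–Penrose pseudoinverse replaces the inverse of the covariance matrix, as in Lemma 3, together with a discrete computation of the fill distance; the kernel and the cutoff rcond are illustrative assumptions.

```python
import numpy as np

def k_se(a, b, ell=0.2):
    # assumed squared-exponential kernel, as in the earlier sketches
    return np.exp(-np.subtract.outer(a, b) ** 2 / (2 * ell ** 2))

def gp_interpolation(x_new, X, g_vals, rcond=1e-12):
    """Noise-free GP interpolation with the Moore-Penrose pseudoinverse in
    place of the inverse covariance matrix (cf. Lemma 3); rcond is the
    relative cutoff below which singular values are treated as zero."""
    K_pinv = np.linalg.pinv(k_se(X, X), rcond=rcond)
    k_star = k_se(x_new, X)
    mean = k_star @ K_pinv @ g_vals              # reproduces g_vals on X
    var = k_se(x_new, x_new).diagonal() - np.einsum(
        "ij,jk,ik->i", k_star, K_pinv, k_star)
    return mean, np.maximum(var, 0.0)            # clip tiny negative round-off

def fill_distance(X, grid):
    # h_X: the largest distance from a domain point to its nearest design point
    return np.min(np.abs(np.subtract.outer(grid, X)), axis=1).max()
```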
3.3. Sequential Design
It is common knowledge that the regularization parameter in Equations (22) and (23) can be estimated using maximum likelihood or cross-validation methods. Additionally, calculating the pseudoinverse involves a parameter, namely the positive threshold below which an eigenvalue is treated as zero. One suggestion, given in [39], is to set the threshold to , where is the maximum eigenvalue of the covariance matrix and is a reasonable condition number, such as . In addition to these methods, this paper considers the regularization problem from another perspective, that is, by selecting as few points in Equation (25) as possible to ensure faster convergence under an acceptable condition number of the covariance matrix.
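The eigenvalue-threshold rule attributed to [39] can be sketched as follows; the target condition number used here (1e12) is an assumed illustrative value.

```python
import numpy as np

def truncated_pinv(K, kappa=1e12):
    """Pseudoinverse with the threshold rule of [39]: every eigenvalue below
    epsilon = lambda_max / kappa is treated as zero; kappa is the target
    condition number (1e12 is an assumed illustrative value)."""
    lam, Q = np.linalg.eigh(K)        # K is symmetric positive semidefinite
    eps = lam.max() / kappa           # the positive threshold
    keep = lam > eps
    return (Q[:, keep] / lam[keep]) @ Q[:, keep].T
```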
In practice, the selection of an experimental design has also been widely studied. For example, Wahba proposed a collocation–projection method based on certain asymptotically optimal sequences in a specific RKHS to solve FIEFKs on [0,1] [49]. However, validating such optimal sequences in this RKHS may be inconvenient. In [50], an optimal design of discrete points is determined by maximizing the smallest singular value of the semi-discrete or fully discrete matrix, which facilitates control of the condition number. Bardow proposed minimizing the expected total error between the exact and estimated solutions, which is useful in addressing the bias–variance trade-off that is crucial for ill-posed problems [51]. Furthermore, the concept of a redundant point set was proposed by Mohammadi et al. to identify linearly dependent information that makes the covariance matrix non-invertible [39]. Despite the above examples, however, there remains no systematic strategy for an optimal experimental design for FIEFKs.
In this paper, an optimal experimental design is determined via the posterior variance of the right-hand term. Based on the H–H_K formulation and Equation (36), we have
and
Therefore, we simply need to minimize the maximum posterior variance at each step to obtain the next point, which is a greedy algorithm. We notice that Equation (39) becomes an equality at each design point; that is, the supremum is precisely attained.
For a given design, new points are obtained by minimizing the maximum posterior variance. In this case, the next point is incorporated into the design through the following:
where denotes the Euclidean distance. It should be noted that there may be more than one point satisfying Equation (40), of which only one needs to be selected. In this paper, the initial point of the design is usually selected as the center of D. More selection strategies can be found in [52].
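Putting the pieces together, the following sketch implements the greedy sequential design with the maximin tie-break of Equation (40); the kernel, the candidate grid, the tolerance, and all names are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np

def sgpi_design(n_points, grid, ell=0.2, x0=0.5, tol=1e-12):
    """Greedy sequential design: each new point is placed where the posterior
    variance of the right-hand term is largest, thereby minimizing the maximum
    uncertainty; near-ties are resolved by the maximin rule of Equation (40)."""
    k = lambda a, b: np.exp(-np.subtract.outer(a, b) ** 2 / (2 * ell ** 2))
    X = [x0]  # initial point, e.g., the center of D
    for _ in range(n_points - 1):
        Xa = np.asarray(X)
        K_pinv = np.linalg.pinv(k(Xa, Xa))           # pseudoinverse of K
        k_star = k(grid, Xa)
        # posterior variance; the prior variance of this kernel is k(x, x) = 1
        var = 1.0 - np.einsum("ij,jk,ik->i", k_star, K_pinv, k_star)
        cand = np.flatnonzero(var >= var.max() - tol)
        # maximin tie-break: keep the candidate farthest from the current design
        dmin = np.min(np.abs(np.subtract.outer(grid[cand], Xa)), axis=1)
        X.append(float(grid[cand[np.argmax(dmin)]]))
    return np.asarray(X)

design = sgpi_design(8, np.linspace(0.0, 1.0, 501))
print(design)  # the generated points need not be uniformly spread on [0, 1]
```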
4. Degenerate Kernel Equation
It should be emphasized that there is a special integral kernel that warrants a separate discussion, i.e., the degenerate kernel:
In this case, if Equation (1) with the integral kernel represented as Equation (41) is solvable, i.e., , then we can obtain the following corollary.
Corollary 1.
Proof.
this completes the proof of Equation (47); that is, Equation (46) is correct. According to Theorem 1.1.2 in [54] and Equation (46), we have
Since , we obtain . Thus,
Therefore, based on Equations (30) and (31), we have
Similarly, based on Lemma 3 and Equation (46), we obtain
According to Equation (10), we have , so . Because is linearly independent, B is a positive definite matrix. Thus, we can claim that
According to Corollary 3.2 in [53], we can prove that
where represents the rank of matrix . Evidently, through direct calculation, and
At this point, we simply need to prove that
We notice that
Therefore, we simply need to show that
Based on and
□
Let us analyze the minimal-norm solution in Equation (42) from another point of view. Under the assumption of Equation (25), we expect that one solution can simultaneously minimize
where is defined as in Equation (41) and C is to be determined. Obviously, by minimizing the first part of Equation (48), we obtain . In conjunction with the second part, C is unique and given as
Namely, the solution obtained by optimizing Equation (48) is the minimal-norm solution.
Corollary 2.
Let Equation (1) be solvable with the integral kernel as defined in Equation (41). Suppose that there are m distinct points such that . Then,
where is the determinant of .
Proof.
According to and , we know that A is an invertible matrix; that is, . In this case, and . Here, I and denote the identity operator and the identity matrix of order m, respectively. With this notation, the conclusions to be proven follow immediately.
□
Remark 3.
Corollary 1 offers two improvements over Corollary 2. On the one hand, we do not require , which is difficult to guarantee in practical problems; the exact solution can be obtained by simply taking the points sequentially such that . On the other hand, we present an analytical representation of the minimum-norm solution regardless of whether n and m are equal. That is to say, the conclusions in Corollary 1 are valid for any solvable degenerate kernel equation.
5. Illustrative Examples
Example 1.
Let us consider the classical FIEFKs below [29]:
where , which has the exact solution of .
is the minimal-norm solution of Equation (1).
Example 2.
Let us consider
with integral kernel
Similarly to [55], assume that the right-hand term of Equation (49) is chosen as
and then
Our aim is to obtain an optimal approximate solution using as few sequential points as possible. According to Equation (10), if , we get
In this numerical experiment, the initial point is chosen as 0.5 because it is the midpoint of the interval [0,1], and each new point is obtained by sequentially minimizing the maximum posterior variance, as shown in Figure 1. For the same number of points, the approximate solution obtained via sequential sampling therefore yields a lower absolute error than that obtained via uniform sampling. When the number of points becomes sufficiently large, the two approximations are consistent, as shown in Figure 2.
Figure 1.
Minimizing the posterior variance of also minimizes the maximum uncertainty of . The subfigure on the right was obtained by minimizing the quantity shown in the left subfigure. It can be seen that the generated points (red stars) are not uniformly distributed on [0,1].
Figure 2.
The absolute error is plotted under the uniform and sequential points, denoted as Uni and Seq in this figure, respectively. In most cases, the approximate solutions under sequential points have a smaller absolute error than those under uniform points. When is less than a certain value, that is, when n is greater than a certain positive integer, the two have almost the same absolute error.
It is emphasized that the covariance matrix is always invertible, and the maximum condition number is during sequential sampling.
Example 3
(Phillips Equation [56]). Let us consider the following FIEFKs:
where , , and are given by
This integral equation has also been studied in Refs. [57,58], in which the right-hand term is assumed to be unknown, so the observations are considered to be noisy. However, our starting point is to make full use of the available information and to use as few points as possible to approximate the exact solution, so as to save observation costs. In this example, we cannot take the initial point to be the midpoint 0 of the interval; otherwise, the exact solution would be obtained directly from Equation (30), i.e.,
Strategically choosing the initial point can take advantage of the symmetry of the domain, and it need not coincide with the midpoint of the interval. As Figure 3 demonstrates, sequential point generation via posterior variance minimization responds to the regularization threshold . When the threshold is positive, the Gaussian process passes from exact interpolation to approximate smoothing, a deliberate regularization mechanism for ill-posed problems. The threshold balances two objectives: (1) numerical stability, through control of the condition number of the covariance matrix, and (2) approximation fidelity, through error minimization. The experiments in Figure 4 show that an intermediate threshold is preferable: it maintains solution stability (preventing the exponential error growth observed at a smaller threshold) while achieving the accuracy of the lower-threshold configuration. At this threshold, the sequential design achieves an absolute error comparable to uniform discretization. We also observed that the threshold and the number of interpolation points n are closely related, although a precise mathematical relationship is not yet available. The sensitivity analysis confirms the role of the threshold as a bias–variance trade-off controller: higher values oversmooth the solution (higher bias, as shown in Figure 4), while lower values lead to instability. This identifies a value close to the stability–accuracy threshold for this problem.
Figure 3.
The new points obtained by sequentially minimizing for different . In this case, the Gaussian process interpolation model has turned into a non-interpolation model.
Figure 4.
The absolute error between the exact solution and the approximate solution is plotted under uniform and sequential points for different .
Remark 4.
The core computational workflow employs a greedy algorithm to sequentially determine the next interpolation point by minimizing the posterior variance
This process may generate multiple candidate points at each step. The final selection among these candidates follows the maximin criterion: the point that maximizes the minimum distance to the existing nodes is preferred, which promotes a well-spread design. Furthermore, as the expression for the posterior variance shows, the point generation mechanism is independent of the specific form of the right-hand term of Equation (1). This independence distinguishes the method from uniform discretization methods, which cannot adapt and require more prior information about the right-hand term, limiting their scope of application.
6. Conclusions
This paper sought to address a fundamental question for integral equations: if the right-hand term possesses an analytical expression, can an analytical expression of the solution likewise be derived? We concluded that analytical solutions can be presented in operator form. Given the computational complexity associated with inner products for complicated reproducing kernels, numerical solutions were also investigated. Utilizing the H–H_K formulation, a novel point selection strategy was proposed by minimizing the posterior variance of the integral equation's right-hand term. This greedy point selection algorithm yielded enhanced approximation accuracy, achieving an absolute error comparable to other regularization methods while requiring fewer discretization points. Consequently, the condition number of the covariance matrix was effectively maintained within acceptable bounds, ensuring solution stability. Furthermore, increasing the threshold can be employed as a supplementary measure. Future research will investigate methodologies for the precise selection of this threshold when solving FIEFKs.
Author Contributions
Conceptualization, R.Q.; Validation, J.X. and M.X.; Formal analysis, J.X.; Writing—original draft, R.Q.; Writing—review & editing, R.Q., J.X. and M.X.; Supervision, R.Q.; Funding acquisition, R.Q. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by Guizhou University of Commerce Natural Science Projects (No. 2023ZKYB003 and No. 2024BAXM035).
Data Availability Statement
The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.
Conflicts of Interest
The authors declare no conflict of interest.
Notations
The following are the main notations used throughout this manuscript:
| FIEFKs | Fredholm integral equations of the first kind |
| RKHS | Reproducing kernel Hilbert space |
| GPR | Gaussian process regression |
| SGPI | Sequential Gaussian process interpolation |
| GP | Gaussian process |
| KRR | Kernel ridge regression |
| TEIG | Truncated eigendecomposition |
| PI kriging | Kriging with pseudoinverse |
| | Norm in H |
| † | Moore–Penrose pseudoinverse |
| | Experimental design |
| | Fill distance |
| | Covariance matrix |
| | Posterior mean |
| | Posterior variance |
| | GP with mean m and covariance function k |
References
- Saitoh, S.; Sawano, Y. Theory of Reproducing Kernels and Applications; Springer Science & Business Media: Singapore, 2016. [Google Scholar]
- Mesgarani, H.; Parmour, P. Application of numerical solution of linear Fredholm integral equation of the first kind for image restoration. Math. Sci. 2023, 17, 371–378. [Google Scholar] [CrossRef]
- Du, H.; Cui, M. Representation of the exact solution and a stability analysis on the Fredholm integral equation of the first kind in reproducing kernel space. Appl. Math. Comput. 2006, 182, 1608–1614. [Google Scholar] [CrossRef]
- Du, H.; Cui, M. Approximate solution of the Fredholm integral equation of the first kind in a reproducing kernel Hilbert space. Appl. Math. Lett. 2008, 21, 617–623. [Google Scholar] [CrossRef]
- Chang, C.W.; Liu, C.S.; Chang, J.R. A quasi-boundary semi-analytical approach for two-dimensional backward heat conduction problems. Comput. Mater. Con. 2010, 15, 45–66. [Google Scholar]
- Venkataramanan, L.; Song, Y.Q.; Hurlimann, M.D. Solving Fredholm integrals of the first kind with tensor product structure in 2 and 2.5 dimensions. IEEE Trans. Signal. Process. 2002, 50, 1017–1026. [Google Scholar] [CrossRef]
- Serikbai, A.; Dias, N.; Ilya, S. Solvability and Construction of a Solution to the Fredholm Integral Equation of the First Kind. J. Appl. Math. Phys. 2024, 12, 720–735. [Google Scholar] [CrossRef]
- Kashirin, A.A.; Smagin, S.I. On the Solvability of Fredholm Boundary Integral Equations of the First Kind for the Three-Dimensional Transmission Problem on the Spectrum. Differ. Equ. 2024, 60, 204–214. [Google Scholar] [CrossRef]
- Hadamard, J. Lectures on Cauchy’s Problem in Linear Partial Differential Equations; Courier Corporation: North Chelmsford, MA, USA, 2003. [Google Scholar]
- Nashed, M.Z.; Wahba, G. Generalized inverses in reproducing kernel spaces: An approach to regularization of linear operator equations. SIAM J. Math. Anal. 1974, 5, 974–987. [Google Scholar] [CrossRef]
- Wahba, G. Practical approximate solutions to linear operator equations when the data are noisy. SIAM J. Numer. Anal. 1977, 14, 651–667. [Google Scholar] [CrossRef]
- Wen, J.; Wei, T. Regularized solution to the Fredholm integral equation of the first kind with noisy data. J. Appl. Math. Inform. 2011, 29, 23–37. [Google Scholar]
- Tikhonov, A.N.; Goncharsky, A.V.; Stepanov, V.V.; Yagola, A.G. Numerical Methods for the Solution of Ill-Posed Problems; Springer Science & Business Media: Dordrecht, The Netherlands, 1995. [Google Scholar]
- Wazwaz, A.M. The regularization method for Fredholm integral equations of the first kind. Comput. Math. Appl. 2011, 61, 2981–2986. [Google Scholar] [CrossRef]
- Neggal, B.; Boussetila, N.; Rebbani, F. Projected Tikhonov regularization method for Fredholm integral equations of the first kind. J. Inequal. Appl. 2016, 2016, 195. [Google Scholar] [CrossRef]
- Tanana, V.P.; Vishnyakov, E.Y.; Sidikova, A.I. An approximate solution of a Fredholm integral equation of the first kind by the residual method. Numer. Anal. Appl. 2016, 9, 74–81. [Google Scholar] [CrossRef]
- Masouri, Z.; Hatamzadeh, S. A regularization-direct method to numerically solve first kind Fredholm integral equation. Kyungpook Math. J. 2020, 60, 869–881. [Google Scholar]
- Lee, J.W.; Prenter, P.M. An analysis of the numerical solution of Fredholm integral equations of the first kind. Numer. Math. 1978, 30, 1–23. [Google Scholar] [CrossRef]
- Du, N. Finite-dimensional approximation settings for infinite-dimensional Moore–Penrose inverses. SIAM J. Numer. Anal. 2008, 46, 1454–1482. [Google Scholar] [CrossRef]
- Mohammadi, A.; Tari, A. A new approach to numerical solution of the time-fractional KdV-Burgers equations using least squares support vector regression. J. Math. Model. 2024, 12, 583–602. [Google Scholar]
- Dehestani, H.; Ordokhani, Y.; Razzaghi, M. Ritz-least squares support vector regression technique for the system of fractional Fredholm-Volterra integro-differential equations. J. Appl. Math. Comput. 2025, 71, 3477–3508. [Google Scholar] [CrossRef]
- Yousefi, S.; Banifatemi, A. Numerical solution of Fredholm integral equations by using CAS wavelets. Appl. Math. Comput. 2006, 183, 458–463. [Google Scholar] [CrossRef]
- Maleknejad, K.; Lotfi, T.; Mahdiani, K. Numerical solution of first kind Fredholm integral equations with wavelets-Galerkin method (WGM) and wavelets precondition. Appl. Math. Comput. 2007, 186, 794–800. [Google Scholar] [CrossRef]
- Fattahzadeh, F. Approximate solution of two-dimensional Fredholm integral equation of the first kind using wavelet base method. Int. J. Appl. Comput. Math. 2019, 5, 138. [Google Scholar] [CrossRef]
- Hatamzadeh, V.S.; Masouri, Z. Numerical method for analysis of one-and two-dimensional electromagnetic scattering based on using linear Fredholm integral equation models. Math. Comput. Model. 2011, 54, 2199–2210. [Google Scholar] [CrossRef]
- Maleknejad, K.; Saeedipoor, E. An efficient method based on hybrid functions for Fredholm integral equation of the first kind with convergence analysis. Appl. Math. Comput. 2017, 304, 93–102. [Google Scholar] [CrossRef]
- Didgar, M.; Vahidi, A. Application of taylor expansion for fredholm integral equations of the first kind. Punjab. Univ. J. Math. 2020, 51, 1–14. [Google Scholar]
- Eggermont, P.B. Maximum entropy regularization for Fredholm integral equations of the first kind. SIAM J. Math. Anal. 1993, 24, 1557–1576. [Google Scholar] [CrossRef]
- Yuan, D.; Zhang, X. An overview of numerical methods for the first kind Fredholm integral equation. SN Appl. Sci. 2019, 1, 1178. [Google Scholar] [CrossRef]
- Yan, L.; Duan, X.; Liu, B.; Xu, J. Gaussian processes and polynomial chaos expansion for regression problem: Linkage via the RKHS and comparison via the KL divergence. Entropy 2018, 20, 191. [Google Scholar] [CrossRef]
- Williams, C.K.; Rasmussen, C.E. Gaussian Processes for Machine Learning; MIT Press: Cambridge, MA, USA, 2006. [Google Scholar]
- Kanagawa, M.; Hennig, P.; Sejdinovic, D.; Sriperumbudur, B.K. Gaussian processes and kernel methods: A review on connections and equivalences. arXiv 2018, arXiv:1807.02582. [Google Scholar] [CrossRef]
- Qian, T. Reproducing kernel sparse representations in relation to operator equations. Complex. Anal. Oper. Theory 2020, 14, 36. [Google Scholar] [CrossRef]
- Qiu, R.; Yan, L.; Duan, X. Solving Fredholm integral equation of the first kind using Gaussian process regression. Appl. Math. Comput. 2022, 425, 127032. [Google Scholar] [CrossRef]
- Qiu, R.; Duan, X.; Huangpeng, Q.; Yan, L. The best approximate solution of Fredholm integral equations of the first kind via Gaussian process regression. Appl. Math. Lett. 2022, 133, 108272. [Google Scholar] [CrossRef]
- Qiu, R.; Xu, M.; Zhu, P. Reproducing kernel Hilbert space method for high-order linear Fredholm integro-differential equations with variable coefficients. Appl. Math. Comput. 2025, 489, 129161. [Google Scholar] [CrossRef]
- Lu, F.; Ou, M.J.Y. An Adaptive RKHS Regularization for the Fredholm Integral Equations. Math. Methods. Appl. Sci. 2025, 48, 11124–11140. [Google Scholar] [CrossRef]
- De Alba, P.D.; Fermo, L.; Pes, F.; Rodriguez, G. Regularized minimal-norm solution of an overdetermined system of first kind integral equations. Numer. Algorithms 2023, 92, 471–502. [Google Scholar] [CrossRef]
- Mohammadi, H.; Riche, R.L.; Durrande, N.; Touboul, E.; Bay, X. An analytic comparison of regularization methods for Gaussian Processes. arXiv 2016, arXiv:1602.00853. [Google Scholar]
- Michel, V.; Orzlowski, S. On the null space of a class of Fredholm integral equations of the first kind. J. Inverse. Ill. Posed. Probl. 2016, 24, 687–710. [Google Scholar] [CrossRef]
- Ayupova, N.B. On the uniqueness of solutions to integral equations of the first kind. J. Inverse. Ill. Posed. Probl. 2002, 10, 13–22. [Google Scholar] [CrossRef]
- Hosseinzadeh, H.; Dehghan, M.; Sedaghatjoo, Z. The stability study of numerical solution of Fredholm integral equations of the first kind with emphasis on its application in boundary elements method. Appl. Numer. Math. 2020, 158, 134–151. [Google Scholar] [CrossRef]
- Nashed, M.Z. On moment-discretization and least-squares solutions of linear integral equations of the first kind. J. Math. Anal. Appl. 1976, 53, 359–366. [Google Scholar] [CrossRef]
- Lukas, M.A. Convergence rates for regularized solutions. Math. Comput. 1988, 51, 107–131. [Google Scholar] [CrossRef]
- Nashed, M.Z.; Wahba, G. Convergence rates of approximate least squares solutions of linear integral and operator equations of the first kind. Math. Comput. 1974, 28, 69–80. [Google Scholar] [CrossRef]
- Aronszajn, N. Theory of reproducing kernels. Trans. Am. Math. Soc. 1950, 68, 337–404. [Google Scholar] [CrossRef]
- Ye, Q. Optimal designs of positive definite kernels for scattered data approximation. Appl. Comput. Harmon. Anal. 2016, 41, 214–236. [Google Scholar] [CrossRef]
- Ye, Q. Kernel-based Approximation Methods for Generalized Interpolations: A Deterministic or Stochastic Problem? arXiv 2017, arXiv:1710.05192. [Google Scholar] [CrossRef]
- Wahba, G. On the optimal choice of nodes in the collocation-projection method for solving linear operator equations. J. Approx. Theory 1976, 16, 175–186. [Google Scholar] [CrossRef]
- Liu, J. Optimal experimental designs for linear inverse problems. Inverse Probl. Eng. 2001, 9, 287–314. [Google Scholar] [CrossRef]
- Bardow, A. Optimal experimental design of ill-posed problems: The METER approach. Comput. Chem. Eng. 2008, 32, 115–124. [Google Scholar] [CrossRef]
- Santner, T.J.; Williams, B.J.; Notz, W.I.; Williams, B.J. The Design and Analysis of Computer Experiments; Springer Science & Business Media: New York, NY, USA, 2003. [Google Scholar]
- Sun, W.; Wei, Y. Triple reverse-order law for weighted generalized inverses. Appl. Math. Comput. 2002, 125, 221–229. [Google Scholar] [CrossRef]
- Wang, G.; Wei, Y.; Qiao, S. Generalized Inverses: Theory and Computations; Springer Science & Business Media: Singapore, 2018. [Google Scholar]
- Chen, Z.; Xu, Y.; Yang, H. Fast collocation methods for solving ill-posed integral equations of the first kind. Inverse Probl. 2008, 24, 065007. [Google Scholar] [CrossRef]
- Phillips, D.L. A technique for the numerical solution of certain integral equations of the first kind. J. Assoc. Comput. Mach. 1962, 9, 84–97. [Google Scholar] [CrossRef]
- Ramlau, R.; Reichel, L. Error estimates for Arnoldi–Tikhonov regularization for ill-posed operator equations. Inverse Probl. 2019, 35, 055002. [Google Scholar] [CrossRef]
- Reichel, L.; Sadok, H.; Zhang, W.H. Simple stopping criteria for the LSQR method applied to discrete ill-posed problems. Numer. Algorithms 2020, 84, 1381–1395. [Google Scholar] [CrossRef]