1. Introduction
The Karhunen–Loève (KL) expansion [1,2] is an extension of the Fourier series from deterministic functions to random processes. Like a Fourier series, it decomposes the process into a series of orthogonal functions, and the coefficients are found by minimizing the mean square error of the final representation. Whereas in Fourier analysis the coefficients are deterministic and the orthogonal functions are trigonometric, in the KL expansion the coefficients are random and the orthogonal functions are obtained by solving a homogeneous Fredholm integral equation of the second kind whose kernel is the covariance function of the process [3].
The KL expansion is a powerful mathematical tool for representing random processes compactly and efficiently. It has several important properties: (i) it provides the optimal representation of the random process in terms of mean square error; (ii) the resulting random coefficients are uncorrelated, which simplifies the analysis and modeling of the random process; and (iii) the expansion converges in the mean square sense, so in practical applications it is important to determine the number of terms needed to achieve a given level of accuracy. The KL expansion has applications in various fields, including signal processing and denoising [4], image compression and feature extraction [5], and modeling of complex systems [6].
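Properties (i) and (ii) can be checked numerically with a discrete KL transform, i.e., the eigenbasis of a covariance matrix. The sketch below uses a first-order Markov (AR(1)) process with illustrative values of `rho`, `n`, and `n_samples` (none taken from this paper) and verifies that the transformed coefficients are empirically uncorrelated:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generate realizations of an AR(1) process as a stand-in for a generic
# second-order stationary process (rho, n, n_samples are illustrative).
n, rho, n_samples = 32, 0.9, 20000
C = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))  # covariance
L = np.linalg.cholesky(C)
X = L @ rng.standard_normal((n, n_samples))          # zero-mean samples

# Discrete KL transform: eigenvectors of the covariance matrix.
eigvals, Phi = np.linalg.eigh(C)
Y = Phi.T @ X                                        # KL coefficients

# Property (ii): the KL coefficients are (empirically) uncorrelated; the
# off-diagonal part of their sample covariance is small vs. the diagonal.
S = Y @ Y.T / n_samples
off = S - np.diag(np.diag(S))
print(np.max(np.abs(off)) / np.max(np.diag(S)))      # small ratio
```

Truncating `Y` to the eigenvectors with the largest eigenvalues then gives the minimum-MSE fixed-rank representation, which is property (i).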
Unfortunately, there is no simple method for computing the eigenfunctions, and no single KL expansion exists for all stochastic processes. However, the KL expansion (with an infinite coordinate space) of a wide-sense stationary process can be approximated using orthogonal transforms with a finite coordinate space, such as discrete sinusoidal transforms [7,8]. The use of this type of transform is justified by the fact that it belongs to the family of transforms asymptotically equivalent to the KL transform of a first-order Markov process [9].
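This asymptotic equivalence is easy to observe numerically: the eigenvectors of a first-order Markov (AR(1)) covariance matrix line up closely with the DCT-II basis when the correlation coefficient is near one. A small sketch (the values of `n` and `rho` are illustrative assumptions):

```python
import numpy as np

# Compare the eigenvectors of an AR(1) covariance matrix with the DCT-II
# basis; they become close as rho -> 1, illustrating the asymptotic
# equivalence of the DCT and the KL transform of a first-order Markov process.
n, rho = 16, 0.95
C = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
_, Phi = np.linalg.eigh(C)                 # KLT basis (columns)

k = np.arange(n)
D = np.cos(np.pi * np.outer(k, 2 * k + 1) / (2 * n))   # DCT-II rows
D[0] *= 1 / np.sqrt(2)
D *= np.sqrt(2 / n)                        # orthonormal scaling

# For each DCT row, find the best-matching KLT eigenvector (up to sign);
# a value near 1 means the two bases nearly coincide.
match = np.max(np.abs(D @ Phi), axis=1)
print(match.min())
```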
Another way to address this problem is to use parametric models for which the integral equation can be solved analytically for the orthogonal eigenfunctions. Such solutions have been found for exponential [10,11] and exponential-cosine [12] covariance function models. In this paper, we propose an analytical solution of the integral equation for a parametric model of an oscillating covariance function with a small number of parameters. This model approximates the empirical covariance function of real images much more accurately than the exponential and exponential-cosine models. Consequently, the KL expansion is also more accurate, and the resulting explicit eigenfunctions and eigenvalues depend on a small number of model parameters.
The contributions of this work are as follows:
An analytical solution of the integral KL equation for a parametric model of oscillating covariance function of a stationary process is obtained.
Based on the solution, explicit eigenfunctions and eigenvalues are derived as functions of a small number of model parameters.
The performance of the proposed method is illustrated by computer simulation on a real image.
The rest of this paper is organized as follows.
Section 2 recalls preliminaries on the KL expansion.
Section 3 presents an analytical solution of the integral equation.
Section 4 illustrates the eigenvalues and eigenfunctions for a real image using computer simulation.
Section 5 summarizes our conclusions.
2. Preliminaries on KL Expansion
Let m(x) = E{n(x, v)} denote the ensemble mean and let K(x₁, x₂) = E{[n(x₁, v) − m(x₁)][n(x₂, v) − m(x₂)]}. Here, K(x₁, x₂) is the covariance function of the process n(x, v). If the covariance function is continuous in the square [−T, T] × [−T, T], then the process is second-order continuous in [−T, T], and if, in addition, K(x₁, x₂) = K(x₁ − x₂), then it is second-order stationary. From Karhunen’s theorem [1,3], any second-order random function n(x, v) that is second-order continuous in [−T, T] can be expanded as follows:

n(x, v) = m(x) + Σ_{k=1}^{∞} √λ_k c_k(v) φ_k(x),   (1)

with convergence in the mean for x in [−T, T]. The eigenvalues λ_k and eigenfunctions φ_k(x) are determined from the KL integral equation

∫_{−T}^{T} K(x − s) φ_k(s) ds = λ_k φ_k(x),   (2)

where K(x) is the covariance function of a continuous stationary second-order process with the power spectral density G(ω), G(ω) = ∫ K(x) e^{−jωx} dx. Here, the φ_k(x) are orthonormal over [−T, T] and the c_k(v) are normalized uncorrelated random variables; that is,

∫_{−T}^{T} φ_k(x) φ_l(x) dx = δ_kl,  E{c_k(v) c_l(v)} = δ_kl,   (3)

where δ_kl is the Kronecker delta.
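Equations (2) and (3) can be verified numerically by the Nyström method: discretize the integral over [−T, T] with a quadrature rule and solve the resulting matrix eigenproblem. The sketch below uses the classical exponential covariance as a stand-in kernel; the values of `T`, `P`, `a`, and `m` are illustrative assumptions, not taken from this paper:

```python
import numpy as np

# Nystrom (midpoint-rule) discretization of the KL integral equation
#   integral_{-T}^{T} K(x - s) phi(s) ds = lam * phi(x)
# for the exponential covariance K(x) = P * exp(-a * |x|).
T, P, a, m = 1.0, 1.0, 2.0, 400
h = 2 * T / m
x = -T + h * (np.arange(m) + 0.5)                    # quadrature midpoints
K = P * np.exp(-a * np.abs(np.subtract.outer(x, x)))

# Discrete eigenproblem: (K * h) phi = lam * phi.
lam, Phi = np.linalg.eigh(K * h)
lam, Phi = lam[::-1], Phi[:, ::-1]                   # descending eigenvalues
Phi /= np.sqrt(h)                                    # L2([-T, T]) normalization

# The eigenvalues of a valid covariance kernel are positive, and the
# eigenfunctions are orthonormal over [-T, T], as in Equation (3).
print(lam[:4])
G = (Phi.T @ Phi) * h                                # discrete Gram matrix
print(np.max(np.abs(G - np.eye(m))))
```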
The covariance function of a second-order stationary process for our parametric oscillating model has the form given in Equation (4), where P is the variance of the process and α and β are the model parameters. The power spectral density can be expressed as in Equation (5) (see Appendix A), and it is always a real, non-negative, and even function of the frequency ω.
It can be shown that the eigenfunctions generated by such kernels using Equation (2) are complete in L²[−T, T], and the eigenvalues are positive.
3. Solution to KL Integral Equation
To solve the KL integral equation in (2) with the parametric oscillating covariance function, the integral equation is first converted into a differential equation. To do this, we use the power spectral density given in Equation (5) and the Fourier transform property that differentiating a function in the signal domain corresponds to multiplying its spectrum by jω. After finding the solution of the differential equation, we substitute it back into the integral equation to satisfy the boundary conditions.
Let us substitute Equation (4) into Equation (2) and eliminate the magnitude sign. Applying the Fourier transform property (differentiation of a function in the signal domain corresponds to multiplication of the function spectrum by jω) to the power spectral density given in Equation (5), we obtain a fourth-order linear differential equation with constant coefficients (see Appendix B), valid for x in (−T, T). The solution of this differential equation must also satisfy the boundary conditions imposed by the integral equation at x = ±T. The roots of the characteristic equation are given by Equation (10).
A general solution of the integral equation can be written as in Equation (11), where ±jb and ±μ denote the imaginary and real roots of the characteristic equation, respectively. Note that the real roots are related to the imaginary roots through Equation (10). Moreover, the normalized eigenvalues can be calculated through the imaginary and real roots, respectively.
Let us introduce the notations in Equations (12) and (13). Substituting Equation (11) for the imaginary roots into Equation (12) and performing the integration, we obtain Equation (14). Repeating the procedure for the real roots, that is, substituting Equation (11) for the real roots into Equation (13) and performing the integration, we obtain Equation (15).
Note that the equality of Equations (14) and (15) holds if and only if one of two conditions is satisfied. Combining Equations (14) and (15) for the first condition, we obtain Equation (16). The system of Equations (17) ensures that the first condition is met; from the second equation of this system, one of the solution coefficients can be expressed as a function of the imaginary root b.
Now, using the orthonormality property of the eigenfunctions (see Equation (3)), the normalization constant can be expressed in terms of b and μ. Solving the system consisting of Equation (10) and the first equation in (17), one can numerically calculate the set of imaginary roots satisfying the first condition. Finally, the eigenfunctions for this case are given by Equation (20).
Similarly, for the second condition, we can write the analogue of Equation (16). Fulfillment of the second condition is ensured by the system of Equations (22); from the second equation of this system, the corresponding coefficient can again be obtained as a function of b. Using the orthonormality property of the eigenfunctions (see Equation (3)), the normalization constant for this case follows in the same way. Solving the system consisting of Equation (10) and the first equation in (22), one can numerically calculate the set of imaginary roots satisfying the second condition. Finally, the eigenfunctions for the second condition are given by Equation (25).
Let us sort the imaginary roots in ascending order, b₁ < b₂ < ⋯, k = 1, 2, …. The odd-numbered solutions correspond to the first condition, while the even-numbered solutions correspond to the second condition. The corresponding real roots μ_k and the eigenvalues λ_k can be calculated as functions of b_k using Equations (10) and (11), respectively.
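The same interleaving of solutions from two transcendental conditions occurs in the classical exponential-kernel case [10,11], where the conditions and eigenvalues are known in closed form. The sketch below finds the first few roots of both conditions by bisection and confirms that, after sorting, the two families alternate; the exponential kernel here is a stand-in for the oscillating model, and `a` and `T` are illustrative values:

```python
import math

# Transcendental conditions for the exponential kernel K(x) = exp(-a*|x|)
# on [-T, T] (a known analytical KL case):
#   even (cosine) eigenfunctions:  b * tan(b*T) = a
#   odd  (sine)   eigenfunctions:  b + a * tan(b*T) = 0
# with eigenvalues lam = 2a / (a^2 + b^2).
a, T = 2.0, 1.0

def bisect(f, lo, hi, iters=200):
    # Simple bisection on a sign-changing interval.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

eps = 1e-9
even = [bisect(lambda b: b * math.tan(b * T) - a,
               k * math.pi + eps, k * math.pi + math.pi / 2 - eps)
        for k in range(4)]
odd = [bisect(lambda b: b + a * math.tan(b * T),
              k * math.pi + math.pi / 2 + eps, (k + 1) * math.pi - eps)
       for k in range(4)]

# Sorted in ascending order, the roots alternate: even, odd, even, odd, ...
roots = sorted(even + odd)
lams = [2 * a / (a * a + b * b) for b in roots]      # decreasing eigenvalues
print(roots)
print(lams)
```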
4. Results
In this section, experimental results illustrating the eigenvalues and eigenfunctions for a real aerial photograph are presented.
Figure 1 shows a test image of 256 × 256 pixels; each pixel has 256 quantization levels. The mean and standard deviation of the image are 132 and 38, respectively. If a two-dimensional covariance function is separable into functions of the row and column coordinates, then the two-dimensional eigenfunctions can be expressed as products of one-dimensional eigenfunctions obtained by solving the integral equation with one-dimensional covariance functions computed over the rows and columns. We illustrate our results with one-dimensional eigenfunctions using the averaged covariance function constructed over the image rows. Assuming that the image rows are realizations of a stationary process, we first calculate the empirical covariance function of each row and then average these functions to obtain a single experimental covariance function. The argument x is scaled to the interval [−T, T].
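The row-averaging procedure described above can be sketched as follows; a synthetic random image with the same mean and standard deviation stands in for the aerial photograph, so the resulting covariance is that of white noise rather than of a real image:

```python
import numpy as np

# Averaged one-dimensional covariance over image rows, treating each row as a
# realization of a stationary process (synthetic stand-in image; the paper
# uses a 256 x 256 aerial photograph with mean 132 and std 38).
rng = np.random.default_rng(1)
img = rng.normal(132.0, 38.0, size=(256, 256))

rows = img - img.mean(axis=1, keepdims=True)         # remove row means
n = rows.shape[1]
max_lag = 64
cov = np.array([np.mean(rows[:, :n - m] * rows[:, m:])
                for m in range(max_lag)])            # average over all rows
print(cov[0])                                        # close to 38^2 here
```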
Figure 2 shows the experimental covariance curve computed over the image rows and its three approximations, optimized in terms of mean square error (MSE), using the exponential, exponential-cosine, and parametric oscillating (Equation (4)) covariance functions.
Since the MSE has a minimum as a function of the model parameters, the least squares method was used for the numerical optimization. The method determines the best-fit function by selecting the model parameters that minimize the sum of squared differences between the experimental covariance values and the computed model values. Optimal approximation parameters were obtained in this way for the exponential, exponential-cosine, and parametric oscillating models.
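As a sketch of this least squares fitting, the snippet below fits an exponential covariance model P·exp(−α|x|) to noisy covariance samples by a grid search over α, with a closed-form optimal P for each candidate α; the data and parameter values are synthetic illustrations, not the paper's measurements:

```python
import numpy as np

# Least squares fit of P * exp(-alpha * x), x >= 0, to noisy covariance data.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 50)
true_P, true_alpha = 1.0, 3.0
data = true_P * np.exp(-true_alpha * x) + 0.01 * rng.standard_normal(50)

best = None
for alpha in np.linspace(0.1, 10.0, 1000):           # grid search over alpha
    basis = np.exp(-alpha * x)
    P = float(basis @ data) / float(basis @ basis)   # optimal P for this alpha
    sse = float(np.sum((data - P * basis) ** 2))     # sum of squared errors
    if best is None or sse < best[0]:
        best = (sse, P, alpha)

sse, P_hat, alpha_hat = best
print(P_hat, alpha_hat, sse)
```

The same scheme extends to the two- and three-parameter models by searching over (α, β) jointly.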
The approximation by the parametric oscillating function is substantially better in terms of MSE than the approximations by the other two models.
Figure 3 shows the normalized eigenvalue as a function of the imaginary root b (see Equation (11)). A solution to the integral equation in (2) exists only when the normalized eigenvalue does not exceed its limit value. Figure 4 shows the real root μ as a function of the imaginary root b, together with the ranges of both roots.
The values of b and μ that satisfy the systems of Equations (17) and (22) can be determined graphically, as shown in Figure 5. In the graph, the horizontal curve corresponds to Equation (10), while the vertical curves are the graphs of the first formulas of Equations (17) and (22). The odd set of intersections corresponds to the first condition, and the even set corresponds to the second condition. The corresponding numerical solution of the Karhunen–Loève integral equation for the first ten eigenfunctions and eigenvalues is given in Table 1.
The corresponding eigenfunctions, given in Equations (20) and (25), are shown in Figure 6. It is interesting to note that as b_k increases, μ_k tends to a constant (see Figure 4 and the fourth column of Table 1). This means that as the index k increases, the second terms in Equations (20) and (25) also tend to constants. Therefore, the eigenfunctions become approximately a set of periodic sines and cosines with frequencies approximately equal to b_k. Consequently, the procedure for calculating b_k and the remaining parameters of the model can be simplified.
Table 2 shows the true values b_k, the approximate values, and the absolute errors in Equations (17) and (22) when the approximate values are used instead of the true ones. Since the approximation does not hold for the first root of this model, the index k in the table starts with k = 2.