Article

Karhunen–Loève Expansion Using a Parametric Model of Oscillating Covariance Function

by Vitaly Kober 1,2,3,*, Artyom Makovetskii 2 and Sergei Voronin 2
1 Center for Scientific Research and Higher Education of Ensenada, Ensenada 22860, Mexico
2 Department of Mathematics, Chelyabinsk State University, Chelyabinsk 454001, Russia
3 Institute for Information Transmission Problems, Russian Academy of Sciences, Moscow 127051, Russia
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(16), 2569; https://doi.org/10.3390/math13162569
Submission received: 28 June 2025 / Revised: 10 August 2025 / Accepted: 10 August 2025 / Published: 11 August 2025

Abstract

The Karhunen–Loève (KL) expansion decomposes a stochastic process into a set of orthogonal functions with random coefficients. The basic idea of the decomposition is to solve the Fredholm integral equation associated with the covariance kernel of the process. The KL expansion is a powerful mathematical tool used to represent stochastic processes in a compact and efficient way. It has wide applications in various fields including signal processing, image compression, and complex systems modeling. The KL expansion has high computational complexity, especially for large data sets, since there is no single unique transformation for all stochastic processes and there are no fast algorithms for computing eigenvalues and eigenfunctions. One way to solve this problem is to use parametric models for the covariance function. In this paper, an explicit analytical solution of the KL expansion for a parametric model of oscillating covariance function with a small number of parameters is proposed. This model approximates the covariance functions of real images very well. Computer simulation results using a real image are presented and discussed.

1. Introduction

The Karhunen–Loève (KL) expansion [1,2] extends the Fourier series from deterministic functions to random processes. In the KL expansion, the process is decomposed into a series of orthogonal functions, similar to a Fourier series, and the coefficients are found by minimizing the mean square error of the finite representation. While in Fourier analysis the coefficients are deterministic and the orthogonal functions are trigonometric, in the KL expansion the coefficients are random and the orthogonal functions are obtained from the solution of a Fredholm integral equation of the first kind whose kernel is the covariance function of the process [3].
The KL expansion is a powerful mathematical tool used to represent random processes in a compact and efficient way. It has several important properties: (i) the KL expansion provides an optimal representation of the random process in terms of mean square error; (ii) the resulting random coefficients are uncorrelated, which simplifies the analysis and modeling of the random process; and (iii) the KL expansion converges in the mean square sense, so in practical applications, it is important to determine the number of terms needed to achieve a certain level of accuracy. The KL expansion has applications in various fields, including signal processing and denoising [4], image compression and feature extraction [5], and modeling of complex systems [6].
Unfortunately, there are no simple methods for computing the eigenfunctions, and there is no single KL expansion valid for all stochastic processes. However, the KL expansion (with an infinite coordinate space) of a wide-sense stationary process can be approximated using orthogonal transforms with a finite coordinate space, such as discrete sinusoidal transforms [7,8]. The use of this type of transform is justified by the fact that it belongs to a family of transforms asymptotically equivalent to the KL transform of a first-order Markov process [9].
Another way to address this problem is to use parametric models to compute the orthogonal eigenfunctions by solving the integral equation analytically. Such solutions have been found for the exponential [10,11] and exponential-cosine [12] covariance function models. In this paper, we propose an analytical solution of the integral equation for a parametric model of an oscillating covariance function with a small number of parameters. This model approximates the empirical covariance function of real images much more accurately than the exponential and exponential-cosine models. Consequently, the KL expansion is also more accurate, and the obtained explicit eigenfunctions and eigenvalues depend on a small number of model parameters.
The contributions of this work are as follows:
  • An analytical solution of the KL integral equation for a parametric model of an oscillating covariance function of a stationary process is obtained.
  • Based on the solution, explicit eigenfunctions and eigenvalues are derived as functions of a small number of model parameters.
  • The performance of the proposed method is illustrated by computer simulation with a real image.
The rest of this paper is organized as follows. Section 2 recalls preliminaries on the KL expansion. Section 3 presents an analytical solution of the integral equation. Section 4 illustrates the eigenvalues and eigenfunctions for a real image using computer simulation. Section 5 summarizes our conclusions.

2. Preliminaries on KL Expansion

Let $E[\cdot]$ denote the ensemble mean, and let $m(x) = E[n(x,v)]$ and $K(x,u) = E\{[n(x,v) - m(x)][n(u,v) - m(u)]\}$. Here, $K(x,u)$ is the covariance function of the process $n(x,v)$. If the covariance function is continuous in the square $-T \le x, u \le T$, then the process is second-order continuous in $-T \le x \le T$, and if, in addition, $K(x,u) = K(x-u)$, then it is second-order stationary. From Karhunen’s theorem [1,3], any second-order random function $n(x,v)$ that is second-order continuous in $-T \le x \le T$ can be expanded as follows:
$$n(x,v) = m(x) + \sum_{k=1}^{\infty} \sqrt{\lambda_k}\, \xi_k(v)\, \varphi_k(x),$$
with convergence in the mean for $x \in [-T, T]$. The quantities $\lambda_k$ and $\varphi_k(x)$ are determined from the KL integral equation
$$\lambda\, \varphi(x) = \int_{-T}^{T} K(x-u)\, \varphi(u)\, du,$$
where $K(x)$ is the covariance function of a continuous stationary second-order process with power spectral density $S(w)$, $-T \le x \le T$. Here, the $\varphi_k(x)$ are orthonormal over $-T \le x \le T$ and the $\xi_k(v)$ are normalized uncorrelated random variables; that is,
$$E[\xi_k(v)\, \xi_l(v)] = \delta_{kl}; \qquad \int_{-T}^{T} \varphi_k(x)\, \varphi_l(x)\, dx = \delta_{kl},$$
where $k, l \in \mathbb{N}$ and $\delta_{kl}$ is the Kronecker delta.
The covariance function of a second-order stationary process for our parametric oscillatory model has the form
$$K(x) = P\, e^{-\alpha |x|}\left(\cos \beta x + \gamma \sin \beta |x|\right),$$
where $P$ is the variance of the process, and $\alpha > 0$, $\beta \ge 0$, and $\gamma$ are the model parameters. The power spectral density can be expressed as (see Appendix A)
$$S(w) = \frac{2P\left[(\alpha^2+\beta^2)(\alpha+\gamma\beta) + w^2(\alpha-\gamma\beta)\right]}{w^4 + 2w^2(\alpha^2-\beta^2) + (\alpha^2+\beta^2)^2},$$
and it is always a real, non-negative, and even function of the frequency $w$.
It can be shown that the eigenfunctions generated by such kernels through Equation (2) are complete in $-T \le x \le T$, and the corresponding eigenvalues are positive.
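To make the model concrete, the short sketch below evaluates the covariance function of Equation (4) and the power spectral density of Equation (5) numerically. It is an illustrative sketch only (not part of the paper's software); the function names and the sample parameter values are our own choices.

```python
import numpy as np

def oscillating_cov(x, P, alpha, beta, gamma):
    # Parametric oscillating covariance, Eq. (4): K(x) = P*exp(-alpha*|x|)*(cos(beta*x) + gamma*sin(beta*|x|))
    ax = np.abs(x)
    return P * np.exp(-alpha * ax) * (np.cos(beta * x) + gamma * np.sin(beta * ax))

def oscillating_psd(w, P, alpha, beta, gamma):
    # Closed-form power spectral density, Eq. (5)
    s = alpha**2 + beta**2
    num = 2.0 * P * (s * (alpha + gamma * beta) + w**2 * (alpha - gamma * beta))
    den = w**4 + 2.0 * w**2 * (alpha**2 - beta**2) + s**2
    return num / den

# Example evaluation with arbitrary (hypothetical) parameter values
x = np.linspace(-1.0, 1.0, 9)
print(oscillating_cov(x, P=1.0, alpha=0.5, beta=0.4, gamma=0.3))
print(oscillating_psd(np.array([0.0, 1.0, 2.0]), P=1.0, alpha=0.5, beta=0.4, gamma=0.3))
```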

3. Solution to KL Integral Equation

To solve the KL integral equation (Equation (2)) with the parametric oscillating covariance function, the integral equation is first converted into a differential equation. To do this, we use the power spectral density given in Equation (5) and the Fourier transform property that differentiating a function in the signal domain corresponds to multiplying its spectrum by $jw$. After solving the differential equation, we substitute the solution back into the integral equation to satisfy the boundary conditions.
Let us substitute Equation (4) into Equation (2) and eliminate the magnitude sign as follows:
$$\lambda\,\varphi(x) = \int_{-T}^{T} P e^{-\alpha|x-u|}\left[\cos\beta(x-u) + \gamma\sin\beta|x-u|\right]\varphi(u)\,du = \int_{-T}^{x} P e^{-\alpha(x-u)}\left[\cos\beta(x-u) + \gamma\sin\beta(x-u)\right]\varphi(u)\,du + \int_{x}^{T} P e^{-\alpha(u-x)}\left[\cos\beta(u-x) + \gamma\sin\beta(u-x)\right]\varphi(u)\,du,$$
where $-T \le x, u \le T$.
Applying the Fourier transform property (differentiation of a function in the signal domain corresponds to multiplication of its spectrum by $jw$) to the power spectral density given in Equation (5), we obtain a fourth-order linear differential equation with constant coefficients (see Appendix B):
$$\varphi^{(4)}(x) - 2\left[(\alpha^2-\beta^2) + (\beta\gamma-\alpha)\frac{P}{\lambda}\right]\varphi^{(2)}(x) + (\alpha^2+\beta^2)\left[(\alpha^2+\beta^2) - 2(\beta\gamma+\alpha)\frac{P}{\lambda}\right]\varphi(x) = 0,$$
for $\lambda \ne 0$. A nontrivial solution of the integral equation exists under the following condition: $0 < \hat{\lambda} < 2(\beta\gamma+\alpha)/(\alpha^2+\beta^2)$, where $\hat{\lambda} = \lambda/P$.
The roots of the characteristic equation are given by the formulas
$$b_{1,2}^2 = \sqrt{D} - \left[(\alpha^2-\beta^2) + (\beta\gamma-\alpha)/\hat{\lambda}\right]; \qquad b_{3,4}^2 = (\alpha^2-\beta^2) + (\beta\gamma-\alpha)/\hat{\lambda} + \sqrt{D},$$
where $D = \left[(\alpha^2-\beta^2) + (\beta\gamma-\alpha)/\hat{\lambda}\right]^2 - (\alpha^2+\beta^2)\left[(\alpha^2+\beta^2) - 2(\beta\gamma+\alpha)/\hat{\lambda}\right] > 0$.
A general solution of the integral equation can be written as follows:
$$\varphi(x) = C_1 e^{i b_1 x} + C_2 e^{i b_2 x} + C_3 e^{b_3 x} + C_4 e^{b_4 x},$$
where $b_2 = -b_1$ and $b_4 = -b_3$, so that $\pm i b_1$ and $\pm b_3$ are the imaginary and real roots of the characteristic equation, respectively. Note that the real solution is related to the imaginary solution as follows:
$$b_3^2 = \frac{(\alpha^2+\beta^2)\left[3\alpha^2\beta\gamma - \beta^3\gamma + \alpha^3 - 3\alpha\beta^2 + b_1^2(\alpha+\beta\gamma)\right]}{(\alpha^2+\beta^2)(\alpha+\beta\gamma) + b_1^2(\alpha-\beta\gamma)}.$$
Moreover, the normalized eigenvalue $\hat{\lambda}$ can be calculated through the imaginary and real roots, respectively, as follows:
$$\hat{\lambda} = \frac{2\left[(\alpha^2+\beta^2)(\alpha+\beta\gamma) + b_1^2(\alpha-\beta\gamma)\right]}{(\alpha^2+\beta^2)^2 + b_1^2\left(2\alpha^2 - 2\beta^2 + b_1^2\right)} = \frac{2\left[(\alpha^2+\beta^2)(\alpha+\beta\gamma) + b_3^2(\beta\gamma-\alpha)\right]}{(\alpha^2+\beta^2)^2 - b_3^2\left(2\alpha^2 - 2\beta^2 - b_3^2\right)}.$$
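As an illustration of Equations (10) and (11), the following sketch computes the real root $b_3$ and the normalized eigenvalue $\hat{\lambda}$ from a given imaginary root $b_1$. This is a minimal numerical transcription of the two formulas; the function names and the test values are ours, not the authors'.

```python
import numpy as np

def b3_from_b1(b1, alpha, beta, gamma):
    # Real root b3 as a function of the imaginary root b1, Eq. (10)
    s = alpha**2 + beta**2
    num = s * (3 * alpha**2 * beta * gamma - beta**3 * gamma + alpha**3
               - 3 * alpha * beta**2 + b1**2 * (alpha + beta * gamma))
    den = s * (alpha + beta * gamma) + b1**2 * (alpha - beta * gamma)
    return np.sqrt(num / den)

def lambda_hat_from_b1(b1, alpha, beta, gamma):
    # Normalized eigenvalue as a function of b1, Eq. (11), first form
    s = alpha**2 + beta**2
    num = 2 * (s * (alpha + beta * gamma) + b1**2 * (alpha - beta * gamma))
    den = s**2 + b1**2 * (2 * alpha**2 - 2 * beta**2 + b1**2)
    return num / den

# Hypothetical parameter and root values, for illustration only
alpha, beta, gamma = 0.5, 0.4, 0.3
for b1 in (0.5, 1.0, 2.0):
    print(b1, b3_from_b1(b1, alpha, beta, gamma), lambda_hat_from_b1(b1, alpha, beta, gamma))
```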
Let us introduce the following notations:
$$KL_1 = \lambda\left(C_1 e^{i b_1 x} + C_2 e^{-i b_1 x}\right) - \int_{-T}^{T} K(x-u)\left(C_1 e^{i b_1 u} + C_2 e^{-i b_1 u}\right) du,$$
and
$$KL_2 = \lambda\left(C_3 e^{b_3 x} + C_4 e^{-b_3 x}\right) - \int_{-T}^{T} K(x-u)\left(C_3 e^{b_3 u} + C_4 e^{-b_3 u}\right) du,$$
with $KL_1 + KL_2 = 0$.
Substituting Equation (11) for the imaginary roots into Equation (12) and performing the integration, we obtain
K L 1 = i b 1 γ b 1 2 + α 2 γ β 2 γ 2 α β C 1 e i T b 1 C 2 e i T b 1 α γ + β b 1 2 + α 2 + β 2 α γ β C 1 e i T b 1 + C 2 e i T b 1 e α T + x sin β T + x + i b 1 γ b 1 2 + α 2 γ β 2 γ 2 α β C 2 e i T b 1 C 1 e i T b 1 α γ + β b 1 2 + α 2 + β 2 α γ β C 2 e i T b 1 + C 1 e i T b 1 e α T x sin β T x + i b 1 b 1 2 + 2 α β γ + α 2 β 2 C 1 e i T b 1 C 2 e i T b 1 + b 1 2 β γ α α 2 + β 2 β γ + α C 1 e i T b 1 + C 2 e i T b 1 e α T + x cos β T + x + i b 1 b 1 2 + 2 α β γ + α 2 β 2 C 2 e i T b 1 C 1 e i T b 1 + b 1 2 β γ α α 2 + β 2 β γ + α C 2 e i T b 1 + C 1 e i T b 1 e α T x cos β T x / b 1 4 + 2 β 2 α 2 b 1 2 α 2 + β 2 2 .
Repeating the procedure for the real roots, that is, substituting Equation (11) for the real roots into Equation (13) and performing the integration, we obtain
K L 2 = α 2 + β 2 + b 3 2 α γ β 2 α γ b 3 2 C 3 e T b 3 + C 4 e T b 3 + α 2 + β 2 + b 3 2 γ b 3 2 α γ β α b 3 C 3 e T b 3 C 4 e T b 3 e α T + x sin β T + x + α 2 + β 2 + b 3 2 α γ β 2 α γ b 3 2 C 4 e T b 3 + C 3 e T b 3 + α 2 + β 2 + b 3 2 γ b 3 2 α γ β α b 3 C 4 e T b 3 C 3 e T b 3 e α T x sin β T x + α 2 + β 2 + b 3 2 β γ + α 2 α b 3 2 C 3 e T b 3 + C 4 e T b 3 + α 2 + β 2 + b 3 2 b 3 2 β γ + α α b 3 C 3 e T b 3 C 4 e T b 3 e α T + x cos β T + x + α 2 + β 2 + b 3 2 β γ + α 2 α b 3 2 C 4 e T b 3 + C 3 e T b 3 + α 2 + β 2 + b 3 2 b 3 2 β γ + α α b 3 C 4 e T b 3 C 3 e T b 3 e α T x cos β T x / α + b 3 2 + β 2 α b 3 2 + β 2 .
Note that the equality $KL_1 + KL_2 = 0$ holds if and only if one of the following conditions is satisfied: (1) $C_1 + C_2 = C_1^+$, $C_3 + C_4 = C_3^+$, or (2) $C_1 - C_2 = C_1^-$, $C_3 - C_4 = C_3^-$. Combining Equations (14) and (15), for the first condition, we obtain
K L 1 + K L 2 = α γ α 2 + β 2 + b 1 2 β α 2 + β 2 b 1 2 cos T b 1 γ α 2 β 2 + b 1 2 2 α β b 1 sin T b 1 e α T 4 C 1 + sin T β cos β x cosh α x cos ( T β ) sin β x sinh α x + α α 2 + β 2 + b 1 2 + β γ α 2 + β 2 b 1 2 cos T b 1 2 α β γ + α 2 β 2 + b 1 2 b 1 sin T b 1 e α T 4 C 1 + cos T β cos β x cosh ( α x ) +   sin T β sin β x sinh α x / α 2 + β + b 1 2 α 2 + β b 1 2 + α γ β α 2 + β 2 + b 3 2 2 α γ b 3 2 cosh T b 3 α 2 + β 2 + b 3 2 γ b 3 2 α b 3 α γ β × sinh T b 3 e α T 4 C 3 + sin T β cos β x cosh ( α x ) cos T β sin β x sinh α x + β γ + α α 2 + β 2 + b 3 2 2 α b 3 2 cosh T b 3 b 3 α 2 + β 2 + b 3 2 γ b 3 2 α b 3 β γ + α × sinh T b 3 e α T 4 C 3 + cos T β cos β x cosh ( α x ) + sin T β sin β x sinh α x / α + b 3 2 + β 2 α b 3 2 + β 2 = 0 .
The following system of equations ensures that the first condition is met:
α γ β α 2 + β 2 + b 3 2 2 α γ b 3 2 cosh T b 3 α 2 + β 2 + b 3 2 γ b 3 2 α b 3 α γ β sinh T b 3 C 3 + α + b 3 2 + β 2 α b 3 2 + β 2 + α γ α 2 + β 2 + b 1 2 β α 2 + β 2 b 1 2 cos T b 1 γ α 2 β 2 + b 1 2 2 α β b 1 sin T b 1 C 1 + α 2 + β + b 1 2 α 2 + β b 1 2 = 0 , β γ + α α 2 + β 2 + b 3 2 2 α b 3 2 cosh T b 3 b 3 α 2 + β 2 + b 3 2 2 α b 3 β γ + α sinh T b 3 C 3 + α + b 3 2 + β 2 α b 3 2 + β 2 + α α 2 + β 2 + b 1 2 + β γ α 2 + β 2 b 1 2 cos T b 1 2 α β γ + α 2 β 2 + b 1 2 b 1 sin T b 1 C 1 + α + b 3 2 + β 2 α b 3 2 + β 2 = 0 .
From the second equation of this system, $C_3^+$ can be expressed as a function of $C_1^+$ as follows:
$$C_3^+ = Q^+ C_1^+,$$
where
Q + = α + b 3 2 + β 2 α b 3 2 + β 2 α 2 + β + b 1 2 α 2 + β b 1 2 × α α 2 + β 2 + b 1 2 + β γ α 2 + β 2 b 1 2 cos T b 1 2 α β γ + α 2 β 2 + b 1 2 b 1 sin T b 1 β γ + α α 2 + β 2 + b 3 2 2 α b 3 2 cosh T b 3 b 3 α 2 + β 2 + b 3 2 2 α b 3 β γ + α sinh T b 3 .
Now, using the orthonormality property of the eigenfunctions (see Equation (3)), the normalizing constant $C_1^+$ can be expressed in terms of $b_1$ and $b_3$ as follows:
$$C_1^+ = \sqrt{\frac{2 b_1 b_3 (b_1^2 + b_3^2)}{(b_1^2+b_3^2)\left[b_1 (Q^+)^2 \sinh(2 b_3 T) + b_3 \sin(2 b_1 T) + 2 b_1 b_3 T\left((Q^+)^2+1\right)\right] + D_P}},$$
where $D_P = 8 b_1 b_3 Q^+\left[b_1 \sin(b_1 T)\cosh(b_3 T) + b_3 \cos(b_1 T)\sinh(b_3 T)\right]$.
Solving the system consisting of Equation (10) and the first equation in (17), one can numerically calculate the set $\{b_1^{+k}, b_3^{+k}\}$, $k \in \mathbb{N}$. Finally, the eigenfunctions for this case are given by
$$\varphi_k(x) = C_1^{+k} \cos\left(b_1^{+k} x\right) + C_3^{+k} \cosh\left(b_3^{+k} x\right),$$
where $-T \le x \le T$ and $k \in \mathbb{N}$.
Similarly, for the second condition, $C_1 - C_2 = C_1^-$ and $C_3 - C_4 = C_3^-$, we can write
K L 1 + K L 2 = α γ + β b 1 2 α 2 + β 2 α γ β sin T b 1 γ α 2 β 2 + b 1 2 2 α β × b 1 cos T b 1 e α T 4 i C 1 sin T β cos β x sinh α x cos ( T β ) sin β x cosh α x α β γ b 1 2 + α 2 + β 2 α + β γ sin T b 1 + 2 α β γ + α 2 β 2 + b 1 2 b 1 cos T b 1 × e α T 4 i C 1 cos T β cos β x sinh ( α x ) + sin T β sin β x cosh α x / b 1 4 + 2 b 1 2 β 2 α 2 α 2 + β 2 2 + α γ β α 2 + β 2 + b 3 2 2 α γ b 3 2 sinh T b 3 α 2 + β 2 + b 3 2 γ b 3 2 α b 3 α γ β cosh T b 3 e α T 4 C 3 × sin T β cos β x sinh ( α x ) cos T β sin β x cosh α x + β γ + α α 2 + β 2 + b 3 2 2 α b 3 2 sinh T b 3 b 3 α 2 + β 2 + b 3 2 2 α b 3 β γ + α × cosh T b 3 e α T 4 C 3 cos T β cos β x sinh ( α x ) + sin T β sin β x cosh α x / α + b 3 2 + β 2 α b 3 2 + β 2 .
The fulfillment of the second condition is ensured by the following system of equations:
α γ β α 2 + β 2 + b 3 2 2 α γ b 3 2 sinh T b 3 α 2 + β 2 + b 3 2 γ b 3 2 α b 3 α γ β cosh T b 3 C 3 α + b 3 2 + β 2 α b 3 2 + β 2 + α γ + β b 1 2 α 2 + β 2 α γ β sin T b 1 γ α 2 β 2 + b 1 2 2 α β b 1 cos T b 1 C 1 b 1 4 + 2 b 1 2 β 2 α 2 α 2 + β 2 2 = 0 , β γ + α α 2 + β 2 + b 3 2 2 α b 3 2 sinh T b 3 b 3 α 2 + β 2 + b 3 2 2 α b 3 β γ + α cosh T b 3 C 3 α + b 3 2 + β 2 α b 3 2 + β 2 α β γ b 1 2 + α 2 + β 2 α + β γ sin T b 1 + 2 α β γ + α 2 β 2 + b 1 2 b 1 cos T b 1 C 1 b 1 4 + 2 b 1 2 β 2 α 2 α 2 + β 2 2 = 0 .
From the second equation of this system, we can express $C_3^-$ through $C_1^-$ as
$$C_3^- = Q^- C_1^-,$$
where
Q = α + b 3 2 + β 2 α b 3 2 + β 2 b 1 4 + 2 b 1 2 β 2 α 2 α 2 + β 2 2 × α β γ b 1 2 + α 2 + β 2 α + β γ sin T b 1 + 2 α β γ + α 2 β 2 + b 1 2 b 1 cos T b 1 β γ + α α 2 + β 2 + b 3 2 2 α b 3 2 sinh T b 3 b 3 α 2 + β 2 + b 3 2 2 α b 3 β γ + α cosh T b 3 .
Using the orthonormality property of the eigenfunctions (see Equation (3)), the normalizing constant has the following form:
$$C_1^- = \sqrt{\frac{2 b_1 b_3 (b_1^2+b_3^2)}{(Q^-)^2\, b_1 (b_1^2+b_3^2)\left[\sinh(2 b_3 T) - 2 T b_3\right] - D_N}},$$
where $D_N = 2 b_3\left\{4 b_1 Q^-\left[b_1 \cos(b_1 T)\sinh(b_3 T) - b_3 \sin(b_1 T)\cosh(b_3 T)\right] + \left[\sin(2 b_1 T)/2 - b_1 T\right](b_1^2+b_3^2)\right\}$. Solving the system consisting of Equation (10) and the first equation in (22), one can numerically calculate the set $\{b_1^{-k}, b_3^{-k}\}$, $k \in \mathbb{N}$. Finally, the eigenfunctions for the second condition are given by
$$\varphi_k(x) = C_1^{-k} \sin\left(b_1^{-k} x\right) + C_3^{-k} \sinh\left(b_3^{-k} x\right),$$
where $-T \le x \le T$ and $k \in \mathbb{N}$.
Let us sort the solutions $b_1^{+k}$, $b_1^{-k}$ in ascending order of their values, $b_1^1 < b_1^2 < b_1^3 < \cdots$, $k \in \mathbb{N}$. The odd-numbered solutions correspond to the first condition, while the even-numbered solutions correspond to the second condition. The corresponding real solutions $b_3^{+k}$, $b_3^{-k}$ and the eigenvalues $\hat{\lambda}_k$ can then be calculated as functions of $b_1^k$ using Equations (10) and (11), respectively.
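For illustration, the sketch below assembles the eigenfunctions of Equations (20) and (25) from a given set of roots and normalization constants. The helper name is ours; the sample values in the usage example are taken from the first row of Table 1 (odd-numbered k correspond, per the text, to the first condition, i.e., the cosine/cosh form).

```python
import numpy as np

def kl_eigenfunction(x, b1, b3, c1, c3, first_condition=True):
    # Eq. (20): eigenfunction for the first condition; Eq. (25): eigenfunction for the second condition
    if first_condition:
        return c1 * np.cos(b1 * x) + c3 * np.cosh(b3 * x)
    return c1 * np.sin(b1 * x) + c3 * np.sinh(b3 * x)

# Values for k = 1 from Table 1 (note that the last column of the table lists C_3 * 1e4)
x = np.linspace(-1.0, 1.0, 5)
phi1 = kl_eigenfunction(x, b1=0.24402, b3=0.01378, c1=0.69704, c3=169.4023e-4, first_condition=True)
print(phi1)
```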

4. Results

In this section, experimental results illustrating the eigenvalues and eigenfunctions for a real aerial photograph are presented. Figure 1 shows a test image of 256 × 256 pixels; each pixel has 256 quantization levels.
The mean and standard deviation of the image are 132 and 38, respectively. If a two-dimensional covariance function can be separated into functions of the row and column coordinates, then the two-dimensional eigenfunctions can be expressed as products of one-dimensional eigenfunctions obtained by solving the integral equation with one-dimensional covariance functions computed over rows and columns. We illustrate our results with one-dimensional eigenfunctions using the averaged covariance function constructed over the image rows. Assuming that the image rows are realizations of a stationary process, we first calculate the empirical covariance functions of all rows and then average them to obtain a single experimental covariance function. The argument $x$ is scaled to the interval $x \in [-1, 1]$. Figure 2 shows the experimental covariance curve computed over the image rows and its three approximations, optimized in terms of the mean square error (MSE), using the following covariance functions: exponential $\exp(-\alpha|x|)$, exponential-cosine $\exp(-\alpha|x|)\cos(\beta x)$, and parametric oscillating $\exp(-\alpha|x|)\left[\cos(\beta x) + \gamma\sin(\beta|x|)\right]$.
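The paper does not spell out how the averaged empirical covariance is computed, so the following sketch shows one plausible implementation of the procedure just described: each row is centered by its own mean, a lag covariance is estimated per row, and the row estimates are averaged. Function and variable names are ours.

```python
import numpy as np

def row_averaged_covariance(image, max_lag):
    # Average, over all rows, of the empirical covariance at lags 0..max_lag.
    # Each row is centered by its own mean and treated as a stationary realization.
    img = np.asarray(image, dtype=float)
    rows, cols = img.shape
    assert max_lag < cols
    cov = np.zeros(max_lag + 1)
    for lag in range(max_lag + 1):
        acc = 0.0
        for r in range(rows):
            row = img[r] - img[r].mean()
            acc += np.mean(row[: cols - lag] * row[lag:])
        cov[lag] = acc / rows
    return cov

# Usage with a random stand-in for the 256 x 256 test image
rng = np.random.default_rng(0)
test_image = rng.integers(0, 256, size=(256, 256))
emp_cov = row_averaged_covariance(test_image, max_lag=255)
```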
Since the MSE has a minimum as a function of the model parameters, the least squares method was used for numerical optimization. The method determines the best-fit function by selecting the model parameters that minimize the sum of squared differences between the experimental covariance values and the model values. The optimal approximation parameters are as follows: for the exponential model, $MSE = 1.055$, $\alpha = 0.098$; for the exponential-cosine model, $MSE = 0.965$, $\alpha = 0.064$, $\beta = 0.048$; and for the parametric oscillatory model, $MSE = 0.424$, $\alpha = 0.033$, $\beta = 0.025$, $\gamma = 1$.
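A least-squares fit of the three covariance models can be carried out, for example, with scipy.optimize.curve_fit, as in the sketch below. The synthetic covariance curve, the normalization to unit variance, and the starting values are our own assumptions, used only to keep the example self-contained; in practice the row-averaged experimental covariance of the image would be used.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_model(x, alpha):
    return np.exp(-alpha * x)

def exp_cos_model(x, alpha, beta):
    return np.exp(-alpha * x) * np.cos(beta * x)

def osc_model(x, alpha, beta, gamma):
    # For x >= 0 the absolute values in the model can be dropped
    return np.exp(-alpha * x) * (np.cos(beta * x) + gamma * np.sin(beta * x))

# Lags scaled to [0, 1]; synthetic data standing in for the experimental covariance
x = np.linspace(0.0, 1.0, 256)
rng = np.random.default_rng(1)
emp_cov = osc_model(x, 0.03, 0.03, 1.0) + 0.01 * rng.standard_normal(x.size)

models = {
    "exponential": (exp_model, [0.1]),
    "exponential-cosine": (exp_cos_model, [0.1, 0.05]),
    "oscillating": (osc_model, [0.1, 0.05, 1.0]),
}
for name, (model, p0) in models.items():
    params, _ = curve_fit(model, x, emp_cov, p0=p0)
    mse = np.mean((model(x, *params) - emp_cov) ** 2)
    print(name, params, mse)
```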
The approximation by the parametric oscillating function is clearly better in terms of the MSE than the approximations by the other two models. Figure 3 shows the normalized eigenvalue $\hat{\lambda}$ as a function of the imaginary solution $b_1$ (see Equation (11)). A solution to the integral equation in (2) exists if the normalized eigenvalue satisfies $0 < \hat{\lambda} < 2(\beta\gamma+\alpha)/(\alpha^2+\beta^2) = 9.335$ (limit value). Figure 4 shows the real solution as a function of the imaginary solution. The range of the real solution is $0 < b_3 < 0.0154$, while the imaginary solution satisfies $b_1 \ge 0.1072$.
The values of $b_3$ and $b_1$ that satisfy the systems of Equations (17) and (22) can be determined graphically, as shown in Figure 5.
In the graph, the horizontal curve corresponds to Equation (10), while the vertical curves are the graphs of the first formulas of Equations (17) and (22). The odd set of intersections corresponds to the first condition (the set $\{b_1^{+k}, b_3^{+k}\}$), and the even set corresponds to the second condition (the set $\{b_1^{-k}, b_3^{-k}\}$). The corresponding numerical solution of the Karhunen–Loève integral equation for the first ten eigenfunctions and eigenvalues is given in Table 1.
The corresponding eigenfunctions given in Equations (20) and (25) are shown in Figure 6.
It is interesting to note that as $b_1$ increases, $b_3$ tends to a constant (see Figure 4 and the fourth column of Table 1). This means that as the index $k$ increases, the second terms in Equations (20) and (25) also tend to constants. Therefore, the eigenfunctions become approximately a set of periodic sines and cosines with frequencies approximately equal to $\bar{b}_1^k = (k-1)\pi/2$. Consequently, the procedure for calculating $b_1^k$ and the remaining parameters of the model can be simplified. Table 2 shows the true values $b_1^k$, the approximate values $\bar{b}_1^k$, and the absolute errors $ER_k$ in Equations (17) and (22) when the approximate values are used instead of the true ones. Since for this model $b_1 \ge 0.1072$, the index $k$ in the table starts with $k = 2$.
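A quick check of the approximation $\bar{b}_1^k = (k-1)\pi/2$ against the tabulated roots can be done in a few lines; the snippet below simply compares the two columns of Table 2 (it reports the gap between the true and approximate roots, which is not the same quantity as the residual error $ER_k$ listed in the table).

```python
import numpy as np

k = np.arange(2, 11)
b1_true = np.array([1.59407, 3.15990, 4.72024, 6.29238, 7.85698,
                    9.43091, 10.99894, 12.57097, 14.13979])  # second column of Table 2
b1_approx = (k - 1) * np.pi / 2
for kk, bt, ba in zip(k, b1_true, b1_approx):
    print(kk, bt, round(ba, 5), round(abs(bt - ba), 5))
```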

5. Conclusions

In this paper, an analytical solution of the Karhunen–Loève integral equation is presented for a special case in which the covariance function of a stationary process follows a parametric oscillatory model. Based on the solution, explicit eigenfunctions and eigenvalues are derived that depend on only three model parameters. Computer simulations with a real image illustrate the theoretical results for the parametric model of the oscillating covariance function. An approximate solution of the integral equation is also proposed, and it is shown that the approximation error decreases as the index of the eigenfunction increases.

Author Contributions

Conceptualization, V.K., A.M., and S.V.; methodology, V.K., A.M., and S.V.; software, V.K.; writing—original draft preparation, V.K.; writing—review and editing, V.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The authors thank the Russian Science Foundation, grant No. 25-11-20031.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

This appendix contains the derivation of Equation (5). The power spectral density describes the distribution of signal power over frequency. For a stationary random process, according to the Wiener–Khinchin theorem, the power spectral density can be calculated by applying the Fourier transform to the covariance function, that is,
$$S(w) = \int_{-\infty}^{\infty} K(x)\, e^{-jxw}\, dx,$$
where $K(x)$ and $S(w)$ are the covariance function and the power spectral density, respectively. Since the covariance function is real and even, Equation (A1) can be rewritten as
$$S(w) = \int_{0}^{\infty} K(x)\, e^{jxw}\, dx + \int_{0}^{\infty} K(x)\, e^{-jxw}\, dx.$$
In our case, Equation (A2) takes the following form:
$$S(w) = P\int_{0}^{\infty} e^{-\alpha x}\left(\cos\beta x + \gamma\sin\beta x\right) e^{jxw}\, dx + P\int_{0}^{\infty} e^{-\alpha x}\left(\cos\beta x + \gamma\sin\beta x\right) e^{-jxw}\, dx.$$
The first term of Equation (A3) is calculated as follows:
S 1 w = P lim x e α x e j w x j γ w sin β x + j w cos β x + β γ cos β x + α γ sin β x + + α cos β x β sin β x j w β γ α / w 2 + 2 j w α + α 2 + β 2 .
Similarly, the second term of Equation (A3) is calculated as
S 2 w = = P lim x e α x e j w x j γ w sin β x + j w cos β x β γ cos β x α γ sin β x α cos β x + β sin β x j w + β γ + α / w 2 + 2 j w α α 2 β 2 .
Taking into account that $\alpha > 0$ and evaluating the limits in Equations (A4) and (A5), the power spectral density takes the form
$$S(w) = S_1(w) + S_2(w) = \frac{P\left(\alpha + \beta\gamma + jw\right)}{\alpha^2 + \beta^2 - w^2 + 2jw\alpha} + \frac{P\left(\alpha + \beta\gamma - jw\right)}{\alpha^2 + \beta^2 - w^2 - 2jw\alpha}.$$
After some manipulations, we obtain an expression identical to Equation (5):
$$S(w) = \frac{2P\left[(\alpha^2+\beta^2)(\alpha+\gamma\beta) + w^2(\alpha-\gamma\beta)\right]}{w^4 + 2w^2(\alpha^2-\beta^2) + (\alpha^2+\beta^2)^2}.$$
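As a sanity check of the derivation, the closed-form expression (A7) can be compared with a direct numerical evaluation of the Wiener–Khinchin integral. The sketch below does this for one arbitrary parameter set and one frequency; the values are ours and are not from the paper.

```python
import numpy as np
from scipy.integrate import quad

alpha, beta, gamma, P = 0.8, 0.5, 0.3, 1.0  # arbitrary test values

def K(x):
    return P * np.exp(-alpha * abs(x)) * (np.cos(beta * x) + gamma * np.sin(beta * abs(x)))

def S_closed(w):
    s = alpha**2 + beta**2
    num = 2 * P * (s * (alpha + gamma * beta) + w**2 * (alpha - gamma * beta))
    den = w**4 + 2 * w**2 * (alpha**2 - beta**2) + s**2
    return num / den

w = 1.3
# K is even, so S(w) = 2 * integral of K(x)*cos(w*x) over [0, inf); the tail beyond x = 60 is negligible
S_numeric = 2.0 * quad(lambda x: K(x) * np.cos(w * x), 0.0, 60.0)[0]
print(S_numeric, S_closed(w))  # the two values should agree closely
```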

Appendix B

This appendix contains the derivation of Equation (7). Two properties of the Fourier transform are used: (1) convolution of functions in the signal domain corresponds to multiplication of the corresponding spectra in the Fourier domain; (2) differentiation of a function in the signal domain corresponds to multiplication of its spectrum by $jw$. Taking the Fourier transform of Equation (2) and applying the first property, one can write
$$\lambda\, \Psi(w) = S(w)\, \Psi(w),$$
where $\Psi(w)$ is the Fourier transform of the eigenfunction $\varphi(x)$.
Substituting Equation (5) into Equation (A8), we obtain
$$\lambda\, \Psi(w) = \frac{2P\left[(\alpha^2+\beta^2)(\alpha+\gamma\beta) + w^2(\alpha-\gamma\beta)\right]\Psi(w)}{w^4 + 2w^2(\alpha^2-\beta^2) + (\alpha^2+\beta^2)^2}.$$
Let us transform the equation into the standard form of a polynomial in descending powers of $w$ as follows:
$$\lambda w^4 \Psi(w) + 2 w^2 \Psi(w)\left[\lambda(\alpha^2-\beta^2) + (\beta\gamma-\alpha)P\right] + (\alpha^2+\beta^2)\left[\lambda(\alpha^2+\beta^2) - 2(\beta\gamma+\alpha)P\right]\Psi(w) = 0.$$
Finally, applying the second property to Equation (A10) and after some manipulation, we obtain
$$\varphi^{(4)}(x) - 2\left[(\alpha^2-\beta^2) + (\beta\gamma-\alpha)\frac{P}{\lambda}\right]\varphi^{(2)}(x) + (\alpha^2+\beta^2)\left[(\alpha^2+\beta^2) - 2(\beta\gamma+\alpha)\frac{P}{\lambda}\right]\varphi(x) = 0.$$

References

  1. Karhunen, K. Über lineare Methoden in der Wahrscheinlichkeitsrechnung. Ann. Acad. Sci. Fenn. Ser. A1 Math.-Phys. 1947, 37, 1–79. [Google Scholar]
  2. Loève, M. Probability Theory II. Graduate Texts in Mathematics, 4th ed.; Springer: New York, NY, USA, 1978. [Google Scholar]
  3. Van Trees, H.L. Detection, Estimation, and Modulation Theory; John Wiley & Sons: Hoboken, NJ, USA, 1968. [Google Scholar]
  4. Darlington, S. Linear least-squares smoothing and predictions with applications. Bell Syst. Tech. J. 1958, 37, 1221–1294. [Google Scholar] [CrossRef]
  5. Liu, S.; Xie, Z.; Hu, Z. A Distributed non-intrusive load monitoring method using Karhunen–Loève feature extraction and an improved deep dictionary. Electronics 2024, 13, 3970. [Google Scholar] [CrossRef]
  6. Wang, X.; Beller, L.; Czado, C.; Holzapfel, F. Modeling of stochastic wind based on operational flight data using Karhunen–Loève expansion method. Sensors 2020, 20, 4634. [Google Scholar] [CrossRef] [PubMed]
  7. Jain, A.K. A fast Karhunen–Loève transform for a class of random processes. IEEE Trans. Commun. 1976, 24, 1023–1029. [Google Scholar] [CrossRef]
  8. Kober, V. Fast algorithms for the computation of sliding discrete sinusoidal transforms. IEEE Trans. Signal Process. 2004, 52, 1704–1710. [Google Scholar] [CrossRef]
  9. Unser, M. On the approximation of the discrete Karhunen-Loeve transform for stationary processes. Signal Process. 1984, 7, 231–249. [Google Scholar] [CrossRef]
  10. Slepian, D. Estimation of signal parameters in the presence of noise. Trans. IRE 1954, PGIT-3, 68–89. [Google Scholar] [CrossRef]
  11. Youla, D. Solution of a homogeneous Wiener-Hopf integral equation occurring in the expansion of second order stationary random functions. IRE Trans. Inform. Theory 1957, IT-3, 187–193. [Google Scholar] [CrossRef]
  12. Kober, V.; Alvarez-Borrego, J. Karhunen–Loève expansion of stationary random signals with exponentially oscillating covariance function. Opt. Eng. 2003, 42, 1018–1023. [Google Scholar]
Figure 1. Test image.
Figure 2. Approximations of the experimental covariance function.
Figure 3. Normalized eigenvalue versus imaginary solution.
Figure 4. Real solution versus imaginary solution.
Figure 5. Graphical solution of the system of Equations (17) and (22).
Figure 6. Eigenfunctions $\varphi_k(x)$ calculated for (a) the first condition (Equation (20)); (b) the second condition (Equation (25)).
Table 1. Numerical solution of the Karhunen–Loève integral equation.

k     λ̂_k      b_1^k      b_3^k     C_1^k     C_3^k × 10^4
1     1.92421   0.24402    0.01378   0.69704   169.4023
2     0.04564   1.59407    0.01534   0.99277   10.3956
3     0.01162   3.15990    0.01537   0.99712   −1.4092
4     0.00521   4.72024    0.01537   0.99917   −1.1909
5     0.00293   6.29238    0.01537   0.99927   0.3561
6     0.00188   7.85698    0.01537   0.99970   0.4298
7     0.00130   9.43091    0.01537   0.99968   −0.1586
8     0.00096   10.99894   0.01538   0.99985   −0.2194
9     0.00073   12.57097   0.01538   0.99982   0.0893
10    0.00058   14.13979   0.01538   0.99991   0.0327
Table 2. Approximation of the numerical solution b_1.

k     b_1^k      b̄_1^k      ER_k
2     1.59407    1.57079    0.03593
3     3.15990    3.14159    0.02365
4     4.72024    4.71238    0.00398
5     6.29238    6.28318    0.00278
6     7.85698    7.85398    0.00143
7     9.43091    9.42477    0.00142
8     10.99894   10.99557   0.00073
9     12.57097   12.56637   0.00072
10    14.13979   14.13716   0.00044