Abstract
The article is devoted to Bayes optimization problems for nonlinear observable stochastic systems (NLOStSs) based on wavelet canonical expansions (WLCEs). The input and output stochastic processes (StPs) of the considered nonlinear StSs depend on random parameters and additive independent Gaussian noises. For stochastic synthesis we use a Bayes approach with a given loss function and the minimum risk condition. WLCEs are formed from the expansion coefficients of covariance functions in a two-dimensional orthonormal basis of wavelets with compact support. New results: (i) a common Bayes’ criteria synthesis algorithm for NLOStSs by WLCE is presented; (ii) particular synthesis algorithms for three Bayes’ criteria (minimum mean square error, damage accumulation and probability of the error exiting given limits) are given; (iii) an approximate algorithm based on statistical linearization is developed; (iv) three test examples are considered. Applications: wavelet optimization and parameter calibration in complex measurement and control systems. Some generalizations are formulated.
Keywords:
Bayes’ criterion; damage accumulation criterion; Haar wavelets; stochastic process; wavelet canonical expansion
MSC:
62C10; 65T60
1. Introduction
The wavelet canonical expansion (WLCE) of a stochastic process (StP) is formed from the expansion coefficients of its covariance function in a two-dimensional orthonormal basis of wavelets with compact support [1]. Methods of linear analysis and synthesis of StPs in nonstationary linear observable stochastic systems (LOStSs) were developed in [2,3,4] for Bayes’ criteria (BC). Computer experiments confirmed the high efficiency of the algorithms for a small number of terms in the WLCE. For scalar nonstationary nonlinear observable stochastic systems (NLOStSs), exact methods based on canonical expansions (CEs) were developed in [2,3,4]. In practice, quality analysis of NLOStSs based on CEs and WLCEs increases the computational flexibility and accuracy of the corresponding stochastic numerical technologies.
This article is devoted to optimization problems for nonstationary NLOStSs by WLCE. Section 2 is devoted to the problem statement for Bayes’ criteria in terms of risk theory. The common BC algorithm of the WLCE method with compactly supported wavelets is developed in Section 3. In Section 4, particular algorithms for three BC (minimum mean square error, damage accumulation and probability of the error exiting given limits) are presented. An approximate algorithm based on the method of statistical linearization is given in Section 5. Section 6 contains three test examples illustrating the accuracy of the developed algorithms for nonlinear functions.
The described algorithms are useful for optimal BC quality analysis of complex NLOStSs in the presence of internal and external noises and stochastic factors described by CEs and WLCEs. The corresponding comparative analysis is given in [2,3,4].
2. Problem Statement
Let the scalar real input StP be represented as the sum of a useful signal and an additive Gaussian random noise (Equation (1)). Here the useful signal is a nonlinear function of time t and a vector of random parameters with a given probability density. At the output we need to obtain the StP defined by Equation (2), where the output is a known nonlinear transform of the useful signal corrupted by a second additive Gaussian random noise. The noises are independent of the vector of random parameters.
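For reference, the observation model just described can be written schematically as follows; the symbols x(t), y(t), U, φ, ψ, n₁(t), n₂(t) are assumed for illustration and are not necessarily the authors' original notation.

```latex
% Assumed notation: x(t) -- observed input StP, y(t) -- useful output StP,
% U -- vector of random parameters with a given density, \varphi, \psi -- known nonlinear functions,
% n_1(t), n_2(t) -- additive Gaussian noises independent of U.
x(t) = \varphi(t, U) + n_1(t),                      % Equation (1): observed input
y(t) = \psi\bigl(t, \varphi(t, U)\bigr) + n_2(t).   % Equation (2): useful output (schematic form)
```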
The choice of criteria for comparing alternative systems serving the same purpose is, like any question of criterion selection, largely a matter of common sense; it can usually be approached by considering the operating conditions and the purpose of the concrete system.
Thus, we arrive at the following general principle for estimating the quality of a system and selecting the optimality criterion [2,3,4]. The quality of the solution of the problem in each actual case is estimated by a loss function, whose value is determined by the actual realizations of the signal and of its estimator, where A denotes the system operator to be optimized.
The criterion of the maximum probability that the error will not exceed a particular value can be represented in the corresponding form. If we take the loss function to be the characteristic function of the corresponding set of error values, this representation is valid. In applications connected with damage accumulation it is necessary to employ (3) with the function l taken in the damage accumulation form.
The quality of the solution of the problem on average, for a given realization of the signal and all possible realizations of the estimator corresponding to that particular realization of the signal, is estimated by the conditional mathematical expectation of the loss function for the given realization of the signal.
This quantity is called the conditional risk. The conditional risk depends on the operator defining the estimator and on the realization of the signal. Finally, the average quality of the solution over all possible realizations of the signal and its estimator is characterized by the mathematical expectation of the conditional risk.
This quantity is called the mean risk.
All minimum-risk criteria corresponding to possible loss functions or functionals, which may contain undetermined parameters, are known as Bayes’ criteria.
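In standard risk-theory notation (symbols assumed for illustration: x is the observed input, y the useful output, A the system operator, l the loss function, M the mathematical expectation), the conditional and mean risks described above can be written as:

```latex
% Conditional risk: average loss for a fixed realization of the useful signal y.
\rho(A \mid y) = \mathrm{M}\bigl[\, l\bigl(y, A x\bigr) \mid y \,\bigr],
% Mean risk: average of the conditional risk over all realizations.
R(A) = \mathrm{M}\bigl[\, \rho(A \mid y) \,\bigr] = \mathrm{M}\bigl[\, l\bigl(y, A x\bigr) \,\bigr].
```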
So it is sufficient to find system operators that minimize the conditional mathematical expectation of the loss function at the time moment t for each realization of the observed StP on the observation interval:
To solve this problem we use WLCE [1]. First, we find the conditional density of the useful output (or of the random parameter vector and random noise) relative to the observable StP.
3. Common Algorithm of the WLCE Method
As is known from [4], for the noises we use WLCEs with the common random variables
and
where, for all indices, the expansion random variables are uncorrelated, with zero mean and the indicated variances; the coordinate functions and the real functions satisfying Equation (12) are specified below; the last quantity takes the indicated value when the stated index condition holds and is zero otherwise.
For the input StP we construct the WLCE by the following formulae:
and
We obtain from (10) and (14) the following presentations:
So the StP depends upon the random parameters and all the sets of expansion random variables.
In the case of independent Gaussian noises, the corresponding WLCEs do not depend upon one another. Consequently, the coordinate functions are expressed via the coefficients of the WLE, and we obtain the following result:
Finally, the conditional density of the parameter vector relative to the observed StP coincides with its conditional density relative to the expansion random variables and, on the considered time interval, is equal to:
where
Theorem 1.
Let the following conditions hold for a stochastic system (1), (2):
- (1) The covariance function of the random noises is known and belongs to the corresponding space;
- (2) The joint covariance function of the random noises is known and belongs to the corresponding space;
- (3) The function, considered relative to the time variable, belongs to the corresponding space; the second argument is treated as a parameter.
Then the unknown parameters in (19) for the conditional density of the parameter vector relative to the observed StP are expressed in terms of the coefficients of the wavelet expansions of the given functions over the selected wavelet bases.
Proof of Theorem 1.
In the corresponding space we fix an orthonormal wavelet basis with compact support [5,6],
The wavelet basis may be rewritten in the following form:
In the corresponding two-dimensional space we fix the two-dimensional orthonormal wavelet basis in the form of a tensor product, for the case when the construction is performed identically in both variables, with the indices defined as above.
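For concreteness, recall the Haar scaling function and mother wavelet that generate such compactly supported bases (standard definitions; the dilation/translation indexing below is the usual convention and is only an illustrative assumption about the bases actually used):

```latex
% Haar scaling function and mother wavelet on [0, 1).
\phi(t) = \begin{cases} 1, & 0 \le t < 1, \\ 0, & \text{otherwise}, \end{cases}
\qquad
\psi(t) = \begin{cases} 1, & 0 \le t < 1/2, \\ -1, & 1/2 \le t < 1, \\ 0, & \text{otherwise}, \end{cases}
% with dilations and translations \psi_{j,k}(t) = 2^{j/2}\,\psi(2^{j} t - k), \quad k = 0, \dots, 2^{j}-1.
```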
Then in this space we construct the following two-dimensional wavelet expansion (WLE) of the covariance function,
where
Here, the variances are calculated according to recurrent formulae involving auxiliary coefficients. The parameters are expressed by means of the WLE coefficients of the covariance function; the remaining coefficient is obtained directly.
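As a numerical illustration (not the authors' formulas), the sketch below shows how the two-dimensional WLE coefficients of a covariance kernel could be approximated on a dyadic grid with Haar wavelets; the function names and the exponential test kernel are illustrative assumptions.

```python
import numpy as np

def haar_psi(t):
    """Haar mother wavelet on [0, 1)."""
    return (np.where((t >= 0.0) & (t < 0.5), 1.0, 0.0)
            - np.where((t >= 0.5) & (t < 1.0), 1.0, 0.0))

def haar_basis(j_max, n):
    """Haar scaling function plus wavelets up to level j_max, sampled on n points of [0, 1)."""
    t = (np.arange(n) + 0.5) / n
    funcs = [np.ones(n)]                               # scaling function: 1 on [0, 1)
    for j in range(j_max + 1):
        for k in range(2 ** j):
            funcs.append(2 ** (j / 2) * haar_psi(2 ** j * t - k))
    return np.array(funcs)                             # shape (m, n), m = 2**(j_max + 1)

def wle_2d_coefficients(K, j_max):
    """Approximate two-dimensional expansion coefficients c[p, q] = <K, h_p x h_q>."""
    n = K.shape[0]
    H = haar_basis(j_max, n)
    return H @ K @ H.T / n ** 2                        # Riemann-sum approximation of the double integral

# Illustrative covariance kernel on [0, 1]^2 (an assumption, not the paper's example).
n = 256
t = (np.arange(n) + 0.5) / n
K = np.exp(-np.abs(t[:, None] - t[None, :]))           # exponential covariance
C = wle_2d_coefficients(K, j_max=2)
print(C.shape)                                         # (8, 8): eight expansion members
```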
The coordinate functions are defined by the following recurrent formulae, using the auxiliary functions introduced below. The real functions satisfying conditions (11) and (12) are expressed in terms of the basic wavelet functions. The coordinate functions for the joint covariance function are defined analogously by (26) and (27).
Considering u as a parameter, we obtain the following WLE for the function,
or, in the notation of (21),
So, in the general case, we obtain the following final expressions:
Theorem 1 is proved. □
Given the density (19), we calculate the conditional risk:
For optimal synthesis it is necessary to find the optimal output StP at a given time moment from the condition of the minimum of integral (32). Let us consider this integral as a function of one variable with the remaining arguments fixed. The value of the parameter for which the integral reaches its minimum defines the optimal operator in the case of Bayes’ criterion (3). Replacing the variables by the corresponding random ones, we obtain the optimal operator:
The quality of the optimal operator is numerically evaluated on the basis of the mean risk by the known formula [2,3,4]:
The common algorithm of the WLCE method for NLOStS synthesis using Bayes’ criteria consists of four steps (Algorithm 1).
| Algorithm 1 Common Synthesis of NLOStS by WLCE |
4. Synthesis of NLOStSs for Particular Optimal Criteria
Let us consider the following minimum criteria: (i) mean square error; (ii) damage accumulation; (iii) probability of the error exiting given limits.
In case (i) the loss function and conditional risk are described by the formulae:
For the optimal estimate it is necessary to minimize integral (36) with respect to the parameter:
The solution of the Euler equation [7],
gives the explicit expression for the parameter:
The right-hand side of (39) is the conditional mathematical expectation of the useful output StP relative to the input StP; consequently, the optimal estimate of the output StP for the mean square error criterion is defined by the formula:
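In assumed notation (the conditional density of the useful output given the observed input written as f(y|x)), the standard quadratic-loss argument behind this conclusion reads:

```latex
% Quadratic loss l(y, \hat y) = (y - \hat y)^2; minimize the conditional risk over the scalar \hat y:
\frac{\partial}{\partial \hat y}\,\mathrm{M}\bigl[(y(t) - \hat y)^2 \mid x\bigr]
  = -2\bigl(\mathrm{M}[y(t) \mid x] - \hat y\bigr) = 0
\;\Longrightarrow\;
\hat y(t) = \mathrm{M}\bigl[\, y(t) \mid x(\tau),\ \tau \in [t_0, t] \,\bigr].
```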
For criterion (ii) we have the following formulae:
and (40).
For criterion (iii) we obtain the following formulae:
and
The equation for the optimal parameter takes the form
The solution of (48) gives the value of the parameter for which the conditional density takes equal values at the two boundary points of the admissible error interval. So the optimal operator is defined by (47).
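In assumed notation (half-width of the admissible error interval written as ε and the conditional density of the useful output as f(y|x)), the standard derivation for criterion (iii) reads:

```latex
% Criterion (iii): maximize the probability that the error stays within given limits \pm\varepsilon.
\hat y = \arg\max_{c}\int_{c-\varepsilon}^{c+\varepsilon} f(y \mid x)\,dy,
\qquad
\frac{d}{dc}\int_{c-\varepsilon}^{c+\varepsilon} f(y \mid x)\,dy
  = f(c+\varepsilon \mid x) - f(c-\varepsilon \mid x) = 0.
% The optimal value equalizes the conditional density at the two endpoints of the error interval.
```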
5. Approximate Algorithm Based on Statistical Linearization
Let us apply the method of statistical linearization (MSL) [2,3,4] to NLOStSs (1) and (2) with a Gaussian random parameter vector having the density
where the parameters are the elements of the mean vector and the elements of the inverse of the covariance matrix.
At first, according to the MSL, we replace the nonlinear function in Equation (1) with a linear one, whose coefficients are determined from the mean square error approximation condition [2,3,4]. Using this notation, we replace Equation (1) with its linearized counterpart.
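For reference, the standard mean-square statistical linearization of a nonlinear function of a Gaussian vector has the following form (the symbols m_U, D_U, φ₀, k₁ are assumed for illustration):

```latex
% U ~ N(m_U, D_U); \varphi_0 -- statistical characteristic, k_1 -- statistical gain (row vector).
\varphi(t, U) \approx \varphi_0(t) + k_1(t)\,(U - m_U),
\qquad
\varphi_0(t) = \mathrm{M}\bigl[\varphi(t, U)\bigr],
\quad
k_1(t) = \mathrm{M}\bigl[\varphi(t, U)\,(U - m_U)^{\mathsf T}\bigr]\, D_U^{-1}.
```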
Under this condition, using the WLCE method of linear synthesis [2,3,4], we have the following equations:
where
So we obtain the WLE
or, in the notation of (21),
and
As a result, Equations (53) and (54) may be rewritten in the form
From Equations (10) and (55), for the noise we have the following presentation:
Taking into consideration Equation (2), we conclude that the WLCE of the output does not depend upon the WLCE of the random noise, and the following relation holds for Gaussian noises.
The conditional joint density of the parameter vector relative to the expansion random variables (or to the StP) and the conditional mathematical expectation of the Bayes loss function are expressed by the known formulae [2,3,4]:
Here
Theorem 2.
Let the following conditions hold for a stochastic system (1), (2):
- (1) The covariance function of the random noises is known and belongs to the corresponding space;
- (2) The joint covariance function of the random noises is known and belongs to the corresponding space;
- (3) The random vector is given by the normal probability density (49);
- (4) The nonlinear function is approximated by a linear one with respect to the random parameters according to the statistical linearization method in the form (50);
- (5) The conditional probability density of the parameter vector relative to the StP (or relative to the set of random variables according to Formulas (53)–(55)) is approximated by Formulas (65) and (67).
Then we obtain an approximate optimal estimate of the output StP for the three criteria (minimum mean square error, damage accumulation and probability of the error exiting given limits):
where the quantities involved are the conditional expectation and conditional covariance matrix of the parameter vector relative to the observed StP.
Proof of Theorem 2.
As was mentioned in Section 3, the value of the parameter at which integral (66) reaches its minimum defines the optimal Bayes operator. Replacing the variables by the corresponding random ones, we obtain the optimal system operator
In Section 4 we found that, for the three criteria, the optimal estimate of the output (2) based on the observed input (1) equals the conditional mathematical expectation. So the following expressions are valid:
Due to the Gaussian distribution of the variables involved, the conditional density will be Gaussian. Denoting
we obtain the following approximate presentation of (69):
where the parameters are the conditional expectation and the conditional covariance matrix.
Presentation (71), written in the notations introduced above, takes the form
Equating to zero the partial derivative in (65), we obtain the equation for the conditional mathematical expectation,
The system of linear algebraic equations may be rewritten in matrix form
where
Solving Equation (75)
and using the notations
we obtain the approximate optimal estimate
The parameter is computed according to the known formulae [4,5,6]:
Theorem 2 is proved. □
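Under the Gaussian assumptions of Theorem 2, the conditional expectation and covariance reduce to standard Gaussian conditioning on a finite set of linearized observations. The sketch below is a minimal numerical illustration of that step (not the authors' Formulas (75)–(78)); all names and numbers are illustrative.

```python
import numpy as np

def gaussian_conditional(m_u, D_u, B, a, R, z):
    """Conditional mean/covariance of u ~ N(m_u, D_u) given z = a + B @ u + v, v ~ N(0, R)."""
    S = B @ D_u @ B.T + R                   # covariance of the observation vector z
    G = D_u @ B.T @ np.linalg.inv(S)        # gain matrix
    m_post = m_u + G @ (z - (a + B @ m_u))  # conditional expectation M[u | z]
    D_post = D_u - G @ B @ D_u              # conditional covariance
    return m_post, D_post

# Illustrative use: one Gaussian parameter observed through two linearized samples.
m_u = np.array([0.5]); D_u = np.array([[0.04]])
B = np.array([[1.0], [0.8]]); a = np.zeros(2); R = 0.01 * np.eye(2)
z = np.array([0.55, 0.42])
m_post, D_post = gaussian_conditional(m_u, D_u, B, a, R, z)
print(m_post, D_post)
```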
So we come to the following approximate algorithm for the construction of the optimal operator on the basis of the MSL and WLCE methods (Algorithm 2).
| Algorithm 2 Synthesis of NLOStS by MSL and WLCE |
6. Test Examples
6.1. Example 1
Let us consider the BC optimal filter for the reproduction of a nonlinear signal using an observation on a given time interval. The random noise has zero mathematical expectation and a known covariance function. The random noises are independent. The scalar random parameter is normally distributed with given mathematical expectation and variance. According to Section 4, for the three criteria the optimal estimate is the conditional mathematical expectation.
After calculations we obtain, according to (15), (16), (29) and (30), the following formulae:
Using the notations
we come to the required result:
For the input data,
the results of computer experiments are shown in Figure 1 and Figure 2. Figure 1 corresponds to case 1, when the signal value and its estimate coincide, whereas in Figure 2 the signal value and the estimate differ at one random point.
Figure 1. Example 1. Case 1.
Figure 2. Example 1. Case 2.
6.2. Example 2
Consider the following conditions with the input data of Section 6.1. Using the results of Section 4, according to (20) we have
Introducing notations according to (15), (16), (29) and (30) and using the corresponding formulae, we obtain the required estimate.
The results of computer experiments based on Algorithm 1 for the signal, the signal estimate, the error square and the accumulation of damage on the time interval [9, 18] are given in Figure 3 and Figure 4. Here the standard MATLAB R2019a software procedures with step 0.01 were used.
Figure 3. Results of synthesis of NLOStS (81) based on Algorithm 1: (a) graphs of signal extrapolation and estimate extrapolation; (b) graph of error module.
Figure 4. Results of synthesis of NLOStS (81) based on Algorithm 1: (a) graph of error square; (b) graphs of accumulation of damage.
6.3. Example 3
Let us consider the application of the MSL (Section 5) to the system of the test example in Section 6.2. After calculations we have the following formulae,
and
where
So we obtain the following approximate result:
The results of computer experiments based on Algorithm 2 for the signal, the signal estimate, the error module, the error square, the accumulation of damage, the conditional mathematical expectation and the relative error in % are given in Figure 5, Figure 6 and Figure 7. The conditional variance is constant and equal to 0.00196.
Figure 5. Results of synthesis of NLOStS (81) based on Algorithm 2: (a) graphs of signal extrapolation and estimate extrapolation; (b) graph of error module.
Figure 6. Results of synthesis of NLOStS (81) based on Algorithm 2: (a) graph of error square; (b) graphs of accumulation of damage.
Figure 7. Graphs of: (a) conditional mathematical expectation; (b) relative error in %.
Comparing the test examples of Section 6.2 and Section 6.3, we conclude that the MSL provides good engineering accuracy of about 20–30%.
7. Conclusions
Algorithms for the optimal synthesis of nonstationary nonlinear StSs (Pugachev’s Equations (1) and (2)) based on canonical expansions of stochastic processes are well developed and widely applied [4]. Algorithm 1 (Theorem 1) is oriented towards Gaussian noises and non-Gaussian parameters. Algorithm 2 (Theorem 2) is valid for Gaussian parameters and noises. The corresponding algorithms based on wavelet canonical expansions have so far been worked out only for linear observable StSs. For non-Gaussian parameters and noises we recommend using CEs with independent non-Gaussian components [4].
An important issue for the considered nonstationary stochastic systems is achieving a faster convergence speed. The structural functions and the covariance functions of the noises in our approaches require one- and two-dimensional function spaces. We use Haar wavelets due to their simplicity and closed-form analytical expressions. It is known that wavelet expansions based on Haar wavelets converge more slowly than wavelet expansions based on, for example, Daubechies wavelets. We suggest two ways to improve the convergence speed: (i) increasing the maximal resolution level of the fixed wavelet basis; (ii) choosing a different type of compactly supported wavelet. Computer experiments in Examples 1–3 confirm good engineering accuracy even for two resolution levels and eight terms of the canonical expansions.
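To illustrate point (i), the self-contained sketch below measures the relative L2 error of a Haar partial expansion of a smooth test signal as the resolution level grows; the test signal and grid size are illustrative assumptions.

```python
import numpy as np

def haar_projection_error(f_vals, j_max):
    """Relative L2 error of projecting samples of f on [0, 1) onto the Haar basis up to level j_max."""
    n = f_vals.size
    t = (np.arange(n) + 0.5) / n
    funcs = [np.ones(n)]                           # scaling function
    for j in range(j_max + 1):
        for k in range(2 ** j):
            w = np.zeros(n)
            lo, mid, hi = k / 2 ** j, (k + 0.5) / 2 ** j, (k + 1) / 2 ** j
            w[(t >= lo) & (t < mid)] = 2 ** (j / 2)
            w[(t >= mid) & (t < hi)] = -2 ** (j / 2)
            funcs.append(w)
    H = np.array(funcs)
    coeffs = H @ f_vals / n                        # inner products (Riemann sums)
    approx = coeffs @ H                            # partial wavelet sum
    return np.linalg.norm(f_vals - approx) / np.linalg.norm(f_vals)

t = (np.arange(1024) + 0.5) / 1024
f = np.sin(2 * np.pi * t)                          # smooth test signal (illustrative)
for j_max in (1, 2, 3, 4):
    print(j_max, round(haar_projection_error(f, j_max), 4))
```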
For the NLOStSs described, the stochastic difference and differential equations corresponding to the algorithms are given in [2,3,4]. Applications:
- Approximate linear and equivalent linearized model building for observable nonstationary and nonlinear stochastic systems;
- Bayes’ criterion optimization and the calibration of parameters in complex quality measurement and control systems;
- Estimation of the potential efficiency of nonstationary nonlinear stochastic systems.
Directions of future generalizations and implementations:
- New models of scalar- and vector-observable stochastic systems (nonlinear, with additive and parametric noises, etc.);
- New classes of Bayes’ criteria (integral, functional, mixed);
- Implementation of wavelet-integral canonical expansions for hereditary stochastic systems.
This research was carried out using the infrastructure of the Shared Research Facilities “High Performance Computing and Big Data” (CKP “Informatics”) of FRC CSC RAS (Moscow, Russia).
Author Contributions
Conceptualization, I.S.; methodology, I.S., V.S. and T.K.; software, E.K. and T.K. All authors have read and agreed to the published version of the manuscript.
Funding
The research received no external funding.
Data Availability Statement
No new data were generated or analyzed during this study.
Acknowledgments
I.V. Sinitsyna translated the text.
Conflicts of Interest
The authors declare no conflict of interest.
Abbreviations
| BC | Bayes’ criteria |
| CE | Canonical expansion |
| LOStS | Linear observable stochastic systems |
| MSL | Method of statistical linearization |
| NLOStS | Nonlinear observable stochastic system |
| StP | Stochastic process |
| StS | Stochastic system |
| WLCE | Wavelet canonical expansion |
| Basic Designations | |
| random function, noise | |
| random function, noise | |
| mathematical expectation of random function | |
| input stochastic process | |
| output stochastic process | |
| estimator | |
| loss function | |
| system operator | |
| conditional risk | |
| mean risk | |
| random vector parameter | |
| random variable of CE of random function | |
| coordinate function of canonical expansion of random function | |
| coordinate function of canonical expansion of random function | |
| variance of random variable | |
| covariance function of random function | |
| joint covariance function of random function and | |
| random variable of canonical expansion of StP | |
| probability density of random vector | |
| conditional probability density of random vector relative to random variables | |
| conditional probability density of random variables relative to | |
| Haar scaling function | |
| Haar mother wavelet |
References
- Sinitsyn, I.; Sinitsyn, V.; Korepanov, E.; Konashenkova, T. Bayes Synthesis of Linear Nonstationary Stochastic Systems by Wavelet Canonical Expansions. Mathematics 2022, 10, 1517.
- Pugachev, V.S. Theory of Random Functions and Its Applications to Control Problems; Pergamon Press Ltd.: Oxford, UK, 1965.
- Pugachev, V.S.; Sinitsyn, I.N. Stochastic Systems. Theory and Applications; World Scientific: Singapore, 2001.
- Sinitsyn, I. Canonical Expansion of Random Functions. In Theory and Applications, 2nd ed.; Torus Press: Moscow, Russia, 2023. (In Russian)
- Daubechies, I. Ten Lectures on Wavelets; SIAM: Philadelphia, PA, USA, 1992.
- Chui, C.K. An Introduction to Wavelets; Academic Press: Boston, MA, USA, 1992.
- Fonseca, I.; Leoni, G. Modern Methods in the Calculus of Variations: Lp Spaces; SMM; Springer: New York, NY, USA, 2007.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).