Article

Bayes Synthesis of Linear Nonstationary Stochastic Systems by Wavelet Canonical Expansions

by Igor Sinitsyn 1,2, Vladimir Sinitsyn 1,2, Eduard Korepanov 1 and Tatyana Konashenkova 1,*
1 Federal Research Center “Computer Science and Control”, Russian Academy of Sciences (FRC CSC RAS), 119333 Moscow, Russia
2 Moscow Aviation Institute, National Research University, 125993 Moscow, Russia
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(9), 1517; https://doi.org/10.3390/math10091517
Submission received: 25 March 2022 / Revised: 22 April 2022 / Accepted: 27 April 2022 / Published: 2 May 2022

Abstract: This article is devoted to analysis and optimization problems of stochastic systems based on wavelet canonical expansions. Basic new results: (i) for general Bayes criteria, a method of synthesis, methodological support and a software tool for nonstationary normal (Gaussian) linear observable stochastic systems by Haar wavelet canonical expansions are presented; (ii) a method of synthesis of a linear optimal observable system for the criterion of the maximal probability that a signal will not exceed a particular value in absolute magnitude is given. Applications: wavelet model building of essentially nonstationary stochastic processes and parameter calibration.

1. Introduction

Nowadays, for the research of stochastic systems of complex structure functioning under essentially nonstationary disturbances, we need analytical modeling technologies for accurate analysis and synthesis. Methods of analysis and synthesis based on canonical expansions are very suitable for quick analytical modeling using the first two probabilistic moments. Wavelet canonical expansions essentially increase the flexibility and accuracy of the corresponding technologies.
It is known [1,2,3] that canonical expansions (CE) of stochastic processes (StP) are widely used to solve problems of analysis, modeling and synthesis of linear nonstationary stochastic systems (StS). For StS with high availability, corresponding software tools based on CE were worked out in [4,5,6,7,8]. In [4], we gave a brief review of the known algorithmic and software tools. In [5,6], the issues of instrumental software for analytical modeling of nonstationary scalar and vector random functions by means of wavelet CE (WLCE) are considered. The parameters of WLCE are expressed in terms of the coefficients of the expansion of the covariance matrix of the random function over two-dimensional Daubechies wavelets. Article [7] continues the thematic cycle dedicated to analytical modeling of linear nonstationary StS based on wavelet and wavelet canonical expansions. The article describes wavelet algorithms for analytical modeling of the mathematical expectation, the covariance matrix and the matrix of covariance functions, as well as wavelet algorithms for spectral and correlation-analytical express modeling.
Article [8] continues the thematic cycle devoted to software tools for analytical modeling of linear StS with parametric (Gaussian and non-Gaussian) noises, based on nonlinear correlation theory (the method of normal approximation and the method of canonical expansions). The analytical methods are based on orthogonal decomposition of the covariance matrix elements using two-dimensional compactly supported Daubechies wavelets and Galerkin–Petrov wavelet methods.
In [5], a wavelet CE (WLCE) for essentially nonstationary StP was proposed. Nowadays, deterministic wavelet methods are intensively applied to problems of numerical analysis and modeling. A broad class of numerical methods based on Haar wavelets has achieved great success [9]. These methods are simple in the sense of versatility and flexibility and possess a low computational cost for accuracy analysis problems. The theory and practice of wavelets attained its modern growth due to the mathematical analysis of wavelets in [10,11,12]. The concept of multiresolution analysis was given in [13]. In [14,15], a method to construct compactly supported wavelets and scaling functions was developed. Among the wavelet families described by an analytical expression, the Haar wavelets deserve special attention. Haar wavelets, in combination with the Galerkin method, are very effective and popular for solving different classes of deterministic equations [16,17,18,19,20,21,22,23,24,25]. The application of wavelets to CE of StP and to stochastic differential and integrodifferential equations was given in [7,8,26].
In [27,28], design problems for linear mean square (MS) optimal filters are considered on the basis of WLCE. Explicit formulae for calculating the MS optimal estimate of the signal and the MS optimal estimate of the quality of the constructed linear MS optimal operator are derived. Articles [29,30] are devoted to wavelet synthesis in accordance with complex statistical criteria (CsC). The basic definitions of CsC and approaches are given. The main wavelet equations, algorithms, software tools and examples are given; methodological support is based on Haar wavelets. Some particular aspects of StS wavelet synthesis under nonstationary (for example, shock) perturbations are presented in [31].
The developed wavelet algorithms have a fairly high degree of versatility and can be used in various applied fields of science. Such complex StS describe organizational, technical and economic systems functioning in the presence of internal and external noises and stochastic factors. The developed wavelet algorithms are used for data analysis and information processing in high-availability stochastic systems, in complex data storage systems, and in model building and calibration.
Let us state the general problem of the Bayes synthesis of linear nonstationary normal observable StS (OStS) by means of WLCE. Special attention will be paid to the synthesis of a linear optimal system for the criterion of the maximum probability that the signal will not exceed a particular value in absolute magnitude. As an example, the results of computer experiments are presented and discussed.

2. Bayes Criteria

In practice [1,2], the choice of a criterion for comparing alternative systems intended for the same purpose, like any question regarding the choice of criteria, is largely a matter of common sense; it can often be settled by considering the operating conditions and the purpose of the particular system.
The criterion of the maximum probability that the signal will not exceed a particular value in absolute magnitude can be represented as
$$ E[l(W, W^*)] = \min. \quad (1) $$
If we take the function $l$ as the characteristic (indicator) function of the corresponding set of error values, the following formula is valid:
$$ l(W, W^*) = \begin{cases} 1 & \text{at } |W^* - W| > w, \\ 0 & \text{at } |W^* - W| \le w. \end{cases} \quad (2) $$
In applications connected with damage accumulation, criterion (1) needs to be employed with the function $l$ in the form:
$$ l(W, W^*) = 1 - e^{-k^2 (W^* - W)^2}. $$
Thus, we get the following general principle for estimating the quality of a system and selecting the criterion of optimality. The quality of the solution of the problem in each actual case is estimated by a function $l(W, W^*)$, the value of which is determined by the actual realizations of the signal $W$ and its estimator $W^*$. It is expedient to call this the loss function. The quality of the solution on average for a given realization of the signal $W$, over all possible realizations of the estimator $W^*$ corresponding to that realization, is estimated by the conditional mathematical expectation of the loss function for the given realization of the signal:
$$ \rho(A \mid W) = E[l(W, W^*) \mid W]. $$
This quantity is called the conditional risk. The conditional risk depends on the operator $A$ of the estimator $W^*$ and on the realization of the signal $W$. Finally, the average quality of the solution over all possible realizations of $W$ and its estimator $W^*$ is characterized by the mathematical expectation of the conditional risk
$$ R(A) = E[\rho(A \mid W)] = E[l(W, W^*)]. $$
This quantity is called the mean risk.
All minimum-risk criteria corresponding to the possible loss functions or functionals, which may contain undetermined parameters, are known as Bayes criteria.
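As a numerical illustration only (a sketch, not part of the synthesis method of this article, which avoids statistical simulation): the two loss functions above and the mean risk $R(A) = E[l(W, W^*)]$ can be estimated on simulated realizations. The threshold $w = 1$, the gain $k = 1$ and the additive error model are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def indicator_loss(w_true, w_est, w=1.0):
    # loss (2): 1 if the error exceeds w in absolute magnitude, else 0
    return (np.abs(w_est - w_true) > w).astype(float)

def damage_loss(w_true, w_est, k=1.0):
    # smooth loss used in damage-accumulation applications
    return 1.0 - np.exp(-k**2 * (w_est - w_true)**2)

# mean risk R(A) = E[l(W, W*)] estimated over simulated realizations
W = rng.normal(size=100_000)                # signal realizations
W_est = W + 0.5 * rng.normal(size=W.size)   # estimator with additive error

print(indicator_loss(W, W_est).mean())      # estimate of P(|W* - W| > 1)
print(damage_loss(W, W_est).mean())
```

For the indicator loss, the mean risk is exactly the exceedance probability that criterion (1) minimizes.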

3. Basic Formulae for Optimal Bayes Synthesis of Linear Systems

Let us consider a scalar linear OStS with real StP $Z(\tau)$ $(\tau \in [t-T, t])$, which is the sum of the useful signal and the additive normal noise $X(\tau)$:
$$ Z(\tau) = \sum_{r=1}^{N} U_r \xi_r(\tau) + X(\tau). $$
The useful signal is a linear combination of the given random parameters $U_r$ $(r = \overline{1, N})$. The required output StP $W(t)$ has the following form:
$$ W(t) = \sum_{r=1}^{N} U_r \zeta_r(t) + Y(t). $$
Here, $\xi_1(\tau), \ldots, \xi_N(\tau)$, $\zeta_1(\tau), \ldots, \zeta_N(\tau)$ are known structural functions; $U_1, \ldots, U_N$ are given random variables (RV) which do not depend on the noises $X(\tau)$, $Y(\tau)$ $(EX(\tau) = 0,\ EY(\tau) = 0)$.
We seek to construct an optimal system with operator $A$ whose output StP
$$ W^*(t) = AZ, $$
based on the observed StP $Z(\tau)$ on the time interval $[t-T, t]$, reproduces the given output signal $W(t)$ with maximal accuracy for criterion (1).
It is known [1,2,3] that the solution of this problem through CE is based on a two-stage procedure built on Formulae (4) and (5).
The vector CE of $[X(\tau)\ Y(\tau)]^T$ represents the linear combination of uncorrelated RV with deterministic coordinate functions:
$$ X(\tau) = \sum_{\nu} V_\nu x_\nu(\tau), \qquad Y(\tau) = \sum_{\nu} V_\nu y_\nu(\tau). $$
According to [1,2], for $V_\nu$ we have
$$ V_\nu = \int_{t-T}^{t} a_\nu(\tau) X(\tau)\, d\tau + \int_{t-T}^{t} a_\nu(\tau) Y(\tau)\, d\tau. $$
Then, the coordinate functions are calculated by the following formulae:
$$ x_\nu(\tau) = \frac{1}{D_\nu} \int_{t-T}^{t} a_\nu(\theta) K_X(\tau, \theta)\, d\theta + \frac{1}{D_\nu} \int_{t-T}^{t} a_\nu(\theta) K_{XY}(\tau, \theta)\, d\theta, $$
$$ y_\nu(\tau) = \frac{1}{D_\nu} \int_{t-T}^{t} a_\nu(\theta) K_{XY}(\theta, \tau)\, d\theta + \frac{1}{D_\nu} \int_{t-T}^{t} a_\nu(\theta) K_Y(\tau, \theta)\, d\theta. $$
Here, $E[V_\nu] = 0$, $D_\nu = D[V_\nu]$, $K_X(\tau, \theta) = E[X(\tau) X(\theta)]$, $K_{XY}(\tau, \theta) = E[X(\tau) Y(\theta)]$, $K_Y(\tau, \theta) = E[Y(\tau) Y(\theta)]$; $a_\nu(\tau)$ is a given set of deterministic functions satisfying the biorthogonality conditions:
$$ \int_{t-T}^{t} a_\nu(\tau) x_\mu(\tau)\, d\tau + \int_{t-T}^{t} a_\nu(\tau) y_\mu(\tau)\, d\tau = \delta_{\nu\mu}. $$
Let us consider the RV
$$ Z_\nu = \int_{t-T}^{t} a_\nu(\tau) Z(\tau)\, d\tau $$
and its presentation
$$ Z_\nu = \sum_{r=1}^{N} \alpha_{\nu r} U_r + V_\nu, $$
where
$$ \alpha_{\nu r} = \int_{t-T}^{t} a_\nu(\tau) \xi_r(\tau)\, d\tau. $$
The sum of the RV $Z_\nu$, multiplied by $x_\nu(\tau)$, gives the CE of the StP $Z(\tau)$ $(\tau \in [t-T, t])$:
$$ Z(\tau) = \sum_{\nu} Z_\nu x_\nu(\tau). $$
To find the conditional mathematical expectation of the loss function for the StP $Z(\tau)$ $(\tau \in [t-T, t])$, it is necessary to find the conditional probability density of the output StP relative to the input StP $Z(\tau)$. According to (4), the StP $W(t)$ depends upon the given random parameters $U_r$ $(r = \overline{1, N})$ and the random noise $Y(t)$. So, we get
$$ Y(t) = \sum_{\nu} V_\nu y_\nu(t) = \sum_{\nu} \Big( Z_\nu - \sum_{r=1}^{N} \alpha_{\nu r} U_r \Big) y_\nu(t) = \sum_{\nu} Z_\nu y_\nu(t) - \sum_{r=1}^{N} U_r \sum_{\nu} \alpha_{\nu r} y_\nu(t). $$
Hence,
$$ W(t) = \sum_{r=1}^{N} U_r \zeta_r(t) + \sum_{\nu} Z_\nu y_\nu(t) - \sum_{r=1}^{N} U_r \sum_{\nu} \alpha_{\nu r} y_\nu(t). $$
The last formula shows that the StP $W(t)$ depends upon the random parameters $U_r$ $(r = \overline{1, N})$ and the set of $Z_\nu$.
Let us introduce the vector of RV $U = [U_1\ U_2\ \ldots\ U_N]^T$. The conditional distribution of $U$ relative to the StP $Z(\tau)$ coincides with its distribution relative to the set of RV $Z_\nu$. The conditional density $f_1(u \mid z_1, z_2, \ldots)$ is defined by the known formula:
$$ f_1(u \mid z_1, z_2, \ldots) = \frac{f(u)\, f_2(z_1, z_2, \ldots \mid u)}{\displaystyle\int_{-\infty}^{+\infty} f(u)\, f_2(z_1, z_2, \ldots \mid u)\, du}. $$
Here, $f(u)$ is a given a priori density of the RV $U = [U_1\ U_2\ \ldots\ U_N]^T$; $f_2(z_1, z_2, \ldots \mid u)$ is the density of the RV $Z_\nu$ relative to $U$.
Taking into account that the vector random noise is normal and that $V_\nu$ is a linear transform of the vector $[X(\tau)\ Y(\tau)]^T$, we conclude that the RV $V_\nu$ are not only uncorrelated, but also independent. The joint density of the $V_\nu$, with zero mathematical expectations and variances $D_\nu$, is expressed by the formula
$$ f_V(v_1, v_2, \ldots) = \frac{1}{\sqrt{(2\pi)^L D_1 D_2 \cdots}} \exp\Big\{ -\frac{1}{2} \sum_{\nu} \frac{v_\nu^2}{D_\nu} \Big\}. $$
In (7), let us replace the RV $U_1, \ldots, U_N$ with their realizations $u_1, \ldots, u_N$; then, $Z_\nu$ is a linear function of the RV $V_\nu$ with known joint density. Expressing $V_\nu$ by $Z_\nu$ and using Formula (21), we get:
$$ f_2(z_1, z_2, \ldots \mid u) = \frac{1}{\sqrt{(2\pi)^L D_1 D_2 \cdots}} \exp\Big\{ -\frac{1}{2} \sum_{\nu} \frac{1}{D_\nu} \Big( z_\nu - \sum_{r=1}^{N} \alpha_{\nu r} u_r \Big)^2 \Big\}, $$
where $\alpha_\nu(u) = \sum_{r=1}^{N} \alpha_{\nu r} u_r$.
After substituting Formula (22) into (20), we get the formula for the a posteriori density $f_1(u \mid z_1, z_2, \ldots)$ of $U$ for the input StP $Z(\tau)$ $(\tau \in [t-T, t])$:
$$ f_1(u \mid z_1, z_2, \ldots) = \chi(z) f(u) \exp\Big\{ \sum_{\nu} \frac{z_\nu \alpha_\nu(u)}{D_\nu} - \frac{1}{2} \sum_{\nu} \frac{\alpha_\nu^2(u)}{D_\nu} \Big\}, $$
$$ \chi(z) = \Big[ \int_{-\infty}^{+\infty} f(u) \exp\Big\{ \sum_{\nu} \frac{z_\nu \alpha_\nu(u)}{D_\nu} - \frac{1}{2} \sum_{\nu} \frac{\alpha_\nu^2(u)}{D_\nu} \Big\}\, du \Big]^{-1}. $$
This formula may be used after the observation, when a realization of $Z(\tau)$ is available.
The a posteriori mathematical expectation of the loss function $l(W, W^*)$ is the conditional risk, denoted as $\rho(A \mid W)$:
$$ \rho(A \mid W) = E[l(W, W^*) \mid Z] = \chi(z) \int_{-\infty}^{+\infty} l(W, W^*) f(u) \exp\Big\{ \sum_{\nu} \frac{z_\nu \alpha_\nu(u)}{D_\nu} - \frac{1}{2} \sum_{\nu} \frac{\alpha_\nu^2(u)}{D_\nu} \Big\}\, du. $$
In order to solve the stated problem, it is necessary to calculate the optimal output StP $W^*(t)$ for every $t$ from the condition of the minimum of integral (11).
Let us consider this integral as a function of $P_W = W^*(t)$ at fixed values of the parameters
$$ \eta_0 = \eta_0(z_1, z_2, \ldots) = \sum_{\nu} z_\nu y_\nu(t), \qquad \eta_r = \eta_r(z_1, z_2, \ldots) = \sum_{\nu} \frac{\alpha_{\nu r} z_\nu}{D_\nu} \quad (r = \overline{1, N}) $$
and time $t$:
$$ I(P_W, \eta_1, \ldots, \eta_N, t) = \int_{-\infty}^{+\infty} \cdots \int_{-\infty}^{+\infty} l\Big( \sum_{r=1}^{N} u_r (\zeta_r(t) - b_{r0}) + \eta_0,\ P_W \Big) f(u_1, \ldots, u_N) \exp\Big\{ \sum_{r=1}^{N} \eta_r u_r - \frac{1}{2} \sum_{p,q=1}^{N} b_{pq} u_p u_q \Big\}\, du_1 \cdots du_N. $$
Here,
$$ b_{p0} = \sum_{\nu} \alpha_{\nu p} y_\nu(t), \qquad b_{pq} = \sum_{\nu} \frac{1}{D_\nu} \alpha_{\nu p} \alpha_{\nu q} \quad (p, q = \overline{1, N}). $$
The value of the parameter $P_W = P_0^W(t, \eta_0, \eta_1, \ldots, \eta_N)$ at which integral (27) reaches its minimum defines the Bayes optimal operator for criterion (1). Replacing in $P_0^W(t, \eta_0, \ldots, \eta_N)$ the variables $\eta_0, \ldots, \eta_N$ and $z_1, z_2, \ldots$ with the corresponding RV $H_0, \ldots, H_N$ and $Z_1, Z_2, \ldots$, we get the required optimal operator:
$$ W^*(t) = AZ = P_0^W(t, H_0, \ldots, H_N), $$
where
$$ H_0 = \sum_{\nu} Z_\nu y_\nu(t), \qquad H_r = H_r(Z_1, Z_2, \ldots) = \sum_{\nu} \frac{\alpha_{\nu r} Z_\nu}{D_\nu} \quad (r = \overline{1, N}). $$
The quality of the optimal operator is estimated by the mean risk [1,2]
$$ R(A) = E[\rho(A \mid W)] = E[l(W, W^*)] = \int_{-\infty}^{+\infty} \cdots \int_{-\infty}^{+\infty} l\Big( \sum_{r=1}^{N} u_r (\zeta_r(t) - b_{r0}) + \eta_0,\ P_0^W \Big) f_2(z_1, z_2, \ldots \mid u) f(u)\, dz_1\, dz_2 \cdots du. $$
So, we get the basic Formulae (23)–(31) necessary for the wavelet canonical expansion method.

4. Wavelet Canonical Expansions Method

Let us construct the operator of an optimal linear system using the Haar wavelet CE (WLCE) method [5,6]. The Haar basis is
$$ \{\varphi_{00}(\bar\tau),\ \psi_{jk}(\bar\tau)\}, $$
where
$$ \varphi_{00}(\bar\tau) = \varphi(\bar\tau) = \begin{cases} 1, & \bar\tau \in [0, 1), \\ 0, & \bar\tau \notin [0, 1) \end{cases} $$
is a scaling function,
$$ \psi_{00}(\bar\tau) = \psi(\bar\tau) = \begin{cases} 1, & \bar\tau \in [0, \tfrac{1}{2}), \\ -1, & \bar\tau \in [\tfrac{1}{2}, 1), \\ 0, & \bar\tau \notin [0, 1) \end{cases} $$
is a mother wavelet, and $\psi_{jk}(\bar\tau) = 2^{j/2} \psi(2^j \bar\tau - k)$ are wavelets of level $j$ for $j = 1, 2, \ldots, J$; $k = 0, 1, \ldots, 2^j - 1$. Here, $J$ is the maximal resolution level, defined by the required accuracy of approximation of any function $f(\bar\tau) \in L_2[0, 1]$ by a finite linear combination of Haar wavelets, equal to $2^{-(J+2)}$.
Then, let us present the one-dimensional wavelet basis (32) as:
$$ g_1(\bar\tau) = \varphi_{00}(\bar\tau), \quad g_2(\bar\tau) = \psi_{00}(\bar\tau), \quad g_\nu(\bar\tau) = \psi_{jk}(\bar\tau), $$
$$ j = 1, 2, \ldots, J; \quad k = 0, 1, \ldots, 2^j - 1; \quad \nu = 2^j + k + 1; \quad \nu = \overline{3, L}. $$
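The indexing $\nu = 2^j + k + 1$ and the resulting basis of $L = 2^{J+1}$ functions can be sketched as follows (a sketch assuming the orthonormal normalization $2^{j/2}$; the grid-based orthonormality check is purely illustrative):

```python
import numpy as np

def haar_g(nu, tau, J=2):
    """Basis function g_nu on [0, 1): g_1 = phi_00, g_2 = psi_00,
    g_nu = psi_jk with nu = 2**j + k + 1 (j = 1..J, k = 0..2**j - 1)."""
    tau = np.asarray(tau, dtype=float)
    if nu == 1:                                   # scaling function phi_00
        return np.where((tau >= 0) & (tau < 1), 1.0, 0.0)
    if nu == 2:                                   # mother wavelet psi_00
        j, k = 0, 0
    else:                                         # invert nu = 2**j + k + 1
        j = int(np.floor(np.log2(nu - 1)))
        k = nu - 1 - 2**j
    s = 2**j * tau - k                            # rescaled argument
    up = (s >= 0) & (s < 0.5)
    down = (s >= 0.5) & (s < 1)
    return 2**(j / 2) * (up.astype(float) - down.astype(float))

# L = 2**(J+1) basis functions; check orthonormality on a fine midpoint grid
J, L = 2, 8
t = (np.arange(4096) + 0.5) / 4096
G = np.stack([haar_g(nu, t, J) for nu in range(1, L + 1)])
gram = G @ G.T / t.size
print(np.allclose(gram, np.eye(L), atol=1e-3))  # True
```

For $J = 2$ this yields exactly the $L = 8$ functions used in the example of Section 6.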
For the construction of the Haar WLCE of the vector $[X(\tau)\ Y(\tau)]^T$ at $\tau \in [t-T, t]$, we pass to the new time variable $\bar\tau \in [0, 1]$, $\bar\tau = \frac{\tau - (t-T)}{T}$, and assume
$$ K_X(\tau_1, \tau_2),\ K_{XY}(\tau_1, \tau_2),\ K_Y(\tau_1, \tau_2) \in L_2([t-T, t] \times [t-T, t]), $$
$$ \bar K_X(\bar\tau_1, \bar\tau_2),\ \bar K_{XY}(\bar\tau_1, \bar\tau_2),\ \bar K_Y(\bar\tau_1, \bar\tau_2) \in L_2([0, 1] \times [0, 1]). $$
Additionally, for the presentation of the given covariance functions in the form of two-dimensional wavelet expansions, it is necessary to define the two-dimensional orthogonal basis through the tensor composition of the one-dimensional bases (32), with scaling performed simultaneously in the two variables:
$$ \Phi^A(\bar\tau_1, \bar\tau_2) = \varphi_{00}(\bar\tau_1) \varphi_{00}(\bar\tau_2), \quad \Psi^H(\bar\tau_1, \bar\tau_2) = \varphi_{00}(\bar\tau_1) \psi_{00}(\bar\tau_2), $$
$$ \Psi^B(\bar\tau_1, \bar\tau_2) = \psi_{00}(\bar\tau_1) \varphi_{00}(\bar\tau_2), \quad \Psi^D_{jkn}(\bar\tau_1, \bar\tau_2) = \psi_{jk}(\bar\tau_1) \psi_{jn}(\bar\tau_2), $$
where $j = 0, 1, \ldots, J$; $k, n = 0, 1, \ldots, 2^j - 1$.
So, the two-dimensional wavelet expansion of the given covariance functions takes the form
$$ \bar K_X(\bar\tau_1, \bar\tau_2) = a^x \Phi^A + h^x \Psi^H + b^x \Psi^B + \sum_{j=0}^{J} \sum_{k=0}^{2^j - 1} \sum_{n=0}^{2^j - 1} d^x_{jkn} \Psi^D_{jkn}(\bar\tau_1, \bar\tau_2), $$
where
$$ a^x = \int_0^1 \int_0^1 \bar K_X \Phi^A\, d\bar\tau_1\, d\bar\tau_2, \qquad h^x = \int_0^1 \int_0^1 \bar K_X \Psi^H\, d\bar\tau_1\, d\bar\tau_2, $$
$$ b^x = \int_0^1 \int_0^1 \bar K_X \Psi^B\, d\bar\tau_1\, d\bar\tau_2, \qquad d^x_{jkn} = \int_0^1 \int_0^1 \bar K_X \Psi^D_{jkn}\, d\bar\tau_1\, d\bar\tau_2; $$
and analogously
$$ \bar K_{XY}(\bar\tau_1, \bar\tau_2) = a^{xy} \Phi^A + h^{xy} \Psi^H + b^{xy} \Psi^B + \sum_{j=0}^{J} \sum_{k=0}^{2^j - 1} \sum_{n=0}^{2^j - 1} d^{xy}_{jkn} \Psi^D_{jkn}(\bar\tau_1, \bar\tau_2), $$
$$ \bar K_Y(\bar\tau_1, \bar\tau_2) = a^y \Phi^A + h^y \Psi^H + b^y \Psi^B + \sum_{j=0}^{J} \sum_{k=0}^{2^j - 1} \sum_{n=0}^{2^j - 1} d^y_{jkn} \Psi^D_{jkn}(\bar\tau_1, \bar\tau_2), $$
where the coefficients $a^{xy}, h^{xy}, b^{xy}, d^{xy}_{jkn}$ and $a^y, h^y, b^y, d^y_{jkn}$ are defined by the same integrals with $\bar K_X$ replaced by $\bar K_{XY}$ and $\bar K_Y$, respectively.
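A minimal sketch of computing these coefficients by midpoint quadrature follows; the covariance $\bar K_X(\bar\tau_1, \bar\tau_2) = e^{-|\bar\tau_1 - \bar\tau_2|}$ is an illustrative assumption (the exponential form used in Section 6):

```python
import numpy as np

def phi(t):   # Haar scaling function on [0, 1)
    t = np.asarray(t, float)
    return np.where((t >= 0) & (t < 1), 1.0, 0.0)

def psi_jk(t, j, k):  # normalized Haar wavelet 2^{j/2} psi(2^j t - k)
    s = 2**j * np.asarray(t, float) - k
    return 2**(j / 2) * (((s >= 0) & (s < 0.5)).astype(float)
                         - ((s >= 0.5) & (s < 1)).astype(float))

# midpoint grid for the double integrals over [0, 1] x [0, 1]
n = 1024
t = (np.arange(n) + 0.5) / n
T1, T2 = np.meshgrid(t, t, indexing="ij")
K = np.exp(-np.abs(T1 - T2))          # illustrative covariance function

def coef(F):  # quadrature for the integral of K * F over the unit square
    return (K * F).sum() / n**2

a_x = coef(phi(T1) * phi(T2))                  # coefficient of Phi^A
h_x = coef(phi(T1) * psi_jk(T2, 0, 0))         # coefficient of Psi^H
b_x = coef(psi_jk(T1, 0, 0) * phi(T2))         # coefficient of Psi^B
J = 2
d_x = {(j, k, m): coef(psi_jk(T1, j, k) * psi_jk(T2, j, m))
       for j in range(J + 1) for k in range(2**j) for m in range(2**j)}

print(round(float(a_x), 4))
```

For this symmetric covariance, $h^x = b^x$, which is a quick sanity check on the quadrature.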
After the transition to the time variable $\bar\tau \in [0, 1]$, $\bar\tau = \frac{\tau - (t-T)}{T}$, i.e., $\tau = \tau(\bar\tau) = T\bar\tau + (t - T)$, expression (3) takes the form
$$ Z(\tau) = Z(\tau(\bar\tau)) = \bar Z(\bar\tau) = \sum_{r=1}^{N} U_r \bar\xi_r(\bar\tau) + \bar X(\bar\tau). $$
Analogously, we have
$$ V_\nu = T \bar V_\nu, \quad \bar V_\nu = \int_0^1 a_\nu(\bar\tau) \bar X(\bar\tau)\, d\bar\tau + \int_0^1 a_\nu(\bar\tau) \bar Y(\bar\tau)\, d\bar\tau, \quad D_\nu = T^2 \bar D_\nu, \quad \bar D_\nu = D[\bar V_\nu]. $$
According to [3,5], the functions $a_\nu(\bar\tau)$ may be expressed by the basis functions:
$$ a_1(\bar\tau) = g_1(\bar\tau), \qquad a_\nu(\bar\tau) = \sum_{\lambda=1}^{\nu-1} c_{\nu\lambda} g_\lambda(\bar\tau) + g_\nu(\bar\tau) \quad (\nu = \overline{2, L}). $$
Using the notations
$$ \bar x_\nu(\bar\tau) = \frac{1}{\bar D_\nu} \int_0^1 a_\nu(\bar\theta) \bar K_X(\bar\tau, \bar\theta)\, d\bar\theta + \frac{1}{\bar D_\nu} \int_0^1 a_\nu(\bar\theta) \bar K_{XY}(\bar\tau, \bar\theta)\, d\bar\theta, $$
$$ \bar y_\nu(\bar\tau) = \frac{1}{\bar D_\nu} \int_0^1 a_\nu(\bar\theta) \bar K_{XY}(\bar\theta, \bar\tau)\, d\bar\theta + \frac{1}{\bar D_\nu} \int_0^1 a_\nu(\bar\theta) \bar K_Y(\bar\tau, \bar\theta)\, d\bar\theta, $$
we get the following formulae:
$$ x_\nu(\tau) = x_\nu(\tau(\bar\tau)) = \frac{1}{T} \bar x_\nu(\bar\tau), \qquad y_\nu(\tau) = y_\nu(\tau(\bar\tau)) = \frac{1}{T} \bar y_\nu(\bar\tau), $$
$$ X(\tau(\bar\tau)) = \sum_{\nu=1}^{L} V_\nu x_\nu(\tau(\bar\tau)) = \sum_{\nu=1}^{L} (T \bar V_\nu) \frac{1}{T} \bar x_\nu(\bar\tau) = \sum_{\nu=1}^{L} \bar V_\nu \bar x_\nu(\bar\tau), $$
$$ Y(\tau(\bar\tau)) = \sum_{\nu=1}^{L} V_\nu y_\nu(\tau(\bar\tau)) = \sum_{\nu=1}^{L} (T \bar V_\nu) \frac{1}{T} \bar y_\nu(\bar\tau) = \sum_{\nu=1}^{L} \bar V_\nu \bar y_\nu(\bar\tau). $$
Here, the RV $\bar V_\nu$ have zero mathematical expectations and variances $\bar D_\nu$; the coordinate functions $\bar x_\nu(\bar\tau)$ and $\bar y_\nu(\bar\tau)$ are successively defined by the following formulae:
$$ \bar x_1(\bar\tau) = \frac{1}{\bar D_1} h_1^x(\bar\tau); \qquad \bar x_\nu(\bar\tau) = \frac{1}{\bar D_\nu} \Big( \sum_{\lambda=1}^{\nu-1} d_{\nu\lambda} h_\lambda^x(\bar\tau) + h_\nu^x(\bar\tau) \Big); $$
$$ \bar y_1(\bar\tau) = \frac{1}{\bar D_1} h_1^y(\bar\tau); \qquad \bar y_\nu(\bar\tau) = \frac{1}{\bar D_\nu} \Big( \sum_{\lambda=1}^{\nu-1} d_{\nu\lambda} h_\lambda^y(\bar\tau) + h_\nu^y(\bar\tau) \Big), $$
where
$$ d_{\nu\lambda} = c_{\nu\lambda} + \sum_{\mu=\lambda+1}^{\nu-1} c_{\nu\mu} d_{\mu\lambda} \quad (\lambda = \overline{1, \nu-2}); \qquad d_{\nu,\nu-1} = c_{\nu,\nu-1}; \quad \nu = \overline{2, L}; $$
$$ c_{\nu 1} = \frac{k_{\nu 1}}{\bar D_1} \quad (\nu = \overline{2, L}); \qquad c_{\nu\mu} = \frac{1}{\bar D_\mu} \Big( k_{\nu\mu} - \sum_{\lambda=1}^{\mu-1} \bar D_\lambda c_{\mu\lambda} c_{\nu\lambda} \Big) \quad (\mu = \overline{2, \nu-1};\ \nu = \overline{3, L}); $$
$$ \bar D_1 = k_{11}; \qquad \bar D_\nu = k_{\nu\nu} - \sum_{\lambda=1}^{\nu-1} \bar D_\lambda |c_{\nu\lambda}|^2 \quad (\nu = \overline{2, L}). $$
The parameters $k_{\nu\mu}$ are expressed by the coefficients of the two-dimensional wavelet expansions of the covariance functions $\bar K_X(\bar\tau_1, \bar\tau_2)$, $\bar K_{XY}(\bar\tau_1, \bar\tau_2)$ and $\bar K_Y(\bar\tau_1, \bar\tau_2)$:
$$ k_{11} = a^x + 2a^{xy} + a^y, \quad k_{12} = h^x + 2h^{xy} + h^y, \quad k_{21} = b^x + 2b^{xy} + b^y, \quad k_{22} = d^x_{000} + 2d^{xy}_{000} + d^y_{000}, $$
$$ k_{\nu\mu} = d^x_{jkn} + 2d^{xy}_{jkn} + d^y_{jkn} \quad (\nu = 2^j + k + 1;\ \mu = 2^j + n + 1;\ j = \overline{1, J};\ k, n = 0, 1, \ldots, 2^j - 1). $$
The other $k_{\nu\mu} = 0$.
The auxiliary functions $h_\nu^x(\bar\tau)$, $h_\nu^y(\bar\tau)$ are expressed by the basis wavelet functions (38) and the coefficients of the wavelet expansions of the covariance functions $\bar K_X(\bar\tau_1, \bar\tau_2)$, $\bar K_{XY}(\bar\tau_1, \bar\tau_2)$, $\bar K_Y(\bar\tau_1, \bar\tau_2)$:
$$ h_1^x(\bar\tau) = (a^x + a^{xy}) \varphi_{00}(\bar\tau) + (b^x + b^{xy}) \psi_{00}(\bar\tau), \qquad h_1^y(\bar\tau) = (a^{xy} + a^y) \varphi_{00}(\bar\tau) + (b^{xy} + b^y) \psi_{00}(\bar\tau), $$
$$ h_2^x(\bar\tau) = (h^x + h^{xy}) \varphi_{00}(\bar\tau) + (d^x_{000} + d^{xy}_{000}) \psi_{00}(\bar\tau), \qquad h_2^y(\bar\tau) = (h^{xy} + h^y) \varphi_{00}(\bar\tau) + (d^{xy}_{000} + d^y_{000}) \psi_{00}(\bar\tau), $$
$$ h_\nu^x(\bar\tau) = \sum_{k=0}^{2^j - 1} (d^x_{jkn} + d^{xy}_{jkn}) \psi_{jk}(\bar\tau), \qquad h_\nu^y(\bar\tau) = \sum_{k=0}^{2^j - 1} (d^{xy}_{jkn} + d^y_{jkn}) \psi_{jk}(\bar\tau) $$
$$ (\nu = \overline{3, L};\ \nu = 2^j + n + 1;\ n = 0, 1, \ldots, 2^j - 1). $$
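The recursion above for $c_{\nu\mu}$ and $\bar D_\nu$ can be sketched as follows. The sanity check assumes a symmetric positive definite matrix $(k_{\nu\mu})$, for which the recursion reduces to an LDL-type factorization; this symmetry is an assumption made here for illustration only:

```python
import numpy as np

def canonical_coeffs(k):
    """Compute the coefficients c[nu][lam] and variances Dbar[nu] from the
    matrix k[nu][mu] via the recursion of Section 4 (0-based indices)."""
    L = k.shape[0]
    c = np.eye(L)                      # c[nu][nu] = 1, c[nu][lam] below diagonal
    Dbar = np.zeros(L)
    Dbar[0] = k[0, 0]
    for nu in range(1, L):
        c[nu, 0] = k[nu, 0] / Dbar[0]
        for mu in range(1, nu):
            c[nu, mu] = (k[nu, mu]
                         - sum(Dbar[lam] * c[mu, lam] * c[nu, lam]
                               for lam in range(mu))) / Dbar[mu]
        Dbar[nu] = k[nu, nu] - sum(Dbar[lam] * abs(c[nu, lam])**2
                                   for lam in range(nu))
    return c, Dbar

# sanity check: for symmetric positive definite k, k = C diag(Dbar) C^T
rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
k = A @ A.T + 5 * np.eye(5)
C, Dbar = canonical_coeffs(k)
print(np.allclose(C @ np.diag(Dbar) @ C.T, k))  # True
```

The positivity of the computed $\bar D_\nu$ serves as a practical check that the supplied $(k_{\nu\mu})$ came from valid covariance data.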
Considering (45) and (46), we get
$$ Z_\nu = T \bar Z_\nu, \qquad \bar Z_\nu = \sum_{r=1}^{N} \bar\alpha_{\nu r} U_r + \bar V_\nu, $$
$$ \alpha_{\nu r} = T \bar\alpha_{\nu r}, \qquad \bar\alpha_{\nu r} = \int_0^1 a_\nu(\bar\tau) \bar\xi_r(\bar\tau)\, d\bar\tau. $$
If the functions $\xi_1(\tau), \ldots, \xi_N(\tau) \in L_2[t-T, t]$, then $\bar\xi_1(\bar\tau), \ldots, \bar\xi_N(\bar\tau) \in L_2[0, 1]$ and have the wavelet expansions
$$ \bar\xi_r(\bar\tau) = a_r^\xi \varphi_{00}(\bar\tau) + \sum_{j=0}^{J} \sum_{k=0}^{2^j - 1} d^\xi_{rjk} \psi_{jk}(\bar\tau) \quad (r = 1, \ldots, N), $$
$$ a_r^\xi = \int_0^1 \bar\xi_r(\bar\tau) \varphi_{00}(\bar\tau)\, d\bar\tau, \qquad d^\xi_{rjk} = \int_0^1 \bar\xi_r(\bar\tau) \psi_{jk}(\bar\tau)\, d\bar\tau. $$
Using notation (38), we get from (61) and (62)
$$ \bar\xi_r(\bar\tau) = c^\xi_{r1} g_1(\bar\tau) + \sum_{\nu=2}^{L} c^\xi_{r\nu} g_\nu(\bar\tau) \quad (\nu = 2^j + k + 1;\ j = \overline{0, J};\ k = 0, 1, \ldots, 2^j - 1;\ r = 1, \ldots, N), $$
$$ c^\xi_{r1} = a_r^\xi, \qquad c^\xi_{r\nu} = d^\xi_{rjk}. $$
From (60), (62) and (64), we have
$$ \bar\alpha_{1r} = c^\xi_{r1}; \qquad \bar\alpha_{\nu r} = \sum_{\lambda=1}^{\nu-1} c_{\nu\lambda} c^\xi_{r\lambda} + c^\xi_{r\nu} \quad (\nu = \overline{2, L}). $$
Finally, using the formulae
$$ \sum_{\nu=1}^{L} Z_\nu x_\nu(\tau) = \sum_{\nu=1}^{L} (T \bar Z_\nu) \frac{1}{T} \bar x_\nu(\bar\tau) = \sum_{\nu=1}^{L} \bar Z_\nu \bar x_\nu(\bar\tau), $$
we get the required WLCE of the StP $Z(\tau)$ $(\tau \in [t-T, t])$:
$$ Z(\tau) = Z(\tau(\bar\tau)) = \bar Z(\bar\tau) = \sum_{\nu=1}^{L} \bar Z_\nu \bar x_\nu(\bar\tau). $$
In the basic Formulae (23)–(31), the parameters are expressed as follows:
$$ \eta_0 = \sum_{\nu=1}^{L} z_\nu y_\nu(\tau) = \sum_{\nu=1}^{L} (T \bar z_\nu) \frac{1}{T} \bar y_\nu(\bar\tau) = \sum_{\nu=1}^{L} \bar z_\nu \bar y_\nu(\bar\tau), $$
$$ \eta_r = \sum_{\nu=1}^{L} \frac{\alpha_{\nu r} z_\nu}{D_\nu} = \sum_{\nu=1}^{L} \frac{(T \bar\alpha_{\nu r})(T \bar z_\nu)}{T^2 \bar D_\nu} = \sum_{\nu=1}^{L} \frac{\bar\alpha_{\nu r} \bar z_\nu}{\bar D_\nu} \quad (r = \overline{1, N}), $$
$$ b_{p0} = \sum_{\nu=1}^{L} \alpha_{\nu p} y_\nu(\tau) = \sum_{\nu=1}^{L} (T \bar\alpha_{\nu p}) \frac{1}{T} \bar y_\nu(\bar\tau) = \sum_{\nu=1}^{L} \bar\alpha_{\nu p} \bar y_\nu(\bar\tau), $$
$$ b_{pq} = \sum_{\nu=1}^{L} \frac{1}{D_\nu} \alpha_{\nu p} \alpha_{\nu q} = \sum_{\nu=1}^{L} \frac{(T \bar\alpha_{\nu p})(T \bar\alpha_{\nu q})}{T^2 \bar D_\nu} = \sum_{\nu=1}^{L} \frac{1}{\bar D_\nu} \bar\alpha_{\nu p} \bar\alpha_{\nu q}. $$
Note that the expression $P_0^W(t, \eta_0, \ldots, \eta_N)$ depends on the fixed values $\bar z_1, \ldots, \bar z_L$ of $\bar Z_1, \bar Z_2, \ldots, \bar Z_L$.
So, the WLCE method is defined by Formulae (67)–(71) under conditions (61)–(65).

5. Synthesis of a Linear Optimal System for the Criterion of the Maximum Probability That the Signal Will Not Exceed a Particular Value in Absolute Magnitude

In case (2), the conditional risk $\rho(A \mid W)$ is equal to the probability that the error leaves the interval $(-w(t), w(t))$:
$$ \rho(A \mid W) = E[l(W, W^*) \mid W] = P(|W^* - W| \ge w(t)) = 1 - P(|W^* - W| < w(t)). $$
The a priori density $f(u) = f(u_1, \ldots, u_N)$ of the RV $U = [U_1\ U_2\ \ldots\ U_N]^T$ is defined by the formula
$$ f(u_1, \ldots, u_N) = \big[ (2\pi)^N |K| \big]^{-1/2} \exp\Big\{ -\frac{1}{2} \sum_{p,q=1}^{N} c_{pq} u_p u_q \Big\}, $$
where $K$ is the covariance matrix of $U$ and $c_{pq}$ $(p, q = \overline{1, N})$ are the elements of $K^{-1}$.
Let us find the minimum of the integral
$$ I(P_W, \eta_0, \ldots, \eta_N, t) = \big[ (2\pi)^N |K| \big]^{-1/2} \int\limits_{\left| \sum_{r=1}^{N} u_r (\zeta_r(t) - b_{r0}) + \eta_0 - P_W \right| \ge w(t)} \exp\Big\{ \sum_{r=1}^{N} \eta_r u_r - \frac{1}{2} \sum_{p,q=1}^{N} (c_{pq} + b_{pq}) u_p u_q \Big\}\, du_1 \cdots du_N. $$
Integral (74) is proportional to the probability that the normal point $(U_1, U_2, \ldots, U_N)$ does not fall into the subspace defined by the inequality $\left| \sum_{r=1}^{N} u_r (\zeta_r(t) - b_{r0}) + \eta_0 - P_W \right| < w(t)$. This probability is minimal if the mathematical expectation of the point lies on the hyperplane $\sum_{r=1}^{N} u_r (\zeta_r(t) - b_{r0}) + \eta_0 - P_W = 0$. The normal density attains its maximum at the mathematical expectation, so to define it, it is enough to equate the partial derivatives of the exponent in (74) with respect to $u_1, u_2, \ldots, u_N$ to zero. The value $P_0^W(t, \eta_0, \ldots, \eta_N)$ minimizing (74) is equal to
$$ P_0^W = \sum_{r=1}^{N} \lambda_r(t) (\zeta_r(t) - b_{r0}) + \eta_0. $$
To find the functions $\lambda_1(t), \lambda_2(t), \ldots, \lambda_N(t)$, it is necessary to solve the system of linear algebraic equations:
$$ \sum_{p=1}^{N} \lambda_p(t) (c_{pq} + b_{pq}) = \eta_q \quad (q = \overline{1, N}). $$
In matrix form, Equation (76) is as follows:
$$ C_1 \Lambda = A_1^T Z_1, $$
where
$$ C_1 = (c_{ij} + b_{ij})_{i,j=1}^{N}, \quad A_1 = \Big( \frac{\bar\alpha_{ij}}{\bar D_i} \Big)_{i=1,\, j=1}^{L,\, N}, \quad Z_1 = [\bar z_1, \bar z_2, \ldots, \bar z_L]^T, \quad \Lambda = [\lambda_1(t), \ldots, \lambda_N(t)]^T. $$
Hence,
$$ \Lambda = C_1^{-1} A_1^T Z_1. $$
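In code, $\Lambda$ is better obtained by a linear solve than by explicitly inverting $C_1$; the dimensions and the sample values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
N, L = 2, 8

# C1 = (c_ij + b_ij): an N x N symmetric positive definite matrix (sample values)
C1 = np.array([[2.0, 0.3],
               [0.3, 1.5]])
# A1 = (alpha_bar_ij / Dbar_i): L x N; Z1 = observed expansion coefficients
A1 = rng.normal(size=(L, N))
Z1 = rng.normal(size=L)

# Lambda = C1^{-1} A1^T Z1, computed via a linear solve for numerical stability
Lam = np.linalg.solve(C1, A1.T @ Z1)
print(Lam.shape)  # (2,)
```

The same `np.linalg.solve` call replaces every occurrence of $C_1^{-1}$ in the operator formulae below.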
Using the notations
$$ B_1 = \begin{pmatrix} \zeta_1(t) - b_{10} \\ \vdots \\ \zeta_N(t) - b_{N0} \end{pmatrix}, \qquad Y_1 = \begin{pmatrix} \bar y_1(t) \\ \vdots \\ \bar y_L(t) \end{pmatrix}, $$
we get the Bayes optimal operator in matrix form:
$$ A = B_1^T C_1^{-1} A_1^T + Y_1^T. $$
The Bayes optimal estimate of the output StP is defined by
$$ W^*(t) = A Z_1. $$
The mean risk is
$$ R(A) = \big[ (2\pi)^{N+L} D_1 \cdots D_L |K| \big]^{-1/2} \int\limits_{\left| \sum_{r=1}^{N} u_r (\zeta_r(t) - b_{r0}) + \eta_0 - P_0^W \right| \ge w(t)} \exp\Big\{ -\frac{1}{2} \sum_{\nu=1}^{L} \frac{z_\nu^2}{D_\nu} + \sum_{\nu=1}^{L} \sum_{r=1}^{N} \frac{\alpha_{\nu r}}{D_\nu} z_\nu u_r - \frac{1}{2} \sum_{p,q=1}^{N} (c_{pq} + b_{pq}) u_p u_q \Big\}\, du_1 \cdots du_N\, dz_1 \cdots dz_L $$
$$ = 1 - \big[ (2\pi)^{N+L} D_1 \cdots D_L |K| \big]^{-1/2} \int\limits_{\left| \sum_{r=1}^{N} u_r (\zeta_r(t) - b_{r0}) + \eta_0 - P_0^W \right| < w(t)} \exp\Big\{ -\frac{1}{2} \sum_{\nu=1}^{L} \frac{z_\nu^2}{D_\nu} + \sum_{\nu=1}^{L} \sum_{r=1}^{N} \frac{\alpha_{\nu r}}{D_\nu} z_\nu u_r - \frac{1}{2} \sum_{p,q=1}^{N} (c_{pq} + b_{pq}) u_p u_q \Big\}\, du_1 \cdots du_N\, dz_1 \cdots dz_L. $$
Equations (75)–(83) define the method of synthesis of a linear system for the criterion of the maximum probability that the signal will not exceed a particular value in absolute magnitude.
The new results generalize the following particular results [27,28,29,30,31] for different Bayes criteria in OStS:
Mean square error;
Complex statistical criteria;
Criterion of the maximum probability that the signal will not exceed a particular value in absolute magnitude.

6. Example

The designed software tools based on the results of Section 5 provide the possibility to compare mathematical models of different classes of linear OStS and their optimal instrumental potential accuracy in the presence of stochastic factors and noises.
Let us consider the extrapolator of a radar-location device described by the following equations:
$$ Z(\tau) = U_1 + U_2 \tau + X(\tau), \qquad W(t) = U_1 + U_2 (t + \Delta), \qquad \tau \in [t-T, t]. $$
Here, $U_1$ and $U_2$ are random parameters of the calibrated device, $\Delta$ is the extrapolation lead, and $X$ is colored noise. For the criterion of the maximum probability that the signal will not exceed a particular value in absolute magnitude, we use algorithm (82).
Suppose that:
The noise $X(t)$ is normal, $EX(t) = 0$, $K_X(\tau_1, \tau_2) = D \exp\{-\alpha |\tau_2 - \tau_1|\}$;
The random parameters $U_1, U_2$ are normal with the joint density
$$ f(u_1, u_2) = \frac{\sqrt{c_{11} c_{22} - c_{12}^2}}{2\pi} \exp\Big\{ -\frac{1}{2} \sum_{p,q=1}^{2} c_{pq} u_p u_q \Big\} $$
($c_{pq}$ are the elements of the inverse covariance matrix $K^{-1}$);
Input data:
$$ t \in [9, 18], \quad T = 8, \quad \Delta = 1, \quad D = 1, \quad \alpha = 1, \quad K = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, $$
$$ \xi_1(\tau) = 1, \quad \xi_2(\tau) = \tau; \quad \zeta_1(t) = 1, \quad \zeta_2(t) = t + \Delta; \quad J = 2, \quad L = 8. $$
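A sketch assembling the input data of this example follows; the symbol $\Delta$ for the extrapolation lead is an assumption of this sketch, and the normalized-time covariance is obtained by the substitution $\tau = T\bar\tau + (t - T)$ of Section 4:

```python
import numpy as np

# input data of the example (Section 6)
D, alpha, T, delta = 1.0, 1.0, 8.0, 1.0   # delta: extrapolation lead (assumed symbol)
J = 2
L = 2**(J + 1)                            # number of WLCE terms: L = 8

# covariance of the colored noise in normalized time tau_bar in [0, 1]:
# K_X(tau1, tau2) = D exp(-alpha |tau2 - tau1|) with tau = T*tau_bar + (t - T)
def K_bar(tb1, tb2):
    return D * np.exp(-alpha * T * np.abs(tb1 - tb2))

# structural functions: xi_1 = 1, xi_2 = tau (zeta_1 = 1, zeta_2 = t + delta)
def xi(r, tau):
    return np.ones_like(tau) if r == 1 else tau

tb = np.linspace(0.0, 1.0, 5)
print(K_bar(tb, tb))   # unit variance along the diagonal
print(L)
```

From `K_bar`, the two-dimensional Haar coefficients of Section 4 and then the operator (82) can be computed.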
The typical realization in Figure 1 demonstrates the high accuracy of the method. For the quick calibration of typical devices used in practice, algorithms simpler than (82) were developed, computed and compared. This information is necessary for passport documentation.
The extrapolator takes values from −38.6099 to 11.9854. At the same time, the extrapolator error modulus does not exceed 0.7568 (Figure 1).

7. Conclusions

This article is devoted to optimization problems of observable stochastic systems based on wavelet canonical expansions. Section 2 is devoted to different Bayes criteria in terms of risk theory. Following [1,2], in Section 3, basic formulae for optimal Bayes synthesis based on canonical expansions are given. Section 4 is dedicated to the solution of a general optimization problem using wavelet canonical expansions in the case of complex nonstationary linear systems. In Section 5, a basic algorithm is given for the criterion of the maximal probability that the signal will not exceed a particular value in absolute magnitude. An example of a radar-location extrapolator device is discussed.
The developed optimization methodology “quick probabilistic analytical numerical optimization” does not use statistical Monte Carlo methods.
Directions of future generalizations and implementations:
New models of scalar and vector OStS (nonlinear, with parametric noises, etc.);
New classes of Bayes criteria.
The research was carried out using the infrastructure of the Shared Research Facilities “High Performance Computing and Big Data” (CKP “Informatics”) of FRC CSC RAS (Moscow).

Author Contributions

Conceptualization, I.S.; methodology, I.S., V.S., T.K.; software, E.K., T.K. All authors have read and agreed to the published version of the manuscript.

Funding

The research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

X(t): random function, noise
Y(t): random function, noise
EX(t): mathematical expectation of random function X(t)
Z(t): input stochastic process
W(t): output stochastic process
W*(t): estimator of W(t)
l(W, W*): loss function
A: system operator
ρ(A|W): conditional risk
R(A): mean risk
U_r: random parameter
ξ_r(τ), ζ_r(τ): structural functions
V_ν: random variable of the canonical expansion of the random vector [X(t) Y(t)]^T
x_ν(t): coordinate function of the canonical expansion of the random function X(t)
y_ν(t): coordinate function of the canonical expansion of the random function Y(t)
D_ν: variance of the random variable V_ν
K_X(t_1, t_2): covariance function of the random function X(t)
Z_ν: random variable of the canonical expansion of the StP Z(t)
f(u): probability density of the random vector U = [U_1 U_2 … U_N]^T
f_1(u|z_1, z_2, …): conditional probability density of the random vector U relative to the random variables Z_ν
f_V(v_1, v_2, …): joint probability density of the random variables V_ν
f_2(z_1, z_2, …|u): conditional probability density of the random variables Z_ν relative to the random vector U
φ_00(τ̄): Haar scaling function
ψ_00(τ̄): Haar mother wavelet
CE: canonical expansion
CsC: complex statistical criteria
OStS: observable stochastic system
RV: random variable(s)
StP: stochastic process
StS: stochastic system
WLCE: wavelet canonical expansion

References

1. Pugachev, V.S. Theory of Random Functions and Its Applications to Control Problems; Pergamon Press: Oxford, UK, 1965; 833p.
2. Pugachev, V.S.; Sinitsyn, I.N. Stochastic Systems: Theory and Applications; World Scientific: Singapore, 2001.
3. Sinitsyn, I. Canonical Expansion of Random Functions and Its Application to Scientific Computer-Aided Support; Torus Press: Moscow, Russia, 2009. (In Russian)
4. Sinitsyn, I.N.; Sinitsyn, V.I.; Korepanov, E.R.; Belousov, V.V.; Sergeev, I.V. Development of Algorithmic Support for the Analysis of Stochastic Systems Based on Canonical Expansions of Random Functions. Autom. Remote Control 2011, 72, 405–415.
5. Sinitsyn, I.N.; Sinitsyn, V.I.; Korepanov, E.R.; Konashenkova, T.D. Software Tools for Analysis and Synthesis of Stochastic Systems with High Availability (IV). Highly Available Syst. 2017, 13, 55–69. (In Russian)
6. Sinitsyn, I.N.; Sinitsyn, V.I.; Korepanov, E.R.; Konashenkova, T.D. Software Tools for Analysis and Synthesis of Stochastic Systems with High Availability (V). Highly Available Syst. 2018, 14, 59–70. (In Russian)
7. Sinitsyn, I.N.; Sinitsyn, V.I.; Korepanov, E.R.; Konashenkova, T.D. Software Tools for Analysis and Synthesis of Stochastic Systems with High Availability (VI). Highly Available Syst. 2018, 14, 40–56. (In Russian)
8. Sinitsyn, I.N.; Sinitsyn, V.I.; Korepanov, E.R.; Konashenkova, T.D. Software Tools for Analysis and Synthesis of Stochastic Systems with High Availability (VII). Highly Available Syst. 2019, 15, 47–61. (In Russian)
9. Arora, S.; Singh Brar, Y.; Kumar, S. Haar Wavelet Matrices for the Numerical Solutions of Differential Equations. Int. J. Comput. Appl. 2014, 97, 33–36.
10. Stromberg, J.O. A modified Franklin system and higher-order spline systems on R^n as unconditional bases for Hardy spaces. In Proceedings of the Conference on Harmonic Analysis in Honor of Antoni Zygmund, Chicago, IL, USA, 23–28 March 1981; pp. 475–494.
11. Grossmann, A.; Morlet, J. Decomposition of Hardy Functions into Square Integrable Wavelets of Constant Shape. SIAM J. Math. Anal. 1984, 15, 723–736.
12. Meyer, Y. Analysis at Urbana 1: Analysis in Function Spaces; Cambridge University Press: Cambridge, UK, 1989.
13. Mallat, S.G. Multiresolution approximations and wavelet orthonormal bases of L^2(R). Trans. Am. Math. Soc. 1989, 315, 69–87.
14. Daubechies, I. Orthonormal bases of compactly supported wavelets. Commun. Pure Appl. Math. 1988, 41, 909–996.
15. Strang, G. Wavelets and dilation equations: A brief introduction. SIAM Rev. 1989, 31, 614–627.
16. Hsiao, C.H. Haar wavelet approach to linear stiff systems. Math. Comput. Simul. 2004, 64, 561–567.
17. Daubechies, I. Ten Lectures on Wavelets; SIAM: Philadelphia, PA, USA, 1992.
18. Chui, C.K. An Introduction to Wavelets; Academic Press: Boston, MA, USA, 1992.
19. Lepik, U. Numerical solution of differential equations using Haar wavelets. Math. Comput. Simul. 2005, 68, 127–143.
20. Lepik, U. Numerical Solution of Evolution Equations by the Haar Wavelet Method. Appl. Math. Comput. 2007, 185, 695–704.
21. Chen, C.F.; Hsiao, C.H. Haar Wavelet Method for Solving Lumped and Distributed Parameter Systems. IEE Proc. Control Theory Appl. 1997, 144, 87–94.
22. Hsiao, C.H. State Analysis of Linear Time Delayed Systems via Haar Wavelets. Math. Comput. Simul. 1997, 44, 457–470.
23. Cattani, C. Haar Wavelets Based Technique in Evolution Problems. Proc. Est. Acad. Sci. Phys. Math. 2004, 1, 45–63.
24. Hariharan, G. An Overview of Haar Wavelet Method for Solving Differential and Integral Equations. World Appl. Sci. J. 2013, 23, 1–14.
25. Hsiao, C.H.; Wu, S.P. Numerical Solution of Time-Varying Functional Differential Equations via Haar Wavelets. Appl. Math. Comput. 2007, 188, 1049–1058.
26. Sinitsyn, I.; Sinitsyn, V.; Korepanov, E.; Konashenkova, T. Wavelet Modeling of Control Stochastic Systems at Complex Shock Disturbances. Mathematics 2021, 9, 2544.
27. Sinitsyn, I.; Sinitsyn, V.; Korepanov, E.; Konashenkova, T. Optimization of Linear Stochastic Systems Based on Canonical Wavelet Expansions. Autom. Remote Control 2020, 81, 2046–2061.
28. Sinitsyn, I.N.; Sinitsyn, V.I.; Korepanov, E.R.; Konashenkova, T.D. Software Tools for Analysis and Synthesis of Stochastic Systems with High Availability (XII). Highly Available Syst. 2021, 17, 26–44. (In Russian)
29. Sinitsyn, I.N.; Sinitsyn, V.I.; Korepanov, E.R.; Konashenkova, T.D. Software Tools for Analysis and Synthesis of Stochastic Systems with High Availability (XI). Highly Available Syst. 2021, 17, 25–40. (In Russian)
30. Sinitsyn, I.N.; Sinitsyn, V.I.; Korepanov, E.R.; Konashenkova, T.D. Software Tools for Analysis and Synthesis of Stochastic Systems with High Availability (XIII). Highly Available Syst. 2021, 17, 16–35. (In Russian)
31. Sinitsyn, I.N.; Sinitsyn, V.I.; Korepanov, E.R.; Konashenkova, T.D. Software Tools for Analysis and Synthesis of Stochastic Systems with High Availability (XV). Highly Available Syst. 2022, 18, 47–61. (In Russian)
Figure 1. Graphs of: (a) the signal extrapolation W and the estimate extrapolation W*; (b) the modulus |W* - W|.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
