Guaranteed Estimation of Solutions to the Cauchy Problem When the Restrictions on Unknown Initial Data Are Not Posed

The paper deals with Cauchy problems for first-order systems of linear ordinary differential equations with unknown data. It is assumed that the right-hand sides of equations belong to certain bounded sets in the space of square-integrable vector-functions, and the information about the initial conditions is absent. From indirect noisy observations of solutions to the Cauchy problems on a finite system of points and intervals, the guaranteed mean square estimates of linear functionals on unknown solutions of the problems under consideration are obtained. Under an assumption that the statistical characteristics of noise in observations are not known exactly, it is proved that such estimates can be expressed in terms of solutions to well-defined boundary value problems for linear systems of impulsive ordinary differential equations.


Introduction
A general theory of guaranteed estimates of solutions to Cauchy problems for ordinary differential equations under uncertainty was constructed in [1]. These results were further developed in [2-5].
The paper focuses on elaborating methods for estimating the state of systems described by Cauchy problems for linear ordinary differential equations with incomplete data.
The formulations of the estimation problems under uncertainty considered in this article are new, and research in this direction has not been carried out previously.
For solving these estimation problems, we use observations that are linear transformations of unknown solutions on a finite system of intervals and points perturbed by additive random noises. Such a type of observation is caused by the fact that in many practically important cases, unknown solutions cannot be observed in a direct manner.
From observations of the state of the systems, we find optimal, in a certain sense, estimates of functionals of the solutions of these problems under the condition that the information about the initial conditions is missing and that the right-hand sides of the equations and the correlation functions of the random noises in the observations are not known exactly; it is only known that they belong to certain given sets in the corresponding function spaces.
In such a situation, the minimax estimation method turns out to be applicable and preferable. In fact, choosing this approach, one can obtain optimal estimates not only for the unknown solutions but also for linear functionals of these solutions. In other words, the desired estimates, which are linear in the observations, are those for which the maximal mean square error, computed over the whole set of realizations of perturbations from the sets under consideration, attains its minimal value. Traditionally, these kinds of estimates are referred to as guaranteed or minimax estimates.
We demonstrate that these problems can be reduced to the determination of minima of quadratic functionals on closed convex sets in Hilbert spaces. Expressions for the minimax estimates and for the estimation errors are obtained as a result of solving this minimization problem with the use of the Lagrange multiplier method. It is shown that such estimates are expressed in terms of solutions to certain well-defined, uniquely solvable systems of differential equations. This paper continues the cycle of our research carried out in [6,7], where the guaranteed (minimax) estimation method was worked out for estimating linear functionals of the unknown solutions and data under the condition that the unknown right-hand sides of the equations and the initial conditions entering the statement of the Cauchy problems belong to a certain set in the corresponding Hilbert space (for details, see [8-12]).

Preliminaries
Let us first present the assertions and notations that will be frequently used in the text of the paper.
If vector-functions f(t) ∈ R^n and g(t) ∈ R^n are absolutely continuous on the closed interval [t_1, t_2], then the following integration by parts formula is valid:

$$\int_{t_1}^{t_2} (f'(t), g(t))_n\,dt = (f(t_2), g(t_2))_n - (f(t_1), g(t_1))_n - \int_{t_1}^{t_2} (f(t), g'(t))_n\,dt, \qquad (1)$$

where by (·, ·)_n we denote the inner product in R^n here and later on (see [13]).
Lemma 1. Let Q be a positive definite Hermitian (self-adjoint) operator in a complex (real) Hilbert space H with bounded inverse Q^{-1}. Then the generalized Cauchy-Schwarz inequality

$$|(f, g)_H|^2 \le (Qf, f)_H\,(Q^{-1}g, g)_H \qquad (2)$$

is valid for all f, g ∈ H. The equality sign in (2) is attained at the element f = λQ^{-1}g, where λ is an arbitrary scalar. For a proof, we refer to [14] (p. 186).
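As a quick sanity check, the inequality of Lemma 1 can be verified numerically in a finite-dimensional setting. The sketch below uses an arbitrarily chosen positive definite matrix Q; all names and numbers are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
M = rng.normal(size=(n, n))
Q = M @ M.T + n * np.eye(n)   # symmetric positive definite operator on R^n
f = rng.normal(size=n)
g = rng.normal(size=n)

# generalized Cauchy-Schwarz: (f, g)^2 <= (Qf, f) * (Q^{-1} g, g)
lhs = np.dot(f, g) ** 2
rhs = np.dot(Q @ f, f) * np.dot(np.linalg.solve(Q, g), g)

# equality is attained at f = Q^{-1} g (or any scalar multiple of it)
f_star = np.linalg.solve(Q, g)
lhs_eq = np.dot(f_star, g) ** 2
rhs_eq = np.dot(Q @ f_star, f_star) * np.dot(np.linalg.solve(Q, g), g)
```

For generic f and g the inequality is strict; substituting f_star makes the two sides agree up to floating-point error.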

Setting of the Minimax Estimation Problem
We consider the following estimation problem. Let the unknown vector-function x(t) ∈ R^n be a solution of the Cauchy problem

$$\frac{dx(t)}{dt} = A(t)x(t) + B(t)f(t), \quad t \in (t_0, T), \qquad (3)$$

$$x(t_0) = Cx_0, \qquad (4)$$

where A(t) = [a_{ij}(t)] is an n × n matrix and B(t) = [b_{ij}(t)] is an n × r matrix whose entries a_{ij}(t) and b_{ij}(t) are square-integrable and piecewise continuous. (Here and in what follows, a function is called piecewise continuous on an interval if the interval can be broken into a finite number of subintervals such that the function is continuous on each open subinterval, i.e., the subinterval without its endpoints, and has a finite limit at the endpoints of each subinterval.) C = [c_{ij}] is an n × k matrix with entries c_{ij} ∈ R, i = 1, . . . , n, j = 1, . . . , k; f(t) ∈ R^r is a vector-function belonging to the space (L^2(t_0, T))^r; and x_0 ∈ R^k. By a solution of this problem, we mean a function x(t) ∈ (W_2^1(t_0, T))^n that satisfies Equation (3) almost everywhere (a.e.) on (t_0, T), that is, except on a set of Lebesgue measure zero, and condition (4). Here, W_2^1(t_0, T) is the space of functions absolutely continuous on the interval [t_0, T] whose derivative, which exists almost everywhere on (t_0, T), belongs to the space L^2(t_0, T).
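For readers who wish to experiment numerically, a minimal sketch of problem (3) and (4) is given below: it integrates dx/dt = A(t)x + B(t)f(t), x(t_0) = Cx_0, with a classical fourth-order Runge-Kutta scheme. The scalar example data are hypothetical and chosen only so that the exact answer is known in closed form.

```python
import numpy as np

def solve_cauchy(A, B, f, C, x0, t0, T, steps=1000):
    """Integrate dx/dt = A(t) x + B(t) f(t) on [t0, T] with x(t0) = C @ x0
    by the classical fourth-order Runge-Kutta scheme."""
    def rhs(t, x):
        return A(t) @ x + B(t) @ f(t)
    h = (T - t0) / steps
    x = C @ x0
    t = t0
    for _ in range(steps):
        k1 = rhs(t, x)
        k2 = rhs(t + h / 2, x + h / 2 * k1)
        k3 = rhs(t + h / 2, x + h / 2 * k2)
        k4 = rhs(t + h, x + h * k3)
        x = x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return x

# hypothetical scalar example (n = r = k = 1): dx/dt = -x, x(0) = 2,
# whose exact solution at T = 1 is 2 * exp(-1)
A = lambda t: np.array([[-1.0]])
B = lambda t: np.array([[1.0]])
f = lambda t: np.array([0.0])
C = np.array([[1.0]])
xT = solve_cauchy(A, B, f, C, np.array([2.0]), 0.0, 1.0)
```

With 1000 steps the RK4 error is far below the tolerance of the closed-form comparison.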
We suppose that the Cauchy data (x_0, f(t)) are unknown and satisfy the condition (x_0, f(·)) ∈ G_1, where by G_1 we denote the set

$$G_1 = \Big\{ (\tilde x_0, \tilde f) \in \mathbb{R}^k \times (L^2(t_0, T))^r : \int_{t_0}^{T} \big(Q_1(t)(\tilde f(t) - f_0(t)), \tilde f(t) - f_0(t)\big)_r\,dt \le \varepsilon_1 \Big\}. \qquad (5)$$

Here, Q_1(t) is a symmetric positive definite r × r matrix with real-valued piecewise continuous entries on [t_0, T], f_0 ∈ (L^2(t_0, T))^r is a prescribed vector-function, and ε_1 is a prescribed positive number. Note that no restrictions are imposed on the initial vector x_0.
The problem is to estimate the expression

$$l(x) = \int_{t_0}^{T} (l_0(t), x(t))_n\,dt + (a, x(T))_n \qquad (6)$$

from observations of the form

$$y_i = H_i x(t_i) + \xi_i, \quad i = 1, \dots, N, \qquad (7)$$

$$y_j(t) = H_j(t)x(t) + \xi_j(t), \quad t \in \Omega_j, \ j = 1, \dots, M, \qquad (8)$$

where t_1 < · · · < t_N are given points and Ω_1, . . . , Ω_M are given subintervals of the interval (t_0, T) (here, we denote vectors and matrices by y and H and vector-functions and matrix-functions by y(·) and H(·)),
in the class of estimates linear with respect to observations (7) and (8),

$$\widehat{l(x)} = \sum_{i=1}^{N} (u_i, y_i)_m + \sum_{j=1}^{M} \int_{\Omega_j} (u_j(t), y_j(t))_l\,dt + c. \qquad (9)$$

Here, x(t) is the state of a system described by the Cauchy problem (3) and (4); l_0 ∈ (L^2(t_0, T))^n; a ∈ R^n; H_i are given m × n matrices; H_j(t) are given l × n matrices whose elements are piecewise continuous functions on Ω_j; u_i ∈ R^m; u_j(t) are vector-functions that belong to (L^2(Ω_j))^l; and c ∈ R. We suppose that ξ_i and ξ_j(·) are the observation errors in (7) and (8), respectively; they are realizations of random vectors ξ_i = ξ_i(ω) ∈ R^m and random vector-functions ξ_j(t) = ξ_j(ω, t) ∈ R^l. By G_2 we denote the set of random elements ξ whose components ξ_i and ξ_j(·) are uncorrelated, have zero means, Eξ_i = 0 and Eξ_j(·) = 0, and finite second moments E|ξ_i|^2 and E‖ξ_j(·)‖^2_{(L^2(Ω_j))^l}, and whose correlation matrices R_i = Eξ_iξ_i^T and R_j(t, s) = Eξ_j(t)ξ_j^T(s) are unknown but satisfy the conditions

$$\sum_{i=1}^{N} \operatorname{Tr}\,[D_i R_i] \le \varepsilon_2 \qquad (10)$$

and, correspondingly,

$$\sum_{j=1}^{M} \int_{\Omega_j} \operatorname{Tr}\,[D_j(t) R_j(t, t)]\,dt \le \varepsilon_3 \qquad (11)$$

(Tr D := ∑_{i=1}^{l} d_{ii} denotes the trace of the matrix D = {d_{ij}}_{i,j=1}^{l}). Here, D_i, i = 1, . . . , N, are symmetric positive definite m × m matrices with constant entries; D_j(t), j = 1, . . . , M, are symmetric positive definite l × l matrices whose entries are assumed to be piecewise continuous functions on Ω̄_j; and ε_i, i = 2, 3, are prescribed positive numbers. Set

$$H := (L^2(\Omega_1))^l \times \dots \times (L^2(\Omega_M))^l \times (\mathbb{R}^m)^N, \qquad u := (u_1(\cdot), \dots, u_M(\cdot), u_1, \dots, u_N) \in H.$$

The norm and inner product in the space H are defined by

$$\|u\|_H^2 = \sum_{j=1}^{M} \|u_j(\cdot)\|^2_{(L^2(\Omega_j))^l} + \sum_{i=1}^{N} |u_i|^2, \qquad (u, v)_H = \sum_{j=1}^{M} (u_j(\cdot), v_j(\cdot))_{(L^2(\Omega_j))^l} + \sum_{i=1}^{N} (u_i, v_i)_m,$$

respectively.
An estimate of the form

$$\widehat{\widehat{l(x)}} = \sum_{i=1}^{N} (\hat u_i, y_i)_m + \sum_{j=1}^{M} \int_{\Omega_j} (\hat u_j(t), y_j(t))_l\,dt + \hat c, \qquad (12)$$

in which the vector-functions û_j(·), vectors û_i, and a number ĉ are determined from the condition

$$\inf_{u \in H,\, c \in \mathbb{R}} \sigma(u, c) = \sigma(\hat u, \hat c), \quad \text{where} \quad \sigma(u, c) := \sup_{F \in G_1,\, \xi \in G_2} E\big|l(x) - \widehat{l(x)}\big|^2, \qquad (13)$$

will be called the minimax estimate of expression (6). The quantity

$$\sigma := \{\sigma(\hat u, \hat c)\}^{1/2} \qquad (14)$$

will be called the error of the minimax estimation of l(x).
We see that a minimax estimate minimizes the maximal mean-square estimation error determined for the "worst" implementation of perturbations.
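The structure of the "worst" perturbation can be illustrated in a finite-dimensional analogue: over an ellipsoidal bound of the type (5), the supremum of the squared deterministic error term is given by the generalized Cauchy-Schwarz inequality of Lemma 1, while over a trace bound of the type (10), the worst correlation matrix is rank-one and aligned with the weight vector. All matrices and numbers below are hypothetical illustrations, not data from the paper.

```python
import numpy as np

# For a deterministic perturbation f constrained by (Q f, f) <= eps1
# (cf. the set G_1 in (5)), sup (a, f)^2 = eps1 * (Q^{-1} a, a);
# for a correlation matrix R with Tr R <= eps2 (cf. (10) with D = I),
# sup u^T R u = eps2 * |u|^2, attained at a rank-one R aligned with u.
rng = np.random.default_rng(2)
n = 5
M = rng.normal(size=(n, n))
Q = M @ M.T + n * np.eye(n)
a = rng.normal(size=n)
u = rng.normal(size=n)
eps1, eps2 = 0.7, 0.3

Qinv_a = np.linalg.solve(Q, a)
sup_det = eps1 * np.dot(Qinv_a, a)                    # worst deterministic term
f_star = np.sqrt(eps1 / np.dot(Qinv_a, a)) * Qinv_a   # maximizing perturbation
sup_noise = eps2 * np.dot(u, u)                       # worst stochastic term
R_star = eps2 * np.outer(u, u) / np.dot(u, u)         # worst correlation matrix
```

One can check directly that f_star lies on the boundary of the ellipsoid and attains sup_det, and that R_star saturates the trace bound and attains sup_noise.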

Representations for Minimax Estimates and Estimation Errors
In order to reduce the problem of the determination of the minimax estimates to a certain optimal control problem, one can introduce, for any fixed u ∈ H, the vector-function z(t; u) as the unique solution to the problem (here and in what follows, we assume that if a function is piecewise continuous, then it is continuous from the left)

$$\frac{dz(t; u)}{dt} = -A^T(t) z(t; u) - l_0(t) + \sum_{j=1}^{M} \chi_{\Omega_j}(t) H_j^T(t) u_j(t), \quad t \in (t_0, T) \setminus \{t_1, \dots, t_N\}, \qquad (15)$$

$$z(T; u) = a, \qquad z(t_i + 0; u) - z(t_i; u) = H_i^T u_i, \quad i = 1, \dots, N, \qquad (16)$$
where χ_Ω(t) is the characteristic function of the set Ω, and by U we denote the set

$$U = \{u \in H : C^T z(t_0; u) = 0\}. \qquad (17)$$

It is easy to see that if U ≠ ∅, then U is a closed and convex set in the space H. The following result is valid.

Lemma 2.
Let U ≠ ∅ (in Appendix A, we give some sufficient conditions for the non-emptiness of the set U). Then the determination of the minimax estimate of l(x) is equivalent to the problem of optimal control of the system governed by Equations (15) and (16) with the cost function

Proof. For each i = 1, . . . , N + 1, denote by z_i(t; u) the restriction of the function z(t; u) to the subinterval (t_{i−1}, t_i) of the interval (t_0, T) and extend it from this subinterval to the endpoints t_{i−1} and t_i by continuity. Then, due to (15) and (16),

Let x be a solution to problem (3) and (4). From relations (6)-(8), (19) and (20), and the integration by parts formula (1),

Taking into account that

from the latter equalities, we have

The latter relationship yields

Since the vector x_0 in the first term on the right-hand side of (23) may be an arbitrary element of the space R^k, the quantity will be finite if and only if u ∈ U, that is, if and only if the first term on the right-hand side of (23) vanishes. Therefore, we will further assume that u ∈ U.
Taking into consideration the known relationship that couples the variance Dη = E[η − Eη]^2 of a random variable η with its expectation Eη, in which η is determined by the right-hand side of the equality that follows from (22), and using the uncorrelatedness of the components ξ_i and ξ_j(·) of ξ, from the equalities (22) and (23) we find

Set

Then, for all F = (x_0, f) ∈ G_1, the generalized Cauchy-Schwarz inequality and (5) imply

Direct substitution shows that the last inequality turns into an equality at

Taking into account that

where the infimum over c is attained at

we calculate the last term on the right-hand side of (25). Applying Lemma 1, we have

Transform the last factor on the right-hand side of (28):

Similarly,

In view of (10) and (11), we deduce from (28)

It is not difficult to check that here the equality sign is attained at the element

where η_1 and η_2 are uncorrelated random variables such that Eη_i = 0 and E|η_i|^2 = 1, i = 1, 2. Hence,

From (25)-(27) and (29), we obtain

where I(u) is defined by (18) and where the infimum over c is attained at

As a result of solving the optimal control problem formulated in Lemma 2, we come to the following assertion.

Theorem 1. There exists a unique minimax estimate of l(x), which can be represented as

where the functions p̂, ẑ, and x̂ are found from the solution of the systems of equations

respectively, where λ ∈ R^k and µ ∈ R^k are Lagrange multipliers.

Proof. Applying the same reasoning as in the proof of Theorem 1 from [8] and taking into account estimate (1.21) from [15], one can verify that the functional I(u) is strictly convex and lower semicontinuous on U. Then, by Remark 1.2 to Theorem 1.1 (see [16]), there exists a unique element û ∈ U such that I(û) = inf_{u∈U} I(u).
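The role of the Lagrange multipliers in Theorem 1 can be illustrated on a finite-dimensional model of the optimal control problem of Lemma 2: minimizing a strictly convex quadratic functional subject to a linear constraint (the analogue of the condition C^T z(t_0; u) = 0 defining U) leads to a saddle-point system for the pair (û, λ). All data below are hypothetical and serve only to show the mechanics.

```python
import numpy as np

# Minimize I(u) = (1/2) u^T Q u - b^T u over the subspace {u : A u = 0}.
# The Lagrange (KKT) conditions give the saddle-point system
#   [ Q  A^T ] [ u ]   [ b ]
#   [ A   0  ] [ λ ] = [ 0 ].
rng = np.random.default_rng(1)
n, k = 6, 2
M = rng.normal(size=(n, n))
Q = M @ M.T + n * np.eye(n)   # strictly convex quadratic part
A = rng.normal(size=(k, n))   # constraint matrix (full row rank generically)
b = rng.normal(size=n)

KKT = np.block([[Q, A.T], [A, np.zeros((k, k))]])
sol = np.linalg.solve(KKT, np.concatenate([b, np.zeros(k)]))
u_hat, lam = sol[:n], sol[n:]
```

The solution satisfies the constraint A û = 0 exactly and the stationarity condition Q û + A^T λ = b, mirroring how the multipliers λ and µ enter the boundary value problems of Theorem 1.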

σ_1-Optimal Estimates of the Unknown Solution of the Cauchy Problem at the Moment T
In this section, we define an optimal, in a certain sense, estimate of the unknown solution x(t) of the Cauchy problem (3) and (4) at the moment T that is linear with respect to observations (7) and (8), and we show that this estimate of x(T) coincides with the function x̂(t) obtained from the solution of problems (37)-(40) at the moment T.
Let x_e(T) be an estimate of x(T) linear with respect to observations (7) and (8) of the form

$$x_e(T) = \sum_{i=1}^{N} U_i y_i + \sum_{j=1}^{M} \int_{\Omega_j} U_j(t) y_j(t)\,dt + C,$$

where U_j(t) are n × l matrices with entries that are square-integrable functions on Ω_j, U_i are n × m matrices, and C ∈ R^n.
Let M_{nm}(R) be the set of n × m matrices with real elements, M_{nl}(R) be the set of n × l matrices with real elements, and L^2(Ω_j, M_{nl}) be the set of M_{nl}-valued square-integrable functions on Ω_j. Set

$$H := L^2(\Omega_1, M_{nl}) \times \dots \times L^2(\Omega_M, M_{nl}) \times (M_{nm}(\mathbb{R}))^N,$$

and let {e_1, . . . , e_n} be an orthonormal basis of R^n. Let σ_1(U, C) be the error functional of the estimate x_e(T), which has the form
Denote by σ(U, C) the quantity defined by

An estimate x̂_e(T) for which the matrices Û_i, matrix-functions Û_j(·), and vector Ĉ are determined from the condition

$$\inf_{U \in H,\, C \in \mathbb{R}^n} \sigma(U, C) = \sigma(\hat U, \hat C)$$

is called an optimal mean square estimate of the vector x(T). The quantity

is called the error of the optimal mean square estimation.
Parseval's formula implies the inequality Therefore, for the error of the optimal mean square estimation σ, the following estimate from above holds:

Conclusions
When elaborating the guaranteed estimation of solutions to the Cauchy problem in the absence of restrictions on unknown initial data, we have reduced the determination of the necessary minimax estimates to well-defined optimal control problems.
Using this approach, we have proved the existence of the unique minimax estimate and obtained its representation together with that of the estimation error in terms of solutions to the explicitly derived systems of impulsive ordinary differential equations.
The results and techniques of this study can be extended to a wider class of initial value problems and, after appropriate generalization, to the analysis of similar estimation problems for linear partial differential equations of parabolic and hyperbolic types that describe evolution processes.

Appendix A

Now, we provide sufficient conditions for the non-emptiness of the set U. Introduce the matrix-function Φ(t, s) as the unique solution to the problem

$$\frac{d\Phi(t, s)}{dt} = A(t)\Phi(t, s), \qquad \Phi(s, s) = E_n, \quad t > s.$$
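In computations, the fundamental matrix Φ(t, s) can be obtained by integrating the matrix initial value problem column by column. The following sketch does this with RK4 and checks it against a constant-coefficient case whose propagator is known in closed form; the example matrix is hypothetical.

```python
import numpy as np

def fundamental_matrix(A, s, t, steps=2000):
    """RK4 integration of the matrix problem dPhi/dt = A(t) Phi, Phi(s, s) = E_n."""
    n = A(s).shape[0]
    Phi = np.eye(n)
    h = (t - s) / steps
    tau = s
    for _ in range(steps):
        k1 = A(tau) @ Phi
        k2 = A(tau + h / 2) @ (Phi + h / 2 * k1)
        k3 = A(tau + h / 2) @ (Phi + h / 2 * k2)
        k4 = A(tau + h) @ (Phi + h * k3)
        Phi = Phi + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        tau += h
    return Phi

# constant-coefficient check: for A = [[0, 1], [-1, 0]] the exact propagator is
# Phi(t, 0) = [[cos t, sin t], [-sin t, cos t]]
A = lambda t: np.array([[0.0, 1.0], [-1.0, 0.0]])
Phi = fundamental_matrix(A, 0.0, np.pi / 3)
```

The computed Phi agrees with the rotation matrix at angle π/3 to high accuracy, which is the standard consistency check for a propagator routine.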
Denote by K_j(t) and N_i the k × l and k × m matrices, respectively, such that K_j^T(t) = H_j(t)Φ(t, t_0)C and N_i^T = H_iΦ(t_i, t_0)C.
Then l(x) = (C^T z̃(t_0), x_0)_k. It is easy to see that ŷ_j(t) = H_j(t)x(t) = H_j(t)Φ(t, t_0)Cx_0, j = 1, . . . , M, and ŷ_i = H_i x(t_i) = H_iΦ(t_i, t_0)Cx_0, i = 1, . . . , N. Hence, a necessary and sufficient condition for the existence of u_j(t) and u_i such that (C^T z(t_0; u), x_0)_k = 0 for all x_0 ∈ R^k is that the corresponding equation be solvable. We will look for a solution to this equation in the form u_j(t) = K_j^T(t)d, u_i = N_i^T d, where the vector d is determined from the system of equations D_T d = C^T z̃(t_0).
Proof. In fact, the previous reasoning leads to the conclusion that for the function z̃(t; u), the equality holds and condition (A1) is fulfilled if for any g ∈ R^k the system has a solution. It is easy to see that the element u^0 ∈ H with components u_j^0(·) = −K_j^T(·)D_T^{−1}g, j = 1, . . . , M, and u_i^0 = −N_i^T D_T^{−1}g, i = 1, . . . , N, satisfies this equation.