Optimal Filtering of Markov Jump Processes Given Observations with State-Dependent Noises: Exact Solution and Stable Numerical Schemes

Abstract: The paper is devoted to the optimal state filtering of finite-state Markov jump processes given indirect continuous-time observations corrupted by Wiener noise. The crucial feature is that the observation noise intensity is a function of the estimated state, which invalidates straightforward filtering approaches based on passage to the innovation process and Girsanov's measure change. We propose an equivalent observation transform that allows usage of the classical nonlinear filtering framework. We obtain the optimal estimate as a solution to a discrete-continuous stochastic differential system with both continuous and counting processes on the right-hand side. For effective computer realization, we present a new class of numerical algorithms based on the exact solution to the optimal filtering problem given time-discretized observations. The proposed estimate approximations are stable, i.e., they have non-negative components and satisfy the normalization condition. We prove assertions characterizing the approximation accuracy as a function of the observation system parameters, the time discretization step, the maximal number of allowed state transitions, and the applied scheme of numerical integration.


Introduction
The Wonham filter [1], along with the Kalman-Bucy filter [2], is one of the most widely used filtering algorithms for the states of stochastic differential observation systems. It is applied extensively for signal processing in engineering, communications, finance and economics, biology, medicine, etc. [3][4][5][6]. The filter provides the Mean Square (MS) optimal on-line estimate of a finite-state Markov Jump Process (MJP) given indirect continuous-time observations corrupted by Wiener noise. The elegant algorithm represents the desired estimate as a solution to a Stochastic Differential System (SDS) with continuous random processes on the Right-Hand Side (RHS).
The fundamental condition for the solution to the filtering problem is that the observation noise intensity does not depend on the estimated state. This provides right-continuity of the natural flow of σ-algebras induced by the observations, with subsequent utilization of the innovation process framework. Violation of the condition destroys these advantages. In the case of state-dependent observation noise, the author of [7] presents the optimal estimate within the class of linear estimates. Further, the authors of [8,9] use filters of a linear structure to solve the H₂-optimal state filtering problem. To find the absolutely optimal filtering estimate, one has to make extra efforts. First, for proper utilization of the stochastic analysis framework, one needs to reformulate the optimal filtering problem, "smoothing forward" the flow of σ-algebras induced by the observations. Second, in the case of state-dependent noise, the innovation process contains less information than the original observations. One has to supplement the innovation by the observation quadratic characteristic, which represents a continuous-time noiseless function of the estimated MJP state. In general, optimal filtering given partially noiseless observations is a challenging problem. Its solution can be expressed either as a limit of a sequence of regularized estimates [10] or via additional differentiation of the smooth observation components or their quadratic characteristics [11][12][13][14]. In both cases, one needs to realize a limit passage, which is difficult to implement on computers.
Even in the traditional setting, the numerical realization of MJP state filtering is a complicated problem. For example, explicit numerical methods based on the Itô-Taylor expansion applied to the Wonham filter equation diverge: the produced approximations do not meet the component-wise non-negativity condition, and over time the approximation components reach arbitrarily large absolute values. Further in the presentation, we refer to approximations preserving both the component non-negativity and the normalization condition as stable ones.
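The stability notion above is purely algebraic and therefore easy to verify numerically. A minimal sketch (the function name and tolerance are ours, not the paper's):

```python
import numpy as np

def is_stable(p, tol=1e-9):
    """Check the two stability properties of a filtering approximation:
    component-wise non-negativity and the normalization condition."""
    p = np.asarray(p, dtype=float)
    return bool(np.all(p >= -tol) and abs(p.sum() - 1.0) <= tol)

print(is_stable([0.2, 0.5, 0.3]))   # True: a valid conditional distribution
print(is_stable([1.4, -0.4, 0.0]))  # False: normalized, but one component is negative
```

The second example is exactly the failure mode described above: the normalization may survive a divergent scheme while non-negativity does not.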
The Wonham filtering equation is a particular case of the nonlinear Kushner-Stratonovich equation. To solve it, one can use various numerical algorithms:
• the procedures based on the weak approximation of the original processes by Markov chains [15,16],
• some variants of the splitting methods [17],
• the robust procedures based on the Clark transform [18,19],
• the schemes which represent the conditional probability distributions through the logarithm [20], etc.
All these algorithms are developed for the case of additive observation noise and are based on Girsanov's measure transform. Hence, they are inapplicable to the estimation of an MJP given observations with state-dependent noise.
The goal of the paper is two-fold. First, it presents a theoretical solution to the MS-optimal filtering problem given observations with state-dependent noise. Second, it introduces a new class of stable numerical algorithms for filter realization and investigates their accuracy. We organize the paper as follows. Section 2 contains a description of the observation system under study, with state-dependent observation noise, along with the MS-optimal filtering problem statement. To solve the problem, one needs to transform the available observations so as both to preserve informational equivalence and to suit the application of the known results of optimal nonlinear filtering. Section 3 describes both the observation transformation and the SDS defining the optimal filtering estimate. The SDS is discrete-continuous and contains both continuous and counting random processes on the RHS. Previously, the author of the note [21] presented a sketch of the observation transform, but it could not guarantee the uniqueness of the SDS solution.
Section 4 presents a new class of stable numerical algorithms of nonlinear filtering. The main idea is to discretize the original continuous-time observations and then find the MS-optimal filtering estimate given the sampled observations. The authors of [22] use this idea to solve a particular case of the estimation problem, namely the classification of a finite-state random vector given continuous-time observations with multiplicative noise. Section 4.1 contains a general solution to the problem. The corresponding estimate represents a ratio whose numerator and denominator are infinite sums of integrals. They are shift-scale mixtures of Gaussians. The mixing distributions, in turn, describe the occupation time of the system state in each admissible value during the time discretization interval. In Section 4.2, we suggest approximating the estimates by a convergent sequence, bounding the number s of possible state transitions occurring over the discretization interval. We replace the infinite sums in the formula of the optimal estimate by their finite analogs and also investigate the accuracy of the approximations. We refer to these approximations as the analytical ones of the s-th order. One cannot calculate the integrals analytically and has to replace them with integral sums, which brings an extra error. Section 4.3 analyzes the value of this error and the total distance between the optimal filtering estimate given the discretized observations and its numerical realization. Section 4.4 presents a numerical example that illustrates the conformity of the theoretical estimates and their numerical realization. Section 5 contains discussion and concluding remarks.

Continuous-Time Filtering Problem Statement
On the probability triplet with filtration (Ω, F, P, {F_t}_{t≥0}) we consider the observation system

X_t = X_0 + ∫_0^t Λ⊤(s) X_s ds + M_t^X, (1)
Y_t = ∫_0^t A(s) X_s ds + ∫_0^t ∑_{n=1}^N X_s^n G_n^{1/2}(s) dw_s. (2)

Here
• X_t = col(X_t^1, …, X_t^N) ∈ S^N is an unobservable state, a finite-state Markov jump process (MJP) with the state space S^N ≜ {e_1, …, e_N} (S^N stands for the set of all unit coordinate vectors of the Euclidean space R^N), the transition intensity matrix Λ(t) and the initial distribution π = col(π_1, …, π_N); the process M_t^X is an F_t-adapted martingale;
• Y_t is the observation process with drift matrix A(t) and state-dependent noise intensities G_n(t), driven by the Wiener process w_t.

The natural flow of σ-algebras generated by the observations Y up to the moment t is denoted by 𝒴_t ≜ σ{Y_s, 0 ≤ s ≤ t}. The optimal state filtering given the observations Y is to find the Conditional Mathematical Expectation (CME) X̂_t ≜ E{X_t | 𝒴_t}.

Observation Transform and Optimal Filtering Equation
Before deriving the optimal filtering equation, we specify the properties of the observation system (1) and (2).

1. All trajectories of {X_t}_{t≥0} are continuous from the right and have finite limits from the left, i.e., are càdlàg processes.

3. The noises in Y are uniformly nondegenerate [10], i.e., min_{1≤n≤N, t≥0} G_n(t) > αI for some α > 0; here and below, I is a unit matrix of appropriate dimensionality.

4. The processes have finite variation; here and below, I_A(x) is an indicator function of the set A, and 0 is a zero matrix of appropriate dimensionality.
Conditions 1-3 are standard for filtering problems [10]. They guarantee the proper description of the MJP distribution π(t) ≜ E{X_t} by the Kolmogorov system π(t) = π + ∫_0^t Λ⊤(s) π(s) ds. Condition 4 relates to the quadratic characteristic of the observation process as a key information source in itself. Below we show that the collection of G_n(·), distinct for different n, allows one to restore the state X_t precisely given the available noisy observations. Condition 4 guarantees the local regularity of the time subsets where the G_n(·) coincide and/or differ from each other: one can express them as finite unions of intervals. The condition is not too restrictive: for instance, it is valid when the G_n(·) are piecewise continuous with bounded derivatives.
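The Kolmogorov system above is a linear ODE for the prior state distribution and can be propagated numerically; a hedged sketch with a made-up two-state generator and plain explicit Euler (not a method from the paper):

```python
import numpy as np

def mjp_distribution(Lam, pi0, T, steps=10_000):
    """Integrate the Kolmogorov system pi'(t) = Lam^T pi(t) by explicit Euler.
    Lam is a generator matrix: non-negative off-diagonal entries, zero row sums."""
    h = T / steps
    pi = np.asarray(pi0, dtype=float)
    for _ in range(steps):
        pi = pi + h * (Lam.T @ pi)
    return pi

# hypothetical two-state generator
Lam = np.array([[-1.0, 1.0],
                [ 2.0, -2.0]])
pi = mjp_distribution(Lam, [1.0, 0.0], T=5.0)
print(round(float(pi.sum()), 9))  # 1.0: zero row sums of Lam preserve total probability
```

Because 1⊤Λ⊤π ≡ 0, the Euler iteration conserves the total probability exactly (up to rounding), and for this generator the distribution settles near its stationary value (2/3, 1/3).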
Both the system state and the observation are special square-integrable semimartingales [6,23] with predictable characteristics. Conditions 1-3 and the properties of X_t guarantee the P-a.s. fulfilment of equalities for the one-sided derivatives of the quadratic characteristic ⟨Y, Y⟩_t; the corresponding formulas involve the jumps ∆G_n(t). The inclusion presumes that the flow of σ-subalgebras {𝒴_t}_{t≥0} is not necessarily continuous from the right for the considered observations [24]. This is the reason to define the filtering estimate as a CME of X_t with respect to the "smoothed" flow 𝒴_{t+}, for subsequent correct usage of the stochastic analysis framework.
Let us transform the available observations in such a way as to derive the optimal filtering estimate by the standard methods [6,23]. The idea of this transform was originally suggested in [11]. As a result, the authors introduce the pair (U_t, ⟨Y, Y⟩_t). The authors of [11] prove the coincidence of the σ-algebras 𝒴_t = σ{U_s, 0 ≤ s ≤ t} ∨ σ{⟨Y, Y⟩_s, 0 ≤ s ≤ t} for general diffusion observation systems. However, they do not pay attention to the right-continuity of {𝒴_t}. The authors of [12,14] suggest replacing ⟨Y, Y⟩_t by its derivative. Then, one can construct the optimal estimate either by using Q_t as a linear constraint or by differentiating (10) to extract the dynamic noises. The papers [12,14] contain a rather pessimistic conclusion: the number of differentiations is unbounded in the general case of a diffusion observation system. In contrast, we estimate a finite-state MJP and can construct the optimal filtering estimate using Q without additional differentiation.
So, the transformed observations will contain:
• diffusion processes with unit diffusion,
• counting stochastic processes,
• indirect state observations obtained at nonrandom discrete moments.
The first part of the transformed observations is the process U_t (8); in view of (2) and (7) it can be rewritten in a form where W_t is an F_t-adapted standard Wiener process [10]. The process Q_t could play the role of the second part of the transformed observations, since 𝒴_t = σ{U_s, Q_s, s ∈ [0, t]} [11]; however, the natural flow of σ-algebras generated by the couple (U, Q) is still not continuous from the right. Moreover, the process Q_t is matrix-valued and looks overabundant for the filter derivation. The point is that Q_t = Q(t, X_{t−}) (10) is a function of the finite-set argument X_t, and it affects the estimate performance through its complete preimage. To pass to the preimage, we introduce the following transformation of Q_t: H_t is a 𝒴_t-adapted vector process with components 0 or 1, but the trajectories of H_t are not càdlàg. Due to the fact that X_{t−} = X_t P-a.s. for all t ≥ 0, the equalities below are valid, where K(t) is the N × N-dimensional matrix with the components (4). The function K(t) has the following properties.

1. The number of jumps of K(·) occurring in any finite time interval is finite due to condition 4.

5. For any t ≥ 0 there exists a transformation T(t) such that the matrix T(t)K(t) is trapezoidal with orthogonal rows and components equal to 0 or 1.

6. P{T(t)H_t ∈ S^N} = 1 for any t ≥ 0.
Let us define a 𝒴_{t+}-adapted process V_t = col(V_t^1, …, V_t^N) with càdlàg trajectories, constructed from (12) and (13). We denote the set of discontinuities of the process V by 𝒱; 𝒳 stands for the set of discontinuities of X, and 𝒥 for the analogous set of the process J. The sets 𝒱 and 𝒳 are random; in contrast, 𝒥 is nonrandom. The process V_t is purely discontinuous and, due to property 4, can be decomposed into a part D_t, which characterizes the observable jumps at the nonrandom moments caused by changes of J(t), and a part R_t, which is the observable portion of the jumps of the state X_t occurring at random instants. By definition, V_t ∈ S^N for all t ≥ 0.
As the second part of the transformed observations, we choose the N-dimensional random process C_t ≜ col(C_t^1, …, C_t^N): the component C_t^n counts the jumps of the process V_t into the state e_n occurring at random instants over the interval [0, t]. The third part of the transformed observations is the N-dimensional process D_t with jumps at the nonrandom moments.
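The counting construction behind C_t admits a simple discrete illustration: given a sampled piecewise-constant path, each component of C counts the entries into the corresponding state. The sampled path below is a hypothetical stand-in for the continuous-time process V_t:

```python
import numpy as np

def count_jumps_into_states(path, N):
    """Given a piecewise-constant path of state indices, return the vector C
    whose n-th component counts the jumps *into* state n (cf. the process C_t)."""
    C = np.zeros(N, dtype=int)
    for prev, cur in zip(path[:-1], path[1:]):
        if cur != prev:
            C[cur] += 1
    return C

path = [0, 0, 1, 1, 2, 0, 0, 1]   # hypothetical sampled trajectory of V_t
print(count_jumps_into_states(path, 3))  # [1 2 1]
```

The jumps 0→1, 1→2, 2→0, 0→1 give one entry into state 0, two into state 1, and one into state 2.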
The correctness of the Lemma assertion follows immediately from the fact that the composite process (U_t, C_t, D_t) is constructed to be 𝒴_{t+}-adapted, and from the one-to-one correspondence of the (U, C, D) and Y paths. Below we use the following notations: 1 is a row vector of appropriate dimensionality formed by units, and J_n(s) ≜ e_n⊤ J(s) is the n-th row of the matrix J(s).

Lemma 2. The process C_t = col(C_t^1, …, C_t^N) has the following properties.

1. The n-th component C_t^n allows a martingale representation.

2. The innovation processes are 𝒴_t-adapted martingales with the corresponding quadratic characteristics.

The proof of Lemma 2 is given in Appendix A.
Finally, the transformed observations (U, C, D) take the form (22).

Theorem 1. The optimal filtering estimate X̂_t is a strong solution to the SDS (23), where A⁺ denotes the Moore-Penrose pseudoinverse. The solution is unique within the class of non-negative piecewise-continuous 𝒴_{t+}-adapted processes with discontinuity sets lying in 𝒱.
The proof of Theorem 1 is given in Appendix B. The transformed observations (22), along with Theorem 1, suggest a condition of exact identifiability of the state X_t given the indirect noisy observations Y_t (2).

Corollary 1. If for any n ≠ m (n, m = 1, …, N) the inequalities G_n(s) ≠ G_m(s) hold almost everywhere on [0, t], then X̂_t = X_t P-a.s., and X_t is the solution to SDS (23).
The proof of Corollary 1 is given in Appendix C.

Optimal Filtering Given Discretized Observations
The previous section contains the stochastic system (23) defining the optimal filtering estimate X̂_t. The problem of its numerical realization seems routine: we should apply the corresponding methods of numerical integration of SDSs with jumps on the RHS [26]. However, this simplicity is illusory. The problem is that the "new" counting observation C_t and the discrete-time one D_t are the results of a certain transform of the available observation Y, and this transform includes a limit passage. In fact, to obtain C_t we have to estimate/restore the current value of the derivative d⟨Y, Y⟩_{t+}/dt. First, this leads to some time delay needed to accumulate the observations Y_t. Second, any pre-limit variant of C_t either has a.s. continuous trajectories or represents their sampling, which demonstrates an oscillating nature. Third, the considered filtering estimate is the CME of the state X_t given the observations Y up to the moment t. The CME has natural properties: its components are a.s. non-negative and satisfy the normalization condition. The estimates and approximations having these properties are referred to in the paper as stable. Mostly, the conventional numerical algorithms do not provide these properties for the calculated approximations. They can preserve the normalization condition only, but the components can have arbitrary signs and absolute values.
In this paper, we present another approach to the numerical realization of the filtering algorithm above. We discretize the available observations Y in time with the increment h and then solve the optimal state filtering problem given the discretized observations. The resulting estimate can be considered an approximation of the one given the initial continuous-time observations. The properties of the CME guarantee the stability of the proposed approximation.
To simplify the derivation of the numerical algorithm and its accuracy analysis, we investigate the time-invariant version of the observation system (1), (2), i.e., Λ(t) ≡ Λ, A(t) ≡ A, G_n(t) ≡ G_n, n = 1, …, N. The observations are discretized with the time increment h: Y_r ≜ Y_{t_r} − Y_{t_{r−1}}, where t_r ≜ rh are equidistant time instants. We denote by 𝒴_r ≜ σ{Y_s : 1 ≤ s ≤ r} the non-decreasing collection of σ-algebras generated by the time-discretized observations; 𝒴_0 ≜ {∅, Ω}.
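The discretization step can be sketched as follows: starting from a finely sampled observation path, keep only the grid values at t_r = rh and pass their increments to the discrete-time filter (function and variable names here are ours):

```python
import numpy as np

def observation_increments(Y, stride):
    """Sample a finely recorded observation path on the grid t_r = r*h and
    form the increments consumed by the discrete-time filter. `stride` is
    the number of fine sampling steps per discretization increment h."""
    Y = np.asarray(Y, dtype=float)
    grid = Y[::stride]          # values Y_{t_0}, Y_{t_1}, ...
    return np.diff(grid)        # increments Y_{t_r} - Y_{t_{r-1}}

Y = np.cumsum(np.ones(101) * 0.01)   # hypothetical path with unit slope
inc = observation_increments(Y, stride=10)
print(inc.shape, round(float(inc[0]), 9))  # (10,) 0.1
```

For a path with unit slope every increment over a window of length 0.1 equals 0.1, which is a quick sanity check on the indexing.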
The optimal state filtering problem given the discretized observations is to find X̂_r ≜ E{X_{t_r} | 𝒴_r}. Let us consider the asymptotics of X̂. We fix some T > 0 and consider a condensed sequence of binary meshes {rT/2^n}_{r=1,…,2^n} with time increments h_n ≜ T/2^n and the corresponding increasing sequence of generated σ-algebras. The convergence also holds if we replace the sequence of binary meshes by any condensed sequence with vanishing step. So, we can conclude that optimal filtering given the discretized observations is a way to design stable convergent approximations without the observation transform Y → (U, C, D) introduced in the previous section.
To derive the filtering formula, we use the approach of [27] and mathematical induction.
In the case r = 0 we have X̂_0 = π. Let for some r ∈ N the estimate X̂_{r−1} = E{X_{t_{r−1}} | 𝒴_{r−1}} be known. Now we calculate X̂_r at the next time instant. To do this, we have to specify the mutual conditional distribution of (X_{t_r}, Y_r) with respect to 𝒴_{r−1}. From the observation model and ([10], Lemma 7.5) it follows that the conditional distribution of Y_r given the σ-algebra F_{t_r}^X ∨ 𝒴_{r−1} is Gaussian with parameters determined by υ_r = col(υ_r^1, …, υ_r^N) ≜ ∫_{t_{r−1}}^{t_r} X_s ds, the random vector composed of the occupation times of the process X in each state e_n during the interval [t_{r−1}, t_r].
Below in the presentation we use the following notations:
• 𝒟 is the distribution support of the vector υ_r;
• Π ≜ {π = col(π_1, …, π_N) : π_n ≥ 0, ∑_{n=1}^N π_n = 1} is the "probabilistic simplex" formed by the possible values of π;
• N_r^X is the random number of transitions of the state X_t occurring on the interval [t_{r−1}, t_r];
• a_r^s ≜ {ω ∈ Ω : N_r^X(ω) ≤ s}, A_r^s ≜ ∏_{q=1}^r a_q^s;
• ρ_{k,ℓ,q}(du) is the conditional distribution of the vector X_{t_r}^ℓ I_{{q}}(N_r^X) υ_r given X_{t_{r−1}} = e_k, i.e., for any G ∈ B(R^M) the corresponding equality is true;
• N(u; m, K) is an M-dimensional Gaussian probability density function (pdf) with the expectation m and nondegenerate covariance matrix K.

The Markovianity of {(X_{t_r}, Y_r)}_{r≥0}, the formula of total probability and the Fubini theorem provide, for any set A ∈ B(R^M), the equalities whose integrand in the square brackets defines the conditional distribution of (X_{t_r}, Y_r) given 𝒴_{r−1}. Further, the conditional distribution of X_r is defined component-wise by the generalized Bayes rule [10]. So, we have proved the following

Lemma 3. If, for the observation system (1), (2), conditions 1-3 are valid, then the filtering estimate X̂_r given the discretized observations is defined by (26) at r = 0 and by the recursion (28) at the instant t_r of reception of the discretized observation Y_r.
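To make the structure of the Bayes recursion concrete, here is a deliberately simplified scalar-observation sketch of its zeroth-order analog (s = 0): the state is frozen on the step, so the occupation vector is h·e_n, and the observation increment is Gaussian with mean A_n·h and variance G_n·h in state n. The transition matrix P and the parameters A, G below are hypothetical, not from the paper's example:

```python
import numpy as np

def gauss_pdf(y, m, v):
    """Scalar Gaussian density with mean m and variance v."""
    return np.exp(-(y - m) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)

def filter_step(x_prev, y, P, A, G, h):
    """One hedged zeroth-order (s = 0) analog of the Bayes recursion:
    prediction by the one-step transition matrix, then Gaussian correction."""
    lik = np.array([gauss_pdf(y, A[n] * h, G[n] * h) for n in range(len(x_prev))])
    unnorm = lik * (P.T @ x_prev)    # prediction, then Bayes correction
    return unnorm / unnorm.sum()     # normalization keeps the estimate stable

h = 0.1
P = np.array([[0.95, 0.05], [0.10, 0.90]])  # one-step transition matrix
A = np.array([0.0, 1.0])                    # drifts in the two states
G = np.array([0.5, 2.0])                    # state-dependent noise intensities
x = filter_step(np.array([0.5, 0.5]), y=0.11, P=P, A=A, G=G, h=h)
print(round(float(x.sum()), 12))  # 1.0: non-negative components, normalized
```

Note how the state-dependent noise enters twice: through the mean and through the variance of each Gaussian factor, which is exactly why the likelihood weights differ between states even for identical drifts.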

Stable Analytic Approximations
The recursion (28) cannot be realized directly because of the infinite summation both in the numerator and the denominator. We replace the infinite sums by finite ones, and the corresponding vector sequence X̂_r(s), calculated by formula (29), is called the analytic approximation of the s-th order of X̂_r. Obviously, X̂_r(s) is stable. Let us introduce positive random numbers and matrices ξ. The estimates X̂_r (28) and X̂_r(s) (29) can be rewritten in the recurrent form: X̂_r(s) = (1 ξ_r X̂_{r−1}(s))^{−1} ξ_r X̂_{r−1}(s).
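The recurrent form makes stability self-evident: whatever matrix with positive entries the integral weights produce, the map x ↦ ξx / (1ξx) sends the probabilistic simplex into itself. A sketch with a random positive matrix standing in for the true weights ξ_r:

```python
import numpy as np

def analytic_step(x_prev, xi):
    """One step of the recurrent form: xi is the matrix of (positive) integral
    weights; dividing by the scalar 1*xi*x keeps the output on the simplex."""
    v = xi @ x_prev
    return v / v.sum()

rng = np.random.default_rng(0)
xi = rng.uniform(0.1, 1.0, size=(4, 4))   # any matrix with positive entries
x = np.full(4, 0.25)
for _ in range(100):
    x = analytic_step(x, xi)
print(bool(np.all(x >= 0)), bool(np.isclose(x.sum(), 1.0)))  # True True
```

Iterating with a fixed positive matrix is, of course, only a caricature of the filter, but it isolates the mechanism that guarantees non-negativity and normalization at every step.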
Let us define the global distance Σ_r(s) [28] between the estimates {X̂_r(s)} and {X̂_r}. This natural characteristic shows the maximal expected divergence of the recursions (28) and (29) at the r-th step.
The assertion below defines an upper bound of the characteristic Σ r (s).
Here λ ≜ max_{1≤n≤N} |λ_{nn}|, and C_1 = C_1(h, λ) ∈ (0, 1) is a parameter bounded from above. The proof of Lemma 4 is given in Appendix D. The assertion of the Lemma brings practical benefit. The Lemma does not contain any asymptotic requirements on either the approximation order s or the discretization step h: inequality (34) is universal. Mostly, in digital control systems the data acquisition rate is fixed or bounded from above. There are some extra algorithmic limitations of the rate: the "raw" data should be preprocessed, smoothed, averaged, refined from outliers, etc. For example, the utilization of the central limit theorem [29] and the diffusion approximation framework [30] for renewal processes is legitimate with significant averaging intervals, whose length depends on the process moments. Now we fix the time instant T and consider the asymptotics h → 0. In this case r = T/h → ∞.

Stable Numerical Approximations
In the recursion (32) we use the integrals ξ_r^{ij}, which cannot be calculated analytically. The numerical integration brings some extra approximation error. Let us investigate its effect on the total accuracy of the filter's numerical realization.
The integrals ξ^{ij}(y) are usually approximated by sums of the form (36). In complete analogy with ξ_q, we define the approximations ψ_q ≜ ψ^{ij}(Y_q), i, j = 1, …, N. By construction, the elements of ψ_q are positive random variables; hence the approximation X̃_r ≜ (1 ψ_r X̃_{r−1})^{−1} ψ_r X̃_{r−1}, X̃_0 = π (37), is stable. Below we denote the numerical integration errors and their absolute values accordingly. So, the recursion (32) is replaced by the scheme (37), holding the common initial condition π. Both (32) and (37) are constructed in light of the event A_r^s: the state transition numbers do not exceed the threshold s over any subinterval [t_{q−1}, t_q] belonging to [0, t_r]. So, the distance between X̃_r and X̂_r(s) should be determined taking A_r^s into account. In view of this fact, we propose the pseudo-metric E_r(s). This index reflects the maximal divergence of the algorithms (32) and (37) after r steps, started from an arbitrary but common initial condition.
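The quadrature error condition concerns deterministic calculus only. For instance, the midpoint rectangle scheme used later in the numerical example has a well-known O(m⁻²) error; a generic illustration on a smooth integrand (not the actual integrals ξ^{ij}):

```python
import math

def midpoint_rule(f, a, b, m):
    """Simple midpoint rectangle scheme over m subintervals of [a, b]."""
    h = (b - a) / m
    return h * sum(f(a + (k + 0.5) * h) for k in range(m))

exact = math.e - 1.0                      # the true value of the integral of e^x over [0, 1]
approx = midpoint_rule(math.exp, 0.0, 1.0, 100)
print(abs(approx - exact) < 1e-4)  # True: the midpoint error decays as O(m^-2)
```

Whatever quadrature is chosen, its relative error plays the role of the parameter δ in the accuracy bounds below.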
Theorem 2. If condition (41) is true for the numerical integration scheme (36), then the distance E_r(s) is bounded from above. The proof of Theorem 2 is given in Appendix E. The possibility of describing the accuracy of the numerical algorithm for stochastic filtering using only condition (41), which relates to calculus, looks remarkable. Furthermore, if the total weight Q = ∑_{ℓ,j} q_{ℓj} is separated from unity, i.e., Q < 1, then the index E_r(s) is a sublinear function of r, as is the index Σ_r(s) of the analytic accuracy. Notably, in the classical numerical algorithms for SDS solution the global error grows linearly with respect to the number of steps r [26]. The precision characteristics of both the analytical approximation and its numerical realization should be aggregated into one. If the conditions of Lemma 4 and Theorem 2 are valid, then the local distance (i.e., the distance after one iteration) between the optimal filtering estimate and its numerical approximation can be bounded from above. The global distance between X̂_r ≜ E{X_r | 𝒴_r} and X̃_r can be bounded in a similar way. We could choose the parameters (h, s) of the analytical approximation and δ of the numerical integration independently of each other. However, both the limitation of computational resources and the accuracy requirements lead to the necessity of mutual optimization of (h, s, δ).
Let us fix some time horizon T along with the order s of the analytical approximation, and consider the asymptotics r → ∞ or, equivalently, h = T/r → 0. Due to the Bernoulli inequality and the condition 0 < Q ≤ 1, we obtain a bound in which the first summand in the brackets represents the contribution of the analytical approximation error, while the second one reflects the error of the chosen numerical integration scheme. Obviously, the optimal choice of the parameters provides an equal infinitesimal order for both summands, and this is possible when δ ∼ (λh)^{s+1}/λ.

Numerical Example
To illustrate the correspondence between the theoretical estimate and its realization, along with the performance of the numerical algorithm, we consider the filtering problem for the observation system (1) and (2) with the following parameters: t ∈ [0, 1], N = 3. The specified observation system is one with state-dependent noise, and the conditions of Corollary 1 hold, so the optimal filter (23) restores the MJP state precisely from the available noisy observations. Let us verify this theoretical fact using the recursive algorithm (37). We choose the analytical approximation of order s = 1 with numerical integration by the simple midpoint rectangle scheme and calculate the estimate approximations with decreasing time-discretization steps h = 0.01, 0.001, 0.0001, 0.00001. We expect a descent of the estimation error characterized by the quality index S_t(h). To calculate the criterion, we use the Monte-Carlo method over a test sample of size 1000. Figure 1 presents the corresponding plots of the quality index S_t(h) for various values of h. The determination of the precision order provided by the chosen numerical integration method is beyond the scope of this investigation. Nevertheless, one can see the expected decrease of the estimation error as the time-discretization step descends. We appraise this result as a practical confirmation of both the theoretical assertions and the numerical algorithm.
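The Monte-Carlo evaluation of the quality index can be organized as below. Since the exact definition of S_t(h) is elided in the text, the sample-mean L1 distance between the state indicator and its estimate is only an assumed stand-in:

```python
import numpy as np

def empirical_error(true_states, estimates):
    """Monte-Carlo evaluation of a filtering quality index: the sample mean
    of the L1 distance between the state indicator X_t and its estimate.
    (An assumed stand-in for the paper's criterion S_t(h).)"""
    return np.mean(np.abs(true_states - estimates).sum(axis=-1), axis=0)

# hypothetical sample: 1000 paths, 3 states, a single time instant
rng = np.random.default_rng(1)
idx = rng.integers(0, 3, size=1000)
X = np.eye(3)[idx]                       # unit-vector states e_1, e_2, e_3
Xhat = X.copy()                          # a perfect estimate gives zero error
print(empirical_error(X, Xhat))  # 0.0
```

With estimates computed by (37) instead of the perfect copy above, the same function evaluated on a grid of time instants produces curves analogous to those in Figure 1.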

Conclusions
In this paper, we investigated the optimal filtering problem for MJP states given indirect noisy continuous-time observations. The observation noise intensity was a function of the estimated state, so it was impossible to apply the classic Wonham filter to this observation system. To overcome this obstacle, we suggested an observation transform. On the one hand, the transformed observations remained equivalent to the original ones from the informational point of view. On the other hand, the "new" observations allowed us to apply the effective stochastic analysis framework to process them. We derived the optimal filtering estimate theoretically as a unique strong solution to some discrete-continuous stochastic differential system. The transformed observations included the derivative of the quadratic characteristic, i.e., the result of some limit passage in the stochastic setting. Hence, the subsequent numerical realization of the filtering became challenging. We proposed to approximate the initial continuous-time filtering problem by a sequence of optimal ones given the time-discretized observations. We also involved numerical integration schemes to calculate the integrals included in the estimation formula. We proved assertions characterizing the accuracy of the numerical approximation of the filtering estimate, i.e., the distance between the calculated approximation and the optimal discrete-time filtering estimate. The accuracy depends on the observation system parameters, the time discretization step, the threshold on the number of state transitions during a time step, and the chosen scheme of numerical integration. We suggested a whole class of numerical filtering algorithms. In each case, one can choose a specific algorithm individually, taking into account the characteristics of the concrete observation system, the accuracy requirements, and the available computing resources.
We do not consider the presented investigation complete. First, the characterization of the distance between the initial optimal continuous-time filtering estimate and its proposed approximation is still an open problem. Second, we can use the theoretical solution to the MJP filtering problem as a base for numerical schemes for diffusion process filtering given observations with state-dependent noise. Third, the obtained optimal filtering estimate looks like a springboard for a solution to the optimal stochastic control of Markov jump processes given both counting and diffusion observations with state-dependent noise. All of this research is in progress.

Conflicts of Interest:
The authors declare no conflict of interest.

Abbreviations
The following abbreviations are used in this manuscript:

CME Conditional mathematical expectation
MJP Markov jump process
pdf Probability density function
RHS Right-hand side
SDS Stochastic differential system

Appendix A. Proof of Lemma 2
From (14), (15), the identity diag(a)b ≡ diag(b)a, the fact that J_n(t) = J_n(t−) at all but finitely many points of any finite interval, and property 4 of the function K(t), the required equalities follow. Assertion 1 of the Lemma is proved. The definition of the processes C_t^n (n = 1, …, N) guarantees their strong orthogonality, i.e., P{∆C_t^i ∆C_t^j ≠ 0} ≡ 0 for any i ≠ j and t ≥ 0, so [C^i, C^j]_t ≡ 0. Let us use (5), (19) and the properties of X and J_n to derive the quadratic characteristic of C^n. Assertion 2 of the Lemma is proved. If s and t are two arbitrary moments such that s ≤ t, then ν_t^n is a 𝒴_t-adapted martingale. Note that ν_t^n is purely discontinuous with unit jumps, hence its representation involves some 𝒴_t-adapted martingale µ_t^0. From the uniqueness of the special semimartingale representation of [ν^n, ν^n]_t it follows that ⟨ν^n, ν^n⟩_t = ∫_0^t 1Γ_n(s) X_s ds. Lemma 2 is proved.

Appendix B. Proof of Theorem 1
We use the same approach as in ( [6], Part III, Sect. 8.7) to derive the MJP filtering equations. The idea exploits the uniqueness of the representation for a special semimartingale along with the integral representation of a martingale [23].

From the Bayes rule it follows that
Due to the generalized Itô rule, formulae (5), (18) and the properties of X and J_j, we obtain a representation in which µ_t^5 is an F_t-adapted martingale. Conditioning both sides of this equality on 𝒴_t, we get a version with a 𝒴_t-adapted martingale µ_t^6. On the other hand, using the Itô rule, representation (A3) and the quadratic characteristic (21), we deduce a representation with a 𝒴_t-adapted martingale µ_t^7. Since the representations (A7) and (A8) correspond to the same special semimartingale X_t C_t^j, we conclude that the process β_s^j should satisfy the corresponding equality. Acting as with the coefficient α_t, we choose the predictable processes β_t^j accordingly. So, on the interval [κ_{n−1}, κ_n) the optimal filtering estimate X̂_t is described by the SDS (A10). Since P{∆X_{κ_n} = 0} = 1, equation (A10) presumes the P-a.s. fulfilment of an equality for E{X_{κ_n} | 𝒴_{κ_{n−1}} ∨ σ{U_s, s ∈ (κ_{n−1}, κ_n]} ∨ σ{C_s^j, s ∈ (κ_{n−1}, κ_n], j = 1, …, N}}. Finally, 𝒴_{κ_n} = 𝒴_{κ_{n−1}} ∨ σ{U_s, s ∈ (κ_{n−1}, κ_n]} ∨ σ{C_s^j, s ∈ (κ_{n−1}, κ_n], j = 1, …, N} ∨ σ{∆D_{κ_n}}, so by the Bayes rule we obtain the required jump formula. To simplify the inferences, we omit the index r in Ξ_r and Θ_r. Let us consider the auxiliary estimate X̄_r ≜ E{X_{t_r} I_{A_r^s}(ω) | 𝒴_r}; its explicit form follows from the Bayes rule. From (A13) and (A14) we deduce the corresponding bound for r = 1 and all π ∈ Π. The counting process N_t^X has the quadratic characteristic ⟨N^X, N^X⟩_t = −∫_0^t ∑_{n=1}^N λ_{nn} X_s^n ds, hence the probability P{a_1^s} can be bounded from above as in (A16). Formulae (A15) and (A16) lead to the fact that sup_{π∈Π} E‖X̂_1 − X̂_1(s)‖_1 ≤ 2C_1/(s+1)!. The Markovianity of the pair (X_t, N_t^X) and inequality (A16) also allow bounding the probability P{A_r^s} from above, which leads to (34). Lemma 4 is proved.
The correctness of the Lemma assertion in the general case of E{I_{A_r^s}(ω) Φ_r} can be verified similarly. Lemma A1 is proved.