Article

Model of Random Field with Piece-Constant Values and Sampling-Restoration Algorithm of Its Realizations

by
Yuri Goritskiy
1,
Vladimir Kazakov
2,*,
Olga Shevchenko
1 and
Francisco Mendoza
2
1
Department, Moscow Power Engineering Institute, Technical University, Krasnokazarmennya 14, 111250 Moscow, Russia
2
National Polytechnic Institute of Mexico, Ave. IPN s/n, Building Z, Access 4, 3rd Floor, SEPI-Telecommunications, 07738 Mexico City, Mexico
*
Author to whom correspondence should be addressed.
Entropy 2019, 21(8), 792; https://doi.org/10.3390/e21080792
Submission received: 27 June 2019 / Revised: 31 July 2019 / Accepted: 5 August 2019 / Published: 14 August 2019
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract:
We propose a description of a model of a random piecewise constant field formed by the sum of realizations of two Markov processes with an arbitrary number of states, defined along mutually perpendicular axes. The number of field quantization levels can be arbitrary. Realizations of the field model with a desired shape are created by an appropriate choice of parameters for the formative realizations of the Markov processes. For the proposed field model, we investigate the sampling and restoration algorithm for arbitrarily chosen realizations. As a result, we determine the optimal sampling and recovery algorithms. The resulting sampling is fundamentally non-periodic. Recovery errors are calculated. Two examples are considered.

1. Introduction

In image processing, the first operation performed on a specific plot is, in most cases, level quantization. The number of levels may differ depending on the objectives of the study. The second operation is the sampling of the quantized plot using Cartesian (or polar) coordinates. The problems that arise here are typical for the sampling-recovery algorithms (SRA) of realizations of random fields: it is necessary to determine, based on a priori information about the image and the selected criteria, the optimal sampling-recovery procedure, and then to evaluate the quality of the restoration.
Gaussian models of random fields are the simplest, because for them the analytical expressions for the multidimensional distributions of random variables, which represent the a priori information about the field, are known. The situation with non-Gaussian field models is fundamentally different, since the analytical description becomes very complicated, and the level quantization operation only aggravates the situation. This circumstance explains the fact that the number of works analyzing real images is constantly growing; see [1,2,3], etc.
Various models of random fields whose realizations are quantized by level and sampled on a plane are known in the literature (see, for example, the fundamental books [4,5,6,7,8], as well as the journal articles [9,10,11,12,13,14]). However, the authors of the present article are not aware of any works discussing statistical models of random non-Gaussian fields with a finite number of states (i.e., analytical field models after quantization). In addition, there are no publications devoted to the description of the SRA of realizations of such fields. The present work is intended to remedy these shortcomings.
The first part of this work deals with a generalization of a random field model with jumps (or piecewise constant states). The first publication devoted to the analytical description of non-Gaussian fields with jumps in brightness is [15], which describes a “chessboard with random rectangles”. This model was further generalized in [16,17]; in [17] the number of levels was increased to four. Realizations of such random fields with piecewise constant states are produced by summing realizations of binary Markov processes given along the coordinate axes. In the present work we generalize the model of [17] by forming the field realizations as a sum of realizations of piecewise constant Markov processes with an arbitrary number of states (such processes are called Markov chains with continuous time). Obviously, the field realizations also have an arbitrary number of states. Using realizations of Markov processes as generators of field realizations makes it possible to describe a random field model with piecewise constant states analytically.
The second part of the work is devoted to the study of the sampling-recovery algorithm (SRA) of the realizations of the proposed field model. The SRA for random continuous processes and fields are widely known; however, studies of the SRA of realizations of discontinuous processes and fields are scarce. We begin with a brief discussion of the SRA of random processes with piecewise constant states. The SRA for realizations of such processes differ markedly from the SRA for realizations of continuous random processes. Namely, in the case of SRA of continuous processes, it is necessary to determine the recovery function in the interpolation and/or extrapolation modes, as well as to assess the quality of the recovery; in that case, the recovery function and the recovery error are functions of time. The SRA for realizations of piecewise constant processes are completely different, since here it becomes necessary to determine the procedure for estimating the instants of transitions from one state to another, as well as to assess the quality of the estimation. In other words, it comes down to estimating a set of random variables. Naturally, in both cases the SRA are based on a priori information about the random processes, which is significantly different.
Realizations of random processes and fields in SRA are investigated by applying the conditional mean rule (CMR). The optimal structures of the reconstructors of realizations are determined for an arbitrary number and location of the samples. The conditional mean recovery algorithm automatically provides the minimum mean square recovery error. Numerous examples of the application of this technique can be found in [18,19,20,21]. In particular, on the basis of this method, the SRA of realizations of a binary Markov process [22] and the SRA of realizations of a piecewise constant Markov process with an arbitrary number of states [23] were analyzed. The results of [22] served as the basis for the study of the SRA for random piecewise constant fields [16,17] with the number of states equal to 2, 3, 4. In this work, using the approach of [23], we study the SRA for field realizations with an arbitrary number of states. Clearly, an increase in the number of levels of field values increases the possibilities for describing random field models quantized by levels. We emphasize that the optimal SRA entails non-periodic sampling of realizations. This nontrivial feature is handled by the CMR method used.
The proposed model is flexible, since a specific type of realizations of a field model with the desired features can be formed by an expedient choice of probabilistic characteristics of the generating Markov processes.
The rest of the article consists of five sections. The first describes the proposed random field model with piecewise constant states. The second describes the sampling-recovery algorithm for field realizations and discusses the conditions under which the realizations of the generating processes can be restored from a known field realization. The third justifies the choice of sampling intervals for the realizations of the processes for a given probability of missing a state. In the fourth, the instants of transition from one state to another are estimated from discrete samples. The fifth gives two examples that illustrate the proposed algorithms.

2. Description of the Model of a Random Field with Piecewise Constant States

We form a random piecewise-constant field ζ(t, s) by summing two independent homogeneous Markov processes ξ(t) and η(s):
ζ(t, s) = ξ(t) + η(s)
The processes ξ(t) and η(s) have continuous arguments and discrete sets of states (let these be the sets X = {x_i, i = 0, 1, …, N_ξ − 1} and Y = {y_j, j = 0, 1, …, N_η − 1}), defined on the axes t and s. Hereinafter, it is convenient to perceive t and s as time axes. N_ξ and N_η are the numbers of states of the corresponding processes. We do not introduce separate notation for the realizations of the random functions ξ(t), η(s) and ζ(t, s); therefore, Equation (1) will also be understood as a sum of realizations. The Markov property of the formative processes allows an analytic description of the structure of such a field. It might be possible to abandon the Markov property of the generating processes, but this significantly complicates the analytical description of the SRA (for comparison see, for example, [24]).
It is assumed that the characteristics of the processes ξ(t) and η(s) (the exit densities and the transition probabilities) are known. Denote the exit densities (exit intensities) of the states by λ_ξi, i = 0, 1, …, N_ξ − 1 for ξ(t), and λ_ηj, j = 0, 1, …, N_η − 1 for η(s), the meaning of which is that:
P{exit from the state x_i during the interval Δt} = λ_ξi Δt + o(Δt)
P{exit from the state y_j during the interval Δs} = λ_ηj Δs + o(Δs)
We also assume that for ξ ( t ) and η ( s ) there are given graphs or transition probability matrices P ξ = [ p i k ] , P η = [ q j s ] .
Figure 1 shows examples of realizations of fields formed by the sum in Equation (1). The number of levels of each formative function in Figure 1a–c is 10 (a total of 19 field levels). High intensity of red corresponds to the maximum value of the field, and high intensity of blue to the minimum.
Figure 1a–c illustrate the flexibility of the proposed model: various realizations of the random field can be formed by selecting the parameters of the generating realizations. Specifically, it is possible to create field realizations with a fairly large area in which there are no transitions. Such areas can be in the center (Figure 1a) or shifted in any direction (Figure 1b shows an upward shift). It is also possible to simulate two maxima (Figure 1c). Small areas of various sizes can be created around such areas, and so on. Figure 2 shows a field with a maximum at the center (similar to Figure 1a) with a coarser quantization of 8 levels for the field values.
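The generation of such field realizations can be illustrated with a short simulation. The sketch below (the state sets, exit intensities and transition matrix are hypothetical, chosen only for illustration) simulates the two formative Markov chains and sums them according to Equation (1):

```python
import numpy as np

rng = np.random.default_rng(0)

def markov_realization(states, lam, P, horizon):
    """Simulate one realization of a continuous-time Markov chain.

    states : list of state values x_i
    lam    : exit intensities lambda_i (holding time in state i ~ Exp(lam[i]))
    P      : transition probability matrix with zero diagonal
    horizon: length of the axis to cover

    Returns (switch_times, values) describing a piecewise-constant function.
    """
    i = rng.integers(len(states))            # random initial state
    t, times, values = 0.0, [0.0], [states[i]]
    while t < horizon:
        t += rng.exponential(1.0 / lam[i])   # exponential holding time in state i
        i = rng.choice(len(states), p=P[i])  # jump to the next state
        times.append(t)
        values.append(states[i])
    return np.array(times), np.array(values)

def evaluate(times, values, u):
    """Value of a piecewise-constant realization at the points u."""
    return values[np.searchsorted(times, u, side="right") - 1]

# Hypothetical parameters: 4 states with nearest-neighbour transitions
states = [0, 1, 2, 3]
lam = np.array([1.0, 2.0, 2.0, 1.0])
P = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 1.0, 0.0]])

t_ax, x_val = markov_realization(states, lam, P, horizon=10.0)
s_ax, y_val = markov_realization(states, lam, P, horizon=10.0)

# Field realization zeta(t, s) = xi(t) + eta(s) on a grid, as in Equation (1)
tg = np.linspace(0, 10, 200)
field = evaluate(t_ax, x_val, tg)[:, None] + evaluate(s_ax, y_val, tg)[None, :]
```

Plotting `field` as an image reproduces piecewise-constant mosaics of the kind shown in Figure 1; shaping the realizations (for instance, a large constant central area) is then a matter of tuning the intensities λ.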

3. Description of the Sample-Recovery Algorithm of the Realization of the Field Model

Unrecoverable errors associated with the omission of states may occur when sampling realizations of processes with piecewise constant states. Such errors occur when the duration of stay of the process in a state is less than the sampling interval. The SRA of realizations of the processes under consideration should take this phenomenon into account. Namely, the SRA must be determined so that, when recovering, the probability of skipping a state is no more than a given level γ [22,23]:
P{skip of a state} ≤ γ
This condition expresses the idea: “it is advisable not to skip states”. A similar requirement must be met when analyzing the SRA of a random field ζ(t, s). Condition Equation (2) is detailed below in Section 4.
The description of the SRA realizations of the field ζ ( t , s ) is associated with the analysis of the SRA realizations of the generating processes ξ ( t ) and η ( s ) . At the same time, knowing the samples of the field realization, one must not only estimate the position of the switching points of the process realizations, but also determine the states of the processes themselves. The latter circumstance is important because the residence time of the process in a given state and the sampling interval depend on the state itself. We emphasize this circumstance, since sampling turns out to be fundamentally non-periodic.
To determine the realizations of ξ(t) and η(s) from known values of the field ζ(t, s), it is enough to know its value at one point of one of the formative realizations (ξ(t) or η(s)). For example, if at some point s* the value η* = η(s*) is known, then by Equation (1) we have
ξ(t) = ζ(t, s*) − η(s*) = ζ(t, s*) − η*.
Then at an arbitrary point t * of the t axis one can write:
ξ(t*) = ζ(t*, s*) − η(s*) = ζ(t*, s*) − η*
Finally, the application Equation (1) yields
η(s) = ζ(t*, s) − ξ(t*) = ζ(t*, s) − ξ*.
The possibility of unambiguous recovery depends on the sets of states of the formative processes and on the specific states used in a given field realization. There may be situations where recovery is trivial. For example, let the process ξ(t) have d states x_i = iMΔ, i = 0, 1, …, d − 1 (Δ is the measurement discreteness; we take Δ = 1), and let the process η(s) have M states y_j = jΔ, j = 0, 1, 2, …, M − 1.
Table 1 shows the values of the function z ( x , y ) = x + y for arbitrary d and M .
In particular, for d = M = 4, this gives dM = 16 field levels. The values of this function are given in Table 2.
In the examples, all values of the function z = f ( x , y ) = x + y are different, and therefore each field value corresponds to a single pair ( x , y ) , so restoring the realizations of the processes ξ ( t ) and η ( s ) is trivial.
The situation when the state grids of the processes have the same step is quite important. For example, let the processes ξ(t) and η(s) have N integer states: x_i = i, y_j = j; i, j = 0, 1, …, N − 1. The values of the sum for N = 4 are shown in Table 3. It is clear that if the field has the value 1, 2, 3 or 5, then the pair (x, y) is not uniquely determined. If the value of the field is 0 or 6, then the pair (x, y) is uniquely determined, and therefore the realizations ξ(t) and η(s) are determined in accordance with Equations (3) and (4).
For this important case (the case of sets with the same grid step), the necessary and sufficient condition for the uniqueness of the recovery of the generating realizations is the following statement: among the values of the realization of the field ζ(t, s) there is either the minimum possible value
min_{t,s} ζ(t, s) = min_{ξ,η} ζ(ξ, η) = ζ_min = x_min + y_min = z_min
or the maximum possible value
max_{t,s} ζ(t, s) = max_{ξ,η} ζ(ξ, η) = ζ_max = x_max + y_max = z_max
where ( x min , y min ) , ( x max , y max ) are the minimum and maximum values of the sets X and Y .
The sufficiency is obvious, since each of these values corresponds to a single pair: ( x min , y min ) for ζ min , and ( x max , y max ) for ζ max , and then by Equations (3) and (4) the realizations ξ ( t ) and η ( s ) are restored.
The necessity: uniqueness implies the fulfillment of condition Equation (5) or (6). Suppose that neither of the conditions Equations (5) and (6) is satisfied, i.e., among the values (ξ, η) there is neither the point (x_min, y_min) nor the point (x_max, y_max). This condition can be written differently: for any value (ξ, η) = (x, y),
z_min + 1 ≤ ζ(ξ, η) = x + y ≤ z_max − 1
and we show that the recovery is not unique. Indeed, the available values z = ζ(ξ, η) of the field are obtained by summing some values x_i and some values y_j, and therefore the system of algebraic equations
ζ_{i,j} = x_i + y_j
in the unknown terms x_1, x_2, …, y_1, y_2, … is solvable by construction (the question is whether the solution is unique). Let x_1, x_2, …, y_1, y_2, … generate the available values ζ_{ij} of the field.
Condition Equation (7) (the absence of the two “necessary” points (x_max, y_max) and (x_min, y_min)) is satisfied only in two cases:
all x_i ≥ x_min + 1 and all y_j ≤ y_max − 1,
or all x_i ≤ x_max − 1 and all y_j ≥ y_min + 1.
These two situations are illustrated in Figure 3a,b.
We form a new solution by marking the values with primes. If Equation (9) is true, then from each x_i we subtract 1, and to each y_j we add 1:
x′_i = x_i − 1, y′_j = y_j + 1.
It is clear that x′_1, x′_2, …, y′_1, y′_2, … also satisfies system Equation (8), and in Figure 3 the range of values (a) goes into region (b).
If Equation (10) is true, then we add 1 to each x_i and subtract 1 from each y_j: x′_i = x_i + 1, y′_j = y_j − 1.
These values also satisfy system Equation (8), while in Figure 3 the domain (b) goes into region (a). Thus, if neither condition Equation (5) nor Equation (6) is satisfied, then the solution is not unique. The necessity is shown.
Note that if the uniqueness condition is not satisfied, then the restoration is possible up to constants, and this is quite acceptable in many practical situations.
So, the necessary and sufficient condition for the uniqueness of the recovery of the formative realizations ξ(t) and η(s) is the existence (among the values of a given field realization) of at least one extreme value ζ* (ζ_max or ζ_min) of the field, which can be represented as a sum in only one way:
ζ* = ξ* + η*, ξ* ∈ X, η* ∈ Y
where X and Y are the sets of the states of the processes, respectively, ξ ( t ) and η ( s ) .
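This condition can be checked numerically on the integer-grid example of Table 3 (N = 4, states 0, 1, 2, 3 for both processes). The sketch below (the helper name and the particular state lists are ours, for illustration) enumerates the constant shifts c for which x_i − c and y_j + c form another admissible solution of the system ζ_ij = x_i + y_j:

```python
# States of both processes: the integer grid 0..3 (the Table 3 example)
X = Y = [0, 1, 2, 3]

def consistent_solutions(xs, ys):
    """Enumerate constant shifts c such that (x - c, y + c) is another
    valid solution of the system zeta_ij = x_i + y_j within the grids."""
    shifts = []
    for c in range(-3, 4):
        if all(x - c in X for x in xs) and all(y + c in Y for y in ys):
            shifts.append(c)
    return shifts

# Case 1: the extreme pair (x_max, y_max) = (3, 3) occurs -> only c = 0 survives
print(consistent_solutions([1, 2, 3], [2, 3]))   # [0]

# Case 2: neither (0, 0) nor (3, 3) occurs -> recovery only up to a constant
print(consistent_solutions([1, 2], [1, 2]))      # [-1, 0, 1]
```

When an extreme field value is present, only the zero shift survives and the recovery is unique; otherwise the formative realizations are determined only up to constants, as noted above.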
The sampling procedure consists of the following:
(1)
Determining the values ζ * , ξ * , η * among the given field values together with the corresponding values of the arguments t * and s * .
(2)
The definition of formative realizations ξ ( t ) and η ( s ) by Equations (3) and (4).
(3)
Calculation of the sampling intervals T_n^ξ and T_m^η along the t and s axes and of the sampling points: t_{n+1} = t_n + T_n^ξ, s_{m+1} = s_m + T_m^η, t_0 = 0, s_0 = 0, n, m = 1, 2, …
(4)
Measurement of the values
ξ_n = ζ(t_n, s*) − η*,  η_m = ζ(t*, s_m) − ξ*
The sampling results are the sets of values
(t_0, ξ_0), (t_1, ξ_1), …, (t_n, ξ_n), …,
(s_0, η_0), (s_1, η_1), …, (s_m, η_m), …
The field recovery procedure consists of the following actions:
(1)
Restoration of the realizations ξ_v(t) and η_v(s) from the values ξ_n and η_m, n, m = 1, 2, …;
(2)
Determining the recovered values of the ζ v ( t , s ) field in accordance with Equation (1):
ζ v ( t , s ) = ξ v ( t ) + η v ( s )

4. Determination of Sampling Intervals

SRA for realizations of Markov processes with a finite number of states are described in [22]. Let a continuous-time Markov process have states 0, 1, 2, …, N − 1 with the corresponding exit intensities λ_0, …, λ_{N−1} and the matrix of conditional transition probabilities P = [p_{ij}] (with p_{ii} = 0 on the diagonal). Let θ_i be the duration of stay in state i, and θ_j the duration of stay in the next state j. By the Markov property, the random variables θ_i and θ_j are independent and distributed according to the exponential law with parameters λ_i and λ_j, with density
f(x; λ) = λ e^{−λx}, x ≥ 0,
and the corresponding distribution function F(x; λ).
At a sampling instant t_n, it is necessary to determine the next instant t_{n+1} = t_n + T_n, i.e., to determine the sampling interval T_n, which is selected from condition Equation (2): a given small probability γ of skipping, after the state i, an unknown next state j.
Next, we use the following property of the exponential distribution: the conditional distribution of the remaining waiting time does not depend on the elapsed waiting time and coincides with the distribution of the time in the state. The event “skip of a state” (mentioned in Equation (2)) leads to the condition:
P{θ_i + θ_j < T} ≤ γ
for any j for which p_{ij} > 0, i.e., the maximum of this probability over j must equal γ:
max_{j: p_{ij} > 0} P{θ_i + θ_j < T} = max_{j: p_{ij} > 0} F_Σ(T; i, j) = F_Σ(T; i, j*) = γ
where F_Σ(T; i, j) is the distribution function of the sum Σ = θ_i + θ_j. Obviously, the greater λ_j is, the smaller θ_j is. Equation (13) reaches its maximum at the state j* = j*(i) for which λ_j is maximal:
λ_{j*} = max_{j: p_{ij} > 0} λ_j
The distribution function of the sum Σ = θ_i + θ_{j*} is F_Σ(x; λ_i, λ_{j*}). Note that λ_i and λ_{j*} enter this expression symmetrically, which allows us to pass to a one-parameter family of functions. Denote
α = max(λ_i, λ_{j*}), β = min(λ_i, λ_{j*}), k = β/α, 0 < k ≤ 1
It is easy to show that the probability density of the random variable αΣ is determined by the formula:
p_{αΣ}(t) = (1/α) p_Σ(t/α) = (e^{−kt} − e^{−t}) k/(1 − k) if k < 1, and t e^{−t} if k = 1.
Then for the random variable αΣ the distribution function, depending on the parameter k, is easily shown [23] to be
F_{αΣ}(x, k) = 1 − (e^{−kx}/k − e^{−x}) k/(1 − k) if k ≠ 1, and 1 − (1 + x) e^{−x} if k = 1.
Condition Equation (13) takes the form:
P(θ_i + θ_{j*} < T) = P[α(θ_i + θ_{j*}) < αT] = F_{αΣ}(αT, k) = γ
From Equations (16) and (17), one can find the value of the sampling interval:
T = T(i, γ) = (1/α) F_{αΣ}^{−1}(γ, k)
where F_{αΣ}^{−1}(γ, k) is the inverse of F_{αΣ}(x, k) with respect to the first variable (the monotonic increase in x of the function F_{αΣ}(x, k) is taken into account). Note that the dependence of the interval T on i enters through α and k. The graphs of the functions F_{αΣ}^{−1}(γ, k), depending on the parameter k for different γ, are shown in Figure 4 [23]. There is also an approximate solution of Equation (16) for T, obtained by expanding in powers of small x [23]:
T ≈ √(2γ/(αβ))
The approximation Equation (19) deviates from the exact solution by no more than 10% for γ < 0.03 [23].
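A minimal sketch of this computation, assuming hypothetical intensities λ_i = 2, λ_{j*} = 3 and γ = 0.01: the distribution function F_αΣ(x, k) is inverted by simple bisection (in place of the graphs of Figure 4), and the result is compared with the approximation of Equation (19):

```python
import math

def F_alpha_sum(x, k):
    """CDF of alpha*(theta_i + theta_j*): the normalized sum of two
    independent exponentials with rates 1 and k = beta/alpha."""
    if abs(k - 1.0) < 1e-12:
        return 1.0 - (1.0 + x) * math.exp(-x)
    return 1.0 - (math.exp(-k * x) / k - math.exp(-x)) * k / (1.0 - k)

def sampling_interval(lam_i, lam_jstar, gamma):
    """Sampling interval T from the skip-probability condition
    F(alpha*T, k) = gamma, solved by bisection."""
    alpha, beta = max(lam_i, lam_jstar), min(lam_i, lam_jstar)
    k = beta / alpha
    lo, hi = 0.0, 1.0
    while F_alpha_sum(hi, k) < gamma:      # bracket the root
        hi *= 2.0
    for _ in range(100):                   # bisection on x = alpha*T
        mid = 0.5 * (lo + hi)
        if F_alpha_sum(mid, k) < gamma:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi) / alpha

# Hypothetical intensities; compare the exact interval with the
# small-gamma approximation T ~ sqrt(2*gamma/(alpha*beta))
lam_i, lam_j, gamma = 2.0, 3.0, 0.01
T_exact = sampling_interval(lam_i, lam_j, gamma)
T_approx = math.sqrt(2.0 * gamma / (lam_i * lam_j))
```

For γ = 0.01 the approximation agrees with the exact interval to within a few percent, consistent with the 10% bound quoted above.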
Let the sample at the sampling instant t_n have the value
ξ(t_n) = ξ_n = x_i.
The least favorable next state, according to Equation (14), is the state with number j* = j*(i). We calculate the quantities α, β and k by relation Equation (15), and the value T_n^ξ = T(i, γ) is found from Equation (18). Then the next sampling instant is determined in the obvious way:
t_{n+1} = t_n + T_n^ξ.
Note that the functions j* = j*(i) and T(i, γ) are calculated in advance. The sampling of η(s) is performed similarly. The sampling results are the sequences of pairs (t_n, ξ_n), n = 0, 1, 2, … and, similarly, (s_m, η_m), m = 0, 1, 2, …

5. Recovery of the Formative Realizations from Discrete Samples

Consider the realization of the process on the axis (0, t) and mark on it the sampling points to be determined: t_1, …, t_n, t_{n+1}, …; t_{n+1} = t_n + T_n^ξ, t_0 = 0. Suppose that on the sampling interval (t_n, t_n + T_n^ξ) there is a transition from state i to state j: ξ(t_n) = i, ξ(t_n + T) = j (here T = T_n^ξ).
It is required to estimate the unobservable value τ, the random instant of transition from i to j. The best estimate (in terms of minimal error variance) is the expectation of the conditional distribution of τ, given that τ ∈ (t_n, t_n + T). Due to the homogeneity of the process, we assume that τ ∈ (0, T). The density of the conditional distribution of the point τ on the interval (0, T), given that there is exactly one transition in the interval, is determined by the following relation [23]:
p_τ(x | i, j) = μ_ξ e^{−μ_ξ x}/(1 − e^{−μ_ξ T}) if μ_ξ ≠ 0, and 1/T if μ_ξ = 0, for 0 ≤ x ≤ T,
where μ_ξ ≡ μ_ξ(i, j) = λ_ξi − λ_ξj. Equation (20) describes a truncated exponential distribution, which in the case μ_ξ = 0 degenerates into a uniform one. From density Equation (20), the conditional expectation is determined, which is the minimum-variance estimate τ̂ of the transition instant τ:
τ̂_ij = E{τ | i, j} = μ_ξ^{−1}(1 − μ_ξ T/(e^{μ_ξ T} − 1)) = T[0.5 + g(u)] if μ_ξ ≠ 0, and T/2 if μ_ξ = 0
where u = μ_ξ(i, j) T and
g(u) = u^{−1}(1 − u/(e^u − 1)) − 0.5 = u^{−1} − 0.5 (1 + e^{−u})/(1 − e^{−u}).
We can verify that g(u) is an odd function and g(0) = 0.
Specifically,
τ̂_ij ≡ E{τ | i, j} = T[0.5 + g(u_ij)],  u_ij = (λ_ξi − λ_ξj) T = μ_ξ T
and the expression for the variance of the estimate is
V τ̂_ij = (1/μ_ξ²)[1 − (μ_ξ T)²/(e^{μ_ξ T} + e^{−μ_ξ T} − 2)] if μ_ξ ≠ 0, and T²/12 if μ_ξ = 0, i ≠ j
Graphs of the relative estimate τ̂/T = 0.5 + g(u) and of the standard deviation σ = √(V τ̂)/T, as functions of the argument u = μT, are shown in Figure 5.
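The estimate τ̂_ij and its standard deviation can be coded directly from the formulas above (a sketch; the function and parameter names are ours):

```python
import math

def g(u):
    """Odd correction term in the conditional-mean transition estimate."""
    if abs(u) < 1e-9:
        return 0.0
    return (1.0 - u / math.expm1(u)) / u - 0.5   # expm1(u) = e^u - 1

def transition_estimate(lam_i, lam_j, T):
    """Conditional-mean estimate of the switching instant on (0, T) and its
    standard deviation, for a transition i -> j with mu = lam_i - lam_j."""
    mu = lam_i - lam_j
    u = mu * T
    tau_hat = T * (0.5 + g(u))
    if abs(u) < 1e-9:
        var = T * T / 12.0                       # uniform case, mu = 0
    else:
        var = (1.0 - u * u / (math.exp(u) + math.exp(-u) - 2.0)) / (mu * mu)
    return tau_hat, math.sqrt(var)

# With equal exit intensities the estimate is the midpoint of the interval
tau, sd = transition_estimate(1.0, 1.0, T=0.5)   # tau = 0.25, sd = 0.5/sqrt(12)
```

With μ_ξ = 0 the estimate degenerates to the midpoint T/2 with standard deviation T/√12, as in the uniform case; with λ_i > λ_j the estimate shifts toward the beginning of the interval.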
Using Equation (22), we obtain the estimated values of the process switching points. The restoration is performed from the values ξ_n, η_m; n, m = 1, 2, …, measured in accordance with Equation (12).
For the process ξ(t): if the interval (t_n, t_{n+1}) is a switching interval, i.e., ξ_n ≠ ξ_{n+1}, then we determine the estimate of the transition point ϑ̂_k^t using Equation (22):
ϑ̂_k^t = t_n + τ̂_ij,
where k is the sequence number of the switching interval on the t axis, k = 1, 2, … The value of the recovered realization ξ_v(t) at a point t of the next interval is determined by the relation: ξ_v(t) = ξ_n if t ∈ [ϑ̂_{k−1}^t, ϑ̂_k^t), ϑ̂_0^t = 0.
For the process η(s): if the interval (s_m, s_{m+1}) is the next transition interval, i.e., η_m ≠ η_{m+1}, then we determine the estimate of the transition point ϑ̂_k^s using Equation (22):
ϑ̂_k^s = s_m + τ̂_ij
where, in Equation (22), instead of μ_ξ we use μ_η ≡ μ_η(i, j) = λ_ηi − λ_ηj, i ≠ j, and instead of T = T^ξ we use T = T^η; k is the sequence number of the transition interval on the s axis, k = 1, 2, … The values of the reconstructed realization at points s are as follows:
η_v(s) = η_m if s ∈ [ϑ̂_{k−1}^s, ϑ̂_k^s), ϑ̂_0^s = 0
For the recovered realizations of the processes ξ v ( t ) and η v ( s ) we determine the reconstructed value of the field realization by Equation (1):
ζ v ( t , s ) = ξ v ( t ) + η v ( s ) .

6. Examples

6.1. Example 1. No Skip of State When Sampling

Let the process ξ(t) have 4 states: 0, 1, 2, 3, and the process η(s) have 5 states: 0, 1, 2, 3, 4. The transition graph of the process ξ(t) is shown in Figure 6; the arrows indicate the transitions together with their densities. It is assumed that from the internal states "1" and "2" transitions are possible only to the neighboring states, with equal probabilities.
The transition graph of the process η(s) differs only in the number of states and the values of the exit densities. The exit densities and the average durations of stay in the states are shown in Table 4.
The sum z = x + y can take 8 integer values from 0 to 7. If it is 0 or 7, then the pair is determined uniquely: for z = 0 it is (0, 0), and for z = 7 it is (3, 4). Figure 7 shows the original field on a 10 × 10 square with 8 color levels. Vertically, there are 5 intervals of constancy, and horizontally, 7; in total, 5 × 7 = 35 rectangles, for which the field values are given in the matrix Z of size 5 × 7.
Z =
[ 4 5 6 7 6 5 6
  3 4 5 6 5 4 5
  2 3 4 5 4 3 4
  1 2 3 4 3 2 3
  0 1 2 3 2 1 2 ]
Among the field values we find the value z_max = 7 (i.e., x_max = 3, y_max = 4) at the point t = 5, s = 9, and also one value z_min = 0 at the point t = 1, s = 1. To recover ξ(t) and η(s), the one point t = 5, s = 9 with z_max = 7 is enough, which means that ξ* = ξ(t = 5) = 3 and η* = η(s = 9) = 4. Then by Equations (3) and (4) we have
ξ(t) = ζ(t, s*) − η(s*) = ζ(t, 9) − 4,
(these are the values of the top row of the matrix minus 4) and η(s) = ζ(t*, s) − ξ(t*) = ζ(5, s) − 3 (these are the values of the middle column of the matrix, read from the bottom up, minus 3). The realizations ξ(t) and η(s) are shown in Figure 8. The corresponding state sequences are as follows: for ξ(t): 0, 1, 2, 3, 2, 1, 2; for η(s): 0, 1, 2, 3, 4.
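This recovery step is easy to reproduce programmatically from the matrix Z (a sketch; the orientation conventions, rows top-down along s and columns along t, follow the matrix as printed):

```python
import numpy as np

# Field values of Example 1: the 5 x 7 matrix Z
Z = np.array([[4, 5, 6, 7, 6, 5, 6],
              [3, 4, 5, 6, 5, 4, 5],
              [2, 3, 4, 5, 4, 3, 4],
              [1, 2, 3, 4, 3, 2, 3],
              [0, 1, 2, 3, 2, 1, 2]])

# Locate the maximum possible value z_max = 7 = x_max + y_max = 3 + 4
r, c = np.unravel_index(np.argmax(Z), Z.shape)

# Recover the formative state sequences by Equations (3) and (4)
xi_seq = Z[r, :] - 4         # xi(t) along the row through the maximum
eta_seq = Z[::-1, c] - 3     # eta(s) along the column, read bottom-up

print(xi_seq)    # [0 1 2 3 2 1 2]
print(eta_seq)   # [0 1 2 3 4]
```

The reconstruction check Z = ξ + η (with the η sequence flipped back to the matrix orientation) confirms Equation (1) for this realization.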
The sampling intervals are obtained by Equations (14), (17) and (18).
For ξ(t) we have the states 0, 1, 2, 3, for which we write out the values of the sampling intervals T^ξ for various values of the admissible skip probability γ (see Equation (2)): for γ = 0.01, T^ξ = 0.16; 0.16; 0.18; 0.24; for γ = 0.05, T^ξ = 0.39; 0.39; 0.44; 0.57.
For η(s) we have the states 0, 1, 2, 3, 4, for which the sampling intervals T^η are as follows: for γ = 0.01, T^η = 0.3; 0.3; 0.3; 0.3; 0.3; for γ = 0.05, T^η = 0.71; 0.71; 0.71; 0.71; 0.71.
For convenience of illustration, the skip probability is taken as γ = 0.05, which gives extended sampling intervals. Figure 9 shows the sampling and reconstruction for ξ(t) and η(s); the reconstructed realizations are depicted by dashed lines. Figure 10 shows the restored field. In this example, no state was skipped during sampling.

6.2. Example 2. There is a Skip of State When Sampling

Consider an example close to the previous one, which differs from it only by the formative realization ξ(t). The new realization is shown in Figure 11, where it is noticeable that the second vertical bar from the left is very narrow.
When sampling, state 1 in the interval (0, 2) is skipped (Figure 12).
For this reason, one column of rectangles disappears in the restored field realization in Figure 13: their number becomes six instead of seven. This is a characteristic error of a special type, which is governed by the choice of the value γ.
We draw attention to the fact that the values found for the sampling intervals of the forming realizations turned out to be different for different states.

7. Conclusions

A model of a random field is proposed that is formed by summing two formative realizations of piecewise constant Markov processes with an arbitrary number of states. The flexibility of the field model is associated with the possibility of choosing the number of quantization states, as well as with the expedient assignment of the probabilistic characteristics of the two formative realizations to obtain field realizations of the desired shape (for example, the presence of relatively large areas without jumps, their location in the field of view, etc.). A sampling-restoration algorithm for the proposed random field model has been developed. For the field type under consideration, we derive the necessary and sufficient condition for the unambiguous recovery of the formative realizations from the field values. The choice of sampling intervals for a given probability of skipping a state is justified. From discrete samples, the random instants of transition from one state to another are estimated, and the variances of the estimates are calculated. Two examples are given to illustrate the proposed algorithms.
The proposed model can be used for modeling various fields quantized by level. It allows selecting the number of quantization levels, as well as forming field realizations with the desired features.

Author Contributions

Methodology, Y.G. and V.K.; software, Y.G., O.S. and F.M.; formal analysis, Y.G. and V.K.; investigation, Y.G. and V.K.; resources, Y.G. and V.K.; data curation, O.S. and F.M.; writing—original draft preparation, Y.G. and V.K.; writing—review and editing, Y.G., V.K., O.S. and F.M.

Funding

This work was supported by the National Polytechnic Institute of Mexico, project 20195464.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Examples of fields formed by the sum of realizations of two Markov processes. (a) In the center there is a rather large area of the field without transitions, at the maximum quantization level; (b) the same area shifted upwards; (c) parts of the high-quantization realization separated into the upper and lower regions.
Figure 2. A field with a pronounced maximum and with a coarse discretization of eight levels.
Figure 3. The range of values (x, y) satisfying conditions Equation (7) is shaded, where x max and y max are the maximum values and x min and y min the minimum values. (a) A case illustrating condition Equation (9); (b) a case illustrating condition Equation (10).
Figure 4. Function F αΣ −1 (γ, k) for different values of γ: 0.1 (1), 0.05 (2), 0.02 (3), 0.01 (4), 0.005 (5), 0.001 (6).
Figure 5. Relative estimate τ ^ / T (curve 1) and its standard deviation σ = √V τ ^ / T (curve 2) as functions of the parameter u = μT.
Figure 6. State transition graph of the process ξ(t).
Figure 7. The specified realization of the field (Example 1).
Figure 8. Formative realizations of the field (Example 1). (a) Realization ξ(t) of the formative Markov process defined along the first axis; (b) realization η(s) of the formative Markov process defined along the second axis.
Figure 9. Sampling and recovery of the formative realizations (Example 1). (a) Sampling and recovery of the formative realization ξ(t); (b) sampling and recovery of the formative realization η(s).
Figure 10. The recovered field (Example 1).
Figure 11. Specified field realization (Example 2).
Figure 12. Sampling and recovery of the realization ξ(t) (Example 2).
Figure 13. Recovered field (Example 2).
Table 1. Values z = f(x, y) = x + y for arbitrary d and M; in total N = dM values.

            y = 0     y = d      y = 2d     ...   y = (M − 1)d
x = 0       0         d          2d         ...   (M − 1)d
x = 1       1         1 + d      1 + 2d     ...   1 + (M − 1)d
x = 2       2         2 + d      2 + 2d     ...   2 + (M − 1)d
...
x = d − 1   d − 1     2d − 1     3d − 1     ...   Md − 1
Table 2. Values z = f(x, y) = x + y for d = M = 4; in total N = 16 values.

        y = 0   y = 4   y = 8   y = 12
x = 0   0       4       8       12
x = 1   1       5       9       13
x = 2   2       6       10      14
x = 3   3       7       11      15
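Table 2 illustrates why the formative values can be recovered unambiguously: when the second process takes values in multiples of d, the sum z = x + y determines the pair via a modulo operation. A minimal sketch (the helper name recover is ours):

```python
# With x in {0, ..., d-1} and y in {0, d, ..., (M-1)d}, the sum
# z = x + y encodes the pair uniquely: x = z mod d, y = z - x.
d, M = 4, 4
xs = list(range(d))               # values of the first process
ys = [k * d for k in range(M)]    # values of the second process

sums = {x + y: (x, y) for x in xs for y in ys}
assert len(sums) == d * M         # all N = 16 sums are distinct

def recover(z, d=4):
    """Invert z = x + y ('recover' is an illustrative helper name)."""
    x = z % d
    return x, z - x

assert all(recover(x + y) == (x, y) for x in xs for y in ys)
```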
Table 3. Field values at M = 4 states.

        y = 0   y = 1   y = 2   y = 3
x = 0   0       1       2       3
x = 1   1       2       3       4
x = 2   2       3       4       5
x = 3   3       4       5       6
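By contrast, with the values of Table 3 both processes take states 0..3, so different pairs (x, y) produce the same sum and the pair cannot be read back from z alone. A quick check of this ambiguity:

```python
from collections import Counter

# Count how many pairs (x, y), with both values in 0..3, give each sum z.
values = range(4)
counts = Counter(x + y for x in values for y in values)

# For instance, z = 3 arises from four pairs: (0,3), (1,2), (2,1), (3,0).
ambiguous = sorted(z for z, c in counts.items() if c > 1)
```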
Table 4. State characteristics.

                      Process ξ(t)              Process η(s)
                      x0    x1    x2    x3      y0    y1    y2    y3    y4
States                0     1     2     3       0     1     2     3     4
Average stay          1.0   1.2   1.5   2.5     2.0   2.0   2.0   2.0   2.0
Output density λ      1.0   0.83  0.66  0.4     0.5   0.5   0.5   0.5   0.5
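A consistency check on Table 4 (ours, not the authors' code): under the exponential-sojourn model, the output density of a state is the reciprocal of the average stay in that state; the stay of 2.5 for the last state of ξ(t) is read off from its tabulated density 0.4.

```python
# Average stays from Table 4 (2.5 inferred as 1 / 0.4).
stays_xi = [1.0, 1.2, 1.5, 2.5]   # states 0..3 of xi(t)
stays_eta = [2.0] * 5             # states 0..4 of eta(s)

# Output density lambda = 1 / (average stay).
lambdas_xi = [1.0 / t for t in stays_xi]
lambdas_eta = [1.0 / t for t in stays_eta]

# Tabulated densities (rounded to two digits in the table).
table_xi = [1.0, 0.83, 0.66, 0.4]
assert all(abs(a - b) < 0.01 for a, b in zip(lambdas_xi, table_xi))
```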
