Article

Constructive Approximation of Nonlinear Operators Based on Piecewise Interpolation Technique

School of Mathematical Sciences, Adelaide University, Mawson Lakes, SA 5095, Australia
*
Author to whom correspondence should be addressed.
Axioms 2026, 15(2), 91; https://doi.org/10.3390/axioms15020091
Submission received: 3 December 2025 / Revised: 31 December 2025 / Accepted: 14 January 2026 / Published: 26 January 2026

Abstract

Suppose $K_Y$ and $K_X$ are the preimage and the image of a nonlinear operator $F: K_Y \to K_X$. It is supposed that the cardinality of each of $K_Y$ and $K_X$ is $N$ and that $N$ is large. We provide an approximation to the map $F$ that requires prior information on only a few elements $p$ from $K_Y$, where $p \ll N$, but still effectively represents $F(K_Y)$. This is achieved under Lipschitz continuity assumptions. The device behind the proposed method is based on a special extension of the piecewise linear interpolation technique to the case of sets of stochastic elements. The proposed technique provides a single operator that transforms any element from the arbitrarily large set $K_Y$. The operator is determined in terms of pseudo-inverse matrices, so it always exists.

1. Introduction

The purpose of the proposed methodology is to provide an effective way to transform large data sets. The methodology is motivated by problems arising in signal processing, where the nonlinear operator $F: K_Y \to K_X$ is interpreted as a nonlinear system (or a nonlinear filter) transforming a set of stochastic signals $K_Y$ to a set of stochastic signals $K_X$. Accordingly, this terminology is used below.
The device behind the proposed method is based on a special extension of the piecewise linear interpolation technique to the case of stochastic signal sets. The device is not straightforward and requires the careful substantiation presented in Section 2.2, Section 3.4, Section 4.2 and Section 4.4 below.

1.1. Motivations

1.1.1. Filtering of Large Arrays of Stochastic Signals

We consider the case when the cardinality $N$ of the signal sets $K_Y$ and $K_X$ is large. Members of $K_Y$ and $K_X$ are represented by stochastic vectors. The problem of finding a filter that effectively transforms the large set $K_Y$ to the large set $K_X$ has been considered in a number of works, such as those represented in [1,2,3,4,5,6,7,8,9,10,11,12,13,14]. The approaches considered in these references require prior information on each reference signal. This requirement significantly limits the applicability of the methods mentioned above; for the processing of large signal arrays, such a restriction is quite inconvenient. The exception is the techniques provided in [15,16], which exploit the initial information in the form of a vector derived by averaging over the reference signal set.
In this paper, we show how to construct a filter $F$ that requires prior information on only a few signals, $p \ll N$, from $K_X$, but still performs effectively compared to known filters based on prior information on every reference signal from $K_X$. Such a filter is denoted by $F^{(p-1)}$.

1.1.2. Basic Idea of the Proposed Methodology

The specific feature of the proposed filter is an adjustment and extension of the piecewise function interpolation technique [17] to the filtering of stochastic signals. It is well known that piecewise function interpolation [17] often provides better accuracy and faster performance than the linear and polynomial approximations used in known filtering techniques (such as, for example, those in [5,9]).
The device of the filter under consideration is given in Section 2.2, Section 3.1 and Section 4.2.

1.1.3. Pseudo-Inverse Matrices and Related Matters

The known filter models proposed in [1,2,3,6,7,8,10,13,14] use inverse matrices. In the case of grossly corrupted signals or erroneous measurements, those inverse matrices may not exist, and thus those filters cannot be applied. The examples in Section 5 illustrate this case.
The filter proposed here avoids this drawback since its model is based on exploiting pseudo-inverse matrices. As a result, the proposed filter always exists; that is, it processes any kind of noisy signal. Exploiting pseudo-inverse matrices is not straightforward and requires the development of a specific technique; perhaps this is the reason pseudo-inverse matrices were not used in the above-mentioned references. Here, the extension of the proposed filtering techniques to the implementation with pseudo-inverse matrices is performed on the basis of the theory presented in [5].

1.1.4. Computational Load

Let $x = [x_1, \dots, x_m]^T \in K_X$ and $y = [y_1, \dots, y_n]^T \in K_Y$. An implementation of the filters provided in [1,2,3,4,5,6,7,8,10,13,14] leads to the computation of an $n \times n$ inverse or pseudo-inverse matrix for each pair of signals $x \in K_X$ and $y \in K_Y$, and of a matrix product of $m \times n$ and $n \times n$ matrices. Thus, for the processing of the arrays $K_X$ and $K_Y$, the filters in [1,2,3,4,5,6,7,8,10,13,14] require $O(2mn^2N) + O(26n^3N)$ flops [18].
The arrays $K_X$ and $K_Y$ can also be represented by vectors, $\chi$ and $\gamma$, with $mN$ and $nN$ components, respectively. Then an application of the techniques in [1,2,3,4,5,6,7,8,10,13,14] requires $O(2mn^2N^2)$ and $O(26n^3N^3)$ flops [18].
Thus, for the case when N is large, the computational work associated with the methods in [1,2,3,4,5,6,7,8,10,13,14] becomes, in practice, unacceptable.
The computational load of the filter $F^{(p-1)}$ considered below is substantially smaller. This is implied by its ability to use only $p$ pseudo-inverse matrices, where $p$ is much less than the cardinality $N$. For this reason, $F^{(p-1)}$ requires only $O(2mn^2p) + O(26n^3p)$ flops. Clearly, this is less than that required by the filters mentioned above. This issue is illustrated in Section 5.
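The flop-count comparison above can be verified with a short calculation. The following minimal sketch (the helper function names are ours, and the dimensions are those used later in Section 5) shows that the two costs differ exactly by the factor $N/p$:

```python
# Flop counts quoted above (illustrative helper functions, not from the paper):
# per-signal filters:  O(2*m*n^2*N) + O(26*n^3*N)
# proposed filter:     O(2*m*n^2*p) + O(26*n^3*p), with p << N
def flops_known(m, n, N):
    return 2 * m * n**2 * N + 26 * n**3 * N

def flops_piecewise(m, n, p):
    return 2 * m * n**2 * p + 26 * n**3 * p

m = n = 116        # signal dimensions used in Section 5
N, p = 141, 5      # set cardinality and number of interpolation time-points
print(flops_known(m, n, N) / flops_piecewise(m, n, p))  # ratio is N/p = 28.2
```

Since both totals share the common factor $2mn^2 + 26n^3$, the speed-up is exactly $N/p$, independent of the signal dimensions.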

1.2. Relevant Works

Some particular filtering techniques relevant to the proposed method are discussed below.

1.2.1. Generic Optimal Linear (GOL) Filter [5]

The generic optimal linear (GOL) filter in [5] is a generalization of the Wiener filter to the case when the covariance matrix is not invertible and the observable signal is arbitrarily noisy (i.e., when, in particular, noise is not necessarily additive and Gaussian). The GOL filter has been developed for processing an individual stochastic signal. Some ideas from [5] are used in the proof of Theorem 1 below.

1.2.2. Simplicial Canonical Piecewise Linear Filter [13]

A complex Wiener adaptive filter was developed in [13] from the two-dimensional complex-valued simplicial canonical piecewise linear filter [19]. The filter in [13] was developed for the processing of an individual stochastic signal and can be exploited when the reference signal is known and a ‘covariance-like’ matrix is invertible. The latter precludes an application to the signal types considered, for example, in Section 5: the matrices used in [13] are not invertible for signals such as those in Section 5. Similarly, the filters studied in [8,10] were developed for the processing of a single signal when the covariance matrices are invertible.
For the filter proposed here, these restrictions are removed.

1.2.3. Adaptive Piecewise Linear Filter [12]

The piecewise linear filter in [12] was proposed for the denoising of a fixed image (given by a matrix) corrupted by additive Gaussian noise. That is, the method involved a non-stochastic reference signal and required its knowledge. No theoretical justification for the filter was given in [12].

1.2.4. Averaging Polynomial Filter [15,16]

The averaging polynomial filter proposed in [15,16] was developed for the purpose of processing infinite signal sets. The filter was based on an argument involving the ‘averaging’ over sets of signals under consideration. This device allows one to determine a single filter for the processing of infinite signal sets. At the same time, it leads to an increase in the associated error when signals differ considerably from each other. This effect is illustrated in Section 5 below.

1.2.5. Other Relevant Filters

The technique developed in [11] is an extension of the GOL filter to the problem constrained with respect to the filter rank. It concerns data compression.
The methods in [6,7,20,21] have been developed for deterministic signals. Motivated by the results achieved in [20,21], adaptive filters were elaborated in [22]. A theoretical basis for the device proposed in [20,21] is provided in [23].
We note that the idea of piecewise linear filtering has been used in the literature in several very different conceptual frameworks, despite exploiting some very similar terms (as in [12,13,19,20,21,22,23,24,25,26]). At the same time, a common feature of those techniques is that they were developed for the processing of a single signal, not of large signal sets as in this paper. In particular, piecewise linear filters in [24] have been obtained by arranging linear filters and thresholds in a tree structure. Piecewise linear filters discussed in [25] were developed using so-called threshold decomposition, which is a segmentation operator exploited to split a signal into a set of multilevel components. Filter design methods for piecewise linear systems proposed in [26] were based on a piecewise Lyapunov function.

1.3. Difficulties Associated with the Known Filtering Techniques

Basic difficulties associated with applying the known filtering techniques to the case under consideration (i.e., to processing of large signal sets, K X and K Y ) are as follows:
(i)
They require information on each reference signal (in the form of a sample, for example).
(ii)
Matrices used in the known filters can be non-invertible (as in the simulations considered below in Section 5) and then the filter does not exist.
(iii)
The associated computational work may require a very long time. For example, in the simulations (Section 5), MATLAB ran out of memory when computing the GOL filter [5] in the case when each of the sets $K_X$ and $K_Y$ was represented by a long vector (this option has been discussed in Section 1.1.4 above). The PC used for the simulations was a Dell Latitude 7400 with an Intel Core i5 (8th Gen) 8265U/1.6 GHz CPU and 8 GB RAM.

1.4. Differences from the Known Filtering Techniques

The differences from the known filtering techniques discussed above are as follows:
(i)
We consider a single filter that processes arbitrarily large input-output sets of stochastic signal vectors. The known filters [1,2,3,4,5,6,7,8,9,10,11,12,13,14,19,20,21,22,23,24,25,26] have been developed for the processing of an individual signal vector only. In the case of their application to arbitrarily large signal sets, they imply difficulties described in Section 1.1 and Section 1.3 above.
(ii)
As a result, our piecewise linear filter model (Section 3), the statement of the problem (Section 3.3 below), and consequently, the device of its solution (Section 4 below) are different from those considered in [12,13,19,20,21,22,23,24,25,26]. In this regard, see also Section 1.2.5.
(iii)
The above naturally leads to a new structure of the filter (presented in Section 3.4 and Section 4.2 below), which is very different from the known ones.

1.5. Contribution

In general, for the processing of large data sets, the proposed filter allows us to achieve better results in comparison with the known techniques in [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,19,20,21,22,23,24,25,26]. In particular, it allows us to achieve the following:
(i)
Achieve a desired accuracy in signal estimation. This means that any desired accuracy can be achieved theoretically, as is shown in Section 4.4 below. In practice, of course, the accuracy is increased to a prescribed reasonable level.
(ii)
Exploit prior information on only a few reference signals, $p$, from the set $K_X$ that contains $N \gg p$ signals or even an infinite number of signals.
(iii)
Find a single filter to process any signal from the arbitrarily large signal set.
(iv)
Determine the filter in terms of pseudo-inverse matrices so that the filter always exists.
(v)
Decrease the computational load compared to the related known techniques.

2. Some Preliminaries

2.1. Notation

The signal sets we consider are, in fact, special representations of time series.
Let $(\Omega, \Sigma, \mu)$ be a probability space (here, $\Omega = \{\omega\}$ is the set of outcomes, $\Sigma$ is a $\sigma$-field of measurable subsets of $\Omega$, and $\mu: \Sigma \to [0,1]$ is an associated probability measure on $\Sigma$; in particular, $\mu(\Omega) = 1$), and let $K_X$ and $K_Y$ be arbitrarily large sets of signals such that
$$K_X = \{x(t,\cdot) \in L^2(\Omega, \mathbb{R}^m) \mid t \in T\} \quad \text{and} \quad K_Y = \{y(t,\cdot) \in L^2(\Omega, \mathbb{R}^n) \mid t \in T\},$$
where $T := [a,b] \subset \mathbb{R}$. We interpret $x(t,\cdot)$ as a reference signal and $y(t,\cdot)$ as an observable signal, the input to the filter $F$ studied below. Intuitively, $y$ can be regarded as a noise-corrupted version of $x$. For example, $y$ can be interpreted as $y = x + n$, where $n$ is white noise. In this paper, we do not restrict ourselves to this simplest version of $y$ and assume that the dependence of $y$ on $x$ and $n$ is arbitrary. The variable $t \in T \subset \mathbb{R}$ represents time. More generally, $T$ can be considered as a set of parameter vectors $\alpha = (\alpha^{(1)}, \dots, \alpha^{(q)})^T \in C_q \subset \mathbb{R}^q$, where $C_q$ is a $q$-dimensional cube, i.e., $y = y(\alpha,\cdot)$ and $x = x(\alpha,\cdot)$. One coordinate of $\alpha$, say $\alpha^{(1)}$, could be interpreted as time. Then, for example, the stochastic signal $x(t,\cdot)$ can be interpreted as an arbitrary stationary time series.
Let $\{t_k\}_1^p \subset T$ be a sequence of fixed time-points such that
$$a = t_1 < \cdots < t_p = b. \tag{1}$$
Because of Equation (1), the sets $K_Y$ and $K_X$ are divided into ‘smaller’ subsets $K_{X,1}, \dots, K_{X,p-1}$ and $K_{Y,1}, \dots, K_{Y,p-1}$, respectively, so that, for each $j = 1, \dots, p-1$,
$$K_{X,j} = \{x(t,\cdot) \mid t_j \le t \le t_{j+1}\} \quad \text{and} \quad K_{Y,j} = \{y(t,\cdot) \mid t_j \le t \le t_{j+1}\}.$$
Therefore, $K_Y$ and $K_X$ can now be represented as
$$K_X = \bigcup_{j=1}^{p-1} K_{X,j} \quad \text{and} \quad K_Y = \bigcup_{j=1}^{p-1} K_{Y,j}.$$
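The decomposition above can be illustrated numerically. In the following minimal sketch (the helper and variable names are ours, for illustration), each time point is assigned to the sub-interval $[t_j, t_{j+1}]$ that contains it, which is precisely what induces the partition of $K_X$ and $K_Y$ into the subsets $K_{X,j}$ and $K_{Y,j}$:

```python
# Illustrative sketch of the partition of [a, b] induced by the knots t_1 < ... < t_p:
# each time point falls into exactly one sub-interval [t_j, t_{j+1}] (the last knot
# is assigned to the final sub-interval), mirroring K_Y = union of the K_{Y,j}.
import numpy as np

def subinterval_index(t, knots):
    """Return j in {0, ..., p-2} with knots[j] <= t <= knots[j+1] (0-based)."""
    j = int(np.searchsorted(knots, t, side="right")) - 1
    return min(max(j, 0), len(knots) - 2)

knots = [0.0, 1.0, 2.0, 3.0]           # p = 4 knots -> p - 1 = 3 sub-intervals
print([subinterval_index(t, knots) for t in (0.0, 0.5, 1.0, 2.5, 3.0)])
# [0, 0, 1, 2, 2]
```

Signals sharing an index belong to the same subset $K_{Y,j}$ and will later be processed by the same sub-filter $F_j$.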

2.2. Brief Description of the Method

We wish to determine a single filter that estimates $x \in K_X$ from $y \in K_Y$ with a controlled associated error. The array $K_Y$ can be finite or infinite.
To this end, the proposed filter $F^{(p-1)}: K_Y \to K_X$ is represented by a sum of sub-filters $F_1, \dots, F_{p-1}$ (see Equations (4) and (5) below). The sub-filter $F_j$ transforms signals of the subset $K_{Y,j}$ of the array $K_Y$ to signals of the subset $K_{X,j}$ of the array $K_X$, i.e., $F_j: K_{Y,j} \to K_{X,j}$. The prime idea is to determine $F_j$ separately, for each $j = 1, \dots, p-1$, from an associated minimization problem (Equation (11)) (see Section 3.4 and Section 4.2 below). This procedure provides an estimate $F_j[y(t,\cdot)]$ that interpolates $x(t,\cdot) \in K_{X,j}$ at $t = t_j$ and $t = t_{j+1}$. It is natural to expect that the processing of a ‘smaller’ subset, $K_{Y,j}$, may lead to a smaller error than that associated with the processing of the whole array $K_Y$.
Therefore, the advantages of $F^{(p-1)}[y(t,\cdot)]$ are similar to those that follow from an application of the piecewise interpolation procedure, such as, for example, the high accuracy of estimation.
Section 4.4 confirms this observation. Importantly, it is shown in Section 4.5 and Section 5 that the proposed technique avoids the bottlenecks of the known methods discussed in Section 1.3 above.

3. Description of the Problem

3.1. Piecewise Linear Filter Model

Let $F^{(p-1)}: K_Y \to K_X$ be a filter such that, for each $t \in T$,
$$F^{(p-1)}[y(t,\cdot)] = \sum_{j=1}^{p-1} \delta_j F_j[y(t,\cdot)], \tag{4}$$
where
$$F_j[y(t,\cdot)] = \alpha_j + B_j[y(t,\cdot)] \quad \text{and} \quad \delta_j = \begin{cases} 1, & \text{if } t_j \le t \le t_{j+1},\\ 0, & \text{otherwise}. \end{cases} \tag{5}$$
Here, $F_j$ is a sub-filter defined for $t_j \le t \le t_{j+1}$. In Equation (5), $\alpha_j = [\alpha_j^{(1)}, \dots, \alpha_j^{(m)}]^T \in \mathbb{R}^m$ and $B_j: L^2(\Omega, \mathbb{R}^n) \to L^2(\Omega, \mathbb{R}^m)$ is a linear operator given by a matrix $B_j \in \mathbb{R}^{m \times n}$, so that
$$[B_j(y)](t,\omega) = B_j[y(t,\omega)].$$
Thus, $F_j$ is defined by an operator $F_j: \mathbb{R}^n \to \mathbb{R}^m$ such that
$$F_j[y(t,\omega)] = \alpha_j + B_j[y(t,\omega)].$$
The filter $F^{(p-1)}$ defined by Equations (4)–(6) is called the piecewise filter. Hereinafter, we use a non-curly symbol to denote both an operator and its associated matrix (e.g., the operator $F_j: L^2(\Omega, \mathbb{R}^n) \to L^2(\Omega, \mathbb{R}^m)$ and the associated matrix $F_j \in \mathbb{R}^{m \times n}$ are both denoted by $F_j$).
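As a minimal numerical sketch of the sub-filter model in Equation (5) (the names and dimensions below are ours, for illustration only), $F_j$ acts on each realization $y(t,\omega) \in \mathbb{R}^n$ as the affine map $\alpha_j + B_j\, y(t,\omega)$, applied column-wise when realizations are stored as columns of a matrix:

```python
# Affine sub-filter F_j[y] = alpha_j + B_j y, applied to each realization
# (column) of an n x q matrix Y of observed samples. Illustrative values only.
import numpy as np

def apply_subfilter(alpha_j, B_j, Y):
    """Return the m x q matrix whose columns are alpha_j + B_j @ (column of Y)."""
    return alpha_j[:, None] + B_j @ Y

m, n, q = 2, 3, 4
alpha_j = np.zeros(m)          # shift term of the sub-filter
B_j = np.ones((m, n))          # matrix of the linear term
Y = np.ones((n, q))            # q realizations of an n-dimensional observation
print(apply_subfilter(alpha_j, B_j, Y))   # every entry equals 3.0
```

The indicator $\delta_j$ in Equation (5) simply selects which single sub-filter is active for a given $t$, so at most one term of the sum in Equation (4) is nonzero.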

3.2. Assumptions

In the known approaches related to the filtering of stochastic signals (e.g., see [1,2,3,4,5,6,7,8,9,10,11,13,14,15,16]), it is assumed that the covariance matrices formed from the reference signal and the observed signal are known or can be estimated.
The assumption used here is similar. The covariance matrices that are assumed to be known, or that can be estimated, are formed from the selected signal pairs $\{x(t_j,\cdot), y(t_j,\cdot)\}$, with $j = 1, \dots, p$ and $p$ a small number, $p \ll N$, where $N$ is the number of signals in $K_X$ or $K_Y$. It is worthwhile to note that it is not assumed that the covariance matrices are known for each signal pair $\{x(t,\cdot), y(t,\cdot)\}$ from $K_X \times K_Y$ with $t \in [a,b]$.

3.3. The Problem

In Equations (4)–(6), the parameters of the filter $F^{(p-1)}$, i.e., the vector $\alpha_j$ and the matrix $B_j$, for $j = 1, \dots, p-1$, are unknown. Therefore, under the assumptions described in Section 3.2, the problem is to determine $\alpha_j$ and $B_j$, for $j = 1, \dots, p-1$. The related problem is to estimate the error associated with the filter $F^{(p-1)}$.
Solutions to both problems are given in Section 4.2 and Section 4.4, respectively. In particular, in Section 3.4 below, the interpolation conditions in Equations (8) and (11) are introduced that lead to the determination of $\alpha_j$ and $B_j$.

3.4. Interpolation Conditions

Let us denote
$$\|x(t_j,\cdot)\|_\Omega^2 = \int_\Omega \|x(t_j,\omega)\|_2^2 \, d\mu(\omega),$$
where $\|x(t_j,\omega)\|_2$ is the Euclidean norm of $x(t_j,\omega) \in \mathbb{R}^m$.
For $t = t_1$, let $\hat{x}(t_1,\cdot)$ be an estimate of $x(t_1,\cdot)$ determined by the known methods [1,2,3,4,5,6,7,8,9,10,11,13,14,15,16]. This is the initial condition of the proposed technique.
For $j = 1, \dots, p-1$, each sub-filter $F_j$ in Equations (5) and (6) is defined so that $\alpha_j$ and $B_j$ satisfy the following conditions:
Sub-filter $F_1$: For $j = 1$, $\alpha_1$ and $B_1$ solve
$$\hat{x}(t_1,\cdot) = \alpha_1 + B_1[y(t_1,\cdot)] \quad \text{and} \quad \min_{B_1} \big\| [x(t_2,\cdot) - \alpha_1] - B_1[y(t_2,\cdot)] \big\|_\Omega^2, \tag{8}$$
respectively. Then the estimate $\hat{x}(t,\cdot)$ of $x(t,\cdot)$, for $t \in [t_1, t_2]$, is determined as
$$\hat{x}(t,\cdot) = F_1[y(t,\cdot)] = \alpha_1 + B_1[y(t,\cdot)] = \hat{x}(t_1,\cdot) + B_1[y(t,\cdot) - y(t_1,\cdot)],$$
where $\alpha_1$ and $B_1$ satisfy Equation (8). In particular, $\alpha_1 = \hat{x}(t_1,\cdot) - B_1[y(t_1,\cdot)]$ and
$$\hat{x}(t_2,\cdot) = F_1[y(t_2,\cdot)].$$
Extending this procedure up to $j = k-1$, where $k = 3, \dots, p$, we set the following: let $\hat{x}(t_{k-1},\cdot)$ be the estimate of $x(t_{k-1},\cdot)$ defined by the preceding steps as
$$\hat{x}(t_{k-1},\cdot) = F_{k-2}[y(t_{k-1},\cdot)].$$
Then sub-filter $F_{k-1}$ is defined as follows:
Sub-filter $F_{k-1}$: For $j = k-1$, $\alpha_{k-1}$ and $B_{k-1}$ solve
$$\hat{x}(t_{k-1},\cdot) = \alpha_{k-1} + B_{k-1}[y(t_{k-1},\cdot)] \quad \text{and} \quad \min_{B_{k-1}} \big\| [x(t_k,\cdot) - \alpha_{k-1}] - B_{k-1}[y(t_k,\cdot)] \big\|_\Omega^2, \tag{11}$$
respectively. Then the estimate $\hat{x}(t,\cdot)$ of $x(t,\cdot)$, for $t \in [t_{k-1}, t_k]$, is determined as
$$\hat{x}(t,\cdot) = F_{k-1}[y(t,\cdot)] = \alpha_{k-1} + B_{k-1}[y(t,\cdot)] = \hat{x}(t_{k-1},\cdot) + B_{k-1}[y(t,\cdot) - y(t_{k-1},\cdot)].$$
Equations (8) and (11) are motivated by the device of piecewise function interpolation and associated advantages [17].
The filter $F^{(p-1)}$ of the form of Equations (4) and (5), with $\alpha_j$ and $B_j$ satisfying Equations (8) and (11), is called the piecewise linear interpolation filter. The pair of signals $\{x(t_k,\cdot), y(t_k,\cdot)\}$ associated with the time $t_k$ defined by Equation (1) is called an interpolation pair.
Remark 1.
In general, the above procedure might be sensitive to the choice of the initial estimate $\hat{x}(t_1,\cdot)$ determined by the known methods [1,2,3,4,5,6,7,8,9,10,11,13,14,15,16]. If the initial estimate is poor, then the associated error increases.

4. Main Results

4.1. General Device

In accordance with the scheme presented in Section 3.1 and Section 3.4 above, the estimate of the reference signal $x(t,\cdot)$, for any $t \in T = [a,b]$, by the piecewise linear interpolation filter $F^{(p-1)}$ is given by
$$\hat{x}(t,\cdot) = F^{(p-1)}[y(t,\cdot)] = \sum_{j=1}^{p-1} \delta_j F_j[y(t,\cdot)],$$
where, for each $j = 1, \dots, p-1$, the sub-filter $F_j$ is given by Equation (5) and is defined by the interpolation conditions of Equations (8) and (11).
Below, we show how to determine F j to satisfy the conditions of Equations (8) and (11).

4.2. Filter Model

We write $z(t_j, t_{j+1}, \cdot) = [z^{(1)}(t_j, t_{j+1}, \cdot), \dots, z^{(m)}(t_j, t_{j+1}, \cdot)]^T$ and $w(t_j, t_{j+1}, \cdot) = [w^{(1)}(t_j, t_{j+1}, \cdot), \dots, w^{(n)}(t_j, t_{j+1}, \cdot)]^T$, where
$$z(t_j, t_{j+1}, \cdot) = x(t_{j+1}, \cdot) - \hat{x}(t_j, \cdot) \quad \text{and} \quad w(t_j, t_{j+1}, \cdot) = y(t_{j+1}, \cdot) - y(t_j, \cdot). \tag{14}$$
Here, $z^{(i)}(t_j, t_{j+1}, \cdot) \in L^2(\Omega, \mathbb{R})$, for $i = 1, \dots, m$, and $w^{(k)}(t_j, t_{j+1}, \cdot) \in L^2(\Omega, \mathbb{R})$, for $k = 1, \dots, n$, are stochastic variables.
The associated covariance matrix is defined by
$$E_{z_j w_j} = \Big[ \big\langle z^{(i)}(t_j, t_{j+1}, \cdot),\, w^{(k)}(t_j, t_{j+1}, \cdot) \big\rangle \Big]_{i,k=1}^{m,n}, \tag{15}$$
where $\langle z^{(i)}(t_j, t_{j+1}, \cdot), w^{(k)}(t_j, t_{j+1}, \cdot) \rangle = \int_\Omega z^{(i)}(t_j, t_{j+1}, \omega)\, w^{(k)}(t_j, t_{j+1}, \omega)\, d\mu(\omega)$.
The Moore–Penrose generalized inverse of a matrix $M$ is denoted by $M^{\dagger}$.
The main results are presented below.
Theorem 1.
Let $K_X = \{x(t,\cdot) \in L^2(\Omega, \mathbb{R}^m) \mid t \in T = [a,b]\}$ and $K_Y = \{y(t,\cdot) \in L^2(\Omega, \mathbb{R}^n) \mid t \in T = [a,b]\}$ be sets of reference signals and observed signals, respectively. Let $t_j \in [a,b]$, for $j = 1, \dots, p$, be such that
$$a = t_1 < \cdots < t_p = b.$$
For $t = t_1$, let $\hat{x}(t_1,\cdot)$ be a known estimate of $x(t_1,\cdot)$. As has been mentioned in Section 3.4, $\hat{x}(t_1,\cdot)$ can be determined by the known methods. Then, for any $t \in [a,b]$, the proposed piecewise linear interpolation filter $F^{(p-1)}: L^2(\Omega, \mathbb{R}^n) \to L^2(\Omega, \mathbb{R}^m)$ transforms any signal $y(t,\cdot) \in L^2(\Omega, \mathbb{R}^n)$ to an estimate $\hat{x}(t,\cdot)$ of $x(t,\cdot)$ given by
$$\hat{x}(t,\cdot) = F^{(p-1)}[y(t,\cdot)] = \sum_{j=1}^{p-1} \delta_j F_j[y(t,\cdot)], \tag{16}$$
where
$$F_j[y(t,\cdot)] = \hat{x}(t_j,\cdot) + B_j[y(t,\cdot) - y(t_j,\cdot)], \tag{17}$$
$$\hat{x}(t_j,\cdot) = F_{j-1}[y(t_j,\cdot)] \quad (\text{for } j = 2, \dots, p-1), \tag{18}$$
$$B_j = E_{z_j w_j} E_{w_j w_j}^{\dagger} + M_{B_j}\big[I_n - E_{w_j w_j} E_{w_j w_j}^{\dagger}\big], \tag{19}$$
where $I_n$ is the $n \times n$ identity matrix and $M_{B_j}$ is an arbitrary $m \times n$ matrix.
Proof. 
The proof of Theorem 1 is given in Appendix A.    □
It is worthwhile to observe that, due to the arbitrary matrix $M_{B_j}$ in Equation (19), the filter $F^{(p-1)}$ is not unique. In particular, $M_{B_j}$ can be chosen as the zero matrix $O$, similarly to the generic optimal linear filter [5] (which is also not unique for the same reason).
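The matrix $B_j$ of Equation (19) can be computed directly with a numerical pseudo-inverse. The sketch below is our own illustration (the function name is ours; `numpy.linalg.pinv` realizes the Moore–Penrose inverse): by default it uses the minimal-norm choice $M_{B_j} = O$, and it optionally accepts an arbitrary $M_{B_j}$:

```python
# Equation (19): B_j = E_zw E_ww^† + M (I_n - E_ww E_ww^†); M = O gives the
# minimal-norm solution. The pseudo-inverse always exists, so B_j always exists.
import numpy as np

def compute_Bj(E_zw, E_ww, M=None):
    E_ww_pinv = np.linalg.pinv(E_ww)            # Moore-Penrose pseudo-inverse
    B = E_zw @ E_ww_pinv
    if M is not None:                           # optional arbitrary m x n term
        n = E_ww.shape[0]
        B = B + M @ (np.eye(n) - E_ww @ E_ww_pinv)
    return B

# A singular E_ww (no classical inverse exists) is handled without difficulty:
E_ww = np.array([[1.0, 0.0],
                 [0.0, 0.0]])
E_zw = np.array([[2.0, 0.0]])
print(compute_Bj(E_zw, E_ww))   # [[2. 0.]]
```

Note that an inverse-based filter would fail on this $E_{w_j w_j}$, which is precisely the situation described in Section 1.1.3.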

4.3. Numerical Realization of Filter $F^{(p-1)}$ and the Associated Algorithm

4.3.1. Numerical Realization

In practice, the set $T = [a,b]$ (see Section 2.1) is represented by a finite set $\{\tau_k\}_{k=1}^N$, i.e., $[a,b] = [\tau_1, \tau_2, \dots, \tau_N]$, where $a \le \tau_1 < \tau_2 < \cdots < \tau_N \le b$.
For $k = 1, \dots, N$, the estimate $\hat{x}(\tau_k,\cdot)$ of $x(\tau_k,\cdot)$ and the observed signal $y(\tau_k,\cdot)$ are represented by the $m \times q$ and $n \times q$ matrices
$$\hat{X}^{(k)} = [\hat{x}(\tau_k, \omega_1), \dots, \hat{x}(\tau_k, \omega_q)] \quad \text{and} \quad Y^{(k)} = [y(\tau_k, \omega_1), \dots, y(\tau_k, \omega_q)].$$
The sequence of fixed time-points $\{t_k\}_1^p \subset [a,b]$ introduced in Equation (1) is such that
$$\tau_1 = t_1 < \cdots < t_p = \tau_N, \tag{21}$$
where $t_1 = \tau_{n_0}$, $t_2 = \tau_{n_0 + n_1}$, ..., $t_p = \tau_{n_0 + n_1 + \cdots + n_{p-1}}$, and where $n_0 = 1$ and $n_1, \dots, n_{p-1}$ are positive integers such that $N = n_0 + n_1 + \cdots + n_{p-1}$.
For $j = 1, \dots, p$, the vectors $\hat{x}(t_j,\cdot)$ and $y(t_j,\cdot)$ associated with $t_j$ in Equation (21) are represented, respectively, by
$$\hat{X}_j = [\hat{x}(t_j, \omega_1), \dots, \hat{x}(t_j, \omega_q)] \quad \text{and} \quad Y_j = [y(t_j, \omega_1), \dots, y(t_j, \omega_q)].$$

4.3.2. Algorithm

As has been mentioned in Section 3.4, it is supposed that, for $t = t_1$, an estimate $\hat{X}_1$ of $X_1$ is known and can be determined by the known methods. This is the initial condition of the proposed technique.
On the basis of the results obtained in Section 3.4 and Section 4.2, the algorithm realizing the proposed filter consists of the following steps (Algorithm 1). For $j = 1, \dots, p$, we write $N_j = n_0 + n_1 + \cdots + n_{j-1}$.
Algorithm 1 Computation of $\hat{X}^{(2)}, \hat{X}^{(3)}, \dots, \hat{X}^{(N)}$ given by Theorem 1
  • Initial parameters: $Y^{(1)}, \dots, Y^{(N)}$, $\{t_j\}_{j=1}^p$ (see Equation (21)), $\{E_{z_j w_j}\}_{j=1}^p$, $\{E_{w_j w_j}\}_{j=1}^p$ (see Equations (14) and (15)), $\hat{X}_1$, $n_0 = 1$, $N_0 = 0$, and $M_{B_j} = O$, for $j = 1, \dots, p-1$.
  • The condition $M_{B_j} = O$, for $j = 1, \dots, p-1$, is motivated by the desire to consider the minimal-norm representation of $B_j$, for $j = 1, \dots, p-1$, in Equation (19). Possible ways to obtain estimates of $E_{z_j w_j}$ and $E_{w_j w_j}$ are discussed below in Section 4.5.
  • Final parameters: $\hat{X}^{(2)}, \hat{X}^{(3)}, \dots, \hat{X}^{(N)}$.
  • for $j \leftarrow 1, \dots, p$ do
  •       $B_j \leftarrow E_{z_j w_j} E_{w_j w_j}^{\dagger}$
  •       for $k \leftarrow N_{j-1} + 1, \dots, N_j$ do
  •             $\hat{X}^{(k)} \leftarrow \hat{X}_j + B_j (Y^{(k)} - Y_j)$
  •       end for
  •       $\hat{X}_{j+1} \leftarrow \hat{X}^{(N_j)}$
  • end for
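The loop above can be sketched in a few lines of code. The following is our own illustration under the assumptions of Algorithm 1 (function and variable names are ours; `knots[j]` plays the role of the cumulative index $N_j$, here 0-based, and the covariance matrices are assumed to be given):

```python
# Sketch of Algorithm 1 (illustrative names): given observations Y^(1..N) as
# n x q matrices, knot indices (0-based here), per-interval covariance matrices
# E_zw[j], E_ww[j], and the initial estimate X1_hat, compute all estimates X̂^(k).
import numpy as np

def algorithm1(Ys, knots, E_zw, E_ww, X1_hat):
    X_hat = [None] * len(Ys)
    X_hat[knots[0]] = X1_hat
    Xj_hat, Yj = X1_hat, Ys[knots[0]]           # interpolation pair at t_1
    for j in range(len(knots) - 1):
        Bj = E_zw[j] @ np.linalg.pinv(E_ww[j])  # Eq. (19) with M_{B_j} = O
        for k in range(knots[j] + 1, knots[j + 1] + 1):
            X_hat[k] = Xj_hat + Bj @ (Ys[k] - Yj)        # Eq. (17)
        Xj_hat, Yj = X_hat[knots[j + 1]], Ys[knots[j + 1]]  # chain: Eq. (18)
    return X_hat

# Noise-free sanity check: with identity covariances (so B_j = I) and
# X1_hat = Y^(1), the estimates reproduce the observations exactly.
Ys = [np.array([[float(k)], [1.0]]) for k in range(6)]
est = algorithm1(Ys, [0, 2, 5], [np.eye(2)] * 2, [np.eye(2)] * 2, Ys[0])
```

The chaining step mirrors Equation (18): the estimate at the last index of a sub-interval becomes the interpolation value $\hat{X}_{j+1}$ for the next sub-filter.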

4.4. Error Analysis

It is natural to expect that the error associated with the piecewise interpolating filter $F^{(p-1)}$ decreases when $\max_{j=1,\dots,p-1} \Delta t_j$ decreases. Below, in Theorem 3, we justify that this observation is true. To this end, first, in the following Theorem 2, we establish an estimate of the error associated with the filter $F^{(p-1)}$.
Let us introduce the norm
$$\|x(t,\cdot)\|_{T,\Omega}^2 = \frac{1}{b-a} \int_T \|x(t,\cdot)\|_\Omega^2 \, dt.$$
We also write $\|x(t,\omega)\|_{T,\Omega}^2 = \|x(t,\cdot)\|_{T,\Omega}^2$.
Let us suppose that $x(\cdot,\omega)$ and $y(\cdot,\omega)$ are Lipschitz continuous signals, i.e., that there exist real non-negative constants $\lambda_j$ and $\gamma_j$, with $j = 1, \dots, p$, such that, for $t \in [t_j, t_{j+1}]$,
$$\|x(t,\omega) - x(t_j,\omega)\|_{T,\Omega} \le \lambda_j \Delta t_j \quad \text{and} \quad \|y(t,\omega) - y(t_{j+1},\omega)\|_{T,\Omega} \le \gamma_j \Delta t_j, \tag{23}$$
where $\Delta t_j = |t_{j+1} - t_j|$.
Theorem 2.
Under the conditions of Equation (23), the error associated with the piecewise interpolation filter, $\|x(t,\omega) - F^{(p-1)}[y(t,\omega)]\|_{T,\Omega}$, is estimated as follows:
$$\|x(t,\omega) - F^{(p-1)}[y(t,\omega)]\|_{T,\Omega} \le \max_{j=1,\dots,p-1} \Big\{ \big(\lambda_j + \gamma_j \|B_j\|_2\big)\, \Delta t_j + \big\|E_{z_j z_j}^{1/2}\big\|_2^2 - \big\|E_{z_j w_j}\big(E_{w_j w_j}^{1/2}\big)^{\dagger}\big\|_2^2 \Big\}. \tag{24}$$
Proof. 
The proof of Theorem 2 is given in Appendix A. □
Further, to show that the error of the reference signal estimate tends to zero, we need to assume that, for $t \in [t_1, t_2]$, the known estimate $\hat{x}(t_1,\omega)$ differs from $x(t,\omega)$ by a value of the order of $\Delta t_1$, i.e., that, for some constant $c_1 \ge 0$,
$$\|x(t,\omega) - \hat{x}(t_1,\omega)\|_\Omega \le c_1 \Delta t_1, \quad \text{for } t \in [t_1, t_2]. \tag{25}$$
Theorem 3.
Let the conditions of Equations (23) and (25) hold. Then the error $\|x(t,\omega) - F^{(p-1)}[y(t,\omega)]\|_{T,\Omega}$ associated with the piecewise interpolating filter $F^{(p-1)}$ decreases in the following sense:
$$\|x(t,\omega) - F^{(p-1)}[y(t,\omega)]\|_{T,\Omega} \to 0 \quad \text{as} \quad \max_{j=1,\dots,p-1} \Delta t_j \to 0 \ \ \text{and} \ \ p \to \infty.$$
Proof. 
The proof of Theorem 3 is given in Appendix A. □
Remark 2.
We would like to emphasize that the statement of Theorem 3 is fulfilled only under Equations (23) and (25). At the same time, Equations (23) and (25) are not restrictive from a practical point of view. Equation (23) holds for Lipschitz continuous signals $x$ and $y$, i.e., for a very wide class of signals. Equation (25) is achieved by choosing an appropriate known method (e.g., see [1,2,3,4,5,6,7,8,9,10,11,13,14,15,16]) to find the estimate $\hat{x}(t_1,\omega)$ used in the proposed filter $F^{(p-1)}$ (see Equation (8) and Theorem 1).

4.5. Estimation of Covariance Matrices in Equation (19)

The matrix $E_{z_j w_j}$ used in Equation (19), for $j = 1, \dots, p$, can be estimated as follows:
  • A popular method of estimating $E_{z_j w_j}$ is provided, for example, in [27]; it is based on the use of samples of $z_j$ and $w_j$, for $j = 1, \dots, p$.
  • In the case of incomplete observations, the methods proposed in [28,29] can be used.
  • Let $E_{\hat{z}_j w_j}$ be the matrix obtained from the matrix $E_{z_j w_j}$ by replacing the term $x(t_{j+1},\cdot)$ with $\hat{x}(t,\cdot)$, where $t \in [t_{j-1}, t_j]$. Since $\hat{x}(t,\cdot)$ with $t \in [t_{j-1}, t_j]$ is known, the matrix $E_{\hat{z}_j w_j}$ can be considered as an estimate of $E_{z_j w_j}$.
  • In the important case of additive noise, $E_{z_j w_j}$ can be represented in explicit form. Indeed, if
    $$y(t,\cdot) = x(t,\cdot) + \xi(t,\cdot),$$
    where $\xi(t,\cdot) \in L^2(\Omega, \mathbb{R}^m)$ is a random noise, then $z(t_j, t_{j+1}, \cdot) = y(t_{j+1},\cdot) - \xi(t_{j+1},\cdot) - \hat{x}(t_j,\cdot)$ and the matrix $E_{z_j w_j}$ can be represented as follows:
    $$E_{z_j w_j} = E_{(y_{j+1} - \xi_{j+1})(y_{j+1} - y_j)} - E_{\hat{x}_j (y_{j+1} - y_j)}. \tag{27}$$
We note that the RHS of Equation (27) depends only on the observed signals $y(t_j,\cdot)$ and $y(t_{j+1},\cdot)$, the estimated signal $\hat{x}(t_j,\cdot)$, and the noise $\xi(t_{j+1},\cdot)$, not on the reference signal $x(t_{j+1},\cdot)$. In particular, in Equation (27), the term $E_{\xi_{j+1}(y_{j+1} - y_j)}$ can be estimated as $\pm\big(E[\xi_{j+1}^2]\big)^{1/2}\big(E[(y_{j+1} - y_j)^2]\big)^{1/2}$, where $E[\xi_{j+1}^2] = \int_\Omega [\xi(t_{j+1},\omega)]^2 \, d\mu(\omega)$. This is motivated by Hölder's inequality for integrals. The second term in Equation (27), $E_{\hat{x}_j(y_{j+1} - y_j)}$, can be estimated from samples of $\hat{x}(t_j,\cdot)$ and $y(t_{j+1},\cdot) - y(t_j,\cdot)$.
We also note that the first term on the RHS of Equation (27), $E_{(y_{j+1} - \xi_{j+1})(y_{j+1} - y_j)}$, is similar to the related covariance matrix in the Wiener filtering approach [5].
  • Other known ways to estimate $E_{\xi_{j+1}(y_{j+1} - y_j)}$ can be found in [5], Section 5.3.
In general, the estimation of covariance matrices is a special research topic that is not a subject of this paper. Relevant references can be found, for example, in [5,29].
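For the sample-based option in the first bullet above, a minimal sketch is as follows (our own illustration; the function name and data are ours): with $q$ realizations of $z_j$ and $w_j$ stored as columns, the inner products $\langle z^{(i)}, w^{(k)} \rangle$ of Equation (15) are approximated by sample averages:

```python
# Sample estimate of E_zw = [<z^(i), w^(k)>]: average the products over the
# q available realizations (columns of Z and W). Illustrative data below.
import numpy as np

def sample_cov(Z, W):
    """Z: m x q and W: n x q realizations; returns the m x n estimate of E_zw."""
    q = Z.shape[1]
    return Z @ W.T / q

Z = np.array([[1.0, 1.0],
              [0.0, 2.0]])
print(sample_cov(Z, Z))
# [[1. 1.]
#  [1. 2.]]
```

Note that, as in Equation (15), the products are averaged without centering, since the inner product there is $\int_\Omega z^{(i)} w^{(k)} \, d\mu(\omega)$.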

5. Simulations

5.1. General Consideration

The simulations represented below have been performed in MATLAB R14 (Version 7.0). In accordance with Section 4.3.1, the signal sets $K_X$ and $K_Y$ (see Section 2.1) are given by
$$K_X = \{x(\tau_1,\cdot), x(\tau_2,\cdot), \dots, x(\tau_N,\cdot)\} \quad \text{and} \quad K_Y = \{y(\tau_1,\cdot), y(\tau_2,\cdot), \dots, y(\tau_N,\cdot)\},$$
where, for $k = 1, \dots, N$, $x(\tau_k,\cdot) \in L^2(\Omega, \mathbb{R}^m)$ and $y(\tau_k,\cdot) \in L^2(\Omega, \mathbb{R}^n)$. In many practical problems (arising, for example, in DNA analysis), the number $N$ is quite large; for instance, $N = O(10^4)$.
We set $N = 141$ and $m = n = 116$. Thus, in these simulations, the interval $T = [a,b]$ (see Section 2.1 and Section 4.3.1) is modeled by 141 points $\tau_k$ with $k = 1, \dots, 141$, so that $[a,b] = [\tau_1, \tau_2, \dots, \tau_{141}]$.
The sequence of fixed time-points $\{t_k\}_1^p \subset T$ in Equation (1) is now such that
$$\tau_1 = t_1 < \cdots < t_p = \tau_{141}. \tag{28}$$
Below, in Examples 1–12, four particular choices of the specific interpolation signal pairs $\{x(t_j,\cdot), y(t_j,\cdot)\}_1^p$ (introduced in Section 3.4) are considered, for $p = 5, 8, 15$, and $28$. The points $t_1, \dots, t_p$ are as follows.
For $p = 5, 8, 15$: if $j = 1, \dots, p$, then $t_j = t_j(p) = \tau_1 + (j-1)\Delta_p$, where $\Delta_5 = 35$, $\Delta_8 = 20$, and $\Delta_{15} = 10$.
For $p = 28$: if $j = 1, \dots, p-1$, then $t_j = t_j(p) = \tau_1 + (j-1)\Delta_{28}$, and if $j = p$, then $t_{28} = t_{28}(28) = t_{27} + 6 = 141$, where $\Delta_{28} = 5$.
The signals $x(\tau_k,\cdot)$ and $y(\tau_k,\cdot)$ have been simulated as digital images represented by $116 \times 256$ matrices
$$X^{(k)} = [x(\tau_k, \omega_1), \dots, x(\tau_k, \omega_{256})] \quad \text{and} \quad Y^{(k)} = [y(\tau_k, \omega_1), \dots, y(\tau_k, \omega_{256})],$$
respectively, for $k = 1, \dots, 141$, so that $X^{(k)}$ represents an image that should be estimated from an observed image $Y^{(k)}$. A column of the matrices $X^{(k)}$ and $Y^{(k)}$, i.e., $x(\tau_k, \omega_i) \in \mathbb{R}^{116}$ and $y(\tau_k, \omega_i) \in \mathbb{R}^{116}$, for $i = 1, \dots, 256$, represents a realization of the signals $x(\tau_k,\cdot)$ and $y(\tau_k,\cdot)$, respectively.
Note that X ( 1 ) , , X ( 141 ) did not used in the piecewise linear filter F ( p 1 ) below since they are not supposed to be known. They are represented here for illustration purposes only. In particular, X ( 1 ) , , X ( 141 ) are used to compare their estimates by different filters.
Observed noisy signals Y ( 1 ) , , Y ( 141 ) have been simulated in different forms presented by Equations (40) and (49)–(51) in the Examples 1–12 below. We note that the considered observed signals are grossly corrupted.
To estimate the signals X ( 1 ) , … , X ( 141 ) from the observed signals Y ( 1 ) , … , Y ( 141 ) , the proposed piecewise linear filter F ( p − 1 ) , the generic optimal linear (GOL) filters [5] and the averaging polynomial filter [16] have been used.
The filters proposed in [11,12,13,16] have not been applied here for the reasons discussed in Section 1. In particular, the filter in [13] cannot be applied to the signals represented by Y ( 1 ) , … , Y ( 141 ) in the forms of Equations (40) and (49)–(51) below because the associated inverse matrices used in [13] do not exist.
For the signals under consideration (given by the matrices X ( k ) and Y ( k ) with k = 1 , … , 141 ), the filter F ( p − 1 ) , the generic optimal linear (GOL) filters [5] and the averaging polynomial filter [15,16] are represented as follows:
(i)
Piecewise linear filter F ( p − 1 ) . For j = 1 , … , p , { X j , Y j } designates an interpolation pair defined similarly to that in Section 3.4. Each X j and Y j is associated with t j in Equation (28) so that
X j = [ x ( t j , ω 1 ) , , x ( t j , ω 256 ) ] and Y j = [ y ( t j , ω 1 ) , , y ( t j , ω 256 ) ] .
The estimate X ^ ( k ) of X ( k ) by the filter F ( p − 1 ) is given by
X ^ ( k ) = F ( p − 1 ) [ Y ( k ) ] ,
where, on the basis of Equations (16)–(19) presented in Section 4.2,
F ( p − 1 ) [ Y ( k ) ] = ∑ j = 1 p − 1 δ j F j [ Y ( k ) ] , δ j = 1 if t j ≤ τ k ≤ t j + 1 , and δ j = 0 otherwise ,
F j ( p − 1 ) [ Y ( k ) ] = X ^ j + B j [ Y ( k ) − Y j ] ,
X ^ j = F j − 1 [ Y j ] , X ^ 1 is given ,
B j = E Z j W j ( E W j W j ) † ,
and where E Z j W j and E W j W j are estimates of matrices E z j w j and E w j w j in Equation (19), respectively. In particular, E W j W j can be represented in the form
E W j W j = W j W j T , where W j = Y j + 1 − Y j .
Further, the matrix E Z j W j depends on Z j = X j + 1 − X ^ j , where X j + 1 is unknown. Therefore, the determination of E Z j W j reduces, in fact, to finding an estimate of X j + 1 . Since it is customary to find E Z j W j in terms of signal samples [5], E Z j W j has been presented as
E Z j W j = Z ˜ j W j T , where Z ˜ j = X ˜ j + 1 − X ^ j ,
and X ˜ j + 1 has been constructed from a sample of X j + 1 as follows. The sample of X j + 1 is a 116 × 128 matrix formed by the odd columns of X j + 1 . The estimate of X j + 1 is then chosen as a 116 × 256 matrix X ˜ j + 1 in which each odd column is the corresponding odd column of X j + 1 and each even column is the average of its two adjacent columns. The last column of X ˜ j + 1 is the same as its preceding column.
This way of estimating E Z j W j was chosen for illustration purposes only. Other related methods have been considered in Section 4.5.
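The odd/even-column construction of X ˜ j + 1 described above can be sketched in NumPy as follows; this is a hedged illustration of the stated rule, not the authors' code, and the function name and float dtype are our assumptions. It assumes an even number of columns, as in the 116 × 256 images used here.

```python
import numpy as np

def tilde_estimate(X):
    """Build the estimate X~ of X from its odd (1-indexed) columns only:
    odd columns are copied, each even column is the average of its two
    adjacent odd columns, and the last column repeats its predecessor."""
    _, n = X.shape                # n is assumed even (e.g. n = 256)
    Xt = np.empty_like(X, dtype=float)
    Xt[:, 0::2] = X[:, 0::2]      # odd (1-indexed) columns kept as-is
    # even columns = average of the two neighbouring odd columns
    Xt[:, 1:n - 1:2] = 0.5 * (X[:, 0:n - 2:2] + X[:, 2:n:2])
    Xt[:, -1] = Xt[:, -2]         # last column = its preceding column
    return Xt
```

Only the odd columns of X enter the construction, which is what makes X ˜ j + 1 computable from a half-size sample.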
The errors associated with the filter F ( p − 1 ) are given by
ε k , F ( p − 1 ) = ‖ X ( k ) − F ( p − 1 ) [ Y ( k ) ] ‖ F 2 , for k = 1 , … , 141 .
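One sub-filter step of item (i) can be sketched in NumPy as follows. The function and variable names are ours, and the covariances are the sample products E Z W = Z ˜ W T and E W W = W W T described above; the pseudo-inverse plays the role of ( E W j W j ) † .

```python
import numpy as np

def sub_filter(Xhat_j, Y_j, Xtilde_next, Y_next, Y_obs):
    """One sub-filter F_j: estimate X from Y_obs on the piece [t_j, t_{j+1}]."""
    W = Y_next - Y_j               # W_j = Y_{j+1} - Y_j
    Z = Xtilde_next - Xhat_j       # Z~_j = X~_{j+1} - Xhat_j
    # B_j = E_ZW (E_WW)^dagger; the pseudo-inverse always exists,
    # even when W W^T is singular
    B = (Z @ W.T) @ np.linalg.pinv(W @ W.T)
    return Xhat_j + B @ (Y_obs - Y_j)
```

Note that when the observation coincides with the interpolation signal Y j , the sub-filter returns X ^ j exactly, which is the interpolation property of the scheme.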
(ii)
Generic optimal linear (GOL) filters [5]. To each signal Y ( k ) , an individual GOL filter W k has also been applied, so that W k estimates X ( k ) from Y ( k ) in the form
W k Y ( k ) = E X ( k ) Y ( k ) E Y ( k ) Y ( k ) † Y ( k ) ,
for each k = 1 , … , 141 . Thus, the GOL filters W k require estimates of the 141 matrices E X ( k ) Y ( k ) , for k = 1 , … , 141 .
Similarly to the matrix E Z j W j in the filter F ( p − 1 ) above, the matrix E X ( k ) Y ( k ) has been estimated from a sample X ˜ ( k ) of each X ( k ) , for k = 1 , … , 141 .
One of the advantages of the proposed filter F ( p − 1 ) is that it requires only a small number, p, of samples X ˜ j of X j to be known (where j = 1 , … , p ).
The errors associated with filters W k are given by
ε k , W k = ‖ X ( k ) − W k Y ( k ) ‖ F 2 .
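The per-signal GOL estimate of item (ii) can be sketched as follows; again this is an illustrative NumPy fragment with our names, where the covariance matrices are replaced by the sample products used throughout this section and the pseudo-inverse stands in for E Y ( k ) Y ( k ) † .

```python
import numpy as np

def gol_estimate(Xtilde_k, Y_k):
    """Per-signal GOL estimate: W_k Y(k), with W_k = E_XY (E_YY)^dagger
    built from sample products for this k only."""
    W_k = (Xtilde_k @ Y_k.T) @ np.linalg.pinv(Y_k @ Y_k.T)
    return W_k @ Y_k
```

Unlike the piecewise filter, a separate W k is computed for every one of the N observations, which is what makes the GOL approach more expensive.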
(iii)
Averaging polynomial filters [15,16]. By the methodology in [15], the averaging polynomial filter W is based on the use of the estimates of the covariance matrices, E X Y and E Y Y , in the form
E X Y = 1 / 141 ∑ k = 1 141 X ˜ ( k ) ( Y ( k ) ) T and E Y Y = 1 / 141 ∑ k = 1 141 Y ( k ) ( Y ( k ) ) T .
Then, for each k = 1 , … , 141 , the estimate of X ( k ) is given by
W Y ( k ) = E X Y E Y Y † Y ( k ) .
The errors associated with the filter W are given by
ε k , W = ‖ X ( k ) − W Y ( k ) ‖ F 2 , for k = 1 , … , 141 .
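The averaged construction of item (iii) can be sketched as follows (an illustrative NumPy fragment with our names): the covariances are averaged over all signal pairs, a single matrix W is formed with a pseudo-inverse, and the same W is applied to every observation.

```python
import numpy as np

def averaging_filter(Xtilde_list, Y_list):
    """Single filter W = E_XY (E_YY)^dagger, with E_XY and E_YY averaged
    over all N pairs; the same W is applied to every Y(k)."""
    N = len(Y_list)
    E_XY = sum(X @ Y.T for X, Y in zip(Xtilde_list, Y_list)) / N
    E_YY = sum(Y @ Y.T for Y in Y_list) / N
    return E_XY @ np.linalg.pinv(E_YY)
```

The averaging is also the weakness observed in Section 5.4: a single W cannot adapt when the N signals differ considerably from each other.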

5.2. Simulations with Signals Modelled from Images ‘Plant’: Application of Piecewise Interpolation Filter and GOL Filters

Here, results of simulations for reference signals represented by the matrices X ( 1 ) , … , X ( 141 ) (see Equation (29) above), formed from the images ‘plant’ from http://sipi.usc.edu/services/database.html (accessed on 21 February 2025), are presented. Typical selected images X ( k ) are shown in Figure 1.
Observed noisy images Y ( 1 ) , , Y ( 141 ) have been simulated in the form
Y ( k ) = X ( k ) randn ( k ) rand ( k ) ,
for each k = 1 , … , 141 . Here, • denotes the Hadamard (entrywise) product, and randn ( k ) and rand ( k ) are 116 × 256 matrices with random entries. The entries of randn ( k ) are normally distributed with mean zero and variance one. The entries of rand ( k ) are uniformly distributed in the interval ( 0 , 1 ) . A typical example of such images is given in Figure 2a. Examples of estimates of the signal X ( 95 ) by different filters are presented in Figure 2b–d.
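The multiplicative-noise model of Equation (40) can be simulated as follows; this NumPy sketch mirrors the description above (the variable names are ours, and a uniform random matrix stands in for a reference image X ( k ) ).

```python
import numpy as np

rng = np.random.default_rng(0)
X_k = rng.random((116, 256))               # stand-in for a reference image X(k)
noise_n = rng.standard_normal((116, 256))  # randn(k): N(0, 1) entries
noise_u = rng.random((116, 256))           # rand(k): uniform(0, 1) entries
Y_k = X_k * noise_n * noise_u              # entrywise (Hadamard) products
```

Since every pixel is multiplied by two independent random factors, one of which changes sign, the observations are grossly corrupted in the sense used throughout this section.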
To demonstrate the effectiveness of the proposed filter F ( p − 1 ) , the sub-filters F j ( p − 1 ) and the associated interpolation signal pairs { X j , Y j } j = 1 p have been chosen in four different ways as follows.
Example 1.
First, for p = 5 , the interpolation signal pairs are
{ X 1 , Y 1 } = { X ( 1 ) , Y ( 1 ) } , { X 2 , Y 2 } = { X ( 35 ) , Y ( 35 ) } , { X 3 , Y 3 } = { X ( 70 ) , Y ( 70 ) } ,
{ X 4 , Y 4 } = { X ( 105 ) , Y ( 105 ) } , { X 5 , Y 5 } = { X ( 141 ) , Y ( 141 ) } .
The error values { ε k , F ( 4 ) } 1 141 associated with filter F ( 4 ) are evaluated by Equation (37). The graph of { ε k , F ( 4 ) } 1 141 is presented in Figure 3a.
Example 2.
For p = 8 , the interpolation signal pairs are
{ X 1 , Y 1 } = { X ( 1 ) , Y ( 1 ) } , { X j , Y j } = { X ( 20 ( j − 1 ) ) , Y ( 20 ( j − 1 ) ) } , for j = 2 , … , 7 ;
and { X 8 , Y 8 } = { X ( 141 ) , Y ( 141 ) } .
The error magnitudes { ε k , F ( 7 ) } 1 141 associated with the piecewise interpolation filter F ( 7 ) , constructed by Equations (31)–(36) with the interpolation signal pairs given by Equations (43) and (44), are shown in Figure 3b.
It follows from Figure 3b that the errors associated with filter F ( 7 ) are smaller than those of filter F ( 4 ) . This confirms Theorem 3.
Example 3.
Further, for p = 15 , the interpolation pairs are
{ X 1 , Y 1 } = { X ( 1 ) , Y ( 1 ) } , { X j , Y j } = { X ( 10 ( j − 1 ) ) , Y ( 10 ( j − 1 ) ) } , for j = 2 , … , 14 ;
and { X 15 , Y 15 } = { X ( 141 ) , Y ( 141 ) } .
In Figure 3c, the errors { ε k , F ( 15 ) } 1 141 associated with the piecewise interpolation filter F ( 15 ) are presented. Figure 3c demonstrates a further confirmation of Theorem 3: the errors associated with the piecewise interpolation filter diminish as p increases.
Example 4.
Finally, the number of interpolation signal pairs { X j , Y j } j = 1 p is p = 29 so that
{ X 1 , Y 1 } = { X ( 1 ) , Y ( 1 ) } , { X j , Y j } = { X ( 5 ( j − 1 ) ) , Y ( 5 ( j − 1 ) ) } , for j = 2 , … , 28 ;
and { X 29 , Y 29 } = { X ( 141 ) , Y ( 141 ) } .
In this case, when p is greater than in the previous Examples 1–3, the errors { ε k , F ( 29 ) } 1 141 associated with the piecewise interpolation filter F ( 29 ) are smaller than those associated with filters F ( 4 ) , F ( 8 ) and F ( 15 ) —see Figure 3d.
The diagrams of errors associated with the GOL filters [5] are also presented in Figure 3, which shows that the proposed filters F ( 4 ) , F ( 8 ) , F ( 15 ) and F ( 29 ) provide better accuracy than that of the GOL filters.
At the same time, the filter F ( p − 1 ) is easier to implement since it requires less initial information than the GOL filters, as discussed in Section 1.1.1 and Section 1.1.4.
Results of the application of the averaging polynomial filter [15,16] are discussed in Section 5.4 below.

5.3. Simulations with Signals Modelled from Images ‘Boat’: Application of Piecewise Interpolation Filter and GOL Filters

In this section, results of the simulations for a different type of signal than those considered in Section 5.2 above are presented. Here, the reference signals X ( 1 ) , , X ( 141 ) are formed from images ‘boat’ (http://sipi.usc.edu/services/database.html, accessed 21 February 2025).
Observed noisy signals Y ( 1 ) , , Y ( 141 ) have been simulated in the form
Y ( k ) = X ( k ) randn ( k ) ,
for each k = 1 , , 141 . The noise term is different from that in Equation (40).
Typical selected images X ( k ) and Y ( k ) are shown in Figure 4 and Figure 5, respectively.
As in Section 5.2, the piecewise interpolation filter F ( p − 1 ) is constructed by Equations (31)–(36). In Examples 5–8 below, the number p − 1 of sub-filters F j ( p − 1 ) and the associated interpolation signal pairs { X j , Y j } j = 1 p have been chosen in four different ways.
Example 5.
First, similar to Example 1, the number of interpolation signal pairs { X j , Y j } j = 1 p has been chosen as p = 5 , and X j and Y j have been presented as in Equations (41) and (42).
The error values { ε k , F ( 5 ) } 1 141 associated with the piecewise interpolation filter F ( 5 ) applied to these data are presented in Figure 6a.
Example 6.
For a greater number of interpolation signal pairs than in Example 5, p = 8 , and for X j and Y j ( j = 1 , … , 8 ) chosen as in Equations (43) and (44), the error magnitudes { ε k , F ( 8 ) } 1 141 associated with the piecewise interpolation filter F ( 8 ) are shown in Figure 6b. A comparison between Figure 6a,b demonstrates that an increase in p leads to a decrease in the errors associated with the filter F ( p − 1 ) .
Example 7.
For p = 15 , and for X j and Y j ( j = 1 , … , 15 ) chosen as in Equations (45) and (46), the errors { ε k , F ( 15 ) } 1 141 associated with the piecewise interpolation filter F ( 15 ) are smaller still than those for filters F ( 5 ) and F ( 8 ) ; see Figure 6c.
Example 8.
A further increase in p, to p = 29 , confirms this tendency. The piecewise interpolation filter F ( 29 ) , with X j and Y j ( j = 1 , … , 29 ) chosen similarly to Equations (47) and (48), produces the associated errors { ε k , F ( 29 ) } 1 141 presented in Figure 6d. They are clearly smaller than the errors associated with filters F ( 5 ) , F ( 8 ) and F ( 15 ) .
The errors associated with the GOL filters are also presented in Figure 6a–d. The figures clearly demonstrate the advantage of the piecewise interpolation filter F ( p − 1 ) .
Results of the application of the averaging polynomial filter [16] are discussed in Section 5.4 below.

5.4. Results of Simulations for Averaging Polynomial Filter [15,16]

To further illustrate the effectiveness of the proposed piecewise interpolation filter, in this Section, results of simulations for the averaging polynomial filter [15,16] are presented. The filter has been applied to two different types of data considered in Section 5.2 and Section 5.3.
Example 9.
The filter [15,16] applied to signals considered in Section 5.2 gives the associated errors { ϵ k W } k = 1 141 (see Equation (39)) represented in Figure 7a. For a comparison, the errors associated with the piecewise interpolation filter F ( 29 ) and the GOL filters [5] are also given in Figure 7a.
A typical example of the estimated signal by the averaging polynomial filter [15,16] is presented in Figure 2d above.
Example 10.
The averaging polynomial filter [16] applied to the signals considered in Section 5.3 produces the associated errors { ε k , W } k = 1 141 shown in Figure 7b. The errors associated with the piecewise interpolation filter F ( 29 ) are much smaller and are not discernible at the scale of Figure 7b.
The extreme failure of the averaging polynomial filter (errors of order 10 9 ) in Examples 9 and 10 illustrates the observation made in Section 1.2.4: the associated error increases when the signals differ considerably from each other, which is the case in Examples 9 and 10.
Together, Figure 2, Figure 3, Figure 5, Figure 6 and Figure 7a,b illustrate the advantage of the piecewise interpolation filter.

5.5. Further Simulations with Different Type of Noise

In Examples 11 and 12 below, a different type of noise is considered. Unlike the multiplicative noise in Equations (40) and (49), here, the noise is additive.
Example 11.
First, the piecewise interpolation filter F ( 28 ) , the GOL filters [5] and the averaging polynomial filter [16] have been applied to the observed signals given by
Y ( k ) = X ( k ) + 900 × randn ( k ) , for k = 1 , … , 141 ,
where X ( k ) is as in Section 5.2, i.e., X ( k ) is formed from the images ‘plant’. In Figure 8a, the diagrams of the errors associated with the filter F ( 28 ) and the GOL filters [5] are given. The errors associated with the averaging polynomial filter [16], { ε k , W } k = 1 141 , are much greater (of order 10 9 ) and are not presented in Figure 8a.
Example 12.
In this example, the reference signals X ( 1 ) , , X ( 141 ) are as those in Section 5.3, i.e., they are formed from the image ’boat’. The observed signals are given by
Y ( k ) = X ( k ) + 1000 × randn ( k ) , for k = 1 , … , 141 .
The piecewise interpolation filter F ( 29 ) and the GOL filters [5] estimate the reference signals with the associated errors presented in Figure 8b. As in Example 11 above, the errors associated with the averaging polynomial filter [15,16] are much greater (of order 10 10 ) and are not presented in Figure 8b.
Examples 11 and 12 further demonstrate the advantages of the proposed piecewise interpolation filter.

5.6. Summary of Simulations

The above simulations confirm the theoretical results obtained in Theorems 1–3. In particular, Figure 3 and Figure 6 demonstrate that the error associated with the piecewise interpolation filter F ( p − 1 ) decreases as the number of sub-filters F 1 , … , F p − 1 increases.
A comparison between the proposed filter F ( p − 1 ) and the known related filters [5,12,13,15,16] has been performed. The filter F ( p − 1 ) estimates the reference signals with accuracy much better than that of the generic optimal linear (GOL) filters [5] and the averaging polynomial filter [15,16]. Further, the filters proposed in [12,13] fail to process the signals under consideration. This is because the observed signals in Equations (40) and (49)–(51) are grossly corrupted and, therefore, the inverse matrices used in the filter structures in [13] do not exist. The technique in [12] requires the use of the reference signal in the filter, which is supposed to be unknown in the simulations above.
The filters have been applied to the different signal sets (presented in Section 5.2 and Section 5.3), using different forms of noise (given in Equations (40), (49), (50) and (51)).
The computational work associated with the proposed filter F ( p − 1 ) is substantially less than that associated with the known filters discussed in Section 1 (in particular, with the filters in [10,11,12,13,16,17,20,21,22,23,24,25,26]). This is because, for the processing of a data set containing N signals, the filter F ( p − 1 ) requires the computation of p covariance matrices with p ≪ N , while the known filters require the computation of N matrices (in the above examples, p = 5 , 7 , 15 , 28 , respectively, and N = 141 ).
Table 1 reports typical times in seconds used by the piecewise linear interpolation filter (PILF), the GOL filters and the averaging polynomial filter (APF) [16] in the above examples. It follows from Table 1 that the PILF is up to ten times faster than the GOL filters. While the times used by the PILF and the APF for p = 29 are almost the same, the accuracy associated with the PILF is much better than that associated with the APF.
Table 2 summarizes the computational and structural advantages of the proposed method. The abbreviation ‘PINV’ means the use of pseudo-inverse matrices (not inverse matrices), and ε PILF , ε GOL and ε APF denote the errors associated with the PILF, GOL and APF, respectively. ‘Stability’ means numerical stability. The last column shows features of the filters considered in [10,11,12,13,16,17,20,21,22,23,24,25,26]. As mentioned above, these filters cannot be applied to the data under consideration because of the singularity of the matrices used there. Therefore, their errors and computational complexity are marked N/A as not applicable.

6. Conclusions

A technique for the constructive approximation of a nonlinear operator F : K Y → K X has been provided. Here, K X = { x ( t , · ) ∈ L 2 ( Ω , R m ) | t ∈ T } and K Y = { y ( t , · ) ∈ L 2 ( Ω , R n ) | t ∈ T } , where T : = [ a , b ] ⊂ R and Ω = { ω } is the set of outcomes of a probability space. The device behind the proposed method is a special extension of the piecewise linear interpolation technique to the case of the stochastic sets K X and K Y . The proposed methodology is motivated by problems arising in signal processing, where the nonlinear operator F is interpreted as a nonlinear system (or a nonlinear filter) transforming a set of stochastic signals K Y to a set of stochastic signals K X . Thus, the technique offers an effective way to transform large data sets.
Distinctive features of the approach are as follows:
(i)
The proposed filter F ( p − 1 ) : K Y → K X is nonlinear and is presented in the form of a sum with p − 1 terms, where each term F j : K Y , j → K X , j is interpreted as a particular sub-filter. Here, K Y , j and K X , j are ‘small’ pieces of K Y and K X , respectively.
(ii)
The prime idea is to exploit prior information on only a few reference signals, p, from the set K X , which contains N ≫ p signals (or even infinitely many signals), and to determine F j separately for each pair of pieces K Y , j and K X , j so that the associated error is minimal. In other words, the filter F ( p − 1 ) is flexible with respect to changes in the sets of observed and reference signals K Y and K X , respectively.
(iii)
Due to the specific way of determining F j , the filter F ( p − 1 ) provides a smaller associated error than a filter that processes the whole set K Y without being specifically adjusted to each particular piece K Y , j . Moreover, the error associated with the filter decreases as the number of its terms F 1 , … , F p − 1 increases.
(iv)
While the proposed filter F ( p − 1 ) processes arbitrarily large (and even infinite) signal sets, the filter is nevertheless fixed for all signals in those sets.
(v)
The filter F ( p − 1 ) is determined in terms of pseudo-inverse matrices so that the filter always exists.
(vi)
The computational load associated with the filter F ( p − 1 ) is less than that associated with other known filters applied to the processing of large signal sets.

Author Contributions

Conceptualization, methodology, writing—original draft, A.T.; numerical simulations, A.T. and P.P.; algorithm, P.P.; Matlab codes, P.P.; English amelioration, P.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Proof of Theorem 1.
It follows from Equations (8) and (11) that α j , for j = 1 , … , p − 1 , is given by
α j = x ^ ( t j , ω ) − B j [ y ( t j , ω ) ] .
Further, for α j given by Equation (A1),
‖ [ x ( t j + 1 , · ) − α j ] − B j [ y ( t j + 1 , · ) ] ‖ Ω 2
= ‖ z ( t j , t j + 1 , · ) − B j [ w ( t j , t j + 1 , · ) ] ‖ Ω 2 = tr { E z j z j − E z j w j B j T − B j E w j z j + B j E w j w j B j T } = ‖ E z j z j 1 / 2 ‖ 2 − ‖ E z j w j ( E w j w j 1 / 2 ) † ‖ 2 + ‖ ( B j − E z j w j E w j w j † ) E w j w j 1 / 2 ‖ 2
= ‖ E z j z j 1 / 2 ‖ 2 − ‖ E z j w j ( E w j w j 1 / 2 ) † ‖ 2 + ‖ E z j w j ( E w j w j 1 / 2 ) † − B j E w j w j 1 / 2 ‖ 2 ,
where ‖ · ‖ is the Frobenius norm. The latter is true because
E w j w j † E w j w j 1 / 2 = ( E w j w j 1 / 2 ) †
and
E z j w j E w j w j † E w j w j = E z j w j
by Lemma 24 in [5]. Thus, the second expression in Equation (11) is reduced to the problem
min B j ‖ E z j w j ( E w j w j 1 / 2 ) † − B j E w j w j 1 / 2 ‖ 2 .
It is known (see, for example, [5], p. 304) that the solution of Equation (A5) is given by Equation (19).
Equation (17) follows from Equations (6) and (A1).
Theorem 1 is proven. □
Proof of Theorem 2.
For t ∈ [ t j , t j + 1 ] and F j defined by Equations (17)–(19),
x ( t , ω ) − F [ y ( t , ω ) ] = x ( t , ω ) − F j [ y ( t , ω ) ] = x ( t , ω ) − x ^ ( t j , ω ) + B j y ( t j , ω ) − B j y ( t , ω ) = [ x ( t , ω ) − x ( t j + 1 , ω ) ] + z ( t j , t j + 1 , ω ) − B j w ( t j , t j + 1 , ω ) + B j [ y ( t j + 1 , ω ) − y ( t , ω ) ] .
Then, Equation (A6) implies
‖ x ( t , ω ) − F [ y ( t , ω ) ] ‖ T , Ω 2 ≤ ‖ x ( t , ω ) − x ( t j + 1 , ω ) ‖ T , Ω 2 + ‖ z ( t j , t j + 1 , ω ) − B j w ( t j , t j + 1 , ω ) ‖ Ω 2 + ‖ B j [ y ( t j + 1 , ω ) − y ( t , ω ) ] ‖ T , Ω 2 ,
where ‖ z ( t j , t j + 1 , ω ) − B j w ( t j , t j + 1 , ω ) ‖ Ω 2 = ‖ z ( t j , t j + 1 , ω ) − B j w ( t j , t j + 1 , ω ) ‖ T , Ω 2 .
It follows from Equations (A2) and (A3) that for B j given by Equation (19),
‖ z ( t j , t j + 1 , ω ) − B j w ( t j , t j + 1 , ω ) ‖ Ω 2 = ‖ E z j z j 1 / 2 ‖ 2 − ‖ E z j w j ( E w j w j 1 / 2 ) † ‖ 2 .
Then, Equations (16)–(19), (23) and (A6)–(A8) imply that for all t ∈ [ a , b ] and ω ∈ Ω , Equation (24) is true. □
Proof of Theorem 3.
Equation (22) implies that
‖ x ( t , ω ) − F [ y ( t , ω ) ] ‖ T , Ω 2 = 1 / ( b − a ) ∑ j = 1 p − 1 ∫ t j t j + 1 ‖ x ( t , ω ) − F j [ y ( t , ω ) ] ‖ Ω 2 d t ,
where
‖ x ( t , ω ) − F j [ y ( t , ω ) ] ‖ Ω 2 = ‖ x ( t , ω ) − x ^ ( t j , ω ) − B j [ y ( t , ω ) − y ( t j , ω ) ] ‖ Ω 2 ≤ ‖ x ( t , ω ) − x ( t j , ω ) ‖ Ω 2 + ‖ x ( t j , ω ) − x ^ ( t j , ω ) ‖ Ω 2 + ‖ B j [ y ( t j , ω ) − y ( t , ω ) ] ‖ Ω 2 .
Then,
∫ t j t j + 1 ‖ x ( t , ω ) − F j [ y ( t , ω ) ] ‖ Ω 2 d t ≤ ∫ t j t j + 1 ‖ x ( t , ω ) − x ( t j , ω ) ‖ Ω 2 d t + ∫ t j t j + 1 ‖ x ( t j , ω ) − x ^ ( t j , ω ) ‖ Ω 2 d t + ‖ B j ‖ ∫ t j t j + 1 ‖ y ( t j , ω ) − y ( t , ω ) ‖ Ω 2 d t
≤ λ j ( Δ t j ) 2 + ‖ x ( t j , ω ) − x ^ ( t j , ω ) ‖ Ω 2 Δ t j + ‖ B j ‖ γ j ( Δ t j ) 2 .
Let us consider an estimate of ‖ x ( t j , ω ) − x ^ ( t j , ω ) ‖ Ω 2 , for j = 1 , … , p − 1 . To this end, let us denote Δ t = max j = 1 , … , p − 1 Δ t j .
For j = 1 , i.e., for t ∈ [ t 1 , t 2 ] ,
‖ x ( t , ω ) − F 1 [ y ( t , ω ) ] ‖ Ω 2 ≤ ‖ x ( t , ω ) − x ( t 1 , ω ) ‖ Ω 2 + ‖ x ( t 1 , ω ) − x ^ ( t 1 , ω ) ‖ Ω 2 + ‖ B 1 [ y ( t 1 , ω ) − y ( t , ω ) ] ‖ Ω 2 ≤ λ 1 Δ t 1 + c 1 Δ t 1 + ‖ B 1 ‖ γ 1 Δ t 1 ≤ β 1 Δ t ,
where β 1 = λ 1 + c 1 + ‖ B 1 ‖ γ 1 . In particular, the latter implies
‖ x ( t 2 , ω ) − x ^ ( t 2 , ω ) ‖ Ω 2 = ‖ x ( t 2 , ω ) − F 1 [ y ( t 2 , ω ) ] ‖ Ω 2 ≤ β 1 Δ t .
For j = 2 , i.e., for t ∈ [ t 2 , t 3 ] ,
‖ x ( t , ω ) − F 2 [ y ( t , ω ) ] ‖ Ω 2 ≤ ‖ x ( t , ω ) − x ( t 2 , ω ) ‖ Ω 2 + ‖ x ( t 2 , ω ) − x ^ ( t 2 , ω ) ‖ Ω 2 + ‖ B 2 [ y ( t 2 , ω ) − y ( t , ω ) ] ‖ Ω 2 ≤ λ 2 Δ t 2 + β 1 Δ t + ‖ B 2 ‖ γ 2 Δ t 2 ≤ β 2 Δ t ,
where β 2 = λ 2 + β 1 + ‖ B 2 ‖ γ 2 . In particular, it follows that
‖ x ( t 3 , ω ) − x ^ ( t 3 , ω ) ‖ Ω 2 = ‖ x ( t 3 , ω ) − F 2 [ y ( t 3 , ω ) ] ‖ Ω 2 ≤ β 2 Δ t .
On the above basis, let us assume that, for j = k − 1 with k = 2 , … , p − 1 , i.e., for t ∈ [ t k − 1 , t k ] ,
‖ x ( t k , ω ) − x ^ ( t k , ω ) ‖ Ω 2 = ‖ x ( t k , ω ) − F k − 1 [ y ( t k , ω ) ] ‖ Ω 2 ≤ β k − 1 Δ t ,
where β k − 1 is defined by analogy with β 2 .
Then, for j = k with k = 2 , … , p − 1 , i.e., for t ∈ [ t k , t k + 1 ] ,
‖ x ( t , ω ) − F k [ y ( t , ω ) ] ‖ Ω 2 ≤ ‖ x ( t , ω ) − x ( t k , ω ) ‖ Ω 2 + ‖ x ( t k , ω ) − x ^ ( t k , ω ) ‖ Ω 2 + ‖ B k [ y ( t k , ω ) − y ( t , ω ) ] ‖ Ω 2 ≤ λ k Δ t k + β k − 1 Δ t + ‖ B k ‖ γ k Δ t k ≤ β k Δ t ,
where β k = λ k + β k − 1 + ‖ B k ‖ γ k . Thus, the following is true:
‖ x ( t k + 1 , ω ) − x ^ ( t k + 1 , ω ) ‖ Ω 2 = ‖ x ( t k + 1 , ω ) − F k [ y ( t k + 1 , ω ) ] ‖ Ω 2 ≤ β k Δ t .
Therefore, Equations (A10)–(A12) imply
∫ t j t j + 1 ‖ x ( t , ω ) − F j [ y ( t , ω ) ] ‖ Ω 2 d t ≤ λ j ( Δ t j ) 2 + β j − 1 ( Δ t j ) 2 + ‖ B j ‖ γ j ( Δ t j ) 2 ≤ η j ( Δ t ) 2 ,
where η j = λ j + β j − 1 + ‖ B j ‖ γ j , and then it follows from Equations (A9)–(A11) and (A13) that for all t ∈ [ a , b ] ,
‖ x ( t , ω ) − F [ y ( t , ω ) ] ‖ T , Ω 2 ≤ 1 / ( b − a ) ∑ j = 1 p − 1 η j ( Δ t ) 2 = 1 / ( b − a ) Δ t ∑ j = 1 p − 1 η j Δ t .
Let us now choose c ∈ R and d ∈ R so that Δ t = ( d − c ) / p , and partition the interval [ c , d ] ⊂ R by points τ 1 , … , τ p so that c = τ 1 and τ j = τ 1 + j Δ t with j = 1 , … , p . There exists an integrable (bounded) function φ : [ c , d ] → R such that, for ξ j ∈ ( τ j , τ j + 1 ) , φ ( ξ j ) = η j . Then,
lim Δ t → 0 ∑ j = 1 p − 1 η j Δ t = lim Δ t → 0 ∑ j = 1 p − 1 φ ( ξ j ) Δ t = ∫ c d φ ( τ ) d τ < + ∞ .
Thus,
1 / ( b − a ) Δ t ∑ j = 1 p − 1 η j Δ t → 0 as Δ t → 0 .
As a result, Equations (A14)–(A16) imply Equation (26). □

References

  1. Chen, J.; Benesty, J.; Huang, Y.; Doclo, S. New Insights Into the Noise Reduction Wiener Filter. IEEE Trans. Audio Speech Lang. Process. 2006, 14, 1218–1234.
  2. Spurbeck, M.; Schreier, P. Causal Wiener filter banks for periodically correlated time series. Signal Process. 2007, 87, 1179–1187.
  3. Goldstein, J.S.; Reed, I.; Scharf, L.L. A Multistage Representation of the Wiener Filter Based on Orthogonal Projections. IEEE Trans. Inf. Theory 1998, 44, 2943–2959.
  4. Hua, Y.; Nikpour, M.; Stoica, P. Optimal Reduced-Rank Estimation and Filtering. IEEE Trans. Signal Process. 2001, 49, 457–469.
  5. Torokhti, A.; Howlett, P. Computational Methods for Modelling of Nonlinear Systems; Elsevier: Amsterdam, The Netherlands, 2007.
  6. Sontag, E.D. Polynomial Response Maps; Lecture Notes in Control and Information Sciences; Springer: Cham, Switzerland, 1979; Volume 13.
  7. Chen, S.; Billings, S.A. Representation of non-linear systems: NARMAX model. Int. J. Control 1989, 49, 1013–1032.
  8. Mathews, V.J.; Sicuranza, G.L. Polynomial Signal Processing; J. Wiley & Sons: Hoboken, NJ, USA, 2001.
  9. Torokhti, A.; Howlett, P. Optimal Transform Formed by a Combination of Nonlinear Operators: The Case of Data Dimensionality Reduction. IEEE Trans. Signal Process. 2006, 54, 1431–1444.
  10. Vesma, J.; Saramaki, T. Polynomial-Based Interpolation Filters—Part I: Filter Synthesis. Circuits Syst. Signal Process. 2007, 26, 115–146.
  11. Torokhti, A.; Miklavcic, S. Data Compression under Constraints of Causality and Variable Finite Memory. Signal Process. 2010, 90, 2822–2834.
  12. Russo, F. Technique for image denoising based on adaptive piecewise linear filters and automatic parameter tuning. IEEE Trans. Instrum. Meas. 2006, 55, 1362–1367.
  13. Cousseau, J.E.; Figueroa, J.L.; Werner, S.; Laakso, T.I. Efficient Nonlinear Wiener Model Identification Using a Complex-Valued Simplicial Canonical Piecewise Linear Filter. IEEE Trans. Signal Process. 2007, 55, 1780–1792.
  14. Wigren, T. Recursive Prediction Error Identification Using the Nonlinear Wiener Model. Automatica 1993, 29, 1011–1025.
  15. Torokhti, A.; Howlett, P. Filtering and Compression for Infinite Sets of Stochastic Signals. Signal Process. 2009, 89, 291–304.
  16. Torokhti, A.; Manton, J. Generic Weighted Filtering of Stochastic Signals. IEEE Trans. Signal Process. 2009, 57, 4675–4685.
  17. Babuska, I.; Banerjee, U.; Osborn, J.E. Generalized finite element methods: Main ideas, results, and perspective. Int. J. Comput. Methods 2004, 1, 67–103.
  18. Golub, G.H.; van Loan, C.F. Matrix Computations; Johns Hopkins University Press: Baltimore, MD, USA, 1996.
  19. Julian, P.; Desages, A.; D’Amico, B. Orthonormal high-level canonical PWL functions with applications to model reduction. IEEE Trans. Circuits Syst. I Fundam. Theory Appl. 2000, 47, 702–712.
  20. Kang, S.; Chua, L. A global representation of multidimensional piecewise-linear functions with linear partitions. IEEE Trans. Circuits Syst. 1978, 25, 938–940.
  21. Chua, L.O.; Deng, A.-C. Canonical piecewise-linear representation. IEEE Trans. Circuits Syst. 1988, 35, 101–111.
  22. Lin, J.-N.; Unbehauen, R. Adaptive nonlinear digital filter with canonical piecewise-linear structure. IEEE Trans. Circuits Syst. 1990, 37, 347–353.
  23. Lin, J.-N.; Unbehauen, R. Canonical piecewise-linear approximations. IEEE Trans. Circuits Syst. I Fundam. Theory Appl. 1992, 39, 697–699.
  24. Gelfand, S.B.; Ravishankar, C.S. A tree-structured piecewise linear adaptive filter. IEEE Trans. Inf. Theory 1993, 39, 1907–1922.
  25. Heredia, E.A.; Arce, G.R. Piecewise linear system modeling based on a continuous threshold decomposition. IEEE Trans. Signal Process. 1996, 44, 1440–1453.
  26. Feng, G. Robust filtering design of piecewise discrete time linear systems. IEEE Trans. Signal Process. 2005, 53, 599–605.
  27. Anderson, T. An Introduction to Multivariate Statistical Analysis; Wiley: New York, NY, USA, 1984.
  28. Perlovsky, L.I.; Marzetta, T.L. Estimating a Covariance Matrix from Incomplete Realizations of a Random Vector. IEEE Trans. Signal Process. 1992, 40, 2097–2100.
  29. Ledoit, O.; Wolf, M. A well-conditioned estimator for large-dimensional covariance matrices. J. Multivar. Anal. 2004, 88, 365–411.
Figure 1. Examples of selected signals to be estimated from observed data.
Figure 2. Examples of the observed signal and the estimates obtained by different filters.
Figure 3. Diagrams of errors associated with the piecewise interpolation filters F ( p 1 ) and the GOL filters [5] applied to signals described in Examples 1–4: (a) for p = 5 ; (b) for p = 8 ; (c) for p = 15 ; (d) for p = 29 .
Figure 4. Examples of selected signals to be estimated from observed data considered in Examples 5–9.
Figure 5. Examples of the observed signal and the estimates obtained by different filters.
Figure 6. Diagrams of errors associated with the piecewise interpolation filter F ( p 1 ) of order p and the generic optimal linear (GOL) filters [5] applied to signals described in Examples 5–8: (a) for p = 5 ; (b) for p = 8 ; (c) for p = 15 ; (d) for p = 29 .
Figure 7. Diagrams of errors associated with the averaging polynomial filters [15,16] in (a) Example 9 and (b) Example 10.
Figure 8. Diagrams of errors associated with the piecewise interpolation filter F ( p 1 ) and the generic optimal linear (GOL) filters [5] applied to signals described in (a) Example 11 and (b) Example 12.
Table 1. Time in seconds used by PILF, GOL and APF in the above examples.

Initial Parameters            Time in Seconds
                              PILF    GOL     APF
'Plant', p = 8,  N = 256      0.37    3.72    1.25
'Boat',  p = 8,  N = 256      0.36    3.81    1.17
'Plant', p = 15, N = 256      0.70    3.72    1.25
'Boat',  p = 15, N = 256      0.67    3.81    1.17
'Plant', p = 29, N = 256      1.23    3.72    1.25
'Boat',  p = 29, N = 256      1.18    3.81    1.17
Table 2. Summary of computational and structural advantages of the proposed method.

             PILF                       GOL                        APF               [10,11,12,13,16,17,20,21,22,23,24,25,26]
PINV         Yes                        Yes                        Yes               No
Error        ε_PILF                     ε_PILF ≤ ε_GOL             ε_PILF ≤ ε_APF    N/A
Comp. load   p PINVs of n × n           N PINVs of n × n           one n × n         N/A
             matrices, p ≪ N            matrices, N ≫ p            PINV
Stability    Yes                        Yes                        No                No
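The "Comp. load" comparison in Table 2 can be illustrated with a minimal sketch (not the authors' implementation; the matrices, dimensions, and the helper `pseudo_inverses` below are hypothetical): PILF requires only p pseudo-inverses of n × n matrices with p ≪ N, GOL requires N of them, and APF requires a single one.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, N = 32, 8, 256  # illustrative dimensions only


def pseudo_inverses(k, n):
    """Form k random n x n matrices and compute the Moore-Penrose
    pseudo-inverse of each; the number of PINV calls is the dominant
    cost being compared in Table 2."""
    return [np.linalg.pinv(rng.standard_normal((n, n))) for _ in range(k)]


pilf_work = pseudo_inverses(p, n)  # PILF: p pseudo-inverses, p << N
gol_work = pseudo_inverses(N, n)   # GOL:  N pseudo-inverses
apf_work = pseudo_inverses(1, n)   # APF:  a single pseudo-inverse
print(len(pilf_work), len(gol_work), len(apf_work))
```

Since each pseudo-inverse of an n × n matrix costs O(n³), the work ratio between GOL and PILF is roughly N/p, which is consistent with the timing gap reported in Table 1.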
Torokhti, A.; Pudney, P. Constructive Approximation of Nonlinear Operators Based on Piecewise Interpolation Technique. Axioms 2026, 15, 91. https://doi.org/10.3390/axioms15020091