Article

Vector-Valued Shepard Processes: Approximation with Summability

by Oktay Duman 1 and Biancamaria Della Vecchia 2,*
1 Department of Mathematics, TOBB Economics and Technology University, Söğütözü, 06560 Ankara, Turkey
2 Dipartimento di Matematica, Università di Roma ‘Sapienza’, 00185 Rome, Italy
* Author to whom correspondence should be addressed.
Axioms 2023, 12(12), 1124; https://doi.org/10.3390/axioms12121124
Submission received: 15 November 2023 / Revised: 10 December 2023 / Accepted: 12 December 2023 / Published: 15 December 2023

Abstract: In this work, vector-valued continuous functions are approximated uniformly on the unit hypercube by Shepard operators. If $\lambda$ denotes the usual parameter of the Shepard operators and $m$ is the dimension of the hypercube, then our results show that a uniform approximation of a continuous vector-valued function by these operators is possible when $\lambda \ge m+1$. Using three-dimensional parametric plots, we illustrate this uniform approximation for some vector-valued functions. Finally, the influence of regular summability processes on the approximation is studied, and their motivation is shown.

1. Introduction

The Shepard operators, introduced by Donald Shepard in 1968 [1], are effectively employed in a wide array of fields, from mathematics and engineering to geographical mapping systems and mining, owing to their interpolation capabilities and their ability to approximate functions rapidly. These operators are quite successful not only in scattered-data interpolation problems (see [2,3,4,5,6,7,8]) but also in classical approximation theory (see [9,10,11,12,13,14,15]). Recently, in [16,17], we investigated the approximation behavior of the Shepard operators in the complex setting. Our main focus in this paper is to use these operators to approximate vector-valued functions on the unit hypercube and to further enhance this approach through regular summability methods.
Firstly, let us introduce the vector-valued Shepard operators that will be employed throughout this article.
Let $m \in \mathbb{N}$ and let $K$ be the unit hypercube (or, the unit $m$-dimensional square), i.e.,
$$K = [0,1]^m = \big\{ x = (x_1, x_2, \dots, x_m) \in \mathbb{R}^m : x_i \in [0,1],\ i = 1, 2, \dots, m \big\}.$$
For a fixed $n \in \mathbb{N}$, consider the set
$$\Omega_n := \big\{ k = (k_1, k_2, \dots, k_m) \in \mathbb{N}^m : k_i \in \{0, 1, \dots, n\},\ i = 1, 2, \dots, m \big\}$$
and the following sample points of $K$:
$$x_{k,n} = \Big( \frac{k_1}{n}, \frac{k_2}{n}, \dots, \frac{k_m}{n} \Big) \quad \text{with } k \in \Omega_n.$$
Then, in total, we have $(n+1)^m$ sample points on $K$. Assume $d \in \mathbb{N}$ and $f = (f_1, f_2, \dots, f_d)$ is a vector-valued function on the set $K$, where each component $f_r$ ($r = 1, 2, \dots, d$) is a real-valued function on $K$. Now, for a real $\lambda > 0$, we examine the following vector-valued Shepard processes:
$$S_{n,\lambda}(f; x) = \frac{\sum_{k=0}^{n} \|x - x_{k,n}\|_m^{-\lambda}\, f(x_{k,n})}{\sum_{k=0}^{n} \|x - x_{k,n}\|_m^{-\lambda}}, \tag{1}$$
where we write $\sum_{k=0}^{n}$ for the multi-index summation $\sum_{k_1=0}^{n} \sum_{k_2=0}^{n} \cdots \sum_{k_m=0}^{n}$. Here, the symbol $\|\cdot\|_m$ denotes the usual Euclidean norm on $\mathbb{R}^m$. Observe that $S_{n,\lambda}(f)$ is an interpolatory process at the sample points $x_{k,n}$, i.e.,
$$S_{n,\lambda}(f; x_{k,n}) = f(x_{k,n}) \quad \text{for } k \in \Omega_n.$$
It is easy to verify that $S_{n,\lambda}(f)$ may be written in terms of the components of $f$ as follows:
$$S_{n,\lambda}(f; x) = \big( \tilde{S}_{n,\lambda}(f_1; x), \tilde{S}_{n,\lambda}(f_2; x), \dots, \tilde{S}_{n,\lambda}(f_d; x) \big),$$
where $\tilde{S}_{n,\lambda}$ is given by
$$\tilde{S}_{n,\lambda}(g; x) := \frac{\sum_{k=0}^{n} \|x - x_{k,n}\|_m^{-\lambda}\, g(x_{k,n})}{\sum_{k=0}^{n} \|x - x_{k,n}\|_m^{-\lambda}} \tag{2}$$
for real-valued functions $g$ defined on $K$. It is clear that $\tilde{S}_{n,\lambda}(g; x)$ is real-valued.
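As a concrete illustration of definition (1), here is a small Python sketch (our own, not code from the paper): it evaluates $S_{n,\lambda}(f;x)$ on $[0,1]^m$ for a vector-valued `f`, handling sample points separately so that the interpolation property $S_{n,\lambda}(f; x_{k,n}) = f(x_{k,n})$ holds.

```python
import numpy as np
from itertools import product

def shepard(f, n, lam, x, m):
    """Evaluate the vector-valued Shepard operator S_{n,lam}(f; x) on [0,1]^m.

    f   : callable mapping a point of [0,1]^m to a vector in R^d
    x   : query point (length-m array-like)
    lam : the exponent lambda > 0 in the weights ||x - x_{k,n}||^(-lam)
    """
    x = np.asarray(x, dtype=float)
    nodes = np.array(list(product(np.arange(n + 1) / n, repeat=m)))  # (n+1)^m points
    dists = np.linalg.norm(nodes - x, axis=1)
    hit = np.isclose(dists, 0.0)
    if hit.any():                      # x is a sample point: interpolate exactly
        return np.asarray(f(nodes[hit][0]), dtype=float)
    w = dists ** (-lam)                # inverse-distance weights
    vals = np.array([f(p) for p in nodes])
    return (w[:, None] * vals).sum(axis=0) / w.sum()
```

For example, with $m = 1$, $d = 2$ and $f(x) = (x, x^2)$, `shepard(f, 4, 3, [0.5], 1)` returns $f(0.5)$ exactly, since $0.5$ is a sample point for $n = 4$.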
Based on this terminology, we structure the present paper as follows: Section 2 examines the approximation properties of the vector-valued Shepard operators, employing both classical convergence and regular summability methods. Section 3 proves our main approximation theorem together with the auxiliary results it relies on. Section 4 gives some significant applications, including the influence of regular summability processes on the approximation. The last section is devoted to concluding remarks.

2. Approximating by Vector-Valued Shepard Operators

Let $C(K, \mathbb{R}^d)$ denote the space of all continuous functions from $K$ into $\mathbb{R}^d$. We now state an approximation result for the vector-valued Shepard operators given by (1).
Theorem 1.
Let $f \in C(K, \mathbb{R}^d)$. If $\lambda \ge m+1$, then we obtain
$$S_{n,\lambda}(f) \rightrightarrows f \quad \text{on } K, \tag{3}$$
with $\rightrightarrows$ denoting uniform convergence.
Note that the uniform convergence in (3) can be written explicitly in terms of the components of $f$:
$$S_{n,\lambda}(f) \rightrightarrows f \text{ on } K \iff \lim_{n \to \infty} \big\| \tilde{S}_{n,\lambda}(f_r) - f_r \big\| = 0 \text{ for each } r = 1, 2, \dots, d \iff \lim_{n \to \infty} \sup_{x \in K} \big| \tilde{S}_{n,\lambda}(f_r; x) - f_r(x) \big| = 0 \text{ for each } r = 1, 2, \dots, d.$$
Remark 1.
We should note that Farwig in [8] considered multidimensional Shepard operators based on $n$ distinct sample points in a compact set $A$ satisfying two regularity conditions. In (1), by contrast, taking $(n+1)^m$ sample points in the unit hypercube $K$, we consider not only multidimensional but also vector-valued Shepard operators. One can also check that, for $d = 1$ in Theorem 1, our approximation result agrees with Farwig’s theorem in [8]. Indeed, it follows from Theorem 2.3 in [8] (in accordance with our notation, take $q = 0$, $r = 1/n$, $s = m$, $p = \lambda$) that the order of approximation is $O((\log n)/n)$ when $\lambda = m+1$ and $O(1/n)$ when $\lambda > m+1$, which coincides with our Theorem 1. Here, we prove Theorem 1 from a different point of view than Farwig’s method (see Section 3). However, unlike Farwig’s result, we have not yet proved the existence of the approximation when $m \le \lambda < m+1$. We also give some applications that graphically and numerically illustrate the approximation of some vector-valued functions (see Section 4).
We now discuss the effect of regular summability methods on the approximation in Theorem 1, and first recall some preliminaries on summability. Let $A := [a_{jn}]_{j,n \in \mathbb{N}}$ be a matrix and $x := \{x_n\}$ a sequence. The $A$-transform of $\{x_n\}$ is the sequence $Ax := \{(Ax)_j\}$ given by $(Ax)_j = \sum_{n=1}^{\infty} a_{jn} x_n$, provided that the series converges for each $j$. In such a case, we call $A$ a summability (matrix) method. If $\lim_j (Ax)_j = L$ with $L \in \mathbb{R}$, the sequence $x = \{x_n\}$ is said to be $A$-summable (or $A$-convergent) to $L$, and we write $A\text{-}\lim x = L$. This limit can also be considered for sequences of vector-valued functions. Consider the function sequence $\{f_n\} = \{(f_{1,n}, f_{2,n}, \dots, f_{d,n})\}$. One says that $\{f_n\}$ is (uniformly) $A$-summable to $f = (f_1, f_2, \dots, f_d)$ on $K$ if
$$\sum_{n=1}^{\infty} a_{jn} f_n \rightrightarrows f \quad \text{on } K \text{ as } j \to \infty \tag{4}$$
or, equivalently, for each $r = 1, 2, \dots, d$,
$$\sum_{n=1}^{\infty} a_{jn} f_{r,n} \rightrightarrows f_r \quad \text{on } K, \tag{5}$$
which is denoted by $f_n \xrightarrow{A} f$ on $K$. We say that a matrix summability method $A$ is regular when $A\text{-}\lim x = L$ whenever $\lim x = L$. By the well-known Silverman–Toeplitz theorem (see, for instance, [18,19]), one can characterize the regularity of a matrix summability method as follows:
$$A = [a_{jn}] \text{ is regular} \iff \begin{cases} \lim_{j \to \infty} a_{jn} = 0 & \text{for every } n, \\ \lim_{j \to \infty} \sum_{n=1}^{\infty} a_{jn} = 1, \\ \sup_{j \in \mathbb{N}} \sum_{n=1}^{\infty} |a_{jn}| < \infty. \end{cases}$$
A summability method $A = [a_{jn}]$ is said to be non-negative if $a_{jn} \ge 0$ for all $j, n \in \mathbb{N}$. We should note that non-negative regular summability processes have proved successful in approximation theory (cf. [20,21,22,23,24,25,26]). We now apply such methods to approximation by vector-valued Shepard operators.
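To make these notions concrete, the following sketch (ours, not from the paper) checks the three regularity conditions numerically for the Cesàro matrix $C_1$, whose entries are $c_{jn} = 1/j$ for $n \le j$ and $0$ otherwise, and shows that $C_1$ sums the divergent sequence $x_n = (-1)^n$ to $0$.

```python
import numpy as np

def cesaro_row(j, N):
    """Row j (1-indexed) of the Cesaro matrix C1, truncated to N columns:
    c_{jn} = 1/j for n <= j and 0 otherwise."""
    row = np.zeros(N)
    row[:j] = 1.0 / j
    return row

N = 5000
row = cesaro_row(N, N)
# Silverman-Toeplitz, numerically: column entries vanish as j grows,
# row sums tend to 1, and absolute row sums stay bounded (here they equal 1).
print(row[0], row.sum())

# x_n = (-1)^n has no limit, but its C1-transform tends to 0:
x = np.array([(-1.0) ** n for n in range(1, N + 1)])
print(row @ x)
```

This is exactly the mechanism exploited below: a matrix average can restore convergence that the raw sequence lacks.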
From Theorem 1, we deduce that, for each $f \in C(K, \mathbb{R}^d)$ and $\lambda \ge m+1$,
$$S_{n,\lambda}(f) \rightrightarrows f \quad \text{on } K.$$
Now, for a fixed non-negative regular summability method $A = [a_{jn}]$, we immediately check that, for every $\lambda \ge m+1$,
$$S_{n,\lambda}(f) \xrightarrow{A} f \quad \text{on } K,$$
provided that the transformed operator $\sum_{n=1}^{\infty} a_{jn} S_{n,\lambda}(f)$ is well defined for each $j$. Indeed, from (1), (4) and (5) we may write, for each $r = 1, 2, \dots, d$,
$$\Big\| \sum_{n=1}^{\infty} a_{jn} \tilde{S}_{n,\lambda}(f_r) - f_r \Big\| \le \sum_{n=1}^{\infty} a_{jn} \big\| \tilde{S}_{n,\lambda}(f_r) - f_r \big\| + \|f_r\| \Big| \sum_{n=1}^{\infty} a_{jn} - 1 \Big| \to 0 \quad \text{as } j \to \infty,$$
since $A = [a_{jn}]$ is non-negative and regular. However, the following modifications provide a motivation for using regular summability methods in approximation processes.
Assume that, for each $n \in \mathbb{N}$, $u_n : K \to \mathbb{R}^d$ and $v_n : K \to K$ are vector-valued functions such that
$$u_n = (u_{1,n}, u_{2,n}, \dots, u_{d,n}) \quad \text{and} \quad v_n = (v_{1,n}, v_{2,n}, \dots, v_{m,n})$$
have bounded components on $K$. Using the sequences $\{u_n\}$ and $\{v_n\}$, we consider the following modifications of the vector-valued Shepard operators:
$$S^{*}_{n,\lambda}(f; x) = \big( u_{1,n}(x)\, \tilde{S}_{n,\lambda}(f_1; x), \dots, u_{d,n}(x)\, \tilde{S}_{n,\lambda}(f_d; x) \big) \tag{6}$$
and
$$S^{**}_{n,\lambda}(f; x) = \big( \tilde{S}_{n,\lambda}(f_1; v_n(x)), \dots, \tilde{S}_{n,\lambda}(f_d; v_n(x)) \big) \tag{7}$$
for $x \in K$, $n \in \mathbb{N}$, $\lambda > 0$ and $f \in C(K, \mathbb{R}^d)$. Introducing the vector-valued test functions
$$e_0 : K \to \mathbb{R}^d, \quad e_0(x) = \mathbf{1} = (1, 1, \dots, 1) \tag{8}$$
and
$$e_1 : K \to K, \quad e_1(x) = x = (x_1, x_2, \dots, x_m), \tag{9}$$
one immediately deduces the following result.
Theorem 2.
Assume $f \in C(K, \mathbb{R}^d)$ and $\lambda \ge m+1$:
(i) if $u_n \rightrightarrows e_0$ on $K$, then the sequence $\{S^{*}_{n,\lambda}(f)\}$ converges uniformly to $f$ on $K$;
(ii) if $v_n \rightrightarrows e_1$ on $K$, then the sequence $\{S^{**}_{n,\lambda}(f)\}$ converges uniformly to $f$ on $K$.
Proof. 
(i) Let $f \in C(K, \mathbb{R}^d)$ and $\lambda \ge m+1$. Then, for each $r = 1, 2, \dots, d$, we obtain from (1), (2) and (6) that
$$\big| u_{r,n}(x)\, \tilde{S}_{n,\lambda}(f_r; x) - f_r(x) \big| \le \big| u_{r,n}(x) \big| \big| \tilde{S}_{n,\lambda}(f_r; x) - f_r(x) \big| + \big| f_r(x) \big| \big| u_{r,n}(x) - 1 \big|$$
holds for every $x \in K$ and $n \in \mathbb{N}$. Since each $\{u_{r,n}\}$ is bounded and uniformly convergent to $1$ on $K$, the claim follows from Theorem 1 at once.
(ii) By a similar idea, for each $r = 1, 2, \dots, d$, from (7),
$$\big| \tilde{S}_{n,\lambda}(f_r; v_n(x)) - f_r(x) \big| \le \big| \tilde{S}_{n,\lambda}(f_r; v_n(x)) - f_r(v_n(x)) \big| + \big| f_r(v_n(x)) - f_r(x) \big|.$$
Then, we observe that
$$\big| \tilde{S}_{n,\lambda}(f_r; v_n(x)) - f_r(x) \big| \le 2\, \omega(f_r, \delta_n) + \big| f_r(v_n(x)) - f_r(x) \big|,$$
where
$$\delta_n := \begin{cases} O\big( \frac{1}{n} \big), & \text{if } \lambda > m+1, \\ O\big( \frac{\log n}{n} \big), & \text{if } \lambda = m+1 \end{cases} \tag{10}$$
(see the proof of Theorem 1 in Section 3). Here, as usual, $\omega(\cdot, \delta)$, $\delta > 0$, denotes the usual modulus of continuity, defined by
$$\omega(g, \delta) = \sup_{\|x - y\|_m \le \delta} \big| g(x) - g(y) \big|$$
for any real-valued bounded function $g$ on $K$. Since each $f_r$ is uniformly continuous on $K$ and $v_n \rightrightarrows e_1$ on $K$, the proof is complete. □
Then, the following natural problem arises:
  • Can we preserve the approximation in Theorem 2 in “some sense” when $u_n \rightrightarrows e_0$ or $v_n \rightrightarrows e_1$ fails on $K$ (in the usual sense)?
The next result partially gives an affirmative answer to this problem by using non-negative regular summability methods.
Theorem 3.
Let $A = [a_{jn}]$ be a non-negative regular matrix method and assume $u_n \xrightarrow{A} e_0$ on $K$. If $f \in C(K, \mathbb{R}^d)$ and $\lambda \ge m+1$, then the sequence $\{S^{*}_{n,\lambda}(f)\}$ is uniformly $A$-summable to $f$ on $K$.
Proof. 
Assume $f \in C(K, \mathbb{R}^d)$, $x \in K$, $n \in \mathbb{N}$ and $\lambda \ge m+1$. Then, for each $r = 1, 2, \dots, d$,
$$\Big| \sum_{n=1}^{\infty} a_{jn}\, u_{r,n}(x)\, \tilde{S}_{n,\lambda}(f_r; x) - f_r(x) \Big| \le \sum_{n=1}^{\infty} a_{jn} \big| u_{r,n}(x) \big| \big| \tilde{S}_{n,\lambda}(f_r; x) - f_r(x) \big| + \big| f_r(x) \big| \Big| \sum_{n=1}^{\infty} a_{jn} u_{r,n}(x) - 1 \Big| \le 2 M_r \sum_{n=1}^{\infty} a_{jn}\, \omega(f_r, \delta_n) + \|f_r\| \Big\| \sum_{n=1}^{\infty} a_{jn} u_{r,n} - 1 \Big\|,$$
where $\delta_n$ is given by (10) and $M_r$ is a positive constant such that $|u_{r,n}(x)| \le M_r$. Since $A = [a_{jn}]$ is regular and $u_n \xrightarrow{A} e_0$ on $K$, the right-hand side of the last inequality vanishes as $j \to \infty$; consequently, $S^{*}_{n,\lambda}(f) \xrightarrow{A} f$ on $K$. □
However, we will see from the discussion in Section 4.3 that the convergence $v_n \xrightarrow{A} e_1$ on $K$ is not sufficient for the $A$-summability of $\{S^{**}_{n,\lambda}(f)\}$. In such a case, we need strong summability. If $\{f_n\}$ is a sequence of vector-valued functions from $K$ into $\mathbb{R}^d$, we say that $f_n = (f_{1,n}, \dots, f_{d,n})$ is strongly (uniformly) $A$-summable to $f = (f_1, \dots, f_d)$ on $K$ if
$$\lim_{j \to \infty} \sum_{n=1}^{\infty} a_{jn} \big| f_{r,n}(x) - f_r(x) \big| = 0, \quad \text{uniformly in } x \in K, \tag{11}$$
holds for each $r = 1, 2, \dots, d$. We denote this convergence by $f_n \xRightarrow{A} f$ on $K$. The above definition can easily be modified when the range of $f_n$ is $K \subset \mathbb{R}^m$. We also observe the following implications on $K$:
$$\{f_n\} \text{ converges uniformly to } f \implies \{f_n\} \text{ is strongly (uniformly) } A\text{-summable to } f \implies \{f_n\} \text{ is (uniformly) } A\text{-summable to } f,$$
where $f_n$ and $f$ are vector-valued functions whose components are bounded on $K$. It is easily verified, however, that the converse implications are not always true. We then obtain the next statement.
Theorem 4.
Let $A = [a_{jn}]$ be a non-negative regular matrix method. If $v_n \xRightarrow{A} e_1$ on $K$, then, for every $f \in C(K, \mathbb{R}^d)$ and $\lambda \ge m+1$, we have $S^{**}_{n,\lambda}(f) \xRightarrow{A} f$ on $K$, which also implies that the sequence $\{S^{**}_{n,\lambda}(f)\}$ is uniformly $A$-summable to $f$ on $K$.
Proof. 
Assume $f \in C(K, \mathbb{R}^d)$. The uniform continuity of each $f_r$ ($r = 1, 2, \dots, d$) on $K$ implies that every component $f_r$ satisfies the following property (see [27]): for every $\varepsilon > 0$, there is a constant $M_\varepsilon > 0$ such that
$$\big| f_r(y) - f_r(x) \big| \le M_\varepsilon \|y - x\|_m + \varepsilon$$
holds for all $x, y \in K$. Hence, from (7), (10) and (11), we deduce that, for each $r = 1, 2, \dots, d$,
$$\sum_{n=1}^{\infty} a_{jn} \big| \tilde{S}_{n,\lambda}(f_r; v_n(x)) - f_r(x) \big| \le 2 \sum_{n=1}^{\infty} a_{jn}\, \omega(f_r, \delta_n) + \sum_{n=1}^{\infty} a_{jn} \big( M_\varepsilon \|v_n(x) - x\|_m + \varepsilon \big),$$
which implies
$$\sum_{n=1}^{\infty} a_{jn} \big| \tilde{S}_{n,\lambda}(f_r; v_n(x)) - f_r(x) \big| \le 2 \sum_{n=1}^{\infty} a_{jn}\, \omega(f_r, \delta_n) + M_\varepsilon \sum_{i=1}^{m} \sum_{n=1}^{\infty} a_{jn} \big| v_{i,n}(x) - x_i \big| + \varepsilon \sum_{n=1}^{\infty} a_{jn}.$$
Letting $j \to \infty$, since $A$ is regular and $v_n \xRightarrow{A} e_1$ on $K$, the right-hand side of the last inequality becomes arbitrarily small (uniformly in $x$) for any $r = 1, 2, \dots, d$, because $\varepsilon > 0$ is arbitrary. This means that the sequence $\{S^{**}_{n,\lambda}(f)\}$ is strongly (uniformly) $A$-summable to $f$ on $K$. □

3. Auxiliary Results and Demonstration of Theorem 1

To prove Theorem 1, we need the following lemmas.
Lemma 1.
Let $n \in \mathbb{N}$ and $x \in K$ be such that $x \ne x_{k,n}$ for all $k \in \Omega_n$. Then, for each $\lambda > 0$,
$$\Big( \sum_{k=0}^{n} \|x - x_{k,n}\|_m^{-\lambda} \Big)^{-1} = O\big( n^{-\lambda} \big)$$
holds.
Proof. 
Define the multi-index $k' = (k'_1, k'_2, \dots, k'_m) \in \Omega_n$ by
$$\|x - x_{k',n}\|_m = \min_{k \in \Omega_n} \|x - x_{k,n}\|_m. \tag{12}$$
Then, we observe that
$$\|x - x_{k',n}\|_m^{\lambda} = O\big( n^{-\lambda} \big),$$
since every coordinate of $x$ lies within $1/(2n)$ of some grid coordinate. Using this and the inequality
$$\Big( \sum_{k=0}^{n} \|x - x_{k,n}\|_m^{-\lambda} \Big)^{-1} \le \|x - x_{k',n}\|_m^{\lambda},$$
the assertion follows. □
Lemma 2.
Let $m \in \mathbb{N}$ and $n \ge 2$ be given. Then, for every $j = 1, 2, \dots, m$ and $\lambda \ge m+1$, we have
$$\sum_{k_1, k_2, \dots, k_j = 2}^{n} \frac{1}{\big( k_1^2 + k_2^2 + \cdots + k_j^2 \big)^{(\lambda-1)/2}} = \begin{cases} O(1), & \text{if } \lambda > m+1, \\ O(\log n), & \text{if } \lambda = m+1. \end{cases}$$
Proof. 
If $m = 1$ or $m = 2$, the proof is clear, as is the case $j = 1$ for $m \ge 3$. So let $m \ge 3$ and assume first that $j \in \{2, 3, \dots, m-1\}$. Then, comparing the sum with an integral and passing to spherical coordinates, we obtain
$$\sum_{k_1, \dots, k_j = 2}^{n} \frac{1}{\big( k_1^2 + \cdots + k_j^2 \big)^{(\lambda-1)/2}} \le \underbrace{\int_1^n \cdots \int_1^n}_{j \text{ times}} \frac{dx_1 \cdots dx_j}{\big( x_1^2 + \cdots + x_j^2 \big)^{(\lambda-1)/2}} \le \int_1^{\sqrt{j}\, n} \int_0^{2\pi} \underbrace{\int_0^{\pi} \cdots \int_0^{\pi}}_{(j-2) \text{ times}} \frac{r^{j-1} \sin^{j-2}\theta_1 \sin^{j-3}\theta_2 \cdots \sin\theta_{j-2}}{r^{\lambda-1}}\, d\theta_1\, d\theta_2 \cdots d\theta_{j-1}\, dr \le \int_1^{\sqrt{j}\, n} \int_0^{2\pi} \underbrace{\int_0^{\pi} \cdots \int_0^{\pi}}_{(j-2) \text{ times}} \frac{1}{r^{\lambda-j}}\, d\theta_1\, d\theta_2 \cdots d\theta_{j-1}\, dr = O(1),$$
since $\lambda \ge m+1 > j+1$. Finally, assume that $j = m$. Using a similar idea, we may write
$$\sum_{k_1, \dots, k_m = 2}^{n} \frac{1}{\big( k_1^2 + \cdots + k_m^2 \big)^{(\lambda-1)/2}} \le \int_1^{\sqrt{m}\, n} \int_0^{2\pi} \underbrace{\int_0^{\pi} \cdots \int_0^{\pi}}_{(m-2) \text{ times}} \frac{1}{r^{\lambda-m}}\, d\theta_1\, d\theta_2 \cdots d\theta_{m-1}\, dr = \begin{cases} O(1), & \text{if } \lambda > m+1, \\ O(\log n), & \text{if } \lambda = m+1, \end{cases}$$
as stated. □
Lemma 2 remains valid if all summations start from $1$; that is, for every $j = 1, 2, \dots, m$ and $\lambda \ge m+1$,
$$\sum_{k_1, k_2, \dots, k_j = 1}^{n} \frac{1}{\big( k_1^2 + k_2^2 + \cdots + k_j^2 \big)^{(\lambda-1)/2}} = \begin{cases} O(1), & \text{if } \lambda > m+1, \\ O(\log n), & \text{if } \lambda = m+1 \end{cases} \tag{13}$$
holds. Indeed, splitting the sum according to which indices equal $1$, we observe that
$$\sum_{k_1, k_2, \dots, k_j = 1}^{n} \frac{1}{\big( k_1^2 + \cdots + k_j^2 \big)^{(\lambda-1)/2}} = \frac{1}{j^{(\lambda-1)/2}} + \binom{j}{1} \sum_{k_1 = 2}^{n} \frac{1}{\big( k_1^2 + j - 1 \big)^{(\lambda-1)/2}} + \binom{j}{2} \sum_{k_1, k_2 = 2}^{n} \frac{1}{\big( k_1^2 + k_2^2 + j - 2 \big)^{(\lambda-1)/2}} + \cdots + \binom{j}{j-1} \sum_{k_1, \dots, k_{j-1} = 2}^{n} \frac{1}{\big( k_1^2 + \cdots + k_{j-1}^2 + 1 \big)^{(\lambda-1)/2}} + \binom{j}{j} \sum_{k_1, \dots, k_j = 2}^{n} \frac{1}{\big( k_1^2 + \cdots + k_j^2 \big)^{(\lambda-1)/2}} \le \frac{1}{j^{(\lambda-1)/2}} + \sum_{t=1}^{j} \binom{j}{t} \sum_{k_1, \dots, k_t = 2}^{n} \frac{1}{\big( k_1^2 + \cdots + k_t^2 \big)^{(\lambda-1)/2}},$$
which implies (13).
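The two growth regimes in Lemma 2 and (13) can be observed numerically. The sketch below is our own check for the case $m = j = 2$: with $\lambda = m + 1 = 3$ the lattice sum grows logarithmically (doubling $n$ adds roughly $(\pi/2)\log 2 \approx 1.09$), while with $\lambda = 4 > m + 1$ the sum is essentially constant.

```python
import numpy as np

def lattice_sum(n, lam):
    """sum_{k1,k2=1}^n (k1^2 + k2^2)^(-(lam-1)/2), i.e. the case m = j = 2."""
    k = np.arange(1, n + 1, dtype=float)
    q = k[:, None] ** 2 + k[None, :] ** 2
    return (q ** (-(lam - 1) / 2.0)).sum()

# lam = m + 1 = 3: logarithmic growth (the O(log n) case).
grow = lattice_sum(2000, 3) - lattice_sum(1000, 3)
# lam = 4 > m + 1: bounded sum (the O(1) case); doubling n changes it very little.
flat = lattice_sum(2000, 4) - lattice_sum(1000, 4)
print(grow, flat)
```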
Now, for any fixed $x \in K$, consider the real-valued function $\varphi_x$ on $K$ defined by
$$\varphi_x(y) := \|y - x\|_m, \quad y \in K.$$
The following lemma will be useful.
Lemma 3.
For every $\lambda \ge m+1$,
$$\tilde{S}_{n,\lambda}(\varphi_x; x) \rightrightarrows 0 \quad \text{on } K,$$
where $\tilde{S}_{n,\lambda}$ is given by (2).
Proof. 
It follows from (2) that
$$\tilde{S}_{n,\lambda}(\varphi_x; x) = \frac{\sum_{k=0}^{n} \|x - x_{k,n}\|_m^{1-\lambda}}{\sum_{k=0}^{n} \|x - x_{k,n}\|_m^{-\lambda}}.$$
Using the multi-index $k' = (k'_1, k'_2, \dots, k'_m) \in \Omega_n$ given by (12), we obtain from Lemma 1 that
$$\tilde{S}_{n,\lambda}(\varphi_x; x) \le \|x - x_{k',n}\|_m + O\big( n^{-\lambda} \big) \sum_{k \in \Omega_n \setminus \{k'\}} \|x - x_{k,n}\|_m^{1-\lambda}. \tag{14}$$
Observe that the multi-index set $\Omega_n \setminus \{k'\}$ contains $(n+1)^m - 1$ elements. Now, for $i_1, i_2, \dots, i_m \in \{1, 2, \dots, m\}$, define the following (disjoint) multi-index subsets of $\Omega_n$:
$$\begin{aligned} \Omega_n(i_1) &= \{ k \in \Omega_n : k_{i_1} \ne k'_{i_1},\ k_j = k'_j \text{ for } j \ne i_1 \}, \\ \Omega_n(i_1, i_2) &= \{ k \in \Omega_n : k_{i_1} \ne k'_{i_1},\ k_{i_2} \ne k'_{i_2},\ k_j = k'_j \text{ for } j \ne i_1, i_2 \}, \\ &\;\;\vdots \\ \Omega_n(i_1, i_2, \dots, i_{m-1}) &= \{ k \in \Omega_n : k_{i_1} \ne k'_{i_1}, \dots, k_{i_{m-1}} \ne k'_{i_{m-1}},\ k_{i_m} = k'_{i_m} \}, \\ \Omega_n(i_1, i_2, \dots, i_{m-1}, i_m) &= \{ k \in \Omega_n : k_{i_1} \ne k'_{i_1}, \dots, k_{i_{m-1}} \ne k'_{i_{m-1}},\ k_{i_m} \ne k'_{i_m} \}. \end{aligned}$$
Since $i_1, i_2, \dots, i_m \in \{1, 2, \dots, m\}$, we have in total $\binom{m}{1}$ sets of the form $\Omega_n(i_1)$; $\binom{m}{2}$ sets of the form $\Omega_n(i_1, i_2)$ with $i_1 < i_2$; ...; $\binom{m}{m}$ sets of the form $\Omega_n(i_1, \dots, i_m)$ with $i_1 < i_2 < \cdots < i_m$. Also, each set of such a form satisfies
$$\# \Omega_n(i_1) = n, \quad \# \Omega_n(i_1, i_2) = n^2, \quad \dots, \quad \# \Omega_n(i_1, \dots, i_m) = n^m,$$
so that all such sets together contain $(n+1)^m - 1$ elements. We can therefore write $\Omega_n \setminus \{k'\}$ as the union of all the disjoint sets of the above forms, that is,
$$\Omega_n \setminus \{k'\} = \bigcup_{i_1 = 1}^{m} \Omega_n(i_1) \cup \bigcup_{\substack{i_1, i_2 = 1 \\ i_1 < i_2}}^{m} \Omega_n(i_1, i_2) \cup \cdots \cup \bigcup_{\substack{i_1, \dots, i_m = 1 \\ i_1 < \cdots < i_m}}^{m} \Omega_n(i_1, \dots, i_m).$$
Then, we may write from (14) that
$$\tilde{S}_{n,\lambda}(\varphi_x; x) \le \|x - x_{k',n}\|_m + O\big( n^{-\lambda} \big) \sum_{i_1 = 1}^{m} \sum_{k \in \Omega_n(i_1)} \Big| x_{i_1} - \frac{k_{i_1}}{n} \Big|^{1-\lambda} + O\big( n^{-\lambda} \big) \sum_{\substack{i_1, i_2 = 1 \\ i_1 < i_2}}^{m} \sum_{k \in \Omega_n(i_1, i_2)} \Big( \Big( x_{i_1} - \frac{k_{i_1}}{n} \Big)^2 + \Big( x_{i_2} - \frac{k_{i_2}}{n} \Big)^2 \Big)^{(1-\lambda)/2} + \cdots + O\big( n^{-\lambda} \big) \sum_{\substack{i_1, \dots, i_m = 1 \\ i_1 < \cdots < i_m}}^{m} \sum_{k \in \Omega_n(i_1, \dots, i_m)} \|x - x_{k,n}\|_m^{1-\lambda}.$$
We now bound the sums on the right-hand side. We see that
$$\sum_{k \in \Omega_n(i_1)} \Big| x_{i_1} - \frac{k_{i_1}}{n} \Big|^{1-\lambda} \le \sum_{k \in \Omega_n(i_1)} \Big( \frac{|k_{i_1} - k'_{i_1}|}{n} - \frac{1}{2n} \Big)^{1-\lambda} = O\Big( \Big( \frac{1}{n} \Big)^{1-\lambda} \Big) \sum_{k_1 = 1}^{n} \frac{1}{k_1^{\lambda-1}} = O\Big( \Big( \frac{1}{n} \Big)^{1-\lambda} \Big).$$
We also obtain
$$\sum_{k \in \Omega_n(i_1, i_2)} \Big( \Big( x_{i_1} - \frac{k_{i_1}}{n} \Big)^2 + \Big( x_{i_2} - \frac{k_{i_2}}{n} \Big)^2 \Big)^{(1-\lambda)/2} \le \sum_{k \in \Omega_n(i_1, i_2)} \Big( \Big( \frac{|k_{i_1} - k'_{i_1}|}{n} - \frac{1}{2n} \Big)^2 + \Big( \frac{|k_{i_2} - k'_{i_2}|}{n} - \frac{1}{2n} \Big)^2 \Big)^{(1-\lambda)/2} = O\Big( \Big( \frac{1}{n} \Big)^{1-\lambda} \Big) \sum_{k_1, k_2 = 1}^{n} \frac{1}{\big( k_1^2 + k_2^2 \big)^{(\lambda-1)/2}}.$$
Working similarly, we obtain
$$\sum_{k \in \Omega_n(i_1, \dots, i_m)} \|x - x_{k,n}\|_m^{1-\lambda} = O\Big( \Big( \frac{1}{n} \Big)^{1-\lambda} \Big) \sum_{k_1, k_2, \dots, k_m = 1}^{n} \frac{1}{\big( k_1^2 + k_2^2 + \cdots + k_m^2 \big)^{(\lambda-1)/2}}.$$
Hence, we may write from (14) that
$$\tilde{S}_{n,\lambda}(\varphi_x; x) = O\Big( \frac{1}{n} \Big) + O\Big( \frac{1}{n} \Big) \sum_{j=2}^{m} \binom{m}{j} \sum_{k_1, k_2, \dots, k_j = 1}^{n} \frac{1}{\big( k_1^2 + \cdots + k_j^2 \big)^{(\lambda-1)/2}}. \tag{15}$$
Therefore, using (13) (see also Lemma 2), we obtain from (15) that
$$\tilde{S}_{n,\lambda}(\varphi_x; x) = \begin{cases} O\big( \frac{1}{n} \big), & \text{if } \lambda > m+1, \\ O\big( \frac{\log n}{n} \big), & \text{if } \lambda = m+1 \end{cases} \tag{16}$$
holds for each $n \ge 2$. Letting $n \to \infty$ on both sides of (16), the proof is complete. □
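The rate (16) can be checked numerically in the simplest setting $m = 1$. The sketch below is ours (the grid over $x$ is a crude stand-in for the supremum): it estimates $\sup_x \tilde{S}_{n,\lambda}(\varphi_x; x)$ for $\lambda = 3 > m + 1$ and shows it shrinking as $n$ grows, consistent with $O(1/n)$.

```python
import numpy as np

def sup_phi(n, lam, grid=400):
    """Estimate sup_x S~_{n,lam}(phi_x; x) on [0,1] (the case m = 1)."""
    nodes = np.arange(n + 1) / n
    best = 0.0
    for x in np.linspace(0.0, 1.0, grid):
        d = np.abs(x - nodes)
        if d.min() < 1e-12:
            continue                     # at a sample point the value is 0
        best = max(best, (d ** (1 - lam)).sum() / (d ** (-lam)).sum())
    return best

d20, d40 = sup_phi(20, 3), sup_phi(40, 3)
print(d20, d40)   # the second value is roughly half the first
```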
Now we can prove Theorem 1.
Proof of Theorem 1.
Let $x \in K$ and $f = (f_1, f_2, \dots, f_d) \in C(K, \mathbb{R}^d)$ be fixed. Then, for each $r = 1, 2, \dots, d$, we deduce from (2) that
$$\big| \tilde{S}_{n,\lambda}(f_r; x) - f_r(x) \big| = \bigg| \frac{\sum_{k=0}^{n} \|x - x_{k,n}\|_m^{-\lambda}\, f_r(x_{k,n})}{\sum_{k=0}^{n} \|x - x_{k,n}\|_m^{-\lambda}} - f_r(x) \bigg| \le \frac{\sum_{k=0}^{n} \|x - x_{k,n}\|_m^{-\lambda} \big| f_r(x_{k,n}) - f_r(x) \big|}{\sum_{k=0}^{n} \|x - x_{k,n}\|_m^{-\lambda}} \le \frac{\sum_{k=0}^{n} \|x - x_{k,n}\|_m^{-\lambda}\, \omega\big( f_r, \|x - x_{k,n}\|_m \big)}{\sum_{k=0}^{n} \|x - x_{k,n}\|_m^{-\lambda}}.$$
Hence, for any $\delta > 0$,
$$\big| \tilde{S}_{n,\lambda}(f_r; x) - f_r(x) \big| \le \omega(f_r, \delta)\, \frac{\sum_{k=0}^{n} \|x - x_{k,n}\|_m^{-\lambda} \Big( 1 + \frac{\|x - x_{k,n}\|_m}{\delta} \Big)}{\sum_{k=0}^{n} \|x - x_{k,n}\|_m^{-\lambda}} = \omega(f_r, \delta) \Big( 1 + \frac{1}{\delta}\, \tilde{S}_{n,\lambda}(\varphi_x; x) \Big)$$
holds for each $r = 1, 2, \dots, d$. Now, taking the supremum over $x \in K$, we obtain
$$\sup_{x \in K} \big| \tilde{S}_{n,\lambda}(f_r; x) - f_r(x) \big| \le \omega(f_r, \delta) \Big( 1 + \frac{1}{\delta} \sup_{x \in K} \tilde{S}_{n,\lambda}(\varphi_x; x) \Big)$$
for any $\delta > 0$ and each $r = 1, 2, \dots, d$. Choosing $\delta = \delta_n := \sup_{x \in K} \tilde{S}_{n,\lambda}(\varphi_x; x)$, which has the order given in (10), we observe that
$$\big\| \tilde{S}_{n,\lambda}(f_r) - f_r \big\| \le 2\, \omega(f_r, \delta_n) \tag{17}$$
for each $r = 1, 2, \dots, d$. Then, letting $n \to \infty$ in (17), the proof follows from Lemma 3 and the properties of the modulus of continuity. □

4. Applications and Special Cases

In this section, we first apply Theorem 1. Secondly, we compute the corresponding approximation errors. Then, to point out the influence of regular summability methods in approximation, we consider a modification of vector-valued Shepard operators.

4.1. An Application of Theorem 1

As a first application, take $d = 3$ and $m = 2$, and consider the functions $f$ and $g$ on $K = [0,1] \times [0,1]$ given by
$$f(x) = \big( f_1(x), f_2(x), f_3(x) \big), \quad g(x) = \big( g_1(x), g_2(x), g_3(x) \big),$$
where, for $x = (x, y) \in K$,
$$f_1(x) = 4 + \big( 3 + \cos(2\pi y) \big) \sin(2\pi x), \quad f_2(x) = 4 + \big( 3 + \cos(2\pi y) \big) \cos(2\pi x), \quad f_3(x) = 4 + \sin(2\pi y) \tag{18}$$
and
$$g_1(x) = 8 + \big( 3 + \cos(2\pi y) \big) \cos(2\pi x), \quad g_2(x) = 3 + \sin(2\pi y), \quad g_3(x) = 4 + \big( 3 + \cos(2\pi y) \big) \sin(2\pi x). \tag{19}$$
Then, by Theorem 1, we have, for $\lambda \ge 3$,
$$S_{n,\lambda}(f) \rightrightarrows f \quad \text{on } K \tag{20}$$
and
$$S_{n,\lambda}(g) \rightrightarrows g \quad \text{on } K. \tag{21}$$
If we regard $f$ and $g$ as three-dimensional surfaces parametrized by $x$ and $y$, we can produce their three-dimensional parametric plots using Mathematica, and similarly the corresponding approximations in (20) and (21) by vector-valued Shepard operators. Such parametric plots are shown in Figure 1 for the values $n = 6, 11, 17$ and $\lambda = 5$.

4.2. Approximation Errors in Theorem 1

To support the above application numerically, we now compute the componentwise errors as follows. For $\lambda = 5$, consider the pointwise errors $e_j^{[r]}$ given by
$$e_j^{[r]} := \big| \tilde{S}_{n,5}(f_r; t_j) - f_r(t_j) \big|, \quad r = 1, 2, 3,$$
at the $n_e = 30 \times 30$ points $t_j$ of a regular mesh of $K$, for $n = 20, 50, 80$. Here, we consider the function $f = (f_1, f_2, f_3)$ defined by (18); for each $r = 1, 2, 3$, $e_j^{[r]}$ represents the componentwise approximation error at the $j$-th mesh point. In Table 1, Table 2 and Table 3, we evaluate the componentwise maximum error $e_{\max}^{[r]}$, mean error $e_{\mathrm{mean}}^{[r]}$ and mean squared error $e_{\mathrm{MS}}^{[r]}$, defined respectively by
$$e_{\max}^{[r]} := \max_{1 \le j \le n_e} e_j^{[r]}, \quad e_{\mathrm{mean}}^{[r]} := \frac{1}{n_e} \sum_{j=1}^{n_e} e_j^{[r]} \quad \text{and} \quad e_{\mathrm{MS}}^{[r]} := \sqrt{ \frac{1}{n_e} \sum_{j=1}^{n_e} \big( e_j^{[r]} \big)^2 }$$
for $r = 1, 2, 3$ and $n = 20, 50, 80$ (see [6] for such pointwise errors).
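These error metrics can be reproduced in a few lines of Python. The sketch below is our own: the paper does not spell out its exact mesh, so the uniform $30 \times 30$ grid, the smaller values of $n$ (chosen to keep the run fast), and our reading of the term grouping in $f_1$ of (18) are all assumptions — the printed numbers need not match the tables exactly.

```python
import numpy as np
from itertools import product

def shepard_2d(g, n, lam, X):
    """Evaluate the real-valued Shepard operator S~_{n,lam}(g; x)
    at each row x of X, on the unit square [0,1]^2."""
    nodes = np.array(list(product(np.arange(n + 1) / n, repeat=2)))
    gvals = np.array([g(p) for p in nodes])
    out = np.empty(len(X))
    for i, x in enumerate(np.asarray(X, dtype=float)):
        d = np.linalg.norm(nodes - x, axis=1)
        hit = np.isclose(d, 0.0)
        if hit.any():                    # interpolation at a sample point
            out[i] = gvals[hit][0]
        else:
            w = d ** (-lam)
            out[i] = w @ gvals / w.sum()
    return out

# First component of (18), evaluated on an (assumed) uniform 30 x 30 mesh.
f1 = lambda p: 4 + (3 + np.cos(2 * np.pi * p[1])) * np.sin(2 * np.pi * p[0])
mesh = np.array(list(product(np.linspace(0.0, 1.0, 30), repeat=2)))
exact = np.array([f1(p) for p in mesh])

errors = {}
for n in (10, 20):
    e = np.abs(shepard_2d(f1, n, 5, mesh) - exact)
    errors[n] = (e.max(), e.mean(), np.sqrt((e ** 2).mean()))
    print(n, errors[n])
```

Note that, by Jensen's inequality, the mean error never exceeds the mean squared (RMS) error, which in turn never exceeds the maximum error — the same ordering visible in the tables.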

4.3. Effects of Regular Summability Methods

In this subsection, we give some applications of Theorems 3 and 4.
Firstly, in the modification (6), using the test function $e_0$ in (8), define the vector-valued functions $u_n : K \to \mathbb{R}^d$ by
$$u_n(x) := \begin{cases} 2 e_0(x), & \text{for } n \text{ even}, \\ 0, & \text{for } n \text{ odd}, \end{cases}$$
and consider the Cesàro method $C_1 = [c_{jn}]$ given by
$$c_{jn} = \begin{cases} \frac{1}{j}, & \text{if } n = 1, 2, \dots, j, \\ 0, & \text{otherwise}. \end{cases}$$
Then, although $\{u_n\}$ does not converge in the usual sense, we easily observe that $u_n \xrightarrow{C_1} e_0$ on $K$. Furthermore, for each $f \in C(K, \mathbb{R}^d)$ and $r = 1, 2, \dots, d$, we obtain
$$\sum_{n=1}^{\infty} c_{jn}\, u_{r,n}\, \tilde{S}_{n,\lambda}(f_r) = \frac{1}{j} \sum_{n=1}^{j} u_{r,n}\, \tilde{S}_{n,\lambda}(f_r) = \frac{2}{j} \sum_{n=1}^{[j/2]} \tilde{S}_{2n,\lambda}(f_r),$$
where $[\,\cdot\,]$ denotes the floor function. Then, from the regularity of $C_1$ and Theorem 1, the right-hand side of the last equality converges uniformly to $f_r$ (for each $r = 1, 2, \dots, d$) as $j \to \infty$; hence, $S^{*}_{n,\lambda}(f) \xrightarrow{C_1} f$ on $K$.
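The effect can be simulated directly. In the sketch below (ours, not from the paper), `s` is a stand-in for a quantity like $\tilde{S}_{n,\lambda}(f_r)$ at a fixed point, converging to $1$; multiplied by the oscillating factor $u_n \in \{0, 2\}$, the product has no limit, yet its Cesàro means converge to $1$.

```python
import numpy as np

N = 100_000
n = np.arange(1, N + 1)
u = np.where(n % 2 == 0, 2.0, 0.0)   # u_n = 2 for even n, 0 for odd n
s = 1.0 + 1.0 / n                    # a stand-in sequence converging to 1
a = u * s                            # oscillates between ~0 and ~2: no limit
cesaro = np.cumsum(a) / n            # C1 (Cesaro) means of the product
print(cesaro[-1])                    # close to 1
```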
Secondly, in the modification (7), using the test function $e_1$ in (9), we consider the following function sequences:
$$v'_n(x) := \begin{cases} 2 e_1(x), & \text{for } x \in [0, \frac{1}{2}]^m \text{ and } n \text{ even}, \\ 0, & \text{for } x \in [0, \frac{1}{2}]^m \text{ and } n \text{ odd}, \\ e_1(x), & \text{otherwise}, \end{cases}$$
and
$$v''_n(x) := \begin{cases} 0, & \text{for } n = k^2 \ (k = 1, 2, \dots), \\ e_1(x), & \text{otherwise}. \end{cases}$$
Using again the Cesàro method, we obtain $v'_n \xrightarrow{C_1} e_1$ on $K$ and $v''_n \xRightarrow{C_1} e_1$ on $K$. Indeed, for the sequence $\{v'_n\}$, the claim is clear when $x \notin [0, \frac{1}{2}]^m$; and if $x \in [0, \frac{1}{2}]^m$, then
$$\sum_{n=1}^{\infty} c_{jn}\, v'_n(x) = \frac{2 [j/2]}{j}\, e_1(x) \to x \quad \text{as } j \to \infty.$$
On the other hand, for the sequence $\{v''_n\}$, we immediately see that, for every $x \in K$ and $r = 1, 2, \dots, m$,
$$\sum_{n=1}^{\infty} c_{jn} \big| v''_{r,n}(x) - x_r \big| = \frac{1}{j} \sum_{\substack{n=1 \\ n = k^2}}^{j} |x_r| \le \frac{\big[ \sqrt{j} \big]}{j} \to 0 \quad \text{as } j \to \infty,$$
which gives $v''_n \xRightarrow{C_1} e_1$ on $K$.
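The key quantity in the computation above is the density of perfect squares among the first $j$ indices, $[\sqrt{j}]/j$. A tiny check (ours) confirms that it tends to $0$:

```python
import numpy as np

# Fraction of indices n <= j at which the modified sequence is set to 0
# (i.e. n = k^2); since |x_r| <= 1 on the unit hypercube, this fraction
# bounds the Cesaro sum above.
for j in (10, 100, 10_000, 1_000_000):
    count = int(np.floor(np.sqrt(j)))    # number of squares k^2 <= j
    print(j, count / j)
```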
Now, in (7), we take the sequences $\{v'_n\}$ and $\{v''_n\}$, respectively, in place of $\{v_n\}$.
If $S^{**}_{n,\lambda}$ is constructed with the sequence $\{v'_n\}$, then we observe that the approximation $S^{**}_{n,\lambda}(f) \xrightarrow{C_1} f$ on $K$ does not hold for some $f \in C(K, \mathbb{R}^d)$. For example, considering the function $e_2(x) = (x_1^2, x_2^2, \dots, x_d^2)$, we obtain, for each $x \in [0, \frac{1}{2}]^m$, that
$$\sum_{n=1}^{\infty} c_{jn}\, S^{**}_{n,\lambda}(e_2; x) = \frac{1}{j} \sum_{n=1}^{j} S_{n,\lambda}(e_2; v'_n(x)) = \frac{1}{j} \sum_{n=1}^{[j/2]} S_{2n,\lambda}(e_2; 2x) \to \frac{e_2(2x)}{2} \quad \text{as } j \to \infty$$
(the terms with odd $n$ vanish, since $S_{n,\lambda}(e_2; 0) = e_2(0) = 0$ by interpolation). Since
$$\frac{e_2(2x)}{2} = \big( 2 x_1^2, 2 x_2^2, \dots, 2 x_d^2 \big) \ne e_2(x)$$
in general for $x \in [0, \frac{1}{2}]^m$, one sees that $S^{**}_{n,\lambda}(e_2; x) \not\xrightarrow{C_1} e_2(x)$ for such $x$.
If $S^{**}_{n,\lambda}$ is constructed with the sequence $\{v''_n\}$, then, for each $f \in C(K, \mathbb{R}^d)$, $x \in K$ and $r = 1, 2, \dots, d$,
$$\sum_{n=1}^{\infty} c_{jn} \big| \tilde{S}_{n,\lambda}(f_r; v''_n(x)) - f_r(x) \big| = \frac{1}{j} \sum_{\substack{n=1 \\ n = k^2}}^{j} \big| f_r(0) - f_r(x) \big| + \frac{1}{j} \sum_{\substack{n=1 \\ n \ne k^2}}^{j} \big| \tilde{S}_{n,\lambda}(f_r; x) - f_r(x) \big| \le \frac{2 \|f_r\| \big[ \sqrt{j} \big]}{j} + \frac{1}{j} \sum_{n=1}^{j} \big\| \tilde{S}_{n,\lambda}(f_r) - f_r \big\|$$
holds (here $\tilde{S}_{n,\lambda}(f_r; 0) = f_r(0)$ by interpolation). Then, from the regularity of $C_1$ and Theorem 1, the right-hand side of the above inequality vanishes uniformly as $j \to \infty$. Hence, the sequence $\{S^{**}_{n,\lambda}(f)\}$ is strongly (uniformly) $C_1$-summable to $f$ on $K$ for each $f \in C(K, \mathbb{R}^d)$.

5. Concluding Remarks

In this paper, we established a methodology for approximating continuous vector-valued functions on the unit hypercube by Shepard operators. To quantify the approximation error, we employed standard error metrics: the maximum, mean, and mean squared errors. Furthermore, we demonstrated both theoretical and practical improvements of this approximation obtained through regular summability methods.
In future work, we plan to consider novel vector-valued Shepard processes of Kantorovich type for approximating functions that are not necessarily continuous (e.g., integrable functions); such operators are of interest in applications to image processing and sampling theory (see [28,29,30]).

Author Contributions

This material is the result of the joint efforts of O.D. and B.D.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data sharing not applicable.

Acknowledgments

The authors would like to thank the anonymous reviewers for their valuable comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shepard, D. A two-dimensional interpolation function for irregularly-spaced data. In Proceedings of the 23rd ACM National Conference, New York, NY, USA, 27–29 August 1968; ACM: New York, NY, USA, 1968; pp. 517–524. [Google Scholar]
  2. Barnhill, R.; Dube, R.; Little, F. Properties of Shepard’s surfaces. Rocky Mt. J. Math. 1983, 13, 365–382. [Google Scholar] [CrossRef]
  3. Barnhill, R.; Ou, H. Surfaces defined on surfaces. Comput. Aided Geom. Des. 1990, 7, 323–336. [Google Scholar] [CrossRef]
  4. Cavoretto, R.; De Rossi, A.; Dell’Accio, F.; Di Tommaso, F. A 3D efficient procedure for Shepard interpolants on tetrahedra. In Numerical Computations: Theory and Algorithms; Sergeyev, Y.D., Kvasov, D.E., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 27–34. [Google Scholar] [CrossRef]
  5. Cavoretto, R.; De Rossi, A.; Dell’Accio, F.; Di Tommaso, F. An efficient trivariate algorithm for tetrahedral Shepard interpolation. J. Sci. Comput. 2020, 82, 57. [Google Scholar] [CrossRef]
  6. Dell’Accio, F.; Di Tommaso, F.; Hormann, K. On the approximation order of triangular Shepard interpolation. IMA J. Numer. Anal. 2016, 36, 359–379. [Google Scholar] [CrossRef]
  7. Dell’Accio, F.; Di Tommaso, F.; Nouisser, O.; Siar, N. Rational Hermite interpolation on six-tuples and scattered data. Appl. Math. Comput. 2020, 386, 125452. [Google Scholar] [CrossRef]
  8. Farwig, R. Rate of convergence of Shepard’s global interpolation formula. Math. Comput. 1986, 46, 577–590. [Google Scholar] [CrossRef]
  9. Criscuolo, G.; Mastroianni, G. Estimates of the Shepard interpolatory procedure. Acta Math. Hung. 1993, 61, 79–91. [Google Scholar] [CrossRef]
  10. Della Vecchia, B. Direct and converse results by rational operators. Constr. Approx. 1996, 12, 271. [Google Scholar] [CrossRef]
  11. Della Vecchia, B.; Mastroianni, G. Pointwise simultaneous approximation by rational operators. J. Approx. Theory 1991, 65, 140–150. [Google Scholar] [CrossRef]
  12. Szabados, J. Direct and converse approximation theorems for the Shepard operator. Approx. Theory Its Appl. 1991, 7, 63–76. [Google Scholar] [CrossRef]
  13. Yu, D. On weighted approximation by rational operators for functions with singularities. Acta Math. Hung. 2012, 136, 56–75. [Google Scholar] [CrossRef]
  14. Yu, D.; Zhou, S. Approximation by rational operators in Lp spaces. Math. Nachrichten 2009, 282, 1600–1618. [Google Scholar] [CrossRef]
  15. Zhou, X. The saturation class of Shepard operators. Acta Math. Hung. 1998, 80, 293–310. [Google Scholar] [CrossRef]
  16. Duman, O.; Della Vecchia, B. Complex Shepard operators and their summability. Results Math. 2021, 76, 214. [Google Scholar] [CrossRef]
  17. Duman, O.; Della Vecchia, B. Approximation to integrable functions by modified complex Shepard operators. J. Math. Anal. Appl. 2022, 512, 126161. [Google Scholar] [CrossRef]
  18. Boos, J.; Cass, P. Classical and Modern Methods in Summability; Oxford Mathematical Monographs, Oxford University Press: Oxford, UK, 2000; p. 600. [Google Scholar]
  19. Hardy, G.H. Divergent Series; Clarendon Press: Oxford, UK, 1949. [Google Scholar]
  20. Atlihan, Ö.G.; Orhan, C. Matrix summability and positive linear operators. Positivity 2007, 11, 387–398. [Google Scholar] [CrossRef]
  21. Atlihan, Ö.G.; Orhan, C. Summation process of positive linear operators. Comput. Math. Appl. 2008, 56, 1188–1195. [Google Scholar] [CrossRef]
  22. Gökçer, T.Y.; Duman, O. Regular summability methods in the approximation by max-min operators. Fuzzy Sets Syst. 2021, 426, 106–120. [Google Scholar] [CrossRef]
  23. King, J.P.; Swetits, J.J. Positive linear operators and summability. J. Aust. Math. Soc. 1970, 11, 281–290. [Google Scholar] [CrossRef]
  24. Mohapatra, R.N. Quantitative results on almost convergence of a sequence of positive linear operators. J. Approx. Theory 1977, 20, 239–250. [Google Scholar] [CrossRef]
  25. Nishishiraho, T. Convergence rates of summation processes of convolution type operators. J. Nonlinear Convex Anal. 2010, 11, 137–156. [Google Scholar]
  26. Swetits, J. On summability and positive linear operators. J. Approx. Theory 1979, 25, 186–188. [Google Scholar] [CrossRef]
  27. Vanderbei, R.J. Uniform Continuity Is Almost Lipschitz Continuity; Statistics and Operations Research Series SOR-91 11; Princeton University: Princeton, NJ, USA, 1991. [Google Scholar]
  28. Coroianu, L.; Costarelli, D.; Gal, S.G.; Vinti, G. Approximation by multivariate max-product Kantorovich-type operators and learning rates of least-squares regularized regression. Commun. Pure Appl. Anal. 2020, 19, 4213–4225. [Google Scholar] [CrossRef]
  29. Costarelli, D.; Vinti, G. Approximation by max-product neural network operators of Kantorovich type. Results Math. 2016, 69, 505–519. [Google Scholar] [CrossRef]
  30. Costarelli, D.; Vinti, G. An inverse result of approximation by sampling Kantorovich series. Proc. Edinb. Math. Soc. 2019, 62, 265–280. [Google Scholar] [CrossRef]
Figure 1. Parametric plots of $S_{n,\lambda}(f)$ (in red) and $S_{n,\lambda}(g)$ (in green) for the values $n = 6, 11, 17$ and $\lambda = 5$, where $f$ and $g$ are the functions given, respectively, by (18) and (19).
Table 1. Approximation errors in the first component.

n     e_max[1]    e_mean[1]    e_MS[1]
20    0.35358     0.10099      0.13239
50    0.12529     0.04051      0.04970
80    0.08197     0.02478      0.03056
Table 2. Approximation errors in the second component.

n     e_max[2]    e_mean[2]    e_MS[2]
20    0.24870     0.09654      0.11228
50    0.13200     0.03767      0.04773
80    0.08747     0.02391      0.03031
Table 3. Approximation errors in the third component.

n     e_max[3]    e_mean[3]    e_MS[3]
20    0.09299     0.03291      0.04278
50    0.03544     0.01313      0.01598
80    0.02172     0.00799      0.00980