Article

Nonnegative Scaling Vectors on the Interval

by David K. Ruch 1 and Patrick J. Van Fleet 2,*
1 Department of Mathematical and Computer Sciences, Metropolitan State University of Denver, Denver, CO 80217, USA
2 Department of Mathematics, University of St. Thomas, St. Paul, MN 55105, USA
* Author to whom correspondence should be addressed.
Axioms 2013, 2(3), 371-389; https://doi.org/10.3390/axioms2030371
Submission received: 17 April 2013 / Revised: 28 June 2013 / Accepted: 1 July 2013 / Published: 9 July 2013
(This article belongs to the Special Issue Wavelets and Applications)

Abstract:
In this paper, we outline a method for constructing nonnegative scaling vectors on the interval. Scaling vectors for the interval have been constructed in [1,2,3]. The approach here is different in that we start with an existing scaling vector Φ that generates a multi-resolution analysis for $L^2(\mathbb{R})$ and use it to create a scaling vector for the interval. If desired, the scaling vector can be constructed so that its components are nonnegative. Our construction uses ideas from [4,5], and we give results for scaling vectors satisfying certain support and continuity properties. These results also show that fewer edge functions are required to build multi-resolution analyses for $L^2[a,b]$ than in the methods described in [5,6].

1. Introduction

Let $\phi$ be a compactly supported orthogonal scaling function generating a multi-resolution analysis, $\{V_k\}$, for $L^2(\mathbb{R})$. In Walter and Shen [7], the authors show how to use this $\phi$ to construct a new nonnegative scaling function $P$ that generates the same multi-resolution analysis for $L^2(\mathbb{R})$. The disadvantages of this construction are that orthogonality is lost (although the authors gave a simple expression for the dual $P^*$) and that $P$ is not compactly supported.
The results of [7] were generalized to scaling vectors $\Phi = (\phi_1, \ldots, \phi_A)^T$ in [8]. In [9], the authors show that it is possible to modify the construction of [8] and retain compact support. Since many applications require the underlying space to be $L^2([a,b])$ rather than $L^2(\mathbb{R})$, it is worthwhile to investigate extending the construction to the interval.
In this paper, we take a continuous, compactly supported scaling vector Φ and illustrate how it can be used to construct a compactly supported scaling vector $P$ that generates a multi-resolution analysis for $L^2([a,b])$. The resulting scaling vector for $L^2([a,b])$ is nonnegative if at least one component $\phi_j$ of the original scaling vector Φ is nonnegative on its support. Nonnegativity of the scaling vector may be desirable in applications, such as density estimation (see [10] for density estimation by a single nonnegative scaling function). The construction is motivated by the work of Meyer [5]. The goals of the construction are to produce a nonnegative scaling vector, preserve the polynomial accuracy of the original scaling vector and keep the number of edge functions as small as possible. We conclude the paper with results that show, under certain circumstances, that it is possible to construct compactly supported scaling vectors that require only $m-1$ edge functions to preserve polynomial accuracy $m$. This is an improvement over some methods (for example, [5,6]) that require $m$ edge functions to preserve polynomial accuracy $m$.
In the next section, we introduce basic definitions, examples and results that are used throughout the sequel. In the third section, we define the edge functions for our constructions and show that the resulting scaling vector satisfies a matrix refinement equation and generates a multi-resolution analysis for L 2 ( [ a , b ] ) . The final section consists of some constructions illustrating the results of Section 3, as well as results in special cases that show the number of edge functions needed to attain a desired polynomial accuracy is smaller than the number needed for similar methods.

2. Notation, Definitions and Preliminary Results

We begin with the concept of a scaling vector or set of multiscaling functions. This idea was first introduced in [11,12]. We start with $A$ functions, $\phi_1, \ldots, \phi_A$, and consider the nested ladder of subspaces, $\cdots \subset V_{-1} \subset V_0 \subset V_1 \subset V_2 \subset \cdots$, where
$$V_n = \overline{\operatorname{span}\left\{2^{n/2}\phi_1(2^n \cdot - k), \ldots, 2^{n/2}\phi_A(2^n \cdot - k)\right\}_{k \in \mathbb{Z}}}, \qquad n \in \mathbb{Z}.$$
It is convenient to store $\phi_1, \ldots, \phi_A$ as a vector:
$$\Phi = \left(\phi_1, \phi_2, \ldots, \phi_A\right)^T$$
and define a multi-resolution analysis in much the same manner as in [4]:
(M1) $\overline{\bigcup_{n \in \mathbb{Z}} V_n} = L^2(\mathbb{R})$.
(M2) $\bigcap_{n \in \mathbb{Z}} V_n = \{0\}$.
(M3) $f(\cdot) \in V_n \iff f(2^{-n}\,\cdot) \in V_0$, $n \in \mathbb{Z}$.
(M4) $f(\cdot) \in V_0 \iff f(\cdot - n) \in V_0$, $n \in \mathbb{Z}$.
(M5) Φ generates a Riesz basis for $V_0$.
In this case, Φ satisfies a matrix refinement equation:
$$\Phi(t) = \sum_k C_k\, \Phi(2t - k), \qquad C_k \in \mathbb{R}^{A \times A} \tag{1}$$
We will make the following two assumptions about Φ and its components:
(A1) 
Each $\phi_\ell$, $\ell = 1, \ldots, A$, is compactly supported and continuous;
(A2) 
There is a vector $c = (c_1, \ldots, c_A)$ for which:
$$\sum_{\ell=1}^{A} \sum_{k \in \mathbb{Z}} c_\ell\, \phi_\ell(t - k) = 1 \tag{2}$$
Condition (A2) tells us that Φ forms a partition of unity.
We will say that Φ has polynomial accuracy $m$ if there exist constants $f^\ell_{n,k}$ such that, for $n = 0, \ldots, m-1$,
$$t^n = \sum_{\ell=1}^{A} \sum_{k \in \mathbb{Z}} f^\ell_{n,k}\, \phi_\ell(t - k) = \sum_{k \in \mathbb{Z}} f_{n,k} \cdot \Phi(t - k) \tag{3}$$
where
$$f_{n,k} = \left(f^1_{n,k}, \ldots, f^A_{n,k}\right). \tag{4}$$
Comparison of Equations (2) and (3) shows that
$$c = f_{0,k}, \qquad k \in \mathbb{Z}.$$
We have the following result from [13] involving the components of the vectors $f_{n,k}$:
Lemma 2.1 ([13]). The components $f^\ell_{j,k}$ of the vectors $f_{n,k}$ in Equation (4) satisfy the recurrence relation
$$f^\ell_{j,k+1} = \sum_{i=0}^{j} \binom{j}{i} f^\ell_{i,k} \tag{5}$$
for $\ell = 1, \ldots, A$ and $j = 0, \ldots, m-1$.
It will be convenient to reformulate Equation (5) in the following way:
Proposition 2.2. For the row vectors $f_{n,k}$ given in Equation (4), define the column vectors
$$E^\ell_k = \left(f^\ell_{0,k}, f^\ell_{1,k}, f^\ell_{2,k}, \ldots, f^\ell_{m-2,k}\right)^T. \tag{6}$$
Then
$$E^\ell_{k+1} = P_L\, E^\ell_k \tag{7}$$
where $P_L$ is the $(m-1) \times (m-1)$ lower triangular Pascal matrix, whose elements are defined by
$$\left(P_L\right)_{j,k} = \begin{cases} 0, & k > j \\ \binom{j}{k}, & k \le j \end{cases} \tag{8}$$
for $j, k = 0, \ldots, m-2$.
Proof. Computing the inner product of row $j$, $j = 0, \ldots, m-2$, of $P_L$ with $E^\ell_k$ gives:
$$\left(P_L\right)_j \cdot E^\ell_k = \sum_{i=0}^{j} \binom{j}{i} f^\ell_{i,k}$$
This is exactly $f^\ell_{j,k+1}$ by Equation (5).       ☐
The following corollary is immediate:
Corollary 2.3. For $j \in \mathbb{Z}$, we have:
$$E^\ell_j = P_L^j\, E^\ell_0 \tag{9}$$
Note: We have defined the vectors $E^\ell_k$ without including the term $f^\ell_{m-1,k}$. While Proposition 2.2 certainly holds if this term is included in the definition of $E^\ell_k$, our results in Section 4 use $E^\ell_k$ as defined by Equation (6).
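The recurrence in Lemma 2.1 and the Pascal-matrix formula of Corollary 2.3 are easy to probe numerically. A minimal sketch, using the linear B-spline ("hat") function $N_2$ supported on $[0,2]$ as an illustrative single scaling function ($A = 1$, accuracy $m = 2$): since $N_2(t-k)$ equals one at $t = k+1$ and zero at the other integers, $f_{n,k} = (k+1)^n$. Including the term $f_{m-1,k}$, as the Note above permits, gives a $2 \times 2$ Pascal matrix:

```python
import numpy as np
from math import comb

# Hat function N_2 (supp [0,2], accuracy m = 2): t^n = sum_k f_{n,k} N_2(t - k),
# with f_{n,k} = (k+1)^n because N_2(t - k) interpolates at t = k + 1.
m = 2
f = lambda n, k: (k + 1) ** n

# Lemma 2.1: f_{j,k+1} = sum_{i=0}^{j} C(j,i) f_{i,k}
for j in range(m):
    for k in range(-3, 3):
        assert f(j, k + 1) == sum(comb(j, i) * f(i, k) for i in range(j + 1))

# Corollary 2.3 (with f_{m-1,k} included, as the Note allows): E_j = P_L^j E_0
P_L = np.array([[1, 0], [1, 1]], float)   # 2x2 lower-triangular Pascal matrix
E0 = np.array([f(0, 0), f(1, 0)], float)
for j in range(-3, 4):                    # matrix_power accepts negative exponents
    Ej = np.array([f(0, j), f(1, j)], float)
    assert np.allclose(np.linalg.matrix_power(P_L, j) @ E0, Ej)
```

The recurrence here is simply the binomial expansion of $(k+2)^j = \sum_i \binom{j}{i}(k+1)^i$, which is what Lemma 2.1 encodes in general.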
With regard to (A1), we will further assume that, for $M_\ell \in \mathbb{N}$, $\ell = 1, \ldots, A$,
$$\operatorname{supp} \phi_\ell = \left[0, M_\ell\right]. \tag{10}$$
We will denote by $M$ the maximum value of the $M_\ell$:
$$M = \max\left\{M_1, \ldots, M_A\right\}$$
As stated in Section 1, our construction of a scaling vector $\Phi^{[a,b]}$ for $L^2[a,b]$ uses an existing scaling vector Φ for $L^2(\mathbb{R})$. If the components of Φ are nonnegative, then the components of $\Phi^{[a,b]}$ will be nonnegative as well. In the case where not all the components of Φ are nonnegative, it may still be possible to construct a nonnegative $\Phi^{[a,b]}$. Theorem 2.5 from [9] illustrates that, in order to do so, we must first modify Φ. To this end, Φ must be bounded and compactly supported, possess polynomial accuracy $p \ge 1$ and satisfy Condition B below.
Definition 2.4 (Condition B). Let $\Phi = (\phi_1, \ldots, \phi_A)^T$. We say Φ satisfies Condition B if, for some $j \in \{1, \ldots, A\}$, $\phi_j(t) \ge 0$ for $t \in \mathbb{R}$ and there exist finite index sets $\Lambda_i$ and constants $c_{ik}$ for $i \ne j$, such that:
(B1) $\tilde{\phi}_i(t) := \phi_i(t) + \sum_{k \in \Lambda_i} c_{ik}\, \phi_j(t - k) \ge 0$, $t \in \mathbb{R}$,
(B2) $d_j := c_j - \sum_{i \ne j} \sum_{k \in \Lambda_i} c_i\, c_{ik} \ge 0$,
(B3) $c_i \ge 0$ for $i \ne j$.
Here, the $c_i$ are the coefficients from (A2).
Theorem 2.5 ([9]). Suppose a scaling vector $\Phi = (\phi_1, \ldots, \phi_A)^T$ is bounded, compactly supported, has accuracy $p \ge 1$ and satisfies Condition B. Then the nonnegative vector
$$\tilde{\Phi} = \left(\tilde{\phi}_1, \ldots, \tilde{\phi}_{j-1}, \phi_j, \tilde{\phi}_{j+1}, \ldots, \tilde{\phi}_A\right)^T$$
where $\tilde{\phi}_i$ is given by (B1), is a bounded, compactly supported scaling vector with accuracy $p \ge 1$ that generates the same space $V_0$.
We now give two examples of multiscaling functions that we will use in the sequel.
Example 2.6 (Donovan, Geronimo, Hardin, Massopust). In [14], the authors constructed a scaling vector with $A = 2$ that satisfies the four-term matrix refinement equation, $\Phi(t) = \sum_{k=0}^{3} C_k \Phi(2t - k)$, where
$$C_0 = \frac{1}{20}\begin{pmatrix} -6 & -\sqrt{2} \\ 16\sqrt{2} & 12 \end{pmatrix}, \quad C_1 = \frac{1}{20}\begin{pmatrix} 20 & 9\sqrt{2} \\ 0 & 12 \end{pmatrix}, \quad C_2 = \frac{1}{20}\begin{pmatrix} -6 & 9\sqrt{2} \\ 0 & 0 \end{pmatrix}, \quad C_3 = \frac{1}{20}\begin{pmatrix} 0 & -\sqrt{2} \\ 0 & 0 \end{pmatrix}.$$
The scaling functions $\phi_1, \phi_2$ (shown in Figure 1) satisfy $\left\langle \phi_j(t-n), \phi_\ell(t-m)\right\rangle = \delta_{mn}\,\delta_{j\ell}$, $m, n \in \mathbb{Z}$, $j, \ell = 1, 2$. They also have polynomial accuracy two and are continuous, symmetric and compactly supported with $\operatorname{supp} \phi_1 = [0,2]$ and $\operatorname{supp} \phi_2 = [0,1]$. Φ also satisfies the partition of unity condition (A2) with
$$c = f_{0,k} = \frac{1}{1+\sqrt{2}}\left(1, \sqrt{2}\right), \qquad k \in \mathbb{Z}.$$
We can satisfy Theorem 2.5 by choosing $\tilde{\phi}_2 = \phi_2$, since $\phi_2(t) \ge 0$, $t \in \mathbb{R}$. We create $\tilde{\phi}_1$ by taking $\Lambda_1 = \{0, 1\}$ with $c_{10} = c_{11} = \frac{1}{2}$, so that $\tilde{\phi}_1(t) = \phi_1(t) + \frac{1}{2}\left(\phi_2(t) + \phi_2(t-1)\right) \ge 0$ for $t \in \mathbb{R}$. The partition of unity coefficients for $\tilde{\Phi}(t)$ are $d_1 = c_1$ and $d_2 = c_2 - c_1\left(c_{10} + c_{11}\right) = c_2 - c_1 > 0$. The new scaling vector, $\tilde{\Phi}(t)$, is shown in Figure 1.
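The refinement matrices and the partition-of-unity vector in this example can be checked against each other: substituting the refinement equation into $\sum_k c \cdot \Phi(t-k) = 1$ and using linear independence forces the sum rules $c\,(C_0 + C_2) = c$ and $c\,(C_1 + C_3) = c$. A sketch of that check, assuming the DGHM matrices in the ordering used here ($\phi_1$ supported on $[0,2]$, $\phi_2$ on $[0,1]$), with entries as reconstructed above:

```python
import numpy as np

s2 = np.sqrt(2.0)
# DGHM refinement matrices in this example's ordering (phi_1 on [0,2], phi_2 on [0,1]);
# signs follow the reconstruction given in the text.
C0 = np.array([[-6, -s2], [16 * s2, 12]]) / 20
C1 = np.array([[20, 9 * s2], [0, 12]]) / 20
C2 = np.array([[-6, 9 * s2], [0, 0]]) / 20
C3 = np.array([[0, -s2], [0, 0]]) / 20

# Partition-of-unity vector c = (1 + sqrt(2))^{-1} (1, sqrt(2))
c = np.array([1.0, s2]) / (1 + s2)

# Sum rules: necessary for sum_k c . Phi(t - k) = 1 to survive refinement
assert np.allclose(c @ (C0 + C2), c)
assert np.allclose(c @ (C1 + C3), c)
```

The same two identities fail for any sign pattern inconsistent with (A2), which makes them a useful self-test when transcribing multiwavelet filter coefficients.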
Figure 1. The scaling vector Φ (left) and the new scaling vector $\tilde{\Phi}$ from Example 2.6 (right).
Our next example uses a scaling vector constructed by Plonka and Strela in [15].
Example 2.7 (Plonka, Strela). Using a two-scale similarity transform in the frequency domain, Plonka and Strela constructed the following scaling vector Φ in [15]. It satisfies a three-term matrix refinement equation
$$\Phi(t) = \sum_{k=0}^{2} C_k\, \Phi(2t - k)$$
where
$$C_0 = \frac{1}{20}\begin{pmatrix} -7 & 15 \\ -4 & 10 \end{pmatrix}, \qquad C_1 = \frac{1}{20}\begin{pmatrix} 10 & 0 \\ 0 & 20 \end{pmatrix}, \qquad C_2 = \frac{1}{20}\begin{pmatrix} -7 & -15 \\ 4 & 10 \end{pmatrix}.$$
This scaling vector (shown in Figure 2) is not orthogonal, but it is compactly supported on $[0,2]$ with polynomial accuracy three. $\phi_2$ is nonnegative on its support and symmetric about $t = 1$, while $\phi_1$ is antisymmetric about $t = 1$. Φ satisfies (A2) with $c = f_{0,k} = (0, 1)$, $k \in \mathbb{Z}$. The authors also show that
$$f_{1,0} = \left(-\tfrac{1}{6}, 1\right). \tag{13}$$
We can satisfy Theorem 2.5 by choosing $\tilde{\phi}_2 = \phi_2$, since $\phi_2(t) \ge 0$, $t \in \mathbb{R}$. We create $\tilde{\phi}_1$ by taking $c_{10} = 1.6$, so that $\tilde{\phi}_1(t) = \phi_1(t) + 1.6\,\phi_2(t) \ge 0$ for $t \in \mathbb{R}$. The partition of unity coefficients for $\tilde{\Phi}$ are $d_1 = c_1 = 0$ and $d_2 = c_2 - c_1 \sum_k c_{1k} = c_2 - 0 = 1 > 0$. The new scaling vector $\tilde{\Phi}$ is shown in Figure 2.
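As with Example 2.6, the stated structure pins down the matrices up to the overall sign of $\phi_1$: the partition-of-unity vector $c = (0,1)$ must satisfy the sum rules, and the (anti)symmetry of $\phi_1, \phi_2$ about $t = 1$ forces $C_2 = S\, C_0\, S$ with $S = \operatorname{diag}(-1, 1)$. A sketch of these checks, with signs per the reconstruction above:

```python
import numpy as np

# Plonka-Strela matrices as reconstructed above (only signs, not magnitudes,
# are ambiguous in the source; these satisfy all stated properties).
C0 = np.array([[-7, 15], [-4, 10]]) / 20
C1 = np.array([[10, 0], [0, 20]]) / 20
C2 = np.array([[-7, -15], [4, 10]]) / 20

c = np.array([0.0, 1.0])         # partition-of-unity vector from (A2)
S = np.diag([-1.0, 1.0])         # phi_1 antisymmetric, phi_2 symmetric about t = 1

assert np.allclose(c @ (C0 + C2), c)   # sum rule, even-indexed matrices
assert np.allclose(c @ C1, c)          # sum rule, odd-indexed matrices
assert np.allclose(S @ C0 @ S, C2)     # symmetry relation C2 = S C0 S
```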
Figure 2. The scaling vector Φ (left) and the new scaling vector $\tilde{\Phi}$ from Example 2.7 (right).

3. Nonnegative Scaling Vectors on the Interval

The construction of scaling vectors on the interval has been addressed in [1,2,3]. In these cases, the authors constructed scaling vectors on the interval from scratch. It is our intent to show how to modify a given scaling vector that generates a multi-resolution analysis for L 2 ( R ) , so that it generates a (nonorthogonal) multi-resolution analysis for L 2 ( [ 0 , 1 ] ) . Moreover, the components of the new vector will be nonnegative. In particular cases, our procedure requires fewer edge functions than in the single scaling function constructions of [5,6].
Our task then is to modify an existing scaling vector and create a nonnegative scaling vector that generates a multi-resolution analysis for L 2 ( [ 0 , 1 ] ) that preserves the polynomial accuracy of the original scaling vector and avoids the creation of “too many” edge functions.
We begin with a multi-resolution analysis for $L^2(\mathbb{R})$ generated by a scaling vector Φ, and we also assume our scaling vector has polynomial accuracy $m$ with $f_{n,k}$ given by Equation (4).
Finally, assume that the set $S = \left\{\bar{\phi}_{\ell k} : \ell = 1, \ldots, A,\ k \le 0\right\}$ of non-zero functions defined by $\bar{\phi}_{\ell k}(t) = \phi_\ell(t - k)\big|_{[0,\infty)}$ is linearly independent, and let $n(S)$ denote the number of functions in $S$. Note that, due to the support restriction of Φ in Equation (10), $\bar{\phi}_{\ell 0}(t) = \phi_\ell(t)$.
Note that $S$ consists of all the original scaling functions $\phi_\ell$, $\ell = 1, \ldots, A$, plus those left shifts, $\phi_\ell(\cdot + k)$, $k > 0$, whose support overlaps $[0,1]$. Since each scaling function $\phi_\ell$ is supported on $\left[0, M_\ell\right]$, we can compute
$$n(S) = M_1 + \cdots + M_A. \tag{14}$$
We work only on the left edge $[0,\infty)$ in constructing $V_0[0,\infty)$. We begin with $\phi_\ell(\cdot - k)$, $k \ge 0$, and then add left edge functions to preserve polynomial accuracy.
Define the left edge functions, $\phi_{L,n}$, by
$$\phi_{L,n}(t) = \sum_{\ell=1}^{A} \sum_{k=1-M_\ell}^{0} f^\ell_{n,k}\, \phi_\ell(t - k)\Big|_{[0,\infty)} = \sum_{\ell=1}^{A} \sum_{k=1-M_\ell}^{0} f^\ell_{n,k}\, \bar{\phi}_{\ell k}(t) \tag{15}$$
for $n = 0, \ldots, m-1$. Observe that, since the $\phi_\ell$ are compactly supported, so is $\phi_{L,n}$; also note that, by Equation (3), $\phi_{L,n}(t) = t^n$ on $[0,1]$. Right edge functions can be defined similarly.
Our next proposition shows that the left edge functions (and, in an analogous manner, the right edge functions) satisfy a matrix refinement equation.
Proposition 3.1. Suppose that Φ is a scaling vector satisfying (A1)–(A2) with polynomial accuracy $m$, with $f_{n,k}$ given in Equation (3). Further assume that the set $S$ defined above is linearly independent. Then the set of edge functions $\phi_{L,n}$, $n = 0, \ldots, m-1$, satisfies a matrix refinement equation.
Proof. Φ satisfies the matrix refinement Equation (1), and, since Φ is supported on $[0, M]$, the number of refinement terms is finite. So there is a minimal positive integer $N$ such that
$$\Phi(t) = \sum_{j=0}^{N} C_j\, \Phi(2t - j)$$
with $C_j = 0$ for $j < 0$ or $j > N$. Now, for $k \in \mathbb{Z}$, we have
$$\Phi(t - k) = \sum_{j=0}^{N} C_j\, \Phi(2(t-k) - j) = \sum_{j=2k}^{2k+N} C_{j-2k}\, \Phi(2t - j).$$
Note that, for each $n = 0, \ldots, m-1$ and $t \in [0,\infty)$:
$$\begin{aligned} \phi_{L,n}(t) - 2^{-n}\phi_{L,n}(2t) &= \sum_{k=1-M}^{0} f_{n,k} \cdot \Phi(t - k) - 2^{-n}\sum_{k=1-M}^{0} f_{n,k} \cdot \Phi(2t - k) \\ &= \sum_{k=1-M}^{0} f_{n,k} \sum_{j=2k}^{2k+N} C_{j-2k}\, \Phi(2t - j) - 2^{-n}\sum_{k=1-M}^{0} f_{n,k} \cdot \Phi(2t - k) \\ &= \sum_{j=2-2M}^{N} \left(\sum_{k=1-M}^{0} f_{n,k}\, C_{j-2k}\right) \Phi(2t - j) - 2^{-n}\sum_{k=1-M}^{0} f_{n,k} \cdot \Phi(2t - k) \end{aligned}$$
We are able to leave the summation limits on the inner sum in the above line unchanged, since $C_{j-2k} = 0$ for $j - 2k < 0$ or $j - 2k > N$. Thus we have
$$\phi_{L,n}(t) - 2^{-n}\phi_{L,n}(2t) = \sum_{j=2-2M}^{N} q_{n,j}\, \Phi(2t - j)$$
with
$$q_{n,j} = \begin{cases} \displaystyle\sum_{k=1-M}^{0} f_{n,k}\, C_{j-2k} - 2^{-n} f_{n,j}, & j \in \{1-M, \ldots, 0\} \\[1mm] \displaystyle\sum_{k=1-M}^{0} f_{n,k}\, C_{j-2k}, & j \in \{2-2M, \ldots, -M\} \cup \{1, \ldots, N\}. \end{cases} \tag{16}$$
Recall that $\phi_{L,n}(t) = 2^{-n}\phi_{L,n}(2t) = t^n$ on $\left[0, \frac{1}{2}\right]$ and that the functions $\phi_\ell(2t - j)$ are linearly independent, so $q_{n,j} = 0$ for $j = 2-2M, \ldots, 0$. Thus
$$\phi_{L,n}(t) = 2^{-n}\phi_{L,n}(2t) + \sum_{j=1}^{N} q_{n,j}\, \Phi(2t - j)$$
on $[0,\infty)$. This is the desired dilation equation for the $n$th edge function, $\phi_{L,n}$.      ☐
Refinement equations for the right edge functions are derived in a similar manner.
Example 3.2. We return to the scaling vector of Plonka and Strela [15] introduced in Example 2.7. This scaling vector has polynomial accuracy three with $f_{0,0} = (0, 1)$ and $f_{1,0} = \left(-\frac{1}{6}, 1\right)$.
Both $\phi_1$ and $\phi_2$ are supported on $[0,2]$. The refinement equation matrices, $C_0$, $C_1$ and $C_2$, are given in Example 2.7. We calculate $q_{0,1}$ and $q_{0,2}$ as:
$$q_{0,1} = \sum_{k=-1}^{0} f_{0,k}\, C_{1-2k} = (0, 1) \cdot C_1 = (0, 1)$$
and
$$q_{0,2} = \sum_{k=-1}^{0} f_{0,k}\, C_{2-2k} = (0, 1) \cdot C_2 = \left(\tfrac{1}{5}, \tfrac{1}{2}\right).$$
The dilation equation for $\phi_{L,0}$ is
$$\phi_{L,0}(t) = \phi_{L,0}(2t) + (0, 1) \cdot \Phi(2t - 1) + \left(\tfrac{1}{5}, \tfrac{1}{2}\right) \cdot \Phi(2t - 2) = \phi_{L,0}(2t) + \phi_2(2t - 1) + \tfrac{1}{5}\,\phi_1(2t - 2) + \tfrac{1}{2}\,\phi_2(2t - 2).$$
In a similar manner, we can use Equations (13) and (16) to find that
$$q_{1,1} = f_{1,0}\, C_1 = \left(-\tfrac{1}{12}, 1\right), \qquad q_{1,2} = f_{1,0}\, C_2 = \left(\tfrac{31}{120}, \tfrac{5}{8}\right).$$
We thus compute the dilation equation for $\phi_{L,1}$:
$$\phi_{L,1}(t) = 2^{-1}\phi_{L,1}(2t) + q_{1,1} \cdot \Phi(2t - 1) + q_{1,2} \cdot \Phi(2t - 2) = 2^{-1}\phi_{L,1}(2t) - \tfrac{1}{12}\,\phi_1(2t - 1) + \phi_2(2t - 1) + \tfrac{31}{120}\,\phi_1(2t - 2) + \tfrac{5}{8}\,\phi_2(2t - 2)$$
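The $q_{n,j}$ vectors in this example can be reproduced mechanically from Equation (16). A sketch, using the Example 2.7 matrices (signs per the reconstruction there) and Lemma 2.1 to obtain $f_{1,-1}$:

```python
import numpy as np

# Refinement matrices of Example 2.7 (N = 2; C_j = 0 otherwise)
C = {0: np.array([[-7, 15], [-4, 10]]) / 20,
     1: np.array([[10, 0], [0, 20]]) / 20,
     2: np.array([[-7, -15], [4, 10]]) / 20}
Z = np.zeros((2, 2))

f = {(0, -1): np.array([0.0, 1.0]),      # f_{0,k} = c for all k
     (0, 0):  np.array([0.0, 1.0]),
     (1, 0):  np.array([-1/6, 1.0])}
f[(1, -1)] = f[(1, 0)] - f[(0, -1)]      # Lemma 2.1: f_{1,0} = f_{0,-1} + f_{1,-1}

def q(n, j):
    # q_{n,j} = sum_{k=1-M}^{0} f_{n,k} C_{j-2k}  for j >= 1 (Equation (16), M = 2)
    return sum(f[(n, k)] @ C.get(j - 2 * k, Z) for k in (-1, 0))

assert np.allclose(q(0, 1), [0, 1])
assert np.allclose(q(0, 2), [1/5, 1/2])
assert np.allclose(q(1, 1), [-1/12, 1])
assert np.allclose(q(1, 2), [31/120, 5/8])
```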
In order to construct a scaling vector for $V_0([0,\infty))$, we need our edge functions not only to satisfy a matrix refinement equation, but also to join with $\{\Phi(\cdot - k)\}_{k \ge 0}$ and form a Riesz basis for $V_0([0,\infty))$. We will next show that the set of edge functions constructed above does indeed preserve the Riesz basis property. We need the following result.
Lemma 3.3. Suppose $H$ is a separable Hilbert space with closed subspaces $V$, $\tilde{V}$, $W$ and $\tilde{W}$, such that $V \cap W = \tilde{V} \cap \tilde{W} = \{0\}$. Assume further that $V$ and $\tilde{V}$ are topologically isomorphic with Riesz bases $\{v_i\}$, $\{\tilde{v}_i\}$, respectively, and $W$ and $\tilde{W}$ are topologically isomorphic with Riesz bases $\{w_j\}$, $\{\tilde{w}_j\}$, respectively. Then, $V \oplus W$ and $\tilde{V} \oplus \tilde{W}$ are topologically isomorphic with Riesz bases $\{v_i, w_j\}$, $\{\tilde{v}_i, \tilde{w}_j\}$, respectively.
Proof. First we present a useful fact to simplify the proof. As stated in [4] (page xix), every Riesz basis is a homeomorphic image of an orthonormal basis. Since V and V ˜ are homeomorphic images of each other, we can assume without loss of generality, that the bases v i and v ˜ i are orthonormal bases of V and V ˜ , respectively. Similarly we may assume that the bases w j and w ˜ j are orthonormal bases of W and W ˜ , respectively.
Now, to show that $\{v_i, w_j\}$ is a Riesz basis of $V \oplus W$, we need to verify the stability condition:
$$A\left(\sum_i \alpha_i^2 + \sum_j \beta_j^2\right) \le \Big\|\sum_i \alpha_i v_i + \sum_j \beta_j w_j\Big\|^2 \le B\left(\sum_i \alpha_i^2 + \sum_j \beta_j^2\right) \tag{17}$$
for some $A, B > 0$ and for all sequences $\{c_k\} \in \ell^2$, where, for convenience, we partition $\{c_k\}$ as $\{\alpha_i\}, \{\beta_j\}$.
Use the orthonormality of the sets $\{v_i\}$ and $\{w_j\}$ to obtain
$$0 \le \Big\|\sum_i \alpha_i v_i - \sum_j \beta_j w_j\Big\|^2 = \sum_i \alpha_i^2 - 2\sum_i \sum_j \alpha_i \beta_j \left\langle v_i, w_j\right\rangle + \sum_j \beta_j^2 \tag{18}$$
so
$$2\sum_i \sum_j \alpha_i \beta_j \left\langle v_i, w_j\right\rangle \le \sum_i \alpha_i^2 + \sum_j \beta_j^2.$$
Now we use Equation (18) to see that
$$\Big\|\sum_i \alpha_i v_i + \sum_j \beta_j w_j\Big\|^2 = \sum_i \alpha_i^2 + 2\sum_i \sum_j \alpha_i \beta_j \left\langle v_i, w_j\right\rangle + \sum_j \beta_j^2 \le 2\left(\sum_i \alpha_i^2 + \sum_j \beta_j^2\right)$$
which proves the upper bound on the stability condition of Equation (17) with B = 2 .
We use Bessel's inequality with each orthonormal set $\{v_i\}$ and $\{w_j\}$ to obtain
$$\Big\|\sum_i \alpha_i v_i + \sum_j \beta_j w_j\Big\|^2 \ge \sum_i \alpha_i^2 \quad \text{and} \quad \Big\|\sum_i \alpha_i v_i + \sum_j \beta_j w_j\Big\|^2 \ge \sum_j \beta_j^2.$$
Adding these inequalities, we find the lower stability bound for Equation (17) with $A = 1/2$:
$$\frac{1}{2}\left(\sum_i \alpha_i^2 + \sum_j \beta_j^2\right) \le \Big\|\sum_i \alpha_i v_i + \sum_j \beta_j w_j\Big\|^2$$
This completes the proof that v i , w j is a Riesz basis of V W and an identical argument shows that v ˜ i , w ˜ j is a Riesz basis of V ˜ W ˜ .
It is now easy to see that the map T : V W V ˜ W ˜ , which maps each v i to v ˜ i and each w j to w ˜ j , is a homeomorphism.       ☐
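The upper stability bound $B = 2$ holds for any pair of orthonormal systems, jointly orthogonal or not, since $\|x + y\|^2 \le 2\|x\|^2 + 2\|y\|^2$. A minimal numerical sketch in $\mathbb{R}^6$, with a hypothetical configuration in which $W$ is tilted toward $V$ so the two systems are not jointly orthogonal:

```python
import numpy as np

rng = np.random.default_rng(1)

# Orthonormal bases of two complementary 3-dim subspaces of R^6;
# W is tilted toward V, so {v_i} and {w_j} are not jointly orthogonal.
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
V = Q[:, :3]
W, _ = np.linalg.qr(Q[:, 3:] + 0.4 * Q[:, :3])

for _ in range(200):
    a, b = rng.standard_normal(3), rng.standard_normal(3)
    x = V @ a + W @ b
    # upper stability bound of Equation (17) with B = 2
    assert x @ x <= 2 * (a @ a + b @ b) + 1e-9
```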
We are now ready to state and prove our next result.
Theorem 3.4. Let $\Phi = (\phi_1, \ldots, \phi_A)^T$ be a scaling vector that satisfies (A1) and generates a multi-resolution analysis for $L^2(\mathbb{R})$. For some index set $B$, let $\{L_i\}_{i \in B}$ be a finite set of edge functions with $\operatorname{supp} L_i = [0, \delta_i]$, and assume that $\{L_i, \phi_\ell(\cdot - k)\}_{i, \ell, k \ge 0}$ is a linearly independent set. Then $\{L_i(2^j \cdot), \phi_\ell(2^j \cdot - k)\}_{i, \ell, k \ge 0}$ is a Riesz basis of $V_j$, where $L^2([0,\infty)) = \overline{\bigcup_j V_j}$.
Proof. Without loss of generality, set $j = 0$, and let $C$ be the set of integer indices $k \ge 0$ for which $\operatorname{supp} L_i \cap \operatorname{supp} \phi_\ell(\cdot - k) \ne \emptyset$ for some $i \in B$. For ease of notation, denote by $\{f_n\}_{n \in C}$ those $\phi_\ell(\cdot - k)$ corresponding to $C$ and, for integer index set $D$, let $\{g_m\}_{m \in D}$ denote the other $\phi_\ell(\cdot - k)$. For ease of presentation, assume that $B$, $C$ and $D$ are mutually disjoint, so that $B \cup C \cup D$ indexes the full collection. Now, since $\{L_i, f_n\}$ is a linearly independent and finite set, it must be a Riesz basis of its span. We then use the Gram-Schmidt process to orthogonalize it and thus obtain $\{\tilde{L}_i, \tilde{f}_n\}$. In the process, we begin with the $L_i$ and then move on to the $\{f_n\}$. This ensures that $\operatorname{supp} \tilde{L}_i \subseteq [0, \max_j(\delta_j)]$, whence $\int \tilde{L}_i(t)\, g_n(t)\, dt = 0$ for all $i, n$.
If we set $V = \overline{\operatorname{span}\{\tilde{f}_n\}}$ and $W = \overline{\operatorname{span}\{g_m\}}$, we have $V \cap W = \{0\}$, so, by Lemma 3.3, $V \oplus W = \overline{\operatorname{span}\{\tilde{f}_n, g_m\}}$ has Riesz basis $\{\tilde{f}_n, g_m\}$. Hence there exist $A_0, B_0 > 0$, such that
$$A_0 \left\|\{d_k\}\right\|_2^2 \le \Big\|\sum_{n \in C} d_n \tilde{f}_n(t) + \sum_{m \in D} d_m g_m(t)\Big\|_2^2 \le B_0 \left\|\{d_k\}\right\|_2^2, \qquad \{d_k\} \in \ell^2(\mathbb{Z})$$
Assuming without loss of generality that $A_0 \le 1$, we use the line above, $\|\tilde{L}_i\| = 1$, the orthogonality of the $\{\tilde{L}_i, \tilde{f}_n\}$ and the disjoint supports of the $\tilde{L}_i$ and $g_m$ to see that
$$\begin{aligned} A_0 \left\|\{d_k\}\right\|_2^2 &= A_0\left(\sum_{i \in B} d_i^2 + \sum_{n \in C} d_n^2 + \sum_{m \in D} d_m^2\right) \le \sum_{i \in B} d_i^2 + \Big\|\sum_{n \in C} d_n \tilde{f}_n + \sum_{m \in D} d_m g_m\Big\|_2^2 \\ &= \int_{\mathbb{R}} \Big(\sum_{i \in B} d_i \tilde{L}_i(t)\Big)^2 dt + \int_{\mathbb{R}} \Big(\sum_{n \in C} d_n \tilde{f}_n(t)\Big)^2 dt + \int_{\mathbb{R}} \Big(\sum_{m \in D} d_m g_m(t)\Big)^2 dt + 2\int_{\mathbb{R}} \Big(\sum_{n \in C} d_n \tilde{f}_n(t)\Big)\Big(\sum_{m \in D} d_m g_m(t)\Big) dt \\ &= \int_{\mathbb{R}} \Big(\sum_{i \in B} d_i \tilde{L}_i(t) + \sum_{n \in C} d_n \tilde{f}_n(t) + \sum_{m \in D} d_m g_m(t)\Big)^2 dt = \Big\|\sum_{i \in B} d_i \tilde{L}_i(t) + \sum_{n \in C} d_n \tilde{f}_n(t) + \sum_{m \in D} d_m g_m(t)\Big\|_2^2. \end{aligned}$$
A similar proof shows that
$$\Big\|\sum_{i \in B} d_i \tilde{L}_i(t) + \sum_{n \in C} d_n \tilde{f}_n(t) + \sum_{m \in D} d_m g_m(t)\Big\|_2^2 \le B_0 \left\|\{d_k\}\right\|_2^2$$
so $\{\tilde{L}_i, \tilde{f}_n, g_m\}$ is a Riesz basis of its span. Finally, to see that $\{\tilde{L}_i, \tilde{f}_n, g_m\}$ is a Riesz basis for $V_0$, set $V = \operatorname{span}\{L_i, f_n\}$, $\tilde{V} = \operatorname{span}\{\tilde{L}_i, \tilde{f}_n\}$ and $W = \tilde{W} = \overline{\operatorname{span}\{g_m\}}$. Since $V = \tilde{V}$ has finite dimension and $V \cap W = \{0\}$, Lemma 3.3 holds, so that $\{L_i, f_n, g_m\}$ is a Riesz basis for $V \oplus W = \tilde{V} \oplus \tilde{W} = V_0$.  ☐

4. Edge Function Construction

We begin this section by constructing the left edge function needed to build the interval scaling vector Φ [ 0 , 1 ] from the scaling vector of Example 2.6.
Example 4.1. We return to the scaling vector of Example 2.6. Note that $m = 2$, $M_1 = 2$, $M_2 = 1$ and
$$f_{0,0} = (c_1, c_2) = \frac{1}{1+\sqrt{2}}\left(1, \sqrt{2}\right).$$
It is known (see, for example, [16]) that Φ can be restricted to any interval $[a,b]$, where $a, b \in \mathbb{Z}$, and the set
$$S = \left\{\phi_\ell(\cdot - k)\big|_{[a,b]} : k \in \mathbb{Z},\ \ell = 1, 2\right\}$$
constitutes an orthogonal set of functions on $[a,b]$ that reproduces constant functions on $[a,b]$. We nevertheless construct the edge function $\phi_{L,0}$ to illustrate the computation and provide motivation for Theorem 4.2.
We use Equation (15) to construct $\phi_{L,0}$:
$$\phi_{L,0}(t) = \sum_{k=-1}^{0} f^1_{0,k}\, \bar{\phi}_1(t - k) + f^2_{0,0}\, \bar{\phi}_2(t) = f^1_{0,0}\left(\bar{\phi}_1(t + 1) + \bar{\phi}_1(t)\right) + f^2_{0,0}\, \phi_2(t) = \frac{1}{1+\sqrt{2}}\left(\bar{\phi}_1(t + 1) + \bar{\phi}_1(t) + \sqrt{2}\,\phi_2(t)\right).$$
If we want a nonnegative edge function, then we need to use $\tilde{\Phi}(t) = \left(\tilde{\phi}_1(t), \phi_2(t)\right)^T$ from Example 2.6. In this case,
$$\tilde{f}_{0,0} = \left(d_1, d_2\right) = \left(c_1, c_2 - c_1\right) = \frac{1}{1+\sqrt{2}}\left(1, \sqrt{2} - 1\right).$$
Using Equation (15), we see that
$$\tilde{\phi}_{L,0}(t) = d_1\left(\bar{\tilde{\phi}}_1(t) + \bar{\tilde{\phi}}_1(t + 1)\right) + d_2\, \phi_2(t)$$
The edge functions are plotted in Figure 3.
Figure 3. The edge function using $\phi_{L,0}$ (left) and the nonnegative edge function using $\tilde{\phi}_{L,0}$ (right) from Example 4.1.
Although $m = 2$ for the scaling vector in Example 4.1, we only computed $\phi_{L,0}$. There is a good reason for this: it turns out that $\phi_{L,1}$ can be written as a linear combination of Φ and $\phi_{L,0}$. Indeed, we can use Equation (3) and the supports of $\phi_1$ and $\phi_2$ to write
$$t = f^1_{1,-1}\, \bar{\phi}_1(t + 1) + f^1_{1,0}\, \phi_1(t) + f^2_{1,0}\, \phi_2(t), \qquad t \in [0,1],$$
and then ask if there exist constants $\alpha_i$, $i = 0, 1, 2$, such that
$$\alpha_0\, \phi_{L,0}(t) + \alpha_1\, \phi_1(t) + \alpha_2\, \phi_2(t) = f^1_{1,-1}\, \bar{\phi}_1(t + 1) + f^1_{1,0}\, \phi_1(t) + f^2_{1,0}\, \phi_2(t).$$
Expanding this system and equating coefficients for $\phi_1(t)$, $\phi_2(t)$ and $\bar{\phi}_1(t + 1)$ (recall $f^1_{0,-1} = f^1_{0,0}$) gives
$$\begin{aligned} \alpha_0\, f^1_{0,0} + \alpha_1 &= f^1_{1,0} \\ \alpha_0\, f^2_{0,0} + \alpha_2 &= f^2_{1,0} \\ \alpha_0\, f^1_{0,0} &= f^1_{1,-1}. \end{aligned}$$
To motivate further results, we write the above system as the matrix equation
$$\begin{pmatrix} f^1_{0,0} & 1 & 0 \\ f^2_{0,0} & 0 & 1 \\ f^1_{0,0} & 0 & 0 \end{pmatrix} \begin{pmatrix} \alpha_0 \\ \alpha_1 \\ \alpha_2 \end{pmatrix} = \begin{pmatrix} f^1_{1,0} \\ f^2_{1,0} \\ f^1_{1,-1} \end{pmatrix}.$$
Note that the uniqueness of a solution to this system is completely determined by the fact that $f^1_{0,0} = c_1 = \frac{1}{1+\sqrt{2}} \ne 0$.
Thus, for the scaling vector of Example 4.1, we need only one left (right) edge function to form a multi-resolution analysis for $L^2[0,1]$. This is one fewer left (right) edge function than required by the constructions of multi-resolution analyses described in [5,6].
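The uniqueness argument can be sketched numerically: expanding the determinant of the $3 \times 3$ coefficient matrix along its third row reduces it to $f^1_{0,0} = c_1$, which is nonzero for the DGHM vector. A minimal check, using the matrix form given above:

```python
import numpy as np

s2 = np.sqrt(2.0)
f1, f2 = 1 / (1 + s2), s2 / (1 + s2)   # f_{0,0} = (c_1, c_2) for the DGHM vector

# Coefficient matrix of the 3x3 system in (alpha_0, alpha_1, alpha_2)
P = np.array([[f1, 1, 0],
              [f2, 0, 1],
              [f1, 0, 0]])

# The system is uniquely solvable precisely because det P = f^1_{0,0} = c_1 != 0
assert np.isclose(np.linalg.det(P), f1)
```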
The preceding discussion provides motivation for the following result.
Theorem 4.2. Suppose $\Phi = (\phi_1, \ldots, \phi_A)^T$ generates a multi-resolution analysis for $L^2(\mathbb{R})$ with polynomial accuracy $m \ge 2$. Suppose that each $\phi_i$ is continuous on $\mathbb{R}$, $i = 1, \ldots, A$, $\operatorname{supp} \phi_1 = [0, m]$ and $\operatorname{supp} \phi_i = [0, 1]$ for $i = 2, \ldots, A$. Then, for $t \in [0,1]$, $t^{m-1}$ can be written as a linear combination of $\phi_{L,j}$, $j = 0, \ldots, m-2$, and $\phi_\ell$, $\ell = 1, \ldots, A$. That is, only the left (right) edge functions $\phi_{L,0}, \ldots, \phi_{L,m-2}$ defined by Equation (15) are needed in conjunction with Φ to construct a multi-resolution analysis of $L^2[0,1]$.
Proof. It suffices to show that there exist $\alpha_0, \ldots, \alpha_{m-2+A}$, such that
$$t^{m-1} = \sum_{j=0}^{m-2} \alpha_j\, \phi_{L,j}(t) + \sum_{\ell=1}^{A} \alpha_{\ell+m-2}\, \phi_\ell(t) \tag{20}$$
for t [ 0 , 1 ] .
From Equation (15) and the support properties of Φ, we know that $t^{m-1}$, $t \in [0,1]$, can be expressed as
$$t^{m-1} = \sum_{k=1-m}^{-1} f^1_{m-1,k}\, \bar{\phi}_1(t - k) + \sum_{\ell=1}^{A} f^\ell_{m-1,0}\, \bar{\phi}_\ell(t). \tag{21}$$
Expanding the right-hand side of Equation (20) gives
$$\begin{aligned} t^{m-1} &= \sum_{j=0}^{m-2} \alpha_j\, \phi_{L,j}(t) + \sum_{\ell=1}^{A} \alpha_{\ell+m-2}\, \phi_\ell(t) = \sum_{j=0}^{m-2} \alpha_j\left(\sum_{k=1-m}^{-1} f^1_{j,k}\, \bar{\phi}_1(t - k) + \sum_{\ell=1}^{A} f^\ell_{j,0}\, \phi_\ell(t)\right) + \sum_{\ell=1}^{A} \alpha_{\ell+m-2}\, \phi_\ell(t) \\ &= \sum_{k=1-m}^{-1}\left(\sum_{j=0}^{m-2} \alpha_j\, f^1_{j,k}\right) \bar{\phi}_1(t - k) + \sum_{\ell=1}^{A}\left(\sum_{j=0}^{m-2} \alpha_j\, f^\ell_{j,0} + \alpha_{\ell+m-2}\right) \phi_\ell(t). \end{aligned} \tag{22}$$
Equating coefficients for $\bar{\phi}_1(t - k)$, $k = 1-m, \ldots, -1$, and $\phi_\ell(t)$, $\ell = 1, \ldots, A$, in Equations (21) and (22) gives rise to the following system of $(m-1+A) \times (m-1+A)$ linear equations
$$\begin{aligned} \sum_{j=0}^{m-2} f^1_{j,k}\, \alpha_j &= f^1_{m-1,k}, \qquad k = 1-m, \ldots, -1 \\ \sum_{j=0}^{m-2} \alpha_j\, f^\ell_{j,0} + \alpha_{\ell+m-2} &= f^\ell_{m-1,0}, \qquad \ell = 1, \ldots, A. \end{aligned} \tag{23}$$
We can reformulate Equation (23) as a matrix equation, $P\alpha = b$, where
$$\alpha = \left(\alpha_0, \alpha_1, \ldots, \alpha_{A+m-2}\right)^T$$
$$b = \left(f^1_{m-1,0}, f^2_{m-1,0}, \ldots, f^A_{m-1,0}, f^1_{m-1,-1}, f^1_{m-1,-2}, \ldots, f^1_{m-1,1-m}\right)^T$$
$$P = \begin{pmatrix} F & I_A \\ Q & Z_{m-1,A} \end{pmatrix}$$
with $I_A$ the $A \times A$ identity matrix, $Z_{m-1,A}$ the $(m-1) \times A$ zero matrix, $F$ the $A \times (m-1)$ matrix defined component-wise by $F_{\ell,j} = f^\ell_{j,0}$, and $Q$ the $(m-1) \times (m-1)$ matrix given by
$$Q = \begin{pmatrix} f^1_{0,-1} & f^1_{1,-1} & f^1_{2,-1} & \cdots & f^1_{m-2,-1} \\ f^1_{0,-2} & f^1_{1,-2} & f^1_{2,-2} & \cdots & f^1_{m-2,-2} \\ f^1_{0,-3} & f^1_{1,-3} & f^1_{2,-3} & \cdots & f^1_{m-2,-3} \\ \vdots & & & & \vdots \\ f^1_{0,1-m} & f^1_{1,1-m} & f^1_{2,1-m} & \cdots & f^1_{m-2,1-m} \end{pmatrix}.$$
The proof is complete if we can show Q is a nonsingular matrix.
Using Equation (6) and Corollary 2.3, we see that
$$Q^T = \begin{pmatrix} E^1_{-1} & E^1_{-2} & \cdots & E^1_{2-m} & E^1_{1-m} \end{pmatrix} = \begin{pmatrix} P_L^{-1} E^1_0 & P_L^{-2} E^1_0 & \cdots & P_L^{2-m} E^1_0 & P_L^{1-m} E^1_0 \end{pmatrix}$$
where $P_L$ is the lower-triangular Pascal matrix given by Equation (8). Thus
$$Q = \begin{pmatrix} v \\ v P_U^{-1} \\ v P_U^{-2} \\ \vdots \\ v P_U^{2-m} \end{pmatrix} \cdot P_U^{-1} \tag{25}$$
where we have introduced the row vector $v = \left(E^1_0\right)^T$ for ease of notation, and $P_U$ is the upper-triangular Pascal matrix defined by $P_U = P_L^T$.
We can perform the following row operations on the right-hand side of Equation (25), and the result has the same determinant as $Q$:
$$\begin{pmatrix} v \\ v P_U^{-1} \\ v P_U^{-2} \\ v P_U^{-3} \\ \vdots \\ v P_U^{2-m} \end{pmatrix} \longrightarrow \begin{pmatrix} v \\ v P_U^{-1} - v \\ v P_U^{-2} - 2 v P_U^{-1} + v \\ v P_U^{-3} - 3 v P_U^{-2} + 3 v P_U^{-1} - v \\ \vdots \\ v \sum_{j=0}^{m-2} (-1)^j \binom{m-2}{j} P_U^{-(m-2-j)} \end{pmatrix} = \begin{pmatrix} v \\ v\left(P_U^{-1} - I_{m-1}\right) \\ v\left(P_U^{-1} - I_{m-1}\right)^2 \\ \vdots \\ v\left(P_U^{-1} - I_{m-1}\right)^{m-2} \end{pmatrix}. \tag{26}$$
An identity given in [17] leads to $P_U^{-1} = D P_U D^{-1}$, where $D = D^{-1}$ is the diagonal matrix whose diagonal entries are $d_{j,j} = (-1)^{j-1}$, $j = 1, \ldots, m-1$, so that
$$\left(P_U^{-1} - I_{m-1}\right)^k = D\left(P_U - I_{m-1}\right)^k D. \tag{27}$$
The matrix $\left(P_U - I_{m-1}\right)^k$ is strictly upper triangular, with zeros in every diagonal below the $k$th upper diagonal. Denote by $p_k$ the first element in this diagonal, and note that $p_k > 0$, since every element in the $k$th diagonal and above in $P_U - I_{m-1}$ is positive. Pre- and post-multiplication by $D$ only serves to change the signs of various elements of $\left(P_U - I_{m-1}\right)^k$. In particular, the first element in the $k$th upper diagonal of Equation (27) is $(-1)^k p_k \ne 0$. Thus, the matrix on the right-hand side of Equation (26) is upper triangular with diagonal elements $\lambda_j = (-1)^{j-1} p_{j-1} v_1 = (-1)^{j-1} p_{j-1} f^1_{0,0}$, $j = 1, \ldots, m-1$. Hence, $Q$ is nonsingular if $f^1_{0,0} \ne 0$. However, Φ is a continuous scaling vector that forms a partition of unity. Since $\operatorname{supp} \phi_i = [0,1]$ for $i = 2, \ldots, A$, these components vanish at the integers, so the only way to satisfy the partition of unity condition at the nonzero integers is if $f^1_{0,0} \ne 0$.   ☐
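The Pascal-matrix identity $P_U^{-1} = D P_U D$ driving this argument, and its consequence $\left(P_U^{-1} - I\right)^k = D\left(P_U - I\right)^k D$, are easy to confirm numerically. A sketch for $m - 1 = 6$:

```python
import numpy as np
from math import comb

n = 6                                            # work with (m-1) x (m-1) matrices, m-1 = 6
P_U = np.array([[comb(k, j) for k in range(n)] for j in range(n)], float)  # upper Pascal
D = np.diag([(-1.0) ** j for j in range(n)])     # alternating-sign diagonal, D = D^{-1}

# identity from [17]: P_U^{-1} = D P_U D
assert np.allclose(np.linalg.inv(P_U), D @ P_U @ D)

# consequence used in Equation (27): (P_U^{-1} - I)^k = D (P_U - I)^k D
I = np.eye(n)
for k in range(1, n):
    lhs = np.linalg.matrix_power(np.linalg.inv(P_U) - I, k)
    assert np.allclose(lhs, D @ np.linalg.matrix_power(P_U - I, k) @ D)
```

Since $D$ appears on both sides, the parity convention for its diagonal entries does not matter; only the alternation of signs does.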
We return to Example 2.7 to motivate our next result.
Example 4.3. The scaling vector Φ from Example 2.7 has polynomial accuracy $m = 3$ and $\operatorname{supp} \phi_i = [0,2]$. We construct the edge functions $\phi_{L,0}$ and $\phi_{L,1}$. Noting that $f_{0,0} = (0, 1)$ and $f_{1,0} = \left(-\frac{1}{6}, 1\right)$, we have
$$\phi_{L,0}(t) = \sum_{\ell=1}^{2} \sum_{k=-1}^{0} f^\ell_{0,k}\, \bar{\phi}_\ell(t - k) = f^1_{0,-1}\, \bar{\phi}_1(t + 1) + f^1_{0,0}\, \phi_1(t) + f^2_{0,-1}\, \bar{\phi}_2(t + 1) + f^2_{0,0}\, \phi_2(t) = \bar{\phi}_2(t + 1) + \phi_2(t)$$
and
$$\begin{aligned} \phi_{L,1}(t) &= \sum_{\ell=1}^{2} \sum_{k=-1}^{0} f^\ell_{1,k}\, \bar{\phi}_\ell(t - k) = f^1_{1,-1}\, \bar{\phi}_1(t + 1) + f^1_{1,0}\, \phi_1(t) + f^2_{1,-1}\, \bar{\phi}_2(t + 1) + f^2_{1,0}\, \phi_2(t) \\ &= \left(f^1_{1,0} - f^1_{0,0}\right) \bar{\phi}_1(t + 1) + f^1_{1,0}\, \phi_1(t) + \left(f^2_{1,0} - f^2_{0,0}\right) \bar{\phi}_2(t + 1) + f^2_{1,0}\, \phi_2(t) \\ &= -\tfrac{1}{6}\, \bar{\phi}_1(t + 1) - \tfrac{1}{6}\, \phi_1(t) + \phi_2(t) \end{aligned}$$
where we have used Lemma 2.1 to compute $f^1_{1,-1}$ and $f^2_{1,-1}$. The edge functions are plotted in Figure 4.
Figure 4. The edge functions, $\phi_{L,0}$ and $\phi_{L,1}$, from Example 4.3.
We did not construct $\phi_{L,2}$, because it can be written as a linear combination of $\phi_{L,0}$, $\phi_{L,1}$, $\phi_1$ and $\phi_2$. We know $\phi_{L,2}(t) = t^2$ for $t \in [0,1]$. For $t \in [0,1]$, we have
$$t^2 = \phi_{L,2}(t) = \sum_{\ell=1}^{A} \sum_{k=-1}^{0} f^\ell_{2,k}\, \bar{\phi}_\ell(t - k) = f^1_{2,-1}\, \bar{\phi}_1(t + 1) + f^1_{2,0}\, \phi_1(t) + f^2_{2,-1}\, \bar{\phi}_2(t + 1) + f^2_{2,0}\, \phi_2(t). \tag{28}$$
We seek $\alpha_0, \ldots, \alpha_3$, so that
$$t^2 = \alpha_0\, \phi_{L,0}(t) + \alpha_1\, \phi_{L,1}(t) + \alpha_2\, \phi_1(t) + \alpha_3\, \phi_2(t) = \alpha_0\left(\bar{\phi}_2(t + 1) + \phi_2(t)\right) + \alpha_1\left(-\tfrac{1}{6}\,\bar{\phi}_1(t + 1) - \tfrac{1}{6}\,\phi_1(t) + \phi_2(t)\right) + \alpha_2\, \phi_1(t) + \alpha_3\, \phi_2(t) \tag{29}$$
for $t \in [0,1]$. We can regroup the terms in Equation (29) as follows:
$$t^2 = -\tfrac{1}{6}\alpha_1\, \bar{\phi}_1(t + 1) + \left(\alpha_2 - \tfrac{1}{6}\alpha_1\right) \phi_1(t) + \alpha_0\, \bar{\phi}_2(t + 1) + \left(\alpha_0 + \alpha_1 + \alpha_3\right) \phi_2(t).$$
Comparing this equation to (28) leads to the following system of equations
$$\begin{aligned} -\tfrac{1}{6}\alpha_1 &= f^1_{2,-1} \\ -\tfrac{1}{6}\alpha_1 + \alpha_2 &= f^1_{2,0} \\ \alpha_0 &= f^2_{2,-1} \\ \alpha_0 + \alpha_1 + \alpha_3 &= f^2_{2,0}. \end{aligned}$$
This system is easily seen to have a unique solution.
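Indeed, writing the system in matrix form, with rows ordered as the coefficients of $\bar{\phi}_1(t+1)$, $\phi_1(t)$, $\bar{\phi}_2(t+1)$, $\phi_2(t)$ and signs per the reconstruction of $\phi_{L,1}$ above, the determinant works out to $-1/6 \ne 0$:

```python
import numpy as np

# Coefficient matrix in (alpha_0, alpha_1, alpha_2, alpha_3); rows match the
# coefficients of phi-bar_1(t+1), phi_1(t), phi-bar_2(t+1), phi_2(t).
M = np.array([[0, -1/6, 0, 0],
              [0, -1/6, 1, 0],
              [1,    0, 0, 0],
              [1,    1, 0, 1]])

assert np.isclose(np.linalg.det(M), -1/6)   # nonzero, so the solution is unique
```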
We can generalize the preceding discussion with the following proposition:
Theorem 4.4. Suppose $\Phi = (\phi_1, \ldots, \phi_A)^T$ generates a multi-resolution analysis for $L^2(\mathbb{R})$ with polynomial accuracy $A + 1$. Suppose that each $\phi_i$ is continuous on $\mathbb{R}$ and $\operatorname{supp} \phi_i = [0, 2]$, $i = 1, \ldots, A$. Assume further that $\left\{f_{0,0}, f_{1,0}, \ldots, f_{A-1,0}\right\}$ is a linearly independent set of vectors. Then, for $t \in [0,1]$, $t^A$ can be written as a linear combination of $\phi_{L,j}$, $j = 0, \ldots, A-1$, and $\phi_\ell$, $\ell = 1, \ldots, A$. That is, only the left (right) edge functions $\phi_{L,0}, \ldots, \phi_{L,A-1}$ defined by Equation (15) are needed in conjunction with Φ to construct a multi-resolution analysis of $L^2[0,1]$.
Proof. We seek constants $\alpha_0, \ldots, \alpha_{2A-1}$, such that, for $t \in [0,1]$, we have
$$t^A = \sum_{j=0}^{A-1} \alpha_j\, \phi_{L,j}(t) + \sum_{\ell=1}^{A} \alpha_{\ell+A-1}\, \phi_\ell(t). \tag{30}$$
Substituting the edge functions
$$\phi_{L,j}(t) = \sum_{\ell=1}^{A} \sum_{k=-1}^{0} f^\ell_{j,k}\, \bar{\phi}_\ell(t - k) = \sum_{\ell=1}^{A} f^\ell_{j,-1}\, \bar{\phi}_\ell(t + 1) + \sum_{\ell=1}^{A} f^\ell_{j,0}\, \phi_\ell(t),$$
$j = 0, \ldots, A-1$, into Equation (30) gives, for $t \in [0,1]$,
$$t^A = \sum_{j=0}^{A-1} \alpha_j\left(\sum_{\ell=1}^{A} f^\ell_{j,-1}\, \bar{\phi}_\ell(t + 1) + \sum_{\ell=1}^{A} f^\ell_{j,0}\, \phi_\ell(t)\right) + \sum_{\ell=1}^{A} \alpha_{\ell+A-1}\, \phi_\ell(t) = \sum_{\ell=1}^{A}\left(\sum_{j=0}^{A-1} \alpha_j\, f^\ell_{j,-1}\right) \bar{\phi}_\ell(t + 1) + \sum_{\ell=1}^{A}\left(\sum_{j=0}^{A-1} \alpha_j\, f^\ell_{j,0} + \alpha_{\ell+A-1}\right) \phi_\ell(t). \tag{31}$$
However, for $t \in [0,1]$,
$$t^A = \phi_{L,A}(t) = \sum_{\ell=1}^{A} \sum_{k=-1}^{0} f^\ell_{A,k}\, \bar{\phi}_\ell(t - k) = \sum_{\ell=1}^{A} f^\ell_{A,-1}\, \bar{\phi}_\ell(t + 1) + \sum_{\ell=1}^{A} f^\ell_{A,0}\, \phi_\ell(t). \tag{32}$$
Setting Equations (31) and (32) equal to each other gives rise to the following system of equations
$$\begin{aligned} \sum_{j=0}^{A-1} f^\ell_{j,0}\, \alpha_j + \alpha_{\ell+A-1} &= f^\ell_{A,0}, \qquad \ell = 1, \ldots, A \\ \sum_{j=0}^{A-1} f^\ell_{j,-1}\, \alpha_j &= f^\ell_{A,-1}, \qquad \ell = 1, \ldots, A. \end{aligned} \tag{33}$$
We can reformulate Equation (33) as a matrix equation, $P\alpha = b$, where
$$\alpha = \left(\alpha_0, \ldots, \alpha_{A-1}, \alpha_A, \ldots, \alpha_{2A-1}\right)^T$$
$$b = \left(f^1_{A,0}, \ldots, f^A_{A,0}, f^1_{A,-1}, \ldots, f^A_{A,-1}\right)^T$$
and
$$P = \begin{pmatrix} F & I_A \\ Q & Z_A \end{pmatrix}$$
with $I_A$ and $Z_A$ the $A \times A$ identity and zero matrices, respectively, $F$ the $A \times A$ matrix defined component-wise by $F_{\ell,j} = f^\ell_{j,0}$, and $Q$ the $A \times A$ matrix given by
$$Q = \begin{pmatrix} f^1_{0,-1} & f^1_{1,-1} & f^1_{2,-1} & \cdots & f^1_{A-1,-1} \\ f^2_{0,-1} & f^2_{1,-1} & f^2_{2,-1} & \cdots & f^2_{A-1,-1} \\ \vdots & & & & \vdots \\ f^A_{0,-1} & f^A_{1,-1} & f^A_{2,-1} & \cdots & f^A_{A-1,-1} \end{pmatrix}.$$
The proof is complete if we can show Q is a nonsingular matrix.
Using Equation (6) and Corollary 2.3, we see that
\[
Q^T = \begin{pmatrix} E_{-1}^1 & E_{-1}^2 & \cdots & E_{-1}^A \end{pmatrix}
= \begin{pmatrix} P_L^{-1}E_0^1 & P_L^{-1}E_0^2 & \cdots & P_L^{-1}E_0^A \end{pmatrix}
= P_L^{-1} \begin{pmatrix} E_0^1 & E_0^2 & \cdots & E_0^A \end{pmatrix}
= P_L^{-1} \begin{pmatrix}
f_{0,0}^1 & f_{0,0}^2 & \cdots & f_{0,0}^A \\
f_{1,0}^1 & f_{1,0}^2 & \cdots & f_{1,0}^A \\
\vdots & \vdots & & \vdots \\
f_{A-1,0}^1 & f_{A-1,0}^2 & \cdots & f_{A-1,0}^A
\end{pmatrix}
= P_L^{-1} \begin{pmatrix} f_{0,0} & f_{1,0} & \cdots & f_{A-1,0} \end{pmatrix}.
\]
Since it is assumed that $\{f_{0,0}, f_{1,0}, \ldots, f_{A-1,0}\}$ is a linearly independent set of vectors, $Q^T$ is nonsingular and the proof is complete.    ☐
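The final step of the proof can be checked numerically: a product of an invertible matrix and a matrix with linearly independent columns is nonsingular. In the sketch below, a lower triangular Pascal matrix is used merely as a convenient invertible stand-in for $P_L$ (an assumption for illustration, not the paper's definition), and random columns stand in for the vectors $E_0^\ell$.

```python
import numpy as np
from math import comb

# Numerical check of the proof's last step: if P_L is invertible and the
# columns E_0^l are linearly independent, then Q^T = P_L^{-1} (E_0^1 ... E_0^A)
# is nonsingular. The lower triangular Pascal matrix below is only a
# placeholder invertible matrix, not necessarily the paper's P_L.
A = 3
P_L = np.array([[comb(i, j) for j in range(A)] for i in range(A)], dtype=float)

rng = np.random.default_rng(2)
E0 = rng.standard_normal((A, A))  # columns E_0^l; independent almost surely
QT = np.linalg.solve(P_L, E0)     # Q^T = P_L^{-1} E0, without forming the inverse

print(np.linalg.matrix_rank(QT) == A)
```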
Propositions 4.2 and 4.4 in some sense represent the extreme cases for the supports of the scaling functions in $\Phi$. In the general case, $\operatorname{supp}\phi^\ell = [0, M_\ell]$, and in order to obtain a square system like Equation (23) or (33), the number of functions in $S = \{\bar{\phi}^\ell(\cdot - k)\}$ contributing to $t^{m-1}$ for $t \in [0,1]$ must equal the sum of $m-1$ (the number of edge functions) and $A$ (the number of scaling functions). From Equation (14), we have
\[
\sum_{\ell=1}^{A} M_\ell = n(S) = m - 1 + A,
\]
so that $n(S) - A = m - 1$. Using an argument similar to those used in the proofs of Propositions 4.2 and 4.4, we would arrive at the $n(S) \times n(S)$ system $P\alpha = b$, where
\[
\alpha = \left( \alpha_0, \ldots, \alpha_{n(S)-1} \right)^T,
\]
\[
b = \left( f_{m-1,0},\, g^1,\, g^2, \ldots, g^A \right)^T,
\]
with $g^\ell = \left( f_{m-1,1-M_\ell}^\ell,\, f_{m-1,2-M_\ell}^\ell, \ldots, f_{m-1,-1}^\ell \right)$ and
\[
P = \begin{pmatrix} F & I_A \\ Q & Z_{m-1,A} \end{pmatrix},
\]
where $I_A$ is the $A \times A$ identity matrix, $Z_{m-1,A}$ is the $(m-1) \times A$ zero matrix, $F$ is the $A \times (m-1)$ matrix defined component-wise by $F_{\ell,k} = f_{k,0}^\ell$, $\ell = 1, \ldots, A$, $k = 0, \ldots, m-2$, and $Q$ is the $(m-1) \times (m-1)$ matrix given in block form by
\[
Q = \begin{pmatrix} Q^1 \\ Q^2 \\ \vdots \\ Q^A \end{pmatrix},
\]
where $Q^\ell$ is the $(M_\ell - 1) \times (m-1)$ matrix with entries $f_{j,k}^\ell$, $k = 1-M_\ell, \ldots, -1$, $j = 0, \ldots, m-2$, for $\ell = 1, \ldots, A$.
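The same assembly can be sketched for the general case with mixed support lengths. Again all entries are random placeholders, and the stacked-block layout for $Q$ is an assumption consistent with the dimension count $n(S) = m - 1 + A$ above.

```python
import numpy as np

# Sketch of the general n(S) x n(S) system with mixed supports
# supp(phi^l) = [0, M_l]. All coefficient entries are random placeholders.
rng = np.random.default_rng(1)
M = [2, 3]                 # support endpoints M_l (an illustrative choice)
A = len(M)
nS = sum(M)                # n(S) = M_1 + ... + M_A
m1 = nS - A                # m - 1 = n(S) - A edge functions

F = rng.standard_normal((A, m1))   # F[l-1, k] = f_{k,0}^l
# Q stacks one (M_l - 1) x (m - 1) block per scaling function
Q = np.vstack([rng.standard_normal((Ml - 1, m1)) for Ml in M])

P = np.block([[F, np.eye(A)], [Q, np.zeros((m1, A))]])
b = rng.standard_normal(nS)        # placeholder right-hand side

alpha = np.linalg.solve(P, b)
print(np.allclose(P @ alpha, b))
```

Note that the blocks $Q^\ell$ have $\sum_\ell (M_\ell - 1) = n(S) - A = m - 1$ rows in total, so $Q$ is square, as the counting argument requires.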
It remains an open problem to determine conditions that ensure $Q$ is nonsingular. A reasonable assumption is that $\{f_{0,0}, \ldots, f_{m-2,0}\}$ (or, equivalently, the set $\{E_0^1, \ldots, E_0^A\}$) is a set of linearly independent vectors, but a proof has not been established. For those instances when $M_\ell > 2$, it must be true that $E_0^\ell \neq (0, 0, \ldots, 1)^T$, since this vector is an eigenvector of $P_U^{-1}$.
It is also unclear whether nonnegative scaling vectors can be constructed that possess certain (anti-)symmetry properties. If the underlying scaling vector possesses such properties, then only the edge functions would need modification. We have yet to consider the problem of creating (anti-)symmetric edge functions.

Acknowledgments

The authors wish to express their gratitude to University of St. Thomas colleague, Yongzhi (Peter) Yang, for introducing us to several interesting properties obeyed by Pascal matrices and for his careful reading of the manuscript.

References

  1. Dahmen, W.; Micchelli, C.A. Biorthogonal wavelet expansions. Constr. Approx. 1997, 13, 293–328.
  2. Goh, S.; Jiang, Q.; Xia, T. Construction of biorthogonal multiwavelets using the lifting scheme. Appl. Comput. Harmon. Anal. 2000, 9, 336–352.
  3. Lakey, J.; Pereyra, C. Divergence-Free Multiwavelets on Rectangular Domains. In Wavelet Analysis and Multiresolution Methods; Marcel Dekker: New York, NY, USA, 2000; pp. 203–240.
  4. Daubechies, I. Ten Lectures on Wavelets; SIAM: Philadelphia, PA, USA, 1992.
  5. Meyer, Y. Ondelettes sur l'intervalle. Rev. Mat. Iberoam. 1991, 7, 115–133.
  6. Cohen, A.; Daubechies, I.; Vial, P. Wavelets on the interval and fast wavelet transforms. Appl. Comput. Harmon. Anal. 1993, 1, 54–81.
  7. Walter, G.; Shen, X. Positive estimation with wavelets. Contemp. Math. 1998, 216, 63–79.
  8. Ruch, D.; Van Fleet, P. On the Construction of Positive Scaling Vectors. In Wavelet Analysis and Multiresolution Methods; He, T., Ed.; Marcel Dekker: New York, NY, USA, 2000; pp. 317–339.
  9. Ruch, D.; Van Fleet, P. Gibbs' phenomenon for nonnegative compactly supported scaling vectors. J. Math. Anal. Appl. 2005, 304, 370–382.
  10. Walter, G.; Shen, X. Continuous nonnegative wavelets and their use in density estimation. Commun. Stat. Theory Methods 1999, 28, 1–18.
  11. Geronimo, J.; Hardin, D.; Massopust, P. Fractal functions and wavelet expansions based on several scaling functions. J. Approx. Theory 1994, 78, 373–401.
  12. Goodman, T.; Lee, S.L. Wavelets of multiplicity r. Trans. Am. Math. Soc. 1994, 342, 307–324.
  13. Heil, C.; Strang, G.; Strela, V. Approximation by translates of refinable functions. Numer. Math. 1996, 73, 75–94.
  14. Donovan, G.; Geronimo, J.; Hardin, D.; Massopust, P. Construction of orthogonal wavelets using fractal interpolation functions. SIAM J. Math. Anal. 1996, 27, 1158–1192.
  15. Plonka, G.; Strela, V. Construction of multiscaling functions with approximation and symmetry. SIAM J. Math. Anal. 1998, 29, 481–510.
  16. Donovan, G.; Geronimo, J.; Hardin, D.; Massopust, P. Construction of orthogonal wavelets using fractal interpolation functions. SIAM J. Math. Anal. 1996, 27, 1158–1192.
  17. Call, G.; Velleman, D. Pascal's matrices. Am. Math. Mon. 1993, 100, 372–376.
