Construction of Multiwavelets on an Interval

Boundary functions for wavelets on a finite interval are often constructed as linear combinations of boundary-crossing scaling functions. An alternative approach is based on linear algebra techniques for truncating the infinite matrix of the Discrete Wavelet Transform to a finite one. In this article we show how an algorithm of Madych for scalar wavelets can be generalized to multiwavelets, given an extra assumption. We then develop a new algorithm that does not require this additional condition. Finally, we apply results from a previous paper to resolve the non-uniqueness of the algorithm by imposing regularity conditions (including approximation orders) on the boundary functions.


Introduction and Overview
The Discrete Wavelet Transform (DWT) is designed to act on infinitely long signals. For finite signals, the algorithm breaks down near the boundaries. This can be dealt with by constructing special boundary functions [1][2][3], or by extending the data by zero padding, extrapolation, symmetry, or other methods [4][5][6][7].
Two approaches to constructing boundary functions were compared in a previous paper of the authors [8]. The first approach is based on forming linear combinations of standard scaling functions that cross the boundary. The second approach is based on boundary recursion relations.
A third approach is based on linear algebra. The infinite banded Toeplitz matrix representing the DWT is replaced by a finite matrix by suitable end-point modifications. A particular such method for scalar wavelets is presented in Madych [4].
In this paper, we first show that the Madych approach can be generalized to multiwavelets under an additional assumption, which may or may not be satisfied for a given multiwavelet. We then present a modified method that does not require this extra assumption.
Linear algebra completions are not unique; they all include multiplication by an arbitrary orthogonal matrix. A random choice of matrix does not in general produce coefficients that correspond to any actual boundary function. Random choice also does not provide any approximation order at the boundary.
We show how to impose approximation order constraints in the algorithm, and in the process remove much or all of the non-uniqueness.

Review of Wavelet Theory
In this section we provide a brief background on the theory of wavelets. We primarily focus on the basic definitions and results that will be used throughout this article. For a more detailed treatment, we refer the reader to the many excellent articles and books published on this subject [6,9-12].
We will state everything in terms of multiwavelets, which includes scalar wavelets as a special case, and restrict ourselves to the orthogonal case.

Multiresolution Approximation
A multiresolution approximation (MRA) of L^2(R) is a chain of closed subspaces {V_n}, n ∈ Z, satisfying a standard list of conditions; in particular, (vi) there exists a function vector φ = (φ^1, ..., φ^r)^T whose integer translates form an orthonormal basis of V_0. The function φ is called the multiscaling function of the given MRA; r is called the multiplicity. Condition (ii) gives the main property of an MRA: each V_n consists of the functions in V_0 compressed by a factor of 2^n. Thus, an orthonormal basis of V_n is given by {2^{n/2} φ^j(2^n x − k) : 1 ≤ j ≤ r, k ∈ Z}. Since V_0 ⊂ V_1, the multiscaling function satisfies φ(x) = √2 Σ_k H_k φ(2x − k) for some r × r coefficient matrices H_k. This is called a two-scale refinement equation, and φ is called refinable. We consider in this paper only compactly supported functions, for which the refinement equation is a finite sum. The orthogonal projection P_n of a function s ∈ L^2 into V_n is given by P_n s = Σ_k ⟨s, φ_{nk}⟩ φ_{nk}, where φ_{nk}(x) = 2^{n/2} φ(2^n x − k). This is interpreted as an approximation to s at scale 2^{−n}.
Here the inner product is defined as ⟨f, g⟩ = ∫ f(x) g(x)* dx, where * denotes the complex conjugate transpose.
The main application of an MRA comes from considering the difference between approximations to s at successive scales 2^{−n} and 2^{−n−1}.
Let Q_n = P_{n+1} − P_n. Q_n is also an orthogonal projection, onto a closed subspace W_n which is the orthogonal complement of V_n in V_{n+1}: V_{n+1} = V_n ⊕ W_n. An orthonormal basis of W_0 is generated from the integer translates of a single function vector ψ, which satisfies ψ(x) = √2 Σ_k G_k φ(2x − k) for some coefficient matrices G_k. We have L^2(R) = ⊕_{n∈Z} W_n, and {ψ^j_{nk} : 1 ≤ j ≤ r, n, k ∈ Z} forms an orthonormal basis of L^2(R).

Discrete Wavelet Transform
The Discrete Wavelet Transform (DWT) takes a function s ∈ V n for some n and decomposes it into a coarser approximation at level m < n, plus the fine detail at the intermediate levels.
It suffices to describe the step from level n to level n − 1.

Since the signal s lies in V_n = V_{n−1} ⊕ W_{n−1}, it can be split into its projections onto the two subspaces. We find that the coefficients at the coarser level are obtained by applying the recursion coefficients to the coefficients at level n. If we interleave the coefficients at level n − 1, the DWT can be written as (sd)^{n−1} = ∆ s^n, where ∆ is the infinite banded block matrix in (2), built from the recursion coefficients H_k, G_k. The matrix ∆ is orthogonal. Signal reconstruction corresponds to s^n = ∆*(sd)^{n−1}.
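As a concrete illustration, one decomposition and reconstruction step can be sketched numerically. The sketch below is an assumed example using the scalar Haar filter (r = 1), for which ∆ is exactly block diagonal and no boundary-crossing functions occur:

```python
import numpy as np

# Sketch of one DWT step for the scalar Haar wavelet (r = 1), an assumed
# example: H_0 = H_1 = 1/sqrt(2), G_0 = -G_1 = 1/sqrt(2).  The finite
# matrix Delta interleaves one lowpass and one highpass row per pair of
# signal samples.
h = np.array([1.0, 1.0]) / np.sqrt(2)    # lowpass (scaling) coefficients
g = np.array([1.0, -1.0]) / np.sqrt(2)   # highpass (wavelet) coefficients

n = 8                                    # even signal length
Delta = np.zeros((n, n))
for k in range(n // 2):
    Delta[2 * k, 2 * k:2 * k + 2] = h      # s-coefficient (smooth) row
    Delta[2 * k + 1, 2 * k:2 * k + 2] = g  # d-coefficient (detail) row

s = np.arange(1.0, n + 1)                # signal at level n
sd = Delta @ s                           # interleaved (s d) at level n-1
s_back = Delta.T @ sd                    # reconstruction via Delta^*

assert np.allclose(Delta @ Delta.T, np.eye(n))   # Delta is orthogonal
assert np.allclose(s_back, s)                    # perfect reconstruction
```

For longer filters the rows overlap while shifting by two positions, which is exactly where the endpoint problem discussed in the following sections arises.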

Approximation Order
A multiscaling function φ has approximation order p if all polynomials of degree less than p can be expressed locally as linear combinations of integer shifts of φ. That is, there exist row vectors c*_{jk}, j = 0, …, p − 1, k ∈ Z, so that x^j = Σ_k c*_{jk} φ(x − k). (3) For orthogonal wavelets, c*_{jk} = ⟨x^j, φ_{0k}⟩, which can be expressed in terms of the continuous moments of φ, where µ_j is the jth continuous moment of φ.
A high approximation order is desirable in applications. A minimum approximation order of 1 is a required condition in many theorems.
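For scalar orthogonal wavelets, approximation order p is equivalent to the sum rules Σ_k (−1)^k k^j h_k = 0 for j = 0, …, p − 1, together with the normalization Σ_k h_k = √2. A small numerical check, using the Daubechies D4 filter (an assumed example with p = 2):

```python
import numpy as np

# Hedged illustration (scalar case, r = 1): check the sum rules for the
# Daubechies D4 filter, which has approximation order p = 2.
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))
k = np.arange(4)

assert np.isclose(h.sum(), np.sqrt(2))            # normalization
for j in range(2):                                # sum rules, j = 0, 1
    assert np.isclose(np.sum((-1.0) ** k * k ** j * h), 0.0)

# The next moment does NOT vanish, so the order is exactly 2.
assert not np.isclose(np.sum((-1.0) ** k * k ** 2 * h), 0.0)
```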

Wavelets on an Interval
Standard wavelet theory only considers functions on the entire real line. In practice we often deal with functions on a finite interval I. One way to deal with this problem is to introduce special boundary basis functions. These functions need to be refinable in order to support a DWT algorithm.
The main approaches construct boundary functions from
• recursion relations; or
• linear combinations of shifts of the underlying scaling functions; or
• linear algebra techniques.
Details on these approaches, and the connections between them, will be given later in this section. The linear combination approach has been the most commonly used technique (see e.g., [1,3,13,14]). The recursion relation approach, and its relationship to linear combinations, was studied in more detail in [8]. The linear algebra approach is used in [4,6].

Basic Assumptions and Notation
We do not aim for complete generality but make the following simplifying assumptions, which cover most cases of practical interest.
• The underlying multiwavelet is orthogonal, continuous, with multiplicity r and approximation order p ≥ 1, and has recursion coefficients H_0, …, H_N, G_0, …, G_N. This means the support of φ and ψ is contained in the interval [0, N].
• The interval I is [0, M], with M large enough so that the left and right endpoint functions do not interfere with each other.
• The boundary functions have support on [0, N − 1] and [M − N + 1, M], respectively (that is, smaller support than the interior functions).
The interior functions satisfy the usual recursion relations φ(x) = √2 Σ_k H_k φ(2x − k), ψ(x) = √2 Σ_k G_k φ(2x − k). The boundary-crossing multiscaling functions are those shifts of φ whose support contains 0 or M in its interior. At the left endpoint, these are φ_{−N+1} through φ_{−1}.
The support of φ could be strictly smaller than [0, N]. In this case, some of the functions that appear to be boundary-crossing are actually interior, but this causes no problems.
We assume that we have L left endpoint scaling functions, which we group together into a single vector φ^L. We stress that we mean L scalar functions, not function vectors, and that L is not necessarily a multiple of r.
Likewise, we assume R right endpoint functions with support contained in [M − N + 1, M], grouped into a vector φ^R, with recursion relations similar to (6). We will show later that L and R are uniquely determined by φ.
Most of the rest of the paper will only address the calculations at the left end in detail. Calculations at the right end are identical, with appropriate minor changes.

Recursion Relations
The left endpoint functions satisfy recursion relations of the form
φ^L(x) = √2 [A φ^L(2x) + B φ(2x)],
ψ^L(x) = √2 [E φ^L(2x) + F φ(2x)]. (6)
Here A, B, E, F are coefficient matrices, and φ(2x) stands schematically for the interior functions at the finer scale.

The recursion relation approach constructs boundary functions by finding suitable recursion coefficients A, B.
We are interested in regular solutions of (6), that is, φ^L that are continuous, have approximation order at least 1, and satisfy φ^L(0) ≠ 0.
It is shown in [8] that a sufficient condition for regularity is that A has a simple largest eigenvalue of 1/√2, and that φ^L has approximation order at least 1. Conditions for verifying boundary approximation order p are given in Section 6.

Linear Combinations
If the boundary functions are linear combinations of boundary-crossing functions, then φ^L = Σ_k C_k φ_k, (7) where the sum runs over the boundary-crossing shifts, restricted to the interval. The linear combination approach to constructing boundary functions tries to construct a suitable C. A random choice of C_k will not produce refinable functions. It is shown in [8] that if the boundary functions are both refinable and linear combinations, the coefficients A, B, C must be related as in (8). Relations (8) are a kind of eigenvalue problem. For any given φ there is only a small number of possibilities for boundary functions that are both refinable and linear combinations.
Note that the recursion relations are necessary. Without coefficients A, B there is no discrete wavelet transform. On the other hand, the existence of C as in (7) is optional. It was shown in [8] that there are regular refinable boundary functions that are not linear combinations.

Linear Algebra
We can assume that N = 2K + 1 is odd, by introducing an extra recursion coefficient H_N = 0 if necessary. The resulting decomposition matrix ∆_M corresponds to a segment of the infinite matrix ∆ in (2) with some endpoint modifications.
Here the T_k are as in (2), and the endpoint blocks L_k, R_k are built from A, B, E, F as in (6).
The linear algebra approach tries to construct suitable endpoint blocks L k , R k by linear algebra methods.

Uniqueness Results
We assume initially that there are only four recursion coefficients, and therefore only two block matrices T_0, T_1. Since the infinite matrix ∆ in (2) is orthogonal, we know that

T_0 T_0* + T_1 T_1* = I and T_0 T_1* = 0, or equivalently T_0* T_0 + T_1* T_1 = I and T_0* T_1 = 0. (11)

These relations lead to some interesting properties that we use in the following. More detail is given in [8,15], but for convenience we give at least an outline of the proofs here.

Lemma 3.1 If T_0, T_1 are square matrices of size 2r × 2r that satisfy relations (11), then
(a) ρ_0 := rank(T_0) and ρ_1 := rank(T_1) satisfy ρ_0 + ρ_1 = 2r.
(b) The ranges and nullspaces of T_0 and T_1 are mutually orthogonal and complementary; that is, range(T_1) = range(T_0)^⊥, null(T_0) = range(T_1*), and null(T_1) = range(T_0*).
(c) There exist orthogonal matrices U, V with

U* T_0 V = [ I_{ρ_0} 0 ; 0 0 ],   U* T_1 V = [ 0 0 ; 0 I_{ρ_1} ], (13)

where I_ρ denotes an identity matrix of size ρ × ρ.

Proof.
(a) The first equation in (11) implies ρ_0 + ρ_1 ≥ 2r. The second equation implies ρ_0 + ρ_1 ≤ 2r.
(b) The second equation in (11) shows that range(T_1) ⊆ range(T_0)^⊥. The dimension count from (a) shows that they are identical.
(c) We start with separate singular value decompositions (SVD) of T_0 and T_1: T_0 = U_0 Σ_0 V_0*, T_1 = U_1 Σ_1 V_1*, where the usual ordering of singular values has been reversed for T_1.
Since ranges and nullspaces are orthogonal, we can construct matrices U and V by taking the first ρ_0 columns of U_0, V_0 and the last ρ_1 columns of U_1, V_1, respectively, which provides a common SVD. The fact that Σ_0 = I, Σ_1 = I follows from relations (11).

To investigate the possible sizes of the endpoint blocks in ∆_M, it suffices to consider ∆_3 of size 6r × 6r. ∆_M for larger M simply has more rows of T_0, T_1 in the middle.
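The construction in part (c) can be carried out numerically. The sketch below is an assumed example using the scalar Daubechies D4 filter (r = 1), with the blocks T_0, T_1 laid out row-by-row from the lowpass and highpass coefficients as in (2):

```python
import numpy as np

# Hedged sketch of Lemma 3.1 for the Daubechies D4 filter (r = 1).
# Wavelet coefficients: g_k = (-1)^k h_{3-k}.
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))
g = np.array([h[3], -h[2], h[1], -h[0]])

T0 = np.array([[h[0], h[1]], [g[0], g[1]]])
T1 = np.array([[h[2], h[3]], [g[2], g[3]]])
I2 = np.eye(2)

# Relations (11), from orthogonality of the infinite matrix Delta.
assert np.allclose(T0 @ T0.T + T1 @ T1.T, I2)
assert np.allclose(T0 @ T1.T, np.zeros((2, 2)))

rho0 = np.linalg.matrix_rank(T0)
rho1 = np.linalg.matrix_rank(T1)
assert rho0 + rho1 == 2                  # part (a): ranks sum to 2r

# Separate SVDs; T1's singular values will be taken in reversed order.
U0, S0, V0t = np.linalg.svd(T0)
U1, S1, V1t = np.linalg.svd(T1)

# Common orthogonal U, V: first rho0 columns from T0's SVD,
# remaining rho1 columns from T1's SVD (part (c)).
U = np.hstack([U0[:, :rho0], U1[:, :rho1]])
V = np.hstack([V0t.T[:, :rho0], V1t.T[:, :rho1]])

D0 = U.T @ T0 @ V                        # should be diag(I_rho0, 0)
D1 = U.T @ T1 @ V                        # should be diag(0, I_rho1)
assert np.allclose(D0, np.diag([1.0, 0.0]))
assert np.allclose(D1, np.diag([0.0, 1.0]))
```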
Theorem 3.3 Assume that ∆_3, ∆̃_3 are two orthogonal matrices of the form (14). Then there exist orthogonal matrices which transform the endpoint blocks of ∆_3 into those of ∆̃_3. The proofs are given in [8].
If there are more than two matrices T_j, we form block matrices. For example, if we have T_0, …, T_3, we use the block matrices defined in (16).

Madych Approach for Scalar Wavelets
This is a particular implementation of the matrix completion approach for scalar orthogonal wavelets. Madych [4] started from a periodized version of the infinite matrix (2), and modified it into the desired form by orthogonal matrices. We will show in this section that this also works for orthogonal multiwavelets, given an additional condition.
To explain the Madych algorithm, and simultaneously extend it to multiwavelets, we again assume initially that we only have T_0 and T_1, and begin with the periodized matrix ∆̃_3 (17). Our objective is to find orthogonal matrices U, V so that the transformed matrix has the desired structure and is orthogonal. We let S_0, S_1 denote the boundary blocks of the product ∆̃_3 V. From (11) it follows that S_0 S_1* = 0.

A multiscaling function based on four recursion coefficients satisfies Condition M if S_0 and S_1 both have full rank r.
This condition is automatic in the scalar case, because S_0 and S_1 are non-zero orthogonal row vectors satisfying S_0 S_1* = 0. For multiwavelets this condition may not hold. For example, the Chui-Lian CL(2) multiwavelet [16] has only three recursion coefficients, and its S_1 only has rank 1.
The row spans of S_0, S_1 are mutually orthogonal, so we can orthonormalize them separately. We can find r × r nonsingular matrices R_0, R_1 so that the matrix obtained by stacking R_0 S_0 on top of R_1 S_1 is an orthogonal matrix.
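The orthonormalization step can be realized with a QR factorization. In this sketch, S_0 and S_1 are placeholder matrices with full row rank and mutually orthogonal row spans, not the actual blocks from the algorithm:

```python
import numpy as np

# Hedged sketch: given S0, S1 with full row rank and mutually orthogonal
# row spans, find nonsingular R0, R1 so that stacking R0 @ S0 over
# R1 @ S1 gives an orthogonal matrix.  S0, S1 are placeholder data.
def row_orthonormalizer(S):
    # QR of S^T: S^T = Q R, so S = R^T Q^T and (R^T)^{-1} S = Q^T
    # has orthonormal rows spanning the row space of S.
    Q, R = np.linalg.qr(S.T)
    return np.linalg.inv(R.T)            # this is the matrix R_i

S0 = np.array([[2.0, 2.0, 0.0, 0.0],
               [1.0, 1.0, 3.0, 3.0]])
S1 = np.array([[1.0, -1.0, 0.0, 0.0],
               [0.0, 0.0, 2.0, -2.0]])   # rows orthogonal to row span of S0

R0 = row_orthonormalizer(S0)
R1 = row_orthonormalizer(S1)
V = np.vstack([R0 @ S0, R1 @ S1])

assert np.allclose(V @ V.T, np.eye(4))   # V is orthogonal
```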
We now form ∆̃_3 V; this already has the desired form. Multiply ∆̃_3 V from the left by a block matrix in which U_L, U_R are arbitrary orthogonal matrices, to obtain the general completion. This completes the algorithm in the case of up to four recursion coefficients. In the general case we form block matrices again, as in (16).

A New Approach to Multiwavelet Endpoint Modification
For a more general algorithm, we again begin with ∆̃_3 as in (17). Let U and V be the orthogonal matrices from the joint SVD of T_0 and T_1 in (13), and consider the transformed matrix (19). By inspection, a technique that produces the correct structure is to move the first ρ_0 columns to the end, and then interchange the first ρ_0 rows with the last ρ_1 rows. That amounts to multiplying from the right and from the left with suitable permutation matrices. Now let Q_L, Q_R be arbitrary orthogonal matrices of size 2ρ_1 × 2ρ_1 and 2ρ_0 × 2ρ_0, respectively, and multiply (19) from the left and from the right with block matrices built from Q_L, Q_R, as in (15).

There is one important observation that follows from Theorem 3.3. Suppose that Q̃_L, Q̃_R are orthogonal matrices, and instead of (15) we define ∆̃_3 using them. Then ∆̃_3 has the form (14) and is orthogonal. By Theorem 3.3, there are other orthogonal matrices Q_L, Q_R so that the same ∆̃_3 can also be written in the form (15). For this reason, it is not necessary to consider arbitrary orthogonal components in the matrix U_R.
As before, in the case of more than four recursion coefficients we apply the algorithm to block matrices.
If we reconsider Madych's approach in our notation, we find that it amounts to moving the second set of columns in (18) to the end instead. This only yields matrices of the correct size if ρ_0 = ρ_1, which is equivalent to Condition M.

Imposing Regularity Conditions
The algorithm in this section will only be described in detail for the left boundary. Calculations at the right boundary work the same way.
Given a completion ∆_M we can read off the recursion coefficients A, B from L_0, L_1. During numerical experiments with the algorithm described above we discovered that a random choice for the matrix Q_L did not usually correspond to actual boundary functions. This section will describe how to make a canonical choice for Q_L which produces recursion coefficients that correspond to regular boundary functions, and which imposes some approximation orders. The groundwork for this was laid in [8].

Approximation Order for Boundary Functions
In [8] it was shown that if the interior multiscaling function φ has approximation order ≥ p, then approximation order p for the boundary scaling functions is equivalent to the existence of row vectors λ*_j, j = 0, …, p − 1, satisfying conditions in which the γ*_jm are known row vectors computed from the c*_jk defined in (3); here ⌊x⌋ denotes the greatest integer ≤ x. If we let Λ be the matrix with rows λ*_j, the approximation order conditions can be written as (22). After applying the algorithm from Section 5 with initial multiplier Q_L = I, we end up with (20). After pre-multiplying by a general Q_L, we get a matrix in which G is a known matrix with rows g*_j. Conditions (22) then reduce to (23).

Simplifying the Problem
It is easy to see that if we replace the boundary function vector φ^L by M φ^L for an invertible matrix M, the new boundary functions still span the same space, and remain refinable. They also remain orthogonal if M is orthogonal.
The effect of M on the coefficients A, B, C and the matrix Λ is straightforward to compute. The key observation is that, by using a suitable M, we can assume that Λ is lower triangular.
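The effect on A, B, C can be sketched directly from the schematic forms of the recursion relation (6) and the linear combination (7):

```latex
% Change of basis \tilde\phi^L = M\phi^L (schematic; M invertible).
% From \phi^L(x) = \sqrt{2}\,[A\,\phi^L(2x) + B\,\phi(2x)] and
% \phi^L = C\,\Phi we get
\tilde\phi^L(x) = \sqrt{2}\,\bigl[(M A M^{-1})\,\tilde\phi^L(2x)
                  + (M B)\,\phi(2x)\bigr],
\qquad \tilde C = M C,
% so A \mapsto M A M^{-1}, \; B \mapsto M B, \; C \mapsto M C.
% An orthogonal M preserves orthonormality of the boundary functions.
```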

Deriving the Algorithm
We will now satisfy conditions (23) row by row.
NOTE: The vectors λ*_j, γ*_j, g*_j are naturally numbered j = 0, …, p − 1. To keep the notation readable, in this section we number the elements of vectors and matrices starting with index 0 instead of 1.
We have found the 0th row of Q_L, as well as λ*_0. For the induction step, assume that rows 0, …, k − 1 of Λ and Q_L have already been determined. We partition the matrix Q_L so that Q_{11,k} is lower triangular; by induction, the rows found so far are orthonormal. We now satisfy the conditions for the kth row in (23) one by one.
Assuming l_kk ≠ 0, the conditions for row k in (23) determine the new entries one at a time. The new row k of Q must be orthogonal to the rows of Q_{11,k}. We substitute (24) and (25) and simplify; note that the matrix in parentheses is upper triangular with non-zero diagonal entries, and therefore non-singular. Finally, we normalize the new row k. The actual calculations go in the following order: first we calculate λ*_k and l_kk from (26) and (27); then we can calculate α*_k, β*_k and q_kk from (24) and (25). Everything is unique, except for the choice of sign in l_kk.
The only way in which the algorithm could fail is if l_kk = 0. This is conceivable, but has not been observed in practice.
With this algorithm we can impose boundary approximation orders up to the number of boundary functions L, or the interior approximation order p, whichever is smaller. If L < p, the boundary functions will have a lower approximation order than the interior functions. If L > p, only the top p rows of A, B will be determined, and we have to make some arbitrary choices about the rest.
This will be illustrated by examples in the next section.
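One concrete way to make the remaining arbitrary choices when L > p is to complete the p determined orthonormal rows to a full orthogonal matrix, for instance via the null space obtained from an SVD. The rows below are hypothetical numbers, not taken from any particular multiwavelet:

```python
import numpy as np

# Hedged sketch: completing a partial set of orthonormal rows to a full
# orthogonal matrix.  rows_given is hypothetical data standing in for
# the p rows determined by the approximation order conditions.
rows_given = np.array([[0.6, 0.8, 0.0],
                       [-0.8, 0.6, 0.0]])   # two orthonormal rows in R^3

# The null space of the given rows supplies the missing orthonormal rows.
_, _, Vt = np.linalg.svd(rows_given)
extra = Vt[rows_given.shape[0]:]            # rows spanning the complement
Q = np.vstack([rows_given, extra])

assert np.allclose(Q @ Q.T, np.eye(3))      # Q is orthogonal
```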

Examples
To keep the subscripting manageable, we assume that for left endpoint calculations we are working on the interval [0, M]. For right endpoint calculations we instead use [−M, 0]. The right-endpoint formulas corresponding to (6) and (7) are analogous. The examples only show the calculations for φ^L, φ^R. The recursion coefficients for the wavelet functions ψ^L, ψ^R are found by completing the orthogonal matrices Q_L and Q_R.
For the DGHM multiwavelet, we find ρ_0 = 3, ρ_1 = 1, so we have only a single boundary function at the left end, but three at the right end.
At the left end we can only enforce approximation order 1. The boundary scaling function found by our algorithm is the same one as in [8].
At the right end we can enforce approximation order 2. This determines two of the boundary functions, corresponding to the orthonormalized combination of the first two basic solutions from [8]. The third function (determined by the third row of Q_R) is arbitrary, except that the diagonal entry in Z should be smaller than √2/2 to ensure that the completion is regular. As an example, we set the third column of Z to zero, and let the QR factorization built into MATLAB choose an appropriate third row of Y. The resulting coefficients are

0.3015 0.2345 0.3920 0.3279 0.0533 0.2902 0.5383 −0.3711 0.2189 0.8638 −0.2613

The graphs are shown in Figure 3.


Summary
Boundary functions for wavelets on a finite interval can be constructed from linear combinations of boundary-crossing internal functions, or by finding recursion coefficients, or by linear algebra techniques. The problem with the linear algebra approach is that it involves multiplication by arbitrary orthogonal matrices. A random choice leads to boundary recursion coefficients that produce an invertible transform but do not correspond to any actual boundary functions. Also, these coefficients do not provide any approximation orders.
In this paper, we present a linear algebra algorithm that works for all multiwavelets and removes most or all of the non-uniqueness while ensuring maximum possible boundary approximation orders.

Figure 3. Boundary functions for DGHM. The single left boundary function has approximation order 1. Three right boundary functions provide approximation order 2.