
# Construction of Multiwavelets on an Interval

by Ahmet Altürk 1 and Fritz Keinert 2,*

1 Department of Mathematics, Amasya University, Amasya, Turkey
2 Department of Mathematics, Iowa State University, Ames, IA 50011, USA
* Author to whom correspondence should be addressed.
Axioms 2013, 2(2), 122-141; https://doi.org/10.3390/axioms2020122
Submission received: 5 February 2013 / Revised: 21 March 2013 / Accepted: 26 March 2013 / Published: 17 April 2013

## Abstract

Boundary functions for wavelets on a finite interval are often constructed as linear combinations of boundary-crossing scaling functions. An alternative approach is based on linear algebra techniques for truncating the infinite matrix of the Discrete Wavelet Transform to a finite one. In this article we show how an algorithm of Madych for scalar wavelets can be generalized to multiwavelets, given an extra assumption. We then develop a new algorithm that does not require this additional condition. Finally, we apply results from a previous paper to resolve the non-uniqueness of the algorithm by imposing regularity conditions (including approximation orders) on the boundary functions.

## 1. Introduction and Overview

The Discrete Wavelet Transform (DWT) is designed to act on infinitely long signals. For finite signals, the algorithm breaks down near the boundaries. This can be dealt with by constructing special boundary functions [1,2,3], or by extending the data by zero padding, extrapolation, symmetry, or other methods [4,5,6,7].
Two approaches to constructing boundary functions were compared in a previous paper of the authors [8]. The first approach is based on forming linear combinations of standard scaling functions that cross the boundary. The second approach is based on boundary recursion relations.
A third approach is based on linear algebra. The infinite banded Toeplitz matrix representing the DWT is replaced by a finite matrix by suitable end-point modifications. A particular such method for scalar wavelets is presented in Madych [4].
In this paper, we first show that the Madych approach can be generalized to multiwavelets under an additional assumption, which may or may not be satisfied for a given multiwavelet. We then present a modified method that does not require this extra assumption.
Linear algebra completions are not unique; they all include multiplication by an arbitrary orthogonal matrix. A random choice of matrix does not in general produce coefficients that correspond to any actual boundary function. Random choice also does not provide any approximation order at the boundary.
We show how to impose approximation order constraints in the algorithm, and in the process remove much or all of the non-uniqueness.

## 2. Review of Wavelet Theory

In this section we provide a brief background on the theory of wavelets. We primarily focus on the basic definitions and results that will be used throughout this article. For a more detailed treatment, we refer the reader to the many excellent articles and books published on this subject [6,9,10,11,12].
We will state everything in terms of multiwavelets, which includes scalar wavelets as a special case, and restrict ourselves to the orthogonal case.

#### 2.1. Multiresolution Approximation

A multiresolution approximation (MRA) of $L^2(\mathbb{R})$ is a chain of closed subspaces $\{V_n\}$, $n \in \mathbb{Z}$,
$$\cdots \subset V_{-1} \subset V_0 \subset V_1 \subset \cdots \subset L^2(\mathbb{R})$$
satisfying
(i) $V_n \subset V_{n+1}$ for all $n \in \mathbb{Z}$;
(ii) $f(x) \in V_n \iff f(2x) \in V_{n+1}$ for all $n \in \mathbb{Z}$;
(iii) $f(x) \in V_n \implies f(x - 2^{-n}k) \in V_n$ for all $n, k \in \mathbb{Z}$;
(iv) $\bigcap_{n \in \mathbb{Z}} V_n = \{0\}$;
(v) $\overline{\bigcup_{n \in \mathbb{Z}} V_n} = L^2(\mathbb{R})$;
(vi) there exists a function vector
$$\phi(x) = \begin{pmatrix} \phi_1(x) \\ \vdots \\ \phi_r(x) \end{pmatrix}, \qquad \phi_i \in L^2(\mathbb{R}),$$
such that $\{\phi_j(x-k) : 1 \le j \le r,\ k \in \mathbb{Z}\}$ is an orthonormal basis for $V_0$ [11].
The function $\phi$ is called the multiscaling function of the given MRA; $r$ is called the multiplicity.
Condition (ii) gives the main property of an MRA. Each $V_n$ consists of the functions in $V_0$ compressed by a factor of $2^n$. Thus, an orthonormal basis of $V_n$ is given by
$$\{\phi_{j,nk} := 2^{n/2} \phi_j(2^n x - k) : 1 \le j \le r,\ k \in \mathbb{Z}\}$$
Since $V_0 \subset V_1$, $\phi$ can be written in terms of the basis of $V_1$ as
$$\phi(x) = \sum_k H_k \phi_{1k}(x) = \sqrt{2} \sum_k H_k \phi(2x - k)$$
for some $r \times r$ coefficient matrices $H_k$. This is called a two-scale refinement equation, and $\phi$ is called refinable. We consider in this paper only compactly supported functions, for which the refinement equation is a finite sum.
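As a concrete illustration, the refinement equation pins down a compactly supported scaling function from its coefficients alone. The following sketch (Python with NumPy, scalar case $r = 1$ with the Daubechies $D_2$ coefficients that appear in Section 7; the normalization $\sum_k \phi(k) = 1$ is an assumption of the sketch) computes $\phi$ at the integers as an eigenvector problem and then refines to half-integers:

```python
import numpy as np

# Scalar example (r = 1): Daubechies D2 coefficients h_0..h_3,
# normalized so that phi(x) = sqrt(2) * sum_k h_k phi(2x - k).
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))

# On supp(phi) = [0,3], the values at the interior integers satisfy
#   phi(1) = sqrt(2) (h1 phi(1) + h0 phi(2))
#   phi(2) = sqrt(2) (h3 phi(1) + h2 phi(2))
# i.e., (phi(1), phi(2)) is an eigenvector of M for eigenvalue 1.
M = np.sqrt(2) * np.array([[h[1], h[0]],
                           [h[3], h[2]]])
w, v = np.linalg.eig(M)
phi = v[:, np.argmin(np.abs(w - 1))]
phi = phi / phi.sum()                  # normalize: sum_k phi(k) = 1

# One refinement step gives values at the half-integers, e.g.
phi_half = np.sqrt(2) * h[0] * phi[0]  # phi(1/2) = sqrt(2) h0 phi(1)

print(phi)      # [(1+sqrt(3))/2, (1-sqrt(3))/2] ~ [1.366, -0.366]
```

Repeating the refinement step on dyadic grids is the standard cascade construction of $\phi$.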
The orthogonal projection $P_n$ of a function $s \in L^2$ into $V_n$ is given by
$$P_n s = \sum_{k \in \mathbb{Z}} \langle s, \phi_{nk} \rangle \phi_{nk}$$
This is interpreted as an approximation to $s$ at scale $2^{-n}$. Here the inner product is defined as
$$\langle f, g \rangle = \int f(x)\, g(x)^*\, dx$$
where $*$ denotes the complex conjugate transpose.
The main application of an MRA comes from considering the difference between approximations to $s$ at successive scales $2^{-n}$ and $2^{-n-1}$.
Let $Q_n = P_{n+1} - P_n$. $Q_n$ is also an orthogonal projection, onto a closed subspace $W_n$, which is the orthogonal complement of $V_n$ in $V_{n+1}$:
$$V_{n+1} = V_n \oplus W_n$$
$Q_n s$ is interpreted as the fine detail in $s$ at resolution $2^{-n}$.
An orthonormal basis of $W_0$ is generated from the integer translates of a single function vector $\psi \in L^2(\mathbb{R})$, called a multiwavelet function. Since $W_0 \subset V_1$, the multiwavelet function $\psi$ can be represented as
$$\psi(x) = \sqrt{2} \sum_n G_n \phi(2x - n)$$
for some coefficient matrices $G_n$. We have
$$L^2(\mathbb{R}) = \bigoplus_{n \in \mathbb{Z}} W_n$$
and $\{\psi_{j,nk} : 1 \le j \le r,\ n, k \in \mathbb{Z}\}$ forms an orthonormal basis for $L^2(\mathbb{R})$.

#### 2.2. Discrete Wavelet Transform

The Discrete Wavelet Transform (DWT) takes a function $s \in V_n$ for some $n$ and decomposes it into a coarser approximation at level $m < n$, plus the fine detail at the intermediate levels:
$$s = P_n s = P_m s + \sum_{k=m}^{n-1} Q_k s$$
It suffices to describe the step from level $n$ to level $n-1$.
Since the signal $s \in V_n = V_{n-1} \oplus W_{n-1}$, we can represent it by its coefficients $\{s_{nk}^*\} = \{\langle s, \phi_{nk} \rangle\}$, $\{d_{nk}^*\} = \{\langle s, \psi_{nk} \rangle\}$ as
$$s = \sum_k s_{nk}^* \phi_{nk} = \sum_j s_{n-1,j}^* \phi_{n-1,j} + \sum_j d_{n-1,j}^* \psi_{n-1,j}$$
We find that
$$s_{n-1,j} = \sum_k H_{k-2j}\, s_{nk}, \qquad d_{n-1,j} = \sum_k G_{k-2j}\, s_{nk}$$
If we interleave the coefficients at level $n-1$,
$$(sd)_{n-1} = \begin{pmatrix} \vdots \\ s_{n-1,0} \\ d_{n-1,0} \\ s_{n-1,1} \\ d_{n-1,1} \\ \vdots \end{pmatrix}$$
the DWT can be written as $(sd)_{n-1} = \Delta s_n$, where
$$\Delta = \begin{pmatrix} \ddots & \ddots & \ddots & & & \\ & T_0 & T_1 & T_2 & \cdots & \\ & & T_0 & T_1 & T_2 & \cdots \\ & & & \ddots & \ddots & \ddots \end{pmatrix}, \qquad T_k = \begin{pmatrix} H_{2k} & H_{2k+1} \\ G_{2k} & G_{2k+1} \end{pmatrix}$$
The matrix $\Delta$ is orthogonal. Signal reconstruction corresponds to $s_n = \Delta^* (sd)_{n-1}$.
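A finite section of $\Delta$ can be built and checked directly. The sketch below (Python with NumPy, scalar case $r = 1$ with the Daubechies $D_2$ filter; the periodic wrap-around and the high-pass choice $g_k = (-1)^k h_{3-k}$ are assumptions of the sketch, standing in for the endpoint blocks constructed later in the paper) verifies orthogonality and perfect reconstruction:

```python
import numpy as np

s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))
g = np.array([h[3], -h[2], h[1], -h[0]])   # g_k = (-1)^k h_{3-k}

# Blocks T_k = [[h_{2k}, h_{2k+1}], [g_{2k}, g_{2k+1}]]
T = [np.array([[h[2*k], h[2*k + 1]],
               [g[2*k], g[2*k + 1]]]) for k in range(2)]

m = 4                                      # number of block rows
Delta = np.zeros((2*m, 2*m))
for i in range(m):
    for k, Tk in enumerate(T):
        j = (i + k) % m                    # periodic wrap-around
        Delta[2*i:2*i + 2, 2*j:2*j + 2] += Tk

assert np.allclose(Delta @ Delta.T, np.eye(2*m))   # Delta is orthogonal
s = np.random.default_rng(0).standard_normal(2*m)
assert np.allclose(Delta.T @ (Delta @ s), s)       # perfect reconstruction
```

The block-Toeplitz banded structure is exactly the staggered $T_0, T_1$ pattern of the infinite matrix above.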

#### 2.3. Approximation Order

A multiscaling function $\phi$ has approximation order $p$ if all polynomials of degree less than $p$ can be expressed locally as linear combinations of integer shifts of $\phi$. That is, there exist row vectors $c_{jk}^*$, $j = 0, \ldots, p-1$, $k \in \mathbb{Z}$, so that
$$x^j = \sum_k c_{jk}^* \phi(x - k)$$
For orthogonal wavelets,
$$c_{jk}^* = \sum_{\ell=0}^{j} \binom{j}{\ell} k^{j-\ell} \mu_\ell^*$$
where $\mu_j$ is the $j$th continuous moment of $\phi$.
A high approximation order is desirable in applications. A minimum approximation order of 1 is a required condition in many theorems.
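For scalar orthogonal wavelets ($r = 1$), approximation order $p$ is equivalent to the sum rules $\sum_k (-1)^k k^j h_k = 0$ for $j < p$. A small numerical check for $D_2$ (a sketch; the moment $\mu_1$ is derived from the refinement equation, and $c_{1k} = k\mu_0 + \mu_1$ follows from the formula above):

```python
import numpy as np

s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))
k = np.arange(4)

# Sum rules: sum_k (-1)^k k^j h_k = 0 for j = 0, ..., p-1 (here p = 2)
for j in range(2):
    assert abs(np.sum((-1.0)**k * k**j * h)) < 1e-12

# First continuous moment from the refinement equation:
# mu_1 (1 - 1/2) = 2^{-2} sqrt(2) sum_k k h_k mu_0, with mu_0 = 1
mu1 = (np.sqrt(2) / 2) * np.sum(k * h)

# Row "vectors" c_{1k} = k mu_0 + mu_1 reproduce x locally
c1 = k + mu1
print(mu1)      # (3 - sqrt(3))/2 ~ 0.634
```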

## 3. Wavelets on an Interval

Standard wavelet theory only considers functions on the entire real line. In practice we often deal with functions on a finite interval I. One way to deal with this problem is to introduce special boundary basis functions. These functions need to be refinable in order to support a DWT algorithm.
The main approaches construct boundary functions from
- recursion relations; or
- linear combinations of shifts of the underlying scaling functions; or
- linear algebra techniques.
Details on these approaches, and the connections between them, will be given later in this section.
The linear combination approach has been the most commonly used technique (see e.g., [1,3,13,14]). The recursion relation approach, and its relationship to linear combinations, was studied in more detail in [8]. The linear algebra approach is used in [4,6].

#### 3.1. Basic Assumptions and Notation

We do not aim for complete generality but make the following simplifying assumptions, which cover most cases of practical interest.
The underlying multiwavelet is orthogonal, continuous, with multiplicity r and approximation order $p ≥ 1$, and has recursion coefficients $H 0 , … , H N$, $G 0 , … , G N$. This means the support of $ϕ$ and ψ is contained in the interval $[ 0 , N ]$.
The interval I is $[ 0 , M ]$ with M large enough so that the left and right endpoint functions do not interfere with each other.
The boundary functions have support on $[ 0 , N − 1 ]$ and $[ M − N + 1 , M ]$, respectively (that is, smaller support than the interior functions).
The interior multiscaling functions are those integer shifts of $ϕ$ whose support fits completely inside $[ 0 , M ]$. These are $ϕ 0 , … , ϕ M − N$, where $ϕ k ( x ) = ϕ ( x − k )$. All interior functions have value $0$ at the endpoints, by continuity.
The interior functions satisfy the usual recursion relations
$$\phi(x) = \sqrt{2} \sum_k H_k \phi(2x - k), \qquad \psi(x) = \sqrt{2} \sum_k G_k \phi(2x - k)$$
The boundary-crossing multiscaling functions are those shifts of $ϕ$ whose support contains 0 or M in its interior. At the left endpoint, these are $ϕ − N + 1$ through $ϕ − 1$.
The support of $ϕ$ could be strictly smaller than $[ 0 , N ]$. In this case, some of the functions that appear to be boundary-crossing are actually interior, but this causes no problems.
We assume that we have L left endpoint scaling functions, which we group together into a single vector $ϕ L$. We stress that we mean L scalar functions, not function vectors, and that L is not necessarily a multiple of r.
Likewise, we assume R right endpoint functions with support contained in $[ M − N + 1 , M ]$, grouped into a vector $ϕ R$, with recursion relations similar to (6). We will show later that L and R are uniquely determined by $ϕ$.
Most of the rest of the paper will only address the calculations at the left end in detail. Calculations at the right end are identical, with appropriate minor changes.

#### 3.2. Recursion Relations

The left endpoint functions satisfy recursion relations
$$\phi^L(x) = \sqrt{2}\, A\, \phi^L(2x) + \sqrt{2} \sum_{k=0}^{N-2} B_k\, \phi(2x - k), \qquad \psi^L(x) = \sqrt{2}\, E\, \phi^L(2x) + \sqrt{2} \sum_{k=0}^{N-2} F_k\, \phi(2x - k)$$
Here
$$A = \langle \phi^L_{n-1}, \phi^L_n \rangle, \quad B_k = \langle \phi^L_{n-1}, \phi_{n,k} \rangle, \quad E = \langle \psi^L_{n-1}, \phi^L_n \rangle, \quad F_k = \langle \psi^L_{n-1}, \phi_{n,k} \rangle$$
where $\phi^L_n(x) = 2^{n/2} \phi^L(2^n x)$. $A$ and $E$ are of size $L \times L$; $B_k$ and $F_k$ are of size $L \times r$.
The recursion relation approach constructs boundary functions by finding suitable recursion coefficients A, B.
We are interested in regular solutions of (6), that is, $\phi^L$ that are continuous, have approximation order at least 1, and satisfy $\phi^L(0) \ne 0$.
It is shown in [8] that a sufficient condition for regularity is that $A$ has a simple largest eigenvalue of $1/\sqrt{2}$, and that $\phi^L$ has approximation order at least 1. Conditions for verifying boundary approximation order $p$ are given in Section 6.

#### 3.3. Linear Combinations

If the boundary functions are linear combinations of boundary-crossing functions, then
$$\phi^L(x) = \sum_{k=-N+1}^{-1} C_k\, \phi(x - k), \qquad x \ge 0$$
Each $C_k$ is of size $L \times r$.
The linear combination approach to constructing boundary functions tries to construct a suitable $C$. A random choice of $C_k$ will not produce refinable functions. It is shown in [8] that if the boundary functions are both refinable and linear combinations, the coefficients must be related by
$$C V = A C, \qquad C W = B$$
where
$$B = \begin{pmatrix} B_0 & B_1 & \cdots & B_{N-2} \end{pmatrix}, \qquad C = \begin{pmatrix} C_{-N+1} & C_{-N+2} & \cdots & C_{-1} \end{pmatrix}$$
and $V$, $W$ are the block matrices of recursion coefficients
$$V = \left( H_{n-2k} \right)_{k,n = -N+1, \ldots, -1}, \qquad W = \left( H_{n-2k} \right)_{k = -N+1, \ldots, -1;\ n = 0, \ldots, N-2}$$
with $H_m = 0$ for $m < 0$ or $m > N$. Both $V$ and $W$ are of size $(N-1)r \times (N-1)r$.
Relations (8) are a kind of eigenvalue problem. For any given $ϕ$ there is only a small number of possibilities for boundary functions that are both refinable and linear combinations.
Note that the recursion relations are necessary. Without coefficients A, B there is no discrete wavelet transform. On the other hand, the existence of C as in (7) is optional. It was shown in [8] that there are regular refinable boundary functions that are not linear combinations.

#### 3.4. Linear Algebra

We can assume that $N = 2K+1$ is odd, by introducing an extra recursion coefficient $H_N = 0$ if necessary. The resulting structure for the decomposition matrix is, schematically,
$$\Delta_M = \begin{pmatrix} L_0 & L_1 & \cdots & L_K & & & \\ & T_0 & T_1 & \cdots & T_K & & \\ & & \ddots & & & \ddots & \\ & & & T_0 & T_1 & \cdots & T_K \\ & & & & R_0 & R_1 & \cdots \end{pmatrix}$$
This corresponds to a segment of the infinite matrix $\Delta$ in (2) with some endpoint modifications.
Here the $T_k$ are as in (2), and
$$L_0 = \begin{pmatrix} A \\ E \end{pmatrix}, \qquad L_k = \begin{pmatrix} B_{2k-2} & B_{2k-1} \\ F_{2k-2} & F_{2k-1} \end{pmatrix}, \quad k = 1, \ldots, K$$
with $A$, $B$, $E$, $F$ as in (6).
The linear algebra approach tries to construct suitable endpoint blocks $L k$, $R k$ by linear algebra methods.

#### 3.5. Uniqueness Results

We assume initially that there are only four recursion coefficients, and therefore only two block matrices
$$T_0 = \begin{pmatrix} H_0 & H_1 \\ G_0 & G_1 \end{pmatrix}, \qquad T_1 = \begin{pmatrix} H_2 & H_3 \\ G_2 & G_3 \end{pmatrix}$$
Since the infinite matrix $\Delta$ in (2) is orthogonal, we know that
$$T_0 T_0^* + T_1 T_1^* = I, \qquad T_0 T_1^* = 0$$
or equivalently
$$T_0^* T_0 + T_1^* T_1 = I, \qquad T_0^* T_1 = 0$$
These relations lead to some interesting properties that we use in the following. More detail is given in [8,15], but for convenience we give at least an outline of the proofs here.
Lemma 3.1 If $T_0$, $T_1$ are square matrices of size $2r \times 2r$ that satisfy relations (11), then
(a) $\rho_0 := \mathrm{rank}(T_0)$ and $\rho_1 := \mathrm{rank}(T_1)$ satisfy
$$\rho_0 + \rho_1 = 2r$$
(b) The ranges and nullspaces of $T_0$ and $T_1$ are mutually orthogonal and complementary, that is,
$$R(T_0) \oplus R(T_1) = \mathbb{R}^{2r}, \qquad N(T_0) \oplus N(T_1) = \mathbb{R}^{2r}$$
(c) There exist orthogonal matrices $U$, $V$ with
$$T_0 = U \begin{pmatrix} I_{\rho_0} & 0 \\ 0 & 0 \end{pmatrix} V^*, \qquad T_1 = U \begin{pmatrix} 0 & 0 \\ 0 & I_{\rho_1} \end{pmatrix} V^*$$
where $I_\rho$ denotes an identity matrix of size $\rho \times \rho$.
Proof.
(a) The first equation in (11) implies $ρ 0 + ρ 1 ≥ 2 r$. The second equation implies $ρ 0 + ρ 1 ≤ 2 r$.
(b) The relation $T 0 * T 1 = 0$ implies that $R ( T 1 )$ is contained in $N ( T 0 * ) = R ( T 0 ) ⊥$. The dimension count shows that they are identical.
(c) We start with separate singular value decompositions (SVD) of $T_0$ and $T_1$,
$$T_0 = U_0 \begin{pmatrix} \Sigma_0 & 0 \\ 0 & 0 \end{pmatrix} V_0^*, \qquad T_1 = U_1 \begin{pmatrix} 0 & 0 \\ 0 & \Sigma_1 \end{pmatrix} V_1^*$$
where the usual ordering of singular values has been reversed for $T_1$.
Since ranges and nullspaces are orthogonal, we can construct matrices $U$ and $V$ by taking the first $\rho_0$ columns of $U_0$, $V_0$ and the last $\rho_1$ columns of $U_1$, $V_1$, respectively, which provide a common SVD. The fact that $\Sigma_0 = I$, $\Sigma_1 = I$ follows from
$$(T_0 + T_1)(T_0^* + T_1^*) = U \begin{pmatrix} \Sigma_0^2 & 0 \\ 0 & \Sigma_1^2 \end{pmatrix} U^* = I$$
■

To investigate the possible sizes of the endpoint blocks in $\Delta_M$, it suffices to consider
$$\Delta_3 = \begin{pmatrix} L_0 & L_1 & 0 & 0 \\ 0 & T_0 & T_1 & 0 \\ 0 & 0 & R_0 & R_1 \end{pmatrix}$$
of size $6r \times 6r$. $\Delta_M$ for larger $M$ simply has more rows of $T_0$, $T_1$ in the middle.
Theorem 3.2 If $Δ 3$ is orthogonal and has the structure given in (14), then $L 0$, $L 1$, $R 0$, $R 1$ must have sizes $2 ρ 1 × ρ 1$, $2 ρ 1 × 2 r$, $2 ρ 0 × 2 r$, and $2 ρ 0 × ρ 0$, respectively.
Theorem 3.3 Assume that $Δ 3$, $Δ ˜ 3$ are two orthogonal matrices of form (14). Then there exist orthogonal matrices $Q L$, $Q R$ so that
$$\tilde{\Delta}_3 = \begin{pmatrix} Q_L & 0 & 0 \\ 0 & I & 0 \\ 0 & 0 & Q_R \end{pmatrix} \Delta_3$$
The proofs are given in [8].
If there are more than two matrices $T j$, we form block matrices. For example if we have $T 0 , … , T 3$, we use
$$\hat{T}_0 = \begin{pmatrix} T_0 & T_1 & T_2 \\ 0 & T_0 & T_1 \\ 0 & 0 & T_0 \end{pmatrix}, \qquad \hat{T}_1 = \begin{pmatrix} T_3 & 0 & 0 \\ T_2 & T_3 & 0 \\ T_1 & T_2 & T_3 \end{pmatrix}$$
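Lemma 3.1 can be checked numerically. The sketch below does this for the scalar $D_2$ wavelet (Python with NumPy; the high-pass choice $g_k = (-1)^k h_{3-k}$ is an assumption of the sketch, not fixed by the paper):

```python
import numpy as np

# Scalar D2 example (r = 1): verify Lemma 3.1 numerically.
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))
g = np.array([h[3], -h[2], h[1], -h[0]])
T0 = np.array([[h[0], h[1]], [g[0], g[1]]])
T1 = np.array([[h[2], h[3]], [g[2], g[3]]])

u0, s0, v0h = np.linalg.svd(T0)
u1, s1, v1h = np.linalg.svd(T1)
rho0 = int(np.sum(s0 > 1e-10))
rho1 = int(np.sum(s1 > 1e-10))
assert rho0 + rho1 == 2                   # (a): rho0 + rho1 = 2r

# (c): common SVD from the first rho0 columns of U0, V0 and the
# rho1 range columns of U1, V1; all singular values equal 1
U = np.column_stack([u0[:, :rho0], u1[:, :rho1]])
V = np.column_stack([v0h.T[:, :rho0], v1h.T[:, :rho1]])
assert np.allclose(U.T @ U, np.eye(2))    # U, V orthogonal (part (b))
assert np.allclose(V.T @ V, np.eye(2))
D0 = np.diag([1.0, 0.0])
D1 = np.diag([0.0, 1.0])
assert np.allclose(U @ D0 @ V.T, T0)
assert np.allclose(U @ D1 @ V.T, T1)
```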

## 4. Madych Approach for Scalar Wavelets

This is a particular implementation of the matrix completion approach for scalar orthogonal wavelets. Madych [4] started from a periodized version of the infinite matrix (2), and modified it into the desired form by orthogonal matrices. We will show in this section that this also works for orthogonal multiwavelets, given an additional condition.
To explain the Madych algorithm, and simultaneously extend it to multiwavelets, we again assume initially that we only have $T 0$ and $T 1$, and begin with
$$\tilde{\Delta}_3 = \begin{pmatrix} T_0 & T_1 & 0 \\ 0 & T_0 & T_1 \\ T_1 & 0 & T_0 \end{pmatrix}$$
Our objective is to find orthogonal matrices U, V so that
$$\Delta_3 = U \tilde{\Delta}_3 V = \begin{pmatrix} L_0 & L_1 & 0 & 0 \\ 0 & T_0 & T_1 & 0 \\ 0 & 0 & R_0 & R_1 \end{pmatrix}$$
has the desired structure and is orthogonal.
We let
$$S_0 = \begin{pmatrix} H_0 & H_1 \end{pmatrix}, \quad S_1 = \begin{pmatrix} H_2 & H_3 \end{pmatrix}, \quad W_0 = \begin{pmatrix} G_0 & G_1 \end{pmatrix}, \quad W_1 = \begin{pmatrix} G_2 & G_3 \end{pmatrix}$$
From (11) it follows that
$$T_0 S_1^* = T_0 W_1^* = T_1 S_0^* = T_1 W_0^* = 0$$
Definition 4.1 A multiscaling function based on four recursion coefficients satisfies Condition M if
$$\begin{pmatrix} S_0 \\ S_1 \end{pmatrix} = \begin{pmatrix} H_0 & H_1 \\ H_2 & H_3 \end{pmatrix}$$
has full rank.
This condition is automatic in the scalar case, because $S 0$ and $S 1$ are non-zero orthogonal row vectors satisfying $S 0 S 1 * = 0$.
For multiwavelets this condition may not hold. For example, the Chui–Lian CL(2) multiwavelet [16] has only three recursion coefficients
$$H_0 = \frac{1}{4\sqrt{2}} \begin{pmatrix} 2 & 2 \\ -\sqrt{7} & -\sqrt{7} \end{pmatrix}, \qquad H_1 = \frac{1}{4\sqrt{2}} \begin{pmatrix} 4 & 0 \\ 0 & 2 \end{pmatrix}, \qquad H_2 = \frac{1}{4\sqrt{2}} \begin{pmatrix} 2 & -2 \\ \sqrt{7} & -\sqrt{7} \end{pmatrix}$$
and $S 1$ only has rank 1.
The row spans of $S 0$, $S 1$ are mutually orthogonal, so we can orthonormalize them separately. We can find $r × r$ nonsingular matrices $R 0$, $R 1$ so that
$$\begin{pmatrix} R_0 S_0 \\ R_1 S_1 \end{pmatrix}$$
is an orthogonal matrix.
We now let
$$V = \begin{pmatrix} (R_0 S_0)^* & 0 & (R_1 S_1)^* \\ 0 & I & 0 \end{pmatrix}$$
then
$$\tilde{\Delta}_3 V = \begin{pmatrix} T_0 (R_0 S_0)^* & T_1 & 0 & T_0 (R_1 S_1)^* \\ 0 & T_0 & T_1 & 0 \\ T_1 (R_0 S_0)^* & 0 & T_0 & T_1 (R_1 S_1)^* \end{pmatrix} = \begin{pmatrix} T_0 (R_0 S_0)^* & T_1 & 0 & 0 \\ 0 & T_0 & T_1 & 0 \\ 0 & 0 & T_0 & T_1 (R_1 S_1)^* \end{pmatrix}$$
This already has the desired form. Multiply $Δ ˜ 3 V$ from the left by
$$U = \begin{pmatrix} U_L & 0 & 0 \\ 0 & I & 0 \\ 0 & 0 & U_R \end{pmatrix}$$
where $U L$, $U R$ are arbitrary orthogonal matrices, to obtain
$$\Delta_3 = U \tilde{\Delta}_3 V = \begin{pmatrix} L_0 & L_1 & 0 & 0 \\ 0 & T_0 & T_1 & 0 \\ 0 & 0 & R_0 & R_1 \end{pmatrix}$$
with
$$L_0 = U_L T_0 S_0^* R_0^*, \qquad L_1 = U_L T_1, \qquad R_0 = U_R T_0, \qquad R_1 = U_R T_1 S_1^* R_1^*$$
This completes the algorithm in the case of up to four recursion coefficients. In the general case we form block matrices again, as in (16).
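A minimal sketch of the whole Madych construction for the scalar $D_2$ wavelet (Condition M holds automatically in the scalar case; taking $U_L = U_R = I$ and the high-pass filter $g_k = (-1)^k h_{3-k}$ are arbitrary choices of the sketch):

```python
import numpy as np

s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))
g = np.array([h[3], -h[2], h[1], -h[0]])
T0 = np.array([[h[0], h[1]], [g[0], g[1]]])
T1 = np.array([[h[2], h[3]], [g[2], g[3]]])

S0, S1 = h[:2], h[2:]                  # S0 = (H0 H1), S1 = (H2 H3)
rs0 = S0 / np.linalg.norm(S0)          # R0 S0 (rows orthonormalized)
rs1 = S1 / np.linalg.norm(S1)          # R1 S1

# V = [[(R0 S0)*, 0, (R1 S1)*], [0, I, 0]]  (column widths 1, 4, 1)
V = np.zeros((6, 6))
V[0:2, 0] = rs0
V[0:2, 5] = rs1
V[2:6, 1:5] = np.eye(4)

Dt = np.zeros((6, 6))                  # periodized Delta~3 as in (17)
Dt[0:2, 0:2], Dt[0:2, 2:4] = T0, T1
Dt[2:4, 2:4], Dt[2:4, 4:6] = T0, T1
Dt[4:6, 4:6], Dt[4:6, 0:2] = T0, T1

D3 = Dt @ V                            # U_L = U_R = I
assert np.allclose(D3 @ D3.T, np.eye(6))           # orthogonal
assert np.allclose(D3[0:2, 3:6], 0)                # (L0 L1 0 0) row
assert np.allclose(D3[4:6, 0:3], 0)                # (0 0 R0 R1) row
assert np.allclose(D3[2:4, 1:5], np.hstack([T0, T1]))
```

The zero patterns confirm that the single right-multiplication by $V$ already produces the desired endpoint structure.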

## 5. A New Approach to Multiwavelet Endpoint Modification

For a more general algorithm, we again begin with $Δ ˜ 3$ as in (17).
Let U and V be the orthogonal matrices from the joint SVD of $T 0$ and $T 1$ in (13). Consider
$$U_3^* = \begin{pmatrix} U^* & 0 & 0 \\ 0 & U^* & 0 \\ 0 & 0 & U^* \end{pmatrix}, \qquad V_3 = \begin{pmatrix} V & 0 & 0 \\ 0 & V & 0 \\ 0 & 0 & V \end{pmatrix}$$
We obtain
$$U_3^* \tilde{\Delta}_3 V_3 = \begin{pmatrix} U^* T_0 V & U^* T_1 V & 0 \\ 0 & U^* T_0 V & U^* T_1 V \\ U^* T_1 V & 0 & U^* T_0 V \end{pmatrix}$$
where, by (13), $U^* T_0 V = \begin{pmatrix} I_{\rho_0} & 0 \\ 0 & 0 \end{pmatrix}$ and $U^* T_1 V = \begin{pmatrix} 0 & 0 \\ 0 & I_{\rho_1} \end{pmatrix}$.
By inspection, a technique that produces the correct structure is to move the first $\rho_0$ columns to the end, and then interchange the first $\rho_0$ rows with the last $\rho_1$ rows. That amounts to multiplying from the right with
$$P_R = \begin{pmatrix} 0 & I_{\rho_0} \\ I_{6r-\rho_0} & 0 \end{pmatrix}$$
and from the left with
$$P_L = \begin{pmatrix} 0 & 0 & I_{\rho_0} \\ 0 & I_{4r} & 0 \\ I_{\rho_1} & 0 & 0 \end{pmatrix}$$
so that
$$P_L U_3^* \tilde{\Delta}_3 V_3 P_R = \begin{pmatrix} I_{\rho_1} & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & I_{\rho_1} & 0 & 0 & 0 \\ 0 & I_{\rho_0} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & I_{\rho_1} & 0 \\ 0 & 0 & 0 & I_{\rho_0} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & I_{\rho_0} \end{pmatrix}$$
Now let $Q L$, $Q R$ be arbitrary orthogonal matrices of size $2 ρ 1 × 2 ρ 1 , 2 ρ 0 × 2 ρ 0$, respectively. We multiply (19) from the left with
$$U_L = \begin{pmatrix} Q_L & 0 & 0 \\ 0 & U & 0 \\ 0 & 0 & Q_R \end{pmatrix}$$
and from the right with
$$U_R = \begin{pmatrix} I_{\rho_1} & 0 & 0 & 0 \\ 0 & V^* & 0 & 0 \\ 0 & 0 & V^* & 0 \\ 0 & 0 & 0 & I_{\rho_0} \end{pmatrix}$$
then
$$\Delta_3 = U_L P_L U_3^* \tilde{\Delta}_3 V_3 P_R U_R = \begin{pmatrix} L_0 & L_1 & 0 & 0 \\ 0 & T_0 & T_1 & 0 \\ 0 & 0 & R_0 & R_1 \end{pmatrix}$$
where
$$L_0 = Q_{L,0}, \qquad L_1 = Q_{L,1} V_1^*, \qquad R_0 = Q_{R,0} V_0^*, \qquad R_1 = Q_{R,1}$$
Here
$$Q_L = \begin{pmatrix} Q_{L,0} & Q_{L,1} \end{pmatrix}, \qquad Q_R = \begin{pmatrix} Q_{R,0} & Q_{R,1} \end{pmatrix}, \qquad V = \begin{pmatrix} V_0 & V_1 \end{pmatrix}$$
with $Q L , 0$, $Q L , 1$ of size $2 ρ 1 × ρ 1$, $Q R , 0$, $Q R , 1$ of size $2 ρ 0 × ρ 0$, and $V 0$, $V 1$ of size $2 r × ρ 0$, $2 r × ρ 1$, respectively.
There is one important observation that follows from Theorem 3.3. Suppose that $Q ˜ L$, $Q ˜ R$ are orthogonal matrices, and instead of (15) we define
$$\tilde{\Delta}_3 = \Delta_3 \begin{pmatrix} \tilde{Q}_L & 0 & 0 \\ 0 & I & 0 \\ 0 & 0 & \tilde{Q}_R \end{pmatrix}$$
Then $Δ ˜ 3$ has the form (14) and is orthogonal. By Theorem 3.3, there are other orthogonal matrices $Q L$, $Q R$ so that the same $Δ ˜ 3$ can also be written in the form (15). For this reason, it is not necessary to consider arbitrary orthogonal components in the matrix $U R$.
As before, in the case of more than four recursion coefficients we apply the algorithm to block matrices.
If we reconsider Madych's approach in our notation, we find that it amounts to moving the second set of columns in (18) to the end, so that we end up with
$$\begin{pmatrix} I_{\rho_0} & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & I_{\rho_1} & 0 & 0 & 0 \\ 0 & I_{\rho_0} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & I_{\rho_1} & 0 \\ 0 & 0 & 0 & I_{\rho_0} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & I_{\rho_1} \end{pmatrix}$$
This only yields matrices of the correct size if $\rho_0 = \rho_1$, which is equivalent to Condition M.
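The complete Section 5 procedure can be traced numerically. The sketch below does this for the scalar $D_2$ wavelet (Python with NumPy; $Q_L = Q_R = I$ is one arbitrary choice of the free parameters, and $g_k = (-1)^k h_{3-k}$ is an assumed high-pass filter):

```python
import numpy as np

s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))
g = np.array([h[3], -h[2], h[1], -h[0]])
T0 = np.array([[h[0], h[1]], [g[0], g[1]]])
T1 = np.array([[h[2], h[3]], [g[2], g[3]]])

# Joint SVD (Lemma 3.1): T0 = U diag(I,0) V*, T1 = U diag(0,I) V*
u0, s0, v0h = np.linalg.svd(T0)
u1, s1, v1h = np.linalg.svd(T1)
rho0 = int(np.sum(s0 > 1e-10))
rho1 = 2 - rho0
U = np.column_stack([u0[:, :rho0], u1[:, :rho1]])
V = np.column_stack([v0h.T[:, :rho0], v1h.T[:, :rho1]])

Dt = np.zeros((6, 6))                  # periodized Delta~3 as in (17)
Dt[0:2, 0:2], Dt[0:2, 2:4] = T0, T1
Dt[2:4, 2:4], Dt[2:4, 4:6] = T0, T1
Dt[4:6, 4:6], Dt[4:6, 0:2] = T0, T1
M = np.kron(np.eye(3), U).T @ Dt @ np.kron(np.eye(3), V)   # matrix (18)

# P_R: move the first rho0 columns to the end;
# P_L: swap the first rho0 rows with the last rho1 rows
M = np.hstack([M[:, rho0:], M[:, :rho0]])
M = np.vstack([M[-rho1:, :], M[rho0:-rho1, :], M[:rho0, :]])

# Undo U, V in the interior; take Q_L = Q_R = I (the free parameters)
UL = np.eye(6)
UL[2*rho1:2*rho1 + 2, 2*rho1:2*rho1 + 2] = U
UR = np.eye(6)
UR[rho1:rho1 + 2, rho1:rho1 + 2] = V.T
UR[rho1 + 2:rho1 + 4, rho1 + 2:rho1 + 4] = V.T
D3 = UL @ M @ UR

assert np.allclose(D3 @ D3.T, np.eye(6))               # orthogonal
assert np.allclose(D3[2:4, 1:5], np.hstack([T0, T1]))  # interior intact
assert np.allclose(D3[0:2, 3:6], 0)
assert np.allclose(D3[4:6, 0:3], 0)
```

Unlike the Madych completion, nothing here requires $\rho_0 = \rho_1$; the permutations adapt to the actual ranks.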

## 6. Imposing Regularity Conditions

The algorithm in this section will only be described in detail for the left boundary. Calculations at the right boundary work the same way.
Given a completion $Δ M$ we can read off the recursion coefficients A, B from $L 0$, $L 1$. During numerical experiments with the algorithm described above we discovered that a random choice for the matrix $Q L$ did not usually correspond to actual boundary functions. This section will describe how to make a canonical choice for $Q L$ which produces recursion coefficients that correspond to regular boundary functions, and which imposes some approximation orders. The groundwork for this was laid in [8].

#### 6.1. Approximation Order for Boundary Functions

In [8] it was shown that if the interior multiscaling function $\phi$ has approximation order $\ge p$, then approximation order $p$ for the boundary scaling functions is equivalent to the existence of row vectors $\ell_j^*$, $j = 0, \ldots, p-1$, so that
$$\ell_j^* \sqrt{2} A = 2^{-j} \ell_j^*, \qquad \ell_j^* \sqrt{2} B_m = \gamma_{jm}^*, \quad m = 0, \ldots, N-2$$
where the $\gamma_{jm}^*$ are known row vectors:
$$\gamma_{jm}^* = 2^{-j} c_{jm}^* - \sqrt{2} \sum_{k=0}^{\lfloor m/2 \rfloor} c_{jk}^* H_{m-2k}$$
Here $\lfloor x \rfloor$ denotes the greatest integer $\le x$, and the $c_{jk}^*$ are defined in (3).
If we let
$$\gamma_j^* = \begin{pmatrix} \gamma_{j0}^* & \cdots & \gamma_{j,N-2}^* \end{pmatrix}, \qquad \Gamma = \begin{pmatrix} \gamma_0^* \\ \vdots \\ \gamma_{p-1}^* \end{pmatrix}, \qquad \Lambda = \begin{pmatrix} \ell_0^* \\ \vdots \\ \ell_{p-1}^* \end{pmatrix}$$
the approximation order conditions can be written as
$$\Lambda \sqrt{2} A = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1/2 & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & 2^{-p+1} \end{pmatrix} \Lambda, \qquad \Lambda \sqrt{2} B = \Gamma$$
After applying the algorithm from Section 5 with initial multiplier $Q_L = I$, we end up with
$$\begin{pmatrix} L_0 & L_1 \end{pmatrix} = \begin{pmatrix} I & 0 \\ 0 & V_1^* \end{pmatrix}$$
with $V_1$ as in (20). After pre-multiplying by a general $Q_L$, we get
$$Q_L \begin{pmatrix} L_0 & L_1 \end{pmatrix} = \begin{pmatrix} Q_{11} & Q_{12} \\ Q_{21} & Q_{22} \end{pmatrix} \begin{pmatrix} I & 0 \\ 0 & V_1^* \end{pmatrix} = \begin{pmatrix} Q_{11} & Q_{12} V_1^* \\ Q_{21} & Q_{22} V_1^* \end{pmatrix} = \begin{pmatrix} A & B \\ E & F \end{pmatrix}$$
so $A = Q_{11}$, $B = Q_{12} V_1^*$.
The second condition in (22) is
$$\Lambda \sqrt{2} B = \Lambda \sqrt{2}\, Q_{12} V_1^* = \Gamma$$
We recall that $V_1^* V_1 = I$ and multiply by $V_1$ to get
$$\Lambda \sqrt{2}\, Q_{12} = \Lambda \sqrt{2}\, Q_{12} V_1^* V_1 = \Gamma V_1 =: G$$
where G is a known matrix with rows $g j *$.
Conditions (22) reduce to
$$\Lambda \sqrt{2}\, Q_{11} = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1/2 & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & 2^{-p+1} \end{pmatrix} \Lambda, \qquad \Lambda \sqrt{2}\, Q_{12} = G, \qquad Q_{11} Q_{11}^* + Q_{12} Q_{12}^* = I$$

#### 6.2. Simplifying the Problem

It is easy to see that if we replace the boundary function vector $ϕ L$ by $M ϕ L$ for an invertible matrix M, the new boundary functions still span the same space, and remain refinable. They also remain orthogonal if M is orthogonal.
The effect of M on the coefficients A, B, C and the matrix Λ is
$$A \to \tilde{A} = M A M^{-1}, \qquad B \to \tilde{B} = M B, \qquad C \to \tilde{C} = M C, \qquad \Lambda \to \tilde{\Lambda} = \Lambda M^{-1}$$
The key observation is that by using a suitable M, we can assume that Λ is lower triangular.

#### 6.3. Deriving the Algorithm

We will now satisfy conditions (23) row by row.
Note: The vectors $ℓ j *$, $γ j *$, $g j *$ are naturally numbered $j = 0 , … , p − 1$. To keep the notation readable, in this section we will number the elements of vectors and matrices starting with index 0 instead of 1.
We start with row 0. The assumptions $\ell_0^* = (l_{00}, 0, \ldots, 0)$ and $\ell_0^* (\sqrt{2}\, Q_{11}) = \ell_0^*$ lead to
$$Q_{11} = \begin{pmatrix} 1/\sqrt{2} & 0 & \cdots & 0 \\ * & \cdots & \cdots & * \\ \vdots & & & \vdots \\ * & \cdots & \cdots & * \end{pmatrix}$$
where * denotes as yet undetermined entries.
The condition
$$\ell_0^* \sqrt{2}\, Q_{12} = g_0^*$$
leads to
$$Q_{12} = \begin{pmatrix} \frac{1}{\sqrt{2}\, l_{00}}\, g_0^* \\ * \end{pmatrix}$$
The 00-entry of $Q_{11} Q_{11}^* + Q_{12} Q_{12}^*$ is
$$\frac{1}{2} + \frac{1}{2 l_{00}^2} \| g_0^* \|^2 = 1$$
so
$$l_{00} = \pm \| g_0^* \|$$
It follows from the results in [8] that $l 00 ≠ 0$.
We have found the 0th row of $Q L$, as well as $ℓ 0 *$.
For the induction step, assume that rows $0, \ldots, k-1$ of $\Lambda$, $Q_L$ have already been determined. We partition the matrix $Q_L$ as follows:
$$\begin{pmatrix} Q_{11} & Q_{12} \end{pmatrix} = \begin{pmatrix} Q_{11,k} & 0 & 0 & Q_{12,k} \\ \alpha_k^* & q_{kk} & \zeta_k^* & \beta_k^* \\ * & * & * & * \end{pmatrix}$$
where $Q_{11,k}$ is lower triangular, and likewise
$$\ell_k^* = \begin{pmatrix} l_{k0} & \cdots & l_{k,k-1} & l_{kk} & 0 & \cdots & 0 \end{pmatrix} = \begin{pmatrix} \lambda_k^* & l_{kk} & 0^* \end{pmatrix}$$
Note that
$$Q_{11,k} Q_{11,k}^* + Q_{12,k} Q_{12,k}^* = I$$
by induction. We now satisfy the conditions for the kth row in (23) one by one.
Assuming $l_{kk} \ne 0$, the condition $\ell_k^* (\sqrt{2}\, Q_{11}) = 2^{-k} \ell_k^*$ leads to
$$\zeta_k^* = 0^*, \qquad \alpha_k^* = \frac{1}{\sqrt{2}\, l_{kk}}\, \lambda_k^* \left( 2^{-k} I - \sqrt{2}\, Q_{11,k} \right), \qquad q_{kk} = 2^{-k-1/2}$$
The condition $\ell_k^* (\sqrt{2}\, Q_{12}) = g_k^*$ leads to
$$\beta_k^* = \frac{1}{\sqrt{2}\, l_{kk}} \left( g_k^* - \sqrt{2}\, \lambda_k^* Q_{12,k} \right)$$
The new row $k$ of $Q$ must be orthogonal to the rows of $\begin{pmatrix} Q_{11,k} & 0 & 0 & Q_{12,k} \end{pmatrix}$:
$$0^* = \begin{pmatrix} \alpha_k^* & q_{kk} & 0^* \end{pmatrix} \begin{pmatrix} Q_{11,k}^* \\ 0 \\ 0 \end{pmatrix} + \beta_k^* Q_{12,k}^* = \alpha_k^* Q_{11,k}^* + \beta_k^* Q_{12,k}^*$$
We substitute (24) and (25) and simplify to get
$$\lambda_k^* = g_k^* Q_{12,k}^* \left( \sqrt{2}\, I - 2^{-k} Q_{11,k}^* \right)^{-1}$$
Note that the matrix in parentheses is upper triangular with non-zero diagonal entries, and therefore non-singular.
Finally, we normalize the new row $k$:
$$1 = \alpha_k^* \alpha_k + q_{kk}^2 + \beta_k^* \beta_k$$
which gives
$$l_{kk}^2 = \frac{1}{2 \left( 1 - 2^{-2k-1} \right)} \left[ \lambda_k^* \left( 2^{-k} I - \sqrt{2}\, Q_{11,k} \right) \left( 2^{-k} I - \sqrt{2}\, Q_{11,k} \right)^* \lambda_k + \left( g_k^* - \sqrt{2}\, \lambda_k^* Q_{12,k} \right) \left( g_k - \sqrt{2}\, Q_{12,k}^* \lambda_k \right) \right]$$
The actual calculations go in the following order: First we calculate $λ k *$ and $l k k$ from (26) and (27). Then we can calculate $α k *$, $β k *$ and $q k k$ from (24) and (25). Everything is unique, except for the choice of sign in $l k k$.
The only way in which the algorithm could fail is if $l k k = 0$. This is conceivable, but has not been observed in practice.
With this algorithm we can impose boundary approximation orders up to the number of boundary functions L, or the interior approximation order p, whichever is smaller. If $L < p$, the boundary functions will have a lower approximation order than the interior functions. If $L > p$, only the top p rows of A, B will be determined, and we have to make some arbitrary choices about the rest.
This will be illustrated by examples in the next section.

## 7. Examples

To keep the subscripting manageable, we assume that for left endpoint calculations we are working on the interval $[0, M]$. For right endpoint calculations we instead use $[-M, 0]$. The right-endpoint formulas corresponding to (6) and (7) are
$$\phi^R(x) = \sqrt{2}\, Z\, \phi^R(2x) + \sqrt{2} \sum_{k=0}^{N-2} Y_k\, \phi(2x+k), \qquad \phi^R(x) = \sum_{k=1}^{N-1} X_k\, \phi(x+k), \quad x \le 0$$
The examples only show the calculations for $\phi^L$, $\phi^R$. The recursion coefficients for the wavelet functions $\psi^L$, $\psi^R$ are found by completing the orthogonal matrices $Q_L$ and $Q_R$.

#### 7.1. Example 1: Daubechies $D 2$

As a very simple scalar example we consider the Daubechies scaling functions with two vanishing moments.
The recursion coefficients are
$$h_0 = \frac{1+\sqrt{3}}{4\sqrt{2}}, \qquad h_1 = \frac{3+\sqrt{3}}{4\sqrt{2}}, \qquad h_2 = \frac{3-\sqrt{3}}{4\sqrt{2}}, \qquad h_3 = \frac{1-\sqrt{3}}{4\sqrt{2}}$$
The approximation order is 2, but there is only a single boundary function at each end. We can impose approximation order 1 at each end.
At the left end, we find
$$a = \frac{\sqrt{2}}{2}, \qquad b^* = \left( \frac{\sqrt{6}}{4},\ -\frac{\sqrt{2}}{4} \right), \qquad c^* = \left( \sqrt{3}+1,\ \sqrt{3}+1 \right)$$
At the right end, we get
$$z = \frac{\sqrt{2}}{2}, \qquad y^* = \left( \frac{\sqrt{6}}{4},\ \frac{\sqrt{2}}{4} \right), \qquad x^* = \left( \sqrt{3}-1,\ \sqrt{3}-1 \right)$$
These are the same boundary functions derived in [8]. They are shown in Figure 1.
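These values can be reproduced by the first step (row 0) of the Section 6 algorithm. The sketch below (Python with NumPy) assumes the joint-SVD block $V_1$ is computed from $T_1$ and the high-pass filter is $g_k = (-1)^k h_{3-k}$; signs are only determined up to $\pm$:

```python
import numpy as np

s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))
g = np.array([h[3], -h[2], h[1], -h[0]])
T1 = np.array([[h[2], h[3]], [g[2], g[3]]])

# V1 = the rho1 = 1 range column(s) of the joint SVD, as in (20)
_, _, v1h = np.linalg.svd(T1)
V1 = v1h.T[:, :1]

# gamma_{0m} = c_{0m} - sqrt(2) sum_{k <= m/2} c_{0k} h_{m-2k}, c_{0k} = mu_0 = 1
gamma0 = np.array([1 - np.sqrt(2) * h[0], 1 - np.sqrt(2) * h[1]])
g0 = gamma0 @ V1                        # G = Gamma V1 (a scalar here)

l00 = np.linalg.norm(g0)                # l_00 = +-||g_0||
a = 1 / np.sqrt(2)                      # forced by ell_0* sqrt(2) Q_11 = ell_0*
b = (g0 / (np.sqrt(2) * l00)) @ V1.T    # B = Q_12 V_1*

assert np.allclose(abs(a), np.sqrt(2) / 2)
assert np.allclose(np.abs(b), [np.sqrt(6) / 4, np.sqrt(2) / 4])
# the boundary approximation order condition (21) holds:
assert np.allclose(np.sqrt(2) * l00 * b, gamma0)
```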
Figure 1. Boundary functions for $D 2$ with approximation order 1, at left and right ends.

#### 7.2. Example 2: CL(3) Multiwavelet

The Chui–Lian multiwavelet CL(3) [16] has recursion coefficients
$$H_0 = \frac{1}{40\sqrt{2}} \begin{pmatrix} 10-3\sqrt{10} & 5\sqrt{6}-2\sqrt{15} \\ 5\sqrt{6}-3\sqrt{15} & 5-3\sqrt{10} \end{pmatrix}, \qquad H_1 = \frac{1}{40\sqrt{2}} \begin{pmatrix} 30+3\sqrt{10} & 5\sqrt{6}-2\sqrt{15} \\ -5\sqrt{6}-7\sqrt{15} & 15-3\sqrt{10} \end{pmatrix}$$
$$H_2 = \frac{1}{40\sqrt{2}} \begin{pmatrix} 30+3\sqrt{10} & -5\sqrt{6}+2\sqrt{15} \\ 5\sqrt{6}+7\sqrt{15} & 15-3\sqrt{10} \end{pmatrix}, \qquad H_3 = \frac{1}{40\sqrt{2}} \begin{pmatrix} 10-3\sqrt{10} & -5\sqrt{6}+2\sqrt{15} \\ -5\sqrt{6}+3\sqrt{15} & 5-3\sqrt{10} \end{pmatrix}$$
It has approximation order 3.
Here $ρ 0 = ρ 1 = 2$, so we need a vector of two boundary functions at each end. We can impose approximation order 2.
The original completion, restricted to the left endpoint blocks, is
$$\begin{pmatrix} L_0 & L_1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & -\frac{\sqrt{6}}{8}-\frac{7\sqrt{15}}{40} & \frac{3\sqrt{10}}{40}-\frac{3}{8} & \frac{\sqrt{6}}{8}-\frac{3\sqrt{15}}{40} & \frac{3\sqrt{10}}{40}-\frac{1}{8} \\ 0 & 0 & \frac{3\sqrt{10}}{40}-\frac{3}{8} & \frac{\sqrt{6}}{8}+\frac{7\sqrt{15}}{40} & \frac{1}{8}-\frac{3\sqrt{10}}{40} & \frac{\sqrt{6}}{8}-\frac{3\sqrt{15}}{40} \end{pmatrix}$$
After applying the algorithm in Section 6, we find that the top two rows of $Q_L$ must be
$$Q_L = \begin{pmatrix} \frac{\sqrt{2}}{2} & 0 & -\frac{\sqrt{30}}{8} & -\frac{\sqrt{2}}{8} \\ -\sqrt{\frac{1559+24\sqrt{10}}{3848}} & \frac{\sqrt{2}}{4} & -\sqrt{\frac{7041-36\sqrt{10}}{15392}} & -\sqrt{\frac{191-60\sqrt{10}}{15392}} \\ * & * & * & * \\ * & * & * & * \end{pmatrix}$$
$$A = \begin{pmatrix} 0.7071 & 0 \\ -0.6518 & 0.3536 \end{pmatrix}, \qquad B = \begin{pmatrix} 0.6980 & -0.0796 & 0.0091 & -0.0796 \\ 0.6613 & 0.0835 & -0.0095 & -0.0754 \end{pmatrix}$$
Only the numerical approximations of A, B are shown, since the exact expressions get very messy.
This is the same solution found in [8]. The boundary functions at the right end are the left end functions reversed, due to the symmetry of CL(3). The graphs are shown in Figure 2.
Figure 2. Boundary functions for CL(3) with approximation order 2, at left and right ends.

#### 7.3. Example 3: DGHM Multiwavelet

The Donovan-Geronimo-Hardin-Massopust multiwavelet [17] has approximation order 2 with recursion coefficients
$$H_0 = \frac{1}{20\sqrt{2}} \begin{pmatrix} 12 & 16\sqrt{2} \\ -\sqrt{2} & -6 \end{pmatrix}, \qquad H_1 = \frac{1}{20\sqrt{2}} \begin{pmatrix} 12 & 0 \\ 9\sqrt{2} & 20 \end{pmatrix}, \qquad H_2 = \frac{1}{20\sqrt{2}} \begin{pmatrix} 0 & 0 \\ 9\sqrt{2} & -6 \end{pmatrix}, \qquad H_3 = \frac{1}{20\sqrt{2}} \begin{pmatrix} 0 & 0 \\ -\sqrt{2} & 0 \end{pmatrix}$$
Its support is $[ 0 , 2 ]$ instead of the expected $[ 0 , 3 ]$, which causes some interesting effects.
We find $ρ 0 = 3$, $ρ 1 = 1$, so we have only a single boundary function at the left end, but three at the right end.
At the left end we can only enforce approximation order 1. The boundary scaling function found by our algorithm is the same one as in [8].
At the right end we can enforce approximation order 2. This determines two of the boundary functions, corresponding to the orthonormalized combination of the first two basic solutions from [8]. The third function (determined by the third row of $Q_R$) is arbitrary, except that the diagonal entry in $Z$ should be smaller than $\sqrt{2}/2$ to ensure that the completion is regular.
As an example, we set the third column of Z to zero, and let the QR-factorization built into MATLAB choose an appropriate third row of Y. The result was
$$Z = \begin{pmatrix} 0.7071 & 0 & 0 \\ -0.6250 & 0.3536 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \qquad Y = \begin{pmatrix} 0.4477 & 0.3015 & 0.2345 & 0.3920 \\ 0.3279 & 0.0533 & 0.2902 & 0.5383 \\ -0.3711 & 0.2189 & 0.8638 & -0.2613 \end{pmatrix}$$
The graphs are shown in Figure 3.
Figure 3. Boundary functions for DGHM. The single left boundary function has approximation order 1. Three right boundary functions provide approximation order 2.

## 8. Summary

Boundary functions for wavelets on a finite interval can be constructed from linear combinations of boundary-crossing internal functions, or by finding recursion coefficients, or by linear algebra techniques. The problem with the linear algebra approach is that it involves multiplication by arbitrary orthogonal matrices. A random choice leads to boundary recursion coefficients that produce an invertible transform but do not correspond to any actual boundary functions. Also, these coefficients do not provide any approximation orders.
In this paper, we present a linear algebra algorithm that works for all multiwavelets and removes most or all of the non-uniqueness while ensuring maximum possible boundary approximation orders.

## References

1. Andersson, L.; Hall, N.; Jawerth, B.; Peters, G. Wavelets on Closed Subsets of the Real Line. In Recent Advances in Wavelet Analysis; Academic Press: Boston, MA, USA, 1994; Volume 3, pp. 1–61. [Google Scholar]
2. Chui, C.K.; Quak, E. Wavelets on a Bounded Interval. In Numerical Methods in Approximation Theory; Birkhäuser: Basel, 1992; Volume 105, pp. 53–75. [Google Scholar]
3. Cohen, A.; Daubechies, I.; Vial, P. Wavelets on the interval and fast wavelet transforms. Appl. Comput. Harmon. Anal. 1993, 1, 54–81. [Google Scholar] [CrossRef] [Green Version]
4. Madych, W.R. Finite orthogonal transforms and multiresolution analyses on intervals. J. Fourier Anal. Appl. 1997, 3, 257–294. [Google Scholar] [CrossRef]
5. Brislawn, C. Classification of nonexpansive symmetric extension transforms for multirate filter banks. Appl. Comput. Harmon. Anal. 1996, 3, 337–357. [Google Scholar] [CrossRef]
6. Strang, G.; Nguyen, T. Wavelets and Filter Banks; Wellesley-Cambridge Press: Wellesley, MA, USA, 1996. [Google Scholar]
7. Williams, J.R.; Amaratunga, K. A discrete wavelet transform without edge effects using wavelet extrapolation. J. Fourier Anal. Appl. 1997, 3, 435–449. [Google Scholar] [CrossRef]
8. Altürk, A.; Keinert, F. Regularity of boundary wavelets. Appl. Comput. Harmon. Anal. 2012, 32, 65–85. [Google Scholar] [CrossRef]
9. Daubechies, I. Ten Lectures on Wavelets. In Proceeding of CBMS-NSF Regional Conference Series in Applied Mathematics; Society for Industrial and Applied Mathematics (SIAM): Philadelphia, PA, USA, 1992; Volume 61. [Google Scholar]
10. Keinert, F. Wavelets and Multiwavelets. In Studies in Advanced Mathematics; Chapman & Hall/CRC: Boca Raton, FL, USA, 2004. [Google Scholar]
11. Mallat, S.G. Multiresolution approximations and wavelet orthonormal bases of L2(R). Trans. Am. Math. Soc. 1989, 315, 69–87. [Google Scholar]
12. Strang, G. Wavelets and dilation equations: A brief introduction. SIAM Rev. 1989, 31, 614–627. [Google Scholar] [CrossRef]
13. Gao, X.; Zhou, S. A study of orthogonal, balanced and symmetric multi-wavelets on the interval. Sci. China Ser. F 2005, 48, 761–781. [Google Scholar] [CrossRef]
14. Lee, W.S.; Kassim, A.A. Signal and image approximation using interval wavelet transform. IEEE Trans. Image Proc. 2007, 16, 46–56. [Google Scholar] [CrossRef]
15. Van Fleet, P.J. Multiwavelets and integer transforms. J. Comput. Anal. Appl. 2003, 5, 161–178. [Google Scholar]
16. Chui, C.K.; Lian, J.A. A study of orthonormal multi-wavelets. Appl. Numer. Math. 1996, 20, 273–298. [Google Scholar] [CrossRef]
17. Donovan, G.C.; Geronimo, J.S.; Hardin, D.P.; Massopust, P.R. Construction of orthogonal wavelets using fractal interpolation functions. SIAM J. Math. Anal. 1996, 27, 1158–1192. [Google Scholar] [CrossRef]
