Article

# Quincunx Fundamental Refinable Functions in Arbitrary Dimensions

by
Xiaosheng Zhuang
Department of Mathematics, City University of Hong Kong, Tat Chee Avenue, Kowloon Tong, Hong Kong
Axioms 2017, 6(3), 20; https://doi.org/10.3390/axioms6030020
Submission received: 16 June 2017 / Revised: 3 July 2017 / Accepted: 4 July 2017 / Published: 6 July 2017
(This article belongs to the Special Issue Wavelet and Frame Constructions, with Applications)

## Abstract

In this paper, we generalize the family of Deslauriers–Dubuc interpolatory masks from dimension one to arbitrary dimensions with respect to the quincunx dilation matrices, thereby providing a family of quincunx fundamental refinable functions in arbitrary dimensions. We show that a family of unique quincunx interpolatory masks exists; such a family of masks is real-valued and has the full-axis symmetry property. In dimension $d = 2$, we give the explicit form of these unique quincunx interpolatory masks, which implies the nonnegativity property of the family.
MSC:
42C40; 41A05; 42C15; 65T60

## 1. Introduction and Motivation

We say that a $d × d$ integer matrix $M$ is a dilation matrix if $\lim_{n\to\infty} M^{-n} = 0$; that is, all the eigenvalues of $M$ are greater than 1 in modulus. An $M$-refinable function (or distribution) $ϕ$ satisfies the refinement equation:
$ϕ = | det M | ∑ k ∈ Z d a ( k ) ϕ ( M · − k ) ,$
where $a : Z d → C$ is called a refinement mask (or low-pass filter) for $ϕ$. In the sequel, we assume that the refinement mask a is finitely supported and normalized; i.e., $a ∈ l 0 ( Z d )$ and $∑ k ∈ Z d a ( k ) = 1$, where by $l ( Z d )$ we denote the linear space of all sequences $v : Z d → C$ of complex numbers on $Z d$ and by $l 0 ( Z d )$ we denote the linear space of all sequences $v = { v ( k ) } k ∈ Z d ∈ l ( Z d )$ such that the cardinality of ${ k ∈ Z d : v ( k ) ≠ 0 }$ is finite. It is often convenient to use the (formal) Fourier series $v ^$ of a sequence $v = { v ( k ) } k ∈ Z d ∈ l 0 ( Z d )$, which is defined to be:
$v ^ ( ξ ) : = ∑ k ∈ Z d v ( k ) e − i k · ξ , ξ ∈ R d ,$
where $k · ξ : = k 1 ξ 1 + ⋯ + k d ξ d$ for $k = ( k 1 , … , k d )$ and $ξ = ( ξ 1 , … , ξ d )$ in $R d$. In terms of Fourier series, the refinement Equation (1) can be also stated in the frequency domain as:
$ϕ ^ ( M T ξ ) = a ^ ( ξ ) ϕ ^ ( ξ ) , a . e . ξ ∈ R d ,$
where for a function $f ∈ L 1 ( R d )$, its Fourier transform $f ^$ is defined to be $f ^ ( ξ ) : = ∫ R d f ( x ) e − i x · ξ d x$, which can be naturally extended to functions in $L 2 ( R d )$ or tempered distributions.
Let $a ∈ l 0 ( Z d )$ be a mask. For a nonnegative integer $κ ∈ N 0 : = N ∪ { 0 }$, we say that a satisfies the sum rules of order $κ + 1$ with respect to a dilation matrix $M$ (or a lattice $M Z d$) if,
$\sum_{k \in \mathbb{Z}^d} a(\gamma + Mk)\, p(\gamma + Mk) = \sum_{k \in \mathbb{Z}^d} a(Mk)\, p(Mk) \qquad \forall\, \gamma \in \mathbb{Z}^d,\ p \in \Pi_\kappa^d,$
where $Π κ d$ denotes the set of all polynomials in $R d$ of (total) degree at most $κ$. Note that (3) depends only on the lattice $M Z d : = { M k : k ∈ Z d }$; that is, if two lattices $M Z d$ and $N Z d$ generated by two dilation matrices $M$ and $N$ are the same $M Z d = N Z d$, then a satisfies the sum rules of order $κ + 1$ with respect to $M$ if and only if a satisfies the sum rules of order $κ + 1$ with respect to $N$.
To find the solution for (1) or (2), one starts with an initial function $ϕ 0$ and employs iteratively the subdivision scheme $Q a , M n ϕ 0 , n = 1 , 2 , ⋯$, where,
$Q a , M f : = | det M | ∑ k ∈ Z d a ( k ) f ( M · − k ) .$
We say that the subdivision scheme $Q a , M n ϕ 0$ converges in $L p ( R d )$ if there exists a function $ϕ ∈ L p ( R d )$ such that: $lim n → ∞ ∥ Q a , M n ϕ 0 − ϕ ∥ p = 0$. In such a case, $ϕ$ satisfies (1). It was shown in [1] that the subdivision scheme $Q a , M n ϕ 0$ converges in $L p ( R d )$ if and only if,
$lim n → ∞ ∥ ∇ j S a , M n δ ∥ p 1 / n < | det M | 1 / p , j = 1 , … , d ,$
where $∇ j v : = v − v ( · − e j )$ is the difference operator with $e j$ being the jth coordinate unit vector in $R d$, $δ$ is the Kronecker delta satisfying $δ ( 0 ) = 1$ and $δ ( k ) = 0$ for all $k ∈ Z d \ { 0 }$, and $S a , M : l ( Z d ) → l ( Z d )$ is the subdivision operator defined to be:
$[ S a , M v ] ( n ) : = | det M | ∑ k ∈ Z d v ( k ) a ( n − M k ) , n ∈ Z d , v ∈ l ( Z d ) .$
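As a concrete illustration, the following minimal sketch iterates the subdivision operator $S_{a,M}$ in dimension $d = 1$ with $M = 2$ and the hat-function mask $a(0) = 1/2$, $a(\pm 1) = 1/4$ (the choice of mask and all variable names here are illustrative, not taken from the paper):

```python
from fractions import Fraction as F

def subdivision_step(v, a, M=2):
    """One application of [S_{a,M} v](n) = |det M| * sum_k v(k) a(n - M k), here d = 1, M = 2."""
    out = {}
    for k, vk in v.items():
        for j, aj in a.items():
            out[M*k + j] = out.get(M*k + j, F(0)) + M * vk * aj
    return out

# Hat-function mask (normalized so that sum_k a(k) = 1).
a = {-1: F(1, 4), 0: F(1, 2), 1: F(1, 4)}
v = {0: F(1)}                      # Kronecker delta
for _ in range(3):
    v = subdivision_step(v, a)
# After n steps, v(k) approximates phi(k / 2^n); for this mask the refinable
# function phi(x) = max(0, 1 - |x|) is reproduced exactly, e.g. v(1) = 7/8.
```

Starting from the delta sequence, the iterates sample the refinable limit function at the dyadic points, exactly as in the convergence statement above.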
In this paper, we are interested in fundamental refinable functions associated with quincunx interpolatory masks and quincunx dilation matrices in arbitrary dimensions. A function $ϕ$ is said to be fundamental if it is continuous and $ϕ ( k ) = δ ( k ) , k ∈ Z d$. A dilation matrix $M$ is said to be a quincunx dilation matrix if $| det M | = 2$. For example, in dimension one ($d = 1$), $M = 2$ is the dyadic dilation factor; in dimension $d = 2$, such dilation matrices include:
$\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix} \quad\text{and}\quad \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix};$
and in dimension $d = 3$, an example of quincunx dilation matrices is:
$M = \begin{bmatrix} 1 & 0 & 1 \\ -1 & -1 & 1 \\ 0 & -1 & 0 \end{bmatrix}.$
It is easy to show that the quincunx lattice $Q d : = M Z d$ generated by a quincunx dilation matrix $M$ is:
$Q_d = \{ k = (k_1, \ldots, k_d) \in \mathbb{Z}^d : k_1 + \cdots + k_d \in 2\mathbb{Z} \},$
which is independent of the quincunx dilation matrix $M$. In dimension one, the quincunx lattice is simply the lattice of even integers; while in a higher dimension, it is also called the checkerboard lattice.
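A quick finite check of this lattice identity (a brute-force sketch in dimension $d = 2$, not part of the paper's argument):

```python
# Verify on a finite window that M Z^2 consists only of even-sum (checkerboard)
# points and covers all even-sum points of a small box, for M = [[1, -1], [1, 1]].
M = [[1, -1], [1, 1]]
window = range(-8, 9)
image = {(M[0][0]*x + M[0][1]*y, M[1][0]*x + M[1][1]*y)
         for x in window for y in window}
assert all((i + j) % 2 == 0 for (i, j) in image)          # only even-sum points occur
box_even = {(i, j) for i in range(-4, 5) for j in range(-4, 5) if (i + j) % 2 == 0}
assert box_even <= image                                  # every even-sum point is hit
```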
One can show that if $ϕ$ is a fundamental refinable function associated with a refinement mask a and a quincunx dilation matrix $M$ as in (1), then it is necessary that a is a quincunx interpolatory mask:
$a ( β ) = 1 2 δ ( β ) , β ∈ M Z d .$
The subdivision scheme associated with a quincunx interpolatory mask and a quincunx dilation matrix is called a quincunx interpolatory subdivision scheme. If the quincunx interpolatory subdivision scheme $Q a , M n ϕ 0$ converges to $ϕ ∈ L p ( R d )$, then $ϕ$ is called a quincunx fundamental refinable function.
Interpolatory subdivision schemes play a crucial role in computer-aided geometric design (CAGD) [2], sampling theory, and wavelet/framelet analysis [3,4,5,6,7,8,9,10].
In dimension $d = 1$, Deslauriers and Dubuc [11] proposed a family of interpolatory subdivision schemes associated with a family ${ a 2 n − 1 : n ∈ N }$ of quincunx interpolatory masks (with respect to the dyadic dilation factor $M = 2$). Such a family is unique in the following sense:
(1)
$a 2 n − 1$ is a quincunx interpolatory mask: $a 2 n − 1 ( 2 k ) = 1 2 δ ( k )$, $k ∈ Z$.
(2)
$a 2 n − 1$ is supported on $[ 1 − 2 n , 2 n − 1 ] ∩ Z$.
(3)
$a 2 n − 1$ satisfies the sum rules of order $2 n$ with respect to the dyadic dilation $M = 2$.
In fact, the interpolatory mask $a 2 n − 1$ can be explicitly written as:
$\hat{a}_{2n-1}(\xi) = \cos^{2n}(\xi/2) \sum_{j=0}^{n-1} \binom{n-1+j}{j} \sin^{2j}(\xi/2), \qquad \xi \in \mathbb{R}.$
It is easily seen that such a family enjoys many agreeable properties: symmetry, real-valuedness, nonnegativity ($a 2 n − 1 ^ ( ξ ) ⩾ 0$), minimal support, and so on. Moreover, the family of Deslauriers and Dubuc’s interpolatory masks $a 2 n − 1$, $n ∈ N$, is closely related to the family of Daubechies’ orthonormal masks $a n d b$ in the sense that $| a n d b ^ ( ξ ) | 2 = a 2 n − 1 ^ ( ξ )$, and the latter can be obtained by utilizing the Riesz factorization technique [12].
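These coefficients can be generated directly from the Fourier series above. The sketch below (illustrative code with exact rational arithmetic) expands $\hat a_{2n-1}$ as a Laurent polynomial in $z = e^{-i\xi}$ and, for $n = 2$, recovers the classical 4-point mask $a_3(0) = 1/2$, $a_3(\pm 1) = 9/32$, $a_3(\pm 3) = -1/32$:

```python
from fractions import Fraction as F
from math import comb

def mul(p, q):
    """Multiply Laurent polynomials stored as {exponent: coefficient} dicts."""
    r = {}
    for i, a in p.items():
        for j, b in q.items():
            r[i + j] = r.get(i + j, F(0)) + a * b
    return r

def add(p, q):
    r = dict(p)
    for j, b in q.items():
        r[j] = r.get(j, F(0)) + b
    return r

def scale(p, c):
    return {i: c * a for i, a in p.items()}

# In z = e^{-i xi}: cos^2(xi/2) = (z + 2 + 1/z)/4 and sin^2(xi/2) = (2 - z - 1/z)/4.
cos2 = {-1: F(1, 4), 0: F(1, 2), 1: F(1, 4)}
sin2 = {-1: F(-1, 4), 0: F(1, 2), 1: F(-1, 4)}

def dd_mask(n):
    """Coefficients of the Deslauriers-Dubuc mask a_{2n-1} from its Fourier series."""
    acc, sp = {0: F(0)}, {0: F(1)}          # accumulated sum; running sin^{2j}
    for j in range(n):
        acc = add(acc, scale(sp, comb(n - 1 + j, j)))
        sp = mul(sp, sin2)
    cp = {0: F(1)}
    for _ in range(n):
        cp = mul(cp, cos2)                  # cos^{2n}(xi/2)
    return mul(cp, acc)

a3 = dd_mask(2)                             # the classical 4-point mask
```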
On the one hand, in dimension $d = 2$, a very simple extension of the dyadic dilation is to consider the dilation matrix $M = 2 I 2 = diag ( 2 , 2 )$. For such a dilation matrix, Dyn et al. [13] constructed the so-called butterfly interpolatory subdivision scheme; Deslauriers et al. [14] obtained several continuous fundamental refinable functions; Mongeau and Deslauriers [15] obtained several $C 1$ fundamental refinable functions; using convolutions of box splines, Riemenschneider and Shen [16] constructed a family of interpolatory subdivision schemes with symmetry; and Han and Jia [17] constructed a family of optimal interpolatory subdivision schemes with many desirable properties.
On the other hand, in higher dimensions ($d ⩾ 2$), a more natural way of extending the dyadic dilation factor with respect to the lattice of even integers is the quincunx dilation matrix. Han and Jia in ([18], Theorem 3.3) constructed a family of quincunx interpolatory subdivision schemes associated with a family,
$\{ a_{(m,n)} : m, n \in \mathbb{N}_0,\ m + n \text{ odd} \}$
of 2-dimensional unique quincunx interpolatory masks, which can be viewed as the generalization of the family of Deslauriers and Dubuc’s interpolatory masks in dimension one to dimension two in the following sense:
(1)
$a ( m , n )$ is a quincunx interpolatory mask; i.e., (6) holds with $d = 2$.
(2)
$a ( m , n )$ is supported on $G ( m , n ) : = ( [ − m , m ] × [ − n , n ] ) ∩ Z 2$.
(3)
$a ( m , n )$ satisfies the sum rules of order $m + n + 1$ with respect to the quincunx lattice $Q 2$ defined as in (5) for $d = 2$.
The uniqueness of such a family $a ( m , n )$ implies that $a ( m , n )$ is minimally supported among all the quincunx interpolatory masks which satisfies the sum rules of order $m + n + 1$. Moreover, the mask $a ( m , n )$ is real-valued and full-axis symmetric (see (15)).
A natural question is whether the above result is still true in higher dimensions $d ⩾ 3$. It should be pointed out that the results in [18] are already highly nontrivial. Many of the results are tailored for dimension two. It was not clear whether techniques in that paper can be carried over directly to higher dimensions. In this paper, we develop new techniques that extend many results in [18] to arbitrary dimensions and give a complete picture of the unique family of quincunx interpolatory masks in arbitrary dimension $d ∈ N$. That is, we prove the following main theorem.
Theorem 1.
There exists a family,
$\{ a_m : m = (m_1, \ldots, m_d) \in \mathbb{N}_0^d,\ |m| := m_1 + \cdots + m_d \text{ odd} \}$
of d-dimensional masks that are unique in the following sense:
(1)
$a m$ is a quincunx interpolatory mask; i.e., (6) holds.
(2)
$a m$ is supported on $G m : = ( [ − m 1 , m 1 ] × ⋯ × [ − m d , m d ] ) ∩ Z d$.
(3)
$a m$ satisfies the sum rules of order $m 1 + ⋯ + m d + 1$ with respect to the quincunx lattice $Q d$ defined as in (5).
In addition, $a m$ is real-valued and full-axis symmetric.
Note that a $( d − 1 )$-dimensional mask $a : Z d − 1 → C$ can be regarded as a d-dimensional mask by identifying $Z d − 1$ as a subset $Z d − 1 × { 0 }$ in $Z d$. The above result not only provides a natural generalization of the Deslauriers and Dubuc’s family of interpolatory masks to arbitrary dimensions, but also shows the close connection between low-dimensional and high-dimensional quincunx interpolatory masks: any such quincunx interpolatory mask $a m$ in $R d$ can be regarded as a quincunx interpolatory mask $a ( m , 0 , … , 0 )$ in $R d + k$ for any $k ∈ N 0$ and $( m , 0 , … , 0 ) ∈ Z d + k$.
The remainder of this paper is devoted to proving the above theorem and is organized as follows. In Section 2, we introduce some necessary lemmas and definitions in order to give a simple proof of our main result. Some properties of the unique quincunx interpolatory masks in arbitrary dimensions are discussed. In Section 3, we present the explicit form of the unique quincunx interpolatory masks in dimension $d = 2$, thereby showing the nonnegativity property of such a family in dimension two. Some remarks are given in the last section.

## 2. Quincunx Interpolatory Masks in Arbitrary Dimensions

In this section, we prove the main theorem, which relies essentially on multivariate polynomial interpolation. Before proceeding further, we introduce some notation and definitions.
For $μ = ( μ 1 , … , μ d ) ∈ N 0 d$ and $x = ( x 1 , … , x d ) ∈ R d$, we define,
$| μ | : = | μ 1 | + ⋯ + | μ d | , μ ! : = μ 1 ! ⋯ μ d ! , and x μ : = x 1 μ 1 ⋯ x d μ d .$
Elements in $N 0 d$ are ordered lexicographically so that $ν = ( ν 1 , … , ν d ) ∈ N 0 d$ is not greater than $μ = ( μ 1 , … , μ d ) ∈ N 0 d$, denoted as $ν ⪯ μ$, if either $| ν | < | μ |$ or $| ν | = | μ |$, $ν j = μ j$ for $j = 1 , … , ℓ − 1$ and $ν ℓ < μ ℓ$ for some $1 ⩽ ℓ ⩽ d$. We say that $ν ⩽ μ$ if $ν j ⩽ μ j$ for $j = 1 , … , d$. We denote,
$O_j^d := \{ \mu \in \mathbb{N}_0^d : |\mu| = j \} \quad\text{and}\quad \Lambda_n^d := \{ \mu \in \mathbb{N}_0^d : |\mu| \leqslant n \} = \bigcup_{j=0}^{n} O_j^d$
as the ordered index subsets in $N 0 d$ for $j , n ∈ N 0$. The cardinalities of $O j d$ and $Λ n d$ satisfy $\#O_j^d = \binom{j+d-1}{d-1}$ and $\#\Lambda_n^d = \binom{n+d}{d}$. The polynomial space $Π n d$ can be written as $Π n d = span { x μ : μ ∈ Λ n d }$.
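These cardinality formulas (stars-and-bars counts) are easy to confirm by brute force; a small illustrative check for $d = 3$:

```python
from itertools import product
from math import comb

d, n = 3, 4

def O(j):
    """O_j^d: multi-indices mu in N_0^d with |mu| = j."""
    return [mu for mu in product(range(j + 1), repeat=d) if sum(mu) == j]

# #O_j^d = C(j+d-1, d-1) and #Lambda_n^d = C(n+d, d).
assert all(len(O(j)) == comb(j + d - 1, d - 1) for j in range(n + 1))
assert sum(len(O(j)) for j in range(n + 1)) == comb(n + d, d)
```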
Let,
$X n 2 : = ∪ j = 0 n { x μ ∈ R 2 : | μ | = j , μ ∈ N 0 2 }$
be a set of distinct points in $R 2$. $X n 2$ is said to satisfy the node configuration in $R 2$ if there exist $n + 1$ distinct lines $L 0 , … , L n$ in $R 2$ such that ${ x μ : | μ | = n }$ lies on $L n$, …, ${ x μ : | μ | = j , μ ∈ N 0 2 }$ lies on $L j \ [ L j + 1 ∪ ⋯ ∪ L n ]$, …, and $x ( 0 , 0 )$ lies on $L 0 \ [ L 1 ∪ ⋯ ∪ L n ]$. Note that the cardinality $# X n 2$ of $X n 2$ is $\#X_n^2 = \binom{n+2}{2}$. A simple example of an $X n 2$ satisfying the node configuration in $R 2$ is:
$X n 2 = Λ n 2 = ∪ j = 0 n { μ ∈ N 0 2 : | μ | = j } .$
For node configuration in $R d$, we define it recursively as follows. Let,
$X n d : = ∪ j = 0 n { x μ ∈ R d : | μ | = j , μ ∈ N 0 d }$
be a set of distinct points in $R d$. $X n d$ is said to satisfy the node configuration in $R d$ if there exist $n + 1$ distinct hyperplanes $H 0 , … , H n$ in $R d$ such that,
${ x μ : | μ | = j , μ ∈ N 0 d } ⊆ H j \ [ H j + 1 ∪ ⋯ ∪ H n ] , j = 0 , … , n ,$
and each set of points,
${ x μ : | μ | = j , μ ∈ N 0 d } = ∪ k = 0 j { x j , ν : | ν | = k , ν ∈ N 0 d − 1 } , j = 0 , … , n ,$
considered as a set of points in $R d − 1$ satisfies the node configuration in $R d − 1$. Again, a simple example of $X n d$ satisfying the node configuration in $R d$ is:
$X n d = Λ n d = ∪ j = 0 n { μ ∈ N 0 d : | μ | = j } .$
The cardinality of such a $X n d$ in $R d$ is $\#X_n^d = \binom{n+d}{d}$. One can easily show that the property of node configuration in $R d$ is invariant under invertible linear transforms. That is, if $A : R d → R d$ is an invertible linear transform, then $A X n d : = { A x : x ∈ X n d }$ satisfies the node configuration in $R d$ if and only if $X n d$ does.
Let $X n 1 : = { x 0 , … , x n }$ be a set of $n + 1$ distinct points in $R$. Since hyperplanes in dimension one are just points, the above definition of node configuration in $R d$ can also be built up recursively starting from $X n 1$. By convention, we set $X n 0 : = ∅$, the empty set.
For an index subset $X ⊆ R d$ and an index subset $Λ ⊆ N 0 d$, we denote by:
$( β μ ) β ∈ X ; μ ∈ Λ$
a matrix $A$ of size $( # X ) × ( # Λ )$. The vector $( β μ ) β ∈ X ; μ$ is then the column vector of $A$ indexed by $μ ∈ Λ$ while the vector $( β μ ) β ; μ ∈ Λ$ is the row vector of $A$ indexed by $β ∈ X$.
We have the following result (c.f. [19], Theorem 4) regarding the uniqueness of multivariate polynomial interpolation associated with such a set $X n d$.
Lemma 1.
Suppose that $X n d ⊆ R d$ satisfies the node configuration in $R d$ and $p ∈ Π n d$ is a polynomial. If $p$ vanishes on $X n d$, then $p$ vanishes everywhere. Consequently, the square matrix,
$( β μ ) β ∈ X n d ; μ ∈ Λ n d$
is non-singular.
Proof.
We prove the result by induction on the pair $( d , n ) ∈ N × N 0$.
(1)
For $d = 1$, the set $X n 1$ contains $n + 1$ distinct points in $R$ and the matrix in (7) is simply the Vandermonde matrix. The result obviously holds.
(2)
Now suppose the statement holds for the pair $( d ′ , n )$ for any dimension $d ′ ⩽ d − 1$ and $n ∈ N 0$.
(3)
To prove the statement holds for the pair $( d , n )$ for any $n ∈ N 0$, we proceed by induction on n.
(3.1)
The statement is obviously true for $n = 0$ ($X 0 d$ is a singleton).
(3.2)
Suppose the statement holds for $n − 1$ with $n ⩾ 1$.
(3.3)
Let $p ∈ Π n d$ be any polynomial that vanishes on $X n d$. Then, by the node configuration property of $X n d$, there exists an orthogonal transform $A : R d → R d$ such that $A x μ = ( t 0 , t ν )$ for all $| μ | = n$, for some $t 0 ∈ R$ and $t ν ∈ R d − 1$. That is, the orthogonal transform makes the hyperplane $H n$ containing ${ x μ ∈ X n d : | μ | = n }$ perpendicular to the first coordinate axis. Hence,
${ A x μ : x μ ∈ X n d , | μ | = n } = { ( t 0 , t ν ) ∈ R d : ν ∈ Λ n d − 1 }$
is a set of points lying in a hyperplane perpendicular to the first coordinate axis. Note that the set
${ t ν : ν ∈ Λ n d − 1 } = : V n d − 1 ⊆ R d − 1$
has the same cardinality as ${ x μ ∈ X n d : | μ | = n }$. Since the node configuration property is invariant under orthogonal transforms, we see that $V n d − 1$ satisfies the node configuration in $R d − 1$. Define $q n ( y ) = p ( A − 1 y ) = p ( x )$. Then, the polynomial
$q n ( y 1 , y 2 , … , y d ) | y 1 = t 0$
is a polynomial of $d − 1$ variables, has degree at most n, and vanishes on the set $V n d − 1$. By the induction hypothesis in item (2), $q n ( t 0 , y 2 , … , y d )$ vanishes everywhere. Consequently, we must have:
$q n ( y 1 , y 2 , … , y d ) = ( y 1 − t 0 ) q n − 1 ( y 1 , y 2 , … , y d )$
for some d-variate polynomial $q n − 1 ( y 1 , y 2 , … , y d )$ of total degree at most $n − 1$. Since $q n$ vanishes on $A X n d$ and the points $A x μ$ with $| μ | ⩽ n − 1$ do not lie on the hyperplane $y 1 = t 0$, the polynomial $q n − 1$ vanishes on a set of points satisfying the node configuration of order $n − 1$ in $R d$. By our induction hypothesis in item (3.2), $q n − 1$ vanishes everywhere. Consequently, $q n$, and thus $p$, vanishes everywhere.
The statement for $( d , n )$ for any $n ∈ N 0$ has been proven.
Therefore, by induction, the statement holds for any integer pair $( d , n ) ∈ N × N 0$.
Next, we show that the matrix $( β μ ) β ∈ X n d ; μ ∈ Λ n d$ is non-singular. Suppose this is not the case. Then there exist non-trivial coefficients $c μ , μ ∈ Λ n d$ such that,
$∑ μ ∈ Λ n d c μ β μ = 0 .$
However, this implies that the non-trivial polynomial $p ( x ) = ∑ μ ∈ Λ n d c μ x μ$ vanishes on $X n d$ and hence $p$ vanishes everywhere. This is a contradiction. Therefore, $( β μ ) β ∈ X n d ; μ ∈ Λ n d$ must be non-singular. ☐
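Lemma 1 can also be sanity-checked numerically. The sketch below (illustrative code, exact rational arithmetic) takes $X_3^2 = \Lambda_3^2$, which satisfies the node configuration, builds the $10 \times 10$ matrix $(\beta^\mu)$, and confirms that it is non-singular:

```python
from fractions import Fraction as F
from itertools import product

def lam(n, d):
    """Lambda_n^d, ordered by total degree, then lexicographically."""
    return sorted((mu for mu in product(range(n + 1), repeat=d) if sum(mu) <= n),
                  key=lambda mu: (sum(mu), mu))

def det(M):
    """Exact determinant by Gaussian elimination over the rationals."""
    M = [[F(x) for x in row] for row in M]
    n, d = len(M), F(1)
    for i in range(n):
        p = next((r for r in range(i, n) if M[r][i] != 0), None)
        if p is None:
            return F(0)
        if p != i:
            M[i], M[p] = M[p], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [a - f * b for a, b in zip(M[r], M[i])]
    return d

idx = lam(3, 2)                          # X_3^2 = Lambda_3^2 (10 points)
A = [[b1**m1 * b2**m2 for (m1, m2) in idx] for (b1, b2) in idx]
assert len(idx) == 10 and det(A) != 0
```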
Since $X n 0 = ∅$, the statement in Lemma 1 also holds for any $( d , n ) ∈ N 0 × N 0$. Next, we show that points on a rectangular grid in $R d$ can be extended to a set that satisfies the node configuration in $R d$.
Lemma 2.
Let,
$G_m := ([-m_1, m_1] \times \cdots \times [-m_d, m_d]) \cap \mathbb{Z}^d, \qquad m = (m_1, \ldots, m_d) \in \mathbb{N}_0^d$
with $| m |$ being odd. Define:
$G_m^0 := \{ \beta = (\beta_1, \ldots, \beta_d) \in G_m : \beta_1 + \cdots + \beta_d \in 2\mathbb{Z} \} \quad\text{and}\quad G_m^1 := \{ \beta \in G_m : \beta_1 + \cdots + \beta_d \in 2\mathbb{Z} + 1 \}.$
Then, $G m 0$ and $G m 1$ can both be extended to two sets $Y m 0$ and $Y m 1$, respectively, such that both satisfy the node configuration in $R d$ with $\#Y_m^0 = \#Y_m^1 = \binom{|m|+d}{d}$. Consequently, the matrices,
$( β μ ) β ∈ G m 0 ; μ ∈ Λ | m | d a n d ( β μ ) β ∈ G m 1 ; μ ∈ Λ | m | d$
are both of full row rank.
Proof.
We prove the result for $G m 1$. The proof of the result for $G m 0$ is similar.
First, we extend $G m$ to a larger set $G m ˜$. Let $m ˜ : = ( m 1 , … , m d − 1 , m d + 1 ) ∈ N 0 d$ and define,
$G_{\tilde{m}} := ([-m_1, m_1] \times \cdots \times [-m_{d-1}, m_{d-1}] \times [-(m_d+1), m_d+1]) \cap \mathbb{Z}^d.$
Obviously, we have $G m ⊆ G m ˜$. Define $| m | + 1$ hyperplanes:
$H 2 j d : = { ( x 1 , … , x d ) ∈ R d : x 1 + ⋯ + x d = | m | − 2 j } , H 2 j + 1 d : = { ( x 1 , … , x d ) ∈ R d : x 1 + ⋯ + x d = − ( | m | − 2 j ) } ,$
for $j = 0 , … , | m | − 1 2$, as well as sets of nodes
$X 2 j : = G m ˜ ∩ H 2 j d = { β ∈ G m ˜ : | β | = | m | − 2 j }$
and,
$X 2 j + 1 : = G m ˜ ∩ H 2 j + 1 d = { β ∈ G m ˜ : | β | = − ( | m | − 2 j ) } .$
It is easy to show that:
$G m 1 = ∪ j = 0 | m | [ G m 1 ∩ H j ] , X 2 j = G m 1 ∩ H 2 j d , and X 2 j + 1 ⊇ G m 1 ∩ H 2 j + 1 d .$
Moreover, $# X 2 j = # O 2 j d$ and $# X 2 j + 1 = # O 2 j + 1 d$. Now define,
$Y m 1 : = X 0 ∪ X 1 ∪ ⋯ ∪ X | m | ,$
which extends $G m 1$.
We next show that $Y m 1$ satisfies the node configuration in $R d$. In fact, from the above definitions, $X j$ lies on the hyperplane $H j$ and does not intersect with any other hyperplane $H k$ for $k = 0 , … , | m |$ and $k ≠ j$. Moreover, for each $X j$, one can simply use some linear transforms $A j : X j → O j d$ so that $A j X j = O j d$. In fact, define for $j = 0 , … , | m | − 1 2$, the linear transforms $A 2 j , A 2 j + 1 : R d → R d$ as,
$A 2 j x = − ( x − m ) and A 2 j + 1 x = m ˜ + x , x ∈ R d .$
Then, we have $A j H j = { x ∈ R d : | x | = j }$ and $A j X j = O j d$ for $j = 0 , … , | m |$. Note that when regarded as a set in $R d − 1$, the index set $O j d$ satisfies the node configuration in $R d − 1$. Since each transform $A j$ is an affine transform, it preserves the node configuration of the set $X j$ in $R d − 1$. Consequently, by the definition of node configuration in $R d$, we conclude that $Y m 1$ satisfies the node configuration in $R d$.
A similar approach can be applied to $G m 0$ in order to obtain $Y m 0 ⊇ G m 0$. The corresponding hyperplanes in this case are:
$H 2 j + 1 d : = { ( x 1 , … , x d ) ∈ R d : x 1 + ⋯ + x d = | m | − 1 − 2 j } , H 2 j d : = { ( x 1 , … , x d ) ∈ R d : x 1 + ⋯ + x d = − ( | m | + 1 − 2 j ) } ,$
for $j = 0 , … , | m | − 1 2$, and $Y m 0 : = ∪ j = 0 | m | X j$ with $X j : = G m ˜ ∩ H j$ for $G m ˜$ defined as in (10).
The fact that the two matrices $( β μ ) β ∈ G m 0 ; μ ∈ Λ | m | d$ and $( β μ ) β ∈ G m 1 ; μ ∈ Λ | m | d$ are of full row rank follows from Lemma 1. We are done. ☐
The matrices $( β μ ) β ∈ G m 0 ; μ ∈ Λ | m | d$ and $( β μ ) β ∈ G m 1 ; μ ∈ Λ | m | d$ in Lemma 2 have more columns than rows. Hence, the columns of these two matrices are linearly dependent. The following lemma provides a more precise description of the linear dependence of the columns of these two matrices.
Lemma 3.
Let $G m$, $G m 0$, and $G m 1$ be defined as in (8) and (9) for odd $| m |$. Define,
$\Lambda_m := \{ \mu \in \mathbb{N}_0^d : |\mu| \leqslant |m|,\ \mu \leqslant 2m \}.$
Then the matrices,
$( β μ ) β ∈ G m 0 ; μ ∈ Λ m a n d ( β μ ) β ∈ G m 1 ; μ ∈ Λ m$
are both of full row rank.
Proof.
Note that the index set $Λ m = Λ | m | d ∩ { μ ∈ N 0 d : μ ⩽ 2 m }$. By Lemma 2, the matrices $( β μ ) β ∈ G m τ ; μ ∈ Λ | m | d$, $τ = 0 , 1$, are both of full row rank. Consider the index set $Λ | m | d \ Λ m$; that is, the set of $μ ∈ Λ | m | d$ such that $μ j > 2 m j$ for some $1 ⩽ j ⩽ d$.
We next conclude the result by claiming that each of the columns $( β μ ) β ∈ G m τ ; μ$ for $μ ∈ Λ | m | d \ Λ m$ is a linear combination of columns $( β μ ) β ∈ G m τ ; μ$ for $μ ∈ Λ m$, $τ = 0 , 1$, respectively.
In fact, for each $μ ∈ Λ | m | d \ Λ m$, using long division of polynomials, $x μ$ with $x = ( x 1 , … , x d ) ∈ R d$ can be represented as:
$x μ = q 1 μ ( x ) ∏ j = − m 1 m 1 ( x 1 − j ) + ⋯ + q d μ ( x ) ∏ j = − m d m d ( x d − j ) + p μ ( x ) ,$
where $q 1 μ ( x ) , … , q d μ ( x )$ are polynomials of d variables and $p μ ( x )$ is a linear combination of $x ν$ with $ν ∈ Λ m$. Obviously, we have $p μ ( 0 ) = 0$. Now, the claim follows from $β μ = p μ ( β )$ for any $β ∈ G m τ , τ = 0 , 1$. ☐
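The long-division reduction can be seen in the simplest case $d = 1$, $m = 1$, where $x^\mu$ is reduced modulo $\prod_{j=-1}^{1}(x - j) = x^3 - x$. The sketch below (illustrative helper code, coefficient lists in increasing degree) reduces $x^5$ and recovers the remainder $p(x) = x$, which indeed satisfies $p(0) = 0$:

```python
from fractions import Fraction as F

def polydiv(num, den):
    """Divide univariate polynomials (coefficient lists, lowest degree first);
    return quotient and remainder."""
    num = [F(c) for c in num]
    q = [F(0)] * max(1, len(num) - len(den) + 1)
    while len(num) >= len(den) and any(num):
        shift = len(num) - len(den)
        f = num[-1] / den[-1]
        q[shift] = f
        for i, c in enumerate(den):
            num[shift + i] -= f * c
        while num and num[-1] == 0:
            num.pop()
    return q, num

# Reduce x^5 modulo x^3 - x: x^5 = (x^2 + 1)(x^3 - x) + x.
q, r = polydiv([0, 0, 0, 0, 0, 1], [0, -1, 0, 1])
```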
We denote the full-axis symmetry group $E$ by:
$E : = { E ε : = diag ( ε 1 , … , ε d ) : ε = ( ε 1 , … , ε d ) ∈ { + 1 , − 1 } d } .$
We say that a sequence ${ a ( β ) : β ∈ Z d }$ is full-axis symmetric if it satisfies,
$a(E_\varepsilon \beta) = a(\beta) \qquad \forall\, \beta \in \mathbb{Z}^d,\ E_\varepsilon \in E.$
One can easily show that $G m τ$ defined in (9) is full-axis symmetric for $τ = 0 , 1$; i.e., for each $τ = 0 , 1$,
$E ε G m τ = G m τ ∀ E ε ∈ E .$
By considering the symmetry property of the set $G m τ$, we can further reduce the dependence of columns and rows of matrices in (12).
Lemma 4.
Let $G m$, $G m 0$, and $G m 1$ be defined as in (8) and (9) for odd $| m |$. Define,
$G m 0 , + : = { β ∈ G m 0 : β ⩾ 0 } , G m 1 , + : = { β ∈ G m 1 : β ⩾ 0 } ,$
and,
$Λ m 0 : = { 2 μ ∈ N 0 d : μ ⩽ m , | 2 μ | ⩽ | m | } .$
Then the matrices,
$( β μ ) β ∈ G m 0 , + ; μ ∈ Λ m 0 a n d ( β μ ) β ∈ G m 1 , + ; μ ∈ Λ m 0$
are square matrices and are both non-singular.
Proof.
First, we show that $# ( G m 0 , + ) = # ( G m 1 , + ) = # Λ m 0$. Recalling the definition of $Λ m$ in (11), we have,
$\Lambda_m^0 = \bigcup_{j=0}^{\frac{|m|-1}{2}} \{ 2\mu \in \mathbb{N}_0^d : |\mu| = j,\ \mu \leqslant m \}$
and,
$G m τ , + = ∪ j = 0 | m | − 1 2 { β ∈ N 0 d : | β | = 2 j + τ , 0 ⩽ β ⩽ m } , τ = 0 , 1 .$
For $j = 2 k + 1$ with $0 ⩽ 2 k + 1 ⩽ | m | − 1 2$, we have,
$# { 2 μ ∈ N 0 d : | μ | = 2 k + 1 , μ ⩽ m } = # { β ∈ N 0 d : | β | = 2 k + 1 , β ⩽ m } = # { β ∈ N 0 d : | β | = | m | − ( 2 k + 1 ) , β ⩽ m } .$
Note that:
${ β ∈ N 0 d : | β | = 2 k + 1 , β ⩽ m } = G m 1 , + ∩ H 2 k + 1$
and,
${ β ∈ N 0 d : | β | = | m | − ( 2 k + 1 ) , β ⩽ m } = G m 0 , + ∩ H | m | − ( 2 k + 1 ) ,$
where the hyperplane $H j : = { x ∈ R d : | x | = j }$. Similarly, for $j = 2 k$ with $0 ⩽ 2 k ⩽ | m | − 1 2$, we have,
$# { 2 μ ∈ N 0 d : | μ | = 2 k , μ ⩽ m } = # { β ∈ N 0 d : | β | = 2 k , β ⩽ m } = # { β ∈ N 0 d : | β | = | m | − 2 k , β ⩽ m } .$
Note that,
${ β ∈ N 0 d : | β | = 2 k , β ⩽ m } = G m 0 , + ∩ H 2 k$
and,
${ β ∈ N 0 d : | β | = | m | − 2 k , β ⩽ m } = G m 1 , + ∩ H | m | − 2 k .$
In view of,
$G m 0 , + = ∪ k = 0 | m | − 1 2 [ G m 0 , + ∩ H 2 k ] a n d G m 1 , + = ∪ k = 0 | m | − 1 2 [ G m 1 , + ∩ H 2 k + 1 ] ,$
we conclude that $# G m τ , + = # Λ m 0$ for $τ = 0 , 1$.
Next, we show that the matrices $( β μ ) β ∈ G m τ , + ; μ ∈ Λ m 0$, $τ = 0 , 1$, are non-singular. We argue by contradiction and shall prove the result for $τ = 1$ only; the proof for $τ = 0$ is similar. Suppose that there exists a sequence $( c β ) β ∈ G m 1 , +$ of complex numbers with $c β ≠ 0$ for some $β ∈ G m 1 , +$ such that,
$∑ β ∈ G m 1 , + c β β μ = 0 ∀ μ ∈ Λ m 0 .$
By the symmetry of $G m 1$ and that $G m 1 , + = G m 1 ∩ N 0 d$, we have:
$G m 1 = { E ε β : β ∈ G m 1 , + , E ε ∈ E } = ∪ E ε ∈ E { E ε β : β ∈ G m 1 , + } .$
We extend the coefficient set ${ c β : β ∈ G m 1 , + }$ to ${ c ˜ β : β ∈ G m 1 }$ by defining,
$\tilde{c}_\beta := \frac{\#E}{\#\{ E_\varepsilon \beta : E_\varepsilon \in E \}}\, c_{\beta^+}, \qquad \beta = (\beta_1, \ldots, \beta_d) \in G_m^1,$
where $β + : = ( | β 1 | , … , | β d | ) ∈ G m 1 , +$. Now consider the linear combination $∑ β ∈ G m 1 c ˜ β β μ$ for all $μ ∈ Λ m = { μ = ( μ 1 , … , μ d ) ∈ N 0 d : | μ | ⩽ | m | , μ ⩽ 2 m }$. On the one hand, if $μ ∈ Λ m 0$, then,
$∑ β ∈ G m 1 c ˜ β β μ = ∑ β ∈ G m 1 # E # { E ε β : E ε ∈ E } c β + β μ = ∑ E ε ∈ E ∑ β ∈ G m 1 , + c β ( E ε β ) μ = ( # E ) · ∑ β ∈ G m 1 , + c β β μ = 0 .$
On the other hand, if $μ = ( μ 1 , … , μ d ) ∈ Λ m \ Λ m 0$, then there exists $j 0 ∈ { 1 , … , d }$ such that $μ j 0$ is odd. By the symmetry of the lattice $G m 1$, we have,
$∑ β ∈ G m 1 c ˜ β β μ = ∑ β ∈ G m 1 # E # { E ε β : E ε ∈ E } c β + β μ = ∑ E ε ∈ E ∑ β ∈ G m 1 , + c β ( E ε β ) μ = ∑ E ε ∈ E , ε j 0 = 1 ∑ β ∈ G m 1 , + c β ( E ε β ) μ + ∑ E ε ∈ E , ε j 0 = − 1 ∑ β ∈ G m 1 , + c β ( E ε β ) μ = ∑ E ε ∈ E , ε j 0 = 1 ∑ β ∈ G m 1 , + c β ( E ε β ) μ − ∑ β ∈ G m 1 , + c β ( E ε β ) μ = 0 .$
Consequently, the rows of $( β μ ) β ∈ G m 1 ; μ ∈ Λ m$ are linearly dependent, in contradiction to the result in Lemma 3. Therefore, $( β μ ) β ∈ G m 1 , + ; μ ∈ Λ m 0$ must be non-singular. We are done. ☐
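For a concrete instance of Lemma 4 (an illustrative check with $d = 2$ and $m = (2, 1)$), both $G_m^{1,+}$ and $\Lambda_m^0$ have three elements and the resulting $3 \times 3$ matrix is non-singular:

```python
from itertools import product

# d = 2, m = (2, 1): G_m^{1,+} consists of the odd-sum points of [0,2] x [0,1],
# and Lambda_m^0 = {(0,0), (2,0), (0,2)} by (17); both have 3 elements.
G1p = [(b1, b2) for b1, b2 in product(range(3), range(2)) if (b1 + b2) % 2 == 1]
L0 = [(0, 0), (2, 0), (0, 2)]
A = [[b1**m1 * b2**m2 for (m1, m2) in L0] for (b1, b2) in G1p]
detA = (A[0][0] * (A[1][1]*A[2][2] - A[1][2]*A[2][1])
        - A[0][1] * (A[1][0]*A[2][2] - A[1][2]*A[2][0])
        + A[0][2] * (A[1][0]*A[2][1] - A[1][1]*A[2][0]))
assert len(G1p) == len(L0) == 3 and detA != 0
```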
Recall in Theorem 1 that $a m : Z d → C$ with $m = ( m 1 , … , m d ) ∈ N 0 d$ is a mask supported on the lattice $G_m := ([-m_1, m_1] \times \cdots \times [-m_d, m_d]) \cap \mathbb{Z}^d$ that satisfies the sum rules of order $| m | + 1$ with respect to the quincunx lattice $Q d$. Now we are ready to prove our main result in Theorem 1.
Proof of Theorem 1.
Let $G m 0$ and $G m 1$ be the even lattice and odd lattice as defined in (9). We first construct a quincunx interpolatory mask $a m$ supported on $G m$.
By Lemmas 3 and 4, we can choose an index subset $Λ ⊆ N 0 d$ so that,
$Λ m 0 ⊆ Λ ⊆ Λ m , # Λ = # G m 1 ,$
and the square matrix,
$( β μ ) β ∈ G m 1 ; μ ∈ Λ$
is non-singular, where $Λ m 0 , Λ m$ are defined as in (17), (11), respectively. Then we can uniquely solve the following system of linear equations for ${ c β : β ∈ G m 1 }$:
$∑ β ∈ G m 1 c β β μ = 1 2 δ ( μ ) , μ ∈ Λ .$
Note that ${ c β : β ∈ G m 1 }$ must be full-axis symmetric: $c E ε β = c β$ for all $β ∈ G m 1$ and for all $E ε ∈ E$. In fact, by $E ε G m 1 = G m 1$ and (20), we have for each $E ε ∈ E$,
$1 2 δ ( μ ) = ∑ E ε β ∈ G m 1 c E ε β ( E ε β ) μ = ∑ β ∈ G m 1 c E ε β ( E ε β ) μ = ∑ β ∈ G m 1 c E ε β β μ ε μ .$
Hence, we obtain for each $E ε ∈ E$,
$∑ β ∈ G m 1 c E ε β β μ = 1 2 δ ( μ ) ε μ = 1 2 δ ( μ ) , μ ∈ Λ .$
Comparing (20) and (21), we see that ${ c β : β ∈ G m 1 }$ must be full-axis symmetric.
Define,
$a_m(\beta) = \begin{cases} c_\beta, & \beta \in G_m^1; \\ \frac{1}{2}\delta(\beta), & \beta \in G_m^0; \\ 0, & \beta \notin G_m. \end{cases}$
Then obviously $a m$ is a quincunx interpolatory mask supported on $G m$. Note that $a m ( E ε β ) = a m ( β ) = 1 2 δ ( β )$ for all $β ∈ G m 0$. Together with the symmetry property of ${ a m ( β ) : β ∈ G m 1 }$, we conclude that $a m$ is full-axis symmetric:
$a_m(E_\varepsilon \beta) = a_m(\beta) \qquad \forall\, \beta \in \mathbb{Z}^d,\ E_\varepsilon \in E.$
The real-valuedness of $a m$ is obvious.
Next, we show that $a m$ satisfies the sum rules of order $| m | + 1$ with respect to the quincunx lattice $Q d$, which is equivalent to:
$∑ β ∈ G m 1 a m ( β ) β μ = ∑ β ∈ G m 0 a m ( β ) β μ ∀ μ ∈ Λ | m | d .$
Note that by our definition of ${ a m ( β ) : β ∈ G m 0 }$, we have,
$∑ β ∈ G m 0 a m ( β ) β μ = 1 2 δ ( μ ) ∀ μ ∈ Λ | m | d .$
Moreover, by our construction, we already have, $∑ β ∈ G m 1 a m ( β ) β μ = 1 2 δ ( μ )$ for all $μ ∈ Λ ⊇ Λ m 0$. Hence, we only need to show that,
$∑ β ∈ G m 1 a m ( β ) β μ = 0 ∀ μ ∈ Λ | m | d \ Λ .$
We prove it by considering,
$Λ | m | d \ Λ = ( Λ | m | d \ Λ m ) ∪ ( Λ m \ Λ )$
as the union of two index sets. For $μ = ( μ 1 , … , μ d ) ∈ Λ m \ Λ$, some $μ j$ must be odd; otherwise $μ ∈ Λ m 0 ⊆ Λ$. By the full-axis symmetry property of $a m$ and similarly to (19), we must have $∑ β ∈ G m 1 a m ( β ) β μ = 0$. For $μ ∈ Λ | m | d \ Λ m$, by (13), we have,
$∑ β ∈ G m 1 a m ( β ) β μ = ∑ β ∈ G m 1 a m ( β ) p μ ( β ) ,$
which must be 0 since $p μ ( x )$ is a linear combination of $x ν$ with $ν ∈ Λ m$. Hence (22) holds.
Consequently, $a m$ is a quincunx interpolatory mask supported on $G m$ and satisfying the sum rules of order $| m | + 1$ with respect to the quincunx lattice $Q d$.
We finally show the uniqueness of $a m$. Suppose there is another quincunx interpolatory mask $b : Z d → C$ supported on $G m$ and satisfying the sum rules of order $| m | + 1$ with respect to the quincunx lattice $Q d$. Then, we have,
$∑ β ∈ G m 1 ( a m ( β ) − b ( β ) ) β μ = 0 , ∀ μ ∈ Λ | m | d ⊇ Λ .$
By the non-singularity of $( β μ ) β ∈ G m 1 ; μ ∈ Λ$, we must have $a m ( β ) = b ( β )$ for all $β ∈ G m 1$. Consequently, $a m = b$. We are done. ☐
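To make the construction concrete, the sketch below verifies condition (1) and the sum rules of order $|m| + 1 = 4$ for the mask $a_{(2,1)}$ in dimension two. The coefficient values are computed by expanding the explicit formula for $\hat a_{(2,1)}$ with $n = 1$ (see Section 3) and should be read as an illustrative worked example:

```python
from fractions import Fraction as F
from itertools import product

# Candidate mask a_{(2,1)} on G_{(2,1)} = [-2,2] x [-1,1]; the coefficients come
# from expanding \hat a_{(2,1)} with n = 1 as a trigonometric polynomial.
a = {(0, 0): F(1, 2)}
for s in (1, -1):
    a[(s, 0)] = F(1, 4)
    a[(0, s)] = F(1, 16)
for s1, s2 in product((1, -1), repeat=2):
    a[(2 * s1, s2)] = F(-1, 32)

# (1) quincunx interpolatory: a(beta) = delta(beta)/2 on the even-sum lattice.
assert all(v == (F(1, 2) if b == (0, 0) else 0)
           for b, v in a.items() if (b[0] + b[1]) % 2 == 0)

# (3) sum rules of order 4 with respect to the quincunx lattice: the moment sums
# over odd-sum and even-sum points agree for every monomial with |mu| <= 3.
def moment(parity, mu):
    return sum(v * b[0]**mu[0] * b[1]**mu[1]
               for b, v in a.items() if (b[0] + b[1]) % 2 == parity)

for mu in product(range(4), repeat=2):
    if mu[0] + mu[1] <= 3:
        assert moment(0, mu) == moment(1, mu)
```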
In view of the proof of Theorem 1, we have the following more general result.
Theorem 2.
Let $G m$, $G m 0$, and $G m 1$ be defined as in (8) and (9) for odd $| m |$, and $Λ m 0$, $Λ m$ be defined as in (17), (11), respectively. Suppose the sequences $( y μ τ ) μ ∈ Λ m , τ = 0 , 1$ of complex numbers in $C$ satisfy,
$y μ τ = 0 ∀ μ ∈ Λ m \ Λ m 0 .$
Then the system of linear equations,
$∑ β ∈ G m τ c β β μ = y μ τ , μ ∈ Λ m ,$
has a unique solution $( c β τ ) β ∈ G m τ$ for $τ = 0 , 1$. Moreover, ${ c β τ : β ∈ G m τ }$ is full-axis symmetric for $τ = 0 , 1$. In particular, if $y μ τ$, $μ ∈ Λ m$, are real-valued, then $c β τ$, $β ∈ G m τ$, are real-valued for $τ = 0 , 1$.
Proof.
By Lemmas 3 and 4, the matrices $( β μ ) β ∈ G m τ ; μ ∈ Λ m$, $τ = 0 , 1$, are both of full row rank. Choose any index set $Λ m 0 ⊆ Λ τ ⊆ Λ m$ so that $# Λ τ = # G m τ$ and the square matrix $( β μ ) β ∈ G m τ ; μ ∈ Λ τ$ is non-singular, respectively, for $τ = 0 , 1$. Then the following system of linear equations:
$∑ β ∈ G m τ c β β μ = y μ τ , μ ∈ Λ τ ,$
has a unique solution $( c β τ ) β ∈ G m τ$ for $τ = 0 , 1$. Similar to the proof of Theorem 1, such a sequence $( c β τ ) β ∈ G m τ$ is full-axis symmetric, which implies,
$∑ β ∈ G m τ c β β μ = 0 = y μ τ , μ ∈ Λ m \ Λ τ .$
In particular, if $y μ τ , μ ∈ Λ m$ are of real value, then obviously the solutions $( c β τ ) β ∈ G m τ$ are of real value. The proof of uniqueness is similar to the proof of uniqueness in Theorem 1. ☐

## 3. Explicit Form of the Bivariate Quincunx Interpolatory Masks

In dimension $d = 1$, the unique quincunx interpolatory mask $a m$ with $m = 2 n + 1$ is the Deslauriers–Dubuc interpolatory mask, which has the explicit form ([20,21]):
$\hat{a}_m(\xi) = \frac{(2(n+1))!}{2^{2(n+1)}\, n!\, (n+1)!} \int_{-1}^{\cos \xi} (1 - t^2)^n \, dt = \cos^{2(n+1)}(\xi/2) \sum_{j=0}^{n} \binom{n+j}{j} \sin^{2j}(\xi/2).$
In dimension $d = 2$, Han and Jia [18] showed that the quincunx interpolatory masks $a m$ in Theorem 1 for $m = ( 2 n , 1 )$ and $m = ( 2 n − 1 , 2 )$ are given by:
$\hat{a}_{(2n,1)}(\xi_1, \xi_2) = \frac{(2n)!}{2^{2n}\, n!\, (n-1)!} \int_{-1}^{\cos \xi_1} (1 - t^2)^{n-1} (1 - t \cos \xi_2) \, dt,$
$\hat{a}_{(2n-1,2)}(\xi_1, \xi_2) = \frac{(2n)!}{2^{2n}\, n!\, n!} \int_{-1}^{\cos \xi_1} (1 - t^2)^{n-2} \Big[ (n-1)(1 - t \cos \xi_2)^2 + \tfrac{1}{2}(1 - t^2) \sin^2 \xi_2 \Big] \, dt.$
In this paper, we give the explicit form of the full family $a m$ of Theorem 1 in dimension $d = 2$.
Theorem 3.
Let $n , k 0 ∈ N 0$ satisfy $n > 2 k 0$. Then the unique quincunx interpolatory mask $a m$ in Theorem 1 with $d = 2$ is explicitly given as follows:
(1)
If $m = ( 2 n + 1 − 2 k 0 , 2 k 0 )$, then,
$\hat{a}_m(\xi_1,\xi_2) = \frac{(2(n+1-k_0))!\,k_0!}{2^{2(n+1-k_0)}\,n!\,(n+1-k_0)!}\int_{-1}^{\cos\xi_1}(1-t^2)^{n-2k_0}\sum_{j=0}^{k_0}\binom{n-k_0}{k_0-j}(1-t\cos\xi_2)^{2(k_0-j)}\binom{k_0-1/2}{j}(1-t^2)^{j}\sin^{2j}\xi_2\,dt.$
(2)
If $m = ( 2 n + 1 − ( 2 k 0 + 1 ) , 2 k 0 + 1 )$, then,
$\hat{a}_m(\xi_1,\xi_2) = \frac{(2(n-k_0))!\,k_0!}{2^{2(n-k_0)}\,n!\,(n-1-k_0)!}\int_{-1}^{\cos\xi_1}(1-t^2)^{n-2k_0-1}\sum_{j=0}^{k_0}\binom{n-1-k_0}{k_0-j}(1-t\cos\xi_2)^{2(k_0-j)+1}\binom{k_0+1/2}{j}(1-t^2)^{j}\sin^{2j}\xi_2\,dt.$
(3)
If $m = ( m 1 , m 2 )$ with $m 1 + m 2$ odd and $m 1 < m 2$, then,
$a ^ ( m 1 , m 2 ) ( ξ 1 , ξ 2 ) = a ^ ( m 2 , m 1 ) ( ξ 2 , ξ 1 )$
with $a ^ ( m 2 , m 1 )$ being determined by items (1) or (2).
Moreover, we have $a ^ m ( ξ 1 , ξ 2 ) ⩾ 0$ for all $ξ = ( ξ 1 , ξ 2 ) ∈ R 2$.
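The closed form of Theorem 3 can likewise be checked numerically. The sketch below (the helper `gbinom` for half-integer binomial coefficients, the Simpson-rule helper, and all names are illustrative) evaluates the case (1) mask for a sample pair $(n,k_0)$ and observes both the nonnegativity claim and the interpolatory identity:

```python
import math

def gbinom(a, k):
    """Generalized binomial coefficient C(a, k) for real a and integer k >= 0."""
    r = 1.0
    for i in range(k):
        r *= (a - i) / (i + 1)
    return r

def simpson(f, a, b, n=2000):
    """Composite Simpson rule on [a, b] (n even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def a_hat_even(n, k0, xi1, xi2):
    """Theorem 3(1): the mask with m = (2n+1-2k0, 2k0)."""
    pref = (math.factorial(2 * (n + 1 - k0)) * math.factorial(k0)
            / (2 ** (2 * (n + 1 - k0)) * math.factorial(n) * math.factorial(n + 1 - k0)))
    c2, s2 = math.cos(xi2), math.sin(xi2)
    def f(t):
        return (1 - t * t) ** (n - 2 * k0) * sum(
            math.comb(n - k0, k0 - j) * (1 - t * c2) ** (2 * (k0 - j))
            * gbinom(k0 - 0.5, j) * (1 - t * t) ** j * s2 ** (2 * j)
            for j in range(k0 + 1))
    return pref * simpson(f, -1.0, math.cos(xi1))

# n = 3, k0 = 1, i.e. m = (5, 2): sample values are nonnegative and sum to 1 under the shift
vals = [a_hat_even(3, 1, x1, x2) for x1 in (0.3, 1.1, 2.5) for x2 in (0.2, 2.0)]
print(min(vals) >= -1e-10)  # → True
print(round(a_hat_even(3, 1, 0.3, 0.2)
            + a_hat_even(3, 1, 0.3 + math.pi, 0.2 + math.pi), 8))  # → 1.0
```

For $k_0 = 1$ and $n = 3$, the formula reduces to the Han–Jia mask $\hat{a}_{(2n-1,2)}$ above, which gives an independent consistency check of the coefficients.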
Before we proceed to the proof of Theorem 3, let us introduce some notation and lemmas to simplify our proof. Define,
$c_{n,k_0} := \frac{(2(n+1-k_0))!\,k_0!}{2^{2(n+1-k_0)}\,n!\,(n+1-k_0)!}, \qquad d_{n,k_0} := \frac{(2(n-k_0))!\,k_0!}{2^{2(n-k_0)}\,n!\,(n-1-k_0)!}$
and,
$g_{n,k_0}(t,y) := \sum_{j=0}^{k_0}\binom{n-k_0}{k_0-j}(1-ty)^{2(k_0-j)}\binom{k_0-1/2}{j}(1-t^2)^{j}(1-y^2)^{j}, \qquad h_{n,k_0}(t,y) := \sum_{j=0}^{k_0}\binom{n-1-k_0}{k_0-j}(1-ty)^{2(k_0-j)+1}\binom{k_0+1/2}{j}(1-t^2)^{j}(1-y^2)^{j}.$
Then, by letting $x = \cos\xi_1$ and $y = \cos\xi_2$, the masks $a m$ in (24) and (25) can be written as:
$\hat{a}_m(\xi_1,\xi_2) = c_{n,k_0}\int_{-1}^{x}(1-t^2)^{n-2k_0}\,g_{n,k_0}(t,y)\,dt \quad\text{for } m=(2n+1-2k_0,\,2k_0), \qquad \hat{a}_m(\xi_1,\xi_2) = d_{n,k_0}\int_{-1}^{x}(1-t^2)^{n-1-2k_0}\,h_{n,k_0}(t,y)\,dt \quad\text{for } m=(2n+1-(2k_0+1),\,2k_0+1).$
First, consider the sum rule definition in (3). In dimension $d = 2$, a mask a satisfying the sum rules of order $κ + 1$ with respect to a quincunx dilation matrix $M$ ($| det M | = 2$) is equivalent to:
$\frac{\partial^{|\mu|}\hat{a}(\xi_1,\xi_2)}{\partial\xi_1^{\mu_1}\,\partial\xi_2^{\mu_2}}\Big|_{\xi_1=\pi,\,\xi_2=\pi} = 0 \quad \forall\,|\mu| \leqslant \kappa.$
The following lemma shows that our masks $a m$ in (28) indeed satisfy the sum rules of order at least $2 n + 1$ with respect to the quincunx dilation matrix.
Lemma 5.
Let $a m$ be defined as in (28). Define:
$Q_m(x,y) := \int_{-1}^{x}(1-t^2)^{n-2k_0}\,g_{n,k_0}(t,y)\,dt \ \text{ for } m_2 = 2k_0 \quad\text{and}\quad Q_m(x,y) := \int_{-1}^{x}(1-t^2)^{n-1-2k_0}\,h_{n,k_0}(t,y)\,dt \ \text{ for } m_2 = 2k_0+1,$
where $x = \cos\xi_1$ and $y = \cos\xi_2$. Then, we have,
$\frac{\partial^{|\mu|}Q_m(x,y)}{\partial x^{\mu_1}\,\partial y^{\mu_2}}\Big|_{x=-1,\,y=-1} = 0 \quad \forall\,|\mu| \leqslant n.$
It follows that,
$\frac{\partial^{|\mu|}\hat{a}_m(\xi_1,\xi_2)}{\partial\xi_1^{\mu_1}\,\partial\xi_2^{\mu_2}}\Big|_{\xi_1=\pi,\,\xi_2=\pi} = 0 \quad \forall\,|\mu| \leqslant 2n,$
that is, $a m$ satisfies the sum rules of order $2 n + 1$.
Proof.
Note that $Q m ( x , y )$ is the linear combination of terms of the form:
$P(x,y) = (1-y^2)^{j}\int_{-1}^{x}(1-t^2)^{n-\alpha+j}(1-ty)^{\alpha-2j}\,dt$
with $α = 2 k 0$ for $m 2 = 2 k 0$ and $α = 2 k 0 + 1$ for $m 2 = 2 k 0 + 1$. Using the Leibniz rule, one can easily check that,
$\frac{\partial^{|\mu|}P(x,y)}{\partial x^{\mu_1}\,\partial y^{\mu_2}}\Big|_{x=-1,\,y=-1} = 0 \quad \forall\,|\mu| \leqslant n.$
Hence (30) holds. That is, expanded about the point $x = y = − 1$, $Q m ( x , y )$ is a polynomial of the form:
$Q_m(x,y) = \sum_{\nu_1+\nu_2 > n} c_{\nu}\,(1+x)^{\nu_1}(1+y)^{\nu_2}.$
Thus, at $ξ 1 = π , ξ 2 = π$, we have,
$\hat{a}_m(\xi_1,\xi_2) = o\big((1+\cos\xi_1)^{\mu_1}(1+\cos\xi_2)^{\mu_2}\big) = o\big((\xi_1-\pi)^{2\mu_1}(\xi_2-\pi)^{2\mu_2}\big), \quad \mu_1+\mu_2 = n.$
It follows that,
$\frac{\partial^{|\mu|}\hat{a}_m(\xi_1,\xi_2)}{\partial\xi_1^{\mu_1}\,\partial\xi_2^{\mu_2}}\Big|_{\xi_1=\pi,\,\xi_2=\pi} = 0 \quad \forall\,|\mu| \leqslant 2n.$
We are done. ☐
Second, consider the interpolatory condition in (6). In dimension $d = 2$, the mask a being a quincunx interpolatory mask is equivalent to:
$a ^ ( ξ 1 , ξ 2 ) + a ^ ( ξ 1 + π , ξ 2 + π ) = 1 , ( ξ 1 , ξ 2 ) ∈ R 2 .$
We have the following lemma concerning $a ^ m ( ξ 1 , ξ 2 ) + a ^ m ( ξ 1 + π , ξ 2 + π )$.
Lemma 6.
Let $a m$ be defined as in (28). Then the function:
$a ^ m ( ξ 1 , ξ 2 ) + a ^ m ( ξ 1 + π , ξ 2 + π ) , ( ξ 1 , ξ 2 ) ∈ R 2$
is independent of the variable $ξ 2$.
Proof.
It is easy to show that for $m = ( 2 n + 1 − 2 k 0 , 2 k 0 )$, we have:
$\hat{a}_m(\xi_1,\xi_2) + \hat{a}_m(\xi_1+\pi,\xi_2+\pi) = c_{n,k_0}\int_{-1}^{1}(1-t^2)^{n-2k_0}\,g_{n,k_0}(t,y)\,dt$
and for $m = ( 2 n + 1 − ( 2 k 0 + 1 ) , 2 k 0 + 1 )$, we have:
$\hat{a}_m(\xi_1,\xi_2) + \hat{a}_m(\xi_1+\pi,\xi_2+\pi) = d_{n,k_0}\int_{-1}^{1}(1-t^2)^{n-2k_0-1}\,h_{n,k_0}(t,y)\,dt$
with $x = cos ξ 1 , y = cos ξ 2$. Hence, we only need to show that the right-hand sides of (33) and (34) are independent of the variable y. We next prove (33). The proof for (34) is analogous.
One can show that,
$\int_{-1}^{1}(1-t^2)^{m}\,t^{2\ell}\,dt = \frac{\Gamma(m+1)\,\Gamma(\ell+1/2)}{\Gamma(m+3/2+\ell)},$
where the $\Gamma$ function is defined by $\Gamma(x) = \int_0^{\infty} t^{x-1}e^{-t}\,dt$, and we have,
$\binom{n}{k} = \frac{\Gamma(n+1)}{\Gamma(k+1)\,\Gamma(n-k+1)}, \qquad \Gamma(1/2+n) = \sqrt{\pi}\,\binom{n-1/2}{n}\,n! = \sqrt{\pi}\,\frac{(2n-1)!!}{2^n}.$
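Both identities are classical; they can be confirmed numerically, as in the following sketch (the Simpson-rule helper and the chosen sample values are illustrative):

```python
import math

def simpson(f, a, b, n=4000):
    """Composite Simpson rule on [a, b] (n even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# ∫_{-1}^{1} (1-t^2)^m t^{2l} dt = Γ(m+1) Γ(l+1/2) / Γ(m+3/2+l), checked at m = 4, l = 3
m, l = 4, 3
lhs = simpson(lambda t: (1 - t * t) ** m * t ** (2 * l), -1.0, 1.0)
rhs = math.gamma(m + 1) * math.gamma(l + 0.5) / math.gamma(m + l + 1.5)
print(abs(lhs - rhs) < 1e-8)  # → True

# Γ(1/2 + n) = sqrt(pi) (2n-1)!! / 2^n, checked at n = 5
n = 5
double_fact = math.prod(range(1, 2 * n, 2))  # (2n-1)!! = 1·3·5·7·9
print(abs(math.gamma(0.5 + n) - math.sqrt(math.pi) * double_fact / 2 ** n) < 1e-9)  # → True
```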
Recalling the definition of $g n , k 0 ( t , y )$ in (27), the right-hand side of (33) is the linear combination of terms of the form:
$\int_{-1}^{1}(1-t^2)^{n-2k_0+j}(1-ty)^{2(k_0-j)}\,dt = \sum_{\ell=0}^{k_0-j} y^{2\ell}\binom{2(k_0-j)}{2\ell}\int_{-1}^{1}(1-t^2)^{n-2k_0+j}\,t^{2\ell}\,dt = \sum_{\ell=0}^{k_0-j} y^{2\ell}\binom{2(k_0-j)}{2\ell}\frac{\Gamma(n-2k_0+j+1)\,\Gamma(\ell+1/2)}{\Gamma(n-2k_0+j+\ell+3/2)} = \sum_{\ell=0}^{k_0-j} y^{2(k_0-j-\ell)}\binom{2(k_0-j)}{2(k_0-j-\ell)}\frac{\Gamma(n-2k_0+j+1)\,\Gamma(k_0-j-\ell+1/2)}{\Gamma(n-k_0-\ell+3/2)}.$
Moreover,
$\binom{n-k_0}{k_0-j}\binom{k_0-1/2}{j}\binom{2(k_0-j)}{2(k_0-j-\ell)}\frac{\Gamma(n-2k_0+j+1)\,\Gamma(k_0-j-\ell+1/2)}{\Gamma(n-k_0-\ell+3/2)} = \frac{(n-k_0)!}{k_0!}\binom{k_0-\ell}{j}\binom{k_0}{\ell}\frac{\Gamma(k_0+1/2)\,\Gamma(1/2)}{\Gamma(n+1-k_0-\ell+1/2)\,\Gamma(\ell+1/2)}.$
Thus,
$G(y) := \int_{-1}^{1}(1-t^2)^{n-2k_0}\,g_{n,k_0}(t,y)\,dt = \sum_{j=0}^{k_0}\sum_{\ell=0}^{k_0-j}\binom{n-k_0}{k_0-j}\binom{k_0-1/2}{j}\binom{2(k_0-j)}{2(k_0-j-\ell)}\frac{\Gamma(n-2k_0+j+1)\,\Gamma(k_0-j-\ell+1/2)}{\Gamma(n-k_0-\ell+3/2)}\,y^{2(k_0-j-\ell)}(1-y^2)^{j}.$
By changing the order of summation over j and ℓ in the above, we obtain,
$G(y) = \sum_{\ell=0}^{k_0}\sum_{j=0}^{k_0-\ell}\binom{n-k_0}{k_0-j}\binom{k_0-1/2}{j}\binom{2(k_0-j)}{2(k_0-j-\ell)}\frac{\Gamma(n-2k_0+j+1)\,\Gamma(k_0-j-\ell+1/2)}{\Gamma(n-k_0-\ell+3/2)}\,y^{2(k_0-\ell-j)}(1-y^2)^{j} = \sum_{\ell=0}^{k_0}\sum_{j=0}^{k_0-\ell}\frac{(n-k_0)!}{k_0!}\binom{k_0-\ell}{j}\binom{k_0}{\ell}\frac{\Gamma(k_0+1/2)\,\Gamma(1/2)}{\Gamma(n+1-k_0-\ell+1/2)\,\Gamma(\ell+1/2)}\,y^{2(k_0-\ell-j)}(1-y^2)^{j} = \frac{(n-k_0)!}{k_0!}\sum_{\ell=0}^{k_0}\binom{k_0}{\ell}\frac{\Gamma(k_0+1/2)\,\Gamma(1/2)}{\Gamma(n+1-k_0-\ell+1/2)\,\Gamma(\ell+1/2)}\sum_{j=0}^{k_0-\ell}\binom{k_0-\ell}{j}\,y^{2(k_0-\ell-j)}(1-y^2)^{j} = \frac{(n-k_0)!}{k_0!}\sum_{\ell=0}^{k_0}\binom{k_0}{\ell}\frac{\Gamma(k_0+1/2)\,\Gamma(1/2)}{\Gamma(n+1-k_0-\ell+1/2)\,\Gamma(\ell+1/2)},$
which is a constant independent of y. We are done. ☐
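The conclusion of Lemma 6, that $G(y)$ is constant in $y$, is also easy to observe numerically. The following sketch (with an illustrative helper for half-integer binomial coefficients) samples $G$ at several values of $y$ for one admissible pair $(n, k_0)$:

```python
import math

def gbinom(a, k):
    """Generalized binomial coefficient C(a, k) for real a, integer k >= 0."""
    r = 1.0
    for i in range(k):
        r *= (a - i) / (i + 1)
    return r

def simpson(f, a, b, n=2000):
    """Composite Simpson rule on [a, b] (n even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def G(n, k0, y):
    """G(y) = integral of (1-t^2)^{n-2k0} g_{n,k0}(t, y) over [-1, 1], with g as in (27)."""
    def g(t):
        return sum(math.comb(n - k0, k0 - j) * (1 - t * y) ** (2 * (k0 - j))
                   * gbinom(k0 - 0.5, j) * ((1 - t * t) * (1 - y * y)) ** j
                   for j in range(k0 + 1))
    return simpson(lambda t: (1 - t * t) ** (n - 2 * k0) * g(t), -1.0, 1.0)

# n = 5, k0 = 2 (so n > 2k0): the sampled values of G agree to quadrature accuracy
samples = [G(5, 2, y) for y in (-1.0, -0.3, 0.0, 0.4, 1.0)]
print(max(samples) - min(samples) < 1e-9)  # → True
```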
Now, we are ready to prove Theorem 3.
Proof of Theorem 3.
Obviously, by the definition, $a ^ m$ is nonnegative. We will show that $a m$ with $m = ( m 1 , m 2 )$ satisfies the following properties:
(1)
$a m$ is a quincunx interpolatory mask.
(2)
$a m$ is supported on $[ − m 1 , m 1 ] × [ − m 2 , m 2 ] ∩ Z 2$.
(3)
$a m$ satisfies the sum rules of order $2 ( n + 1 )$ with respect to the quincunx lattice $Q 2$.
Item (2) can be checked directly. We first prove item (1). By Lemma 6, we see that,
$G_{n,k_0} := \int_{-1}^{1}(1-t^2)^{n-2k_0}\,g_{n,k_0}(t,y)\,dt \quad\text{and}\quad H_{n,k_0} := \int_{-1}^{1}(1-t^2)^{n-1-2k_0}\,h_{n,k_0}(t,y)\,dt$
are both independent of y. Consequently, by (32), (33), and (34), we see that $a m$ being interpolatory is equivalent to:
$c_{n,k_0}\,G_{n,k_0} = 1 \quad\text{and}\quad d_{n,k_0}\,H_{n,k_0} = 1$
for any integer pair $( n , k 0 )$ with $n > 2 k 0 ⩾ 0$. We prove (35) by induction on n.
It is straightforward to show that (35) holds when $n = 1$ ($k 0$ must be 0) for both $c n , k 0 G n , k 0$ and $d n , k 0 H n , k 0$, since it reduces to the case $m = ( 2 n + 1 , 0 )$ of the one-dimensional interpolatory mask $a 2 n + 1$ and the case $m = ( 2 n , 1 )$ in [18].
Suppose that (35) holds for $( n 0 , k 0 )$ with any $n 0 ⩽ n$ and $k 0$ satisfying $n 0 > 2 k 0$. We next show that (35) holds for $n + 1$.
Note that by the independence of $G n , k 0$ and $H n , k 0$ with respect to y, we have (by substituting $y = 1$),
$c_{n,k_0}G_{n,k_0} = c_{n,k_0}\binom{n-k_0}{k_0}\int_{-1}^{1}(1-t^2)^{n-2k_0}(1-t)^{2k_0}\,dt, \qquad d_{n,k_0}H_{n,k_0} = d_{n,k_0}\binom{n-1-k_0}{k_0}\int_{-1}^{1}(1-t^2)^{n-1-2k_0}(1-t)^{2k_0+1}\,dt.$
Using integration by parts, it is easy to show that,
$\alpha_{n,k_0} := \int_{-1}^{1}(1-t^2)^{n-2k_0}(1-t)^{2k_0}\,dt \quad\text{and}\quad \beta_{n,k_0} := \int_{-1}^{1}(1-t^2)^{n-1-2k_0}(1-t)^{2k_0+1}\,dt$
satisfy,
$\alpha_{n+1,k_0+1} = \frac{k_0+1/2}{n-2k_0}\,\alpha_{n,k_0} + \beta_{n,k_0} \quad\text{and}\quad \beta_{n+1,k_0} = \alpha_{n,k_0} + \frac{k_0}{n-2k_0+1}\,\beta_{n,k_0-1}.$
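Both recursions follow from integration by parts; they can also be verified directly by quadrature, as in this sketch (helper names and the sample pair $(n, k_0)$ are illustrative):

```python
import math

def simpson(f, a, b, n=4000):
    """Composite Simpson rule on [a, b] (n even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def alpha(n, k0):
    return simpson(lambda t: (1 - t * t) ** (n - 2 * k0) * (1 - t) ** (2 * k0), -1.0, 1.0)

def beta(n, k0):
    return simpson(lambda t: (1 - t * t) ** (n - 1 - 2 * k0) * (1 - t) ** (2 * k0 + 1), -1.0, 1.0)

# check both recursions at n = 5, k0 = 1
n, k0 = 5, 1
print(abs(alpha(n + 1, k0 + 1)
          - ((k0 + 0.5) / (n - 2 * k0) * alpha(n, k0) + beta(n, k0))) < 1e-8)  # → True
print(abs(beta(n + 1, k0)
          - (alpha(n, k0) + k0 / (n - 2 * k0 + 1) * beta(n, k0 - 1))) < 1e-8)  # → True
```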
Consequently, by our induction hypothesis, we have,
$c_{n+1,k_0+1}G_{n+1,k_0+1} = c_{n+1,k_0+1}\binom{n-k_0}{k_0+1}\left[\frac{k_0+1/2}{n-2k_0}\,\alpha_{n,k_0} + \beta_{n,k_0}\right] = \frac{c_{n+1,k_0+1}}{c_{n,k_0}}\cdot\frac{k_0+1/2}{n-2k_0}\cdot\frac{\binom{n-k_0}{k_0+1}}{\binom{n-k_0}{k_0}}\cdot c_{n,k_0}\binom{n-k_0}{k_0}\alpha_{n,k_0} + \frac{c_{n+1,k_0+1}}{d_{n,k_0}}\cdot\frac{\binom{n-k_0}{k_0+1}}{\binom{n-1-k_0}{k_0}}\cdot d_{n,k_0}\binom{n-1-k_0}{k_0}\beta_{n,k_0} = \frac{c_{n+1,k_0+1}}{c_{n,k_0}}\cdot\frac{k_0+1/2}{n-2k_0}\cdot\frac{\binom{n-k_0}{k_0+1}}{\binom{n-k_0}{k_0}} + \frac{c_{n+1,k_0+1}}{d_{n,k_0}}\cdot\frac{\binom{n-k_0}{k_0+1}}{\binom{n-1-k_0}{k_0}} = \frac{k_0+1}{n+1}\cdot\frac{k_0+1/2}{n-2k_0}\cdot\frac{n-2k_0}{k_0+1} + \frac{(k_0+1)(n-k_0+1/2)}{(n+1)(n-k_0)}\cdot\frac{n-k_0}{k_0+1} = \frac{k_0+1/2}{n+1} + \frac{n-k_0+1/2}{n+1} = 1$
and,
$d_{n+1,k_0}H_{n+1,k_0} = d_{n+1,k_0}\binom{n-k_0}{k_0}\left[\alpha_{n,k_0} + \frac{k_0}{n-2k_0+1}\,\beta_{n,k_0-1}\right] = \frac{d_{n+1,k_0}}{c_{n,k_0}}\cdot c_{n,k_0}\binom{n-k_0}{k_0}\alpha_{n,k_0} + \frac{d_{n+1,k_0}}{d_{n,k_0-1}}\cdot\frac{\binom{n-k_0}{k_0}}{\binom{n-k_0}{k_0-1}}\cdot\frac{k_0}{n-2k_0+1}\cdot d_{n,k_0-1}\binom{n-k_0}{k_0-1}\beta_{n,k_0-1} = \frac{d_{n+1,k_0}}{c_{n,k_0}} + \frac{d_{n+1,k_0}}{d_{n,k_0-1}}\cdot\frac{\binom{n-k_0}{k_0}}{\binom{n-k_0}{k_0-1}}\cdot\frac{k_0}{n-2k_0+1} = \frac{n+1-k_0}{n+1} + \frac{k_0}{n+1}\cdot\frac{n+1-2k_0}{k_0}\cdot\frac{k_0}{n+1-2k_0} = \frac{n+1-k_0}{n+1} + \frac{k_0}{n+1} = 1.$
Therefore, by induction, we conclude that (35) holds for any integer pair $( n , k 0 )$ with $n > 2 k 0 ⩾ 0$ and hence item (1) holds.
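Beyond the induction, (35) can be spot-checked numerically over a range of admissible pairs $(n, k_0)$, using the $y = 1$ specialization above (the helper names are illustrative):

```python
import math

def simpson(f, a, b, n=4000):
    """Composite Simpson rule on [a, b] (n even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def check(n, k0):
    """Verify c_{n,k0} G_{n,k0} = 1 and d_{n,k0} H_{n,k0} = 1 via the y = 1 reduction."""
    c = (math.factorial(2 * (n + 1 - k0)) * math.factorial(k0)
         / (2 ** (2 * (n + 1 - k0)) * math.factorial(n) * math.factorial(n + 1 - k0)))
    d = (math.factorial(2 * (n - k0)) * math.factorial(k0)
         / (2 ** (2 * (n - k0)) * math.factorial(n) * math.factorial(n - 1 - k0)))
    alpha = simpson(lambda t: (1 - t * t) ** (n - 2 * k0) * (1 - t) ** (2 * k0), -1.0, 1.0)
    beta = simpson(lambda t: (1 - t * t) ** (n - 1 - 2 * k0) * (1 - t) ** (2 * k0 + 1), -1.0, 1.0)
    cG = c * math.comb(n - k0, k0) * alpha
    dH = d * math.comb(n - 1 - k0, k0) * beta
    return abs(cG - 1) < 1e-8 and abs(dH - 1) < 1e-8

# all admissible pairs with n <= 6
print(all(check(n, k0) for n in range(1, 7) for k0 in range(n) if n > 2 * k0))  # → True
```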
It remains to show that item (3) holds; that is, $a m$ satisfies the sum rules of order $2 ( n + 1 )$. By Lemma 5, $a m$ satisfies the sum rules of order $2 n + 1$; that is, $\frac{\partial^{|\mu|}\hat{a}_m(\xi_1,\xi_2)}{\partial\xi_1^{\mu_1}\,\partial\xi_2^{\mu_2}}\big|_{\xi_1=\pi,\,\xi_2=\pi} = 0$ for all $| μ | ⩽ 2 n$.
Noting that $a m$ is full-axis symmetric, we have $\frac{\partial^{|\mu|}\hat{a}_m(\xi_1,\xi_2)}{\partial\xi_1^{\mu_1}\,\partial\xi_2^{\mu_2}}\big|_{\xi_1=\pi,\,\xi_2=\pi} = 0$ for any $| μ |$ odd. Together with $a m$ being interpolatory, we conclude that these derivatives vanish for all $| μ | ⩽ 2 n + 1$; that is, $a m$ satisfies the sum rules of order $2 ( n + 1 )$.
We are done. ☐

## 4. Conclusions

We show that there exists a family of unique quincunx interpolatory masks in arbitrary dimensions. Such a family of masks is real-valued and full-axis symmetric. We present the explicit form of the family of two-dimensional quincunx interpolatory masks, which also shows their nonnegativity property in the frequency domain. The explicit form of such a family in dimension $d ⩾ 3$ remains open; we conjecture that it exists and that the nonnegativity property holds as well.

## Acknowledgments

The author would like to thank the anonymous reviewers for their valuable comments and suggestions to improve the quality of the paper. The research of this paper was supported in part by the Research Grants Council of Hong Kong (Project No. CityU 11300717) and City University of Hong Kong (Project Nos. 7200462 and 7004445).

## Conflicts of Interest

The author declares no conflict of interest.

## References

1. Han, B.; Jia, R.Q. Multivariate refinement equations and convergence of subdivision schemes. SIAM J. Math. Anal. 1998, 29, 1177–1199.
2. Dyn, N.; Levin, D. Interpolating subdivision schemes for the generation of curves and surfaces. In Multivariate Approximation and Interpolation; Haussmann, W., Jetter, K., Eds.; Birkhauser Verlag: Basel, Switzerland, 1990; pp. 91–106.
3. Chui, C.K.; Han, B.; Zhuang, X. A dual-chain approach for bottom-up construction of wavelet filters with any integer dilation. Appl. Comput. Harmon. Anal. 2012, 33, 204–225.
4. Cohen, A.; Daubechies, I. Non-separable bidimensional wavelet bases. Rev. Mat. Iber. 1993, 9, 51–137.
5. Han, B.; Kwon, S.G.; Zhuang, X. Generalized interpolating refinable function vectors. J. Comput. Appl. Math. 2009, 227, 254–270.
6. Han, B.; Jiang, Q.T.; Shen, Z.; Zhuang, X. Symmetric canonical quincunx tight framelets with high vanishing moments and smoothness. Math. Comp. 2017.
7. Han, B.; Zhuang, X. Analysis and construction of multivariate interpolating refinable function vectors. Acta Appl. Math. 2009, 107, 143–171.
8. Han, B.; Zhuang, X. Matrix extension with symmetry and its applications to symmetric orthonormal multiwavelets. SIAM J. Matrix Anal. Appl. 2010, 42, 2297–2317.
9. Mo, Q.; Zhuang, X. Matrix splitting with symmetry and dyadic framelet filter banks over algebraic number fields. Linear Algebra Appl. 2012, 437, 2650–2679.
10. Zhuang, X. Matrix extension with symmetry and construction of biorthogonal multiwavelets with any integer dilation. Appl. Comput. Harmon. Anal. 2012, 33, 159–181.
11. Deslauriers, G.; Dubuc, S. Symmetric iterative interpolation processes. Constr. Approx. 1989, 5, 49–68.
12. Daubechies, I. Ten Lectures on Wavelets; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1992.
13. Dyn, N.; Gregory, J.A.; Levin, D. A butterfly subdivision scheme for surface interpolation with tension control. ACM Trans. Graph. 1990, 9, 160–169.
14. Deslauriers, G.; Dubois, J.; Dubuc, S. Multidimensional iterative interpolation. Can. J. Math. 1991, 43, 297–312.
15. Mongeau, J.P.; Deslauriers, G. Continuous and differentiable multidimensional iterative interpolation. Linear Algebra Appl. 1993, 180, 95–120.
16. Riemenschneider, S.D.; Shen, Z. Multidimensional interpolatory subdivision schemes. SIAM J. Numer. Anal. 1997, 34, 2357–2381.
17. Han, B.; Jia, R.Q. Optimal interpolatory subdivision schemes in multidimensional spaces. SIAM J. Math. Anal. 1999, 36, 105–124.
18. Han, B.; Jia, R.Q. Quincunx fundamental refinable functions and quincunx biorthogonal wavelets. Math. Comp. 2002, 71, 165–196.
19. Chui, C.K.; Lai, M.J. Vandermonde determinants and Lagrange interpolation in ℝs. In Nonlinear and Convex Analysis; Lin, B.L., Simons, S., Eds.; Marcel Dekker: New York, NY, USA, 1987; pp. 23–35.
20. Meyer, Y. Wavelets and Operators; Cambridge University Press: Cambridge, UK, 1992; Volume 1.
21. Micchelli, C.A. Interpolatory subdivision schemes and wavelets. J. Approx. Theory 1996, 86, 41–71.
