Article

ω-Circulant Matrices: A Selection of Modern Applications from Preconditioning of Approximated PDEs to Subdivision Schemes

by Rafael Díaz Fuentes 1,2, Stefano Serra-Capizzano 1,3,* and Rosita Luisa Sormani 4,*
1 Department of Science and High Technology, University of Insubria, Via Valleggio 11, 22100 Como, Italy
2 Department of Mathematics, University of Pisa, Largo Bruno Pontecorvo 5, 56127 Pisa, Italy
3 Division of Scientific Computing, Department of Information Technology, University of Uppsala, Box 337, SE-751 05 Uppsala, Sweden
4 Department of Theoretical and Applied Sciences, University of Insubria, Via Dunant 3, 21100 Varese, Italy
* Authors to whom correspondence should be addressed.
Algorithms 2023, 16(7), 328; https://doi.org/10.3390/a16070328
Submission received: 8 June 2023 / Revised: 3 July 2023 / Accepted: 6 July 2023 / Published: 8 July 2023

Abstract: It is well known that $\omega$-circulant matrices with $\omega \neq 0$ can be simultaneously diagonalized by a transform matrix, which can be factored as the product of a diagonal matrix, depending on $\omega$, and of the unitary matrix $F_n$ associated with the Fast Fourier Transform. Hence, all the sets of $\omega$-circulants form algebras whose computational power, in terms of complexity, is the same as that of the classical circulants with $\omega = 1$. However, stability is a delicate issue, since the condition number of the transform is equal to that of its diagonal part, which tends to $\max\{|\omega|, |\omega|^{-1}\}$. For $\omega = 0$, the set of related matrices is still an algebra, namely the algebra of lower triangular Toeplitz matrices, but these matrices do not admit a common transform, since most of them (all except the multiples of the identity) are non-diagonalizable. In the present work, we review two modern applications, ranging from parallel computing in the preconditioning of PDE approximations to algorithms for subdivision schemes, and we emphasize the role of such algebras. For both problems, a few numerical tests are conducted and critically discussed, and the related conclusions are drawn.

1. Introduction

When dealing with structured matrices of Toeplitz type [1,2] not related to fast trigonometric transforms, the problems of computing the solution of large linear systems, the matrix-vector product, or the eigenvalues are greatly accelerated by using, as approximations, matrices belonging to algebras associated with trigonometric transforms [3,4,5,6]. Among them, a very classical choice is the algebra of $\omega$-circulant matrices with $\omega \neq 0$ (see the seminal book [7] and references therein). In this work, after briefly introducing these matrices and reviewing a few related applications, we focus on new, challenging problems, such as parallel computing in the preconditioning of approximated partial differential equations (PDEs) and algorithms for subdivision schemes, highlighting the benefits and drawbacks of employing $\omega$-circulant matrices.
The paper is organized in the following manner. In Section 2 we introduce the basic notions on circulant and ω -circulant matrices. In Section 3, after recalling a technique for the parallel computing of the matrix-vector product and matrix inversion concerning ω -circulant matrices, we present a general procedure (introduced by Bini [8]) to precondition (block) triangular Toeplitz linear systems. Then, we move on to scalar subdivision schemes, introducing them in Section 4 and presenting the interpolation model as a direct problem with its associated inverse problem in Section 5. Section 6 is devoted to selected numerical experiments and the related critical discussion. To conclude, in Section 7 we draw conclusions and discuss a few open problems, with special attention to the use of structured matrices in subdivision schemes.

2. Some Remarks on Circulant and ω -Circulant Matrices

In this introductory section, we lay the groundwork for the main part of this paper by recalling the basic definitions and properties of both circulant and ω -circulant matrices. More information can be found in Ref. [7]. We recall that circulant and ω -circulant matrices have played a major role in the last four decades for the fast solution of structured linear systems, mainly of Toeplitz type, but also stemming from approximated PDEs. Classical references are Refs. [2,9,10,11,12,13,14], where preconditioned Krylov methods, local Fourier analysis, and its GLT generalizations are treated with attention to the circulant structure and to the related fast Fourier transform (FFT); see Ref. [15]. More specific references for the numerical treatment of PDEs via ω -circulant preconditioning or directly by multigrid solvers or combination of them can be found in Refs. [10,16,17,18].
Definition 1.
Consider $\alpha = [\alpha_0, \alpha_1, \ldots, \alpha_{n-1}]$ with $\alpha_j \in \mathbb{R}$. A square matrix $C \in \mathbb{R}^{n \times n}$ is called circulant, and it is denoted by $C = \mathrm{circ}(\alpha)$, if we have
$$(C)_{s,t} = \alpha_{(s-t) \bmod n}, \qquad s, t = 0, \ldots, n-1.$$
An equivalent and compact representation of C is given by
$$C = \sum_{j=0}^{n-1} \Pi_n^j \, \alpha_j,$$
where $\Pi_n$ is the permutation matrix
$$\Pi_n = \begin{pmatrix} 0 & 0 & \cdots & 0 & 1 \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \ddots & & \vdots \\ \vdots & & \ddots & 0 & 0 \\ 0 & \cdots & 0 & 1 & 0 \end{pmatrix}.$$
Note that for any fixed $p, q \in \mathbb{N}$ the matrix $C = \sum_{j=-p}^{q} \Pi_n^j \, \alpha_j$ is circulant, since this sum can always be traced back to the form in the above definition.
Denoting by $\cdot^*$ the conjugate transpose, circulant matrices admit the following spectral decomposition.
Proposition 1.
C can be diagonalized as
$$C = F_n \, L_n \, F_n^*,$$
where
$$L_n = \operatorname{diag}_{s=0,\ldots,n-1} \left( \sum_{j=0}^{n-1} \exp\left( \frac{2\pi i s j}{n} \right) \alpha_j \right)$$
is the diagonal matrix containing the eigenvalues of C and
$$F_n = \frac{1}{\sqrt{n}} \left[ \exp\left( -\frac{2\pi i j k}{n} \right) \right]_{j,k=0}^{n-1}, \qquad i^2 = -1,$$
is the unitary Fourier matrix.
Proof. 
By a direct computation of the eigenvalues, it is easy to verify that $\Pi_n$ factorizes as
$$\Pi_n = F_n \, \Omega_n \, F_n^* \tag{1}$$
with
$$\Omega_n = \operatorname{diag}_{s=0,\ldots,n-1} \left( \exp\left( \frac{2\pi i s}{n} \right) \right). \tag{2}$$
The thesis follows. □
Remark 1.
The eigenvalues of C, namely the diagonal elements of $L_n$, are given by the Fourier transform of the first column of C. Alternatively, they can be seen as
$$\Lambda(C) = \left\{ f\left( \frac{2\pi s}{n} \right), \; s = 0, \ldots, n-1 \right\},$$
where the function $f(\theta) := \sum_{j=0}^{n-1} \alpha_j \exp(i j \theta)$ is called the (spectral) symbol of C.
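As a numerical sanity check of Remark 1, the sketch below (our own illustration; the helper names `circulant` and `symbol_eigs` are not from the paper) builds a dense circulant matrix and verifies that the values of the symbol on the grid $2\pi s/n$ form its spectrum:

```python
import numpy as np

def circulant(alpha):
    """Dense circulant matrix with (C)_{s,t} = alpha_{(s-t) mod n} (Definition 1)."""
    n = len(alpha)
    return np.array([[alpha[(s - t) % n] for t in range(n)] for s in range(n)])

def symbol_eigs(alpha):
    """Symbol f(theta) = sum_j alpha_j exp(i j theta) sampled on theta = 2*pi*s/n."""
    n = len(alpha)
    j = np.arange(n)
    return np.array([np.sum(alpha * np.exp(1j * j * 2 * np.pi * s / n)) for s in range(n)])

rng = np.random.default_rng(0)
alpha = rng.standard_normal(8)
ev = np.linalg.eigvals(circulant(alpha))
# every symbol value matches an eigenvalue of the dense matrix
assert all(np.min(np.abs(ev - lam)) < 1e-8 for lam in symbol_eigs(alpha))
```

The multiset comparison (minimum distance per symbol value) avoids any dependence on the ordering returned by the dense eigensolver.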
ω -circulant matrices represent an extension of the notion of circulant matrix (see, e.g., Refs. [6,19,20] for more details).
Definition 2.
Consider $\alpha = [\alpha_0, \alpha_1, \ldots, \alpha_{n-1}]$ with $\alpha_j \in \mathbb{R}$. A square matrix $C_\omega \in \mathbb{R}^{n \times n}$ is called $\omega$-circulant, and it is denoted by $C_\omega = \mathrm{circ}_\omega(\alpha)$, if it holds
$$(C_\omega)_{s,t} = \begin{cases} \alpha_{(s-t) \bmod n}, & \text{if } s \geq t, \\ \omega \, \alpha_{(s-t) \bmod n}, & \text{if } s < t, \end{cases} \qquad s, t = 0, \ldots, n-1.$$
A compact representation for $C_\omega$ is given by
$$C_\omega = \sum_{j=0}^{n-1} \Pi_{n,\omega}^j \, \alpha_j,$$
where $\Pi_{n,\omega}$ is the matrix
$$\Pi_{n,\omega} = \begin{pmatrix} 0 & 0 & \cdots & 0 & \omega \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \ddots & & \vdots \\ \vdots & & \ddots & 0 & 0 \\ 0 & \cdots & 0 & 1 & 0 \end{pmatrix}.$$
Note that setting ω = 1 leads back to Definition 1. An analogous spectral decomposition can be derived.
Proposition 2.
$C_\omega$ can be diagonalized as
$$C_\omega = F_{n,\omega} \, L_{n,\omega} \, F_{n,\omega}^{-1},$$
where
$$L_{n,\omega} = \operatorname{diag}_{s=0,\ldots,n-1} \left( \sum_{j=0}^{n-1} \omega^{j/n} \exp\left( \frac{2\pi i s j}{n} \right) \alpha_j \right)$$
is the diagonal matrix containing the eigenvalues of $C_\omega$ and
$$F_{n,\omega} = D_\omega F_n, \qquad D_\omega = \operatorname{diag}_{s=0,\ldots,n-1} \left( \omega^{-s/n} \right).$$
Proof. 
Let us write $\omega = \rho \exp(i\psi)$ with $\rho > 0$ and consider the n-th root $\omega_n = \rho^{1/n} \exp\left( \frac{i\psi}{n} \right)$. Applying (1), the matrix $\Pi_{n,\omega}$ is diagonalized as
$$\Pi_{n,\omega} = \omega_n \, D_\omega \, \Pi_n \, D_\omega^{-1} = \omega_n \, D_\omega \, F_n \, \Omega_n \, F_n^* \, D_\omega^{-1}$$
and the thesis follows. □
The above factorization is equivalent to
$$\Pi_{n,\omega} = |\omega|^{1/n} \, D_\omega \, F_n \, \Omega_{n,\omega} \, F_n^* \, D_\omega^{-1}$$
with
$$\Omega_{n,\omega} = \operatorname{diag}_{s=0,\ldots,n-1} \left( \exp\left( \frac{(2\pi s + \psi) \, i}{n} \right) \right),$$
which, compared with (2), gives a deeper understanding of the role of $\omega$. As in the circulant case, we stress that for any fixed $p, q \in \mathbb{N}$ the matrix $C_\omega = \sum_{j=-p}^{q} \Pi_{n,\omega}^j \, \alpha_j$ is $\omega$-circulant.
Remark 2.
With $\omega = \rho \exp(i\psi)$, the eigenvalues of $C_\omega$ can be expressed as
$$\Lambda(C_\omega) = \left\{ f\left( \frac{2\pi s + \psi}{n} \right), \; s = 0, \ldots, n-1 \right\},$$
where the function $f(\theta) := \sum_{j=0}^{n-1} \alpha_j \, \rho^{j/n} \exp(i j \theta)$ is called the (spectral) symbol of $C_\omega$.
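Remark 2 can also be verified numerically. The sketch below (ours; the helper name `circ_omega` is an assumption) builds a dense $\omega$-circulant matrix for a genuinely complex $\omega = \rho e^{i\psi}$ and checks that the symbol sampled at $(2\pi s + \psi)/n$ reproduces its spectrum:

```python
import numpy as np

def circ_omega(alpha, omega):
    """Dense omega-circulant matrix from its first column (Definition 2)."""
    n = len(alpha)
    return np.array([[alpha[(s - t) % n] if s >= t else omega * alpha[(s - t) % n]
                      for t in range(n)] for s in range(n)])

n = 8
rng = np.random.default_rng(0)
alpha = rng.standard_normal(n)
rho, psi = 0.7, 0.9                      # omega = rho * exp(i psi)
omega = rho * np.exp(1j * psi)
C = circ_omega(alpha, omega)
theta = (2 * np.pi * np.arange(n) + psi) / n
j = np.arange(n)
lams = np.array([np.sum(alpha * rho ** (j / n) * np.exp(1j * j * t)) for t in theta])
ev = np.linalg.eigvals(C)
# the symbol values form the spectrum of C_omega (as a multiset)
assert all(np.min(np.abs(ev - lam)) < 1e-7 for lam in lams)
```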
To conclude this section, we extend the above definitions and properties to the block case.
Definition 3.
Consider $A = [A_0, A_1, \ldots, A_{n-1}]$ with $A_j \in \mathbb{R}^{d \times d}$. A square matrix $C_\omega \in \mathbb{R}^{dn \times dn}$ is called $\omega$-$(d \times d)$-block circulant, and it is denoted by $C_\omega = \mathrm{circ}_\omega(A)$, if it holds
$$(C_\omega)_{s,t} = \begin{cases} A_{(s-t) \bmod n}, & \text{if } s \geq t, \\ \omega \, A_{(s-t) \bmod n}, & \text{if } s < t, \end{cases} \qquad s, t = 0, \ldots, n-1.$$
A compact representation for $C_\omega$ is given by
$$C_\omega = \sum_{j=0}^{n-1} \Pi_{n,\omega}^j \otimes A_j.$$
For $\omega = 1$, Definition 3 describes the notion of $(d \times d)$-block circulant matrices, while, as expected, for $d = 1$ it reduces to Definition 2. As in the case $d = 1$, for any fixed $p, q \in \mathbb{N}$ the matrix $C_\omega = \sum_{j=-p}^{q} \Pi_{n,\omega}^j \otimes A_j$ is $\omega$-$(d \times d)$-block circulant.
Proposition 3.
$C_\omega$ can be diagonalized as
$$C_\omega = (D_\omega \otimes I_d)(F_n \otimes I_d) \, L_{n,\omega} \, (F_n^* \otimes I_d)(D_\omega^{-1} \otimes I_d) = (F_{n,\omega} \otimes I_d) \, L_{n,\omega} \, (F_{n,\omega}^{-1} \otimes I_d), \tag{3}$$
where
$$L_{n,\omega} = \operatorname{diag}_{s=0,\ldots,n-1} \left( \sum_{j=0}^{n-1} \omega^{j/n} \exp\left( \frac{2\pi i s j}{n} \right) A_j \right),$$
and $I_d$ is the identity of size d.
Remark 3.
With $\omega = \rho \exp(i\psi)$, the eigenvalues of $C_\omega$ can be expressed as
$$\Lambda(C_\omega) = \left\{ \lambda_k\left( f\left( \frac{2\pi s + \psi}{n} \right) \right); \; k = 1, \ldots, d; \; s = 0, \ldots, n-1 \right\},$$
where $f(\theta) := \sum_{j=0}^{n-1} A_j \, \rho^{j/n} \exp(i j \theta)$ is a $d \times d$-matrix-valued function, called the (spectral) symbol of $C_\omega$, and $\lambda_k(f(\theta))$, $k = 1, \ldots, d$, are its eigenvalue functions.

3. Parallel Solution and Preconditioning of Triangular Toeplitz Linear Systems

In this section, we present a parallel model to perform computations involving $\omega$-circulant matrices and show an application to the preconditioning of triangular Toeplitz linear systems. The procedure, summarized in the following subsection, was introduced in 1987 by Bini, who in Ref. [8] addressed the problem of inverting an $n \times n$ triangular Toeplitz matrix and proposed a parallel algorithm that exploits the properties of $\omega$-circulant matrices.

3.1. Parallel Solution of (Block) Triangular Toeplitz Systems

Since a diagonalization of $\omega$-circulant matrices through the Fourier matrix is available, it can be used to solve a linear system of the form $C_\omega x_\omega = r$, with $C_\omega$ as in Definition 2, $x_\omega, r \in \mathbb{R}^n$ and $\omega \neq 0$, in an efficient way. More precisely, supposing that $C_\omega$ is invertible and deriving from Proposition 2 the spectral decomposition
$$C_\omega^{-1} = D_\omega^{-1} F_n^* L_{n,\omega}^{-1} F_n D_\omega,$$
$x_\omega$ can be calculated as the matrix-vector product $C_\omega^{-1} r$ via the following algorithm:
  • Compute $y = F_n D_\omega r$ via the FFT, with sequential cost $O(n \log(n))$;
  • Compute $z = L_{n,\omega}^{-1} y$, with sequential cost $O(n)$;
  • Compute $x_\omega = D_\omega^{-1} F_n^* z$ via the FFT, with sequential cost $O(n \log(n))$.
This algorithm has a total sequential cost of O ( n log ( n ) ) , but, since all the steps can be fully parallelized, it can be lowered to O ( log ( n ) ) in a parallel model of computation.
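The three steps above can be realized with a handful of FFT calls. The sketch below is our own illustration (the helper names are assumptions): it exploits the similarity $\Delta \, C_\omega \, \Delta^{-1} = \mathrm{circ}(\alpha_k \omega^{k/n})$ with $\Delta = \operatorname{diag}(\omega^{s/n})$, so that the eigenvalues and both transforms reduce to `numpy.fft` calls:

```python
import numpy as np

def circ_omega(alpha, omega):
    """Dense omega-circulant matrix from its first column (for checking only)."""
    n = len(alpha)
    return np.array([[alpha[(s - t) % n] if s >= t else omega * alpha[(s - t) % n]
                      for t in range(n)] for s in range(n)])

def solve_circ_omega(alpha, omega, r):
    """Solve C_omega x = r in O(n log n): scale by Delta = diag(omega^{s/n}),
    solve the similar circulant via FFT, and scale back."""
    n = len(alpha)
    d = omega ** (np.arange(n) / n)
    lam = np.fft.fft(alpha * d)                  # eigenvalues of the similar circulant
    return np.fft.ifft(np.fft.fft(d * r) / lam) / d

rng = np.random.default_rng(0)
n, omega = 64, 0.5
alpha = rng.standard_normal(n)
r = rng.standard_normal(n)
x = solve_circ_omega(alpha, omega, r)
assert np.allclose(circ_omega(alpha, omega) @ x, r)
```

All three stages act entrywise or via FFTs, which is what makes the algorithm fully parallelizable.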
Now, the key idea to compute the solution of a (lower) triangular Toeplitz linear system $Ax = r$, with $x, r \in \mathbb{R}^n$ and
$$A = \begin{pmatrix} a_0 & & & \\ a_1 & a_0 & & \\ \vdots & \ddots & \ddots & \\ a_{n-1} & \cdots & a_1 & a_0 \end{pmatrix}, \qquad a_0, a_1, \ldots, a_{n-1} \in \mathbb{C},$$
consists in completing A to the following $\omega$-circulant matrix
$$C_\omega = \begin{pmatrix} a_0 & \omega a_{n-1} & \cdots & \omega a_1 \\ a_1 & a_0 & \ddots & \vdots \\ \vdots & \ddots & \ddots & \omega a_{n-1} \\ a_{n-1} & \cdots & a_1 & a_0 \end{pmatrix} = A + \omega \begin{pmatrix} 0 & a_{n-1} & \cdots & a_1 \\ & 0 & \ddots & \vdots \\ & & \ddots & a_{n-1} \\ & & & 0 \end{pmatrix}$$
and observing that
$$\lim_{\omega \to 0} C_\omega^{-1} = A^{-1}, \qquad \lim_{\omega \to 0} x_\omega = x, \qquad \text{where } x_\omega = C_\omega^{-1} r.$$
Hence, applying the algorithm described above, the solution of interest x can be approximated by computing $x_\omega$, in theory up to an arbitrarily small error as $\omega \to 0$. Nevertheless, in practice an arbitrarily small error may not be attainable, since the condition number of the transform $F_n D_\omega$ corresponds to the condition number of $D_\omega$, which tends to $\max\{|\omega|, |\omega|^{-1}\}$ as $n \to \infty$ and thus tends to infinity as $\omega \to 0$, causing stability issues. See the next subsection for a discussion in the setting of preconditioning.
The cost of this computation amounts to O ( n log ( n ) ) in a sequential implementation and to O ( log ( n ) ) in a parallel implementation. We refer the reader to Ref. [8] for details and two strategies that allow for the retrieval of the exact solution.
We stress that, although for the sake of clarity the above discussion takes into account the lower triangular case only, the technique can be easily extended to block triangular and upper triangular Toeplitz matrices.
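The embedding idea can be illustrated in a few lines. The sketch below is ours (the coefficients $a_k = 1/(k+1)$ are a hypothetical, well-conditioned example): it completes a lower triangular Toeplitz matrix to $C_\omega$, solves via FFTs, and reports the relative error against a dense solve for decreasing $\omega$:

```python
import numpy as np

def omega_embedded_solve(a, r, omega):
    """Approximate solve of the lower-triangular Toeplitz system A x = r by
    completing A to the omega-circulant C_omega and inverting C_omega via FFTs."""
    n = len(a)
    d = omega ** (np.arange(n) / n)              # diagonal scaling diag(omega^{s/n})
    lam = np.fft.fft(a * d)                      # eigenvalues of the similar circulant
    return (np.fft.ifft(np.fft.fft(d * r) / lam) / d).real

n = 256
a = 1.0 / (1.0 + np.arange(n))                   # hypothetical example coefficients
A = np.array([[a[i - j] if i >= j else 0.0 for j in range(n)] for i in range(n)])
r = np.ones(n)
x_exact = np.linalg.solve(A, r)
errs = {w: np.linalg.norm(omega_embedded_solve(a, r, w) - x_exact) / np.linalg.norm(x_exact)
        for w in (1e-2, 1e-6, 1e-14)}
print(errs)    # relative errors for decreasing omega
```

The approximation error decreases with $\omega$ until the growing condition number of the scaling takes over, in line with the stability discussion above.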

3.2. Preconditioning for (Block) Triangular Toeplitz Systems

The algorithm described in the previous subsection can be applied in a very straightforward way to the preconditioning of linear systems with (block) triangular Toeplitz structure, allowing for an extremely efficient computation of the solution. On the other hand, special attention must be paid to the issue of stability, which strongly relies on the parameter ω . We consider an example to present the approach, but we emphasize that the same basic ideas pertain to a variety of situations.
The example we use is taken from Ref. [21]. Here, Liu and Wu seek the numerical solution of the linear wave equation model
$$\begin{cases} y_{tt} - \Delta y = f, & \text{in } \Omega \times (0, T), \\ y = 0, & \text{on } \partial\Omega \times (0, T), \\ y(\cdot, 0) = \psi_0, \; y_t(\cdot, 0) = \psi_1, & \text{in } \Omega, \end{cases} \tag{4}$$
where $\Omega \subset \mathbb{R}^d$ with $d > 1$ is a bounded, open domain with Lipschitz boundary, $\psi_0, \psi_1$ are the initial conditions, and f is a given source term. The equations are discretized all-at-once in time by means of implicit finite-difference schemes (see Ref. [21] for details). In the two-dimensional case with $\Omega$ a rectangle, the attained matrix shows the following block lower triangular Toeplitz structure
$$K = \frac{1}{\tau^2} \begin{pmatrix} L & & & & \\ -2I_m & L & & & \\ L & -2I_m & L & & \\ & \ddots & \ddots & \ddots & \\ & & L & -2I_m & L \end{pmatrix}, \qquad L = I_m - \frac{\tau^2}{2} \Delta_h,$$
in which
  • m is the number of grid nodes in the spatial mesh and $h = \frac{1}{m+1}$ is the spatial step size;
  • $\tau$ is the time step size, obtained as $T/n$, where n is the number of time steps;
  • $I_m$ is the identity matrix of size m;
  • $\Delta_h$ is the matrix obtained by discretizing the Laplacian operator $\Delta$ in (4) with the central finite difference method.
The associated linear system encompasses all the time steps at once, and its solution corresponds to the solution to (4) at each time step simultaneously. In other words, if $y_j \in \mathbb{R}^m$ denotes the solution to (4) at the j-th time step, the system has the form
$$K y = b, \qquad y = \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix}, \quad b = \begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix}, \quad b_j \in \mathbb{R}^m, \; j = 1, \ldots, n,$$
and therefore the vectors $y_j$, $j = 1, \ldots, n$, are computed in parallel as the solution to the all-at-once linear system with the coefficient matrix K.
To solve this linear system, the authors adopt a preconditioned GMRES method and construct a class of block $\omega$-circulant preconditioners as follows. Defining $B_1$ and $B_2$ as
$$B_1 = \begin{pmatrix} 1 & & & & \\ 0 & 1 & & & \\ 1 & 0 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & 1 & 0 & 1 \end{pmatrix}_{n \times n}, \qquad B_2 = \begin{pmatrix} 0 & & & & \\ 1 & 0 & & & \\ & 1 & 0 & & \\ & & \ddots & \ddots & \\ & & & 1 & 0 \end{pmatrix}_{n \times n},$$
as described in Ref. [22], K can be expressed in the form
$$K = \frac{1}{\tau^2} \left( B_1 \otimes L - 2 B_2 \otimes I_m \right).$$
Then, given the matrix $\Pi_{n,\omega}$ as in Section 2 and
$$\Psi_{n,\omega} = \begin{pmatrix} 1 & & & \omega & \\ 0 & 1 & & & \omega \\ 1 & 0 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & 1 & 0 & 1 \end{pmatrix} = I_n + \Pi_{n,\omega}^2,$$
the generalized preconditioner is defined for $\omega \in (0, 1]$ as
$$P_\omega = \frac{1}{\tau^2} \left( \Psi_{n,\omega} \otimes L - 2 \Pi_{n,\omega} \otimes I_m \right).$$
By exploiting the simultaneous diagonalization of the $\omega$-circulant matrices $\Pi_{n,\omega}$ and $\Psi_{n,\omega}$ and the properties of the Kronecker product, $P_\omega$ can be written as
$$P_\omega = \frac{1}{\tau^2} \left( \Gamma_\omega^{-1} F_n^* \otimes I_m \right) \left( D_1 \otimes L - 2 D_2 \otimes I_m \right) \left( F_n \Gamma_\omega \otimes I_m \right),$$
where $F_n$ is the Fourier matrix, $\Gamma_\omega = \operatorname{diag}\left( 1, \omega^{1/n}, \ldots, \omega^{(n-1)/n} \right)$, $D_1 = \operatorname{diag}\left( \sqrt{n} \, F_n \Gamma_\omega \Psi_{n,\omega}^{(1)} \right)$, and $D_2 = \operatorname{diag}\left( \sqrt{n} \, F_n \Gamma_\omega \Pi_{n,\omega}^{(1)} \right)$, with $\Psi_{n,\omega}^{(1)}, \Pi_{n,\omega}^{(1)}$ being the first columns of $\Psi_{n,\omega}$ and $\Pi_{n,\omega}$, respectively.
Therefore, given a vector r, the proposed algorithm for computing $z = P_\omega^{-1} r$ is the following:
  • Compute $s_1 = (F_n \Gamma_\omega \otimes I_m) \, r$ via the FFT;
  • Compute $s_2^{(k)} = \tau^2 \left( D_1(k,k) \, L - 2 D_2(k,k) \, I_m \right)^{-1} s_1^{(k)}$, $k = 1, \ldots, n$, where $s_1^{(k)}, s_2^{(k)}$ denote the k-th block of dimension m of $s_1, s_2$;
  • Compute $z = (\Gamma_\omega^{-1} F_n^* \otimes I_m) \, s_2$ via the FFT.
In its essence, this algorithm is identical to the one proposed by Bini, although we must observe that Ref. [8] is not referenced in Ref. [21].
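The three steps amount to one FFT along the time direction, n independent $m \times m$ solves, and one inverse FFT. The sketch below is our own illustration under simplifying assumptions (a 1D Laplacian stands in for $\Delta_h$, small sizes, and the helper name `apply_P_inv` is hypothetical); it is verified against a dense construction of $P_\omega$:

```python
import numpy as np

def apply_P_inv(r, Lmat, tau, omega, n):
    """Apply the inverse of P_omega = (Psi ⊗ L - 2 Pi ⊗ I_m)/tau^2 via one FFT
    in time, n independent m-by-m solves, and one inverse FFT."""
    m = Lmat.shape[0]
    R = r.reshape(n, m)                                    # time index first
    gamma = omega ** (np.arange(n) / n)                    # diagonal of Gamma_omega
    pi_hat = omega ** (1.0 / n) * np.exp(-2j * np.pi * np.arange(n) / n)
    psi_hat = 1.0 + pi_hat ** 2                            # eigenvalues of Psi = I + Pi^2
    R_hat = np.fft.fft(gamma[:, None] * R, axis=0)
    Z_hat = np.stack([np.linalg.solve(psi_hat[k] * Lmat - 2.0 * pi_hat[k] * np.eye(m),
                                      tau ** 2 * R_hat[k]) for k in range(n)])
    return (np.fft.ifft(Z_hat, axis=0) / gamma[:, None]).reshape(-1)

# check against a dense construction of P_omega
n, m, tau, omega = 8, 6, 0.05, 0.5
h = 1.0 / (m + 1)
Delta = (np.diag(np.full(m, -2.0)) + np.diag(np.ones(m - 1), 1)
         + np.diag(np.ones(m - 1), -1)) / h ** 2           # 1D Laplacian (assumption)
Lmat = np.eye(m) - tau ** 2 / 2 * Delta
Pi = np.diag(np.ones(n - 1), -1)
Pi[0, n - 1] = omega                                       # Pi_{n,omega}
P = (np.kron(np.eye(n) + Pi @ Pi, Lmat) - 2 * np.kron(Pi, np.eye(m))) / tau ** 2
r = np.arange(1.0, n * m + 1)
z = apply_P_inv(r, Lmat, tau, omega, n)
assert np.allclose(P @ z, r)
```

The n block solves are mutually independent, which is exactly the source of parallelism in the time direction.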
We conclude this section by discussing the matter of stability when dealing with $\omega$-circulants. While theoretically the aforementioned algorithm may be applied whenever $\omega \neq 0$, in practical implementations exceedingly small values of the parameter cause stability issues, which should be taken into account when stating the global precision of a solution method. In fact, as mentioned in the previous section, the condition number of the transform $F_n \Gamma_\omega$ is equal to that of $\Gamma_\omega$, which tends to $\max\{|\omega|, |\omega|^{-1}\}$ as $n \to \infty$ and therefore tends to infinity as $\omega \to 0$.
We tested the stability of the procedure described above by fixing a vector x, computing $b = P_\omega x$, and retrieving the starting vector as $P_\omega^{-1} b$ with the spectral decomposition of $P_\omega$, using the algorithm described above and in Ref. [21]. We denote the retrieved vector by $\tilde{x}$.
As a first example, we set $m = 2^8$, $n = 2^8$, $x = \left[ \sin\left( \frac{j\pi}{nm+3} \right) \right]_{j=1,\ldots,nm}$, and show in Table 1 the relative errors between x and $\tilde{x}$ as the parameter $\omega$ decreases. Clearly, as $\omega$ gets closer to zero, the error grows and $\tilde{x}$ ceases to be an accurate approximation of x, especially in its right part, in accordance with the accumulation of errors in the solution of a lower triangular system. In particular, for very small values of $\omega$ the error becomes even worse than that obtained with a direct, inherently sequential algorithm for triangular systems ($\omega = 0$). Figure 1 shows the plots of the two vectors for $\omega = 10^{-16}$.
Our second example, where x is this time chosen at random, yields similar results, collected in Table 2. We observe that this matter is not addressed in Ref. [21].
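The qualitative behavior of this experiment can be reproduced at small scale with a scalar $\omega$-circulant transform. The sketch below is ours (the data are not those of Tables 1 and 2): it pushes a vector through the transform and back, and the relative error deteriorates as $\omega \to 0$:

```python
import numpy as np

n = 256
rng = np.random.default_rng(1)
alpha = rng.standard_normal(n)
x = np.sin(np.pi * np.arange(1, n + 1) / (n + 3))    # a smooth test vector (ours)
errs = {}
for omega in (1e-1, 1e-4, 1e-8, 1e-16):
    d = omega ** (np.arange(n) / n)                   # Gamma_omega diagonal
    lam = np.fft.fft(alpha * d)                       # eigenvalues of the similar circulant
    b = np.fft.ifft(np.fft.fft(d * x) * lam) / d      # b = C_omega x via FFT
    xt = np.fft.ifft(np.fft.fft(d * b) / lam) / d     # retrieve x through the same transform
    errs[omega] = np.linalg.norm(xt - x) / np.linalg.norm(x)
print(errs)    # relative retrieval errors, growing as omega -> 0
```

The loss of accuracy tracks the condition number $\max\{|\omega|, |\omega|^{-1}\}$ of the scaling, in agreement with the discussion above.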
Because of these instability issues, the possibility of using a preconditioned MINRES method has recently emerged, where the preconditioner is taken from the $\tau$ algebra, whose eigenvector matrix can be chosen as a real, symmetric, orthogonal, and very stable transform (see Ref. [22] and the references therein). This type of method, called all-at-once, has attracted remarkable attention in the last ten years for its potential use in parallel-in-time methods for evolutionary PDEs; see Refs. [23,24,25,26] and their references. Further works on the matter, also for fractional differential equations, can be found in Refs. [27,28].
We now consider the second application of ω -circulants in the context of subdivision schemes for generating curves and surfaces.

4. Basic Ideas on Scalar Subdivision Schemes

A subdivision scheme is an iterative method that generates curves and surfaces based on successive refinements of a polygon or a mesh. The rules that determine said refinements can be formulated by linear, non-linear, or geometrical operators [29,30,31,32,33]. The case of linear rules is related to refinable functions in Wavelets Theory [34]. In this setting, the vertices of the polygon or mesh are the coefficients in a particular basis of the so-called subdivision curve or subdivision surface. In what follows we focus on the case of the subdivision curve.
Linear uniform subdivision schemes are based on the notion of refinable function, i.e., a function $\varphi(t)$ satisfying a relation of the form
$$\varphi(t) = \sum_{j \in \mathbb{Z}} a_j \, \varphi(2t - j), \qquad t \in \mathbb{R}, \; a_j \in \mathbb{R}, \tag{5}$$
and such that, given an initial set of control points $P^0 = \{P_j^0 \in \mathbb{R}^m, \; j = 0, \ldots, n-1\}$ and the periodisation $P_j^0 = P_{j+n}^0$, $j \in \mathbb{Z}$, the closed curve
$$c(t) = \sum_{j \in \mathbb{Z}} P_j^0 \, \varphi(t - j), \qquad t \in \mathbb{R},$$
can also be written as
$$c(t) = \sum_{j \in \mathbb{Z}} P_j^k \, \varphi\left( 2^k t - j \right), \qquad t \in \mathbb{R}, \tag{6}$$
with
$$P_i^{k+1} = \sum_{j \in \mathbb{Z}} a_{i-2j} P_j^k, \qquad k \in \mathbb{N}, \; i = 0, \ldots, 2^{k+1} n - 1, \tag{7}$$
where for the set $P^k$ of new points the periodisation modulo $2^k n$ holds, i.e., $P_j^k = P_{j+2^k n}^k$, $j \in \mathbb{Z}$.
The relation in (7) is known as the subdivision rule and defines a subdivision scheme, while the coefficients $a = \{a_j, \; j \in \mathbb{Z}\}$ in (5) form the so-called subdivision mask. Then, a subdivision scheme is an iterative method where a curve is generated by consecutive refinements of an initial polygon (see Figure 2). Regarding the convergence of this scheme, it has been proven that the sequence of polygons with vertices in $P^k$ converges uniformly to a smooth curve c(t), although in practice a few iterations are sufficient to produce a polygon that appears smooth to the human eye.
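One refinement step of rule (7) for a closed polygon takes only a few lines. The sketch below is our own illustration (the helper name `subdivide` is an assumption); it uses the cubic B-spline mask $\frac18(1, 4, 6, 4, 1)$ as an example of an odd-symmetric scheme:

```python
import numpy as np

def subdivide(P, mask, center):
    """One step of rule (7) on a closed polygon: Q_i = sum_j a_{i-2j} P_j,
    with periodic indexing; `center` is the position of a_0 inside `mask`."""
    n = len(P)
    Q = np.zeros((2 * n, P.shape[1]))
    for i in range(2 * n):
        for j in range(i // 2 - len(mask), i // 2 + len(mask) + 1):
            k = i - 2 * j + center
            if 0 <= k < len(mask):
                Q[i] += mask[k] * P[j % n]
    return Q

# cubic B-spline mask a = (1/8)(1, 4, 6, 4, 1), odd-symmetric
mask = np.array([1, 4, 6, 4, 1]) / 8.0
P = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])   # initial square polygon
for _ in range(4):
    P = subdivide(P, mask, center=2)
# each step doubles the points; since both mask parities sum to 1, the centroid is preserved
assert P.shape == (64, 2)
assert np.allclose(P.mean(axis=0), [0.5, 0.5])
```

A handful of iterations already yields a polygon that looks smooth, in line with the convergence remark above.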
The sampling of the curve at integer parameters is derived from (5) and reads as
$$c(s) = \sum_{j \in \mathbb{Z}} P_j^0 \, \varphi(s - j), \qquad s \in \mathbb{Z}. \tag{8}$$
The set of values $\beta^0 = \{\beta_j^0 = \varphi(j), \; j \in \mathbb{Z}\}$ is called the first limit stencil, and it can be obtained from a linear system of equations stemming from (5) (see Ref. [35]) or by performing the spectral analysis of the local subdivision matrix (see Refs. [33,36,37]). In a similar way, the second limit stencil and the higher-order stencils $\beta^{k-1} = \{\beta_j^{k-1} = \varphi^{(k-1)}(j), \; j \in \mathbb{Z}\}$, $k \in \mathbb{N}$, $k \geq 1$, can be determined. Hence, by (6) the k-th derivative of the curve at the dyadic parameters $t \in \mathbb{Z}/2^m$ is
$$c^{(k)}\left( \frac{s}{2^m} \right) = \sum_{j \in \mathbb{Z}} \beta_{s-j}^k P_j^m. \tag{9}$$
The mask and the stencils have compact support; therefore, the function defined in (5) has compact support as well. Even though we use indices ranging over $\mathbb{Z}$, we are dealing with finite masks and stencils, and correspondingly only a finite number of elements are non-zero.
Given the control points $P^0$, let us denote by $V^0 = \{V_j^0 \in \mathbb{R}^m, \; j = 0, \ldots, n-1\}$ the points obtained by evaluating the subdivision curve with (8), i.e., $V_j^0 = c(j)$, $j = 0, \ldots, n-1$ (see Figure 2d). This problem is modeled by the linear operator $M_n$ such that
$$M_n P^0 = \begin{pmatrix} \beta_0 & \beta_{-1} & \beta_{-2} & \cdots & \beta_2 & \beta_1 \\ \beta_1 & \beta_0 & \beta_{-1} & \cdots & \beta_3 & \beta_2 \\ \vdots & & \ddots & \ddots & & \vdots \\ \beta_{-1} & \beta_{-2} & \beta_{-3} & \cdots & \beta_1 & \beta_0 \end{pmatrix} \begin{pmatrix} P_0^0 \\ P_1^0 \\ \vdots \\ P_{n-1}^0 \end{pmatrix} = \begin{pmatrix} V_0^0 \\ V_1^0 \\ \vdots \\ V_{n-1}^0 \end{pmatrix} = \begin{pmatrix} c(0) \\ c(1) \\ \vdots \\ c(n-1) \end{pmatrix}. \tag{10}$$
$M_n$ is referred to as the matrix representing the point interpolation operator for linear subdivision schemes. Note that the structure of $M_n$ is circulant, according to Definition 1, and its first row is the vector
$$\beta_n := \left( \beta_0, \beta_{-1}, \ldots, \beta_{-p}, 0, \ldots, 0, \beta_q, \ldots, \beta_1 \right) \in \mathbb{R}^n.$$
Depending on the symmetry of the stencils, some particular cases may be analysed.
Definition 4.
A subdivision scheme is called:
  • odd-symmetric if $a_j = a_{-j}$;
  • even-symmetric if $a_{1-j} = a_j$ for $j \in \mathbb{N}$.
This definition contains particular cases of the primal [38] and dual [39] forms of subdivision schemes, respectively, and the corresponding limit stencils show the same type of symmetries [33]. As a consequence, the odd-order limit stencils inherit the odd or even symmetry:
$$\beta_j^{2d} = \begin{cases} \beta_{-j}^{2d}, & \text{for odd-symmetric schemes}, \\ \beta_{1-j}^{2d}, & \text{for even-symmetric schemes}, \end{cases} \qquad d \in \mathbb{N}. \tag{11}$$
Then, for the even-order limit stencils we get
$$\beta_j^{2d+1} = \begin{cases} -\beta_{-j}^{2d+1}, & \text{for odd-symmetric schemes}, \\ -\beta_{1-j}^{2d+1}, & \text{for even-symmetric schemes}, \end{cases} \qquad d \in \mathbb{N}. \tag{12}$$
For $d \geq 1$ it holds $\sum_{j \in \mathbb{Z}} \beta_j^d = 0$, while $\sum_{j \in \mathbb{Z}} \beta_j^0 = 1$.
Now, let us consider the interpolation of n points $V^0$ with a subdivision curve. It is natural to think of those points as a sampling of the curve at integer parameters, $V_s^0 = c(s)$, $s = 0, \ldots, n-1$, as in (8). By doing so, we obtain the inverse problem with respect to (10) (see Ref. [40]).
In this setting, an interpolation problem is said to be singular if the operator $M_n$ is singular for the selected stencil, so that the problem is ill-posed. In Section 5, we show that in some cases the singularity depends on the value of n.
To solve a non-singular interpolation problem, where $M_n$ has a circulant structure, one can take advantage of the diagonalization through the Fourier matrix recalled in Section 2. However, in the singular case a different strategy is needed. In the literature, in the context of curve and surface schemes, those cases are treated by a fitting model [41] or by a fairness functional [42], while introducing more points as degrees of freedom. Another possible approach consists in a regularization in the Tikhonov sense [40,43]. Moreover, it is possible to consider a non-singular perturbation of the matrix $M_n$: this is the strategy that we explore in Section 5.2, using a computationally convenient $\omega$-circulant matrix.

The Block-Circulant Case for Hermite Interpolation

Considering the case of interpolating points and associated derivatives up to the $(d-1)$-th order, with tangent interpolation as the first instance, Equation (9) provides some insight. Let $U^{(d)} = [V_0^0, V_0^1, \ldots, V_0^{d-1}, V_1^0, V_1^1, \ldots, V_1^{d-1}, \ldots, V_{n-1}^0, V_{n-1}^1, \ldots, V_{n-1}^{d-1}]$ be the data that we wish to interpolate, and suppose that there exists a sequence $t_j$, $j = 0, \ldots, n-1$, such that the subdivision curve c(t) interpolates the data $U^{(d)}$, i.e.,
$$c^{(k)}(t_j) = V_j^k, \qquad k = 0, \ldots, d-1, \; j = 0, \ldots, n-1. \tag{13}$$
If the curve c(t) is defined by n control points as in (10), we may get a solution to the point interpolation problem (10) that contradicts the values of the higher-order derivatives. Thus, in order to interpolate all the information in $U^{(d)}$ with an equal number of variables and equations, we need to use $nd$ control points $P^0 = \{P_j^0 \in \mathbb{R}^m, \; j = 0, \ldots, nd-1\}$ with the periodisation $P_j^0 = P_{j+nd}^0$, $j \in \mathbb{Z}$. A natural choice is to set the parameters in (13) as $t_j = dj$, $j = 0, \ldots, n-1$. Then, from (9) we get the $nd$ equations
$$c^{(k)}(dj) = V_j^k = \sum_{s \in \mathbb{Z}} \beta_{dj-s}^k P_s^0, \qquad j = 0, \ldots, n-1, \; k = 0, \ldots, d-1,$$
which can be represented in matrix form as
$$\mathcal{M}_n P^0 = \begin{pmatrix} B_0 & B_{-1} & \cdots & B_2 & B_1 \\ B_1 & B_0 & B_{-1} & \cdots & B_2 \\ \vdots & & \ddots & \ddots & \vdots \\ B_{-1} & B_{-2} & \cdots & B_1 & B_0 \end{pmatrix} \begin{pmatrix} P_0^0 \\ P_1^0 \\ \vdots \\ P_{nd-1}^0 \end{pmatrix} = U^{(d)}, \tag{14}$$
where the $d \times d$ blocks of the matrix $\mathcal{M}_n \in \mathbb{R}^{nd \times nd}$ satisfy $B_j = B_{j-n}$ for $j = 1, \ldots, n$ and
$$B_j = \begin{pmatrix} \beta_{dj}^0 & \beta_{dj-1}^0 & \beta_{dj-2}^0 & \cdots & \beta_{d(j-1)+1}^0 \\ \beta_{dj}^1 & \beta_{dj-1}^1 & \beta_{dj-2}^1 & \cdots & \beta_{d(j-1)+1}^1 \\ \vdots & & & & \vdots \\ \beta_{dj}^{d-1} & \beta_{dj-1}^{d-1} & \beta_{dj-2}^{d-1} & \cdots & \beta_{d(j-1)+1}^{d-1} \end{pmatrix}. \tag{15}$$
The structure of the matrix in (14) is the block adaptation of the scalar version (10), i.e., it is a block circulant matrix, as described in Definition 3. Indeed, when $d = 1$, $\mathcal{M}_n \equiv M_n$ and $U^{(1)} \equiv V^0$. Therefore, the Hermite problem represents the block extension of the point interpolation problem in the matrix sense, and we can exploit the corresponding linear algebra tools to solve both inverse problems.

5. Interpolation Model with Scalar Subdivision Schemes

Given the problem $M_n P^0 = V^0$ in (10), let us apply the tools available for circulant matrices to solve it. Indeed, by Remark 1 we deduce that the spectrum of $M_n$ is
$$\Lambda(M_n) = \left\{ b\left( \frac{2\pi j}{n} \right), \; j = 0, \ldots, n-1 \right\} \tag{16}$$
and, for stencils with compact support in $[-p, q]$, the symbol can be written as
$$b(\theta) := \sum_{k=-p}^{q} \beta_k \exp(i k \theta),$$
independently of n. Thus, the singularity of $M_n$ depends on the existence of roots of $b(\theta)$ in the grid $\frac{2\pi\mathbb{N}}{n} \cap [0, 2\pi]$.
For odd-symmetric schemes, $\beta_n := (\beta_0, \beta_1, \ldots, \beta_p, 0, \ldots, 0, \beta_p, \ldots, \beta_1) \in \mathbb{R}^n$ and thus
$$b(\theta) = \sum_{k=-p}^{p} \beta_k \exp(i k \theta) = \beta_0 + 2 \sum_{k=1}^{p} \beta_k \cos(k\theta). \tag{17}$$
Conversely, for even-symmetric schemes $\beta_n := (\beta_0, \beta_{-1}, \ldots, \beta_{-p+1}, 0, \ldots, 0, \beta_p, \ldots, \beta_1) \in \mathbb{R}^n$, hence
$$b(\theta) = \sum_{k=-p+1}^{p} \beta_k \exp(i k \theta) = \sum_{k=1}^{p} \beta_k \left( \exp(i(1-k)\theta) + \exp(i k \theta) \right) = \exp\left( \frac{i\theta}{2} \right) \sum_{k=1}^{p} \beta_k \left( \exp\left( i\left( k - \tfrac12 \right)\theta \right) + \exp\left( -i\left( k - \tfrac12 \right)\theta \right) \right) = 2 \exp\left( \frac{i\theta}{2} \right) \sum_{k=1}^{p} \beta_k \cos\left( \frac{(2k-1)\theta}{2} \right). \tag{18}$$
Either way, the first limit stencil satisfies $b(0) = 1$. Note that for odd-symmetric schemes it holds
$$b(2\pi - \theta) = \beta_0 + 2 \sum_{k=1}^{p} \beta_k \cos(k(2\pi - \theta)) = \beta_0 + 2 \sum_{k=1}^{p} \beta_k \cos(k\theta) = b(\theta),$$
while for even-symmetric schemes
$$b(2\pi - \theta) = 2 \exp\left( \frac{i(2\pi - \theta)}{2} \right) \sum_{k=1}^{p} \beta_k \cos\left( \frac{(2k-1)(2\pi - \theta)}{2} \right) = 2 \exp\left( -\frac{i\theta}{2} \right) \sum_{k=1}^{p} \beta_k \cos\left( \frac{(2k-1)\theta}{2} \right) = \exp(-i\theta) \, b(\theta).$$
Therefore, the study of the symbol can be restricted to the interval $[0, \pi]$ instead of $[0, 2\pi]$. In particular, for both (17) and (18), $b(\theta) = 0$ implies $b(2\pi - \theta) = 0$.
From (18), for even-symmetric schemes we find $b(\pi) = 0$, independently of n and $\beta$. Since $\pi$ belongs to the grid $\frac{2\pi\mathbb{N}}{n} \cap [0, 2\pi]$ for even values of n, from (16) we obtain the following result.
Proposition 4.
For any even-symmetric subdivision scheme, if the number n of interpolated points is even, then the interpolation matrix $M_n$ is singular.
On the other hand, in the context of odd-symmetric schemes the symbol may also vanish on the grid $\frac{2\pi\mathbb{N}}{n} \cap [0, 2\pi]$. As an example, consider the primal family of J-spline schemes [38], in the particular case that generates $C^3$ subdivision curves:
$$P_{2j}^{k+1} = \frac14 P_{j-1}^k + \frac12 P_j^k + \frac14 P_{j+1}^k, \qquad P_{2j+1}^{k+1} = \frac{1}{16} P_{j-1}^k + \frac{7}{16} P_j^k + \frac{7}{16} P_{j+1}^k + \frac{1}{16} P_{j+2}^k, \qquad k \in \mathbb{N}, \; j \in \mathbb{Z},$$
with first limit stencil
$$\beta = \frac{1}{48} \left( 1, 12, 22, 12, 1 \right).$$
The symbol
$$b(\theta) = \frac{11}{24} + \frac12 \cos(\theta) + \frac{1}{24} \cos(2\theta) = \frac{(\cos(\theta) + 1)(\cos(\theta) + 5)}{12}$$
has a root at $\theta = \pi$, which belongs to the grid $\frac{2\pi\mathbb{N}}{n} \cap [0, 2\pi]$ when n is an even number.
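This singularity pattern is easy to verify numerically. The sketch below (our own illustration; `interp_eigs` is a hypothetical helper) samples the J-spline symbol on the grid and confirms that $M_n$ is singular exactly when n is even:

```python
import numpy as np

beta = np.array([22, 12, 1]) / 48.0     # beta_0, beta_1, beta_2 of the J-spline stencil

def interp_eigs(n):
    """Eigenvalues of the circulant M_n via the symbol
    b(theta) = beta_0 + 2*beta_1*cos(theta) + 2*beta_2*cos(2*theta) on the grid 2*pi*j/n."""
    theta = 2 * np.pi * np.arange(n) / n
    return beta[0] + 2 * beta[1] * np.cos(theta) + 2 * beta[2] * np.cos(2 * theta)

# b(pi) = 0, so M_n is singular exactly when pi lies on the grid, i.e. when n is even
assert min(abs(interp_eigs(64))) < 1e-12     # even n: a zero eigenvalue
assert min(abs(interp_eigs(63))) > 1e-4      # odd n: eigenvalues bounded away from zero
```

For odd n the grid misses $\pi$ by at least $\pi/n$, and since $b$ has a double root there, the smallest eigenvalue behaves like $O(1/n^2)$ rather than vanishing.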
In the Hermite interpolation scenario, we can find a singular point-and-tangent interpolation operator even when the point interpolation operator is nonsingular. Let us consider, for instance, the cubic B-spline scheme, whose first and second limit stencils are $\{\frac16, \frac46, \frac16\}$ and $\{-\frac12, 0, \frac12\}$, respectively. Then the point interpolation matrix is the circulant matrix
$$\begin{pmatrix} \frac46 & \frac16 & 0 & \cdots & 0 & \frac16 \\ \frac16 & \frac46 & \frac16 & & & 0 \\ & \ddots & \ddots & \ddots & & \\ \frac16 & 0 & \cdots & 0 & \frac16 & \frac46 \end{pmatrix},$$
which is not singular, while the point-and-tangent interpolation matrix is the block circulant matrix with nonzero blocks
$$B_0 = \begin{pmatrix} \frac46 & \frac16 \\ 0 & \frac12 \end{pmatrix}, \qquad B_1 = \begin{pmatrix} 0 & \frac16 \\ 0 & -\frac12 \end{pmatrix},$$
which is singular with kernel of dimension 1.
As a matter of fact, in the specific case of point and tangent interpolation with odd-symmetric schemes, the following proposition holds.
Proposition 5.
Using an odd-symmetric subdivision scheme and setting $d = 2$, the attained matrix $\mathcal{M}_n$ is singular.
Proof. 
From (3) and (15), for odd-symmetric schemes we have that the $(1,1)$-block of $L_n$ is
$$(L_n)_{1,1} = \sum_{j=0}^{\lceil p/2 \rceil} \begin{pmatrix} \beta_{2j}^0 & \beta_{2j-1}^0 \\ \beta_{2j}^1 & \beta_{2j-1}^1 \end{pmatrix} + \sum_{j=1}^{\lceil p/2 \rceil} \begin{pmatrix} \beta_{-2j}^0 & \beta_{-2j-1}^0 \\ \beta_{-2j}^1 & \beta_{-2j-1}^1 \end{pmatrix} = \begin{pmatrix} \beta_0^0 + 2 \sum_{j=1}^{\lceil p/2 \rceil} \beta_{2j}^0 & \; 2 \sum_{j=1}^{\lceil p/2 \rceil} \beta_{2j-1}^0 \\ 0 & 0 \end{pmatrix}, \tag{20}$$
where we used the symmetry $\beta_{-j}^0 = \beta_j^0$ from (11) together with $\beta_0^1 = 0$ and the antisymmetry $\beta_{-j}^1 = -\beta_j^1$ from (12). The block $(L_n)_{1,1}$ is singular, since it has a null row, and the matrices $L_n$ and $\mathcal{M}_n$ are in turn singular because of (3). □

5.1. A Solution by Shifting Parameters

Propositions 4 and 5 state that, for even-symmetric and odd-symmetric masks respectively, the matrices $M_n$ (with n even) and $\mathcal{M}_n$ are singular, independently of the mask coefficients. In this section, we propose a different interpolation model that avoids this inconvenience. What follows is motivated by Plonka's work [44] on Hermite interpolation with B-spline parametric curves.
Let us consider the dual subdivision schemes again. In Section 4, by interpolating at the parameters $t_j = j$, $j = 0, \ldots, n-1$, we obtained the first limit stencil as
$$V_j^0 = c(j) = \sum_{s \in \mathbb{Z}} \varphi(j - s) P_s^0 = \sum_{s=-p}^{q} \varphi(s) P_{j-s}^0, \qquad \text{with } \operatorname{supp}\varphi = [-p, q].$$
Here we consider the variant
$$V_j^0 = c(j + \sigma) = \sum_{s \in \mathbb{Z}} \varphi(j - s + \sigma) P_s^0 = \sum_{s=-p}^{q-1} \varphi(s + \sigma) P_{j-s}^0$$
with $\sigma \in (0, 1)$. More specifically, given a positive integer $\nu$, we investigate the method for $\sigma$ varying in $\frac{1}{\nu}\mathbb{Z} \cap (0, 1)$. The evaluation of the basic function is done as in Ref. [35].
In the case of symmetric masks, as in (11) and (12), the following results hold.
Proposition 6.
For an odd-symmetric subdivision scheme with mask $a = \{a_{-p}, a_{-p+1}, \ldots, a_p\}$, where $p \in \mathbb{N}$, the stencil obtained from the samples of $\varphi$ at the points of $\left( \frac12 + \mathbb{Z} \right) \cap [-p, p]$ is even-symmetric.
Analogously, for an even-symmetric subdivision scheme with mask $a = \{a_{-p}, a_{-p+1}, \ldots, a_{p+1}\}$, where $p \in \mathbb{N}$, the stencil obtained from the samples of $\varphi$ at the points of $\left( \frac12 + \mathbb{Z} \right) \cap [-p, p+1]$ is odd-symmetric.
Our proposal consists in modifying the interpolation models (10) and (14) to $c^{(k)}\left( \frac12 + dj \right) = V_j^k$. For point interpolation, the symbol changes from (18) to (17), and in this way we resolve the singularity in Proposition 4. Figure 3 portrays an example of an even-symmetric subdivision scheme where, after a $\frac12$-shift, the new matrix is nonsingular. However, when the new stencil leads to a trigonometric polynomial with roots in the grid $\frac{1}{n}\mathbb{Z} \cap [0, 1]$, it is not possible to avoid the singularity of the corresponding matrix. In such situations, one may consider the general approach $V_j^0 = c(j + \sigma)$, with $\sigma \in (0, 1)$, or treat the singularity as discussed in the next subsection.
The shifted interpolation parameters can also be employed as a degree of freedom for the geometry of the interpolating curve (see Figure 4). Nevertheless, the symmetry provided by the subdivision scheme might be lost.
Now let us consider point and tangent interpolation at the parameters $\tfrac{1}{2}+\mathbb{Z}$ for odd-symmetric schemes. The first block (20) in the factorization of $\mathcal{M}_n$ becomes
$$ (L_n)_{1,1} = \sum_{j}\begin{pmatrix} \varphi\!\left(2j+\tfrac{1}{2}\right) & \varphi\!\left(2j-\tfrac{1}{2}\right)\\[2pt] \varphi'\!\left(2j+\tfrac{1}{2}\right) & \varphi'\!\left(2j-\tfrac{1}{2}\right) \end{pmatrix} = \begin{pmatrix} \Sigma_1 & \Sigma_1\\ \Sigma_2 & -\Sigma_2 \end{pmatrix}, $$
where the sum runs over all $j$ such that $2j\pm\tfrac{1}{2}\in\operatorname{supp}\varphi$ and
$$ \Sigma_1 = \sum_{j}\varphi\!\left(2j+\tfrac{1}{2}\right) \qquad\text{and}\qquad \Sigma_2 = \sum_{j}\varphi'\!\left(2j+\tfrac{1}{2}\right); $$
the equal entries in the first row and the opposite entries in the second row follow from the even symmetry of $\varphi$ and the odd symmetry of $\varphi'$.
Unlike the case $\sigma = 0$ treated in Proposition 5, this block is not singular for all stencils. Figure 5b shows an example of an odd-symmetric subdivision scheme in which, after a $\tfrac{1}{2}$-shift, the new matrix is nonsingular, while Figure 5a shows the least-squares solution ($\sigma = 0$), which in this case presents artifacts (loops, to be precise).
In the cases of point interpolation with odd-symmetric schemes and point-tangent interpolation with even-symmetric schemes, we stick with the choice of interpolation at integer parameters.
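These symmetry and singularity statements are easy to check numerically: the interpolation matrix is circulant, so its eigenvalues are the FFT of its first column, i.e., samples of the stencil symbol. The following sketch of ours uses the centered cubic B-spline as basic limit function: the integer-parameter stencil $(\tfrac16, \tfrac23, \tfrac16)$ gives a nonsingular matrix, while the $\tfrac12$-shifted stencil is even-symmetric and its symbol vanishes at $\theta=\pi$:

```python
import numpy as np

def bspline3(x):
    """Centered cubic B-spline, supp = [-2, 2]."""
    x = abs(x)
    if x < 1:
        return 2/3 - x**2 + x**3 / 2
    if x < 2:
        return (2 - x)**3 / 6
    return 0.0

def min_abs_eig(n, sigma):
    """Smallest |eigenvalue| of the n x n circulant interpolation matrix with
    entries phi(j - s + sigma): FFT of the periodized first column."""
    col = np.array([sum(bspline3(j + sigma + k * n) for k in (-1, 0, 1))
                    for j in range(n)])
    return np.min(np.abs(np.fft.fft(col)))

print(min_abs_eig(16, 0.0))  # 1/3: nonsingular at integer parameters
print(min_abs_eig(16, 0.5))  # ~0: singular after the 1/2-shift
```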

5.2. Our Regularizing Strategy

We have already observed that in some cases the matrix obtained by interpolating at integer parameters is singular and that sometimes the singularity cannot be avoided even by a proper shift of the parameters. In those cases, independently of the interpolated points, the inverse of $M_n$ or $\mathcal{M}_n$ is not defined and an alternative solution has to be chosen. A first idea is to use the pseudo-inverse of the matrix, providing a least-squares solution [41]. Another approach is regularization, i.e., the approximation of the ill-posed problem by a family of neighboring well-posed problems [40]. Among the regularization methods for solving an ill-posed problem whose discrete form is the linear system $Ax = b$, we mention the Tikhonov approach, which consists in solving the family of problems
$$ \operatorname{argmin}_x \left\| A x - b \right\|^2 + \lambda \left\| L x \right\|^2, $$
depending on the regularization parameter $\lambda$, which balances the weight given to the residual norm and to the regularization term. The optimal value of $\lambda$ can be chosen by the discrepancy principle, generalized cross-validation, or the L-curve method [43]. The operator $L$, which can be taken as the identity or a differential operator (for instance, the first or second derivative operator), serves to alter the least-squares solution so as to enforce special features of the regularized approximations [40,43]. As a necessary condition, it is required that $\mathrm{Ker}(A)\cap\mathrm{Ker}(L) = \{0\}$. For each $\lambda$, this problem is equivalent to solving the normal equations $(A^* A + \lambda L^* L)x = A^* b$.
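As an illustration, the normal-equation form of the Tikhonov problem can be sketched as follows (a generic dense-matrix version of ours, not the structured solver used in the paper):

```python
import numpy as np

def tikhonov(A, b, lam, L=None):
    """Solve argmin_x ||A x - b||^2 + lam ||L x||^2 through the normal
    equations (A* A + lam L* L) x = A* b; L defaults to the identity."""
    n = A.shape[1]
    L = np.eye(n) if L is None else L
    lhs = A.conj().T @ A + lam * L.conj().T @ L
    return np.linalg.solve(lhs, A.conj().T @ b)

# rank-deficient 3x3 example: the regularized solution exists for any lam > 0
A = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 1.0, 0.0]])
x = tikhonov(A, np.array([1.0, 2.0, 2.0]), 1e-8)
print(x)  # close to the minimum-norm least-squares solution [1, 2, 0]
```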
Since the quality of the solution obtained either by least squares or by Tikhonov regularization is not acceptable in many cases, in the following we study a new regularized problem depending on a parameter $\omega$, in which a (possibly block) $\omega$-circulant matrix replaces the original circulant coefficient matrix.
First we consider the basic interpolation case where $M_{n,\omega}$ is the $\omega$-circulant counterpart of $M_n$, with $\omega = \exp(i\psi)$. By Remark 3, the eigenvalues of the new matrix $M_{n,\omega}$ are given by the standard uniform sampling of the function $b\left(\theta + \frac{\psi}{n}\right)$. In this manner, the spectrum of $M_n$ is shifted. We observe that $M_n$ and $M_{n,\omega}$ are both singular only if $b(\theta)$ has at least two roots at distance $\frac{\psi}{n}$, one of them lying in the grid $\frac{2\pi}{n}\mathbb{Z}\cap[0,2\pi)$.
When the latter condition is not satisfied, the initial system $M_n P^0 = V^0$ may be singular while its perturbation $M_{n,\omega} P^0 = V^0$ has a unique, complex solution, even though the original problem is posed in the real domain. However, for $\psi$ small enough in modulus, the imaginary part of the solution becomes negligible and the perturbed system stays close to the original one.
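In the scalar case, both the assembly and the solution of the perturbed system reduce to FFTs, since an $\omega$-circulant matrix with $|\omega| = 1$ factors as $D^{-1} C D$, where $D = \mathrm{diag}(\omega^{j/n})_{j=0}^{n-1}$ and $C$ is the circulant matrix whose first column is $D$ times the first column of the $\omega$-circulant. A sketch of ours based on this standard factorization:

```python
import numpy as np

def solve_omega_circulant(a, b, psi):
    """Solve A x = b, where A is the omega-circulant matrix with first
    column a and omega = exp(i psi): A = D^{-1} C D with D = diag(omega^{j/n})
    and C circulant with first column D a, so eig(A) = fft(D a)."""
    n = len(a)
    d = np.exp(1j * psi * np.arange(n) / n)   # diagonal of D
    lam = np.fft.fft(d * a)                   # eigenvalues of A
    return np.fft.ifft(np.fft.fft(d * b) / lam) / d

# check against a dense omega-circulant matrix
n, psi = 8, 5e-3
rng = np.random.default_rng(1)
a = rng.standard_normal(n)
a[0] += 20.0                                  # keep the matrix safely nonsingular
b = rng.standard_normal(n)
omega = np.exp(1j * psi)
A = np.array([[a[(i - j) % n] * (omega if i < j else 1.0)
               for j in range(n)] for i in range(n)])
x = solve_omega_circulant(a, b, psi)
print(np.max(np.abs(A @ x - b)))              # residual at machine-precision level
```

Note that, as discussed above, for small $\psi$ the imaginary part of the computed solution is negligible.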
Secondly, we consider the interpolation of points and associated tangent vectors. The block (20) in the corresponding $\omega$-$(2\times 2)$-circulant matrix $\mathcal{M}_{n,\omega}$ is
$$ (L_{n,\omega})_{1,1} = \sum_{j=0}^{\lfloor p/2\rfloor} B_j\,\omega^{j/n} + \sum_{j=1}^{\lfloor p/2\rfloor} B_{-j}\,\omega^{(n-j)/n} = \begin{pmatrix} \beta_0^0 + \displaystyle\sum_{j=1}^{\lfloor p/2\rfloor}\beta_{2j}^0\left(\omega^{j/n}+\omega^{(n-j)/n}\right) & \beta_1^0\left(1+\omega^{(n-1)/n}\right) + \displaystyle\sum_{j=2}^{\lfloor p/2\rfloor}\beta_{2j-1}^0\left(\omega^{j/n}+\omega^{(n-j)/n}\right)\\[6pt] \displaystyle\sum_{j=1}^{\lfloor p/2\rfloor}\beta_{2j}^1\left(\omega^{j/n}-\omega^{(n-j)/n}\right) & \beta_1^1\left(1-\omega^{(n-1)/n}\right) + \displaystyle\sum_{j=2}^{\lfloor p/2\rfloor}\beta_{2j-1}^1\left(\omega^{j/n}-\omega^{(n-j)/n}\right) \end{pmatrix}. $$
Then, $\mathcal{M}_{n,\omega}$ is not necessarily singular, even if $\mathcal{M}_n$ is, and we can solve $\mathcal{M}_{n,\omega} P^0 = U^{(2)}$ rather than (14). In this case, the structure of the symbol differs from (17) and (18).
Remark 4.
For the particular case of cubic B-spline curves, the point and tangent interpolation problem is solved in Ref. [45] by considering only unitary tangents $V_j^1$, $j = 0, \dots, n-1$. Their proposal is a nonlinear iterative method whose convergence has not been proven.

6. Numerical Tests

In the present section we perform a few numerical tests in order to assess the quality of the solution obtained by adopting the ω -circulant regularization, in the setting of a singular interpolation operator. In all the following examples we employ the J-spline family [38].
We denote by $\hat{P}^0$ and $\hat{P}^0_\omega$ the control point vectors obtained as $M_n^\dagger U^{(1)}$ (resp. $\mathcal{M}_n^\dagger U^{(2)}$ in the Hermite case) and $M_{n,\omega}^\dagger U^{(1)}$ (resp. $\mathcal{M}_{n,\omega}^\dagger U^{(2)}$ in the Hermite case), respectively. The pseudo-inverses $M_{n,\omega}^\dagger$ and $\mathcal{M}_{n,\omega}^\dagger$ are given by
$$ \left(F_{n,\omega}\otimes I_d\right) L_{n,\omega}^{\dagger} \left(F_{n,\omega}^{-1}\otimes I_d\right), \qquad\text{with}\qquad \left(L_{n,\omega}^{\dagger}\right)_k = \begin{cases} \dfrac{1}{(L_{n,\omega})_k}, & \text{if } (L_{n,\omega})_k \neq 0,\\[4pt] 0, & \text{if } (L_{n,\omega})_k = 0, \end{cases} $$
with $d = 1$ for $M_{n,\omega}^\dagger$ and $d = 2$ for $\mathcal{M}_{n,\omega}^\dagger$, and where $(L_{n,\omega})_k$ is the $k$-th diagonal element in (3). For $\omega = 1$ we immediately recover $M_n^\dagger$ and $\mathcal{M}_n^\dagger$.
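For $d = 1$, the formula above amounts to a few FFTs; since $|\omega| = 1$ the transform is unitary, so the expression is the Moore-Penrose pseudo-inverse. A sketch of ours:

```python
import numpy as np

def omega_circulant_pinv_apply(a, b, psi, tol=1e-12):
    """Apply the pseudo-inverse of the omega-circulant matrix with first
    column a and omega = exp(i psi): invert the nonzero eigenvalues
    (the FFT of the scaled first column) and keep the zero ones as zero."""
    n = len(a)
    d = np.exp(1j * psi * np.arange(n) / n)
    lam = np.fft.fft(d * a)
    lam_pinv = np.zeros_like(lam)
    nz = np.abs(lam) > tol
    lam_pinv[nz] = 1.0 / lam[nz]
    return np.fft.ifft(lam_pinv * np.fft.fft(d * b)) / d

# singular circulant example (psi = 0): matches numpy's dense pseudo-inverse
a = np.array([1.0, -1.0, 0.0, 0.0])     # symbol vanishes at theta = 0
b = np.array([0.5, 1.0, -2.0, 0.25])
A = np.array([[a[(i - j) % 4] for j in range(4)] for i in range(4)])
x = omega_circulant_pinv_apply(a, b, 0.0)
print(np.max(np.abs(x - np.linalg.pinv(A) @ b)))
```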
We compare the subdivision curves generated from the control point vectors $\hat{P}^0$ and $\hat{P}^0_\omega$, firstly in terms of interpolation, which is the main goal. With this purpose, consider the residuals $M_n\hat{P}^0 - U^{(1)}$, $\mathcal{M}_n\hat{P}^0 - U^{(2)}$, $M_n\hat{P}^0_\omega - U^{(1)}$ and $\mathcal{M}_n\hat{P}^0_\omega - U^{(2)}$, which are vectors of points. Notice that the last two residuals are independent of the systems of equations $M_{n,\omega}\hat{P}^0_\omega = U^{(1)}$, $\mathcal{M}_{n,\omega}\hat{P}^0_\omega = U^{(2)}$, but they are related to the interpolation problems (10) and (14) with the regularized solution.
By reasoning in local terms, we choose the norms
$$ \left\| M_n \hat{P}^0_\omega - U^{(1)} \right\|_{\infty}, \qquad \left\| \mathcal{M}_n \hat{P}^0_\omega - U^{(2)} \right\|_{\infty}, $$
with $\|A\|_\infty := \sup_j \|A_j\|_2$, where $A_j$ is the $j$-th row of the matrix $A$ and $\|\cdot\|_2$ is the vector 2-norm. This way we measure, through the maximum distance, how far the interpolated information in $U^{(d)}$, $d = 1, 2$, is from the approximations $M_n\hat{P}^0_\omega$ and $\mathcal{M}_n\hat{P}^0_\omega$.
On the other hand, reasoning in global terms, we consider the relative errors in the Frobenius matrix norm
$$ \frac{\left\| M_n \hat{P}^0_\omega - U^{(1)} \right\|_{\mathrm{F}}}{\left\| U^{(1)} \right\|_{\mathrm{F}}}, \qquad \frac{\left\| \mathcal{M}_n \hat{P}^0_\omega - U^{(2)} \right\|_{\mathrm{F}}}{\left\| U^{(2)} \right\|_{\mathrm{F}}}. $$
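Both error measures are straightforward to compute; the following helper of ours treats the rows of the residual as points:

```python
import numpy as np

def residual_norms(M, P_hat, U):
    """Return the row-wise sup norm max_j ||row_j||_2 and the relative
    Frobenius error ||M P_hat - U||_F / ||U||_F of the residual."""
    R = M @ P_hat - U
    sup = np.max(np.linalg.norm(R, axis=1))
    rel = np.linalg.norm(R) / np.linalg.norm(U)  # Frobenius norm by default
    return sup, rel

# toy example: identity operator, zero control points
U = np.array([[3.0, 4.0], [0.0, 1.0]])
sup, rel = residual_norms(np.eye(2), np.zeros((2, 2)), U)
print(sup, rel)  # 5.0 1.0
```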
In order to evaluate the quality of each curve, proper fairness measures could be used [46,47], penalizing artifacts such as loops (see Figure 5a).
In each figure, the curve obtained with the least-squares solution $\hat{P}^0$ is portrayed with solid green lines, while the one obtained with the regularized solution $\hat{P}^0_\omega$ is represented by dashed red lines.
The solution computed with the $\omega$-circulant matrix interpolates the given points even though a perturbed, nonsingular matrix is used. Indeed, in Figure 6, where we set $\omega = \exp(5\times 10^{-3}\,i)$, we see only one line because the two solutions $\hat{P}^0$ and $\hat{P}^0_\omega$ visually coincide. We obtain similar results for values of $\omega$ close to $\exp(5\times 10^{-3}\,i)$. Hence, the quality of the interpolating solution is not affected by the perturbation.
When a primal scheme with an odd-symmetric mask as in (19) is applied with a singular interpolation operator, the data points are not interpolated by the least-squares solution (see Figure 7). With the regularized solution, the spectrum of $M_n$ is perturbed and the null eigenvalue is shifted, as shown in Figure 8. As a result, the condition number improves, but the approximation does not. In Figure 7 the results are shown for $\omega = \exp(5\times 10^{-2}\,i)$. The residual norms are
$$ \left\| M_n \hat{P}^0 - U^{(1)} \right\|_{\infty} = 0.3, \qquad \left\| M_n \hat{P}^0_\omega - U^{(1)} \right\|_{\infty} = 0.6, $$
$$ \frac{\left\| M_n \hat{P}^0 - U^{(1)} \right\|_{\mathrm{F}}}{\left\| U^{(1)} \right\|_{\mathrm{F}}} = 5.5\times 10^{-2}, \qquad\text{and}\qquad \frac{\left\| M_n \hat{P}^0_\omega - U^{(1)} \right\|_{\mathrm{F}}}{\left\| U^{(1)} \right\|_{\mathrm{F}}} = 6.0\times 10^{-2}. $$
It is worth noting that the regularized curve is closer to the interpolation points, although this is not reflected in the norms.
In the block setting for point and tangent interpolation (see Figure 9) the situation is similar. Even though the spectrum of $\mathcal{M}_{n,\omega}$ is perturbed with respect to that of $\mathcal{M}_n$ and, unlike the latter, does not contain a null eigenvalue (see Figure 10), the solution is not improved; refer to Figure 9, in which $\omega = \exp(5\times 10^{-1}\,i)$. We observe that the points are interpolated, but the tangent interpolation is less accurate. In this case the residual norms are
$$ \left\| \mathcal{M}_n \hat{P}^0 - U^{(2)} \right\|_{\infty} = 2.22\times 10^{-15}, \qquad \left\| \mathcal{M}_n \hat{P}^0_\omega - U^{(2)} \right\|_{\infty} = 1.15\times 10^{-1}, $$
$$ \frac{\left\| \mathcal{M}_n \hat{P}^0 - U^{(2)} \right\|_{\mathrm{F}}}{\left\| U^{(2)} \right\|_{\mathrm{F}}} = 2.89\times 10^{-16}, \qquad\text{and}\qquad \frac{\left\| \mathcal{M}_n \hat{P}^0_\omega - U^{(2)} \right\|_{\mathrm{F}}}{\left\| U^{(2)} \right\|_{\mathrm{F}}} = 1.09\times 10^{-2}. $$
However, if we consider a known solution $P^0$, we can generate $U^{(1)} = M_n P^0$ from the columns of $M_n$ and obtain, for a suitable parameter $\omega$, a solution that interpolates the points as accurately as the least-squares solution (see Figure 11, in which $\omega = \exp(5\times 10^{-3}\,i)$), even though the spectra of $M_n$ and $M_{n,\omega}$ differ (see Figure 12).

7. Conclusions

In the present work we explored two distinct applications of ω -circulant matrices.
After a brief introduction to this matrix algebra and its fundamental properties, we first described its role in the parallel solution and preconditioning of (block) triangular linear systems of Toeplitz type. We recalled an efficient classical algorithm, which relies on the well-known diagonalization of ω-circulants through the FFT, and studied the stability issue, which requires special attention since the condition number of the fast transform depends strongly on the parameter ω.
Then we analyzed subdivision schemes, for which a few questions remain open.
Indeed, the interpolation of points and tangent vectors with scalar subdivision schemes, viewed as an inverse problem, may lead to a system of equations with a singular matrix. As a first possible solution, we proposed changing the common approach of interpolating at integer parameters, depending on the symmetry of the subdivision mask. With this solution we avoid the presence of a singular matrix in the model, and we still benefit from the Fourier factorization of a nonsingular circulant or block-circulant matrix.
In some cases it is not possible to avoid the singularity while keeping the symmetry of the stencil. In such situations we considered the least-squares solution and the related ω-circulant regularization for the interpolation problem. In more detail, owing to the singular character of the matrix in the interpolation setting, the difficulties are overcome by perturbing the spectrum, that is, by taking into consideration the ω-circulant counterpart of the matrix.
As observed in the numerical experiments, the ω-regularization approach is not by itself sufficient for solving the problem. When the original matrix is singular, its ω-perturbation is well conditioned, but the interpolation condition is no longer represented exactly, and this affects the approximation quality. However, the numerical solution stemming from the ω-circulant linear system interpolates the data points at least in some cases where the solution is known.
A further open problem is the study of the solution $\mathrm{Sol}(\exp(i\psi))$: in this setting, we would like to explore the existence of an asymptotic expansion of the form
$$ \mathrm{Sol}(\exp(i\psi)) = \mathrm{Sol} + c_1 \psi + c_2 \psi^2 + c_3 \psi^3 + \cdots, $$
when a small parameter ψ is considered.
An expansion of the latter type would open the door to simple and cheap extrapolation procedures for the computation of very precise solutions. Preliminary numerical experiments have been performed and are encouraging. We are convinced that this research line is worth investigating in future steps.
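To fix ideas, if the expansion holds, a Richardson-type combination of two regularized solutions already cancels the first-order term. A purely illustrative sketch on a synthetic scalar model (not the paper's experiments):

```python
def extrapolate(sol, psi):
    """Assuming Sol(exp(i psi)) = Sol + c1*psi + c2*psi**2 + ..., combine
    the solutions at psi and psi/2 to cancel the first-order term."""
    return 2.0 * sol(psi / 2) - sol(psi)  # error O(psi**2)

# synthetic model with known limit Sol = 10
model = lambda psi: 10.0 + 3.0 * psi + psi**2
print(abs(model(0.1) - 10.0))              # ~0.31: first-order error
print(abs(extrapolate(model, 0.1) - 10.0)) # ~0.005: second-order error
```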

Author Contributions

Conceptualization, all the authors equally; methodology, all the authors equally; software, R.D.F.; validation, R.D.F. and R.L.S.; formal analysis, all the authors equally; investigation, all the authors equally; resources, all the authors equally; writing—original draft preparation, all the authors equally; writing—review and editing, all the authors equally; visualization, all the authors equally; supervision, S.S.-C.; project administration, all the authors equally; funding acquisition, S.S.-C. All authors have read and agreed to the published version of the manuscript.

Funding

The work of the three authors is partly supported by the Italian Agency INdAM-GNCS. The work of Stefano Serra-Capizzano is funded by the European High-Performance Computing Joint Undertaking (JU) under grant agreement No. 955701. The JU receives support from the European Union’s Horizon 2020 research and innovation programme and from Belgium, France, Germany, and Switzerland. Furthermore, Stefano Serra-Capizzano is grateful for the support of the Laboratory of Theory, Economics and Systems, Department of Computer Science, Athens University of Economics and Business.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data are obtained using our software and are available upon request.

Acknowledgments

The three authors are members of the INdAM research group GNCS.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Garoni, C.; Serra-Capizzano, S. Generalized Locally Toeplitz Sequences: Theory and Applications, Vol. I; Springer: Cham, Switzerland, 2017.
  2. Ng, M.K. Iterative Methods for Toeplitz Systems. Numerical Mathematics and Scientific Computation; Oxford University Press: New York, NY, USA, 2004.
  3. Aricò, A.; Serra-Capizzano, S.; Tasche, M. Fast and numerically stable algorithms for discrete Hartley transforms and applications to preconditioning. Commun. Inf. Syst. 2005, 5, 21–68.
  4. Hansen, P.C.; Nagy, J.G.; O’Leary, D.P. Deblurring images. Matrices, spectra, and filtering. In Fundamentals of Algorithms; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2006; Volume 3.
  5. Kailath, T.; Olshevsky, V. Displacement structure approach to discrete-trigonometric-transform based preconditioners of G. Strang type and of T. Chan type. Toeplitz matrices: Structures, algorithms and applications (Cortona, 1996). Calcolo 1998, 33, 191–208.
  6. Serra-Capizzano, S. A Korovkin-type theory for finite Toeplitz operators via matrix algebras. Numer. Math. 1999, 82, 117–142.
  7. Davis, P. Circulant Matrices; John Wiley and Sons: Hoboken, NJ, USA, 1979.
  8. Bini, D. Matrix structures in parallel matrix computations. Calcolo 1988, 25, 37–51.
  9. Chan, R.H.F.; Jin, X.Q. An introduction to iterative Toeplitz solvers. In Fundamentals of Algorithms; Society for Industrial and Applied Mathematics (SIAM): Philadelphia, PA, USA, 2007; Volume 5.
  10. Chan, R.H.; Ng, M.K. Conjugate gradient methods for Toeplitz systems. SIAM Rev. 1996, 38, 427–482.
  11. Chan, T.F.; Elman, H.C. Fourier analysis of iterative methods for elliptic problems. SIAM Rev. 1989, 31, 20–49.
  12. Huckle, T. A note on skewcirculant preconditioners for elliptic problems. Numer. Algorithms 1992, 2, 279–286.
  13. Huckle, T. Circulant and skewcirculant matrices for solving Toeplitz matrix problems. Iterative methods in numerical linear algebra (Copper Mountain, CO, 1990). SIAM J. Matrix Anal. Appl. 1992, 13, 767–777.
  14. Serra-Capizzano, S. The GLT class as a generalized Fourier analysis and applications. Linear Algebra Appl. 2006, 419, 180–233.
  15. Van Loan, C.F. Computational frameworks for the fast Fourier transform. In Frontiers in Applied Mathematics; Society for Industrial and Applied Mathematics (SIAM): Philadelphia, PA, USA, 1992; Volume 10.
  16. Aricò, A.; Donatelli, M.; Serra-Capizzano, S. V-cycle optimal convergence for certain (multilevel) structured linear systems. SIAM J. Matrix Anal. Appl. 2004, 26, 186–214.
  17. Bertaccini, D.; Ng, M.K. Block ω-circulant preconditioners for the systems of differential equations. Calcolo 2003, 40, 71–90.
  18. Serra-Capizzano, S.; Tablino-Possio, C. Multigrid methods for multilevel circulant matrices. SIAM J. Sci. Comput. 2004, 26, 55–85.
  19. Bini, D. Parallel solutions of certain Toeplitz linear systems. SIAM J. Comput. 1984, 13, 268–276.
  20. Cline, R.E.; Plemmons, R.J.; Worm, G. Generalized inverses of certain Toeplitz matrices. Linear Algebra Appl. 1974, 8, 25–33.
  21. Liu, J.; Wu, S.L. A fast block α-circulant preconditioner for all-at-once systems from wave equations. SIAM J. Matrix Anal. Appl. 2020, 41, 1912–1943.
  22. Hon, S.; Serra-Capizzano, S. A block Toeplitz preconditioner for all-at-once systems from linear wave equations. Electron. Trans. Numer. Anal. 2023, 58, 177–195.
  23. Danieli, F.; Wathen, A.J. All-at-once solution of linear wave equations. Numer. Linear Algebra Appl. 2021, 28, 16.
  24. Gander, M.J.; Halpern, L.; Rannou, J.; Ryan, J. A direct time parallel solver by diagonalization for the wave equation. SIAM J. Sci. Comput. 2019, 41, A220–A245.
  25. Gander, M.J.; Wu, S.L. A diagonalization-based parareal algorithm for dissipative and wave propagation problems. SIAM J. Numer. Anal. 2020, 58, 2981–3009.
  26. Soszyńska, M.; Richter, T. Adaptive time-step control for a monolithic multirate scheme coupling the heat and wave equation. BIT 2021, 61, 1367–1396.
  27. Bertaccini, D.; Durastante, F. Limited memory block preconditioners for fast solution of fractional partial differential equations. J. Sci. Comput. 2018, 77, 950–970.
  28. Jiang, Y.; Liu, J. Fast parallel-in-time quasi-boundary value methods for backward heat conduction problems. Appl. Numer. Math. 2023, 184, 325–339.
  29. Andersson, L.E.; Stewart, N.F. Introduction to the Mathematics of Subdivision Surfaces; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2010.
  30. Dyn, N. Linear and Nonlinear Subdivision Schemes in Geometric Modeling; School of Mathematical Sciences, Tel Aviv University: Tel Aviv, Israel, 2008.
  31. Dyn, N.; Levin, D. Subdivision schemes in geometric modelling. Acta Numer. 2002, 11, 73–144.
  32. Dyn, N.; Levin, D.; Gregory, J.A. A 4-point interpolatory subdivision scheme for curve design. Comput. Aided Geom. Des. 1987, 4, 257–268.
  33. Sabin, M. Analysis and Design of Univariate Subdivision Schemes; Springer: Berlin/Heidelberg, Germany, 2010.
  34. Chui, C.; de Villiers, J. Wavelet Subdivision Methods: Gems for Rendering Curves and Surfaces; CRC Press: Boca Raton, FL, USA, 2010.
  35. Schaefer, S.; Warren, J. Exact evaluation of limits and tangents for non-polynomial subdivision schemes. Comput. Aided Geom. Des. 2008, 25, 607–620.
  36. Daubechies, I.; Guskov, I.; Sweldens, W. Commutation for irregular subdivision. Constr. Approx. 2001, 17, 479–514.
  37. Dyn, N.; Gregory, J.A.; Levin, D. Analysis of uniform binary subdivision schemes for curve design. Constr. Approx. 1991, 7, 127–147.
  38. Rossignac, J.; Schaefer, S. J-splines. Comput. Aided Des. 2008, 40, 1024–1032.
  39. Dyn, N.; Floater, M.S.; Hormann, K. A C2 four-point subdivision scheme with fourth order accuracy and its extensions. In Mathematical Methods for Curves and Surfaces: Tromsø 2004, Modern Methods in Mathematics; Nashboro Press: Brentwood, TN, USA, 2005; pp. 145–156.
  40. Engl, H.W.; Hanke, M.; Neubauer, A. Regularization of Inverse Problems; Springer: New York, NY, USA, 1996.
  41. Hoppe, H.; DeRose, T.; Duchamp, T.; Halstead, M.; Jin, H.; McDonald, J.; Schweitzer, J.; Stuetzle, W. Piecewise smooth surface reconstruction. In Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques, Orlando, FL, USA, 24–29 July 1994; pp. 295–302.
  42. Halstead, M.A.; Kass, M.; DeRose, T. Efficient, fair interpolation using Catmull-Clark surfaces. In Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques, Anaheim, CA, USA, 2–6 August 1993; pp. 35–44.
  43. Hansen, P.C. Rank-Deficient and Discrete Ill-Posed Problems: Numerical Aspects of Linear Inversion; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1999.
  44. Plonka, G. An efficient algorithm for periodic Hermite spline interpolation with shifted nodes. Numer. Algorithms 1993, 5, 51–62.
  45. Okaniwa, S.; Nasri, A.; Lin, H.; Abbas, A.; Kineri, Y.; Maekawa, T. Uniform B-spline curve interpolation with prescribed tangent and curvature vectors. IEEE Trans. Vis. Comput. Graph. 2012, 18, 1474–1487.
  46. Albrecht, G. Invariante Gütekriterien im Kurvendesign–Einige neuere Entwicklungen. In Effiziente Methoden der Geometrischen Modellierung und der Wissenschaftlichen Visualisierung; Springer: Berlin/Heidelberg, Germany, 1999; pp. 134–148.
  47. Veltkamp, R.C.; Wesselink, W. Modeling 3D curves of minimal energy. Comput. Graph. Forum 1995, 14, 97–110.
Figure 1. $x = \left(\sin\frac{j\pi}{nm} + 3\right)_{j=1,\dots,nm}$ and $\tilde{x}$ for $\omega = 10^{-16}$.
Figure 2. Iterations of a non-interpolatory subdivision scheme. (a) Control points $P^0$ (blue balls). (b) First refinement of the polygon with vertices $P^1$. (c) Second refinement with vertices $P^2$. (d) Fourth refinement and interpolated points $V^0$ in the limit (red squares).
Figure 3. Point interpolation with a dual subdivision scheme [39] considering the shifted parameters.
Figure 4. Interpolation with a quintic uniform B-spline at different parameter values.
Figure 5. Interpolating points and tangent directions with a cubic B-spline curve by shifting the parameters.
Figure 6. Point interpolation with a quintic B-spline curve (belonging to the J-spline family). Green solid line: least-squares solution. Red dashed line: regularized solution.
Figure 7. Point interpolation with the J-spline curve (19) with a singular interpolation operator. Green solid line: least-squares solution. Red dashed line: regularized solution.
Figure 8. Eigenvalues (real part) of $M_n$ and $M_{n,\omega}$ in Figure 7.
Figure 9. Point and tangent vector interpolation with a quintic B-spline curve (belonging to the J-spline family). Green solid line: least-squares solution. Red dashed line: regularized solution.
Figure 10. Eigenvalues (real part) of $\mathcal{M}_n$ and $\mathcal{M}_{n,\omega}$ in Figure 9.
Figure 11. Point interpolation with the J-spline curve (19) with a singular interpolation operator for a known solution. Green solid line: least-squares solution. Red dashed line: regularized solution.
Figure 12. Eigenvalues (real part) of $M_n$ and $M_{n,\omega}$ in Figure 11.
Table 1. The relative error between $x = \left(\sin\frac{j\pi}{nm} + 3\right)_{j=1,\dots,nm}$ and $\tilde{x}$ as $\omega$ decreases.

$\omega$ | $\|x-\tilde{x}\| / \|x\|$
$10^{-2}$ | $1.81\times 10^{-14}$
$10^{-4}$ | $4.93\times 10^{-13}$
$10^{-6}$ | $1.52\times 10^{-11}$
$10^{-8}$ | $1.28\times 10^{-9}$
$10^{-10}$ | $1.15\times 10^{-7}$
$10^{-12}$ | $7.47\times 10^{-6}$
$10^{-14}$ | $4.76\times 10^{-4}$
$10^{-16}$ | $2.20\times 10^{-2}$
$10^{-18}$ | $1.23\times 10$
Table 2. The relative error between a randomly chosen $x$ and $\tilde{x}$ as $\omega$ decreases.

$\omega$ | $\|x-\tilde{x}\| / \|x\|$
$10^{-2}$ | $4.06\times 10^{-14}$
$10^{-4}$ | $1.84\times 10^{-12}$
$10^{-6}$ | $6.04\times 10^{-11}$
$10^{-8}$ | $5.80\times 10^{-9}$
$10^{-10}$ | $8.41\times 10^{-7}$
$10^{-12}$ | $4.96\times 10^{-5}$
$10^{-14}$ | $6.31\times 10^{-3}$
$10^{-16}$ | $2.18\times 10^{-1}$
$10^{-18}$ | $6.13\times 10$