ω-Circulant Matrices: A Selection of Modern Applications from Preconditioning of Approximated PDEs to Subdivision Schemes

It is well known that ω-circulant matrices with ω ≠ 0 can be simultaneously diagonalized by a transform matrix, which can be factored as the product of a diagonal matrix, depending on ω, and of the unitary matrix F_n associated with the Fast Fourier Transform. Hence, all the sets of ω-circulants form algebras whose computational power, in terms of complexity, is the same as that of the classical circulants with ω = 1. However, stability is a delicate issue, since the condition number of the transform is equal to that of the diagonal part, tending to max{|ω|, |ω|^{-1}}. For ω = 0, the set of related matrices is still an algebra, namely the algebra of lower triangular Toeplitz matrices, but these matrices do not admit a common transform, since most of them (all except the multiples of the identity) are non-diagonalizable. In the present work, we review two modern applications, ranging from parallel computing in the preconditioning of PDE approximations to algorithms for subdivision schemes, and we emphasize the role of such algebras. For both problems, a few numerical tests are conducted and critically discussed, and the related conclusions are drawn.


Introduction
When dealing with structured matrices of Toeplitz type [1,2] not related to fast trigonometric transforms, the problems of computing the solution of large linear systems, the matrix-vector product, or the eigenvalues are greatly accelerated by using approximation matrices belonging to algebras associated with trigonometric transforms [3-6]. Among them, a very classical choice is the algebra of ω-circulant matrices with ω ≠ 0 (see the seminal book [7] and references therein). In this work, after briefly introducing these matrices and reviewing a few related applications, we focus on new, challenging problems, such as parallel computing in the preconditioning of approximated partial differential equations (PDEs) and algorithms for subdivision schemes, highlighting the benefits and drawbacks of employing ω-circulant matrices.
The paper is organized in the following manner. In Section 2 we introduce the basic notions on circulant and ω-circulant matrices. In Section 3, after recalling a technique for the parallel computation of the matrix-vector product and of the matrix inversion concerning ω-circulant matrices, we present a general procedure (introduced by Bini [8]) to precondition (block) triangular Toeplitz linear systems. Then, we move on to scalar subdivision schemes, introducing them in Section 4 and presenting the interpolation model as a direct problem with its associated inverse problem in Section 5. Section 6 is devoted to selected numerical experiments and the related critical discussion. To conclude, in Section 7 we draw conclusions and discuss a few open problems, with special attention to the use of structured matrices in subdivision schemes.

Some Remarks on Circulant and ω-Circulant Matrices
In this introductory section, we lay the groundwork for the main part of this paper by recalling the basic definitions and properties of both circulant and ω-circulant matrices. More information can be found in Ref. [7]. We recall that circulant and ω-circulant matrices have played a major role in the last four decades in the fast solution of structured linear systems, mainly of Toeplitz type, but also stemming from approximated PDEs. Classical references are Refs. [2,9-14], where preconditioned Krylov methods, local Fourier analysis, and its GLT generalizations are treated with attention to the circulant structure and to the related fast Fourier transform (FFT); see Ref. [15]. More specific references for the numerical treatment of PDEs via ω-circulant preconditioning, directly by multigrid solvers, or by a combination of them can be found in Refs. [10,16-18].

Definition 1. Consider α = [α_0, α_1, . . ., α_{n−1}] with α_j ∈ R. A square matrix C ∈ R^{n×n} is called circulant, denoted C = circ(α), if

(C)_{s,t} = α_{(s−t) mod n}, ∀s, t = 0, . . ., n − 1.
An equivalent and compact representation of C is given by

C = Σ_{j=0}^{n−1} α_j Π_n^j,

where Π_n = circ([0, 1, 0, . . ., 0]) is the cyclic shift permutation matrix and

F_n = (1/√n) [exp(2πi st/n)]_{s,t=0,...,n−1}, i² = −1,

is the unitary Fourier matrix.

Proposition 1. C can be diagonalized as

C = F_n^* L_n F_n,

where L_n = √n diag(F_n α) is the diagonal matrix containing the eigenvalues of C.
Proof. By a direct computation of the eigenvalues, it is easy to verify that Π_n factorizes as

Π_n = F_n^* Ω_n F_n,  (1)

with Ω_n = diag_{s=0,...,n−1}(exp(2πi s/n)). The thesis follows by writing C = Σ_{j=0}^{n−1} α_j Π_n^j = F_n^* (Σ_{j=0}^{n−1} α_j Ω_n^j) F_n.
Remark 1. The eigenvalues of C, namely the diagonal elements of L_n, are given by the Fourier transform of the first column of C. Alternatively, they can be seen as the uniform sampling

λ_s(C) = f(2πs/n), s = 0, . . ., n − 1,

where the function f(θ) = Σ_{j=0}^{n−1} α_j exp(ijθ) is the symbol associated with α.

Definition 2. Consider α as in Definition 1 and ω ∈ C. A square matrix C_ω ∈ C^{n×n} is called ω-circulant, denoted C_ω = circ_ω(α), if (C_ω)_{s,t} = α_{s−t} for s ≥ t and (C_ω)_{s,t} = ω α_{n+s−t} for s < t. ω-circulant matrices represent an extension of the notion of circulant matrix (see, e.g., Refs. [6,19,20] for more details).
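Remark 1 can be checked numerically in a few lines; the following is a small sketch (the sample vector α is an arbitrary choice of ours), comparing the spectrum computed by a dense eigensolver with the DFT of the first column. Note that NumPy's FFT uses the conjugate exponent convention, so the two computations return the same multiset of eigenvalues, possibly in a different order.

```python
import numpy as np

n = 6
alpha = np.array([4.0, 1.0, 0.0, 2.0, 0.0, 1.0])   # arbitrary example data

# Build the circulant matrix: (C)_{s,t} = alpha_{(s-t) mod n}
C = alpha[np.subtract.outer(np.arange(n), np.arange(n)) % n]

eig_dense = np.linalg.eigvals(C)     # eigenvalues from a dense solver, O(n^3)
eig_fft = np.fft.fft(alpha)          # eigenvalues as the DFT of the first column, O(n log n)

# Compare as multisets, since the orderings of the two computations differ
assert np.allclose(np.sort_complex(eig_dense), np.sort_complex(eig_fft))
```

The symmetric choice of α above makes C symmetric, hence the eigenvalues are real; the identity holds for any first column.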
A compact representation for C_ω is given by

C_ω = Σ_{j=0}^{n−1} α_j Π_{n,ω}^j,

where Π_{n,ω} is the matrix coinciding with Π_n except for the entry in position (0, n − 1), which equals ω. Note that setting ω = 1 leads back to Definition 1. An analogous spectral decomposition can be derived.

Proposition 2. C_ω can be diagonalized as

C_ω = D_ω^{-1} F_n^* L_{n,ω} F_n D_ω,  (2)

where L_{n,ω} = diag_{s=0,...,n−1}(Σ_{j=0}^{n−1} α_j ω^{j/n} exp(2πi js/n)) is the diagonal matrix containing the eigenvalues of C_ω and D_ω = diag_{s=0,...,n−1}(ω^{s/n}).

Proof. Let us write ω = ρ exp(iψ) with ρ > 0 and consider the principal n-th root ω^{1/n} = ρ^{1/n} exp(iψ/n). A direct check shows that Π_{n,ω} = ω^{1/n} D_ω^{-1} Π_n D_ω; hence, applying (1), the matrix Π_{n,ω} is diagonalized as

Π_{n,ω} = D_ω^{-1} F_n^* (ω^{1/n} Ω_n) F_n D_ω,

and the thesis follows.
The above factorization is equivalent to

C_ω = (F_n D_ω)^{-1} L_{n,ω} (F_n D_ω),  (3)

which, compared with (2), gives a deeper understanding of the role of ω: all ω-circulants, for a fixed ω, are simultaneously diagonalized by the single transform F_n D_ω. As in the circulant case, we stress that, for any fixed p, q ∈ N, the matrix C_ω^p Ĉ_ω^q, with C_ω and Ĉ_ω ω-circulant, is again ω-circulant, since all such matrices share the same eigenvector matrix.

Remark 2.
With ω = ρ exp(iψ), the eigenvalues of C_ω can be expressed as

λ_s(C_ω) = Σ_{j=0}^{n−1} α_j ρ^{j/n} exp(ij(2πs + ψ)/n), s = 0, . . ., n − 1;

in particular, for ρ = 1 they are the uniform sampling of the function f(θ) in Remark 1 over the shifted grid θ_s = (2πs + ψ)/n. To conclude this section, we extend the above definitions and properties to the block case.

Definition 3. Consider A = [A_0, A_1, . . ., A_{n−1}] with A_j ∈ R^{d×d}. A matrix C_ω ∈ R^{nd×nd} is called (d × d)-block ω-circulant if its d × d blocks satisfy (C_ω)_{s,t} = A_{s−t} for s ≥ t and (C_ω)_{s,t} = ω A_{n+s−t} for s < t, s, t = 0, . . ., n − 1.
A compact representation for C_ω is given by

C_ω = Σ_{j=0}^{n−1} Π_{n,ω}^j ⊗ A_j.

For ω = 1, Definition 3 describes the notion of (d × d)-block circulant matrices, while, as expected, for d = 1 it reduces to Definition 2. As in the case of d = 1, for any fixed ω ≠ 0 we have the block diagonalization

C_ω = (F_n D_ω ⊗ I_d)^{-1} L_{n,ω} (F_n D_ω ⊗ I_d),

where L_{n,ω} is the block diagonal matrix with diagonal blocks Σ_{j=0}^{n−1} ω^{j/n} exp(2πi js/n) A_j, s = 0, . . ., n − 1, and I_d is the identity of size d.

Remark 3. For |ω| = 1, i.e., ω = exp(iψ), the eigenvalues (for d = 1) and the diagonal blocks (for d > 1) of an ω-circulant matrix are obtained by sampling the corresponding symbol over the shifted grid θ_s = (2πs + ψ)/n; in other words, the spectrum of the circulant counterpart is sampled on a grid shifted by ψ/n.

Parallel Solution and Preconditioning of Triangular Toeplitz Linear Systems
In this section, we present a parallel model to perform computations involving ω-circulant matrices and show an application to the preconditioning of triangular Toeplitz linear systems. The procedure, summarized in the following subsection, was introduced in 1987 by Bini, who in Ref. [8] addressed the problem of inverting an n × n triangular Toeplitz matrix and proposed a parallel algorithm that exploits the properties of ω-circulant matrices.

Parallel Solution of (Block) Triangular Toeplitz Systems
Since a diagonalization of ω-circulant matrices through the Fourier matrix is available, it can be used to solve a linear system of the form C_ω x_ω = r, with C_ω as in Definition 2, x_ω, r ∈ R^n and ω ≠ 0, in an efficient way. More precisely, supposing that C_ω is invertible and deriving from Proposition 2 the spectral decomposition C_ω^{-1} = D_ω^{-1} F_n^* L_{n,ω}^{-1} F_n D_ω, x_ω can be calculated as the matrix-vector product C_ω^{-1} r via the following algorithm.
1. Compute y = F_n D_ω r via the FFT, with sequential cost O(n log(n));
2. Compute z = L_{n,ω}^{-1} y, with sequential cost O(n);
3. Compute x_ω = D_ω^{-1} F_n^* z via the FFT, with sequential cost O(n log(n)).
This algorithm has a total sequential cost of O(n log(n)), but, since all the steps can be fully parallelized, it can be lowered to O(log(n)) in a parallel model of computation. Now, the key idea to compute the solution of a (lower) triangular Toeplitz linear system Ax = r, with x, r ∈ R^n, consists in replacing A with the ω-circulant matrix C_ω having the same first column, observing that lim_{ω→0} C_ω = A. Hence, applying the algorithm described above, the solution of interest x can be approximated by computing x_ω, in theory up to an arbitrarily small error for ω → 0. Nevertheless, in practice an arbitrarily small error may not be attainable, since the condition number of the transform F_n D_ω corresponds to the condition number of D_ω, which tends to max{|ω|, |ω|^{-1}} as n → ∞ and thus tends to infinity as ω → 0, causing stability issues. See the next subsection for a discussion in the setting of preconditioning.
The cost of this computation amounts to O(n log(n)) in a sequential implementation and to O(log(n)) in a parallel implementation.We refer the reader to Ref. [8] for details and two strategies that allow for the retrieval of the exact solution.
We stress that, although for the sake of clarity the above discussion takes into account the lower triangular case only, the technique can be easily extended to block triangular and upper triangular Toeplitz matrices.
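The idea above can be sketched in a few lines of Python (a minimal sketch of ours, not the implementation of Ref. [8]: the helper name and the geometric first column are arbitrary choices). It exploits the similarity D_ω C_ω D_ω^{-1} = circ(a · d), with d_j = ω^{j/n}, which is equivalent to the factorization used in steps 1-3, together with the FFT-based inversion of a circulant matrix.

```python
import numpy as np

def omega_circulant_solve(a, r, omega):
    """Approximate the solution of the lower triangular Toeplitz system A x = r,
    with first column a, by solving the omega-circulant system C_omega x = r."""
    n = len(a)
    d = (omega ** (1.0 / n)) ** np.arange(n)   # diagonal of D_omega
    # circulant solve via the convolution theorem: circ(c)^{-1} b = ifft(fft(b) / fft(c))
    return np.fft.ifft(np.fft.fft(d * r) / np.fft.fft(a * d)) / d

# Lower triangular Toeplitz matrix A with (hypothetical) first column a
n = 8
a = 2.0 ** -np.arange(n)
idx = np.subtract.outer(np.arange(n), np.arange(n))
A = np.where(idx >= 0, a[idx % n], 0.0)
x_true = np.ones(n)
r = A @ x_true

x = omega_circulant_solve(a, r, 1e-8)
# for moderate omega, the O(|omega|) approximation error dominates the instability
assert np.allclose(x.real, x_true, atol=1e-5)
```

Taking ω much smaller eventually degrades the result, since the roundoff errors are amplified by the growing condition number of D_ω, in agreement with the discussion above.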

Preconditioning for (Block) Triangular Toeplitz Systems
The algorithm described in the previous subsection can be applied in a very straightforward way to the preconditioning of linear systems with (block) triangular Toeplitz structure, allowing for an extremely efficient computation of the solution. On the other hand, special attention must be paid to the issue of stability, which strongly depends on the parameter ω. We consider an example to present the approach, but we emphasize that the same basic ideas pertain to a variety of situations.
The example we use is taken from Ref. [21]. Here, Liu and Wu seek the numerical solution of the linear wave equation model

∂²ψ/∂t² − Δψ = f in Ω × (0, T), ψ = 0 on ∂Ω × (0, T), ψ(·, 0) = ψ_0, ∂ψ/∂t(·, 0) = ψ_1,  (4)

where Ω ⊂ R^d with d > 1 is a bounded and open domain with Lipschitz boundary, ψ_0, ψ_1 are the initial conditions and f is a given source term. The equations are discretized all-at-once in time by means of implicit finite-difference schemes (see Ref. [21] for details). In the two-dimensional case with Ω a rectangle, the attained matrix K shows a block lower triangular Toeplitz structure in which:
• m is the number of grid nodes in the spatial mesh and h = 1/(m + 1) is the spatial step size;
• τ is the time mesh step size, obtained as T/n, where n is the number of time steps;
• I_m is the identity matrix of size m;
• Δ_h is the matrix obtained by discretizing the Laplacian operator Δ in (4) with the central finite difference method.
The associated linear system encompasses all the time steps at once and its solution corresponds to the solution to (4) at each time step simultaneously. In other words, if y_j ∈ R^m denotes the solution to (4) at the j-th time step, the system has the form K[y_1^T, . . ., y_n^T]^T = b, and therefore the vectors y_j, j = 1, . . ., n, are computed in parallel as the solution to the all-at-once linear system with the coefficient matrix K.
To solve this linear system, the authors adopt a preconditioned GMRES method and construct a class of block ω-circulant preconditioners as follows. Defining suitable factors B_1 and B_2 as described in Ref. [22], K can be expressed as a sum of Kronecker products whose temporal factors are (block) lower triangular Toeplitz matrices. Then, given the matrix Π_{n,ω} as in Section 2, the generalized preconditioner P_ω is defined for ω ∈ (0, 1] by replacing each temporal Toeplitz factor with its ω-circulant counterpart. By exploiting the simultaneous diagonalization of the ω-circulant matrices Π_{n,ω} and Γ_{n,ω} and the properties of the Kronecker product, P_ω can be written in a factored form involving the Fourier matrix F_n, the diagonal scaling D_ω, and a block diagonal matrix built from the first columns of Ψ_{n,ω} and Π_{n,ω}. Therefore, given a vector r, the proposed algorithm for computing z = P_ω^{-1} r consists of three steps analogous to those of Section 3.1: a forward FFT-based transform of r, the solution of n independent m × m block systems, and a backward FFT-based transform. In its essence, this procedure is identical to the one proposed by Bini, although we must observe that Ref. [8] is not referenced in Ref. [21].
We conclude this section by discussing the matter of stability when dealing with ω-circulants. While theoretically the aforementioned algorithm may be applied whenever ω ≠ 0, in practical implementations exceedingly small values of the parameter cause stability issues, which should be taken into account when stating the global precision of a solution method. In fact, as mentioned in the previous section, the condition number of the transform F_n Γ_ω is equal to that of Γ_ω, which tends to max{|ω|, |ω|^{-1}} as n → ∞ and therefore tends to infinity as ω → 0.
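The conditioning claim is easy to verify numerically; the following sketch (with parameter values of our choosing) evaluates the 2-norm condition number of D_ω, which equals that of the whole transform since F_n is unitary.

```python
import numpy as np

# cond_2(F_n D_omega) = cond_2(D_omega) = |omega|^{-(n-1)/n} for |omega| < 1,
# which tends to max{|omega|, |omega|^{-1}} as n grows.
def cond_D_omega(omega, n):
    mags = np.abs(omega) ** (np.arange(n) / n)   # moduli of the diagonal of D_omega
    return mags.max() / mags.min()

for omega in (1e-2, 1e-4, 1e-8):
    assert np.isclose(cond_D_omega(omega, 4096), 1.0 / omega, rtol=0.05)
```

For instance, already with ω = 10^{−8} the transform amplifies roundoff by roughly eight orders of magnitude, which explains the error growth reported below.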
We tested the stability of the procedure described above by fixing a vector x, computing b = P_ω x, and retrieving the starting vector as P_ω^{-1} b with the spectral decomposition of P_ω, using the algorithm described above and in Ref. [21]. We denote the retrieved vector by x̃.
As a first example, we set m = 2^8, n = 2^8, x = [sin(jπ/(nm)) + 3]_{j=1,...,nm}, and show the relative errors between x and the retrieved vector x̃ as the parameter ω decreases in Table 1. Clearly, as ω gets closer to zero, the error grows and x̃ ceases to be an accurate approximation of x, especially in the right part, in accordance with the accumulation of errors in the solution of a lower triangular system. In particular, for very small values of ω the error becomes even worse than that obtained when using a direct, inherently sequential algorithm for triangular systems (ω = 0). Figure 1 shows the plots of the two vectors for ω = 10^{−16}. Because of these instability issues, the possibility of using preconditioned MINRES recently emerged, where the preconditioner is taken from the τ algebra, whose eigenvector matrix can be chosen as a real, symmetric, orthogonal, and very stable transform (see Ref. [22] and the references therein). This type of method, called all-at-once, has attracted remarkable attention in the last 10 years for its potential use in parallel-in-time methods for evolutionary PDEs; see Refs. [23-26] and their references. Further works on the matter can be found in Refs. [27,28], also for fractional differential equations.
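A scalar analogue of this round-trip test can be sketched as follows (a hypothetical setup of ours, not the block preconditioner of Ref. [21]: the first column a and the test vector mimic the experiment above). We multiply by C_ω and then solve with the same FFT-based factorization, measuring how the retrieved vector degrades as ω → 0.

```python
import numpy as np

n = 256
a = 2.0 ** -np.arange(n)                            # first column of C_omega (our choice)
x = np.sin(np.pi * np.arange(1, n + 1) / n) + 3.0   # test vector, as in the experiment

def roundtrip_error(omega):
    d = (omega ** (1.0 / n)) ** np.arange(n)
    eig = np.fft.fft(a * d)                          # spectrum of D C_omega D^{-1} = circ(a*d)
    b = np.fft.ifft(eig * np.fft.fft(d * x)) / d     # b = C_omega x
    xt = np.fft.ifft(np.fft.fft(d * b) / eig) / d    # retrieve x from b
    return np.linalg.norm(xt - x) / np.linalg.norm(x)

errs = [roundtrip_error(w) for w in (1e-2, 1e-8, 1e-14)]
assert errs[0] < 1e-10               # mild omega: essentially exact retrieval
assert errs[0] < errs[1] < errs[2]   # accuracy degrades as omega -> 0
```

In exact arithmetic the round trip is the identity; the observed error growth is entirely due to the ill conditioning of the diagonal scaling, matching the behavior reported in Table 1.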
We now consider the second application of ω-circulants in the context of subdivision schemes for generating curves and surfaces.

Basic Ideas on Scalar Subdivision Schemes
A subdivision scheme is an iterative method that generates curves and surfaces based on successive refinements of a polygon or a mesh. The rules that determine said refinements can be formulated by linear, non-linear, or geometrical operators [29-33]. The case of linear rules is related to refinable functions in wavelet theory [34]. In this setting, the vertices of the polygon or mesh are the coefficients in a particular basis of the so-called subdivision curve or subdivision surface. In what follows, we focus on the case of subdivision curves.
Linear uniform subdivision schemes are based on the notion of refinable function, i.e., a function ϕ(t) satisfying a relation of the form

ϕ(t) = Σ_{j∈Z} a_j ϕ(2t − j),  (5)

and such that, given an initial set of control points P^0 = {P^0_j ∈ R^m, j = 0, . . ., n − 1} and the periodisation P^0_j = P^0_{j+n}, ∀j ∈ Z, the closed curve

c(t) = Σ_{j∈Z} P^0_j ϕ(t − j)  (6)

can also be written as

c(t) = Σ_{j∈Z} P^k_j ϕ(2^k t − j), k ≥ 1,

where the set P^k of new points is computed through the rule

P^{k+1}_j = Σ_{l∈Z} a_{j−2l} P^k_l,  (7)

and for P^k the periodisation modulo 2^k n holds, i.e., P^k_j = P^k_{j+2^k n}, j ∈ Z.
The relation in (7) is known as the subdivision rule and defines a subdivision scheme, while the coefficients a = {a_j, j ∈ Z} in (5) form the so-called subdivision mask. Then, a subdivision scheme is an iterative method where a curve is generated by consecutive refinements of an initial polygon (see Figure 2). Regarding the convergence of this scheme, it has been proven that the sequence of polygons with vertices in P^k converges uniformly to a smooth curve c(t), although in practice a few iterations are sufficient to produce a polygon that appears smooth to the human eye. The sampling of the curve at integer parameters is derived from (6) and reads as

c(s) = Σ_{j∈Z} β^0_{s−j} P^0_j, s ∈ Z.  (8)

The set of values β^0 = {β^0_j = ϕ(j), j ∈ Z} is called first limit stencil, and it can be obtained from a linear system of equations stemming from (5) (see Ref. [35]) or by performing the spectral analysis of the local subdivision matrix (see Refs. [33,36,37]). In a similar way, the second limit stencil and the higher-order stencils β^k, k ≥ 1, can be determined. Hence, by (6), the k-th derivative of the curve at integer (and, after refinement, dyadic) parameters reads as

c^{(k)}(s) = Σ_{j∈Z} β^k_{s−j} P^0_j, s ∈ Z.  (9)

The mask and the stencils have compact support and therefore in (5) they define a function that has compact support. Even though we use indexes ranging over Z, we are dealing with finite masks and stencils, and correspondingly we have only a finite number of non-zero elements.
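A refinement step (7) on a closed polygon can be sketched as follows (a sketch of ours; the cubic B-spline mask is an assumption for the example, and any finite mask works). Periodicity is handled by reducing the indices modulo n.

```python
import numpy as np

def refine(points, mask, center):
    """One step of rule (7): P^{k+1}_j = sum_l a_{j-2l} P^k_l, periodic indices.
    points: (n, m) control points; mask[k] = a_{k-center}; returns 2n new points."""
    n, m = points.shape
    new = np.zeros((2 * n, m))
    for j in range(2 * n):
        for k, a_k in enumerate(mask):
            num = j - k + center          # j - 2l = k - center  =>  l = (j - k + center)/2
            if num % 2 == 0:
                new[j] += a_k * points[(num // 2) % n]
    return new

mask = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 8.0   # cubic B-spline mask, a_0 at index 2
points = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
new = refine(points, mask, center=2)
assert new.shape == (8, 2)
assert np.allclose(new[1], [0.5, 0.0])   # odd rule: midpoint (P_0 + P_1)/2
assert np.allclose(new[0], (points[3] + 6 * points[0] + points[1]) / 8.0)  # even rule
```

Iterating `refine` a handful of times already yields a polygon that is visually indistinguishable from the limit curve, in line with the convergence remark above.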
Given the control points P^0, let us denote by V^0 = {V^0_j ∈ R^m, j = 0, . . ., n − 1} the points obtained by evaluating the subdivision curve with (8), i.e., V^0_j = c(j), j = 0, . . ., n − 1 (see Figure 2d). This problem is modeled by the linear operator M_n such that

M_n P^0 = V^0,  (10)

and M_n is referred to as the matrix representing the point interpolation operator for linear subdivision schemes. Note that the structure of M_n is circulant, according to Definition 1, and, for a stencil supported in [−p, q], its first row is the vector

(β^0_0, β^0_{−1}, . . ., β^0_{−p}, 0, . . ., 0, β^0_q, . . ., β^0_1).

Depending on the symmetry of the stencils, some particular cases may be analysed.

Definition 4.
A subdivision scheme is called odd-symmetric if its mask satisfies a_{−j} = a_j, j ∈ Z, and even-symmetric if it satisfies a_{1−j} = a_j, j ∈ Z.

This definition contains particular cases of the primal [38] and dual [39] form of subdivision schemes, respectively, and the corresponding limit stencils show the same type of symmetries [33]. As a consequence, for odd-symmetric schemes the even-order limit stencils inherit the even symmetry,

β^{2k}_{−j} = β^{2k}_j,  (11)

while for the odd-order limit stencils we get the odd symmetry,

β^{2k+1}_{−j} = −β^{2k+1}_j.  (12)

For even-symmetric schemes, the analogous relations hold with respect to the symmetry center 1/2. Now, let us consider the interpolation of n points V^0 with a subdivision curve. It is natural to think of those points as a sampling of the curve at integer parameters, V^0_s = c(s), s = 0, . . ., n − 1, as in (8). By doing so, we get the inverse problem with respect to (10) (see Ref. [40]).
In this setting, an interpolation problem is said to be singular if the operator M_n is singular, and therefore the problem is ill-posed for the selected stencil. In Section 5, we show that in some cases the singularity depends on the value of n.

To solve a non-singular interpolation problem where M_n has a circulant structure, one can take advantage of the diagonalization through the Fourier matrix recalled in Section 2. However, in the singular case a different strategy is needed. In the literature, in the context of curve and surface schemes, those cases are treated by a fitting model [41] or by a fairness functional [42], while introducing more points as degrees of freedom. Another possible approach consists in a regularization in the Tikhonov sense [40,43]. Moreover, it is possible to consider a non-singular perturbation of the matrix M_n. This is the strategy that we explore in Section 5.2, using a computationally convenient ω-circulant matrix.

The Block-Circulant Case for Hermite Interpolation
Considering the case of interpolating points and associated derivatives up to the (d − 1)-th order, with tangent interpolation as the first instance, Equation (9) provides some insight. Let U^{(d)} = {V^l_j : l = 0, . . ., d − 1, j = 0, . . ., n − 1} be the data that we wish to interpolate and suppose that there exists a sequence t_j, j = 0, . . ., n − 1, such that the subdivision curve c(t) interpolates the data U^{(d)}, i.e.,

c^{(l)}(t_j) = V^l_j, l = 0, . . ., d − 1, j = 0, . . ., n − 1.  (13)

If the curve c(t) is defined by n control points as in (10), we may get a solution to the point interpolation problem (10) that contradicts the values of the higher-order derivatives. Thus, in order to interpolate all the information in U^{(d)} with an equal amount of variables and equations, we need to use nd control points P^0 = {P^0_j ∈ R^m, j = 0, . . ., nd − 1} with the periodisation P^0_j = P^0_{j+nd}, j ∈ Z. A natural choice is to set the parameters in (13) as t_j = dj, j = 0, . . ., n − 1. Then, from (9) we get the nd equations

Σ_{l∈Z} β^k_{dj−l} P^0_l = V^k_j, k = 0, . . ., d − 1, j = 0, . . ., n − 1,

which can be represented in matrix form as

M_n P^0 = U^{(d)},  (14)

where the d × d blocks B_j of the matrix M_n ∈ R^{nd×nd} collect the values of the limit stencils and satisfy B_j = B_{j−n} for j = 1, . . ., n; the corresponding block diagonalization has diagonal blocks

(L_n)_{s,s} = Σ_{j=0}^{n−1} B_j exp(2πi js/n), s = 0, . . ., n − 1.  (15)

The structure of the matrix in (14) is the block adaptation of the scalar version (10), i.e., it is a block circulant matrix, as described in Definition 3. Indeed, when d = 1, M_n ≡ M_n and U^{(1)} ≡ V^0. Therefore, the Hermite problem represents the block extension of the point interpolation problem in the matrix sense, and we can exploit the corresponding linear algebra tools to solve both inverse problems.

Interpolation Model with Scalar Subdivision Schemes
Given the problem M_n P^0 = V^0 in (10), let us apply the tools available for circulant matrices to solve it. Indeed, by Remark 1, we deduce that the spectrum of M_n is

λ_s(M_n) = b(2πs/n), s = 0, . . ., n − 1,  (16)

where, for stencils with compact support in [−p, q], the symbol can be written as b(θ) = Σ_{j=−p}^{q} β^0_j exp(ijθ). For odd-symmetric schemes (q = p and β^0_{−j} = β^0_j), the symbol is the real-valued cosine polynomial

b(θ) = β^0_0 + 2 Σ_{j=1}^{p} β^0_j cos(jθ).  (17)

Conversely, for even-symmetric schemes, β^0 = (β^0_0, β^0_{−1}, . . ., β^0_{−p+1}, 0, . . ., 0, β^0_p, . . ., β^0_1) and

b(θ) = 2 exp(iθ/2) Σ_{j=1}^{p} β^0_j cos((j − 1/2)θ).  (18)

Either way, the first limit stencil satisfies b(0) = 1. Note that for odd-symmetric schemes it holds b(2π − θ) = b(θ). Therefore, the study of the symbol can be restricted to the interval [0, π] instead of [0, 2π].
In particular, for both (17) and (18), b(θ) = 0 implies b(2π − θ) = 0. From (18), for even-symmetric schemes we find b(π) = 0 independently of n and β. As π belongs to the grid {2πs/n, s ∈ N} ∩ [0, 2π] for even values of n, from (16) we obtain the following result.

Proposition 4. For any even-symmetric subdivision scheme, if the number of interpolated points n is even, then the interpolation matrix M_n is singular.
On the other hand, in the context of odd-symmetric schemes, the symbol may also vanish on the grid {2πs/n, s ∈ N} ∩ [0, 2π]. As an example, consider the primal family of J-spline schemes [38] in the particular case that generates C^3 subdivision curves, with first limit stencil β = (1/48){1, 12, 22, 12, 1}. The symbol

b(θ) = 11/24 + (1/2) cos(θ) + (1/24) cos(2θ) = (1/12)(cos(θ) + 1)(cos(θ) + 5)

has a root at θ = π, which belongs to the grid {2πs/n, s ∈ N} ∩ [0, 2π] when n is an even number. In the Hermite interpolation scenario, we can find a singular point-and-tangent interpolation operator even when the point interpolation operator is not singular. Let us consider, for instance, the cubic B-spline scheme, whose first and second limit stencils are {1/6, 4/6, 1/6} and {−1/2, 0, 1/2}, respectively. Then, the corresponding point interpolation matrix M_n is not singular, while the point-and-tangent interpolation matrix M_n of (14) is singular with a kernel of dimension 1.
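The J-spline example can be checked numerically; the following sketch evaluates the symbol on the grid {2πs/n} and verifies, via (16), that the interpolation matrix is singular exactly for even n (the values of n are arbitrary choices of ours).

```python
import numpy as np

# First limit stencil beta = (1/48)(1, 12, 22, 12, 1): symbol as in (17)
def symbol(theta):
    return 22.0 / 48.0 + (24.0 / 48.0) * np.cos(theta) + (2.0 / 48.0) * np.cos(2.0 * theta)

assert np.isclose(symbol(np.pi), 0.0)            # root at theta = pi

for n in (7, 8):
    eigs = symbol(2.0 * np.pi * np.arange(n) / n)   # spectrum of M_n by (16)
    singular = np.min(np.abs(eigs)) < 1e-12
    assert singular == (n % 2 == 0)                  # singular exactly for even n
```

The same check applied to the shifted grid (2πs + ψ)/n of Remark 3 shows how the ω-circulant perturbation discussed in Section 5.2 moves the sampling away from the root.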
As a matter of fact, in the specific case of point and tangent interpolation with odd-symmetric schemes, the following proposition holds.

Proposition 5. Using an odd-symmetric subdivision scheme and setting d = 2, the attained matrix M_n is singular.
Proof. From (3) and (15), for odd-symmetric schemes the (1, 1)-block of L_n is

(L_n)_{1,1} = Σ_{j=0}^{n−1} B_j,

whose second row contains the sums of the values of the stencil β^1, which vanish, where we used the fact that β^1_{−j} = −β^1_j and, in particular, β^1_0 = 0 from (12). The block (L_n)_{1,1} is singular, since it has a null row, and the matrices L_n and M_n are in turn singular because of (3).

A Solution by Shifting Parameters
Propositions 4 and 5 state that, for even-symmetric masks (with n even) and for odd-symmetric masks (with d = 2), the matrices M_n and M_n are singular, independently of the specific mask coefficients. In this section, we propose a different model for interpolating that avoids this inconvenience. What follows is motivated by Plonka's work [44] on Hermite interpolation with B-spline parametric curves.
Let us consider the dual subdivision schemes again. In Section 4, by interpolating at the parameters t_j = j, j = 0, . . ., n − 1, we obtained the first limit stencil as β^0 = {ϕ(j), j ∈ Z}. Here we consider the variant β^0_σ = {ϕ(j + σ), j ∈ Z}, with σ ∈ (0, 1). More specifically, given a positive integer ν, we investigate the method for σ varying in (1/ν)Z ∩ (0, 1). The evaluation of the basic function is done as in [35]. In the case of symmetric masks, as in (11) and (12), the following results hold.

Proposition 6. For an odd-symmetric subdivision scheme with mask a = {a_{−p}, a_{−p+1}, . . ., a_p}, where p ∈ N, the stencil obtained with the set ϕ((1/2 + Z) ∩ [−p, p]) is even-symmetric. Analogously, for an even-symmetric subdivision scheme with mask a = {a_{−p}, a_{−p+1}, . . ., a_{p+1}}, where p ∈ N, the stencil obtained with the set ϕ(1/2 + Z) is odd-symmetric.

Our proposal consists in modifying the interpolation models (10) and (14) so as to interpolate at half-integer parameters, i.e., V^0_j = c(j + 1/2), j = 0, . . ., n − 1. For point interpolation, the symbol changes from (18) to (17), and this way we resolve the singularity in Proposition 4. Figure 3 portrays an example of an even-symmetric subdivision scheme where, after a 1/2-shift, the new matrix is nonsingular. However, when the new stencil leads to a trigonometric polynomial with roots in the grid (1/n)Z ∩ [0, 1], it is not possible to avoid the singularity of the corresponding matrix. In such situations, one may consider the general approach V^0_j = c(j + σ), with σ ∈ (0, 1), or treat the singularity as discussed in the next subsection.
The use of shifted parameters for interpolation can be employed as a degree of freedom for the geometry of the interpolation curve (see Figure 4). Nevertheless, the symmetry provided by the subdivision scheme might be lost. Now, let us consider the point and tangent interpolation with interpolation at the parameters 1/2 + Z for odd-symmetric schemes. The first block (20) in the factorization of M_n changes accordingly.

This block is no longer singular for every stencil, as it was instead when σ = 0 in Proposition 5. Figure 5b is an example of an odd-symmetric subdivision scheme where, after a 1/2-shift, the new matrix is nonsingular, while Figure 5a shows the least squares solution (σ = 0), which in this case presents artifacts (loops, to be precise).
In the cases of point interpolation with odd-symmetric schemes and point-tangent interpolation with even-symmetric schemes, we stick with the choice of interpolation at integer parameters.

Our Regularizing Strategy
We have already observed that in some cases the matrix obtained by interpolating at integer parameters is singular and that sometimes the singularity cannot be avoided even by a proper shift of the parameters. In those cases, independently of the interpolated points, the inverse of M_n or M_n is not defined, and an alternative solution has to be chosen. A first idea is using the pseudo-inverse of the matrix, providing a least-squares solution [41]. Another approach is considering a regularization, which is an approximation of the ill-posed problem by a family of neighboring well-posed problems [40]. Among the regularization methods for solving an ill-posed problem whose discrete form is the linear system Ax = b, we mention the Tikhonov approach, which consists in solving the family of problems

argmin_x ‖Ax − b‖² + λ‖Lx‖²,

depending on the regularization parameter λ. The latter controls the weight given to the residual norm and to the regularization term. The optimal value for λ can be chosen by the discrepancy principle, generalized cross-validation, or the L-curve method [43]. The operator L, which can be taken as the identity operator or a differential operator (for instance, the first or second derivative operator), aims to alter the least-squares solution so as to enforce special features of the regularized approximations [40,43]. As a necessary condition, it is required that Ker(A) ∩ Ker(L) = {0}. This problem is equivalent to solving the normal equations (AᵀA + λLᵀL)x = Aᵀb for each λ.
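The Tikhonov approach via the normal equations can be sketched in a few lines (a generic sketch of ours, with L equal to the identity and a toy rank-deficient matrix, not tied to a specific subdivision operator).

```python
import numpy as np

def tikhonov(A, b, lam, L=None):
    """Solve (A^T A + lam * L^T L) x = A^T b, the normal equations of the
    Tikhonov functional ||Ax - b||^2 + lam * ||Lx||^2."""
    n = A.shape[1]
    L = np.eye(n) if L is None else L
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)

# On a singular system, the regularized solution stays bounded and, as lam -> 0,
# approaches the minimum-norm least-squares solution A^+ b.
A = np.array([[1.0, 1.0], [1.0, 1.0]])   # rank-deficient example
b = np.array([1.0, 1.0])
x = tikhonov(A, b, 1e-8)
assert np.allclose(x, np.linalg.pinv(A) @ b, atol=1e-6)
```

Choosing L as a discrete derivative operator instead of the identity penalizes oscillations rather than magnitude, which is one way to enforce the "special features" mentioned above.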
Since the quality of the solution obtained either by least squares or by Tikhonov regularization is not acceptable in many cases, in the following we study a new regularized problem, depending on a parameter ω, in which a (possibly block) ω-circulant matrix replaces the original circulant coefficient matrix.
First, we consider the basic interpolation case

M_{n,ω} P^0 = V^0,

where M_{n,ω} is the ω-circulant counterpart of M_n with ω = exp(iψ). By Remark 3, the eigenvalues of the new matrix M_{n,ω} are given by the standard uniform sampling of the function b(θ + ψ/n). In this manner, the spectrum of M_n is shifted. We observe that M_n and M_{n,ω} are both singular only if b(θ) has at least two roots at distance ψ/n from each other, one of which lies on the grid {2πs/n, s ∈ N} ∩ [0, 2π]. If the latter condition is not satisfied, then the initial system M_n P^0 = V^0 is singular, while its perturbation M_{n,ω} P^0 = V^0 has a unique, complex solution, even though the original problem is defined in the real domain. However, for ψ small enough in modulus, the imaginary part of the solution becomes negligible and the perturbed system is close to the original one.
Secondly, we consider the interpolation of points and associated tangent vectors. The block (20) is perturbed analogously. Then, M_{n,ω} is not necessarily singular, even if M_n is, and we can solve M_{n,ω} P^0 = U^{(2)} rather than (14). In this case, the structure of the symbol differs from (17) and (18).

Remark 4. For the particular case of cubic B-spline curves, in Ref. [45] the point and tangent interpolation problem is solved by considering only the case of unit tangent vectors V^1_j, j = 0, . . ., n − 1. Their proposal consists in a non-linear iterative method whose convergence has not been proven.

Numerical Tests
In the present section we perform a few numerical tests in order to assess the quality of the solution obtained by adopting the ω-circulant regularization, in the setting of a singular interpolation operator.In all the following examples we employ the J-spline family [38].
We denote by P^0 and P^0_ω the control point vectors obtained as M_n^† U^{(1)} (resp. M_n^† U^{(2)} in the Hermite case) and M_{n,ω}^† U^{(1)} (resp. M_{n,ω}^† U^{(2)} in the Hermite case), respectively. The pseudo-inverses are given by

M_{n,ω}^† = (F_n D_ω ⊗ I_d)^{-1} L_{n,ω}^† (F_n D_ω ⊗ I_d),

with d = 1 in the scalar case and d = 2 in the Hermite case, where L_{n,ω}^† inverts the non-zero diagonal elements (L_{n,ω})_k appearing in (3) and leaves the null ones unchanged. For ω = 1 we immediately get M_n^† and M_n^†. We compare the subdivision curves generated from the control point vectors P^0 and P^0_ω, firstly in terms of interpolation, which is the main goal. With this purpose, consider the residuals M_n P^0 − U^{(1)}, M_n P^0 − U^{(2)}, M_n P^0_ω − U^{(1)}, and M_n P^0_ω − U^{(2)}, which are vectors of points. Notice that the last two residuals are independent from the systems of equations M_{n,ω} P^0_ω = U^{(1)} and M_{n,ω} P^0_ω = U^{(2)}, but they are related to the interpolation problems (10) and (14) with the regularized solution.
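For the scalar circulant case, the FFT-based pseudo-inversion can be sketched as follows (a sketch under our own naming; the stencil {1/2, 1/2} is a hypothetical even-symmetric example yielding a singular M_n for even n): only the non-negligible eigenvalues are inverted, the others are set to zero.

```python
import numpy as np

def circulant_pinv_solve(col, v, tol=1e-10):
    """Least-squares solution of circ(col) x = v via the FFT: invert the
    non-negligible eigenvalues only (sketch of M_n^dagger applied to v)."""
    eig = np.fft.fft(col)                 # eigenvalues of circ(col)
    inv = np.zeros_like(eig)
    mask = np.abs(eig) > tol
    inv[mask] = 1.0 / eig[mask]           # reciprocal of nonzero eigenvalues only
    return np.fft.ifft(inv * np.fft.fft(v))

n = 8
col = np.zeros(n); col[0] = 0.5; col[1] = 0.5   # singular for even n
v = np.ones(n)
p = circulant_pinv_solve(col, v)

# compare with the dense pseudo-inverse of the full circulant matrix
M = col[np.subtract.outer(np.arange(n), np.arange(n)) % n]
assert np.allclose(p.real, np.linalg.pinv(M) @ v)
```

The FFT route costs O(n log n) instead of the O(n³) of a dense pseudo-inverse, which is what makes the comparisons in this section affordable for large n.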
By reasoning in local terms, we choose the norms ‖M_n P^0_ω − U^{(1)}‖ and ‖M_n P^0_ω − U^{(2)}‖, where ‖A‖ := sup_j ‖(A)_j‖_2, with (A)_j the j-th row of the matrix A and ‖·‖_2 the vector 2-norm. This way we measure, by the maximum distance, how far the interpolated information in U^{(d)}, d = 1, 2, is from the approximations M_n P^0_ω and M_n P^0_ω, respectively.
On the other hand, reasoning in global terms, we consider the relative errors in the Frobenius matrix norm,

‖M_n P^0_ω − U^{(1)}‖_fro / ‖U^{(1)}‖_fro and ‖M_n P^0_ω − U^{(2)}‖_fro / ‖U^{(2)}‖_fro.
In order to evaluate the quality of each curve, proper fairness measures could be used [46,47], penalizing artifacts such as loops (see Figure 5a).
In each figure, the curve obtained with the least-squares solution P^0 is portrayed with solid green lines, while the one obtained with the regularized solution P^0_ω is represented by dashed red lines.
The solution computed with the ω-circulant matrix interpolates the given points even though a perturbed, nonsingular matrix is used. Indeed, in Figure 6, where we set ω = exp(5 × 10^{−3} i), we only see one line because the two solutions P^0 and P^0_ω visually match. We get similar results with values of ω close to exp(5 × 10^{−3} i). Hence, the quality of the interpolating solution is not affected by the perturbation.
Applying a primal scheme with odd-symmetric mask as in (19), with a singular interpolation operator, the data points are not interpolated by the least-squares solution (see Figure 7). Considering a regularized solution, the spectrum of M_n is perturbed and the null eigenvalue is shifted, as shown in Figure 8. As a result, the condition number is improved, but the approximation is not. In Figure 7 the results are shown for ω = exp(5 × 10^{−2} i). The residual norms are

‖M_n P^0 − U^{(1)}‖_fro / ‖U^{(1)}‖_fro = 5.5 × 10^{−2} and ‖M_n P^0_ω − U^{(1)}‖_fro / ‖U^{(1)}‖_fro = 6.0 × 10^{−2}.

It is worth noticing that the regularized curve is closer to the interpolation points, although this is not reflected in the norms. In the block setting for point and tangent interpolation (see Figure 9), the situation is similar. Even though the spectrum of M_{n,ω} is perturbed with respect to M_n and, unlike the latter, does not contain a null eigenvalue (see Figure 10), the solution is not improved; refer to Figure 9, in which ω = exp(5 × 10^{−1} i). We observe that the points are interpolated, but the tangent interpolation is less accurate, and the residual norms confirm this behavior. However, if we consider a known solution P^0, we can generate U^{(1)} = M_n P^0 from the columns of M_n and obtain, for a suitable parameter ω, a solution that interpolates the points as accurately as the least-squares solution (see Figure 11, in which ω = exp(5 × 10^{−3} i)), even though the spectra of M_n and M_{n,ω} are different (see Figure 12).

Conclusions
In the present work we explored two distinct applications of ω-circulant matrices.After a brief introduction of this matrix algebra and its fundamental properties, we first described their role in the parallel solution and preconditioning of (block) triangular linear systems of Toeplitz type.We recalled an efficient classical algorithm, which relies on the well-known diagonalization of ω-circulants through the FFT, and studied the stability issue, which needs special attention due to the fact that the condition number of the fast transform strongly depends on the parameter ω.
Then we analyzed subdivision schemes, for which a few questions remain open. Indeed, the interpolation of points and tangent vectors with scalar subdivision schemes, viewed as an inverse problem, may lead to a system of equations with a singular matrix. As a first possible solution, we proposed changing the common approach of interpolating at integer parameters, depending on the symmetry of the subdivision mask. With this solution we avoid the presence of a singular matrix in the model and we still benefit from the Fourier factorization of a non-singular circulant or block-circulant matrix.
In some cases it is not possible to avoid the singularity while keeping the symmetry of the stencil. In such situations we considered the least squares solution and the related ω-circulant regularization of the interpolation problem. In more detail, the difficulties due to the singular character of the interpolation matrix are overcome by perturbing its spectrum, that is, by taking into consideration its ω-circulant counterpart.
As observed in the numerical experiments, the ω-regularization approach is not by itself sufficient for solving the problem. When the original matrix is singular, its ω-perturbation is well conditioned, but the interpolation condition is no longer represented exactly, and this affects the approximation quality. However, the numerical solution stemming from the ω-circulant linear system does interpolate the data points, at least in some cases where the solution is known.
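A toy analogue of the known-solution experiment reads as follows; the singular circulant built from the made-up column c and the reference data p0 are stand-ins for the actual interpolation operator and control points, and the ω-system is solved through the usual FFT factorization (illustrative sketch only).

```python
import numpy as np

# Known-solution experiment on a toy problem: the right-hand side
# u = M p0 is generated from the columns of a singular circulant M,
# and the omega-circulant counterpart M_omega is solved instead.
n = 16
c = np.zeros(n)
c[0], c[1], c[-1] = 2.0, -1.0, -1.0        # zero row sums: M is singular
p0 = np.sin(2 * np.pi * np.arange(n) / n)  # made-up "known solution"
u = np.fft.ifft(np.fft.fft(c) * np.fft.fft(p0)).real   # u = M p0 via FFT

omega = np.exp(5e-3j)                      # small perturbation parameter
theta = omega ** (1.0 / n)
d = theta ** np.arange(n)
x = np.fft.ifft(np.fft.fft(d * u) / np.fft.fft(c * d)) / d  # solve M_omega x = u

rel_res = np.linalg.norm(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)) - u) / np.linalg.norm(u)
print(rel_res)   # small: x almost solves the original singular system
```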
A further open problem is the study of the solution $\mathrm{Sol}(\exp(i\psi))$: in this setting, we would like to explore the existence of an asymptotic expansion of the form $\mathrm{Sol}(\exp(i\psi)) = \mathrm{Sol} + c_1 \psi + c_2 \psi^2 + c_3 \psi^3 + \cdots$ for a small parameter ψ.
An expansion of the latter type would open the door to simple and cheap extrapolation procedures for computing very precise solutions. Preliminary numerical experiments have been performed and are encouraging. We are convinced that this research line is worth investigating in future steps.
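If the conjectured expansion holds, a Richardson-type extrapolation would already raise the accuracy from $O(\psi)$ to $O(\psi^2)$: the combination $2\,\mathrm{Sol}(\psi/2) - \mathrm{Sol}(\psi)$ cancels the $c_1$ term. We illustrate the mechanism on a hypothetical scalar model of the solution map (the coefficients below are made up).

```python
# Hypothetical scalar stand-in for the solution map Sol(exp(i*psi)):
# the exact value plus first- and second-order terms in psi.
def sol(psi, exact=1.0, c1=0.3, c2=-0.7):
    return exact + c1 * psi + c2 * psi ** 2

exact = 1.0
for psi in (1e-1, 1e-2, 1e-3):
    plain = abs(sol(psi) - exact)                      # O(psi)
    extrap = abs(2 * sol(psi / 2) - sol(psi) - exact)  # O(psi**2): c1 cancels
    print(f"psi={psi:.0e}  plain={plain:.2e}  extrapolated={extrap:.2e}")
```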
For any fixed $p, q \in \mathbb{N}$ the matrix $C = \sum_{j=-p}^{q} \Pi_n^j \alpha_j$ is circulant, since this sum can always be traced back to the form in the above definition. Denoting by $\cdot^*$ the conjugate transpose, circulant matrices admit the following spectral decomposition.

Proposition 1. $C$ can be diagonalized as $C = F_n L_n F_n^*$, where $L_n = \operatorname{diag}_{s=0,\dots,n-1}(\cdots)$.
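Proposition 1 can be verified numerically in a few lines; here we take $F_n$ in the inverse-DFT (unitary) convention, so that the eigenvalues in $L_n$ are the DFT of the first column (illustrative check with random data).

```python
import numpy as np

# Numerical check of Proposition 1: a circulant C with first column c
# satisfies C = F_n L_n F_n^*, with F_n unitary (DFT-based) and
# L_n the diagonal matrix of the DFT of c.
n = 6
rng = np.random.default_rng(0)
c = rng.standard_normal(n)
C = np.array([[c[(i - k) % n] for k in range(n)] for i in range(n)])

Fn = np.conj(np.fft.fft(np.eye(n))) / np.sqrt(n)  # unitary: F_n F_n^* = I
Ln = np.diag(np.fft.fft(c))
print(np.allclose(C, Fn @ Ln @ Fn.conj().T))      # True
```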

Figure 2. Iterations of a non-interpolatory subdivision scheme. (a) Control points $P^0$ (blue balls). (b) First refinement of the polygon, with vertices $P^1$. (c) Second refinement, with vertices $P^2$. (d) Fourth refinement and interpolated points $V^0$ in the limit (red squares).

Figure 3. Point interpolation with a dual subdivision scheme [39] considering the shifted parameters.

Figure 4. Interpolation with a quintic uniform B-spline at different parameter values.

Figure 5. Interpolating points and tangent directions with a cubic B-spline curve by shifting the parameters.

Figure 6. Point interpolation with a quintic B-spline curve (that belongs to the J-spline family). Green solid line: least squares solution. Red dashed line: regularized solution.

Figure 7. Point interpolation with a J-spline curve (19) with a singular interpolation operator. Green solid line: least squares solution. Red dashed line: regularized solution.

Figure 8. Spectrum of $M_n$ and of its regularized counterpart: the null eigenvalue is shifted (see text).

Figure 9. Point and tangent vector interpolation with a quintic B-spline curve (that belongs to the J-spline family). Green solid line: least squares solution. Red dashed line: regularized solution.

Figure 10. Eigenvalues of $M_n$ and $M_{n,\omega}$ in the block setting (see text).

Figure 11. Point interpolation with a J-spline curve (19) with a singular interpolation operator, for a known solution. Green solid line: least squares solution. Red dashed line: regularized solution.

Figure 12. Eigenvalues (real part) of $M_n$ and $M_{n,\omega}$ for the experiment in Figure 11.

Table 1. The relative error between $x = \sin(\dots)$ and the computed solution as ω decreases.

Our second example, where x is this time chosen at random, yields similar results, collected in Table 2. We observe that this matter is not addressed in Ref. [21].

Table 2. The relative error between a randomly chosen x and the computed solution as ω decreases.