Article

An Eigenvector Problem Arising in the Study of Convergence of Walsh–Fourier Series

by Jeffrey A. Hogan 1 and Joseph D. Lakey 2,*
1 School of Computer and Information Sciences, The University of Newcastle, Newcastle, NSW 2308, Australia
2 College of Arts and Sciences, New Mexico State University, Las Cruces, NM 88003, USA
* Author to whom correspondence should be addressed.
Mathematics 2026, 14(5), 829; https://doi.org/10.3390/math14050829
Submission received: 5 February 2026 / Revised: 21 February 2026 / Accepted: 26 February 2026 / Published: 28 February 2026
(This article belongs to the Special Issue New Perspectives in Harmonic Analysis)

Abstract

This work establishes bounds for certain matrices that arise in the study of the convergence of expansions in Walsh functions, or Walsh–Fourier series. The matrices in question arise by “truncating” certain orthogonal matrices corresponding to expansions of dyadic step functions on the unit interval in the basis of Walsh functions. Here, truncation means, for each column, replacing all entries in that column below a column-dependent row by zero. The truncations correspond to partial sum operators in the Walsh basis. We study here a specific family of these truncated matrices that are shown elsewhere to have optimal norms among certain families of truncations. The main result provides an approximate eigenvalue bound from which one can conclude that the norm of the truncation approaches a fixed value as the dimension of the truncation matrix approaches infinity. Its proof relies on the interplay between the continuous and discrete settings. In particular, it is shown that integer samples of certain sinusoidal functions form approximate eigenvectors of a compressed version of the truncation. This bound plays an important role in a broader new approach to the convergence of Walsh–Fourier series of which this work is a part.

1. Introduction

The subject matter of this work is an eigenvalue problem that the authors stumbled upon as part of a larger study of the convergence of Fourier series in the Walsh setting. The broader work seeks uniform bounds on the norms of certain matrices obtained by starting from a $2^N \times 2^N$ Hadamard matrix (in the Paley ordering) and setting, for each column $k$, all entries below a row $r(k)$ that depends on $k$ equal to zero (see Figure 1). These “truncated” matrices correspond to linearized partial sum operators for Walsh–Fourier series when operating on dyadic step functions on $[0,1]$, as explained in [1]. Uniform bounds on their norms imply a direct $L^2$-bound for the maximal partial sum operator for Walsh–Fourier series, and thereby almost everywhere convergence of these series on $L^2[0,1]$.
The study of almost everywhere convergence of trigonometric Fourier series has a long history. Seminal results include Carleson’s proof [2] of Lusin’s conjecture that the Fourier series of $f \in L^2(0,1)$ converges to $f$ almost everywhere, Hunt’s extension [3] to $L^p$ ($1 < p < \infty$) by means of interpolation, and Fefferman’s proof [4] by means of linearized maximal partial sum operators. Some of the substantial body of related work on convergence of Fourier series in the trigonometric setting is documented in the references [5,6,7,8,9,10,11,12]. Almost everywhere convergence of Walsh–Fourier series in $L^2(0,1)$ was first established by Billard [13] not long after Carleson’s theorem was proved for the trigonometric case, and extended to $L^p$ along the lines of Hunt’s approach for the trigonometric case by Tateoka in 1968 [14]. Gosselin provided a proof along the lines of Fefferman’s approach [15]. Some related work on the convergence of Walsh–Fourier series can be found in the references [16,17,18]. The Walsh setting corresponds to expansion in characters of the dyadic group $\mathbb{Z}_2^{\mathbb{N}}$. This generalizes naturally to Vilenkin groups, and almost everywhere convergence has also been studied in this setting, e.g., [19,20,21]. Almost everywhere convergence in $L^2$ has been studied for other nontrigonometric orthogonal expansions, including Laguerre functions [22], orthogonal polynomials [23], and other orthogonal systems, e.g., [24,25,26]. The direct approach of our broader program to convergence in $L^2$ in the Walsh setting leverages the specific structure of dyadic partial sums, a type of lacunarity. Convergence of lacunary partial sums of trigonometric series has also been studied in the contexts of almost periodic functions [27], multiple Fourier series [28,29], and norms near $L^1$ in the trigonometric [30] and Walsh [31] settings. Together these references represent just a small fraction of the study of the convergence of Fourier series in the setting of function spaces.
Pointwise (almost everywhere) convergence of Fourier expansions of $L^2$-functions in these settings typically follows from boundedness of associated maximal partial sum operators between certain endpoint function spaces. But $L^2$-boundedness of maximal partial sums is known only indirectly, not by direct and more precise estimates. For example, Fefferman’s proof gives an $L^2$–$L^1$-bound for a maximal partial sum operator. The goal of our broader work is to establish direct $L^2$-bounds for maximal partial sums in the Walsh setting. The specific matrices considered here have the largest norms among a family of partial sum operators in this setting, as will be explained in forthcoming work. See [1] for further discussion. The purpose of the present paper is to identify a precise $\ell^2 \to \ell^2$ operator norm bound on these specific matrices. While this represents just one piece of the bigger puzzle, it is a critical one. The methods required for this piece are particularly elementary, so we have decided to present it separately as an illustration of the interplay between continuous and discrete methods in harmonic analysis [32,33]. We will now leave this backstory and formulate the specific eigenvalue problem to be addressed here.
For each $N = 2, 3, \dots$ we define $(N+1)\times(N+1)$ matrices $D_N$, $M_N$, and $C_N$, with rows and columns indexed by $\{0,1,\dots,N\}\times\{0,1,\dots,N\}$, as follows:
$$D_N = \operatorname{diag}(1, 1, 2, 4, \dots, 2^{N-1}), \qquad M_N(i,j) = \begin{cases} 1, & i+j \le N,\\ 0, & \text{otherwise,}\end{cases} \qquad C_N = 2^{-N/2}\, D_N^{1/2} M_N D_N^{1/2}. \tag{1}$$
The matrix $C_{30}$ is illustrated in Figure 2a. The main result of this work can be stated as follows.
Theorem 1.
There is a constant $C>0$ such that for all $N = 2, 3, \dots$ there are an $\alpha>0$ and a vector $s = \{\sin(\alpha(1-k/N))\}_{k=0}^{N}$ such that
$$\Bigl|(C_N s)_k - \Bigl(1+\frac{\sqrt{2}}{2}\Bigr)(s)_k\Bigr| \le \frac{C}{N}, \qquad k = 0, \dots, N.$$
If $Q = \frac{N\ln 2}{2}$ then $\alpha$ can be taken to have the form $\alpha = \pi\bigl(1 - \frac{1}{Q} + \frac{1}{Q^2} + O\bigl(\frac{1}{Q^3}\bigr)\bigr)$ as $Q \to \infty$.
This uniform bound obviously also implies the $\ell^2$-bound $\bigl\|C_N s - \bigl(1+\frac{\sqrt{2}}{2}\bigr)s\bigr\|_2 \le \sqrt{2}\,C/\sqrt{N}$. Either the uniform or the $\ell^2$ bound allows us to state that $\{\sin(\alpha(1-\cdot/N))\}$ is an approximate eigenvector of $C_N$. However, to be as precise as possible, it is more accurate to state that $\{\sin(\alpha(1-\cdot/N))\}$ is an approximate eigenvector of $C_N$ corresponding to the largest eigenvalue of $C_N$. The norm of $C_N$ is increasing with $N$ and is strictly smaller than $1+\frac{\sqrt{2}}{2}$, although evidently the numerical values of $\|C_N\|_{2\to 2}$, which are the largest eigenvalues of $C_N$ (which is symmetric), approach $1+\frac{\sqrt{2}}{2}$ as $N$ increases, as shown in Figure 2b. The proof of Theorem 1 is quite elementary, and the error terms leading to the bound are fairly explicit. The bound on $C_N$ provides a bound on norms of truncated Walsh–Hadamard matrices, as described next.
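As a quick numerical illustration (a sketch only, using pure-Python power iteration rather than the MATLAB computations behind Figure 2b; the helper names are our own), one can assemble $C_N$ from (1) and watch its norm increase toward $1+\sqrt{2}/2 \approx 1.7071$:

```python
import math

def build_C(N):
    # C_N = 2^{-N/2} D^{1/2} M_N D^{1/2} from (1), with D = diag(1, 1, 2, ..., 2^{N-1})
    # and M_N(i, j) = 1 exactly when i + j <= N.
    d = [1.0] + [2.0 ** (i - 1) for i in range(1, N + 1)]
    sd = [math.sqrt(x) for x in d]
    scale = 2.0 ** (-N / 2)
    return [[scale * sd[i] * sd[j] if i + j <= N else 0.0
             for j in range(N + 1)] for i in range(N + 1)]

def top_eigenvalue(A, iters=500):
    # Power iteration; C_N is symmetric with nonnegative entries, so this
    # converges to its largest eigenvalue, which equals its 2-norm.
    n = len(A)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        nrm = math.sqrt(sum(x * x for x in w))
        v = [x / nrm for x in w]
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    return sum(a * b for a, b in zip(Av, v))  # Rayleigh quotient

lam16 = top_eigenvalue(build_C(16))
lam32 = top_eigenvalue(build_C(32))
print(lam16, lam32, 1 + math.sqrt(2) / 2)  # norms increase toward 1 + sqrt(2)/2
```

The Rayleigh quotient of the converged iterate never exceeds the true largest eigenvalue, so the printed values sit strictly below the limit, consistent with the discussion above.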

1.1. Connection with “Truncated” Walsh–Hadamard Matrices

Figure 1a shows the Hadamard matrix $WH_5$ with columns in the “Paley order” (see [1] for a precise definition of $WH_N$). The middle plot in Figure 1 is the matrix $TW_5$ and the right plot shows a random “column truncation” of $WH_5$. By a column truncation here we mean, for each column $k$, setting all entries equal to zero below a given row $r(k)$ that depends on $k$. The matrix $TW_N$ is such that the $(k,\ell)$-entry of $2^{N/2}\,TW_N$ is equal to one if $k \in I_i$ (i.e., $2^{i-1} \le k < 2^{i}$), $\ell \in I_j$, and $i+j \le N$, and otherwise $TW_N(k,\ell) = 0$. In forthcoming work it will be shown that, for each $N$, $TW_N$ has maximal norm among all “dyadic column truncations” of $WH_N$, that is, column truncations for which the index of the last nonzero entry of a column is always a power of two, cf. [1]. As such, it is imperative to know the norm of $TW_N$, or at least a tight bound for this norm. Obtaining such a bound is the overall aim of the present work. The following lemma identifies $C_N$ as a compression of $TW_N$.
Lemma 1.
If $v = [v_0, v_1, \dots, v_{2^N-1}]^T$ is an eigenvector of $TW_N$ then the entries of $v$ are constant on the dyadic intervals $I_j = \{2^{j-1}, \dots, 2^{j}-1\}$, $j = 1, \dots, N$. That is, there is a vector $w = [w_0, \dots, w_N]^T \in \mathbb{R}^{N+1}$ such that $v_i = w_j$ if $i \in I_j$. The compressed vector $c = [c_0, \dots, c_N]$, where $c_0 = w_0$ and $c_j = 2^{(j-1)/2} w_j$, $j = 1, \dots, N$, with $w_j$ the value of $v$ on $I_j$, is an eigenvector of $C_N$ with eigenvalue equal to that of $v$ as an eigenvector of $TW_N$.
Proof of Lemma 1.
Any linear combination of columns of $TW_N$ is constant on each of the dyadic intervals $I_1, \dots, I_N$, and the first statement follows from this observation. Suppose now that $v = [w_0, w_1, \dots]$ ($w_i$ repeated $|I_i| = 2^{i-1}$ times) is an eigenvector of $TW_N$ with eigenvalue $\lambda$. Since $w_j$ is repeated $2^{j-1}$ times for $j = 1, \dots, N$, by the definition of $TW_N$, if $k \in I_i$ then
$$\lambda w_i = \lambda v_k = (TW_N v)_k = 2^{-N/2}\Bigl[w_0 + \sum_{j=1}^{N-i} 2^{j-1} w_j\Bigr] = 2^{-N/2}\Bigl[c_0 + \sum_{j=1}^{N-i} 2^{(j-1)/2} c_j\Bigr] = 2^{-(i-1)/2}\, 2^{(i-1-N)/2}\Bigl[c_0 + \sum_{j=1}^{N-i} 2^{(j-1)/2} c_j\Bigr]$$
where we used $c_j = 2^{(j-1)/2} w_j$. On the other hand, direct calculation from the definition of $C_N$ gives
$$(C_N c)_i = 2^{(i-1-N)/2}\Bigl[c_0 + \sum_{j=1}^{N-i} 2^{(j-1)/2} c_j\Bigr].$$
Since $k \in I_i$, it follows then that $2^{(i-1)/2}\lambda v_k = 2^{(i-1)/2}\lambda w_i = \lambda c_i$, that is, $c$ is a $\lambda$-eigenvector of $C_N$.    □
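Lemma 1 can be sanity-checked numerically. The pure-Python sketch below (helper names are ours; we use the convention $I_0 = \{0\}$ for the zeroth block, as implied by the definition of $TW_N$) builds $TW_N$ for $N=6$ and confirms that its largest eigenvalue agrees with that of the $(N+1)\times(N+1)$ compression $C_N$:

```python
import math

def build_C(N):
    # C_N = 2^{-N/2} D^{1/2} M_N D^{1/2} from (1).
    d = [1.0] + [2.0 ** (i - 1) for i in range(1, N + 1)]
    sd = [math.sqrt(x) for x in d]
    scale = 2.0 ** (-N / 2)
    return [[scale * sd[i] * sd[j] if i + j <= N else 0.0
             for j in range(N + 1)] for i in range(N + 1)]

def block_index(m):
    # m = 0 -> 0 (convention I_0 = {0}); otherwise m in I_i = {2^{i-1},...,2^i - 1} -> i.
    return m.bit_length()

def build_TW(N):
    # (k, l)-entry of 2^{N/2} TW_N is one exactly when the dyadic block
    # indices satisfy block_index(k) + block_index(l) <= N.
    size = 2 ** N
    scale = 2.0 ** (-N / 2)
    return [[scale if block_index(k) + block_index(l) <= N else 0.0
             for l in range(size)] for k in range(size)]

def top_eigenvalue(A, iters=400):
    # Power iteration; both matrices are symmetric with nonnegative entries,
    # so the iteration converges to the largest eigenvalue.
    n = len(A)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        nrm = math.sqrt(sum(x * x for x in w))
        v = [x / nrm for x in w]
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    return sum(a * b for a, b in zip(Av, v))

N = 6
lam_TW = top_eigenvalue(build_TW(N))  # 64 x 64 truncated Walsh-Hadamard pattern
lam_C = top_eigenvalue(build_C(N))    # 7 x 7 compression
print(lam_TW, lam_C)  # the two largest eigenvalues coincide
```

This reflects the content of the lemma: the norm-attaining eigenvector of $TW_N$ is constant on dyadic blocks and compresses to an eigenvector of $C_N$ with the same eigenvalue.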
Remark 1.
The direct expression above for C N c allows one to express a λ-eigenvector as a solution of the recurrence relation
$$\lambda c_{k+1} = \sqrt{2}\,\lambda\, c_k - \frac{1}{\sqrt{2}}\, c_{N-k}, \qquad 1 \le k \le N-1. \tag{2}$$
A simple derivation of this relation can be found in [1]. It can be useful for estimating the eigenvectors numerically, provided one takes into account that entries of $C_N$ fall below machine precision for large $N$. This relation, however, does not lead to any elegant analytical expression for the values of eigenvectors of $C_N$. To get around this obstacle we introduce, in the next section, a continuous model for $C_N$.
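In fact, the recurrence reflects an exact identity satisfied by $C_N$ itself: for any vector $c$, $(C_N c)_{k+1} = \sqrt{2}\,(C_N c)_k - \frac{1}{\sqrt{2}}\,c_{N-k}$ for $1 \le k \le N-1$ (the index $k=0$ is exceptional because of the weight-one entry of $D_N$). A short pure-Python check of this identity, under our own naming and with an arbitrary test vector:

```python
import math

def build_C(N):
    # C_N = 2^{-N/2} D^{1/2} M_N D^{1/2} from (1).
    d = [1.0] + [2.0 ** (i - 1) for i in range(1, N + 1)]
    sd = [math.sqrt(x) for x in d]
    scale = 2.0 ** (-N / 2)
    return [[scale * sd[i] * sd[j] if i + j <= N else 0.0
             for j in range(N + 1)] for i in range(N + 1)]

N = 12
C = build_C(N)
c = [math.sin(1.0 + 0.7 * j) for j in range(N + 1)]  # an arbitrary test vector
Cc = [sum(C[i][j] * c[j] for j in range(N + 1)) for i in range(N + 1)]
root2 = math.sqrt(2)
# Check (C c)_{k+1} = sqrt(2) (C c)_k - c_{N-k} / sqrt(2) for k = 1, ..., N-1.
resid = max(abs(Cc[k + 1] - (root2 * Cc[k] - c[N - k] / root2))
            for k in range(1, N))
print(resid)  # zero up to rounding
```

Applied to an eigenpair $(\lambda, c)$, where $Cc = \lambda c$, the identity specializes to the recurrence (with both sides scaled by $\lambda$).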

1.2. Outline of Approach

Here is an outline of our approach to proving Theorem 1. In Section 2 we introduce a continuous-parameter model for the operators $C_N$ and denote the corresponding model operators by $K_Q$, where $Q$ is related to $N$ by $Q = N\ln 2/2$. In Section 3 we show that, in contrast to the finite-dimensional operator $C_N$, the spectrum of the operator $K_Q$ can be quantified precisely. In Section 4 we complete the proof of Theorem 1 by applying standard approximation methods in analysis to estimate the errors between coordinates of specific eigenvectors of $C_N$ and samples of corresponding eigenfunctions of $K_Q$. In Section 5 we provide plots of the numerically computed eigenvectors of $C_N$ having the largest eigenvalue, alongside corresponding eigenfunctions of the model operators and the errors between them, in order to illustrate how the quality of approximation depends on $N$. In Appendix A we prove a technical lemma introduced in Section 3 whose proof is less elementary than the other content.

2. A Continuous Model Operator for C N

The sums defining eigenvectors of $C_N$ can be regarded as Riemann sum approximations of certain integrals. Specifically, think of $c_k$ as a value $f(k)$, $k = 0, \dots, N$, of a continuous function $f$, and of the weight $2^{(\ell-1)/2}$ as an integer sample of $2^{(t-1)/2} = 2^{-1/2} e^{\frac{\ln 2}{2} t}$ for $t \in [0,N]$. Then,
$$2^{(k-1-N)/2}\Bigl[c_0 + \sum_{\ell=1}^{N-k} 2^{(\ell-1)/2} c_\ell\Bigr] \approx \frac{1}{\sqrt{2}}\, e^{\frac{\ln 2}{2}(t-N)} \int_0^{N-t} e^{\frac{\ln 2}{2} s} f(s)\, ds.$$
Note that the value at zero is weighted differently from the other values, a matter that we will return to when discussing errors between the continuous and discrete settings. In order to think of these sums as approaching some limit as $N \to \infty$ it is helpful to normalize and think of the samples as being taken on the fixed interval $[0,1]$. To do so one can set $s = Nu$, $ds = N\,du$ and $t = Nv$, so the integral can be rewritten as
$$\frac{N}{\sqrt{2}}\, e^{N(v-1)\frac{\ln 2}{2}} \int_{u=0}^{1-v} e^{N\frac{\ln 2}{2} u} f(Nu)\, du = \frac{N}{\sqrt{2}}\, e^{Q(v-1)} \int_{u=0}^{1-v} e^{Qu} f(Nu)\, du,$$
where $Q = \frac{N\ln 2}{2}$. Define now the continuous model operator $K = K_Q$ associated with $C_N$ by
$$(Kg)(v) = \frac{Q}{\ln 2}\, e^{Q(v-1)} \int_{u=0}^{1-v} e^{Qu} g(u)\, du, \qquad Q = \frac{N\ln 2}{2}. \tag{3}$$
Our goal is to show that the eigenfunctions of $K_Q$ are sinusoidal and then to use this observation to show that the integer (or $k/N$) samples of the sinusoidal eigenfunction with largest eigenvalue form approximate eigenvectors of the matrix $C_N = 2^{-N/2} D^{1/2} M_N D^{1/2}$. That the error in Theorem 1 is on the order of $1/N$ then follows heuristically from the entries of the eigenvectors being samples of the eigenfunctions of $K_Q$ at the points $k/N$, $k = 0, \dots, N$.

3. Spectrum of the Continuous Model

The kernel of $K$ is $\frac{Q}{\ln 2}\, e^{-Q} e^{Q(s+t)}\, \mathbf{1}_{s+t \le 1}(s,t)$, and $K$ is therefore a compact, symmetric operator on $L^2[0,1]$ with discrete spectrum. From (3),
$$\frac{d(Kf)(t)}{dt} = \frac{Q}{\ln 2}\Bigl[ Q e^{Q(t-1)} \int_0^{1-t} e^{Qu} f(u)\, du - e^{Q(t-1)} e^{Q(1-t)} f(1-t) \Bigr] = Q\,(Kf)(t) - \frac{Q}{\ln 2} f(1-t).$$
If $Kf = \lambda f$ then this can be expressed as
$$\lambda f'(t) = Q\lambda f(t) - \frac{Q}{\ln 2} f(1-t) \qquad\text{or}\qquad f'(t) = Q\Bigl[f(t) - \frac{1}{\ln 2}\frac{1}{\lambda} f(1-t)\Bigr].$$
Differentiating the last equation again and substituting for $f'$ gives
$$f''(t) = Q\Bigl(f'(t) + \frac{1}{\ln 2}\frac{1}{\lambda} f'(1-t)\Bigr) = Q^2\Bigl[ f(t) - \frac{1}{\ln 2}\frac{1}{\lambda} f(1-t) + \frac{1}{\ln 2}\frac{1}{\lambda}\Bigl( f(1-t) - \frac{1}{\ln 2}\frac{1}{\lambda} f(t)\Bigr)\Bigr] = Q^2 f(t)\Bigl[1 - \frac{1}{(\lambda\ln 2)^2}\Bigr].$$
Proposition 1.
If $f$ is an eigenfunction of $K$ on $(0,1)$ then $f$ satisfies
$$f''(t) = Q^2\Bigl(1 - \frac{1}{(\lambda\ln 2)^2}\Bigr) f(t), \qquad 0 < t < 1.$$
In particular, the eigenvalues $\lambda$ of $K$ and $-\alpha^2$ of the Laplacian are related by $\lambda = \frac{1}{\ln 2}\bigl(\frac{Q^2}{\alpha^2+Q^2}\bigr)^{1/2}$, so $\lambda < \frac{1}{\ln 2}$. The eigenfunctions $f$ of $K$ have the form
$$f(t) = a\cos(\alpha t) + b\sin(\alpha t), \qquad \alpha^2 = Q^2\Bigl(\frac{1}{(\lambda\ln 2)^2} - 1\Bigr), \tag{4}$$
where $\lambda$ is the corresponding eigenvalue of $K$. We wish to emphasize here that, for the bigger project of which this work is an important part, we are interested solely in the smallest positive value of $\alpha$ (largest value of $\lambda$) for which these relations hold when $N$ is fixed, although we will not necessarily mention this in what follows when relationships between values of $\alpha$ and $\lambda$ appear.
The integral structure of $K$ forces (continuous) eigenfunctions to vanish at $t = 1$ (the integral in (3) is over $[0, 1-t]$, so $(Kf)(1) = 0$). This implies that
$$a\cos\alpha + b\sin\alpha = 0, \qquad\text{or}\qquad \frac{a}{b} = -\tan\alpha.$$
Up to a normalizing factor, this is satisfied by taking $a = \sin\alpha$ and $b = -\cos\alpha$, and hence $f(t) = \sin(\alpha(1-t))$. Also, the equation $f'(t) = Q\bigl[f(t) - \frac{1}{\lambda\ln 2} f(1-t)\bigr]$ for $f(t) = \sin\alpha\cos(\alpha t) - \cos\alpha\sin(\alpha t)$ implies
$$f'(0) = -\alpha\cos\alpha = Q\sin\alpha \qquad\text{or}\qquad \frac{\alpha}{Q} = -\tan\alpha \tag{5}$$
$$f'(1) = -\alpha = -\frac{Q\sin\alpha}{\lambda\ln 2} \qquad\text{or}\qquad \lambda = \frac{\sin\alpha}{\alpha}\,\frac{Q}{\ln 2} = -\frac{\cos\alpha}{\ln 2}. \tag{6}$$
Corollary 1.
If $f(t) = a\cos(\alpha t) + b\sin(\alpha t)$ is an eigenfunction of $K$ in (3) then
$$\frac{a}{b} = -\tan\alpha = \frac{\alpha}{Q}$$
with $\alpha$ as in (4). Consequently, up to normalization, $f$ can be written as $f(t) = \sin(\alpha(1-t))$, $t \in [0,1]$. The corresponding eigenvalue $\lambda$ of $K$ is associated with $\alpha$ via (6).
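The corollary can be verified numerically. The sketch below uses illustrative values of our own choosing ($Q = 10$, evaluation point $v = 0.3$), finds the smallest eigen-parameter $\alpha$ by bisection, and checks via Simpson-rule quadrature that $f(t) = \sin(\alpha(1-t))$ satisfies $(Kf)(v) = \lambda f(v)$ with $\lambda = -\cos\alpha/\ln 2$:

```python
import math

Q = 10.0

def g(t):
    # sin(t) - t / sqrt(t^2 + Q^2): zero at the eigen-parameters alpha.
    return math.sin(t) - t / math.sqrt(t * t + Q * Q)

# The smallest positive solution lies in (pi/2, pi); bisect there.
lo, hi = math.pi / 2, math.pi
for _ in range(200):
    mid = (lo + hi) / 2
    if g(mid) > 0:
        lo = mid
    else:
        hi = mid
alpha = (lo + hi) / 2
lam = -math.cos(alpha) / math.log(2)  # eigenvalue, as in (6)

def f(t):
    return math.sin(alpha * (1 - t))

def Kf(v, n=2000):
    # (Kf)(v) = (Q/ln 2) e^{Q(v-1)} \int_0^{1-v} e^{Qu} f(u) du, Simpson's rule.
    a, b = 0.0, 1 - v
    h = (b - a) / n
    s = math.exp(Q * a) * f(a) + math.exp(Q * b) * f(b)
    for i in range(1, n):
        u = a + i * h
        s += (4 if i % 2 else 2) * math.exp(Q * u) * f(u)
    integral = s * h / 3
    return (Q / math.log(2)) * math.exp(Q * (v - 1)) * integral

v = 0.3
print(Kf(v), lam * f(v))  # these agree to high accuracy
```

The agreement is exact up to quadrature error because the boundary term $Q\sin\alpha + \alpha\cos\alpha$ vanishes precisely at the bisected root.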

Relations Between λ and α

We will freely invoke relationships (4)–(6), and the ones we mention presently, in what follows, sometimes without reference. The identity (4) implies that $\lambda = \frac{1}{\ln 2}\bigl(\frac{Q^2}{\alpha^2+Q^2}\bigr)^{1/2}$. Combining this with (6) allows elimination of $\lambda$ and identification of the values of $\alpha$ via $\cos\alpha = -\bigl(\frac{Q^2}{\alpha^2+Q^2}\bigr)^{1/2}$, which in turn implies $\cos^2\alpha = \frac{Q^2}{\alpha^2+Q^2}$ and $\sin^2\alpha = \frac{\alpha^2}{\alpha^2+Q^2}$. From (4)–(6) one also has $\lambda = -\frac{\sec\alpha}{\ln 2}\,\frac{Q^2}{\alpha^2+Q^2}$. Setting this equal to the value $\lambda = \frac{\sin\alpha}{\alpha}\frac{Q}{\ln 2}$ in (6) one obtains
$$\sin(2\alpha) = 2\sin\alpha\cos\alpha = 2\,\frac{\lambda\,\alpha\ln 2}{Q}\Bigl(-\frac{1}{\lambda\ln 2}\Bigr)\frac{Q^2}{\alpha^2+Q^2} = -\frac{2\alpha Q}{\alpha^2+Q^2}.$$
Starting from this identity one can derive the approximation of its smallest positive solution stated in Lemma 2. Because it requires less elementary techniques we delay the proof to Appendix A. We provide a numerical example below for illustration.
Lemma 2.
For $Q$ sufficiently large (e.g., $Q > 2$), the smallest positive solution $\alpha_0$ of $\sin\alpha = \bigl(\frac{\alpha^2}{\alpha^2+Q^2}\bigr)^{1/2}$ has the form
$$\alpha_0 = \pi\Bigl(1 - \frac{1}{Q} + \frac{1}{Q^2} + O\Bigl(\frac{1}{Q^3}\Bigr)\Bigr).$$
Example 1.
Standard root finding for transcendental equations follows the method just outlined: one approximates the function by a polynomial, then applies, for example, Newton’s method to find a root of the resulting equation. If one enters
solve sin(t) = sqrt (t^2/(t^2 + q^2)), q = 1000, 0 < t < 10
into Wolfram Alpha, the smallest nonzero solution produced is $t = 3.13845$. For this value, $1 - t/\pi \approx 9.98\times 10^{-4} \approx 10^{-3}$. Moreover, for this value one has $|1 - t/\pi - 10^{-3}| = 1.002\times 10^{-6}$ and $|1 - t/\pi - 10^{-3} + 10^{-6}| = 2.27\times 10^{-9}$. Similarly, taking $q = 10^4$, the positive value returned is $t = 3.141278$, where $1 - t/\pi \approx 9.998\times 10^{-5} \approx 10^{-4}$, while $|1 - t/\pi - 10^{-4}| = 1.0002\times 10^{-8}$ and $|1 - t/\pi - 10^{-4} + 10^{-8}| = 2.288\times 10^{-12}$. These cases are consistent with the statement that the smallest positive value of $\alpha$ solving $\sin\alpha = \bigl(\frac{\alpha^2}{\alpha^2+Q^2}\bigr)^{1/2}$ has the form $\alpha = \pi\bigl(1 - \frac{1}{Q} + \frac{1}{Q^2} + O\bigl(\frac{1}{Q^3}\bigr)\bigr)$ for these values of $Q$.
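The computation quoted above can be reproduced without Wolfram Alpha by a short bisection (a sketch in our own notation; the interval $(\pi/2, \pi)$ is where the smallest positive solution lies, by the appendix):

```python
import math

def smallest_root(Q):
    # Smallest positive solution of sin(t) = sqrt(t^2 / (t^2 + Q^2)),
    # found by bisecting g(t) = sin(t) - t / sqrt(t^2 + Q^2) on (pi/2, pi).
    g = lambda t: math.sin(t) - t / math.sqrt(t * t + Q * Q)
    lo, hi = math.pi / 2, math.pi
    for _ in range(200):
        mid = (lo + hi) / 2
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

t = smallest_root(1000.0)
print(round(t, 5))                         # matches the quoted root 3.13845
print(1 - t / math.pi)                     # approximately 1/Q - 1/Q^2
print(abs(1 - t / math.pi - 1e-3 + 1e-6))  # remainder of size O(1/Q^3)
```

The printed remainder agrees in order of magnitude with the $2.27\times 10^{-9}$ quoted in the example.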

4. Spectrum Approximation of C N

We show now that samples of the vectors $\sin(\alpha(1-\cdot/N))$, where $\alpha$ is the smallest positive value such that $\sin(\alpha(1-t))$ is an eigenfunction of $K = K_Q$, form approximate eigenvectors of $C_N$ defined by (1) ($Q = \frac{N\ln 2}{2}$). That is, we will show that there is a “discrete” approximate eigenvalue $\lambda_{\mathrm{discr}}$ such that the vectors $s = s^\alpha$ with entries $(s^\alpha)_k = \sin(\alpha(1-k/N))$, $k = 0, \dots, N$, form approximate eigenvectors of $C_N$ in the sense that $|(C_N s^\alpha)_k - \lambda_{\mathrm{discr}}(N)\,(s^\alpha)_k| \le C/N$ for some constant $C > 0$ independent of $N$. When $\alpha$ is the smallest positive solution of $\sin\alpha = \bigl(\frac{\alpha^2}{\alpha^2+Q^2}\bigr)^{1/2}$, which corresponds to the largest eigenvalue of $K$, the specific value that we provide for the approximate eigenvalue of $C_N$ is $\lambda_{\mathrm{discr}} = 1 + \frac{\sqrt{2}}{2}$, which is an upper bound for $\|C_N\|$ for any fixed $N = 2, 3, \dots$.
It will help to return to the simple observation that, with $v = Nt$ and $u = Ns$,
$$\int_0^{1-t} e^{Qs} f(s)\, ds = \int_0^{N-v} e^{Qu/N} f(u/N)\, \frac{du}{N}$$
so that
$$(Kf)\Bigl(\frac{v}{N}\Bigr) = \frac{Q}{\ln 2}\, e^{Q(\frac{v}{N}-1)} \int_0^{N-v} e^{Qu/N} f\Bigl(\frac{u}{N}\Bigr)\, \frac{du}{N} = \frac{1}{2}\, e^{\frac{\ln 2}{2}(v-N)} \int_0^{N-v} e^{\frac{\ln 2}{2}u} f\Bigl(\frac{u}{N}\Bigr)\, du.$$
When applied to the function $\sin(\alpha(1-u/N))$ one then has
$$\Bigl(K\sin\Bigl(\alpha\Bigl(1-\frac{\cdot}{N}\Bigr)\Bigr)\Bigr)\Bigl(\frac{k}{N}\Bigr) = \frac{1}{2}\, 2^{\frac{k-N}{2}} \sum_{\ell=0}^{N-k-1} 2^{\ell/2} \int_0^1 e^{\frac{\ln 2}{2}u} \sin\Bigl(\alpha\Bigl(1-\frac{\ell+u}{N}\Bigr)\Bigr)\, du. \tag{7}$$
To compute the integrals in the sum over the parameter $\ell$, one can use integration by parts (twice) to obtain the following.
Proposition 2.
The integrals in (7) satisfy
$$\Bigl(1 + \Bigl(\frac{N\ln 2}{2\alpha}\Bigr)^2\Bigr) \int_0^1 e^{\frac{\ln 2}{2}u} \sin(\alpha(1-(\ell+u)/N))\, du = \frac{N}{\alpha}\Bigl[e^{\frac{\ln 2}{2}u}\cos(\alpha(1-(\ell+u)/N))\Bigr]_0^1 + \frac{N^2}{\alpha^2}\,\frac{\ln 2}{2}\Bigl[e^{\frac{\ln 2}{2}u}\sin(\alpha(1-(\ell+u)/N))\Bigr]_0^1. \tag{8}$$
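For the reader’s convenience, the antiderivative behind Proposition 2 can be checked by differentiation. With the shorthand $a = \frac{\ln 2}{2}$, $c = \frac{\alpha}{N}$ and $\phi(u) = \alpha(1-(\ell+u)/N)$ (so $\phi'(u) = -c$), one has

```latex
\frac{d}{du}\left[\frac{e^{au}\bigl(a\,\sin\phi(u) + c\,\cos\phi(u)\bigr)}{a^{2}+c^{2}}\right]
  = \frac{e^{au}\bigl[a^{2}\sin\phi + ac\cos\phi - ac\cos\phi + c^{2}\sin\phi\bigr]}{a^{2}+c^{2}}
  = e^{au}\sin\phi(u),
```

and multiplying the evaluated antiderivative by $\frac{a^{2}+c^{2}}{c^{2}} = 1 + \bigl(\frac{a}{c}\bigr)^{2} = 1 + \bigl(\frac{N\ln 2}{2\alpha}\bigr)^{2}$ yields the identity (8).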
Proof of Theorem 1.
Continuing to write $Q = \frac{N\ln 2}{2}$, evaluating the terms on the right of the equation in Proposition 2 at the endpoints and dividing both sides by $1 + (Q/\alpha)^2$ allows us to write
$$\int_0^1 e^{\frac{\ln 2}{2}u}\sin(\alpha(1-(\ell+u)/N))\, du = \frac{\alpha N}{\alpha^2+Q^2}\Bigl[\sqrt{2}\cos\Bigl(\alpha\Bigl(1-\frac{\ell+1}{N}\Bigr)\Bigr) - \cos\Bigl(\alpha\Bigl(1-\frac{\ell}{N}\Bigr)\Bigr)\Bigr] + \frac{2}{\ln 2}\,\frac{Q^2}{\alpha^2+Q^2}\Bigl[\sqrt{2}\sin\Bigl(\alpha\Bigl(1-\frac{\ell+1}{N}\Bigr)\Bigr) - \sin\Bigl(\alpha\Bigl(1-\frac{\ell}{N}\Bigr)\Bigr)\Bigr] = C(\ell) + S(\ell)$$
where $C(\ell)$ denotes the cosine difference term and $S(\ell)$ the sine difference term. For the cosine difference terms, using $\frac{\alpha N}{\alpha^2+Q^2} = \frac{N\sin^2\alpha}{\alpha}$ and telescoping, we have
$$\sum_{\ell=0}^{N-k-1} 2^{\ell/2}\, C(\ell) = \frac{N\sin^2\alpha}{\alpha}\sum_{\ell=0}^{N-k-1}\Bigl[2^{\frac{\ell+1}{2}}\cos\Bigl(\alpha\Bigl(1-\frac{\ell+1}{N}\Bigr)\Bigr) - 2^{\frac{\ell}{2}}\cos\Bigl(\alpha\Bigl(1-\frac{\ell}{N}\Bigr)\Bigr)\Bigr] = \frac{N\sin^2\alpha}{\alpha}\Bigl[2^{\frac{N-k}{2}}\cos\Bigl(\alpha\,\frac{k}{N}\Bigr) - \cos(\alpha)\Bigr].$$
Denote by $\mathrm{EC}$ the “cosine error” equal to $2^{(k-N)/2}$ times this last expression, that is,
$$\mathrm{EC}(k) = \frac{N\sin^2\alpha}{\alpha}\Bigl[\cos\Bigl(\alpha\,\frac{k}{N}\Bigr) - 2^{\frac{k-N}{2}}\cos(\alpha)\Bigr].$$
By (6), $\sin\alpha/\alpha = 2\lambda/N$, hence $N\sin^2\alpha/\alpha = 4\alpha\lambda^2/N$. Then, since $|\cos(t)| \le 1$, we have
$$2^{\frac{N-k}{2}}\,|\mathrm{EC}(k)| = \Bigl|\sum_{\ell=0}^{N-k-1} 2^{\ell/2}\, C(\ell)\Bigr| \le 2^{\frac{N-k}{2}}\,(\sqrt{2}+1)\,\frac{4\alpha\lambda^2}{N}. \tag{9}$$
In addressing the terms $S(\ell)$, it will help to recall that the eigenvalue $\lambda$ of $K$ is $\lambda = -\frac{\sec\alpha}{\ln 2}\frac{Q^2}{\alpha^2+Q^2}$, from which it follows that the coefficient $\frac{2}{\ln 2}\frac{Q^2}{\alpha^2+Q^2}$ in $S(\ell)$ is equal to $-2\cos(\alpha)\lambda$. Although the terms $S(\ell)$ can also form a telescoping series, we treat those terms differently in order to compare to the sine-term outputs of the operator $K$. For this reason we write
$$S(\ell) = -2\lambda\cos\alpha\Bigl[\sqrt{2}\sin\Bigl(\alpha\Bigl(1-\frac{\ell+1}{N}\Bigr)\Bigr) - \sin\Bigl(\alpha\Bigl(1-\frac{\ell}{N}\Bigr)\Bigr)\Bigr] = -2\lambda\cos\alpha\Bigl[\sqrt{2}\Bigl(\sin\Bigl(\alpha\Bigl(1-\frac{\ell+1}{N}\Bigr)\Bigr) - \sin\Bigl(\alpha\Bigl(1-\frac{\ell}{N}\Bigr)\Bigr)\Bigr) + (\sqrt{2}-1)\sin\Bigl(\alpha\Bigl(1-\frac{\ell}{N}\Bigr)\Bigr)\Bigr] = -2\lambda\cos\alpha\Bigl[-\sqrt{2}\,\frac{\alpha}{N}\cos\Bigl(\alpha\Bigl(1-\frac{t_\ell}{N}\Bigr)\Bigr) + (\sqrt{2}-1)\sin\Bigl(\alpha\Bigl(1-\frac{\ell}{N}\Bigr)\Bigr)\Bigr] = SC(\ell) + SS(\ell)$$
where $t_\ell \in (\ell, \ell+1)$ is determined by the mean value theorem, and where $SC(\ell)$ is the “cosine” term and $SS(\ell)$ the corresponding “sine” term in the preceding line. Define $\mathrm{ESC}(k) = 2^{(k-N)/2}\sum_{\ell=0}^{N-k-1} 2^{\ell/2}\, SC(\ell)$. Since $|\cos(t)| \le 1$ it follows that for each $\ell$
$$|SC(\ell)| = \Bigl|2\sqrt{2}\,\lambda\cos\alpha\,\frac{\alpha}{N}\cos\Bigl(\alpha\Bigl(1-\frac{t_\ell}{N}\Bigr)\Bigr)\Bigr| \le \frac{2^{3/2}\lambda\alpha}{N}$$
so it follows that
$$2^{\frac{N-k}{2}}\,|\mathrm{ESC}(k)| \le \frac{2^{3/2}\lambda\alpha}{N}\sum_{\ell=0}^{N-k-1} 2^{\ell/2} = \frac{2^{3/2}\lambda\alpha}{N}\,\frac{2^{\frac{N-k}{2}}-1}{\sqrt{2}-1} \le 2^{\frac{N-k}{2}}\,(2+\sqrt{2})\,\frac{2\lambda\alpha}{N}. \tag{10}$$
As for the terms $SS(\ell)$, one has
$$\sum_{\ell=0}^{N-k-1} 2^{\ell/2}\, SS(\ell) = -\frac{2\lambda\cos\alpha}{\sqrt{2}+1}\sum_{\ell=0}^{N-k-1} 2^{\ell/2}\sin(\alpha(1-\ell/N)) = -\frac{2\lambda\cos\alpha}{\sqrt{2}+1}\Bigl[\sqrt{2}\Bigl(\sin\alpha + \sum_{\ell=1}^{N-k-1} 2^{\frac{\ell-1}{2}}\, s^\alpha_\ell\Bigr) - (\sqrt{2}-1)\sin\alpha\Bigr] = -\frac{2^{3/2}\lambda\cos\alpha}{\sqrt{2}+1}\Bigl[\sin\alpha + \sum_{\ell=1}^{N-k} 2^{\frac{\ell-1}{2}}\, s^\alpha_\ell - 2^{\frac{N-k-1}{2}}\, s^\alpha_{N-k} - \frac{\sqrt{2}-1}{\sqrt{2}}\sin\alpha\Bigr] = -\frac{2^{3/2}\lambda\cos\alpha}{\sqrt{2}+1}\Bigl[2^{\frac{N-k+1}{2}}(C_N s^\alpha)_k - 2^{\frac{N-k-1}{2}}\, s^\alpha_{N-k} - \frac{\sqrt{2}-1}{\sqrt{2}}\sin\alpha\Bigr]$$
where, as before, $s^\alpha = \{\sin(\alpha(1-\cdot/N))\}$. Now define $\mathrm{ESS}(k) = 2^{(k-N)/2}\,\frac{2\lambda\cos\alpha}{\sqrt{2}+1}\,(\sqrt{2}-1)\sin\alpha$, the contribution of the last term in the bracket. By Lemma 2, the smallest nonzero $\alpha$ corresponding to an eigenvalue of $K$, which is the value of interest here, satisfies $\alpha = \pi(1 - 1/Q + 1/Q^2 + O(1/Q^3))$. Because of this we have $0 < \sin(\alpha) < \frac{\pi}{Q} = \frac{2\pi}{N\ln 2}$ (cf. Lemma 4). Therefore, we have the bound
$$|\mathrm{ESS}(k)| \le 2^{\frac{k-N}{2}}\,\frac{4(\sqrt{2}-1)}{\sqrt{2}+1}\,\lambda\,|\cos\alpha|\,\frac{\pi}{N\ln 2}. \tag{11}$$
Assembling all of these pieces, one has
$$\Bigl(K\sin\Bigl(\alpha\Bigl(1-\frac{\cdot}{N}\Bigr)\Bigr)\Bigr)\Bigl(\frac{k}{N}\Bigr) = \frac{1}{2}\, 2^{\frac{k-N}{2}}\sum_{\ell=0}^{N-k-1} 2^{\ell/2}\int_0^1 e^{\frac{\ln 2}{2}u}\sin\Bigl(\alpha\Bigl(1-\frac{\ell+u}{N}\Bigr)\Bigr) du = \frac{1}{2}\, 2^{\frac{k-N}{2}}\sum_{\ell=0}^{N-k-1} 2^{\ell/2}\bigl(C(\ell)+S(\ell)\bigr) = \frac{1}{2}\bigl[\mathrm{EC}(k) + \mathrm{ESC}(k) + \mathrm{ESS}(k)\bigr] - \frac{2^{3/2}\lambda\cos\alpha}{2(\sqrt{2}+1)}\, 2^{\frac{k-N}{2}}\Bigl[2^{\frac{N-k+1}{2}}(C_N s^\alpha)_k - 2^{\frac{N-k-1}{2}}\, s^\alpha_{N-k}\Bigr] = \frac{1}{2}\bigl[\mathrm{EC}(k) + \mathrm{ESC}(k) + \mathrm{ESS}(k)\bigr] + \frac{\lambda(-\cos\alpha)}{\sqrt{2}+1}\Bigl[2(C_N s^\alpha)_k - s^\alpha_{N-k}\Bigr]$$
where the terms $\mathrm{EC}(k)$, $\mathrm{ESC}(k)$, and $\mathrm{ESS}(k)$ are bounded by multiples of $1/N$ according to (9), (10), and (11), respectively.
Since $\bigl(K\sin(\alpha(1-\cdot/N))\bigr)(k/N) = \lambda\sin(\alpha(1-k/N))$ and the terms $\mathrm{EC}(k)$, $\mathrm{ESC}(k)$, and $\mathrm{ESS}(k)$ are $O(1/N)$, the relationship just derived can be written
$$\lambda\, s^\alpha_k = O\Bigl(\frac{1}{N}\Bigr) + \frac{\lambda(-\cos\alpha)}{\sqrt{2}+1}\Bigl[2(C_N s^\alpha)_k - s^\alpha_{N-k}\Bigr].$$
For $\lambda \ne 0$, canceling $\lambda$ from both sides and multiplying by $\frac{\sqrt{2}+1}{-\cos\alpha}$, this becomes
$$2(C_N s^\alpha)_k - s^\alpha_{N-k} = (-\sec\alpha)(\sqrt{2}+1)\, s^\alpha_k + O\Bigl(\frac{1}{N}\Bigr)$$
or
$$(C_N s^\alpha)_k = \frac{1}{2}\Bigl[(-\sec\alpha)(\sqrt{2}+1)\, s^\alpha_k + s^\alpha_{N-k}\Bigr] + O\Bigl(\frac{1}{N}\Bigr).$$
We will see momentarily (see Lemmas 3 and 4) that for the smallest $\alpha > 0$ satisfying (4) one has $s^\alpha_{N-k} = (-\cos\alpha)\, s^\alpha_k + O(1/N)$. With this additional estimate, finally, one has
$$(C_N s^\alpha)_k = \frac{1}{2}\bigl[(-\sec\alpha)(\sqrt{2}+1) + (-\cos\alpha)\bigr]\, s^\alpha_k + O\Bigl(\frac{1}{N}\Bigr) = \Bigl(1+\frac{\sqrt{2}}{2}\Bigr) s^\alpha_k + O\Bigl(\frac{1}{N}\Bigr)$$
where we have used that for $\pi - \alpha = \pi(1/Q - 1/Q^2 + O(1/Q^3))$ one has $-\sec\alpha = 1 + \frac{\pi^2}{2Q^2} + O(Q^{-4})$ while $-\cos\alpha = 1 - \frac{\pi^2}{2Q^2} + O(Q^{-4})$. This completes the proof of Theorem 1, pending Lemmas 3 and 4 below.    □
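The conclusion of Theorem 1 can be probed numerically. The sketch below (parameter choices $N = 32$ and $N = 128$ are ours) builds $C_N$, finds the smallest positive $\alpha$ by bisection, and measures the uniform residual $\max_k |(C_N s^\alpha)_k - (1+\frac{\sqrt{2}}{2})(s^\alpha)_k|$, which should shrink roughly like $1/N$:

```python
import math

def build_C(N):
    # C_N = 2^{-N/2} D^{1/2} M_N D^{1/2} from (1).
    d = [1.0] + [2.0 ** (i - 1) for i in range(1, N + 1)]
    sd = [math.sqrt(x) for x in d]
    scale = 2.0 ** (-N / 2)
    return [[scale * sd[i] * sd[j] if i + j <= N else 0.0
             for j in range(N + 1)] for i in range(N + 1)]

def smallest_alpha(Q):
    # Smallest positive solution of sin(t) = t / sqrt(t^2 + Q^2) in (pi/2, pi).
    g = lambda t: math.sin(t) - t / math.sqrt(t * t + Q * Q)
    lo, hi = math.pi / 2, math.pi
    for _ in range(100):
        mid = (lo + hi) / 2
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def residual(N):
    Q = N * math.log(2) / 2
    alpha = smallest_alpha(Q)
    s = [math.sin(alpha * (1 - k / N)) for k in range(N + 1)]
    C = build_C(N)
    lam = 1 + math.sqrt(2) / 2
    Cs = [sum(C[i][j] * s[j] for j in range(N + 1)) for i in range(N + 1)]
    return max(abs(Cs[k] - lam * s[k]) for k in range(N + 1))

e32, e128 = residual(32), residual(128)
print(e32, e128)  # the uniform residual decreases as N grows
```

The observed decay rate is only a consistency check, not a proof; the rigorous rate is the $C/N$ bound established above.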
Remark 2.
As observed in Figure 2b, the norms of $C_N$ are increasing with $N$ and bounded above by $1+\frac{\sqrt{2}}{2}$. Further numerical analysis suggests that $1+\frac{\sqrt{2}}{2} - \|C_N\|$ decays like $1/N^2$, just as $\frac{1}{\ln 2} - \|K_Q\|$ decays like $1/Q^2$. While the estimates established here will suffice for bounds on sums of Walsh–Fourier series to be established elsewhere, identifying, for fixed $N$, a more precise approximation of $\|C_N\|$ that is strictly smaller than the value $1+\frac{\sqrt{2}}{2}$ identified by Theorem 1 would require a more detailed analysis of each of the error terms $\mathrm{EC}$, $\mathrm{ESC}$, and $\mathrm{ESS}$.
Lemma 3.
When $\alpha = \pi\bigl(1 - \frac{1}{Q} + O(1/Q^2)\bigr)$ ($Q = N\ln 2/2$) one has
$$\sin(\alpha k/N) = -\cos\alpha\,\sin(\alpha(1-k/N)) + O(1/N).$$
Proof. 
$$\sin(\alpha k/N) = \sin(\pi - \alpha k/N) = \sin\bigl((\pi-\alpha) + \alpha(1-k/N)\bigr) = \sin(\pi-\alpha)\cos(\alpha(1-k/N)) + \cos(\pi-\alpha)\sin(\alpha(1-k/N)) = \Bigl(\frac{\pi}{Q} + O\Bigl(\frac{1}{Q^2}\Bigr)\Bigr)\cos(\alpha(1-k/N)) - \cos\alpha\,\sin(\alpha(1-k/N))$$
where we have used the Taylor approximation for sine after Lemma 2. We observe also that $\cos(\pi-\alpha) = 1 - \frac{\pi^2}{2Q^2} + O(1/Q^4)$, which allows us to conclude also that $\sin(\alpha k/N) - \sin(\alpha(1-k/N)) = O(1/N)$.    □
Lemma 4.
When $\alpha = \pi\bigl(1 - \frac{1}{Q} + O(1/Q^2)\bigr)$ ($Q = N\ln 2/2$) one has $\sin(\alpha) = \frac{2\pi}{N\ln 2} + O(1/N^2)$.
This lemma follows immediately from the Taylor approximation of sin t .

5. Plots

Figure 3 plots the eigenvectors of $C_N$ corresponding to the largest eigenvalue $\|C_N\|$ for $N = 32$, $180$ and $1000$, the last value being close to the maximum size that MATLAB (R2025b) can handle, due to entries of $C_N$ being below machine precision beyond this range. Note that 180 is close to being the geometric mean of 32 and 1000. The eigenvectors of $C_N$ are plotted as solid curves. In comparison, the dashed plots are the curves $\sin(\gamma(1-k/N))$, where $\gamma = \pi(1 - 1/Q + 1/Q^2)$ is an approximation of the value $\alpha$ defined by (4) for the corresponding eigenvalue; see Lemma 2. The curves near the bottom of each plot are the differences between the corresponding (normalized) eigenvector and $\sin(\gamma(1-k/N))$ curves. These error curves are themselves approximately sinusoidal. This suggests that the norm-attaining eigenvectors of $C_N$ are themselves approximately sinusoidal (except in the zeroth coordinate) and that the error comes mainly from the approximation of the value $\alpha$ itself. The $\ell^2$-norms of these errors are 0.1475 ($N = 32$), 0.0344 ($N = 180$) and 0.0065 ($N = 1000$), respectively.

6. Conclusions

We have proved that the vectors $s^\alpha = \{\sin(\alpha(1-k/N))\}_{k=0}^N$ form approximate eigenvectors of $C_N$ with approximate eigenvalue $1+\frac{\sqrt{2}}{2}$ when $\alpha$ is the smallest nonzero solution of $\sin\alpha = \bigl(\frac{\alpha^2}{\alpha^2+Q^2}\bigr)^{1/2}$. The methods do not strictly prove that $1+\frac{\sqrt{2}}{2}$ is an upper bound on the eigenvalues of $C_N$, but this can be confirmed by running the arguments backwards. That is, starting with a sequence of norm-maximizing eigenvectors of $C_N$, one can work back to showing that these eigenvectors approximately correspond to samples of eigenfunctions of the continuous model operator $K$, which are sinusoidal. That the norm of $C_N$ is bounded by $1+\frac{\sqrt{2}}{2}$ has been verified numerically and directly for computable values of $N$, and the numerical norm-maximizing vectors are evidently sinusoidal (with the exception of values at the zeroth coordinate, where a correction term has to be introduced due to the boundary weighting). The estimates presented here, using elementary techniques from calculus and a bit of linear algebra, form an important component of a study of convergence of Fourier series in the setting of Walsh functions on $L^2[0,1]$.

Author Contributions

Conceptualization, J.D.L. and J.A.H.; validation, J.D.L. and J.A.H.; writing—original draft preparation, J.D.L.; writing—review and editing, J.D.L. and J.A.H.; visualization, J.D.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Proof of Lemma 2

Proof of Lemma 2.
The eigenvalues $\lambda$ of $K_Q$ form a discrete set corresponding, for fixed $Q$, to the positive solutions $\alpha$ of $\sin(\alpha) = (\alpha^2/(\alpha^2+Q^2))^{1/2}$, which correspond to (simple) eigenvalues of the Laplacian (i.e., $\frac{d^2}{dx^2}$) on $(0,1)$. These solutions $\alpha$ form a discrete set of “$t$” coordinates of points of intersection of the graphs of $\sin t$, which has period $2\pi$, and of $(t^2/(t^2+Q^2))^{1/2}$, which is concave down on $(0,\infty)$ and approaches one as $|t| \to \infty$. For $k = 1, 2, \dots$ these intersections occur for $t$ near $(2k+\frac{1}{2})\pi$, once on each side of $(2k+\frac{1}{2})\pi$, and the $t$-coordinates of the crossings get closer to $(2k+\frac{1}{2})\pi$ as $k \to \infty$. By Jordan’s inequality one has $\sin t \ge 2t/\pi$ on $[0, \pi/2]$. This implies that for $Q > \pi/2$ one has $\sin t > (t^2/(t^2+Q^2))^{1/2}$ on $(0, \pi/2]$. For such $Q$ the smallest positive solution $\alpha_0(Q)$ of $\sin\alpha = (\alpha^2/(\alpha^2+Q^2))^{1/2}$ thus satisfies $\pi/2 < \alpha_0(Q) < \pi$. The kernel $e^{Q(t-1)}e^{Qs}\,\mathbf{1}_{s+t\le 1}$ extends analytically in the parameter $Q$, which implies that the operators $K_Q$ extend to a (weakly) analytic family of operators whose (distinct, simple) eigenvalues depend analytically on $Q$, and therefore also analytically on $1/Q$ for $|Q| > 1$, say, e.g., [34]. By (4), $\alpha = \frac{Q}{\lambda\ln 2}\sqrt{1 - (\lambda\ln 2)^2}$ and $\lambda < 1/\ln 2$. It follows that $\alpha_0 = \alpha_0(Q)$ also can be expressed as an analytic function of $1/Q$ when $Q$ is large. Since $\alpha_0 > \pi/2$ we can write $\alpha_0 = \pi(1-\beta)$ where $\beta < 1/2$ depends analytically on $1/Q$ (for $Q > \pi/2$) and thus can be written $\beta(Q) = c_0 + c_1/Q + c_2/Q^2 + O(1/Q^3)$ ($|Q| > \pi/2$). Using periodicity of $\sin t$ we can then expand the first few terms of the identity
$$\sin(2\pi\beta) = -\sin(2\alpha_0) = \frac{2\alpha_0}{Q}\sum_{k=0}^{\infty}(-1)^k\Bigl(\frac{\alpha_0}{Q}\Bigr)^{2k}$$
as
$$2\pi\beta - \frac{(2\pi\beta)^3}{3!} + \cdots = \frac{2\pi(1-\beta)}{Q}\Bigl[1 - \Bigl(\frac{\pi(1-\beta)}{Q}\Bigr)^2 + \cdots\Bigr].$$
When the right-hand side above is expanded in powers of $1/Q$ there is no constant term. This leads one to conclude that in the expansion $\beta = c_0 + c_1/Q + c_2/Q^2 + O(1/Q^3)$ one must have $c_0 = 0$. Likewise, setting the coefficients of $1/Q$ equal on both sides leads to $c_1 = 1$; then setting the coefficients of $1/Q^2$ equal on both sides yields $2\pi c_2 = -2\pi c_1$, or $c_2 = -1$. Altogether this yields $\beta = \frac{1}{Q} - \frac{1}{Q^2} + O(1/Q^3)$ and therefore $\alpha_0(Q) = \pi\bigl(1 - \frac{1}{Q} + \frac{1}{Q^2} + O\bigl(\frac{1}{Q^3}\bigr)\bigr)$. This proves the lemma.□

References

  1. Lakey, J.D. Towards direct L2-bounds for maximal partial sums of Walsh-Fourier series: The case of dyadic partial sums. arXiv 2026. [Google Scholar] [CrossRef]
  2. Carleson, L. On convergence and growth of partial sums of Fourier series. Acta Math. 1966, 116, 135–157. [Google Scholar] [CrossRef]
  3. Hunt, R.A. On the convergence of Fourier series. In Orthogonal Expansions and Their Continuous Analogues (Proc. Conf., Edwardsville, Ill., 1967); Southern Illinois Univ. Press: Carbondale, IL, USA, 1968; pp. 235–255. [Google Scholar]
  4. Fefferman, C. Pointwise convergence of Fourier series. Ann. Math. 1973, 98, 551–571. [Google Scholar] [CrossRef]
  5. Arias de Reyna, J. Pointwise Convergence of Fourier Series; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 2002; Volume 1785. [Google Scholar]
  6. De Souza, G.S. On the convergence of Fourier series. Internat. J. Math. Math. Sci. 1984, 7, 817–820. [Google Scholar] [CrossRef]
  7. Jørsboe, O.G.; Mejlbro, L. The Carleson-Hunt Theorem on Fourier Series; Lecture Notes in Mathematics; Springer: Berlin, Germany; New York, NY, USA, 1982; Volume 911. [Google Scholar]
  8. Kolmogoroff, A. Une contribution à l’étude de la convergence des séries de Fourier. Fund. Math. 1924, 5, 96–97. [Google Scholar] [CrossRef]
  9. Lacey, M.; Thiele, C. A proof of boundedness of the Carleson operator. Math. Res. Lett. 2000, 7, 361–370. [Google Scholar] [CrossRef]
  10. Lacey, M. Carleson’s theorem: Proof, complements, variations. Publ. Mat. 2004, 48, 251–307. [Google Scholar] [CrossRef][Green Version]
  11. Mastyło, M.; Rodríguez-Piazza, L. Convergence almost everywhere of multiple Fourier series over cubes. Trans. Am. Math. Soc. 2018, 370, 1629–1659. [Google Scholar] [CrossRef]
  12. Mozzochi, C.J. On the Pointwise Convergence of Fourier Series; Lecture Notes in Mathematics; Springer: Berlin, Germany; New York, NY, USA, 1971; Volume 199. [Google Scholar]
  13. Billard, P. Sur la convergence presque partout des séries de Fourier-Walsh des fonctions de l’espace L2(0,1). Studia Math. 1967, 28, 363–388. [Google Scholar] [CrossRef]
  14. Tateoka, J. Almost-everywhere convergence of Walsh-Fourier series. Proc. Jpn. Acad. 1968, 44, 647–650. [Google Scholar] [CrossRef]
  15. Gosselin, J. On the convergence of Walsh-Fourier series for L2(0,1). Adv. Math. Suppl. Stud. 1979, 4, 223–232. [Google Scholar]
  16. Hunt, R.A. Almost everywhere convergence of Walsh-Fourier series of L2 functions. In Actes du Congrès International des Mathématiciens (Nice, 1970), Tome 2; Gauthier-Villars Éditeur: Paris, France, 1971; pp. 655–661. [Google Scholar]
  17. Muscalu, C.; Tao, T.; Thiele, C. Lp estimates for the biest. I. The Walsh case. Math. Ann. 2004, 329, 401–426. [Google Scholar] [CrossRef]
  18. Thiele, C. The quartile operator and pointwise convergence of Walsh series. Trans. Am. Math. Soc. 2000, 352, 5745–5766. [Google Scholar] [CrossRef][Green Version]
  19. Areshidze, N.; Persson, L.E.; Tephnadze, G. Convergence almost everywhere of partial sums and Féjer means of Vilenkin-Fourier series. Mediterr. J. Math. 2025, 22, 15. [Google Scholar] [CrossRef]
  20. Gát, G. On almost everywhere convergence of Fourier series on unbounded Vilenkin groups. Publ. Math. Debrecen 2009, 75, 85–94. [Google Scholar] [CrossRef]
  21. Gosselin, J. Almost everywhere convergence of Vilenkin-Fourier series. Trans. Am. Math. Soc. 1973, 185, 345–370. [Google Scholar] [CrossRef]
  22. Chen, C.P.; Lin, C.C. Almost everywhere convergence of Laguerre series. Stud. Math. 1994, 109, 291–301. [Google Scholar]
  23. Badkov, V.M. Convergence in the mean and almost everywhere of Fourier series in polynomials that are orthogonal on an interval. Math. USSR-Sb. 1974, 95, 229–262, 327. [Google Scholar] [CrossRef]
  24. Kita, H. Almost everywhere convergence of orthogonal series. Acta Math. Hungar. 1985, 46, 73–80. [Google Scholar] [CrossRef]
  25. Guadalupe, J.J.; Pérez, M.; Ruiz, F.J.; Varona, J.L. Two notes on convergence and divergence a.e. of Fourier series with respect to some orthogonal systems. Proc. Am. Math. Soc. 1992, 116, 457–464. [Google Scholar] [CrossRef][Green Version]
  26. Móricz, F.; Tandori, K. Almost everywhere convergence of orthogonal series revisited. J. Math. Anal. Appl. 1994, 182, 637–653. [Google Scholar] [CrossRef][Green Version]
  27. Bailey, A.D. Pointwise convergence of lacunary partial sums of almost periodic Fourier series. Proc. Am. Math. Soc. 2014, 142, 1757–1771. [Google Scholar] [CrossRef]
  28. Antonov, N.Y. On the almost everywhere convergence of lacunary sequences of multiple rectangular Fourier sums. Tr. Inst. Mat. Mekh. 2015, 21, 30–45. [Google Scholar] [CrossRef]
  29. Goginava, U.; Oniani, G. On the almost everywhere convergence of multiple Fourier series of square summable functions. Publ. Math. Debrecen 2022, 97, 313–320. [Google Scholar] [CrossRef]
  30. Lie, V. Pointwise convergence of Fourier series (I). On a conjecture of Konyagin. J. Eur. Math. Soc. (JEMS) 2017, 19, 1655–1728. [Google Scholar] [CrossRef]
  31. Di Plinio, F. Lacunary Fourier and Walsh-Fourier series near L1. Collect. Math. 2015, 6, 219–232. [Google Scholar]
  32. Ayman-Mursaleen, M. Approximation and Convergence Analysis of Blending-Type q- Baskakov Operators Using Wavelet Transformations. Filomat 2025, 39, 11117–11130. [Google Scholar]
  33. Bhat, A.A.; Khan, A.; Iliyas, M.; Khan, K.; Mursaleen, M. A blending approach to transfinite interpolation on compact disks using neural network operators. Iran. J. Sci. 2025, 1–11. [Google Scholar] [CrossRef]
  34. Kato, T. Perturbation Theory for Linear Operators, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 1995. [Google Scholar]
Figure 1. (a) Walsh–Hadamard matrix W H N of size 32 × 32 ( N = 5 ). (b) T W N matrix of size 32 × 32 . (c) Column truncation length = 2 r where r { 0 , , 5 } is uniformly randomly generated.
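The construction in Figure 1 can be sketched in a few lines of numpy. This is a hypothetical illustration, not the paper's code: it builds a ±1 Hadamard matrix in Sylvester (natural) ordering, whereas the paper's Walsh–Hadamard matrix may use sequency or Paley ordering, and it applies the column truncation described in the caption, zeroing each column below a column-dependent row, with truncation length 2^r for a uniformly random r ∈ {0, …, N}:

```python
import numpy as np

def hadamard(N):
    """Hadamard matrix of size 2**N via the Sylvester recursion
    (ordering is an assumption; the paper may reorder rows by sequency)."""
    H = np.array([[1.0]])
    for _ in range(N):
        H = np.block([[H, H], [H, -H]])
    return H

def truncate_columns(H, lengths):
    """In each column j, replace all entries from row lengths[j] down by zero."""
    T = H.copy()
    for j, L in enumerate(lengths):
        T[L:, j] = 0.0
    return T

rng = np.random.default_rng(0)
N = 5
H = hadamard(N)                                    # 32 x 32, entries +-1
lengths = 2 ** rng.integers(0, N + 1, size=2**N)   # 2**r with r uniform in {0,...,5}
T = truncate_columns(H, lengths)
```

A length of 2^N leaves the column untouched, so the truncated matrix interpolates between the full orthogonal matrix and heavily compressed columns.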
Figure 2. (a) The matrix C N ( N = 5 ). (b) Norm of C N for N = 1 , , 100 .
Figure 3. Plots of C N -eigenvector, approximating sinusoid, and error. (a) N = 32 . (b) N = 180 . (c) N = 1000 .