1. Introduction
The aim of this paper is to give a concrete estimate of the error incurred when approximating a function f in the root mean square by a partial sum of its Hermite series. The source of the estimate, the fact that the estimate is numerical rather than just an order of magnitude, and the centrality of the so-called Dirichlet operator to the partial sum are all discussed following the statement below of our principal result.
Hermite series, often under the name Gram–Charlier series of Type A, are used to approximate probability density functions, which are key to statistical inference [1], in such fields as: visual image processing in engineering [2], reliability ([3], Chapter 3), econometrics [4], quantile regression [5], big data in nonparametric statistics ([6], Chapters 1 and 2) and machine learning in artificial intelligence ([7], Chapter 5). In Section 5 of this paper, we apply our estimates to the Hermite series approximation of a trimodal density function from [8] (pp. 176–178); see also [9].
Again, as observed in [10] and the references therein, Hermite functions are the exact unperturbed eigenfunctions or the limiting asymptotic eigenfunctions (Mathieu functions, prolate spheroidal wave functions) for many problems of physical interest. Thus, Hermite series arise, for example, in the study of the harmonic oscillator in quantum mechanics and of equatorial waves in dynamic meteorology and oceanography.
We recall that the k-th Hermite function, $h_k$, is given at $t \in \mathbb{R}$ by:
$$h_k(t) = \big(2^k\, k!\, \sqrt{\pi}\big)^{-1/2}\, e^{-t^2/2}\, H_k(t),$$
where $H_k$ is the k-th Hermite polynomial. Given $f \in L^2(\mathbb{R})$, its Hermite series is:
$$\sum_{k=0}^{\infty} a_k h_k,$$
in which:
$$a_k = \int_{-\infty}^{\infty} f(t)\, h_k(t)\, dt.$$
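As a numerical illustration, the Hermite functions and the partial sums of a Hermite series can be sketched as follows. This is a minimal sketch assuming the standard orthonormal normalization $h_k(t) = (2^k k! \sqrt{\pi})^{-1/2} H_k(t) e^{-t^2/2}$; the grid and degrees are illustrative choices, not taken from the paper.

```python
import numpy as np

def hermite_functions(K, t):
    # Orthonormal Hermite functions h_0, ..., h_K on the grid t, via the
    # stable recurrence h_{k+1} = sqrt(2/(k+1)) t h_k - sqrt(k/(k+1)) h_{k-1}.
    h = np.zeros((K + 1, t.size))
    h[0] = np.pi ** -0.25 * np.exp(-t**2 / 2)
    if K >= 1:
        h[1] = np.sqrt(2.0) * t * h[0]
    for k in range(1, K):
        h[k + 1] = np.sqrt(2.0 / (k + 1)) * t * h[k] - np.sqrt(k / (k + 1)) * h[k - 1]
    return h

def hermite_partial_sum(f_vals, t, K):
    # K-th partial sum of the Hermite series of f, with coefficients
    # a_k = integral of f * h_k approximated on the uniform grid t.
    dt = t[1] - t[0]
    h = hermite_functions(K, t)
    a = (h * f_vals).sum(axis=1) * dt
    return a @ h

# Orthonormality check on a wide grid.
t = np.linspace(-20, 20, 4001)
dt = t[1] - t[0]
h = hermite_functions(5, t)
print((h[3] * h[3]).sum() * dt)  # ≈ 1
print((h[2] * h[4]).sum() * dt)  # ≈ 0
```

By construction, the partial-sum operator reproduces any Hermite function of degree at most K, which gives a quick self-test of the implementation.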
Our principal result is:
Theorem 1. Consider a band-limited function . Fix , and suppose exists and is integrable on .
Then, with , the K-th partial sum of the Hermite series of f, and: one has: in which . We work with an expression for
, due to G. Sansone [11], which is a refinement of the one employed by J.V. Uspensky in [12] to prove his classical convergence theorems for Hermite series (the treatment of is similar).
As seen from Formulas (2) and (5) below, the core of
is the Dirichlet operator:
in which and . We have cut the time content of f outside and the frequency content greater than N, so that our assertion about the Dirichlet operator requires f to be almost time-limited and almost band-limited.
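The band-limiting action of the Dirichlet operator can be checked numerically: convolution with its kernel should (approximately) remove frequency content above N. The sketch below assumes the kernel takes the standard form sin(N(t − s))/(π(t − s)); the test signal, grid, and tolerance are illustrative.

```python
import numpy as np

def dirichlet_operator(f_vals, t, N):
    # (D_N f)(t) = integral of f(s) sin(N(t-s)) / (pi (t-s)) ds, discretized
    # on the uniform grid t.  Since np.sinc(x) = sin(pi x)/(pi x), the kernel
    # is (N/pi) * sinc(N x / pi), which is finite at x = 0.
    dt = t[1] - t[0]
    kernel = (N / np.pi) * np.sinc(N * t / np.pi)
    return np.convolve(f_vals, kernel, mode="same") * dt

t = np.linspace(-15, 15, 3001)
# Gaussian envelope with spectral content concentrated near frequencies 1 and 8.
f = np.exp(-t**2 / 2) * (np.cos(t) + np.cos(8 * t))
Df = dirichlet_operator(f, t, N=4)
# Up to Gaussian spectral tails, the part of f below frequency 4:
low = np.exp(-t**2 / 2) * np.cos(t)
print(np.max(np.abs(Df - low)))  # small: the cos(8t) component is removed
```

The residual error comes from truncating the slowly decaying kernel to a finite grid and from the Gaussian spectral tails, so the agreement is approximate rather than exact.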
Moreover, the estimates we give in Section 3 of the terms other than:
in the estimate of in Formula (5) below in Section 4 quantify how far , or rather , is from zero in the root mean square on . The application of these estimates in our example is straightforward (though somewhat tedious) given the material in Appendix A. We emphasize that the estimates are in numbers and not in orders of magnitude.
A key fact, used repeatedly in the derivation of our estimates, is that the Hilbert transform, H, given, for suitable f, at almost all by:
is a unitary operator on . Also important will be certain identities of Bedrosian, valid for band-limited functions , namely,
and:
for fixed . See [13].
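Both facts are easy to verify numerically with an FFT-based Hilbert transform, which applies the Fourier multiplier −i sign(ω). The sketch below checks unitarity and a Bedrosian-type identity H(g cos st) = g sin st for an envelope g whose band lies well below s; the envelope and the value s = 6 are illustrative choices.

```python
import numpy as np

def hilbert_transform(f_vals):
    # Discrete Hilbert transform via the Fourier multiplier -i * sign(omega).
    F = np.fft.fft(f_vals)
    F *= -1j * np.sign(np.fft.fftfreq(f_vals.size))
    return np.fft.ifft(F).real

t = np.linspace(-20, 20, 4096, endpoint=False)
g = np.exp(-t**2 / 2)          # (numerically) band-limited envelope
s = 6.0
f = g * np.cos(s * t)
Hf = hilbert_transform(f)

# Unitarity: ||Hf||_2 = ||f||_2 (up to the negligible mean component).
print((Hf**2).sum(), (f**2).sum())

# Bedrosian-type identity: H(g cos st) = g sin st.
print(np.max(np.abs(Hf - g * np.sin(s * t))))  # tiny
```

The periodic FFT is a faithful stand-in for the Hilbert transform on the line here because the Gaussian envelope is negligible at the grid boundary.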
The error involved in approximating f by is established in Lemma 1 in the next section. The Sansone estimates are intensively studied in Section 3. These enable the proof of Theorem 1 in the following section, where is defined. An explicit estimate of is described in Appendix A.

The estimate of the root mean square error in (1) is both more specific and more easily calculated than the one in the paper [9] of the first two authors and M. Brannan. In the final section, we revisit the trimodal distribution studied in that paper.
3. The Sansone Estimates
To begin, we describe Sansone’s analysis of the usual expression for , when K is even, say . For ease of reference to [12], we work with the variables x and α, rather than t and s.
Now, according to [
11] (p. 372, (4) and (5)),
where:
and
. Further, by (7) and (8) on p. 373 and the first two estimates on p. 374, together with (15.1) and (15.2) on p. 362, one has:
in which:
and:
The functions
and
are defined through the equations:
and:
Expanding the products in (3) yields:
where, firstly,
as shown on p. 375 of [
11]. Again, on p. 376, we find:
An argument similar to the one for
gives:
Let us note that the expression for
on pp. 345–372 of [
11] is incorrect.
To prove Theorem 1, we will require the following estimates of integrals involving terms on the right-hand side of (2).
3.1. Estimate of
Next,
and, by Bedrosian’s identity,
where
.
A similar result holds with replaced by .
3.2. Estimate of
3.3. Estimate for
Now,
here is, essentially, the same as the
involved in
, and we find:
Finally, with
,
Observe that, by Bedrosian’s identity,
where
. A similar result holds with
replaced by
. Thus,
since:
Expanding the integrand on the right-hand side of the last inequality, we find:
3.4. Estimate of
3.5. Estimate of
The expression:
is dominated by the sum of five terms, which we now consider in turn.
- (i)
The term
is no bigger than:
in which:
- (ii)
Arguing as in (i), we have:
- (iii)
The mean square on
of:
is dominated by:
- (iv)
The method of (ii) applied to the estimation of the mean square on of:
leads to the upper bound:
- (v)
The mean square, on , of:
is, by a now familiar argument,
5. An Example
Example 1 of [
9] involved the Hermite series approximation of the trimodal density function:
in which:
is the standard normal density.
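Since the exact centers and weights of the trimodal mixture in [9] are not reproduced above, the sketch below uses a hypothetical equal-weight mixture of standard normals centered at −3, 0 and 3 as a stand-in, and measures the root mean square error of a Hermite partial sum numerically; the degree K = 40 and the grid are likewise illustrative.

```python
import numpy as np

def hermite_functions(K, t):
    # Orthonormal Hermite functions via the standard three-term recurrence.
    h = np.zeros((K + 1, t.size))
    h[0] = np.pi ** -0.25 * np.exp(-t**2 / 2)
    if K >= 1:
        h[1] = np.sqrt(2.0) * t * h[0]
    for k in range(1, K):
        h[k + 1] = np.sqrt(2.0 / (k + 1)) * t * h[k] - np.sqrt(k / (k + 1)) * h[k - 1]
    return h

phi = lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)  # standard normal density
f = lambda t: (phi(t + 3) + phi(t) + phi(t - 3)) / 3     # hypothetical trimodal mix

t = np.linspace(-12, 12, 4801)
dt = t[1] - t[0]
K = 40
h = hermite_functions(K, t)
a = (h * f(t)).sum(axis=1) * dt   # Hermite coefficients a_k
SKf = a @ h                        # K-th partial sum on the grid

rms = np.sqrt(((f(t) - SKf) ** 2).sum() * dt)
print(rms)  # root mean square error of the partial sum
```

For a Gaussian mixture the Hermite coefficients decay very rapidly once k exceeds (roughly) the squared half-distance of the farthest mode, so a moderate K already drives the root mean square error down to the level of the quadrature error.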
Figure 1 above shows that f is essentially supported in . Again, from the graph of in Figure 4c of [9], we see that it effectively lives in .
Taking and (so ), we obtain:
One always has:
so, if the supremum norm is rather large, the smaller root mean square norm gives a better measure of the average size of . In our case:
Therefore, the supremum norm is here the better measure. Nevertheless, it is the computable estimates giving (6) that lead us to Figure 2 and hence to (7).
We observe that the graph in Figure 2 is of the error function approximated by , where is the Dominici approximation to given in Theorem 1.1 of [9].
The term involving in (6) makes the biggest contribution to the upper bound in (1). Thus,
and:
while:
For the convenience of the reader, we have gathered together in Appendix A the terms that make up .