1. Introduction
Each convergent series has a definite sum value, equal to the number to which the series converges. However, as early as the 18th century, it was discovered that the concept of the sum is much broader than that of convergence. Various examples of the summation of divergent series performed by mathematicians of the 17th and 18th centuries (see, e.g., [1,2] and chapters 1 and 2 in [3]) showed that many naturally occurring divergent series have definite sum values (similarly to convergent ones), obtained, generally, by rather natural manipulations with series by the rules valid for finite sums, without considering convergence as a preliminary condition. Summarizing these studies, Euler notes the following in [1], Part I, Section 109:
“…we conclude that series of this kind, which are called divergent, have no fixed sums, since the partial sums do not approach any limit that would be the sum for the infinite series. This is certainly a true conclusion, since we have shown the error in neglecting the final remainder. However, it is possible, with considerable justice, to object that these sums, even though they seem not to be true, never lead to error. Indeed, if we allow them, then we can discover many excellent results that we would not have if we rejected them out of hand”.
(English translation taken from [4].)
Along with [3], we refer to [5,6,7,8] for a modern account of those earlier summation attempts.
Remark 1. We let , with no limits specified, always mean in this paper.
Somewhat later, in the 19th century, the question of the summation of divergent series was fully developed on the basis of rigorous definitions of summation. These definitions (see [3]) revealed the exact mathematical content of the intuitive approach of Euler and other earlier mathematicians. On the other hand, these definitions of summation themselves arose largely from the analysis of early examples of summation.
One such example is the hypergeometric series of alternating factorials (also called the Wallis series), which appears in two closely related forms:
Euler, in [2] and in Part II, Section 10.III of [1], suggested several ways of summing (1). One of these methods (see Section 2) yields the value of the sum in the form of a definite integral, which is not expressible in elementary functions but is easily calculated to any necessary accuracy using standard numerical methods.
Another method consists of a special procedure of an iterative multi-step transformation of a given series, with an ensuing separation of some number of initial terms of the intermediate series. Euler performs three steps of this iterative procedure and obtains an approximate value of the sum of the series (1), which is quite close to the “exact” value given by the first method, but he does not consider the issue of convergence to the “exact” value with an infinite number of steps. This method was called “a more remarkable, though less precise calculation” by Hardy in [3], Section 2.6.
The analysis of this second, iterative summation method is the main task of this article. We will show that it belongs to the category of linear summation methods, more specifically, to the category of triangular matrix methods, well known and extensively studied in modern theory of the summability of divergent series. We investigate the convergence of different variants of this summation by Euler.
We will also show that the summation method in question was rediscovered in the late 1950s as the Jakimovski method ([9], Section 1.3) on the basis of a completely different background and in a different form, and Euler’s priority of more than 200 years seems to have gone unnoticed. The history of the Jakimovski method began with an article [10] by Karamata, where a triangular summation method was introduced whose matrix coefficients were identical to Stirling numbers of the first kind. This method was reintroduced by Lototsky [11] and became known in the late 1950s as the Karamata–Stirling, or Lototsky, method, denoted as or , with a real parameter. Finally, Jakimovski [12] defined the transformation named after him, with being a sequence of real parameters. That was a far-reaching generalization since is identical to .
To finish the Introduction, we may note that various aspects of the summation of series (1) remain the subject of active modern research; see, for example, [13,14].
2. The “Exact” Value of the Sum of Alternating Factorials
Euler’s own approach to summing divergent series was outlined in [1], Part I, Section 111 (English translation taken from [4]):
“Let us say that the sum of any infinite series is a finite expression from which the series can be derived. <…> With this understanding, if the series is convergent, the new definition of sum agrees with the usual definition. Since divergent series do not have a sum, properly speaking, there is no real difficulty which arises from this new meaning. Finally, with the aid of this definition we can keep the usefulness of divergent series and preserve their reputations.”
This results, in particular, in the Abel summation method (see [3]), according to which a series has (A) sum s in the case when the power series converges at small x (i.e., in some neighborhood of zero) to an analytic function such that .
However, as far as series (1) is concerned, the corresponding power series diverges at any , i.e., the definition of the (A) sum as above does not immediately work in this case. Nevertheless, Euler applied this idea in [2], Sections 19–20 (see also Sections 2.4–2.5 in [3], and [7]) via series (2), which, however, is interpreted as the asymptotic series of the function , where , and is the exponential integral function [15]. This allows us to formally (and heuristically) expand as follows: since . Now, with , we obtain the following evaluation of (1),
which we will call the “exact” values of the divergent series (1), respectively, (A) and (B). Euler gives the value for in [1], Part II, Section 10. However, the last four decimals there were found to be erroneous by Mascheroni ([16], p. 11), who gave the true value , also cited by Kowalewski (editorially) in [17].
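For a numerical check, here is a minimal sketch, assuming (as is classical for the factorial series 1 − 1! + 2! − 3! + ⋯) that the “exact” value in question is the Eulerian integral ∫₀^∞ e^{−t}/(1+t) dt = e·E₁(1); plain composite Simpson quadrature on a truncated range already recovers the digits discussed above:

```python
# Numerical sketch (assumption: the "exact" value meant in the text is the
# integral of exp(-t)/(1+t) over [0, oo), i.e., e * E_1(1)).
# Truncating at T = 60 introduces an error below exp(-60), far under the
# printed digits.
import math

def integrand(t):
    return math.exp(-t) / (1.0 + t)

def simpson(f, a, b, n):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3.0

value = simpson(integrand, 0.0, 60.0, 120_000)
print(f"{value:.8f}")  # 0.59634736
```

Any standard quadrature routine gives the same digits; Simpson is used here only to keep the sketch self-contained.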
This summation technique can be given a mathematically rigorous form based on the summation method (B*) according to Hardy, [3], Section 8.11. This method defines a value s to be the (B*) sum of a series if
- (a)
The series converges in some neighborhood of 0;
- (b)
The function has a regular analytic continuation to ;
- (c)
.
Thus, the value in (5) is the (B*) sum of the series (5)(A).
Other Eulerian summation arguments leading to the same value (5)(A) are presented in Section 2.4 of [3], in [7], and in [18]. For instance, let . After formal differentiation, we arrive at the equation . Its solution, with the initial condition at , has the form Taking here leads to (5)(A), as required.
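For completeness, the classical computation (as presented, e.g., in Hardy's account) can be sketched as follows, under the assumption that the function in question is y(x) = ∑_{n≥0} (−1)ⁿ n! x^{n+1}:

```latex
% Sketch of the differential-equation argument, assuming
% y(x) = \sum_{n\ge 0} (-1)^n n!\, x^{n+1}.
% Termwise differentiation gives x^2 y' = -(y - x), that is,
\[
x^2 y' + y = x .
\]
% The solution with y \to 0 as x \to 0^{+} is
\[
y(x) = e^{1/x} \int_0^x \frac{e^{-1/t}}{t}\, dt ,
\]
% and the substitution t = 1/(1+v) at x = 1 yields
\[
y(1) = \int_0^\infty \frac{e^{-v}}{1+v}\, dv \approx 0.59634736 .
\]
```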
Now, having the exact (in a certain conditional sense, of course) values (5) of the sums of the divergent series (1)(A),(B), let us proceed to another Eulerian method for summing the series of alternating factorials.
3. Iterative Summation Procedure by Euler
Each step of this Euler procedure includes a transformation, later named after him and also known as the Euler–Knopp summation [3], which translates a given series into , where is given (in a slightly different notation), e.g., in [1], Part II, Section 3, or Section 8 for alternating series.
It is known (see, e.g., [3], Section 8.3) that every convergent series is converted by (6) again into a convergent series with the same sum. (This property is called regularity.) At the same time, there are also divergent series that transform into convergent ones. For example, the series is converted to the series . This progression is summed to the value , exactly equal to the “sum” of the original series, formally calculated according to the rule of an infinitely decreasing geometric progression.
Generally, any progression is transformed by (6) to the progression , where , and we have in the range .
The transformation (6) can be applied to a given progression multiple times. In this case, the resulting series after m transformations will converge provided the inequality holds, which tends to be limited in the left semi-axis .
If we now consider the series (1) as a kind of progression whose denominator tends to via negative integers, then it becomes rather clear that the series (1) will not converge after any finite number of transformations using Formula (6); this can be verified directly as well. The idea that promises success also becomes clear: apply transformation (6) sequentially, infinitely many times.
However, the result of such an infinite conversion would be the series of all zeros, and there is no benefit in this. Euler overcomes this difficulty with a special trick. Namely, he directly sums up several initial terms of the series and, stashing the resulting sum, applies the transformation once again to the remainder of the series. This results in the following computation in [1], Part II, Section 10.III, also presented in [19], Section 1047, and [3], Section 2.6.
Example 1. Working with the truncated series as in (1)(B1), Euler evaluates it as follows: Thus, transformation (6) is applied three times, with separate summations of two, and again two, initial terms before the second and third transformations, respectively. Finally, take eight initial terms in parentheses in the last row (i.e., those framed in line (F)) along with the number and, performing the calculations, find the value (erroneously in the original publication [1]). This is rather close to the “exact” value of (5)(B). This ends the calculation. Two things related to Example 1 should be mentioned.
First, the correction in (7) was made by Gerhard Kowalewski in [17]. Second, in fact, Euler did not explicitly mention taking precisely eight terms in line F, speaking rather of four + some more terms in [1], Part II, Section 10.III. However, it is only the choice of exactly eight terms that is compatible with the final value (7) (in either version); see Table 7 in Section 15 for numerical details.
Example 2. One can carry out an Euler-style computation for the full series of (1)(A), instead of the truncated one of (1)(B), closely following the numerical content of Example 1: Now, take eight initial terms in parentheses in F’ along with the number . This returns , close to of (5)(A). Another rather similar but somewhat differently arranged computation in [2], Sections 13–16 (commented on in [7], Section III), yields an approximate value for the whole series of (1)(A) after three steps.
4. Why 2–2–8?
It is not immediately clear why Euler cuts exactly two, two, and eight initial terms of the given and intermediate series in the process of calculation in Example 1. Indeed, in the first two cases, Euler cuts to the least term. But this is definitely not the case in the last cutting of eight terms, because in fact the least term (in absolute value) is the sixth one, not the eighth one, as Table 5 in Section 15 shows. Moreover, terms 4, 5, and 7 are smaller than the eighth term as well.
Nor does it look as if Euler takes the best possible approximation, since taking seven, five, six, or three terms in brackets in the last line of the calculation in Example 1 gives slightly better approximations than (7) of the “exact” value of (5); see Table 7 in Section 15.
Generally, Euler did not go into the details of his choice of the number of initial terms to separate at each consecutive step. There is not much said about this in [3,7] either. Our guess is that Euler separated as many initial terms as necessary for the remainder:
- (I)
To be alternating;
- (II)
To have absolute values increase monotonically;
- (III)
To start with a positive term.
This is definitely true for the first and second separations in Example 1 and almost true for the final separation of eight terms because, as demonstrated above, Euler should have separated six (rather than eight) terms to satisfy (I), (II), and (III). Yet the increment of the eighth term over the seventh one (in absolute values) is about 0.000056…, as Table 5 shows, which is quite a small fraction of the absolute values of terms 7 and 8 themselves (respectively, 0.002524… and 0.002580…). We may guess that Euler hesitantly decided to take two more terms because the increase after the eighth term becomes much more transparent.
Anyway, we can ask how the process of summation in Examples 1 and 2 can be continued so that the exact results of (5) are obtained in the limit. The idea is generally clear: carry out an infinite number of steps. Yet it is unclear how many initial terms should be allocated and separately summed up before the transformation at each step. In order to provide for various possibilities here, as well as to allow some generalizations, we will give the summation process a form in which the execution of each step depends on special parameters. After that, an accurate analysis of the results will be possible. This will be the topic of the next section.
5. General Form of the Iterative Euler Summation Process
It is known that the Euler transformation given by Formula (6) is a special case (for the parameter value ) of the transformation , which translates a given series into a series , the terms of which are defined by the formula This transformation is also named after Euler. It is thoroughly analyzed in Chapters 8 and 9 of [3].
Let us now consider the following iterative summation process, the parameters of which are two sequences and of reals and integers , . The mechanics of the process consists of the sequential application of the transformation , such that for the mth step, we take , whereas the numbers determine the amount of terms separated before the application of .
- A given series . (As above, ∑ with no limits means .) We put for all indices k, the initial iteration.
- Step , part 1, . Given a series obtained at the previous step (or the initial series in case ), we separate the first terms of . Let be their sum (with the understanding that in case ) and re-enumerate the remainder of as . In other words,
- Step , part 2, . Given a series obtained at Step m, part 1, we apply the transformation by Formula (8), obtaining the transformed series , the next iteration.
- Conclusion. The result of the process is the series of the sums of separated terms, If this series converges (in the usual sense) to a finite or infinite value , then we say that S is the sum of the given series , which is said to be summable accordingly. (EI stands for Euler iterative.)
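A minimal sketch of the whole process, under two assumptions: that (8) at q = 1 is the classical Euler–Knopp transform b_n = 2^{−(n+1)} ∑_{k≤n} C(n,k) a_k, and that the truncated series of Example 1 is 1! − 2! + 3! − ⋯. With q_m = 1 and two terms separated per step, the intermediate series reproduce the line-B and line-D entries of Table 1 and Table 3 in Section 15:

```python
# Sketch of the iterative process: at each step, separate (and stash) the
# first p terms, then transform the remainder. Assumptions: (8) at q = 1 is
# b_n = 2^{-(n+1)} sum_{k<=n} C(n,k) a_k, and the truncated series of
# Example 1 has terms (-1)^k (k+1)!.
from fractions import Fraction
from math import comb, factorial

def euler1(a):
    """(E,1) transform; entry n is exact when a has at least n+1 terms."""
    return [sum(comb(n, k) * a[k] for k in range(n + 1)) / Fraction(2 ** (n + 1))
            for n in range(len(a))]

a = [Fraction((-1) ** k) * factorial(k + 1) for k in range(24)]

line_B = euler1(a)                # first transform (line B)
assert line_B[:4] == [Fraction(1, 2), Fraction(-1, 4),
                      Fraction(3, 8), Fraction(-11, 16)]    # Table 1
stash = line_B[0] + line_B[1]     # separate two terms
line_D = euler1(line_B[2:])       # second transform (line D)
assert line_D[:3] == [Fraction(3, 16), Fraction(-5, 64),
                      Fraction(21, 256)]                    # Table 3
stash += line_D[0] + line_D[1]    # separate two more terms
line_F = euler1(line_D[2:])       # third transform (line F)
value = stash + sum(line_F[:8])   # eight terms of line F plus the stash
print(float(value))
```

The exact fractions above match the tables of Section 15, which gives some confidence that the two assumptions reproduce the computation of Example 1.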
The Eulerian calculation given in Example 2 naturally fits into the first four steps of the summation scheme applied to the series of (1)(A), with the following initial parameter values:
In terms of this summation process, the final value of Example 2 is equal to
Now let us assume that the tuples of numbers (11) are somehow extended to infinite sequences and . Then, the summation of the series (1) can be considered as a continuation of the Euler summation, and the resulting value (7) as a partial sum (12) of the resulting series (10). Hence, the following problem arises:
Problem 1. Find out how to extend the tuples of numbers (11) to infinite sequences and so that the sum of is equal to the “true” value of (5)(A). Additionally, find out how to define such an extension so that, on the contrary, the sum of does not coincide with .

6. Simplified Form of the Iterative Euler Process
We will consider this task, but first let us introduce a useful simplification that is possible without significant damage to its content. This simplified case is , . To understand the relationship with the general case, we define a new sequence for a given pair of infinite sequences and , as follows: In other words, zeros are inserted before each . In particular, we have the following finite sequence of parameters ( ) out of (11):
Definition 1. In the remainder, we denote the summation process as .
It is clear that the transformation does nothing with the series, according to the definition of (8). Therefore, the inserted zeros in the sequence of the form (13) ensure, in the process , a separation of one term for each zero from the row obtained in the step before this block of zeros. This means that the sequence of partial sums of the resulting series (10) in the process is a sub-sequence of partial sums of the resulting series in the process . This implies the following:
Corollary 1. If satisfy (13), then the summation method is included in in the sense that any series summed to a finite or infinite sum S by the first method is summed to the same sum S by the second method.

The corollary does not imply the inverse reduction. Nevertheless, Problem 1 can now be reformulated as follows:
Problem 2. Find out how to extend the tuple of numbers (14) to an infinite sequence so that the sum of is equal (or, conversely, not equal) to the “true” value of (5)(A).

7. Iterated Euler Method as a Triangular Method
Linear summation methods (see [3,9]) are those that define the sum of a given series as the sum, in the sense of ordinary convergence, of some other series (if it converges), the terms of which are given by the equalities: where the coefficients are determined by the method and do not depend on the choice of the given series . If, in addition, whenever (i.e., depends only on with ), then the summation method is called triangular, and in this case,
Dealing with this class of summation methods is simplified by the symbolic use of the displacement operator to shift the index of the terms of a given series, which, by definition (see, e.g., [12], Section 5, where it is denoted by E), formally acts in such a way that This defines the formal action of any polynomial , for example, Accordingly, equality (15) for a triangular method can be presented as
The next theorem shows that the summation method belongs to the category of triangular methods and, moreover, gives exact values of the corresponding coefficients. This identification of the coefficient formula will allow us not only to prove our main result on the identity of the iterative Euler and the Jakimovski methods (Corollary 2) but also to carry out the calculations presented in Section 15.
Theorem 1. The summation method is a triangular method, which acts so that , where is defined by the following Formula (18). We recall that an empty product, like , is always equal to 1. Then, we have by (B). With this understanding, we have , and hence the second formula in (A) can be rewritten as
Proof. Recall that is . Thus, let us come back to the summation process defined in Section 5 under the assumptions and for all . This process begins with and introduces intermediate series and by induction on We are going to prove the following equations:
Case . As , the equality (20) takes the form , which holds by (16) since by construction and .
Step , part 1. Suppose that and the equality (20) holds for the upper index . Now, (21) follows from (20) (for ) because by construction:
Step , part 2. Let us check that then (20) holds for m itself. Recall that is obtained from by by means of Formula (8). Thus, because and .
Thus, we have obtained (20). This completes the proof of (20) and (21).
Now, to accomplish the proof of the theorem, it remains to note that by (20), which coincides with (19), as required. □
Remark 2. We can completely remove from the formulation and proof of the theorem, without any harm to its content, by representing (18)(A) in the form and the rightmost equality serves as a definition of the coefficients .

8. Iterated Euler Method = Jakimovski Method
As defined in [12], if is a sequence of numbers , then the triangular summation method defined by (18)(A), or equivalently by (22), is called the Jakimovski summation method and is denoted by .
The condition is necessary for in view of Formulas (18) and (22).
Corollary 2. The iterative Euler method is identical to the Jakimovski method , assuming that and satisfy (18)(B).
See [20] regarding the identity of the iterative Euler and Jakimovski summations from the perspective of nonstandard analysis.
Note that the following string of parameters arises from (14) by rule (18)(B):
Problem 3 (a reformulation of Problem 2). Find out how to extend the tuple of numbers (23) to an infinite sequence so that the sum of is equal (or, conversely, not equal) to the “true” value of (5)(A). This problem is solved in Section 9 below.

We may note that (18)(A)–(22) is the series-to-series form of the Jakimovski summation . A somewhat different sequence-to-sequence form also occurs in the publications on divergent series, e.g., [12], Section 2.
To see the connection here, assume that , put , , and prove by induction that then (24) holds. Indeed (the basis ), and , whereas the empty product in (24) is equal to 1. To carry out the step, note that . On the other hand, (24) implies that However, , and . Thus, But this is precisely the value of by (18)(A). This completes the inductive step.
9. About the Jakimovski Summation
A number of results on the summability method , in the context of the theory of divergent series, were obtained in, e.g., [21,22,23,24,25,26,27,28,29]. Of these, we present here the following result, characterizing the region of summability depending on the comparison of sequences . See [9], Section 1.3, for a more substantial review.
Proposition 1 ([29]). Assume that and are sequences of , such that and, for some N, holds for all and . Then, summability of any series implies its summability to the same value.

Some results are known concerning the connections of the Jakimovski method with other summation methods. In particular, Theorem 5.4 in [12] states that, under certain conditions (including and ), the method includes Euler’s summation for all . There are also some known connections with the Borel method; see, for instance, [30].
A substantial direction of research on the Jakimovski summation method has been related to the case of the linear distribution of the nodes , i.e., including the following most notable summability methods:
Karamata–Stirling, , ,
Lototsky, ,
Martic, , ,
and some more; see [9,10,11,31,32,33]. Of the typical questions in the theory of summability considered in connection with these methods, we are most interested in the summability of the Eulerian series (1) and, generally, (2). In this direction, studies [24,31,32,33] and others have demonstrated that the key factor in determining the sum of is the value of the coefficient A compared to the product . In particular, the following was established in [31], Sections 4 and 5, in the case , and outlined in [24] for the general case.
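A sketch of the Karamata–Stirling matrix, assuming (per the description in Section 1) that in sequence-to-sequence form its entries are |s(n,k)|/n!, with |s(n,k)| the unsigned Stirling numbers of the first kind; applied to the partial sums of the classical divergent series 1 − 1 + 1 − ⋯, the transformed sums equal 1/2 exactly from n = 2 on:

```python
# Sketch of a Karamata-Stirling (Lototsky-type) transform, assuming the
# sequence-to-sequence form t_n = (1/n!) sum_{k<=n} |s(n,k)| s_k, where
# |s(n,k)| are unsigned Stirling numbers of the first kind and s_k are
# partial sums of the given series.
from fractions import Fraction
from math import factorial

def stirling1_unsigned(N):
    """Table c[n][k] of unsigned Stirling numbers of the first kind."""
    c = [[0] * (N + 1) for _ in range(N + 1)]
    c[0][0] = 1
    for n in range(N):
        for k in range(n + 1):
            c[n + 1][k + 1] += c[n][k]      # s(n+1,k+1) gets s(n,k)
            c[n + 1][k] += n * c[n][k]      # plus n * s(n,k)
    return c

N = 8
c = stirling1_unsigned(N)
# Partial sums of 1 - 1 + 1 - 1 + ... : s_k = 1 for odd k, 0 for even k.
s = [k % 2 for k in range(N + 1)]
t = [Fraction(sum(c[n][k] * s[k] for k in range(n + 1)), factorial(n))
     for n in range(N + 1)]
print(t[2:])  # every entry equals 1/2
```

The constancy from n = 2 on reflects the identity ∑_{k odd} |s(n,k)| = n!/2 for n ≥ 2, obtained by evaluating ∑_k |s(n,k)| x^k = x(x+1)⋯(x+n−1) at x = ±1.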
Proposition 2. If and , then the series
- (i) is summable by to the value of (3) in case ;
- (ii) is not summable by in case .
The next theorem gives a close, and even somewhat stronger, result.
Theorem 2. Assume that , , and for all n beginning with some N, and for all n. Then, the series
- (i) is summable to of (3) in case ;
- (ii) is not summable in case .
Example 3. Let . By Theorem 2, the series is summable but not summable.
Now, we make use of it to immediately solve the above formulated problems.
Proof (solution of Problem 3, and thereby of Problems 2 and 1 too). Take and (basically, any works as well) and put for all , keeping , , defined by (23). The extension of (23) takes the form whereas the corresponding extension of (14) takes the form where, by (18)(B), for all . In this case, is summable to by Theorem 2(i).
Then, take (or just any ) and put for all . Then, the series is not summable to by Theorem 2(ii). □
Remark 3. Theorem 2 implies Proposition 2, under the following extra assumption: if exactly, then .
To prove this reduction, we consider three cases. Note that the extra assumption of Remark 3 is related to Case 3 only. Also note that definitely holds for .
Case 1: . Take any satisfying . Then, is summable by Theorem 2(i). But for all but finite n. Therefore, is still summable by Proposition 1. However, by definition, this is exactly the summability.
Case 2: . A symmetric argument with works.
Case 3: . Then, by the extra assumption . It follows that , and hence summability implies summability, as required.
10. Jakimovski Method and Iterative Hutton Summation
The following point attracts attention: why is the result of the Eulerian iterative summation determined via as in Corollary 2 not by the numbers themselves but by the derived reals ? Answering this question, we will show that formula of the method is obtained from an iterative summation process that has the numbers themselves as parameters and differs from only in that each step uses not the Euler transform but another (and generally simpler) transformation.
Given , we define a transformation , which translates a given series into a series , the terms of which are defined by the formula
Note that if , then is identical to what is defined as and related to Hutton by Hardy in [3], p. 22. Thus, we may call it the Hutton transform. Also note that the case is excluded, as it implies a zero denominator in the fractions in (28).
The associated iterative method is defined following the scheme in Section 5, with for all n. Namely, is the given series. Then, we proceed by induction. If and is defined, then we put , let for all , and let the next step be the transform of .
Finally, is the transformation of . If converges (in the usual sense) to a finite or infinite value , then we say that S is the sum of the given series , which is said to be summable accordingly.
Theorem 3. The numbers , defined as above, satisfy the equalities (18)(A). Therefore, the iterative Hutton transform is identical to the Jakimovski transform .

Proof (sketch). The following equalities are easily provable by induction:
We conclude that, by (30), the reals satisfy (18)(A). □
Example 4. Let us recalculate Example 2 using the method with the sequence of (23).
— the given series; , ;
— by (28) with ; , ;
— by (28) with ; , ;
— by (28) with ; , ;
— by (28) with ; , ;
— by (28) with ; , ;
— by (28) with , , et cetera.
Thus, we obtain , which is identical to the computation of Example 2. However, we may note that the calculation by (28) is much simpler than the one that uses (8) as in Example 2. This may provide a purely computational advantage. See also [20] regarding iterative Euler and Hutton transforms and summability from the point of view of nonstandard analysis.
11. Theorem 2: Evaluation of the Remainder
In this section, we begin the proof of Theorem 2.
We adopt the following notation and global assumptions:
- (∗) (a) A sequence of reals is fixed;
- (b) , and ;
- (c) is a given series;
- (d) is its transform;
- (e) is fixed and is defined by (3);
- (f) for any n, is the formal remainder;
- (g) ;
- (h) If , then put .
Lemma 1. for all n.
Proof. We begin by analyzing the auxiliary sum of the geometric series . In this case, the displacement operator of (16) is identical to multiplication by z; thus, in a sense. Therefore, if is the transformed series, then we have by (18)(A), and, subsequently, the auxiliary remainder satisfies by an elementary induction on n based on (31). Indeed, as required, and the proof of (32) is accomplished. In particular, Now, we recall that by (3), and hence easily Substituting , we obtain as required. □
Lemma 2. Assume that, in addition to (*) above, . Then, with uniformly on the interval . In other words, for every , there exists N such that for all and all ξ in the interval , we have .
Proof. Quite obviously, in case . To estimate the product , note that by , there is some such that for all . Let Then, for any and , we have by (34). It remains to note that since by (*)b. □
Corollary 3. Let . Then, for Theorem 2 to hold, it suffices to prove that
- (i) —in case ;
- (ii) or nonexistent—in case .
Proof. It suffices to note that by Lemma 2. □
12. Theorem 2: Reshaping
It is somewhat troublesome for different evaluations below in the proof of Theorem 2 that the condition in the theorem is assumed for all n beginning with some N, rather than generally for all . Fortunately, the next lemma allows one to change the values of , (and generally any finite number of values, of course) to in such a way that the content of Theorem 2 is preserved.
Lemma 3. Assuming (*) of Section 11, suppose that is another sequence of reals , satisfying (*)a and (*)b, and such that for all . Then, Theorem 2 simultaneously holds (or simultaneously fails) for the summation methods and .

Proof. Let and define accordingly, similarly to (*)(g),(h) in Section 11. Then, Thus, is a constant. Moreover, if , then , and hence , and finally where and . It remains to refer to Corollary 3. □
13. Theorem 2: The Case of Summability
Beginning here the proof of Theorem 2, we assume (*) of Section 11, and, strengthening (*)a and (*)b, we also assume the following:
- (∗∗) (a) Reals are fixed, , and for all ;
- (b) We re-denote of (*) as ;
- (c) We accordingly put .
Note that, by Lemma 3, condition , , in (**)a does not reduce the generality of the assumptions of Theorem 2. We begin with Case (i) of the theorem.
Lemma 4. Suppose that , as in (i) of Theorem 2. Then, for all , where does not depend on .
Proof. Let us evaluate the factors in the product as in (**)b in the domain
- (†) , or equivalently, .
Fact 1. If (†) holds and , then .
Fact 2. If (†) holds and , then easily .
To conclude, if (†) holds and , then by Fact 2 and Stirling’s formula, Here, , , and so that because is assumed. Here, does not depend on . On the other hand, if (†) holds and , then by Facts 1 and 2, where does not depend on . Taking , we deduce from (35) and (36), which proves the lemma. □
Proof of Claim (i) of Theorem 2. Assuming (∗) of Section 11, along with (∗∗) above, and , we have to prove that . By Lemma 4, there exists a real constant such that for all . As converges, given any , there exists K such that . On the other hand, by Lemma 2. This completes the proof. □
14. Theorem 2: The Case of Divergence
In continuation of the proof of Theorem 2, consider Case (ii) of the theorem. We still assume (∗) and (∗∗) (Section 11 and Section 13) and recall that, in particular, , , .
Lemma 5. There is a real constant such that for all .
Proof. If and (as in (†) in the proof of Lemma 4), then by (36), where does not depend on ; therefore, . Separately, if , then by Lemma 2; hence, there is some such that for all n. Note that converges. Taking , we obtain the required result. □
Lemma 6. Assume that . Let . Then, .
Proof. Note that is a sign-constant function in the domain by (∗∗)c. It follows that Now, suppose that . Then, easily since . In other words, provided . Combining this with (37), we obtain the lemma. □
Lemma 7. Assume that . Then, ; hence, by Lemma 6, as well.
Proof. It follows from (∗∗)b that , and hence . Converting the factorials here by Stirling’s formula, we obtain where C does not depend on . However, since . Therefore, the exponential function increases faster than any polynomial. This ends the proof of the lemma. □
Proof of Claim (ii) of Theorem 2. Still assuming (∗) and (∗∗) (Section 11 and Section 13), and (as in Case (ii) of the theorem), we are going to prove that . For that purpose, note that for all n sufficiently large, we have where the first addendum is uniformly bounded by Lemma 5, whereas the second one tends to ∞ by Lemma 7. This ends the proof of Theorem 2 as a whole. □
15. Example 1 with More Detailed Numerical Information
Here we present more detailed numerical information related to the calculations in Example 1, lines B–F, and the final sum value. The numerical data in Table 1, Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7 are presented as exact fractions and in decimal form rounded to 8 decimal places.
Table 1.
Example 1 line B.
1/2 | = | 0.50000000 |
−1/4 | = | −0.25000000 |
3/8 | = | 0.37500000 |
−11/16 | = | −0.68750000 |
53/32 | = | 1.65625000 |
−309/64 | = | −4.82812500 |
2119/128 | = | 16.55468750 |
−16,687/256 | = | −65.18359375 |
148,329/512 | = | 289.70507813 |
−1,468,457/1024 | = | −1434.04003906 |
16,019,531/2048 | = | 7822.03662109 |
−190,899,411/4096 | = | −46,606.30151367 |
2,467,007,773/8192 | = | 301,148.40979004 |
−34,361,893,981/16,384 | = | −2.09728357 × 106 |
513,137,616,783/32,768 | = | 1.56597173 × 107 |
−8,178,130,767,479/65,536 | = | −1.24788372 × 108 |
Table 2.
Example 1 line C terms in brackets.
3/8 | = | 0.37500000 |
−11/16 | = | −0.68750000 |
53/32 | = | 1.65625000 |
−309/64 | = | −4.82812500 |
2119/128 | = | 16.55468750 |
−16,687/256 | = | −65.18359375 |
148,329/512 | = | 289.70507813 |
−1,468,457/1024 | = | −1434.04003906 |
16,019,531/2048 | = | 7822.03662109 |
−190,899,411/4096 | = | −46,606.30151367 |
2,467,007,773/8192 | = | 301,148.40979004 |
−34,361,893,981/16,384 | = | −2.09728357 × 106 |
513,137,616,783/32,768 | = | 1.56597173 × 107 |
−8,178,130,767,479/65,536 | = | −1.24788372 × 108 |
Table 3.
Example 1 line D terms in brackets.
3/16 | = | 0.18750000 |
−5/64 | = | −0.07812500 |
21/256 | = | 0.08203125 |
−99/1024 | = | −0.09667969 |
615/4096 | = | 0.15014648 |
−4401/16,384 | = | −0.26861572 |
36,585/65,536 | = | 0.55824280 |
−342,207/262,144 | = | −1.30541611 |
3,565,323/1,048,576 | = | 3.40015697 |
−40,866,525/4,194,304 | = | −9.74333882 |
510,928,317/16,777,216 | = | 30.45370084 |
−6,915,941,595/67,108,864 | = | −103.05556051 |
100,734,321,519/268,435,456 | = | 375.26459068 |
−1,570,587,184,521/1,073,741,824 | = | −1462.72330035 |
Table 4.
Example 1 line E terms in brackets.
21/256 | = | 0.08203125 |
−99/1024 | = | −0.09667969 |
615/4096 | = | 0.15014648 |
−4401/16,384 | = | −0.26861572 |
36,585/65,536 | = | 0.55824280 |
−342,207/262,144 | = | −1.30541611 |
3,565,323/1,048,576 | = | 3.40015697 |
−40,866,525/4,194,304 | = | −9.74333882 |
510,928,317/16,777,216 | = | 30.45370084 |
−6,915,941,595/67,108,864 | = | −103.05556051 |
100,734,321,519/268,435,456 | = | 375.26459068 |
−1,570,587,184,521/1,073,741,824 | = | −1462.72330035 |
Table 5.
Example 1 line F terms in brackets.
21/512 | = | 0.04101563 |
−15/4096 | = | −0.00366211 |
159/32,768 | = | 0.00485229 |
−429/262,144 | = | −0.00163651 |
5241/2,097,152 | = | 0.00249910 |
−26,283/16,777,216 | = | −0.00156659 |
338,835/134,217,728 | = | 0.00252452 |
−2,771,097/1,073,741,824 | = | −0.00258079 |
36,159,837/8,589,934,592 | = | 0.00420956 |
−416,721,543/68,719,476,736 | = | −0.00606410 |
5,868,508,359/549,755,813,888 | = | 0.01067475 |
−84,143,115,525/4,398,046,511,104 | = | −0.01913193 |
Table 6.
Example 1, final sum value, Equation (7).
Uiter = 430,377,791/1,073,741,824 = 0.40082055 |
Table 7.
Example 1: partial sums terms in brackets in line F. The value is the closest to the “exact” value 1 − = 0.40365… of (5)(B). The values , , and are also closer to 1 − than the Eulerian choice of .
| = | 23/64 | = | 0.35937500 |
| = | 205/512 | = | 0.40039063 |
| = | 1625/4096 | = | 0.39672852 |
| = | 13,159/32,768 | = | 0.40158081 |
| = | 104,843/262,144 | = | 0.39994431 |
| = | 843,985/2,097,152 | = | 0.40244341 |
| = | 6,725,597/16,777,216 | = | 0.40087682 |
| = | 54,143,611/134,217,728 | = | 0.40340134 |
| = | 430,377,791/1,073,741,824 | = | 0.40082055 |
| = | 3,479,182,165/8,589,934,592 | = | 0.40503011 |
| = | 27,416,735,777/68,719,476,736 | = | 0.39896601 |
16. Further Examples and Notes
Here, we present two more applications of the Euler–Jakimovski method and then proceed with a few notes on possible extensions of our results and methods and on connections to various modern approaches.
Example 5 (Section 4 in [
12])
. summation of power series is considered. Among other results, Theorem 4.1 there claims that if and the transformation is regular, then it sums the series to for all complex z with , but it does not sum in case . The case here is left open in [
12]
. Some sufficient regularity conditions for
are given, e.g., in [
12,
21,
22] or elsewhere. For instance, by [
22], Section 3, if
are complex numbers, then
is regular provided
- (a)
Only finitely many ;
- (b)
For some , for all n; ,
- (c)
, where indicates that the product is over all nonzero factors.
Example 6 (Section 10.III in [
1], Part II).
Euler demonstrates that his summation method helps transform slowly converging series into rapidly converging ones, which has obvious computational applications. Starting with the slowly converging alternating harmonic series , Euler finds that the transformed series is which converges much faster. Another example there concerns the series (with base 10 logarithms). This is, of course, a divergent series, since . To evaluate it, Euler sums the first eight terms (up to inclusive), obtaining , and then transforms the tail with , and, summing several terms of the transformed series, obtains . The final result is therefore . This can be viewed as a two-step application of the iterative Euler summation.
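Euler's acceleration of the alternating harmonic series can be checked directly. The following is a minimal sketch assuming the classical form of the Euler transform, t_n = (−1)^n Δ^n a_0 / 2^(n+1) (the exact convention used in the text is elided above): twenty raw terms of the series give only about two correct digits of log 2, while twenty transformed terms give about seven.

```python
from fractions import Fraction
from math import comb, log

# Terms of the alternating harmonic series: log 2 = 1 - 1/2 + 1/3 - ...
a = [Fraction(1, k + 1) for k in range(20)]
raw = sum((-1) ** k * t for k, t in enumerate(a))

# Classical Euler transform: t_n = (-1)^n * Delta^n a_0 / 2^(n+1),
# where Delta^n a_0 = sum_k (-1)^(n-k) C(n,k) a_k.
def euler_term(a, n):
    d = sum((-1) ** (n - k) * comb(n, k) * a[k] for k in range(n + 1))
    return (-1) ** n * d / 2 ** (n + 1)

acc = sum(euler_term(a, n) for n in range(20))

print(abs(float(raw) - log(2)))  # roughly 1e-2: 20 raw terms
print(abs(float(acc) - log(2)))  # roughly 1e-8: 20 transformed terms
```

Here the transform turns the terms into 1/((n+1)·2^(n+1)), so the error after n terms shrinks geometrically instead of like 1/n.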
Next, we make a few comments on possible extensions of our results and methods and on connections with various modern approaches. Applying the iterative Euler/Jakimovski summation technique in these new areas would be an interesting topic for further research.
Note 1. When discussing the difficulties related to such mathematical singularities as division by zero or assigning meaningful sums to divergent series, those studies can be further deepened through the perspective of uncertain numbers, a number system recently proposed by Yue [
34]. Uncertain numbers provide alternative ways of interpreting divergent series, reinforcing the need for generalized summability methods such as those discussed in our paper. This may be a subject of further research.
Note 2. One more direction of further prospective research can exploit some implicit affinities and historical-to-modern parallels between iterative summation and algebraic operator theory. The iterative summation process explored in our paper echoes the algebraic structures introduced in Rota–Baxter theory, particularly in the context of operator identities, as discussed in a recent paper by Guo et al. [
35].
Note 3. Another direction of studies on the summation of divergent series is based on the method of zeta function regularization; see, e.g., [
36]. This is quite distinct from the more traditional triangular and other linear methods such as the iterative Euler and Jakimovski methods. Zeta function methods allow one to evaluate series not summable by more traditional techniques. One of the most striking examples is the evaluation
, the Euler–Mascheroni constant. The shortest route to this evaluation, following [
37], is as follows. Formally,
. Although the function has a pole at 1, we can take its Cauchy principal value there:
. See a more detailed derivation, using the Ramanujan summation method, in [
38], page 87. However, it would be interesting to apply
to the harmonic series with different distributions of the nodes
.
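The principal-value evaluation in Note 3 can be checked numerically. The sketch below is my own illustration, not taken from the cited sources: it approximates ζ(s) for real s near 1 by a short Euler–Maclaurin formula and averages the values at s = 1 ± ε, so that the 1/(s − 1) poles cancel and the Euler–Mascheroni constant γ ≈ 0.57721566 remains.

```python
# Cauchy principal value of zeta at s = 1 -> Euler-Mascheroni constant gamma.
def zeta(s, N=100000):
    # Euler-Maclaurin approximation of zeta(s) for real s near 1:
    # partial sum to N, plus integral, midpoint, and first Bernoulli corrections.
    tail = N ** (1 - s) / (s - 1) - 0.5 * N ** (-s) + s * N ** (-s - 1) / 12
    return sum(n ** (-s) for n in range(1, N + 1)) + tail

eps = 1e-3
# The poles contribute 1/eps and -1/eps, which cancel in the average;
# what remains is gamma up to O(eps^2).
pv = 0.5 * (zeta(1 + eps) + zeta(1 - eps))
print(pv)  # close to 0.5772156649...
```

The number of Euler–Maclaurin correction terms here is a choice of this sketch; with N = 10^5 the truncation error is far below the O(ε²) error of the principal-value average.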
Note 4. Two new methods of summation of divergent series, most notably the series of alternating factorials, are presented and analyzed in [
13]. These are Padé approximants and the delta transformation, a powerful nonlinear technique that works very well for strictly alternating series. The analysis is based on a new factorial series representation of the truncation error of the series of alternating factorials. Explicit expressions for the transformation errors of the Padé approximants and of the delta transformation are derived, and a subsequent asymptotic analysis rigorously proves the convergence of both the Padé and delta methods. Moreover, asymptotic estimates and other known numerical results support the conclusion that the delta transformation is superior to the Padé approximants.
Some other applications of factorial-type series to asymptotic series are developed and studied in [
39].
17. Conclusions and Problems
In this study, the methods of the theory of divergent series are used to analyze one of the examples of iterative summation given by Euler in Part II of his Foundations of Differential Calculus.
This example (Example 1 in
Section 3) leads to the definition of
the iterative Euler transform, which consists of an alternating application of the usual Euler transformation
, and a separate summation of some number of the initial terms of the successively arising series.
Analyzing Example 1 and a related Example 2, we introduce the
iterative Euler transformation , having an infinite sequence
as a parameter, in
Section 5 and
Section 6.
Corollary 2, our first main result, demonstrates that the Euler iterative summation
is equivalent to the Jakimovski summability method
, introduced in the 1950s, provided that the reals
and
satisfy the equality
of (
18)(B).
We also prove (Theorem 3) that is equivalent to (with the same parameters ), another iterative summation method, which involves the Hutton transform at each step instead of the Euler transform.
In addition, we establish Theorem 2, our second main result, which determines whether the series
is
summable to
of (
5)(A) in terms of the distribution of parameters
, which somewhat improves the earlier results in this area.
These are new results, and they make a significant contribution to summability theory.
The technique developed in this paper may lead to further progress in studies of various aspects of the summation of divergent series.
The following Problems 4–6 arise from our study. (Recall that Problems 1–3 were formulated and solved above in the course of our presentation.) These problems are unlikely to be solvable for fully arbitrary series; however, a solution may be possible for the series of alternating factorials.
Problem 4 (summation to the least term).
Suppose that the iterative summation of Section 5 is carried out so that for all m (as in Examples 1 and 2), whereas, for each , the number of separated terms is chosen so that the corresponding term of the series is the least (in absolute value) among all terms . Is the series summable to of (
5)
(A) with this method ? Problem 5 (summation to the best partial sum). A variant of the previous one: the same question, with defined so that the partial sum obtained is the best possible approximation of at this step of the iteration.
Problem 6 (summation to monotone increase).
A variant of the previous one: the same question, with defined to be the least satisfying (I)
, (II)
, and (III)
in Section 4.
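The "least term" idea behind Problem 4 can be illustrated on a standard stand-in rather than on the iterative method itself: the divergent Euler series Σ (−1)^k k! x^k, truncated right after the term of least modulus. This is only a sketch of the classical heuristic, not a solution of the problem.

```python
from fractions import Fraction
from math import factorial

# "Summation to the least term" for the divergent series sum_k (-1)^k k! x^k:
# truncate just after the term of least absolute value.
# (Illustrative heuristic only; not the iterative method of this paper.)
def least_term_sum(x, kmax=40):
    terms = [(-1) ** k * factorial(k) * x ** k for k in range(kmax)]
    k_star = min(range(kmax), key=lambda k: abs(terms[k]))  # least |term|
    return sum(terms[: k_star + 1]), k_star

s, k_star = least_term_sum(Fraction(1, 10))
print(k_star, float(s))  # -> 9 0.91545632
```

For x = 1/10 the terms decrease until k ≈ 1/x and then grow, so the least term occurs near k = 9; stopping there leaves an error comparable to the first omitted term (about 4 × 10⁻⁴ here).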