Next Article in Journal
Dunkl Linear Canonical Wavelet Transform: Concentration Operators and Applications to Scalogram and Localized Functions
Previous Article in Journal
A Hybrid Mathematical Framework for Dynamic Incident Prioritization Using Fuzzy Q-Learning and Text Analytics
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Article

Notes on Iterative Summation of Alternating Factorials

by
Vladimir Kanovei
*,† and
Vassily Lyubetsky
*,†
Institute for Information Transmission Problems of the Russian Academy of Sciences (Kharkevich Institute), 127051 Moscow, Russia
*
Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2025, 13(12), 1942; https://doi.org/10.3390/math13121942
Submission received: 20 April 2025 / Revised: 27 May 2025 / Accepted: 3 June 2025 / Published: 11 June 2025

Abstract

The Eulerian iterative method of the summation of divergent series, invented in Institutiones Calculi Differentialis, is studied. We demonstrate that the method is equivalent to the Karamata–Lototsky–Jakimovski summability method, introduced in the 1950s. We prove a new theorem on the Euler iterative summability of the series of alternating factorials. Ensuing summability corollaries are discussed.
MSC:
40C05; 40D25; 40G05; 40-03

1. Introduction

Each convergent series has a certain sum value, which is equal to the number to which the series converges. However, as early as in the 18th century, it was discovered that the concept of the sum is much broader than the convergence. Various examples of the summation of divergent series performed by mathematicians of the 17th and 18th centuries (see, e.g., [1,2] and chapters 1 and 2 in [3]) showed that many naturally occurring divergent series have definite sum values (similarly to convergent ones), achieved, generally, by rather natural manipulation with series by the rules valid for finite sums, without considering convergence as a preliminary condition. Summarizing these studies, Euler notes in [1], Part I, Section 109 the following:
“…we conclude that series of this kind, which are called divergent, have no fixed sums, since the partial sums do not approach any limit that would be the sum for the infinite series. This is certainly a true conclusion, since we have shown the error in neglecting the final remainder. However, it is possible, with considerable justice, to object that these sums, even though they seem not to be true, never lead to error. Indeed, if we allow them, then we can discover many excellent results that we would not have if we rejected them out of hand”.
(English translation taken from [4])
As well as [3], we refer to [5,6,7,8] for a modern account of those earlier summation attempts.
Remark 1. 
We let a n , with no limits specified, always mean n = 0 a n in this paper.
Somewhat later, already in the 19th century, the question of the summation of divergent series was fully developed on the basis of rigorous definitions of summation. These definitions (see [3]) revealed the exact mathematical content of the intuitive approach of Euler and other earlier mathematicians. On the other hand, these definitions of summation themselves arose largely as a result of the analysis of early examples of summation.
One of such example is the hypergeometric series of alternating factorials (also called the Wallis series), which appears in two closely related forms:
1 1 + 2 6 + 24 120 + = ( 1 ) k k ! , ( A ) 1 2 + 6 24 + 120 = 1 ( 1 ) k k ! = ( 1 ) k ( k + 1 ) ! ( B 1 ) = k = 1 ( 1 ) k + 1 k ! . ( B 2 )
Euler [2] and Part II, Section 10.III in [1] suggested several ways of the summation of (1). One of these methods (see Section 2) yields the value of the sum in the form of a definite integral, which is not expressible in elementary functions but is easily calculated with any necessary accuracy using standard numerical methods.
Another method consists of a special procedure of an iterative multi-step transformation of a given series with an ensuing separation of some numbers of initial terms of the intermediate series. Euler performs three steps of this iterative procedure and obtains an approximate value of the sum of the series (1), which is quite close to the “exact” value given by the first method but does not consider the issue of convergence to the “exact” value with an infinite number of steps. This method was called “a more remarkable, though less precise calculation” in Hardy Section 2.6 in [3].
The analysis of this second, iterative summation method is the main task of this article. We will show that it belongs to the category of linear summation methods, more specifically, to the category of triangular matrix methods, well known and extensively studied in modern theory of the summability of divergent series. We investigate the convergence of different variants of this summation by Euler.
We will also show that the summation method in question was rediscovered in the late 1950s as the Jakimovski method [9], Section 1.3, on the basis of a completely different background and in a different form, and Euler’s priority for more than 200 years seems to have gone unnoticed. The history of the Jakimovski method began in an article [10] by Karamata, where a triangular summation method was introduced, such that the corresponding matrix coefficients were identical to Stirling numbers of the first kind. This method was reintroduced by Lototsky [11] and became known in the late 1950s as the Karamata–Stirling, or Lototsky method, denoted as S ( λ ) or KS ( λ ) , with λ a real parameter. Finally, Jakimovski [12] defined the transformation F , d n named after him, with d n 0 n < being a sequence of real parameters. That was a far-reaching generalization since S ( λ ) is identical to F , d n = n 1 λ .
To finish the Introduction, we may note that various aspects of the summation of series (1) remain the subject of active modern research, see, for example, [13,14].

2. The “Exact” Value of the Sum of Alternating Factorials

Euler’s own approach to summing divergent series was outlined in [1], Part I, Section 111 (English translation taken from [4]):
”Let us say that the sum of any infinite series is a finite expression from which the series can be derived. <…> With this understanding, if the series is convergent, the new definition of sum agrees with the usual definition. Since divergent series do not have a sum, properly speaking, there is no real difficulty which arises from this new meaning. Finally, with the aid of this definition we can keep the usefulness of divergent series and preserve their reputations.”
This results, in particular, in the Abel summation method (see [3]), according to which a series a k has (A) sum s in the case when the power series a k x k converges at small x (i.e., in some neighborhood of zero) to the analytical function S ( x ) such that S ( 1 ) = s .
However, as far as series (1) is concerned, the corresponding power series
( 1 ) k k ! x k = 1 x + 2 x 2 6 x 3 + 24 x 4 120 x 5 +
diverges at any x 0 , i.e., the definition of (A) sum as above does not immediately work in this case. Nevertheless, Euler applied this idea in [2], Sections 19–20 (see also Sections 2.4–2.5 in [3], and [7]) via series (2), which, however, is interpreted as the asymptotic series of the function
S ( x ) = 0 e w 1 + w x d w = e 1 / x x 1 / x e ξ ξ d ξ = e 1 / x x Ei ( 1 x ) ,
where ξ = w + 1 x , and
Ei ( y ) = y e t t d t = y e t t d t
is the exponential integral function [15]. This allows us to formally (and heuristically) expand S ( x ) as follows:
S ( x ) = 0 ( 1 ) k x k w k e w d w = ( 1 ) k x k 0 w k e w d w = series ( 2 ) ,
since 0 w k e w d w = k ! . Now, with x = 1 , we obtain the following evaluation of (1),
S : = S ( 1 ) = 0 e w 1 + w d w = 0.59634 , the Euler–Gompertz constant ( A ) accordingly , 1 S = 0.40365 ( B ) ,
which we will call the “exact” values of the divergent series (1), respextively, (A) and (B). Euler gives the value 0.4036524077 for 1 S in [1], Part II, Section 10. However, here, the last four decimals were found to be erroneous by Mascheroni [16], p. 11, who gave the true value 1 S = 0.40365263767 , also cited by Kowalewski (editorial) in [17].
This summation technique can be given a mathematically rigorous form based on the summation method (B*) according to Hardy [3], Section 8.11. This method defines a value s to be the (B*) sum of a series a k if
(a)
The series a k t k / k ! = α ( t ) converges in some neighborhood of 0;
(b)
The function α has a regular analytic continuation to [ 0 , + ) ;
(c)
α ( 1 ) = s .
  • Thus, the value S in (5) is the (B*) sum of the series (5)(A).
Other Eulerian summation arguments, leading to the same value (5)(A), are presented in Section 2.4 of [3], [7], and [18]. For instance, let y = ( 1 ) k k ! x k . After formal differentiation, we arrive at equation x 2 y + ( x + 1 ) y = 1 . Its solution, with the initial condition y = 1 at x = 0 , has the form
y = 0 + e w 1 + w x d w .
Taking x = 1 here leads to (5)(A), as required.
Now, having the exact (in a certain conditional sense, of course) values (5) of the sums of the divergent series (1)(A),(B), let us proceed to another Eulerian method for summing the series of alternating factorials.

3. Iterative Summation Procedure by Euler

Each step of this Euler procedure includes a transformation, later named after him and also known as the Euler–Knopp summation [3], which translates a given series a k into b n , where
b n = 2 n 1 k = 0 n n k a k ,
given (in a slightly different notation), e.g., in [1], Part II, Section 3 or Section 8 for alternating series.
It is known from e.g., [3], Section 8.3 that every convergent series is converted by (6) again into a convergent series, and with the same sum. (This property is called regularity.) At the same time, there are also divergent series that transform into convergent ones. For example, the series ( 2 ) k is converted to the series ( 1 ) n 2 n 1 . This progression is summed to the value 1 / 3 , exactly equal to the “sum” of the original series, formally calculated according to the rule of infinitely decreasing geometric progression.
Generally, any progression r k is transformed by (6) to the progression t k , where t = 1 + r 2 , and we have | t | < 1 in the range 3 < r < 1 .
The transformation (6) can be applied to a given progression r k multiple times. In this case, the resulting series after m transformations will converge provided the inequality ( 2 m + 1 1 ) < r < 1 holds, which tends to be limited in the left semi-axis ( , 1 ) .
If we now consider the series (1) as a kind of progression, the denominator of which tends to via negative integers, then it becomes rather clear that the series (1) will not converge after any finite number of transformations using the Formula (6); this can be verified directly as well. The idea that promises success also becomes clear: apply transformation (6) sequentially infinitely many times.
However, the result of such an infinite conversion will be the series of all zeros, and there is no benefit from this. Euler overcomes this difficulty with a special trick. Namely, he directly sums up several initial terms of the series and, stashing the resulting sum, applies the transformation once again to the remainder of the series. This results in the following computation in [1], Part II, Section 10.III, also presented in [19], Section 1047, and [3], Section 2.6.
Example 1. 
Working with the truncated series 1 2 + 6 24 + 120 as in (1)(B1), Euler evaluates it as follows:
1 2 + 6 24 + 120 720 + 5040 40,320 + = ( A ) Euler transforming the series by ( 6 ) 1 2 1 4 + 3 8 11 16 + 53 32 309 64 + 2119 128 16,687 256 + 148,329 512 1,468,457 2 10 + = ( B ) separating two terms framed 1 4 + 3 8 11 16 + 53 32 309 64 + 2119 128 16,687 256 + 148,329 512 1,468,457 2 10 + = ( C ) transforming the remainder in brackets 1 4 + 3 16 5 64 + 21 256 99 2 10 + 615 2 12 4401 2 14 + 36,585 2 16 342,207 2 18 + 3,565,321 2 20 = ( D ) separating two more terms framed 23 64 + 21 2 8 99 2 10 + 615 2 12 4401 2 14 + 36,585 2 16 342,207 2 18 + 3,565,321 2 20 40,866,525 2 22 + = ( E ) transforming the remainder 23 64 + 21 2 9 15 2 12 + 159 2 15 429 2 18 + 5241 2 21 26,283 2 24 + 338,835 2 27 2,771,097 2 30 + . ( F )
Thus, transformation (6) is applied three times, with separate summations of two, and again two initial terms before the second and third transformations, respectively.
Finally take eight initial terms in parentheses in the last row (i.e., those framed in line (F)) along with the number 23 64 and, performing calculations, find the value
U iter = 0.40082055 , rounded to eight places .
(Erroneously 0.40082038 in the original publication [1].) This is rather close to the “exact” value 1 S = 0.40365 of (5)(B). This ends the calculation.
Two things related to Example 1 should be mentioned.
First, the correction in (7) was made by Gerhard Kowalewski in [17].
Second, in fact Euler did not explicitly mention taking precisely eight terms in line F, rather speaking about four + some more terms in [1], Part II, Section 10.III. However, it is only the choice of exactly eight terms that is compatible with the final value (7) (in either version), see Table 7 in Section 15 for numerical details.
Example 2. 
One can maintain a Euler-style computation for the full series ( 1 ) n n ! = 1 1 + 2 6 + 24 120 + 720 of (1)(A), instead of the truncated one 1 2 + 6 24 + of (1)(B), closely following the numerical content of Example 1:
1 1 + 2 6 + 24 120 + 720 5040 + 40 , 320 = ( A ) separating one term , framed 1 + 1 + 2 6 + 24 120 + 720 5040 + 40,320 = ( A ) Euler transforming the series in brackets by ( 6 ) 1 + 1 2 + 1 4 3 8 + 11 16 53 32 + 309 64 2119 128 + 16,687 256 148,329 512 + 1,468,457 2 10 = ( B ) separating 2 more terms 3 4 + 3 8 + 11 16 53 32 + 309 64 2119 128 + 16,687 256 148,329 512 + 1,468,457 2 10 = ( C ) transforming the remainder in brackets 3 4 + 3 16 + 5 64 21 256 + 99 2 10 615 2 12 + 4401 2 14 36,585 2 16 + 342,207 2 18 3,565,321 2 20 + = ( D ) separating two more terms 41 64 + 21 2 8 + 99 2 10 615 2 12 + 4401 2 14 36,585 2 16 + 342,207 2 18 3,565,321 2 20 + 40,866,525 2 22 = ( E ) transforming the remainder 41 64 + 21 2 9 + 15 2 12 159 2 15 + 429 2 18 5241 2 21 + 26,283 2 24 338,835 2 27 + 2,771,097 2 30 . ( F )
Now, take eight initial terms in parentheses in F’ along with the number 41 64 . This returns V iter = 1 U iter = 0.59917944 , close to S = 0.59634 of (5)(A).
Another rather similar but somewhat differently arranged computation in [2], Sections 13–16, (commented in [7], Section III) yields an approximate value 38 , 015 / 65 , 536 = 0.58006286621 for the whole series ( 1 ) n n ! of (1)(A) after three steps.

4. Why 2–2–8?

It is not immediately clear why Euler cuts exactly two, two, and eight initial terms of the given and intermediate series in the process of calculation in Example 1. Indeed, in the two first cases, Euler cuts to the least term. But this is definitely not the case in the last cutting of eight terms, because in fact the least term (in absolute value) is the sixth one 26 , 283 / 2 24 = 0.00157 not the eighth one 2 , 771 , 097 / 2 30 = 0.00258 , as Table 5 in Section 15 shows. Moreso terms 4, 5, 7 are smaller than the eighth term as well.
It does not look either as if Euler takes the best possible approximation, since taking seven, five, six, or three terms in brackets in the last line of calculation in Example 1 gives slightly better, than (7), approximations of the “exact” value 1 S of (5), see Table 7 in Section 15.
Generally, Euler did not go into the details of his choice of the number of the initial terms to separate at each consecutive step. There is not much said about this in [3,7] either. Our guess is that Euler separated as many initial terms as necessary for the remainder:
(I)
To be alternate;
(II)
To have absolute values monotonously increase;
(III)
To start with a positive term.
This is definitely true for the first and second separations in Example 1 and almost true for the final separation of eight terms, because, as demonstrated above, Euler should have separated six (rather than eight) terms to satisfy (I), (II), and (III). Yet the increment of the eighth term over the seventh one (in absolute values) is about 0.000056…, as Table 5 shows, which is too small a fraction of the absolute values of terms 7 and 8 themselves (respextively, 0.002524… and 0.002580…). We may guess that Euler hesitantly decided to take two more terms because the increase after the eighth term becomes much more transparent.
Anyway, we can ask how can the process of summation in Examples 1 and 2 be continued so that the exact results of (5) are obtained in the limit? The idea is generally clear: carry out an infinite number of steps. Yet it is unclear how many initial terms should be allocated and separately summed up before the transformation at each step. In order to provide various possibilities here, as well as to allow some generalizations, we will give the summation process a form in which the execution of each step will depend on special parameters. After that, an accurate analysis of the results will be possible. This will be the topic of the next section.

5. General Form of the Iterative Euler Summation Process

It is known that the Euler transformation, given by the Formula (6), is a special case (for the parameter value q = 1 ) of the transformation ( E , q ) , which translates a given series a k into a series b n , the terms of which are defined by the formula
b n = 1 ( 1 + q ) n + 1 k = 0 n n k q n k a k .
This transformation is also named after Euler. It is thoroughly analyzed in Chapter 8 and 9 of [3].
Let us now consider the following iterative summation process, the parameters of which are two sequences { L m } and { Q m } of reals Q m 1 and integers L m 1 , m = 1 , 2 , 3 , . The mechanics of the process consists of the sequential application of the transformation ( E , q ) , such that for the mth step, we take q = Q m , whereas the numbers L m determine the amount of terms separated before the application of ( E , Q m ) .
A given series 
a k = a 0 + a 1 + a 2 + a 3 + . (As above, ∑ with no limits means 0 .) We put b k ( 0 ) = a k for all indices k, the initial iteration.
Step  m , part 1, 
m = 1 , 2 , 3 , . Given a series k b k ( m 1 ) obtained at the previous step (or the initial series in case m = 1 ), we separate first L m terms b 0 ( m 1 ) , b 1 ( m 1 ) , , b L m 1 ( m 1 ) of k b k ( m 1 ) . Let b m 1 be their sum (with the understanding that b m 1 = 0 in case L m = 0 ) and re-enumerate the remainder of b k ( m 1 ) as k a k ( m ) . In other words,
b m 1 = b 0 ( m 1 ) + b 1 ( m 1 ) + + b L m 1 ( m 1 ) and a k ( m ) = b L m + k ( m 1 ) , k = 0 , 1 , 2 , 3 , .
Step  m , part 2, 
m = 1 , 2 , 3 , . Given a series k a k ( m ) obtained at Step m, part 1, we apply the transformation ( E , Q m ) by Formula (8), obtaining the transformed series n b n ( m ) , the next iteration.
Conclusion. 
The result of the process is the series of the sums of separated terms,
b m = b 0 + b 1 + b 2 + .
If this series converges (in the usual sense) to a finite or infinite value S = b m , then we say that S is the ( EI , L m , Q m ) sum of the given series a k = a 0 + a 1 + a 2 + , which is said to be ( EI , L m , Q m ) summable accordingly. (EI from Euler iterative.)
The Eulerian calculation given in Example 2 naturally fits into the four first steps of the summation scheme ( EI , L m , Q m ) applied to the series ( 1 ) k k ! = 1 1 + 2 6 + 24 of (1)(A), with the following initial parameter values:
L m 1 m 4 = 1 , 2 , 2 , 8 and Q m 1 m 3 = 1 , 1 , 1 .
In terms of this summation process, the final value V iter of Example 2 is equal to
b 0 + b 1 + b 2 + b 3 = 1 1 4 7 64 + b 3 = 0.5991794476 .
Now let us assume that the tuples of numbers (11) are somehow extended to infinite sequences { L m } and { Q m } . Then, the summation ( EI , L m , Q m ) of the series (1) can be considered as a continuation of the Euler summation and the resulting value (7) as a partial sum (12) of the resulting series (10). Hence, the following problem arises:
Problem 1. 
Find out how to extend the tuples of numbers (11) to infinite sequences { Q m } and { L m } so that the ( EI , L m , Q m ) sum of ( 1 ) k k ! = 1 1 + 2 6 + 24 is equal to the “true” value S of (5)(A). Additionally, find out how to define such an extension so that, on the contrary, the ( EI , L m , Q m ) sum of ( 1 ) k k ! does not coincide with S .

6. Simplified Form of the Iterative Euler Process

We will consider this task, but first let us introduce a useful simplification that is possible without significant damage to its content. This simplified case is L m = 1 , m . To understand the relationship with the general case, we define a new sequence { q m } for a given pair of infinite sequences { Q m } and { L m } , as follows:
{ q m } = 0 , , 0 L 1 1 0 s , Q 1 , 0 , , 0 L 2 1 0 s , Q 2 , 0 , , 0 L 3 1 0 s , Q 3 , .
In other words, L m 1 zeros are inserted before each Q m . In particular, we have the following finite sequence of parameters q m ( 1 m 11 ) out of (11):
{ q m } = 1 , 0 , 1 , 0 , 1 , 0 , 0 , 0 , 0 , 0 , 0 .
Definition 1. 
In the remainder, we denote the summation process ( EI , 1 , q m ) as ( EI , q m ) .
It is clear that the ( E , 0 ) transformation does nothing with the series according to the definition of (8). Therefore, the inserted zeros in the sequence { q m } of the form (13) ensure in the process ( EI , q m ) a separation of one term for each zero from the row obtained in the step before this block of zeros. This means that the sequence of partial sums of the resulting series (10) in the process ( EI , L m , Q m ) is a sub-sequence of partial sums of the resulting series in the process ( EI , q m ) . This implies the following:
Corollary 1. 
If { Q m } , { L m } , { q m } satisfy (13), then the summation method ( EI , L m , Q m ) is included in ( EI , q m ) in the sense that any series summed to a finite or infinite sum S by the first method is summed to the same sum S by the second method.
The corollary does not imply the inverse reduction. Nevertheless, now Problem 1 can be reformulated as follows:
Problem 2. 
Find out how how to extend the tuple of numbers (14) to an infinite sequence { q m } m 1 so that the ( EI , q m ) sum of ( 1 ) k k ! is equal (or, conversely, not equal) to the “true” value S of (5)(A).

7. Iterated Euler Method as a Triangular Method

Linear summation methods include [3,9] methods that define the sum of a given series a k as the sum, in the sense of ordinary convergence, of some other series b n (if it converges), the terms of which are given by equalities:
b n = k = 0 c n k a k ,
where the coefficients c n k are determined by the method and do not depend on the choice of a given series a k . If in addition c n k = 0 whenever k > n (i.e., b n depends only on a k with k n ), then the summation method is called triangular, and in this case,
b n = k = 0 n c n k a k .
Dealing with this class of summation methods is simplified by symbolically using the displacement operator  E to shift the index of the terms a k of a given series, which, by definition (see, e.g., [12], Section 5, where it is denoted by E), formally acts in such a way that
E · a k = a k + 1 , and generally E n · a k = a k + n .
This defines the formal action of any polynomial P ( E ) , for example,
( 1 + E ) 3 · a 5 = a 5 + 3 a 6 + 3 a 7 + a 8 .
Accordingly, the equality (15) for a triangular method can be presented as
b n = P n ( E ) · a 0 , where P n ( E ) = k = 0 n c n k E k .
The next theorem shows that the summation method ( EI , q n ) belongs to the category of triangular methods and, moreover, gives exact values of the corresponding coefficients. This identification of the coefficient formula will allow us not only to prove our main result on the identity of the iterative Euler and the Jakimovski methods (Corollary 2) but also to carry out the calculations presented in Section 15.
Theorem 1. 
The summation method ( EI , q n ) is a triangular method, which acts so that ( EI , q n ) ( a n ) = b n , where b n is defined by the following Formula (18).
b 0 = a 0 , and b n = E 1 + d n k = 1 n 1 E + d k 1 + d k · a 0 for n 1 , ( A ) where d k = ( 1 + q 1 ) ( 1 + q k ) 1 , hence q k = 1 + d k 1 + d k 1 1 , ( B ) in particular , d 1 = q 1 , b 1 = a 1 1 + d 1 = a 1 1 + q 1 . ( C )
We remind that an empty product, like k = 1 0 , is always equal to 1. Then, we have d 0 = 0 by (B). With this understanding, we have E + d 0 1 + d 0 = E , and hence the second formula in (A) can be rewritten as
b n = 1 1 + d n k = 0 n 1 E + d k 1 + d k · a 0 , which formally implies b 0 = a 0 .
Proof. 
Recall that ( EI , q n ) is ( EI , 1 , q n ) . Thus let us come back to the summation process ( EI , L m , Q m ) defined in Section 5 under the assumptions Q m = q m and L m = 1 for all m 1 . This process begins with k b k ( 0 ) : = k a k and introduces intermediate series k a k ( m ) and k b k ( m ) by induction on m 1 . We are going to prove the following equations:
b k ( m ) = 1 1 + d m = 0 m 1 E + d 1 + d E + d m 1 + d m k · a 0 ( m 0 ) ,
a k ( m ) = 1 1 + d m 1 = 0 m 1 E + d 1 + d E + d m 1 1 + d m 1 k · a 0 ( m 1 ) .
Case m = 0 . As d 0 = 0 , the equality (20) takes the form b k ( 0 ) = E k · a 0 , which holds by (16) since b k ( 0 ) = a k by construction and d 0 = 0 .
Step m , part 1. Suppose that m 1 and the equality (20) holds for the upper index m 1 . Now, (21) follows from (20) (for m 1 ) because a k ( m ) = b k + 1 ( m 1 ) by construction:
a k ( m ) = 1 1 + d m 1 = 0 m 2 E + d 1 + d E + d m 1 1 + d m 1 k + 1 · a 0 = 1 1 + d m 1 = 0 m 1 E + d 1 + d E + d m 1 1 + d m 1 k · a 0 .
Step m , part 2. Let us check that then (20) holds for m itself. Recall that k b k ( m ) is obtained from k a k ( m ) by ( E , q m ) by means of Formula (8). Thus,
b n ( m ) = 1 ( 1 + q m ) n + 1 k = 0 n n k q m n k a k ( m ) = = 1 ( 1 + q m ) n + 1 ( 1 + d m 1 ) = 0 m 1 E + d 1 + d k = 0 n n k q m n k E + d m 1 1 + d m 1 k · a 0 = = 1 ( 1 + q m ) n ( 1 + d m 1 ) = 0 m 1 E + d 1 + d q m + E + d m 1 1 + d m 1 n · a 0 = = 1 1 + d m = 0 m 1 E + d 1 + d E + d m 1 + d m n · a 0 ,
because ( 1 + q m ) ( 1 + d m 1 ) = 1 + d m and q m + E + d m 1 1 + d m 1 = E + d m 1 + d m 1 .
Thus, we have obtained (20). This completes the proof of (20) and (21).
Now, to accomplish the proof of the lemma, it remains to note that b n = b 0 ( n ) = 1 1 + d n = 0 n 1 E + d 1 + d · a 0 by (20), which coincides with (19), as required. □
Remark 2. 
We can completely remove E from the formulation and proof of the theorem without any harm to its content by representing (18)(A) in the form
b 0 = a 0 , and b n = k = 0 n 1 c n k a k + 1 for n 1 , where k = 0 n 1 c n k x k = 1 1 + d n = 1 n 1 x + d 1 + d ,
and the rightmost equality serves as a definition of the coefficients c n k .

8. Iterated Euler Method = Jakimovski Method

As defined in [12], if d n n 1 is a sequence of numbers d n 1 , then the triangular summation method defined by (18)(A), or equivalently by (22), is called the Jakimovski summation method and denoted by F , d n .
Condition d n 1 is necessary for F , d n in view of the Formulas (18) and (22).
Corollary 2. 
The iterative Euler method ( EI , q n ) is identical to the Jakimovski method F , d n , assuming that q n and d n satisfy (18)(B).
See [20] regarding the identity of the iterative Euler and Jakimovski summation from the perspective of nonstandard analysis.
Note that the following string of parameters d n arises from (14) by rule (18)(B):
d n 1 n 11 = 1 , 1 , 3 , 3 , 7 , 7 , 7 , 7 , 7 , 7 , 7 .
Problem 3 
(a reformulation of Problem 2). Find out how to extend the tuple of numbers (23) to an infinite sequence { d m } m 1 so that the F , d m sum of ( 1 ) k k ! is equal (or, conversely, not equal) to the “true” value S of (5)(A) ? —solved in Section 9 below.
We may note that (18)(A)–(22) is the series-to-series form of the Jakimovski summation F , d n . A somewhat different sequence-to-sequence form
B n = = 1 n T + d 1 + d · A 0 , where T · A k = A k + 1 ,
also occurs in the publications on divergent series, e.g., [12], Section 2.
To see the connection here, assume that F , d n ( a n ) = b n , put A n = a 0 + + a n , B n = b 0 + + b n , and prove by induction that then (24) holds. Indeed (the basis n = 0 ), B 0 = b 0 and A 0 = a 0 , whereas the empty product = 1 0 in (24) is equal to 1. To carry out the step, note that B n + 1 B n = b n + 1 . On the other hand, (24) implies that
B n + 1 B n = = 1 n T + d 1 + d T + d n + 1 1 + d n + 1 1 · A 0 = 1 1 + d n + 1 = 1 n T + d 1 + d ( T 1 ) · A 0 .
However ( T 1 ) · A 0 = A 1 A 0 , and T · ( A 1 A 0 ) = A + 1 A = a + 1 = E · a 1 . Thus,
B n + 1 B n = 1 1 + d n + 1 = 1 n E + d 1 + d · a 1 .
But this is precisely the value of b n + 1 by (18)(A). This completes the inductive step.

9. About the Jakimovski Summation

A number of results on the summability method F , d n , in the context of the theory of divergent series, were obtained in, e.g., [21,22,23,24,25,26,27,28,29]. Of these, we present here the following result characterizing the region of summability depending on the comparison of sequences d n . See [9], Section 1.3 for a more substantial review.
Proposition 1 
([29]). Assume that d n n 1 and d n n 1 are sequences of d n , d n > 1 , such that n 1 | 1 + d n | = + and, for some N, 0 1 + d k 1 + d n 1 holds for all n N and 1 k n . Then, F , d n is the summability of any series and implies its F , d n —the summability to the same value.
Some results are known concerning the connections of the Jakimovski method with other summation methods. In particular, Theorem 5.4 in [12] states that, under certain conditions (including d n 1 and lim d n = + ), the method F , d n includes Euler’s summation ( E , q ) for all q > 0 . There are also some known connections with the Borel method, see, for instance, ref. [30].
A substantial direction of the research on the Jakimovski summation method has been related to the case of the linear distribution of the nodes d n , i.e.,
d n = A n + B ( n 1 ) , A > 0 and A + B > 1 ( to achieve d n > 1 , n ) ,
including the following most notable summability methods:
Karamata–Stirling, KS ( λ ) = F , d n = n 1 λ , 0 < λ = Const ,
Lototsky, L = KS ( 1 ) = F , d n = n 1 ,
Martic, S α β = F , d n = α + n 1 β , α , β = Const ,
  • and some more, see [9,10,11,31,32,33]. Of the typical questions in the theory of summability considered in connection with these methods, we are most interested in the summability of Eulerian series (1) and, generally, (2). In this direction, studies [24,31,32,33] and others have demonstrated that the key factor in determining the F , A n + B sum of ( 1 ) k k ! x k is the value of the coefficient A compared to the product x log 2 . In particular, the following was established in [31], Sections 4 and 5 in the case λ = 1 and outlined in [24] for the general case.
Proposition 2. 
If λ > 0 and x 0 , then the series ( 1 ) k k ! x k
(i)
Is summable by KS ( λ ) = F , d n = n 1 λ to the value S ( x ) of (3) in case λ 1 x log 2 ;
(ii)
Is not summable by KS ( λ ) in case λ > 1 x log 2 .
The next theorem gives a close, and even somewhat stronger, result.
Theorem 2. 
Assume that A > 0 , x 0 , and d n = A n 1 for all n beginning with some N and d n 1 for all n. Then, the series ( 1 ) k k ! x k
(i)
Is F , d n summable to S ( x ) of (3) in case x log 2 A ;
(ii)
Is not F , d n summable in case 0 < A < x log 2 .
Example 3. 
Let x = 1 . By Theorem 2, the series ( 1 ) k k ! is F , d n = 2 n 1 summable but not F , d n = n / 2 1 summable.
Theorem 2 will be proved below in Section 11, Section 12, Section 13 and Section 14.
Now, we make use of it to immediately solve the above formulated problems.
Proof. 
[Solution of Problem 3 and thereby Problems 2 and 1 too] Take x = 1 and A = 1 (basically, any A log 2 = 0.693 works as well) and put d n = A n 1 for all n 12 , keeping d n , 1 n 11 , defined by (23). The extension of (23) takes the form
d n n 1 = 1 , 1 , 3 , 3 , 7 , 7 , 7 , 7 , 7 , 7 , 7 , 11 , 12 , 13 , ,
whereas the corresponding extension of (14) takes the form
q n n 1 = 1 , 0 , 1 , 0 , 1 , 0 , 0 , 0 , 0 , 0 , 0 , 1 2 , 1 12 , 1 13 , ,
where, by (18)(B),
q 12 = 1 + d 12 1 + d 11 1 = 12 8 1 = 1 2 and q n = 1 + d n 1 + d n 1 1 = n n 1 1 = 1 n 1
for all n 13 . In this case, ( 1 ) k k ! is F , d n summable to S = S ( 1 ) by Theorem 2(i).
Then, take A = 0.5 (or just any 0 < A < log 2 ) and put d n = A n 1 for all n 12 . Then, the series ( 1 ) k k ! is not F , d n summable to S by Theorem 2(ii). □
Remark 3. 
Theorem 2 implies Proposition 2, under the following extra assumption: ( ) if A = x log 2 exactly, then A 1 .
To prove this reduction, we consider three cases. Note that the extra assumption ( ) of Remark 3 is related to Case 3 only. Also note that ( ) definitely holds for x = 1 .
Case 1: A = 1 λ > x log 2 . Take any A satisfying A > A > x log 2 . Then, ( 1 ) k k ! x k is F , d n = A n 1 summable by Theorem 2(i). But A n A > d n = A n 1 for all but finite n. Therefore, ( 1 ) k k ! x k is still F , d n = A n A summable by Proposition 1. However, by definition, this is exactly the KS ( λ ) summability.
Case 2: 0 < A = 1 λ < x log 2 . A symmetric argument with A < A < x log 2 works.
Case 3: A = 1 λ = x log 2 . Then, A 1 by the extra assumption ( ) . It follows that n 1 λ = A n A A n 1 , and hence F , A n 1 summability implies F , A n A = KS ( λ ) summability, as required.

10. Jakimovski Method and Iterative Hutton Summation

The following point attracts attention: why is the result of the Eulerian iterative summation ( EI , q n ) determined via F , d n as in Corollary 2 not by the numbers q n themselves but by the derived reals d n ? Answering this question, we will show that formula of the method F , d n is obtained from an iterative summation process that has the numbers d n themselves as parameters and differs from ( EI , · ) only in that each step uses not the Euler transform ( E , · ) but another (and generally simpler) transformation.
Given d 1 , we define a transformation ( H , d ) , which translates a given series a k into a series b n , the terms of which are defined by the formula
b 0 = a 0 1 + d , and b m = a m 1 + d + d a m 1 1 + d for m 1 .
Note that if d = 1 2 , then ( H , d ) is identical to what is defined as ( Hu , 1 ) and related to Hutton by Hardy [3], p. 22. Thus, we may call it the Hutton transform. Also note that the case d = 1 is excluded as it implies the zero denominator of fractions in (28).
The associated iterative method ( HI , d n ) is defined following the scheme in Section 5, with L n = 1 for all n. Namely, b k ( 0 ) = a k is the given series. Then, we proceed by induction. If m 1 and b k ( m 1 ) is defined, then we put b m 1 = b 0 ( m 1 ) , let a k ( m ) = b k + 1 ( m 1 ) for all k 0 , and let the next step b k ( m ) be the ( H , d m ) transform of a k ( m ) .
Finally, b m is the ( HI , d m ) transformation of a k . If b m converges (in the usual sense) to a finite or infinite value S = b m , then we say that S is the ( HI , d m ) sum of the given series a k = a 0 + a 1 + a 2 + , which is said to be ( HI , d m ) summable accordingly.
Theorem 3. 
The numbers a m , b m , defined as above, satisfy equalities (18)(A). Therefore, the iterative Hutton transform ( HI , d m ) is identical to the Jakimovski transform F , d m .
Proof (sketch). 
The following equalities are easily provable by induction:
b n ( m ) = E n k = 1 m E + d k 1 + d k · a 0 for n 1 and any m ,
a n ( m ) = b n + 1 ( m 1 ) = E n + 1 k = 1 m 1 E + d k 1 + d k · a 0 for m 1 and any n .
We conclude that, by (30), the reals b m = a 0 ( m ) 1 + d m satisfy (18)(A). □
Example 4. 
Let us recalculate Example 2 using the method ( HI , d m ) with the sequence d n 1 n 11 = 1 , 1 , 3 , 3 , 7 , 7 , 7 , 7 , 7 , 7 , 7 of (23).
b k ( 0 ) = a k = 1 1 + 2 6 + 24 120 + 720 — the given series;
b 0 = b 0 ( 0 ) = 1 , a k ( 1 ) = 1 + 2 6 + 24 120 + 720 ;
b k ( 1 ) = 1 2 + 1 2 2 + 9 48 + 300 — by (28) with d 1 = 1 ;
b 1 = b 0 ( 1 ) = 1 2 , a k ( 2 ) = 1 2 2 + 9 48 + 300 ;
b k ( 2 ) = 1 4 3 4 + 7 2 39 2 + 126 — by (28) with d 2 = 1 ;
b 2 = b 0 ( 2 ) = 1 4 , a k ( 3 ) = 3 4 + 7 2 39 2 + 126 ;
b k ( 3 ) = 3 16 + 5 16 9 4 + 135 8 — by (28) with d 3 = 3 ;
b 3 = b 0 ( 3 ) = 3 16 , a k ( 4 ) = 5 16 9 4 + 135 8 ;
b k ( 4 ) = 5 64 21 64 + 81 32 — by (28) with d 4 = 3 ;
b 4 = b 0 ( 4 ) = 5 64 , a k ( 5 ) = 21 64 + 81 32 ;
b k ( 5 ) = 21 2 9 + 15 2 9 — by (28) with d 5 = 7 ;
b 5 = b 0 ( 5 ) = 21 2 9 , a k ( 6 ) = 15 2 9 ;
b k ( 6 ) = 15 2 12 — by (28) with d 6 = 7 , b 6 = b 0 ( 6 ) = 15 2 12 , et cetera.
Thus, we obtain b n = 1 1 2 + 1 4 3 16 + 5 64 21 2 9 + 15 2 12 , which is identical to the computation of Example 2. However, we may note that the calculation by (28) is much simpler than the one that uses (8) as in Example 2. This may provide a purely computational advantage.
See also [20] regarding iterative Euler and Hutton transforms and summability from the point of view of nonstandard analysis.

11. Theorem 2: Evaluation of the Remainder

In this section, we begin the proof of Theorem 2.
We adopt the following notation and global assumptions:
(∗) (a)
A sequence d n n 1 of reals d n 1 is fixed;
   (b)
1 < d n + , and 1 1 + d n = + ;
   (c)
a k ( x ) = ( 1 ) k k ! x k is a given series;
   (d)
b k ( x ) = F , d n a k ( x ) is its F , d n transform;
   (e)
x > 0 is fixed and S ( x ) is defined by (3);
   (f)
for any n, R n ( x ) = S ( x ) b 0 ( x ) + + b n 1 ( x ) is the formal remainder;
   (g)
δ n x ( ξ ) = e ξ ξ m = 1 n 1 1 ξ x 1 + d m ;
   (h)
If 0 a < b + then put I n x ( a , b ) = a b δ n x ( ξ ) d ξ .
Lemma 1. 
R n ( x ) = e 1 / x x I n x ( 1 / x , + ) for all n.
Proof. 
We begin by analyzing the auxiliary F , d n sum of the geometric series k z k . In this case, the displacement operator E of (16) is identical to the product by z; thus, E = z in a sense. Therefore, if β n ( z ) = F , d n z k is the transformed series, then we have
β 0 ( z ) = 1 and β n ( z ) = 1 1 + d n m = 1 n 1 z + d m 1 + d m for n 1
by (18)(A), and, subsequently, the auxiliary remainder satisfies
ρ n ( z ) : = 1 1 z β 0 ( z ) + + β n 1 ( z ) = 1 1 z m = 1 n 1 z + d m 1 + d m ,
by an elementary induction on n based on (31). Indeed,
ρ n + 1 ( z ) = ρ n ( z ) β n ( z ) = 1 1 z z + d n 1 + d n m = 1 n 1 z + d m 1 + d m = 1 1 z m = 1 n z + d m 1 + d m ,
as required, and the proof of (32) is accomplished. In particular,
ρ n ( w x ) = 1 1 + w x m = 1 n 1 d m w x 1 + d m .
Now, we recall that
a k ( x ) : = ( 1 ) k k ! x k = 0 + ( w x ) k e w d w and S ( x ) = 0 + e w 1 + w x d w
by (3), and hence easily
R n ( x ) = 0 + e w 1 + w x m = 1 n 1 d m w x 1 + d m d w .
Substituting w = ξ x 1 , we obtain
R n ( x ) = 1 / x + e ξ + 1 / x ξ x m = 1 n 1 1 + d m ξ x 1 + d m d ξ = e 1 / x x 1 / x + e ξ ξ m = 1 n 1 1 + d m ξ x 1 + d m δ n x ( ξ ) d ξ ,
as required. □
Lemma 2. 
Assume that, in addition to (*) above, 0 < a < b < + . Then, δ n x ( ξ ) 0 with n uniformly on the interval a ξ b . In other words, for every ε > 0 , there exists N such that for all n N and all ξ in the interval a ξ b , we have | δ n x ( ξ ) | < ε .
Proof. 
Quite obviously e ξ ξ C : = e a a in case a ξ b . To estimate the product
m = 1 n 1 1 + d m ξ x 1 + d m = m = 1 n 1 1 ξ x 1 + d m , a ξ b ,
note that by d n + , there is some M = M x such that b x 1 + d m < 1 for all m M . Let
C = m = 1 M 1 max 1 ξ a 1 + d m , 1 ξ b 1 + d m .
Then, for any n > M and a ξ b , we have
| δ n x ( ξ ) | C C m = M n 1 1 a x 1 + d m C C m = M n 1 exp a x 1 + d m = C C exp m = M n 1 a x 1 + d m
by (34). It remains to note that m = M n 1 a x 1 + d m + since 1 1 + d n = + by (*)b. □
Corollary 3. 
Let 1 x < H < + . Then, for Theorem 2 to hold, it suffices to prove that
(i)
lim n I n x ( H , + ) = 0 —in case x log 2 A ;
(ii)
lim n I n x ( H , + ) 0 or nonexistent—in case 0 < A < x log 2 .
Proof. 
It suffices to note that lim n I n x ( 1 x , H ) = 0 by Lemma 2. □

12. Theorem 2: Reshaping

It is somewhat troublesome for different evaluations below in the proof of Theorem 2 that the condition d n = A n 1 in the theorem is assumed for all n beginning with some N, rather than generally for all n 1 . Fortunately, the next lemma allows one to change the values of d m , m < N (and generally any finite number of values, of course) to d m = A m 1 in such a way that the content of Theorem 2 is preserved.
Lemma 3. 
Assuming (*) of Section 11, suppose that d n n 1 is another sequence of reals d n 1 , satisfying (*)a and (*)b, and such that d n = d n for all n N . Then, Theorem 2 simultaneously holds (or simultaneously fails) for the summation methods F , d n and F , d n .
Proof. 
Let δ n x ( ξ ) = e ξ ξ m = 1 n 1 1 ξ x 1 + d m and define I n x ( a , b ) accordingly, similar to (*)(g)(h) in Section 11. Then,
δ n x ( ξ ) δ n x ( ξ ) = P · Q ( ξ ) , where P = m = 1 N 1 + d m 1 + d m and Q ( ξ ) = m = 1 N ξ x 1 d m ξ x 1 d m .
Thus, P > 0 is a constant. Moreover, if ξ > H = 1 x + 2 x · max 1 m N ( 1 + | d m | + | d m | ) , then ξ x 2 · max 1 m N ( 1 + | d m | + | d m | ) , and hence 1 2 ξ x 1 d m ξ x 1 d m 3 2 , and finally
g δ n x ( ξ ) δ n x ( ξ ) G , therefore , g I n x ( H , + ) I n x ( H , + ) G ,
where g = P · ( 1 / 2 ) N and G = P · ( 3 / 2 ) N . It remains to refer to Corollary 3. □

13. Theorem 2: The Case of Summability

Beginning here the proof of Theorem 2, we assume (*) of Section 11, and, strengthening (*)a and (*)b, we also assume the following:
(∗∗) (a)
Reals x , A > 0 are fixed, γ = A x , and d n = A n 1 for all n 1 ;
    (b)
We re-denote δ n x ( ξ ) of (*) as δ n γ ( ξ ) = e ξ ξ m = 1 n 1 1 ξ m γ ;
    (c)
We accordingly put I n γ ( a , b ) = a b δ n γ ( ξ ) d ξ .
Note that, by Lemma 3, condition d n = A n 1 , n , in (**)a does not reduce the generality of the assumptions of Theorem 2. We begin with Case (i) of the theorem.
Lemma 4. 
Suppose that γ log 2 , as in (i) of Theorem 2. Then, | I n γ * ( k γ , k γ + γ ) | C k 3 / 2 for all n , k 1 , where C = C ( γ ) does not depend on n , k .
Proof. 
Let us evaluate the factors 1 ξ m γ in the product δ n γ ( ξ ) as in (**)b in the domain
(†)
k γ ξ k γ + γ , or equivalently, k m ξ m γ k + 1 m .
Fact 1. If (†) holds and m k + 1 , then 0 1 k + 1 m 1 ξ m γ 1 k m < 1 .
Fact 2. If (†) holds and m k , then easily 1 ξ m γ k + 1 m m .
To conclude, if (†) holds and n 1 k , then by Fact 2 and Stirling
| δ n γ ( ξ ) | e k γ k γ · m = 1 n 1 k + 1 m m C 1 · e k γ k · k ! ( n 1 ) ! ( k n + 1 ) ! C 2 · e k γ k · k k ( n 1 ) n 1 ( k n + 1 ) k n + 1 X · e n 1 e k n + 1 e k Y · 2 π k 2 π ( n 1 ) 2 π ( k n + 1 ) Z .
Here, X 2 k , Y = 1 , and Z = k 2 π ( n 1 ) ( k n + 1 ) k 2 π ( k / 2 ) 2 2 π k so that
| δ n γ ( ξ ) | C 3 · e 2 k γ k · 2 2 k k = C 3 · e 2 k γ + 2 k log 2 k 3 / 2 C 3 · k 3 / 2 in case k n 1 ,
because γ log 2 is assumed. Here, C 3 = C 3 ( A , x ) does not depend on n , k , ξ .
On the other hand, if (†) holds and n 1 k , then by Facts 1, 2
δ n γ ( ξ ) = e k γ k γ · m = 1 k k + 1 m m · m = k + 1 n 1 m k m e k γ k γ C 4 · k 3 / 2 in case k n 1 ,
where C 4 = C 4 ( γ ) does not depend on n , k , ξ . Taking C 5 = max { C 3 , C 4 } , we deduce | I n γ ( u k , u k + 1 ) | C 5 · γ · k 3 / 2 from (35) and (36), so C = γ C 5 proves the lemma. □
Proof of Claim (i) of Theorem 2. 
Assuming (∗) of Section 11, along with (∗∗) above, and γ log 2 , we have to prove that lim n I n γ ( 1 / x , + ) = 0 . By Lemma 4, there exists a real constant C > 0 such that | I n γ ( k γ , k γ + γ ) | C k 3 / 2 for all n , k 1 . As k 3 / 2 converges, given any ε > 0 , there exists K such that | I n γ ( K γ , + ) | ε . On the other hand, lim n I n γ ( 1 / x , K γ ) = 0 by Lemma 2. This completes the proof. □

14. Theorem 2: The Case of Divergence

In continuation of the proof of Theorem 2, consider Case (ii) of the theorem. Still assume (∗) and (∗∗) (Section 11 and Section 13) and recall that, in particular,
γ = A x , δ n γ ( ξ ) = e ξ ξ m = 1 n 1 1 ξ m γ , I n γ ( a , b ) = a b δ n γ ( ξ ) d ξ .
Lemma 5. 
There is a real constant C > 0 such that | I n γ ( 1 / x , n γ ) | C for all n 1 .
Proof. 
If 1 k < n and k γ ξ k γ + γ (as in (†) in the proof of Lemma 4), then δ n γ ( ξ ) C 4 · k 3 / 2 by (36), where C 4 > 0 does not depend on n , k —therefore, | I n γ ( k γ , k γ + γ ) | γ C 4 . Separately, if 1 / x < γ , then lim n I n γ ( 1 / x , γ ) = 0 by Lemma 2; hence, there is some C 5 > 0 such that | I n γ ( 1 / x , γ ) | < C 5 for all n. Note that k 3 / 2 = X < + converges. Taking C = X C 4 + C 5 , we obtain the result required. □
Lemma 6. 
Assume that 0 < γ < log 2 . Let n 1 . Then, | I n γ ( n γ , + ) | | δ n γ ( 2 n γ ) | · γ 4 .
Proof. 
Note that δ n γ ( ξ ) is a sign-constant function in the domain ξ > n γ by (∗∗)c. It follows that
| I n γ ( n γ , + ) | | I n γ ( 2 n γ , 2 n γ + γ ) | .
Now, suppose that 2 n γ ξ 2 n γ + γ . Then, easily
δ n γ * ( ξ ) δ n γ * ( 2 n γ ) 2 n γ e 2 n γ ξ e ξ 2 n γ ( 2 n + 1 ) γ · e 2 n γ ( 2 n + 1 ) γ e γ 2 1 4 ,
since γ < log 2 . In other words, δ n γ ( ξ ) δ n γ * ( 2 n γ ) 4 provided 2 n γ ξ 2 n γ + γ . Combining this with (37), we obtain the lemma. □
Lemma 7. 
Assume that 0 < γ < log 2 . Then, lim n + δ n γ ( 2 n γ ) = + ; hence, by Lemma 6, lim n + | I n γ ( n γ , + ) | = + as well.
Proof. 
It follows from (∗∗)b that
δ n γ ( 2 n γ ) = 2 n γ γ γ · 2 n γ 2 γ 2 γ · · 2 n γ ( n 1 ) γ ( n 1 ) γ · e 2 n γ 2 n γ = ( 2 n 1 ) ( 2 n 2 ) ( 2 n n + 1 ) e 2 n γ ( n 1 ) ! 2 n γ ,
and hence δ n γ ( 2 n γ ) = e 2 n γ 4 n γ ( 2 n ) ! n ! n ! . Converting here the factorials by Stirling, we obtain
δ n γ ( 2 n γ ) C · e 2 n γ · 2 2 n 4 n γ · n 1 / 2 ,
where C does not depend on n , γ . However, e γ < 2 since γ < log 2 . Therefore, the exponential function e 2 n γ · 2 2 n = e 2 n log 2 2 n γ increases faster than any polynomial. This ends the proof of the lemma. □
Proof of Claim (ii) of Theorem 2. 
Still assuming (∗) and (∗∗) (Section 11 and Section 13), and 0 < γ < log 2 (as in case (ii) of the theorem), we are going to prove that lim n I n γ ( 1 / x , + ) = . For that purpose, note that for all n sufficiently large, we have
I n γ ( 1 / x , + ) = I n γ ( 1 / x , n γ ) + I n γ ( n γ , + ) ,
where the first addendum is uniformly bounded by Lemma 5, whereas the second one tends to by Lemma 7. This ends the proof of Theorem 2 as a whole. □

15. Example 1 with More Detailed Numerical Information

Here we present more detailed numerical information related to calculations in Example 1, lines B–F and the final sum value. The numerical data in Table 1, Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7 is presented as exact simple text-based fractions and in decimal form rounded to 8 decimal places.
Table 1. Example 1 line B.
Table 1. Example 1 line B.
1/2=0.5
−1/4=−0.25
3/8=0.375
−11/16=−0.6875
53/32=1.65625000
−309/64=−4.82812500
2119/128=16.55468750
−16,687/256=−65.18359375
148,329/512=289.70507813
−1,468,457/1024=−1434.04003906
16,019,531/2048=7822.03662109
−190,899,411/4096=−46,606.30151367
2,467,007,773/8192=301,148.40979004
−34,361,893,981/16,384=−2.09728357 × 106
513,137,616,783/32,768=1.56597173 × 107
−8,178,130,767,479/65,536=−1.24788372 × 108
Table 2. Example 1 line C terms in brackets.
Table 2. Example 1 line C terms in brackets.
3/8=0.375
−11/16=−0.68750000
53/32=1.65625000
−309/64=−4.82812500
2119/128=16.55468750
−16,687/256=−65.18359375
148,329/512=289.70507813
−1,468,457/1024=−1434.04003906
16,019,531/2048=7822.03662109
−190,899,411/4096=−46,606.30151367
2,467,007,773/8192=301,148.40979004
−34,361,893,981/16,384=−2.09728357 × 106
513,137,616,783/32,768=1.56597173 × 107
−8,178,130,767,479/65,536=−1.24788372 × 108
Table 3. Example 1 line D terms in brackets.
Table 3. Example 1 line D terms in brackets.
3/16=0.18750000
−5/64=−0.07812500
21/256=0.08203125
−99/1024=−0.09667969
615/4096=0.15014648
−4401/16,384=−0.26861572
36,585/65,536=0.55824280
−342,207/262,144=−1.30541611
3,565,323/1,048,576=3.40015697
−40,866,525/4,194,304=−9.74333882
510,928,317/16,777,216=30.45370084
−6,915,941,595/67,108,864=−103.05556051
100,734,321,519/268,435,456=375.26459068
−1,570,587,184,521/1,073,741,824=−1462.72330035
Table 4. Example 1 line E terms in brackets.
Table 4. Example 1 line E terms in brackets.
21/256=0.08203125
−99/1024=−0.09667969
615/4096=0.15014648
−4401/16,384=−0.26861572
36,585/65,536=0.55824280
−342,207/262,144=−1.30541611
3,565,323/1,048,576=3.40015697
−40,866,525/4,194,304=−9.74333882
510,928,317/16,777,216=30.45370084
−6,915,941,595/67,108,864=−103.05556051
100,734,321,519/268,435,456=375.26459068
−1,570,587,184,521/1,073,741,824=−1462.72330035
Table 5. Example 1 line F terms in brackets.
Table 5. Example 1 line F terms in brackets.
21/512=0.04101563
−15/4096=−0.00366211
159/32,768=0.00485229
−429/262,144=−0.00163651
5241/2,097,152=0.00249910
−26,283/16,777,216=−0.00156659
338,835/134,217,728=0.00252452
−2,771,097/1,073,741,824=−0.00258079
36,159,837/8,589,934,592=0.00420956
−416,721,543/68,719,476,736=−0.00606410
5,868,508,359/549,755,813,888=0.01067475
−84,143,115,525/4,398,046,511,104=−0.01913193
Table 6. Example 1, final sum value, Equation (7).
Table 6. Example 1, final sum value, Equation (7).
Uiter = 430,377,791/1,073,741,824 = 0.40082055
Table 7. Example 1: partial sums U N iter = 23 64 + N terms in brackets in line F. The value U 7 iter is the closest to the “exact” value 1 − S = 0.40365… of (5)(B). The values U 5 iter , U 3 iter , and U 6 iter are also closer to 1 − S than the Eulerian choice of U iter = U 8 iter .
Table 7. Example 1: partial sums U N iter = 23 64 + N terms in brackets in line F. The value U 7 iter is the closest to the “exact” value 1 − S = 0.40365… of (5)(B). The values U 5 iter , U 3 iter , and U 6 iter are also closer to 1 − S than the Eulerian choice of U iter = U 8 iter .
U 0 iter =23/64=0.35937500
U 1 iter =205/512=0.40039063
U 2 iter =1625/4096=0.39672852
U 3 iter =13,159/32,768=0.40158081
U 4 iter =104,843/262,144=0.39994431
U 5 iter =843,985/2,097,152=0.40244341
U 6 iter =6,725,597/16,777,216=0.40087682
U 7 iter =54,143,611/134,217,728=0.40340134
U iter = U 8 iter =430,377,791/1,073,741,824=0.40082055
U 9 iter =3,479,182,165/8,589,934,592=0.40503011
U 10 iter =27,416,735,777/68,719,476,736=0.39896601

16. Further Examples and Notes

Here, we will present a couple more applications of the Euler–Jakimovski method and then proceed with a few notes on possible extensions of our results and methods and possible connections to different modern approaches.
Example 5 
(Section 4 in [12]).  F , d n summation of power series is considered. Among other results, Theorem 4.1 there claims that if
lim n d n = + , d n 1 for all n , and d n 2 < + ,
and the F , d n transformation is regular, then it sums the series 1 + z + z 2 + to 1 1 z for all complex z with z < 1 , but it does not sum 1 + z + z 2 + in case z > 1 .
The case z = 1 here is left open in [12].
Some sufficient regularity conditions for F , d n are given, e.g., in [12,21,22] or elsewhere. For instance, by [22], Section 3, if d n are complex numbers, then F , d n is regular provided
(a)
Only finitely many d n = 0 ;
(b)
For some K < + , for all n; k = 1 n 1 + | d k | | 1 + d k | K ,
(c)
lim n k = 1 n d k 1 + d k = 0 , where indicates that the product is over all nonzero factors.
Example 6 
(Section 10.III in [1], Part II). Euler demonstrates that his summation method helps transform slowly converging series into rapidly converging ones, which has obvious computational applications. Starting with the slowly converging alternating harmonic series S = 1 1 2 + 1 3 1 4 + , Euler finds that the transformed series is
S = 1 2 + 1 2 · 4 + 1 2 · 4 + 1 3 · 8 + = k = 1 2 k k ,
which converges much faster.
Another example there concerns the series S = lg 2 lg 3 + lg 4 lg 5 + (with base 10 logarithms). This is a divergent series of course since lim a n = + . To evaluate it, Euler sums up the first eight terms (up to lg 9 inclusively), obtaining S 1 = 0.3911005 , and then transforms the tail S 2 = lg 10 lg 11 + lg 12 lg 13 + with ( E , q = 1 ) , and, summing up several terms of the transformed series, obtains S 2 = 0.4891606 . The final result is thereby S = S 1 + S 2 = 0.0980601 . This can be viewed as a two-step application of the iterative Euler summation.
Next, we will make a few comments about the possible expansion of our results and methods and possible connections with various modern approaches. The application of the iterative Euler/Jakimovski summation technique in these new areas will be an interesting topic for further research.
Note 1.
When discussing the difficulties related to such mathematical singularities as division by zero or assigning meaningful sums to divergent series, those studies can be further deepened through the perspective of uncertain numbers, a number system recently proposed by Yue [34]. Uncertain numbers provide alternative ways of interpreting divergent series, reinforcing the need for generalized summability methods like those discussed in our paper. This may be the subject of further prospective research.
Note 2.
One more direction of further prospective research can exploit some implicit affinities and historical-to-modern parallels between iterative summation and algebraic operator theory. The iterative summation process explored in our paper echoes the algebraic structures introduced in Rota–Baxter theory, particularly in the context of operator identities, as discussed in a recent paper by Guo et al. [35].
Note 3.
Another direction of studies on summation of divergent series is based on the method of zeta function regularization, see, e.g., [36]. This is quite distinct from the more traditional triangular and other linear methods such as the iterative Euler and Jakimovski methods. Zeta function methods allow one to evaluate series not summable by more traditional techniques. One of the most striking examples is the evaluation 1 + 1 2 + 1 3 + = γ = 0.5772 , the Euler–Mascheroni constant. The shortest track to this evaluation is as follows by [37]. Formally, n 1 = ζ ( 1 ) . While the function has a pole at 1, we can find its Cauchy principal value there: lim h 0 ζ ( 1 + h ) + ζ ( 1 h ) 2 = γ . See a more detailed output using the the Ramanujan summation method in [38], page 87. However, it would be interesting to really apply F , d n to the harmonic series with different distributions of the nodes d n .
Note 4.
Two new methods of summation of divergent series, most notably the series of alternating factorials, are presented and analyzed in [13]. Those are the Padé approximants and the delta transformation, a powerful nonlinear technique that works very well in the case of strictly alternating series. The method is based on a new factorial series representation of the truncation error of the series of alternating factorials. Explicit expressions for the transformation errors of Padé approximants and of the delta transformation are defined. A subsequent asymptotic analysis proves rigorously the convergence of both Padé and delta methods. However, asymptotic estimates and other known numerical results allow one to draw a conclusion of the superiority of the delta transformation over Padé.
Some other applications of factorial-type series to asymptotic series are developed and studied in [39].

17. Conclusions and Problems

  • In this study, the methods of the theory of divergent series are used to analyze one of the examples of iterative summation given by Euler in part II of his Foundations of differential calculus.
  • This example (Example 1 in Section 3) leads to the definition of the iterative Euler transform, which consists of an alternating application of the usual Euler transformation ( E , q n ) , and a separate summation of some numbers of the initial terms of the sequentially occurring series.
  • Analyzing Example 1 and a related Example 2, we introduce the iterative Euler transformation  ( EI , q n ) , having an infinite sequence { q n } n 1 as a parameter, in Section 5 and Section 6.
  • Corollary 2, our first main result, demonstrates that the Euler iterative summation ( EI , q n ) is equivalent to the Jakimovski summability method F , d n , introduced in the 1950s, provided that the reals q n and d n satisfy the equality d n = ( 1 + q 1 ) ( 1 + q n ) 1 of (18)(B).
  • We also prove (Theorem 3) that F , d n is equivalent to ( HI , d n ) (with the same parameters d n ), another iterative summation method, which involves the Hutton transform ( Hu , d n ) at each step instead of the Euler transform.
  • In addition, we established Theorem 2, our second main result, which determines whether the series ( 1 ) k k ! x k is F , d n summable to S * = 0.59634 of (5)(A) in terms of the distribution of parameters d n , which somewhat improves the earlier results in this area.
  • These are new results, and they make a significant contribution to summability theory.
  • The technique developed in this paper may lead to further progress in studies of various aspects of the summation of divergent series.
  • The following Problems 4–6 arise from our study. (Recall that Problems 1–3 were formulated and solved above, in the course of our presentation.) These problems are unlikely to be solvable for truly arbitrary series; however, a solution may well be possible for the series of alternating factorials.
Problem 4 
(summation to the least term). Suppose that the iterative summation $(\mathrm{EI}, L_m, Q_m)$ of Section 5 is carried out so that $Q_m = 1$ for all $m$ (as in Examples 1 and 2), whereas, for each $m \ge 1$, the number $L_m$ of separated terms is chosen so that the corresponding term $b^{(m)}_{L_m}$ of the series $\sum_k b^{(m)}_k$ is the least (in absolute value) among all terms $b^{(m)}_k$. Is the series $\sum (-1)^k k!$ summable to $S^* = 0.59634\ldots$ of (5)(A) by this method $(\mathrm{EI}, L_m, Q_m)$? (A numerical sketch of the underlying least-term truncation rule is given after this list.)
Problem 5 
(summation to the best partial sum). A variant of the previous one: the same question, with $L_m$ defined so that the partial sum obtained is the best possible approximation to $S^*$ at the given step of the iteration.
Problem 6 
(summation to monotone increase). A variant of the previous one: the same question, with $L_m$ defined to be the least value satisfying (I), (II), and (III) in Section 4.
  • Some possible directions of further study were outlined in Section 16.
  • We also hope that this research can be useful in creating algorithms or computational models that represent the evolution of cell types and are related to the storage and processing of genomic information.
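To make Problem 4 more concrete, the following small numerical sketch of ours illustrates only the selection rule behind it, namely truncating a divergent alternating series just before its smallest term; it does not implement the iterative procedure $(\mathrm{EI}, L_m, Q_m)$ itself. We apply the rule to the generalized series $\sum (-1)^k k!\, x^k$ at a small value of $x$ and compare the truncated sum with the Borel-type integral $\int_0^\infty e^{-t}/(1+xt)\,dt$; the variable names are ours, and only standard mpmath routines are used.

```python
# "Summation to the least term": truncate the divergent asymptotic series
# sum (-1)^k k! x^k just before its smallest term (in absolute value)
# and compare with the Borel-type integral int_0^inf exp(-t)/(1 + x t) dt.
# This illustrates only the least-term selection rule of Problem 4,
# not the iterative (EI, L_m, Q_m) procedure itself.
from mpmath import mp, factorial, quad, exp, inf

mp.dps = 30
x = mp.mpf('0.1')

borel = quad(lambda t: exp(-t) / (1 + x * t), [0, inf])

def term(k):
    return (-1) ** k * factorial(k) * x ** k

# |term(k)| decreases as long as (k + 1) * x < 1, so the least term sits near k ~ 1/x
k_least = min(range(30), key=lambda k: abs(term(k)))
partial = sum(term(k) for k in range(k_least))  # stop just before the least term

print("least term at k =", k_least)
print("truncated sum   =", partial)
print("Borel integral  =", borel)
print("difference      =", abs(partial - borel))
```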

Author Contributions

Conceptualization, V.K. and V.L.; methodology, V.K. and V.L.; validation, V.K.; formal analysis, V.K. and V.L.; investigation, V.K. and V.L.; writing–original draft preparation, V.K.; writing–review and editing, V.K. and V.L.; project administration, V.L.; funding acquisition, V.L. All authors have read and agreed to the final version of the manuscript.

Funding

This research was funded by the Russian Science Foundation, grant No. 24-44-00099, https://rscf.ru/project/24-44-00099/ (accessed on 5 June 2025).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors thank the anonymous reviewers for their thorough review and highly appreciate the comments and suggestions, which have significantly contributed to improving the quality of the paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Euler, L. Institutiones Calculi Differentialis, Parts I, II; Academiae Imperialis Scientiarum Petropolitanae: Saint Petersburg, Russia, 1755. [Google Scholar]
  2. Euler, L. De seriebus divergentibus. In Leonhardi Euleri Opera Omnia; Boehm, C., Faber, G., Eds.; Ser. 1: Opera Math. Orell Füssli: Zürich, Switzerland, 1925; Volume 14, pp. 585–617. [Google Scholar]
  3. Hardy, G.H. Divergent Series, 2nd (textually unaltered) ed.; Chelsea: New York, NY, USA, 1991. [Google Scholar]
  4. Euler, L. Foundations of Differential Calculus; Blanton, D.J., Translator; Springer: New York, NY, USA, 2000. [Google Scholar]
  5. Ferraro, G. Convergence and formal manipulation of series from the origins of calculus to about 1730. Ann. Sci. 2002, 59, 179–199. [Google Scholar] [CrossRef]
  6. Ferraro, G. Convergence and formal manipulation in the theory of series from 1730 to 1815. Hist. Math. 2007, 34, 62–88. [Google Scholar] [CrossRef]
  7. Barbeau, E.J.; Leah, P.J. Euler’s 1760 paper on divergent series. Hist. Math. 1976, 3, 141–160. [Google Scholar] [CrossRef]
  8. Guichardet, A. A divergent series summed by Euler. Quadrature 2014, 92, 18–19. [Google Scholar]
  9. Kangro, G.F. Theory of summability of sequences and series. J. Soviet Math. 1976, 5, 1–45. [Google Scholar] [CrossRef]
  10. Karamata, J. Théorèmes sur la sommabilité exponentielle et d’autres sommabilités s’y rattachant. Mathematica 1935, 9, 164–178. [Google Scholar]
  11. Lototskij, A.V. Über eine lineare Transformation von Folgen und Reihen. Ivanov. Gos. Ped. Inst. Uchenye Zap. Fiz.-Mat. Nauki 1953, 4, 61–91. [Google Scholar]
  12. Jakimovski, A. A generalization of the Lototsky method of summability. Mich. Math. J. 1959, 6, 277–290. [Google Scholar] [CrossRef]
  13. Borghi, R.; Weniger, E.J. Convergence analysis of the summation of the factorially divergent Euler series by Padé approximants and the delta transformation. Appl. Numer. Math. 2015, 94, 149–178. [Google Scholar] [CrossRef]
  14. Brezinski, C.; Redivo-Zaglia, M.; Weniger, E.J. Special Issue: Approximation and Extrapolation of Convergent and Divergent Sequences and Series. Selected Papers Based on the Presentations at the Conference, Marseille, France, 28 September–2 October 2009. Appl. Numer. Math. 2010, 60, 1183–1464. Available online: https://www.sciencedirect.com/journal/applied-numerical-mathematics/vol/60/issue/12 (accessed on 2 June 2025). [CrossRef]
  15. Abramowitz, M.; Stegun, I.A. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, Reprint of the 1972 ed.; Selected Government Publications; John Wiley & Sons, Inc.: New York, NY, USA; National Bureau of Standards: Washington, DC, USA, 1984; xiv, 1046p.
  16. Mascheroni, L. Adnotationes Ad Calculum Integralem Euleri; Ex typographia Petri Galeatii: Ticini, Italy, 1790. [Google Scholar]
  17. Euler, L. Opera Omnia sub Auspiciis Societatis Scientiarum Naturalium Helveticae Edenda curaverunt Ferdinand Rudio, Adolf Krazer, Paul Stäckel; Kowalewski, G., Ed.; Series Prima. Opera Mathematica; Orell Füssli: Zürich, Switzerland, 1913; Volume Decimum. [Google Scholar]
  18. Digernes, T.; Varadarajan, V.S. Notes on Euler’s work on divergent factorial series and their associated continued fractions. Indian J. Pure Appl. Math. 2010, 41, 39–66. [Google Scholar] [CrossRef]
  19. Lacroix, S.F. Traité des Différences et des Séries; Chez J. B. M. Duprat; Libraire Pour les Mathématiques, Quai des Augustinsz: Paris, France, 1800. [Google Scholar]
  20. Kanovei, V.; Lyubetsky, V. Grossone approach to Hutton and Euler transforms. Appl. Math. Comput. 2015, 255, 36–43. [Google Scholar] [CrossRef]
  21. Jakimovski, A.; Meir, A. Regularity theorems for (F,dn)-transformations. Ill. J. Math. 1965, 9, 527–534. [Google Scholar]
  22. Jakimovski, A.; Skerry, H. Some regularity for the (f,dn,z1) summability method. Proc. Am. Math. Soc. 1970, 24, 281–287. [Google Scholar] [CrossRef]
  23. Jayasri, C. On generalized Lotosky summability. Indian J. Pure Appl. Math. 1982, 13, 795–805. [Google Scholar]
  24. Macphail, M.S. Stirling summability of rapidly divergent series. Mich. Math. J. 1965, 12, 113–118. [Google Scholar] [CrossRef]
  25. Bingham, N.H.; Stadtmüller, U. Jakimovski methods and almost-sure convergence. In Disorder in Physical Systems: A Volume in Honour of John M. Hammersley; Oxford University Press: Oxford, UK, 1990; Available online: https://www.statslab.cam.ac.uk/~grg/books/hammfest/2-nhb.pdf (accessed on 2 June 2025).
  26. Bingham, N.H. Tauberian theorems for Jakimovski and Karamata-Stirling methods. Mathematika 1988, 35, 216–224. [Google Scholar] [CrossRef]
  27. Faulstich, K. The intersection of Jakimovski methods with Riesz–Nörlund methods. In Constructive Function Theory, Proceedings of the International Conference, Varna, Bulgaria, 1981; 1983; pp. 314–316. [Google Scholar]
  28. Vuckovic, V. The mutual inclusion of Karamata-Stirling methods of summation. Mich. Math. J. 1959, 6, 291–297. [Google Scholar] [CrossRef]
  29. Meir, A. On the [F,dn]-transformations of A. Jakimovski. Bull. Res. Council Israel Sect. F 1962, 10, 165–187. [Google Scholar]
  30. Bingham, N.H. On Borel and Euler summability. J. Lond. Math. Soc. II Ser. 1984, 29, 141–146. [Google Scholar] [CrossRef]
  31. Agnew, R.P. The Lototsky method for evaluation of series. Mich. Math. J. 1957, 4, 105–128. [Google Scholar] [CrossRef]
  32. Agnew, R.P. Relations among the Lototsky, Borel and other methods for evaluation of series. Mich. Math. J. 1959, 6, 363–371. [Google Scholar] [CrossRef]
  33. Martic, B. On the B transformations of M. Bajraktarevic. Period. Math.-Phys. Astron. II Ser. 1964, 19, 225–235. [Google Scholar]
  34. Yue, P. Uncertain Numbers. Mathematics 2025, 13, 496. [Google Scholar] [CrossRef]
  35. Guo, S.; Li, Y.; Wang, D. 2-Term Extended Rota–Baxter Pre-Lie-Algebra and Non-Abelian Extensions of Extended Rota–Baxter Pre-Lie Algebras. Results Math. 2025, 80, 96. [Google Scholar] [CrossRef]
  36. Elizalde, E.; Odintsov, S.D.; Romeo, A.; Bytsenko, A.A.; Zerbini, S. Zeta Regularization Techniques with Applications; World Scientific: Singapore, 1994. [Google Scholar]
  37. math.stackexchange.com. Sum of the Harmonic Series, 2014. Available online: https://math.stackexchange.com/questions/650507/sum-of-the-harmonic-series (accessed on 25 May 2025).
  38. Delabaere, E. Ramanujan’s Summation. In Algorithms Seminar 2001–2002; Chyzak, F., Ed.; INRIA: Rocquencourt, France, 2003; pp. 83–88. Available online: https://algo.inria.fr/seminars/sem01-02/delabaere2.pdf (accessed on 25 May 2025).
  39. Weniger, E.J. Summation of divergent power series by means of factorial series. Appl. Numer. Math. 2010, 60, 1429–1441. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
