Analysis, Evaluation and Exact Tracking of the Finite Precision Error Generated in Arbitrary Number of Multiplications

In the present paper, a novel approach is introduced for the study, estimation and exact tracking of the finite precision error generated and accumulated during any number of multiplications. It is shown that, as a rule, this operation is very "toxic", in the sense that it may force the finite precision error accumulation to grow arbitrarily large, under specific conditions that are fully described here. First, an ensemble of definitions of general applicability is given for the rigorous determination of the number of erroneous digits accumulated in any quantity of an arbitrary algorithm. Next, the exact number of erroneous digits produced in a single multiplication is given as a function of the involved operands, together with formulae offering the corresponding probabilities. In case the statistical properties of these operands are known, exact evaluation of the aforementioned probabilities takes place. Subsequently, the statistical properties of the accumulated finite precision error during any number of successive multiplications are explicitly analyzed. A method for exact tracking of this accumulated error is presented, together with associated theorems. Moreover, numerous dedicated experiments are developed and the corresponding results, which fully support the theoretical analysis, are given. Finally, a number of important and probable applications are proposed, all of them based on the methodology and the results introduced in the present work. The proposed methodology is expandable, so as to tackle the round-off error analysis in all arithmetic operations.


Introduction
All contemporary computing machines store both integer and floating-point numbers with a finite number of digits. This piece of fixed-sized data, which is handled as a unit by the instruction set or the processor's hardware, is called a finite word; the number of bits that form this piece of data is frequently called the "finite word length" or "employed precision". In addition, on the hardware level, a computer performs the fundamental operations using a finite word length. Nowadays, dedicated software programs have been developed which perform operations with a finite number of digits, the value of which is chosen by the programmer and/or the user, the only limitations being memory and time constraints. We shall also use the terms "finite word length" and "employed precision" for this number of digits.
Due to the fact that the precision with which all arithmetic operations are made is always limited, a numerical error is, as a rule, accumulated during the execution of most algorithms. In particular, in various algorithms and corresponding applications, the obtained results may be totally erroneous and/or unreliable due to the aforementioned reasons, which are inherent to all computing machines. We stress that this problem exists even when an arbitrarily large finite word length is employed for the execution of the algorithm, as will become evident from the analysis introduced in the present paper. For this reason, we will use the term "finite precision error" for this numerical error; various authors and researchers also use the terms "quantization error", "round-off error" or other equivalent terms.
Consequently, a number of articles address the associated issues and the problems they generate. Thus, for example, authors in [1] study the finite precision error in the least mean square (LMS) adaptive algorithm and they show that the error's mean squared value is inversely proportional to the adaptation step size μ. Reference [2] introduces a fast algorithm for exponentially weighted least squares Kalman filtering, which suffers less from finite precision error drawbacks, intrinsic to this class of algorithms. Reference [3] presents algorithms for accurately converting floating-point numbers to decimal representation. Article [4] studies the finite precision effects on the execution of the Lanczos algorithm for solving the standard non-symmetric eigenvalue problem. The authors of [5] study round-off error propagation in an algorithm which computes the orthonormal basis of a Krylov subspace with Householder orthonormal matrices. Authors in [6] study the propagation of round-off error near the periodic orbits of a discretized linear area-preserving map. The round-off error probability distribution, considered as a function of time, is shown to be a calculable algebraic number. In [7] it is shown that there are theoretically convergent schemes that solve non-linear partial differential equations, which can produce numerical steady state solutions that do not correspond to steady state solutions of the boundary value problem. In [8], it is pointed out that the convergence of Gegenbauer polynomials at the endpoints is affected by round-off error; the article proposes both parameter optimization and reduction of the round-off error for the Gegenbauer reconstruction method. In [9], a set of specific semantics is introduced which describes the propagation of round-off error during a calculation. The authors of [10] give an estimation of the round-off error generated in long-time integration in a number of standard, nonlinear systems. 
Authors in [11] introduce an algorithm for the computation of the orthogonal Fourier-Mellin moments which is of linear complexity and is resistant to finite precision error effects. Reference [12] proposes a method for dealing with the instability of digital frequency synthesis (DFS), caused by the round-off error. Article [13] presents bounds for the round-off error generated in various algorithms. Moreover, another approach is presented in [14], according to which the evolution of round-off error in chaotic maps is treated as an additive noise to the expected exact solutions; the introduced method spots a threshold below which global errors may be ignored. Article [15] studies the round-off error generated during computation of Hardy's multiquadric and its related interpolators and proposes the use of arbitrary precision arithmetic to circumvent the associated finite precision error problems. The authors of [16] propose a fast method, resistant to finite precision error, for the evaluation of high order Zernike moments. Article [17] proposes a method for an improved scaling of finite precision error analysis. Finally, in [18] and [19], a preliminary form of the approach introduced here is presented.
In the present paper, we introduce a novel approach for studying and evaluating the finite precision error generated during the operation of multiplication. It is shown that, as a rule, this operation is very "toxic", in the sense that it may force the finite precision error accumulation to grow arbitrarily large. The exact number of erroneous digits added to or subtracted from the result (product) of this operation is given. Consequently, the probabilities that the number of erroneous digits of the product differs by k from the maximum number of erroneous digits of the operands are explicitly computed. In the process of doing so, a set of general definitions is given, applicable to all operations performed with finite word length by a computing machine. Then, the accumulation of erroneous digits after an arbitrary number of successive multiplications is extensively analyzed. Statistical properties of this accumulated error are stated that allow for exact error prediction when the distributions of the associated operands are given. In addition, a number of theoretical results are introduced, which allow for the exact tracking of the generated and accumulated finite precision error during any number of multiplications in general. Numerous experimental results are presented, which fully support the presented theoretical analysis. We stress that the introduced methodology is expandable, so as to tackle all arithmetic operations.

A Set of Basic Definitions, Notations and Abbreviations
The entire analysis will mainly be made in the decimal arithmetic system, only because this system is far more familiar to most users. However, all results referred to in the present work hold perfectly well for the binary system too, or for any other radix; the corresponding analysis and deductions may be obtained by means of a quite straightforward and slight modification of the approach introduced here. In any arithmetic system, we assume that all numbers are expressed in scientific/canonical form. Thus, in the decimal arithmetic system, any number is written as m · 10^t, where |m| ∈ [1, 10) and t ∈ ℤ; in the binary system, the same number is expressed as m · 2^t, where |m| ∈ [1, 2) and t ∈ ℤ. Independently of the employed radix, we will use the symbols man(·) and exp(·) for the mantissa and the exponent of any quantity, respectively. We shall demonstrate in the following that the analysis introduced here, based on the decimal radix, offers accurate results and predictions for the multiplication(s) performed by computing machines.
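As a small illustration of the canonical form, the decomposition of a nonzero number into its decimal mantissa and exponent may be sketched in Python; the function name man_exp is our own, chosen merely to echo the man(·) and exp(·) notation above:

```python
import math

def man_exp(x):
    """Decompose a nonzero x into (m, t) with x = m * 10**t and |m| in [1, 10)."""
    t = math.floor(math.log10(abs(x)))
    m = x / 10**t
    # Guard against boundary effects of floating-point log10.
    if abs(m) >= 10:
        m, t = m / 10, t + 1
    elif abs(m) < 1:
        m, t = m * 10, t - 1
    return m, t

print(man_exp(123.45))    # ~(1.2345, 2)
print(man_exp(-0.00678))  # ~(-6.78, -3)
```

For instance, man_exp(123.45) yields a mantissa of about 1.2345 and the exponent 2, i.e., 123.45 = 1.2345 · 10^2.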
We will use the symbol ■ to denote the completion of a theorem, a proof, a definition, an example, etc.

Abbreviation 1. We will use the acronym e. d. d. in place of "erroneous decimal digits" and c. d. d. for "correct decimal digits". In general, the abbreviation "d. d." stands for "decimal digits" and f. p. e. stands for "finite precision error". The symbols #e(α) and #c(α) stand for the number of e. d. d. accumulated in a quantity α due to f. p. e., and for its number of c. d. d., respectively. ■

Notation 1. The expressions the algorithm "has failed" or it "has been destroyed due to f. p. e." mean that the algorithm at hand offers completely unreliable results at a certain iteration. ■

Suppose that any two numbers α and β are given, both written in canonical form. In order to unambiguously verify whether these two numbers share a common number of initial digits (i.e., a stem of digits), starting from the most significant one, we shall employ Definition 1.

Example 1. A simple inspection might lead someone to deduce that the two numbers in hand differ in six (6) decimal digits. Actually, according to Definition 1, the absolute difference is |α − β| = 6.6459 · 10^(…); therefore, the two aforementioned numbers differ in four (4) decimal digits, contrary to a probable initial expectation. ■

Example 2. According to Definition 1, the two numbers with 32 decimal digits in the mantissa, … ■

Suppose, moreover, that all operations were made with infinite precision; then, let an arbitrary quantity γ have the value γ^(∞), where the superscript indicates the ideally correct value of γ. Next, suppose that the very same quantity is calculated in a computing machine which performs the same operations as in the infinite-precision case, using n digits in the mantissa; suppose that this machine generates the representation γ for this specific quantity.
Then, a rigorous relation between γ^(∞) and γ is obtained via the following Definition 2. Let us restrict the infinite-precision quantity γ^(∞) to its first n digits, obtaining the quantity γ^(n). Let us also assume that comparison of γ and γ^(n) by means of Definition 1 manifests that these two quantities differ by d digits. Then, we deduce that the quantity γ has its first λ = (n − d) digits correct and all its other digits erroneous. The aforementioned statement holds both for the binary system, which is the basis of contemporary computing machines, and for the decimal radix. ■ A number of practical examples associated with the above Definition will be given in Section 6.
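Numerically, this digit count can be mimicked by comparing the exponent of a quantity with the exponent of its difference from the reference value. The sketch below is our own reading of Definition 1, not a formula quoted verbatim from the paper; it takes the number of shared leading digits as exp(α) − exp(α − β):

```python
import math

def common_digits(a, b):
    """Shared leading significant decimal digits of a and b (a != b),
    taken as exp(a) - exp(a - b); our reading of Definition 1."""
    exp_a = math.floor(math.log10(abs(a)))
    exp_diff = math.floor(math.log10(abs(a - b)))
    return max(exp_a - exp_diff, 0)

# Naive digit-by-digit inspection can mislead; the exponent-based count
# gives 4 common leading digits here ("1.234" is the shared stem).
print(common_digits(1.234567, 1.234912))   # 4
print(common_digits(123456.0, 123455.0))   # 5
```

The second call shows that the convention extends naturally to numbers with nonzero exponents.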
It is known that a mantissa represented by a number of bits, say n_b ∈ ℕ, in a computing machine is approximated in the decimal radix by a number of d. d. pretty close to the nearest integer of n_b · log₁₀2. Since n_b · log₁₀2 is, as a rule, not an integer, the number of correct digits of a quantity's decimal representation may fluctuate by one digit at most.
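For example, the 53-bit significand of an IEEE 754 double-precision number corresponds to 53 · log₁₀2 ≈ 15.95 decimal digits, which is why double precision is commonly said to carry 15 to 16 correct d. d.; a one-line check:

```python
import math

def decimal_digits(bits):
    """Decimal digits carried by a mantissa of `bits` binary digits."""
    return bits * math.log10(2)

print(round(decimal_digits(53), 2))  # 15.95 -> 15 or 16 correct d. d.
```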

Generation of Finite Precision Error in a Single Multiplication and Corresponding Probabilities
This Section presents a solution to the following problem: consider two arbitrary numbers, say γ and β, found in a computing machine that uses a finite word length of n decimal digits in the mantissa. Moreover, suppose that, due to an ensemble of previous calculations, γ has been computed with a certain number of erroneous decimal digits (e. d. d.) in its mantissa, while β with a certain number of e. d. d. in the mantissa. In addition, consider that the multiplication γ · β is executed in this computing machine. Then, so far, it has been an open question to determine the exact number of e. d. d. with which the product is evaluated; moreover, the corresponding probabilities that the product is computed with a specific number of e. d. d. must be evaluated.

Bounds and Evaluation of the Finite Precision Error Produced in a Single Multiplication
Consider any two quantities γ and β, with ideally correct values γ^(∞) and β^(∞), should all operations and representations be made with infinite precision. Next, suppose that the quantities γ and β have been evaluated in a computing machine using n d. d. in the mantissa; we let the representations of these two numbers in this computing machine be γ and β, respectively. In addition, following Definition 2, we let the restrictions of γ^(∞) and β^(∞) to this machine be γ^(n) and β^(n), respectively. We would like to emphasize that the difference between γ and γ^(n) is the following: the quantity γ may have been evaluated with finite precision error, due to previous calculations. On the contrary, γ^(n) is free of finite precision error, since it is always considered to be a restriction of the ideally correct value of γ to n decimal digits.
Consider, moreover, the product γ · β, executed both with infinite precision, yielding the product γ^(∞) · β^(∞), as well as in the computing machine using n digits in the mantissa. In addition, suppose that, due to previous calculations, γ has been computed with (n − λ_γ) erroneous decimal digits (e. d. d.), or equivalently with λ_γ correct decimal digits, while β with (n − λ_β) e. d. d. (⇔ λ_β c. d. d.), due to the fact that all operations have been made with a finite word length. We note that, as will become evident in the following analysis, the finite precision error generated in the multiplication process is located only in the mantissae of the involved terms. Hence, we may assume that γ, β, γ^(n) and β^(n) are plain mantissae, namely that exp(γ) = exp(β) = 0. In order to study the finite precision error generated in the computation of the product γβ, we distinguish a number of cases, which are analytically presented below; in addition, a concise presentation of all these cases takes place in Tables 1 and 2, positioned at the end of the present sub-section. Thus: Case 1. Quantities γ and β share the same number of correct decimal digits, λ_γ = λ_β = λ.
Therefore, according to Definition 2, it holds that |γ − γ^(n)| = ε · 10^(−λ), ε ∈ [1, 10), and similarly for β, from which we deduce that we can express the quantities γ and β as follows:

γ = γ^(n) + ε_γ · 10^(−λ), β = β^(n) + ε_β · 10^(−λ), (3.1)

where ε_γ and ε_β are the signed mantissae of the finite precision error. Taking (3.1) into consideration, we may write the product as the sum of γ^(n)β^(n), the cross terms and the second-order term. Since, by hypothesis, λ_γ = λ_β = λ, the above expression becomes

γβ = γ^(n)β^(n) + (γ^(n)ε_β + β^(n)ε_γ) · 10^(−λ) + ε_γ ε_β · 10^(−2λ). (3.2)

Thus, according to Definition 2, the finite precision error (f. p. e.) with which the product γβ has been evaluated is

(γ^(n)ε_β + β^(n)ε_γ) · 10^(−λ) + ε_γ ε_β · 10^(−2λ). (3.3)

We point out that the subsequent analysis may use (3.3) with slight, straightforward modifications; in fact, in practice, it is sufficient to keep the first-order terms when λ ≥ 3, since the term ε_γ ε_β · 10^(−2λ) is practically negligible. Should the algorithm tend to fail, i.e., if λ < 3, then the full form of (3.3) can be used in the subsequent analysis, in a very straightforward manner. To compute the number of erroneous decimal digits (e. d. d.) of the product, it is absolutely necessary to distinguish the cases |man(γ)man(β)| < 10 and |man(γ)man(β)| ≥ 10, for reasons that will become evident in the following. In fact:

Case 1.i: |man(γ)man(β)| < 10. Immediately below we will show that, in this case, the maximum number of additional erroneous decimal digits generated in the multiplication γ · β is 2. Indeed, here, since we have assumed that all involved quantities have zero exponents, the product is given, using the aforementioned first-order approximation, by γβ ≅ γ^(n)β^(n) + (γ^(n)ε_β + β^(n)ε_γ) · 10^(−λ). Hence, given that |man(γ)man(β)| < 10, it is rather straightforward to show that the supremum that the quantity |γ^(n)ε_β + β^(n)ε_γ| may acquire is 110, since all the terms γ^(n), β^(n), ε_γ, ε_β are mantissae. Therefore, we distinguish the following sub-cases: Case 1.i.a: |γ^(n)ε_β + β^(n)ε_γ| ∈ [10, 110). Then, the combined error mantissa can be written as a quantity of the order of 10^1, which implies that the f. p. e. of the product is of the order of 10^(−(λ−1)) (3.7). The above relation (3.7), according to Definition 2, shows that the quantity γβ has been computed with two fewer correct decimal digits, namely with (λ − 2) correct decimal digits (c. d. d.), or equivalently with two additional erroneous decimal digits compared with those of the operands γ and β.
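The first-order error model (3.3) can be exercised numerically. The following sketch is our own construction: it contaminates two error-free mantissae at the λ-th decimal digit and verifies that the exactly propagated error of the product agrees with the first-order prediction, up to the practically negligible second-order term.

```python
import random

def first_order_product_error(g, b, eg, eb, lam):
    """Exact vs first-order f.p.e. of a product of contaminated mantissae.

    g, b   : error-free mantissae in [1, 10)
    eg, eb : signed error terms
    lam    : position of the first erroneous decimal digit
    """
    gc = g + eg * 10**(-lam)          # contaminated operands, as in (3.1)
    bc = b + eb * 10**(-lam)
    exact = gc * bc - g * b           # exact f.p.e. of the product
    first_order = (g * eb + b * eg) * 10**(-lam)
    return exact, first_order

random.seed(0)
for _ in range(1000):
    g, b = random.uniform(1, 10), random.uniform(1, 10)
    eg, eb = random.uniform(-10, 10), random.uniform(-10, 10)
    exact, approx = first_order_product_error(g, b, eg, eb, lam=8)
    assert abs(exact - approx) < 1e-12   # second-order term ~ 10**(-2*lam)
```

The loop confirms that, for λ = 8, the neglected second-order term (and the machine rounding itself) is many orders of magnitude below the first-order error.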
The above equality (3.19), together with Definition 2, dictates that the product has been evaluated with (λ − 1) correct decimal digits (c. d. d.). Even though (3.6) and (3.17) … (3.23) The latter implies that the quantity γβ has been computed with an additional correct decimal digit, i.e., that the multiplication operation has relaxed the finite precision error (f. p. e.) by one decimal digit. As in Case 1, we will use a first-order approximation in (3.26). Again, the introduced analysis may be extended in a straightforward manner to incorporate the higher-order term, too; however, as will become clear from the subsequent sections, the accuracy improvement is negligible, given also the dramatic increase in complexity. Thus, we may safely keep only the first-order terms in what follows.

Probabilities for Obtaining a Specific Number of Erroneous Digits in the Execution of a Single Multiplication
We once more adopt the distinction in cases made in Section 3.1, which is presented in Tables 1 and 2. Moreover, in connection with Case 2 (λ_γ ≠ λ_β), we cite the following Table 2: it refers to Case 2, where λ_γ ≠ λ_β and, in particular, λ_γ > λ_β, without any loss of generality. The first column, under the title "sub-cases", lists the eventual values of …

Consider any multiplication of two numbers γ and β sharing the same number of e. d. d. Thus, the quantity γβ is computed with ξ additional e. d. d. Then, following Section 3.1, ξ is a random variable, independent of λ. Therefore, the probabilities for obtaining a specific value of ξ are independent of λ; this suggests the use of the following notation:

Notation 2. Let λ_γ = λ_β = λ; then, the quantity γβ is computed with (n − λ) + ξ e. d. d., ξ = 2, 1, 0, −1, −2, … . We denote the corresponding probabilities by P(ξ; γ, β). ■

As before, γ and β are mantissae and ε_γ, ε_β are the mantissae of the stochastic part of the f. p. e. Hence, for the evaluation of P(ξ; γ, β), it is necessary to know the joint probability density function (pdf) of the random variables ε_γ, ε_β, which express the f. p. e. of the mantissae of γ and β, respectively; we shall symbolize this joint pdf as f(ε_γ, ε_β). We shall give the general formulae of the sought-for probabilities for a generic pdf. Later on, we shall specify a class of pdfs encountered in practice, we shall calculate the corresponding probabilities and we shall present the associated numeric results. At this point, since |ε_γ|, |ε_β| ∈ [1, 10), we form the square shown in Figure 1, where every mantissae couple (ε_γ, ε_β) corresponds to a certain point of the sub-domain D. We point out that the joint pdf f(ε_γ, ε_β) is a conditional pdf, where (ε_γ, ε_β) ∈ D, in the sense that it satisfies the relation ∬_D f(ε_γ, ε_β) dε_γ dε_β = 1. If the initial pdf is defined on a superset of D, then we restrict it to D by means of the conditional probability rule. Notice that the points of the "cross" |ε_γ| < 1 or |ε_β| < 1 do not belong to D, since ε_γ and ε_β are mantissae. We again distinguish the sub-cases introduced in Section 3.1.
In order to determine the sub-domain of D where inequality (3.6) holds, we assume, first, that both ε_γ and ε_β are positive mantissae and we draw the corresponding boundary straight lines; the set of points (ε_γ, ε_β) of D that lie between these lines forms the associated sub-domain, where the superscripts and subscripts of each sub-domain express the Case at hand. For an arbitrary pair of multiplication operands (γ, β), consider, now, the straight lines γ^(n)ε_β + β^(n)ε_γ = 10 and its symmetric counterpart. The union of the two sets of points of D lying between these lines and the corresponding boundaries is the entire ensemble of points in D satisfying (3.8), depicted in magenta in Figure 2; the associated probability is the integral of f(ε_γ, ε_β) over this union. Next, in accordance with the previous analysis, we draw the analogous straight lines for inequality (3.10); the union of the two sub-regions they bound (cyan area in Figure 2) yields the probability that a pair (ε_γ, ε_β) of error mantissae satisfies (3.10). Finally, concerning the remaining Case 1.i.d, this case may be treated as Case 1.i; however, here, as stated in Section 3.1, the computing machine performs a right shift in order to restore the product γβ to its canonical form. Therefore, the product is computed with a number of e. d. d. reduced by one with respect to the previous Case 1.i. Once again, analogous pairs of straight lines confine the corresponding sub-domains of D; their union (shown in magenta in Figure 3, for a specific pair of multiplication operands (γ, β)) is the region in which the product is computed with exactly one additional e. d. d., with probability (3.20). Along the same lines, we form the union shown in cyan in Figure 3 and compute the corresponding probability; subsequently, selecting the points of D lying between the next pair of lines (green area in Figure 3), we obtain the probability that the f. p. e. is relaxed by one digit. Along very similar lines, we define the remaining sub-domains of D (shown in yellow and blue, respectively, in Figure 3, for a specific pair of operands (γ, β)), and we eventually evaluate the corresponding probabilities (3.51).
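Instead of explicit integration over these sub-domains, the same probabilities can be approximated by Monte Carlo sampling. The sketch below is entirely our own and covers only the first-order model of Case 1.i, with an assumed uniform distribution of the error mantissae; it classifies each sampled pair by the order of magnitude of the combined error mantissa, the thresholds 10, 1 and 0.1 separating two, one and zero additional e. d. d.

```python
import math
import random
from collections import Counter

random.seed(1)
counts, n = Counter(), 100_000
while sum(counts.values()) < n:
    g, b = random.uniform(1, 10), random.uniform(1, 10)
    if g * b >= 10:                    # keep only Case 1.i: man(g)man(b) < 10
        continue
    eg = random.choice([-1, 1]) * random.uniform(1, 10)
    eb = random.choice([-1, 1]) * random.uniform(1, 10)
    E = abs(g * eb + b * eg)           # combined error mantissa, supremum 110
    if E == 0.0:                       # measure-zero degenerate case
        continue
    # |E| in [10, 110) -> 2 extra e. d. d., [1, 10) -> 1, [0.1, 1) -> 0, ...
    counts[min(math.floor(math.log10(E)) + 1, 2)] += 1

probs = {k: c / n for k, c in sorted(counts.items())}
print(probs)   # empirical estimates of the probabilities under the assumed pdf
```

The printed dictionary plays the role of the set of probabilities P(ξ; ·, ·), here for a hypothetical uniform error distribution rather than the pdfs analyzed in the text.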

Experimental Confirmation of the Previous Theoretical Results
In order to test the validity of the analysis and the results of the previous Sections 3.1 and 3.2, we have proceeded as described in Table 3. On the contrary, Tables 5 and 6 refer to the case where #e(γ) ≠ #e(β): Table 5 corresponds to the case in which this difference equals 1, while Table 6 corresponds to the one in which it equals 2. From both tables, the excellent agreement between theory and experiment is pretty evident. We have repeated the previous step, using uniformly distributed "contamination numbers" in a suitable interval, producing the contaminated operands. By repeating all actions of the previous Step 3 for the case of uniform contamination, the obtained results have confirmed an excellent agreement between theory and practice.
A set of concrete experiments and associated tables.
We have randomly chosen 25,000 couples of mantissae terms (γ, β), covering all cases referred to in Sections 3.1 and 3.2. We have embedded both γ and β in 64 d. d. precision, as described previously in the present sub-section, thus forming the corresponding contaminated couples. Tables 5 and 6 refer to the cases in which δ = 1 and δ = 2, respectively, where δ = #e(γ) − #e(β). For all arbitrarily chosen contaminated pairs, we have numerically evaluated the theoretical probabilities introduced in Sub-Sections 3.1 and 3.2. From all tables, the excellent agreement between theory and experiment is pretty evident. We would like to point out that this excellent agreement appears in all performed experiments, for many different values of the standard deviation σ.

Analysis of the Case of Many Successive Multiplications
In this section, we will compute the probability that successive multiplications generate a given number of erroneous d. d. in the final product.
In fact, suppose that any two numbers, γ and β, are multiplied in a computing machine using n decimal digits (d. d.) in the mantissa; let, say, p₁ = γβ. Next, p₁ is multiplied by an arbitrary number, say β₁, giving rise to p₂ = p₁β₁, and so on. The analysis of Section 3 indicates that a different number of erroneous d. d. emerges in each case, as analytically presented in Tables 1 and 2. Therefore, in order to estimate the number of erroneous decimal digits (e. d. d.) accumulated in the result of many successive multiplications, one may employ the following: 1. The mantissa of the finite precision error (f. p. e.) accumulated in an arbitrary quantity, say α, is a random variable. Therefore, when two quantities γ and β are multiplied with f. p. e. mantissae ε_γ and ε_β, then the f. p. error of the product γβ is itself a random variable.
2. As before, without any loss of generality, suppose that μ is the maximum number of e. d. d. between γ and β. Then, reminding that the symbol "#" stands for cardinal number, #e(γβ) may differ from μ by ξ e. d. d. Evidently, ξ is a random variable itself, having the integer values ξ = 2, 1, 0, −1, −2, … . 3. In Section 3, we have given a method for evaluating the probabilities P(ξ; γ, β), namely the probability that the product γβ is computed with a number of erroneous decimal digits (e. d. d.) differing by ξ decimal digits from the common e. d. d. of γ and β. In the same section, we have also proposed a method for evaluating the corresponding probabilities for the case in which the product is computed with a number of e. d. d. differing by ξ decimal digits from the maximum number of e. d. d. between γ and β. For brevity, in the present section, we will assume that in all successive multiplications the worst case always takes place, namely that the two multiplication operands share the same number of correct decimal digits (c. d. d.); moreover, we will momentarily simplify the notation accordingly. 4. Then, the f. p. e. mantissa of the product follows a normal distribution with zero (0) mean value and standard deviation σ ∈ [1.1, 6.5]. Hence, the probabilities P(ξ), ξ = 2, 1, 0, −1, −2, …, are immediately obtained via the analysis of Section 3. However, the present analysis is valid for any distribution of error mantissae that gives rise to a set of probabilities P(ξ). 5. For brevity and simplicity reasons, we shall assume that the ensemble of probabilities P(ξ) remains unaltered throughout the entire successive multiplications process. Should any concern about that arise, as we will explicitly state below, a proper source code may be used in order to compute P(ξ; γ, β) dynamically, while the essence of the following analysis remains intact.
Subsequently, we will compute the probability that N successive multiplications generate a given number of erroneous d. d. in the final product. In fact, suppose that one performs N successive multiplications and that a certain number of them produce two additional e. d. d., a certain number one additional e. d. d., and so on. We are interested in the mean value and the variance of the resulting quantity ω defined below. To achieve that, we shall present a set of quite general lemmas and theorems; for this reason, for the present section only, we shall introduce an alternative, equivalent notation, described below. In order to obtain the aforementioned bounds for ω, we shall employ the subsequent quite general results.

Lemma 1. Consider a multinomial distribution with possible outcomes ξ₁, ξ₂, …, ξ_k, with corresponding probabilities of appearance p₁, p₂, …, p_k. Suppose that one performs an experiment N times, whose outcome is modeled by this distribution. Let the first event, with outcome ξ₁, be observed n₁ times, the second event, with outcome ξ₂, n₂ times, and so on. Then, the quantity ω = ξ₁n₁ + ⋯ + ξ_k n_k = Σᵢ ξᵢnᵢ has a mean value μ_ω and a variance σ_ω²:

μ_ω = N Σᵢ ξᵢpᵢ, (4.1)

σ_ω² = N [Σᵢ ξᵢ²pᵢ − (Σᵢ ξᵢpᵢ)²]. (4.2)

Proof of Lemma 1. The probability that a specific outcome (n₁, …, n_k) occurs is given by the multinomial law.

For the mean value μ_ω: by definition, one sums ω over all multinomial outcomes, weighting each by the probability (4.3); treating each multiple sum separately, employing the previously given expression for the multinomial probability and expanding [ξ₁n₁ + ⋯ + ξ_k n_k], one obtains the partial sums leading to (4.1) and, similarly, to (4.2). Moreover, for a sufficiently large number of experiments N, ω follows a normal distribution with mean value and variance given by (4.1) and (4.2). ■

We will apply all the previous results to the three more important cases, described below. Case 1. The worst case, where in all multiplications man(γᵢ)man(βᵢ) < 10 holds. Case 2. The most favorable case, where man(γᵢ)man(βᵢ) ≥ 10 always holds. Case 3. The general case, where the distribution of man(γᵢ)man(βᵢ) is arbitrary.

Case 1. If at each multiplication inequality (3.4) holds, namely |man(γ)man(β)| < 10, then we choose the values of ξ around which the corresponding probabilities are more frequently encountered: 1. ξ = 2: the corresponding probability is of a very small order of magnitude (i.e., almost negligible). … We repeat that we use these probabilities only, since we consider the worst case as far as f. p. error generation and accumulation is concerned, namely that #e(γ) = #e(β). However, in this case, the right-hand side of inequality (4.6) is always positive and, moreover, is a monotonically increasing function of the number of multiplications. Consequently, the accumulated number of erroneous decimal digits of every product tends to increase rapidly, even for a particularly small number of successive multiplications. This is fully supported by the contents of Tables 7 and 8, below. Table 7.
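The mean and variance (4.1)–(4.2) of Lemma 1 can be checked by direct simulation; in the sketch below, the outcome values ξ ∈ {2, 1, 0, −1, −2} are those of the text, while the probabilities are purely illustrative placeholders, not the ones derived in Section 3:

```python
import random

def omega_mean_var(N, outcomes, probs):
    """Mean and variance of omega = sum_i xi_i * n_i for a multinomial sample,
    per (4.1)-(4.2): mean = N*sum(xi*p), var = N*(sum(xi^2*p) - (sum(xi*p))^2)."""
    m1 = sum(x * p for x, p in zip(outcomes, probs))
    m2 = sum(x * x * p for x, p in zip(outcomes, probs))
    return N * m1, N * (m2 - m1 * m1)

outcomes = [2, 1, 0, -1, -2]
probs = [0.05, 0.55, 0.25, 0.10, 0.05]       # hypothetical placeholder values
N = 200
mean, var = omega_mean_var(N, outcomes, probs)

# Empirical check: omega is simply the sum of N i.i.d. draws of xi.
random.seed(2)
trials = 5_000
samples = [sum(random.choices(outcomes, probs, k=N)) for _ in range(trials)]
emp_mean = sum(samples) / trials
emp_var = sum((s - emp_mean) ** 2 for s in samples) / trials
assert abs(emp_mean - mean) < 2 and abs(emp_var - var) < 25
```

For these placeholder probabilities, (4.1)–(4.2) give μ_ω = 90 and σ_ω² = 169.5, and the empirical moments agree within sampling noise.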
x is the percentage of multiplications in which inequality (3.16) holds. Thus, when x < 50% holds, the product becomes completely erroneous after a relatively small number of multiplications, in full accordance with the theoretical predictions. These predictions are based on the results of Theorems 1, 3 and 5; they refer to the lower bound of the expected e. d. d. for each product and they are presented in the last column for a prescribed confidence level. For each case, the experimental and theoretical results manifest an excellent agreement. One may, moreover, dynamically compute the (within ±1 d. d.) number of erroneous decimal digits accumulated at the i-th, arbitrary, product, i = 1, 2, …, by applying the method introduced in Section 3. This dynamic computation can be made by a rather straightforward code based on the results of Section 3. ■ Case 3. In the general case, either inequality man(γᵢ)man(βᵢ) ≥ 10 (3.16) or inequality man(γᵢ)man(βᵢ) < 10 (3.4) arbitrarily holds. Then, in order to obtain a rigorous estimation of the number of e. d. d. in each multiplication, together with the corresponding probability, one must know the statistical distribution of man(γᵢ)man(βᵢ), as compared to ten (10). In general, these distributions may highly depend on the algorithm at hand. However, in order to obtain an estimation of the corresponding generated f. p. error, we will state the very interesting example where both man(γᵢ) and man(βᵢ) follow a uniform distribution in the interval [1, 10). In fact, in this case, the set of (γᵢ, βᵢ) in the γβ-plane satisfying (3.4) is the 2D domain bounded by the straight lines γ = 1 and β = 1 and the hyperbola γβ = 10. Dually, the 2D domain in which the alternative inequality (3.16) holds is the one limited by the straight lines γ = 10 and β = 10 and the same hyperbola. Then, we follow the results of Section 3 and we use the graphical representation associated with the square of Figure 1 for the probability density function f(γ, β), defined on this square except the cross.
Consequently, in a rather straightforward manner, we obtain P(man(γ)man(β) < 10) ≅ 0.7157.

In case there is no discernible distribution of man(γ)man(β) within the course of the algorithm, we may dynamically calculate the finite precision error accumulation for every product, in order to estimate the accumulation of the finite precision error in the algorithm in general, as described in Theorems 2 and 4; we remind that Theorem 2 refers to the worst case, in which man(γᵢ)man(βᵢ) < 10 always holds, while Theorem 4 is connected to the dual inequality (3.16), which is the most favorable from the point of view of generation of finite precision error during multiplication. In any case, the following holds: Theorem 5. Suppose again that one performs N successive multiplications pᵢ₊₁ = pᵢβᵢ, i = 0, …, N − 1, and that for a fraction, say x, of these multiplications inequality (3.16) holds, while for the remaining fraction 1 − x of them inequality (3.4) holds. Then, concerning the f. p. error accumulation in the products pᵢ, i = 1, …, N, the following two cases hold: (i) If x > 0.5, the product tends to behave as described in Case 2, i.e., the overall number of e. d. d. of pᵢ, i = 1, …, N, is restrained. The closer to 1 the fraction x is, the greater the restriction of the number of e. d. d. accumulated in the products (see Table 8). (ii) If x < 0.5, the accumulated f. p. error in the products is amplified. The closer to 0 x is, the more rapidly the f. p. e. accumulated in the products grows (Table 7). ■ The theoretical approach and the associated results introduced in the present Section 4 have been fully confirmed experimentally, as will be described in Section 6 of the present work.
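Theorem 5's dichotomy can be illustrated with a deliberately simplified toy model (entirely our own construction, not the paper's exact error dynamics): at each multiplication the e. d. d. count decreases by one with probability x, mimicking the favorable inequality (3.16), and increases by one otherwise, never dropping below zero.

```python
import random

def simulate_edd(num_mults, x, seed=0):
    """Toy accumulation of e. d. d. over successive multiplications:
    with probability x the error is relaxed by one digit, otherwise one
    digit is added (a crude stand-in for the two inequalities of Theorem 5)."""
    rng = random.Random(seed)
    edd = 0
    for _ in range(num_mults):
        edd += -1 if rng.random() < x else 1
        edd = max(edd, 0)       # cannot have fewer than zero erroneous digits
    return edd

print(simulate_edd(10_000, x=0.2))   # x < 0.5: the error amplifies
print(simulate_edd(10_000, x=0.8))   # x > 0.5: the error stays restrained
```

Even this crude random walk reproduces the two regimes: a drift towards rapid error growth for x < 0.5 and a bounded, repeatedly re-absorbed error for x > 0.5.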

Comparing the Finite Precision Error Generation and Accumulation during Execution of the Same Algorithm Including Successive Multiplications, with Different Finite Word Length/Precision
We shall begin by giving a brief description of the goal of the present section: consider an algorithm A, involving multiplications at each iteration. We execute A first with n decimal digits in the mantissa (say n ≥ 7) and simultaneously with m decimal digits (d. d.) in the mantissa, where we assume that m ≥ 2n + 7, using exactly the same input in both cases. Consider any quantity γ of A and let γ_{n,i} be the value of this quantity at the i-th iteration of A, where all calculations are made with precision of n d. d. in the mantissa. Similarly, let γ_{m,i} be the value of this quantity at the same iteration of A, when all operations are made with precision of m d. d. In the present section, we will compare the number of erroneous d. d. with which any two such quantities γ_{n,i}, γ_{m,i} are calculated and, in particular, we will study the difference d_i = #edd(γ_{n,i}) − #edd(γ_{m,i}). In Section 4 we have concluded that, independently of the finite word length, the number of e. d. d. of any product π_i follows a normal distribution if the number of successive multiplications which generated π_i is greater than or equal to 30. Thus, the difference in the number of e. d. d. between γ_{n,i} and γ_{m,i} also follows a normal distribution, with mean value zero and a variance that can be immediately estimated from the results of Section 4. Hence, one may deduce:

Experiments that Fully Support the Theoretical Analysis
In this section, we shall introduce a number of experiments that have been specifically designed by the authors, in order to test the validity and the reliability of the theoretical analysis and results presented in the previous sections.

Description of a First Class of Experiments that Confirm the Theoretical Approach and Deductions of Section 4
Aiming at testing the methodology and the associated theoretical results introduced in Sections 3 and 4, we have proceeded as follows: first, we have selected a set of 10 randomly chosen floating point numbers, each expressed with 16 and with 42 d. d. in the mantissa respectively, maintaining the natural biunivocal relation (π₁, β) between each product and its factor. We continued in this way, forming pairs (π₂, β), …, (π_i, β), etc. with the same factor β. In all these cases we evaluated the number of e. d. d. with which the products π_i are computed, as it has been previously described in connection with π₁ and β. In addition, whenever an exponent exceeded a large absolute value (e. g. 50) during the previous process, it was set to zero, since the exponent 10^E, E ∈ ℤ, of the scientific form plays no role in the f. p. e. generation and accumulation in the multiplication process in general. We have taken this action in order to avoid possible effects of overflow or underflow in consecutive multiplications, since these easily spotted problems have nothing to do with the present study. However, we have kept the overall exponent of each product by simple recursive additions.
We have repeated the aforementioned experiment for various values of p, where always p ∈ [0, 1]. At this point, we have distinguished two additional sub-cases: (a) p < 0.5 and (b) p > 0.5. Sub-case (a) is quite analogous to Case 1, for which inequality (3.4) holds permanently. Specifically, the obtained products π_i = π_{i−1} β_i have been calculated with all digits erroneous after a relatively small number of iterations, as shown in Table 7. The smaller the fraction p is, the more serious the f. p. error accumulation is.
On the contrary, Sub-case (b) is quite similar to Case 2, in the sense that the products π_i = π_{i−1} β_i manifested substantially smaller f. p. e. accumulation, as Table 8 demonstrates. In full accordance with the theoretical analysis, the smaller q = 1 − p is, the smaller the f. p. e. accumulated in π_i is.
In connection to Case 1, we have performed the following experiment: we have implemented an artificial algorithm which forces all successive multiplications to satisfy (3.4). The flow chart of this algorithm is the following: starting from an arbitrary number α ∈ [1, 10), we express it with a certain number n of decimal digits (d. d.), as well as with m = 2n + 10 d. d. We then multiply α by itself in both precisions. In case α² exceeds ten, then we subtract a properly selected positive integer κ from α² in both precisions; we do so in order that 1 ≤ α² − κ < 10 now holds. We stress that κ is adequately selected to be an integer, in order that its subtraction from the initial α² does not add any e. d. d.; in all performed experiments, we ensured this by checking the number of e. d. d. of the difference (α² − κ), via Definition 2. By comparing product α² in both precisions, we calculate the erroneous decimal digits (e. d. d.) of α² in the n digits precision. Next, we set the exponent of α² equal to zero, in order to avoid overflow or underflow, and we let the obtained mantissa of α² be a new number, α₁, expressed in both precisions. Then, we repeat the previous actions by letting α₁ play the role of α and we evaluate and store the number of e. d. d. with which α₁² is computed, after ensuring that α₁² ∈ [1, 10) via a proper subtraction α₁² − κ₁, as before. We continue this process until the obtained α_i² is calculated in the n digits precision with all its digits erroneous, while we have ensured that α_i² ∈ [1, 10). We have executed this algorithm for 1000 different initial values of α, always belonging to the interval [1, 10). The obtained maximum number of iterations for which α_i² was totally erroneous is shown in Table 9 for various values of precision n. Thus, we obtain the particularly important result that α_i² is totally erroneous after an impressively small number of iterations, in comparison to the employed precision, in full accordance with the theory and in particular with Theorem 1 of Section 4.
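The toxic artificial algorithm described above can be sketched as follows. This is a minimal illustration, not the authors' code: the function names are hypothetical, the integer shift is chosen so the result always lands in [1, 2), and the e. d. d. count uses a relative-error proxy instead of the paper's Definition 2, so the iteration counts it reports are indicative rather than those of Table 9.

```python
import math
from decimal import Decimal, Context

def erroneous_digits(approx, reference, n):
    """Crude e. d. d. estimate of an n-digit value against a high-precision
    reference, via the relative distance between the two."""
    if approx == reference:
        return 0
    rel = abs((approx - reference) / reference)
    return max(0, n - max(0, math.floor(-math.log10(float(rel)))))

def toxic_squaring(alpha, n, max_iter=500):
    """Artificial algorithm forcing (3.4): square the running value, then
    subtract an exact integer to bring it back into [1, 10).  Runs in n digits
    and in m = 2n + 10 digits simultaneously and returns the iteration at
    which all n digits of the low-precision value are erroneous."""
    m = 2 * n + 10
    lo_ctx, hi_ctx = Context(prec=n), Context(prec=m)
    lo = hi = Decimal(alpha)
    for i in range(1, max_iter + 1):
        lo = lo_ctx.multiply(lo, lo)
        hi = hi_ctx.multiply(hi, hi)
        k = int(hi) - 1                # exact integer shift back into [1, 2)
        if k > 0:
            lo = lo_ctx.subtract(lo, Decimal(k))
            hi = hi_ctx.subtract(hi, Decimal(k))
        if erroneous_digits(lo, hi, n) >= n:
            return i
    return None
```

Even in this toy version the low-precision value loses all of its 16 digits after only a few dozen iterations: the squaring doubles the relative error at every step and the subtraction amplifies it further, which is exactly the "toxicity" the experiment is designed to expose.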
We have, again, performed an additional experiment, in which we have written an artificial algorithm that forces all successive multiplications to satisfy man(γ_i)man(β_i) ≥ 10. Indeed, this algorithm is quite similar to the one described in connection with Case 1 above and it has the following flow chart: starting, again, from an arbitrary number α ∈ [1, 10), we express it in both n and m = 2n + 10 d. d. precision. We then execute α · α in both precisions. In case α² is smaller than ten, then we add a properly selected positive integer κ to α² in both precisions, so that α² + κ ≥ 10 holds. We stress that α² + κ never manifests any additional e. d. d. By comparing product α² in both precisions, we calculate and store the n-precision number's e. d. d., again by means of Definition 2 and Theorem 5. Next, we set the exponent of α² + κ to zero, once more to avoid overflow or underflow, and we let the obtained mantissa be a new number α₁, expressed in both precisions. Next, we repeat the previous actions by letting α₁ play the role of α and we store and evaluate the number of e. d. d. with which α₁² is computed, after ensuring that the product is at least ten, by adding a proper κ₁ if necessary. We repeated this process for an arbitrarily large number of iterations, while monitoring the f. p. error of α_i². We have executed this algorithm 10 times in 16 and 42 d. d. precision for 1000 initial values of α, always belonging to the interval [1, 10). The experiment has shown that the number of erroneous decimal digits with which α_i² has been calculated never exceeded two (2), while the mean value of these e. d. d. remained always pretty close to zero, even for the larger numbers of iterations of the algorithm, in full accordance with Theorem 3.
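The dual, favorable algorithm admits an equally short sketch. Again, this is an illustrative reconstruction under stated assumptions (hypothetical function names, minimal integer shift, relative-error proxy for the e. d. d.), not the authors' implementation.

```python
import math
from decimal import Decimal, Context

def erroneous_digits(approx, reference, n):
    """Crude e. d. d. estimate of an n-digit value against a high-precision
    reference, via the relative distance between the two."""
    if approx == reference:
        return 0
    rel = abs((approx - reference) / reference)
    return max(0, n - max(0, math.floor(-math.log10(float(rel)))))

def stable_squaring(alpha, n, iters=500):
    """Dual artificial algorithm forcing (3.16): after each squaring, add an
    exact integer so that the product is at least ten, then renormalize the
    mantissa by an exact division by ten.  Runs in n and in m = 2n + 10 digits
    and returns the e. d. d. of the n-digit value after `iters` steps."""
    m = 2 * n + 10
    lo_ctx, hi_ctx = Context(prec=n), Context(prec=m)
    lo = hi = Decimal(alpha)
    for _ in range(iters):
        lo = lo_ctx.multiply(lo, lo)
        hi = hi_ctx.multiply(hi, hi)
        if hi < 10:                          # force the product to at least 10
            k = Decimal(10 - int(hi))
            lo, hi = lo_ctx.add(lo, k), hi_ctx.add(hi, k)
        lo = lo_ctx.divide(lo, Decimal(10))  # exact exponent shift into [1, 10)
        hi = hi_ctx.divide(hi, Decimal(10))
    return erroneous_digits(lo, hi, n)
```

In contrast to the toxic variant, here the error stays restrained even after hundreds of iterations, matching the experimental finding that the e. d. d. never exceeded two.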

Description of a Third Experiment that Fully Supports the Theoretical Results of Section 5
We have experimentally tested the correctness of Theorem 7 of Section 5, by performing successive multiplications as described in Section 4. However, now, each multiplication has been executed three times, with 16 and with two higher numbers of d. d. in the mantissa. The corresponding results are shown in Table 10 and they fully justify not only the aforementioned Theorems of Section 5, but also those of Section 4.

Eventual Applications Associated with the Present Work
In the section in hand, we shall present and highlight an ensemble of possible and probable applications, which are based on the analysis and methodology introduced here. Thus: A. In certain applications, like the ones that will be described below, it is preferable and/or necessary to use finite element methods which employ polynomials of high order to approximate the considered function on each element, usually called "higher order basis functions". In this approach, if the higher order basis functions are of order p, then one must use elements consisting of (p + 1) nodes; moreover, one frequently uses the following basis functions ([20]):

N_i(x) = ∏_{j=1, j≠i}^{p+1} (x − x_j) / (x_i − x_j)    (7.1)

where (a) i is the cardinal number of the node in hand, i = 1, 2, …, p + 1, (b) j represents the cardinal number of the other nodes of the specific element, hence j = 1, 2, 3, …, p + 1, j ≠ i, (c) x is the independent variable of the polynomial basis function and (d) evidently x_j, j = 1, 2, 3, …, p + 1 is the value that this variable acquires on the j-th node of the element in hand.
It is rather clear that both the numerator and the denominator in relation (7.1) are results of successive multiplications.
However, even in the case of second order basis functions, one employs the basis functions

N_i(x) = ∏_{j=1, j≠i}^{3} (x − x_j) / (x_i − x_j),  i = 1, 2, 3,

which include several multiplications. Consequently, the entire previous analysis may be applied immediately, so that, together with the computation of N_i(x), i = 1, 2, …, p + 1, the user may know the exact number of erroneous decimal digits with which this quantity has been evaluated, each time. Clearly, in case the numerical value of such a basis function for a certain x is highly or even totally "contaminated", the user may immediately receive a corresponding signal.
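The multiplication chains of relation (7.1) can be evaluated under a chosen mantissa precision as below. This is a minimal sketch, assuming Python's `decimal` contexts as a stand-in for the employed finite word length; the function name and the sample nodes are illustrative, and running the same loop with a second, higher-precision context allows the dual-precision e. d. d. tracking described in the text.

```python
from decimal import Decimal, Context

def lagrange_basis(i, x, nodes, ctx):
    """Evaluate N_i(x) = prod_{j != i} (x - x_j)/(x_i - x_j) as explicit
    chains of successive multiplications, every operation rounded by ctx."""
    num = den = Decimal(1)
    for j, xj in enumerate(nodes):
        if j != i:
            num = ctx.multiply(num, ctx.subtract(x, xj))
            den = ctx.multiply(den, ctx.subtract(nodes[i], xj))
    return ctx.divide(num, den)

# Second order (p = 2) element with three nodes, evaluated with n = 16 d. d.
ctx16 = Context(prec=16)
nodes = [Decimal("0"), Decimal("0.5"), Decimal("1")]
values = [lagrange_basis(i, Decimal("0.3"), nodes, ctx16) for i in range(3)]
```

As a quick sanity check, the basis functions form a partition of unity, so their values at any point sum to one up to round-off, and N_i equals one exactly at its own node.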
Therefore, more specifically, this method can be applied to the subsequent applications: 1. In research associated with the modelling of the fatigue of materials employed in the rail-wheel system ( [21], [22]). 2. In the study of rail corrugation ( [23], [24]).
3. In the study of the influence of bending on the value of the friction coefficient ([25]). 4. In tackling important classes of contact problems in elastostatics ([26], [27]). 5. In the investigation and analysis of the spatial stress-strain states of a pipe with respect to its corrosion damage, taking into account various types of complex loading ([28]). 6. In real time analysis of local damage in wear-and-fatigue tests, whenever finite element methods are required/applied ([29]).
It is worth noticing that in many of the aforementioned studies the involved models frequently include multiplications; consequently, the approach introduced in the present manuscript may also prove helpful in the associated numerical experiments.
B. The sequence of powers of a real number.
Consider a single real number, say μ > 0. Moreover, consider the sequence of powers of μ, usually computed recursively by means of the following succession of multiplications:

μ² = μ · μ,  μ³ = μ² · μ,  …,  μ^(k+1) = μ^k · μ,  k ∈ ℕ.

Suppose that the numerical value of μ > 0 is such that, statistically, multiplication μ^k · μ, k ∈ ℕ, satisfies inequality (3.4) more frequently than the opposite one (3.16), namely |man(μ^k) · man(μ)| < 10 for most k. Then, according to the previous analysis, one expects that μ^k will continually be evaluated with a larger number of erroneous decimal digits (e. d. d.) as k grows. To verify/demonstrate that, we have employed μ = 1.12, we have generated the sequence of the powers of μ by means of the aforementioned sequence of successive multiplications and we have evaluated the exact number of e. d. d. with which μ^k is calculated each time; the determination of the exact number of erroneous d. d. has been made as described in Sections 5 and 6, using n = 16 decimal digits word length and m = 40 decimal digits precision. The associated results are depicted in Figure 4, from which it is evident that after the impressively small number of 55 iterations, the power μ^k is computed with all its digits erroneous.
We must emphasize that, in order to circumvent the effects of overflow, each time we have multiplied the mantissae only and not the entire numbers. Equivalently, whenever the exponent of μ^k exceeded a rather large number, say E = 50, then we have divided μ^k by 10^E. However, we have registered the power's exponent each time by simple recursive additions. It is important to stress that, in both these approaches, the numbers of erroneous decimal digits accumulated in μ^k were identical. Moreover, μ has been chosen in such a way that inequality |man(μ^k) · man(μ)| < 10 holds more frequently than the dual one (3.16). As a consequence, the number of e. d. d. grows rapidly, in full accordance with the analysis and the results of Sections 4 and 6.
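The dual-precision tracking loop for the powers of μ, including the exact power-of-ten rescaling described above, can be sketched as follows. The function name is hypothetical, and the e. d. d. estimate here is a crude relative-error proxy rather than the paper's exact Definition-2 tracking, so the counts it reports are illustrative only and need not coincide with Figure 4.

```python
import math
from decimal import Decimal, Context

def track_power_edd(mu, k, n=16, m=40):
    """Compute mu**k by k-1 successive multiplications, simultaneously with n
    and with m mantissa digits, clamping the exponent by exact divisions by
    10**50, and return a relative-error estimate of the e. d. d. of the
    n-digit result."""
    lo_ctx, hi_ctx = Context(prec=n), Context(prec=m)
    factor = Decimal(mu)
    lo = hi = factor
    for _ in range(k - 1):
        lo = lo_ctx.multiply(lo, factor)
        hi = hi_ctx.multiply(hi, factor)
        if hi >= Decimal("1e50"):                 # exact power-of-ten rescaling;
            lo = lo_ctx.divide(lo, Decimal("1e50"))   # adds no round-off error
            hi = hi_ctx.divide(hi, Decimal("1e50"))
    if lo == hi:
        return 0
    rel = abs((lo - hi) / hi)
    return max(0, n - max(0, math.floor(-math.log10(float(rel)))))
```

The rescaling step only shifts the decimal exponent, so it leaves the mantissae, and hence the accumulated error, untouched, in agreement with the remark that both bookkeeping approaches gave identical e. d. d. counts.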
C. Continual multiplication of contaminated numbers.
Exactly the same analysis holds true in the case where, instead of multiplying μ with itself to produce μ^k, we perform the sequence of multiplications

π_k = α₁ · α₂ · ⋯ · α_k,  k ∈ ℕ,

where α₁, α₂, …, α_k is an arbitrary sequence of contaminated numbers, in which erroneous digits have been accumulated, probably due to another procedure. We believe that application (D), which follows, will clarify the content of these statements.

D. Finite Precision Error Accumulated in Various Fast Kalman Algorithms.
One of the most widely used filtering procedures is the Kalman one [30]. In many of these algorithmic schemes a certain scalar quantity, say α(k + 1), k ∈ ℕ, is updated at the (k + 1)-th time instant by means of a formula of the type

α(k + 1) = λ · α(k) · e(k + 1),    (7.3)

where (i) λ ∈ ℝ is the so-called "forgetting factor", almost always belonging to the interval [0.97, 0.99] and (ii) e(k + 1) is another quantity of the algorithm, which is also computed recursively. In many applications [30], the quantities α(k) and e(k + 1) have values such that inequality (3.4), |man(α(k)) · man(e(k + 1))| < 10, holds very frequently, statistically. Hence, every formula of the type (7.3) tends to generate one additional erroneous decimal digit within a relatively small number of recursions; this erroneous digit is added to the value of α(k + 1). Subsequently, since α(k + 1) enters, directly or indirectly, in all other formulae of the corresponding Kalman algorithms, including e(k + 1), it follows that these schemes are very frequently destroyed due to this successive-multiplication-based finite precision error, in an impressively small number of iterations [30].
Thus, for example, the fastest existing Kalman algorithm (the FAEST [31]) can never converge in practice, due to this type of f. p. e.
In general, the methodology introduced here allows both for the evaluation of the number of erroneous decimal digits with which all quantities of any fast Kalman algorithm are computed and for devising methods of stabilizing various algorithms of this class ([32]).
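A scalar recursion of the type (7.3) can be monitored with the same dual-precision technique. The sketch below is a toy model under explicit assumptions: the function name, the synthetic driving terms e(k + 1) near one, and the relative-error proxy for the e. d. d. are all illustrative; it is not the FAEST algorithm nor the authors' tracking code.

```python
import math
import random
from decimal import Decimal, Context

def kalman_scalar_edd(lam, steps, n=16, m=42, seed=7):
    """Track the e. d. d. of a scalar updated as a(k+1) = lam * a(k) * e(k+1),
    carried simultaneously in n and in m mantissa digits, for synthetic
    driving terms e(k+1) close to one."""
    rng = random.Random(seed)
    lo_ctx, hi_ctx = Context(prec=n), Context(prec=m)
    lo = hi = Decimal(1)
    lam_d = Decimal(lam)
    for _ in range(steps):
        e = Decimal(str(1 + rng.random() / 50))   # same e(k+1) fed to both runs
        lo = lo_ctx.multiply(lo_ctx.multiply(lam_d, lo), e)
        hi = hi_ctx.multiply(hi_ctx.multiply(lam_d, hi), e)
    if lo == hi:
        return 0
    rel = abs((lo - hi) / hi)
    return max(0, n - max(0, math.floor(-math.log10(float(rel)))))
```

In a real fast Kalman scheme the contaminated α(k + 1) feeds back into all other recursions, which is what amplifies the effect far beyond this isolated scalar model.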

Conclusion
In this paper, we have presented a new approach to the study of the finite precision error generation and accumulation in the multiplication process. We have initially given a strict mathematical definition of the number of correct digits of a real quantity expressed in any finite word length. We emphasize that although the analysis introduced here is made in the decimal radix, it offers accurate results and prediction of the f. p. e. generated and accumulated in any computing machine that performs an arbitrary number of multiplications, successively.
Along this new approach, we have shown the following fundamental result: suppose that one executes an arbitrary multiplication γ = αβ in a computing environment employing the equivalent of n decimal digits in the mantissa. Moreover, let operands α and β have ε erroneous decimal digits at most in their mantissae. Then, the number of e. d. d. with which product γ is calculated depends on the value of |man(α) man(β)|. In fact, if inequality |man(α) man(β)| < 10 holds, then product γ is calculated with at most ε + 2 erroneous d. d., or with ε, ε − 1, ε − 2, ε − 3 e. d. d. In case the complementary inequality holds, then product γ may be calculated with up to ε + 1, or with ε, ε − ρ, ρ = 1, 2, 3, 4 e. d. d.
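The dichotomy just summarized reduces to a one-line decision rule. The helper below is a hypothetical convenience function, not part of the paper's apparatus; it merely encodes the two worst-case bounds stated above.

```python
def edd_bound(man_a, man_b, eps):
    """Upper bound on the erroneous decimal digits of a product whose operands
    carry at most eps erroneous digits each, per the dichotomy stated above."""
    # |man(a) * man(b)| < 10 is the unfavorable ("toxic") case: up to eps + 2.
    # The complementary case is restrained: up to eps + 1.
    return eps + 2 if abs(man_a * man_b) < 10 else eps + 1
```

For example, operands with mantissae 1.5 and 2.0 fall in the toxic case, while mantissae 5.0 and 4.0 fall in the restrained one.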
We have also shown that the chance of encountering one of the aforementioned cases heavily depends on the exponent of the quantity x + y · 10^δ, where x and y are the mantissae of the f. p. errors of the multiplication operands and δ = #edd(α) − #edd(β). In order to calculate the probabilities with which each one of the aforementioned cases holds, we have introduced the rectangular-shaped set of points of Figure 1 and we have defined the sub-domains to which the values of the random variables x and y must correspond, in order that product γ is computed with a specific number of e. d. d. Then, by integration over the corresponding sub-domains, we have calculated the associated probabilities.
We have also given exact formulae for the mean value and standard deviation of the number of e. d. d. accumulated in the results of successive multiplications.
Moreover, we have established that, if we perform the exact same set of successive multiplications using n and m > 2n + 7 d. d., then we may easily track the number of e. d. d. accumulated in the n-precision results.
Finally, in order to test the validity of the introduced theoretical analysis, we have performed a number of specially developed experiments. The results of these experiments fully supported the theoretical analysis introduced here.
We emphasize that the developed novel methodology is expandable, so as to tackle the finite precision error generation and accumulation in any arithmetic operation; this will be the subject of forthcoming manuscripts.