Exact Analysis of the Finite Precision Error Generated in Important Chaotic Maps and Complete Numerical Remedy of These Schemes

Abstract: A first aim of the present work is the determination of the actual sources of the "finite precision error" generation and accumulation in two important algorithms: Bernoulli's map and the folded Baker's map. These two computational schemes attract the attention of a growing number of researchers, in connection with a wide range of applications. However, both Bernoulli's and Baker's maps, when implemented in a contemporary computing machine, suffer from a very serious numerical error due to the finite word length. This error causally leads to a failure of these two algorithms after a relatively very small number of iterations. In the present manuscript, novel methods for eliminating this numerical error are presented. In fact, the introduced approach succeeds in executing Bernoulli's map and the folded Baker's map in a computing machine for many hundreds of thousands of iterations, offering results practically free of finite precision error. These successful techniques are based on the determination and understanding of the substantial sources of the finite precision (round-off) error that is generated and accumulated in these two important chaotic maps.


Introduction
In recent years, quite extensive research in connection with applications of Bernoulli's and Baker's maps has taken place. However, as we demonstrate in the present work, and as is also reported in the bibliography (e.g., [1]), a serious amount of finite precision error is accumulated during the execution of these two algorithms.
As far as the applicability of Bernoulli's map is concerned, we would like to emphasize that the use of Bernoulli chaotic maps grows constantly. Thus, for example, this chaotic map is employed for image watermarking [2]; in this publication, the authors perform a statistical analysis of a watermarking system based on Bernoulli chaotic sequences. In [3], a comparison of the performance of various chaotic maps, used for image watermarking, is presented.
Another application of the Bernoulli map is associated with the construction of reliable random number generators. In this framework, the authors of [4] employ chaotic true orbits of the Bernoulli map on quadratic algebraic integers, in order to implement a pseudorandom number generator. The authors report that the generated numbers manifest good randomness. In [5], a hardware implementation of pseudo-random number generators based on chaotic maps is introduced.
The Bernoulli map is also employed in cryptography. In particular, a number of image encryption algorithms based on this chaotic map have been proposed, given that the associated encryption schemes are fast and easily implemented in hardware and software. For example, the authors of [6] use the Bernoulli chaotic map in order to embed a secret message in an image.
In connection with Baker's map, we would like to cite the following: in [7], an alternative chaotic image encryption based on Baker's map is presented. The authors mention that this enhanced symmetric-key algorithm can support a variable-size image, in contrast to other encryption algorithms mainly based on Baker's map, which require a square image. In [8], a method for obtaining cryptographically strong 8 × 8 substitution boxes (S-boxes) is presented. The method is based on the chaotic Baker's map and a "mini version" of a new block cipher. In [9], a family of pure analog coding schemes with good properties, constructed from dynamical systems governed by chaotic functions such as Baker's map, is proposed. The authors of [10] show that pattern classification systems can be designed upon training algorithms, such as Baker's map, intended to control the qualitative behavior of a nonlinear system. In [11], a comparison of the efficiency of three chaotic maps (Baker's, Cat and Line maps) is performed, as far as cryptography is concerned.
However, both Bernoulli's and Baker's maps manifest serious finite precision error, when implemented in a contemporary computing machine. The accumulation of this finite precision error makes both algorithms generate completely unreliable results, after a relatively very small number of iterations (e.g., [1,12]).

Brief Summary and Organization of the Present Work
Here, the instability of these two chaotic maps, the Bernoulli map and the Baker's one, is confirmed, and the actual reason for the causation and accumulation of the related finite word length numerical error is spotted and demonstrated. Moreover, novel methods for the complete stabilization of these algorithmic schemes are introduced. More specifically: in Section 2 of the manuscript in hand, the authors give the employed notation and symbolism. Moreover, they state a set of fundamental definitions upon which the comparison of two arbitrary floating-point numbers in finite precision is achieved. Eventually, a crucial definition is derived, which allows for the exact evaluation of the number of erroneous digits with which an arbitrary quantity is computed in a machine that employs a finite word length.
In Section 3, the authors demonstrate that, in any subtraction executed in a computing machine using a finite word length, a quite serious amount of finite precision error may be generated, due to two types of numerical inaccuracy: a deterministic or causal error and a random or erratic one. The authors establish that the causal error is due to the difference between the exponent of the obtained subtraction result and the maximum exponent of the subtraction operands. On the other hand, the erratic error is intimately connected to the method that the computing machine employs in order to fill in the missing digits.
In Section 4, the exact sources of finite precision error in the Bernoulli chaotic map are shown for the first time. It is demonstrated that these sources make the results offered by this algorithm totally unreliable after a relatively very small number of iterations; this renders the classical execution of the Bernoulli map totally inapplicable in practice. We strongly emphasize that this rapid failure of the Bernoulli algorithm occurs even when a particularly large finite word length is employed, say one including 40,000 decimal digits or more, e.g., by using tools offering unlimited precision arithmetic. On the basis of these results, a novel method for the complete stabilization of the Bernoulli map is introduced.
In Section 5, the authors proceed along similar directions, in connection with Baker's map. In fact, the exact sources of finite precision error of this chaotic map are introduced for the first time. It is, again, demonstrated that these sources make the results offered by Baker's algorithm completely unreliable after a relatively very limited number of recursions; this renders the classical execution of the Baker's map, too, entirely inapplicable in practice. We, once more, would like to emphasize that these serious numerical problems appear causally, independently of the employed finite word length, no matter how large this length is. Using the previous results, a novel method for the complete stabilization of the Baker's map is accomplished for the first time.
In Section 6, a conclusion incorporating a summary of the obtained results is presented.

Employed Symbolism, Notation and a Number of Fundamental Definitions
For any quantity expressed in scientific format or canonical form, like for example the IEEE 754 floating-point format, we shall employ a dedicated notation for (i) the mantissa and (ii) the exponent of the quantity. The analysis presented below is made in connection with the decimal arithmetic system, simply because this is far more familiar to the reader. However, we stress that the related analysis is very well applicable to the binary system, too. In fact, both the theoretical and the experimental results introduced in the present work show that the associated approach, which is based on the decimal radix, is a very reliable and robust model for the actual procedures that take place in contemporary computing machines that use the binary radix.

Abbreviations and Employed Notation
- Acronym e. d. d. stands for "erroneous decimal digits", while acronym c. d. d. stands for "correct decimal digits" and acronym d. d. stands for "decimal digits".
- The abbreviation f. p. e. stands for "finite precision error". We note, in passing, that a number of researchers prefer the name "round-off error" to "finite precision error". However, we will establish in the following that the error generated and accumulated in an algorithm after a number of iterations is no longer a "round-off error"; for this reason, we have definitely preferred the term f. p. e.
- The term "the algorithm is destroyed due to f. p. e.", or "the algorithm fails", expresses the fact that the algorithm in hand offers completely unreliable/erroneous results at a specific recursion.
Next, suppose that one uses a computing machine in which all quantities are written in canonical form with a fixed number of decimal digits in the mantissa. Suppose, in addition, that all operations in this machine are performed with the same finite word length, and consider an arbitrary quantity in this computing environment. In the ideal case, where all number representations and operations are made with infinite precision, let this arbitrary quantity take its correct value (denoted with a superscript for "correct value"). In the following, we will give a rigorous relation between the machine value and the correct one by means of the subsequent definitions. Hence:

Another representative example is demonstrated below: consider the two numbers

6.957600112134568 × 10^E and 6.957599577134568 × 10^E,

written with a common exponent E. A simple inspection might lead someone to deduce that these two numbers differ in twelve (12) decimal digits. Actually, according to Definition 1, the following holds: their absolute difference is

|difference| = 5.350000000000000 × 10^(E−7).

Therefore, for a sixteen (16) decimal digit mantissa, the two aforementioned numbers differ in 16 − 7 = 9 decimal digits, contrary to a probable initial expectation.
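The digit count of the example above can be mimicked in a few lines of Python; the function name and signature below are illustrative choices of ours, not notation from the paper, and the word length of sixteen decimal digits is taken from the surrounding discussion:

```python
import math

def differing_decimal_digits(a: float, b: float, n_mantissa: int = 16) -> int:
    """Digits in which a and b differ, in the spirit of Definition 1:
    the word length minus the gap between the operands' largest decimal
    exponent and the exponent of their absolute difference."""
    if a == b:
        return 0
    e_op = math.floor(math.log10(max(abs(a), abs(b))))   # exponent of the larger operand
    e_diff = math.floor(math.log10(abs(a - b)))          # exponent of the difference
    return max(n_mantissa - (e_op - e_diff), 0)

# The pair discussed above: naive inspection suggests 12 differing digits,
# while the exponent-based count yields 9.
print(differing_decimal_digits(6.957600112134568, 6.957599577134568))  # → 9
```

The count depends only on the two exponents, which is why the naive digit-by-digit inspection overestimates the disagreement.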

On the basis of the previous analysis, the sought-for relation between the machine value and the correct one is given via the subsequent definition. We also note that, in the IEEE 754 floating-point format, the mantissa is represented in a computing machine by a specific number of bits. Then, the same number is represented in the decimal system by a number of decimal digits (d. d.), approximated in practice by the nearest integer of the quantity (number of mantissa bits) · log₁₀2.
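The bit-to-decimal-digit conversion just mentioned can be sketched as follows; the helper name is our own illustrative choice:

```python
import math

def equivalent_decimal_digits(bits: int) -> int:
    """Decimal digits equivalent to a b-bit mantissa: nearest integer of
    b * log10(2), as described in the text."""
    return round(bits * math.log10(2))

# IEEE 754 double precision carries a 53-bit significand:
print(equivalent_decimal_digits(53))  # → 16
# IEEE 754 single precision carries a 24-bit significand:
print(equivalent_decimal_digits(24))  # → 7
```

This is why sixteen decimal digits are used throughout the paper as the decimal model of hardware double precision.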

Establishing That in an Arbitrary Subtraction the Finite Precision Error Consists of Two Types: A Deterministic and an Erratic One
In the present section, we will show that during the execution of an arbitrary subtraction in a computing machine, two types of finite precision error may emerge: the first kind of f. p. e. necessarily arises in many circumstances, when specific conditions hold, and it may then be computed explicitly via a closed formula; for this reason, we shall employ the term "causal" or "deterministic" f. p. e. for it. The other kind of f. p. e., which may be generated in an arbitrary subtraction, is strongly associated with the random way in which the computer fills in the mantissa digits when it executes a left shift so as to restore the number to its canonical form; consequently, we shall employ the name "erratic f. p. error" for this form of f. p. e. We will analyze these two types of "round-off error" immediately below.
The associated analysis will be made in the decimal arithmetic system without any loss of generality; as we have already pointed out, this approach will offer a very reliable, robust and comprehensive model of the actual procedures that take place in a computing machine, based on the standard IEEE 754 form.

The Deterministic f. p. Error during Subtraction
In the present section, we will demonstrate that during any subtraction of numbers having the same sign, an amount of finite precision error may be generated causally. We would like to stress that this generated amount of f. p. e. may be arbitrarily large, in the sense that the number of erroneously computed decimal digits in the operation of subtraction may increase up to the finite word length itself. In the following, we will establish that this deterministic f. p. e. is due to the difference between the exponent of the result of the subtraction and the maximum exponent of the subtracted terms. In fact, if one considers the maximum exponent of the two subtraction operands, both written in scientific format, and the exponent of the difference, when written in canonical form, then, causally, a number of additional e. d. d. equal to the difference of these two exponents is accreted in the mantissa. For this reason, exactly, we shall call this type of f. p. error "causal" or "deterministic", while we shall also use the term "exponent plunge" for this difference of exponents. In other words, after such a subtraction, the initial number of erroneous decimal digits is increased by the exponent plunge in a causal manner. We should stress that this number of e. d. d. is generated deterministically; however, it may be modified somehow in an erratic manner, as will be briefly indicated in Section 3.2.
Proof of Proposition 1. In the beginning, we must clarify that, since the two operands carry the same number of correct decimal digits in their mantissae, each of them can be written as its correct value plus an error term whose order is determined by that number of digits. Now, we consider that the difference of the two operands is of an order strictly smaller than the orders of both of them. In addition, we have adopted the assumption that the sum of the two error terms remains below the unit of the last retained digit, so that the error mantissae do not contribute additionally to the overall f. p. error during subtraction. Therefore, the difference is computed with a number of additional erroneous decimal digits equal to the exponent plunge or, equivalently, with correspondingly fewer correct decimal digits, for the following reasons: after subtracting the mantissae of the two operands, and since the exponent of the subtraction result plunges, the obtained result has its first decimal digits equal to zero. At this stage, the machine performs an equivalent number of decimal left shifts in order to restore the difference to its scientific format. These left shifts generate an equal number of extra incorrect decimal digits in the "tail" of the mantissa of the difference. □
We would like to point out that the operation of subtraction has the very important peculiarity "to be able" to generate at once, an arbitrarily large number of e. d. d. up to the employed word length. The other fundamental operations, addition, multiplication and division, do not have this property, as the authors will support in other research works. However, both multiplication and division may also generate a large number of e. d. d., when they are repeatedly applied.
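The deterministic part of this error can be made visible with Python's decimal module, which emulates a decimal machine of prescribed word length; the two operands below are illustrative values of ours, chosen so that the exponent plunges by two:

```python
from decimal import Decimal, getcontext

getcontext().prec = 16                    # emulate a 16-decimal-digit machine

a = Decimal("1.011004542000000")          # each operand carries 16 significant digits
b = Decimal("1.000000000000000")

d = a - b
print(d)                                  # → 0.011004542000000

# The exponent plunged from 0 down to -2, so only 14 significant digits survive:
print(len(d.as_tuple().digits))           # → 14
```

Python's decimal arithmetic simply reports the shorter result; a real machine, in contrast, must invent two trailing fill digits when it normalizes the mantissa back to full word length, and those fill digits constitute the erratic error discussed in Section 3.2.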

The Erratic Finite Precision Error Appearing in the Operation of Subtraction
According to the preceding analysis, the eventual causal error that is generated in the operation of subtraction may be modified in an erratic, stochastic manner, strongly associated with the way the computing machine fills in the digits that are deterministically lost during the subtraction. Evidently, the restoration of the lost digits takes place via successive left shifts, performed as a rule in the binary radix environment of a contemporary computing machine. However, due to the aforementioned relation between the number of bits of a binary mantissa on the one hand and the equivalent number of decimal digits on the other, one may safely consider that the left shifts that restore the canonical form are made in the decimal arithmetic system; this statement will be made definitely clear in the subsequent analysis and it will be established in a forthcoming manuscript of the authors.
In any case, the underlying reasons for the appearance of an erratic finite precision error in the result of an arbitrary subtraction are the following:
1. Suppose that the first digit to be filled in is the one located immediately after the correct most significant (MS) decimal digits, the exponent's eventual plunge having occurred after the subtraction (Section 3.1). Then, the event of obtaining an additional correct decimal digit is equivalent to the event that the computing machine fills this position of the mantissa of the difference with the correct digit.
2. The event of obtaining two correct decimal digits in the mantissa of the difference is equivalent to the case where the machine fills both this position and the next one with the correct digits, and so forth.
3. The eventuality that the number of correct d. d. after the evaluation of the deterministic error is decreased by one is equivalent to the case where the round-off procedure that occurs at this position of the mantissa of the difference generates an additional erroneous decimal digit.
4. Clearly, if the round-off approximation that the machine performs in the mantissa of the difference does not generate an additional erroneous d. d., then the erratic error does not change the effects of the deterministic one.
Suppose that one knows the algorithm with which the computing machine fills in the least significant (LS) lost digits after subtraction, together with its statistical properties. Then, it is rather straightforward, albeit lengthy and tedious, to evaluate the various probabilities of modifying the deterministic error by a specific number of decimal digits. Of course, such an evaluation requires knowledge of the statistical distribution of the accreted finite precision error during the execution of an algorithm; the authors will exhaustively tackle this problem for various pdfs in future work. For the time being, we quote a restricted number of quite typical probability values concerning the appearance of an erratic finite precision error in an arbitrary subtraction result. A class of associated results is presented in Table 1. This table refers to the case where the finite precision error generated during an arbitrary subtraction follows a binomial Gaussian distribution; in fact, it lists the corresponding probabilities of modifying the deterministic error, for two different values of the standard deviation of this distribution. Each row of the Table corresponds to the modification of the deterministic error by a specific number of decimal digits. Finally, the probabilities presented here correspond to the "worst case scenario" by far, where the two subtraction operands have already been evaluated with exactly the same number of erroneous LS d. d.

A Brief Description of Bernoulli's Map and Its Finite Precision Error Properties
Let x₀ be a floating-point quantity such that 0 < x₀ < 1. Then, the Bernoulli map, starting from x₀, generates a sequence x_n of floating-point numbers as follows:

x_{n+1} = 2x_n (mod 1),

i.e., x_{n+1} = 2x_n whenever 2x_n < 1, and x_{n+1} = 2x_n − 1 otherwise. It has been observed by many researchers (e.g., [1]) that the number of incorrect decimal digits generated during the computation of the Bernoulli map grows with the number of performed iterations. This continuous increase of the accumulated amount of finite precision error in x_n eventually makes the results of the computations totally unreliable. In [1], it has been shown that this round-off error practically doubles at each iteration.
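A minimal float implementation of this recursion (our own sketch) already exposes the instability: IEEE 754 doubles are dyadic rationals, and doubling merely shifts bits out of the finite word, so the computed orbit of any double collapses to zero within roughly the mantissa length, here by iteration 54:

```python
def bernoulli_map(x0: float, n: int) -> list[float]:
    """Iterate x_{k+1} = 2*x_k (mod 1), the Bernoulli (doubling) map."""
    xs = [x0]
    for _ in range(n):
        x = 2.0 * xs[-1]
        xs.append(x - 1.0 if x >= 1.0 else x)
    return xs

orbit = bernoulli_map(0.3, 60)
print(orbit[53], orbit[54])   # → 0.5 0.0  (the computed orbit has died)
```

This collapse to zero is the binary-hardware counterpart of the failure after fifty-three iterations analyzed for the decimal model later in this section.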

The Actual Cause of Failure of the Bernoulli Map Due to Finite Precision
In the following, we shall give an explanation concerning how this finite precision error is generated and accumulated and, using these results, we will present a method for generating the correct values of the Bernoulli map for an arbitrarily large number of iterations, when the initial value has a finite decimal representation. In order to determine exactly how the finite precision error is accreted during the execution of the Bernoulli map, the following Lemma will prove very useful. Proof of Lemma 1. Since the current term has its last decimal digits erroneous, it can be written as the sum of its correct value and of the total amount of finite precision error accumulated in it. Consequently, when this term is multiplied by an error-free number, the accumulated error is multiplied by the same factor. □ Now, one may distinguish the following cases:
1. The order of the product increases, while the order of the (multiplied) error does not. Then, according to Definitions 1 and 2, the product is computed with one additional correct decimal digit; indeed, the right shift that restores the canonical form leaves one less erroneous decimal digit in the mantissa.
2. The order of the (multiplied) error increases, while the order of the product does not. Then, according to Definitions 1 and 2, the product is computed with one additional erroneous decimal digit in its mantissa, as compared to the multiplicand.
3. Both orders increase simultaneously, or neither of them does. It follows immediately from the analysis performed in Cases 1 and 2 above, together with the application of Definitions 1 and 2, that the product is computed with exactly the same number of correct digits in the mantissa, as compared to the multiplicand.
Next, suppose that we execute Bernoulli's map starting from a non-zero initial value x₀; we shall demonstrate the finite precision error causation and propagation, by means of the entire previous analysis, for a specific, but completely arbitrary and representative, example. Indeed, let us assume that all operations are executed in a computing machine with sixteen (16) decimal digits in the mantissa and let

x₀ = 5.055022710000000 × 10^(−1).

Then, x₁ = 1.100454200000001 × 10^(−2), where the last digit, '1', is erroneous. The actual cause for the generation of this erroneous decimal digit, according to the analysis of Section 3, is the following: the multiplication 2x₀ generates a quantity of order 10^0. When one is subtracted from it, the result is a quantity of order 10^(−2); in other words, a plunge of order two takes place in the exponent of the obtained result. This plunge generates a deterministic numerical error of two decimal digits; in addition, an erratic error is generated with probabilities of the order shown in Table 1. From this table one deduces that it is quite probable for the finite precision error to be relaxed by one decimal digit; application of Lemma 1 demonstrates that this is indeed the case here. The subsequent terms are

x₂ = 2.200908400000002 × 10^(−2), x₃ = 4.401816800000004 × 10^(−2), x₄ = 8.803633600000008 × 10^(−2),

where, once more, the last digit of each mantissa is erroneous. We observe that, in these computations, there is no need for subtracting 1 from x₂, x₃, x₄, since all three numbers are smaller than 1/2. On the contrary, each result has been obtained by doubling the previous value of the sequence. Therefore, Lemma 1 can be applied, with multiplier 2, ensuring that the number of erroneous decimal digits of the quantities x₂, x₃, x₄ remains equal to one.
During the computation of the next term, the f. p. error is temporarily doubled, thus generating an additional e. d. d., but only for a while. In fact, execution of the multiplication by 2 has also increased the order of the obtained result by one; when the right shift is performed in order to restore the canonical form, an additional correct digit is produced, in full accordance with the results of Lemma 1. Therefore, this term is eventually computed with 2 − 1 = 1 erroneous decimal digit in the mantissa.

By similar arguments, one may explain the finite precision error generation and accumulation in the next terms of the sequence, shown below:

3.52145344000000 × 10^(−1), 7.04290688000000 × 10^(−1), 4.08581376000000 × 10^(−1).
During the computation of the subsequent term, an additional erroneous decimal digit appears: the error is doubled, and its exponent increases by one, while the overall exponent remains the same and no right shift is performed this time; this corresponds to Case (ii) of Lemma 1.
With very similar reasoning, one may predict that the number of incorrect decimal digits in the next two terms will remain the same, namely two; indeed, the doubling of the number relaxes the error by one digit, while the subtraction increases the number of erroneous digits by one, according to the analysis of Sections 2 and 3. Therefore, the values 6.3432550400000 × 10^(−1) and 2.6865100800000 × 10^(−1) are obtained. In the next iteration, when the latter value is doubled, we observe that the order of the result does not change, given that the leading digit of the mantissa is smaller than five (5), while the exponent of the accumulated numerical error increases by one. Consequently, Case (ii) of Lemma 1 holds and the new term is evaluated with its last three d. d. erroneous.
Next, the doubled value exceeds one, so one must be subtracted, and the conditions introduced in Sections 2 and 3 hold. Consequently, one expects that the resulting term will be computed with four (4) e. d. d., and this is indeed the case.
Now, the evaluation of the next three terms follows the results of Lemma 1, Case (iii), while the evaluation of the term after them satisfies the analysis made in Section 3.
Doubling the latter value, in order to evaluate the subsequent term, gives rise to the "activation" of Case (ii) of Lemma 1, and thus one additional erroneous decimal digit is generated.
The next term follows in full accordance with all the previous analysis.
However, the doubled value exceeds one once more, and now the plunge of the exponent of the result is not compensated by the generation of a correct decimal digit; hence, one more erroneous decimal digit is accumulated.
The computation continued in a very analogous manner and so, at the 53rd iteration, the corresponding element of the Bernoulli map has been computed with all digits of the mantissa incorrect.
The analysis associated with the aforementioned example dictates that the actual reason of generation and accumulation of finite precision error during the execution of the Bernoulli's map in a computing machine is independent of the finite word length that the machine employs. A large number of performed experiments using finite word length from eight (8) decimal digits to forty thousand (40,000) d. d. fully confirmed this statement.
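The word-length independence claimed above can be checked by pitting hardware doubles against exact rational arithmetic. The initial decimal value below is an illustrative assumption of ours; the double orbit is annihilated within 53 iterations, while the exact orbit of the same decimal value never reaches zero:

```python
from fractions import Fraction

def step(x):
    """One Bernoulli iteration x -> 2x (mod 1); works for float and Fraction."""
    x = 2 * x
    return x - 1 if x >= 1 else x

xf = 0.5055022710000000                   # what the hardware actually stores (a dyadic)
xe = Fraction(5055022710000000, 10**16)   # the intended decimal value, kept exact

for _ in range(53):
    xf, xe = step(xf), step(xe)

print(xf)        # → 0.0   (the float orbit has been annihilated)
print(xe == 0)   # → False (the true orbit is still alive)
```

Every float operation here is actually exact in binary; the failure stems entirely from the fact that the stored initial value is a dyadic approximation of the intended decimal one, which is the binary analogue of the fill-digit mechanism analyzed above.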

Obtaining a Bernoulli Map Free of Finite Precision Error
The main observation that may lead to a robust computation of any desired number of terms is the following: suppose that the initial value of the Bernoulli map is correctly given in canonical form, with a fixed number of decimal digits in the mantissa. Then, since multiplication by two cannot increase the number of significant decimal digits of the operand, all subsequent terms of the Bernoulli map will also be expressible with precisely the same number of decimal digits in the mantissa. Consequently, if we force all decimal digits of each computed term after that position to be zero, the obtained result will always be free of round-off error. We accomplish this, in a rather straightforward manner, by transforming the number into a string and obtaining both the mantissa of the current term and its exponent. Subsequently, we zero the trailing entries of the string of the mantissa and we convert the number back to floating-point arithmetic. In this way, we obtain totally error-free values of the Bernoulli map for all decimal digits up to the chosen position in the mantissa.
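In the hardware double precision regime, this cleaning step can be sketched in Python as follows. This is our own minimal rendition, not the paper's code: for simplicity it rounds each iterate back onto a fixed grid of nine decimal places (matching the illustrative initial value) through a formatted string, which discards the finite precision tail before it can double:

```python
from fractions import Fraction

DIGITS = 9   # the illustrative initial value below carries nine decimal places

def clean(x: float) -> float:
    """Project x back onto the 9-decimal-place grid via a string round-trip,
    rounding away all trailing finite-precision debris."""
    return float(f"{x:.{DIGITS}f}")

def bernoulli_stabilized(x0: float, n: int) -> float:
    x = clean(x0)
    for _ in range(n):
        x = 2.0 * x                   # exact in binary
        if x >= 1.0:
            x -= 1.0                  # exact for x in [1, 2)
        x = clean(x)                  # kill the erroneous tail at every iteration
    return x

def exact_orbit(x0: Fraction, n: int) -> Fraction:
    """Reference orbit in exact rational arithmetic."""
    for _ in range(n):
        x0 = 2 * x0
        if x0 >= 1:
            x0 -= 1
    return x0

x_fast = bernoulli_stabilized(0.505502271, 500)
x_true = exact_orbit(Fraction(505502271, 10**9), 500)
print(x_fast == float(x_true))        # → True: no error after 500 iterations
```

The cleaning works because the round-off incurred per iteration (about 10^(−16)) is far smaller than the spacing of the decimal grid (10^(−9)), so the string rounding always snaps back to the exact orbit value.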
A flowchart of the Bernoulli map version that offers f. p. error-free results is presented in Figures 1 and 2. We note that special care has been taken to circumvent eventual difficulties due to the equivalent representation of a floating-point number with a number of terminal nines. We will illustrate the related approach with an example: in a computing machine which uses a finite word length corresponding to eight (8) decimal digits in the mantissa, the numbers 1.2399999 and 1.24 must be treated as equivalent. To achieve that, we have rounded the floating-point number from the 4th to the 8th decimal digit. Evidently, the position where the rounding process takes place is automatically spotted by applying simple logical rules, string conversions and rounding methods. We stress that the entire aforementioned method and the corresponding implementation add negligibly to the overall execution time. Clearly, in the case where the employed word length does not exceed the sixteen (16) decimal digits of hardware double precision, the aforementioned method for correcting Bernoulli's map results may be implemented using the hardware capabilities of the computing machine. On the contrary, for greater word lengths, one must switch to a software implementation of operations with arbitrary precision, like the powerful multiple-precision floating-point computations with correct rounding (MPFR) library. The performed experiments fully support the previous analysis. For example:

Consider an initial value given in canonical form with sixteen (16) digits in the mantissa. Then, the employed finite word length is the classical double precision offered by the vast majority of contemporary computing machines.
A. The classical method of Bernoulli map computation offers completely unreliable/erroneous results after fifty-three (53) iterations, when it is implemented with the standard hardware double precision arithmetic (IEEE 754 double-precision floating-point format). Here are the results where the algorithm fails completely for the first time: 1.250 × 10^… versus the correct 8.62483456 × 10^…. The correct values have been obtained by executing Bernoulli's algorithm with approximately one thousand (1000) decimal digits in the mantissa in the MPFR environment; evidently, the associated results have been projected onto a 16 decimal digit word length. B. When a mantissa of twenty thousand (20,000) decimal digits precision is used, the classical method for the Bernoulli map gives incorrect results after the 16th d. d. at the 66,387th recursion. We illustrate this failure of the classical algorithm by projecting the obtained results onto seventeen (17) decimal digits, due to obvious space limitations. Then, one obtains

7.809960427520000 × 10^(−1) and 5.619920855039999 × 10^(−1).

The correct results have been obtained by computing the Bernoulli map with forty thousand (40,000) decimal digits in the mantissa in the MPFR environment, and they are shown below:

7.809960427520000 × 10^(−1) and 5.619920855040000 × 10^(−1).

Clearly, again, due to the space limitations, only the first seventeen (17) decimal digits are shown, while all subsequent decimal digits, up to the twenty thousandth (20,000th), were totally erroneous. We note that, after a very small number of subsequent iterations, the Bernoulli map offers totally unreliable/erroneous results.
C. The method proposed in the present work continues to give totally error free results.
Actually, at that point, the computation of the classical Bernoulli map performed with forty thousand (40,000) decimal digits had failed completely, while the proposed method continued to give absolutely correct results. Evidently, both the failure of the classical Bernoulli map and the error-free results of the proposed method have been verified by comparing them with the corresponding results generated when eighty thousand (80,000) decimal digits in the mantissa have been used.
At this point, we must emphasize that it is impossible to stabilize any algorithm generating this type of f. p. e. whatsoever, when the initial value is a finite representation of an irrational number. For example, if the initial value is √2 and one keeps a finite number of decimal digits for representing it, then a straightforward analysis, like the one given above, indicates that the f. p. error will inevitably propagate very fast and that it will eventually destroy the algorithm. Namely, the initial truncation, taking place in the representation of √2, will inevitably propagate in any analogous algorithm whatsoever. This means that, even if one succeeds in obtaining an analytic solution of the problem, the Bernoulli algorithm, or any other similar one, when executed in a computing machine with a finite word length, will eventually offer results radically different from the analytic solution. This holds true for any other number which needs an infinite non-periodic sequence of digits for its representation.

A Brief Description of Baker's Map and Its Finite Precision Error Properties
Let x₀, y₀ be floating-point quantities such that 0 < x₀ < 1 and 0 < y₀ < 1. Then, the folded Baker's map is a sequence (x_n, y_n), n ∈ ℕ, of floating-point numbers, starting from (x₀, y₀), defined via

x_{n+1} = 2x_n, y_{n+1} = y_n/2, whenever 0 ≤ x_n < 1/2,
x_{n+1} = 2(1 − x_n), y_{n+1} = 1 − y_n/2, whenever 1/2 ≤ x_n ≤ 1.

Baker's map suffers from very serious finite precision error, quite similar to the one reported in Section 4.1 in connection with Bernoulli's map.
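One iteration of the folded Baker's map, in the standard form just described (x is stretched and folded, y is squeezed), may be sketched in Python as follows; this rendition and the starting point are our own illustrations:

```python
def baker_step(x: float, y: float) -> tuple[float, float]:
    """One iteration of the folded Baker's map."""
    if x < 0.5:
        return 2.0 * x, y / 2.0            # left half: stretch x, squeeze y
    return 2.0 * (1.0 - x), 1.0 - y / 2.0  # right half: fold back

# A short orbit from a dyadic starting point (all operations are exact here):
p = (0.25, 0.5)
for _ in range(3):
    p = baker_step(*p)
print(p)   # → (0.0, 0.5625)
```

For dyadic starting points the hardware arithmetic is exact, which makes such short orbits convenient sanity checks before examining the finite precision failure discussed next.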

The Actual Cause of Failure of the Baker's Map Due to Finite Precision
The actual cause of generation and accumulation of f. p. e., as far as the xₙ component is concerned, is practically the same as in the previously, analytically presented case of the Bernoulli shift computation. In other words, in connection with the quantities xₙ, the following sequence of actions in the computing machine generates a continually increasing error due to the employed finite word length:

A. Execution of the subtraction 1 − xₙ, whenever necessary, generates a plunge in the exponent of the obtained result; equivalently, the closer xₙ is to one, the smaller the power of 10 of the difference 1 − xₙ and hence, according to Proposition 1, the greater the number of decimal digits of xₙ₊₁ that will be erroneous, as far as the deterministic f. p. error is concerned.

B. On the other hand, multiplication of the quantity 1 − xₙ by two may modify the deterministic error, according to the results of Lemma 1. In fact, if the conditions of Case (ii) of Lemma 1 occur, then the overall f. p. e. increases by one decimal digit in comparison with the causal finite precision error.

C. For similar reasons, multiplication of xₙ by two may also increase the overall number of the already generated and/or accumulated erroneous digits, again when Case (ii) of Lemma 1 holds.

Thus, in the end, the accumulation of f. p. e. becomes dominant and the folded Baker's map fails after an impressively small number of iterations.
On the contrary, the computation of the yₙ component does not, in practice, generate f. p. e., for the following reasons:

i. We note that the operation 1 − yₙ/2 may indeed generate f. p. e. However, in contrast to what happens in the xₙ computation, the plunge of the exponent of the obtained result is one (1) at most when yₙ ≠ 0, since the inequality 0 ≤ yₙ/2 ≤ 1/2 always holds. Consequently, yₙ₊₁ may be computed with one erroneous d. d. at most, due to the exponent's plunge of one, according to Proposition 1. We must point out that an additional erratic error is not generated this time, because the term 1 in 1 − yₙ/2 is error free.

ii. At the same time, each division of yₙ by two also divides the amount of error with which yₙ is produced. One may safely say that two or three successive divisions by two (2) in practice reduce the number of the accumulated e. d. d. by one decimal digit. Moreover, the essence of the folded Baker's map algorithm itself forces the operation of division to occur after a rather quite limited number of subtractions of the type 1 − yₙ/2.
Eventually, one expects that, statistically, with very high probability, yₙ will be computed with only one or two erroneous decimal digits in the mantissa, or with none at all. Extensive related experiments performed by the authors fully support this claim.
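The contrast between the two components can be checked with a sketch like the following. This is an illustrative experiment of our own design, not the authors' original one: an exact Fraction orbit serves as the reference, the double-precision y recursion is driven by the exact branch decisions so that the two y sequences follow the same itinerary, and the classical double-precision x recursion runs independently.

```python
from fractions import Fraction

half = Fraction(1, 2)
xe, ye = Fraction(1, 10), Fraction(1, 10)   # exact reference orbit
yf = 0.1                                    # double-precision y component
xf = 0.1                                    # classical double-precision x

for _ in range(200):
    if xe < half:                # exact branch decision drives both y's
        ye, yf = ye / 2, yf / 2
        xe = 2 * xe
    else:
        ye, yf = 1 - ye / 2, 1 - yf / 2
        xe = 2 * (1 - xe)
    xf = 2 * xf if xf < 0.5 else 2 * (1 - xf)   # classical x recursion

# y: only benign roundoff -- about 15 digits still correct after 200 steps,
# because 1 - y/2 plunges the exponent by at most one and each division
# by two halves the accumulated error.
assert abs(yf - float(ye)) < 1e-13

# x: the classical double-precision recursion has collapsed to the
# fixed point 0.0 -- all digits erroneous.
assert xf == 0.0
```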
Finally, we would like to emphasize, once more, that the classical Baker's map fails after a particularly small number of recursions, even when a large finite word length is employed, e.g., 40,000 decimal digits in the mantissa.

A Method for Generating the Baker's Map Elements Free of Finite Precision Error
Therefore, in order to stabilize the folded Baker's map algorithm, a method quite analogous to the one presented in Section 4.3 has been applied here, too. In fact, one may prove with analogous arguments that:

A. When the initial value y₀ is of order 10^…, with a finite number of decimal digits in the mantissa, then the quantity yₙ of the folded Baker's map is always correctly computed, provided that a sufficient number of decimal digits in the mantissa is employed. The justification of this statement follows with arguments completely analogous to the ones stated in the Bernoulli's map case.
B. Therefore, one may apply a very similar method to the one presented in Section 4.3, in order to stabilize the yₙ component of the folded Baker's map, too. Extended experiments, performed by the authors, fully support this statement.
A flowchart of the Baker's map version that offers f. p. error-free results is given in Figure 3. An indicative example follows. We have computed the folded Baker's map, using sixteen (16) decimal digits in the mantissa, with an initial condition (x₀, y₀) of order 10^….
The choice of the initial value may be arbitrary, since a practically negligible f. p. e. is generated in the yₙ computation, as described previously. We have evaluated the xₙ and yₙ values of this map for a number of iterations, first in the classical manner and next with the previously introduced stabilization method. The related comparative results are shown in Table 2, from which it is evident that, after fifty-four (54) iterations, the classical manner of computation completely fails, i.e., it generates results with all sixteen d. d. erroneous. On the contrary, the method introduced in this manuscript manifested no erroneous decimal digits at all. We note that, in order to obtain these results, the folded Baker's map has been executed in parallel with four thousand (4000) decimal digits precision and by application of Definitions 1 and 2, where the first sixteen digits of the 4000 d. d. representation played the role of the correct quantity. We would like to stress that the evaluation by means of the introduced method continues to give correct results for many more thousands of iterations. Moreover, in the end, comparison of these results with those obtained with a mantissa even larger than 4000 d. d. indicates that the evaluation with 4000 d. d. eventually fails, while the introduced method offered practically completely correct results. Table 2 presents the results of the folded Baker's map obtained via three different methods: (a) the classical manner, indicated as "Classical Method" in the Table; (b) the introduced method, labeled "Introduced Method"; and (c) the computation with four thousand digits in the mantissa, the first sixteen of which are presented in the Table, labeled "Correct Values".
We stress that the values of the Classical Method have been obtained by evaluating the iterative Equation (12) in "triple" precision. From the Table, it is evident that the classical method completely fails after 54 iterations; on the contrary, during all these iterations, evaluation by means of the introduced method offered totally correct results.
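The iteration at which the classical double-precision xₙ recursion first leaves the correct orbit can also be located programmatically. The sketch below is an illustrative Python check with an arbitrary rational initial value (not the initial condition of Table 2), using an exact rational orbit as the reference:

```python
from fractions import Fraction

def first_failure(x0, max_iter=200, tol=1e-12):
    """Return the first iteration at which the double-precision x-orbit
    of the folded Baker's map deviates from the exact orbit by > tol."""
    xf, xe = float(x0), Fraction(x0)
    half = Fraction(1, 2)
    for n in range(1, max_iter + 1):
        xf = 2 * xf if xf < 0.5 else 2 * (1 - xf)
        xe = 2 * xe if xe < half else 2 * (1 - xe)
        if abs(xf - float(xe)) > tol:
            return n
    return None

# The representation error of 1/10 (about 5.6e-18) is doubled at every
# step, so the two orbits separate measurably after only a few iterations.
print(first_failure(Fraction(1, 10)))   # -> 18
```

Raising the working precision only shifts this failure iteration further out; it never removes it, which is consistent with the behavior of the 4000 d. d. computation reported above.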
We would like to emphasize that the folded Baker's map manifested the same behavior, as far as the f. p. error is concerned, in connection with many hundreds of tested inputs having a great variability in the order of magnitude, the exact value and the number of decimal digits of the initial condition.
We would simply like to emphasize, once more, that if the input values (x₀, y₀) of the folded Baker's map are irrational numbers, then stabilization of the algorithm is impossible, due to the nature of computing machines, which use a finite word length for all computations.

Conclusions
In the present manuscript, it has been established that Bernoulli's and Baker's maps indeed offer totally erroneous results after an impressively small number of recursions; for example, when IEEE standard double precision is employed, both chaotic maps fail after a few tens of iterations, frequently fewer than sixty. The actual reason for the generation and accumulation of this exponentially growing finite precision error follows immediately from the analysis introduced in Section 3 and Lemma 1; the corresponding approach is novel.
Using the aforementioned results, methods for properly executing the Bernoulli and folded Baker's maps are introduced in Sections 4 and 5. These methods, for the first time, allow the two chaotic maps to run for many hundreds of thousands of iterations, offering correct results practically free of finite precision error. A considerable number of associated experiments fully confirm this claim, as discussed in Sections 4 and 5 of the present manuscript.

Conflicts of Interest:
The authors declare no conflict of interest.