Resolution of Initial Value Problems of Ordinary Differential Equations Systems

Abstract: In this work, we present some techniques applicable to Initial Value Problems when solving a system of Ordinary Differential Equations (ODE). Such techniques should be used when applying adaptive step-size numerical methods. In our case, a Runge-Kutta-Fehlberg algorithm (RKF45) has been employed, but the procedure presented here can also be applied to other adaptive methods, such as those used in N-body problems (AP3M or similar). By doing so, catastrophic cancellations were eliminated. A mathematical optimization was carried out by introducing the objective function into the ODE system (ODES). Rescaling of the local errors was also employed in order to address the problem. This rescaling implies that certain variables are used to adjust the integration step, while the other variables serve as parameters to determine the coefficients of the ODE system. The rescaling was carried out by using the asymptotic solution of this system. The change of variables is necessary to guarantee the stability of the integration. Therefore, the linearization of the ODES is possible and can be used as a powerful control test. All these tools are applied to a physical problem: the effective numerical resolution of Lemaître-Tolman-Bondi space-time solutions of the Einstein Equations.


Introduction
In mathematics, a dynamical system is a system in which a function describes the time dependence of a point in a geometrical space. Numerous examples can be found in the literature, applied to a wide variety of fields in science and technology. In order to make a prediction about the system's future behaviour, an analytical solution of such equations must be found, or their integration over time must be carried out through computer simulations. When modelling such a dynamical system in order to investigate certain aspects of its behaviour, we end up generating a system of ordinary differential equations (ODES), partial differential equations (PDE), and algebraic equations (AE), whose analytical solution is unknown to us in the vast majority of cases. We need to resort to numerical methods for ODES which allow us to evaluate approximate solutions within a certain tolerance after fixing the indeterminate constants of the system, either by initial values (IVP or initial value problem) or by boundary conditions (boundary Cauchy problem). In any case, we will find a large number of numerical methods at our disposal, which can be divided into two large groups: (i) The predictor-corrector type (also called Adams), either implicit or explicit. These are usually multistep methods which we can use when we want to increase the order of the method (that is, its precision), and they usually require uniform step spacing x_{i+1} = x_i + h (prior discretization of the interval [x_0, x_{N_h}]). These are more efficient methods in the sense that they usually require fewer evaluations of the ODES, but it is clear that if the value at each new point depends on the k previous values x_{k-1}, . . . , x_0, it will be necessary to generate or impose more initial conditions, and it is not very clear whether we can consider the integration of the ODES as an IVP.
(ii) The Runge-Kutta (RK) type [1], also explicit or implicit, but one-step: they manage to increase their order by evaluating at more intermediate points of the interval [x_i, x_i + h]. In this way, we can say that the value at x_i + h depends only on the value at x_i, so we are dealing with an IVP in each subinterval. This feature of the RK methods allows us to use a variable h (different in each step) and to adapt them to the particular ODES so as to achieve the same estimated error order in each step. See Ref. [2].
Consequently, our first choice was between an Adams method and an RK method. We opted for the explicit Runge-Kutta-Fehlberg RKF45 [3], which is one of the most widely used adaptive-step RK methods. However, as we intended to also address more realistic simulations with density defects compensated by a wall at a certain distance, we implemented the Cash-Karp variant [4], which allowed us to choose between an RKF45 of order O(h^5) and one of variable order. In such a case, rapid changes in the gradient of some functions must be expected and taken into account, so as to be prepared for such an eventuality. Once our dynamical system is modelled, we must establish the objective function that models the particular aspect we want to simulate, in order to investigate its evolution and be in a position to predict future values (meteorology, the evolution of a pandemic, volcanology, the behaviour of a structure during earthquakes, and so forth). The fields of application are manifold, and the accuracy of the predictions depends on the accuracy of the model and on the rigour of the tests required for the applied integration method.
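To make the adaptive mechanism concrete, here is a minimal Python sketch of one Cash-Karp 4(5) step together with a simplified step-size driver. The tableau coefficients are the standard published ones; the driver (initial step, safety factor, growth limits) is our illustrative assumption, not the code used in this work.

```python
import numpy as np

# Cash-Karp embedded Runge-Kutta 4(5) tableau.
A = [
    [],
    [1/5],
    [3/40, 9/40],
    [3/10, -9/10, 6/5],
    [-11/54, 5/2, -70/27, 35/27],
    [1631/55296, 175/512, 575/13824, 44275/110592, 253/4096],
]
C5 = np.array([37/378, 0.0, 250/621, 125/594, 0.0, 512/1771])               # 5th-order weights
C4 = np.array([2825/27648, 0.0, 18575/48384, 13525/55296, 277/14336, 1/4])  # 4th-order weights

def cash_karp_step(f, x, y, h):
    """One embedded step: returns the 5th-order result and the local error estimate."""
    k = [np.asarray(f(x, y))]
    for row in A[1:]:
        yi = y + h * sum(a * ki for a, ki in zip(row, k))
        k.append(np.asarray(f(x + h * sum(row), yi)))
    k = np.array(k)
    y5 = y + h * (C5 @ k)
    err = h * ((C5 - C4) @ k)   # difference of the two orders = error estimate
    return y5, err

def integrate(f, x0, y0, x_end, tol=1e-8):
    """Minimal adaptive driver: accept the step when the scaled error is below 1,
    then resize h with the usual (error ratio)**(-1/5) rule."""
    x, y, h = x0, np.asarray(y0, dtype=float), (x_end - x0) / 100.0
    while x < x_end:
        h = min(h, x_end - x)
        y_new, err = cash_karp_step(f, x, y, h)
        scale = tol * (np.abs(y) + np.abs(y_new) + 1e-30)
        ratio = np.max(np.abs(err) / scale)
        if ratio <= 1.0:            # step accepted
            x, y = x + h, y_new
        # grow or shrink h, with safety factor and growth limits
        h *= min(5.0, max(0.1, 0.9 * ratio ** -0.2)) if ratio > 0 else 5.0
    return y
```

The same skeleton fits any embedded pair (RKF45 proper included); only the tableau changes.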
Nowadays, there are applications of all kinds that include sophisticated solve, odeint, or similar commands, such as MAPLE [5], MATLAB [6], Mathematica [7] or Python [8], among others. These require writing our system in a certain way and assigning values to certain parameters that configure the method to be applied. After a while, they print results and/or graphs which must be interpreted without clearly knowing what has been done, or why the method has changed in certain cases, and it is therefore difficult to make modifications to the integrating algorithm. At most, we can change our dynamical system or try to vary its parameters in a series of trials and errors that we hope will provide us with a more or less stable response. Then, we calculate means, variances, and other statistical properties, which are published. Of course, these tools make our work easier, but we must always keep in mind that such applications or commands are quite generic and have not been written for our specific problem. They may be very sophisticated, but it is very difficult for them to be optimal and to give us control of the process. What is very fast is usually not very reliable. If we aim to take control over the evolution of our numerical simulation, some tools of Numerical Analysis must be taken into account whenever we face the resolution of a Dynamical System. Such tools are described in Section 2. When solving Dynamical Systems, the convergence and stability analysis of the problem must also be considered; see, for instance, [9,10]. In the example we present here, such a study was also undertaken.
In this paper, we solve the ODES (Initial Value Problem) associated with the gravitational effect of a pressureless, spherically symmetric perturbation on free photons, which are described within the General Relativity Theory (GRT) and follow null geodesics. These null geodesics describe the Cosmic Microwave Background (CMB) Radiation. We numerically compute the anisotropies in the CMB caused by a great inhomogeneity in the universe [11][12][13][14][15][16][17][18]. A detailed explanation of the main physical results, and of where they were published, is presented in Section 8. Some important definitions related to our problem are presented in Appendix A.
Examples of modern physical problems where analytical solutions of differential equations are not possible, such as the one presented in this paper, are found in quantum systems. Many-body quantum systems appear and have to be solved by applying advanced algorithms [19][20][21]. The ODEs are the Schrödinger and Dirac equations [20,22,23]. In such kinds of problems, accounting for the finite size of nucleons and nuclei requires advantageous numerical techniques [20,21,23]. Nowadays, there is an increasing interest in dealing with numerical solutions of wave functions for N-body problems using machine learning methods [24][25][26][27].
Other kinds of problems which also deal with the development of numerical integration schemes for physical systems are based on the calculus of variations. The integration methods known as discrete variational integrators, which treat the discretization of Hamilton's principle, belong to this theory. It is worth noting that the preservation of the geometric features of the dynamical system makes these methods appropriate for both conservative and weakly dissipative systems [28][29][30][31]. Systems that present highly oscillatory behaviour [28] pose one of the most difficult challenges in solving ODEs. Here, standard numerical methods require a huge number of (time) steps to trace the oscillations, while geometric integrators are advantageous tools [32]. Geometric integrators allow highly qualified simulations for long-term integration processes, avoiding spurious effects [32][33][34][35][36][37]. It is interesting to derive numerical integration schemes that respect the geometric characteristics of such systems [38,39].
In a totally different field, such as image processing, Ref. [40] deals with the blind deconvolution of an image, looking for the convolution kernel using fractional powers of the Laplacian (see, e.g., [41,42] and references therein), which turns the method into a dynamic regularization system assimilable to our rescaled variables, although the ODES are replaced by successive FFTs (Fast Fourier Transforms). Likewise, Ref. [43] deals with the restoration of images by means of non-linear Gaussian approximations on the luminance of the image.
The example we present here is the effective numerical resolution of Lemaître-Tolman-Bondi (LTB) space-time solutions of Einstein Equations (EE); see [44]. These techniques can also be used in other LTB applications, so knowing them is of great interest. In the near future, we plan to explain similar techniques that we have applied to N-body algorithms in the cosmological treatment of large-scale structure, such as in the Hydra project (see [45,46]). All in all, we also intend to share some of the important features of the algorithms we have designed in relativistic positioning systems (see [47]), which we are currently working on. All of these and similar techniques are powerful tools for other astrophysics, physics, and engineering issues in general. We find it appropriate to present them so that they may be used, where appropriate, in the improvement of numerical methods for solving systems of equations both in our field and in science and technology in general.
The paper is organized in the following sections. Section 2 describes some previous considerations about the techniques we apply. Section 3 introduces the main features of the physical problem that is solved with our numerical and computational techniques. Section 4 gives details of the EE solutions used in the cosmological problem solved. In Section 5, the numerical algorithm CS is described. Section 6 deals with other applications of the CS algorithm. Section 7 introduces the numerical resolution of the null geodesic equations that describe the CMB anisotropies in a LTB universe. In this section, our particular application of the RKF algorithm is also described, and some words about the convergence and stability of the algorithms are also presented. Section 8 presents the main results and a brief discussion of them, in conjunction with other papers in the current literature. Section 9 contains a summary of our paper and the main conclusions. This section includes: (i) a short summary of the paper's novelties and its scientific contribution, (ii) the conclusions extracted from the present study, and (iii) future extensions and outlook. At the end of the paper, we include two appendices which add more information about some of the aspects we consider relevant in our work.

Numerical Analysis Tools
In this section, we specify certain tools of Numerical Analysis that must be considered when solving a Dynamical System. These tools allow us to take control of the evolution of our numerical simulation. Sometimes this process uses more CPU time and, in some cases, also more RAM memory. Nevertheless, by following it, we have a guarantee that the solution will fit the problem to be solved.
The main aspects to be considered in these integrations are the following: (1) Since we work with computers that use finite-precision arithmetic, the first thing to keep in mind is that, in computational models, one has to avoid, whenever possible, the subtraction of functions that can take values of the same order. Such a subtraction produces a result smaller than the operands, and it can lead to the loss of the most significant digits of the given value and produce the so-called catastrophic cancellation; see [48] (chapter 1). This operation is especially disastrous in numerical derivation processes, as well as in recursive algorithms, which are quite fashionable for their beauty and the simplicity of their implementation, as checked in Ref. [48] (problem 2, chapter 1). It is also dangerous when a derivative (total or partial) is expected to change sign (detection of extremals of the corresponding function), because if it changes sign, it has had to pass through zero and has lost all its significant digits. Therefore, we must always look for alternatives to the formulas that appear in books and manuals (see Appendix B for the IEEE standards) for floating-point computation, for intrinsic functions of programming languages, and for numerical derivatives. (2) The next step is to try to base our model on another one with an analytical solution that is the limit of ours when a certain parameter tends to zero (an asymptotic model). If the asymptotic model has an analytical solution that provides us with certain AE to obtain values of some variables near those of our model, such values can eventually serve to rescale or renormalize the variables of our model. This technique is very common in N-body processes in physics (renormalization of sources), and even in the numerical solution of hyperbolic ODE and PDE, which require reconstruction of the axes at each step to avoid distortions if the coefficients vary with time.
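The cancellation hazard of point (1) can be illustrated with a classic generic example (not a formula from this paper): evaluating 1 − cos x for small x subtracts two nearly equal numbers, while the algebraically equivalent half-angle form avoids the subtraction entirely.

```python
import math

def one_minus_cos_naive(x):
    # Subtraction of two nearly equal quantities: catastrophic cancellation.
    return 1.0 - math.cos(x)

def one_minus_cos_stable(x):
    # Equivalent half-angle identity 1 - cos x = 2 sin^2(x/2): no subtraction.
    return 2.0 * math.sin(0.5 * x) ** 2

x = 1.0e-8            # true value of 1 - cos(x) is about 5.0e-17
naive = one_minus_cos_naive(x)    # loses every significant digit
stable = one_minus_cos_stable(x)  # correct to machine precision
```

This is exactly the kind of "alternative formula" the text recommends hunting for before trusting the textbook expression.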
(3) At this point, we need to verify that the functions we are going to use can be evaluated with a precision greater than the tolerance we impose (IEEE standard). This analysis shows to what extent it is feasible to solve our model and which problems require modifications of the integrating algorithm. (4) A very important tool in simulations is to make changes of variables, such as passing from an affine parameter λ to the variable t, which tells us more directly how far we have advanced and how far we still have to go. We can also rely on the easily computed asymptotic solution to define new variables that are quotients of the same variable in each system. These ratios have a very small range of variation, and we thus reduce the impact of noise (local error) on the objective function. Sometimes it is even possible to decouple some ODEs, as in the case discussed in the present paper. (5) We then enter the phase of incorporating our objective function into the dynamical system that we have modelled, through its analytical derivation (if it is an AE), so that it can evolve with the global model. If the values that this function takes are very small compared to those obtained from the integration of the other ODEs, we have to rescale the estimated relative errors of each ODE in each step to avoid situations where the big ones dominate the small ones. (6) The decisive step is to modify the integrating method so that it only takes into account the relative error of the principal variables for the purpose of re-evaluating the integration step. (7) Extremal detection is also very important in order to modify the step. Usually, some of these extremals (zeros of the derivatives) indicate the areas of greatest effect and should not be ignored. In our RKF45ng algorithm, we indicate a way to achieve this, which has given us very good results in the past, although it can probably be much improved. (8) Finally, any modification we introduce must be intensively tested.
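Point (6), letting only the principal variables drive the step-size control, can be sketched as follows; the function name, the scaling, and the resizing rule in the comment are our illustrative assumptions.

```python
import numpy as np

def step_error_ratio(local_err, y, principal, tol):
    """Scaled error ratio used to accept/reject a step and resize h.

    Only the variables listed in `principal` drive the step size; the
    remaining variables are integrated along, but act merely as suppliers
    of coefficients, so their (possibly much larger) local errors do not
    force a uselessly small step.
    """
    local_err = np.asarray(local_err, dtype=float)
    y = np.asarray(y, dtype=float)
    scale = tol * (np.abs(y[principal]) + 1e-30)   # relative-error scaling
    return float(np.max(np.abs(local_err[principal]) / scale))

# Usage: accept the step when the ratio is <= 1, and resize with
#   h_new = 0.9 * h * ratio**(-1/5)   (clamped to sensible growth limits).
```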
The most demanding test is to check that the algorithm for our nonlinear dynamical system reproduces the values of the asymptotic system when the corresponding parameter is set to zero (background test), at least within the precision of the machine for the defined variables. If the final algorithm passes this test, we can be sure that it will pass any other reasonable test.
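A minimal sketch of such a background test might look like this; the function names and the threshold are our assumptions, while the real test compares the full nonlinear code (with the perturbation parameter set to zero) against the asymptotic analytical solution.

```python
def background_test(numeric, analytic, sample_points, threshold=1e-12):
    """Compare the nonlinear code, run with its perturbation parameter set to
    zero, against the asymptotic analytical solution at a sample of points.

    Returns the worst relative deviation and a pass/fail flag; a healthy
    algorithm should stay within a small multiple of machine precision
    (about 2.2e-16 in double precision)."""
    worst = 0.0
    for p in sample_points:
        num, ana = numeric(p), analytic(p)
        rel = abs(num - ana) / max(abs(ana), 2.2e-16)
        worst = max(worst, rel)
    return worst, worst <= threshold
```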

The Physical Problem to Solve
In the present paper, we show the numerical algorithm built to solve the equations that compute the gravitational effect of a pressureless, spherically symmetric perturbation on free photons. Those photons are described within the General Relativity Theory (GRT) and follow null geodesics. Moreover, they account for the Cosmic Background Radiation (CBR). For some important definitions related to our problem, see Appendix A. We now present the applied cosmological theory. A spherically symmetric pressureless perturbation may be described in Cosmology using the following exact solutions of the Einstein Equations: (a) the Friedmann-Lemaître-Robertson-Walker solution (FLRW), which represents a homogeneous and isotropic space-time (with constant energy density ρ_B). This is the background Universe. (b) The Lemaître-Tolman-Bondi solution (LTB), which accounts for a Universe with a unique great perturbation (an excess or a defect of energy density ρ over the mean); it is a pressureless, spherically symmetric solution which asymptotically tends to a FLRW metric (background). This is a parametric solution and requires numerical methods to determine the necessary functions at every space-time point. See [44,49].
If an observer O is placed at an arbitrary point of LTB, photons reaching it from different directions will be affected in different ways by the perturbation. Given a direction ψ at t_0, we call T(ψ) the photon temperature received by O evolving in LTB, and T_B(ψ) the one received by the same observer but evolving in the asymptotic FLRW. We are interested in computing the temperature contrast δ(ψ) = (T(ψ) − T_B(ψ))/T_B(ψ); see [50]. This temperature contrast is named the gravitational anisotropy produced by the perturbation, and it is expected to have values of the order of 10^−5. Notice that the spherical symmetry and the absence of external fields allow us to ensure that all planes with ϕ = constant are equivalent; for this reason, only the case ϕ = 0 will be studied. This fact can be seen in Figure 1. The procedure to solve the problem is as follows:
1. Build a suitable algorithm (we will name it CS) to numerically solve the parametric LTB metric with sufficient precision.
2. Generate a second numerical algorithm (we will call it RKF45ng) that allows the integration of the null geodesic equations (or paths of free photons) in LTB with appropriate initial conditions.
3. Compute the gravitational anisotropy.
4. Design suitable controls and tests to check the consistency of the obtained values. To do so, the following facts are considered:
4.1. If the LTB perturbation, while photons are crossing it, maintains small density contrasts ρ − ρ_B ≤ 0.01 ρ_B, it is admissible to make a first-order analytical approximation, both in the Einstein Equations and in the null geodesics, and then integrate this linear analytical approximation. A linear code has been built for such a case. This code supplies a method of comparison for the case ρ ≠ ρ_B.
4.2. If one integrates the null geodesics of an LTB generated with a null perturbation (ρ = ρ_B), one should obtain a FLRW Universe and, therefore, null values of the anisotropy. This is a good test to detect systematic errors and loss of precision. Moreover, it will be the main tool to guarantee that the RKF45ng is stable.
That means our LTB null geodesic integration algorithm is nonlinear in the sense that no analytical approximations of the equations to be integrated are made, although it should also be useful for the linear case studied first. Such a requirement gives rise to additional problems in the error control and in the numerical stability of the system to be integrated in these two situations: (1) when LTB is very close to the FLRW limit (points far away from the symmetry centre); and (2) when the null geodesic passes very near the symmetry centre. The main goal of our research is the detection and solution of such kinds of problems.

LTB Solution of EE
The spherically symmetric conditions imposed on the LTB universe, as well as the use of comoving coordinates and zero pressure, make most of the components of Einstein's tensor vanish. Thus, the evolution equations reduce to: The G_44 = −8πGρ(M, t) equation becomes a relation between M and R(M, t) through ρ(M, t). This relation may be expressed through one of the two following equivalent equations: and the second one allows us to obtain ∂R/∂M; then, this partial derivative can be obtained with the same relative precision as that obtained in R and ρ. The equations G_11 = 0 and G_14 = 0 are analytically integrable and lead to the parametric solution of LTB, with two arbitrary functions f(M) and t_1(M) to be determined from the initial conditions. This solution is: (2) On the one hand, the spherically symmetric LTB metric reads: where: The function f(M) is subject to the restriction f(M) > −1 so that the metric remains well defined. Furthermore, once f(M) is known, ∂R/∂t can be computed. This function then establishes the variation profile of the radial coordinate with respect to t. On the other hand, f(M) plays the role of the difference between the kinetic and potential energies of the M shell. Its sign allows us to distinguish the bound and unbound shells of the gravitational field. As f(M) does not depend on time, whether a shell is bound is determined by the initial conditions.
Once this equation has been solved, the next step is to compute the Christoffel symbols which appear in the null geodesic equations, and which read: Such functions will appear as coefficients of a coupled system of differential equations when integrating the null geodesics. Therefore, maximum precision is required. As a matter of fact, the Γ^1_11 symbol is the only one which is a sum of two functions, and it can have precision problems in the case where it becomes very small.

FLRW Solution of EE
For a FLRW universe, one has: and that is why, if one takes: then FLRW becomes an LTB with constant ρ = ρ_B for every t, with r as a comoving radial coordinate (independent of t) and a(t) as the scale factor. The computations to be completed in this case are straightforward. Note that these equations are AE, not ODE or PDE.

CS Algorithm
In order to complete the effective numerical computation of the parametric LTB solution, one has to perform the following steps:

Initial Functions
To determine the two arbitrary functions of the parametric solution, one proceeds in the following way: the function t_1(M) usually becomes zero at t_D, although this need not be the case.
To determine the function f(M) through the density profile defined at t_D (see Equation (A1) in Appendix A), one proceeds in this way: (i) Computation of R(M, t_D): From Equation (1) and the density profile, one has This equation has a unique positive solution x = R/R_v for each M ≥ 0. Once M is known, it can be written in the following way: and any root-finding method can be applied, as the solution is unique and simple. As in the FLRW case, one has M_B = 4πρ_B R_B^3/3, and x_0 = R_B/R_v can be taken as the initial value for the Newton method. It remains to be checked that the error tolerance is admissible for R. Therefore, a method to obtain the radial coordinate R(M, t_D) associated with each shell is achieved with a prefixed relative error bound. (ii) Computation of f(M): From Equations (2) and (3), one has: If z ≠ 0, one of these equations has a solution (see Figure 2), from which z is obtained through a root computation and, afterwards, f(M) with the same precision, when numerically solved for an arbitrary M and its corresponding R(M, t_D). If neither equation has a solution, this means that the shell satisfies: and then f(M) = 0 (limiting shell).
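The Newton iteration described above, started from x_0 = R_B/R_v and stopped at a prefixed relative error bound, can be sketched generically; the function g below is a placeholder, since the actual root equation comes from Equation (1) and the density profile.

```python
def newton(g, dg, x0, rtol=1e-12, max_iter=60):
    """Newton's method for a unique simple root, stopping on the relative
    change of the iterate so the result carries a prefixed relative error."""
    x = x0
    for _ in range(max_iter):
        x_new = x - g(x) / dg(x)
        if abs(x_new - x) <= rtol * abs(x_new):
            return x_new
        x = x_new
    raise RuntimeError("Newton iteration did not converge")

# Illustrative use with a placeholder equation g(x) = x**3 - 2:
root = newton(lambda x: x**3 - 2.0, lambda x: 3.0 * x**2, x0=1.0)
```

Because the root is unique and simple, convergence from a reasonable starting value is quadratic, which is what makes a prefixed relative error bound cheap to enforce.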

The Other Functions at t_D
In order to determine the rest of the necessary functions, one should proceed this way:

1. As ρ is known at t_D (initial profile), from Equation (1) one can directly compute the value ∂R/∂M = 1/(4πρR^2) with the same relative error bound as R.

2. Furthermore, once f(M) and R(M, t_D) are known, one can also obtain ∂R/∂t with the same relative error bound. In order to know the sign of this derivative, one needs to know whether, at a later time, R(M, t) increases or decreases for the same shell. The advantage of this procedure is that the derivative value is precise and no significant digits are lost (see Appendix B).

3. The other functions appearing in the Christoffel symbols, such as ∂^2R/∂t∂M, ∂^2R/∂M^2 and ∂Φ/∂M, need numerical derivation techniques (at t = constant) applied to the known functions ∂R/∂t, ∂R/∂M and Φ = 1 + f(M), although one cannot now guarantee a fixed relative error bound, as the catastrophic cancellation phenomenon can appear through the loss of significant digits in the subtraction of similar numbers. The logarithmic derivative is crucial here. We tried to choose ∆M so that the maximum of the truncation and rounding errors was as small as possible (see Appendix B).
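The balance just mentioned (truncation error growing like the square of the step, rounding error like machine epsilon over the step) gives an optimal step of the order of the cube root of machine epsilon. The following generic sketch illustrates the idea for a central difference; it is not the paper's actual ∆M choice.

```python
EPS = 2.2e-16   # double-precision machine epsilon (approximately)

def central_diff(f, x, h=None):
    """Central difference f'(x) ~ (f(x+h) - f(x-h)) / (2h).

    Truncation error grows like h**2 while rounding error grows like EPS/h,
    so the maximum of the two is minimized for h of order EPS**(1/3)."""
    if h is None:
        h = EPS ** (1.0 / 3.0) * max(abs(x), 1.0)
    # Make h exactly representable so that x+h and x-h differ by exactly 2h.
    temp = x + h
    h = temp - x
    return (f(x + h) - f(x - h)) / (2.0 * h)
```

Making h exactly representable removes one spurious source of rounding error in the divided difference.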

Functions for t > t_D
As f(M) is now known, given a value of M, one can directly choose among Equations (2)-(4) the one suitable for this shell. Substituting t_D by the desired t and, with the same change z = (Rf)/(GM), one has: We then compute the approximate root for z (if necessary) and obtain the corresponding R(M, t). From then on, easy computations allow us to obtain the rest of the necessary functions at each point (M, t).
For example, with R(M, t) and f(M) known, Equation (6) gives the value of |∂R/∂t|, and a numerical derivation at constant t gives ∂R/∂M. Then, Equation (1) gives the value of ρ(M, t). The rest of the procedure is as in the former case.
Once all the functions discussed above are obtained, one can compute the Christoffel symbols at any shell M and at any time t.

Additional Utility of the CS Algorithm
A variation of this algorithm allows us to obtain, for different FLRW background universes, the ε and R_v parameters at t_D, and this lets us "simulate" the present profiles of certain realistic structures, such as the Boötes Void, the Great Attractor, the Virgo Cluster, and so forth. To do so, one proceeds as follows:
1.
2. With the core "radius", additionally choose a value ε_1 of the contrast for t_D and compute the ρ_B(t_D) value of the FLRW.
3. Simplify the CS algorithm to obtain only the R(M, t) and ρ(M, t) values. Apply it only at t_D and t_0 for a predetermined sample of shells M_i and obtain R(M_i, t_0). Additionally, compute R_v or the present core radius by interpolation, first searching for the value of the core shell. In this way, a function of two variables has been defined, evaluable for each pair of initial data.
7. Now, the numerical secant method for systems of equations allows us to perform iterations, starting from two initial values (ε_1, R_v1) and (ε_2, R_v2), until it finds the desired present maximum contrast and core radius (provided it converges, of course).
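The secant method for systems used in the last step can be sketched with a quasi-Newton (Broyden-type) iteration. Here F is a placeholder for the map (ε, R_v) → (present maximum contrast − target, present core radius − target), and the finite-difference start-up is our assumption; the paper's exact scheme may differ.

```python
import numpy as np

def fd_jacobian(F, x, h=1e-6):
    """Forward-difference Jacobian, used only to start the iteration."""
    x = np.asarray(x, dtype=float)
    f0 = np.asarray(F(x))
    J = np.empty((len(f0), len(x)))
    for j in range(len(x)):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (np.asarray(F(xp)) - f0) / h
    return J

def secant_system(F, x0, tol=1e-10, max_iter=60):
    """Secant-type (Broyden) iteration for F(x) = 0 in several variables:
    the Jacobian is never recomputed, only rank-1 updated from the iterates,
    so each iteration costs a single evaluation of F."""
    x = np.asarray(x0, dtype=float)
    B = fd_jacobian(F, x)
    f = np.asarray(F(x))
    for _ in range(max_iter):
        dx = np.linalg.solve(B, -f)
        x_new = x + dx
        f_new = np.asarray(F(x_new))
        if np.linalg.norm(f_new) < tol:
            return x_new
        df = f_new - f
        B += np.outer(df - B @ dx, dx) / (dx @ dx)   # Broyden rank-1 update
        x, f = x_new, f_new
    raise RuntimeError("secant iteration for the system did not converge")
```

The single F evaluation per iteration matters here, since each evaluation means a full run of the simplified CS algorithm.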
This process is able to simulate perturbations with positive contrast (overdensities) with the same present linear contrast (as shown in Figure 3) and for universes with different values of Ω. The initial parameters of each overdensity are shown in Table 1b. For these cases, high precision is not required in the PARAM algorithm, as the present contrast is obtained from observational low-precision data. The initial R_v are all of the order of 180/51 h^−1 Mpc, and that is the reason why they are not written in Table 1b. So as to generate initial values that lead to nonlinear negative present contrasts (underdensities), a negative present profile was chosen (see Figure 4), and the data found are written in Table 1a. The relation R_v1 ≈ 22.5/51 h^−1 Mpc holds here as well, but it is not included in Table 1a, as it is the same for all underdensities.
The initial values at t D that generate each underdensity appear in Table 1a.

The data of Table 1b have been used to generate overdensities. The effects of these overdensities on the CMB anisotropies have been used to compare the results of our nonlinear algorithm with the linear algorithm stated before. Meanwhile, the underdensities created with the data of Table 1a have been used to test the nonlinear results.

Resolution of the Null Geodesic ODES
From Equation (5) and the fact that ϕ = constant for spherical symmetry reasons, along any null geodesic one has: ds^2 = 0 = g_11 dM^2 + g_22 dθ^2 − dt^2, where On the other hand, if one defines λ as the affine parameter of the geodesic and the tangent components at a point of the geodesic as (K^1, K^2, K^3, K^4), the following relations hold: Finally, the evolution of free photons in GRT must satisfy the following condition In LTB, this means solving the following system: where the Christoffel symbols that we must compute with the CS algorithm appear. The independent variable is the affine parameter λ, and the dependent variables are M, θ, t, K^1, K^2, K^4. These equations form a system of six coupled first-order differential equations that have to be integrated with suitable initial conditions. Since giving the conditions at t_D makes it very difficult to ensure that the photon arrives at the observer at t_0, the best procedure is to proceed in the inverse way. It is supposed that the observer is looking in the direction that forms an angle Ψ_0 with the OC_s axis, and then the initial conditions are: where the 0 subscript stands for the initial value and M_O is the M coordinate of the observer's shell. Let us also suppose that the CMB is homogeneous on the hypersurface t = t_D and that the temperature of all the photons that start from that hypersurface is constant. Under these conditions, it is well known that the gravitational anisotropy to be computed verifies: where a(t) is the scale factor. Theoretically, one only has to compute K^4_D by integrating the system of equations backwards in time.
In any case, it can be expected that problems with the stability of the system will appear when one needs to obtain δ_B of the order of 10^−5 with Equation (18): this is the precision required in K^4_D to obtain at least seven correct decimals in the quotient and to be able to guarantee that δ_B has at least three correct significant digits. If what one tries to obtain is a quasi-null anisotropy (FLRW limit), the precision requirements are even greater.
As can be noticed, the problem is not only the choice of an appropriate integration method. One also needs the integration step to adapt to the precision one chooses: (a) For great values of M, LTB tends to FLRW; there, a very small step only serves to increase the global rounding error and the CPU cost. Moreover, in these conditions, the gravitational anisotropy changes very little. (b) For small values of M, the density contrast increases in one direction and decreases in the opposite one, so one has to be very precise with the chosen step, as in this case the greatest changes of the gravitational anisotropy occur.
We have chosen a Runge-Kutta-Fehlberg method of order 4(5) with a variable step, requiring at first a local relative truncation error of the order of 10^−10 in every variable. Of course, this forces us to demand that the roots x and z of the equations solved with the CS algorithm have the same relative precision, although the functions determined through numerical derivation may not reach it.

Numerical Instability of the Initial System
When we tried to integrate system A for ε = 0 (the FLRW test) on the VAX computer (from the IFIC, of València), it was found that, in some directions Ψ 0 , the integration stopped without reaching t D because it found negative values of M, despite the quadruple precision used to minimize the rounding errors. This absurd result indicated uncontrolled oscillations in the system coefficients (numerical instability). When the values involving Γ α βγ were plotted, amplified oscillations were detected in Γ 1 11 , as was predictable, since it was the only term with sums/subtractions of functions.

First Numerically Stable System
Problem 1. In order to avoid the problems with the Γ 1 11 term, it must not appear in the system to integrate.

Solution 1:
Replace the equation in which the Γ 1 11 term appears with the control equation. At the same time, change the independent variable, taking now the t variable instead of the affine parameter λ. In this way, greater control is achieved over the time at which the integration must be stopped.
Once these changes are made, the new system to solve is the following one: System B: where the dλ = dt/K 4 equation is unnecessary, because the value of this parameter is not of interest, and the eliminated equation is now used as the control equation, with the purpose of monitoring the Γ 1 11 oscillations (by comparing the value obtained when solving this equation for the unknown with the value obtained from the CS algorithm).
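The idea of keeping a redundant equation as a runtime consistency check can be illustrated generically. The sketch below is NOT the paper's System B: it uses the toy oscillator x' = v, v' = −x, whose first integral C = x² + v² is conserved exactly, so its numerical drift plays the same role as the monitored Γ 1 11 comparison.

```python
# Illustrative sketch: an eliminated (redundant) first integral used as a
# "control equation" during integration. All names are ours, not the paper's.
def euler_with_control(x, v, h, n_steps, tol=1e-3):
    """Integrate x' = v, v' = -x with explicit Euler; flag failure when the
    control quantity C = x^2 + v^2 drifts beyond tol (relative)."""
    c0 = x*x + v*v                      # value the control equation must keep
    drift = 0.0
    for _ in range(n_steps):
        x, v = x + h*v, v - h*x         # one explicit Euler step
        drift = abs((x*x + v*v) - c0) / c0
        if drift > tol:                 # control test violated: stop and flag
            return x, v, drift, False
    return x, v, drift, True
```

With a small step the control test passes; with a coarse step the drift is detected immediately, exactly the behaviour one wants from a control equation.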
The integration of System B with ε = 0 (FLRW test) reaches t D if |Ψ 0 | ≥ 0.1 degrees. The numerical instability of the system has thus disappeared, and the diagnosis of Problem 1 is confirmed. Nevertheless, the final values of δ B are of the order of 10 −5 , depending on the initial value of Ψ 0 , although they should be close to 0. This fact indicates that either (1) there are systematic errors in the integration or (2) the step adjustment of the integration is not suitable. As a matter of fact, both problems appear when the observation angle Ψ 0 is small.
When Ψ 0 is small, the trajectory of the photon which comes from t D gets ever closer to C s , travelling through shells of decreasing M until it starts to move away from C s . This indicates that dM/dt changes its sign at some point of the photon trajectory, defining a minimum at M < M O . This minimum must be detected by the integration method, or else systematic errors may appear. Moreover, the objective function does not take part in the adjustment of the integration step, which is dangerous, as significant digits may be lost in it.

Inclusion of the Objective Function
In the previous section (Section 7.2), some troubles have been presented. They can be summarized this way: (2a) The possible change in the sign of dM/dt must be detected. (2b) The objective function must be included in the system of equations.

Solution 2.
(2a) The second line of System B suggests that dM/dt could be solved as a function of dθ/dt, or vice versa, but then its sign would not be well-defined. This equation is of the type 1 = cos 2 α + sin 2 α, as both g 11 and g 22 are positive, and therefore an angle α must exist through which these derivatives can be defined by trigonometric functions.
As a matter of fact, let (K 1 , K 2 , 0, 0) be the spatial projection of the tangent vector to the null geodesic and ( g 11 (M, t), 0, 0, 0) the radial direction which joins C s with the photon. The cosine of the angle formed by these two vectors is given by the quotient of their dot product and the product of their norms, that is: which, using the first and second equations of System B, gives: and allows the other equations to be rewritten in terms of α. Now the sign of dM/dt is perfectly defined, and this derivative changes its sign when α = π/2 ± nπ, with n an integer.
On the other hand, the initial value of α at t 0 is, evidently, Ψ 0 (see Figure 5), and the following can be written: Now the integration from t 0 to t D is done in two parts, both with constant sign, split where cos α changes its sign. In this way, it is guaranteed that no "jumps" over the extremal point are produced because it lies inside a large integration step. This modifies the RKF45ng in the following way (if |Ψ 0 | < π/2): (i) While |α| ≤ π/2 both at t and at t − h, the RKF45ng controls the change of the step. (ii) The first time that a change of sign between t and t − h appears, the increase of the step h is forbidden until further notice, the values at t − h are not accepted, and the step is divided by 10. The integration continues with this restriction and, every time that |α| > π/2 is exceeded, the new value is rejected and h is divided by 10 again. (iii) When |h| < tol h or π/2 − |α| < tol α , the control of the step is returned to the RKF45ng so that the integration is completed by increasing or decreasing the step.
As our goal is to approach the extremal point with a desired precision, and it is known that the extremal point is unique (if it exists), no more changes are needed, except for the obvious ones required when the integration runs from t D to t 0 in the double integration test proposed further on.
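The restriction logic (i)-(iii) above can be sketched as follows. A real implementation would invoke the RKF45ng at each step; here the update alpha → alpha + rate*h is a runnable toy stand-in, and every name is ours, not the paper's.

```python
import math

# Hedged sketch of the step restriction (i)-(iii) around the extremal point
# alpha = pi/2: reject overshoots, divide h by 10, and hand control back
# when |h| < tol_h or pi/2 - |alpha| < tol_alpha.
def bracket_extremal(alpha0, h0, tol_h=1e-12, tol_alpha=1e-10, rate=1.0):
    alpha, h = alpha0, h0
    restricted = False                  # in the real code, forbids h growth
    while True:
        trial = alpha + rate * h        # toy stand-in for one accepted step
        if trial < math.pi / 2:
            alpha = trial               # (i) accepted: still below pi/2
        else:
            restricted = True           # (ii) overshoot: reject the value
            h /= 10.0                   #      and divide the step by 10
        if abs(h) < tol_h or (math.pi / 2 - abs(alpha)) < tol_alpha:
            return alpha, h             # (iii) return control to the RKF45ng
```

Starting from alpha0 = 1.0 with h0 = 0.1, the loop walks up to π/2 from below, shrinking h by a decade at each overshoot until one of the two tolerances fires.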
(2b) So that the integration algorithm also controls the error of the objective function, the auxiliary function is defined as follows: which verifies the relation δ B = δ(t D ). The total derivative of this equation with respect to time gives a new differential equation for δ, with initial value δ(t 0 ) = 0, which leads to the following system: System C: with the variables α, δ and M decoupled from the other ones. However, the integration of System C with ε = 0 does not substantially improve the results of System B. Our conclusion is that the true problem with this test lies in the different value ranges of the variables and of their total derivatives with respect to t. Thus: (i) M has a very great range of positive values that depends on the observation direction Ψ 0 , but dM/dt has a constant sign in some directions and changes sign in others. (ii) α always takes values in ]0, π] (if the observation direction is positive) and behaves monotonically along the null geodesic. (iii) δ reaches a maximum (as M reaches a minimum) when |Ψ 0 | mod π is small but, in any case, its variation range is very small.
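The mechanism of (2b), appending the objective function as one more dependent variable so that the step controller also bounds its error, can be shown on a toy system (our example, not System C): y' = −y together with the auxiliary δ' = y, δ(0) = 0, for which δ(t) = 1 − exp(−t) analytically. A plain fixed-step RK4 is used for brevity.

```python
import math

# Sketch of (2b): the objective function delta is integrated as an extra
# state component, so the integrator controls its error along with y's.
def rhs(t, state):
    y, delta = state
    return (-y, y)                      # (dy/dt, d(delta)/dt)

def rk4(f, t, state, h):
    """One classical RK4 step on the augmented state tuple."""
    k1 = f(t, state)
    k2 = f(t + h/2, tuple(s + h/2*k for s, k in zip(state, k1)))
    k3 = f(t + h/2, tuple(s + h/2*k for s, k in zip(state, k2)))
    k4 = f(t + h, tuple(s + h*k for s, k in zip(state, k3)))
    return tuple(s + h/6*(a + 2*b + 2*c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

state, t, h = (1.0, 0.0), 0.0, 0.001    # (y, delta) at t0 = 0
while t < 1.0 - 1e-12:
    state = rk4(rhs, t, state, h)
    t += h
# state[1] now approximates delta(1) = 1 - 1/e with the same error control
```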

Rescaling of the Variables. RKF45ng Algorithm
Therefore, it is necessary to rescale the truncation error of each variable, or the integration method itself, so that the step is adjusted according to the truncation error of the dominant variable.
The point here is that we are building a general algorithm, and there is no "perfect rescaling" that covers all the possible combinations of LTB, linear or not, with guarantees of success. Moreover, the scaling of the errors depends on the observation direction, and this cannot be achieved with the classical method of defining a vector of "relative weights" for the variables.
The only way to make all the integration variables have a similar truncation error is to change the variables M and α into two new ones, y 1 and y 2 , that present small changes in their values and in their derivatives. Such a change of variable must be understood as a rescaling of the variables, at every integration step, that helps the RKF45ng to control the local truncation error according to the physical conditions of the problem.
Given the direction Ψ 0 and the null geodesic γ of the LTB that arrives at O in this direction, let γ B be a fictitious null geodesic of FLRW that arrives at O in the same direction. (a) Associate the points of both geodesics with the same coordinate t: the point (M, α, t) ∈ γ is made to correspond with the point (M B , α B , t) ∈ γ B . (b) Use M B and α B to define the integration variables y 1 = M B /M, y 2 = α B /α. Now the variables y 1 , y 2 , δ will be the ones that control the step, and the first two have their range reduced to a neighbourhood of 1.
In terms of the variables δ, y 1 and y 2 , the system to solve is formed by the following equations: If the order chosen for the variables is δ, y 1 , y 2 , α, α B , M B , the rescaling vector can be defined as scale = ( f ac δ , 1, 1, 0, 0, 0), which multiplies the truncation error of each variable. This definition implies that the RKF45ng will not consider the variables α, α B , M B to adjust the step. For y 1 and y 2 it will simply consider their truncation errors, while for δ it will consider its truncation error multiplied by f ac δ . Such a choice allows a first integration with f ac δ = 1 and, if the results do not pass the new suitable tests, f ac δ may be progressively increased to force the RKF45ng to take smaller steps.
More concisely, define: (a) ∆ i = (y n+1 ) i − (y * n+1 ) i for each integration variable (see [4]). (b) The vector reesc i = ∆ i · scale i . (c) Finally, ∆ = max i |reesc i | (the maximum modulus component). The RKF45ng is now prepared so that it only considers the truncation errors of the first three variables to adjust the step.
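Steps (a)-(c) can be sketched as a small helper. The variable order (δ, y 1 , y 2 , α, α B , M B ) and the rescaling vector are those of the text; the function name is ours.

```python
# Sketch of the rescaling steps (a)-(c): only the first three components
# (delta, y1, y2) drive the step size, and fac_delta weights the objective.
def rescaled_error(y_next, y_star, fac_delta=1.0):
    scale = (fac_delta, 1.0, 1.0, 0.0, 0.0, 0.0)        # rescaling vector
    deltas = [a - b for a, b in zip(y_next, y_star)]    # (a) Delta_i
    reesc = [d * s for d, s in zip(deltas, scale)]      # (b) reesc_i
    return max(abs(r) for r in reesc)                   # (c) max-modulus Delta
```

A step is accepted when `rescaled_error(...) <= tolr`; increasing `fac_delta` forces smaller steps through the δ component alone, while the truncation errors of α, α B , M B are ignored entirely.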
Theoretically, there is no limit for f ac δ . However, with f ac δ = 10 5 , the condition ∆ ≈ 10 −10 reads 10 5 · ∆ 1 ≤ 10 −10 : the local truncation error in δ is required to be of the order of 10 −15 , and the relative error in the final step of the order of 10 −12 after 1000 steps. This must be avoided, since the greater f ac δ is, the greater the CPU cost: more integration steps are done and the global rounding error increases.
The variables y 1 , y 2 should oscillate around unity, so that, with f ac δ = 1 and an absolute local truncation error condition of ≈ 10 −10 , one may ask for 9 or 10 correct decimal digits in y 1 , y 2 and δ. If δ reaches values of the order of 10 −5 , at least two or three correct digits are expected after 1000 integration steps. Such precision is acceptable to determine the order of the gravitational anisotropy, and it makes no sense to demand more given the CPU cost and the problem conditions. In fact, the objective function obtained is ≤ tolr 2 × 10 −4 . As some of the functions computed by derivation have a relative error greater than the local truncation error, it must be concluded that the final system is numerically stable for the FLRW limit.

Convergence and Stability. New Tests for the RKF45ng When ε = 0
Once the stability of the system for the FLRW limit has been checked, there is a guarantee that the final system works properly for great values of M, and therefore one can be confident that the integration process does not "contaminate" the δ variable at t D . A check of the final results obtained for LTB is still lacking. In order to perform it, the following tests were designed.

Double Integration Test
Integrate the final system from t 0 to t D and then back from t D to t 0 ; if [valini1] = rk f 45ng(val f in, +1), the difference |valini − valini1| allows us to "estimate" the recovered digits of each initial variable.
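The principle of the double integration test can be sketched on a toy equation y' = −y (our example): integrate forward, feed the result into a backward integration, and use |valini − valini1| to estimate the recovered digits. A fixed-step RK4 stands in for the paper's rk f 45ng(·, −1) and rk f 45ng(·, +1) calls.

```python
import math

# Sketch of the double integration test: forward leg, backward leg, and the
# difference |valini - valini1| as a digit-recovery estimate.
def rk4_integrate(f, t0, y0, t1, n):
    h = (t1 - t0) / n                   # h < 0 integrates backwards
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y); k2 = f(t + h/2, y + h/2*k1)
        k3 = f(t + h/2, y + h/2*k2); k4 = f(t + h, y + h*k3)
        y += h/6 * (k1 + 2*k2 + 2*k3 + k4)
        t += h
    return y

f = lambda t, y: -y
valini = 1.0
valfin = rk4_integrate(f, 0.0, valini, 1.0, 1000)    # forward leg
valini1 = rk4_integrate(f, 1.0, valfin, 0.0, 1000)   # backward leg
digits = -math.log10(abs(valini - valini1) + 1e-300) # recovered digits
```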

Direction Test
Integrating the final system with −Ψ 0 and Ψ 0 ± 2π, besides Ψ 0 , allows us to detect possible mismatches in the rescaling vector when the modulus of α or its sign change.

Extrapolation Test
The analysis of the final plots of δ B as a function of Ψ 0 allows us to determine the smoothness of this curve and the proper working of the extrapolation to Ψ 0 = 0.

Linear Test
If inhomogeneities that are at present in a linear regime are generated, one can obtain δ B both with the linear code and with the rk f 45ng, and compare their results.
The case ε = 0 was run with tolr = 10 −10 , f ac δ = 1, a local truncation error of the order of the rescaled tolr, and quadruple precision on the VAX of the IFIC.
For the linear test, the following steps are given: 1.
With both codes, linear and rk f 45ng, the present temperature contrast δ B is computed for Ψ 0 = 1, 2, · · · , 180. At Ψ 0 = 0, the result is obtained by extrapolation. This way, the values of δ B are known at 181 evenly spaced points.

2.
By taking into account that T(t 0 )/T D = (1 + δ B )/(1 + Z D ), the T 0 (Ψ 0 ) temperatures in each direction and their mean are computed:

3.

Now the new contrast is computed: As a consequence of this definition, the monopole of δ T is null.

4.
The dipolar contribution to δ T is related to the Observer-Symmetry Centre relative velocity (Doppler effect). The other contributions to δ T are collected in δ R , the non-dipolar anisotropy contribution, computed by subtracting the dipolar part from δ T .

5.
Given any inhomogeneity O 3 -O 7 of Table 1b, δ T plots of the linear solution and of the rk f 45ng solution are indistinguishable.

6.
For the case O 4 , the plots of δ R (Ψ 0 ) are shown in Figure 6a. The continuous plot corresponds to the linear computations, while the discontinuous plot corresponds to the non-linear ones (rk f 45ng algorithm). The figure shows that both results are very similar, which indicates that both schemes are comparably effective. For the other inhomogeneities of Table 1b, we also obtained very similar results with both algorithms (linear and non-linear). Figure 6b plots the non-linear computations of δ R (Ψ 0 ) for the cases O 3 -O 7 of Table 1b. In Figure 6b, we can appreciate that the CMB anisotropy grows as the value of Ω decreases. Therefore, these CMB anisotropies, the δ R (Ψ 0 ) values, are greater the more open the model of the universe is.
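Steps 2-3 of the procedure above can be sketched with synthetic data. The relation T(t 0 )/T D = (1 + δ B )/(1 + Z D ) is the one quoted in the text; the plain (unweighted) angular mean is our simplification of the mean over directions, and the contrast values are made up.

```python
import math

# Sketch of steps 2-3: direction-dependent temperature, its mean, and the
# new contrast delta_T whose monopole is null by construction.
Z_D = 50.0
T_D = 1.0                                           # arbitrary units
psis = [math.radians(k) for k in range(181)]        # directions 0..180 degrees
delta_B = [1e-5 * math.cos(p) for p in psis]        # synthetic contrast

# Step 2: T0(psi) = T_D * (1 + delta_B) / (1 + Z_D), and its mean
T0 = [T_D * (1 + d) / (1 + Z_D) for d in delta_B]
T_mean = sum(T0) / len(T0)

# Step 3: contrast relative to the mean; its average (the monopole) vanishes
# up to rounding error
delta_T = [t / T_mean - 1 for t in T0]
monopole = sum(delta_T) / len(delta_T)
```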

Results and Discussion
Next, some of the most important physical results of the numerical work presented in this paper are discussed, with an indication of where they have been published. The detailed numerical and computational techniques that were not presented in those papers are presented here. The value of the results and their original features are analysed throughout the present paper.
The first results of the physical application presented in this paper can be found in the Ref. [11]. This was the first time our method was presented; linear tests and preliminary results with non-linear structures were shown. The first conclusion was that, in linear structures, both our non-linear and linear algorithms produced graphically indistinguishable results: the results had to be expanded down to the quadrupole to appreciate the difference. As a side-effect, we found that our non-linear algorithm produced higher levels of precision in δ B than expected. The reason was that even the octupole (with values between 10 −7 and 10 −9 ) "appeared" to behave properly, although only tolr = 10 −10 was required as the local error bound in each step and between 800 and 1200 steps were performed in the cases treated there. A less robust code would, in all likelihood, have presented octupole noise contamination.
Concrete simulated experiments with the non-linear version (part) of the algorithm were performed on the clearly non-linear structures of the Great Attractor and the Virgo cluster [12]. These calculations referred to the computation of the quadrupole, which was separated into two parts: (a) Q D or relativistic quadrupole and (b) Q R or the remainder. Comparisons with COBE satellite data were made, which revealed that Q R ≈ 10 −7 was too small to be detected with the technology of the epoch (the nineties of the past century). At the same time, the robustness of our code in [11] was confirmed, which reinforced our results. The effect of a Great Attractor-like structure located at redshifts Z 0 between 2 and 30 was also studied [13]. Varying the density parameter Ω 0 between 0.15 and 0.4, the conclusion was that the pure gravitational effect of these structures was sufficient to explain a significant part of the large-scale anisotropy of the Universe. Such a conclusion meant that it might be necessary to review the concept of large-scale angular anisotropy generation. The results presented in the Ref. [13] were published as a rapid Letter due to the importance of their findings.
More results were obtained after the previous papers. In the Ref. [14], a modification of the density profile at t D was presented. This modification included two contrast values ε 1 < 0 and ε 2 > 0 and two core radii R v1 >> R v2 , that is, two superimposed disturbances with the same centre of symmetry. These values had to be adjusted so that at a certain R >> R v1 , at t D , the density contrast would be null (structures compensated at a predetermined distance). To do this, the PARAM algorithm was modified in a nontrivial way. Many graphs and tables which helped clarify the analysis were generated, and important conclusions, for instance, the compatibility with observational data, were obtained. The most interesting findings were produced for very open universes and objects very far away from the observer. A deeper study of the structures presented in the Ref. [13] can be found in [15], where a more in-depth study of great underdensities was also performed. In the Ref. [16], the study of the Boötes void was carried out with a technique similar to that of paper [14]; that is, simulating a density void surrounded by a "wall" of excess density which compensates the internal void.
Moreover, we have the following papers about CMB anisotropies in LTB universes: [51][52][53][54][55][56]. The reader may find comprehensive explanations, including important consequences of previous applications of our algorithm, in the book ?? and in the review paper of the Ref. [18]. The latter also outlines an overview of CMB and CMB anisotropy research to date.
At this stage, we would like to say some words about the results and the convergence and stability of our algorithms. They are written in the next three paragraphs.
With the example we were trying to solve, we have proven that the expected signal of our objective function, the temperature contrast or anisotropy of the CMB in an LTB universe, was persistently polluted by the integration method. If ε = 0, the density contrast is null and our objective function should be 0 for any value of t, in particular for t = t D , since we would be integrating in a FLRW universe, the asymptotic limit. It could be argued that the ODES of the LTB is inadequate for FLRW, but it is clear that, at a great distance from the centre of the disturbance, both universes almost coincide in any direction of observation, so the integration method was contaminating the signal we were studying. Our integration method was one of the most widely used among Runge-Kutta methods, the Cash-Karp version of the RKF45. However, since we did not need to resort to the coefficients which provide methods of order h 3 , h 2 and h 1 , we actually used an RKF45. The convergence and theoretical stability of such a method are sufficiently analysed in the bibliography (see, for instance, [3,4]).
Our tools have been presented successively to show which specific problems each one improves, and to make a normalized analysis of the value of each tool; with that aim, specific precision requirements were fixed. These were: (i) tolr = 10 −10 as the bound of the local truncation error and of the relative precision of the root calculations when performed, (ii) ε = 0, so that our objective function gave us the order of the polluting noise, and (iii) calculation in quadruple precision to minimize the effect of rounding errors (see Appendix B). With this, we intended to estimate the impact of the noise on our objective function. We found that the initial system could not reach t D in some directions because it obtained M < 0 in some shells, which was absurd. We rewrote our ODES to avoid this problem and, with a first correction, managed to reach t D down to observation angles of 0.1 degrees. It was only after applying all the mentioned tools and obtaining the final system that we verified that our objective function was less than tolr 2 /100 2 . This is a resounding zero given the tolerance, tolr, required for each function and its derivatives. It must be noted that, if the local truncation error is of the order of tolr, after about 1000 integration steps the theory tells us that the global truncation error could be of the order of 1000 × tolr, that is, of the order of 10 −7 . On the other hand, we checked that the overall noise was of an order less than 10 −24 when working in quadruple precision, that is, with about 38 decimal places. These findings ensure that our algorithms are robust, convergent, and very stable.
Our algorithms have been used in the series of papers described above, applied both to linear and non-linear structures. Some interesting results about stability and convergence can be summarized as follows: (i) The double integration test gives values in the range |valini − valini1| ∈ [10 −8 , 10 −7 ] for δ B . As the initial value of δ B is 0 and the final value is δ B ≈ 10 −5 , it is deduced that the final value has at least 8-9 correct decimal digits, which is enough. As a matter of fact, it quite possibly has more correct digits, since the way of "forcing" the approximation to α = π/2 is not equivalent in both integrations. (ii) The other initial variables give similar relative errors, i.e., |valini − valini1|/|valini| ∈ [10 −8 , 10 −7 ]. (iii) All the published plots are smooth enough to allow extrapolation at Ψ 0 = 0.
There are recent examples of optimization of numerical methods similar to ours; for instance, the numerical resolution of the Dirac equations (see [19], where the DiracSolver is introduced, and references therein). Another technique which improves numerical algorithms following a procedure with some similarities to our work is presented in the Ref. [38]; that work connects with our research, as its tests follow similar standards (see also references therein in the Ref. [38]). There is also recent numerical research in Astrophysics which could be connected to ours. In the Ref. [57], the authors worked numerically with McVittie's metric and the Mathisson-Papapetrou (MP) equations in the post-Newtonian approach [58]. They analytically and numerically investigated the orbits of spinning particles around black holes in the post-Newtonian limit and in the presence of cosmic expansion. On the one hand, the study of the LTB and FLRW metrics could be connected with McVittie's one. On the other hand, the spinning particle orbits could be compared with the numerical Relativistic Positioning Systems techniques we used in the Refs. [47,59].

Summary and Conclusions
We have presented here the resolution of an ODES applied to a cosmological problem. We chose an RKF45 instead of an Adams method, and have described all the adaptations made to solve our physical problem. The objective function has been defined. The originality of our work is that the numerical tools used in our algorithms, although particular to our problem, can be extended to other problems in physics and engineering, as we remark in the text. The accuracy of the predictions depends on the accuracy of the model and the rigour of the tests, which is also discussed along the text. The numerical algorithms built to solve the physical problem have been described for use with adaptive step-size numerical methods. In their application, some general properties have been used which can be extrapolated to other problems. Catastrophic cancellations have been suppressed, and this kind of suppression can be considered in similar problems. Additionally, the introduction of the objective function helps to optimize the resolution of the ODES. Moreover, local error rescaling is applied, and its usefulness is explained in our paper. This rescaling is performed by using the asymptotic solution of the ODES, a technique that can also be applied to similar problems. The change of variables guarantees the stability of the solution, and the linearization of the problem is used as a powerful control test. We state that this kind of tool can solve a wide range of problems that are unstable or lack a proper solution with standard numerical tools. These results represent one of the main novelties of our paper and its scientific contribution.
Some conclusions can be extracted from our present study. The IVP techniques used in our research can be applied to solve ODES. Such techniques should be used when applying adaptive step-size numerical methods with variables spanning a wide range of values and when seeking an almost residual value of the objective function. In order to consider these dynamical systems as a disturbance of the asymptotic system, it is necessary to find an asymptotic solution on which to rely; the dynamical system is then remodelled in order to study the evolution of this disturbance. This is the meaning of our change to the variables y 1 , y 2 and of the rescaling of the errors. The RKF45 algorithm used, and adaptive methods in general, can be applied to other similar problems, such as N-body problems treated with AP3M or similar codes. The way catastrophic cancellations are eliminated should be used in problems where ODES have to be solved. Mathematical optimization was carried out by introducing the objective function into the ODES, and we encourage the use of this procedure when an ODES admits a similar treatment. Local error rescaling was also employed to solve the problem: it implies using certain variables to adjust the integration step, while the other variables are used as parameters to determine the coefficients of the ODES. This rescaling is made by using the asymptotic solution of the system, an approach that has proven very useful and should be applicable to problems similar to ours. The change of variables is necessary to guarantee the stability of the integration. The linearization of the ODES, when possible, works as a powerful control test. All the skills named in these paragraphs are powerful tactics to be applied to physical problems where ODES must be solved.
We will now say a few words about future extensions of our research and an outlook. The case treated here is the effective numerical resolution of LTB space-time solutions of EE. These tools can also be applied to other LTB problems, which makes them relevant. In addition, our procedures may be used with other metrics, such as McVittie's [57]. As a matter of fact, we have used them in N-body algorithms when studying the large-scale structure of the universe in Cosmology, as well as the CMB anisotropies, both in the framework of the Hydra project (see [45,46]) and in similar ones (see [60]). The Hydra code is based on AP3M (Adaptive Particle-Particle, Particle-Mesh) and is adapted with our routines to the evolution of Dark Matter (with and without gas) in Cosmology. At present, some of our techniques are also being applied to Relativistic Positioning Systems in the Solar System (see [47,59]). We solved an ODES of 36 equations whose initial values were obtained from ephemerides and evolved with EE using an asymptotic system based on Schwarzschild's metric. Our rescaling approach is crucial there, as the differences between the semi-axes of satellites and of celestial objects in the Solar System must be resolved. In the near future, we want to present the progress of these skills and the new algorithms built so far in another paper. All of these and similar techniques are powerful tools for other issues in astrophysics, physics and engineering in general. We hope all the procedures presented here and in other publications will be useful in the resolution of ODES, both in our specific field and in science and technology in general.

1.
Comoving coordinates. These coordinates assign constant spatial coordinate values to observers who measure the universe as isotropic. As such observers move along with the Hubble flow, they are called comoving observers; the CMB is also perceived by them to be isotropic. It is useful, for our purposes, to assign comoving coordinates to every point of the universe because FLRW and LTB universes are in expansion. Moreover, due to the spherical symmetry of such universes, we choose coordinates P(M, θ, ϕ, t), with origin in the symmetry centre C s of the perfect fluid which is their source. Furthermore, we take the coordinates O(M O , π, 0, t) (O being the observer), as the ϕ coordinate is useless in our study. Additionally, for symmetry reasons, it is sufficient to study the spatial half-plane that contains the OC s axis (see Figure 1). The M coordinate is a label associated with every spatial spherical surface centred in C s , named a shell, and is related to the mass-energy contained in the surface defined by the shell. The radial coordinate R(M, t) of such a shell depends on time and is related to M and ρ(M, t) through the EE. In FLRW, we choose an arbitrary point as the origin of coordinates and an arbitrary direction as the axis (at constant t, all the axes are equivalent), and the coordinates Q(r, θ, ϕ, t) are also defined, where r is a radial coordinate, which is also comoving. The shells in this universe are a mere auxiliary definition which depends on the origin of coordinates.

2.
Decoupling time. This is defined as the time t D from which the dominant effect on the free photons can be considered to be gravitational. This moment is dated by the redshift Z D measured from the spectral lines, and should be taken at Z D ≈ 1100. From this moment, photons propagate freely in the gravitational field of the Universe and its structures. As we are interested in computing the gravitational effect produced by nonlinear structures, we can take initial values at some suitable Z D < 1100. In our work, we take initial values at Z D = 50 (≈ 10 3 Mpc; 1 Mpc ≈ 3.26 × 10 6 light years).

3.
Scale factor. This is the time function a = a(t) included in the FLRW metric. The distances among galaxies are proportional to the scale factor, and the following relation holds: a(t) = a(t 0 )/(1 + Z), where t 0 is the present time and Z is the redshift at time t.

4.
Critical density. This is the mass density value ρ c such that, when exceeded, the effect of gravity in the universe is great enough to stop its expansion in finite time. In our work, we used it to define different values of ρ B as background FLRW universes for our LTB model. This definition is made through the density parameter Ω = ρ B /ρ c .

5.
Density profile. This is an initial condition to determine the LTB metric. At the initial time t D we take, in this work, a density profile given by: where ε gives the maximum density contrast (ρ − ρ B )/ρ B and R v indicates the radial coordinate at which this contrast is reduced to a half (we call it the core). As t varies, ρ(M, t) changes. The profile at time t may change its shape if the perturbation is nonlinear; nevertheless, with linear perturbations it is acceptable to assume that the shape of the profile remains the same. R v and ε, besides Ω, are initial values for the designed algorithm; Ω ∈ [0.2, 1[ has been taken in the present work. With this initial profile, the spherical symmetry is guaranteed, as well as the asymptotic FLRW limit with ρ B = constant.

Appendix B. Numerical Aspects to Take into Account
Appendix B.1. IEEE Norm The standard representation of numbers in floating point arithmetic (scientific calculus) is of the type y = ±(.d 1 d 2 . . . d t ) β × β e , and is defined by four integer values: the base β, the precision or word length t, and the range of exponents e min ≤ e ≤ e max . The mantissa m = ±(.d 1 d 2 . . . d t ) satisfies 0 ≤ m < 1 and 0 ≤ d i < β. In order to ensure a unique representation, d 1 ≠ 0, and in this case the representation is said to be normalized; if, moreover, the base is 2, then d 1 = 1, so it is not necessary to store it (the hidden bit), just as the point itself is not stored.
The word length t is chosen for each variable by assigning a type such as float, double, quadruple, and so forth; hence the concept of precision and rounding error for each variable. The IEEE standard specifies three basic formats, among others; in order not to extend ourselves too much, we limit ourselves to binary machines (see Table A1). The IEEE standard also specifies that, if w is an elementary operation on the operands (x, y), the result z = (x w y) must be given as if the operation had been performed exactly and then rounded. This guarantees that the relative rounding error of the elementary operations is of the order of the values given in column 5 of Table A1. Naturally, the loss of significant digits has nothing to do with rounding, but with the negligence of the programmer.
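The rounding unit of the binary64 ("double") format discussed above can be checked directly; this small sketch uses only standard Python and the round-to-nearest-even behaviour mandated by the IEEE standard.

```python
import sys

# Check of the binary64 rounding unit: with round-to-nearest-even, adding
# the unit roundoff u = eps/2 to 1.0 is absorbed, while adding eps (the gap
# between 1.0 and the next representable double) is not.
eps = sys.float_info.epsilon        # 2**-52 for binary64
u = eps / 2                         # unit roundoff, ~1.11e-16
assert 1.0 + u == 1.0               # u is lost to rounding
assert 1.0 + eps > 1.0              # eps is representable
```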

Appendix B.2. Intrinsic Functions
We refer in this section to the functions that compilers incorporate as elementary functions: exp, log, sqrt, all hyperbolic and trigonometric functions and their inverses, and so forth. Such functions are not only optimized in CPU time; they also choose the method best suited to the value of the input argument, so that the error in the output is of the order of the rounding unit. They should therefore be used whenever possible, so that the rounding error can be estimated.
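A small sketch of this point (our example, not from the source): many libraries also provide specialized intrinsics such as log1p(x) = log(1 + x), accurate for small x, precisely because the naive composition first rounds 1 + x and so loses most of the digits of x:

```python
import math

x = 1e-12

# Naive composition: forming 1.0 + x discards most digits of x
# before the logarithm is ever taken.
naive = math.log(1.0 + x)

# Intrinsic designed for this regime: evaluates log(1 + x) without
# forming 1 + x, with error of the order of the rounding unit.
good = math.log1p(x)

print(naive, good)
```

Here `good` agrees with the series x − x²/2 + ... to full double precision, while `naive` has lost roughly a third of its significant digits.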

Appendix B.3. Numerical Derivation
In numerical analysis, it is well known that, given g(x) ∈ C² and for any h, Taylor's expansion allows us to write g(x + h) = g(x) + g'(x)h + ½ g''(ξ)h², with ξ ∈ ]min(x, x + h), max(x, x + h)[, so it is very common to calculate the numerical derivative of the function g(x) from the forward difference g'(x) ≈ (g(x + h) − g(x))/h, where E(g') designates a bound of the truncation error for the derivative, a truncation error because we have cut the Taylor expansion at the linear approximation.
With the relative error of g' we already encounter the first problem, because we should calculate E(g')/|g'(x)|, and that can cause trouble in a neighbourhood of the extremals of g(x), where the derivative is close to 0. It is true that if the derivative vanishes we should have g(x + h) = g(x), but in finite-precision arithmetic strict equality is a lottery, and we cannot subject our algorithms to such chance. Additionally, E(g') can lose significant digits if h is too small, making our attempt to control the error even more disastrous.
We refer to this problem when we say that the truncation error of the derivative is of order O(h) while the rounding error is of order O(1/h). Strictly speaking, we are talking improperly: we should say that the rounding error is of the order of the rounding unit mentioned in the IEEE standard, divided by |h|. If h is small, we lose precision in the derivative, as if our double variable became a float in a single calculation when h ≈ 10⁻⁸. Due to this circumstance, it is essential to implement ways to detect the extremals by means of geometric or trigonometric formulas that allow us to avoid these problems.
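The trade-off between the O(h) truncation term and the O(u/|h|) rounding term can be seen in a minimal sketch (the test function exp and the step sizes are our choices, not from the source):

```python
import math

def forward_diff(g, x, h):
    """Plain forward difference (g(x + h) - g(x)) / h."""
    return (g(x + h) - g(x)) / h

# Test function with a known derivative: d/dx exp(x) = exp(x).
g, x = math.exp, 1.0
exact = math.exp(x)

# Truncation error decreases like O(h), while rounding grows like O(u/|h|):
# the total error is minimized near h ~ sqrt(u) ~ 1e-8 in double precision.
errors = {}
for h in (1e-2, 1e-5, 1e-8, 1e-11, 1e-14):
    errors[h] = abs(forward_diff(g, x, h) - exact) / exact
    print(f"h = {h:.0e}   relative error = {errors[h]:.1e}")
```

The error first decreases with h (truncation dominates) and then increases again once the cancellation in g(x + h) − g(x) and the division by |h| amplify the rounding of each evaluation.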
While it does not seem feasible to avoid the amplification by 1/h of any errors in the evaluation of g(x), we can protect ourselves against the possible catastrophic cancellation in ∆g = g(x + h) − g(x) when both evaluations have the same sign (otherwise there is no cancellation problem). The solution is as simple as using the Taylor expansion of the function log(|g(x + h)|). Suppose g(x + h), g(x) > 0; then: log(g(x + h)) = log(g(x)) + (g'(x)/g(x)) h + ½ (log g)''(ν) h², with the same conditions for ν that ξ satisfied before. If we now keep the linear approximation and solve for g'(x), we obtain: g'(x) = g(x) (log(g(x + h)) − log(g(x)))/h = (g(x)/h) log(g(x + h)/g(x)).
Now the quotient of the evaluations of g is an elementary operation and log is an intrinsic function, so all operations are within the scope of the rounding unit corresponding to the declared variable type. The step h can be very small and will only affect the value of g'; since the two values g(x + h) and g(x) will be very close, the quotient will be close to 1 and its logarithm tends to zero. This does not solve all the problems, but it helps to a high degree in our case to calculate the three partial derivatives with respect to M in the CS algorithm.
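The formula above can be sketched as follows (a minimal illustration, assuming g > 0 on the working interval; the function name and test function are ours):

```python
import math

def log_diff_derivative(g, x, h):
    """Derivative via g'(x) = (g(x)/h) * log(g(x+h)/g(x)).

    Assumes g(x + h) and g(x) are both positive (same sign, nonzero).
    The quotient g(x+h)/g(x) is a single elementary operation and log
    is an intrinsic, so the subtraction g(x+h) - g(x) never appears.
    """
    return (g(x) / h) * math.log(g(x + h) / g(x))

# Sanity check on g = exp, whose derivative at x is exp(x): here the
# logarithm of the quotient is analytically equal to h, so the
# truncation term (log g)'' vanishes identically for this test function.
x, h = 1.0, 1e-8
d = log_diff_derivative(math.exp, x, h)
print(d, math.e)   # both close to 2.718281828...
```

Note that this form removes the explicit subtraction of nearly equal quantities; the unavoidable amplification by 1/h of errors in the evaluation of g itself, discussed above, remains.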