Estimating the Highest Time-Step in Numerical Methods to Enhance the Optimization of Chaotic Oscillators

Abstract: The execution time required to perform the numerical simulation of a chaotic oscillator depends mainly on the time-step h. This paper shows that the optimization of chaotic oscillators can be enhanced by estimating the highest h in either one-step or multi-step methods. Four chaotic oscillators are used as a case study, and the optimization of their Kaplan-Yorke dimension (D KY) is performed by applying three metaheuristics, namely the particle swarm optimization (PSO), many optimizing liaisons (MOL), and differential evolution (DE) algorithms. Three representative one-step and three multi-step methods are used to solve the four chaotic oscillators, for which the estimation of the highest h is obtained from their stability analysis. The optimization results show the effectiveness of using a high h value in all six numerical methods to reduce execution time while maximizing the positive Lyapunov exponent (LE+) and D KY of the chaotic oscillators by applying the PSO, MOL, and DE algorithms.


Introduction
Chaotic oscillators are dynamical systems modeled by nonlinear ordinary differential equations (ODEs) in the form of initial value problems, ẋ = f(x), subject to the initial condition x(t 0 ) = x 0 . As is well known, the main characteristic of a chaotic system lies in its high sensitivity to initial conditions, meaning that a very small variation causes significant differences in the system's response, thus producing random and disorderly behavior [1]. The term "chaos" is very often associated with disorder and unpredictability, so that chaotic oscillators are good candidates for developing cryptographic applications [2,3], designing random number generators [4,5], and securing communication systems [6][7][8], among others.
The biggest challenge when solving the nonlinear ODEs that model chaotic behavior lies in choosing both an appropriate numerical method and the highest step-size h that reduces execution time. This matters when optimizing characteristics of chaotic oscillators because a small h requires more iterations to cover the same simulation time, thus considerably increasing the execution time. On the one hand, choosing the highest h is not a trivial task; in fact, one must analyze the stability of the numerical method being applied to guarantee convergence to the solution. On the other hand, among the available one-step and multi-step numerical methods, whether explicit or implicit, one must also choose the method with the lowest execution time to increase the throughput and operating frequency of a chaotic system. For instance, in linear dynamical systems, h can be directly determined from the evaluation of the eigenvalues, which are related to the natural frequencies of the system. However, in nonlinear dynamical systems, the problem is more complex, and one must analyze both the eigenvalues and the stability of the numerical method.
Recently, it has been demonstrated that both stochastic nature-inspired metaheuristics and deterministic global optimization methods are competitive and surpass one another depending on the available budget of function evaluations [9,10]. For instance, the authors in [11] applied a DIRECT-type technique [12] to a black-box global optimization problem with expensive function evaluations, which is challenging for numerical methods due to the practical limits on computational budgets often required by intelligent systems. Both DIRECT-type techniques and metaheuristics can be applied to optimize characteristics of chaotic oscillators, such as maximizing the positive Lyapunov exponent (LE+) or Kaplan-Yorke dimension (D KY), where the challenge is solving the mathematical model multiple times until a minimum error is accomplished or a desired number of generations is executed. Therefore, the number of calls to the model and the time required to solve it are the main issues. In this manner, to reduce the execution time, we propose estimating the maximum h of a numerical method. The case study includes the application of three representative one-step and three multi-step numerical methods in order to solve four well-known chaotic oscillators, whose LE+ and D KY are maximized by applying three metaheuristics, namely the particle swarm optimization (PSO), many optimizing liaisons (MOL), and differential evolution (DE) algorithms. These metaheuristics were chosen because they have been successfully applied to optimize chaotic oscillators, such as in [13], where PSO is applied to solve the parameter identification problem of a fractional-order discrete chaotic system. Additionally, in [14], PSO is applied to chaotic systems formulating the problem as a multi-dimensional one, and in [15], PSO is applied again to optimize the parameters of the Rössler chaotic system.
More recently, PSO has been used to explain a bifurcation parameter detection strategy, as shown in [16], and the MOL and DE algorithms have also been applied to optimize integer/fractional-order chaotic systems in [17]. In this manner, the PSO, MOL, and DE algorithms are applied herein using the same population and number of generations, in which LE+ and D KY are evaluated by the time-series analysis software TISEAN [18].
The rest of the manuscript is organized as follows: Section 2 shows four well-known chaotic oscillators taken as a case study, namely: Lorenz, Rössler, Lü, and an autonomous chaotic system introduced in [19]. Section 3 describes the six numerical methods applied to solve the four chaotic systems. Section 4 details the stability region of the six numerical methods and the computation of eigenvalues to estimate the highest h. Section 5 describes PSO, MOL, and DE algorithms. Section 6 describes the maximization of LE+ and D KY by applying PSO, MOL, and DE to the four chaotic oscillators. A discussion on the optimization results is given in Section 7. Finally, the conclusions are summarized in Section 8.

Chaotic Oscillators
Among the plethora of chaotic/hyperchaotic oscillators, which are classified according to their dynamical characteristics as self-excited or hidden attractors, this section shows four well-known models. The most well-known chaotic oscillator is the one introduced by Lorenz, whose mathematical model is given in (1) [20]. In this oscillator, chaotic behavior is observed by setting σ = 10, ρ = 28, and β = 8/3, with initial conditions x 0 = y 0 = z 0 = 0.1. Figure 1 shows the phase-space portraits of the Lorenz chaotic system.
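For reference, the Lorenz model referenced as (1) takes the standard form:

```latex
\dot{x} = \sigma (y - x), \qquad
\dot{y} = x(\rho - z) - y, \qquad
\dot{z} = xy - \beta z
```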
The second oscillator considered herein is the Rössler one, whose mathematical model is given in (2) [21], namely ẋ = −y − z, ẏ = x + ay, ż = b + z(x − c). Chaotic behavior is observed by setting a = b = 0.2 and c = 5.7, with initial conditions x 0 = y 0 = z 0 = 1. The phase-space portraits among its state variables are shown in Figure 2. The third case study is the Lü chaotic system given in (3) [22]. In this oscillator, the nonlinear function is approximated by a piecewise-linear (PWL) one, known as a saturated nonlinear function (SNLF), as described in (4). This chaotic system has the advantage that more PWL segments can be added to generate multi-directional and multi-scroll chaotic attractors [17].
The Lü chaotic system generates chaotic behavior when the coefficients are set to a = b = c = d 1 = 0.7, and using initial conditions x 0 = y 0 = z 0 = 0.1. In this paper, the PWL function has a saturation level of k = 5 and break points of bp = 1, so that the slope of the SNLF can be evaluated as m = k/bp. The phase-space portraits are shown in Figure 3. The fourth chaotic system is the autonomous chaotic oscillator described in (5) and introduced in [19]. This oscillator has a quadratic term x 2 , and the chaotic behavior arises by setting a = 1, b = 1.1, and c = 0.42, and using initial conditions x 0 = 0.1 and y 0 = z 0 = 0. The phase-space portraits are shown in Figure 4.

Numerical Methods
Solving an initial value problem requires the use of numerical methods, which in general can be of a one-step or multi-step type [23]. In both cases, the mathematical model is solved by choosing an appropriate h with the aim of reducing execution time. Besides, as shown in the following section, h is related to the stability of the numerical method [24].
As mentioned in [25], the main objective of a numerical method is to provide an acceptable approximation of the behavior of a dynamical system in continuous time. However, as there exists a huge number of methods, this section shows the application of three representative one-step and three multi-step methods; some of them are explicit and others implicit. An explicit method requires only already-computed values up to iteration n to update the value at n + 1, while an implicit method involves the unknown value at n + 1 on both sides of its iterative equation, so that an explicit method is required to estimate f n+1 .
The simplest one-step explicit method is known as Forward Euler (FE), named in honor of Leonhard Euler. Its iterative equation is given in Table 1. On the other hand, the simplest one-step implicit method is known as Backward Euler (BE), whose iterative equation is also given in Table 1, where the requirement of an additional calculation to approximate the solution at iteration n + 1 can be clearly appreciated. Therefore, the Forward Euler method can be applied first to estimate f (x n+1 , t n+1 ), and afterwards, one can obtain x n+1 . Among the one-step methods, the Runge-Kutta family is quite important and very often used to simulate chaotic oscillators. The fourth-order Runge-Kutta (RK4) method is given in Table 1; it was developed around 1900 by Carl David Tolmé Runge and Martin Wilhelm Kutta [26]. RK4 is the most widely used due to its high accuracy, but when implemented on digital hardware, it requires more resources than the FE and BE methods, as already shown in [23].
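As a sketch, the three one-step iterations of Table 1 can be coded as follows, using the Lorenz right-hand side from Section 2; the predictor step inside `backward_euler` follows the paper's description of estimating f(x n+1) with Forward Euler first:

```python
import numpy as np

def lorenz(v, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz system (1)."""
    x, y, z = v
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def forward_euler(f, x, h):
    # Explicit: uses only the known value x_n.
    return x + h * f(x)

def backward_euler(f, x, h):
    # Implicit: f must be evaluated at the unknown x_{n+1};
    # a Forward Euler predictor approximates that value first.
    x_pred = x + h * f(x)
    return x + h * f(x_pred)

def rk4(f, x, h):
    # Classical fourth-order Runge-Kutta stage evaluations.
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# One step from the initial condition used in the paper.
x0 = np.array([0.1, 0.1, 0.1])
print(rk4(lorenz, x0, 0.01))
```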
The most well-known explicit multi-step methods are the Adams-Bashforth algorithms, and the implicit multi-step ones are the Adams-Moulton algorithms. Similar to the one-step case, an implicit multi-step method finds the solution of an initial value problem using an explicit method from the Adams-Bashforth family to estimate f (x n+1 ). Another type of multi-step method focuses on solving stiff problems; these are known as Gear's algorithms [24]. Table 2 shows the three multi-step methods used herein to solve the four chaotic oscillators given in the previous section.

Table 1. One-step methods.
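A minimal predictor-corrector sketch of this pairing is shown below; for brevity it uses the second-order Adams-Bashforth/Adams-Moulton pair and the order-2 Gear formula (the paper employs the order-6 Adams members), on the linear test problem ẋ = −x:

```python
def adams_pc_step(f, x_n, f_n, f_nm1, h):
    """One Adams-Bashforth 2 (predictor) / Adams-Moulton 2
    (corrector) step; the corrector evaluates f at the predicted
    value to resolve the implicit term."""
    x_pred = x_n + h * (3.0 * f_n - f_nm1) / 2.0       # explicit AB2
    return x_n + h * (f(x_pred) + f_n) / 2.0           # implicit AM2

def gear2_step(f, x_n, x_nm1, x_pred, h):
    """Gear (BDF) method of order 2, again with f evaluated at an
    externally predicted value x_pred for the implicit term."""
    return (4.0 * x_n - x_nm1 + 2.0 * h * f(x_pred)) / 3.0

# Demonstration on x' = -x, x(0) = 1, integrated to t = 1.
f = lambda x: -x
h, x_prev, x_curr = 0.01, 1.0, 1.0 - 0.01   # bootstrap with Forward Euler
for _ in range(99):
    x_prev, x_curr = x_curr, adams_pc_step(f, x_curr, f(x_curr), f(x_prev), h)
print(x_curr)  # close to exp(-1)
```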

Stability Regions and Estimation of h
The stability region of a numerical method can be obtained by solving a first-order linear test equation of the form ẋ = f(x), with f(x) = −λx, where λ plays the role of an eigenvalue. From the stability analysis, a numerical method is said to be numerically stable if the numerical error is not amplified but decreases as the iterations evolve. This behavior can be identified in the one-step methods known as Backward Euler and Trapezoidal. Numerical methods that do not have this property are said to be numerically unstable, a behavior that can be observed in Forward Euler in specific cases.
Let us solve the initial value problem of the form ẋ = −λx by applying the Forward Euler, Backward Euler, and Trapezoidal methods. Applying Forward Euler, one gets the iterative formula given in (6); applying Backward Euler, one gets (7); and applying the Trapezoidal method, one gets (8).
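For the test problem ẋ = −λx, the three iterative formulas can be reconstructed as follows (consistent with the factor (1 − λh) appearing in the stability argument):

```latex
x_{n+1} = (1 - \lambda h)\,x_n \quad \text{(6)}, \qquad
x_{n+1} = \frac{x_n}{1 + \lambda h} \quad \text{(7)}, \qquad
x_{n+1} = \frac{1 - \lambda h/2}{1 + \lambda h/2}\,x_n \quad \text{(8)}
```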
To show when Forward Euler is numerically unstable, the iterations beginning with the initial condition x 0 become x n = (1 − λh)^n x 0 ; consequently, for Forward Euler to be numerically stable, it is necessary that |1 − λh| < 1, that is, 0 < λh < 2. In this case, if λ is a real and positive number, then h < 2/λ.
The Backward Euler and Trapezoidal methods are numerically stable because, from (7) and (8), one can observe that x n −→ 0 as n −→ ∞, and this ideally occurs regardless of h. The stability regions of FE, BE, and RK4 are shown in Figure 5. A similar analysis is performed for the multi-step methods, so that the stability regions of Adams-Bashforth 6, Adams-Moulton 6, and Gear 2 are shown in Figure 6. Looking at these figures, one can see that if the magnitudes of the eigenvalues of a system increase, then h must decrease, and vice versa.
Analyzing the stability regions of the numerical methods helps to verify that the eigenvalues are inversely related to the value of h. In this manner, one must compute the eigenvalues (λ) of a dynamical system to estimate an h that guarantees stability. Taking the Lorenz system given in (1) as an example, it has three equilibrium points: EP 1 = (8.4852, 8.4852, 27), EP 2 = (−8.4852, −8.4852, 27), and EP 3 = (0, 0, 0). The Jacobian matrix is given in (9), which must be evaluated at each equilibrium point EP * = (x * , y * , z * ) to obtain the eigenvalues listed in (10). Table 3 shows the Jacobians, equilibrium points, and eigenvalues of the chaotic oscillators described in Section 2.

Table 3. Jacobian, equilibrium points, and eigenvalues of the chaotic systems given in (1)-(5).

After computing the eigenvalues, one can evaluate the stability conditions, so that FE's stability is guaranteed when 0 < h < 2/λ, where λ is the eigenvalue with the largest magnitude, that is, λ = 22.8277. Substituting λ into the inequality, one finds that FE is stable for h < 0.0876. However, in practice, this h value varies due to local and round-off errors of the methods. For instance, the local error is defined as the error that occurs at t = t n+1 , assuming that x n is the initial condition, whereas the total error is the accumulated error from t = 0 to t = t n+1 , with initial condition x 0 . In [27], these errors are described as local and global: the local truncation error (LTE) is first calculated, and then some form of stability is used to show that the global error can be bounded in terms of the LTE. The global error is the difference between the approximate and the exact solution, and the LTE is the error produced by the finite-difference approximation of the derivative, and is therefore something that can be easily estimated using Taylor series expansions.
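This estimation can be sketched numerically; assuming the Lorenz parameters from Section 2, evaluating the Jacobian at the origin and applying the Forward Euler bound reproduces the values quoted above:

```python
import numpy as np

def lorenz_jacobian(x, y, z, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Jacobian (9) of the Lorenz system evaluated at (x, y, z)."""
    return np.array([[-sigma,  sigma,  0.0],
                     [rho - z, -1.0,  -x],
                     [y,        x,    -beta]])

# Evaluate at the equilibrium point EP3 = (0, 0, 0).
eigs = np.linalg.eigvals(lorenz_jacobian(0.0, 0.0, 0.0))
lam = max(abs(e.real) for e in eigs)   # largest |Re(lambda)|
h_max = 2.0 / lam                      # Forward Euler bound h < 2/lambda
print(lam, h_max)                      # ~22.8277, ~0.0876
```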
Obviously, these are not the only errors in numerical methods: according to [28,29], there are various sources of error, among them errors in the input data, rounding errors during calculations, simplifications in the mathematical models, and even human errors. Besides, the stability analysis helps to estimate the highest h, from which one can test lower values until the desired chaotic behavior is observed.

Particle Swarm Optimization Algorithm
The particle swarm optimization (PSO) algorithm is a sub-field of computational intelligence belonging to the field of swarm and collective intelligence. Similar swarm algorithms are Ant Colony Optimization [30] and the Artificial Bee Colony [31]. PSO is based on a mathematical model developed by Kennedy and Eberhart in 1995 [32], which describes the social behavior of birds and/or fish through the basic principles of self-organization used to describe complex systems, such as chaotic oscillators. The population consists of particles forming a search group with the purpose of finding food; in general, each individual continues its search according to its own experience and the experience of the whole group.
The main idea in PSO begins by initializing a set of particles in a search space, assigning each one an initial position and velocity vector, and allowing the particles to change their position and velocity at each iteration according to some random parameters. The algorithm updates the position and velocity of the particles so that they follow the particle with the best result, according to their social behavior. Each particle remembers its best position and recognizes whether its current position is the best among all particles, that is, the global best. Mathematically, the update equations are given in (11) for the velocity and (12) for the position at the ith iteration. rand() is a function that returns random real values between 0 and 1; p best is the best position found by the particle, and g best represents the best global position among all the particles. c 1 and c 2 are two parameters that represent the confidence of the particle, named "cognition" and "swarm", respectively.
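A minimal PSO sketch following the velocity update (11) and position update (12) is shown below; the quadratic objective is only a placeholder for the (negated) D KY evaluation that the paper performs via TISEAN, and the parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(objective, bounds, n_particles=20, n_iters=100,
        w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO: velocity update (11), position update (12)."""
    lo, hi = bounds
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # positions
    v = np.zeros_like(x)                               # velocities
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()               # global best
    for _ in range(n_iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # (11)
        x = np.clip(x + v, lo, hi)                               # (12)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Stand-in objective with known minimum at (1, 1, 1).
best, val = pso(lambda p: np.sum((p - 1.0) ** 2),
                (np.full(3, -5.0), np.full(3, 5.0)))
print(best, val)
```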

Many Optimizing Liaisons Algorithm
MOL is a simplified version of PSO proposed by Kennedy [33], but according to [34], after further research, it was named Many Optimizing Liaisons. Basically, MOL eliminates the best-known position of the particle, pbest, in (11), which then reduces to (13). As shown in [35], MOL is faster and simpler than PSO, so that the selection of parameters is easier compared to PSO. In addition, MOL is a purely social algorithm tending to follow the best particle of the swarm (gbest). The w in (13) is the inertia weight that maintains a balance between global and local search, so that the exploration process of MOL quickly finds an optimum solution within a smaller number of iterations. In this paper, w and c 2 are set to −0.31 and 2, respectively.
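Reconstructing (13) from this description (the pbest term of (11) is removed, leaving only the social component):

```latex
v_i^{t+1} = w\,v_i^{t} + c_2 \cdot \mathrm{rand}() \cdot \left(g_{\mathrm{best}} - x_i^{t}\right)
```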

Differential Evolution Algorithm
DE is a simple and effective evolutionary algorithm inspired by Darwin's theory of the evolution of species. DE is used to solve global optimization problems in a continuous domain. According to [36], DE works in two phases: initialization and evolution. In the first phase, the population is randomly generated; afterwards, it goes through mutation, crossover, and selection processes, which are repeated until a stop criterion is met. During the initialization, the population is saved into D-dimensional vectors X_j^G = {x_{1,j}^G, x_{2,j}^G, . . . , x_{D,j}^G} for j = {1, 2, . . . , Np}, where Np is the population size and G denotes the generation. In the evolution phase, new individuals are generated by performing the mutation (14), crossover (15), and selection (16) operations. In the mutation process, a mutant vector V_j^G is generated for each target vector X_j^G in generation G, where F is the scale factor with a value between 0 and 1, and r1, r2, r3 ∈ {1, 2, . . . , Np} are three mutually different random indices. The crossover process combines the target vector and the mutant vector according to a crossover probability Cr, whose value is set between 0 and 1, to generate a trial vector U_j^G = {u_{1,j}^G, u_{2,j}^G, . . . , u_{D,j}^G}, where i ∈ {1, 2, . . . , D} indexes the components. Finally, in the selection process, the target and trial vectors are compared according to their fitness value, and the best of them survives to the next generation, as performed by (16).
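The two phases can be sketched as follows with the common DE/rand/1/bin variant; again, the quadratic objective and parameter values are illustrative stand-ins for the chaotic-oscillator evaluation described in the text:

```python
import numpy as np

rng = np.random.default_rng(1)

def de(objective, bounds, np_pop=30, gens=100, F=0.5, Cr=0.9):
    """Minimal DE/rand/1/bin: mutation (14), crossover (15),
    selection (16)."""
    lo, hi = bounds
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(np_pop, dim))      # initialization
    fit = np.array([objective(p) for p in pop])
    for _ in range(gens):                              # evolution
        for j in range(np_pop):
            r1, r2, r3 = rng.choice(
                [k for k in range(np_pop) if k != j], size=3, replace=False)
            v = pop[r1] + F * (pop[r2] - pop[r3])          # mutation (14)
            mask = rng.random(dim) < Cr
            mask[rng.integers(dim)] = True                  # keep >= 1 gene
            u = np.clip(np.where(mask, v, pop[j]), lo, hi)  # crossover (15)
            fu = objective(u)
            if fu <= fit[j]:                                # selection (16)
                pop[j], fit[j] = u, fu
    best = fit.argmin()
    return pop[best], fit[best]

best, val = de(lambda p: np.sum((p - 1.0) ** 2),
               (np.full(3, -5.0), np.full(3, 5.0)))
print(best, val)
```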

Lyapunov Exponents and Kaplan-Yorke dimension
The high sensitivity to initial conditions of a chaotic oscillator can be appreciated by analyzing two orbits produced by two very close initial values: the orbits separate exponentially over time. This phenomenon is quantified by evaluating the positive Lyapunov exponent (LE+) [37], which describes local instability in a chaotic motion. The existence of an LE+ means there is a high probability that the system behaves chaotically [38].
A chaotic oscillator described by three ODEs, like the systems in Section 2, has three Lyapunov exponents: one positive, one zero, and one negative. Ordering the Lyapunov exponents (LE) makes it possible to evaluate the Kaplan-Yorke dimension through (17), where k is an integer such that the sum of the first k exponents (LE i ) is not negative. The LE+ and D KY shown in Table 4 for the four chaotic oscillators were evaluated using TISEAN [18], whose chaotic time series were generated by applying the six numerical methods described in Tables 1 and 2, taking the state variable x as the time series. One can also see the execution time of the numerical method (TimeNumMet) and the one taken by TISEAN:
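The standard Kaplan-Yorke formula referenced as (17), D KY = k + (Σ_{i=1..k} LE i)/|LE k+1| with k the largest index for which the partial sum is non-negative, can be sketched as follows (the example spectrum is hypothetical, not taken from Table 4):

```python
def kaplan_yorke(exponents):
    """Kaplan-Yorke dimension (17): k is the largest index for which
    the partial sum of the ordered exponents is non-negative."""
    les = sorted(exponents, reverse=True)
    s, k = 0.0, 0
    for le in les:
        if s + le >= 0.0:
            s += le
            k += 1
        else:
            break
    if k == len(les):          # partial sums never go negative
        return float(k)
    return k + s / abs(les[k])

# Hypothetical spectrum for a 3-D chaotic flow: (+, 0, -).
print(kaplan_yorke([0.9, 0.0, -14.5]))
```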

Maximizing LE+ and D KY by PSO, MOL and DE Using the Highest h
The optimization of chaotic oscillators takes a long execution time, mainly associated with the numerical method and TISEAN. In this paper, the six methods described in Section 3 are applied to the four chaotic oscillators presented in Section 2. As already shown in Section 4, the selection of an appropriate h depends on the method's stability region. In this manner, the highest h for each method and for each chaotic oscillator is given in Table 5. As one can see, with the highest h, both TISEAN and the numerical method take less execution time. In all cases, the highest h is allowed by Runge-Kutta 4. However, RK4 requires more additional calculations than the Adams-Moulton method, which also allows an acceptably high h value. In optimizing LE+ and D KY, the method that best satisfies the trade-off between the number of calculations and the highest h is the Gear method of order 2, while Backward Euler, in general, is the method that accepts the smallest h. Another issue is that, as h increases, D KY decreases; we address this challenge by optimizing D KY, varying the design parameters of the four chaotic systems by applying the PSO, MOL, and DE algorithms. The pseudo-codes of PSO and DE are given in Algorithms 1 and 2, respectively. In all cases, the constraint is guaranteeing the highest h.
As one can observe from Tables 4 and 5, the longest execution time is required by TISEAN. Due to this limitation, some restrictions were established to determine whether the system has unstable behavior. One of the characteristics that determines this instability is based on analyzing the eigenvalues of the chaotic oscillator. According to [39], if there are i and j such that Re[λ i ] < 0 and Re[λ j ] > 0, and if the system has two complex eigenvalues and one real one, then the system is said to be unstable.

Table 4. LE+ and D KY of (1)-(5) using a small h.

Table 5. LE+ and D KY of (1)-(5) using the highest h allowed by each numerical method.

Algorithm 2 (fragment): perform the mutation process according to (14); perform the crossover process according to (15); replace the new design variables into the file; simulate the chaotic oscillator in TISEAN; evaluate each individual according to (16); update the population by selecting the individuals with the greatest fitness value.

Feasible Solutions Provided by PSO, MOL and DE
To perform the D KY optimization by applying PSO, MOL, and DE, the number of generations was set to 100 for each metaheuristic, with a population equal to the number of design parameters multiplied by ten. The search spaces for the design parameters of the four chaotic oscillators were set as follows: 0.01 < σ < 30, 0.01 < ρ < 50, and 0.01 < β < 10 for Lorenz; 0.01 < a, b < 10 and 0.01 < c < 30 for Rössler; 0.01 < a, b, c, d 1 < 10 for Lü; and 0.01 < a, b, c < 10 for the autonomous chaotic system.
The optimization results are given in Table 6, where it can be appreciated that the execution time for evaluating D KY and the one taken by TISEAN are enhanced, while keeping the highest h allowed by the numerical method. In this manner, for the Lorenz and Lü oscillators, the Gear method of order 2 was used to perform the optimization, while for Rössler and the autonomous chaotic system, the Adams-Moulton method of order 6 was applied. Analyzing the data in Table 5 and comparing them with those given in Figure 7, it can be seen that, after the optimization process, the execution time for optimizing the Rössler and the autonomous chaotic system oscillators is significantly enhanced.

Conclusions
Performing the simulation of a chaotic system is time-consuming, and reducing its execution time is very challenging because many conditions must be met in order to guarantee chaotic behavior. This paper showed that the execution time is enhanced by estimating the highest h for each numerical method, which has been performed through the stability analysis of the methods. The case study included four well-known chaotic oscillators, three representative one-step methods, and three multi-step methods. We demonstrated that one can maximize D KY by applying metaheuristics, such as the PSO, MOL, and DE algorithms, while maintaining the highest h for the numerical method. As a result, the behavior of the three metaheuristics over 100 generations was shown in Figure 7, where the best global evolution of D KY for the four chaotic oscillators applying PSO, MOL, and DE can be appreciated. Finally, the optimization results listed in Table 6 confirmed the suitability of applying metaheuristics to the optimization of chaotic systems, and the usefulness of estimating the highest h to enhance execution time.