1. Introduction
The numerical integration of Ordinary Differential Equations (ODEs) is a foundational pillar of computational mathematics, with its theoretical framework rigorously established in the classic monographs of Butcher [1], Lambert [2], and Hairer et al. [3,4]. These works establish the theoretical underpinnings for various classes of integration schemes, spanning from Runge–Kutta and linear multistep families to extrapolation and Taylor-based methods, which collectively define the standard for solving initial value problems across diverse scientific domains. In the fields of structural mechanics and multibody dynamics, these ODEs, typically second-order systems, emerge from the spatial semi-discretization of continuous domains. While the finite element method (FEM) is the most common source of such systems [5,6,7], they also arise frequently when applying boundary element (BEM), spectral, or collocation methods to partial differential equations.
For stiff systems where low-frequency modes dominate the response, implicit integrators such as the Newmark, Hilber–Hughes–Taylor (HHT), and generalized-α methods are the preferred choice [8,9,10,11]. These schemes offer unconditional stability and controllable numerical dissipation, properties that have been extensively analyzed for constrained mechanical systems and penalty formulations [12,13,14,15]. Conversely, explicit methods are mandatory for scenarios involving high-frequency phenomena, such as wave propagation, impact dynamics, and complex nonlinear multibody systems [16,17,18,19]. In these high-dimensional applications, including railway engineering and vehicle trackside simulations [20,21], the iterative overhead required by implicit solvers to ensure convergence becomes computationally prohibitive [22].
Despite their maturity, existing explicit integrators face a severe algorithmic trade-off between computational cost and physical consistency. The standard central difference (CD) method, while robust for linear systems, becomes implicitly coupled when velocity-dependent nonlinearities, such as nonlinear damping or Coriolis forces, are present, requiring iterative solvers that destroy its strictly explicit advantage [23,24]. Furthermore, while second-order explicit Runge–Kutta (RK2) methods handle nonlinearities effectively, they require multiple function evaluations per step, doubling the cost in large-scale simulations [25,26]. Simultaneously, explicit linear multistep (LMM) methods, such as the Adams–Bashforth family, suffer from shrinking absolute stability regions as their order of accuracy increases, often failing in high-frequency applications where robust stability is critical [27,28].
To address these limitations, the present work proposes EIG-3, a novel explicit integration method grounded in Nordsieck’s polynomial approximations [29] and the framework of Taylor-based high-order expansions. EIG-3 is designed to achieve second-order accuracy through a strictly single-stage, single-function-evaluation architecture. Unlike RK2, EIG-3 provides a self-starting structure that achieves the same convergence order with half the evaluations per time step. Furthermore, by utilizing local higher-order information at the current integration point rather than historical data, it overcomes the stability degradation inherent to LMM schemes. Crucially, EIG-3 remains strictly explicit even in the presence of velocity-dependent nonlinearities, avoiding the iterative overhead of the CD method. Moreover, EIG-3 features a parametric framework that allows the user to modulate the algorithm’s stability boundaries and spectral properties. By selecting specific parameter values, the method’s stability limits and convergence characteristics can be tuned to suit the requirements of the physical system, offering a level of flexibility similar to the Newmark family without sacrificing its single-evaluation efficiency.
The remainder of this paper is structured as follows: Section 2 presents the mathematical derivation of the EIG-3 formulas. Section 3 details the stability boundaries and the convergence and spectral analyses [30,31,32,33]. Section 4 evaluates the algorithm’s performance, energy conservation, and computational efficiency through numerical benchmarks in multibody dynamics, and Section 5 concludes the work.
2. Explicit Integrator Algorithm
2.1. Generating Function by Taylor Series Expansion
Second-order ordinary differential equations (ODEs) appear in numerous engineering fields, such as mechanical, thermodynamic, and electronic engineering. Despite their different forms, they can be expressed in the general form
y''(t) = f(t, y(t), y'(t)), (1)
with the initial conditions
y(t_0) = y_0, y'(t_0) = y'_0,
where f is a general function defining the system.
Typically, the independent variable is time, and differentiation is indicated by dots. However, in this work, we adopt Lagrange notation, where derivatives are represented with primes (y', y''). The task consists of determining a function y(t) that satisfies the differential Equation (1) for all t ≥ t_0, while fulfilling the initial conditions.
Following an explicit integration scheme, the Taylor series expansion of the unknown variables is performed around time t_n and truncated at the second derivative term:
y_{n+1} = y_n + h y'_n + (h^2/2) y''_n, (2)
y'_{n+1} = y'_n + h y''_n, (3)
where h = t_{n+1} − t_n is the time step, n represents the number of time steps, and y_n, y'_n, and y''_n are the approximations to y(t_n), y'(t_n), and y''(t_n), respectively, in which t_n = t_0 + nh. Considering Equations (2) and (3), the second derivative is needed to advance.
The unknowns of the system are three terms with only the two Equations (2) and (3). The third equation that completes the system will obviously be the equilibrium expression (1). Note that all terms belong to the interval [t_n, t_{n+1}].
To control the convergence and stability properties of the method, the highest-order derivative is interpolated using a pair of parameters, leading to the modified Equations (4) and (5). We clarify that these are the linear interpolation coefficients used to approximate the second derivative within the time interval [t_n, t_{n+1}]. Specifically, one coefficient is used for the displacement update formula (4), while the other is used for the velocity update formula (5). This distinction is crucial for controlling the convergence and stability properties of the EIG-3 method.
2.2. Algorithm Implementation
The equations above are reorganized to isolate the second derivative, ensuring numerical stability and minimizing round-off errors. The equilibrium equation is rewritten as Equation (6).
The computational procedure consists of the following steps:
Initialization: at each time step, the values of the state variable and its first derivative are known.
Compute the second derivative: use the equilibrium Equation (6) to determine it.
Update the state variable and its first derivative: compute them using Equations (4) and (5).
For the first time step, where the second derivative is unknown, it can be computed directly from the equilibrium Equation (7).
Thus, the initialization process follows these steps:
Compute the initial second derivative using the equilibrium Equation (7).
Use the Taylor series expansion to estimate the initial updates of the state variable and its first derivative, Equations (2) and (3).
For computational efficiency and numerical robustness, polynomials in Equations (2) and (4) are evaluated using Horner’s method, which reduces the number of arithmetic operations and minimizes floating-point errors.
3. Analysis of the New Explicit Algorithm
The characteristics of the proposed explicit algorithm can be analyzed by considering the free vibration equation for a single degree of freedom (SDOF) undamped system:
y'' + ω² y = 0, (8)
with y(0) = y_0 and y'(0) = y'_0, where ω is the undamped natural frequency of the system. To examine the stability and accuracy properties, it is sufficient to consider an arbitrary frequency ω for all t ≥ 0.
3.1. Stability
The amplification matrix describes how the unknown variables (including derivatives) evolve over time. It is a fundamental tool for assessing the stability of numerical integration algorithms [34]. For the proposed explicit algorithm applied to Equation (8), the numerical scheme can be expressed in the recursive form of Equation (9).
To evaluate the stability, the advancement Formulas (4) and (5) and the equilibrium Equation (8) are reformulated. The objective is to group the unknown state variables at the next time step into a vector x_{n+1} and the known values into a vector x_n. Rearranging these equations yields a coupled linear system of the form A_1 x_{n+1} = A_2 x_n.
The amplification matrix A is then derived as A = A_1^{-1} A_2, which allows for the recursive calculation with Equation (9). The explicit components of matrices A_1 and A_2 are given in Equation (10), where Ω = ωh. It can be easily verified that matrix A_1 is always invertible. Thus, the amplification matrix is given by Equation (11).
The properties of an explicit algorithm can be inferred from the spectral characteristics of this matrix [11,35].
3.1.1. Stability Condition
A numerical method is stable if the spectral radius of the amplification matrix satisfies ρ(A) ≤ 1. If the method remains stable for every Ω ≤ Ω_max, with Ω_max a positive constant, the scheme is considered conditionally stable. Conversely, if it is stable for any Ω, it is termed unconditionally stable.
The amplification matrix also gives information about the numerical damping introduced by the integration method. To assess this, we define two key measures: the algorithmic damping ratio and the relative period error, where the exact period is T = 2π/ω and the numerically obtained period is T̄ = 2π/ω̄. The spectral properties of the method are characterized by the eigenvalues of the amplification matrix A, which can be expressed in terms of the numerical frequency ω̄ and the algorithmic damping ratio ξ̄.
3.1.2. Characteristic Polynomial
The characteristic polynomial of the amplification matrix is computed using the Faddeev–LeVerrier method [30,31]. The polynomial coefficients are computed recursively as
M_k = A (M_{k−1} + c_{k−1} I), c_k = −tr(M_k)/k,
with M_1 = A and c_1 = −tr(M_1).
Thus, the characteristic polynomial has the form p(λ) = λ³ + c_1 λ² + c_2 λ + c_3, where the coefficients depend on Ω and the method parameters. The conditions that keep the spectral radius below one can be obtained by substituting λ = e^{iφ} into (12), with φ ∈ [0, π]. Separating the real and the imaginary parts of the resulting equations, one obtains the stability conditions.
For the limiting case φ = 0, corresponding to λ = 1, the resulting equation defines one stability boundary; in this limiting case the eigenvalues of the amplification matrix are real. Similarly, for the other limiting case φ = π, corresponding to λ = −1, one reaches the complementary boundary condition.
To better illustrate the stability criteria derived in (14), Figure 1 presents the stable region in the Ω–parameter plane for representative parameter values. It can be observed that the maximum allowable time step decreases as the frequency increases, but this effect can be mitigated with an intelligent choice of the parameters, restricting the stability zone to the shaded area defined by the inequality.
3.2. Consistency and Accuracy
The conventional approach to determining the local order of convergence using the amplification matrix is not the most straightforward in this case [10,11,32,35]. Instead, a more practical method is to approximate the exact solution by expanding the variables in a Taylor series truncated at the third derivative term. Under this framework, the proposed algorithm can be categorized based on the highest-order derivative included in its formulation, which is the third derivative. Consequently, the method introduced in this work is referred to as the explicit integration method of grade 3 (EIG-3).
Assuming that the truncation error in the second derivative term is negligible, the approximation can be expressed with overbarred terms representing approximate values. Specifically, the interpolation parameter defines the linear approximation of the second derivative, consistent with the formulation previously established in Equations (4) and (5). From these definitions, the local error for the second derivative term follows, and with it the local errors for displacement and velocity.
As observed, the local error in velocity is at least second order, ensuring the consistency of the method. A numerical integration method is only valuable if it achieves at least second-order accuracy. As demonstrated in the convergence analysis, this condition is met for suitable parameter choices, as reflected in Equation (15).
3.3. Comparison of Algorithms
A set of parameters is sought to satisfy three key properties:
Maximizing stability: ensuring the algorithm has the largest possible stability region.
Minimizing numerical damping: reducing artificial energy dissipation.
Optimizing convergence: selecting parameters that maximize the order of accuracy.
The parameter set is defined by a vector of the free parameters. Fixing one of them, the condition that ensures a real eigenvalue of modulus 1 provides a stability boundary, assuming no other eigenvalues have a modulus greater than 1. For specific parameter values, simple expressions can be derived to determine the maximum stable step size: several choices yield progressively larger stability limits, and the most favorable configuration offers an extended stability range, making it an optimal choice in terms of stability.
3.3.1. Spectral and Accuracy Comparisons
The numerical stability and performance of the EIG-3 method are evaluated through its spectral characteristics. Figure 2 illustrates the spectral radius, Figure 3 shows the algorithmic damping ratio, and Figure 4 presents the relative period error.
These illustrations provide visual confirmation of the stability conditions derived in Section 3.1. Specifically, Figure 2 demonstrates that for the chosen parameter sets, the spectral radius remains below or equal to unity within the stable range of Ω, preventing the artificial growth of the solution. For the non-dissipative parameter set, the spectral radius is exactly equal to unity throughout the stable range. In this case, the term associated with the third eigenvalue in the characteristic polynomial is zero, meaning the amplification matrix has only two nontrivial eigenvalues. As observed in Figure 3, this specific case results in zero algorithmic damping, which is ideal for preserving energy in undamped systems. Within the stable range, the eigenvalues remain complex conjugates, ensuring stability, in agreement with the stability boundary given by Equation (16).
Furthermore, Figure 4 clarifies the dispersion characteristics; while the non-dissipative configuration exhibits a slight period shrinkage, other configurations allow for controllable period elongation, providing the user with flexibility to tune the method according to the problem’s requirements. For other parameter choices, the figures reveal that stability is only achieved in a limiting case, without a continuous stable range of Ω.
3.3.2. Comparison with Other Explicit Methods
The performance of the EIG-3 method is benchmarked against several classical explicit integrators, including one-step methods (RK2, RK3), multistep methods (AB2, AB3), and the central difference (CD) method. Figure 5 compares the spectral radii of these integrators. It is evident that higher-order methods like RK3 and AB3 possess significantly smaller stability regions. In contrast, EIG-3 and CD demonstrate superior stability boundaries, whereas RK3 is limited to a narrower range of Ω. This makes EIG-3 highly efficient for problems requiring larger time steps.
As shown in Figure 6, EIG-3 (with the non-dissipative parameter set) and CD are the only methods that introduce zero artificial damping across their entire stability region. Other methods, particularly RK2, AB2 and AB3, introduce numerical dissipation that can prematurely attenuate the high-frequency response of the system.
Figure 7 illustrates the relative period errors. EIG-3 and CD exhibit identical dispersion characteristics, with a slight period shrinkage as Ω increases. While other methods like AB3 show period elongation, they do so at the cost of much stricter stability limits and higher dissipation.
3.3.3. Key Advantages of EIG-3 over Central Difference (CD)
As shown in Figure 5 through Figure 7, the CD method and EIG-3 exhibit identical spectral radii, damping behavior, and dispersion characteristics for the non-dissipative parameter set. This equivalence stems from their similar amplification matrices, meaning they share the same vector space representation for linear systems. However, EIG-3 offers three crucial advantages over CD in practical engineering applications:
If the ODE contains a nonlinear term dependent on the first derivative, the standard CD scheme requires an iterative solution to preserve second-order accuracy. Because the central difference approximation for the first derivative involves the unknown value at the next step, the acceleration cannot be computed directly. EIG-3 remains strictly explicit for any second-order ODE, avoiding iterative procedures and significantly improving computational efficiency in nonlinear problems.
While CD can be forced to be explicit by using a backward difference for the first derivative, this reduces the global convergence rate of that variable to first order. EIG-3 maintains second-order convergence for both the state variable and its derivative while remaining fully explicit.
Standard CD is a rigid, non-dissipative scheme. EIG-3 can be “tuned” via its parameters to introduce controlled numerical dissipation. This is vital for stabilizing simulations or filtering high-frequency noise in complex dynamical systems where a purely non-dissipative approach like CD may lead to numerical instability.
4. Numerical Examples
In this chapter, four numerical examples of second-order differential equations are presented to validate the properties of the proposed explicit integration method (EIG-3). All the test cases involve mechanical system models. The results are compared with those obtained using the Runge–Kutta (RK2), Adams–Bashforth (AB2), and central difference (CD) methods. The examples include:
A single degree of freedom (DoF) system.
An elastic hardening spring system.
An elastic softening spring system.
A spring pendulum system.
In the case of the spring pendulum problem, where the central difference method encounters velocity-dependent nonlinearity, Newton’s method is applied for solution refinement. The accuracy of the simulations is evaluated using the analytical solution for the single DoF case and by checking the conservation of mechanical energy for the other examples. Computational efficiency is assessed by averaging the runtime over ten simulations.
All simulations were implemented in C/C++ and executed on an Intel Xeon E5345 at 2.33 GHz (Clovertown architecture) running Windows 10. The source code was compiled using the -O3 optimization flag. The algorithm was programmed with a single thread (just one core was used, without hyperthreading).
To evaluate the performance of the EIG-3 method, two complementary metrics are employed. In Section 4.1, where an analytical solution is available, the relative error norm is used to measure absolute numerical precision. For the subsequent nonlinear and high-dimensional cases (Section 4.2, Section 4.3 and Section 4.4), the relative mechanical energy drift is used instead. The ability of an integrator to maintain the system’s total energy is a fundamental indicator of its physical consistency and long-term numerical stability.
4.1. Single Degree of Freedom Model
The single degree of freedom (single DoF) model is widely used in structural dynamics to study vibration modes in discretized systems. This model consists of a spring, a point mass, and a damping element, representing different energy transfer mechanisms. The system is governed by the following equation:
where the first coefficient is the natural frequency of the vibration mode, the second is the damping rate, and the right-hand side is the external force. The parameter values adopted in this example fix the natural frequency of the system. A schematic representation of this model is shown in Figure 8.
According to Equation (16), the maximum time step ensuring stability for the EIG-3 method with the chosen parameters follows directly. To illustrate stability behavior, Figure 9 presents the solutions obtained using the EIG-3 method for four different time steps. The results confirm the predicted stability threshold: solutions with smaller time steps remain stable, while those exceeding this threshold become unstable.
The analytical solution of Equation (18) for the given conditions is:
Comparison of Methods
The problem was solved using the EIG-3, CD, AB2, and RK2 methods for different time steps, with a simulation duration of 1000 s. The accuracy was assessed using the analytical solution. The results are summarized in Table 1, where “Time” refers to the computational time in seconds and the error column reports the relative error norm with respect to the analytical solution.
The results confirm that EIG-3 and CD produce identical accuracy and computational cost, which is expected given their similar stability characteristics. On the other hand, the AB2 and RK2 methods exhibit smaller stability regions; for the largest time step tested, both methods become unstable. Although AB2 and RK2 are both second-order accurate methods, they perform less accurately than EIG-3 and CD at the same time step. The RK2 method incurs higher computational costs since it requires two function evaluations per time step.
4.2. Elastic Hardening Spring
This example is based on the work of Xie [33]. The equation governing the motion of a hardening elastic spring is given below, with the initial conditions as specified. Since this is a conservative system, accuracy is evaluated based on the drift in mechanical energy (energy drift), calculated as the relative deviation of the total mechanical energy from its initial value.
Table 2 presents the accuracy and computational cost for the different integration schemes across various time step sizes.
The results indicate that all four methods provide similar accuracy when a sufficiently small time step is used. However, for the largest time step tested, the AB2 and RK2 methods exhibit significantly larger errors compared to EIG-3 and CD, because AB2 and RK2 become unstable at this step size. When the time step is reduced and all methods operate within their stability regions, their accuracy aligns with their expected order of convergence.
In terms of computational cost, EIG-3 and CD perform similarly, although CD is slightly more efficient. Since the given ODE is linear with respect to velocity, CD remains fully explicit in this case. AB2, despite only requiring one function evaluation per step, is slightly more expensive computationally. This effect is more pronounced for RK2, which requires two function evaluations per step, leading to a noticeable increase in cost, even though the function being evaluated is relatively simple.
4.3. Elastic Softening Spring
The elastic softening spring problem, also discussed in [33], is governed by a nonlinear differential equation with the initial conditions as specified. The period of this nonlinear oscillator is longer than in the hardening case, and the simulation is run for a fixed total time. As this system is conservative, the accuracy of the numerical methods is evaluated based on the drift in mechanical energy, computed as the relative deviation from the initial total energy.
Due to the longer period of oscillation compared to the previous (hardening) example, the integration step sizes can be relatively larger. As shown in Table 3, all methods are stable for the smaller time steps, but only EIG-3 and CD remain stable for the largest one, while AB2 and RK2 become unstable. These results are consistent with the stability boundaries illustrated earlier in Figure 5, Figure 6 and Figure 7.
Although all methods display second-order convergence based on the evolution of the error, EIG-3 and CD consistently yield more accurate results for larger time steps. Interestingly, EIG-3 and CD also outperform AB2 and RK2 in precision at smaller time steps, even though the latter two methods are also nominally second order.
Regarding computational cost, this example demonstrates a higher function evaluation overhead compared to the elastic hardening case. This is partly due to the evaluation of the hyperbolic tangent function, which is computationally more expensive. Nevertheless, the system remains linear with respect to velocity, meaning CD retains its explicit nature and exhibits similar efficiency to EIG-3.
4.4. Spring Pendulum
The spring pendulum problem is a two degrees of freedom system composed of a massless rod, a linear spring, and a point mass, as illustrated in Figure 10. The system moves under the influence of gravity.
Given a mass of 1 kg, a rod length of 0.5 m, and a spring stiffness of 98.1 N/m, the governing equations of motion follow, with the initial conditions as specified. This problem, detailed in [28], is conservative, and energy is conserved over time. The system’s total mechanical energy is calculated as the sum of its kinetic and potential contributions.
The simulation is run for 100 s. The performance of each method is summarized in Table 4, showing both accuracy (via energy drift) and computational time for different time steps.
The results show that RK2 and CD achieve greater accuracy than EIG-3 and AB2, particularly at smaller time steps. For EIG-3, the slightly lower precision could stem from the initial error propagation. Nevertheless, all methods show a consistent convergence rate compatible with their theoretical order.
In terms of computational cost, EIG-3 proves to be more efficient than CD in this example. Despite CD often being classified as an explicit method, here it behaves as implicit due to the nonlinear velocity terms. Consequently, an iterative solution is required.
To handle this, Newton’s method was applied with the previous time step’s values as an initial guess. Each iteration involved solving a linear system using LDU factorization with Rook’s pivoting strategy, following [36]. This introduces a significantly higher computational cost compared to EIG-3, which remains strictly explicit and does not require solving any system of equations.
A further comparison of EIG-3 with finer time steps is shown in Table 5. As seen, a time step of 2 × 10⁻⁶ s achieves better accuracy than CD at a larger step (1 × 10⁻⁵ s), yet at a lower computational cost. This demonstrates the advantage of EIG-3 in handling nonlinear systems explicitly and efficiently.
5. Conclusions and Future Works
A new explicit integrator for second-order ordinary differential equations has been developed in this work. The proposed algorithm is based on a Taylor series expansion and allows for flexibility through the adjustment of a set of parameters. This parametric framework enables the custom tuning of stability and accuracy, offering a versatility typically associated with implicit structural integrators but within a strictly explicit context. Among the various configurations, a particular set of parameters has been identified that yields properties, both in terms of stability and convergence order, comparable to those of the central difference (CD) method. Notably, the new method only requires a single function evaluation per time step, making it simple to implement and computationally efficient.
To validate the theoretical foundations of the method, four numerical examples drawn from multibody dynamics have been solved. The performance of the new integrator (EIG-3) has been compared with three standard second-order explicit methods: Runge–Kutta 2 (RK2), Adams–Bashforth 2 (AB2), and central differences. In problems where the system is linear with respect to velocity, EIG-3 and CD exhibit similar computational costs, outperforming RK2 in efficiency, which is expected given RK2’s requirement for two function evaluations per step. Furthermore, the single evaluation of EIG-3 facilitates variable time-stepping, as it avoids the overhead of intermediate stages in RK methods. In addition, EIG-3 demonstrated a more robust stability range compared to the AB2 multistep method, validating the advantage of using local higher-order derivatives instead of distant previous points.
Although all methods analyzed are formally second-order, differences in precision were observed, with CD and RK2 showing slightly better accuracy in some linear cases. However, in problems involving nonlinear terms in velocity, CD requires the use of iterative solvers, such as Newton’s method, to maintain accuracy, which increases the computational cost. EIG-3, by contrast, remains strictly explicit even in the presence of such nonlinearities, thereby preserving its low computational overhead. In these scenarios, EIG-3 demonstrates superior performance compared to CD. When compared to RK2, EIG-3 typically completes the simulation in about half the time, depending on the complexity of the evaluated function.
Overall, EIG-3 emerges as a competitive and novel alternative to existing explicit integration methods for second-order ODEs. It offers a stability region comparable to that of the CD method while avoiding the need for iterative solvers in nonlinear problems. Additionally, within its stability limits, EIG-3 introduces no numerical damping, preserving the fidelity of the solution.
The Taylor-based derivation presented here serves as a modular foundation for a new family of integrators. Future work will focus on the development of higher-order versions of this integrator by incorporating higher-order derivative terms from the Taylor expansion. Further research will explore alternative parameter combinations to enhance both stability and convergence. Another promising direction is the extension of this approach to the solution of differential-algebraic equations (DAEs).
Author Contributions
Conceptualization, G.U., I.F.d.B., I.C. and H.U.; methodology, G.U., I.F.d.B., I.C. and H.U.; software, G.U. and I.F.d.B.; validation, G.U., I.F.d.B., I.C. and H.U.; formal analysis, G.U., I.F.d.B., I.C. and H.U.; investigation, G.U. and I.F.d.B.; resources, G.U.; data curation, I.F.d.B.; writing—original draft preparation, G.U., I.F.d.B., I.C. and H.U.; writing—review and editing, G.U. and I.F.d.B.; visualization, I.C. and H.U.; supervision, G.U. and I.F.d.B.; project administration, G.U.; funding acquisition, G.U., I.F.d.B., I.C. and H.U. All authors have read and agreed to the published version of the manuscript.
Funding
The authors thank the Basque Government for its financial support for this research work through the funding of the Research Group (IT1542-22). The authors are also grateful for the support of the project PID2021-124677NB-I00 funded by the Ministry of Science and Innovation through the state research agency MCIN/AEI/10.13039/501100011033 and by the “ERDF: A way of making Europe” program.
Data Availability Statement
The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.
Conflicts of Interest
The authors declare no conflicts of interest.
References
1. Butcher, J.C. Numerical Methods for Ordinary Differential Equations; John Wiley & Sons: Hoboken, NJ, USA, 2016.
2. Lambert, J.D. Numerical Methods for Ordinary Differential Systems: The Initial Value Problem; John Wiley & Sons: Hoboken, NJ, USA, 1991.
3. Hairer, E.; Nørsett, S.P.; Wanner, G. Solving Ordinary Differential Equations I; Springer: Berlin/Heidelberg, Germany, 1993.
4. Hairer, E.; Wanner, G. Solving Ordinary Differential Equations II; Springer: Berlin/Heidelberg, Germany, 1991.
5. Bathe, K.-J. Finite Element Procedures; Klaus-Jürgen Bathe: Watertown, MA, USA, 2006.
6. Zienkiewicz, O.C. El Método de los Elementos Finitos; Reverté: Barcelona, Spain, 1981.
7. Hughes, T.J.R. The Finite Element Method: Linear Static and Dynamic Finite Element Analysis; Courier Corporation: North Chelmsford, MA, USA, 2012.
8. Gavrea, B.; Negrut, D.; Potra, F.A. The Newmark integration method for simulation of multibody systems: Analytical considerations. In Proceedings of the ASME Design Engineering Division; American Society of Mechanical Engineers: New York, NY, USA, 2005; pp. 1079–1092.
9. Negrut, D.; Rampalli, R.; Ottarsson, G.; Sajdak, A. On an implementation of the Hilber–Hughes–Taylor method in the context of index 3 differential-algebraic equations of multibody dynamics (DETC2005-85096). J. Comput. Nonlinear Dyn. 2007, 2, 73–85.
10. Hilber, H.M.; Hughes, T.J.R.; Taylor, R.L. Improved numerical dissipation for time integration algorithms in structural dynamics. Earthq. Eng. Struct. Dyn. 1977, 5, 283–292.
11. Chung, J.; Hulbert, G. A time integration algorithm for structural dynamics with improved numerical dissipation: The generalized-α method. J. Appl. Mech. 1993, 60, 371–375.
12. de Bustos, I.F.; Uriarte, H.; Urkullu, G.; García-Marina, V. A non-damped stabilization algorithm for multibody dynamics. Meccanica 2022, 57, 371–399.
13. Cuadrado, J.; Dopico, D.; Naya, M.A.; Gonzalez, M. Penalty, semi-recursive and hybrid methods for MBS real-time dynamics in the context of structural integrators. Multibody Syst. Dyn. 2004, 12, 117–132.
14. González, F.; Kövecses, J. Use of penalty formulations in dynamic simulation and analysis of redundantly constrained multibody systems. Multibody Syst. Dyn. 2013, 29, 57–76.
15. Arnold, M.; Brüls, O. Convergence of the generalized-α scheme for constrained mechanical systems. Multibody Syst. Dyn. 2007, 18, 185–202.
16. Belytschko, T.; Liu, W.K.; Moran, B.; Elkhodary, K. Nonlinear Finite Elements for Continua and Structures; John Wiley & Sons: Hoboken, NJ, USA, 2014.
17. Pfeiffer, F.; Glocker, C. Multibody Dynamics with Unilateral Contacts; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2000.
18. Hallquist, J.O. LS-DYNA Theory Manual; Livermore Software Technology Corporation: Livermore, CA, USA, 2006; Volume 3, pp. 25–31.
19. Shabana, A.A. Computational Dynamics; John Wiley & Sons: Hoboken, NJ, USA, 2009.
20. Kasimu, A.; Zhou, W.; Yan, H.; Wang, Y.; Shen, C.; Zhang, Q. Evaluation of the vehicle-cargo and vehicle-trackside clearance of long and big railway freight vehicles. Alex. Eng. J. 2025, 119, 22–34.
21. Ying, P.; Tang, H.; Chen, L.; Ren, Y.; Kumar, A. Dynamic modeling and vibration characteristics of multibody system in axial piston pump. Alex. Eng. J. 2023, 62, 523–540.
22. Lee, K.C.; Alias, M.A.; Senu, N.; Ahmadian, A. On efficient frequency-dependent parameters of explicit two-derivative improved Runge–Kutta–Nyström method with application to two-body problem. Alex. Eng. J. 2023, 72, 605–620.
23. Kim, W.; Lee, J.H. An improved explicit time integration method for linear and nonlinear structural dynamics. Comput. Struct. 2018, 206, 42–53.
24. Urkullu, G.; de Bustos, I.F.; García-Marina, V.; Uriarte, H. Direct integration of the equations of multibody dynamics using central differences and linearization. Mech. Mach. Theory 2019, 133, 432–458.
25. Soares, D., Jr. A novel family of explicit time marching techniques for structural dynamics and wave propagation models. Comput. Methods Appl. Mech. Eng. 2016, 311, 838–855.
26. Kim, W. Higher-order explicit time integration methods for numerical analyses of structural dynamics. Lat. Am. J. Solids Struct. 2019, 16, e201.
27. Kim, W. A simple explicit single step time integration algorithm for structural dynamics. Int. J. Numer. Methods Eng. 2019, 119, 383–403.
28. Chung, J.; Lee, J.M. A new family of explicit time integration methods for linear and non-linear structural dynamics. Int. J. Numer. Methods Eng. 1994, 37, 3961–3976.
29. Nordsieck, A. On Numerical Integration of Ordinary Differential Equations. Math. Comput. 1962, 16, 22–49. Available online: https://www.ams.org/mcom/1962-16-077/S0025-5718-1962-0136519-5/ (accessed on 21 May 2024).
30. Helmberg, G.; Wagner, P.; Veltkamp, G. On Faddeev–Leverrier’s method for the computation of the characteristic polynomial of a matrix and of eigenvectors. Linear Algebra Appl. 1993, 185, 219–233.
31. Bär, C. The Faddeev–Leverrier algorithm and the Pfaffian. Linear Algebra Appl. 2021, 630, 39–55.
32. Erlicher, S.; Bonaventura, L.; Bursi, O.S. The analysis of the generalized-α method for non-linear dynamic problems. Comput. Mech. 2002, 28, 83–104.
33. Xie, Y.M. An assessment of time integration schemes for non-linear dynamic equations. J. Sound Vib. 1996, 192, 321–331.
34. Mohammadzadeh, S.; Ghassemieh, M.; Park, Y. Structure-dependent improved Wilson-θ method with higher order of accuracy and controllable amplitude decay. Appl. Math. Model. 2017, 52, 417–436.
35. Hilber, H.M.; Hughes, T.J.R. Collocation, dissipation and overshoot for time integration schemes in structural dynamics. Earthq. Eng. Struct. Dyn. 1978, 6, 99–117.
36. de Bustos, I.F.; García-Marina, V.; Urkullu, G.; Abasolo, M. An efficient LDU algorithm for the minimal least squares solution of linear systems. J. Comput. Appl. Math. 2018, 344, 346–355.
Figure 1. Stable region in the − plane for (a) and (b).
Figure 2. Comparison of the spectral radius for different parameters of the EIG-3 configurations.
Figure 3. Comparison of the algorithmic damping ratio for different EIG-3 configurations.
Figure 4. Comparison of the relative period error for different EIG-3 configurations.
Figure 5. Comparison of the spectral radius for different explicit integration methods.
Figure 6. Comparison of the algorithmic damping ratio for different explicit integration methods.
Figure 7. Comparison of the relative period errors for different explicit integration methods.
Figure 8. Single degree of freedom problem.
Figure 9. Single degree of freedom problem with EIG-3 with , , and s.
Figure 10. Spring pendulum problem.
Table 1. Results for single DoF with EIG-3, CD, AB2 and RK2.
| Time Step | EIG-3 Time | | CD Time | | AB2 Time | | RK2 Time | |
|---|---|---|---|---|---|---|---|---|
| 1 × 10⁻¹ | 0.0 | 1.99 | 0.0 | 1.99 | 0.0 | 7.32 × 10³⁸ | 0.0 | 6.9 × 10²³ |
| 1 × 10⁻² | 3.1 × 10⁻³ | 0.26 | 3.1 × 10⁻³ | 0.26 | 7.8 × 10⁻³ | 2.0 | 0.015 | 1.03 |
| 1 × 10⁻³ | 4.1 × 10⁻² | 2.6 × 10⁻³ | 4 × 10⁻² | 2.6 × 10⁻³ | 6.9 × 10⁻² | 2.6 × 10⁻² | 0.109 | 1.07 × 10⁻² |
| 1 × 10⁻⁴ | 0.45 | 2.7 × 10⁻⁵ | 0.39 | 2.7 × 10⁻⁵ | 0.74 | 2.6 × 10⁻⁴ | 1.15 | 1.1 × 10⁻⁴ |
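The accuracy entries in the benchmark tables can be reproduced in spirit with a minimal sketch of the classical central-difference (CD) scheme applied to an undamped single-DoF oscillator, measuring the maximum relative drift of the total mechanical energy over the run. This is a hypothetical illustration, not the paper’s implementation: the function name `cd_energy_drift` and all parameter values are ours.

```python
def cd_energy_drift(m=1.0, k=1.0, x0=1.0, h=1e-3, t_end=10.0):
    """Integrate m*x'' + k*x = 0 with the central-difference scheme and
    return the maximum relative energy drift max|E_n - E_0| / E_0."""
    n_steps = int(t_end / h)
    # Second-order Taylor start consistent with v(0) = 0.
    x_prev, x = x0, x0 - 0.5 * h**2 * (k / m) * x0
    e0 = 0.5 * k * x0**2          # initial energy (kinetic part is zero)
    drift = 0.0
    for _ in range(n_steps):
        # Explicit CD update: x_{n+1} = 2 x_n - x_{n-1} + h^2 * a_n.
        x_next = 2.0 * x - x_prev + h**2 * (-(k / m) * x)
        v = (x_next - x_prev) / (2.0 * h)   # centered velocity estimate
        e = 0.5 * m * v**2 + 0.5 * k * x**2
        drift = max(drift, abs(e - e0) / e0)
        x_prev, x = x, x_next
    return drift
```

For a stable step (h below 2/ω for this scheme), the energy error oscillates with an amplitude of order (ωh)², so refining h by a decade reduces the drift by roughly two orders of magnitude, consistent with the second-order trend visible down the rows of the tables.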
Table 2. Results for elastic hardening spring with EIG-3, CD, AB2 and RK2.
| Time Step | EIG-3 Time | EIG-3 Energy Drift | CD Time | CD Energy Drift | AB2 Time | AB2 Energy Drift | RK2 Time | RK2 Energy Drift |
|---|---|---|---|---|---|---|---|---|
| 1 × 10⁻³ | 0.0 | 2.6 | 0.0 | 1.03 | 0.0 | 101.0 | 0.0 | 49.3 |
| 1 × 10⁻⁴ | 3.1 × 10⁻³ | 2.6 × 10⁻² | 1.5 × 10⁻² | 1.03 × 10⁻² | 1.5 × 10⁻² | 9.3 × 10⁻² | 1.6 × 10⁻² | 5.7 × 10⁻² |
| 1 × 10⁻⁵ | 7.2 × 10⁻² | 2.6 × 10⁻⁴ | 6.3 × 10⁻² | 1.03 × 10⁻⁴ | 0.12 | 5 × 10⁻⁴ | 0.19 | 1.4 × 10⁻⁴ |
| 1 × 10⁻⁶ | 0.76 | 2.6 × 10⁻⁶ | 0.61 | 7.7 × 10⁻⁶ | 1.18 | 5.1 × 10⁻⁶ | 1.9 | 1.05 × 10⁻⁶ |
Table 3. Results for elastic softening spring with EIG-3, CD, AB2 and RK2.
| Time Step | EIG-3 Time | EIG-3 Energy Drift | CD Time | CD Energy Drift | AB2 Time | AB2 Energy Drift | RK2 Time | RK2 Energy Drift |
|---|---|---|---|---|---|---|---|---|
| 1 × 10⁻² | 0.0 | 2.7 | 0.0 | 0.59 | 0.0 | 45.2 | 1.5 × 10⁻² | 20.4 |
| 1 × 10⁻³ | 1.7 × 10⁻² | 2.7 × 10⁻² | 1.6 × 10⁻² | 5.9 × 10⁻³ | 1.9 × 10⁻² | 6.4 × 10⁻² | 3.8 × 10⁻² | 2 × 10⁻² |
| 1 × 10⁻⁴ | 0.16 | 2.7 × 10⁻⁴ | 0.25 | 5.9 × 10⁻⁵ | 0.23 | 2.8 × 10⁻⁴ | 0.41 | 4.7 × 10⁻⁵ |
| 1 × 10⁻⁵ | 1.57 | 2.7 × 10⁻⁶ | 1.5 | 8.4 × 10⁻⁷ | 2.29 | 2.4 × 10⁻⁶ | 4.1 | 4.7 × 10⁻⁷ |
Table 4. Results for spring pendulum with EIG-3, CD, AB2 and RK2.
| Time Step | EIG-3 Time | EIG-3 Energy Drift | CD Time | CD Energy Drift | AB2 Time | AB2 Energy Drift | RK2 Time | RK2 Energy Drift |
|---|---|---|---|---|---|---|---|---|
| 1 × 10⁻² | 0.0 | 1.67 | 1.7 × 10⁻² | 3.9 × 10⁻² | 1.5 × 10⁻³ | 3.4 | 1.5 × 10⁻³ | 1.27 |
| 1 × 10⁻³ | 1.6 × 10⁻² | 2.5 × 10⁻³ | 8.8 × 10⁻² | 3.9 × 10⁻⁴ | 1.5 × 10⁻² | 4.2 × 10⁻³ | 2.8 × 10⁻² | 1.1 × 10⁻³ |
| 1 × 10⁻⁴ | 0.16 | 1.6 × 10⁻⁵ | 0.89 | 3.9 × 10⁻⁶ | 0.19 | 2.4 × 10⁻⁵ | 0.29 | 2 × 10⁻⁶ |
| 1 × 10⁻⁵ | 1.6 | 1.5 × 10⁻⁷ | 8.9 | 8 × 10⁻⁸ | 1.8 | 2.2 × 10⁻⁷ | 2.9 | 1.1 × 10⁻⁸ |
Table 5. Results for spring pendulum with EIG-3 for different time steps.
| EIG-3 (Time Step) | 8 × 10⁻⁶ | 6 × 10⁻⁶ | 4 × 10⁻⁶ | 2 × 10⁻⁶ | 1 × 10⁻⁶ |
|---|---|---|---|---|---|
| Time | 1.9 | 2.54 | 3.8 | 7.6 | 15.2 |
| Energy drift | 9.7 × 10⁻⁸ | 5.4 × 10⁻⁸ | 2.4 × 10⁻⁸ | 6.1 × 10⁻⁹ | 1.5 × 10⁻⁹ |