Exploiting the Abstract Calculus Pattern for the Integration of Ordinary Differential Equations for Dynamic Systems: An Object-Oriented Programming Approach in Modern Fortran

This manuscript concerns the exploitation of the abstract calculus pattern (ACP) for the numerical solution of ordinary differential equation (ODE) systems, which are ubiquitous mathematical formulations of many physical (dynamical) phenomena. We present FOODIE, a software suite aimed at numerically solving ODE problems by means of a clear, concise, and efficient abstract interface. The results presented prove manifold findings, in particular that our ACP approach enables ease of code development, clarity and robustness, and maximization of code re-usability and conciseness, while retaining the computational performance of compiled Fortran.


Introduction

Background
The initial value problem (IVP, or Cauchy problem, see [1]) constitutes a class of mathematical models of paramount relevance, being applied to the modeling of a wide range of dynamic phenomena. Briefly, an IVP is an ordinary differential equation (ODE) system coupled with specified initial values of the unknown state variables, the solution of which is sought at a given time after the initial time considered.
The prototype of an IVP can be expressed as

    U_t = dU/dt = R(t, U)
    U(t_0) = U_0                                                    (1)

where U(t) is the vector of state variables, a function of the time-like independent variable t; R(t, U) is the (vectorial) residuals function; and U(t_0) is the (vectorial) initial condition, namely the state variables evaluated at the initial time t_0. In general, the residuals function R is a function of the state variable U, through which it depends on time, but it can also be a direct function of time; thus, in general, R = R(t, U(t)) holds.
The problem prototype (1) is ubiquitous in the mathematical modeling of physical problems: essentially, whenever an evolutionary (i.e., dynamic) phenomenon is considered, the prediction (simulation) of future solutions involves the solution of an IVP. As a matter of fact, many physical problems (fluid dynamics, chemistry, biology, evolutionary anthropology, among others) are described by means of an IVP.
It is worth noting that the state vector U and its corresponding residuals function U_t = dU/dt = R(t, U) are problem dependent: the number and meaning of the state variables, as well as the equations governing their evolution (which are embedded into the residuals function), differ, for example, between the Navier-Stokes conservation laws and the Burgers equation. Nevertheless, the solution of the IVP model prototype can be generalized, allowing the application of the same solver to many different problems and thus eliminating the need to re-implement the same solver for each different problem.
In this work, we present the FOODIE library: it is designed for solving the generalized IVP (1) while being completely unaware of the actual problem's definition. The FOODIE library provides a high-level, well-documented, simple application program interface (API) for many well-known ODE integration solvers, its aims being twofold:

• Provide a robust set of ODE solvers ready to be applied to a wide range of different problems;
• Provide a simple framework for the rapid development of new ODE solvers.

Related Works
There are many ODE solvers described in the literature. In [2], SODES (stepwise ordinary differential equations solver) is presented: the authors describe an ODE solver able to provide a step-by-step ODE solution exploiting a computer algebra system (CAS) written in the Python programming language. In the framework of computational fluid dynamics (CFD), and in particular for solving detailed chemical kinetics problems, in [3] a novel neural ODE solver, ChemNODE, is presented: exploiting neural networks, the chemical source terms are predicted and integrated, and the networks themselves are adjusted during training to minimize errors. In [4], the problem of ODE solving is considered from the computational-efficiency point of view: the authors analyze the performance of three different solvers written in the C++ and Julia programming languages on both CPU and GPU architectures, with a special focus on the parallel optimization of the ODE solving algorithms. In [5], the Python framework TensorFlow is exploited to implement a neural-network-based ODE solver: their approach is hybrid in the sense that the neural model combines both physics-informed and data-driven kernels in order to improve the accuracy of the ODE solutions. The MAPLE CAS software (a symbolic and numeric computing environment) has been used in [6] to apply a novel iterative scheme based on the Mohand homotopy perturbation transform (MHPT) to the simulation of nonlinear shock wave equations, proving good computational efficiency.
The ODE solver framework presented in this work has different aims, as explained in the following subsection.

Motivations and Aims
The FOODIE library is free software (https://github.com/Fortran-FOSS-Programmers/FOODIE, accessed 20 August 2023) designed by the authors of the current paper according to a set of specifications that FOODIE, meaning Fortran Object oriented Ordinary Differential Equations integration library, has been developed to satisfy. The present paper is its first comprehensive presentation.
The Fortran (Formula Translator, [7,8]) programming language is a de facto standard in scientific computing: it strongly facilitates the effective and efficient translation of (even complex) mathematical and numerical models into operative software without compromising computation speed and accuracy. Moreover, its simple syntax is suitable for research scientists who are interested (and skilled) in the physical aspects of numerical computations rather than in computer technicalities. Consequently, we developed FOODIE using the Fortran language: FOODIE is written by research scientists for research scientists.
One key point of the FOODIE development is problem generalization: the problem solved must be the IVP (1) rather than any of its actual definitions. Consequently, we must rely on a generic implementation of the solvers. To this aim, OOP is very useful (see [9]): it allows one to express the IVP (1) in a very concise, clear and truly generic formulation. In particular, our implementation is based on the Abstract Calculus Pattern (ACP) concept.

The Abstract Calculus Pattern
The abstract calculus pattern provides a simple solution for the connection between the very high-level expression of the IVP (1) and the eventual concrete (low-level) implementation of the ODE problem being solved. The ACP essentially constitutes a contract based on an Abstract Data Type (ADT): we specify an ADT supporting a certain set of mathematical operators (differential and integral ones) and implement FOODIE solvers only on the basis of this ADT. FOODIE clients must formulate the ODE problem under integration by defining their own concrete extensions of our ADT (implementing all the deferred operators). Such an approach defines the abstract calculus pattern: FOODIE solvers are aware of only the ADT, while FOODIE clients extend the ADT to define the concrete ODE problem.
It is worth noting that the ACP emancipates the solver implementations from any low-level, problem-dependent details: the ODE solvers developed with this pattern are extremely concise, clear, maintainable and less error-prone with respect to a low-level (non-abstract) pattern. Moreover, FOODIE clients can use solvers that are extremely robust: as a matter of fact, FOODIE solvers are expressed in a formulation very close to the mathematical one and are tested on a widely varying family of problems. As shown in the following, such great flexibility does not compromise computational efficiency.
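To make the pattern concrete, the following sketch mimics the ACP in Python (FOODIE itself is written in Fortran); the names Integrand, Decay and euler_step are illustrative inventions, not FOODIE's API. The solver sees only the abstract contract, while the client supplies the concrete arithmetic:

```python
from abc import ABC, abstractmethod

class Integrand(ABC):
    """Abstract 'integrand' contract: solvers rely only on these operations."""

    @abstractmethod
    def t(self, time=None):
        """Residuals function R(t, U): returns a new Integrand."""

    @abstractmethod
    def __add__(self, other): ...

    @abstractmethod
    def __mul__(self, scalar): ...

def euler_step(U, dt, time=None):
    """A solver written purely against the abstract contract."""
    return U + U.t(time=time) * dt

class Decay(Integrand):
    """Concrete client extension: dU/dt = -U (exponential decay)."""
    def __init__(self, value):
        self.value = value
    def t(self, time=None):
        return Decay(-self.value)
    def __add__(self, other):
        return Decay(self.value + other.value)
    def __mul__(self, scalar):
        return Decay(self.value * scalar)

U = Decay(1.0)
U = euler_step(U, dt=0.1)
print(U.value)  # 0.9
```

The key observation is that euler_step never inspects Decay's internals: swapping in any other Integrand extension leaves the solver untouched, which is exactly the decoupling the ACP is meant to provide.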

FOODIE Novelty
The main novelty of our approach is to combine many advantages of CAS programming with the computational performance of compiled (parallel) programming: as a matter of fact, CAS approaches generally enable fast and easy implementation of numerical methods, but at the cost of low computational performance, as they are generally based on interpreted programming languages. On the other hand, compiled programming languages have (extremely) higher computational performance (especially on parallel architectures), but are more constrained with respect to interpreted languages: the resulting programming approach requires, in general, more effort to implement complicated algorithms and is more error-prone. The novelty of the ACP implemented in FOODIE consists in enabling ease of code development, clarity and robustness, and maximization of code re-usability and conciseness, while retaining the computational performance of the compiled Fortran programming language.

Manuscript Organization
The present paper is organized as follows: in Section 2, a brief description of the mathematical and numerical methods currently implemented in FOODIE is presented; in Section 3, a detailed discussion of the implementation specifications is provided by means of an analytical code-listings review; in Section 4, a verification analysis of the results of FOODIE applications is presented; Section 5 provides an analysis of FOODIE performance under parallel frameworks such as the OpenMP and MPI paradigms; finally, in Section 6, concluding remarks and perspectives are depicted.

Mathematical and Numerical Models
In many (most) circumstances, the solution of Equation (1) cannot be computed in a closed, exact form (even if it exists and is unique) due to the complexity and nature of the residuals function, which is often nonlinear. Consequently, the problem is often solved by relying on a numerical approach: the solution of system (1) at a time t_n, namely U(t_n), is approximated by a sequence of time-marching approximations U(t_0) = u_0 → u_1 → u_2 → ... → u_N ≈ U(t_n), where the relation u_i → u_{i+1} implies a stepping, numerical integration from time t_i to time t_{i+1}, and N is the total number of numerical time steps necessary to evolve the initial conditions toward the sought solution U(t_n). To this aim, many numerical schemes have been devised. Notably, numerical schemes of practical usefulness must possess some necessary properties, such as consistency and stability, to ensure that the numerical approximation converges to the exact solution as the numerical time step tends to zero. A detailed discussion of these properties is outside the scope of the present work and is omitted. Here, we briefly recall some classifications necessary to introduce the schemes implemented in the FOODIE library.
A non-comprehensive classification of the most widely used schemes could distinguish between multi-step versus one-step schemes and between explicit versus implicit schemes.
Essentially, multi-step schemes have been developed to obtain an accurate approximation of the subsequent numerical steps using the information contained in the previously computed steps; this approach thus relates the next-step approximation to a set of previously computed steps. On the contrary, a one-step scheme evolves the solution toward the next step using only the information coming from the current time approximation. Within the one-step family, an equally accurate approximation can be obtained by means of a multi-stage approach such as the one due to Runge-Kutta. The current version of FOODIE provides schemes belonging to both these families.
The other ODE solver classification concerns the explicit or implicit nature of the schemes employed. Briefly, an explicit scheme computes the next-step approximation using the previously computed steps, at most up to the current time, whereas an implicit scheme also uses the next-step approximation (which is the unknown), thus requiring extra computations. The implicit approach is of practical use for stiff systems, where the usage of explicit schemes could require an extremely small time step to evolve the solution in a stable way. By mixing explicit and implicit schemes, it is possible to build a family of predictor-corrector methods: using an explicit scheme to predict a guess for the next-step approximation, it is possible to use an implicit method to correct this guess.
FOODIE currently implements the following ODE schemes:
• one-step schemes:
- explicit forward Euler scheme, being 1st order accurate;
- explicit Runge-Kutta schemes (see [10,11]):
* TVD/SSP Runge-Kutta schemes;
* low storage Runge-Kutta schemes;
• multi-step schemes:
- explicit Adams-Bashforth schemes;
- implicit Adams-Moulton schemes;
- predictor-corrector Adams-Bashforth-Moulton schemes;
- leapfrog schemes (unfiltered and filtered).

The Explicit Forward Euler Scheme

The explicit forward Euler scheme for ODE integration is probably the simplest solver ever devised. Considering the system (1), the solution (approximation) of the state vector U at the time t_{n+1} = t_n + Δt (Δt being the time step considered), assuming the solution at time t_n to be known, is:

    U(t_{n+1}) = U(t_n) + Δt · R[t_n, U(t_n)]                       (2)

where the solution at the new time step is computed by means of only the current time solution; thus, this is an explicit scheme. The solution is a 1st-order approximation, the local truncation error being O(Δt²). As is well known, this scheme has an absolute (linear) stability locus equal to |1 + Δtλ| ≤ 1, where λ contains the eigenvalues of the linear (or linearized) Jacobian matrix of the system. This scheme is Total Variation Diminishing (TVD) in the stability region under the CFL limit, thus satisfying the maximum principle (or the equivalent positivity-preserving property, see [13]).
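As a sanity check of the first-order behavior stated above, the following Python sketch (illustrative only, not FOODIE code) marches u' = -u with the explicit Euler update (2) and verifies that halving the time step roughly halves the error:

```python
import math

def euler_march(u0, rate, dt, t_end):
    """March u' = rate*u with the explicit forward Euler update (2)."""
    u, t = u0, 0.0
    while t < t_end - 1e-12:
        u = u + dt * (rate * u)  # U^{n+1} = U^n + dt * R(t^n, U^n)
        t += dt
    return u

exact = math.exp(-1.0)
err_coarse = abs(euler_march(1.0, -1.0, 0.01, 1.0) - exact)
err_fine   = abs(euler_march(1.0, -1.0, 0.005, 1.0) - exact)
print(err_coarse / err_fine)  # ~2: halving dt halves the error (1st order)
```

Note also that dt*|rate| = 0.01 here sits well inside the |1 + Δtλ| ≤ 1 stability locus; taking dt > 2 for this problem would make the march blow up.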

The Explicit TVD/SSP Runge-Kutta Class of Schemes
Runge-Kutta methods belong to the more general multi-stage family of schemes. This kind of scheme has been designed to achieve a more accurate solution than the 1st-order Euler scheme without increasing the number of time steps used, as is done with the multi-step schemes, see [11]. Essentially, the high order of accuracy is obtained by generating intermediate values (the stages) of the solution and its derivative and using them within a single time step. This commonly implies the allocation of some auxiliary memory registers for storing the intermediate stages.
Notably, the multi-stage class of schemes has the attractive property of being self-starting: the high-order accurate solution can be obtained directly from the previous one, without the need to first compute a certain number of previous steps, as happens for the multi-step schemes. Moreover, one-step multi-stage methods are suitable for adaptively varying time-step sizes (which is also possible for multi-step schemes, but at the cost of more complexity) and for discontinuous solutions, namely discontinuities happening at a certain time t* (which in a multi-step framework can involve an overall accuracy degradation).
In general, the TVD/SSP Runge-Kutta schemes provided by the FOODIE library are written by means of the following algorithm:

    U(t_{n+1}) = U(t_n) + Δt · Σ_{s=1}^{Ns} β_s K_s                 (3)

where N_s is the number of Runge-Kutta stages used and K_s is the s-th stage, defined as:

    K_s = R[t_n + γ_s Δt, U(t_n) + Δt · Σ_{i=1}^{s−1} α_{s,i} K_i]  (4)

It is worth noting that Equations (3) and (4) can be easily adapted to implicit schemes.
A scheme belonging to this family is operative once the coefficients α, β, γ are provided.
We represent these coefficients using Butcher's table, which for an explicit scheme (where γ_1 = α_{1,*} = α_{i,i} = 0) has the form reported in Table 1.
Equations (3) and (4) show that Runge-Kutta methods do not require any additional differentiations of the ODE system to achieve high-order accuracy; rather, they require additional evaluations of the residuals function R.
The nature of the scheme and the properties of the solutions obtained depend on the number of stages and on the values of the coefficients selected. Currently, FOODIE provides 3 Runge-Kutta schemes having the TVD or Strong Stability Preserving (SSP) property (thus being suitable for ODE systems involving rapidly changing nonlinear dynamics), the Butcher's coefficients of which are reported in Tables 2-4. The absolute stability locus depends on the coefficients selected; however, as a general principle, we can assume that the greater the number of stages, the wider the stability locus for equal accuracy orders.
It is worth noting that FOODIE also provides a one-stage TVD Runge-Kutta solver that reverts back to the explicit forward Euler scheme: it can be used, for example, in a Recursive Order Reduction (ROR) framework that automatically checks some properties of the solution and, if needed, reduces the order of the Runge-Kutta solver until those properties are obtained.
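As an illustration of how compactly a scheme of this class can be written, here is a Python sketch (illustrative, not FOODIE code) of the classic 3-stage, 3rd-order SSP Runge-Kutta scheme in its Shu-Osher form (convex combinations of forward Euler steps), checked on u' = -u:

```python
import math

def ssp_rk3_step(u, t, dt, R):
    """One step of the 3-stage, 3rd-order SSP Runge-Kutta scheme
    (Shu-Osher form: convex combinations of forward Euler steps)."""
    u1 = u + dt * R(t, u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * R(t + dt, u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * R(t + 0.5 * dt, u2))

# Verify the accuracy on u' = -u, u(0) = 1, integrated to t = 1.
u, t, dt = 1.0, 0.0, 0.1
while t < 1.0 - 1e-12:
    u = ssp_rk3_step(u, t, dt, lambda t, u: -u)
    t += dt
err = abs(u - math.exp(-1.0))
print(err)  # error on the order of 1e-5 with dt = 0.1
```

The convex-combination structure is what makes the scheme strong-stability preserving: each stage is itself a forward Euler step, so TVD-type bounds satisfied by Euler under the CFL limit carry over.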

The Explicit Low Storage Runge-Kutta Class of Schemes
As aforementioned, standard Runge-Kutta schemes have the drawback of requiring N_s auxiliary memory registers to store the necessary stage data. In order to make efficient use of the available, limited computer memory, the class of low storage Runge-Kutta schemes was devised. Essentially, the standard Runge-Kutta class (under some specific conditions) can be reformulated to allow a more efficient memory management. Currently, FOODIE provides a class of 2N-register storage Runge-Kutta schemes, meaning that the storage of all stages requires only 2 registers of memory with a word length N (namely the length of the state vector), in contrast to the standard formulation, where N_s registers of the same length N are required. This is a dramatic improvement in memory efficiency, especially for schemes using a high number of stages (N_s ≥ 4), where the memory necessary is half that of the original formulation. Unfortunately, not all standard Runge-Kutta schemes can be reformulated in a low storage form.
Similarly to the TVD/SSP Runge-Kutta class, the low storage class also provides a fail-safe one-stage solver reverting back to the explicit forward Euler solver, which is useful for ROR-like frameworks.
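A minimal Python sketch of the 2N-register idea follows (illustrative only; the coefficients below are Williamson's classic 3-stage, 3rd-order low-storage set, not necessarily those shipped by FOODIE): only two registers, u and du, are needed regardless of the stage count.

```python
import math

# Williamson-style 2N-register low-storage RK3 coefficients.
A = [0.0, -5.0 / 9.0, -153.0 / 128.0]
B = [1.0 / 3.0, 15.0 / 16.0, 8.0 / 15.0]
C = [0.0, 1.0 / 3.0, 3.0 / 4.0]

def ls_rk3_step(u, t, dt, R):
    """One low-storage RK3 step: each stage overwrites the same two
    registers (u and du) instead of keeping all stages alive."""
    du = 0.0                                 # first register
    for a, b, c in zip(A, B, C):
        du = a * du + dt * R(t + c * dt, u)  # accumulate stage into du
        u = u + b * du                       # second register, updated in place
    return u

# Check 3rd-order behavior on u' = -u, u(0) = 1, integrated to t = 1.
u, t, dt = 1.0, 0.0, 0.1
while t < 1.0 - 1e-12:
    u = ls_rk3_step(u, t, dt, lambda t, u: -u)
    t += dt
err = abs(u - math.exp(-1.0))
print(err)
```

For a state vector of length N, u and du are the only N-word arrays required, which is the 2N-register property described above.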

The Explicit Adams-Bashforth Class of Schemes
Adams-Bashforth methods belong to the more general (linear) explicit multi-step family of schemes. This kind of scheme has been designed to achieve a more accurate solution than the 1st-order Euler scheme using the information coming from the solutions already computed at previous time steps.
In general, the Adams-Bashforth schemes provided by the FOODIE library are written by means of the following algorithm (for explicit schemes only):

    U(t_{n+Ns}) = U(t_{n+Ns−1}) + Δt · Σ_{s=1}^{Ns} b_s R[t_{n+s−1}, U(t_{n+s−1})]   (5)

where N_s is the number of time steps considered and b_s are the linear coefficients selected. Currently, FOODIE provides 2, 3, and 4 step schemes having 2nd, 3rd and 4th formal order of accuracy, respectively. The b_s coefficients are reported in Table 6. Similarly to the Runge-Kutta classes, the Adams-Bashforth class also provides a fail-safe one-step solver reverting back to the explicit forward Euler solver, which is useful for ROR-like frameworks.
It is worth noting that for N_s > 1 the Adams-Bashforth class of solvers is not self-starting: the values of U(t_1), U(t_2), ..., U(t_{Ns−1}) must be provided. To this aim, a lower-order multi-step scheme or an equivalent-order one-step multi-stage scheme can be used.
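The 2-step case (b = 3/2, −1/2) can be sketched in a few lines of Python (illustrative, not FOODIE code), including the one-step bootstrap required because the scheme is not self-starting:

```python
import math

def ab2_march(u0, R, dt, t_end):
    """2-step Adams-Bashforth march; the second starting value is
    bootstrapped with one Heun (RK2) step of matching order."""
    k = R(0.0, u0)
    u1 = u0 + 0.5 * dt * (k + R(dt, u0 + dt * k))  # Heun bootstrap
    f_prev, u, t = k, u1, dt
    while t < t_end - 1e-12:
        f = R(t, u)
        u = u + dt * (1.5 * f - 0.5 * f_prev)  # b = (3/2, -1/2)
        f_prev = f
        t += dt
    return u

err = abs(ab2_march(1.0, lambda t, u: -u, 0.01, 1.0) - math.exp(-1.0))
print(err)
```

Note that, unlike a multi-stage scheme, each step costs a single residuals evaluation: the second value, f_prev, is simply reused from the previous step.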

The Implicit Adams-Moulton Class of Schemes
Adams-Moulton methods belong to the more general (linear) implicit multi-step family of schemes. This kind of scheme has been designed to achieve a more accurate solution than the 1st-order Euler scheme using the information coming from the solutions already computed at previous time steps.
In general, the Adams-Moulton schemes provided by the FOODIE library are written by means of the following algorithm (for implicit schemes only):

    U(t_{n+Ns}) = U(t_{n+Ns−1}) + Δt · Σ_{s=0}^{Ns} b_s R[t_{n+s}, U(t_{n+s})]   (6)

where N_s is the number of time steps considered and b_s are the linear coefficients selected.
Currently, FOODIE provides 1, 2, and 3 step schemes having 2nd, 3rd and 4th formal order of accuracy, respectively. The b_s coefficients are reported in Table 7. Similarly to the Runge-Kutta and Adams-Bashforth classes, the Adams-Moulton class also provides a fail-safe zero-step solver reverting back to the implicit backward Euler solver, which is useful for ROR-like frameworks.
It is worth noting that for N_s > 1 the Adams-Moulton class of solvers is not self-starting: the values of U(t_1), U(t_2), ..., U(t_{Ns−1}) must be provided. To this aim, a lower-order multi-step scheme or an equivalent-order one-step multi-stage scheme can be used.
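To illustrate the implicit nature of the class, the following Python sketch (illustrative, not FOODIE code) applies the 1-step Adams-Moulton scheme, i.e. the trapezoidal rule, solving the implicit equation by simple fixed-point iteration; this iteration is adequate only when Δt times the Lipschitz constant of R is small, while truly stiff problems would call for a Newton-type iteration.

```python
import math

def am_trapezoidal_step(u, t, dt, R, iters=20):
    """One step of the 1-step Adams-Moulton (trapezoidal) scheme:
    u_new = u + dt/2 * (R(t, u) + R(t+dt, u_new)), solved here by
    fixed-point iteration started from an explicit Euler predictor."""
    f_old = R(t, u)
    u_new = u + dt * f_old  # explicit Euler predictor as first guess
    for _ in range(iters):
        u_new = u + 0.5 * dt * (f_old + R(t + dt, u_new))
    return u_new

# Check 2nd-order accuracy on u' = -u, u(0) = 1, integrated to t = 1.
u, t, dt = 1.0, 0.0, 0.1
while t < 1.0 - 1e-12:
    u = am_trapezoidal_step(u, t, dt, lambda t, u: -u)
    t += dt
err = abs(u - math.exp(-1.0))
print(err)
```

The extra computation mentioned in the text is visible here: every time step hides an inner solve for the unknown u_new, the price paid for the larger stability region of the implicit family.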

The Predictor-Corrector Adams-Bashforth-Moulton Class of Schemes
Adams-Bashforth-Moulton methods belong to the more general (linear) predictor-corrector multi-step family of schemes. This kind of scheme has been designed to achieve a more accurate solution than the 1st-order Euler scheme using the information coming from the solutions already computed at previous time steps.
In general, the Adams-Bashforth-Moulton schemes provided by the FOODIE library couple an explicit Adams-Bashforth scheme (5), used as predictor, with an implicit Adams-Moulton scheme (6), used as corrector, where N_s is the number of time steps considered and the b_s coefficients are those reported in Tables 6 and 7.
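A 2nd-order instance of the predictor-corrector idea can be sketched in Python as follows (illustrative, not FOODIE code): the 2-step Adams-Bashforth formula predicts a guess, and a single trapezoidal Adams-Moulton evaluation corrects it, so no implicit solve is needed:

```python
import math

def abm2_march(u0, R, dt, t_end):
    """Adams-Bashforth-Moulton predictor-corrector (2nd order):
    AB2 predicts, the trapezoidal AM rule corrects the guess."""
    # Bootstrap the second starting value with Heun's method.
    k = R(0.0, u0)
    u1 = u0 + 0.5 * dt * (k + R(dt, u0 + dt * k))
    f_prev, u, t = k, u1, dt
    while t < t_end - 1e-12:
        f = R(t, u)
        u_pred = u + dt * (1.5 * f - 0.5 * f_prev)  # predictor (AB2)
        u = u + 0.5 * dt * (f + R(t + dt, u_pred))  # corrector (AM)
        f_prev = f
        t += dt
    return u

err = abs(abm2_march(1.0, lambda t, u: -u, 0.01, 1.0) - math.exp(-1.0))
print(err)
```

Because the corrector evaluates R at the predicted value rather than iterating to convergence, each step costs exactly two residuals evaluations while inheriting much of the implicit scheme's robustness.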

The Leapfrog Solver
The leapfrog scheme belongs to the multi-step family, being formally a centered second-order approximation in time, see [18-21]. The leapfrog method (in its original formulation) is mostly unstable; however, it is well suited for periodic-oscillatory problems, providing a null error on the amplitude value and a formal second-order error on the phase, under satisfaction of the stable time-step size limit. Commonly, leapfrog methods are said to support a 2Δt computational mode that can generate unphysical, unstable solutions. As a consequence, the original leapfrog scheme is generally filtered in order to suppress these computational modes.
The unfiltered leapfrog scheme provided by FOODIE is:

    U(t_{n+2}) = U(t_n) + 2Δt · R[t_{n+1}, U(t_{n+1})]              (9)

FOODIE also provides, in a seamless API, filtered leapfrog schemes. A widely used filter is due to Robert and Asselin, which suppresses the computational modes at the cost of an accuracy reduction resulting in a 1st-order error in the amplitude value. A more accurate filter, able to provide a 3rd-order error on amplitude, is a modification of the Robert-Asselin filter due to Williams, known as the Robert-Asselin-Williams (RAW) filter, which filters the approximations of U(t_{n+1}) and U(t_{n+2}) by the following scalar coefficients:

    U(t_{n+1}) = U(t_{n+1}) + Δ · α
    U(t_{n+2}) = U(t_{n+2}) + Δ · (α − 1)                           (10)

where

    Δ = ν/2 · [U(t_n) − 2U(t_{n+1}) + U(t_{n+2})]

The filter coefficients should be taken as ν ∈ (0, 1] and α ∈ (0.5, 1]. If α = 0.5, the filters at times t_{n+1} and t_{n+2} have the same amplitude and opposite sign, thus allowing the optimal 3rd-order error on amplitude. The default values of the FOODIE-provided scheme are ν = 0.01 and α = 0.53, but they can be customized at runtime. The RA-filtered leapfrog scheme is widely used in numerical weather prediction and, in general, in atmospheric/oceanic circulation models, probably because it is really simple to implement and has good accuracy. However, the RAW filter reported in (10) is a major upgrade and generalizes a family of leapfrog solvers ranging from unconditionally unstable, to conditionally stable, up to unconditionally stable. By tuning the family coefficients, it is possible to reach (almost) the ideal third-order accuracy in amplitude while retaining conditional stability, see [21].
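The behavior on oscillatory problems can be sketched in Python (illustrative, not FOODIE code) for u' = iu, whose exact solution has constant amplitude; the Robert-Asselin filter used below is the α = 1 limit of the RAW filter described above:

```python
import cmath

def leapfrog_ra_march(u0, dt, n_steps, nu=0.01):
    """Leapfrog for the oscillator u' = i*u with the Robert-Asselin
    filter (the alpha = 1 limit of the RAW filter)."""
    R = lambda u: 1j * u
    u_prev = u0
    u_curr = u0 + dt * R(u0)  # bootstrap: leapfrog is not self-starting
    for _ in range(n_steps - 1):
        u_next = u_prev + 2.0 * dt * R(u_curr)           # unfiltered leapfrog
        d = 0.5 * nu * (u_prev - 2.0 * u_curr + u_next)  # filter displacement
        u_curr = u_curr + d  # damp the 2*dt computational mode
        u_prev, u_curr = u_curr, u_next
    return u_curr

u = leapfrog_ra_march(1.0 + 0j, 0.01, 1000)
print(abs(u))  # amplitude stays ~1 over many periods
```

The filter displacement d is a discrete second difference, so it is large only for the sawtooth 2Δt computational mode and nearly vanishes for the smooth physical mode, which is why the amplitude penalty of the filter is so mild.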

Application Program Interface
In this section we review the FOODIE API providing a detailed discussion of the implementation choices.
As aforementioned, the programming language used is the Fortran 2008 standard, which is a minor revision of the previous Fortran 2003 standard. These newer Fortran idioms provide (among other useful features) almost complete support for OOP, in particular for the ADT concept. Fortran 2003 introduced the abstract derived type: a derived type suitable to serve as a contract for concrete type extensions, which has no actual implementation; rather, it provides a well-defined set of type-bound procedure interfaces, which in Fortran nomenclature are called deferred procedures. Using such an abstract definition, we can implement algorithms operating only on this abstract type and on all its concrete extensions. This is the key feature of the FOODIE library: all the above-described ODE solvers are implemented on the knowledge of only one abstract type, allowing an implementation style based on a very high-level syntax. Meanwhile, client codes must implement their own IVPs by extending only one simple abstract type.
In Section 3.1, a review of the FOODIE main ADT, the integrand type, is provided, while Sections 3.2-3.5 and 3.8 cover the API of the currently implemented solvers.
It is worth noting that all FOODIE public entities (ADT and solvers) must be accessed via the FOODIE module; see Listing 1 for an example of how to access all public FOODIE entities. The tail of the integrand type definition (Listing 2) reads:

      real_multiply_integrand, &
      integrand_multiply_real
    generic, public :: assignment(=) => assign_integrand
  endtype integrand

The integrand type does not implement any actual integrand field, it being an abstract type. It only specifies which deferred procedures are necessary for implementing an actual concrete integrand type that can use a FOODIE solver.
As shown in Listing 2, the number of deferred type-bound procedures that clients must implement in their own concrete extension of the integrand ADT is very limited: essentially, there is one ODE-specific procedure, plus some operator definitions constituted by symmetric operators between two integrand objects, asymmetric operators between integrand objects and real numbers (and vice versa), and an assignment statement for the creation of new integrand objects. These procedures are analyzed in the following paragraphs. The ODE-specific procedure-function takes two arguments, the first passed as a type-bound argument, while the latter is optional, and it returns an integrand object. The passed dummy argument, self, is a polymorphic argument that could be any extension of the integrand ADT. The optional argument t is the time at which the residuals function must be computed: it can be omitted in the case that the residuals function does not depend directly on time.
Commonly, in the concrete implementation of this deferred abstract procedure, clients embed the actual ODE equations being solved. As an example, for the Burgers equation, which is a partial differential equation (PDE) system also involving a boundary value problem, this procedure embeds the spatial operator that converts the PDE into a system of algebraic ODEs. As a consequence, the eventual concrete implementation of this procedure can be very complex and error-prone. Nevertheless, the FOODIE solvers are implemented only on the above abstract interface, thus emancipating the solver implementations from any concrete complexity.

Symmetric Operators Procedures
The abstract interface of symmetric procedures is shown in Listing 4.
Listing 4. Symmetric operator procedure interface.

  function symmetric_operator(lhs, rhs) result(operator_result)
    import :: integrand
    class(integrand), intent(IN)  :: lhs             !< Left hand side.
    class(integrand), intent(IN)  :: rhs             !< Right hand side.
    class(integrand), allocatable :: operator_result !< Operator result.
  endfunction symmetric_operator

This interface defines a class of procedures operating on two integrand objects, namely it is used for the definition of the multiplication, summation and subtraction operators between integrand objects. These operators are used in the above-described ODE solvers, for example see Equations (2), (3), (6) or (9). The implementation details of such a procedure class are strictly dependent on the concrete extension of the integrand type. From the FOODIE solvers' point of view, we need to know only that the first argument, passed as the bound one (the left-hand side of the operator), and the second argument (the right-hand side of the operator) are two integrand objects, and that the returned object is still an integrand one. This agnostic nature is a feature shared by all FOODIE operators.

Integrand/Real and Real/Integrand Operators Procedures
The abstract interfaces of Integrand/real and real/integrand operators procedures are shown in Listing 5.
Listing 5. Integrand/real and real/integrand operator procedure interfaces.

  function integrand_op_real(lhs, rhs) result(operator_result)
    import :: integrand, R_P
    class(integrand), intent(IN)  :: lhs             !< Left hand side.
    real(R_P),        intent(IN)  :: rhs             !< Right hand side.
    class(integrand), allocatable :: operator_result !< Operator result.
  endfunction integrand_op_real

  function real_op_integrand(lhs, rhs) result(operator_result)
    import :: integrand, R_P
    real(R_P),        intent(IN)  :: lhs             !< Left hand side.
    class(integrand), intent(IN)  :: rhs             !< Right hand side.
    class(integrand), allocatable :: operator_result !< Operator result.
  endfunction real_op_integrand

These two interfaces are necessary in order to complete the algebra operating on the integrand object class, allowing the multiplication of an integrand object by a real number, a circumstance that happens in all solvers, see Equations (2), (3), (6) or (9). The implementation details of these procedures are strictly dependent on the concrete extension of the integrand type.

Integrand Assignment Procedure
The abstract interface of integrand assignment procedure is shown in Listing 6.
Listing 6. Integrand assignment procedure interface.

  subroutine assignment_integrand(lhs, rhs)
    import :: integrand
    class(integrand), intent(INOUT) :: lhs !< Left hand side.
    class(integrand), intent(IN)    :: rhs !< Right hand side.
  endsubroutine assignment_integrand

The assignment statement is necessary in order to complete the algebra operating on the integrand object class, allowing the assignment of an integrand object to another one, a circumstance that happens in all solvers, see Equations (2), (3), (6) or (9). The implementation details of this assignment are strictly dependent on the concrete extension of the integrand type.

The Explicit Forward Euler Solver
The explicit forward Euler solver is exposed (by the FOODIE main module, which must be imported, see Listing 1) as a single derived type (a standard convention for all FOODIE solvers) named euler_explicit_integrator. It provides the type-bound procedure (also referred to as a method) integrate for integrating an integrand object, or any of its polymorphic concrete extensions, in time. Consequently, to use such a solver it must first be defined as an instance of the exposed FOODIE integrator type, see Listing 7.
Listing 7. Definition of an explicit forward Euler integrator.

  use FOODIE, only : euler_explicit_integrator
  type(euler_explicit_integrator) :: integrator

Once an integrator of this type has been instantiated, it can be directly used without any initialization, for example see Listing 8, where my_integrand is a concrete (valid) extension of the integrand ADT.
The complete implementation of the integrate method of the explicit forward Euler solver is reported in Listing 9.
Listing 9. Implementation of the integrate method of the Euler solver.

  subroutine integrate(U, Dt, t)
    class(integrand),    intent(INOUT) :: U  !< Field to be integrated.
    real(R_P),           intent(IN)    :: Dt !< Time step.
    real(R_P), optional, intent(IN)    :: t  !< Time.
    U = U + U%t(t=t) * Dt
    return
  endsubroutine integrate

This method takes three arguments: the first is an integrand class, being the integrand field that must be integrated one step over in time; the second is the time step used; and the third, which is optional, is the current time value, passed to the residuals function to take into account the cases where the time derivative explicitly depends on time. The time step is not automatically computed (for example by inspecting the passed integrand field); thus its value must be computed externally and passed to the integrate method.

The Explicit TVD/SSP Runge-Kutta Class of Solvers
The TVD/SSP Runge-Kutta class of solvers is exposed as a single derived type named tvd_runge_kutta_integrator. This type provides three methods:
• init: initialize the integrator according to the options offered by the class of solvers;
• destroy: destroy a previously initialized integrator, freeing any allocated dynamic memory registers;
• integrate: integrate the integrand field one-step-over in time.
As is common for FOODIE solvers, before using such a solver it must be defined as an instance of the exposed FOODIE integrator type, see Listing 10.
Listing 10. Definition of an explicit TVD/SSP Runge-Kutta integrator.

    use FOODIE, only : tvd_runge_kutta_integrator
    type(tvd_runge_kutta_integrator) :: integrator

Once an integrator of this type has been instantiated, it must be initialized before use, for example see Listing 11.
Listing 11. Example of initialization of an explicit TVD/SSP Runge-Kutta integrator.

    call integrator%init(stages=3)

In Listing 11 a 3-stages solver has been initialized. In fact, from Equations (3) and (4), a solver belonging to this class is completely defined once the number of stages has been chosen. The complete definition of the tvd_runge_kutta_integrator type is reported in Listing 12. As shown, the Butcher coefficients are stored as allocatable arrays whose values are initialized by the init method.
Listing 12. Definition of the tvd_runge_kutta_integrator type.

    type :: tvd_runge_kutta_integrator
      integer(I_P) :: stages=0             ! Number of stages.
      real(R_P), allocatable :: alph(:,:)  ! alpha Butcher's coefficients.
      real(R_P), allocatable :: beta(:)    ! beta Butcher's coefficients.
      real(R_P), allocatable :: gamm(:)    ! gamma Butcher's coefficients.
      contains
        procedure, pass(self), public :: destroy
        procedure, pass(self), public :: init
        procedure, pass(self), public :: integrate
        final :: finalize
    endtype tvd_runge_kutta_integrator

After the solver has been initialized, it can be used for integrating an integrand field, as shown in Listing 13.
Listing 13. Example of usage of a TVD/SSP Runge-Kutta integrator.

    type(my_integrand) :: my_field
    type(my_integrand) :: my_stages(1:3)
    call integrator%integrate(U=my_field, stage=my_stages, Dt=0.1)

where my_integrand is a concrete (valid) extension of the integrand ADT. Listing 13 shows that the memory registers necessary for storing the Runge-Kutta stages must be supplied by the client code.
The complete implementation of the integrate method of the explicit TVD/SSP Runge-Kutta class of solvers is reported in Listing 14.
Listing 14. Implementation of the integrate method of the explicit TVD/SSP Runge-Kutta class.

    subroutine integrate(self, U, stage, Dt, t)
    class(tvd_runge_kutta_integrator), intent(IN) :: self  ! Actual RK integrator.
    class(integrand), intent(INOUT) :: U                   ! Field to be integrated.
    ...

This method takes five arguments: the first is passed as the bound argument and is the solver itself; the second is of integrand class, it being the integrand field that must be integrated one-step-over in time; the third is the stages array for storing the stage computations; the fourth is the time step used; the fifth, which is optional, is the current time value, passed to the residuals function to take into account the cases where the time derivative explicitly depends on time. The time step is not computed automatically (for example, by inspecting the passed integrand field); its value must be computed externally and passed to the integrate method.
It is worth noting that the stages memory registers, namely the array stage, must be passed as an argument: since it is a non-passed polymorphic dummy argument, we are not allowed to define it as an automatic array local to the integrate method.
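As a concrete illustration of what the integrate method of this class computes, here is a Python sketch of the 3-stage TVD/SSP Runge-Kutta scheme in its classical Shu-Osher convex-combination form; the Fortran listing stores the equivalent Butcher coefficients, and residual is a hypothetical stand-in for U%t:

```python
def ssp_rk3_step(u, residual, t, dt):
    """One step of the 3-stage TVD/SSP Runge-Kutta scheme (Shu-Osher form).

    Each stage is a convex combination of forward Euler steps, which is
    what makes the scheme total-variation diminishing / strong stability
    preserving.
    """
    s1 = u + residual(t, u) * dt
    s2 = 0.75 * u + 0.25 * (s1 + residual(t + dt, s1) * dt)
    return u / 3.0 + 2.0 / 3.0 * (s2 + residual(t + 0.5 * dt, s2) * dt)

# The scheme is third-order accurate: its quadrature is exact for
# dU/dt = 3*t**2, so one unit step reproduces U(1) = 1.
u1 = ssp_rk3_step(0.0, lambda t, u: 3.0 * t**2, t=0.0, dt=1.0)
```

Note that the two intermediate values s1 and s2 play the role of the client-supplied stage registers of Listing 13.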

The Explicit Low Storage Runge-Kutta Class of Solvers
The low storage variant of the Runge-Kutta class of solvers is exposed as a single derived type named ls_runge_kutta_integrator. This type provides three methods:
• init: initialize the integrator according to the options offered by the class of solvers;
• destroy: destroy a previously initialized integrator, freeing any allocated dynamic memory registers;
• integrate: integrate the integrand field one-step-over in time.
As is common for FOODIE solvers, before using such a solver it must be defined as an instance of the exposed FOODIE integrator type, see Listing 15.
Listing 15. Definition of an explicit low storage Runge-Kutta integrator.

    use FOODIE, only : ls_runge_kutta_integrator
    type(ls_runge_kutta_integrator) :: integrator

Once an integrator of this type has been instantiated, it must be initialized before use, for example see Listing 16.
Listing 16. Example of initialization of an explicit low storage Runge-Kutta integrator.

    call integrator%init(stages=5)

In Listing 16 a 5-stages solver has been initialized. In fact, from Equation (5), a solver belonging to this class is completely defined once the number of stages has been chosen. The complete definition of the ls_runge_kutta_integrator type is reported in Listing 17. As shown, the Williamson coefficients are stored as allocatable arrays whose values are initialized by the init method.

      contains
        procedure, pass(self), public :: destroy
        procedure, pass(self), public :: init
        procedure, pass(self), public :: integrate
        final :: finalize
    endtype ls_runge_kutta_integrator

After the solver has been initialized, it can be used for integrating an integrand field, as shown in Listing 18.
Listing 18. Example of usage of a low storage Runge-Kutta integrator.

    type(my_integrand) :: my_field
    type(my_integrand) :: my_stages(1:2)
    call integrator%integrate(U=my_field, stage=my_stages, Dt=0.1)

where my_integrand is a concrete (valid) extension of the integrand ADT. Listing 18 shows that the memory registers necessary for storing the Runge-Kutta stages must be supplied by the client code, as for the TVD/SSP Runge-Kutta class. However, the number of registers required is now always 2, independently of the number of stages used, which in the example considered is 5.
The complete implementation of the integrate method of the explicit low storage Runge-Kutta class of solvers is reported in Listing 19.
Listing 19. Implementation of the integrate method of the explicit low storage Runge-Kutta class.

    subroutine integrate(self, U, stage, Dt, t)
    class(ls_runge_kutta_integrator), intent(IN) :: self  ! Actual RK integrator.
    class(integrand), intent(INOUT) :: U                  ! Field to be integrated.
    class(integrand), intent(INOUT) :: stage(1:2)         ! Runge-Kutta registers [1:2].
    real(R_P), intent(IN) :: Dt                           ! Time step.
    real(R_P), intent(IN) :: t                            ! Time.
    integer(I_P) :: s                                     ! First stages counter.
    select type(stage)
    class is(integrand)
      stage(1) = U
      stage(2) = U * 0._R_P
      do s=1, self%stages
        stage(2) = stage(2) * self%A(s) + stage(1)%t(t=t + self%C(s) * Dt) * Dt
        stage(1) = stage(1) + stage(2) * self%B(s)
      enddo
      U = stage(1)
    endselect
    return
    endsubroutine integrate

This method takes five arguments: the first is passed as the bound argument and is the solver itself; the second is of integrand class, it being the integrand field that must be integrated one-step-over in time; the third is the stages array for storing the stage computations; the fourth is the time step used; the fifth is the current time value, passed to the residuals function to take into account the cases where the time derivative explicitly depends on time. The time step is not computed automatically (for example, by inspecting the passed integrand field); its value must be computed externally and passed to the integrate method.
It is worth noting that the stages memory registers, namely the array stage, must be passed as an argument: since it is a non-passed polymorphic dummy argument, we are not allowed to define it as an automatic array local to the integrate method.
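The two-register update of Listing 19 can be sketched in Python as follows. The coefficients below are the classical 3-stage Williamson (2N-storage) Runge-Kutta values, used here purely as an illustrative assumption; FOODIE's init method fills its own A, B, C tables for the chosen number of stages:

```python
def lsrk_step(u, residual, t, dt, A, B, C):
    """One low-storage (2N) Runge-Kutta step, mirroring Listing 19.

    Only two registers are needed: s1 holds the solution and s2 the
    accumulated increment, regardless of the number of stages.
    """
    s1, s2 = u, 0.0
    for a, b, c in zip(A, B, C):
        s2 = s2 * a + residual(t + c * dt, s1) * dt
        s1 = s1 + s2 * b
    return s1

# Classical 3-stage Williamson coefficients (third-order accurate);
# an assumption for illustration, not FOODIE's stored tables.
A = [0.0, -5.0 / 9.0, -153.0 / 128.0]
B = [1.0 / 3.0, 15.0 / 16.0, 8.0 / 15.0]
C = [0.0, 1.0 / 3.0, 3.0 / 4.0]

# Integrate dU/dt = U from U(0) = 1 to t = 1 in 100 steps.
u, n = 1.0, 100
for k in range(n):
    u = lsrk_step(u, lambda t, v: v, k / n, 1.0 / n, A, B, C)
```

Whatever the number of stages, s1 and s2 are the only state carried across stages, which is exactly the saving the 2N scheme offers over a general Runge-Kutta implementation.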

The Explicit Adams-Bashforth Class of Solvers
The explicit Adams-Bashforth class of solvers is exposed as a single derived type named adams_bashforth_integrator. This type provides four methods:
• init: initialize the integrator according to the options offered by the class of solvers;
• destroy: destroy a previously initialized integrator, freeing any allocated dynamic memory registers;
• integrate: integrate the integrand field one-step-over in time;
• update_previous: automatically update (cyclically) the previous time steps solutions.
As is common for FOODIE solvers, before using such a solver it must be defined as an instance of the exposed FOODIE integrator type, see Listing 20; once instantiated, it must be initialized, see Listing 21.

Listing 21. Example of initialization of an explicit Adams-Bashforth integrator.

    call integrator%init(steps=4)

In Listing 21 a 4-steps solver has been initialized. In fact, from Equation (6), a solver belonging to this class is completely defined once the number of time steps has been chosen. The complete definition of the adams_bashforth_integrator type is reported in Listing 22. As shown, the linear coefficients are stored as allocatable arrays whose values are initialized by the init method. After the solver has been initialized it can be used for integrating an integrand field, where my_integrand is a concrete (valid) extension of the integrand ADT, times are the times at each of the 4 steps considered for the current one-step-over integration, and previous are the memory registers where the previous time steps solutions are saved.
The complete implementation of the integrate method of the explicit Adams-Bashforth class of solvers is reported in Listing 24.
Listing 24. Implementation of the integrate method of the explicit Adams-Bashforth class.

    subroutine integrate(self, U, previous, Dt, t, autoupdate)
    class(adams_bashforth_integrator), intent(IN) :: self  ! Actual AB integrator.
    class(integrand), intent(INOUT) :: U                   ! Field to be integrated.
    ...
    integer(I_P) :: s                                      ! Steps counter.
    autoupdate_ = .true. ; if (present(autoupdate)) autoupdate_ = autoupdate
    do s=1, self%steps
      U = U + previous(s)%t(t=t(s)) * (Dt * self%b(s))
    enddo
    if (autoupdate_) call self%update_previous(U=U, previous=previous)
    return
    endsubroutine integrate

This method takes six arguments: the first is passed as the bound argument and is the solver itself; the second is of integrand class, it being the integrand field that must be integrated one-step-over in time; the third holds the previous time steps solutions; the fourth is the time step used; the fifth is an array of the time values of the steps considered for the current one-step-over integration, passed to the residuals function to take into account the cases where the time derivative explicitly depends on time; the sixth is a logical flag for enabling/disabling the cyclic update of the previous time steps solutions. The time step is not computed automatically (for example, by inspecting the passed integrand field); its value must be computed externally and passed to the integrate method.
It is worth noting that the method also performs the cyclic update of the previous time steps solutions memory registers. This can be disabled by passing autoupdate=.false., which is useful in the framework of predictor-corrector solvers.
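For a 2-step solver, the summation loop of Listing 24 reduces to the familiar second-order Adams-Bashforth update. A minimal Python sketch, with a plain history of residual values in place of the previous registers:

```python
def ab2_step(u, f_prev, f_curr, dt):
    """One 2-step Adams-Bashforth update: the summation loop of
    Listing 24 with linear coefficients b = (-1/2, 3/2)."""
    return u + dt * (1.5 * f_curr - 0.5 * f_prev)

# dU/dt = 2*t with an exact startup value: AB2 is second-order, hence
# exact for residuals linear in t.
dt = 0.5
u = 0.25            # exact U(0.5) = 0.5**2
f_prev = 2.0 * 0.0  # residual at t = 0
f_curr = 2.0 * 0.5  # residual at t = 0.5
u = ab2_step(u, f_prev, f_curr, dt)  # approximates U(1.0) = 1
```

The cyclic update that FOODIE performs in update_previous corresponds here to shifting f_curr into f_prev after each step.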

The Implicit Adams-Moulton Class of Solvers
The implicit Adams-Moulton class of solvers is exposed as a single derived type named adams_moulton_integrator. This type provides four methods:
• init: initialize the integrator according to the options offered by the class of solvers;
• destroy: destroy a previously initialized integrator, freeing any allocated dynamic memory registers;
• integrate: integrate the integrand field one-step-over in time;
• update_previous: automatically update (cyclically) the previous time steps solutions.
As is common for FOODIE solvers, before using such a solver it must be defined as an instance of the exposed FOODIE integrator type, see Listing 25; once instantiated, it must be initialized, see Listing 26.

Listing 26. Example of initialization of an implicit Adams-Moulton integrator.

    call integrator%init(steps=3)

In Listing 26 a 3-steps solver has been initialized. In fact, from Equation (7), a solver belonging to this class is completely defined once the number of time steps has been chosen. The complete definition of the adams_moulton_integrator type is reported in Listing 27. As shown, the linear coefficients are stored as allocatable arrays whose values are initialized by the init method.
Listing 27. Definition of the adams_moulton_integrator type.

After the solver has been initialized, it can be used for integrating an integrand field, as shown in Listing 28.
Listing 28. Example of usage of an Adams-Moulton integrator.

    real :: times(1:3)
    type(my_integrand) :: my_field
    type(my_integrand) :: previous(1:3)
    call integrator%integrate(U=my_field, previous=previous, Dt=Dt, t=times)

where my_integrand is a concrete (valid) extension of the integrand ADT, times are the times at each of the 3 steps considered for the current one-step-over integration, and previous are the memory registers where the previous time steps solutions are saved.
The complete implementation of the integrate method of the implicit Adams-Moulton class of solvers is reported in Listing 29.
Listing 29. Implementation of the integrate method of the implicit Adams-Moulton class.

    subroutine integrate(self, U, previous, Dt, t, autoupdate)
    class(adams_moulton_integrator), intent(IN) :: self  ! Actual AM integrator.
    class(integrand), intent(INOUT) :: U                 ! Field to be integrated.
    ...
    integer(I_P) :: s                                    ! Steps counter.
    autoupdate_ = .true. ; if (present(autoupdate)) autoupdate_ = autoupdate
    if (self%steps > 0) then
      U = previous(self%steps) + U%t(t=t(self%steps) + Dt) * (Dt * self%b(self%steps))
      do s=0, self%steps - 1
        U = U + previous(s+1)%t(t=t(s+1)) * (Dt * self%b(s))
      enddo
      if (autoupdate_) call self%update_previous(U=U, previous=previous)
    else
      U = U + U%t(t=t(1)) * (Dt * self%b(0))
    endif
    return
    endsubroutine integrate

This method takes six arguments: the first is passed as the bound argument and is the solver itself; the second is of integrand class, it being the integrand field that must be integrated one-step-over in time; the third holds the previous time steps solutions; the fourth is the time step used; the fifth is an array of the time values of the steps considered for the current one-step-over integration, passed to the residuals function to take into account the cases where the time derivative explicitly depends on time; the sixth is a logical flag for enabling/disabling the cyclic update of the previous time steps solutions. The time step is not computed automatically (for example, by inspecting the passed integrand field); its value must be computed externally and passed to the integrate method.
It is worth noting that the method also performs the cyclic update of the previous time steps solutions memory registers. This can be disabled by passing autoupdate=.false., which is useful in the framework of predictor-corrector solvers.

The Predictor-Corrector Adams-Bashforth-Moulton Class of Solvers
The predictor-corrector Adams-Bashforth-Moulton class of solvers is exposed as a single derived type named adams_bashforth_moulton_integrator. This type provides three methods:
• init: initialize the integrator according to the options offered by the class of solvers;
• destroy: destroy a previously initialized integrator, freeing any allocated dynamic memory registers;
• integrate: integrate the integrand field one-step-over in time.
As is common for FOODIE solvers, before using such a solver it must be defined as an instance of the exposed FOODIE integrator type, see Listing 30. Once an integrator of this type has been instantiated, it must be initialized before use, for example see Listing 31.
Listing 31. Example of initialization of a predictor-corrector Adams-Bashforth-Moulton integrator.

    call integrator%init(steps=3)

In Listing 31 a 3-steps solver has been initialized. In fact, from Equation (8), a solver belonging to this class is completely defined once the number of time steps has been chosen. The complete definition of the adams_bashforth_moulton_integrator type is reported in Listing 32. As shown, the linear coefficients are stored as allocatable arrays whose values are initialized by the init method. The usage follows the previous classes, where my_integrand is a concrete (valid) extension of the integrand ADT, times are the times at each of the steps considered for the current one-step-over integration, and previous are the memory registers where the previous time steps solutions are saved.
The complete implementation of the integrate method of the predictor-corrector Adams-Bashforth-Moulton class of solvers is reported in Listing 34.
Listing 34. Implementation of the integrate method of the predictor-corrector Adams-Bashforth-Moulton class.

    subroutine integrate(self, U, Dt, t)
    class(integrator_adams_bashforth_moulton), intent(inout) :: self  ! Integrator.
    class(integrand_object), intent(inout) :: U                       ! Field to be integrated.
    real(R_P), intent(in) :: Dt                                       ! Time step.
    real(R_P), intent(in) :: t                                        ! Time.
    integer(I_P) :: s                                                 ! Step counter.
    ...

This method takes four arguments: the first is passed as the bound argument and is the solver itself; the second is of integrand class, it being the integrand field that must be integrated one-step-over in time; the third is the time step used; the fourth is the current time. The time step is not computed automatically (for example, by inspecting the passed integrand field); its value must be computed externally and passed to the integrate method.
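The predictor-corrector logic can be sketched in Python: an Adams-Bashforth step predicts, then an Adams-Moulton formula (here the trapezoidal one) corrects using the residual evaluated at the prediction. The scheme and coefficients below are the standard textbook ones, but the function names are illustrative assumptions, not FOODIE's API:

```python
def abm2_step(u, f_prev, f_curr, residual, t, dt):
    """Predict with 2-step Adams-Bashforth, correct with the
    trapezoidal Adams-Moulton formula (PECE mode)."""
    u_pred = u + dt * (1.5 * f_curr - 0.5 * f_prev)  # predictor (AB2)
    f_pred = residual(t + dt, u_pred)                # evaluate
    return u + 0.5 * dt * (f_curr + f_pred)          # corrector (AM)

# Integrate dU/dt = U from U(0) = 1 to t = 1, starting the history
# with the exact value at the first step.
n, dt = 100, 0.01
f = lambda t, v: v
u = 2.718281828459045 ** dt          # exact U(dt)
f_prev, f_curr = f(0.0, 1.0), f(dt, u)
for k in range(1, n):
    u_new = abm2_step(u, f_prev, f_curr, f, k * dt, dt)
    f_prev, f_curr = f_curr, f(k * dt + dt, u_new)
    u = u_new
```

The history shift in the loop body is the analogue of the auto-update of the previous registers discussed above: disabling it between the predictor and the corrector stages is precisely why the Adams-Bashforth and Adams-Moulton integrate methods expose the autoupdate flag.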

The Leapfrog Solver
The explicit Leapfrog class of solvers is exposed as a single derived type named leapfrog_integrator. This type provides two methods:
• init: initialize the integrator according to the options offered by the class of solvers;
• integrate: integrate the integrand field one-step-over in time.
As is common for FOODIE solvers, before using such a solver it must be defined as an instance of the exposed FOODIE integrator type, see Listing 35.
Listing 35. Definition of an explicit Leapfrog integrator.

    use FOODIE, only : leapfrog_integrator
    type(leapfrog_integrator) :: integrator

Once an integrator of this type has been instantiated, it must be initialized before use, for example see Listing 36.

Listing 36. Example of initialization of an explicit Leapfrog integrator.
    ! default coefficients nu=0.01, alpha=0.53
    call integrator%init()
    ! custom coefficients
    call integrator%init(nu=0.015, alpha=0.6)

The complete definition of the leapfrog_integrator type is reported in Listing 37. As shown, the filter coefficients are initialized to default values; suitable values are set by the init method.
Listing 37. Definition of the leapfrog_integrator type.

    type :: leapfrog_integrator
      private
      real(R_P) :: nu = 0.01_R_P     ! Robert-Asselin filter coefficient.
      real(R_P) :: alpha = 0.53_R_P  ! Robert-Asselin-Williams filter coefficient.
      contains
        procedure, pass(self), public :: init
        procedure, pass(self), public :: integrate
    endtype leapfrog_integrator

After the solver has been initialized, it can be used for integrating an integrand field, as shown in Listing 38.
Listing 38. Example of usage of a Leapfrog integrator.

    real :: times(1:2)
    type(my_integrand) :: filter_displacement
    type(my_integrand) :: my_field
    type(my_integrand) :: previous(1:2)
    call integrator%integrate(U=my_field, previous=previous, filter=filter_displacement, Dt=Dt, &
                              t=times)

where my_integrand is a concrete (valid) extension of the integrand ADT, previous are the memory registers where the previous time steps solutions are saved, filter_displacement is the register used for computing the displacement of the applied filter, if any, and times are the times at each of the 2 steps considered for the current one-step-over integration.
The complete implementation of the integrate method of the explicit Leapfrog class of solvers is reported in Listing 39.
Listing 39. Implementation of the integrate method of the explicit Leapfrog class.

    subroutine integrate(self, U, previous, Dt, t, filter)
    class(leapfrog_integrator), intent(IN) :: self       ! LF integrator.
    class(integrand), intent(INOUT) :: U                 ! Field to be integrated.
    class(integrand), intent(INOUT) :: previous(1:2)     ! Previous time steps solutions.
    real(R_P), intent(in) :: Dt                          ! Time step.
    real(R_P), intent(IN) :: t                           ! Time.
    class(integrand), optional, intent(INOUT) :: filter  ! Filter field displacement.
    U = previous(1) + previous(2)%t(t=t) * (Dt * 2._R_P)
    if (present(filter)) then
      filter = (previous(1) - previous(2) * 2._R_P + U) * self%nu * 0.5_R_P
      previous(2) = previous(2) + filter * self%alpha
      U = U + filter * (self%alpha - 1._R_P)
    endif
    previous(1) = previous(2)
    previous(2) = U
    return
    endsubroutine integrate

This method takes six arguments: the first is passed as the bound argument and is the solver itself; the second is of integrand class, it being the integrand field that must be integrated one-step-over in time; the third holds the previous time steps solutions; the fourth is the time step used; the fifth is the current time value, passed to the residuals function to take into account the cases where the time derivative explicitly depends on time; the sixth is the optional filter displacement register. The time step is not computed automatically (for example, by inspecting the passed integrand field); its value must be computed externally and passed to the integrate method. It is worth noting that, if the filter displacement argument is not passed, the solver reverts back to the standard unfiltered Leapfrog method.
It is worth noting that the method also performs the cyclic update of the previous time steps solutions memory registers.In particular, if the filter displacement argument is passed the method performs the RAW filtering.
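The RAW-filtered update of Listing 39 can be sketched in Python; residual stands in for U%t, and the (nu, alpha) defaults mirror Listing 37:

```python
def leapfrog_raw_step(prev1, prev2, residual, t, dt, nu=0.01, alpha=0.53):
    """One RAW-filtered Leapfrog step, mirroring Listing 39.

    prev1, prev2 hold U(n-1) and U(n); returns the updated register
    pair, i.e. (filtered U(n), U(n+1)).
    """
    u = prev1 + residual(t, prev2) * (2.0 * dt)  # leapfrog update
    d = (prev1 - 2.0 * prev2 + u) * nu * 0.5     # filter displacement
    prev2 = prev2 + d * alpha                    # RAW: filter U(n)
    u = u + d * (alpha - 1.0)                    # RAW: filter U(n+1)
    return prev2, u

# dU/dt = 2 (constant residual): Leapfrog is exact and the filter
# displacement vanishes, so U(1) = 2 is recovered.
dt, u_nm1, u_n = 0.1, 0.0, 0.2
for k in range(1, 10):
    u_nm1, u_n = leapfrog_raw_step(u_nm1, u_n, lambda t, v: 2.0, k * dt, dt)
```

Setting alpha = 1 recovers the plain Robert-Asselin filter (only U(n) is touched), while skipping the filter block entirely gives the standard unfiltered Leapfrog method, as noted above.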

General Remarks
Table 8 presents a comparison of the relevant parts of Equations (2)-(6) and (9) with the corresponding FOODIE implementations reported in Listings 9, 14, 19, 24 and 39, respectively. This comparison shows that the integrand ADT has allowed a very high-level implementation syntax: the Fortran implementation is almost equivalent to the rigorous mathematical formulation. This directly implies that the implementation of an ODE solver in the FOODIE library is clearer, more concise and less error-prone than a hard-coded implementation where the solvers must be implemented for each specific definition of the integrand type, it being not abstract.
Table 8. Comparison between the rigorous mathematical formulation and the FOODIE high-level implementation; the syntax "(s)" and "(ss)" implies the summation operation.

Tests and Examples
For the assessment of the FOODIE capabilities, the oscillator test is considered.

Oscillation Equations Test
Let us consider the oscillator problem, a simple yet interesting IVP. Briefly, the oscillator problem is a prototype of non-dissipative, oscillatory phenomena.
For example, let us consider a pendulum subjected to the Coriolis acceleration without dissipation, the motion equations of which can be described by the ODE system (11):

    U_t = R(U),  U = [v1, v2]^T,  R(U) = [-f v2, f v1]^T,    (11)

where the frequency is chosen as f = 10^-4. The ODE system (11) is completed by the following initial conditions:

    v1(t0) = 0,  v2(t0) = 1,    (12)

where t0 = 0 is the initial time considered.

The IVP constituted by Equations (11) and (12) is (apparently) simple and its exact solution is known:

    v1(t0 + ∆t) = v1(t0) cos(f ∆t) - v2(t0) sin(f ∆t)
    v2(t0 + ∆t) = v1(t0) sin(f ∆t) + v2(t0) cos(f ∆t),    (13)

where ∆t is an arbitrary time step. This problem is non-stiff, meaning that the solution is constituted by only one time scale, namely the single frequency f. The problem is, however, only apparently simple: in a non-dissipative oscillatory problem, errors in the amplitude approximation can rapidly drive the subsequent approximations to an unphysical solution. This is of particular relevance if the numerically approximated solution constitutes a prediction far from the initial conditions, which is the common case in weather forecasting.
Because the Oscillation system (11) possesses a closed-form exact solution, the discussion of this test has a twofold aim: to assess the accuracy of the FOODIE built-in solvers, comparing the numerical solutions with the exact one, and to demonstrate how simple it is to solve this prototypical problem by means of FOODIE.
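The claim about amplitude errors can be checked directly. Assuming the oscillation system dv1/dt = -f v2, dv2/dt = f v1, the explicit forward Euler update multiplies the amplitude by sqrt(1 + (f ∆t)^2) at every step, so the numerical oscillator spirals outward while the exact amplitude stays constant. A small Python sketch, with f and ∆t chosen as in the test below purely for illustration:

```python
import math

# Oscillation system (assumed form): dv1/dt = -f*v2, dv2/dt = f*v1.
f, dt, steps = 1.0e-4, 100.0, 10000
v1, v2 = 0.0, 1.0
for _ in range(steps):
    # Forward Euler; the tuple assignment evaluates both right-hand
    # sides with the old (v1, v2), as required.
    v1, v2 = v1 - f * v2 * dt, v2 + f * v1 * dt

amp = math.hypot(v1, v2)  # the exact amplitude would remain 1
```

With f ∆t = 0.01 the amplitude grows by the factor (1 + 10^-4)^(steps/2), roughly 1.65 after 10^4 steps: a plainly unphysical drift for a conservative system, which is why amplitude-preserving schemes matter for long-range predictions.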

Errors Analysis
For the analysis of the accuracy of each solver, we have integrated the Oscillation Equation (11) with different, decreasing time steps, ∆t ∈ [5000, 2500, 1250, 625, 320, 100]. The error is estimated by the L2 norm of the difference between the exact (U_e) and the numerical (U_∆t) solutions for each time step:

    ε(∆t) = ||U_e - U_∆t||_2 = sqrt( Σ_{s=1}^{N_s} ( U_e(t0 + s∆t) - U_∆t(t0 + s∆t) )^2 ),

where N_s is the total number of time steps performed to reach the final integration time.

Using two pairs of subsequent-decreasing time step solutions, it is possible to estimate the observed order of accuracy of the solver employed:

    p = log( ε(∆t1) / ε(∆t2) ) / log( ∆t1 / ∆t2 ),

where ∆t1/∆t2 > 1.
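The observed-order estimate translates directly into code; a small Python helper (names are illustrative) computing it from two error norms:

```python
import math

def observed_order(err_coarse, err_fine, dt_coarse, dt_fine):
    """Observed order of accuracy from two (error, time step) pairs,
    with dt_coarse / dt_fine > 1."""
    return math.log(err_coarse / err_fine) / math.log(dt_coarse / dt_fine)

# A second-order solver halving the time step divides the error by 4.
p = observed_order(0.04, 0.01, 2.0, 1.0)
```

Applying this helper to each pair of subsequent time steps in the range above yields one observed-order estimate per refinement, which should approach the formal order of the scheme as ∆t decreases.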

FOODIE Aware Implementation of an Oscillation Numerical Solver
The IVP (11) can be easily solved by means of the FOODIE library. The first block of a FOODIE-aware solution consists in defining an oscillation integrand field, namely a concrete extension of the FOODIE integrand type. Listing 40 reports the implementation of such an integrand field, which is contained in the tests suite shipped within the FOODIE library. The oscillation field extends the integrand ADT, making it a concrete type. This derived type is very simple: it has 5 data members for storing the state vector and some auxiliary variables, and it implements all the deferred methods necessary for defining a valid concrete extension of the integrand ADT (plus 2 auxiliary methods that are not relevant for our discussion). The key point here is the implementation of the deferred methods: the integrand ADT does not impose any structure on the data members, which are consequently free to be customized by the client code. In this example the data members have a very simple, clean and concise structure:
• dims is the number of space dimensions; in the case of Equation (11) we have dims = 2, however the test implementation has been kept more general, parametrizing this dimension in order to easily allow future modifications of the test-program itself;
• f stores the frequency of the oscillatory problem solved, which is here set to 10^-4, but it can be changed at runtime in the test-program;
• U is the state vector, corresponding directly to the state vector of Equation (11).
As Listing 40 shows, the FOODIE implementation strictly corresponds to the mathematical formulation, embracing all the relevant mathematical aspects in one derived type, a single object. Here we do not review the implementation of all the deferred methods, this being out of the scope of the present work: the interested reader can see the tests suite sources shipped within the FOODIE library. However, some of these methods are relevant for our discussion, thus they are reviewed below.

dOscillation_dt, the Oscillation Residuals Function
Probably the most important method for an IVP solver is the residuals function: as a matter of fact, the ODE equations are implemented in the residuals function. The FOODIE ADT strongly alleviates the subtle problems that can arise when the ODE solver is hard-coded together with the specific ODE equations. The integrand ADT specifies the precise interface the residuals function must have: if the client code implements a compliant interface, the FOODIE solvers will work as expected, confining the possible error locations to the ODE equations, the solvers having been designed on the ADT and not on the concrete type.
Listing 41 reports the implementation of the oscillation residuals function: it is very clear and concise. Moreover, comparing this listing with Equation (11), the close correspondence between the mathematical formulation and the Fortran implementation is evident.
Listing 41. Implementation of the oscillation integrand residuals function. It is very simple and clear: first all the auxiliary data are copied into the operator result, then the state vector of the result is populated with the addition of the state vectors of the left-hand side and the right-hand side. This is very intuitive from the mathematical point of view and it helps to reduce implementation errors. Similar implementations are possible for all the other operators necessary to define a valid integrand ADT concrete extension.

Assignment of an Oscillation Object
The assignment overloading of the oscillation type is the last key method enforcing the conciseness of the FOODIE-aware implementation. Listing 43 reports the implementation of the assignment overloading. Essentially, all the data members of the left-hand side are assigned the values of the corresponding right-hand side members. Notably, for the assignment of the state vector and of the previous time steps solution array we take advantage of the automatic re-allocation of left-hand side variables that are not allocated, or allocated differently from the right-hand side, which is a Fortran 2003 feature. In spite of its simplicity, the assignment overloading is a key method enabling the usage of FOODIE solvers: effectively, the assignment between two integrand ADT variables is ubiquitous in the solvers implementations, see Equation (3) for example.
Listing 43. Implementation of the oscillation integrand assignment.

    subroutine oscillation_assign_oscillation(lhs, rhs)
    class(oscillation), intent(INOUT) :: lhs  ! Left hand side.
    class(integrand), intent(IN) :: rhs       ! Right hand side.
    select type(rhs)
    class is(oscillation)
      lhs%dims = rhs%dims
      lhs%f = rhs%f
      if (allocated(rhs%U)) lhs%U = rhs%U
    endselect
    return
    endsubroutine oscillation_assign_oscillation

FOODIE Numerical Integration
Using the above discussed oscillation type it is very easy to solve the IVP (11) by means of the FOODIE library. Listing 44 presents the numerical integration of system (11) by means of the Leapfrog RAW-filtered method. In the example, the integration is performed with 10^4 steps with a fixed ∆t = 10^2 until the time t = 10^6 is reached. The example also shows that for starting a multi-step scheme such as the Leapfrog one a lower-order or equivalent-order one-step scheme is necessary: in the example the first 2 steps are computed by means of the one-step TVD/SSP Runge-Kutta 2-stages scheme. Note that the memory registers for storing the Runge-Kutta stages and the RAW filter displacement must be handled by the client code. Listing 44 demonstrates how simple, clear and concise it is to solve an IVP by FOODIE solvers. Moreover, it proves how simple and effective it is to apply different solvers in a coupled algorithm, which greatly simplifies the development of new hybrid solvers with self-adaptive time step size.
Listing 44. Numerical integration of the oscillation system by means of the Leapfrog RAW-filtered method.

use foodie, only : leapfrog_integrator, tvd_runge_kutta_integrator
type(leapfrog_integrator)        :: lf_integrator ! Leapfrog integrator.
type(tvd_runge_kutta_integrator) :: rk_integrator ! Runge-Kutta integrator.
type(oscillation)                :: rk_stage(1:2) ! Runge-Kutta stages.
type(oscillation)                :: previous(1:2) ! Previous time steps solution.
type(oscillation)                :: oscillator    ! Oscillation field.
type(oscillation)                :: filter        ! Filter displacement.
integer                          :: step          ! Time steps counter.
real                             :: Dt            ! Time step.

call lf_integrator%init()
call rk_integrator%init(stages=2)
call oscillator%init(initial_state=[0.0, 1.0], f=1e-4, steps=2)
Dt = 100.0
do step=1, 10000
  if (2 >= step) then
    call rk_integrator%integrate(U=oscillator, stage=rk_stage, Dt=Dt, t=step*Dt)
    previous(step) = oscillator
  else
    call lf_integrator%integrate(U=oscillator, previous=previous, filter=filter, Dt=Dt, &
                                 t=step*Dt)
  endif
enddo
call print_results(U=oscillator)

4.1.3. Adams-Bashforth

Table 9 summarizes the Adams-Bashforth error analysis. As expected, the Adams-Bashforth 1-step solution, which reverts back to the explicit forward Euler one, is unstable for all the ∆t exercised.
The observed orders of accuracy for the Adams-Bashforth solvers using 2, 3 and 4 time steps tend to 1.5, 2.5 and 3.5, in agreement with the expected formal orders of 2, 3 and 4, respectively. Comparing the errors at the finest time resolution, i.e., ∆t = 100, we find that the L2 norm decreases by about 2 orders of magnitude as the solver's accuracy increases by 1 order. This also means that, fixing a tolerance on the errors, the higher the solver's accuracy the larger the admissible time step. As an example, assuming that admissible errors are of O(10^-2), with the 4-steps solver we can use ∆t = 625, performing N_s = t_final/625 numerical integration steps, whereas using a 3-steps solver we must adopt ∆t = 100, performing 6.25 × N_s numerical integration steps instead of N_s. The computational cost is only slightly affected by the number of previous time steps considered: recalling Equation (6), one can observe that there is only one new evaluation of the residuals function R independently of the number of previous time steps, thus the cost is affected only by the increasing number of residuals summations, whose cost is typically negligible with respect to the cost of the R evaluation. Consequently, the accuracy order has a strong impact on the overall numerical efficiency: to reduce the computational cost, the usage of high order Adams-Bashforth solvers with larger time steps should be preferred over low order solvers with smaller time steps. Figure 1 shows, for each solver exercised, the X(t) and Y(t) solutions for t ∈ [0, 10^6]: the plots report a global overview of the solution for all the instants considered (left subplots) and a detailed zoom over the last instants of the integration (right subplots) for evaluating the numerical errors accumulation. For the sake of clarity, the strongly unstable solutions are omitted from the subplots concerning the final integration instants, namely the solutions for large ∆t.

Figure 1 emphasizes the instability generation for some steps-number/∆t pairs. The 2- and 4-steps solutions are unstable for ∆t = 5000 → f · ∆t = 0.5. On the contrary, the 3-steps solution is stable, but the amplitude is damped and the solution vanishes as the integration proceeds. The 2- and 4-steps solutions show a phase error that decreases as the time resolution increases, whereas the 3-steps solution has null phase error.

Adams-Bashforth-Moulton

Table 10 summarizes the Adams-Bashforth-Moulton error analysis. The same considerations done for the Adams-Bashforth solutions can be repeated for the Adams-Bashforth-Moulton ones, thus they are omitted for the sake of conciseness. An interesting result concerns the observed errors: the O(10^-2) error is now obtained with ∆t = 1250 for the 4-steps solver, which is thus 2 times faster than the corresponding Adams-Bashforth 4-steps solver. Considering that the computational cost of a single Adams-Bashforth-Moulton step is only slightly greater than that of the corresponding Adams-Bashforth step, the efficiency gain is not negligible. Figure 2 shows plots similar to those of Figure 1 discussed above. Differently from the Adams-Bashforth class, the amplitude damping feature is now possessed by the 2-steps solver, see Figure 2b, while all solutions show phase errors that decrease as the time resolution increases.
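The observed orders quoted above follow from pairs of errors at successive time resolutions. A small Python sketch of the computation (the error values below are hypothetical, for illustration only):

```python
import math

def observed_order(errors, dts):
    """Observed accuracy order from errors at two time resolutions:
       p = log(e1/e2) / log(dt1/dt2)."""
    return math.log(errors[0] / errors[1]) / math.log(dts[0] / dts[1])

# hypothetical L2 errors obtained halving dt with a formally 2nd-order solver
p = observed_order(errors=[4.0e-3, 1.0e-3], dts=[200.0, 100.0])
```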

Adams-Moulton
Table 11 summarizes the Adams-Moulton error analysis. The implicit Adams-Moulton solvers behave much like the Adams-Bashforth-Moulton ones: they have similar errors and observed orders for the same formal order considered. However, the implicit Adams-Moulton class uses one step less than the corresponding Adams-Bashforth-Moulton class: this could promise a higher computational efficiency. Notwithstanding, for solving the implicit non-linearity embedded in the Adams-Moulton solvers an iterative algorithm must be employed: for the results presented, 5 iterations of a fixed-point algorithm have been computed. This strongly reduces the eventual gain of computational efficiency with respect to the Adams-Bashforth-Moulton class. Figure 3 shows plots similar to those of Figure 2 discussed above: there are no relevant differences between the 2 classes of solvers.

Leapfrog

The Leapfrog solutions are in agreement with the expected results: both the unfiltered and the RAW-filtered solutions show an observed order of accuracy that tends to the formal 2nd order, as reported in Table 12. The two solutions are almost the same, see Figure 4.

Low Storage Runge-Kutta

Table 13 summarizes the error analysis of the low storage Runge-Kutta solvers. The 1-stage solution, which reverts back to the explicit forward Euler one, is unstable for all the ∆t exercised.
The observed orders of accuracy using 5, 6, 7, 12, 13 and 14 stages tend to 3.5, in agreement with the expected formal order of 4. Comparing the errors at the finest time resolution, i.e., ∆t = 100, we find that the L2 norm decreases (slowly) as the number of stages increases. Figures 5 and 6 show the solutions for all the stages tested.

TVD/SSP Runge-Kutta

Table 14 summarizes the error analysis of the TVD/SSP Runge-Kutta solvers. The 1-stage solution, which reverts back to the explicit forward Euler one, is unstable for all the ∆t exercised.
The observed orders of accuracy using 2, 3, and 5 stages tend to 1.5, 2.5 and 3.5, in agreement with the expected formal orders of 2, 3 and 4, respectively. Comparing the errors at the finest time resolution, i.e., ∆t = 100, we find that the L2 norm decreases as the number of stages increases (roughly 2 orders of magnitude for each stage). Figure 7 shows the solutions for all the stages tested.
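The 2-stages TVD/SSP Runge-Kutta scheme admits a compact convex-combination form; a minimal Python sketch on the oscillation system (the standard SSP(2,2) Heun-like form is assumed here for illustration, not FOODIE's actual code):

```python
def residual(u, f=1e-4):
    # Oscillation residuals: R(U) = (-f*U2, f*U1)
    return [-f * u[1], f * u[0]]

def ssp_rk2_step(u, dt, f=1e-4):
    """One TVD/SSP Runge-Kutta 2-stages step:
       u1 = u + dt*R(u); u_new = 1/2*u + 1/2*(u1 + dt*R(u1))."""
    r = residual(u, f)
    u1 = [a + dt * b for a, b in zip(u, r)]
    r1 = residual(u1, f)
    return [0.5 * a + 0.5 * (b + dt * c) for a, b, c in zip(u, u1, r1)]

u = ssp_rk2_step([0.0, 1.0], dt=100.0)
```

For this linear system the step reproduces the 2nd-order Taylor expansion, giving u ≈ (-0.01, 0.99995) after one step with f·∆t = 0.01.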

Benchmarks on Parallel Frameworks
As aforementioned, FOODIE is unaware of any parallel paradigm or programming model the client codes adopt. As a consequence, the parallel performance measurements presented in this section are aimed only at proving that the FOODIE environment does not degrade the parallel scaling of the baseline code implemented without FOODIE.
To obtain such a proof, the 1D Euler PDE system described previously is numerically solved with FOODIE-aware test codes that in turn exploit parallel resources by means of:

•	the CoArray Fortran (CAF) model, for shared and distributed memory architectures;
•	the OpenMP directive-based model, for shared memory architectures only.

In order to measure the performances of the parallel-enabled FOODIE tests, the strong and weak scaling have been considered. For the strong scaling the speedup has been computed as

s(N, K) = T_serial(N) / T_parallel(N, K)

where N is the problem size, K the number of parallel resources used (namely the physical cores), T_serial is the CPU time of the serial code and T_parallel the one of the parallel code. The ideal speedup is linear with slope equal to 1. The efficiency correlated to the strong scaling measurement is defined as

E(N, K) = s(N, K) / K

The maximum ideal efficiency is obviously the unity.
For the weak scaling measurement the sizeup has been computed as

z(N_K, K) = (N_K / N_1) · T_serial(N_1) / T_parallel(N_K, K)

where N_1 is the minimum size considered and N_K is the size used for the test computed with K parallel resources. If N_K is scaled proportionally to N_1, namely N_K = K · N_1, the ideal sizeup is again linear with slope equal to 1. The efficiency correlated to the weak scaling is defined as

E(N_K, K) = z(N_K, K) / K

The maximum ideal efficiency is obviously the unity. The same 1D Euler PDEs problem is also solved by parallel-enabled codes that are not based on FOODIE: their solutions provide a reference for measuring the effect of the FOODIE abstraction on the parallel scaling.
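The strong scaling metrics reduce to two divisions per core count; a short Python sketch with hypothetical timing data (the numbers are illustrative, not the measured ones):

```python
def strong_scaling(t_serial, t_parallel_by_cores):
    """Speedup s = T_serial/T_parallel and efficiency E = s/K per core count K."""
    metrics = {}
    for k, t_parallel in t_parallel_by_cores.items():
        s = t_serial / t_parallel
        metrics[k] = (s, s / k)
    return metrics

# hypothetical mean CPU times (seconds) for one time step integration
metrics = strong_scaling(t_serial=12.0, t_parallel_by_cores={1: 12.0, 4: 3.3, 12: 1.25})
```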

CAF Benchmark
This subsection reports the parallel scaling analysis of the Euler 1D test programs (with and without FOODIE) parallelized by means of the CoArray Fortran (CAF) model. This parallel model is based on the concept of coarray introduced in the Fortran 2008 standard: the array syntax is extended by introducing the so-called codimension, a new kind of array indexing. Essentially, a CAF-enabled code is designed to be replicated a certain number of times and all the copies, conventionally named images, are executed asynchronously. Each image has its own set of data (memory) and the codimension indexes are used to access the (remote) memory of the other images. The CAF approach allows the implementation of the Partitioned Global Address Space (PGAS) model following the SPMD (single program, multiple data) parallelization paradigm. The programmer must take care of defining the coarray variables and of synchronizing the images when necessary. This approach requires the refactoring of legacy serial codes, but it allows the exploitation of both shared and distributed memory architectures. Moreover, it is a standard feature of Fortran (2008), thus it is not chained to any particular compiler vendor extension.
The benchmarks shown in this section have been done on a dual Intel(R) Xeon(R) CPU X5650 hexa-core workstation, for a total of 12 physical cores, coupled with 24 GB of RAM. In order to perform an accurate analysis, 4 different codes have been considered:

•	FOODIE-aware codes: a serial code and a CAF-enabled code;
•	procedural codes without using the FOODIE library: a serial code and a CAF-enabled code.

The Euler conservation laws are integrated for 30 time steps by means of the TVD RK(5,4) solver: the measured CPU time used for computing the scaling efficiencies is the average over the 30 integrations, thus representing the mean CPU time for computing one time step integration.
For the strong scaling, the benchmark has been conducted with 240,000 finite volumes. Figure 8a summarizes the strong scaling analysis: it shows that the FOODIE-based code scales similarly to the baseline code without FOODIE. For the weak scaling the minimum size is 24,000 finite volumes and the size is scaled linearly with the CAF images, thus N_12 = 288,000 cells. Figure 8b summarizes the weak scaling analysis and it essentially confirms that the FOODIE-based code scales similarly to the baseline code without FOODIE.
Both the strong and weak scaling analyses point out that, for the computing architecture considered, the parallel scaling is reasonable up to 12 cores, the efficiency being always satisfactory.
To complete the comparison, the absolute CPU time consumed by the two families of codes (with and without FOODIE) must be considered. Table 15 summarizes the benchmark results. As shown, the procedural and the FOODIE-aware codes consume a very similar CPU time for both the strong and the weak benchmarks. The same results are shown in Figure 9. These results prove that the abstraction of the FOODIE environment does not degrade the computational efficiency.

OpenMP Benchmark
This subsection reports the parallel scaling analysis of the Euler 1D test programs (with and without FOODIE) parallelized by means of the OpenMP directive-based paradigm. This parallel model is based on the concept of threads: an OpenMP-enabled code starts as a single (master) thread and, at run-time, it is able to generate a team of (many) threads that work concurrently on the parallelized parts of the code, thus reducing the CPU time necessary for completing such parts. The parallelization is made by means of directives explicitly inserted by the programmer: the communications between threads are automatically handled by the compiler (through the provided OpenMP library used as back-end). The OpenMP parallel paradigm is not a standard feature of Fortran; rather, it is an extension provided by the compiler vendors. This parallel paradigm constitutes an effective and easy approach for parallelizing legacy serial codes; however, its usage is limited to shared memory architectures because all the threads must have access to the same memory.
The benchmarks shown in this section have been done on a dual Intel(R) Xeon(R) CPU X5650 hexa-core workstation, for a total of 12 physical cores, coupled with 24 GB of RAM. In order to perform an accurate analysis, 4 different codes have been considered:

•	FOODIE-aware codes: a serial code and an OpenMP-enabled code;
•	procedural codes without using the FOODIE library: a serial code and an OpenMP-enabled code.

These codes (see Appendix A.1 for the implementation details) have been compiled by means of the GNU gfortran compiler v5.2.0 with the -O2 -fopenmp compilation flags.
The Euler conservation laws are integrated for 30 time steps by means of the TVD RK(5,4) solver: the measured CPU time used for computing the scaling efficiencies is the average of the 30 integrations, thus representing the mean CPU time for computing one time step integration.
For the strong scaling, the benchmark has been conducted with 240,000 finite volumes. Figure 10a summarizes the strong scaling analysis: it shows that the FOODIE-based code scales similarly to the baseline code without FOODIE. For the weak scaling the minimum size is 24,000 finite volumes and the size is scaled linearly with the OpenMP threads, thus N_12 = 288,000 cells. Figure 10b summarizes the weak scaling analysis and it essentially confirms that the FOODIE-based code scales similarly to the baseline code without FOODIE.
Both the strong and weak scaling analyses point out that, for the computing architecture considered, the parallel scaling is reasonable up to 8 cores: using 12 cores the measured efficiencies become unsatisfactory, dropping below 60%.
To complete the comparison, the absolute CPU time consumed by the two families of codes (with and without FOODIE) must be considered. Table 16 summarizes the benchmark results. As shown, the procedural and the FOODIE-aware codes consume a very similar CPU time for both the strong and the weak benchmarks. The same results are shown in Figure 11. These results prove that the abstraction of the FOODIE environment does not degrade the computational efficiency.

Concluding Remarks and Perspectives
The present manuscript provides a detailed analysis of the implementation and tests of a software framework for the numerical solution of Ordinary Differential Equations (ODEs) for evolutionary (dynamic) problems. The numerical solution of general, non-linear differential equation systems of the form U_t = R(t, U), U_0 = F (where U is the vector of state variables, a function of the time-like independent variable t, U_t = dU/dt, R is the (vectorial) residuals function, which can be a non-linear function of the solution U itself, and F is the (vectorial) initial conditions function) is a ubiquitous mathematical representation of many dynamic phenomena. As a consequence, the development of new mathematical and numerical methods for solving ODEs is of paramount interest for mathematicians and physicists: in particular, it is crucial to minimize implementation errors, to maximize source code clearness and conciseness, and to speed up the implementation of new ideas while preserving computational efficiency. Such goals are

The most CPU time consuming part of a finite volume scheme is the fluxes computation across the cells interfaces. Such a computation corresponds to a loop over all the cells interfaces. Listing A2 shows a pseudo-code example of such a computation.
Listing A2. Pseudo-code example of fluxes computation.

do i=0, Ni
  F(:, i) = compute_fluxes(U(:, i), U(:, i+1))
enddo

In the pseudo-code of Listing A2 it has been emphasized that the fluxes across an interface depend on the cells at the left and right of the interface itself. The key point for the parallelization of such an algorithm is to compute the fluxes concurrently, using as much as possible the parallel resources provided by the running architecture. As a consequence, the loop shown above over the whole domain is split into sub-domains (explicitly or implicitly, according to the parallel model adopted) and the fluxes of each sub-domain are computed concurrently.

procedure, pass(lhs),  public :: integrand_multiply_integrand => euler_multiply_euler
procedure, pass(lhs),  public :: integrand_multiply_real => euler_multiply_real
procedure, pass(rhs),  public :: real_multiply_integrand => real_multiply_euler
procedure, pass(lhs),  public :: add => add_euler
procedure, pass(lhs),  public :: sub => sub_euler
procedure, pass(lhs),  public :: assign_integrand => euler_assign_euler
procedure, pass(lhs),  public :: assign_real => euler_assign_real
! private methods
procedure, pass(self), private :: primitive2conservative
procedure, pass(self), private :: conservative2primitive
procedure, pass(self), private :: synchronize
procedure, pass(self), private :: impose_boundary_conditions
procedure, pass(self), private :: reconstruct_interfaces_states
procedure, pass(self), private :: riemann_solver
final :: finalize
endtype euler_1D

Serial, CAF-enabled and OpenMP versions of the Euler test share the same integrand API. In the serial version the cells fluxes are computed serially, whereas in CAF and OpenMP
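The sub-domain decomposition of the flux loop of Listing A2 can be sketched in Python (a toy averaging flux stands in for the real Riemann-solver-based compute_fluxes; all names are illustrative):

```python
def compute_fluxes(u_left, u_right):
    """Placeholder interface flux: a simple average, not the real Euler flux."""
    return 0.5 * (u_left + u_right)

def fluxes_subdomain(U, start, stop):
    """Fluxes for the interfaces start..stop-1: each sub-domain reads only its
       own slice of cells (plus one neighbour), so sub-domains can be computed
       concurrently (threads, CAF images, or OpenMP-style workers)."""
    return [compute_fluxes(U[i], U[i + 1]) for i in range(start, stop)]

U = [float(i) for i in range(9)]  # 9 cells -> 8 interfaces
mid = 4
# concatenating the two independently computed halves reproduces the serial loop
F = fluxes_subdomain(U, 0, mid) + fluxes_subdomain(U, mid, 8)
```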

where N_s^p and N_s^c are the numbers of time steps considered for the Adams-Bashforth predictor and the Adams-Moulton corrector, respectively, and b_s^p, b_s^c are the corresponding linear coefficients selected. Essentially, the Adams-Bashforth prediction U(t_{N_s^p}) is corrected by means of the Adams-Moulton correction, resulting in U(t_{N_s^c}). In order to preserve the formal order of accuracy the relation N_s^p = N_s^c + 1 always holds. Currently FOODIE provides N_s^c = 1, 2, 3 → N_s^p = 2, 3, 4 steps schemes having 2nd, 3rd and 4th formal order of accuracy, respectively. The b_s^p, b_s^c coefficients are those reported in Tables
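The predictor-corrector mechanics with N_s^p = N_s^c + 1 can be sketched for the smallest pair (2-steps Adams-Bashforth predictor, 1-step Adams-Moulton corrector) on the oscillation system; the Python below is an illustrative reimplementation with the standard coefficients, not FOODIE code:

```python
import math

def residual(u, f=1e-4):
    # Oscillation residuals: R(U) = (-f*U2, f*U1)
    return (-f * u[1], f * u[0])

def abm2_step(u_nm1, u_n, dt, f=1e-4):
    """One Adams-Bashforth(2)/Adams-Moulton(1) predictor-corrector step
       (N_s^p = N_s^c + 1 = 2, formally 2nd order)."""
    r_nm1, r_n = residual(u_nm1, f), residual(u_n, f)
    # predictor: 2-steps Adams-Bashforth
    u_pred = tuple(u + dt * (1.5 * rn - 0.5 * rm)
                   for u, rn, rm in zip(u_n, r_n, r_nm1))
    # corrector: Adams-Moulton (trapezoidal) evaluated with the predicted state
    r_pred = residual(u_pred, f)
    return tuple(u + 0.5 * dt * (rp + rn)
                 for u, rp, rn in zip(u_n, r_pred, r_n))

f, dt = 1e-4, 100.0
u0 = (0.0, 1.0)
u1 = (-math.sin(f * dt), math.cos(f * dt))  # exact value at t = dt as startup
u2 = abm2_step(u0, u1, dt, f)
```

Only one new residual evaluation per step appears beyond the predictor's, consistent with the cost discussion in the error-analysis section.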

Listing 1. Usage example importing all public entities of the FOODIE main module.

use foodie, only : integrand, &
                   adams_bashforth_integrator, &
                   adams_moulton_integrator, &
                   adams_bashforth_moulton_integrator, &
                   euler_explicit_integrator, &
                   leapfrog_integrator, &
                   ls_runge_kutta_integrator, &
                   tvd_runge_kutta_integrator
! or simply
use foodie

3.1. The Main FOODIE Abstract Data Type: The Integrand Type

The implemented ACP is based on one main ADT, the integrand type, the definition of which is shown in Listing 2.

Listing 2. Integrand type definition.

type, abstract :: integrand
  !< Abstract type for building FOODIE ODE integrators.
  contains
    ! public deferred procedures that concrete integrand-field must implement
    procedure(time_derivative),      pass(self), deferred, public :: t
    ! operators
    procedure(symmetric_operator),   pass(lhs), deferred, public :: integrand_multiply_integrand
    procedure(integrand_op_real),    pass(lhs), deferred, public :: integrand_multiply_real
    procedure(real_op_integrand),    pass(rhs), deferred, public :: real_multiply_integrand
    procedure(symmetric_operator),   pass(lhs), deferred, public :: add
    procedure(symmetric_operator),   pass(lhs), deferred, public :: sub
    procedure(assignment_integrand), pass(lhs), deferred, public :: assign_integrand
    ! operators overloading
    generic, public :: operator(+) => add
    generic, public :: operator(-) => sub
    generic, public :: operator(*) => integrand_multiply_integrand, &

3.1.1. Time Derivative Procedure, the Residuals Function

The abstract interface of the time derivative procedure t is shown in Listing 3.

Listing 3. Time derivative procedure interface.

function time_derivative(self, t) result(dState_dt)
import :: integrand, R_P, I_P
class(integrand),    intent(IN) :: self !< Integrand field.
real(R_P), optional, intent(IN) :: t    !< Time.
class(integrand), allocatable :: dState_dt !< Result of the time derivative function of integrand field.
endfunction time_derivative

Listing 8. Example of usage of an explicit forward Euler integrator.

type(my_integrand) :: my_field
call integrator%integrate(U=my_field, Dt=0.1)

Listing 17. Definition of the ls_runge_kutta_integrator type.

type :: ls_runge_kutta_integrator
  integer(I_P)           :: stages=0 ! Number of stages.
  real(R_P), allocatable :: A(:)     ! Low storage *A* coefficients.
  real(R_P), allocatable :: B(:)     ! Low storage *B* coefficients.
  real(R_P), allocatable :: C(:)     ! Low storage *C* coefficients.
  contains
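The 2N low-storage idea encoded by the A(:) and B(:) coefficients can be sketched in Python: only the solution and one register are kept, whatever the number of stages. The coefficients below are Williamson's classic 3-stages, 3rd-order set, used as an assumption for illustration (FOODIE's own coefficient tables may differ):

```python
def low_storage_rk3(u, dt, residual):
    """Williamson-style 2N low-storage Runge-Kutta (3 stages, 3rd order):
       per stage, K <- A_s*K + dt*R(u) and u <- u + B_s*K, with a single
       extra register K regardless of the number of stages."""
    A = (0.0, -5.0 / 9.0, -153.0 / 128.0)   # assumed classic coefficients
    B = (1.0 / 3.0, 15.0 / 16.0, 8.0 / 15.0)
    K = [0.0 for _ in u]
    for a, b in zip(A, B):
        r = residual(u)
        K = [a * k + dt * ri for k, ri in zip(K, r)]
        u = [ui + b * ki for ui, ki in zip(u, K)]
    return u

u = low_storage_rk3([1.0], dt=0.1, residual=lambda v: v)  # du/dt = u
```

For a linear residual the update reproduces the cubic Taylor polynomial 1 + ∆t + ∆t²/2 + ∆t³/6 exactly, confirming the formal 3rd order.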

Listing 20. Definition of an explicit Adams-Bashforth integrator.

use FOODIE, only : adams_bashforth_integrator
type(adams_bashforth_integrator) :: integrator

Once an integrator of this type has been instantiated, it must be initialized before being used, see for example Listing 21.

Listing 21. Example of initialization of an explicit Adams-Bashforth integrator.

Listing 22. Definition of the adams_bashforth_integrator type.

type :: adams_bashforth_integrator
  private
  integer(I_P)           :: steps=0 ! Number of time steps.
  real(R_P), allocatable :: b(:)    ! b coefficients.
  contains
    procedure, pass(self), public :: destroy
    procedure, pass(self), public :: init
    procedure, pass(self), public :: integrate
    procedure, pass(self), public :: update_previous
    final :: finalize
endtype adams_bashforth_integrator

After the solver has been initialized it can be used for integrating an integrand field, as shown in Listing 23.

Listing 23. Example of usage of an Adams-Bashforth integrator.

real :: times(1:4)
type(my_integrand) :: my_field
type(my_integrand) :: previous(1:4)
call integrator%integrate(U=my_field, previous=previous, Dt=Dt, t=times)

Listing 25. Definition of an implicit Adams-Moulton integrator.

use FOODIE, only : adams_moulton_integrator
type(adams_moulton_integrator) :: integrator

Once an integrator of this type has been instantiated, it must be initialized before being used, for example see Listing 26.

Listing 26. Example of initialization of an implicit Adams-Moulton integrator.

type :: adams_moulton_integrator
  private
  integer(I_P)           :: steps=-1 ! Number of time steps.
  real(R_P), allocatable :: b(:)     ! b coefficients.
  contains
    procedure, pass(self), public :: destroy
    procedure, pass(self), public :: init
    procedure, pass(self), public :: integrate
    procedure, pass(self), public :: update_previous
    final :: finalize
endtype adams_moulton_integrator
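The implicit non-linearity of the Adams-Moulton update is resolved, as discussed in the error-analysis section, by a fixed-point iteration; the idea can be sketched in Python for the 1-step (trapezoidal) member on the oscillation system (illustrative code with standard coefficients, not FOODIE's implementation):

```python
def residual(u, f=1e-4):
    # Oscillation residuals: R(U) = (-f*U2, f*U1)
    return (-f * u[1], f * u[0])

def am_trapezoidal_step(u_n, dt, f=1e-4, iterations=5):
    """One implicit Adams-Moulton (trapezoidal) step,
       U^{n+1} = U^n + dt/2*(R^{n+1} + R^n), solved by fixed-point iteration."""
    r_n = residual(u_n, f)
    u = u_n  # initial guess: the previous solution
    for _ in range(iterations):
        r = residual(u, f)
        u = tuple(un + 0.5 * dt * (rn + rk) for un, rn, rk in zip(u_n, r_n, r))
    return u

u1 = am_trapezoidal_step((0.0, 1.0), dt=100.0)
```

The iteration contracts with a factor of about f·∆t/2 per pass, so a handful of iterations suffices when f·∆t is small; these repeated residual evaluations are what erode the one-step cost advantage noted in the error analysis.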

Listing 30. Definition of an Adams-Bashforth-Moulton integrator.

use FOODIE, only : adams_bashforth_moulton_integrator
type(adams_bashforth_moulton_integrator) :: integrator

Listing 32. Definition of the adams_bashforth_moulton_integrator type.

type, extends(integrator_multistep_object) :: integrator_adams_bashforth_moulton
  private
  type(integrator_adams_bashforth) :: predictor ! Predictor solver.
  type(integrator_adams_moulton)   :: corrector ! Corrector solver.
  contains
    procedure, pass(self) :: destroy
    procedure, pass(self) :: initialize
    procedure, pass(self) :: scheme_number
endtype integrator_adams_bashforth_moulton

After the solver has been initialized it can be used for integrating an integrand field, as shown in Listing 33.

Listing 33. Example of usage of an Adams-Bashforth-Moulton integrator.

real :: times(1:3)
type(my_integrand) :: my_field
type(my_integrand) :: previous(1:3)
call integrator%integrate(U=my_field, previous=previous, Dt=Dt, t=times)

do s=1, self%steps
  self%predictor%previous(s) = self%previous(s)
  self%predictor%t(s) = self%t(s)
  self%predictor%Dt(s) = self%Dt(s)
enddo
do s=1, self%steps - 1
  self%corrector%previous(s) = self%predictor%previous(s+1)
  self%corrector%t(s) = self%predictor%t(s+1)
  self%corrector%Dt(s) = self%predictor%Dt(s+1)
enddo
call self%predictor%integrate(U=U, Dt=Dt, t=t)
call self%corrector%integrate(U=U, Dt=Dt, t=t)
if (self%autoupdate) &
  call self%update_previous(U=U, previous=self%previous, Dt=Dt, t=t, previous_t=self%t)
endsubroutine integrate

Listing 40. Implementation of the oscillation integrand type.

type, extends(integrand) :: oscillation
  private
  integer(I_P) :: dims=0   ! Space dimensions.
  real(R_P)    :: f=0._R_P ! Oscillation frequency (Hz).
  real(R_P), dimension(:), allocatable :: U ! Integrand (state) variables, [1:dims].
  contains
    ! auxiliary methods
    procedure, pass(self), public :: init
    procedure, pass(self), public :: output
    ! type_integrand deferred methods
    procedure, pass(self), public :: t => dOscillation_dt
    procedure, pass(lhs),  public :: integrand_multiply_integrand => &
                                     oscillation_multiply_oscillation
    procedure, pass(lhs),  public :: integrand_multiply_real => oscillation_multiply_real
    procedure, pass(rhs),  public :: real_multiply_integrand => real_multiply_oscillation
    procedure, pass(lhs),  public :: add => add_oscillation
    procedure, pass(lhs),  public :: sub => sub_oscillation
    procedure, pass(lhs),  public :: assign_integrand => oscillation_assign_oscillation
    procedure, pass(lhs),  public :: assign_real => oscillation_assign_real
endtype oscillation

function dOscillation_dt(self, t) result(dState_dt)
class(oscillation),  intent(IN) :: self ! Oscillation field.
real(R_P), optional, intent(IN) :: t    ! Time.
class(integrand), allocatable :: dState_dt ! Oscillation field time derivative.
integer(I_P) :: dn ! Time level, dummy variable.
allocate(oscillation :: dState_dt)
select type(dState_dt)
class is (oscillation)
  dState_dt = self
  dState_dt%U(1) = -self%f * self%U(2)
  dState_dt%U(2) =  self%f * self%U(1)
endselect
return
endfunction dOscillation_dt

Add Method, an Example of Oscillation Symmetric Operator

As a prototype of the operators overloading let us consider the add operator, it being a prototype of symmetric operators; its implementation is presented in Listing 42.

Listing 42. Implementation of the oscillation integrand add operator.

function add_oscillation(lhs, rhs) result(opr)
class(oscillation), intent(IN) :: lhs ! Left hand side.
class(integrand),   intent(IN) :: rhs ! Right hand side.
class(integrand), allocatable :: opr  ! Operator result.
allocate(oscillation :: opr)
select type(opr)
class is (oscillation)
  opr = lhs
  select type(rhs)
  class is (oscillation)
    opr%U = lhs%U + rhs%U
  endselect
endselect
return
endfunction add_oscillation
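The error analysis measures the distance from the exact oscillation solution, which is just a rotation of the initial state; a Python sketch of the reference solution and of a discrete L2 error follows (the rotation form is consistent with the dOscillation_dt residuals above; the norm definition is illustrative):

```python
import math

def exact_solution(t, u0=(0.0, 1.0), f=1e-4):
    """Exact oscillation solution: the initial state rotated by the angle f*t,
       which satisfies U1' = -f*U2, U2' = f*U1."""
    c, s = math.cos(f * t), math.sin(f * t)
    return (c * u0[0] - s * u0[1], s * u0[0] + c * u0[1])

def l2_error(numerical, times, u0=(0.0, 1.0), f=1e-4):
    """Discrete L2 norm of the difference against the exact solution."""
    acc = 0.0
    for u, t in zip(numerical, times):
        e = exact_solution(t, u0, f)
        acc += (u[0] - e[0]) ** 2 + (u[1] - e[1]) ** 2
    return math.sqrt(acc / len(times))
```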

Figure 2. Oscillation equations solutions computed by means of the Adams-Bashforth-Moulton solvers.

Figure 4. Oscillation equations solutions computed by means of the Leapfrog solvers: (a) unfiltered; (b) RAW-filtered.

Figure 5. Oscillation equations solutions computed by means of the low storage Runge-Kutta solvers with 1 and 5 stages.

Figure 7. Oscillation equations solutions computed by means of the TVD/SSP Runge-Kutta solvers.
These codes (see Appendix A.1 for the implementation details) have been compiled by means of the GNU gfortran compiler v5.2.0 coupled with OpenCoarrays v1.1.0, an open-source software project for developing, porting and tuning transport layers that support coarray Fortran (CAF) compilers, see http://www.opencoarrays.org/ (accessed on 20 August 2023).

Figure 8. Scaling efficiency with the CAF programming model: (a) strong scaling, 240,000 cells; (b) weak scaling, minimum of 24,000 cells.

Figure 9. CPU time consumed with the CAF programming model.

Figure 11. CPU time consumed with the OpenMP programming model.
Appendix A.1.2. The Integrand API

The conservative variables of the 1D Euler system can be easily implemented as a FOODIE integrand field by defining a concrete extension of the FOODIE integrand type. Listing A3 reports the implementation of such an integrand field, which is contained in the tests suite shipped within the FOODIE library.

Listing A3. Implementation of the Euler 1D integrand type.

type, extends(integrand) :: euler_1D
private
integer(I_P)                   :: ord=0     ! Space accuracy formal order.
integer(I_P)                   :: Ni=0      ! Space dimension.
integer(I_P)                   :: Ng=0      ! Number of ghost cells for boundary conditions handling.
integer(I_P)                   :: Ns=0      ! Number of initial species.
integer(I_P)                   :: Nc=0      ! Number of conservative variables, Ns+2.
integer(I_P)                   :: Np=0      ! Number of primitive variables, Ns+4.
real(R_P)                      :: Dx=0._R_P ! Space step.
type(weno_interpolator_upwind) :: weno      ! WENO interpolator.
real(R_P),    allocatable      :: U(:,:)    ! Integrand (state) variables, whole physical domain [1:Nc,1:Ni].
real(R_P),    allocatable      :: cp0(:)    ! Specific heat cp of initial species [1:Ns].
real(R_P),    allocatable      :: cv0(:)    ! Specific heat cv of initial species [1:Ns].
character(:), allocatable      :: BC_L      ! Left boundary condition type.
character(:), allocatable      :: BC_R      ! Right boundary condition type.
integer(I_P)                   :: me=0      ! ID of this_image().
integer(I_P)                   :: we=0      ! Number of CAF images used.
contains
  ! auxiliary methods
  procedure, pass(self), public :: init
  procedure, pass(self), public :: destroy
  procedure, pass(self), public :: output
  procedure, pass(self), public :: dt => compute_dt
  ! ADT integrand deferred methods
  procedure, pass(self), public :: t => dEuler_dt
  procedure, pass(lhs),  public :: local_error => euler_local_error

Table 5. Williamson's table of low storage Runge-Kutta schemes.

Table 11. Oscillation test: errors analysis of implicit Adams-Moulton solvers; the implicit non-linearity is solved by 5 iterations of a fixed-point algorithm.

Table 12. Oscillation test: errors analysis of explicit Leapfrog solvers.

Table 13. Oscillation test: errors analysis of explicit low storage Runge-Kutta solvers.