Tutorial

Geometric Neural Ordinary Differential Equations: From Manifolds to Lie Groups

by Yannik P. Wotte *, Federico Califano and Stefano Stramigioli
Robotics and Mechatronics, EEMCS, University of Twente (UT), Drienerlolaan 5, 7522 NB Enschede, The Netherlands
* Author to whom correspondence should be addressed.
Entropy 2025, 27(8), 878; https://doi.org/10.3390/e27080878
Submission received: 30 May 2025 / Revised: 13 August 2025 / Accepted: 18 August 2025 / Published: 19 August 2025
(This article belongs to the Special Issue Lie Group Machine Learning)

Abstract

Neural ordinary differential equations (neural ODEs) are a well-established tool for optimizing the parameters of dynamical systems, with applications in image classification, optimal control, and physics learning. Although dynamical systems of interest often evolve on Lie groups and more general differentiable manifolds, theoretical results for neural ODEs are frequently phrased on $\mathbb{R}^n$. We collect recent results for neural ODEs on manifolds and present a unifying derivation that serves as a tutorial for extending existing methods to differentiable manifolds. We also extend the results to the recent class of neural ODEs on Lie groups, highlighting a non-trivial extension of manifold neural ODEs that exploits the Lie group structure.

1. Introduction

Ordinary differential equations (ODEs) are ubiquitous in the engineering sciences, from modeling and control of simple physical systems like pendulums and mass–spring–dampers, or more complicated robotic arms and drones, to the description of high-dimensional spatial discretizations of distributed systems, such as fluid flows, chemical reactions, or quantum oscillators. Neural ordinary differential equations (neural ODEs) [1,2] are ODEs parameterized by neural networks. Given a state x, and parameters θ representing the weights and biases of a neural network, a neural ODE reads as follows:
$\dot{x} = f_\theta(x, t), \qquad x(0) = x_0.$
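As a concrete reference point, the following minimal sketch integrates a neural ODE on $\mathbb{R}^n$ with an off-the-shelf ODE solver; the two-layer tanh network, its dimensions, and all variable names are illustrative choices rather than a prescribed architecture.

```python
# Minimal neural ODE on R^n: x_dot = f_theta(x, t), integrated with an
# off-the-shelf ODE solver. The two-layer tanh network is illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
n, hidden = 2, 16
theta = {
    "W1": 0.5 * rng.standard_normal((hidden, n + 1)),  # +1 for the time input
    "b1": np.zeros(hidden),
    "W2": 0.5 * rng.standard_normal((n, hidden)),
    "b2": np.zeros(n),
}

def f_theta(t, x, theta):
    """Parameterized vector field f_theta(x, t)."""
    z = np.concatenate([x, [t]])
    h = np.tanh(theta["W1"] @ z + theta["b1"])
    return theta["W2"] @ h + theta["b2"]

x0 = np.array([1.0, 0.0])
sol = solve_ivp(f_theta, (0.0, 1.0), x0, args=(theta,), rtol=1e-8, atol=1e-8)
print("x(T) =", sol.y[:, -1])   # flow Psi_f^T(x0) for the current parameters
```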
First introduced by [1] as the continuum limit of recurrent neural networks, the number of applications of neural ODEs quickly exploded beyond simple classification tasks: learning highly nonlinear dynamics of multi-physical systems from sparse data [3,4,5], optimal control of nonlinear systems [6], medical imaging [7], and real-time handling of irregular time series [8], to name but a few. Discontinuous state transitions and dynamics [9,10], time-dependent parameters [11], augmented neural ODEs [12], and physics-preserving formulations [13,14] present further extensions that increase the expressivity of neural ODEs.
However, these methods are typically phrased for states $x \in \mathbb{R}^n$. For many physical systems of interest, such as robot arms, humanoid robots, and drones, the state lives on differentiable manifolds and Lie groups [15,16]. More generally, the manifold hypothesis in machine learning raises the expectation that many high-dimensional datasets evolve on intrinsically lower-dimensional, albeit more complicated, manifolds [17]. Neural ODEs on manifolds [18,19] presented a significant step towards closing this gap, providing the first optimization methods in this setting. Yet, the general tools and approaches available on $\mathbb{R}^n$, such as running costs, augmented states, time-dependent parameters, control inputs, or discontinuous state transitions, are rarely addressed in a manifold context. Similar issues persist in a Lie group context, where neural ODEs on Lie groups [20,21] have been formalized.
Our goal is to further extend architectures and cost functions for neural ODEs from $\mathbb{R}^n$ to arbitrary manifolds (cf. Table 1), and in particular to Lie groups, and to equip the reader with the technical background for their own extensions. The main conceptual challenge lies in phrasing chart-independent optimization methods [18,19] in a manner that easily adapts to a variety of neural ODE architectures and cost functions [1,3,12]. To this end we present a systematic approach for deriving geometric versions of the adjoint sensitivity method [1,2], a memory-efficient and scalable tool for the optimization of neural ODEs (cf. Section 1.1) whose benefits extend to manifolds and Lie groups [18,19,20,21]. A second challenge, both conceptual and practical, lies in expressing various manifolds in terms of local charts and in expressing neural-net-parameterized functions, dynamics, and tensor fields in those charts. To this end we classify existing methods into extrinsic [19,20] and intrinsic [18,21,22] approaches, a distinction inspired by well-known differential-geometric concepts. In our context the distinction suggests different parameterizations and affects both numerical integration techniques and scaling to high-dimensional dynamics. Specifically, our contributions are as follows:
  • Systematic derivation of adjoint methods for neural ODEs on manifolds and Lie groups, highlighting the differences and equivalence of various approaches—for an overview, see also Table 1;
  • A summary of the state of the art of manifold and Lie group neural ODEs, formalizing the notion of extrinsic and intrinsic neural ODEs;
  • A tutorial on neural ODEs on manifolds and Lie groups, with a focus on the derivation of coordinate-agnostic adjoint methods for the optimization of various neural ODE architectures. Readers will gain a conceptual understanding of the geometric nature of the underlying variables and a coordinate-free derivation of adjoint methods, and will learn to incorporate additional geometric and physical structures. On the practical side, this will aid in the derivation and implementation of adjoint methods with non-trivial terms for various architectures, also with regard to coordinate expressions and chart transformations.
The remainder of this article is organized as follows. A brief state of the art on neural ODEs concludes this introduction. Section 2 provides a background on differentiable manifolds, Lie groups, and the coordinate-free adjoint method. Section 3 describes neural ODEs on manifolds and derives parameter updates via the adjoint method for various common architectures and cost functions, including time-dependent parameters, augmented neural ODEs, running costs, and intermediate cost terms. Section 4 describes neural ODEs on matrix Lie groups, explaining the merits of treating Lie groups separately from general differentiable manifolds. Both Section 3 and Section 4 also classify methods into extrinsic and intrinsic approaches. We conclude with a discussion in Section 5, highlighting advantages, disadvantages, challenges, and promise of the presented material. Appendix A includes a background on Hamiltonian systems, which appear when transforming the adjoint method into a form that is unique to Lie groups.

1.1. Literature Review

For a general introduction to neural ODEs, see [25]. Neural ODEs on $\mathbb{R}^n$ with fixed parameters were first introduced in [1], where parameter optimization via the adjoint method allowed for intermittent and final cost terms on each trajectory. The generalized adjoint method [2] also allows for running cost terms. Memory-efficient checkpointing was introduced in [26] to address stability issues of adjoint methods. Augmented neural ODEs [12] introduced augmented state spaces to allow neural ODEs to express arbitrary diffeomorphisms. Time-varying parameters were introduced in [11], with benefits similar to those of augmented neural ODEs. Neural ODEs with discrete transitions were formulated in [9,10], with [9] also learning event-triggered transitions common in engineering applications. Neural controlled differential equations (CDEs) were introduced in [27] for handling irregular time series, and their parameter updates reapply the adjoint method [1]. Neural stochastic differential equations (SDEs) were introduced in [28], relying on a stochastic variant of the adjoint method for the parameter update. The literature mentioned above phrases the dynamics of neural ODEs on $\mathbb{R}^n$.
Recent trends in research on neural ODEs focus on structure preservation to improve performance and reduce training time by appropriately restricting the class of parameterized vector fields. This includes symmetry preservation by equivariant [23] and approximately equivariant neural ODEs [29], which tackle symmetric and approximately symmetric time series and dynamics, e.g., in N-body and molecular dynamics. It also includes physics preservation in a physics learning context, where Hamiltonian neural networks [30,31] and (generalized) Lagrangian neural networks [32,33,34] improve performance by guaranteeing energy conservation. In control and model order reduction, port-Hamiltonian neural ODEs [3,35] further allow for learning models that interact with external ports in a power-preserving manner. These methods also phrase the dynamics on $\mathbb{R}^n$ and frequently apply the adjoint method for parameter updates.
Neural ODEs on manifolds were first introduced in [19], including an adjoint method on manifolds for final cost terms and an application to continuous normalizing flows on Riemannian manifolds, but embedding the manifolds into $\mathbb{R}^n$. Neural ODEs on Riemannian manifolds are expressed in local exponential charts in [18], avoiding an embedding into $\mathbb{R}^n$ and considering final cost terms in the optimization. Charts for unknown, non-trivial latent manifolds, together with dynamics in local charts, are learned from high-dimensional data in [22], also covering discretized solutions to partial differential equations. Parameterized equivariant neural ODEs on manifolds are constructed in [23], which also comments on state augmentation to express arbitrary (equivariant) flows on manifolds.
Neural ODEs on Lie groups were first introduced in [36] on the Lie group $SE(3)$ to learn the port-Hamiltonian dynamics of a drone from experimental data, expressing group elements in an embedding space $\mathbb{R}^{12}$; the approach was formalized for port-Hamiltonian systems on arbitrary matrix Lie groups in [20], embedding $m \times m$ matrices in $\mathbb{R}^{m^2}$.
Neural ODEs on $SE(3)$ were phrased in local exponential charts in [24] to optimize a controller for a rigid body via a chart-based adjoint method. As an alternative, a Lie algebra-based adjoint method on general Lie groups was introduced in [21], foregoing the Lie group-specific numerical issues of applying the adjoint method in local charts.
The choice of numerical solver for integrating neural ODEs and adjoint sensitivity equations is a nuanced area with much active research, especially for highly stiff [37], highly nonlinear [38,39], and structure-preserving neural ODEs [40]; we point the interested reader towards the aforementioned sources. These results are expected to carry over to a manifold and Lie group context, where they hold in local charts. Lie group integrators [41,42] may also be of interest for geometrically exact integration but are not well investigated in a neural ODE context [20,21].
The optimization of neural ODEs via adjoint sensitivity methods is also referred to as “optimize-then-discretize” [25,43], since the formulation of the continuous adjoint system (“optimize”) precedes its numerical solution (“discretize”). This is opposed to “discretize-then-optimize” approaches, in which the neural ODE is first solved numerically (discretize) and gradients are then backpropagated through the numerical solver (optimize) [25,37,43]. Comparing the two, the constant memory cost of “optimize-then-discretize” approaches allows them to scale better to high-dimensional systems, giving them an edge for cases with more than 100 parameters and states [43]. In contrast, “discretize-then-optimize” boasts higher accuracy and speed for low-dimensional systems, as well as for highly stiff systems in which adjoint methods struggle with stability [37]. A popular discrete alternative to neural ODEs for physics-informed dynamics learning is given by variational integrator networks (VINs) [44,45], phrasing Lagrangian and Hamiltonian dynamics as discrete systems that conserve the energy and symplectic structure of the continuum dynamics [46,47]. Recent work [48] on Lie group forced VINs (LieFVINs) also allows inputs to the Lagrangian and Hamiltonian dynamics to be included in the variational formulation, enabling discrete optimal control. Both VINs and LieFVINs are applicable in a Lie group context, where they conserve geometry, symplecticity, and energy. The approach does not use adjoint methods for optimization and outperforms neural ODEs in the investigated conservative, low-dimensional dynamical systems [44,45,48]. Compared to continuous neural ODEs, both VINs and LieFVINs are discrete, which removes the overhead of ODE solvers for lightweight applications, but their necessarily energy-based formulation presently restricts their use cases to conservative physical systems. We mention this promising area for completeness but narrow our attention to a geometric “optimize-then-discretize” approach via adjoint methods in the remainder of this article.

1.2. Notation

For a complete introduction to differential geometry see, e.g., [49], and for Lie group theory see [50].
Calligraphic letters $\mathcal{M}, \mathcal{N}$ denote smooth manifolds. For conceptual clarity, the reader may think of these manifolds as embedded in a high-dimensional $\mathbb{R}^N$, e.g., $\mathcal{M} \subset \mathbb{R}^N$. The set $C^\infty(\mathcal{M}, \mathcal{N})$ contains smooth functions between $\mathcal{M}$ and $\mathcal{N}$, and we define $C^\infty(\mathcal{M}) := C^\infty(\mathcal{M}, \mathbb{R})$.
The tangent space at $x \in \mathcal{M}$ is $T_x\mathcal{M}$ and the cotangent space is $T_x^*\mathcal{M}$. The tangent bundle of $\mathcal{M}$ is $T\mathcal{M}$, and the cotangent bundle of $\mathcal{M}$ is $T^*\mathcal{M}$. Then $\mathfrak{X}(\mathcal{M})$ denotes the set of vector fields over $\mathcal{M}$, and $\Omega^k(\mathcal{M})$ denotes the set of $k$-forms, where $\Omega^1(\mathcal{M})$ are co-vector fields and $\Omega^0(\mathcal{M}) = C^\infty(\mathcal{M})$ are smooth functions $V : \mathcal{M} \to \mathbb{R}$. The exterior derivative is denoted as $\mathrm{d} : \Omega^k(\mathcal{M}) \to \Omega^{k+1}(\mathcal{M})$. For functions $V \in C^\infty(\mathcal{M} \times \mathcal{N}, \mathbb{R})$, with $x \in \mathcal{M}$, $y \in \mathcal{N}$, we denote by $\mathrm{d}_x V(y) \in T_x^*\mathcal{M}$ the partial differential at $x \in \mathcal{M}$. Curves $x : \mathbb{R} \to \mathcal{M}$ are denoted as $x(t)$, and their tangent vectors are denoted as $\dot{x} \in T_{x(t)}\mathcal{M}$.
A Lie group is denoted by $G$ and its elements by $g, h$. The group identity is $e \in G$, and $I$ denotes the identity matrix. The Lie algebra of $G$ is $\mathfrak{g}$, and its dual is $\mathfrak{g}^*$. Letters $\tilde{A}, \tilde{B}$ denote vectors in the Lie algebra, while letters $A, B$ denote vectors in $\mathbb{R}^n$.
In coordinate expressions, lower indices are covariant and upper indices are contravariant components of tensors. For example, for a $(0,2)$-tensor $M$ the components $M_{ij}$ are covariant, and for non-degenerate $M$ the components of its inverse $M^{-1}$ are $M^{ij}$, which are contravariant. We use the Einstein summation convention $a_i b^i := \sum_i a_i b^i$; i.e., a product of variables with repeated lower and upper indices implies a sum.
Denoting $W$ a topological space, $D$ the Borel $\sigma$-algebra, and $P : D \to [0,1]$ a probability measure, the tuple $(W, D, P)$ denotes a probability space. Given a vector space $L$ and a random variable $C : W \to L$, the expectation of $C$ with respect to $P$ is $\mathbb{E}_{w \sim P}(C) := \int_W C(w)\, \mathrm{d}P(w)$.

2. Background

2.1. Smooth Manifolds

Given an $n$-dimensional manifold $\mathcal{M}$, with $U \subset \mathcal{M}$ an open set and $Q : U \to \mathbb{R}^n$ a homeomorphism, we call $(U, Q)$ a chart and we denote the coordinates of $x \in U$ as
$(q^1, \ldots, q^n) := Q(x), \qquad x \in U \subset \mathcal{M}.$
Smooth manifolds admit charts $(U_1, Q_1)$ and $(U_2, Q_2)$ with smooth transition maps $Q_{21} = Q_2 \circ Q_1^{-1}$ defined on the intersection $U_1 \cap U_2$, and a collection $\mathcal{A}$ of charts $(U, Q)$ with smooth transition maps is called a smooth atlas. For examples of local charts for particular manifolds, see [49], Examples 1.4 and 1.5. A vector field $f \in \mathfrak{X}(\mathcal{M})$ assigns a vector $f(x) \in T_x\mathcal{M}$ to each point $x \in \mathcal{M}$. This defines a dynamic system, also shown in a local chart $(U, Q)$ with components $f^i(q)$, $\dot{q}^i \in \mathbb{R}$:
$\dot{x} = f(x) = f^i(q)\, \frac{\partial}{\partial Q^i}; \qquad x(0) = x_0,$
$\dot{q}^i = f^i(q); \qquad q(0) = Q(x_0).$
Solutions of (3) are then found by numerical integration of (4), applying chart transitions (e.g., $q_2(t) = Q_{21} \circ q_1(t)$ from $q_1(t) = Q_1 \circ x(t)$ to $q_2(t) = Q_2 \circ x(t)$) during integration to avoid coordinate singularities (cf. Section 3.1.2). Denote the solution of (3) by the flow operator
$\Psi_f^t : \mathcal{M} \to \mathcal{M}; \qquad \Psi_f^t(x_0) := x(t).$
For a real-valued function $V \in C^\infty(\mathcal{M})$, its differential is the covector field
$\mathrm{d}V \in \Omega^1(\mathcal{M}); \qquad \mathrm{d}V = \frac{\partial V}{\partial q^i}\, \mathrm{d}Q^i.$
Additionally, given a smooth manifold $\mathcal{N}$ and a smooth map $\varphi : \mathcal{N} \to \mathcal{M}$, with $(U, Q)$ and $(\bar{U}, \bar{Q})$ appropriate charts of $\mathcal{M}$ and $\mathcal{N}$, respectively, the pullback of $\mathrm{d}V$ via $\varphi$ is
$\varphi^* \mathrm{d}V \in \Omega^1(\mathcal{N}); \qquad \varphi^* \mathrm{d}V := \mathrm{d}(V \circ \varphi) = \frac{\partial \varphi^j}{\partial \bar{q}^i}\, \frac{\partial V}{\partial q^j}\, \mathrm{d}\bar{Q}^i.$
With a Riemannian metric $M$ (i.e., a symmetric, non-degenerate $(0,2)$ tensor field) on $\mathcal{M}$, the gradient of $V$ is a uniquely defined vector field $\nabla V \in \mathfrak{X}(\mathcal{M})$ given by
$\nabla V := M^{-1}\, \mathrm{d}V = M^{ij}\, \frac{\partial V}{\partial q^j}\, \frac{\partial}{\partial q^i}.$
When $\mathcal{M} = \mathbb{R}^n$, we assume that $M$ is the Euclidean metric and pick coordinates such that the components of the gradient and the differential coincide. Finally, we define the Lie derivative of 1-forms, which differentiates $\omega \in \Omega^1(\mathcal{M})$ along a vector field $f \in \mathfrak{X}(\mathcal{M})$ and returns $\mathcal{L}_f \omega \in \Omega^1(\mathcal{M})$:
$\mathcal{L}_f \omega := \left.\frac{\mathrm{d}}{\mathrm{d}t}\, (\Psi_f^t)^* \omega \right|_{t=0} = \omega_j \big(\partial_{q^i} f^j\big)\, \mathrm{d}Q^i + \big(\partial_{q^j} \omega_i\big) f^j\, \mathrm{d}Q^i.$
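The following sketch illustrates Equations (3) and (4) for $\mathcal{M} = S^1$: the dynamics are integrated in one of two angle charts, and a chart transition is applied whenever the coordinate approaches the chart boundary. The charts, the switching threshold, and the constant vector field are illustrative choices.

```python
# Intrinsic integration on the circle S^1 in two angle charts, switching
# charts near coordinate singularities (cf. Equations (3)-(4)).
import numpy as np
from scipy.integrate import solve_ivp

wrap = lambda a: np.arctan2(np.sin(a), np.cos(a))   # wrap angle to (-pi, pi]

# Chart 1 covers S^1 \ {(-1,0)}; chart 2 covers S^1 \ {(1,0)}.
to_point = [lambda q: np.array([np.cos(q), np.sin(q)]),
            lambda q: np.array([-np.cos(q), -np.sin(q)])]
transition = [lambda q: wrap(q - np.pi),   # Q_21: chart 1 -> chart 2
              lambda q: wrap(q + np.pi)]   # Q_12: chart 2 -> chart 1

def f_chart(t, q):        # f^i(q) in either angle chart: constant rotation
    return [1.0]

chart, q, t, T, dt = 0, np.array([0.0]), 0.0, 5.0, 0.25
while t < T:
    sol = solve_ivp(f_chart, (t, min(t + dt, T)), q)
    q, t = sol.y[:, -1], sol.t[-1]
    if abs(q[0]) > 2.5:    # close to the chart boundary at +-pi: switch chart
        q = np.array([transition[chart](q[0])])
        chart = 1 - chart
print("chart", chart, " q =", q[0], " x(T) =", to_point[chart](q[0]))
```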

2.2. Lie Groups

Lie groups are smooth manifolds with a compatible group structure. We consider real matrix Lie groups $G \subseteq GL(m, \mathbb{R})$, i.e., subgroups of the general linear group
$GL(m, \mathbb{R}) := \{ g \in \mathbb{R}^{m \times m} \mid \det(g) \neq 0 \}.$
For $g, h \in G$ the left and right translations by $h$ are, respectively, the matrix multiplications
$L_h(g) := hg,$
$R_h(g) := gh.$
The Lie algebra of $G$ is the vector space $\mathfrak{g} \subseteq \mathfrak{gl}(m, \mathbb{R})$, with $\mathfrak{gl}(m, \mathbb{R}) = \mathbb{R}^{m \times m}$ being the Lie algebra of $GL(m, \mathbb{R})$.
Define a basis $E := \{\tilde{E}_1, \ldots, \tilde{E}_n\}$ with $\tilde{E}_i \in \mathfrak{g} \subseteq \mathbb{R}^{m \times m}$, and define the (invertible linear) map $\Lambda : \mathbb{R}^n \to \mathfrak{g}$ as (equivalently (e.g., [51]), $\Lambda$ and $\Lambda^{-1}$ are often denoted as the operators “hat” $\wedge : \mathbb{R}^n \to \mathbb{R}^{m \times m}$ and “vee” $\vee : \mathbb{R}^{m \times m} \to \mathbb{R}^n$, respectively)
$\Lambda : \mathbb{R}^n \to \mathfrak{g}; \qquad (A^1, \ldots, A^n) \mapsto \sum_i A^i \tilde{E}_i.$
The dual of $\mathfrak{g}$ is denoted $\mathfrak{g}^*$, and given the map $\Lambda$ we call $\Lambda^* : \mathfrak{g}^* \to \mathbb{R}^n$ its dual. For $\tilde{A}, \tilde{B} \in \mathfrak{g}$ the small adjoint $\mathrm{ad}_{\tilde{A}}(\tilde{B})$ is a bilinear map, and the large adjoint $\mathrm{Ad}_g(\tilde{A})$ is a linear map
$\mathrm{ad} : \mathfrak{g} \times \mathfrak{g} \to \mathfrak{g}; \qquad \mathrm{ad}_{\tilde{A}}(\tilde{B}) = \tilde{A}\tilde{B} - \tilde{B}\tilde{A},$
$\mathrm{Ad} : G \times \mathfrak{g} \to \mathfrak{g}; \qquad \mathrm{Ad}_g(\tilde{A}) = g \tilde{A} g^{-1}.$
In the remainder of this article, we exclusively use the adjoint representation $\mathrm{ad}_A : \mathbb{R}^n \to \mathbb{R}^n$, written without a tilde in the subscript $A$, and the adjoint representation $\mathrm{Ad}_g : \mathbb{R}^n \to \mathbb{R}^n$, which are obtained as
$\mathrm{ad}_A := \Lambda^{-1} \circ \mathrm{ad}_{\Lambda(A)} \circ \Lambda(\cdot),$
$\mathrm{Ad}_g := \Lambda^{-1} \circ \mathrm{Ad}_g \circ \Lambda(\cdot).$
The exponential map $\exp : \mathfrak{g} \to G$ is a local diffeomorphism given by the matrix exponential ([50], Chapter 3.7)
$\exp(\tilde{A}) := \sum_{n=0}^{\infty} \frac{1}{n!} \tilde{A}^n.$
Its inverse $\log : U_{\log} \to \mathfrak{g}$ is given by the matrix logarithm, which is well-defined on a subset $U_{\log} \subset G$ ([50], Chapter 2.3):
$\log(g) = \sum_{n=1}^{\infty} (-1)^{n+1}\, \frac{(g - I)^n}{n}.$
Often, these infinite sums in (18) and (19) can be reduced to finite sums of $m$ terms by use of the Cayley–Hamilton theorem [52]. A chart $(U_h, Q_h)$ on $G$ that assigns zero coordinates to $h \in G$ can be defined using (19) and (13):
$U_h = \{ hg \mid g \in U_{\log} \},$
$Q_h : U_h \to \mathbb{R}^n; \qquad g \mapsto \Lambda^{-1}\big(\log(h^{-1} g)\big),$
$Q_h^{-1} : \mathbb{R}^n \to G; \qquad q \mapsto h \exp\big(\Lambda(q)\big).$
The chart $(U_h, Q_h)$ is called an exponential chart, and a collection $\mathcal{A}$ of exponential charts $(U_h, Q_h)$ that cover the manifold is called an exponential atlas.
The differential of a function $V \in C^\infty(G, \mathbb{R})$ is the co-vector field $\mathrm{d}V \in \Omega^1(G)$ (see also Equation (6)). For any given $g \in G$ we further transform the co-vector $\mathrm{d}V(g) \in T_g^*G$ to a left-trivialized differential, which collects the components of the gradient expressed in $\mathfrak{g}^*$:
$\mathrm{d}_g^L V := \Lambda^*\big(L_g^*\, \mathrm{d}V(g)\big) = \left.\frac{\partial}{\partial q} V\big(g(I + \Lambda(q))\big)\right|_{q=0} \in \mathbb{R}^n.$
For a derivation of this coordinate expression, see ([21], Section 3).
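A small sketch of the maps $\Lambda$, $\Lambda^{-1}$, and the exponential chart $(U_h, Q_h)$ for the familiar case $G = SO(3)$; the basis choice behind $\Lambda$ and all helper names are ours, and the matrix exponential and logarithm are taken from SciPy.

```python
# The maps Lambda ("hat"), Lambda^{-1} ("vee"), exp, log, and an exponential
# chart (U_h, Q_h), illustrated for G = SO(3).
import numpy as np
from scipy.linalg import expm, logm

def Lam(A):                       # Lambda: R^3 -> so(3)  ("hat")
    return np.array([[0.0, -A[2], A[1]],
                     [A[2], 0.0, -A[0]],
                     [-A[1], A[0], 0.0]])

def Lam_inv(At):                  # Lambda^{-1}: so(3) -> R^3  ("vee")
    return np.array([At[2, 1], At[0, 2], At[1, 0]])

def Q_h(h, g):                    # exponential chart centred at h, Eq. (21)
    return Lam_inv(np.real(logm(h.T @ g)))   # h^{-1} = h.T on SO(3)

def Q_h_inv(h, q):                # its inverse, Eq. (22)
    return h @ expm(Lam(q))

h = expm(Lam(np.array([0.3, -0.1, 0.2])))    # some group element
g = expm(Lam(np.array([0.5, 0.4, -0.2])))
q = Q_h(h, g)                                # coordinates of g in chart at h
print("round-trip error:", np.linalg.norm(Q_h_inv(h, q) - g))
print("Q_h(h, h) =", Q_h(h, h))              # chart assigns zero to h itself
```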

2.3. Gradient over a Flow

We are interested in computing the gradient of functions with respect to the initial state of a flow. The adjoint sensitivity equations are a set of differential equations that achieve this. In the following, we show a derivation of the adjoint sensitivity method on manifolds ([21], App. A2). Given a function $C : \mathcal{M} \to \mathbb{R}$, a vector field $f \in \mathfrak{X}(\mathcal{M})$, the associated flow $\Psi_f^t : \mathcal{M} \to \mathcal{M}$, and a final time $T \in \mathbb{R}$, the goal of the adjoint sensitivity method on manifolds is to compute the gradient
$\mathrm{d}\big(C \circ \Psi_f^T\big)(x_0).$
In the adjoint method we define a co-state $\lambda(t) = \mathrm{d}\big(C \circ \Psi_f^{T-t}\big)\big(x(t)\big) \in T_{x(t)}^*\mathcal{M}$, which represents the differential of $C\big(x(T)\big)$ with respect to $x(t)$. The adjoint sensitivity method describes its dynamics, which are integrated backwards in time from the known final condition $\lambda(T) = \mathrm{d}C\big(x(T)\big)$; see also Figure 1. The adjoint sensitivity method is stated in Theorem 1.
Theorem 1
(Adjoint sensitivity on manifolds). The gradient of a function $C \circ \Psi_f^T$ is
$\mathrm{d}\big(C \circ \Psi_f^T\big)(x_0) = \lambda(0),$
where $\lambda(t) \in T_{x(t)}^*\mathcal{M}$ is the co-state. In a local chart $(U, Q)$ of $\mathcal{M}$ with coordinates $q(t) = Q(x(t))$, $\lambda(t) = \lambda_i(t)\, \mathrm{d}Q^i$, the state and co-state satisfy the dynamics
$\dot{q}^j = f^j(q), \qquad q(0) = Q(x_0),$
$\dot{\lambda}_i = -\lambda_j\, \frac{\partial}{\partial q^i} f^j(q), \qquad \lambda_i(T) = \frac{\partial C}{\partial q^i}\big(x(T)\big).$
Proof. 
Define the co-state $\lambda(t) \in T_{x(t)}^*\mathcal{M}$ as
$\lambda(t) := \big(\Psi_f^{T-t}\big)^*\, \mathrm{d}C\big(x(T)\big).$
Then Equation (24) is recovered by application of Equation (7):
$\lambda(0) = \big(\Psi_f^{T}\big)^*\, \mathrm{d}C\big(x(T)\big) = \mathrm{d}\big(C \circ \Psi_f^T\big)(x_0).$
A derivation of the dynamics governing $\lambda(t)$ constitutes the remainder of this proof. By definition of $\lambda(t)$ and the Lie derivative (9), we have that $\mathcal{L}_f \lambda(t) = 0$:
$\mathcal{L}_f \lambda(t) = \left.\frac{\mathrm{d}}{\mathrm{d}s}\, (\Psi_f^s)^* \lambda(t+s)\right|_{s=0} = \frac{\mathrm{d}}{\mathrm{d}s}\, \lambda(t) = 0.$
If we further treat $\lambda$ as a 1-form $\lambda \in \Omega^1(\mathcal{M})$ (denoted as $\lambda$ by an abuse of notation), we obtain
$\mathcal{L}_f \lambda = \lambda_j \big(\partial_{q^i} f^j\big)\, \mathrm{d}Q^i + \big(\partial_{q^j} \lambda_i\big) f^j\, \mathrm{d}Q^i = 0.$
The components satisfy the partial differential equation
$\lambda_j\, \frac{\partial f^j}{\partial q^i} + f^j\, \frac{\partial \lambda_i}{\partial q^j} = 0.$
Impose that $\lambda(t) = \lambda \circ \Psi_f^t(x_0)$ (this defines the 1-form $\lambda$ along $x(t)$); then
$\dot{\lambda}_i = \frac{\partial \lambda_i}{\partial q^j}\, \dot{q}^j = \frac{\partial \lambda_i}{\partial q^j}\, f^j.$
Combining Equations (30) and (31) leads to Equation (26):
$\dot{\lambda}_i = -\lambda_j\, \frac{\partial f^j}{\partial q^i}.$
Expanding the final condition $\lambda(T) = \mathrm{d}C\big(x(T)\big)$ in local coordinates (see Equation (6)) gives
$\lambda(T) = \frac{\partial C}{\partial q^i}\big(x(T)\big)\, \mathrm{d}Q^i = \lambda_i(T)\, \mathrm{d}Q^i \;\;\Rightarrow\;\; \lambda_i(T) = \frac{\partial C}{\partial q^i}\big(x(T)\big). \qquad \square$
Given a chart transition from a chart $(U_1, Q_1)$ to a chart $(U_2, Q_2)$, e.g., during numerical integration of (26), the respective co-state components $\lambda_{1,i}$ and $\lambda_{2,i}$ are related by the transformation $A_i^{\,j} = \partial_i\big(Q_1^j \circ Q_2^{-1}\big)$ as follows:
$\lambda_{2,i} = A_i^{\,j}\, \lambda_{1,j}.$
A fact that will become useful in Section 4 is that Equations (25) and (26) have a Hamiltonian form. Define the control Hamiltonian $H_c : T^*\mathcal{M} \to \mathbb{R}$ as
$H_c(x, \lambda) = \lambda\big(f(x, t)\big) = \lambda_i\, f^i(q, t).$
Then Equations (25) and (26), respectively, of Theorem 1 follow as the Hamiltonian equations on $T^*\mathcal{M}$:
$\dot{q}^j = \frac{\partial H_c}{\partial \lambda_j} = f^j(q, t),$
$\dot{\lambda}_i = -\frac{\partial H_c}{\partial q^i} = -\lambda_j\, \frac{\partial}{\partial q^i} f^j(q, t).$
For a background on Hamilton’s equations, see also Appendix A.
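To make Theorem 1 concrete, the following sketch integrates the state and co-state backwards on $\mathcal{M} = \mathbb{R}^2$ and compares $\lambda(0)$ with a finite-difference approximation of $\mathrm{d}(C \circ \Psi_f^T)(x_0)$; the chosen vector field, cost, and tolerances are illustrative.

```python
# Numerical check of Theorem 1 on M = R^2: the backward co-state lambda(0)
# should match a finite-difference gradient of C(Psi_f^T(x0)).
import numpy as np
from scipy.integrate import solve_ivp

def f(q):                                   # a smooth vector field on R^2
    return np.array([q[1], -np.sin(q[0]) - 0.1 * q[1]])

def Jf(q):                                  # its Jacobian d f^j / d q^i
    return np.array([[0.0, 1.0],
                     [-np.cos(q[0]), -0.1]])

def C(q):                                   # final cost
    return 0.5 * np.sum(q**2)

T, x0 = 3.0, np.array([1.0, 0.0])
tol = dict(rtol=1e-10, atol=1e-10)

# Forward pass: x(T) = Psi_f^T(x0).
xT = solve_ivp(lambda t, q: f(q), (0.0, T), x0, **tol).y[:, -1]

# Backward pass: integrate (q, lambda) from t = T to t = 0,
# with q_dot = f(q) and lambda_dot_i = -lambda_j * d f^j / d q^i.
def aug(t, z):
    q, lam = z[:2], z[2:]
    return np.concatenate([f(q), -Jf(q).T @ lam])

zT = np.concatenate([xT, xT])               # lambda(T) = dC(x(T)) = x(T) here
lam0 = solve_ivp(aug, (T, 0.0), zT, **tol).y[2:, -1]

# Finite-difference reference for d(C o Psi_f^T)(x0).
eps, fd = 1e-6, np.zeros(2)
for i in range(2):
    e = np.zeros(2); e[i] = eps
    cp = C(solve_ivp(lambda t, q: f(q), (0.0, T), x0 + e, **tol).y[:, -1])
    cm = C(solve_ivp(lambda t, q: f(q), (0.0, T), x0 - e, **tol).y[:, -1])
    fd[i] = (cp - cm) / (2 * eps)
print("adjoint lambda(0):", lam0, "  finite differences:", fd)
```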

3. Neural ODEs on Manifolds

A neural ODE on a manifold is an NN-parameterized vector field in $\mathfrak{X}(\mathcal{M})$, or, including time dependence, an NN-parameterized vector field in $\mathfrak{X}(\mathcal{M} \times \mathbb{R})$ with $t$ in the $\mathbb{R}$ slot and $\dot{t} = 1$. Given parameters $\theta \in \mathbb{R}^{n_\theta}$, we denote this parameterized vector field as $f_\theta(x, t) := f(x, t, \theta)$. This results in the dynamic system
$\dot{x} = f_\theta(x, t), \qquad x(0) = x_0.$
The key idea of neural ODEs is to tackle various flow approximation tasks by optimizing the parameters with respect to a to-be-specified optimization problem. Denote a finite time horizon $T$ and intermittent times $T_1, T_2, \ldots < T$. Denote a general trajectory cost by
$C_{f_\theta}^T(x_0, \theta) = F\big(\theta, \Psi_{f_\theta}^{T_0}(x_0), \Psi_{f_\theta}^{T_1}(x_0), \ldots, \Psi_{f_\theta}^{T}(x_0)\big) + \int_0^T r\big(\Psi_{f_\theta}^{s}(x_0), s\big)\, \mathrm{d}s,$
with an intermittent and final cost term $F$ and running cost $r$. Indicating a probability space $(\mathcal{M}, D, P)$, we define the total cost as
$J(\theta) := \mathbb{E}_{x_0 \sim P}\big[C_{f_\theta}^T(x_0, \theta)\big].$
The minimization problem takes the form
$\min_\theta\; J(\theta).$
Note that (41) is not subject to any dynamic constraint, since the flow already appears explicitly in the cost $C_{f_\theta}^T$.
Normally, the optimization problem is solved by means of a stochastic gradient descent algorithm [53]. In this, a batch of $N$ initial conditions $x_i$ is sampled from the probability distribution corresponding to the probability measure $P$. Writing $C_i = C_{f_\theta}^T(x_i, \theta)$, the parameter gradient $\nabla_\theta J(\theta)$ is approximated as
$\nabla_\theta J(\theta) = \mathbb{E}_{x_0 \sim P}\big[\nabla_\theta C_{f_\theta}^T(x_0)\big] \approx \frac{1}{N} \sum_{i=1}^{N} \nabla_\theta C_i.$
In this section, we show how to optimize the parameters $\theta$ for various choices of neural ODEs and cost functions, with (39) being the most general case of a cost, and highlight similarities in the various derivations. In the following, the gradient $\nabla_\theta C_i$ is computed via the adjoint method on manifolds for various scenarios. The advantage of the adjoint method over, e.g., automatic differentiation of $C_i$ by backpropagation through an ODE solver is that it has a constant memory cost with respect to the network depth $T$.

3.1. Constant Parameters and Running and Final Cost

Here we consider neural ODEs of the form (38), with constant parameters $\theta$ and cost functions of the form
$C_{f_\theta}^T(x_0, \theta) = F\big(\Psi_{f_\theta}^T(x_0), \theta\big) + \int_0^T r\big(\Psi_{f_\theta}^s(x_0), \theta, s\big)\, \mathrm{d}s,$
with a final cost term F and a running cost term r. This generalizes [2] to manifolds. Compared to existing manifold methods for neural ODEs [18,54], the running cost is new.
The components of the parameter gradient $\nabla_\theta C_{f_\theta}^T(x_0, t_0)$ with respect to $\theta \in \mathbb{R}^{n_\theta}$ are then computed by Theorem 2 (see also [21]):
Theorem 2
(Generalized Adjoint Method on Manifolds). Given the dynamics (38) and the cost (43), the components of the parameter gradient $\nabla_\theta C_{f_\theta}^T(x_0, t_0)$ with respect to $\theta \in \mathbb{R}^{n_\theta}$ are computed by
$\nabla_\theta C_{f_\theta}^T(x_0, t_0) = \frac{\partial F}{\partial \theta}\big(x(T), \theta\big) + \int_0^T \frac{\partial}{\partial \theta}\Big[\lambda_j\, f_\theta^j\big(q(s)\big) + r\big(q(s), \theta, s\big)\Big]\, \mathrm{d}s,$
where the state $x(s) \in \mathcal{M}$ and co-state $\lambda(s) \in T_{x(s)}^*\mathcal{M}$ satisfy, in a local chart $(U, Q)$ with $q(t) = Q(x(t))$, $\lambda(t) = \lambda_i(t)\, \mathrm{d}Q^i$,
$\dot{q}^j = f_\theta^j(q, t), \qquad q(0) = Q(x_0), \quad t(0) = t_0,$
$\dot{\lambda}_i = -\lambda_j\, \frac{\partial}{\partial q^i} f_\theta^j(q, t) - \frac{\partial r}{\partial q^i}, \qquad \lambda_i(T) = \frac{\partial F}{\partial q^i}\big(x(T), \theta\big).$
Proof. 
Define the augmented state space as $\mathcal{M}_{\mathrm{aug}} = \mathcal{M} \times \mathbb{R}^{n_\theta} \times \mathbb{R} \times \mathbb{R}$ to include the original state $x \in \mathcal{M}$, parameters $\theta \in \mathbb{R}^{n_\theta}$, accumulated running cost $L \in \mathbb{R}$, and time $t \in \mathbb{R}$ in the augmented state $x_{\mathrm{aug}} := (x, \theta, L, t) \in \mathcal{M}_{\mathrm{aug}}$. In addition, define the augmented dynamics $f_{\mathrm{aug}} \in \mathfrak{X}(\mathcal{M}_{\mathrm{aug}})$ as
$\dot{x}_{\mathrm{aug}} = f_{\mathrm{aug}}(x_{\mathrm{aug}}) = \begin{pmatrix} f_\theta(x, t) \\ 0 \\ r(x, \theta, t) \\ 1 \end{pmatrix}, \qquad x_{\mathrm{aug}}(0) = x_{\mathrm{aug},0} := \begin{pmatrix} x_0 \\ \theta \\ 0 \\ t_0 \end{pmatrix}.$
This is an autonomous system with final state $x_{\mathrm{aug}}(T) = \big(x(T), \theta, \int_0^T r(x, \theta, s)\, \mathrm{d}s, T\big)$. Next, define the cost $C_{\mathrm{aug}} : \mathcal{M}_{\mathrm{aug}} \to \mathbb{R}$ on the augmented space:
$C_{\mathrm{aug}}(x_{\mathrm{aug}}) = F(x, \theta) + L.$
Then Equation (43) can be rewritten as the evaluation of a terminal cost $C_{\mathrm{aug}}\big(x_{\mathrm{aug}}(T)\big)$:
$C_{f_\theta}^T(x_0) = \big(C_{\mathrm{aug}} \circ \Psi_{f_{\mathrm{aug}}}^T\big)(x_{\mathrm{aug},0}).$
By Theorem 1, the gradient $\mathrm{d}\big(C_{\mathrm{aug}} \circ \Psi_{f_{\mathrm{aug}}}^T\big)$ is given by
$\mathrm{d}\big(C_{\mathrm{aug}} \circ \Psi_{f_{\mathrm{aug}}}^T\big)(x_{\mathrm{aug},0}) = \lambda(0),$
and by Equation (26), the components of $\lambda(s)$ satisfy
$\dot{\lambda}_i = -\lambda_j\, \frac{\partial}{\partial q^i} f_{\mathrm{aug}}^j, \qquad \lambda_i(T) = \frac{\partial}{\partial q^i} C_{\mathrm{aug}}\big(x_{\mathrm{aug}}(T)\big).$
Split the co-state into $\lambda_q, \lambda_\theta, \lambda_L, \lambda_t$; then their components' dynamics are as follows:
$\dot{\lambda}_{q,i} = -\frac{\partial}{\partial q^i}\Big[\lambda_{q,j}\, f_\theta^j(q, t) + \lambda_L\, r(q, \theta, t)\Big], \qquad \lambda_q(T) = \frac{\partial F}{\partial q}\big(x(T), \theta\big),$
$\dot{\lambda}_{\theta,i} = -\frac{\partial}{\partial \theta^i}\Big[\lambda_{q,j}\, f_\theta^j(q, t) + \lambda_L\, r(q, \theta, t)\Big], \qquad \lambda_\theta(T) = \frac{\partial F}{\partial \theta}\big(x(T), \theta\big),$
$\dot{\lambda}_L = 0, \qquad \lambda_L(T) = \frac{\partial}{\partial L} C_{\mathrm{aug}}\big(x_{\mathrm{aug}}(T)\big) = 1,$
$\dot{\lambda}_t = -\frac{\partial}{\partial t}\Big[\lambda_{q,j}\, f_\theta^j(q, t) + \lambda_L\, r(q, \theta, t)\Big], \qquad \lambda_t(T) = \frac{\partial}{\partial t} C_{\mathrm{aug}}\big(x_{\mathrm{aug}}(T)\big) = 0.$
The component $\lambda_L = 1$ is constant, so Equation (52) coincides with (46). Integrating (53) from $s = 0$ to $s = T$ recovers Equation (44). The component $\lambda_t$ does not appear in any of the other equations, so Equation (55) may be ignored. □
In summary, the above proof depends on identifying a suitable augmented manifold $\mathcal{M}_{\mathrm{aug}}$ such that the augmented dynamics $f_{\mathrm{aug}} \in \mathfrak{X}(\mathcal{M}_{\mathrm{aug}})$ are autonomous and the cost function $C_{\mathrm{aug}} : \mathcal{M}_{\mathrm{aug}} \to \mathbb{R}$ on the augmented manifold rephrases the cost (43) as a final cost $C_{\mathrm{aug}}\big(x_{\mathrm{aug}}(T)\big)$. This allows Theorem 1 to be applied to derive the corresponding adjoint method. In later sections (Section 3.2), this process will be the main technical tool for generalizations of Theorem 2. The next sections describe common special cases of (38) and Theorem 2.
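The augmentation in the proof above can be mirrored directly in code. The sketch below applies Theorem 2 on $\mathcal{M} = \mathbb{R}^2$ with a linearly parameterized field $f_\theta(q) = Wq$, a quadratic running cost, and a quadratic final cost, and checks the resulting parameter gradient against finite differences; the model and costs are illustrative stand-ins for a neural network.

```python
# Sketch of Theorem 2 on M = R^2 with f_theta(q) = W q (theta = entries of W),
# running cost r = 0.5*|q|^2 and final cost F = 0.5*|q(T) - q_star|^2.
import numpy as np
from scipy.integrate import solve_ivp

W = np.array([[0.0, 1.0], [-1.0, -0.2]])
q_star = np.array([0.5, 0.0])
T, x0 = 2.0, np.array([1.0, 0.0])
tol = dict(rtol=1e-10, atol=1e-10)

def cost(W):
    """C = F(q(T)) + int_0^T r(q(s)) ds, computed by augmenting with L."""
    def fwd(t, z):
        q = z[:2]
        return np.concatenate([W @ q, [0.5 * q @ q]])
    z = solve_ivp(fwd, (0.0, T), np.concatenate([x0, [0.0]]), **tol).y[:, -1]
    return 0.5 * np.sum((z[:2] - q_star) ** 2) + z[2], z[:2]

C0, qT = cost(W)

# Backward pass of Eqs. (45)-(46) and (53): z = (q, lambda_q, lambda_W).
def bwd(t, z):
    q, lam = z[:2], z[2:4]
    dq = W @ q
    dlam = -W.T @ lam - q                  # -d_q(lam . f) - d_q r
    dlamW = -np.outer(lam, q).ravel()      # -d_W(lam . f + r)
    return np.concatenate([dq, dlam, dlamW])

zT = np.concatenate([qT, qT - q_star, np.zeros(4)])  # lam(T)=dF/dq, lam_W(T)=0
grad = solve_ivp(bwd, (T, 0.0), zT, **tol).y[4:, -1].reshape(2, 2)

# Finite-difference check of dC/dW.
eps, fd = 1e-6, np.zeros((2, 2))
for j in range(2):
    for k in range(2):
        E = np.zeros((2, 2)); E[j, k] = eps
        fd[j, k] = (cost(W + E)[0] - cost(W - E)[0]) / (2 * eps)
print("adjoint gradient:\n", grad, "\nfinite differences:\n", fd)
```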

3.1.1. Vanilla Neural ODEs and Extrinsic Neural ODEs on Manifolds

The case of neural ODEs on $\mathbb{R}^n$ (e.g., [2]) is recovered by setting $\mathcal{M} = \mathbb{R}^n$. Scalar functions, vector fields, and tensor fields are then readily expressed; see Table 2.
There is an overlap with extrinsic neural ODEs on manifolds (described, for instance, in [19]), which optimize the neural ODE on an embedding space $\mathbb{R}^N$; see also Figure 2.
We denote the embedding as $\iota : \mathcal{M} \to \mathbb{R}^N$, with $x \in \mathcal{M}$ and $y \in \mathbb{R}^N$. Optimizing the neural ODE on $\mathbb{R}^N$ requires extending the dynamics $f_\theta(\cdot, t) \in \mathfrak{X}(\mathcal{M})$ to a vector field $\bar{f}_\theta(\cdot, t) \in \mathfrak{X}(\mathbb{R}^N)$ such that
$\iota_* f_\theta(x, t) = \bar{f}_\theta\big(\iota(x), t\big).$
The dynamics $\bar{f}_\theta(y, t)$ are then used in Theorem 2, and the co-state accordingly lives in $T^*\mathbb{R}^N$.
As shown in [19], the resulting parameter gradients are equivalent to those resulting from an application in local charts, as long as it can be guaranteed that the integral curves of $\bar{f}_\theta(y, t)$ remain within $\iota(\mathcal{M}) \subset \mathbb{R}^N$, i.e., that the integration is geometrically exact. Geometric exactness has to be guaranteed separately, either by integration in local charts [18] or by stabilization techniques [55].
A strong upside of the extrinsic formulation is that existing neural ODE packages (e.g., [56]) can be applied directly. A downside is that finding $\bar{f}_\theta(y, t)$ may not be immediate, since tangency to $\iota(\mathcal{M})$ is required; see also Table 2. Finally, the extrinsic dimension $N$ can be much larger than the intrinsic dimension $n = \dim \mathcal{M}$, leading to computational overhead that does not fully exploit the manifold hypothesis. Extrinsic methods are the preferred choice when the intrinsic dimension $n$ is small and there is a known embedding $\iota(\mathcal{M}) \subset \mathbb{R}^N$ with low extrinsic dimension $N$. Then the computational overhead due to $N > n$ is negligible, and stabilization techniques [55] can be applied to guarantee geometrically exact integration.
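As an example of constructing an extrinsic extension $\bar{f}_\theta$, the sketch below takes an arbitrary field on $\mathbb{R}^3$, projects it onto the tangent spaces of the embedded sphere $S^2$, and adds a simple drift term that pulls trajectories back onto the sphere; the raw field and the gain are illustrative, and the specific stabilization technique of [55] may differ.

```python
# Extrinsic vector field on the sphere S^2 embedded in R^3: an arbitrary
# parameterized field on R^3 is made tangent to the sphere by projection,
# with an optional drift term that pulls trajectories back onto |y| = 1.
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.1, 1.0, 0.2],
              [-1.0, 0.0, 0.3],
              [-0.2, -0.3, -0.1]])           # stand-in for an NN field on R^3

def f_extrinsic(t, y, k=5.0):
    raw = A @ y                               # field on the embedding space
    tangent = raw - (y @ raw) / (y @ y) * y   # project onto T_y S^2
    return tangent - k * (y @ y - 1.0) * y    # mild pull-back to the sphere

y0 = np.array([1.0, 0.0, 0.0])
sol = solve_ivp(f_extrinsic, (0.0, 10.0), y0, rtol=1e-8, atol=1e-8)
print("final |y| =", np.linalg.norm(sol.y[:, -1]))   # stays close to 1
```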

3.1.2. Intrinsic Neural ODEs on Manifolds

The intrinsic case of neural ODEs on manifolds [18] is obtained by integrating the dynamics in local charts; see also Figure 3.
The advantage of intrinsic over extrinsic neural ODEs on manifolds is that the dimension of the resulting equations is as low as possible for a given manifold. The flexibility of chart representations gives intrinsic neural ODEs on manifolds the power to represent high-dimensional data distributions at their latent dimension; see especially [22] for learning charts from data and [18,21] for chart-switching methods during numerical integration. Numerical integration in local charts is also geometrically exact by default. However, the parameterization of scalar functions, vector fields, and tensor fields with neural networks in local charts, as well as their differentiation with respect to parameters, presents a source of complexity. There are three common methods to parameterize scalar-valued functions $V \in C^\infty(\mathcal{M})$ in local charts (vector fields and tensor-valued functions directly follow by parameterizing their scalar component functions in an analogous way):
  • A partition of unity $\sigma_i : \mathbb{R}^n \to \mathbb{R}$ with respect to a collection of charts $(U_i, Q_i)$ can be used to sum over chart components $V_i : \mathbb{R}^n \to \mathbb{R}$ as $V(x) := \sum_i \sigma_i\big(Q_i(x)\big)\, V_i\big(Q_i(x)\big)$; see the examples in [49], Chapter 2, and [24], as well as the sketch following this list.
  • The function $V$ can be directly defined by chart representatives $V_i := V \circ Q_i^{-1}$, enforcing compatibility between overlapping charts $(U_i, Q_i)$, $(U_j, Q_j)$ by soft constraints, implemented as additional cost terms $\big\| V_i(q_i) - V_j \circ Q_j \circ Q_i^{-1}(q_i) \big\|$ that are minimized on chart overlaps $U_i \cap U_j$; see [18,22].
  • Given an embedding $\iota : \mathcal{M} \to \mathbb{R}^N$ and $\bar{V} \in C^\infty(\mathbb{R}^N)$, an extrinsic representation $V_i := \bar{V} \circ \iota \circ Q_i^{-1}$ is possible; see [19,21].
Advantages and disadvantages are summarized in Table 3.
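A minimal sketch of the first option, a partition-of-unity parameterization of a scalar function on $S^1$ with two angle charts; the bump functions and the chart components $V_i$ (stand-ins for small neural networks) are illustrative choices.

```python
# Partition-of-unity parameterization of a scalar function V on S^1.
import numpy as np

wrap = lambda a: np.arctan2(np.sin(a), np.cos(a))

# Chart 1: angle from (1,0) on S^1 \ {(-1,0)}; chart 2: angle from (-1,0).
Q = [lambda x: wrap(np.arctan2(x[1], x[0])),
     lambda x: wrap(np.arctan2(-x[1], -x[0]))]

def bump(q, width=2.5):          # smooth weight, compactly supported around 0
    s = np.clip(np.abs(q) / width, 0.0, 1.0)
    return np.where(s < 1.0, np.exp(1.0 - 1.0 / (1.0 - s**2 + 1e-12)), 0.0)

V_chart = [lambda q: np.sin(q) + 0.1 * q**2,      # chart component V_1
           lambda q: np.cos(2.0 * q)]             # chart component V_2

def V(x):
    """V(x) = sum_i sigma_i(Q_i(x)) V_i(Q_i(x)) with normalized weights."""
    qs = [Qi(x) for Qi in Q]
    w = np.array([bump(q) for q in qs])
    w = w / w.sum()                                # partition of unity
    return sum(wi * Vi(qi) for wi, Vi, qi in zip(w, V_chart, qs))

x = np.array([np.cos(1.5), np.sin(1.5)])   # a point in the overlap of charts
print("V(x) =", V(x))
```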
In available state-of-the-art packages for neural ODEs, the chart dynamics can be phrased as discontinuous dynamics with state transitions $Q_1 \circ Q_2^{-1} : \mathbb{R}^n \to \mathbb{R}^n$, but the implementation is not yet streamlined for local charts, chart transitions of the co-state (cf. Section 2.3), or custom adjoint sensitivity equations.

3.1.3. Structure Preservation

Structure-preserving architectures narrow the class of learnable neural ODEs from arbitrary vector fields in $\mathfrak{X}(\mathcal{M})$ to subsets of $\mathfrak{X}(\mathcal{M})$ with particular properties, improving training speed and performance (cf. Section 1.1). Examples of such subsets are (symmetry-preserving) equivariant dynamical systems [23] and (physics-preserving) Hamiltonian, Lagrangian, and port-Hamiltonian dynamical systems [3]. Given that a structure-preserving parameterization of the neural ODE is known in closed form, these are readily implemented in the above formalism.
For example, reusing results from Table 2 and Table 3, Hamiltonian and Lagrangian neural ODEs [30,32] are fully determined by scalar functions $H_\theta \in C^\infty(T^*\mathcal{Q})$ and $L_\theta \in C^\infty(T\mathcal{Q})$, respectively, and their gradients. Hamiltonian neural ODEs are advantageous for joint learning of the dynamics and energy of conservative physical systems, where the learned Hamiltonian vector fields $X_{H_\theta} \in \mathfrak{X}(T^*\mathcal{Q})$ are guaranteed to conserve the Hamiltonian $H_\theta$ representing the total energy. Lagrangian neural ODEs likewise enable learning the dynamics of conservative physical systems and allow dissipative terms to be incorporated [34], but they do not directly represent the total energy.
Port-Hamiltonian neural ODEs [3,35] offer further expressiveness: besides a scalar Hamiltonian $H_\theta \in C^\infty(\mathcal{M})$, they offer degrees of freedom in a skew-symmetric $(2,0)$-tensor $J_\theta$ (called a Poisson tensor), a positive-definite $(2,0)$-tensor $R_\theta$ (called a dissipation tensor), and a linear input map $B_\theta(x) : \mathbb{R}^k \to T_x\mathcal{M}$. This allows learning the dynamics of non-conservative dynamical systems that can be coupled with known physical systems and control inputs through the input map, while jointly learning the total energy $H_\theta$, the rate of energy dissipation $R_\theta(\mathrm{d}H_\theta, \mathrm{d}H_\theta)$, and the externally supplied power (see [57] for an introductory overview). Most physical systems can be represented in port-Hamiltonian form [58], giving this parametrization a high degree of expressiveness that has been used in dynamics learning [3], control [20,21], and model order reduction [35]. Albeit not investigated in practice, this expressiveness may also be a disadvantage compared to Lagrangian or Hamiltonian neural networks, resulting in overfitting when, e.g., small dissipation terms are learned where there is no dissipation. Generally speaking, choosing the most specific structure-preserving neural network available is advised.
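As an illustration of such structure-preserving parameterizations, the sketch below builds a port-Hamiltonian field on $\mathbb{R}^n$ in which skew-symmetry of $J_\theta$ and positive semi-definiteness of $R_\theta$ hold by construction, so that the energy balance $\dot{H} \leq 0$ (for $u = 0$) is guaranteed regardless of the parameter values; the quadratic Hamiltonian and all parameter shapes are illustrative.

```python
# Port-Hamiltonian parameterization on R^n: x_dot = (J - R) grad H(x) + B u,
# with J skew-symmetric and R positive semi-definite by construction.
import numpy as np

rng = np.random.default_rng(1)
n, n_u = 4, 1
theta = {
    "J_raw": rng.standard_normal((n, n)),
    "L": 0.1 * rng.standard_normal((n, n)),      # R = L L^T >= 0
    "B": rng.standard_normal((n, n_u)),
    "S": np.eye(n),                               # H(x) = 0.5 x^T S^T S x
}

def grad_H(x, theta):
    S = theta["S"]
    return S.T @ S @ x                            # gradient of the Hamiltonian

def f_pH(x, u, theta):
    J = theta["J_raw"] - theta["J_raw"].T         # skew-symmetric by construction
    R = theta["L"] @ theta["L"].T                 # positive semi-definite
    return (J - R) @ grad_H(x, theta) + theta["B"] @ u

x, u = rng.standard_normal(n), np.zeros(n_u)
dH = grad_H(x, theta)
print("dH/dt along the flow (u = 0):", dH @ f_pH(x, u, theta), "<= 0")
```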

3.2. Extensions

The proof of Theorem 2 depended on identifying a suitable augmented manifold $\mathcal{M}_{\mathrm{aug}}$, autonomous augmented dynamics $f_{\mathrm{aug}} \in \mathfrak{X}(\mathcal{M}_{\mathrm{aug}})$, and an augmented cost function $C_{\mathrm{aug}} : \mathcal{M}_{\mathrm{aug}} \to \mathbb{R}$ that rephrases the cost (43) as a final cost $C_{\mathrm{aug}}\big(x_{\mathrm{aug}}(T)\big)$, so that Theorem 1 can be applied. This approach generalizes to various other scenarios, including different cost terms, augmented neural ODEs on manifolds, and time-dependent parameters, presented in the following.

3.2.1. Nonlinear and Intermittent Cost Terms

We consider here the case of neural ODEs on manifolds of the form (38) with cost (39). This is a generalization of [1], in which intermittent cost terms appear for neural ODEs on $\mathbb{R}^n$. For the final and intermittent cost term $F_\theta : \mathcal{M} \times \mathcal{M} \times \cdots \times \mathcal{M} \to \mathbb{R}$, we denote by $\mathrm{d}_k F_\theta \in T_x^*\mathcal{M}$ the differential with respect to the $k$-th slot, and we write $\theta$ as a subscript to avoid confusion. The components of $\mathrm{d}_k F_\theta$ will be denoted $\frac{\partial F_\theta^k}{\partial q^i}$. In this case, the parameter gradient is determined by repeated application of Theorem 2:
Theorem 3
(Generalized Adjoint Method on Manifolds). Given the dynamics (38) and the cost (39), the components of the parameter gradient $\nabla_\theta C_{f_\theta}^T(x_0, t_0)$ with respect to $\theta \in \mathbb{R}^{n_\theta}$ are computed by
$\nabla_\theta C_{f_\theta}^T(x_0, t_0) = \frac{\partial F_\theta}{\partial \theta}\big(x(T_1), x(T_2), \ldots, x(T)\big) + \int_0^T \frac{\partial}{\partial \theta}\Big[\lambda_j\, f_\theta^j\big(q(s)\big) + r\big(q(s), \theta, s\big)\Big]\, \mathrm{d}s,$
where the state $x(s) \in \mathcal{M}$ satisfies (45) and the co-state $\lambda(s) \in T_{x(s)}^*\mathcal{M}$ satisfies dynamics with discrete updates at times $T_1, \ldots, T_{N-1}$, given by
$\dot{\lambda}_{q,i} = -\frac{\partial}{\partial q^i}\Big[\lambda_{q,j}\, f_\theta^j(q, t) + r(q, \theta, t)\Big]; \qquad \lambda_{q,i}(T) = \frac{\partial F_\theta^N}{\partial q^i}\big(x(T_1), \ldots, x(T)\big),$
$\lambda_i(T_{k,-}) = \lambda_i(T_{k,+}) + \frac{\partial F_\theta^k}{\partial q^i}\big(x(T_1), \ldots, x(T)\big),$
with $T_{k,-}$ the instant after a discrete update at time $T_k$ (recall that the co-state dynamics are integrated backwards in time, so $T_{k,-} < T_k < T_{k,+}$) and $T_{k,+}$ the instant before.
Proof. 
We introduce an augmented manifold $\mathcal{M}_{\mathrm{aug}} = \mathcal{M} \times \cdots \times \mathcal{M} \times \mathbb{R}^{n_\theta} \times \mathbb{R} \times \mathbb{R}$ to include $N$ copies of the original state $x \in \mathcal{M}$, parameters $\theta \in \mathbb{R}^{n_\theta}$, accumulated running cost $L \in \mathbb{R}$, and time $t \in \mathbb{R}$ in the augmented state $x_{\mathrm{aug}} := (x_1, \ldots, x_N, \theta, L, t) \in \mathcal{M}_{\mathrm{aug}}$. Let
$\varrho_{T_i}(t) = \begin{cases} 1 & t \leq T_i, \\ 0 & t > T_i, \end{cases}$
and define the augmented dynamics $f_{\mathrm{aug}} \in \mathfrak{X}(\mathcal{M}_{\mathrm{aug}})$ as
$\dot{x}_{\mathrm{aug}} = f_{\mathrm{aug}}(x_{\mathrm{aug}}) = \begin{pmatrix} \varrho_{T_1}(t)\, f_\theta(x_1, t) \\ \vdots \\ \varrho_{T_{N-1}}(t)\, f_\theta(x_{N-1}, t) \\ f_\theta(x_N, t) \\ 0 \\ r(x_N, \theta, t) \\ 1 \end{pmatrix}, \qquad x_{\mathrm{aug}}(0) = x_{\mathrm{aug},0} := \begin{pmatrix} x_0 \\ \vdots \\ x_0 \\ x_0 \\ \theta \\ 0 \\ t_0 \end{pmatrix}.$
This is an autonomous system with final state
$x_{\mathrm{aug}}(T) = \Big(x(T_1), \ldots, x(T_{N-1}), x(T), \theta, \int_0^T r(x, \theta, s)\, \mathrm{d}s, T\Big).$
Next, define the cost $C_{\mathrm{aug}} : \mathcal{M}_{\mathrm{aug}} \to \mathbb{R}$ on the augmented space:
$C_{\mathrm{aug}}(x_{\mathrm{aug}}) = F_\theta(x_1, \ldots, x_N) + L.$
Then Equation (39) can be rewritten as the evaluation of a terminal cost $C_{\mathrm{aug}}\big(x_{\mathrm{aug}}(T)\big)$:
$C_{f_\theta}^T(x_0) = \big(C_{\mathrm{aug}} \circ \Psi_{f_{\mathrm{aug}}}^T\big)(x_{\mathrm{aug},0}).$
Apply Equation (26), and split the co-state into $\lambda_{q_1}, \ldots, \lambda_{q_N}, \lambda_\theta, \lambda_L, \lambda_t$; then their components' dynamics are as follows:
$\dot{\lambda}_{q_1,i} = -\frac{\partial}{\partial q_1^i}\Big[\lambda_{q_1,j}\, \varrho_{T_1}(t)\, f_\theta^j(q_1, t)\Big], \qquad \lambda_{q_1}(T) = \frac{\partial F_\theta^1}{\partial q}\big(x(T_1), \ldots, x(T)\big),$
$\vdots \qquad \dot{\lambda}_{q_N,i} = -\frac{\partial}{\partial q_N^i}\Big[\lambda_{q_N,j}\, f_\theta^j(q_N, t) + \lambda_L\, r(q_N, \theta, t)\Big], \qquad \lambda_{q_N}(T) = \frac{\partial F_\theta^N}{\partial q}\big(x(T_1), \ldots, x(T)\big),$
$\dot{\lambda}_{\theta,i} = -\frac{\partial}{\partial \theta^i}\Big[\lambda_{q_1,j}\, \varrho_{T_1}(t)\, f_\theta^j(q_1, t) + \cdots + \lambda_{q_N,j}\, f_\theta^j(q_N, t) + \lambda_L\, r(q, \theta, t)\Big], \qquad \lambda_\theta(T) = \frac{\partial F_\theta}{\partial \theta}\big(x(T_1), \ldots, x(T)\big).$
We excluded the dynamics of $\lambda_t$, which does not appear in any of the other equations, and the constant $\lambda_L = 1$. Finally, define the cumulative co-state
$\lambda_q = \varrho_{T_1}(t)\, \lambda_{q_1} + \cdots + \lambda_{q_N}.$
Its dynamics at $t \in [0, T] \setminus \{T_1, \ldots, T_{N-1}\}$ are given by the sum of (65) to (66), letting $q = q_N$:
$\dot{\lambda}_{q,i} = \dot{\lambda}_{q_1,i} + \cdots + \dot{\lambda}_{q_N,i}$
$= -\frac{\partial}{\partial q^i}\Big[\lambda_{q,j}\, f_\theta^j(q, t) + r(q, \theta, t)\Big],$
$\lambda_q(T) = \frac{\partial F_\theta^N}{\partial q}\big(x(T_1), \ldots, x(T)\big),$
with discrete jumps (58) accounting for the final conditions of $\lambda_{q_1}, \ldots, \lambda_{q_N}$, and the dynamics (67) can be rewritten as
$\dot{\lambda}_{\theta,i} = -\frac{\partial}{\partial \theta^i}\Big[\lambda_{q,j}\, f_\theta^j(q, t) + r(q, \theta, t)\Big]; \qquad \lambda_\theta(T) = \frac{\partial F_\theta}{\partial \theta}\big(x(T_1), \ldots, x(T)\big).$
Integrating this from $s = 0$ to $s = T$ recovers Equation (57). □
Cost terms of this form are interesting for the optimization of, e.g., periodic orbits [59] or trajectories on manifolds, where conditions at multiple checkpoints $\Psi_{f_\theta}^{T_i}(x_0)$ may appear in the cost.

3.2.2. Augmented Neural ODEs on Manifolds and Time-Dependent Parameters

With state $x \in \mathcal{M}$, augmented state $\alpha \in \mathcal{N}$ (not to be confused with the augmented state $x_{\mathrm{aug}} \in \mathcal{M}_{\mathrm{aug}}$ used in the proofs), and a parameterized map $\varphi_\theta : \mathcal{M} \to \mathcal{N}$, augmented neural ODEs on manifolds are neural ODEs on the manifold $\mathcal{M} \times \mathcal{N}$ of the form
$\begin{pmatrix} \dot{x} \\ \dot{\alpha} \end{pmatrix} = \begin{pmatrix} f_\theta(x, \alpha) \\ g_\theta(x, \alpha) \end{pmatrix}; \qquad \begin{pmatrix} x(0) \\ \alpha(0) \end{pmatrix} = \begin{pmatrix} x_0 \\ \varphi_\theta(x_0) \end{pmatrix}.$
Time $t$ is not included explicitly in these dynamics, since it can be included in $\alpha$. This case also covers the scenario of time-dependent parameters $\bar{\theta}(t)$ as part of $\alpha$. As the trajectory cost, we take a final cost
$C_{f_\theta, g_\theta}^T(x_0, \theta) = F\big(\Psi_{f_\theta, g_\theta}^T\big(x_0, \varphi_\theta(x_0)\big), \theta\big).$
This is a generalization of [11,12].
Theorem 4
(Adjoint Method for Augmented Neural ODEs on Manifolds). Given the dynamics (73) and the cost (74), the components of the parameter gradient $\nabla_\theta C_{f_\theta, g_\theta}^T\big(x_0, \varphi_\theta(x_0)\big)$ with respect to $\theta \in \mathbb{R}^{n_\theta}$ are computed by
$\nabla_\theta C_{f_\theta, g_\theta}^T\big(x_0, \varphi_\theta(x_0)\big) = \frac{\partial F}{\partial \theta}\big(x(T), \alpha(T), \theta\big) + \frac{\partial \varphi^j}{\partial \theta}\, \lambda_{\alpha,j}(0) + \int_0^T \frac{\partial}{\partial \theta}\Big[\lambda_{x,j}\, f_\theta^j\big(q(s)\big) + \lambda_{\alpha,j}\, g_\theta^j\big(q(s)\big)\Big]\, \mathrm{d}s,$
where the states $x(s) \in \mathcal{M}$, $\alpha(s) \in \mathcal{N}$ satisfy (73), and the co-states $\lambda_x(s) \in T_{x(s)}^*\mathcal{M}$, $\lambda_\alpha(s) \in T_{\alpha(s)}^*\mathcal{N}$ satisfy, in local charts $(U, Q)$ on $\mathcal{M}$ and $(\bar{U}, \bar{Q})$ on $\mathcal{N}$,
$\dot{\lambda}_{x,i} = -\frac{\partial}{\partial q^i}\Big[\lambda_{x,j}\, f_\theta^j(q, \bar{q}, t) + \lambda_{\alpha,j}\, g_\theta^j(q, \bar{q}, t)\Big], \qquad \lambda_{x,i}(T) = \frac{\partial F}{\partial q^i}\big(x(T), \alpha(T), \theta\big),$
$\dot{\lambda}_{\alpha,i} = -\frac{\partial}{\partial \bar{q}^i}\Big[\lambda_{x,j}\, f_\theta^j(q, \bar{q}, t) + \lambda_{\alpha,j}\, g_\theta^j(q, \bar{q}, t)\Big], \qquad \lambda_{\alpha,i}(T) = \frac{\partial F}{\partial \bar{q}^i}\big(x(T), \alpha(T), \theta\big).$
Proof. 
Define the augmented state space as $\mathcal{M}_{\mathrm{aug}} = \mathcal{M} \times \mathcal{N} \times \mathbb{R}^{n_\theta}$ to include the states $x \in \mathcal{M}$, $\alpha \in \mathcal{N}$ and parameters $\theta \in \mathbb{R}^{n_\theta}$ in the augmented state $x_{\mathrm{aug}} := (x, \alpha, \theta) \in \mathcal{M}_{\mathrm{aug}}$. In addition, define the augmented dynamics $f_{\mathrm{aug}} \in \mathfrak{X}(\mathcal{M}_{\mathrm{aug}})$ as
$\dot{x}_{\mathrm{aug}} = f_{\mathrm{aug}}(x_{\mathrm{aug}}) = \begin{pmatrix} f_\theta(x, \alpha) \\ g_\theta(x, \alpha) \\ 0 \end{pmatrix}, \qquad x_{\mathrm{aug}}(0) = x_{\mathrm{aug},0} := \begin{pmatrix} x_0 \\ \varphi_\theta(x_0) \\ \theta \end{pmatrix}.$
This is an autonomous system with final state $x_{\mathrm{aug}}(T) = \big(x(T), \alpha(T), \theta\big)$. Next, define the cost $C_{\mathrm{aug}} : \mathcal{M}_{\mathrm{aug}} \to \mathbb{R}$ on the augmented space:
$C_{\mathrm{aug}}(x_{\mathrm{aug}}) = F(x, \alpha, \theta).$
Then Equation (74) can be rewritten as the evaluation of a terminal cost $C_{\mathrm{aug}}\big(x_{\mathrm{aug}}(T)\big)$. The gradient $\mathrm{d}\big(C_{\mathrm{aug}} \circ \Psi_{f_{\mathrm{aug}}}^T\big)$ is given by an application of Equation (26). Split the co-state into $\lambda_x, \lambda_\alpha, \lambda_\theta$; then their components' dynamics are as follows:
$\dot{\lambda}_{x,i} = -\frac{\partial}{\partial q^i}\Big[\lambda_{x,j}\, f_\theta^j(q, \bar{q}, t) + \lambda_{\alpha,j}\, g_\theta^j(q, \bar{q}, t)\Big], \qquad \lambda_x(T) = \frac{\partial F}{\partial q}\big(x(T), \alpha(T), \theta\big),$
$\dot{\lambda}_{\alpha,i} = -\frac{\partial}{\partial \bar{q}^i}\Big[\lambda_{x,j}\, f_\theta^j(q, \bar{q}, t) + \lambda_{\alpha,j}\, g_\theta^j(q, \bar{q}, t)\Big], \qquad \lambda_{\alpha,i}(T) = \frac{\partial F}{\partial \bar{q}^i}\big(x(T), \alpha(T), \theta\big),$
$\dot{\lambda}_{\theta,i} = -\frac{\partial}{\partial \theta^i}\Big[\lambda_{x,j}\, f_\theta^j(q, \bar{q}, t) + \lambda_{\alpha,j}\, g_\theta^j(q, \bar{q}, t)\Big], \qquad \lambda_\theta(T) = \frac{\partial F}{\partial \theta}\big(x(T), \alpha(T), \theta\big).$
Since $\alpha(0) = \varphi_\theta(x_0)$ also depends on $\theta$, the total gradient of the cost with respect to $\theta$ is given by
$\frac{\partial}{\partial \theta^i}\, C_{f_\theta}^T\big(x_0, \varphi_\theta(x_0)\big) = \lambda_{\theta,i}(0) + \frac{\partial \varphi^j}{\partial \theta^i}\, \lambda_{\alpha,j}(0).$
Integrate (82) to find $\lambda_{\theta,i}(0)$; then Equation (75) is recovered. □
Augmented neural ODEs are universal function approximators ([25], Chapter 2). Potential applications of augmented neural ODEs on manifolds include, e.g., the optimization of guiding vector fields for path-following of closed or self-intersecting paths [60], where state augmentation sits at the core of formulating singularity-free guiding vector fields for self-intersecting paths. In the same context, discontinuous initializations $\alpha(0) = \varphi_\theta(x_0)$ allow globally stabilizing controllers to be represented for topologically non-trivial manifolds (e.g., the sphere $S^2$), where smooth controllers are necessarily not globally stable. A further degenerate application of Theorem 4 is obtained by removing $x$, i.e., fixing $x = 0$ and $f_\theta(x, \alpha) = 0$ in Equation (73). Then both the dynamics $g_\theta(\alpha)$ and the initial condition $\alpha(0) = \varphi_\theta(0)$ are parameterized by $\theta$, allowing the parameters and the initial condition to be optimized jointly. This is interesting, e.g., for numerical continuation [59].

4. Neural ODEs on Lie Groups

Just as a neural ODE on a manifold is an NN-parameterized vector field in $\mathfrak{X}(\mathcal{M})$ (or, including time, $\mathfrak{X}(\mathcal{M} \times \mathbb{R})$), a neural ODE on a Lie group can be seen as a parameterized vector field in $\mathfrak{X}(G)$ (or $\mathfrak{X}(G \times \mathbb{R})$). Similarly to Equation (38), this results in the dynamic system
$\dot{g} = f_\theta(g, t), \qquad g(0) = g_0.$
Yet, Lie groups offer more structure than general manifolds: the Lie algebra $\mathfrak{g}$ provides a canonical space to represent tangent vectors, and its dual $\mathfrak{g}^*$ provides a canonical space to represent the co-state. Similarly, canonical (exponential) charts offer a structure for integrating dynamic systems [41]. Frequently, dynamics on a Lie group induce dynamics on a manifold $\mathcal{M}$: by means of an action
$\Phi : G \times \mathcal{M} \to \mathcal{M}; \qquad (g, x) \mapsto \Phi(g, x),$
evolutions $g(t)$ induce evolutions $x(t) = \Phi(g(t), x_0)$ on $\mathcal{M}$. This makes neural ODEs on Lie groups interesting in their own right.
In this section, we describe optimizing (41) for the cost
$C_{f_\theta}^T(g_0, \theta) = F\big(\Psi_{f_\theta}^T(g_0), \theta\big) + \int_0^T r\big(\Psi_{f_\theta}^s(g_0), \theta, s\big)\, \mathrm{d}s,$
with a final cost term F and a running cost term r. We highlight the extrinsic approach and two intrinsic approaches, where one of the latter is particular to Lie groups.

4.1. Extrinsic Neural ODEs on Lie Groups

The extrinsic formulation of neural ODEs on Lie groups was first introduced by [20] and applies ideas of [54] (see also Section 3.1.1). Given $G \subseteq GL(m, \mathbb{R})$, this formulation treats the dynamic system (84) as a dynamic system on $\mathbb{R}^{m^2}$. Denote $\mathrm{vec} : \mathbb{R}^{m \times m} \to \mathbb{R}^{m^2}$ as an invertible map that stacks the components of an input matrix into a component vector (in canonical coordinates on $\mathbb{R}^{m \times m}$ and $\mathbb{R}^{m^2}$, though this choice is not required), and let $\mathrm{proj}_G : \mathbb{R}^{m \times m} \to G$ be a projection onto $G \subset \mathbb{R}^{m \times m}$. Further denote $A_y = \mathrm{vec}^{-1}(y)$ and $g_y = \mathrm{proj}_G(A_y)$. A lift $\bar{f}_\theta(y, t)$ can then be defined as
$\bar{f}_\theta(y, t) = \mathrm{vec}\big(A_y\, g_y^{-1}\, f(g_y, \theta, t)\big).$
As was the case for extrinsic neural ODEs on manifolds, the cost gradient resulting from this optimization is well-defined and equivalent to that of any intrinsically defined procedure. However, the dimension $m^2$ of the vectorization can be significantly larger than the intrinsic dimension of the Lie group.
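A sketch of the lift (87) for $G = SO(3)$ embedded in $\mathbb{R}^9$: vec is realized by stacking the matrix entries, $\mathrm{proj}_G$ by an SVD-based projection onto the nearest rotation (one possible choice), and a constant body velocity stands in for a parameterized field. The small drift off the group, visible in the orthogonality defect, is what the stabilization techniques mentioned above address.

```python
# Extrinsic lift (87) for G = SO(3) embedded in R^9.
import numpy as np
from scipy.integrate import solve_ivp

def Lam(A):                                    # R^3 -> so(3)
    return np.array([[0.0, -A[2], A[1]],
                     [A[2], 0.0, -A[0]],
                     [-A[1], A[0], 0.0]])

def proj_SO3(A):                               # nearest rotation via SVD
    U, _, Vt = np.linalg.svd(A)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])
    return U @ D @ Vt

def rho(g, t):                                 # stand-in for rho_theta^L(g, t)
    return np.array([0.2, 1.0, -0.3])

def f_lift(t, y):
    A_y = y.reshape(3, 3)
    g_y = proj_SO3(A_y)
    # f(g, t) = g Lam(rho(g, t)), so g^{-1} f(g, t) = Lam(rho(g, t))
    return (A_y @ Lam(rho(g_y, t))).ravel()

y0 = np.eye(3).ravel()
yT = solve_ivp(f_lift, (0.0, 5.0), y0, rtol=1e-9, atol=1e-9).y[:, -1]
gT = yT.reshape(3, 3)
print("orthogonality defect |g^T g - I| =",
      np.linalg.norm(gT.T @ gT - np.eye(3)))
```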

4.2. Intrinsic Neural ODEs on Lie Groups

Theorem 2 directly applies to the optimization of neural ODEs on Lie groups, given the local exponential charts (20) and (21) on $G$. However, this does not make full use of the available structure on Lie groups. Frequently, dynamical systems are of the left-invariant form (88) or the right-invariant form (89):
$\dot{g} = g\, \Lambda\big(\rho_\theta^L(g, t)\big),$
$\dot{g} = \Lambda\big(\rho_\theta^R(g, t)\big)\, g.$
Denote $K(q) : T_q\mathbb{R}^n \to \mathbb{R}^n$ as the derivative of the exponential map (see [21] for details). Then the chart representatives $f_\theta^i$ in a local exponential chart $(U_h, Q_h)$ are
$f_\theta^{L,i}(q, t) = (K^{-1})_j^i(q)\, \rho_\theta^{L,j}\big(Q_h^{-1}(q)\big),$
$f_\theta^{R,i}(q, t) = (K^{-1})_j^i(q)\, \Big(\mathrm{Ad}_{g^{-1}}\, \rho_\theta^{R}(g)\Big)^j \Big|_{g = Q_h^{-1}(q)}.$
Application of Theorem 2 then requires computing $\frac{\partial}{\partial q^j} f_\theta^{L,i}(q, t)$ or $\frac{\partial}{\partial q^j} f_\theta^{R,i}(q, t)$. But this leads to significant computational overhead due to differentiation of the terms $(K^{-1})_j^i(q)$ (see [21]). Instead of applying Theorem 2, i.e., expressing the dynamics in local charts, the dynamics can also be expressed in the Lie algebra $\mathfrak{g}$. Theorem 1 has a Hamiltonian form, which can be directly transformed into Hamiltonian equations on a Lie group (see also Appendix A). Applying this reasoning to Theorem 2, we arrive at the following form, which foregoes differentiating $(K^{-1})_j^i(q)$:
Theorem 5
(Left Generalized Adjoint Method on Matrix Lie Groups). Given the dynamics (88) and the cost (86), or the dynamics (89) with $\rho_\theta^L(g, t) = \mathrm{Ad}_{g^{-1}}\, \rho_\theta^R(g, t)$, the parameter gradient $\nabla_\theta C_{f_\theta}^T(g_0)$ of the cost is given by the integral equation
$\nabla_\theta C_{f_\theta}^T(g_0) = \frac{\partial F}{\partial \theta}\big(g(T), \theta\big) + \int_0^T \frac{\partial}{\partial \theta}\Big[\lambda_g^\top \rho_\theta^L(g, s) + r(g, \theta, s)\Big]\, \mathrm{d}s,$
where the state $g(t) \in G$ and co-state $\lambda_g(t) \in \mathbb{R}^n$ are the solutions of the system of equations
$\dot{g} = f_\theta(g, t), \qquad g(0) = g_0,$
$\dot{\lambda}_g = -\mathrm{d}_g^L\Big[\lambda_g^\top \rho_\theta^L(g, t) + r(g, \theta, t)\Big] + \mathrm{ad}^*_{\rho_\theta^L(g, t)}\, \lambda_g, \qquad \lambda_g(T) = \mathrm{d}_g^L F\big(g(T), \theta\big).$
Proof. 
This is proven in two steps. First, define the time- and parameter-dependent control Hamiltonian $H_c : T^*\mathcal{M} \times \mathbb{R}^{n_\theta} \times \mathbb{R} \to \mathbb{R}$ as
$H_c(x, \lambda, \theta, t) = \lambda\big(f_\theta(x, t)\big) + r(x, \theta, t) = \lambda_i\, f_\theta^i(q, t) + r(q, \theta, t).$
The equations for the state and co-state dynamics (45) and (46), respectively, of Theorem 2 follow as the Hamiltonian equations on $T^*\mathcal{M}$:
$\dot{q}^j = \frac{\partial H_c}{\partial \lambda_j} = f_\theta^j(q, t),$
$\dot{\lambda}_i = -\frac{\partial H_c}{\partial q^i} = -\lambda_j\, \frac{\partial}{\partial q^i} f_\theta^j(q, t) - \frac{\partial r}{\partial q^i}.$
The integral Equation (44) then reads
$\nabla_\theta C_{f_\theta}^T(x_0, t_0) = \frac{\partial F}{\partial \theta}\big(x(T), \theta\big) + \int_0^T \frac{\partial H_c}{\partial \theta}\, \mathrm{d}t.$
Second, rewrite the control Hamiltonian (95) on a Lie group $G$, i.e., $H_c : T^*G \times \mathbb{R}^{n_\theta} \times \mathbb{R} \to \mathbb{R}$. Substituting $\lambda_g(t) = \Lambda^*\big(L_g^*\, \lambda(t)\big)$ (see also Equation (A6)) induces $H_c : G \times \mathfrak{g}^* \times \mathbb{R}^{n_\theta} \times \mathbb{R} \to \mathbb{R}$,
$H_c(g, \lambda_g, \theta, t) = \lambda_g^\top \rho_\theta^L(g, t) + r(g, \theta, t).$
Finally, Hamilton's equations (96) and (97) are rewritten in their form on a matrix Lie group by means of (A7) and (A8), which recovers Equations (93) and (94):
$\dot{g} = g\, \Lambda\Big(\frac{\partial H_c}{\partial \lambda_g}\Big),$
$\dot{\lambda}_g = -\mathrm{d}_g^L H_c + \mathrm{ad}^*_{\partial H_c / \partial \lambda_g}\, \lambda_g.$
To find the final condition for $\lambda_g$, use that $\lambda_g(t) = \Lambda^*\big(L_g^*\, \lambda(t)\big)$:
$\lambda_g(T) = \Lambda^*\big(L_g^*\, \lambda(T)\big) = \Lambda^*\big(L_g^*\, \mathrm{d}F(g(T), \theta)\big) = \mathrm{d}_g^L F\big(g(T), \theta\big). \qquad \square$
Similar equations also hold on abstract (non-matrix) Lie groups; see [21]. Compared to the extrinsic method of Section 4.1, Theorem 5 has the advantage that the dimension of the co-state $\lambda_g$ is as low as possible. Compared to the chart-based approach on Lie groups, Theorem 5 foregoes differentiating through the terms $K_j^i(q)$, avoiding overhead. Compared to a chart-based approach on general manifolds, the choice of charts is also canonical on Lie groups. Although the Lie group approach foregoes many of the pitfalls of intrinsic neural ODEs on manifolds, implementation in existing neural ODE packages is currently cumbersome: the adjoint sensitivity equation (94) has a non-standard form, requiring adapted dynamics for the co-state $\lambda_g$, and these equations are rarely intended for modification in existing packages. Packages for geometry-preserving integrators on Lie groups, such as [41], are also not readily available for arbitrary Lie groups.
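The following sketch integrates the co-state equation of Theorem 5, as reconstructed above, backwards in time for $G = SO(3)$ with a constant $\rho_\theta^L$ and a final cost only, in which case $\mathrm{d}_g^L H_c = 0$ and the equation reduces to $\dot{\lambda}_g = \mathrm{ad}^*_{\rho}\, \lambda_g$; the left-trivialized differential (23) is approximated by finite differences, and the cost and $\rho$ are illustrative.

```python
# Sketch of the left-trivialized adjoint (93)-(94) on G = SO(3) for a
# left-invariant field g_dot = g Lam(rho) with constant rho and final cost only.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

def Lam(A):                                       # R^3 -> so(3); ad_A = Lam(A)
    return np.array([[0.0, -A[2], A[1]],
                     [A[2], 0.0, -A[0]],
                     [-A[1], A[0], 0.0]])

rho = np.array([0.3, -1.0, 0.5])                  # constant body velocity
g_star = expm(Lam(np.array([0.1, 0.2, -0.4])))
T, g0 = 2.0, np.eye(3)

def F(g):                                         # final cost
    return 0.5 * np.linalg.norm(g - g_star) ** 2

def dL_F(g):                                      # left-trivialized dF, Eq. (23)
    eps, out = 1e-6, np.zeros(3)
    for i in range(3):
        e = np.zeros(3); e[i] = eps
        out[i] = (F(g @ expm(Lam(e))) - F(g @ expm(Lam(-e)))) / (2 * eps)
    return out

gT = g0 @ expm(T * Lam(rho))                      # forward flow

# Backward co-state: lam_dot = ad*_rho lam = Lam(rho)^T lam  (d_g^L H_c = 0).
lamT = dL_F(gT)
lam0 = solve_ivp(lambda t, lam: Lam(rho).T @ lam, (T, 0.0), lamT,
                 rtol=1e-10, atol=1e-10).y[:, -1]

# Finite-difference check of d_{g0}^L (F o Psi^T): perturb g0 in the algebra.
eps, fd = 1e-6, np.zeros(3)
for i in range(3):
    e = np.zeros(3); e[i] = eps
    fd[i] = (F(g0 @ expm(Lam(e)) @ expm(T * Lam(rho)))
             - F(g0 @ expm(Lam(-e)) @ expm(T * Lam(rho)))) / (2 * eps)
print("adjoint lam_g(0):", lam0, "  finite differences:", fd)
```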

4.3. Extensions

The proof of Theorem 5 relied on finding a control Hamiltonian formulation for Theorem 2. This approach generalizes to methods in Section 3.2, which rely on the use of Theorem 1. This is because Theorem 1 itself has a Hamiltonian form ([21,54]).
A further straightforward extension of the methods presented in this section is given by port-Hamiltonian neural ODEs on Lie groups [20]. In [20], these are systems with a configuration on a Lie group $G$ and momentum on $\mathfrak{g}^*$. In terms of the theory presented above, such port-Hamiltonian dynamics can be phrased as a dynamic system on the product Lie group $G \times \mathfrak{g}^*$ (taking vector addition as the group operation on $\mathfrak{g}^*$), recovering both extrinsic [20] and intrinsic [21] port-Hamiltonian neural ODEs on Lie groups.

5. Discussion

We discuss advantages and disadvantages of the main flavors of the presented formulations for manifold neural ODEs, expanding on the previous sections. We focus on extrinsic formulations (embedding the dynamics in $\mathbb{R}^N$) and intrinsic formulations (integrating in local charts). The prior comments can be summarized as follows:
  • The extrinsic formulation is readily implemented if the low-dimensional manifold $\mathcal{M}$ and an embedding into $\mathbb{R}^N$ are known. This comes at the possible cost of geometric inexactness and a higher dimension of the co-state and sensitivity equations.
  • The co-state in the intrinsic formulation generally has a lower dimension, which reduces the dimension of the sensitivity equations. The chart-based formulation also guarantees geometrically exact integration of the dynamics. This comes at the mild cost of having to define local charts and chart transitions.
This dimensionality reduction is unlikely to have a high impact when the manifold $\mathcal{M}$ is known and low-dimensional, e.g., for the sphere $\mathcal{M} = S^2$ or similar manifolds. However, when applying the manifold hypothesis to high-dimensional data, there might be non-trivial latent manifolds for which an embedding is not immediate and whose dimension is much lower than that of the space in which the data live. Then the intrinsic method becomes difficult to avoid. If geometric exactness of the integration is desired, local charts also need to be defined for the extrinsic approach, in which case the intrinsic approach may offer further advantages.
In order to derive neural ODEs on Lie groups, three approaches are possible: the extrinsic and intrinsic formulations on manifolds directly carry over to matrix Lie groups, embedding $G \subseteq GL(m, \mathbb{R})$ in $\mathbb{R}^{m^2}$ or using local exponential charts, respectively. A third option is a novel intrinsic method for neural ODEs on matrix Lie groups, which makes full use of the Lie group structure by phrasing the dynamics on $\mathfrak{g}$ (as is more common on Lie groups) and the co-state on $\mathfrak{g}^*$, avoiding the difficulties of the chart-based formalism in differentiating extra terms.
Prior comments on advantages and disadvantages of these flavors can be summarized as follows:
  • The extrinsic formulation on matrix Lie groups can come at a much higher cost than on general manifolds, since the intrinsic dimension of $G$ can be much lower than $m^2$, which inflates the dimension of the co-state and sensitivity equations. On the other hand, geometrically exact integration procedures are readily available for matrix Lie groups, integrating $\dot{g}$ in local exponential charts.
  • The chart-based formulation on matrix Lie groups struggles when the dynamics are not naturally phrased in local charts. This is common; dynamics are often more naturally phrased on $\mathfrak{g}$. This is alleviated by the algebra-based formulation on matrix Lie groups. Both are intrinsic approaches whose co-state dynamics have the lowest possible dimension. However, the algebra-based approach still lacks a readily available software implementation.
The authors believe that the algebra-based formulation is more convenient in principle and consider software implementations of the algebra-based approach as possible future work.
In summary, we presented a unified, geometric approach to extend various methods for neural ODEs on $\mathbb{R}^n$ to neural ODEs on manifolds and Lie groups. Optimization of neural ODEs on manifolds was based on the adjoint method on manifolds. Given a novel cost function $C$ and neural ODE architecture $f$, the strategy for presenting the results in a unified fashion was to identify a suitable augmented manifold $\mathcal{M}_{\mathrm{aug}}$, augmented dynamics $f_{\mathrm{aug}} \in \mathfrak{X}(\mathcal{M}_{\mathrm{aug}})$, and cost $C_{\mathrm{aug}} : \mathcal{M}_{\mathrm{aug}} \to \mathbb{R}$ such that the original cost function can be rephrased as $C = C_{\mathrm{aug}} \circ \Psi_{f_{\mathrm{aug}}}^T$. To further derive the optimization of intrinsic neural ODEs on Lie groups, we found a Hamiltonian formulation of the adjoint method on manifolds and subsequently transformed it into Hamiltonian equations on a matrix Lie group.

Author Contributions

Conceptualization, Y.P.W.; methodology, Y.P.W.; software, Y.P.W.; validation, Y.P.W.; formal analysis, Y.P.W.; investigation, Y.P.W.; resources, S.S.; data curation, Y.P.W.; writing—original draft preparation, Y.P.W.; writing—review and editing, Y.P.W., F.C., and S.S.; visualization, Y.P.W.; supervision, F.C. and S.S.; project administration, S.S.; funding acquisition, S.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A. Hamiltonian Dynamics on Lie Groups

We briefly review Hamiltonian systems on manifolds and matrix Lie groups (see also [21], App. A1).
Given a manifold $\mathcal{Q}$ with coordinate maps $Q^i : \mathcal{Q} \to \mathbb{R}$ and momenta $p_i$ in the basis $\mathrm{d}Q^i$ on $T_q^*\mathcal{Q}$, we define the symplectic form $\omega \in \Omega^2(T^*\mathcal{Q})$ as
$\omega = \mathrm{d}p_i \wedge \mathrm{d}Q^i.$
Let $Y \in \mathfrak{X}(T^*\mathcal{Q})$; then a Hamiltonian $H \in C^\infty(T^*\mathcal{Q}, \mathbb{R})$ implicitly defines a unique vector field $X_H \in \mathfrak{X}(T^*\mathcal{Q})$ by
$\mathrm{d}H(Y) = \omega(X_H, Y).$
In coordinates, $X_H$ has the components
$\dot{q}^i = \frac{\partial H}{\partial p_i},$
$\dot{p}_i = -\frac{\partial H}{\partial q^i}.$
On a Lie group $G$, the group structure allows the identification $T^*G \cong G \times \mathfrak{g}^* \cong G \times \mathbb{R}^n$, e.g., using the pullback $L_g^* : T_g^*G \to \mathfrak{g}^*$ of the left-translation map $L_g : G \to G$, and $\Lambda^* : \mathfrak{g}^* \to \mathbb{R}^n$, to define $P_g \in \mathbb{R}^n$ as
$P_g = \Lambda^*\big(L_g^*\, P\big).$
Then the left Hamiltonian $H^L : G \times \mathfrak{g}^* \to \mathbb{R}$ is defined in terms of $H : T^*G \to \mathbb{R}$ as
$H^L(g, P_g) = H(g, P).$
For a matrix Lie group, the left Hamiltonian equations read as follows:
$\dot{g} = g\, \Lambda\Big(\frac{\partial H^L}{\partial P_g}\Big),$
$\dot{P}_g = -\mathrm{d}_g^L H^L + \mathrm{ad}^*_{\partial H^L / \partial P_g}\, P_g,$
with $\Lambda : \mathbb{R}^n \to \mathfrak{g}$ as in (13) and $\mathrm{d}_g^L H^L \in \mathbb{R}^n$ as in (23).

References

  1. Chen, R.T.Q.; Rubanova, Y.; Bettencourt, J.; Duvenaud, D. Neural Ordinary Differential Equations. In Proceedings of the 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montreal, QC, Canada, 3–8 December 2018; Volume 109, pp. 31–60. Available online: http://arxiv.org/abs/1806.07366 (accessed on 13 August 2025).
  2. Massaroli, S.; Poli, M.; Park, J.; Yamashita, A.; Asama, H. Dissecting neural ODEs. Adv. Neural Inf. Process. Syst. 2020, 2020, 3952–3963. Available online: http://arxiv.org/abs/2002.08071 (accessed on 13 August 2025).
  3. Zakwan, M.; Natale, L.D.; Svetozarevic, B.; Heer, P.; Jones, C.; Trecate, G.F. Physically Consistent Neural ODEs for Learning Multi-Physics Systems. IFAC-PapersOnLine 2023, 56, 5855–5860. [Google Scholar] [CrossRef]
  4. Sholokhov, A.; Liu, Y.; Mansour, H.; Nabi, S. Physics-informed neural ODE (PINODE): Embedding physics into models using collocation points. Sci. Rep. 2023, 13, 10166. [Google Scholar] [CrossRef]
  5. Ghanem, P.; Demirkaya, A.; Imbiriba, T.; Ramezani, A.; Danziger, Z.; Erdogmus, D. Learning Physics Informed Neural ODEs with Partial Measurements. Proc. AAAI Conf. Artif. Intell. 2024, AAAI-25. Available online: http://arxiv.org/abs/2412.08681 (accessed on 13 August 2025). [CrossRef]
  6. Massaroli, S.; Poli, M.; Califano, F.; Park, J.; Yamashita, A.; Asama, H. Optimal Energy Shaping via Neural Approximators. SIAM J. Appl. Dyn. Syst. 2022, 21, 2126–2147. [Google Scholar] [CrossRef]
  7. Niu, H.; Zhou, Y.; Yan, X.; Wu, J.; Shen, Y.; Yi, Z.; Hu, J. On the applications of neural ordinary differential equations in medical image analysis. Artif. Intell. Rev. 2024, 57, 236. [Google Scholar] [CrossRef]
  8. Oh, Y.; Kam, S.; Lee, J.; Lim, D.Y.; Kim, S.; Bui, A.A.T. Comprehensive Review of Neural Differential Equations for Time Series Analysis. arXiv 2025, arXiv:2502.09885. Available online: http://arxiv.org/abs/2502.09885 (accessed on 13 August 2025).
  9. Poli, M.; Massaroli, S.; Scimeca, L.; Chun, S.; Oh, S.J.; Yamashita, A.; Asama, H.; Park, J.; Garg, A. Neural Hybrid Automata: Learning Dynamics with Multiple Modes and Stochastic Transitions. In Proceedings of the 35th Conference on Neural Information Processing Systems (NeurIPS 2021), Online, 6–14 December 2021; Volume 34, pp. 9977–9989. Available online: http://arxiv.org/abs/2106.04165 (accessed on 13 August 2025).
  10. Chen, R.T.Q.; Amos, B.; Nickel, M. Learning Neural Event Functions for Ordinary Differential Equations. In Proceedings of the Ninth International Conference on Learning Representations (ICLR 2021), Virtual, 3–7 May 2021; Available online: http://arxiv.org/abs/2011.03902 (accessed on 13 August 2025).
  11. Davis, J.Q.; Choromanski, K.; Varley, J.; Lee, H.; Slotine, J.J.; Likhosterov, V.; Weller, A.; Makadia, A.; Sindhwani, V. Time Dependence in Non-Autonomous Neural ODEs. arXiv 2020, arXiv:2005.01906. Available online: http://arxiv.org/abs/2005.01906 (accessed on 13 August 2025). [CrossRef]
  12. Dupont, E.; Doucet, A.; Teh, Y.W. Augmented Neural ODEs. In Proceedings of the 33rd International Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, BC, Canada, 8–14 December 2019; Available online: http://arxiv.org/abs/1904.01681 (accessed on 13 August 2025).
  13. Chu, H.; Miyatake, Y.; Cui, W.; Wei, S.; Furihata, D. Structure-Preserving Physics-Informed Neural Networks with Energy or Lyapunov Structure. In Proceedings of the 33rd International Joint Conference on Artificial Intelligence (IJCAI 24), Jeju, Republic of Korea, 3–9 August 2024. [Google Scholar] [CrossRef]
  14. Kütük, M.; Yücel, H. Energy dissipation preserving physics informed neural network for Allen–Cahn equations. J. Comput. Sci. 2025, 87, 102577. [Google Scholar] [CrossRef]
  15. Bullo, F.; Murray, R.M. Tracking for fully actuated mechanical systems: A geometric framework. Automatica 1999, 35, 17–34. [Google Scholar] [CrossRef]
  16. Marsden, J.E.; Ratiu, T.S. Introduction to Mechanics and Symmetry; Springer: New York, NY, USA, 1999; Volume 17. [Google Scholar] [CrossRef]
  17. Whiteley, N.; Gray, A.; Rubin-Delanchy, P. Statistical exploration of the Manifold Hypothesis. arXiv 2025, arXiv:2208.11665. Available online: http://arxiv.org/abs/2208.11665 (accessed on 13 August 2025).
  18. Lou, A.; Lim, D.; Katsman, I.; Huang, L.; Jiang, Q.; Lim, S.N.; De Sa, C. Neural Manifold Ordinary Differential Equations. In Proceedings of the 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Online, 6–12 December 2020; Available online: http://arxiv.org/abs/2006.10254 (accessed on 13 August 2025).
  19. Falorsi, L.; Forré, P. Neural Ordinary Differential Equations on Manifolds. arXiv 2020, arXiv:2006.06663. Available online: http://arxiv.org/abs/2006.06663 (accessed on 13 August 2025). [CrossRef]
  20. Duong, T.; Altawaitan, A.; Stanley, J.; Atanasov, N. Port-Hamiltonian Neural ODE Networks on Lie Groups for Robot Dynamics Learning and Control. IEEE Trans. Robot. 2024, 40, 3695–3715. [Google Scholar] [CrossRef]
  21. Wotte, Y.P.; Califano, F.; Stramigioli, S. Optimal potential shaping on SE(3) via neural ordinary differential equations on Lie groups. Int. J. Robot. Res. 2024, 43, 2221–2244. [Google Scholar] [CrossRef]
  22. Floryan, D.; Graham, M.D. Data-driven discovery of intrinsic dynamics. Nat. Mach. Intell. 2022, 4, 1113–1120. [Google Scholar] [CrossRef]
  23. Andersdotter, E.; Persson, D.; Ohlsson, F. Equivariant Manifold Neural ODEs and Differential Invariants. arXiv 2024, arXiv:2401.14131. Available online: http://arxiv.org/abs/2401.14131 (accessed on 13 August 2025). [CrossRef]
  24. Wotte, Y.P. Optimal Potential Shaping on SE(3) via Neural Approximators. Master’s Thesis, University of Twente, Enschede, The Netherlands, 2021. [Google Scholar]
  25. Kidger, P. On Neural Differential Equations. Ph.D. Thesis, Mathematical Institute, University of Oxford, Oxford, UK, 2022. [Google Scholar]
  26. Gholami, A.; Keutzer, K.; Biros, G. ANODE: Unconditionally Accurate Memory-Efficient Gradients for Neural ODEs. In Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI 19), Macao, 10–16 August 2019; Available online: http://arxiv.org/abs/1902.10298 (accessed on 13 August 2025).
  27. Kidger, P.; Morrill, J.; Foster, J.; Lyons, T.J. Neural Controlled Differential Equations for Irregular Time Series. In Proceedings of the 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Online, 6–12 December 2020; Available online: http://arxiv.org/abs/2005.08926 (accessed on 13 August 2025).
  28. Li, X.; Wong, T.L.; Chen, R.T.Q.; Duvenaud, D. Scalable Gradients for Stochastic Differential Equations. In Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS 2020), Online, 26–28 August 2020; Available online: http://arxiv.org/abs/2001.01328 (accessed on 13 August 2025).
  29. Liu, Y.; Cheng, J.; Zhao, H.; Xu, T.; Zhao, P.; Tsung, F.; Li, J.; Rong, Y. SEGNO: Generalizing Equivariant Graph Neural Networks with Physical Inductive Biases. In Proceedings of the 12th International Conference on Learning Representations (ICLR 2024), Vienna, Austria, 7–11 May 2024; Available online: http://arxiv.org/abs/2308.13212 (accessed on 13 August 2025).
  30. Greydanus, S.; Dzamba, M.; Yosinski, J. Hamiltonian Neural Networks. In Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, BC, Canada, 8–14 December 2019; Available online: http://arxiv.org/abs/1906.01563 (accessed on 13 August 2025).
  31. Finzi, M.; Wang, K.A.; Wilson, A.G. Simplifying Hamiltonian and Lagrangian Neural Networks via Explicit Constraints. In Proceedings of the 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Online, 6–12 December 2020; Available online: http://arxiv.org/abs/2010.13581 (accessed on 13 August 2025).
  32. Cranmer, M.; Greydanus, S.; Hoyer, S.; Research, G.; Battaglia, P.; Spergel, D.; Ho, S. Lagrangian Neural Networks. In Proceedings of the ICLR 2020 Deep Differential Equations Workshop, Addis Ababa, Ethiopia, 26 April 2020; Available online: http://arxiv.org/abs/2003.04630 (accessed on 13 August 2025).
  33. Bhattoo, R.; Ranu, S.; Krishnan, N.M. Learning the Dynamics of Particle-based Systems with Lagrangian Graph Neural Networks. Mach. Learn. Sci. Technol. 2023, 4, 015003. [Google Scholar] [CrossRef]
  34. Xiao, S.; Zhang, J.; Tang, Y. Generalized Lagrangian Neural Networks. arXiv 2024, arXiv:2401.03728. Available online: http://arxiv.org/abs/2401.03728 (accessed on 13 August 2025). [CrossRef]
  35. Rettberg, J.; Kneifl, J.; Herb, J.; Buchfink, P.; Fehr, J.; Haasdonk, B. Data-Driven Identification of Latent Port-Hamiltonian Systems. arXiv 2024, arXiv:2408.08185. Available online: http://arxiv.org/abs/2408.08185 (accessed on 13 August 2025).
  36. Duong, T.; Atanasov, N. Hamiltonian-based Neural ODE Networks on the SE(3) Manifold For Dynamics Learning and Control. In Proceedings of the Robotics: Science and Systems (RSS 2021), Online, 12–16 July 2021; Available online: http://arxiv.org/abs/2106.12782v3 (accessed on 13 August 2025).
  37. Fronk, C.; Petzold, L. Training stiff neural ordinary differential equations with explicit exponential integration methods. Chaos 2025, 35, 33154. [Google Scholar] [CrossRef]
  38. Kloberdanz, E.; Le, W. S-SOLVER: Numerically Stable Adaptive Step Size Solver for Neural ODEs. In Artificial Neural Networks and Machine Learning—ICANN 2023; Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Cham, Switzerland, 2023; Volume 14262, pp. 388–400. [Google Scholar] [CrossRef]
  39. Akhtar, S.W. On Tuning Neural ODE for Stability, Consistency and Faster Convergence. SN Comput. Sci. 2025, 6, 318. [Google Scholar] [CrossRef]
  40. Zhu, A.; Jin, P.; Zhu, B.; Tang, Y. On Numerical Integration in Neural Ordinary Differential Equations. In Proceedings of the 39th International Conference on Machine Learning (ICML 2022), Baltimore, MD, USA, 17–23 July 2022; ML Research Press: Baltimore, MD, USA, 2022; Volume 162, pp. 27527–27547. Available online: http://arxiv.org/abs/2206.07335 (accessed on 13 August 2025).
  41. Munthe-Kaas, H. High order Runge-Kutta methods on manifolds. Appl. Numer. Math. 1999, 29, 115–127. [Google Scholar] [CrossRef]
  42. Celledoni, E.; Marthinsen, H.; Owren, B. An introduction to Lie group integrators—Basics, New Developments and Applications. J. Comput. Phys. 2014, 257, 1040–1061. [Google Scholar] [CrossRef]
  43. Ma, Y.; Dixit, V.; Innes, M.J.; Guo, X.; Rackauckas, C. A Comparison of Automatic Differentiation and Continuous Sensitivity Analysis for Derivatives of Differential Equation Solutions. In Proceedings of the 2021 IEEE High Performance Extreme Computing Conference (HPEC 2021), Online, 20–24 September 2021. [Google Scholar] [CrossRef]
  44. Saemundsson, S.; Terenin, A.; Hofmann, K.; Deisenroth, M.P. Variational Integrator Networks for Physically Structured Embeddings. In Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics (AISTATS 2020), Online, 26–28 August 2020; pp. 3078–3087. Available online: http://arxiv.org/abs/1910.09349 (accessed on 13 August 2025).
  45. Desai, S.A.; Mattheakis, M.; Roberts, S.J. Variational integrator graph networks for learning energy-conserving dynamical systems. Phys. Rev. E 2021, 104, 035310. [Google Scholar] [CrossRef] [PubMed]
  46. Bobenko, A.I.; Suris, Y.B. Mathematical Physics Discrete Time Lagrangian Mechanics on Lie Groups, with an Application to the Lagrange Top. Commun. Math. Phys. 1999, 204, 147–188. [Google Scholar] [CrossRef]
  47. Marsden, J.E.; Pekarsky, S.; Shkoller, S.; West, M. Variational Methods, Multisymplectic Geometry and Continuum Mechanics. J. Geom. Phys. 2001, 38, 253–284. [Google Scholar] [CrossRef]
  48. Duruisseaux, V.; Duong, T.; Leok, M.; Atanasov, N. Lie Group Forced Variational Integrator Networks for Learning and Control of Robot Systems. In Proceedings of the 5th Annual Conference on Learning for Dynamics and Control, Philadelphia, PA, USA, 15–16 June 2023; Volume 211, pp. 1–21. Available online: http://arxiv.org/abs/2211.16006 (accessed on 13 August 2025).
  49. Lee, J.M. Introduction to Smooth Manifolds, 2nd ed.; Springer: New York, NY, USA, 2012. [Google Scholar] [CrossRef]
  50. Hall, B.C. Lie Groups, Lie Algebras, and Representations: An Elementary Introduction; Graduate Texts in Mathematics (GTM); Springer: Berlin/Heidelberg, Germany, 2015; Volume 222. [Google Scholar] [CrossRef]
  51. Solà, J.; Deray, J.; Atchuthan, D. A micro Lie theory for state estimation in robotics. arXiv 2021, arXiv:1812.01537. Available online: http://arxiv.org/abs/1812.01537 (accessed on 13 August 2025). [CrossRef]
  52. Visser, M.; Stramigioli, S.; Heemskerk, C. Cayley-Hamilton for roboticists. IEEE Int. Conf. Intell. Robot. Syst. 2006, 1, 4187–4192. [Google Scholar] [CrossRef]
  53. Robbins, H.; Monro, S. A Stochastic Approximation Method. Ann. Math. Stat. 1951, 22, 400–407. [Google Scholar] [CrossRef]
  54. Falorsi, L.; de Haan, P.; Davidson, T.R.; Forré, P. Reparameterizing Distributions on Lie Groups. In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS 2019), Naha, Okinawa, Japan, 16–18 April 2019; Available online: http://arxiv.org/abs/1903.02958 (accessed on 13 August 2025).
  55. White, A.; Kilbertus, N.; Gelbrecht, M.; Boers, N. Stabilized Neural Differential Equations for Learning Dynamics with Explicit Constraints. In Proceedings of the 37th Conference on Neural Information Processing Systems (NeurIPS 2023), New Orleans, LA, USA, 10–16 December 2023; Available online: http://arxiv.org/abs/2306.09739 (accessed on 13 August 2025).
  56. Poli, M.; Massaroli, S.; Yamashita, A.; Asama, H.; Park, J. TorchDyn: A Neural Differential Equations Library. arXiv 2020, arXiv:2009.09346. Available online: http://arxiv.org/abs/2009.09346 (accessed on 13 August 2025). [CrossRef]
  57. Schaft, A.V.D.; Jeltsema, D. Port-Hamiltonian Systems Theory: An Introductory Overview; Now Publishers Inc.: Hanover, MA, USA, 2014; Volume 1, pp. 173–378. [Google Scholar] [CrossRef]
  58. Rashad, R.; Califano, F.; van der Schaft, A.J.; Stramigioli, S. Twenty years of distributed port-Hamiltonian systems: A literature review. IMA J. Math. Control Inf. 2020, 37, 1400–1422. [Google Scholar] [CrossRef]
  59. Wotte, Y.P.; Dummer, S.; Botteghi, N.; Brune, C.; Stramigioli, S.; Califano, F. Discovering efficient periodic behaviors in mechanical systems via neural approximators. Optim. Control Appl. Methods 2023, 44, 3052–3079. [Google Scholar] [CrossRef]
  60. Yao, W. A Singularity-Free Guiding Vector Field for Robot Navigation; Springer: Cham, Switzerland, 2023; pp. 159–190. [Google Scholar] [CrossRef]
Figure 1. (a) The problem of computing the gradient over a flow, highlighting the cotangent vectors $\mathrm{d}C_{x(T)} \in T^*_{x(T)}M$ and $\mathrm{d}(C \circ \Psi^T_f)_{x_0} = (\Psi^T_f)^* \mathrm{d}C_{x(T)} \in T^*_{x_0}M$. (b) In the adjoint method we set $\lambda(t) = \mathrm{d}(C \circ \Psi^{T-t}_f)_{x(t)}$, whose dynamics are uniquely determined by the property $\mathcal{L}_f \lambda = 0$, allowing us to find $\lambda(0) = \mathrm{d}(C \circ \Psi^T_f)_{x_0}$ by integrating $\dot{\lambda}$ backwards from $\lambda(T) = \mathrm{d}C_{x(T)}$.
Figure 2. In the extrinsic formulation of neural ODEs on manifolds, the manifold $M$ is embedded in $\mathbb{R}^N$ as $\iota(M) \subset \mathbb{R}^N$, and a neural ODE $f_\theta \in \mathfrak{X}(\mathbb{R}^N)$ is optimized.
Figure 3. In the intrinsic formulation of neural ODEs on manifolds, the neural ODE $f_\theta \in \mathfrak{X}(M)$ is optimized in local charts, here $(U_1, Q_1)$ and $(U_2, Q_2)$, and the state and co-state undergo chart transitions.
Table 1. Summary of neural ODEs on manifolds and Lie groups presented in this article.

Name of Neural ODE | Subtype | Trajectory Cost | Subsection | Originally Introduced in
Neural ODEs on manifolds (Section 3) | Extrinsic | Running and final cost | Section 3.1.1 | Final cost [19], running cost [21]
Neural ODEs on manifolds (Section 3) | Intrinsic | Running and final cost, intermittent cost | Section 3.1.2 and Section 3.2.1 | Final cost [18], running cost [21], intermittent cost (this work)
Neural ODEs on manifolds (Section 3) | Augmented, time-dependent parameters | Final cost | Section 3.2.2 | Augmenting $M$ to $TM$ [23], augmenting $M$ to $M \times N$ (this work)
Neural ODEs on Lie groups (Section 4) | Extrinsic | Final cost and intermittent cost | Section 4.1 | [20]
Neural ODEs on Lie groups (Section 4) | Intrinsic, dynamics in local charts | Running and final cost | Section 4.2 | [21,24]
Neural ODEs on Lie groups (Section 4) | Intrinsic, dynamics on Lie algebra | Running and final cost | Section 4.2 | [21]
Table 2. Parameterization of functions in extrinsic neural ODEs.

Function | Vanilla Neural ODE | Extrinsic Neural ODE
Scalar fields $V_\theta(x) \in \mathbb{R}$ | $V_\theta: \mathbb{R}^n \to \mathbb{R}$ | $V_\theta: \mathbb{R}^N \to \mathbb{R}$
Vector fields $f_\theta(x, t) \in T_x M$ | $f_\theta: \mathbb{R}^n \times \mathbb{R} \to \mathbb{R}^n$ | $f_\theta: \mathbb{R}^N \times \mathbb{R} \to \mathbb{R}^N$ with tangency constraint [19], optional stabilization [55]
Components of $(p,q)$-tensor fields $M^{i_1,\ldots,i_p}_{j_1,\ldots,j_q}(x) \in \mathbb{R}$ | $M^{i_1,\ldots,i_p}_{j_1,\ldots,j_q}: \mathbb{R}^n \to \mathbb{R}$ | $M^{i_1,\ldots,i_p}_{j_1,\ldots,j_q}: \mathbb{R}^N \to \mathbb{R}$, see footnote 1

1 A tangency constraint on contravariant components of $(p,q)$-tensors is not necessarily required for the vector field $f_\theta$ to remain tangent to $\iota(M)$ and depends on the vector field under investigation.
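To make the tangency constraint in the right-hand column concrete, the sketch below (assuming $M = S^2$ embedded in $\mathbb{R}^3$; the architecture, layer sizes, and class name are illustrative, not taken from the cited works) parameterizes an unconstrained vector field in the embedding space and projects its output onto the tangent space $T_x S^2$.

```python
# Sketch: extrinsic vector field on S^2 in R^3 with tangency enforced by the
# orthogonal projector I - x x^T applied to the raw network output.
import torch
import torch.nn as nn

class TangentVectorField(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(           # unconstrained map R^3 x R -> R^3
            nn.Linear(4, hidden), nn.Tanh(),
            nn.Linear(hidden, 3),
        )

    def forward(self, x, t):
        # x: (batch, 3) unit vectors on S^2, t: (batch, 1) time
        raw = self.net(torch.cat([x, t], dim=-1))
        # subtract the normal component so that <f(x, t), x> = 0
        return raw - (raw * x).sum(-1, keepdim=True) * x

f_theta = TangentVectorField()
x = torch.nn.functional.normalize(torch.randn(5, 3), dim=-1)
t = torch.zeros(5, 1)
v = f_theta(x, t)
print((v * x).sum(-1))                      # ~0: the field is tangent to S^2
```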
Table 3. Parameterization of scalar functions and tensor components in intrinsic neural ODEs.

Partition of Unity [24,49] | Soft Constraint [18,22] | Pullback [19,21]
Components from all local charts are summed, weighted by a partition of unity. | The function is represented directly in local charts. | The function is pulled back to a local chart.
Allows representation of arbitrary smooth functions. | Functions are smooth where charts do not overlap, but are not well defined at chart transitions. | Allows representation of arbitrary smooth functions.
Differentiating functions generally requires differentiating through chart transition maps, creating computational overhead [24]. | Chart transition maps do not have to be differentiated. | Chart representations of the embedding $\iota(M)$ are differentiated, possibly creating computational overhead.
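As a minimal illustration of the partition-of-unity column, the sketch below blends chart-wise scalar functions on $S^1$ with two angle charts. It is a sketch under simplifying assumptions: the chart-wise "networks" are random feature maps, and the blending weights merely vanish where the respective chart degenerates instead of forming a compactly supported partition of unity; all names are illustrative.

```python
# Sketch: a scalar function on S^1 assembled from two chart-wise representations,
# weighted by smooth functions rho1 + rho2 = 1 that vanish where each chart breaks.
import numpy as np

rng = np.random.default_rng(0)
W1, b1, a1 = rng.normal(size=8), rng.normal(size=8), rng.normal(size=8)
W2, b2, a2 = rng.normal(size=8), rng.normal(size=8), rng.normal(size=8)

def V1(theta):   # chart 1: angle in (-pi, pi), undefined at the point (-1, 0)
    return a1 @ np.tanh(W1 * theta + b1)

def V2(theta):   # chart 2: angle in (0, 2*pi), undefined at the point (1, 0)
    return a2 @ np.tanh(W2 * theta + b2)

def V(x):        # x = (cos t, sin t) on S^1
    theta1 = np.arctan2(x[1], x[0])              # chart 1 coordinate
    theta2 = np.mod(theta1, 2 * np.pi)           # chart 2 coordinate
    rho1 = 0.5 * (1 + x[0])                      # vanishes at (-1, 0)
    rho2 = 0.5 * (1 - x[0])                      # vanishes at (1, 0)
    return rho1 * V1(theta1) + rho2 * V2(theta2)

t = 2.5
print(V(np.array([np.cos(t), np.sin(t)])))
```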