
Forming Invariant Stochastic Differential Systems with a Given First Integral

by
Konstantin Rybakov
Department of Mathematical Cybernetics, Moscow Aviation Institute (National Research University), 125993 Moscow, Russia
Dynamics 2026, 6(1), 6; https://doi.org/10.3390/dynamics6010006
Submission received: 4 January 2026 / Revised: 22 January 2026 / Accepted: 23 January 2026 / Published: 1 February 2026

Abstract

This article proposes a method for forming invariant stochastic differential systems, namely dynamic systems whose trajectories belong to a given smooth manifold. The dynamic systems are described by Itô or Stratonovich stochastic differential equations with a Wiener component, and the manifold is implicitly defined by a differentiable function. The proposed method is distinguished by a convenient implementation of the algorithm for forming invariant stochastic differential systems within symbolic computation environments. It is based on determining a basis associated with a tangent hyperplane to the manifold. The article discusses the problem of basis degeneration and examines variants that allow for the simple construction of a basis that does not degenerate. Examples of invariant stochastic differential systems are given, and numerical simulations are performed for them.


1. Introduction

This article considers the inverse dynamics problem, which involves forming dynamic systems with trajectories belonging to a given smooth manifold. Dynamic systems are assumed to be described by Itô or Stratonovich stochastic differential equations. Hereinafter, they are called invariant stochastic differential systems, and their trajectories are those of diffusion processes. The manifold is implicitly defined by a differentiable function that is a first integral of the system.
In the following, we give examples of invariant stochastic differential systems. The first example concerns the problem of spatial orientation control for a manned or unmanned aerial vehicle [1,2]. Consider the system of linear ordinary differential equations describing the rotation of a rigid body in three-dimensional space [3,4]:
$$\begin{aligned} \dot\lambda_0(t) &= -\tfrac{1}{2}\bigl(\lambda_1(t)\,\omega_1(t)+\lambda_2(t)\,\omega_2(t)+\lambda_3(t)\,\omega_3(t)\bigr),\\ \dot\lambda_1(t) &= \tfrac{1}{2}\bigl(\lambda_0(t)\,\omega_1(t)-\lambda_3(t)\,\omega_2(t)+\lambda_2(t)\,\omega_3(t)\bigr),\\ \dot\lambda_2(t) &= \tfrac{1}{2}\bigl(\lambda_3(t)\,\omega_1(t)+\lambda_0(t)\,\omega_2(t)-\lambda_1(t)\,\omega_3(t)\bigr),\\ \dot\lambda_3(t) &= \tfrac{1}{2}\bigl(-\lambda_2(t)\,\omega_1(t)+\lambda_1(t)\,\omega_2(t)+\lambda_0(t)\,\omega_3(t)\bigr), \end{aligned} \tag{1}$$
where $t \in \mathbb{T}$, $\mathbb{T} = [0, T]$ is a given time interval of rotation; $\lambda(t) = [\lambda_0(t)\ \lambda_1(t)\ \lambda_2(t)\ \lambda_3(t)]^{\mathrm{T}}$ is the quaternion of rotation; $\omega(t) = [\omega_1(t)\ \omega_2(t)\ \omega_3(t)]^{\mathrm{T}}$ is the angular velocity, which can be treated as a control input; and $[\,\cdot\,]^{\mathrm{T}}$ denotes transposition.
It is not difficult to see that
$$\begin{aligned} \frac{d}{dt}\bigl(\lambda_0^2(t)+\lambda_1^2(t)+\lambda_2^2(t)+\lambda_3^2(t)\bigr) &= 2\bigl(\lambda_0(t)\dot\lambda_0(t)+\lambda_1(t)\dot\lambda_1(t)+\lambda_2(t)\dot\lambda_2(t)+\lambda_3(t)\dot\lambda_3(t)\bigr)\\ &= -\lambda_0(t)\lambda_1(t)\omega_1(t)-\lambda_0(t)\lambda_2(t)\omega_2(t)-\lambda_0(t)\lambda_3(t)\omega_3(t)\\ &\quad +\lambda_0(t)\lambda_1(t)\omega_1(t)-\lambda_1(t)\lambda_3(t)\omega_2(t)+\lambda_1(t)\lambda_2(t)\omega_3(t)\\ &\quad +\lambda_2(t)\lambda_3(t)\omega_1(t)+\lambda_0(t)\lambda_2(t)\omega_2(t)-\lambda_1(t)\lambda_2(t)\omega_3(t)\\ &\quad -\lambda_2(t)\lambda_3(t)\omega_1(t)+\lambda_1(t)\lambda_3(t)\omega_2(t)+\lambda_0(t)\lambda_3(t)\omega_3(t) = 0. \end{aligned}$$
Thus, the modulus of the quaternion $\lambda(t)$ is equal to one under the condition $|\lambda(0)|^2 = \lambda_0^2(0)+\lambda_1^2(0)+\lambda_2^2(0)+\lambda_3^2(0) = 1$, and for the system of differential equations under consideration, the following identity holds:
$$|\lambda(t)|^2 = \lambda_0^2(t)+\lambda_1^2(t)+\lambda_2^2(t)+\lambda_3^2(t) = 1, \tag{2}$$
i.e., the system state belongs to a three-dimensional hypersphere centered at the origin and with unit radius (in four-dimensional space).
By assuming that the system is affected by random disturbances, which lead to inaccurate control implementation, we obtain the following system of linear stochastic differential equations with multiplicative noise:
$$\begin{aligned} \dot\lambda_0(t) &= -\tfrac{1}{2}\bigl(\lambda_1(t)\,(\omega_1(t)+\sigma_1 V_1(t))+\lambda_2(t)\,(\omega_2(t)+\sigma_2 V_2(t))+\lambda_3(t)\,(\omega_3(t)+\sigma_3 V_3(t))\bigr),\\ \dot\lambda_1(t) &= \tfrac{1}{2}\bigl(\lambda_0(t)\,(\omega_1(t)+\sigma_1 V_1(t))-\lambda_3(t)\,(\omega_2(t)+\sigma_2 V_2(t))+\lambda_2(t)\,(\omega_3(t)+\sigma_3 V_3(t))\bigr),\\ \dot\lambda_2(t) &= \tfrac{1}{2}\bigl(\lambda_3(t)\,(\omega_1(t)+\sigma_1 V_1(t))+\lambda_0(t)\,(\omega_2(t)+\sigma_2 V_2(t))-\lambda_1(t)\,(\omega_3(t)+\sigma_3 V_3(t))\bigr),\\ \dot\lambda_3(t) &= \tfrac{1}{2}\bigl(-\lambda_2(t)\,(\omega_1(t)+\sigma_1 V_1(t))+\lambda_1(t)\,(\omega_2(t)+\sigma_2 V_2(t))+\lambda_0(t)\,(\omega_3(t)+\sigma_3 V_3(t))\bigr), \end{aligned} \tag{3}$$
where $V(t) = [V_1(t)\ V_2(t)\ V_3(t)]^{\mathrm{T}}$ is a vector Gaussian white noise, and $\sigma = [\sigma_1\ \sigma_2\ \sigma_3]^{\mathrm{T}}$ is the vector of intensities of random disturbances. Random disturbances in spatial orientation control are typical, and their source is usually found in the measurement system (gyroscopes and accelerometers). In a control system, disturbances are typically generated by a shaping filter based on Gaussian white noise. In this example, a direct effect of Gaussian white noise is assumed to simplify the mathematical model.
Equation (3) should be understood as Stratonovich stochastic differential equations [5]; then, the system state belongs to the same three-dimensional hypersphere.
The second example is of great importance for numerical methods to solve stochastic differential equations with multiplicative noise [6,7,8,9], including Equation (3). Consider the system of Itô stochastic differential equations:
$$\begin{aligned} dX_1(t) &= dW_1(t), & X_1(0) &= 0,\\ dX_2(t) &= X_3(t)\,dW_1(t), & X_2(0) &= 0,\\ dX_3(t) &= dW_2(t), & X_3(0) &= 0,\\ dX_4(t) &= X_1(t)\,dW_2(t), & X_4(0) &= 0, \end{aligned} \tag{4}$$
where $t \in \mathbb{T}$, $\mathbb{T} = [0, T]$ is a given time interval; $W(t) = [W_1(t)\ W_2(t)]^{\mathrm{T}}$ denotes the standard vector Wiener process, i.e., $W_1(t)$ and $W_2(t)$ are independent standard Wiener processes.
Two equations from the system of Equation (4) have obvious solutions
$$X_1(t) = \int_0^t dW_1(\tau) = W_1(t), \qquad X_3(t) = \int_0^t dW_2(\tau) = W_2(t),$$
and for the remaining equations, solutions are written as follows:
$$X_2(t) = \int_0^t X_3(\tau)\,dW_1(\tau) = \int_0^t W_2(\tau)\,dW_1(\tau), \qquad X_4(t) = \int_0^t X_1(\tau)\,dW_2(\tau) = \int_0^t W_1(\tau)\,dW_2(\tau).$$
For a fixed t, the random variables X 1 ( t ) , X 2 ( t ) , X 3 ( t ) , and X 4 ( t ) are used in numerical methods for solving stochastic differential equations, e.g., in the Milstein method [8] or in one of the Rosenbrock-type methods [10]. These random variables are essential in numerical methods that exhibit high convergence orders, provided that convergence is understood in a strong sense [6,9,11].
The random variables X 2 ( t ) and X 4 ( t ) are called iterated stochastic integrals of the second multiplicity. They can be represented as multiple stochastic integrals (t is not necessarily fixed):
$$X_2(t) = \int_0^t\!\!\int_0^\tau dW_2(\theta)\,dW_1(\tau) = \int_0^t\!\!\int_0^t \mathbf{1}(\tau-\theta)\,dW_2(\theta)\,dW_1(\tau), \qquad X_4(t) = \int_0^t\!\!\int_0^\tau dW_1(\theta)\,dW_2(\tau) = \int_0^t\!\!\int_0^t \mathbf{1}(\tau-\theta)\,dW_1(\theta)\,dW_2(\tau),$$
where $\mathbf{1}(t)$ is the unit step function:
$$\mathbf{1}(t) = \begin{cases} 1 & \text{for } t > 0, \\ 0 & \text{for } t \leqslant 0. \end{cases}$$
Then
$$\begin{aligned} X_2(t) + X_4(t) &= \int_0^t\!\!\int_0^t \mathbf{1}(\tau-\theta)\,dW_2(\theta)\,dW_1(\tau) + \int_0^t\!\!\int_0^t \mathbf{1}(\tau-\theta)\,dW_1(\theta)\,dW_2(\tau)\\ &= \int_0^t\!\!\int_0^t \mathbf{1}(\theta-\tau)\,dW_1(\theta)\,dW_2(\tau) + \int_0^t\!\!\int_0^t \mathbf{1}(\tau-\theta)\,dW_1(\theta)\,dW_2(\tau)\\ &= \int_0^t\!\!\int_0^t \bigl(\mathbf{1}(\theta-\tau)+\mathbf{1}(\tau-\theta)\bigr)\,dW_1(\theta)\,dW_2(\tau) = \int_0^t\!\!\int_0^t dW_1(\theta)\,dW_2(\tau) = \int_0^t dW_1(\tau)\int_0^t dW_2(\tau), \end{aligned}$$
i.e., we have the identity
$$X_2(t) + X_4(t) = X_1(t)\,X_3(t), \quad \text{or} \quad X_2(t) + X_4(t) - X_1(t)\,X_3(t) = 0,$$
and therefore, the system state belongs to a hypercylinder over a hyperbolic paraboloid (in four-dimensional space).
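The identity above is convenient for a numerical sanity check. The following sketch (illustrative only, assuming NumPy; it is not code from the article) simulates system (4) with the Euler–Maruyama scheme and prints the residual $X_2 + X_4 - X_1 X_3$ at the final time:

```python
import numpy as np

def residual(T=1.0, h=1e-4, seed=0):
    """Euler-Maruyama simulation of system (4); returns the residual of
    the first integral X2 + X4 - X1*X3 at time T (zero for the exact
    solution)."""
    rng = np.random.default_rng(seed)
    x1 = x2 = x3 = x4 = 0.0
    for _ in range(int(T / h)):
        dw1, dw2 = rng.normal(0.0, np.sqrt(h), size=2)
        x2 += x3 * dw1          # state values from the start of the step
        x4 += x1 * dw2
        x1 += dw1
        x3 += dw2
    return x2 + x4 - x1 * x3

for h in (1e-2, 1e-3, 1e-4):
    print(h, abs(residual(h=h)))
```

For this particular scheme, the residual accumulates as $-\sum_k \Delta W_{1,k}\,\Delta W_{2,k}$, so it is not exactly zero but shrinks like $\sqrt{h}$; a method that preserves the first integral exactly would keep it at machine precision.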
The theory of invariant stochastic differential systems began to develop actively in the late 1970s [12,13]. It generalizes the theory of invariant deterministic differential systems [14,15,16,17,18] that is utilized, e.g., in the design of aircraft control systems [19,20,21].
One of the problems in the theory of invariant stochastic differential systems concerns obtaining stochastic differential equations for a given manifold. This problem is reduced to finding and then applying conditions on the coefficients of such equations [22,23,24]. For each manifold, there are infinitely many invariant differential systems, among which we can distinguish systems with additional properties, e.g., stability [25]. As for deterministic differential systems, we can consider the synthesis of control for stochastic differential systems that ensures invariance [23,26]. Among the applied problems, we can note epidemic spread analysis [26], ecosystem evolution [26], problems of financial mathematics [27], etc.
Conditions on the coefficients of equations that ensure the existence of a first integral are usually written using a vector product in the space whose dimension coincides with the order of the dynamic system or is greater by one [22,23,26]. The set of coefficients of stochastic differential equations is formed using determinants of functional matrices. Some elements of such matrices depend on a given manifold (these elements can be assumed to be known), while others are chosen arbitrarily under the additional condition of existence of solutions to stochastic differential equations. The described approach allows one to find the entire set of invariant stochastic differential systems corresponding to a given first integral. However, it is characterized by both the complexity of the expressions describing the coefficients of stochastic differential equations (this is especially evident as the order of the dynamic system increases) and the redundancy of the functions that can be chosen arbitrarily (some functions enter the coefficients of the equations nonlinearly).
The purpose of this study is to describe and examine a method for obtaining stochastic differential equations for a given manifold. The proposed method has the following advantages:
(1) The method provides simple expressions for coefficients of stochastic differential equations.
(2) The method ensures a minimum number of functions required to determine the entire set of invariant stochastic differential systems associated with a given first integral (coefficients of equations depend on these functions linearly).
(3) The method allows one to obtain stochastic differential equations with a degenerate diffusion matrix relative to a part of the state components.
The proposed method is distinguished by a convenient implementation of the corresponding algorithm for forming invariant stochastic differential systems within symbolic computation environments. The method utilizes the construction of a basis related to a tangent hyperplane to the manifold. The article discusses the problem of basis degeneration and examines variants that ensure the simple construction of a basis that does not degenerate.
The results of this study can be applied to synthesize the control that guarantees invariance [23,26] and to obtain stochastic differential equations to validate numerical methods primarily focused on systems of stochastic differential equations with first integrals [28,29,30,31]. If a first integral exists for a system of stochastic differential equations but an analytical solution cannot be obtained, such a first integral can be utilized to estimate the accuracy of numerical methods whose convergence is understood in a strong sense. Furthermore, the proposed method ensures a simple transition from an invariant deterministic differential system to a stochastic one while preserving the first integral.
In addition to this Introduction, the article contains several sections. Section 2 describes invariant stochastic differential systems and specifies invariance conditions. The proposed method for forming invariant stochastic differential systems is presented in Section 3. Additionally, Section 4 considers second-, fourth-, and eighth-order systems. Section 5 includes examples of invariant stochastic differential systems and the results of numerical simulations for them. Section 6 presents the conclusions of the article.

2. Invariant Stochastic Differential Systems

The term “invariant stochastic differential system” refers to a dynamic system whose mathematical model is represented by the Itô stochastic differential equation:
$$dX(t) = f\bigl(t, X(t)\bigr)\,dt + \sigma\bigl(t, X(t)\bigr)\,dW(t), \quad X(t_0) = x_0, \tag{5}$$
provided that the solution $X(t)$ to this equation almost surely (with probability 1) satisfies the relation
$$M\bigl(t, X(t)\bigr) = M(t_0, x_0) = \mathrm{const}. \tag{6}$$
We assume that the vector $X(t) = [X_1(t)\ \ldots\ X_n(t)]^{\mathrm{T}} \in \mathbb{R}^n$ describes the state of the dynamic system, $n \geqslant 2$; $t \in \mathbb{T} = [t_0, T]$ denotes time; the moments $t_0$ and $T$ are given; $f(t,x)\colon \mathbb{T}\times\mathbb{R}^n \to \mathbb{R}^n$ and $\sigma(t,x)\colon \mathbb{T}\times\mathbb{R}^n \to \mathbb{R}^{n\times s}$ are vector and matrix functions, respectively; and $W(t) = [W_1(t)\ \ldots\ W_s(t)]^{\mathrm{T}}$ represents the standard vector Wiener process. The Wiener process $W(t)$, which models disturbances acting on the dynamic system, and the initial state $x_0 \in \mathbb{R}^n$ are independent.
The components $f_i(t,x)$ of the vector function $f(t,x)$ and the elements $\sigma_{il}(t,x)$ of the matrix function $\sigma(t,x)$, $i = 1,\ldots,n$ and $l = 1,\ldots,s$, satisfy conditions for the existence and uniqueness of solutions to stochastic differential equations. This means the Lipschitz condition and the linear growth condition with respect to $x$ [5,6], namely
$$|f(t,x) - f(t,y)| + \|\sigma(t,x) - \sigma(t,y)\| \leqslant c\,|x - y| \quad \forall t \in \mathbb{T} \quad \forall x, y \in \mathbb{R}^n$$
and
$$|f(t,x)| + \|\sigma(t,x)\| \leqslant c\,(1 + |x|) \quad \forall t \in \mathbb{T} \quad \forall x \in \mathbb{R}^n,$$
where $c > 0$ is a constant, and $|\cdot|$ and $\|\cdot\|$ denote the vector modulus and the Frobenius norm of a matrix, respectively. Note that the above conditions can be weakened [32,33]. For instance, for many problems, it is sufficient to satisfy the mentioned conditions locally rather than globally. An additional condition is the continuous differentiability of the functions $\sigma_{il}(t,x)$ with respect to $x$.
The nonconstant function $M(t,x)\colon \mathbb{T}\times\mathbb{R}^n \to \mathbb{R}$ is continuously differentiable with respect to $t$ and twice continuously differentiable with respect to $x$. According to [12], such a function is called the first integral of Equation (5). Another definition of the first integral is formulated in [13].
Necessary and sufficient conditions for the function M ( t , x ) to be a first integral of Equation (5) are written in the following way [22]:
$$\sum_{i=1}^n \sigma_{il}(t,x)\,\frac{\partial M(t,x)}{\partial x_i} = 0, \quad l = 1,\ldots,s, \tag{7}$$
$$\frac{\partial M(t,x)}{\partial t} + \sum_{i=1}^n \biggl( f_i(t,x) - \frac{1}{2}\sum_{j=1}^n\sum_{l=1}^s \frac{\partial \sigma_{il}(t,x)}{\partial x_j}\,\sigma_{jl}(t,x) \biggr)\frac{\partial M(t,x)}{\partial x_i} = 0, \tag{8}$$
and these equalities should be satisfied on trajectories of the random process X ( t ) .
The invariant stochastic differential system can be defined by the equivalent Stratonovich stochastic differential equation:
$$dX(t) = a\bigl(t, X(t)\bigr)\,dt + \sigma\bigl(t, X(t)\bigr)\,dW(t), \quad X(t_0) = x_0, \tag{9}$$
for which, in addition to the previously introduced notations, $a(t,x)\colon \mathbb{T}\times\mathbb{R}^n \to \mathbb{R}^n$ is a vector function. Taking into account the well-known relation between the drift coefficients in Equations (5) and (9) [6], we can rewrite equality (8) as follows:
$$\frac{\partial M(t,x)}{\partial t} + \sum_{i=1}^n a_i(t,x)\,\frac{\partial M(t,x)}{\partial x_i} = 0, \tag{10}$$
where conditions on the components $a_i(t,x)$ of the vector function $a(t,x)$ are determined through conditions on the functions $f_i(t,x)$ and $\sigma_{il}(t,x)$, $i = 1,\ldots,n$ and $l = 1,\ldots,s$.
It is not difficult to see that the equality of the Itô differential for the random process M ( t , X ( t ) ) to zero is equivalent to relations (7) and (8). Similarly, the equality of the Stratonovich differential for the same random process to zero is equivalent to relations (7) and (10) [34,35].
Equation (6) defines a smooth manifold in $\mathbb{T}\times\mathbb{R}^n$, and trajectories of the random process $X(t)$ with the initial condition $X(t_0) = x_0$ belong to this manifold. If $M(t,x) \not\equiv M(x)$, then the term "dynamic manifold" may be used.
Necessary and sufficient conditions for the function $M(t,x)$ to be a first integral of Equation (9) have a simple and clear geometric meaning. Equality (7) is the condition that each column of the matrix $\sigma(t,x)$ and the gradient $\nabla_x M(t,x)$ are orthogonal in $\mathbb{R}^n$ for all $t \in \mathbb{T}$. For $M(t,x) = M(x)$, equality (10) is the orthogonality condition for the vector $a(t,x)$ and the gradient $\nabla_x M(t,x) = \nabla M(x)$ in $\mathbb{R}^n$ for all $t \in \mathbb{T}$, and for $M(t,x) \not\equiv M(x)$, it is the orthogonality condition for the vector $\tilde a(t,x) = [1\ a^{\mathrm{T}}(t,x)]^{\mathrm{T}}$ and the generalized gradient $\nabla_{t,x} M(t,x)$.
Remark 1. 
Using the notation $\sigma_l(t,x)$ for the $l$th column of the matrix $\sigma(t,x)$ and the notation $(\cdot,\cdot)$ for the inner product in $\mathbb{R}^n$, equalities (7), (8), and (10) can be rewritten as
$$\bigl(\sigma_l(t,x),\, \nabla_x M(t,x)\bigr) = 0, \quad l = 1,\ldots,s, \qquad \frac{\partial M(t,x)}{\partial t} + \bigl(f(t,x) - \Sigma(t,x),\, \nabla_x M(t,x)\bigr) = 0, \qquad \frac{\partial M(t,x)}{\partial t} + \bigl(a(t,x),\, \nabla_x M(t,x)\bigr) = 0,$$
where
$$\Sigma(t,x) = \frac{1}{2}\sum_{l=1}^s \frac{\partial \sigma_l(t,x)}{\partial x}\,\sigma_l(t,x). \tag{11}$$
For a nonautonomous dynamic system, we can convert it to an autonomous one by adding the equation $dX_0(t) = dt$ with the solution $X_0(t) = t$ and introducing the extended state $\tilde X(t) = [X_0(t)\ X^{\mathrm{T}}(t)]^{\mathrm{T}} \in \mathbb{R}^{n+1}$ (its components are numbered from zero). Then, equalities similar to (8) and (10) take a simpler form.

3. Forming Invariant Stochastic Differential Systems

In this section, we define a set of $n-1$ linearly independent vectors orthogonal to the gradient $\nabla_x M(t,x)$. To simplify notations, we introduce the vector $G$:
$$G = [g_1\ g_2\ \ldots\ g_n]^{\mathrm{T}}, \quad g_i = \frac{\partial M(t,x)}{\partial x_i}, \quad i = 1,\ldots,n. \tag{12}$$
Next, we define vectors N 1 , , N n 1 as follows:
$$N_1 = [g_2\ \ {-g_1}\ \ 0\ \ldots\ 0\ \ 0]^{\mathrm{T}}, \quad N_2 = [0\ \ g_3\ \ {-g_2}\ \ 0\ \ldots\ 0]^{\mathrm{T}}, \quad \ldots, \quad N_{n-2} = [0\ \ldots\ 0\ \ g_{n-1}\ \ {-g_{n-2}}\ \ 0]^{\mathrm{T}}, \quad N_{n-1} = [0\ \ 0\ \ldots\ 0\ \ g_n\ \ {-g_{n-1}}]^{\mathrm{T}}, \tag{13}$$
or, in general, $N_j = g_{j+1}E_j - g_j E_{j+1}$, $j = 1,\ldots,n-1$, where $E_1,\ldots,E_n$ are the columns of the identity matrix $E$ of size $n\times n$. The vectors $G, N_1,\ldots,N_{n-1}$ are functions of a pair $(t,x) \in \mathbb{T}\times\mathbb{R}^n$ with values in $\mathbb{R}^n$. The arguments of these vector functions are omitted for brevity.
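The construction of this basis is straightforward to implement in code. A minimal NumPy sketch (illustrative; the function name is ours) builds the matrix with columns $G, N_1,\ldots,N_{n-1}$ for a numeric gradient:

```python
import numpy as np

def tangent_basis(g):
    """Matrix N_G with columns G, N_1, ..., N_{n-1}, where
    N_j = g_{j+1} E_j - g_j E_{j+1} (indices as in the text)."""
    g = np.asarray(g, dtype=float)
    n = g.size
    ng = np.zeros((n, n))
    ng[:, 0] = g                  # first column: the gradient G
    for j in range(n - 1):
        ng[j, j + 1] = g[j + 1]   # component g_{j+1} of N_{j+1}
        ng[j + 1, j + 1] = -g[j]  # component -g_j of N_{j+1}
    return ng

g = np.array([1.0, 2.0, 3.0, 4.0])
NG = tangent_basis(g)
print(NG[:, 1:].T @ g)        # zeros: every N_j is orthogonal to G
print(np.linalg.det(NG))      # (-1)^{n-1} |G|^2 g_2 ... g_{n-1} = -180
```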
By definition, the vectors $N_1,\ldots,N_{n-1}$ are orthogonal to the vector $G$. Furthermore, the vectors $N_j$ and $N_k$ are orthogonal if $|j - k| > 1$; i.e., the corresponding Gram matrix for the set of vectors $G, N_1,\ldots,N_{n-1}$ is tridiagonal in the general case.
Indeed, let j + 1 < k . Then, the vector N j can only have nonzero components with indices j and j + 1 , while the vector N k can only have nonzero components with indices k and k + 1 . Consequently, the inner product of vectors N j and N k is equal to zero since j < j + 1 < k < k + 1 . The same result holds if k + 1 < j .
Proposition 1. 
If $g_2 \neq 0$ and also $g_3 \neq 0, \ldots, g_{n-1} \neq 0$ for $n > 3$, then the vectors $G, N_1,\ldots,N_{n-1}$ are linearly independent, and the determinant of the matrix formed by these vectors is equal to $(-1)^{n-1}|G|^2\pi_n$, where
$$\pi_n = \begin{cases} 1 & \text{for } n = 2, \\ g_2 g_3 \cdots g_{n-1} & \text{for } n > 2. \end{cases} \tag{14}$$
Proof. 
First, we find the determinant of the $(n\times n)$-matrix $N_G$ whose columns are the vectors $G, N_1,\ldots,N_{n-1}$. The case $n = 2$ is trivial:
$$N_G = \begin{bmatrix} g_1 & g_2 \\ g_2 & -g_1 \end{bmatrix}, \qquad \det N_G = -g_1^2 - g_2^2 = -(g_1^2 + g_2^2) = -|G|^2.$$
Consider the case n > 2 :
$$N_G = \begin{bmatrix} g_1 & g_2 & 0 & \cdots & 0 & 0 \\ g_2 & -g_1 & g_3 & \cdots & 0 & 0 \\ g_3 & 0 & -g_2 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & g_{n-1} & 0 \\ g_{n-1} & 0 & 0 & \cdots & -g_{n-2} & g_n \\ g_n & 0 & 0 & \cdots & 0 & -g_{n-1} \end{bmatrix},$$
and compute its determinant using the Laplace expansion along the last row, i.e.,
$$\det N_G = (-1)^{n-1} g_n \begin{vmatrix} g_2 & 0 & \cdots & 0 & 0 \\ -g_1 & g_3 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & g_{n-1} & 0 \\ 0 & 0 & \cdots & -g_{n-2} & g_n \end{vmatrix} - g_{n-1} \begin{vmatrix} g_1 & g_2 & 0 & \cdots & 0 \\ g_2 & -g_1 & g_3 & \cdots & 0 \\ g_3 & 0 & -g_2 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & g_{n-1} \\ g_{n-1} & 0 & 0 & \cdots & -g_{n-2} \end{vmatrix}.$$
The first determinant on the right-hand side is equal to the product of its diagonal elements, and the second one is similar in structure to the original determinant but of size $(n-1)\times(n-1)$. Thus, denoting $D_n = \det N_G$, we obtain the recurrence formula
$$D_n = (-1)^{n-1} g_2 g_3 \cdots g_{n-1}\, g_n^2 - g_{n-1} D_{n-1} = (-1)^{n-1} g_n^2\,\pi_n - g_{n-1} D_{n-1}. \tag{15}$$
Second, we show that
$$D_n = (-1)^{n-1}(g_1^2 + g_2^2 + \cdots + g_n^2)\,\pi_n = (-1)^{n-1}|G|^2 \pi_n,$$
using mathematical induction.
This formula is valid for $n = 2$, since $D_2 = -|G|^2$. Further, we assume that
$$D_{n-1} = (-1)^{n-2}(g_1^2 + g_2^2 + \cdots + g_{n-1}^2)\,\pi_{n-1}.$$
Then, in accordance with the recurrence Formula (15), we have
$$\begin{aligned} D_n &= (-1)^{n-1} g_n^2\,\pi_n - g_{n-1}\,(-1)^{n-2}(g_1^2 + g_2^2 + \cdots + g_{n-1}^2)\,\pi_{n-1}\\ &= (-1)^{n-1} g_n^2\,\pi_n + (-1)^{n-1}(g_1^2 + g_2^2 + \cdots + g_{n-1}^2)\,\pi_n = (-1)^{n-1}(g_1^2 + g_2^2 + \cdots + g_n^2)\,\pi_n = (-1)^{n-1}|G|^2 \pi_n. \end{aligned}$$
Therefore, if $g_2 \neq 0$ and also $g_3 \neq 0, \ldots, g_{n-1} \neq 0$ for $n > 3$, then $D_n = \det N_G \neq 0$, and the vectors $G, N_1,\ldots,N_{n-1}$ are linearly independent. The proposition has been proven. □
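Proposition 1 can also be confirmed in a symbolic computation environment, which is precisely the setting targeted by the method. A short SymPy sketch (illustrative; SymPy is assumed):

```python
import sympy as sp

n = 5
g = sp.symbols(f"g1:{n + 1}")          # g1, ..., g5
NG = sp.zeros(n, n)
NG[:, 0] = sp.Matrix(g)                # first column: the gradient G
for j in range(n - 1):                 # N_j = g_{j+1} E_j - g_j E_{j+1}
    NG[j, j + 1] = g[j + 1]
    NG[j + 1, j + 1] = -g[j]

pi_n = sp.Mul(*g[1:n - 1])             # g2 * ... * g_{n-1}
expected = (-1)**(n - 1) * sum(gi**2 for gi in g) * pi_n
print(sp.simplify(NG.det() - expected))  # 0
```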
Similarly, we can determine a set of $n$ linearly independent vectors orthogonal to the generalized gradient $\nabla_{t,x} M(t,x)$. For this, we introduce the following notations:
$$\tilde G = [g_0\ g_1\ \ldots\ g_n]^{\mathrm{T}}, \quad g_0 = \frac{\partial M(t,x)}{\partial t}, \quad g_i = \frac{\partial M(t,x)}{\partial x_i}, \quad i = 1,\ldots,n, \tag{16}$$
as well as
$$\tilde N_0 = [1\ \ {-g_0/g_1}\ \ 0\ \ldots\ 0\ \ 0]^{\mathrm{T}}, \quad \tilde N_1 = [0\ \ g_2\ \ {-g_1}\ \ 0\ \ldots\ 0]^{\mathrm{T}}, \quad \ldots, \quad \tilde N_{n-2} = [0\ \ldots\ 0\ \ g_{n-1}\ \ {-g_{n-2}}\ \ 0]^{\mathrm{T}}, \quad \tilde N_{n-1} = [0\ \ 0\ \ldots\ 0\ \ g_n\ \ {-g_{n-1}}]^{\mathrm{T}}, \tag{17}$$
or, in general, $\tilde N_j = g_{j+1}E_j - g_j E_{j+1}$, $j = 1,\ldots,n-1$, where $E_0, E_1,\ldots,E_n$ are the columns of the identity matrix $E$ of size $(n+1)\times(n+1)$, numbered from zero. By definition, these vectors are orthogonal to the vector $\tilde G$. The vectors $\tilde G, \tilde N_0, \tilde N_1,\ldots,\tilde N_{n-1}$ are functions of a pair $(t,x)\in\mathbb{T}\times\mathbb{R}^n$ with values in $\mathbb{R}^{n+1}$. As before, the arguments of these vector functions are omitted for brevity.
Proposition 2. 
If $g_1 \neq 0$ and also $g_2 \neq 0, \ldots, g_{n-1} \neq 0$ for $n > 2$, then the vectors $\tilde G, \tilde N_0, \tilde N_1,\ldots,\tilde N_{n-1}$ are linearly independent, and the determinant of the matrix formed by these vectors is equal to $(-1)^n |\tilde G|^2 \pi_n$, where $\pi_n$ is given by Formula (14).
Proof. 
The determinant of the $((n+1)\times(n+1))$-matrix whose columns are the vectors $\tilde G, g_1\tilde N_0, \tilde N_1,\ldots,\tilde N_{n-1}$ is equal to $(-1)^n |\tilde G|^2 g_1 \pi_n$. The proof of this statement repeats the proof of Proposition 1. By the properties of determinants, if all elements of the column $g_1\tilde N_0$ are divided by $g_1$, then $(-1)^n |\tilde G|^2 g_1 \pi_n$ should also be divided by $g_1$.
Thus, the determinant of the matrix whose columns are the vectors $\tilde G, \tilde N_0, \tilde N_1,\ldots,\tilde N_{n-1}$ is equal to $(-1)^n |\tilde G|^2 \pi_n$. Consequently, if $g_1 \neq 0$ and also $g_2 \neq 0, \ldots, g_{n-1} \neq 0$ for $n > 2$, then these vectors are linearly independent. The proposition has been proven. □
Sets of linearly independent vectors orthogonal to the gradient $\nabla_x M(t,x)$ or the generalized gradient $\nabla_{t,x} M(t,x)$ can certainly be constructed in a different way. However, the proposed approach is quite sufficient due to the simplicity of implementation of the corresponding algorithm. If the conditions of Propositions 1 and 2 are satisfied, then any other set of linearly independent vectors is expressed through the vectors defined above.
Additional orthogonalization, e.g., using the Gram–Schmidt process, is not assumed here, since the vectors are functions of the point $(t,x)\in\mathbb{T}\times\mathbb{R}^n$ in the general case. This would complicate the implementation of the corresponding algorithm and entail complex expressions for the components of the orthogonal vectors.
Remark 2. 
For $n > 2$, according to Proposition 1, the vectors $G, N_1,\ldots,N_{n-1}$ are linearly independent even if $g_1 \equiv 0$ and $g_n \equiv 0$, provided that $g_2 \neq 0$, $g_3 \neq 0, \ldots, g_{n-1} \neq 0$; i.e., the function $M(t,x)$ may not depend on the components $x_1$ and $x_n$ of the vector $x$.
If $g_1 \equiv 0$ and $g_m \equiv 0, \ldots, g_n \equiv 0$, i.e., the function $M(t,x)$ does not depend on the components $x_1, x_m,\ldots,x_n$ ($2 < m < n$), then as the set of linearly independent vectors we can take $N_1,\ldots,N_{m-1}$ from the set (13), supplementing them with the unit vectors $E_{m+1},\ldots,E_n$, columns of the identity matrix $E$ of size $n\times n$. The determinant of the matrix whose columns are the vectors $G, N_1,\ldots,N_{m-1}, E_{m+1},\ldots,E_n$ is equal to $(-1)^{m-1}|G|^2\pi_m$, where $\pi_m$ is given by Formula (14).
The independence of the function $M(t,x)$ from the components $x_1, x_m,\ldots,x_n$ does not limit the generality of the reasoning, since the components $x_i$ with $g_i \equiv 0$ can always be ordered in this way.
The same arguments are valid for the set of vectors $\tilde G, \tilde N_0, \tilde N_1,\ldots,\tilde N_{n-1}$. According to Proposition 2, they are linearly independent even if $g_n \equiv 0$, provided that $g_1 \neq 0$, $g_2 \neq 0, \ldots, g_{n-1} \neq 0$; i.e., the function $M(t,x)$ may not depend on the last component $x_n$ of the vector $x$. Let $g_m \equiv 0, \ldots, g_n \equiv 0$, i.e., the function $M(t,x)$ does not depend on the components $x_m,\ldots,x_n$ ($1 < m < n$). Then, as the set of linearly independent vectors, we can take $\tilde N_0, \tilde N_1,\ldots,\tilde N_{m-1}$ from the set (17), supplementing them with the unit vectors $E_{m+1},\ldots,E_n$, columns of the identity matrix $E$ of size $(n+1)\times(n+1)$, provided that these columns are numbered from zero.
Let $\mathcal{N}$ be the linear span of the vectors $N_1,\ldots,N_{n-1}$, a linear subspace of dimension $n-1$:
$$\mathcal{N} = \operatorname{span}\{N_1,\ldots,N_{n-1}\}, \tag{18}$$
and let $\mathcal{N}_f$ and $\mathcal{N}_a$ be the linear manifolds $\mathcal{N} + N_0 + \Sigma$ and $\mathcal{N} + N_0$, respectively:
$$\mathcal{N}_f = \{N_f\colon N_f = N + N_0 + \Sigma,\ N \in \mathcal{N}\}, \qquad \mathcal{N}_a = \{N_a\colon N_a = N + N_0,\ N \in \mathcal{N}\},$$
where the vectors $N_0$ and $\Sigma$ are functions of a pair $(t,x)\in\mathbb{T}\times\mathbb{R}^n$ with values in $\mathbb{R}^n$. The first vector is defined by the formula
$$N_0 = [\,{-g_0/g_1}\ \ 0\ \ldots\ 0\ \ 0\,]^{\mathrm{T}},$$
and the second one is determined by relation (11).
The set $\mathcal{N}$ is the orthogonal complement of the set $\mathcal{V} = \{V\colon V = \alpha\,\nabla_x M(t,x),\ \alpha\in\mathbb{R}\}$; i.e., an arbitrary linear combination of the vectors $N_1,\ldots,N_{n-1}$ is orthogonal to the gradient $\nabla_x M(t,x)$. Therefore, condition (7) can be rewritten as
$$\sigma_l(t,x) \in \mathcal{N}, \quad l = 1,\ldots,s, \tag{19}$$
or
$$\sigma_l(t,x) = u_1^l(t,x)\,N_1 + \cdots + u_{n-1}^l(t,x)\,N_{n-1},$$
where the functions $u_1^l(t,x),\ldots,u_{n-1}^l(t,x)$ can be chosen arbitrarily under the additional condition of existence of a solution to Equation (5). They represent the expansion coefficients of the columns $\sigma_l(t,x)$ relative to the set of linearly independent vectors $N_1,\ldots,N_{n-1}$, the basis of the linear subspace $\mathcal{N}$.
Conditions (8) and (10), taking into account the above notations, can be rewritten as follows:
$$f(t,x) \in \mathcal{N}_f, \tag{20}$$
$$a(t,x) \in \mathcal{N}_a, \tag{21}$$
or
$$f(t,x) = N_0 + \Sigma + u_1^0(t,x)\,N_1 + \cdots + u_{n-1}^0(t,x)\,N_{n-1}, \qquad a(t,x) = N_0 + u_1^0(t,x)\,N_1 + \cdots + u_{n-1}^0(t,x)\,N_{n-1},$$
where the functions $u_1^0(t,x),\ldots,u_{n-1}^0(t,x)$, like the previously introduced functions $u_1^l(t,x),\ldots,u_{n-1}^l(t,x)$, can be chosen arbitrarily under the additional condition of existence of a solution to Equation (5).
The choice of the functions $u_j^l(t,x)$ for $j = 1,\ldots,n-1$ and $l = 0,1,\ldots,s$ specifies the deterministic and stochastic components of a dynamic system (the drift and diffusion coefficients). It influences such properties of invariant stochastic differential systems as stability (or partial stability) and optimality. The study of such properties requires additional conditions, e.g., stability and optimality criteria.
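In a symbolic computation environment, assembling the coefficients from chosen functions $u_j^l$ takes only a few lines. The sketch below (illustrative; SymPy assumed, with constant coefficients $u_j^l$ for simplicity and the catenoid first integral of Example 1 used as a test case) builds $\sigma$, $a$, and $f$ and verifies the orthogonality conditions:

```python
import sympy as sp

x1, x2, x3 = xs = sp.symbols("x1 x2 x3")
X = sp.Matrix(xs)
M = x1**2 + x2**2 - sp.cosh(x3)**2              # first integral (catenoid)
G = sp.Matrix([M.diff(xi) for xi in xs])        # gradient

# Basis vectors from set (13): N_j = g_{j+1} E_j - g_j E_{j+1}.
N1 = sp.Matrix([G[1], -G[0], 0])
N2 = sp.Matrix([0, G[2], -G[1]])

# Freely chosen expansion coefficients (constants here for simplicity).
u10, u20, u11, u21 = sp.symbols("u10 u20 u11 u21")

sigma = u11 * N1 + u21 * N2                     # condition (19), s = 1
Sigma = sigma.jacobian(X) * sigma / 2           # vector (11)
a = u10 * N1 + u20 * N2                         # condition (21), N_0 = 0
f = a + Sigma                                   # condition (20), Ito drift

# Both sigma and a are orthogonal to the gradient, as required:
print(sp.simplify((sigma.T * G)[0]), sp.simplify((a.T * G)[0]))  # 0 0
```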
Remark 3. 
The proposed approach has several advantages. First, it provides a minimum number of functions that define the entire set of invariant stochastic differential systems with a given first integral. Second, coefficients of Equations (5) and (9) depend on these functions linearly. If we consider such functions as the control inputs and formulate an optimal control problem for the system, then the linearity of coefficients ensures a simpler optimal control structure. Third, the definition of vectors N 1 , , N n 1 allows one to ensure a degenerate diffusion matrix relative to some components, which is often necessary in applied problems such as motion control.
Remark 4. 
If $M(t,x) = M(x)$, then the vector $N_0$ is equal to zero; therefore, $\mathcal{N}$ and $\mathcal{N}_a$ coincide, and $\mathcal{N}_f$ is the linear manifold $\mathcal{N} + \Sigma$. In a particular case, the vector $\Sigma$ can also be zero, and then $\mathcal{N} = \mathcal{N}_a = \mathcal{N}_f$. For example, this statement is valid for the system of Equation (4).
Next, we formulate invariance conditions that follow from the above reasoning.
Theorem 1. 
Let the conditions of Proposition 1 be satisfied if $M(t,x) = M(x)$, or the conditions of Proposition 2 be satisfied if $M(t,x) \not\equiv M(x)$. Then,
(1) For the invariance of a stochastic differential system defined by the Itô stochastic differential equation (5), it is necessary and sufficient that conditions (19) and (20) hold on trajectories of the random process X ( t ) ;
(2) For the invariance of a stochastic differential system defined by the Stratonovich stochastic differential Equation (9), it is necessary and sufficient that conditions (19) and (21) hold on trajectories of the random process X ( t ) .
The conditions of Theorem 1 can be weakened, taking into account Remark 2. The described approach can also be applied when $n > 2$ and the conditions $g_2 \neq 0$, $g_3 \neq 0, \ldots, g_{n-1} \neq 0$ used in Propositions 1 and 2 are violated on some subset of $\mathbb{T}\times\mathbb{R}^n$. On such a subset, the vectors $G, N_1,\ldots,N_{n-1}$ are not linearly independent; i.e., the basis of the linear subspace $\mathcal{N}$ degenerates. When the basis degenerates, condition (6) still holds, but conditions (19), (20), and (21) are only sufficient, not necessary.
For example, consider the invariant stochastic differential system for $n = 4$ and $s = 3$ with the first integral $M(t,\lambda) = M(\lambda) = (\lambda_0^2+\lambda_1^2+\lambda_2^2+\lambda_3^2)/2$. The difference from Formula (2) is only in the numerical coefficient (see also the system of Equation (3) describing the rotation of a rigid body in three-dimensional space):
$$G = [\lambda_0\ \ \lambda_1\ \ \lambda_2\ \ \lambda_3]^{\mathrm{T}}, \quad N_1 = [\lambda_1\ \ {-\lambda_0}\ \ 0\ \ 0]^{\mathrm{T}}, \quad N_2 = [0\ \ \lambda_2\ \ {-\lambda_1}\ \ 0]^{\mathrm{T}}, \quad N_3 = [0\ \ 0\ \ \lambda_3\ \ {-\lambda_2}]^{\mathrm{T}}.$$
Here, the basis degenerates if λ 1 = 0 or λ 2 = 0 . In this problem, it is better to use vectors Λ 1 , Λ 2 , and Λ 3 , where
$$\Lambda_1 = [-\lambda_1\ \ \lambda_0\ \ \lambda_3\ \ {-\lambda_2}]^{\mathrm{T}}, \quad \Lambda_2 = [-\lambda_2\ \ {-\lambda_3}\ \ \lambda_0\ \ \lambda_1]^{\mathrm{T}}, \quad \Lambda_3 = [-\lambda_3\ \ \lambda_2\ \ {-\lambda_1}\ \ \lambda_0]^{\mathrm{T}},$$
instead of vectors N 1 , N 2 , and N 3 . This follows from the system of Equation (3), which can be rewritten in the form (9):
$$\begin{aligned} d\lambda_0(t) &= -\tfrac{1}{2}\bigl(\lambda_1(t)\,\omega_1(t)+\lambda_2(t)\,\omega_2(t)+\lambda_3(t)\,\omega_3(t)\bigr)\,dt + \tfrac{1}{2}\bigl(-\sigma_1\lambda_1(t)\,dW_1(t)-\sigma_2\lambda_2(t)\,dW_2(t)-\sigma_3\lambda_3(t)\,dW_3(t)\bigr),\\ d\lambda_1(t) &= \tfrac{1}{2}\bigl(\lambda_0(t)\,\omega_1(t)-\lambda_3(t)\,\omega_2(t)+\lambda_2(t)\,\omega_3(t)\bigr)\,dt + \tfrac{1}{2}\bigl(\sigma_1\lambda_0(t)\,dW_1(t)-\sigma_2\lambda_3(t)\,dW_2(t)+\sigma_3\lambda_2(t)\,dW_3(t)\bigr),\\ d\lambda_2(t) &= \tfrac{1}{2}\bigl(\lambda_3(t)\,\omega_1(t)+\lambda_0(t)\,\omega_2(t)-\lambda_1(t)\,\omega_3(t)\bigr)\,dt + \tfrac{1}{2}\bigl(\sigma_1\lambda_3(t)\,dW_1(t)+\sigma_2\lambda_0(t)\,dW_2(t)-\sigma_3\lambda_1(t)\,dW_3(t)\bigr),\\ d\lambda_3(t) &= \tfrac{1}{2}\bigl(-\lambda_2(t)\,\omega_1(t)+\lambda_1(t)\,\omega_2(t)+\lambda_0(t)\,\omega_3(t)\bigr)\,dt + \tfrac{1}{2}\bigl(-\sigma_1\lambda_2(t)\,dW_1(t)+\sigma_2\lambda_1(t)\,dW_2(t)+\sigma_3\lambda_0(t)\,dW_3(t)\bigr), \end{aligned} \tag{22}$$
where W ( t ) = [ W 1 ( t ) W 2 ( t ) W 3 ( t ) ] T is the standard vector Wiener process corresponding to the vector Gaussian white noise V ( t ) .
At the same time, the first integral (2) corresponds to infinitely many invariant stochastic differential systems, and the system of Equation (22) describes only one of them.
If $\lambda_1 \neq 0$ and $\lambda_2 \neq 0$, then the vectors $\Lambda_1, \Lambda_2, \Lambda_3$ are related to the vectors $N_1, N_2, N_3$ by the expressions
$$\Lambda_1 = -N_1 + N_3, \qquad \Lambda_2 = -\frac{\lambda_2}{\lambda_1}\,N_1 - \Bigl(\frac{\lambda_3}{\lambda_2}+\frac{\lambda_0}{\lambda_1}\Bigr)N_2 - \frac{\lambda_1}{\lambda_2}\,N_3, \qquad \Lambda_3 = -\frac{\lambda_3}{\lambda_1}\,N_1 + \Bigl(1-\frac{\lambda_0\lambda_3}{\lambda_1\lambda_2}\Bigr)N_2 - \frac{\lambda_0}{\lambda_2}\,N_3,$$
but the basis $\Lambda_1, \Lambda_2, \Lambda_3$ does not degenerate. Moreover, on the manifold (2) it is an orthonormal basis, $|G| = |\Lambda_1| = |\Lambda_2| = |\Lambda_3| = 1$, and the determinant of the matrix formed by the vectors $G, \Lambda_1, \Lambda_2, \Lambda_3$ is $|G|^4 = 1$ (this is easy to verify). However, in the general case, it is difficult to propose a basis that is defined by equally simple formulae and has similar properties (the goal of this article is to propose precisely a simple method for forming invariant stochastic differential systems).
Consider again the system of Equation (4), defining iterated stochastic integrals of the second multiplicity. Here, $n = 4$, $s = 2$, and $M(t,x) = M(x) = x_2 + x_4 - x_1 x_3$. It is easy to show that such a system of equations can be obtained by the described method. Indeed,
$$G = [-x_3\ \ 1\ \ {-x_1}\ \ 1]^{\mathrm{T}},$$
hence,
$$N_1 = [1\ \ x_3\ \ 0\ \ 0]^{\mathrm{T}}, \quad N_2 = [0\ \ {-x_1}\ \ {-1}\ \ 0]^{\mathrm{T}}, \quad N_3 = [0\ \ 0\ \ 1\ \ x_1]^{\mathrm{T}}.$$
In this case,
$$\sigma_1(t,x) = N_1, \quad \sigma_2(t,x) = N_3, \quad f(t,x) = a(t,x) = 0 \quad (N_0 = 0,\ \Sigma = 0),$$
and this corresponds to conditions (19), (20), and (21).
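These relations are easy to verify symbolically; a brief sketch (illustrative; SymPy assumed):

```python
import sympy as sp

x1, x2, x3, x4 = xs = sp.symbols("x1:5")
X = sp.Matrix(xs)
M = x2 + x4 - x1 * x3
G = sp.Matrix([M.diff(xi) for xi in xs])     # [-x3, 1, -x1, 1]^T

sigma1 = sp.Matrix([1, x3, 0, 0])            # N1
sigma2 = sp.Matrix([0, 0, 1, x1])            # N3

# Condition (19): both diffusion columns are orthogonal to the gradient.
print((sigma1.T * G)[0].simplify(), (sigma2.T * G)[0].simplify())  # 0 0

# Vector (11) vanishes here, so f = a = 0 satisfies (20) and (21).
Sigma = (sigma1.jacobian(X) * sigma1 + sigma2.jacobian(X) * sigma2) / 2
print(Sigma.T)                               # Matrix([[0, 0, 0, 0]])
```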

4. Invariant Stochastic Differential Systems of the Second, Fourth, and Eighth Orders

In the previous section, we noted that the basis (13) of the linear subspace $\mathcal{N}$ can degenerate. This section sequentially examines invariant stochastic differential systems of the second, fourth, and eighth orders. Here, a basis of the linear subspace $\mathcal{N}$ is related to the definition of the multiplication of complex numbers ($n = 2$), quaternions ($n = 4$), and octonions ($n = 8$).
The following proposition is formulated for the case n = 2 . Although trivial, it is important in the general context.
Proposition 3. 
Let $n = 2$ and $|G| \neq 0$, where $G$ is given by Formula (12). Then, the vectors $G$ and $N_1$, where
$$N_1 = [-g_2\ \ g_1]^{\mathrm{T}}, \tag{23}$$
are orthogonal, and the determinant of the matrix formed by these vectors is equal to | G | 2 .
Proof. 
Indeed,
$$(G, N_1) = -g_1 g_2 + g_1 g_2 = 0,$$
i.e., vectors G and N 1 are orthogonal, and
$$\begin{vmatrix} g_1 & -g_2 \\ g_2 & g_1 \end{vmatrix} = g_1^2 + g_2^2 = |G|^2.$$
The proposition has been proven. □
The vector $N_1$ in Proposition 3 differs in sign from the corresponding vector in set (13); therefore, Proposition 3 can be considered a corollary of Proposition 1.
A classic example of a second-order invariant stochastic differential system is the Kubo oscillator [36]. Trajectories of such a system almost surely belong to a circular cylinder, and the phase trajectories lie on a circle (the projection of the cylinder onto the phase plane). Proposition 3 covers all second-order invariant stochastic differential systems, e.g., those with first integrals that correspond to elliptic, parabolic, and hyperbolic cylinders [37].
For $n = 4$, the system of Equation (3) describing the rotation of a rigid body in three-dimensional space uses a non-degenerate basis. In this example, the first integral defines a three-dimensional hypersphere centered at the origin with unit radius. However, a similar result can be formulated for more general invariant stochastic differential systems, namely those with arbitrary first integrals.
Proposition 4. 
Let $n = 4$ and $|G| \neq 0$, where $G$ is given by Formula (12). Then, the vectors $G, N_1, N_2, N_3$ subject to
$$N_1 = [-g_2\ \ g_1\ \ g_4\ \ {-g_3}]^{\mathrm{T}}, \quad N_2 = [-g_3\ \ {-g_4}\ \ g_1\ \ g_2]^{\mathrm{T}}, \quad N_3 = [-g_4\ \ g_3\ \ {-g_2}\ \ g_1]^{\mathrm{T}}, \tag{24}$$
are orthogonal, and the determinant of the matrix formed by these vectors is equal to | G | 4 .
Proof. 
Any of the vectors N 1 , N 2 , and N 3 can be obtained from vector G using the following transformation. Components of vector G are divided into pairs; then, elements in each pair are permuted, and the sign of one element in the pair changes after the permutation. Geometrically, such a transformation corresponds to rotating the points of the plane by a right angle.
This transformation ensures pairwise orthogonality of vectors G , N 1 , N 2 , N 3 . It can be verified by directly calculating the pairwise inner products (they are equal to zero).
Next, we find the determinant of the matrix formed by such vectors:
$$\begin{vmatrix} g_1 & -g_2 & -g_3 & -g_4 \\ g_2 & g_1 & -g_4 & g_3 \\ g_3 & g_4 & g_1 & -g_2 \\ g_4 & -g_3 & g_2 & g_1 \end{vmatrix} = g_1\,C_{11} + g_2\,C_{21} + g_3\,C_{31} + g_4\,C_{41} = (g_1^2+g_2^2+g_3^2+g_4^2)^2 = |G|^4,$$
where the expansion is along the first column and each cofactor evaluates to $C_{i1} = g_i(g_1^2+g_2^2+g_3^2+g_4^2) = g_i|G|^2$.
The proposition has been proven. □
Remark 5. 
In the context of this work, it is sufficient to show that the matrix N G formed by vectors G , N 1 , N 2 , N 3 is non-singular.
It is easy to see that
$$|G| = |N_1| = |N_2| = |N_3|,$$
i.e., the vectors $G/|G|, N_1/|G|, N_2/|G|, N_3/|G|$ are orthonormal. The matrix formed by these vectors is orthogonal and, therefore, non-singular. This implies that the matrix $N_G$ is non-singular.
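Both the orthogonality and the value of the determinant are easy to confirm numerically. A sketch (illustrative; NumPy assumed, with the sign convention of (24) as reconstructed above):

```python
import numpy as np

def basis4(g):
    """Matrix with columns G, N1, N2, N3 of the basis (24)."""
    g1, g2, g3, g4 = g
    cols = [
        [ g1,  g2,  g3,  g4],   # G
        [-g2,  g1,  g4, -g3],   # N1
        [-g3, -g4,  g1,  g2],   # N2
        [-g4,  g3, -g2,  g1],   # N3
    ]
    return np.array(cols).T

g = np.random.default_rng(1).normal(size=4)
NG = basis4(g)
print(np.allclose(NG.T @ NG, (g @ g) * np.eye(4)))  # orthogonal, equal norms
print(np.isclose(np.linalg.det(NG), (g @ g)**2))    # det = |G|^4
```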
Next, we consider the case n = 8 .
Proposition 5. 
Let $n = 8$ and $|G| \neq 0$, where $G$ is given by Formula (12). Then, the vectors $G, N_1,\ldots,N_7$ subject to
$$\begin{aligned} N_1 &= [-g_2\ \ g_1\ \ g_4\ \ {-g_3}\ \ g_6\ \ {-g_5}\ \ {-g_8}\ \ g_7]^{\mathrm{T}}, & N_2 &= [-g_3\ \ {-g_4}\ \ g_1\ \ g_2\ \ g_7\ \ g_8\ \ {-g_5}\ \ {-g_6}]^{\mathrm{T}},\\ N_3 &= [-g_4\ \ g_3\ \ {-g_2}\ \ g_1\ \ g_8\ \ {-g_7}\ \ g_6\ \ {-g_5}]^{\mathrm{T}}, & N_4 &= [-g_5\ \ {-g_6}\ \ {-g_7}\ \ {-g_8}\ \ g_1\ \ g_2\ \ g_3\ \ g_4]^{\mathrm{T}},\\ N_5 &= [-g_6\ \ g_5\ \ {-g_8}\ \ g_7\ \ {-g_2}\ \ g_1\ \ {-g_4}\ \ g_3]^{\mathrm{T}}, & N_6 &= [-g_7\ \ g_8\ \ g_5\ \ {-g_6}\ \ {-g_3}\ \ g_4\ \ g_1\ \ {-g_2}]^{\mathrm{T}},\\ N_7 &= [-g_8\ \ {-g_7}\ \ g_6\ \ g_5\ \ {-g_4}\ \ {-g_3}\ \ g_2\ \ g_1]^{\mathrm{T}}, \end{aligned} \tag{25}$$
are orthogonal, and the matrix formed by these vectors is non-singular.
The proof of this proposition is based on the reasoning used in the proof of Proposition 4 and in Remark 5. Additionally, it can be shown that the determinant of the matrix formed by the vectors $G, N_1,\ldots,N_7$ is equal to $|G|^8$.
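As for $n = 4$, the stated properties of the basis (25) can be checked numerically; the following sketch (illustrative; NumPy assumed, signs as reconstructed above) verifies that the columns are pairwise orthogonal with equal norms and that the determinant equals $|G|^8$:

```python
import numpy as np

def basis8(g):
    """Matrix with columns G, N1, ..., N7 of the basis (25)."""
    g1, g2, g3, g4, g5, g6, g7, g8 = g
    cols = [
        [ g1,  g2,  g3,  g4,  g5,  g6,  g7,  g8],   # G
        [-g2,  g1,  g4, -g3,  g6, -g5, -g8,  g7],   # N1
        [-g3, -g4,  g1,  g2,  g7,  g8, -g5, -g6],   # N2
        [-g4,  g3, -g2,  g1,  g8, -g7,  g6, -g5],   # N3
        [-g5, -g6, -g7, -g8,  g1,  g2,  g3,  g4],   # N4
        [-g6,  g5, -g8,  g7, -g2,  g1, -g4,  g3],   # N5
        [-g7,  g8,  g5, -g6, -g3,  g4,  g1, -g2],   # N6
        [-g8, -g7,  g6,  g5, -g4, -g3,  g2,  g1],   # N7
    ]
    return np.array(cols).T

g = np.random.default_rng(2).normal(size=8)
NG = basis8(g)
print(np.allclose(NG.T @ NG, (g @ g) * np.eye(8)))  # orthogonal, equal norms
print(np.isclose(np.linalg.det(NG), (g @ g)**4))    # det = |G|^8
```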
It is impossible to construct an orthogonal basis in a space of arbitrary dimension in the same way, i.e., by partitioning the components of the vector $G$ into pairs and then permuting the elements and changing the sign of one element in each pair. The properties of the algebras of complex numbers, quaternions, and octonions are essential here [38]. However, we can use the orthogonal basis corresponding to the cases $n = 4$ or $n = 8$ in spaces of lower dimension.
For example, consider the basis (24) corresponding to the case $n = 4$, setting $g_4 = 0$ and discarding the last component (the projection of the basis into $\mathbb{R}^3$):
$$N_1 = [-g_2\ \ g_1\ \ 0]^{\mathrm{T}}, \quad N_2 = [-g_3\ \ 0\ \ g_1]^{\mathrm{T}}, \quad N_3 = [0\ \ g_3\ \ {-g_2}]^{\mathrm{T}}.$$
The vectors $N_1, N_2, N_3$ are orthogonal to the vector $G = [g_1\ g_2\ g_3]^{\mathrm{T}}$, but the determinant of the matrix formed by the vectors $N_1, N_2, N_3$ is equal to zero. However, the rank of this matrix is equal to two provided that $|G| \neq 0$; i.e., there exist two linearly independent vectors among them. As an example, we can consider diffusion on a sphere, namely a problem that involves a third-order invariant stochastic differential system [35]. Its state belongs to a sphere in three-dimensional space centered at the origin with a radius determined by the initial state. This is precisely the set of vectors used in such a problem.
As another example, consider the basis (25) for the case $n = 8$, setting $g_7 = g_8 = 0$ and discarding the last two components (the projection of the basis into $\mathbb{R}^6$):
$$\begin{aligned} N_1 &= [-g_2\ \ g_1\ \ g_4\ \ {-g_3}\ \ g_6\ \ {-g_5}]^{\mathrm{T}}, & N_2 &= [-g_3\ \ {-g_4}\ \ g_1\ \ g_2\ \ 0\ \ 0]^{\mathrm{T}},\\ N_3 &= [-g_4\ \ g_3\ \ {-g_2}\ \ g_1\ \ 0\ \ 0]^{\mathrm{T}}, & N_4 &= [-g_5\ \ {-g_6}\ \ 0\ \ 0\ \ g_1\ \ g_2]^{\mathrm{T}},\\ N_5 &= [-g_6\ \ g_5\ \ 0\ \ 0\ \ {-g_2}\ \ g_1]^{\mathrm{T}}, & N_6 &= [0\ \ 0\ \ g_5\ \ {-g_6}\ \ {-g_3}\ \ g_4]^{\mathrm{T}},\\ N_7 &= [0\ \ 0\ \ g_6\ \ g_5\ \ {-g_4}\ \ {-g_3}]^{\mathrm{T}}. \end{aligned}$$
The vectors $N_1,\ldots,N_7$ are orthogonal to the vector $G = [g_1\ g_2\ g_3\ g_4\ g_5\ g_6]^{\mathrm{T}}$. The rank of the matrix formed by the vectors $N_1,\ldots,N_7$ is equal to five provided that $|G| \neq 0$; i.e., we can choose five linearly independent vectors from these seven. Projections of the basis into $\mathbb{R}^5$ and $\mathbb{R}^7$ are constructed similarly.
Next, we restrict ourselves to the case $M(t,x) = M(x)$ (see Remark 4) and rewrite conditions (7), (8), and (10) as
$$\sigma_l(t,x) \in \mathcal{N}, \quad l = 1,\ldots,s, \tag{26}$$
$$f(t,x) \in \mathcal{N}_f, \tag{27}$$
$$a(t,x) \in \mathcal{N}_a, \tag{28}$$
where
$$\mathcal{N} = \operatorname{span}\{N_1,\ldots,N_{n-1}\}, \qquad \mathcal{N}_f = \{N_f\colon N_f = N + \Sigma,\ N\in\mathcal{N}\}, \qquad \mathcal{N}_a = \mathcal{N}.$$
In this case, the vectors $N_1,\ldots,N_{n-1}$ are determined by Formulae (23), (24), or (25) for $n = 2$, $n = 4$, or $n = 8$, respectively. The vector $\Sigma$, as before, is defined by relation (11).
Now we can formulate weaker invariance conditions compared to Theorem 1.
Theorem 2. 
Let $M(t,x) \equiv M(x)$, $n \in \{2, 4, 8\}$, and $|G| \neq 0$, where $G$ is given by Formula (12). Then,
(1) For the invariance of a stochastic differential system defined by the Itô stochastic differential equation (5), it is necessary and sufficient that conditions (26) and (27) hold on trajectories of the random process X ( t ) ;
(2) For the invariance of a stochastic differential system defined by the Stratonovich stochastic differential equation (9), it is necessary and sufficient that conditions (26) and (28) hold on trajectories of the random process X ( t ) .
For example, consider the case $n = 4$ and $M(t,x) = M(x) = x_2 + x_4 - x_1 x_3$ (see also the system of Equation (4) defining iterated stochastic integrals of the second multiplicity). We restrict ourselves to the simplest condition (26) for brevity. Formula (24) defines the following vectors:
$$N_1 = [-1\ \ {-x_3}\ \ 1\ \ x_1]^{\mathrm{T}}, \quad N_2 = [x_1\ \ {-1}\ \ {-x_3}\ \ 1]^{\mathrm{T}}, \quad N_3 = [-1\ \ {-x_1}\ \ {-1}\ \ {-x_3}]^{\mathrm{T}},$$
since
$$G = [-x_3\ \ 1\ \ {-x_1}\ \ 1]^{\mathrm{T}}.$$
Then, condition (26) can be rewritten in the form
$$\sigma_l(t,x) = u_1^l(t,x)\,N_1 + u_2^l(t,x)\,N_2 + u_3^l(t,x)\,N_3, \quad l = 1,\ldots,s,$$
where the functions $u_1^l(t,x)$, $u_2^l(t,x)$, and $u_3^l(t,x)$ can be chosen arbitrarily under the additional condition of existence of a solution to the corresponding stochastic differential equation.
For condition (27), the vector $\Sigma$ should be found, while condition (28) does not require any additions.
The approach described in [22,23,26] yields the following condition:
$$\sigma_l(t,x) = q_l(t,x)\begin{vmatrix} E_1 & E_2 & E_3 & E_4 \\ -x_3 & 1 & -x_1 & 1 \\ \mu_1^l(t,x) & \mu_2^l(t,x) & \mu_3^l(t,x) & \mu_4^l(t,x) \\ \nu_1^l(t,x) & \nu_2^l(t,x) & \nu_3^l(t,x) & \nu_4^l(t,x) \end{vmatrix}, \quad l = 1,\ldots,s,$$
where the determinant is understood formally, as for a vector product. Its first row is formed by the unit vectors $E_1,\ldots,E_4$, columns of the identity matrix $E$ of size $4\times 4$. The second row contains the components of the vector $G$, and the remaining rows contain the functions $\mu_1^l(t,x),\ldots,\mu_4^l(t,x)$ and $\nu_1^l(t,x),\ldots,\nu_4^l(t,x)$. The choice of these functions, along with the function $q_l(t,x) \neq 0$, is limited by the existence condition for a solution to the corresponding stochastic differential equation.
According to [22,23,26], conditions for the coefficients $f(t,x)$ and $a(t,x)$ are specified by the determinant of a $(5\times 5)$-matrix similar in structure to the above determinant of the $(4\times 4)$-matrix.
This example demonstrates that the proposed method provides simpler expressions for coefficients of stochastic differential equations and a minimum number of functions required to determine the entire set of invariant stochastic differential systems with a given first integral. All such functions are included in coefficients of equations linearly.

5. Computational Experiments

This section contains examples of invariant stochastic differential systems and the results of numerical simulations for them.
Example 1. 
This example examines the invariant stochastic differential system whose state belongs to the catenoid
$$x_1^2 + x_2^2 = \cosh^2 x_3,$$
i.e., the first integral has the form $M(t,x) = M(x) = x_1^2 + x_2^2 - \cosh^2 x_3$. The catenoid is the surface formed by rotating a catenary about an axis [39]. It represents a solution to a well-known problem in the calculus of variations, namely finding the minimal surface of revolution.
Here,
$$n = 3, \quad X(t) = [X_1(t)\ X_2(t)\ X_3(t)]^{\mathrm{T}}, \quad G = \nabla_x M(t,x) = [2x_1\ \ 2x_2\ \ {-\sinh 2x_3}]^{\mathrm{T}},$$
and therefore, two linearly independent vectors orthogonal to the gradient (the condition of linear independence is $x_2 \neq 0$), according to set (13), are represented in the form
$$N_1 = [2x_2\ \ {-2x_1}\ \ 0]^{\mathrm{T}}, \quad N_2 = [0\ \ {-\sinh 2x_3}\ \ {-2x_2}]^{\mathrm{T}}.$$
Since $M(t,x) = M(x)$, the vector $N_0$ is equal to zero (see Remark 4).
Next, we restrict ourselves to a scalar Wiener process, i.e., s = 1 . Then, according to condition (19), we have
$$\sigma(t,x) = u_1^1(t,x)\,N_1 + u_2^1(t,x)\,N_2.$$
Based on conditions (20) and (21), we obtain
$$f(t,x) = \Sigma + u_1^0(t,x)\,N_1 + u_2^0(t,x)\,N_2, \qquad a(t,x) = u_1^0(t,x)\,N_1 + u_2^0(t,x)\,N_2,$$
where Σ is given by Formula (11):
$$\Sigma = \frac{1}{2}\,\frac{\partial\sigma(t,x)}{\partial x}\,\sigma(t,x).$$
Thus, a system whose trajectories in four-dimensional space belong to a hypercylinder over a catenoid can be described by Equation (5) or (9) with the coefficients given above. The initial state $x_0 = [x_{10}\ x_{20}\ x_{30}]^{\mathrm{T}}$ should belong to the catenoid, i.e., $x_{10}^2 + x_{20}^2 = \cosh^2 x_{30}$.
Below, we give a concrete example, assuming that $u_1^0(t,x) = 1/5$, $u_2^0(t,x) = 0$, $u_1^1(t,x) = 1/3$, and $u_2^1(t,x) = 1/10$. In this case,
$$\begin{aligned} \sigma(t,x) &= \bigl[2x_2/3\ \ \ -2x_1/3 - (\sinh 2x_3)/10\ \ \ -x_2/5\bigr]^{\mathrm{T}},\\ \Sigma(t,x) &= \bigl[-2x_1/9 - (\sinh 2x_3)/30\ \ \ x_2(18\sinh^2 x_3 - 91)/450\ \ \ x_1/15 + (\sinh 2x_3)/100\bigr]^{\mathrm{T}},\\ f(t,x) &= \bigl[2x_2/5 - 2x_1/9 - (\sinh 2x_3)/30\ \ \ -2x_1/5 + x_2(18\sinh^2 x_3 - 91)/450\ \ \ x_1/15 + (\sinh 2x_3)/100\bigr]^{\mathrm{T}},\\ a(t,x) &= \bigl[2x_2/5\ \ \ -2x_1/5\ \ \ 0\bigr]^{\mathrm{T}}. \end{aligned}$$
Equation (5) for the example under consideration is written in the form
$$d\begin{bmatrix} X_1(t)\\ X_2(t)\\ X_3(t) \end{bmatrix} = \begin{bmatrix} 2X_2(t)/5 - 2X_1(t)/9 - (\sinh 2X_3(t))/30 \\ -2X_1(t)/5 + X_2(t)\bigl(18\sinh^2 X_3(t) - 91\bigr)/450 \\ X_1(t)/15 + (\sinh 2X_3(t))/100 \end{bmatrix} dt + \begin{bmatrix} 2X_2(t)/3 \\ -2X_1(t)/3 - (\sinh 2X_3(t))/10 \\ -X_2(t)/5 \end{bmatrix} dW(t),$$
and we also have the corresponding Equation (9):
$$d\begin{bmatrix} X_1(t)\\ X_2(t)\\ X_3(t) \end{bmatrix} = \begin{bmatrix} 2X_2(t)/5 \\ -2X_1(t)/5 \\ 0 \end{bmatrix} dt + \begin{bmatrix} 2X_2(t)/3 \\ -2X_1(t)/3 - (\sinh 2X_3(t))/10 \\ -X_2(t)/5 \end{bmatrix} dW(t),$$
where W ( t ) is the standard Wiener process.
Trajectories of the solution to the stochastic differential equations mentioned above almost surely belong to a three-dimensional manifold in $\mathbb{T}\times\mathbb{R}^3$, whose projection onto the phase space $\mathbb{R}^3$ is a catenoid.
It is not possible to find an analytical solution, so we use a numerical method for modeling trajectories. Let $t\in\mathbb{T} = [0, 10]$, i.e., $t_0 = 0$ and $T = 10$. We apply the Milstein method [8] with the numerical integration step $h = 0.01$ and the initial conditions
$$X_1(0) = x_{10} = 0, \quad X_2(0) = x_{20} = 1, \quad X_3(0) = x_{30} = 0.$$
Let { t k } be a partition of the time interval [ 0 , 10 ] with the given step h, i.e.,
$$t_{k+1} = t_k + h, \quad k = 0, 1, \ldots, N-1, \quad t_N = 10, \quad N = \frac{10}{h};$$
then for the Milstein method, we obtain
$$Y_{k+1} = Y_k + h\,a(t_k, Y_k) + \sqrt{h}\,\sigma(t_k, Y_k)\,\xi_k + \frac{h}{2}\,\frac{\partial\sigma(t_k, Y_k)}{\partial x}\,\sigma(t_k, Y_k)\,\xi_k^2, \quad Y_0 = x_0,$$
where the random variables $\xi_k$ are independent and have a standard normal distribution. Thus, $\{Y_k\}$ is the discrete approximation of the random process $X(t)$, and the vector $Y_k$ corresponds to time $t_k$. Three sample phase trajectories of the numerical solution approximating the random process $X(t)$ are shown in Figure 1. In the horizontal plane, we use axes for $x_1$ (left) and $x_2$ (right), and the vertical axis corresponds to $x_3$.
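For reference, the Milstein step above can be coded directly for Example 1. The sketch below (illustrative; NumPy assumed, function names ours) uses the Stratonovich drift $a$ and the reconstructed diffusion coefficients:

```python
import numpy as np

def a(x):
    """Stratonovich drift of Example 1: (1/5) N1."""
    return np.array([2 * x[1] / 5, -2 * x[0] / 5, 0.0])

def sigma(x):
    """Diffusion column of Example 1: (1/3) N1 + (1/10) N2."""
    return np.array([2 * x[1] / 3,
                     -2 * x[0] / 3 - np.sinh(2 * x[2]) / 10,
                     -x[1] / 5])

def dsigma_sigma(x):
    """(d sigma / dx) sigma, with the Jacobian written out explicitly."""
    J = np.array([[0.0, 2 / 3, 0.0],
                  [-2 / 3, 0.0, -np.cosh(2 * x[2]) / 5],
                  [0.0, -1 / 5, 0.0]])
    return J @ sigma(x)

def milstein(x0, T=10.0, h=0.01, seed=0):
    """Milstein scheme in the form used in the text (drift a, xi_k^2 term)."""
    rng = np.random.default_rng(seed)
    y = np.array(x0, dtype=float)
    for _ in range(int(T / h)):
        xi = rng.standard_normal()
        y = y + h * a(y) + np.sqrt(h) * sigma(y) * xi \
              + 0.5 * h * dsigma_sigma(y) * xi**2
    return y

y = milstein([0.0, 1.0, 0.0])
print(y[0]**2 + y[1]**2 - np.cosh(y[2])**2)  # first-integral residual, O(h) on average
```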
The Milstein method provides first-order strong and, therefore, weak convergence [8]. This means that the error in the numerical solution can be estimated as follows:
$$\varepsilon = \mathrm{E}\,\bigl|M\bigl(10, X(10)\bigr) - M(10, Y_N)\bigr| = \mathrm{E}\,\bigl|M(0, x_0) - M(10, Y_N)\bigr| \leqslant c\,h,$$
where E is the mathematical expectation, and c > 0 is a constant that does not depend on the step h.
When simulating 1000 sample trajectories of the numerical solution and averaging, the following estimates of the error are obtained: $\varepsilon = 3.315\times 10^{-2}$ for $h = 0.01$, $\varepsilon = 3.295\times 10^{-3}$ for $h = 0.001$, and $\varepsilon = 3.196\times 10^{-4}$ for $h = 0.0001$. The reduction in the error as the numerical integration step decreases corresponds to first-order convergence.
Further, we change the initial conditions (for the new initial conditions, the basis degenerates):
$$X_1(0) = x_{10} = 1, \quad X_2(0) = x_{20} = 0, \quad X_3(0) = x_{30} = 0.$$
Figure 2 illustrates three sample phase trajectories of the numerical solution that approximates the random process $X(t)$. The axes in this figure are oriented similarly to Figure 1.
When simulating 1000 sample trajectories of the numerical solution and averaging, the following estimates of the error are obtained: $\varepsilon = 3.116\times 10^{-2}$ for $h = 0.01$, $\varepsilon = 3.262\times 10^{-3}$ for $h = 0.001$, and $\varepsilon = 3.393\times 10^{-4}$ for $h = 0.0001$. Here, the error depends on the numerical integration step in the same way.
Example 2. 
In this example, we consider the invariant stochastic differential system whose trajectories satisfy the condition
$$X_1(t) + X_2^2(t) + \cos 2t = C,$$
i.e., with the first integral $M(t,x) = x_1 + x_2^2 + \cos 2t$.
In this case,
$$n = 2, \quad X(t) = [X_1(t)\ X_2(t)]^{\mathrm{T}}, \quad G = \nabla_x M(t,x) = [1\ \ 2x_2]^{\mathrm{T}}, \quad \tilde G = \nabla_{t,x} M(t,x) = [-2\sin 2t\ \ 1\ \ 2x_2]^{\mathrm{T}}.$$
From Formula (17), we find two linearly independent vectors orthogonal to the generalized gradient:
$$\tilde N_0 = [1\ \ 2\sin 2t\ \ 0]^{\mathrm{T}}, \quad \tilde N_1 = [0\ \ 2x_2\ \ {-1}]^{\mathrm{T}},$$
consequently,
$$N_0 = [2\sin 2t\ \ 0]^{\mathrm{T}}, \quad N_1 = [2x_2\ \ {-1}]^{\mathrm{T}},$$
where the vector N 1 is orthogonal to the gradient.
As in Example 1, we restrict attention to a scalar Wiener process by setting s = 1 . Here, condition (19) is written as follows:
$$\sigma(t,x) = u_1^1(t,x)\,N_1,$$
and conditions (20) and (21) imply that
$$f(t,x) = N_0 + \Sigma + u_1^0(t,x)\,N_1, \qquad a(t,x) = N_0 + u_1^0(t,x)\,N_1,$$
where Σ is given by Formula (11):
$$\Sigma = \frac{1}{2}\,\frac{\partial\sigma(t,x)}{\partial x}\,\sigma(t,x).$$
Therefore, a system whose trajectories in three-dimensional space belong to a generalized cylinder is described by Equation (5) or (9) with the coefficients given above. The generalized cylinder is defined by the initial conditions $X_1(0) = x_{10}$ and $X_2(0) = x_{20}$ ($t_0 = 0$). Next, we consider three cases of initial conditions:
$$x_{10} = 1,\ x_{20} = 1; \qquad x_{10} = 1,\ x_{20} = -1; \qquad x_{10} = 0,\ x_{20} = \sqrt{2}.$$
For all the cases, we have
$$X_1(t) + X_2^2(t) + \cos 2t = 3.$$
Let $u_1^0(t,x) = 1/10$ and $u_1^1(t,x) = 1/5$. Then,
$$\sigma(t,x) = [2x_2/5\ \ {-1/5}]^{\mathrm{T}}, \quad \Sigma(t,x) = [-1/25\ \ 0]^{\mathrm{T}}, \quad f(t,x) = [2\sin 2t + x_2/5 - 1/25\ \ {-1/10}]^{\mathrm{T}}, \quad a(t,x) = [2\sin 2t + x_2/5\ \ {-1/10}]^{\mathrm{T}}.$$
In this example, Equation (5) is written as follows:
$$d\begin{bmatrix} X_1(t)\\ X_2(t) \end{bmatrix} = \begin{bmatrix} 2\sin 2t + X_2(t)/5 - 1/25 \\ -1/10 \end{bmatrix} dt + \begin{bmatrix} 2X_2(t)/5 \\ -1/5 \end{bmatrix} dW(t),$$
and the corresponding Equation (9) has the form
$$d\begin{bmatrix} X_1(t)\\ X_2(t) \end{bmatrix} = \begin{bmatrix} 2\sin 2t + X_2(t)/5 \\ -1/10 \end{bmatrix} dt + \begin{bmatrix} 2X_2(t)/5 \\ -1/5 \end{bmatrix} dW(t),$$
where W ( t ) is the standard Wiener process.
Trajectories of the solution to these stochastic differential equations almost surely belong to a manifold in $\mathbb{T}\times\mathbb{R}^2$.
Here, we also use a numerical method to simulate trajectories. Let $t \in \mathbb{T} = [0, 6.28]$, i.e., $t_0 = 0$ and $T = 6.28 \approx 2\pi$. We apply the Artemiev method [7] (a Rosenbrock-type method) with the numerical integration step $h = 0.01$ and the initial conditions specified earlier.
Let { t k } be a partition of the time interval [ 0 , 6.28 ] with the given step h, i.e.,
$$t_{k+1} = t_k + h, \quad k = 0, 1, \ldots, N-1, \quad t_N = 6.28, \quad N = \frac{6.28}{h};$$
then for the Artemiev method, we have
$$Y_{k+1} = Y_k + \Bigl(E - \frac{h}{2}\,\frac{\partial a(t_k, Y_k)}{\partial x}\Bigr)^{-1}\Bigl[h\,a(t_k, Y_k) + \sqrt{h}\,\sigma(t_k, Y_k)\,\xi_k + \frac{h}{2}\,\frac{\partial\sigma(t_k, Y_k)}{\partial x}\,\sigma(t_k, Y_k)\,\xi_k^2\Bigr], \quad Y_0 = x_0,$$
where E is the identity matrix of size 2 × 2 .
As in Example 1, ξ k are independent random variables with a standard normal distribution, and { Y k } is the discrete approximation of the random process X ( t ) (the vector Y k corresponds to time t k ). Three sample trajectories of the numerical solution approximating the random process X ( t ) are depicted in Figure 3. In the horizontal plane, we have axes for t (left) and x 2 (right), and the vertical axis is for x 1 .
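The Artemiev step admits an equally compact implementation. A sketch for Example 2 (illustrative; NumPy assumed; the Jacobians of $a$ and $\sigma$ are constant for this system):

```python
import numpy as np

def a(t, x):
    """Stratonovich drift of Example 2: N0 + (1/10) N1."""
    return np.array([2 * np.sin(2 * t) + x[1] / 5, -1 / 10])

def sigma(x):
    """Diffusion column of Example 2: (1/5) N1."""
    return np.array([2 * x[1] / 5, -1 / 5])

JA = np.array([[0.0, 1 / 5], [0.0, 0.0]])   # da/dx (constant here)
JS = np.array([[0.0, 2 / 5], [0.0, 0.0]])   # dsigma/dx (constant here)

def artemiev(x0, T=6.28, h=0.01, seed=0):
    """Rosenbrock-type (Artemiev) scheme in the form used in the text."""
    rng = np.random.default_rng(seed)
    y = np.array(x0, dtype=float)
    t = 0.0
    for _ in range(int(T / h)):
        xi = rng.standard_normal()
        rhs = h * a(t, y) + np.sqrt(h) * sigma(y) * xi \
              + 0.5 * h * (JS @ sigma(y)) * xi**2
        y = y + np.linalg.solve(np.eye(2) - 0.5 * h * JA, rhs)
        t += h
    return y

y = artemiev([1.0, 1.0])
print(y[0] + y[1]**2 + np.cos(2 * 6.28))  # stays close to M(0, x0) = 3
```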
The Artemiev method has first-order strong and weak convergence [7,10]; i.e., the error of the numerical solution satisfies an estimate similar to the one presented in Example 1:
$$\varepsilon = \mathrm{E}\,\bigl|M\bigl(6.28, X(6.28)\bigr) - M(6.28, Y_N)\bigr| = \mathrm{E}\,\bigl|M(0, x_0) - M(6.28, Y_N)\bigr| \leqslant c\,h.$$
When simulating 1000 sample trajectories of the numerical solution and averaging, the following estimates of the error are obtained: $\varepsilon = 4.040\times 10^{-4}$ for $h = 0.01$, $\varepsilon = 4.046\times 10^{-5}$ for $h = 0.001$, and $\varepsilon = 4.070\times 10^{-6}$ for $h = 0.0001$. The proportional decrease in the error with decreasing numerical integration step corresponds to first-order convergence.
Example 3. 
This example addresses the invariant stochastic differential system whose state belongs to the sphere
$$\frac{x_1^2 + x_2^2 + x_3^2}{2} = C,$$
i.e., the first integral is $M(t,x) = M(x) = (x_1^2 + x_2^2 + x_3^2)/2$.
Here,
$$n = 3, \quad X(t) = [X_1(t)\ X_2(t)\ X_3(t)]^{\mathrm{T}}, \quad G = \nabla_x M(t,x) = [x_1\ \ x_2\ \ x_3]^{\mathrm{T}}.$$
We use Formula (13) to define two linearly independent vectors (the condition of linear independence is $x_2 \neq 0$) that are orthogonal to the gradient:
$$N_1 = [x_2\ \ {-x_1}\ \ 0]^{\mathrm{T}}, \quad N_2 = [0\ \ x_3\ \ {-x_2}]^{\mathrm{T}}.$$
The vector N 0 is equal to zero due to M ( t , x ) = M ( x ) (see Remark 4).
By analogy with Examples 1 and 2, we assume s = 1 (a scalar Wiener process is sufficient). According to conditions (19), (20), and (21), we obtain
$$\sigma(t,x) = u_1^1(t,x)\,N_1 + u_2^1(t,x)\,N_2, \qquad f(t,x) = \Sigma + u_1^0(t,x)\,N_1 + u_2^0(t,x)\,N_2, \qquad a(t,x) = u_1^0(t,x)\,N_1 + u_2^0(t,x)\,N_2,$$
where Σ is given by Formula (11):
$$\Sigma = \frac{1}{2}\,\frac{\partial\sigma(t,x)}{\partial x}\,\sigma(t,x).$$
These functions define the coefficients of Equation (5) or (9), which specify a system with trajectories in four-dimensional space belonging to a hypercylinder over a sphere. The hypercylinder is defined by the initial conditions $X_1(0) = x_{10}$, $X_2(0) = x_{20}$, and $X_3(0) = x_{30}$ ($t_0 = 0$).
Let $u_1^0(t,x) = 0$, $u_2^0(t,x) = 0$, $u_1^1(t,x) = 1$, and $u_2^1(t,x) = 1$. Then,
$$\sigma(t,x) = [x_2\ \ x_3 - x_1\ \ {-x_2}]^{\mathrm{T}}, \quad \Sigma(t,x) = [-(x_1 - x_3)/2\ \ {-x_2}\ \ (x_1 - x_3)/2]^{\mathrm{T}}, \quad f(t,x) = [-(x_1 - x_3)/2\ \ {-x_2}\ \ (x_1 - x_3)/2]^{\mathrm{T}}, \quad a(t,x) = [0\ \ 0\ \ 0]^{\mathrm{T}}.$$
In this example, Equations (5) and (9) are linear, and they can be represented as follows:
$$dX(t) = F\,X(t)\,dt + S\,X(t)\,dW(t), \qquad dX(t) = S\,X(t)\,dW(t),$$
where
$$F = \begin{bmatrix} -\tfrac{1}{2} & 0 & \tfrac{1}{2} \\ 0 & -1 & 0 \\ \tfrac{1}{2} & 0 & -\tfrac{1}{2} \end{bmatrix}, \qquad S = \begin{bmatrix} 0 & 1 & 0 \\ -1 & 0 & 1 \\ 0 & -1 & 0 \end{bmatrix},$$
and W ( t ) denotes a standard Wiener process.
Trajectories of the solution to these stochastic differential equations almost surely belong to a three-dimensional manifold in $\mathbb{T}\times\mathbb{R}^3$, whose projection onto the phase space $\mathbb{R}^3$ is a sphere centered at the origin with radius $\sqrt{x_{10}^2 + x_{20}^2 + x_{30}^2}$.
The analytical solution can be expressed through the matrix S [6,7]:
$$X(t) = \exp\bigl(S\,W(t)\bigr)\,x_0 = \begin{bmatrix} \tfrac{1}{2}\bigl(1+\cos\sqrt{2}\,W(t)\bigr) & \tfrac{\sqrt{2}}{2}\sin\sqrt{2}\,W(t) & \tfrac{1}{2}\bigl(1-\cos\sqrt{2}\,W(t)\bigr) \\ -\tfrac{\sqrt{2}}{2}\sin\sqrt{2}\,W(t) & \cos\sqrt{2}\,W(t) & \tfrac{\sqrt{2}}{2}\sin\sqrt{2}\,W(t) \\ \tfrac{1}{2}\bigl(1-\cos\sqrt{2}\,W(t)\bigr) & -\tfrac{\sqrt{2}}{2}\sin\sqrt{2}\,W(t) & \tfrac{1}{2}\bigl(1+\cos\sqrt{2}\,W(t)\bigr) \end{bmatrix} x_0,$$
where $x_0 = [x_{10}\ x_{20}\ x_{30}]^{\mathrm{T}}$. It is easy to verify that the matrix $\exp(S\,W(t))$ defines an orthogonal linear transformation in $\mathbb{R}^3$ ($\det\exp(S\,W(t)) = 1$), so $|X(t)| = |x_0|$.
For visualization, we assume $t\in\mathbb{T} = [0, 5]$, i.e., $t_0 = 0$ and $T = 5$, and define a partition $\{t_k\}$ with the step $h = 0.01$ for the time interval $[0, 5]$:
$$t_{k+1} = t_k + h, \quad k = 0, 1, \ldots, N-1, \quad t_N = 5, \quad N = \frac{5}{h};$$
we apply the following rule for modeling trajectories of the Wiener process:
$$W(0) = 0, \qquad W(t_{k+1}) = W(t_k) + \sqrt{h}\,\xi_k, \quad k = 0, 1, \ldots, N-1,$$
where random variables ξ k are independent and have a standard normal distribution.
Figure 4 shows three sample trajectories of the solution (the discrete approximation $Y_k = X(t_k) = \exp(S\,W(t_k))\,x_0$; the graphs are ordered according to the state components). Figure 5 contains three sample phase trajectories of the solution, where we use axes for $x_1$ (left) and $x_2$ (right) in the horizontal plane, and the vertical axis corresponds to $x_3$. The trajectories are obtained with the initial conditions $x_{10} = 0$ and $x_{20} = x_{30} = 1$.
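Since the solution is available in closed form, trajectories can be generated without any integration scheme. A sketch (illustrative; NumPy and SciPy assumed) samples a Wiener path by the rule above and evaluates $\exp(S\,W(t_k))\,x_0$:

```python
import numpy as np
from scipy.linalg import expm

S = np.array([[0.0, 1.0, 0.0],
              [-1.0, 0.0, 1.0],
              [0.0, -1.0, 0.0]])
x0 = np.array([0.0, 1.0, 1.0])

def exact_trajectory(T=5.0, h=0.01, seed=0):
    """X(t_k) = expm(S W(t_k)) x0 on a sampled Wiener path."""
    rng = np.random.default_rng(seed)
    w, traj = 0.0, [x0]
    for _ in range(int(T / h)):
        w += np.sqrt(h) * rng.standard_normal()
        traj.append(expm(S * w) @ x0)
    return np.array(traj)

Y = exact_trajectory()
radii = np.linalg.norm(Y, axis=1)
print(radii.min(), radii.max())  # both equal |x0| = sqrt(2) up to rounding
```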
For more complex behavior of phase trajectories, it is necessary to specify nonzero functions u 1 0 ( t , x ) and u 2 0 ( t , x ) or choose s > 1 , i.e., use a vector Wiener process. However, in this case, difficulties arise in obtaining an analytical solution to corresponding stochastic differential equations.

6. Conclusions

This study presents the solution to the inverse dynamics problem, specifically the method for forming invariant stochastic differential systems associated with a given first integral. The main advantages of this method are as follows:
(1) The method provides simple expressions for coefficients of stochastic differential equations.
(2) The method ensures a minimum number of functions required to determine the entire set of invariant stochastic differential systems associated with a given first integral (coefficients of equations depend on these functions linearly).
(3) The method allows one to obtain stochastic differential equations with a degenerate diffusion matrix relative to a part of the state components.
In addition to the theoretical results, the article includes numerical simulations of three invariant stochastic differential systems. For the first system (a third-order system), the state belongs to a catenoid. For the second system (a second-order system), the state belongs to a time-dependent parabola (a dynamic manifold). For the third system (a third-order system), the state belongs to a sphere.
For the numerical simulations, the Milstein method [8] and the Artemiev method [7], both of which have first-order convergence, are used in the first two examples. The numerical solutions do not lie exactly on the specified manifolds because of the error of the numerical methods; however, this error is small and fully consistent with the indicated order of convergence. The third example uses the analytical solution.
The described method can also be applied to derive invariant deterministic differential systems, and it can be extended to form more complex invariant stochastic differential systems, characterized by stochastic differential equations that include both Wiener and Poisson components [23,26], as well as variable or random right-hand sides [40].

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Cariño, J.; Abaunza, H.; Garcia, P.C. A fully-actuated quadcopter representation using quaternions. Int. J. Control 2022, 96, 3132–3154. [Google Scholar] [CrossRef]
  2. Unler, T.; Guler, Y.S. Evaluating the performance of Euler and quaternion-based AHRS models in embedded systems for aviation and autonomous vehicle applications. J. Aviat. 2025, 9, 249–259. [Google Scholar] [CrossRef]
  3. Sapunkov, Y.G.; Molodenkov, A.V. Analytical solution of the problem on an axisymmetric spacecraft attitude maneuver optimal with respect to a combined functional. Autom. Remote Control 2021, 82, 1183–1200. [Google Scholar] [CrossRef]
  4. Levskii, M.V. Optimal control of the reorientation of a spacecraft in the given time with a quadratic performance criterion related to the control and phase variables. J. Comput. System Sci. Int. 2023, 62, 581–596. [Google Scholar] [CrossRef]
  5. Øksendal, B. Stochastic Differential Equations. An Introduction with Applications; Springer: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  6. Kloeden, P.E.; Platen, E. Numerical Solution of Stochastic Differential Equations; Springer: Berlin/Heidelberg, Germany, 1992. [Google Scholar]
  7. Artemiev, S.S.; Averina, T.A. Numerical Analysis of Systems of Ordinary and Stochastic Differential Equations; VSP: Utrecht, The Netherlands, 1997. [Google Scholar]
  8. Milstein, G.N.; Tretyakov, M.V. Stochastic Numerics for Mathematical Physics; Springer: Berlin/Heidelberg, Germany, 2004. [Google Scholar]
  9. Kuznetsov, D.F. Strong approximation of iterated Itô and Stratonovich stochastic integrals based on generalized multiple Fourier series. Application to numerical integration of Itô SDEs and semilinear SPDEs (Third edition). Differ. Uravn. Protsesy Upr. 2023, 1, A.1–A.947. [Google Scholar] [CrossRef]
  10. Averina, T.A.; Rybakov, K.A. Rosenbrock-type methods for solving stochastic differential equations. Numer. Anal. Appl. 2024, 17, 99–115. [Google Scholar] [CrossRef]
  11. Rybakov, K.A. Using spectral form of mathematical description to represent Itô iterated stochastic integrals. In Smart Innovation, Systems and Technologies; Springer: Singapore, 2022; Volume 274, pp. 331–344. [Google Scholar] [CrossRef]
  12. Dubko, V.A. First Integral of Systems of Stochastic Differential Equations; Preprint; Institute of Mathematics, Academy of Sciences of the UkrSSR: Kyiv, Ukraine, 1978. [Google Scholar]
  13. Krylov, N.V.; Rozovskii, B.L. Stochastic partial differential equations and diffusion processes. Russ. Math. Surv. 1982, 37, 81–105. [Google Scholar] [CrossRef]
  14. Erugin, N.P. Construction of the entire set of systems of differential equations with a given integral curve. J. Appl. Math. Mech. 1952, 16, 659–670. [Google Scholar]
  15. Mukharlyamov, R.G. The construction of the set of systems of differential equations of stable motion with respect to an integral variety. Diff. Equat. 1969, 5, 688–699. [Google Scholar]
  16. Mukharlyamov, R.G. The construction of the differential equations of optimal motion for a given variety. Diff. Equat. 1971, 7, 1825–1834. [Google Scholar]
  17. Galiullin, A.S. Methods for Solving Inverse Problems of Dynamics; Nauka Publication: Moscow, Russia, 1986. [Google Scholar]
  18. Galiullin, I.A. Relationships in Mechanics: Theory of Surfaces; Moscow Aviation Institute Publication: Moscow, Russia, 2016. [Google Scholar]
  19. Galiullin, I.A.; Zarodov, V.K. Inverse Problems in Aircraft Dynamics; Moscow Aviation Institute: Moscow, Russia, 1987. [Google Scholar]
  20. Khrustalev, M.M. Methods of Invariance Theory in Problems of Synthesis of Aircraft Terminal Control; Moscow Aviation Institute: Moscow, Russia, 1987. [Google Scholar]
  21. Gorbatenko, S.A. Application of the concept of inverse problems for synthesizing control laws for the aerostatic vehicle motion. Aerosp. MAI J. 2012, 19, 76–80. [Google Scholar]
  22. Dubko, V.A. Questions in the Theory and Applications of Stochastic Differential Equations; Far Eastern Branch of the USSR Academy of Sciences: Vladivostok, Russia, 1989. [Google Scholar]
  23. Karachanskaya, E.V. Integral Invariants of Stochastic Systems and Program Control with Probability 1; Pacific State University Publication: Khabarovsk, Russia, 2015. [Google Scholar]
  24. Tleubergenov, M.I. On the inverse stochastic reconstruction problem. Diff. Equat. 2014, 50, 274–278. [Google Scholar] [CrossRef]
  25. Tleubergenov, M.I.; Vassilina, G.K. On stochastic inverse problem of construction of stable program motion. Open Math. 2021, 19, 157–162. [Google Scholar] [CrossRef]
  26. Karachanskaya, E. Invariants for a dynamical system with strong random perturbations. In Dynamical Systems Theory, Models, Algorithms and Applications; Carpentieri, B., Ed.; IntechOpen: London, UK, 2021. [Google Scholar] [CrossRef]
  27. Karachanskaya, E.V.; Petrova, A.P. Modeling of the programmed control with probability 1 for some financial tasks. Math. Notes NEFU 2018, 25, 25–37. [Google Scholar] [CrossRef]
  28. Averina, T.A.; Rybakov, K.A. A modification of numerical methods for stochastic differential equations with first integrals. Numer. Anal. Appl. 2019, 12, 203–218. [Google Scholar] [CrossRef]
  29. Armstrong, J.; King, T. Curved schemes for stochastic differential equations on, or near, manifolds. Proc. R. Soc. A–Math. Phys. Eng. Sci. 2022, 478, 20210785. [Google Scholar] [CrossRef]
  30. Burrage, K.; Burrage, P.M.; Lythe, G. Effective numerical methods for simulating diffusion on a spherical surface in three dimensions. Numer. Algor. 2022, 91, 1577–1596. [Google Scholar] [CrossRef]
  31. Schwarz, S.; Herrmann, M.; Sturm, A.; Wardetzky, M. Efficient random walks on Riemannian manifolds. Found. Comput. Math. 2025, 25, 145–161. [Google Scholar] [CrossRef]
  32. Khasminskii, R. Stochastic Stability of Differential Equations; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  33. Anulova, S.V.; Veretennikov, A.Y. Stochastic Differential and Evolution Equations. In Probability Theory III. Encyclopaedia of Mathematical Sciences; Prokhorov, Y.V., Shiryaev, A.N., Eds.; Springer: Berlin/Heidelberg, Germany, 1998; Volume 45, pp. 38–110. [Google Scholar] [CrossRef]
  34. Dubko, V.A. Stochastic Differential Equations. Selected Topics; Logos: Kyiv, Ukraine, 2012. [Google Scholar]
  35. Karachanskaya, E.V. Random Processes with Invariants; Pacific State University Publication: Khabarovsk, Russia, 2014. [Google Scholar]
  36. Kubo, R. Stochastic Liouville equations. J. Math. Phys. 1963, 4, 174–183. [Google Scholar] [CrossRef]
  37. Averina, T.A.; Karachanskaya, E.V.; Rybakov, K.A. Statistical modeling of random processes with invariants. In 2017 International Multi-Conference on Engineering, Computer and Information Sciences (SIBIRCON); IEEE: New York, NY, USA, 2017; pp. 34–37. [Google Scholar] [CrossRef]
  38. Conway, J.H.; Smith, D.A. On Quaternions and Octonions: Their Geometry, Arithmetic, and Symmetry; A K Peters: Wellesley, MA, USA, 2003. [Google Scholar]
  39. Krivoshapko, S.N.; Ivanov, V.N. Encyclopedia of Analytical Surfaces; Springer: Cham, Switzerland, 2015. [Google Scholar]
  40. Averina, T.A.; Rybakov, K.A. Systems with regime switching on manifolds. In 2018 14th International Conference on Stability and Oscillations of Nonlinear Control Systems (Pyatnitskiy’s Conference) (STAB); IEEE: New York, NY, USA, 2018; pp. 1–3. [Google Scholar] [CrossRef]
Figure 1. Sample phase trajectories of the numerical solution corresponding to the random process X ( t ) , x 0 = [ 0   1   0 ] T .
Figure 2. Sample phase trajectories of the numerical solution corresponding to the random process X ( t ) , x 0 = [ 1   0   0 ] T .
Figure 3. Sample trajectories of the numerical solution corresponding to the random process X ( t ) .
Figure 4. Sample trajectories of the solution X ( t ) : X 1 ( t ) (top), X 2 ( t ) (middle), X 3 ( t ) (bottom).
Figure 5. Sample phase trajectories corresponding to the random process X ( t ) .