Article

Fractional Stochastic Search Algorithms: Modelling Complex Systems via AI

1 Economic & Data Science Department, ESB Business School, Reutlingen University, Alteburgstr. 150, 72762 Reutlingen, Germany
2 RRI-Reutlingen Research Institute, Reutlingen University, 72762 Reutlingen, Germany
Mathematics 2023, 11(9), 2061; https://doi.org/10.3390/math11092061
Submission received: 2 April 2023 / Revised: 24 April 2023 / Accepted: 25 April 2023 / Published: 26 April 2023
(This article belongs to the Special Issue Fractional Calculus and Mathematical Applications)

Abstract

The aim of this article is to establish a stochastic search algorithm for neural networks based on the fractional stochastic processes $\{B_t^H, t \ge 0\}$ with the Hurst parameter $H \in (0,1)$. We define and discuss the properties of fractional stochastic processes, $\{B_t^H, t \ge 0\}$, which generalize a standard Brownian motion. Fractional stochastic processes capture useful yet different properties in order to simulate real-world phenomena. This approach provides new insights into stochastic gradient descent (SGD) algorithms in machine learning. We exhibit convergence properties for fractional stochastic processes.

1. Introduction

The gradient descent methodology is not computationally efficient in all applications. Sometimes, optimization algorithms become stuck in the flat regions of manifolds. In those cases, the optimization algorithm requires a long time to escape. This is the challenge of a vanishing gradient where, for instance, the gradient $\nabla z(\theta)$ is almost zero (see Section 4). The method of stochastic gradient descent (SGD) generally overcomes this problem.
Recent advancements in the field of fractional stochastic processes exhibit the theoretical benefits of modeling complex systems [1,2,3,4]. Yet, so far, no literature exists on fractional stochastic gradient descent (fSGD) or fractional stochastic networks. This paper sketches the potential of such a literature for modeling complex systems. Moreover, we exhibit that fractional stochastic processes are an advancement in machine learning (ML) and artificial intelligence (AI).
The methodology of fractional stochastic gradient descent and the role of stochastic neural networks are based on a generalized assumption of randomness. Mandelbrot and Van Ness defined a fractional Brownian motion (fBM), $B_t^H$, together with a Hurst parameter in 1968 [5]. For $H = \frac{1}{2}$, we obtain a standard Brownian motion. Yet, for $H \ne \frac{1}{2}$, we obtain new forms of randomness or stochastic processes that match real-world phenomena.
The new feature of a fractional Brownian motion (fBM) is that increments are interdependent. In the literature, this is called self-similarity. A self-similar stochastic process reveals invariance with respect to the time scale (scaling invariance). A standard Brownian motion or a Lévy process displays different properties. They have independent increments and belong to the famous class of Markov processes.
However, in science, there is ubiquitous evidence that fractional stochastic processes are of relevance. For instance, we frequently observe probability densities with sharp peaks, which is related to the phenomena of long-range interdependence. In many real-world observations and applications, we find the presence of interdependence, too. This pattern can be captured by fractional stochastic processes.
Nonetheless, some phenomena are even more complicated and require further generalization towards sub-fractional stochastic processes. The literature on sub-fBMs demonstrates that those stochastic processes are useful in scientific applications [6]. A sub-fractional Brownian motion provides a nexus between a Brownian motion and a fractional stochastic process. Those processes were introduced by Tudor et al. [7,8] and Bojdecki et al. [9]. Note that, as sub-fractional stochastic processes are not martingales, the basic tools of stochastic analysis are insufficient. However, researchers have developed new machinery to handle fractional stochastic processes, such as [10] or [11,12,13,14,15].
In this paper, our purpose is to develop and study the idea of fractional stochastic gradient descent algorithms. Our approach generalizes the existing literature on stochastic gradient descent (SGD) and stochastic neural networks. For instance, Hopfield [16] developed neural networks consisting of several perceptrons with randomness. Similarly, a Boltzmann network is a type of stochastic neural network wherein the output of the activation function is interpreted as a probability.
Studies already exist on stochastic gradient descent and its challenges in machine learning [17,18]. Recent developments in the theory and applications of stochastic gradient descent are discussed in the following papers: Schmidt et al. [19], Haochen and Sra [20], Gotmare et al. [21], Curtis and Scheinberg [22], and de Roos et al. [23]. The focus of our research is the motivation of fractional stochastic gradient descent (fSGD) algorithms. Thus, our research goes beyond the scope of the current literature and focuses on the theoretical possibility of fractional stochastic gradient descent. We neglect potential computational limitations in machine learning.
The paper is organized as follows. Section 2 provides preliminary definitions. Subsequently, we introduce the foundations of fractional stochastic processes in Section 3. Section 4 introduces the idea of fractional stochastic search algorithms and derives the convergence results in general. Finally, in Section 5, we apply the method to two different cases. Section 6 concludes the paper.

2. Preliminaries

Machine learning is mainly based on neural networks and efficient optimization algorithms. The most primitive neural model is inspired by the work of Rosenblatt [24]. In the following section, we define the major elements from a machine learning perspective.
Definition 1.
A stochastic neuron is defined by $n$ inputs $X = (x_1, \ldots, x_n)^T$, $n$ weighting factors $W = (w_1, \ldots, w_n)^T$, and an $n$-dimensional vector of biases $B$, together with a sigmoid activation function $\sigma_c(\zeta) = \frac{1}{1 + e^{-c\zeta}}$ with $c > 0$ and a stochastic output $Y$,
$$Y = \sigma(W^T X + B),$$
where $\sigma(W^T X + B) \in (0,1)$. Hence, we define the output $Y = 1$ by $\mathrm{Prob}(Y = 1; X) = \sigma(W^T X + B)$ and $Y = 0$ by the complementary probability: $\mathrm{Prob}(Y = 0; X) = 1 - \sigma(W^T X + B) = \sigma(-W^T X - B)$.
Note that, even if the activation potential is greater than zero, i.e., $W^T X + B > 0$, the neural network is not necessarily activated according to Definition 1. An output of one occurs only with the probability given by the activation function.
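The sampling rule of Definition 1 can be sketched in a few lines of Python. This is an illustrative sketch, not code from the paper; the scalar bias `b` and all function names are our own assumptions:

```python
import numpy as np

def sigmoid(z, c=1.0):
    """Sigmoid activation sigma_c(z) = 1 / (1 + exp(-c * z)) from Definition 1."""
    return 1.0 / (1.0 + np.exp(-c * z))

def stochastic_neuron(x, w, b, c=1.0, rng=None):
    """Sample a binary output Y with Prob(Y = 1; X) = sigma_c(w^T x + b)."""
    rng = rng if rng is not None else np.random.default_rng()
    p = sigmoid(w @ x + b, c)
    y = int(rng.random() < p)   # Y = 1 with probability p, else Y = 0
    return y, p

x = np.array([1.0, -0.5])
w = np.array([2.0, 1.0])
y, p = stochastic_neuron(x, w, b=0.5, rng=np.random.default_rng(0))
```

Even with a positive activation potential ($w^T x + b = 2 > 0$ here, so $p \approx 0.88$), the neuron may still output $Y = 0$, which is exactly the point of the remark above.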
In machine learning, the gradient descent algorithm is omnipresent in all optimization problems. Yet, it does not provide robust solutions in every case. There are computational obstacles, such as when the algorithm becomes stuck in a local minimum or lost on a plateau from which it takes a long time to escape. A plateau is defined as a flat surface region where the gradient of $\sigma_c$ is very small (or almost zero).
The optimization algorithm of a neural network always has the goal of finding the optimal weighting parameters θ = ( W , B ) . The standard algorithm used to optimize the parameters is frequently reformulated in order to minimize the cost function. This is called the gradient descent method. This method is closely related to Newton’s algorithm in numerical computing. The following definition summarizes the algorithm from a machine learning vantage point.
Definition 2.
The gradient descent algorithm is defined by
$$\theta_{t+1} = \theta_t - \lambda_t C_t g_t,$$
where $g_t = \nabla L(\theta_t)$ is the gradient of a cost function, $C_t$ is an optional conditioning matrix, and $\lambda_t$ is the learning rate.
The stochastic gradient descent (SGD) method overcomes this obstacle when the gradient is close to zero. Indeed, SGD reaches a minimum along a non-linear stochastic process. In the following sections, we first discuss the literature and then generalize the approach to fractional stochastic processes.
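Definition 2 translates directly into code. The following minimal sketch is our own illustration (constant learning rate, with $C_t$ defaulting to the identity), not an implementation from the paper:

```python
import numpy as np

def gradient_descent(grad, theta0, lr=0.1, steps=200, C=None):
    """Definition 2: theta_{t+1} = theta_t - lambda_t * C_t * g_t.
    Here the learning rate is constant and C_t defaults to the identity."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(steps):
        g = grad(theta)
        theta = theta - lr * (g if C is None else C @ g)
    return theta

# Toy cost L(theta) = ||theta||^2 / 2, whose gradient is g = theta.
theta_min = gradient_descent(lambda th: th, theta0=[4.0, -2.0])
```

For this convex toy cost the iterates contract geometrically towards the minimizer at the origin; the flat-plateau failure mode discussed above does not arise here.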

3. Fractional Stochastic Processes

3.1. General Definitions

Consider a stochastic process with a Hurst parameter H. Subsequently, we define the elementary tools in fractional calculus.
Definition 3.
Let $a, b \in \mathbb{R}$, $a < b$. Let $f \in L^1(a,b)$ and $\alpha > 0$. The left- and right-sided fractional integrals of $f$ of order $\alpha$ are defined for $x \in (a,b)$, respectively, as
$${}_a D_x^{-\alpha} f(x) = {}_a I_x^{\alpha} f(x) = \frac{1}{\Gamma(\alpha)} \int_a^x (x-u)^{\alpha - 1} f(u)\, du, \quad x > a,$$
and
$${}_b D_x^{-\alpha} f(x) = {}_b I_x^{\alpha} f(x) = \frac{1}{\Gamma(\alpha)} \int_x^b (u-x)^{\alpha - 1} f(u)\, du, \quad x < b.$$
This is the fractional integral of the Riemann–Liouville type. In the same vein, we define fractional derivatives, where we distinguish between left- and right-sided derivatives.
Definition 4.
The fractional left- and right-sided derivatives, for $f \in I_a^{\alpha}(L^p)$ and $0 < \alpha < 1$, are defined by
$${}_a I_x^{-\alpha} f(x) = {}_a D_x^{\alpha} f(x) = \frac{1}{\Gamma(1-\alpha)}\, \frac{d}{dx} \int_a^x (x-u)^{-\alpha} f(u)\, du$$
and
$${}_b I_x^{-\alpha} f(x) = {}_b D_x^{\alpha} f(x) = \frac{(-1)^{\alpha}}{\Gamma(1-\alpha)}\, \frac{d}{dx} \int_x^b (u-x)^{-\alpha} f(u)\, du,$$
for all $x \in (a,b)$, where $I_a^{\alpha}(L^p)$ is the image of $L^p(a,b)$ under the fractional integral operator.
Let us assume $f \in I_a^1(L^1)$; then we obtain
$${}_a D_x^{\alpha}\, {}_a D_x^{1-\alpha} f(x) = D f(x), \qquad {}_b D_x^{\alpha}\, {}_b D_x^{1-\alpha} f(x) = D f(x).$$
Notably, $D^{\alpha} f(x)$ exists for all $f \in C^{\beta}([a,b])$ if $\alpha < \beta$. Given those definitions, we are ready to define a fractional Brownian motion:
Definition 5.
Let $0 < H < 1$, and let $B_0$ be an arbitrary real number. We call $B_H(t,\omega)$ a fractional Brownian motion (fBM) with Hurst parameter $H$ and starting value $B_0$ at time $0$ if:
1. 
$B_H(0,\omega) = B_0$, and;
2. 
$B_H(t,\omega) - B_H(0,\omega) = \frac{1}{\Gamma(H+\frac{1}{2})} \left[ \int_{-\infty}^{0} \left[ (t-s)^{H-\frac{1}{2}} - (-s)^{H-\frac{1}{2}} \right] dB(s,\omega) + \int_0^t (t-s)^{H-\frac{1}{2}}\, dB(s,\omega) \right]$ [Weyl fractional integral];
3. 
Equivalent to the Riemann–Liouville integral:
$$B_H(t,\omega) - B_H(0,\omega) = \frac{1}{\Gamma(H+\frac{1}{2})} \int_0^t (t-s)^{H-\frac{1}{2}}\, dB(s,\omega).$$
Next, let us consider the following corollary:
Corollary  1.
Consider $H = \frac{1}{2}$ and $B_0 = 0$. Then the Brownian motion is $B(t,\omega) = B_{\frac{1}{2}}(t,\omega)$.
Proof. 
Let $H = \frac{1}{2}$; we find $B_{\frac{1}{2}}(t,\omega) - B_{\frac{1}{2}}(0,\omega) = \frac{1}{\Gamma(1)} \int_0^t dB(s,\omega) = B(t,\omega)$. □
In the literature, there exists an alternative, yet useful, definition:
Definition 6.
A fractional Brownian motion is a Gaussian process $B_H(t)$, $t \ge 0$, defined by the following covariance function
$$R_{fBM}(t,s) = \mathbb{E}[B_H(t) B_H(s)] = \frac{1}{2}\left[ |t|^{2H} + |s|^{2H} - |t-s|^{2H} \right],$$
where the Hurst index is denoted by $H \in (0,1)$.
Since the covariance of a Brownian motion is given in the literature, it is easy to extend the definition to an fBM with Hurst index $H$, such as
$$\mathrm{Var}[B(t) - B(s)] = \mathbb{E}[(B(t) - B(s))^2] = |t-s|, \qquad \mathrm{Var}[B_H(t) - B_H(s)] = \mathbb{E}[(B_H(t) - B_H(s))^2] = |t-s|^{2H},$$
where we recover the definition of a Brownian motion for $H = \frac{1}{2}$. Following Herzog [15], we derive the covariance step by step:
$$\mathrm{Cov}[B_H(t), B_H(s)] = \mathbb{E}\big[(B_H(t) - \mathbb{E}[B_H(t)])(B_H(s) - \mathbb{E}[B_H(s)])\big] = \mathbb{E}[B_H(t) B_H(s)] = \frac{1}{2}\left( \mathbb{E}[B_H(t)^2] + \mathbb{E}[B_H(s)^2] - \mathbb{E}[(B_H(t) - B_H(s))^2] \right) = \frac{1}{2}\left[ |t|^{2H} + |s|^{2H} - |t-s|^{2H} \right].$$
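Since Definition 6 specifies the full covariance, an fBM path can be simulated by drawing a Gaussian vector with exactly that covariance. The sketch below is our own illustration (the function names are assumptions, not from the paper) and uses a Cholesky factor of the covariance matrix:

```python
import numpy as np

def fbm_paths(n_steps=50, T=1.0, H=0.75, n_paths=4000, rng=None):
    """Sample fBM paths on (0, T] via a Cholesky factor of the covariance
    R(t, s) = 0.5 * (t^{2H} + s^{2H} - |t - s|^{2H}) from Definition 6."""
    rng = rng if rng is not None else np.random.default_rng()
    t = np.linspace(T / n_steps, T, n_steps)     # strictly positive grid
    tt, ss = np.meshgrid(t, t, indexing="ij")
    R = 0.5 * (tt**(2*H) + ss**(2*H) - np.abs(tt - ss)**(2*H))
    L = np.linalg.cholesky(R)                    # R = L L^T, L lower triangular
    return t, rng.standard_normal((n_paths, n_steps)) @ L.T

t, paths = fbm_paths(rng=np.random.default_rng(1))
var_T = paths[:, -1].var()    # empirical variance at t = T
```

The empirical variance at $t = T = 1$ is close to $T^{2H} = 1$, matching $\mathrm{Var}[B_H(t)] = |t|^{2H}$ derived above.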
Corollary  2.
Consider a fractional Brownian motion. The expectation of non-overlapping increments is $\mathbb{E}[B_H(t) - B_H(s)] = 0$ and the variance is $\mathbb{E}[(B_H(t) - B_H(s))^2] = |t-s|^{2H}$ for all $t, s \in \mathbb{R}$.
Proof. 
See [15]. □

3.2. Properties

Next, we consider the properties of the fBM over time for different Hurst parameters. If we assume that the Hurst parameter satisfies $0 < H < \frac{1}{2}$, we say the fractional stochastic process has short memory. Conversely, if $\frac{1}{2} < H < 1$, we obtain the property of long-range dependence. Figure 1 illustrates sample processes for the three ranges of the Hurst parameter $H$.
Proposition 1.
Given a fractional Brownian motion, we obtain the following properties:
1. 
The fBM has stationary increments: $B_t^H - B_s^H \overset{d}{=} B_{t-s}^H$;
2. 
The fBM is $H$-self-similar, such as $B_H(at) \overset{d}{=} a^H B_H(t)$;
3. 
The increments satisfy $\mathbb{E}[(B_H(t) - B_H(s))^2] = |t-s|^{2H}$ (see Corollary 2).
Proof. 
The proof follows Herzog [15]. In order to prove the stationarity of increments, we set $t_1 < t_2 < t_3 < t_4$. The equality of the covariances implies that $Y := B_H(t_2) - B_H(t_1)$ has the same distribution as $X := B_H(t_4) - B_H(t_3)$. Subsequently, we find
$$\mathbb{E}[(B_H(t_2) - B_H(t_1))^2] = (t_2 - t_1)^{2H} = (\Delta t)^{2H}, \qquad \mathbb{E}[(B_H(t_4) - B_H(t_3))^2] = (t_4 - t_3)^{2H} = (\Delta t)^{2H},$$
where $t_1 < t_2$ and $t_3 < t_4$ with $\Delta t = t_2 - t_1 = t_4 - t_3$. This demonstrates that increments over intervals of equal length have the same distribution at any point in time. Consequently, we obtain stationary increments.
The second property of Proposition 1 is self-similarity. Consider the following computation,
$$\mathbb{E}[(B_H(at))^2] = \frac{1}{2}\left[ (at)^{2H} + (at)^{2H} - (at - at)^{2H} \right] = (at)^{2H} = a^{2H} t^{2H} = a^{2H}\, \mathbb{E}[(B_H(t))^2].$$
Here, we find that $(B_H(at))^2 \overset{d}{=} a^{2H} (B_H(t))^2$ and $B_H(at) \overset{d}{=} a^H B_H(t)$. Part (3) is already given in Corollary 2. □

3.3. Definition of Sub-Fractional Processes

In a recent paper, Herzog [15] described the sub-fractional Brownian motion (sub-fBM) as an intermediate between a Brownian motion and a fractional Brownian motion. Without loss of generality, a sub-fBM is a self-similar Gaussian process. Note that both the fBM and sub-fBM have the properties of self-similarity and long-range dependence, yet a sub-fBM does not have stationary increments [9].
A centered Gaussian process is uniquely defined by its covariance. For the sub-fBM, we denote the covariance by $\mathrm{Cov}(\xi_t^H, \xi_s^H)$.
Definition 7.
Consider a sub-fractional Brownian motion with Hurst parameter $H$: a centered, mean-zero Gaussian process $\xi^H = \{\xi_t^H, t \ge 0\}$ with the following covariance function
$$R_{sfBM}(t,s) := \mathbb{E}[\xi_t^H \xi_s^H] = s^{2H} + t^{2H} - \frac{1}{2}\left[ (s+t)^{2H} + |s-t|^{2H} \right],$$
where $\xi_0^H = 0$ and $\mathbb{E}[\xi_t^H] = 0$.
Note, a sub-fractional Brownian motion coincides with a Brownian motion if the Hurst parameter is $H = \frac{1}{2}$. Thus, a Brownian motion on the real line has a covariance of $\mathrm{Cov}(\xi_t^H, \xi_s^H) = s \wedge t := \min[s,t]$. The process $\xi_t^H$ has the following representation for $H > \frac{1}{2}$ (see [25]):
$$\xi_t^H = \int_0^t K_H(t,s)\, dW_s, \qquad K_H(t,s) = c_H \left( H - \tfrac{1}{2} \right) s^{\frac{1}{2} - H} \int_s^t (u-s)^{H - \frac{3}{2}}\, u^{H - \frac{1}{2}}\, du.$$
The kernel function of a sub-fractional Brownian motion is given by
$$\phi_{sfBM}(s,t) = \frac{\partial^2\, \mathrm{Cov}(\xi_t^H, \xi_s^H)}{\partial s\, \partial t} = H(2H-1)\left[ |s-t|^{2H-2} - (s+t)^{2H-2} \right].$$

3.4. Properties of Sub-Fractional Processes

In this subsection, we reiterate useful properties of sub-fractional Brownian motions such as those described in Herzog [15].
Lemma 1.
Let $\xi_t^H$ be a sub-fBM for all $t$. The properties of the sub-fBM are:
1. 
$\mathbb{E}[(\xi_t^H)^2] = (2 - 2^{2H-1})\, t^{2H}$.
2. 
$\mathbb{E}[(\xi_t^H - \xi_s^H)^2] = -2^{2H-1}(t^{2H} + s^{2H}) + (t+s)^{2H} + (t-s)^{2H}$.
3. 
If $H \ne \frac{1}{2}$, then $\xi_t^H - \xi_s^H \overset{d}{\ne} \xi_{t-s}^H$, i.e., the increments are non-stationary.
Proof. 
See [15]. □
Finally, we follow Herzog [15] and prove the following proposition:
Proposition 2.
Let $B_t^H$ be a fractional Brownian motion and $\xi_t^H$ be a sub-fractional Brownian motion. For $H \in (\frac{1}{2}, 1)$, the following holds:
1. 
$\mathbb{E}[(\xi_t^H)^2] < \mathbb{E}[(B_t^H)^2]$;
2. 
$R_{\xi^H}(s,t) \le R_{B^H}(s,t)$.
Proof. 
Obviously, an fBM has the variance $\mathrm{Var}[B_t^H] = |t|^{2H}$. Similarly, we obtain the variance $\mathrm{Var}[\xi_t^H] = (2 - 2^{2H-1})\, |t|^{2H}$ for a sub-fBM. Since $0 < (2H-1)\ln 2$ if $H > \frac{1}{2}$, we have $2^{2H-1} > 1$ and hence $2 - 2^{2H-1} < 1$, which proves the first claim.
The second part follows for $s, t > 0$:
$$s^{2H} + t^{2H} - \frac{1}{2}\left[ (s+t)^{2H} + |t-s|^{2H} \right] \le \frac{1}{2}\left[ |t|^{2H} + |s|^{2H} - |t-s|^{2H} \right] \iff s^{2H} + t^{2H} \le (s+t)^{2H}.$$
In the case of $s = t = 0$ or $s = 0$, $t \ne 0$, we have equality. □
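The variance identity of Lemma 1 and the covariance domination of Proposition 2 can be checked numerically from the two covariance functions. The following sketch is illustrative; the helper names are our own:

```python
import numpy as np

def cov_fbm(t, s, H):
    """fBM covariance R(t, s) from Definition 6."""
    return 0.5 * (abs(t)**(2*H) + abs(s)**(2*H) - abs(t - s)**(2*H))

def cov_sfbm(t, s, H):
    """Sub-fBM covariance R(t, s) from Definition 7."""
    return s**(2*H) + t**(2*H) - 0.5 * ((s + t)**(2*H) + abs(s - t)**(2*H))

H = 0.75
ts = np.linspace(0.1, 2.0, 50)

# Lemma 1, item 1: E[(xi_t)^2] = (2 - 2^{2H-1}) t^{2H}
var_ok = np.allclose([cov_sfbm(t, t, H) for t in ts],
                     (2 - 2**(2*H - 1)) * ts**(2*H))

# Proposition 2, item 2: the sub-fBM covariance never exceeds the fBM covariance
dominated = all(cov_sfbm(t, s, H) <= cov_fbm(t, s, H) + 1e-12
                for t in ts for s in ts)
```

Both checks pass for any $H \in (\frac{1}{2}, 1)$, in line with the proof above.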

4. Fractional Stochastic Search

Let $X_t$ be an $m$-dimensional stochastic process driven by a fractional Brownian motion $B_t^H$, where $H = \frac{1}{2}$. The respective stochastic process $X_t$ is as follows:
$$dX_t = a(X_t)\, dt + \sigma(X_t)\, dB_t^H, \qquad X_0 = x_0,$$
where $X_0$ is the initial value and $B_t^H = (B_1^H(t), \ldots, B_m^H(t))$ is an $m$-dimensional fractional Brownian motion. Next, consider a cost function $z : \mathbb{R}^m \to \mathbb{R}$ which needs to be optimized. Hence, we study the vector field for which the auxiliary function $z(X(t))$ is decreasing. This requires us to find the expectation value:
$$\kappa(t) = \mathbb{E}[z(X(t))].$$
Thus, the function $z(X(t))$ is stochastic and depends on time $t$. In general, an optimization algorithm of a neural network minimizes the expectation value of this function. Utilizing the machinery of stochastic analysis, Dynkin's formula among others, and following the approach described in [26], we obtain
$$\kappa(t) = z(X_0) + \int_0^t \mathbb{E}[A(z(X(s)))]\, ds, \qquad \kappa(t + dt) = z(X_0) + \int_0^{t+dt} \mathbb{E}[A(z(X(s)))]\, ds,$$
where the operator $A = \sum_k a_k \frac{\partial}{\partial X_k} + \frac{1}{2} \sum_{i,j} (\sigma \sigma^T)_{ij} \frac{\partial^2}{\partial X_i\, \partial X_j}$. A Taylor-series approximation and the differencing of Equations (12) yields
$$\Delta \kappa(t) = \kappa(t + dt) - \kappa(t) = \int_t^{t+dt} \mathbb{E}[A(z(X(s)))]\, ds = \mathbb{E}[A(z(X(t)))]\, dt + O(dt^2).$$
The method of steepest descent chooses the process $X_t$ such that $\Delta\kappa(t)$ is as negative as possible. However, since $X_t$ is a stochastic process, we need to study the expectation of the gradient; in particular, $\Delta\kappa(t)$ is as negative as possible if $\mathbb{E}[A(z(X(t)))] < 0$.
In order to construct a stochastic process $X_t$ with this property, we specify $a(X_t)$ and $\sigma(X_t)$ in Equation (10), respectively. Next, we specify the diffusion term $\sigma$ in Equation (10), or rather the matrix product $\sigma \sigma^T$, such that the algorithm in Equation (1) converges efficiently. Indeed, we set $\sigma \sigma^T$ inversely proportional to the Hessian matrix $H_z$, namely $(\sigma \sigma^T)_{ij} = \tau\, (H_z^{-1})_{ij}$ with $\tau > 0$. Through this choice we can show the convergence of the algorithm and the existence of the solution.
Given that the function $z$ is of class $C^2$ and strictly convex, then, according to [27], the Hessian matrix $H_z$ is symmetric, real, positive definite, and non-degenerate. This guarantees that the Hessian matrix $H_z$ has an inverse, which is also positive definite. Efficient computation can be achieved by utilizing the Cholesky decomposition: one can show that the diffusion term $\sigma$ is a lower triangular matrix satisfying $(\sigma \sigma^T)_{ij} = \tau\, (H_z^{-1})_{ij}$. Under those conditions, we compute $A z(x)$:
$$A z(x) = \sum_k a_k \frac{\partial z(x)}{\partial x_k} + \frac{\tau}{2} \sum_{i,j} (H_z^{-1})_{ij} (H_z)_{ij} = \langle \nabla z(x), a(x) \rangle + \frac{n}{2}\, \tau.$$
In order to make $A z(x)$ as negative as possible, we have to minimize the first term, because the second term is a constant. Choosing $a(x) = -\lambda \nabla z(x)$ and requiring $A z(x) < 0$ yields the following condition:
$$A z(x) = \langle \nabla z(x), -\lambda \nabla z(x) \rangle + \frac{n}{2}\, \tau = -\lambda\, \|\nabla z(x)\|^2 + \frac{n}{2}\, \tau < 0 \iff \frac{\tau}{\lambda} < \frac{2}{n}\, \|\nabla z(x)\|^2.$$
Using the squared vector norm and the assumption $\xi = \inf_x \|\nabla z(x)\|^2 \ne 0$, it is sufficient to set the main parameters $\tau$ and $\lambda$ of the SGD algorithm such that $\tau = \frac{2\xi}{n}\, \lambda$. In the sequel, we apply this algorithm to fractional stochastic search problems.
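For the standard case $H = \frac{1}{2}$, the resulting search dynamics can be sketched with an Euler–Maruyama discretisation. This is our own simplified illustration (discrete time, plain Brownian increments, illustrative parameter values), not the paper's continuous-time construction:

```python
import numpy as np

def stochastic_search(grad, hess_inv, x0, lam=1.0, xi=0.01,
                      dt=1e-3, steps=5000, rng=None):
    """Euler-Maruyama discretisation of dX = -lam * grad z(X) dt + sigma dB
    (H = 1/2 case), with sigma sigma^T = tau * H_z^{-1} and the parameter
    rule tau = (2 xi / n) lam from Section 4."""
    rng = rng if rng is not None else np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    n = x.size
    tau = 2.0 * xi * lam / n
    for _ in range(steps):
        sigma = np.linalg.cholesky(tau * hess_inv(x))   # lower-triangular diffusion
        x = x - lam * grad(x) * dt + sigma @ (np.sqrt(dt) * rng.standard_normal(n))
    return x

# Toy cost z(x) = ||x||^2 / 2: grad z = x, Hessian = identity.
x_end = stochastic_search(grad=lambda x: x,
                          hess_inv=lambda x: np.eye(x.size),
                          x0=[4.0, -2.0], rng=np.random.default_rng(3))
```

Here `xi` stands in for $\xi = \inf_x \|\nabla z(x)\|^2$, which in practice would have to be estimated over the search domain; the drift pulls the iterate towards the minimizer while the Hessian-scaled noise keeps it exploring.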

5. Application of Fractional Stochastic Search

In this section, we demonstrate how a fractional stochastic search operates and exhibit its convergence within neural networks.

5.1. Stochastic Search: Case I

Suppose that we have a neural network with the following cost function: $z(x) = \frac{1}{2} x^2$ for $x \in (a_1, a_2)$ with $a_i > 0$ and $a_1 < a_2$. The stochastic gradient descent method searches for the minimum of this cost function.
Mathematically, the solution to this problem is obvious. The first derivative is $z'(x) = x$ for $x \in (a_1, a_2)$; since $x > 0$ on this domain, the cost function is increasing, so the minimum lies at the boundary point $x = a_1$, and consequently, the minimum value is $z(a_1) = \frac{1}{2} a_1^2$. Next, we show that we can obtain the same value under a fractional stochastic search algorithm in a neural network.
In step one, we establish an adequate stochastic differential equation according to $dX_t = a(X_t)\, dt + \sigma(X_t)\, dB_t^H$ for $X_0 = x_0 \in (a_1, a_2)$. The gradient of the cost function equals the first derivative, $z'(x) = x$, and the Hessian equals the second derivative, $H(x) = z''(x) = 1$. Both enable us to compute the Lipschitz-continuous coefficient functions: $a(X_t) = -\lambda z'(x) = -\lambda x$ and $\sigma^2(X_t) = \tau H^{-1}(x) = \tau \cdot 1 = \tau$; hence, we obtain $\sigma(X_t) = \sqrt{\tau}$. The stochastic differential equation for $H = \frac{1}{2}$ has the form:
$$dX_t = -\lambda X_t\, dt + \sqrt{\tau}\, dB_t^{\frac{1}{2}}.$$
The SDE in Equation (14) is an Ornstein–Uhlenbeck process driven by a Brownian motion $B^H(t)$ with the Hurst parameter $H = \frac{1}{2}$ [28].
The solution is divided into two parts. In part one, we solve the non-stochastic problem $dX_t = -\lambda X_t\, dt$. This is an ordinary differential equation and has the solution $X_t = X_0 e^{-\lambda t}$. In part two, we define an auxiliary function $Y_t = X_t e^{\lambda t}$ and apply the Itô–Doeblin lemma:
$$dY_t = \lambda X_t e^{\lambda t}\, dt + e^{\lambda t}\, dX_t = \lambda X_t e^{\lambda t}\, dt + e^{\lambda t}\left[ -\lambda X_t\, dt + \sqrt{\tau}\, dB_t^H \right] = \sqrt{\tau}\, e^{\lambda t}\, dB_t^H.$$
Note that, in this case, the derivation coincides with that for a standard Brownian motion. Next, integrating the last line yields $Y_t - Y_0 = \sqrt{\tau} \int_0^t e^{\lambda s}\, dB_s^H$. Hence, we obtain $X_t = Y_t e^{-\lambda t}$, which is
$$X_t = \left( Y_0 + \sqrt{\tau} \int_0^t e^{\lambda s}\, dB_s^H \right) e^{-\lambda t} = X_0 e^{-\lambda t} + \sqrt{\tau} \int_0^t e^{-\lambda (t-s)}\, dB_s^H.$$
Based on Equation (15), we find the expectation $\mathbb{E}[X_t] = X_0 e^{-\lambda t}$. Note that the expected stochastic integral with respect to a Brownian motion is zero. For $t \to \infty$, the expected value is $\lim_{t \to \infty} \mathbb{E}[X_t] = 0$. Next, utilizing the general condition $\tau = \frac{2\xi}{n}\, \lambda$ for $\xi = \inf \|z'(x)\|^2$ and $n = 1$ from Section 4, we obtain
$$\tau = \frac{2\xi}{n}\, \lambda = 2 \left( \inf_{x \in (a_1, a_2)} x^2 \right) \lambda = 2 a_1^2\, \lambda \iff \frac{\tau}{2\lambda} = a_1^2.$$
Finally, it remains to show that the SDE in Equation (14) converges to the minimum value. Hence, we study the convergence sequence:
$$\kappa(t) = \mathbb{E}[z(X_t) \mid X_0] = \frac{1}{2}\, \mathbb{E}[(X_t)^2 \mid X_0] = \frac{1}{2}\, \mathbb{E}\left[ X_0^2 e^{-2\lambda t} + 2 X_0 e^{-\lambda t} \sqrt{\tau} \int_0^t e^{-\lambda(t-s)}\, dB_s^H + \left( \sqrt{\tau} \int_0^t e^{-\lambda(t-s)}\, dB_s^H \right)^2 \right],$$
where, for $\mathbb{E}[(X_t)^2]$, we have substituted Equation (15). Next, we use the property that the expected stochastic integral is zero and the variance of the Brownian motion is $\mathrm{Var}(dB_s^H) = ds$. Thus, we obtain
$$\kappa(t) = \frac{1}{2}\left[ X_0^2 e^{-2\lambda t} + \tau \int_0^t e^{-2\lambda(t-s)}\, ds \right] = \frac{1}{2}\left[ X_0^2 e^{-2\lambda t} + \tau\, \frac{e^{-2\lambda(t-s)}}{2\lambda} \Big|_0^t \right] = \frac{1}{2}\left[ X_0^2 e^{-2\lambda t} + \frac{\tau}{2\lambda} \left( 1 - e^{-2\lambda t} \right) \right].$$
In order to show the convergence, we compute the limit of the sequence for time to infinity. We obtain the following:
$$\lim_{t \to \infty} \kappa(t) = \lim_{t \to \infty} \frac{1}{2}\left[ 0 + \frac{\tau}{2\lambda} \right] = \frac{1}{2} \cdot \frac{\tau}{2\lambda} = \frac{1}{2}\, a_1^2.$$
Indeed, we find that the (fractional) stochastic algorithm converges to the same minimum value of our function $z(x) = \frac{x^2}{2}$ for $x \in (a_1, a_2)$.
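A Monte-Carlo simulation of the Ornstein–Uhlenbeck search in Equation (14) illustrates this limit; the concrete parameter values below are our own choices for the sketch:

```python
import numpy as np

# Case I with H = 1/2: dX = -lam * X dt + sqrt(tau) dB and tau = 2 * a1^2 * lam,
# so kappa(t) = E[X_t^2] / 2 should approach z(a1) = a1^2 / 2 = 0.125.
rng = np.random.default_rng(42)
a1, lam = 0.5, 1.0
tau = 2 * a1**2 * lam                 # the condition tau / (2 lam) = a1^2
dt, steps, n_paths = 1e-2, 1000, 20000
X = np.full(n_paths, 2.0)             # X_0 = 2, assumed to lie in (a1, a2)
for _ in range(steps):                # Euler-Maruyama up to t = 10
    X += -lam * X * dt + np.sqrt(tau * dt) * rng.standard_normal(n_paths)
kappa = 0.5 * np.mean(X**2)           # close to a1^2 / 2
```

By $t = 10$ the transient term $X_0^2 e^{-2\lambda t}$ is negligible, and the empirical $\kappa$ sits near the limit $\frac{\tau}{4\lambda} = \frac{1}{2} a_1^2$.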

5.2. Stochastic Search: Case II

Conversely, suppose a fractional stochastic differential equation with a Hurst index $H \ne \frac{1}{2}$ of the form
$$dX_t = \mu X_t\, dt + \lambda X_t\, dB_t^H,$$
where we define $\mu := -\eta$ and $X_0 = x > 0$. We search for the minimum of the function $z(X_t) = X_t$, where $X_t$ is the solution of the SDE in Equation (16). This equation can be rewritten in the fractional Hida space $(\mathcal{S})_H^*$ as
$$\frac{dX_t}{dt} = \mu X_t + \lambda X_t \diamond W_t^H = \left( \mu + \lambda W_t^H \right) \diamond X_t,$$
where $\diamond$ is defined as the Wick product.
where ⋄ is defined as the Wick product. Using Wick calculus, we find the solution as
X t = x e μ t + λ 0 t W s H d s = e μ t + λ B t H ,
where we have used the definition B t H = 0 t W s H d s . By applying the following definitions e w , f = e R z t d B t H 1 2 z H 2 , where z H 2 = R R z ( s ) z ( t ) ϕ ( s , t ) d s d t and ϕ ( s , t ) = H ( 2 H 1 ) | s t | 2 H 2 for s , t R , we obtain the final solution:
X t = x e μ t + λ B t H 1 2 λ 2 t 2 H .
The solution of Equation (18) has an expectation of $\mathbb{E}[X_t] = X_0 e^{-\eta t}$. Hence, for $t \to \infty$, the expected value is zero: $\lim_{t \to \infty} \mathbb{E}[X_t] = 0$. It remains to show the convergence of the fractional SDE in machine learning:
$$\kappa(t) = \mathbb{E}[z(X_t) \mid X_0] = \mathbb{E}\left[ x\, e^{\mu t + \lambda B_t^H - \frac{1}{2}\lambda^2 t^{2H}} \,\middle|\, X_0 \right] = x\, e^{\mu t}\, \mathbb{E}\left[ e^{\lambda B_t^H - \frac{1}{2}\lambda^2 t^{2H}} \right] = x\, e^{-\eta t},$$
since $B_t^H \sim \mathcal{N}(0, t^{2H})$ implies $\mathbb{E}[e^{\lambda B_t^H}] = e^{\frac{1}{2}\lambda^2 t^{2H}}$. In order to show convergence, we compute the limit of the sequence for time to infinity. We equally obtain $\lim_{t \to \infty} \kappa(t) = \lim_{t \to \infty} x\, e^{-\eta t} = 0$.
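The closing expectation of Case II is easy to verify by Monte-Carlo, since $B_t^H$ at a fixed time is simply a $\mathcal{N}(0, t^{2H})$ draw; the parameter values are our own illustrative choices:

```python
import numpy as np

# Case II: X_t = x exp(mu t + lam B_t^H - 0.5 lam^2 t^{2H}) with mu = -eta.
# Since B_t^H ~ N(0, t^{2H}), E[exp(lam B_t^H)] = exp(0.5 lam^2 t^{2H}),
# hence E[X_t] = x exp(-eta t).
rng = np.random.default_rng(7)
x0, eta, lam, H, t = 1.0, 0.8, 0.4, 0.7, 2.0
BH = rng.normal(0.0, t**H, size=200_000)    # samples of B_t^H at the fixed time t
X = x0 * np.exp(-eta * t + lam * BH - 0.5 * lam**2 * t**(2 * H))
mean_X = X.mean()                           # close to x0 * exp(-eta * t)
```

The lognormal correction term $-\frac{1}{2}\lambda^2 t^{2H}$ exactly cancels the exponential moment of $\lambda B_t^H$, so the sample mean tracks $x e^{-\eta t}$ for any $H \in (0,1)$.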
There are notable limitations of fractional stochastic gradient descent in general. Fractional calculus is built around the Riemann–Liouville integral, which is a non-local operator, lacks uniqueness, and depends on the initial conditions. Given that a fractional process is not a martingale, the common stochastic tools are not applicable. Whether those properties constrain fractional stochastic gradient descent remains an open research question. Computational aspects might also be a limiting factor. However, for the first time, this research studies the idea of a fractional search analogous to stochastic gradient descent in machine learning.

6. Conclusions

This article develops fractional stochastic gradient descent algorithms for the optimization of neural networks. In the standard case, the fractional stochastic approach reduces to the well-known stochastic gradient descent method in machine learning. We discuss two special cases. First, we exhibit that fractional stochastic algorithms find the minima. This result might enhance algorithmic optimization in machine learning. Second, we uncover generalized patterns and properties of fractional stochastic processes. These insights may lead to a universal optimization approach in machine learning and AI in the future. We highlight the need for further research in that direction, particularly regarding the computational issues.

Funding

This research received no external funding except basic financial support from RRI—Reutlingen Research Institute, Reutlingen University. I appreciate the support for the advancement of scientific research and the betterment of society for the future.

Data Availability Statement

All data are available in the paper or upon request from the author.

Acknowledgments

I thank three anonymous reviewers for helpful comments.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Padhi, S.; Graef, J.; Pati, S. Multiple positive solutions for a boundary value problem with nonlinear nonlocal Riemann–Stieltjes integral boundary conditions. Fract. Calc. Appl. Anal. 2018, 21, 716–745. [Google Scholar] [CrossRef]
  2. Ruiz, W. Dynamical system method for investigating existence and dynamical property of solution of nonlinear time-fractional PDEs. Nonlinear Dyn. 2019, 99, 1–20. [Google Scholar]
  3. Kamran, J.W.; Jamal, A.; Li, X. Numerical Solution of Fractional-Order Fredholm Integrodifferential Equation in the Sense of Atangana-Baleanu Derivative. Math. Probl. Eng. 2020, 2021, 6662808. [Google Scholar]
  4. Guariglia, E. Fractional calculus, zeta functions and Shannon entropy. Open Math. 2021, 19, 87–100. [Google Scholar] [CrossRef]
  5. Mandelbrot, B.; van Ness, J. Fractional Brownian Motions, Fractional Noises and Applications. SIAM Rev. 1968, 10, 422–437. [Google Scholar] [CrossRef]
  6. Monin, A.; Yaglom, A. Statistical Fluid Mechanics: Mechanics of Turbulence; Dover Publication: New York, NY, USA, 2007; Volume 2. [Google Scholar]
  7. Tudor, C. On the Wiener integral with respect to sub-fractional Brownian motion on an interval. J. Math. Anal. Appl. 2009, 351, 456–468. [Google Scholar] [CrossRef]
  8. Tudor, C.; Zili, M. Covariance measure and stochastic heat equation with fractional noise. Fract. Calc. Appl. Anal. 2014, 17, 807–826. [Google Scholar] [CrossRef]
  9. Bojdecki, T.; Gorostiza, L.; Talarczyk, A. Sub-fractional Brownian motion and its relation to occupation times. Statist. Probab. Lett. 2004, 69, 405–419. [Google Scholar] [CrossRef]
  10. Duncan, T.; Hu, Y.; Pasik-Duncan, B. Stochastic Calculus for Fractional Brownian Motion. SIAM J. Control Optim. 2000, 38, 582–612. [Google Scholar] [CrossRef]
  11. Shen, G.; Yan, L. The stochastic integral with respect to the sub-fractional Brownian motion with H> 1 2 . J. Math. Sci. Sci. Adv. 2010, 6, 219–239. [Google Scholar]
  12. Yan, L.; Shen, G.; He, K. Itô’s formula for a sub-fractional Brownian motion. Commun. Stoch. Anal. 2011, 5, 135–159. [Google Scholar] [CrossRef]
  13. Liu, J.; Yan, L. Remarks on asymptotic behavior of weighted quadratic variation of subfractional Brownian motion. J. Korean Stat. Soc. 2012, 41, 177–187. [Google Scholar] [CrossRef]
  14. Prakasa, R. On some maximal and integral inequalities for sub-fractional Brownian motion. Stoch. Anal. Appl. 2017, 35, 2017. [Google Scholar]
  15. Herzog, B. Adopting Feynman–Kac Formula in Stochastic Differential Equations with (Sub-)Fractional Brownian Motion. Mathematics 2022, 10, 340. [Google Scholar] [CrossRef]
  16. Hopfield, J.J. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 1982, 79, 2554–2558. [Google Scholar] [CrossRef] [PubMed]
  17. Bottou, L.; Curtis, F.E.; Nocedal, J. Optimization Methods for Large-Scale Machine Learning. SIAM Rev. 2018, 60, 223–311. [Google Scholar] [CrossRef]
  18. Kochenderfer, H.; Wheeler, T. Algorithms for Optimization; MIT Press: Cambridge, MA, USA, 2019. [Google Scholar]
  19. Schmidt, M.; Roux, N.L.; Bach, F. Minimizing finite sums with the stochastic average gradient. Math. Program. 2017, 162, 83–112. [Google Scholar] [CrossRef]
  20. Haochen, J.; Sra, S. Random Shuffling Beats SGD after Finite Epochs. Proc. Mach. Learn. Res. 2019, 97, 2624–2633. [Google Scholar]
  21. Gotmare, A.; Keskar, N.S.; Xiong, C.; Socher, R. A Closer Look at Deep Learning Heuristics: Learning rate restarts, Warmup and Distillation. arXiv 2018, arXiv:1810.13243. [Google Scholar]
  22. Curtis, F.E.; Scheinberg, K. Adaptive Stochastic Optimization: A Framework for Analyzing Stochastic Optimization Algorithms. IEEE Signal Process 2020, 37, 32–42. [Google Scholar] [CrossRef]
  23. de Roos, F.; Jidling, C.; Wills, A.; Schön, T.; Hennig, P. A Probabilistically Motivated Learning Rate Adaptation for Stochastic Optimization. arXiv 2021, arXiv:2102.10880. [Google Scholar]
  24. Rosenblatt, F. The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain. Psychol. Rev. 1958, 65, 386–408. [Google Scholar] [CrossRef] [PubMed]
  25. Alòs, E.; Mazet, O.; Nualart, D. Stochastic Calculus with Respect to Gaussian processes. Ann. Probab. 2001, 29, 766–801. [Google Scholar] [CrossRef]
  26. Calin, O. Deep Learning Architectures; Springer: Berlin/Heidelberg, Germany, 2020. [Google Scholar]
  27. Golub, G.H.; van Loan, C. Matrix Computations; Johns Hopkins Press: Baltimore, MD, USA, 1996. [Google Scholar]
  28. Ornstein, L.; Uhlenbeck, G. On the theory of Brownian motion. Phys. Rev. 1930, 36, 823–841. [Google Scholar]
Figure 1. Different fractional Brownian motions with the following Hurst index: (Left-panel) H = 0.25 , (middle-panel) H = 0.50 (standard BM), and (right-panel) H = 0.75 .