Article

Approximation of Brownian Motion on Simple Graphs

by George A. Anastassiou 1,* and Dimitra Kouloumpou 2

1 Department of Mathematical Sciences, University of Memphis, Memphis, TN 38152, USA
2 Section of Mathematics, Hellenic Naval Academy, 18539 Piraeus, Greece
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(20), 4329; https://doi.org/10.3390/math11204329
Submission received: 18 September 2023 / Revised: 8 October 2023 / Accepted: 17 October 2023 / Published: 18 October 2023
(This article belongs to the Special Issue Approximation Theory and Applications)

Abstract: This article is based on Chapters 9 and 19 of the new neural network approximation monograph written by the first author. We use the approximation properties of the parametrized and deformed neural networks based on the parametrized error function and on the q-deformed and β-parametrized half-hyperbolic tangent activation functions. Thus, we implement a univariate theory, both ordinary and fractional, on a compact interval. The result is the quantitative approximation of Brownian motion on simple graphs: in particular, over a system S of semiaxes emanating radially from a common origin, with a particle moving randomly on S. We produce a large variety of Jackson-type inequalities, calculating the degree of approximation of the engaged neural network operators to a general expectation function of this kind of Brownian motion. We finish with a detailed list of approximation applications related to the expectation of important functions of this Brownian motion. The differentiability of our functions is taken into account, producing higher speeds of approximation.

1. Introduction

The first author, in [1,2] (see Sections 2–5 there), was the first researcher to derive quantitative neural network approximations of continuous functions by precisely defined neural network operators of Cardaliaguet–Euvrard and "Squashing" types, by using the modulus of continuity of the engaged function or of its high-order derivative, and obtaining nearly sharp (almost attained) Jackson-type inequalities. He took care of both the univariate and multivariate cases. The "bell-shaped" and "squashing" functions defining these operators are assumed to be of compact support.
Furthermore, the first author (motivated by [3]) continued his studies on neural network approximation by introducing and using the appropriate quasi-interpolation operators of sigmoidal and hyperbolic tangent type, treating both the univariate and multivariate cases; this work resulted in [4]. He also dealt with the corresponding fractional cases [5,6,7]. The authors are also motivated by the seminal works [8,9,10,11,12,13].
In [14,15], the first author extended his studies to Banach space-valued functions, for activation functions induced by the parametrized error function and by the q-deformed and β-parametrized half-hyperbolic tangent sigmoid function. The authors, motivated by [16], created neural network quantitative approximations of Brownian motion over a simple graph given by a system of semiaxes.
They obtained a collection of Jackson-type inequalities, calculating the error of approximation to a general expectation function of this Brownian motion and of its derivative. They present ordinary and fractional calculus results and finish with a plethora of interesting applications.

2. Basics

2.1. About the Parametrized (Gauss) Error Special Activation Function

Here, we follow [17].
We consider here the parametrized (Gauss) error special activation function
$$\mathrm{erf}(\lambda z) = \frac{2}{\sqrt{\pi}} \int_0^{\lambda z} e^{-t^2}\, dt, \quad \lambda > 0,\ z \in \mathbb{R},$$
which is a sigmoidal-type, strictly increasing function.
Of special interest in neural network theory is the case when $0 < \lambda < 1$; see Section 1, the Introduction.
It has the basic properties
$$\mathrm{erf}(0) = 0, \quad \mathrm{erf}(-\lambda x) = -\mathrm{erf}(\lambda x) \ \text{for every } 0 < \lambda < 1,\ x \in \mathbb{R}, \quad \mathrm{erf}(+\infty) = 1, \quad \mathrm{erf}(-\infty) = -1,$$
and
$$\left(\mathrm{erf}(\lambda x)\right)' = \frac{2\lambda}{\sqrt{\pi}}\, e^{-\lambda^2 x^2}, \quad \forall x \in \mathbb{R}.$$
We consider the function
$$\chi(x) = \frac{1}{4}\left[\mathrm{erf}(\lambda(x+1)) - \mathrm{erf}(\lambda(x-1))\right], \quad x \in \mathbb{R},$$
and we notice that
$$\chi(-x) = \chi(x);$$
thus, $\chi$ is an even function.
Since $x + 1 > x - 1$, we have $\mathrm{erf}(\lambda(x+1)) > \mathrm{erf}(\lambda(x-1))$, and hence $\chi(x) > 0$ for all $x \in \mathbb{R}$.
We see that
$$\chi(0) = \frac{\mathrm{erf}(\lambda)}{2}.$$
Let $x > 0$; then, we have that
$$\chi'(x) = \frac{\lambda}{2\sqrt{\pi}}\left(\frac{e^{\lambda^2(x-1)^2} - e^{\lambda^2(x+1)^2}}{e^{\lambda^2(x+1)^2}\, e^{\lambda^2(x-1)^2}}\right) < 0,$$
proving $\chi'(x) < 0$ for $x > 0$. That is, $\chi$ is strictly decreasing on $[0, \infty)$, strictly increasing on $(-\infty, 0]$, and $\chi'(0) = 0$.
Clearly, the $x$-axis is the horizontal asymptote of $\chi$.
Concluding, $\chi$ is a bell-shaped symmetric function with maximum
$$\chi(0) = \frac{\mathrm{erf}(\lambda)}{2}.$$
Theorem 1.
It holds
$$\sum_{i=-\infty}^{\infty} \chi(x - i) = 1, \quad \forall x \in \mathbb{R}.$$
We have
Theorem 2.
We have that
$$\int_{-\infty}^{\infty} \chi(x)\, dx = 1.$$
Hence, $\chi(x)$ is a density function on $\mathbb{R}$.
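As a quick numerical illustration (ours, not part of the source material; the value of $\lambda$ is an arbitrary choice), the following Python sketch evaluates $\chi$ and checks Theorems 1 and 2 by a truncated series and a crude Riemann sum:

```python
import math

lam = 0.5  # arbitrary choice of the parameter lambda

def chi(x):
    # chi(x) = (1/4) [erf(lam (x + 1)) - erf(lam (x - 1))]
    return 0.25 * (math.erf(lam * (x + 1)) - math.erf(lam * (x - 1)))

# Theorem 1: sum_i chi(x - i) = 1 (series truncated to |i| <= 50)
x = 0.3
print(sum(chi(x - i) for i in range(-50, 51)))            # ~ 1.0

# Theorem 2: integral of chi over R equals 1 (Riemann sum on [-60, 60])
h = 1e-3
print(h * sum(chi(-60 + k * h) for k in range(120_000)))  # ~ 1.0
```

We need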
Theorem 3.
Let $0 < \alpha < 1$, and $n \in \mathbb{N}$ with $n^{1-\alpha} > 2$, $\lambda > 0$. It holds
$$\sum_{\substack{k = -\infty \\ |nx - k| \ge n^{1-\alpha}}}^{\infty} \chi(nx - k) < \frac{1 - \mathrm{erf}\left(\lambda\left(n^{1-\alpha} - 2\right)\right)}{2},$$
with
$$\lim_{n \to +\infty} \frac{1 - \mathrm{erf}\left(\lambda\left(n^{1-\alpha} - 2\right)\right)}{2} = 0.$$
Denote by $\lfloor \cdot \rfloor$ the integral part and by $\lceil \cdot \rceil$ the ceiling of a number.
Furthermore, we need
Theorem 4.
Let $x \in [a, b] \subset \mathbb{R}$, $\lambda > 0$ and $n \in \mathbb{N}$ so that $\lceil na \rceil \le \lfloor nb \rfloor$. Then,
$$\frac{1}{\sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} \chi(nx - k)} < \frac{1}{\chi(1)} = \frac{4}{\mathrm{erf}(2\lambda)}.$$
Remark 1.
As in [18], we have that
$$\lim_{n \to \infty} \sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} \chi(nx - k) \ne 1, \quad \text{for at least some } x \in [a, b].$$
Note 1.
For large enough $n$, we always obtain $\lceil na \rceil \le \lfloor nb \rfloor$. Also, $a \le \frac{k}{n} \le b$ iff $\lceil na \rceil \le k \le \lfloor nb \rfloor$. As in [18], we obtain that
$$\sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} \chi(nx - k) \le 1.$$
Definition 1.
Let $f \in C([a, b])$ and $n \in \mathbb{N}: \lceil na \rceil \le \lfloor nb \rfloor$. We introduce and define the real-valued linear neural network operators
$$A_n(f, x) := \frac{\sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} f\left(\frac{k}{n}\right) \chi(nx - k)}{\sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} \chi(nx - k)}, \quad x \in [a, b].$$
Clearly, here, $A_n(f, x) \in C([a, b])$. We study here the pointwise and uniform convergence of $A_n(f, x)$ to $f(x)$ with rates.
For convenience, we also call
$$A_n^*(f, x) := \sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} f\left(\frac{k}{n}\right) \chi(nx - k);$$
that is,
$$A_n(f, x) = \frac{A_n^*(f, x)}{\sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} \chi(nx - k)}.$$
So that
$$A_n(f, x) - f(x) = \frac{A_n^*(f, x) - f(x) \sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} \chi(nx - k)}{\sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} \chi(nx - k)}.$$
Consequently, we derive
$$|A_n(f, x) - f(x)| \le \frac{4}{\mathrm{erf}(2\lambda)} \left| A_n^*(f, x) - f(x) \sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} \chi(nx - k) \right| = \frac{4}{\mathrm{erf}(2\lambda)} \left| \sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} \left( f\left(\frac{k}{n}\right) - f(x) \right) \chi(nx - k) \right|.$$
We will estimate the right-hand side of the last equality.
For that, we need, for $f \in C([a, b])$, the first modulus of continuity
$$\omega_1(f, \delta)_{[a, b]} := \omega_1(f, \delta) := \sup_{\substack{x, y \in [a, b] \\ |x - y| \le \delta}} |f(x) - f(y)|, \quad \delta > 0.$$
The fact that $f \in C([a, b])$ is equivalent to $\lim_{\delta \to 0} \omega_1(f, \delta) = 0$; see [19].
We present a series of real-valued neural network approximations, with rates, to a given function.
We first give
Theorem 5.
Let $f \in C([a, b])$, $0 < \alpha < 1$, $n \in \mathbb{N}: n^{1-\alpha} > 2$, $\lambda > 0$, $x \in [a, b]$. Then,
(i)
$$|A_n(f, x) - f(x)| \le \frac{4}{\mathrm{erf}(2\lambda)} \left[ \omega_1\left(f, \frac{1}{n^{\alpha}}\right) + \left(1 - \mathrm{erf}\left(\lambda\left(n^{1-\alpha} - 2\right)\right)\right) \|f\|_{\infty} \right] =: \rho,$$
and
(ii)
$$\|A_n f - f\|_{\infty} \le \rho.$$
We notice that $\lim_{n \to \infty} A_n(f) = f$, pointwise and uniformly.
The speed of convergence is $\max\left(\frac{1}{n^{\alpha}},\ 1 - \mathrm{erf}\left(\lambda\left(n^{1-\alpha} - 2\right)\right)\right)$.
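To make the convergence in Theorem 5 concrete, here is a minimal Python sketch (our illustration; $f$, $[a, b]$, $\lambda$, and $\alpha$ are arbitrary choices) of the operator $A_n$ and of its pointwise error:

```python
import math

lam, alpha = 0.5, 0.5  # arbitrary choices

def chi(u):
    return 0.25 * (math.erf(lam * (u + 1)) - math.erf(lam * (u - 1)))

def A_n(f, x, n, a, b):
    # A_n(f, x) = sum_k f(k/n) chi(nx - k) / sum_k chi(nx - k),
    # with k running from ceil(n a) to floor(n b)
    ks = range(math.ceil(n * a), math.floor(n * b) + 1)
    w = [chi(n * x - k) for k in ks]
    return sum(f(k / n) * wk for k, wk in zip(ks, w)) / sum(w)

f, a, b, x = math.sin, 0.0, 3.0, 1.2
for n in (10, 100, 1000):
    print(n, abs(A_n(f, x, n, a, b) - f(x)))  # error decays roughly like n^{-alpha}
```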
We need
Definition 2
([20]). Let $[a, b] \subset \mathbb{R}$, $\alpha > 0$; $m = \lceil \alpha \rceil \in \mathbb{N}$ ($\lceil \cdot \rceil$ is the ceiling of the number), $f: [a, b] \to \mathbb{R}$. We assume that $f^{(m)} \in L_1([a, b])$. We call the left Caputo fractional derivative of order $\alpha$:
$$D_{*a}^{\alpha} f(x) := \frac{1}{\Gamma(m - \alpha)} \int_a^x (x - t)^{m - \alpha - 1} f^{(m)}(t)\, dt, \quad \forall x \in [a, b].$$
If $\alpha \in \mathbb{N}$, we set $D_{*a}^{\alpha} f := f^{(m)}$, the ordinary real-valued derivative (defined similarly to the numerical one; see [21], p. 83), and we set $D_{*a}^{0} f := f$. See also [22,23,24,25].
By [20], $D_{*a}^{\alpha} f(x)$ exists almost everywhere in $x \in [a, b]$, and $D_{*a}^{\alpha} f \in L_1([a, b])$.
If $\|f^{(m)}\|_{L_{\infty}([a, b])} < \infty$, then, by [20], $D_{*a}^{\alpha} f \in C([a, b])$.
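For readers who wish to experiment with Definition 2 numerically, the following Python sketch (ours; it assumes $0 < \alpha < 1$, so $m = 1$, and uses SciPy quadrature) evaluates the left Caputo derivative and checks it against the known closed form $D_{*0}^{\alpha}\, t^2 = \frac{2 x^{2-\alpha}}{\Gamma(3-\alpha)}$:

```python
import math
from scipy.integrate import quad

def caputo_left(fprime, a, x, alpha):
    # For 0 < alpha < 1 (so m = 1):
    # D_{*a}^alpha f(x) = (1 / Gamma(1 - alpha)) * int_a^x (x - t)^{-alpha} f'(t) dt
    val, _ = quad(lambda t: (x - t) ** (-alpha) * fprime(t), a, x)
    return val / math.gamma(1 - alpha)

alpha, x = 0.5, 1.5
print(caputo_left(lambda t: 2 * t, 0.0, x, alpha))   # numerical value
print(2 * x ** (2 - alpha) / math.gamma(3 - alpha))  # closed form, same value
```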
We mention the following.
Lemma 1
([19]). Let $\alpha > 0$, $\alpha \notin \mathbb{N}$, $m = \lceil \alpha \rceil$, $f \in C^{m-1}([a, b])$ and $f^{(m)} \in L_{\infty}([a, b])$. Then, $D_{*a}^{\alpha} f(a) = 0$.
We also mention the following.
Definition 3
([26]). Let $[a, b] \subset \mathbb{R}$, $\alpha > 0$, $m := \lceil \alpha \rceil$. We assume that $f^{(m)} \in L_1([a, b])$, where $f: [a, b] \to \mathbb{R}$. We call the right Caputo fractional derivative of order $\alpha$:
$$D_{b-}^{\alpha} f(x) := \frac{(-1)^m}{\Gamma(m - \alpha)} \int_x^b (z - x)^{m - \alpha - 1} f^{(m)}(z)\, dz, \quad \forall x \in [a, b].$$
We observe that $D_{b-}^{m} f(x) = (-1)^m f^{(m)}(x)$, for $m \in \mathbb{N}$, and $D_{b-}^{0} f(x) = f(x)$.
By [26], $D_{b-}^{\alpha} f(x)$ exists almost everywhere on $[a, b]$, and $D_{b-}^{\alpha} f \in L_1([a, b])$.
If $\|f^{(m)}\|_{L_{\infty}([a, b])} < \infty$, and $\alpha \notin \mathbb{N}$, then, by [26], $D_{b-}^{\alpha} f \in C([a, b])$.
See also [27].
We need
Lemma 2
([19]). Let $f \in C^{m-1}([a, b])$, $f^{(m)} \in L_{\infty}([a, b])$, $m = \lceil \alpha \rceil$, $\alpha > 0$, $\alpha \notin \mathbb{N}$. Then, $D_{b-}^{\alpha} f(b) = 0$.
We present the following real-valued fractional approximation result by $\mathrm{erf}_{\lambda}$-based neural networks.
Theorem 6.
Let $0 < \alpha, \beta < 1$, $f \in C^1([a, b])$, $x \in [a, b]$, $n \in \mathbb{N}: n^{1-\beta} > 2$, $\lambda > 0$. Then,
(i)
$$|A_n(f, x) - f(x)| \le \frac{4}{\mathrm{erf}(2\lambda)\, \Gamma(\alpha + 1)} \Bigg\{ \frac{\omega_1\left(D_{x-}^{\alpha} f, \frac{1}{n^{\beta}}\right)_{[a, x]} + \omega_1\left(D_{*x}^{\alpha} f, \frac{1}{n^{\beta}}\right)_{[x, b]}}{n^{\alpha \beta}} + \frac{1 - \mathrm{erf}\left(\lambda\left(n^{1-\beta} - 2\right)\right)}{2} \left[ \left\| D_{x-}^{\alpha} f \right\|_{\infty, [a, x]} (x - a)^{\alpha} + \left\| D_{*x}^{\alpha} f \right\|_{\infty, [x, b]} (b - x)^{\alpha} \right] \Bigg\},$$
and
(ii)
$$\|A_n f - f\|_{\infty} \le \frac{4}{\mathrm{erf}(2\lambda)\, \Gamma(\alpha + 1)} \Bigg\{ \frac{\sup_{x \in [a, b]} \omega_1\left(D_{x-}^{\alpha} f, \frac{1}{n^{\beta}}\right)_{[a, x]} + \sup_{x \in [a, b]} \omega_1\left(D_{*x}^{\alpha} f, \frac{1}{n^{\beta}}\right)_{[x, b]}}{n^{\alpha \beta}} + \frac{1 - \mathrm{erf}\left(\lambda\left(n^{1-\beta} - 2\right)\right)}{2} (b - a)^{\alpha} \left[ \sup_{x \in [a, b]} \left\| D_{x-}^{\alpha} f \right\|_{\infty, [a, x]} + \sup_{x \in [a, b]} \left\| D_{*x}^{\alpha} f \right\|_{\infty, [x, b]} \right] \Bigg\}.$$
When $\alpha = \frac{1}{2}$, we derive
Corollary 1.
Let $0 < \beta < 1$, $f \in C^1([a, b])$, $x \in [a, b]$, $n \in \mathbb{N}: n^{1-\beta} > 2$, $\lambda > 0$. Then,
(i)
$$|A_n(f, x) - f(x)| \le \frac{8}{\mathrm{erf}(2\lambda)\sqrt{\pi}} \Bigg\{ \frac{\omega_1\left(D_{x-}^{1/2} f, \frac{1}{n^{\beta}}\right)_{[a, x]} + \omega_1\left(D_{*x}^{1/2} f, \frac{1}{n^{\beta}}\right)_{[x, b]}}{n^{\beta/2}} + \frac{1 - \mathrm{erf}\left(\lambda\left(n^{1-\beta} - 2\right)\right)}{2} \left[ \left\| D_{x-}^{1/2} f \right\|_{\infty, [a, x]} \sqrt{x - a} + \left\| D_{*x}^{1/2} f \right\|_{\infty, [x, b]} \sqrt{b - x} \right] \Bigg\},$$
and
(ii)
$$\|A_n f - f\|_{\infty} \le \frac{8}{\mathrm{erf}(2\lambda)\sqrt{\pi}} \Bigg\{ \frac{\sup_{x \in [a, b]} \omega_1\left(D_{x-}^{1/2} f, \frac{1}{n^{\beta}}\right)_{[a, x]} + \sup_{x \in [a, b]} \omega_1\left(D_{*x}^{1/2} f, \frac{1}{n^{\beta}}\right)_{[x, b]}}{n^{\beta/2}} + \frac{1 - \mathrm{erf}\left(\lambda\left(n^{1-\beta} - 2\right)\right)}{2} \sqrt{b - a} \left[ \sup_{x \in [a, b]} \left\| D_{x-}^{1/2} f \right\|_{\infty, [a, x]} + \sup_{x \in [a, b]} \left\| D_{*x}^{1/2} f \right\|_{\infty, [x, b]} \right] \Bigg\} < \infty.$$

2.2. About the q-Deformed and β-Parametrized Half-Hyperbolic Tangent Function $\varphi_q$

All the next background comes from [28].
Here, we describe the properties of the activation function
$$\varphi_q(t) := \frac{1 - q e^{-\beta t}}{1 + q e^{-\beta t}}, \quad t \in \mathbb{R},$$
where $q, \beta > 0$.
We have that
$$\varphi_q(0) = \frac{1 - q}{1 + q},$$
and
$$\varphi_q(-t) = -\varphi_{\frac{1}{q}}(t), \quad \forall t \in \mathbb{R},$$
hence
$$\varphi_{\frac{1}{q}}(-t) = -\varphi_q(t).$$
It is
$$\lim_{t \to +\infty} \varphi_q(t) = \varphi_q(+\infty) = 1, \quad \lim_{t \to -\infty} \varphi_q(t) = \varphi_q(-\infty) = -1.$$
Furthermore,
$$\varphi_q'(t) = \frac{2 \beta q e^{\beta t}}{\left(e^{\beta t} + q\right)^2} > 0, \quad \forall t \in \mathbb{R};$$
therefore, $\varphi_q$ is strictly increasing. Moreover, in the case of $t < \frac{\ln q}{\beta}$, $\varphi_q$ is strictly concave up, with $\varphi_q''\left(\frac{\ln q}{\beta}\right) = 0$, and in the case of $t > \frac{\ln q}{\beta}$, $\varphi_q$ is strictly concave down.
Clearly, $\varphi_q$ is a shifted sigmoid function with $\varphi_q(0) = \frac{1 - q}{1 + q}$ and $\varphi_q(-x) = -\varphi_{\frac{1}{q}}(x)$, $\forall x \in \mathbb{R}$ (a semi-odd function); see also [28].
We consider the function
$$\phi_q(x) := \frac{1}{4}\left[\varphi_q(x + 1) - \varphi_q(x - 1)\right] > 0,$$
$\forall x \in \mathbb{R}$; $\beta, q > 0$. Notice that $\phi_q(\pm\infty) = 0$, so the $x$-axis is a horizontal asymptote. We have that
$$\phi_q(-x) = \phi_{\frac{1}{q}}(x), \quad \forall x \in \mathbb{R},$$
which is a deformed symmetry.
Next, we have that $\phi_q$ is strictly increasing over $\left(-\infty, \frac{\ln q}{\beta} - 1\right]$ and strictly decreasing over $\left[\frac{\ln q}{\beta} + 1, +\infty\right)$.
Moreover, $\phi_q$ is strictly concave down over $\left[\frac{\ln q}{\beta} - 1, \frac{\ln q}{\beta} + 1\right]$.
Consequently, $\phi_q$ has a bell-type shape over $\mathbb{R}$.
Of course, it holds $\phi_q''\left(\frac{\ln q}{\beta}\right) < 0$. Thus, at $x = \frac{\ln q}{\beta}$, $\phi_q$ attains its maximum value, which is
$$\phi_q\left(\frac{\ln q}{\beta}\right) = \frac{1 - e^{-\beta}}{2\left(1 + e^{-\beta}\right)} = \frac{\varphi_1(1)}{2}.$$
We mention
Theorem 7
([29]). We have that
$$\sum_{i = -\infty}^{\infty} \phi_q(x - i) = 1, \quad \forall x \in \mathbb{R};\ q, \beta > 0.$$
It follows
Theorem 8
([29]). It holds
$$\int_{-\infty}^{\infty} \phi_q(x)\, dx = 1;\ q, \beta > 0.$$
So, $\phi_q$ is a density function on $\mathbb{R}$; $q, \beta > 0$.
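A short numerical check of Theorem 7 and of the maximum value of $\phi_q$ (a sketch of ours; the values of $q$ and $\beta$ are arbitrary) can be carried out in Python:

```python
import math

q, beta = 2.0, 1.0  # arbitrary choices

def varphi(t):
    # varphi_q(t) = (1 - q e^{-beta t}) / (1 + q e^{-beta t})
    e = q * math.exp(-beta * t)
    return (1 - e) / (1 + e)

def phi(x):
    # the bell-shaped kernel (1/4) [varphi_q(x + 1) - varphi_q(x - 1)]
    return 0.25 * (varphi(x + 1) - varphi(x - 1))

# Theorem 7: partition of unity (series truncated to |i| <= 60)
x = 0.7
print(sum(phi(x - i) for i in range(-60, 61)))  # ~ 1.0

# maximum at x = ln(q)/beta, with value (1 - e^{-beta}) / (2 (1 + e^{-beta}))
xm = math.log(q) / beta
print(phi(xm), (1 - math.exp(-beta)) / (2 * (1 + math.exp(-beta))))
```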
We need the following result,
Theorem 9
([29]). Let $0 < \alpha < 1$, and $n \in \mathbb{N}$ with $n^{1-\alpha} > 2$; $q, \beta > 0$. Then,
$$\sum_{\substack{k = -\infty \\ |nx - k| \ge n^{1-\alpha}}}^{\infty} \phi_q(nx - k) < \max\left(q, \frac{1}{q}\right) e^{2\beta}\, e^{-\beta n^{1-\alpha}} = K e^{-\beta n^{1-\alpha}},$$
where $K := \max\left(q, \frac{1}{q}\right) e^{2\beta}$.
Let $\lceil \cdot \rceil$ be the ceiling of a number, and let $\lfloor \cdot \rfloor$ be its integral part.
We mention the following result:
Theorem 10
([29]). Let $x \in [a, b] \subset \mathbb{R}$ and $n \in \mathbb{N}$ so that $\lceil na \rceil \le \lfloor nb \rfloor$. For $q > 0$, we consider the number $\lambda_q > z_0 > 0$ with $\phi_q(z_0) = \phi_q(0)$, where $\beta, \lambda_q > 1$. Then,
$$\frac{1}{\sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} \phi_q(nx - k)} < \max\left(\frac{1}{\phi_q(\lambda_q)}, \frac{1}{\phi_{\frac{1}{q}}\left(\lambda_{\frac{1}{q}}\right)}\right) =: \theta_q.$$
We also mention
Remark 2
([29]). (i) We have that
$$\lim_{n \to +\infty} \sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} \phi_q(nx - k) \ne 1, \quad \text{for at least some } x \in [a, b],$$
where $\beta, q > 0$.
(ii) Let $[a, b] \subset \mathbb{R}$. For large $n$, we always have $\lceil na \rceil \le \lfloor nb \rfloor$. Also, $a \le \frac{k}{n} \le b$, iff $\lceil na \rceil \le k \le \lfloor nb \rfloor$. In general, it holds
$$\sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} \phi_q(nx - k) \le 1.$$
We need
Definition 4.
Let $f \in C([a, b])$ and $n \in \mathbb{N}: \lceil na \rceil \le \lfloor nb \rfloor$. We introduce and define the real-valued linear neural network operators
$$H_n(f, x) := \frac{\sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} f\left(\frac{k}{n}\right) \phi_q(nx - k)}{\sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} \phi_q(nx - k)}, \quad x \in [a, b];\ q, \beta > 0.$$
Clearly, $H_n(f) \in C([a, b])$.
We study here the pointwise and uniform convergence of $H_n(f, x)$ to $f(x)$ with rates.
For convenience, we also call
$$H_n^*(f, x) := \sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} f\left(\frac{k}{n}\right) \phi_q(nx - k);$$
that is,
$$H_n(f, x) = \frac{H_n^*(f, x)}{\sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} \phi_q(nx - k)}.$$
So that
$$H_n(f, x) - f(x) = \frac{H_n^*(f, x) - f(x) \sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} \phi_q(nx - k)}{\sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} \phi_q(nx - k)}.$$
Consequently, we derive that
$$|H_n(f, x) - f(x)| \le \theta_q \left| H_n^*(f, x) - f(x) \sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} \phi_q(nx - k) \right| = \theta_q \left| \sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} \left( f\left(\frac{k}{n}\right) - f(x) \right) \phi_q(nx - k) \right|,$$
where $\theta_q$ is as in Theorem 10. We will estimate the right-hand side of the last quantity.
We present a set of real-valued neural network approximations, with rates, to a given function.
Theorem 11.
Let $f \in C([a, b])$, $0 < \alpha < 1$, $n \in \mathbb{N}: n^{1-\alpha} > 2$, $q, \beta > 0$, $x \in [a, b]$. Then,
(i)
$$|H_n(f, x) - f(x)| \le \theta_q \left[ \omega_1\left(f, \frac{1}{n^{\alpha}}\right) + 2 \|f\|_{\infty} K e^{-\beta n^{1-\alpha}} \right] =: \tau,$$
where $K$ is as in Theorem 9,
and
(ii)
$$\|H_n f - f\|_{\infty} \le \tau.$$
We observe that $\lim_{n \to \infty} H_n(f) = f$, pointwise and uniformly.
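Analogously to $A_n$, the operator $H_n$ admits a direct numerical sketch (ours; the parameter values are arbitrary choices):

```python
import math

q, beta = 2.0, 1.0

def phi(x):
    # (1/4) [varphi_q(x + 1) - varphi_q(x - 1)]
    e1 = q * math.exp(-beta * (x + 1))
    e2 = q * math.exp(-beta * (x - 1))
    return 0.25 * ((1 - e1) / (1 + e1) - (1 - e2) / (1 + e2))

def H_n(f, x, n, a, b):
    # H_n(f, x) = sum_k f(k/n) phi_q(nx - k) / sum_k phi_q(nx - k)
    ks = range(math.ceil(n * a), math.floor(n * b) + 1)
    w = [phi(n * x - k) for k in ks]
    return sum(f(k / n) * wk for k, wk in zip(ks, w)) / sum(w)

f, a, b, x = math.exp, 0.0, 1.0, 0.4
for n in (10, 100, 1000):
    print(n, abs(H_n(f, x, n, a, b) - f(x)))  # error decreases as n grows
```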
Next, we present the following.
Theorem 12.
Let $0 < \alpha, \beta < 1$, $q > 0$, $f \in C^1([a, b])$, $x \in [a, b]$, $n \in \mathbb{N}: n^{1-\beta} > 2$. Then,
(i)
$$|H_n(f, x) - f(x)| \le \frac{\theta_q}{\Gamma(\alpha + 1)} \Bigg\{ \frac{\omega_1\left(D_{x-}^{\alpha} f, \frac{1}{n^{\beta}}\right)_{[a, x]} + \omega_1\left(D_{*x}^{\alpha} f, \frac{1}{n^{\beta}}\right)_{[x, b]}}{n^{\alpha \beta}} + K e^{-\beta n^{1-\beta}} \left[ \left\| D_{x-}^{\alpha} f \right\|_{\infty, [a, x]} (x - a)^{\alpha} + \left\| D_{*x}^{\alpha} f \right\|_{\infty, [x, b]} (b - x)^{\alpha} \right] \Bigg\},$$
and
(ii)
$$\|H_n f - f\|_{\infty} \le \frac{\theta_q}{\Gamma(\alpha + 1)} \Bigg\{ \frac{\sup_{x \in [a, b]} \omega_1\left(D_{x-}^{\alpha} f, \frac{1}{n^{\beta}}\right)_{[a, x]} + \sup_{x \in [a, b]} \omega_1\left(D_{*x}^{\alpha} f, \frac{1}{n^{\beta}}\right)_{[x, b]}}{n^{\alpha \beta}} + (b - a)^{\alpha} K e^{-\beta n^{1-\beta}} \left[ \sup_{x \in [a, b]} \left\| D_{x-}^{\alpha} f \right\|_{\infty, [a, x]} + \sup_{x \in [a, b]} \left\| D_{*x}^{\alpha} f \right\|_{\infty, [x, b]} \right] \Bigg\}.$$
When $\alpha = \frac{1}{2}$, we derive
Corollary 2.
Let $0 < \beta < 1$, $q > 0$, $f \in C^1([a, b])$, $x \in [a, b]$, $n \in \mathbb{N}: n^{1-\beta} > 2$. Then,
(i)
$$|H_n(f, x) - f(x)| \le \frac{2 \theta_q}{\sqrt{\pi}} \Bigg\{ \frac{\omega_1\left(D_{x-}^{1/2} f, \frac{1}{n^{\beta}}\right)_{[a, x]} + \omega_1\left(D_{*x}^{1/2} f, \frac{1}{n^{\beta}}\right)_{[x, b]}}{n^{\beta/2}} + K e^{-\beta n^{1-\beta}} \left[ \left\| D_{x-}^{1/2} f \right\|_{\infty, [a, x]} \sqrt{x - a} + \left\| D_{*x}^{1/2} f \right\|_{\infty, [x, b]} \sqrt{b - x} \right] \Bigg\},$$
and
(ii)
$$\|H_n f - f\|_{\infty} \le \frac{2 \theta_q}{\sqrt{\pi}} \Bigg\{ \frac{\sup_{x \in [a, b]} \omega_1\left(D_{x-}^{1/2} f, \frac{1}{n^{\beta}}\right)_{[a, x]} + \sup_{x \in [a, b]} \omega_1\left(D_{*x}^{1/2} f, \frac{1}{n^{\beta}}\right)_{[x, b]}}{n^{\beta/2}} + \sqrt{b - a}\, K e^{-\beta n^{1-\beta}} \left[ \sup_{x \in [a, b]} \left\| D_{x-}^{1/2} f \right\|_{\infty, [a, x]} + \sup_{x \in [a, b]} \left\| D_{*x}^{1/2} f \right\|_{\infty, [x, b]} \right] \Bigg\} < \infty.$$

3. Combining Sections 2.1 and 2.2

Let $[a, b] \subset \mathbb{R}$ with $a < b$, and let $f \in C([a, b])$. Let also $q, \lambda, \beta > 0$ and $\gamma = \max\left(q, \frac{1}{q}\right)$.
For the next theorems, we call
$${}^{1}L_n(f, x) := A_n(f, x), \quad x \in [a, b],$$
$${}^{2}L_n(f, x) := H_n(f, x), \quad x \in [a, b].$$
Also, we set
$$K_1 = K_1(\lambda) = \frac{4}{\mathrm{erf}(2\lambda)}, \quad K_2 = K_2(q) = \theta_q.$$
Furthermore, we set
$$\hat{\beta}_{1,n} = \hat{\beta}_{1,n}(\lambda, \beta) = 1 - \mathrm{erf}\left(\lambda\left(n^{1-\beta} - 2\right)\right), \quad n \in \mathbb{N},\ \lambda > 0,\ 0 < \beta < 1,$$
$$\hat{\beta}_{2,n} = \hat{\beta}_{2,n}(q, \beta) = 2 \gamma e^{2\beta} e^{-\beta n^{1-\beta}}, \quad n \in \mathbb{N},\ q, \beta > 0,\ 0 < \beta < 1.$$
We present the following.
Theorem 13.
Let $f \in C([a, b])$, $0 < \beta < 1$, $n \in \mathbb{N}: n^{1-\beta} > 2$, $q, \lambda > 0$, $x \in [a, b]$. Then, for $i = 1, 2$,
(i)
$$\left| {}^{i}L_n(f, x) - f(x) \right| \le K_i \left[ \omega_1\left(f, \frac{1}{n^{\beta}}\right) + \hat{\beta}_{i,n} \|f\|_{\infty} \right] =: \rho_i,$$
and
(ii)
$$\left\| {}^{i}L_n f - f \right\|_{\infty} \le \rho_i.$$
We observe that $\lim_{n \to \infty} {}^{i}L_n(f) = f$, pointwise and uniformly.
Proof. 
From Theorems 5 and 11. □
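To see how the two correction terms $\hat{\beta}_{i,n}$ of Theorem 13 behave, one may tabulate them numerically (a sketch of ours; $\lambda$, $q$, and $\beta$ are arbitrary choices, and $K_2 = \theta_q$ is omitted since it requires the number $\lambda_q$ of Theorem 10):

```python
import math

lam, q, b, beta = 0.9, 2.0, 1.0, 0.5  # b plays the role of the activation parameter

def K1():
    return 4.0 / math.erf(2 * lam)

def beta_hat_1(n):
    # 1 - erf(lam (n^{1-beta} - 2))
    return 1.0 - math.erf(lam * (n ** (1 - beta) - 2))

def beta_hat_2(n):
    # 2 gamma e^{2b} e^{-b n^{1-beta}}, with gamma = max(q, 1/q)
    return 2 * max(q, 1 / q) * math.exp(2 * b) * math.exp(-b * n ** (1 - beta))

print(K1())
for n in (10, 100, 1000):
    print(n, beta_hat_1(n), beta_hat_2(n))
# both correction terms tend to 0, so rho_i is driven by omega_1(f, n^{-beta})
```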
Next, we present
Theorem 14.
Let $0 < \alpha, \beta < 1$, $q, \lambda > 0$, $f \in C^1([a, b])$, $x \in [a, b]$, $n \in \mathbb{N}: n^{1-\beta} > 2$. Then, for $i = 1, 2$,
(i)
$$\left| {}^{i}L_n(f, x) - f(x) \right| \le \frac{K_i}{\Gamma(\alpha + 1)} \Bigg\{ \frac{\omega_1\left(D_{x-}^{\alpha} f, \frac{1}{n^{\beta}}\right)_{[a, x]} + \omega_1\left(D_{*x}^{\alpha} f, \frac{1}{n^{\beta}}\right)_{[x, b]}}{n^{\alpha \beta}} + \frac{\hat{\beta}_{i,n}}{2} \left[ \left\| D_{x-}^{\alpha} f \right\|_{\infty, [a, x]} (x - a)^{\alpha} + \left\| D_{*x}^{\alpha} f \right\|_{\infty, [x, b]} (b - x)^{\alpha} \right] \Bigg\},$$
and
(ii)
$$\left\| {}^{i}L_n f - f \right\|_{\infty} \le \frac{K_i}{\Gamma(\alpha + 1)} \Bigg\{ \frac{\sup_{x \in [a, b]} \omega_1\left(D_{x-}^{\alpha} f, \frac{1}{n^{\beta}}\right)_{[a, x]} + \sup_{x \in [a, b]} \omega_1\left(D_{*x}^{\alpha} f, \frac{1}{n^{\beta}}\right)_{[x, b]}}{n^{\alpha \beta}} + (b - a)^{\alpha}\, \frac{\hat{\beta}_{i,n}}{2} \left[ \sup_{x \in [a, b]} \left\| D_{x-}^{\alpha} f \right\|_{\infty, [a, x]} + \sup_{x \in [a, b]} \left\| D_{*x}^{\alpha} f \right\|_{\infty, [x, b]} \right] \Bigg\}.$$
Proof. 
From Theorems 6 and 12. □
When $\alpha = \frac{1}{2}$, we derive
Corollary 3.
Let $0 < \beta < 1$, $q, \lambda > 0$, $f \in C^1([a, b])$, $x \in [a, b]$, $n \in \mathbb{N}: n^{1-\beta} > 2$. Then, for $i = 1, 2$,
(i)
$$\left| {}^{i}L_n(f, x) - f(x) \right| \le \frac{2 K_i}{\sqrt{\pi}} \Bigg\{ \frac{\omega_1\left(D_{x-}^{1/2} f, \frac{1}{n^{\beta}}\right)_{[a, x]} + \omega_1\left(D_{*x}^{1/2} f, \frac{1}{n^{\beta}}\right)_{[x, b]}}{n^{\beta/2}} + \frac{\hat{\beta}_{i,n}}{2} \left[ \left\| D_{x-}^{1/2} f \right\|_{\infty, [a, x]} \sqrt{x - a} + \left\| D_{*x}^{1/2} f \right\|_{\infty, [x, b]} \sqrt{b - x} \right] \Bigg\},$$
and
(ii)
$$\left\| {}^{i}L_n f - f \right\|_{\infty} \le \frac{2 K_i}{\sqrt{\pi}} \Bigg\{ \frac{\sup_{x \in [a, b]} \omega_1\left(D_{x-}^{1/2} f, \frac{1}{n^{\beta}}\right)_{[a, x]} + \sup_{x \in [a, b]} \omega_1\left(D_{*x}^{1/2} f, \frac{1}{n^{\beta}}\right)_{[x, b]}}{n^{\beta/2}} + \sqrt{b - a}\, \frac{\hat{\beta}_{i,n}}{2} \left[ \sup_{x \in [a, b]} \left\| D_{x-}^{1/2} f \right\|_{\infty, [a, x]} + \sup_{x \in [a, b]} \left\| D_{*x}^{1/2} f \right\|_{\infty, [x, b]} \right] \Bigg\} < \infty.$$

4. About Random Motion on Simple Graphs

Here, we follow [16].
Suppose we have a system S of semiaxes with a common origin, radially arranged, and a particle moving randomly on S. Possible applications include the spread of toxic particles in a system of channels or vessels, or the propagation of information in networks.
The mathematical model is the following: let S be the set consisting of $n$ semiaxes $S_1, \ldots, S_n$, $n \ge 2$, with a common origin $0$, and let $X_t$ be the Brownian motion process on S: namely, the diffusion process on S whose infinitesimal generator $L$ is
$$L u = \frac{1}{2} u'',$$
where
$$u = (u_1, \ldots, u_n),$$
together with the continuity conditions (a total of $n - 1$ equations)
$$u_1(0) = \cdots = u_n(0),$$
and the so-called "Kirchhoff condition"
$$u_1'(0) + \cdots + u_n'(0) = 0.$$
This is a Walsh-type Brownian motion (see [30]).
The process $X_t$ behaves like a standard Brownian motion on each of the semiaxes and, when it hits $0$, it continues its motion on the $j$-th semiaxis, $1 \le j \le n$, with probability $\frac{1}{n}$.
For each semiaxis $S_j$, $1 \le j \le n$, it is convenient to use the coordinate $x_j$, $0 \le x_j < \infty$. Notice that if $u = (u_1, \ldots, u_n)$ is a function on S, then its $j$-th component, $u_j$, is a function on $S_j$; thus, $u_j = u_j(x_j)$.
The transition density of $X_t$ is
$$p(t, x_k, y_j) = \frac{2}{n \sqrt{2\pi t}}\, e^{-\frac{(x_k + y_j)^2}{2t}}, \quad \text{if } k \ne j,$$
and
$$p(t, x_k, y_k) = \frac{1}{\sqrt{2\pi t}} \left( e^{-\frac{(x_k - y_k)^2}{2t}} - \frac{n - 2}{n}\, e^{-\frac{(x_k + y_k)^2}{2t}} \right).$$
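The transition density above integrates to one over the whole graph; the following Python sketch (ours; the values of $t$, $x_k$, and $n$ are arbitrary choices) verifies this numerically:

```python
import math
from scipy.integrate import quad

def p_same(t, x, y, n):
    # p(t, x_k, y_k): motion observed on the starting semiaxis
    c = 1.0 / math.sqrt(2 * math.pi * t)
    return c * (math.exp(-(x - y) ** 2 / (2 * t))
                - (n - 2) / n * math.exp(-(x + y) ** 2 / (2 * t)))

def p_other(t, x, y, n):
    # p(t, x_k, y_j), k != j
    return (2 / n) / math.sqrt(2 * math.pi * t) * math.exp(-(x + y) ** 2 / (2 * t))

t, x, n = 0.8, 1.3, 4
own, _ = quad(lambda y: p_same(t, x, y, n), 0, math.inf)
oth, _ = quad(lambda y: p_other(t, x, y, n), 0, math.inf)
print(own + (n - 1) * oth)  # ~ 1.0: total probability over the n semiaxes
```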
We need the following result.
Theorem 15.
Let $t \in [t_1, t_2]$, where $t_1, t_2 > 0$ with $t_1 < t_2$ fixed. We consider a function $g: \mathbb{R} \to \mathbb{R}$ that is bounded on $[0, \infty)$, i.e., there exists $M > 0$ such that $|g(x)| \le M$ for every $x \in [0, \infty)$, and that is Lebesgue measurable on $\mathbb{R}$. Let also $X_t$ be the standard Brownian motion on each of the semiaxes $j = 1, \ldots, n$, as described above. Here, $x_k$ is fixed on the semiaxis $S_k$, $k \in \{1, \ldots, n\}$. We consider the related expected value function
$$r(t) := E_k[g(X_t)] = \int_0^{\infty} g(y_k)\, p(t, x_k, y_k)\, dy_k + \sum_{\substack{j = 1 \\ j \ne k}}^{n} \int_0^{\infty} g(y_j)\, p(t, x_k, y_j)\, dy_j, \quad t \in [t_1, t_2].$$
The function $r(t)$ is continuous in $t$.
Proof. 
First, we observe that for $t \in [t_1, t_2]$ and $k, j \in \{1, \ldots, n\}$ with $k \ne j$,
$$0 < p(t, x_k, y_j) < \frac{2}{\sqrt{2\pi t_1}}.$$
Also, for $t \in [t_1, t_2]$ and $k \in \{1, \ldots, n\}$, it is
$$0 < p(t, x_k, y_k) < \frac{2}{\sqrt{2\pi t_1}}.$$
It is enough to prove that
$$I(t) := \int_0^{\infty} g(y_k)\, p(t, x_k, y_k)\, dy_k$$
is continuous in $t \in [t_1, t_2]$. We have that $|g(y_k)| \le M$; thus, for all $t \in [t_1, t_2]$,
$$|g(y_k)\, p(t, x_k, y_k)| \le \frac{M}{\sqrt{2\pi t_1}} \left( e^{-\frac{(x_k - y_k)^2}{2 t_2}} + e^{-\frac{(x_k + y_k)^2}{2 t_2}} \right),$$
and the right-hand side is integrable in $y_k$ over $[0, \infty)$ and independent of $t$. Furthermore, as $0 < t_N \to t$, with $N \to \infty$, we obtain
$$g(y_k)\, p(t_N, x_k, y_k) \to g(y_k)\, p(t, x_k, y_k), \quad \text{for every } y_k \ge 0.$$
By the dominated convergence theorem, $I(t_N) \to I(t)$, and thus $I(t)$ is continuous in $t$; consequently, the function
$$r(t) := E_k[g(X_t)]$$
is continuous in $t$. □
We also need the next theorem.
Theorem 16.
Let $t \in [t_1, t_2]$, where $t_1, t_2 > 0$ with $t_1 < t_2$ fixed. We consider a function $g: \mathbb{R} \to \mathbb{R}$, which is bounded on $[0, \infty)$ and Lebesgue measurable on $\mathbb{R}$. Let also $X_t$ be the standard Brownian motion on each of the semiaxes $j = 1, \ldots, n$, as described above. Here, $x_k$ is fixed on the semiaxis $S_k$, $k \in \{1, \ldots, n\}$. Then, the related expected value function
$$r(t) := E_k[g(X_t)] = \int_0^{\infty} g(y_k)\, p(t, x_k, y_k)\, dy_k + \sum_{\substack{j = 1 \\ j \ne k}}^{n} \int_0^{\infty} g(y_j)\, p(t, x_k, y_j)\, dy_j, \quad t \in [t_1, t_2],$$
is differentiable in $t$, and
$$\frac{\partial r(t)}{\partial t} = \int_0^{\infty} g(y_k)\, \frac{\partial p(t, x_k, y_k)}{\partial t}\, dy_k + \sum_{\substack{j = 1 \\ j \ne k}}^{n} \int_0^{\infty} g(y_j)\, \frac{\partial p(t, x_k, y_j)}{\partial t}\, dy_j, \quad t \in [t_1, t_2],$$
which is continuous in $t$.
Proof. 
First, we observe that for $t \in [t_1, t_2]$ and $k, j \in \{1, \ldots, n\}$ with $k \ne j$,
$$\frac{\partial p(t, x_k, y_j)}{\partial t} = \frac{1}{n t \sqrt{2\pi t}}\, e^{-\frac{(x_k + y_j)^2}{2t}} \left( \frac{(x_k + y_j)^2}{t} - 1 \right).$$
Also, for $t \in [t_1, t_2]$ and $k \in \{1, \ldots, n\}$, it is
$$\frac{\partial p(t, x_k, y_k)}{\partial t} = \frac{1}{2 t \sqrt{2\pi t}} \left[ e^{-\frac{(x_k - y_k)^2}{2t}} \left( \frac{(x_k - y_k)^2}{t} - 1 \right) - \frac{n - 2}{n}\, e^{-\frac{(x_k + y_k)^2}{2t}} \left( \frac{(x_k + y_k)^2}{t} - 1 \right) \right].$$
Furthermore, for $k \ne j$,
$$\left| \frac{\partial p(t, x_k, y_j)}{\partial t} \right| \le \frac{1}{n t_1 \sqrt{2\pi t_1}}\, e^{-\frac{(x_k + y_j)^2}{2 t_2}} \left( \frac{(x_k + y_j)^2}{t_1} + 1 \right)$$
for every $y_j \in [0, \infty)$,
and
$$\left| \frac{\partial p(t, x_k, y_k)}{\partial t} \right| \le \frac{1}{2 t_1 \sqrt{2\pi t_1}} \left[ e^{-\frac{(x_k - y_k)^2}{2 t_2}} \left( \frac{(x_k - y_k)^2}{t_1} + 1 \right) + \frac{n - 2}{n}\, e^{-\frac{(x_k + y_k)^2}{2 t_2}} \left( \frac{(x_k + y_k)^2}{t_1} + 1 \right) \right]$$
for every $y_k \in [0, \infty)$.
So, $\frac{\partial p(t, x_k, y_j)}{\partial t}$ and $\frac{\partial p(t, x_k, y_k)}{\partial t}$ are bounded, uniformly in $t \in [t_1, t_2]$, by functions that are integrable with respect to $y_j$ and $y_k$, respectively.
We have
$$r(t) := E_k[g(X_t)] = \int_0^{\infty} g(y_k)\, p(t, x_k, y_k)\, dy_k + \sum_{\substack{j = 1 \\ j \ne k}}^{n} \int_0^{\infty} g(y_j)\, p(t, x_k, y_j)\, dy_j, \quad t \in [t_1, t_2].$$
We apply differentiation under the integral sign. We notice that
$$\left| g(y_k)\, \frac{\partial p(t, x_k, y_k)}{\partial t} \right| \le \frac{M}{2 t_1 \sqrt{2\pi t_1}} \left[ e^{-\frac{(x_k - y_k)^2}{2 t_2}} \left( \frac{(x_k - y_k)^2}{t_1} + 1 \right) + \frac{n - 2}{n}\, e^{-\frac{(x_k + y_k)^2}{2 t_2}} \left( \frac{(x_k + y_k)^2}{t_1} + 1 \right) \right],$$
and
$$\left| g(y_j)\, \frac{\partial p(t, x_k, y_j)}{\partial t} \right| \le \frac{M}{n t_1 \sqrt{2\pi t_1}}\, e^{-\frac{(x_k + y_j)^2}{2 t_2}} \left( \frac{(x_k + y_j)^2}{t_1} + 1 \right).$$
Therefore, there exists
$$\frac{\partial r(t)}{\partial t} = \int_0^{\infty} g(y_k)\, \frac{\partial p(t, x_k, y_k)}{\partial t}\, dy_k + \sum_{\substack{j = 1 \\ j \ne k}}^{n} \int_0^{\infty} g(y_j)\, \frac{\partial p(t, x_k, y_j)}{\partial t}\, dy_j, \quad t \in [t_1, t_2],$$
which is continuous in $t$ (same proof as in Theorem 15). □

5. Main Results

We present the following general approximation results of Brownian motion on simple graphs.
Theorem 17.
We consider a function $g: \mathbb{R} \to \mathbb{R}$, which is bounded on $[0, \infty)$ and Lebesgue measurable on $\mathbb{R}$. Let also $r(t) := E_k[g(X_t)]$ be the related expected value function.
If $0 < \beta < 1$, $n \in \mathbb{N}: n^{1-\beta} > 2$, $q, \lambda > 0$, and $t \in [t_1, t_2]$, where $t_1, t_2 > 0$ with $t_1 < t_2$, then, for $i = 1, 2$,
(i)
$$\left| {}^{i}L_n(r, t) - r(t) \right| \le K_i \left[ \omega_1\left(r, \frac{1}{n^{\beta}}\right) + \hat{\beta}_{i,n}\, \|r\|_{\infty, [t_1, t_2]} \right] =: \rho_i,$$
and
(ii)
$$\left\| {}^{i}L_n(r) - r \right\|_{\infty, [t_1, t_2]} \le \rho_i.$$
We observe that $\lim_{n \to \infty} {}^{i}L_n(r) = r$, pointwise and uniformly.
Proof. 
From Theorem 13. □
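As a numerical companion to Theorem 17 (a sketch of ours; $g = \cos$, the graph data, $\lambda$, and $[t_1, t_2]$ are arbitrary choices), one can evaluate $r(t)$ by quadrature and apply the erf-based operator ${}^{1}L_n = A_n$ on $[t_1, t_2]$:

```python
import math
from scipy.integrate import quad

lam, n_axes, x_k = 0.5, 3, 1.0  # arbitrary choices

def p_same(t, y):
    c = 1 / math.sqrt(2 * math.pi * t)
    return c * (math.exp(-(x_k - y) ** 2 / (2 * t))
                - (n_axes - 2) / n_axes * math.exp(-(x_k + y) ** 2 / (2 * t)))

def p_other(t, y):
    return (2 / n_axes) / math.sqrt(2 * math.pi * t) * math.exp(-(x_k + y) ** 2 / (2 * t))

def r(t, g=math.cos):
    # r(t) = E_k[g(X_t)] as in Theorem 15
    own, _ = quad(lambda y: g(y) * p_same(t, y), 0, math.inf)
    oth, _ = quad(lambda y: g(y) * p_other(t, y), 0, math.inf)
    return own + (n_axes - 1) * oth

def chi(u):
    return 0.25 * (math.erf(lam * (u + 1)) - math.erf(lam * (u - 1)))

def L1_n(t, n, t1, t2):
    # the operator ^1L_n = A_n applied to r over [t1, t2]
    ks = range(math.ceil(n * t1), math.floor(n * t2) + 1)
    w = [chi(n * t - k) for k in ks]
    return sum(r(k / n) * wk for k, wk in zip(ks, w)) / sum(w)

t1, t2, t = 0.5, 2.0, 1.1
for n in (10, 50, 100):
    print(n, abs(L1_n(t, n, t1, t2) - r(t)))  # the error shrinks as n grows
```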
Next, we present
Theorem 18.
We consider a function $g: \mathbb{R} \to \mathbb{R}$, which is bounded on $[0, \infty)$ and Lebesgue measurable on $\mathbb{R}$. Let also $r(t) := E_k[g(X_t)]$ be the related expected value function.
If $0 < \alpha, \beta < 1$, $q, \lambda > 0$, $t \in [t_1, t_2]$, where $t_1, t_2 > 0$ with $t_1 < t_2$, and $n \in \mathbb{N}: n^{1-\beta} > 2$, then, for $i = 1, 2$,
(i)
$$\left| {}^{i}L_n(r, t) - r(t) \right| \le \frac{K_i}{\Gamma(\alpha + 1)} \Bigg\{ \frac{\omega_1\left(D_{t-}^{\alpha} r, \frac{1}{n^{\beta}}\right)_{[t_1, t]} + \omega_1\left(D_{*t}^{\alpha} r, \frac{1}{n^{\beta}}\right)_{[t, t_2]}}{n^{\alpha \beta}} + \frac{\hat{\beta}_{i,n}}{2} \left[ \left\| D_{t-}^{\alpha} r \right\|_{\infty, [t_1, t]} (t - t_1)^{\alpha} + \left\| D_{*t}^{\alpha} r \right\|_{\infty, [t, t_2]} (t_2 - t)^{\alpha} \right] \Bigg\},$$
and
(ii)
$$\left\| {}^{i}L_n(r) - r \right\|_{\infty, [t_1, t_2]} \le \frac{K_i}{\Gamma(\alpha + 1)} \Bigg\{ \frac{\sup_{t \in [t_1, t_2]} \omega_1\left(D_{t-}^{\alpha} r, \frac{1}{n^{\beta}}\right)_{[t_1, t]} + \sup_{t \in [t_1, t_2]} \omega_1\left(D_{*t}^{\alpha} r, \frac{1}{n^{\beta}}\right)_{[t, t_2]}}{n^{\alpha \beta}} + (t_2 - t_1)^{\alpha}\, \frac{\hat{\beta}_{i,n}}{2} \left[ \sup_{t \in [t_1, t_2]} \left\| D_{t-}^{\alpha} r \right\|_{\infty, [t_1, t]} + \sup_{t \in [t_1, t_2]} \left\| D_{*t}^{\alpha} r \right\|_{\infty, [t, t_2]} \right] \Bigg\}.$$
Proof. 
From Theorem 14. □
When $\alpha = \frac{1}{2}$, we derive
Corollary 4.
We consider a function $g: \mathbb{R} \to \mathbb{R}$, which is bounded on $[0, \infty)$ and Lebesgue measurable on $\mathbb{R}$. Let also $r(t) := E_k[g(X_t)]$ be the related expected value function.
If $0 < \beta < 1$, $q, \lambda > 0$, $t \in [t_1, t_2]$, and $n \in \mathbb{N}: n^{1-\beta} > 2$, then, for $i = 1, 2$,
(i)
$$\left| {}^{i}L_n(r, t) - r(t) \right| \le \frac{2 K_i}{\sqrt{\pi}} \Bigg\{ \frac{\omega_1\left(D_{t-}^{1/2} r, \frac{1}{n^{\beta}}\right)_{[t_1, t]} + \omega_1\left(D_{*t}^{1/2} r, \frac{1}{n^{\beta}}\right)_{[t, t_2]}}{n^{\beta/2}} + \frac{\hat{\beta}_{i,n}}{2} \left[ \left\| D_{t-}^{1/2} r \right\|_{\infty, [t_1, t]} \sqrt{t - t_1} + \left\| D_{*t}^{1/2} r \right\|_{\infty, [t, t_2]} \sqrt{t_2 - t} \right] \Bigg\},$$
and
(ii)
$$\left\| {}^{i}L_n(r) - r \right\|_{\infty, [t_1, t_2]} \le \frac{2 K_i}{\sqrt{\pi}} \Bigg\{ \frac{\sup_{t \in [t_1, t_2]} \omega_1\left(D_{t-}^{1/2} r, \frac{1}{n^{\beta}}\right)_{[t_1, t]} + \sup_{t \in [t_1, t_2]} \omega_1\left(D_{*t}^{1/2} r, \frac{1}{n^{\beta}}\right)_{[t, t_2]}}{n^{\beta/2}} + \sqrt{t_2 - t_1}\, \frac{\hat{\beta}_{i,n}}{2} \left[ \sup_{t \in [t_1, t_2]} \left\| D_{t-}^{1/2} r \right\|_{\infty, [t_1, t]} + \sup_{t \in [t_1, t_2]} \left\| D_{*t}^{1/2} r \right\|_{\infty, [t, t_2]} \right] \Bigg\} < \infty.$$
Proof. 
From Corollary 3. □
We continue with
Theorem 19.
We consider a function $g: \mathbb{R} \to \mathbb{R}$, which is bounded on $[0, \infty)$ and Lebesgue measurable on $\mathbb{R}$. Let also $r(t) := E_k[g(X_t)]$ be the related expected value function.
If $0 < \beta < 1$, $n \in \mathbb{N}: n^{1-\beta} > 2$, $q, \lambda > 0$, and $t \in [t_1, t_2]$, where $t_1, t_2 > 0$ with $t_1 < t_2$, then, for $i = 1, 2$,
(i)
$$\left| {}^{i}L_n\left(\frac{\partial r}{\partial t}, t\right) - \frac{\partial r(t)}{\partial t} \right| \le K_i \left[ \omega_1\left(\frac{\partial r}{\partial t}, \frac{1}{n^{\beta}}\right) + \hat{\beta}_{i,n} \left\| \frac{\partial r}{\partial t} \right\|_{\infty, [t_1, t_2]} \right] =: \rho_i,$$
and
(ii)
$$\left\| {}^{i}L_n\left(\frac{\partial r}{\partial t}\right) - \frac{\partial r}{\partial t} \right\|_{\infty, [t_1, t_2]} \le \rho_i.$$
We observe that $\lim_{n \to \infty} {}^{i}L_n\left(\frac{\partial r}{\partial t}\right) = \frac{\partial r}{\partial t}$, pointwise and uniformly.
Proof. 
From Theorem 13. □

6. Applications

Let $g: \mathbb{R} \to \mathbb{R}$ be a function that is bounded on $[0, \infty)$ and Lebesgue measurable on $\mathbb{R}$, and let $t \in [t_1, t_2]$, where $t_1, t_2 > 0$ with $t_1 < t_2$. For the Brownian motion on simple graphs $X_t$, we will use the following notations:
$$r(t) := E_k[g(X_t)] = \int_0^{\infty} g(y_k)\, p(t, x_k, y_k)\, dy_k + \sum_{\substack{j = 1 \\ j \ne k}}^{n} \int_0^{\infty} g(y_j)\, p(t, x_k, y_j)\, dy_j =: E_k[g(X_t)]^{(0)},$$
and
$$\frac{\partial r(t)}{\partial t} = \int_0^{\infty} g(y_k)\, \frac{\partial p(t, x_k, y_k)}{\partial t}\, dy_k + \sum_{\substack{j = 1 \\ j \ne k}}^{n} \int_0^{\infty} g(y_j)\, \frac{\partial p(t, x_k, y_j)}{\partial t}\, dy_j =: E_k[g(X_t)]^{(1)}.$$
We can apply our main results to the function $g(W) = W$. Consider the function $g: \mathbb{R} \to \mathbb{R}$, where $g(x) = x$ for every $x \in \mathbb{R}$. Let also $W = X_t$ be the Brownian motion on simple graphs. Then, the expectation
$$E_k[W](t) = \int_0^{\infty} y_k\, p(t, x_k, y_k)\, dy_k + \sum_{\substack{j = 1 \\ j \ne k}}^{n} \int_0^{\infty} y_j\, p(t, x_k, y_j)\, dy_j$$
is continuous in $t$.
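Since $|X_t|$ behaves as a reflected Brownian motion, $E_k[W](t)$ should coincide with $E|x_k + B_t| = \sqrt{2t/\pi}\, e^{-x_k^2/(2t)} + x_k\left(2\Phi\left(x_k/\sqrt{t}\right) - 1\right)$, where $\Phi$ is the standard normal distribution function; the following Python sketch (ours, offered only as a sanity check with arbitrary parameter values) compares the quadrature value against this closed form:

```python
import math
from scipy.integrate import quad

n_axes, x_k = 5, 0.7  # arbitrary choices

def p_same(t, y):
    c = 1 / math.sqrt(2 * math.pi * t)
    return c * (math.exp(-(x_k - y) ** 2 / (2 * t))
                - (n_axes - 2) / n_axes * math.exp(-(x_k + y) ** 2 / (2 * t)))

def p_other(t, y):
    return (2 / n_axes) / math.sqrt(2 * math.pi * t) * math.exp(-(x_k + y) ** 2 / (2 * t))

def E_W(t):
    own, _ = quad(lambda y: y * p_same(t, y), 0, math.inf)
    oth, _ = quad(lambda y: y * p_other(t, y), 0, math.inf)
    return own + (n_axes - 1) * oth

def E_abs_bm(t):
    # E|x + B_t| = sqrt(2t/pi) e^{-x^2/(2t)} + x (2 Phi(x/sqrt(t)) - 1)
    Phi = 0.5 * (1 + math.erf(x_k / math.sqrt(2 * t)))
    return math.sqrt(2 * t / math.pi) * math.exp(-x_k ** 2 / (2 * t)) + x_k * (2 * Phi - 1)

for t in (0.5, 1.0, 2.0):
    print(t, E_W(t), E_abs_bm(t))  # the two columns agree
```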
Moreover,
Corollary 5.
Let $0 < \beta < 1$, $n \in \mathbb{N}: n^{1-\beta} > 2$, $q, \lambda > 0$, and $t \in [t_1, t_2]$, where $t_1, t_2 > 0$ with $t_1 < t_2$; then, for $i = 1, 2$ and $j = 0, 1$,
(i)
$$\left| {}^{i}L_n\left(E_k[W]^{(j)}\right)(t) - E_k[W]^{(j)}(t) \right| \le K_i \left[ \omega_1\left(E_k[W]^{(j)}, \frac{1}{n^{\beta}}\right) + \hat{\beta}_{i,n} \left\| E_k[W]^{(j)} \right\|_{\infty, [t_1, t_2]} \right] =: \rho_i,$$
and
(ii)
$$\left\| {}^{i}L_n\left(E_k[W]^{(j)}\right) - E_k[W]^{(j)} \right\|_{\infty, [t_1, t_2]} \le \rho_i.$$
We observe that $\lim_{n \to \infty} {}^{i}L_n\left(E_k[W]^{(j)}\right) = E_k[W]^{(j)}$, pointwise and uniformly.
Proof. 
From Theorems 17 and 19. □
Next, we present
Corollary 6.
Let $0 < \alpha, \beta < 1$, $q, \lambda > 0$, $t \in [t_1, t_2]$, where $t_1, t_2 > 0$ with $t_1 < t_2$, and $n \in \mathbb{N}: n^{1-\beta} > 2$. Then, for $i = 1, 2$,
(i)
$$\left| {}^{i}L_n(E_k[W])(t) - E_k[W](t) \right| \le \frac{K_i}{\Gamma(\alpha + 1)} \Bigg\{ \frac{\omega_1\left(D_{t-}^{\alpha} E_k[W], \frac{1}{n^{\beta}}\right)_{[t_1, t]} + \omega_1\left(D_{*t}^{\alpha} E_k[W], \frac{1}{n^{\beta}}\right)_{[t, t_2]}}{n^{\alpha \beta}} + \frac{\hat{\beta}_{i,n}}{2} \left[ \left\| D_{t-}^{\alpha} E_k[W] \right\|_{\infty, [t_1, t]} (t - t_1)^{\alpha} + \left\| D_{*t}^{\alpha} E_k[W] \right\|_{\infty, [t, t_2]} (t_2 - t)^{\alpha} \right] \Bigg\},$$
and
(ii)
$$\left\| {}^{i}L_n(E_k[W]) - E_k[W] \right\|_{\infty, [t_1, t_2]} \le \frac{K_i}{\Gamma(\alpha + 1)} \Bigg\{ \frac{\sup_{t \in [t_1, t_2]} \omega_1\left(D_{t-}^{\alpha} E_k[W], \frac{1}{n^{\beta}}\right)_{[t_1, t]} + \sup_{t \in [t_1, t_2]} \omega_1\left(D_{*t}^{\alpha} E_k[W], \frac{1}{n^{\beta}}\right)_{[t, t_2]}}{n^{\alpha \beta}} + (t_2 - t_1)^{\alpha}\, \frac{\hat{\beta}_{i,n}}{2} \left[ \sup_{t \in [t_1, t_2]} \left\| D_{t-}^{\alpha} E_k[W] \right\|_{\infty, [t_1, t]} + \sup_{t \in [t_1, t_2]} \left\| D_{*t}^{\alpha} E_k[W] \right\|_{\infty, [t, t_2]} \right] \Bigg\}.$$
Proof. 
From Theorem 18. □
When $\alpha = \frac{1}{2}$, we derive
Corollary 7.
Let $0 < \beta < 1$, $q, \lambda > 0$, $t \in [t_1, t_2]$, and $n \in \mathbb{N}: n^{1-\beta} > 2$. Then, for $i = 1, 2$,
(i)
$$\left| {}^{i}L_n(E_k[W])(t) - E_k[W](t) \right| \le \frac{2 K_i}{\sqrt{\pi}} \Bigg\{ \frac{\omega_1\left(D_{t-}^{1/2} E_k[W], \frac{1}{n^{\beta}}\right)_{[t_1, t]} + \omega_1\left(D_{*t}^{1/2} E_k[W], \frac{1}{n^{\beta}}\right)_{[t, t_2]}}{n^{\beta/2}} + \frac{\hat{\beta}_{i,n}}{2} \left[ \left\| D_{t-}^{1/2} E_k[W] \right\|_{\infty, [t_1, t]} \sqrt{t - t_1} + \left\| D_{*t}^{1/2} E_k[W] \right\|_{\infty, [t, t_2]} \sqrt{t_2 - t} \right] \Bigg\},$$
and
(ii)
$$\left\| {}^{i}L_n(E_k[W]) - E_k[W] \right\|_{\infty, [t_1, t_2]} \le \frac{2 K_i}{\sqrt{\pi}} \Bigg\{ \frac{\sup_{t \in [t_1, t_2]} \omega_1\left(D_{t-}^{1/2} E_k[W], \frac{1}{n^{\beta}}\right)_{[t_1, t]} + \sup_{t \in [t_1, t_2]} \omega_1\left(D_{*t}^{1/2} E_k[W], \frac{1}{n^{\beta}}\right)_{[t, t_2]}}{n^{\beta/2}} + \sqrt{t_2 - t_1}\, \frac{\hat{\beta}_{i,n}}{2} \left[ \sup_{t \in [t_1, t_2]} \left\| D_{t-}^{1/2} E_k[W] \right\|_{\infty, [t_1, t]} + \sup_{t \in [t_1, t_2]} \left\| D_{*t}^{1/2} E_k[W] \right\|_{\infty, [t, t_2]} \right] \Bigg\} < \infty.$$
Proof. 
From Corollary 4. □
For the next application, we consider the function $g: \mathbb{R} \to \mathbb{R}$, where $g(x) = \cos x$ for every $x \in \mathbb{R}$. Let also $W = X_t$ be the Brownian motion on simple graphs. Then, the expectation
$$E_k[\cos W](t) = \int_0^{\infty} \cos(y_k)\, p(t, x_k, y_k)\, dy_k + \sum_{\substack{j = 1 \\ j \ne k}}^{n} \int_0^{\infty} \cos(y_j)\, p(t, x_k, y_j)\, dy_j$$
is continuous in $t$.
Moreover,
Corollary 8.
Let $0 < \beta < 1$, $n \in \mathbb{N}: n^{1-\beta} > 2$, $q, \lambda > 0$, and $t \in [t_1, t_2]$, where $t_1, t_2 > 0$ with $t_1 < t_2$; then, for $i = 1, 2$ and $j = 0, 1$,
(i)
$$\left| {}^{i}L_n\left(E_k[\cos W]^{(j)}\right)(t) - E_k[\cos W]^{(j)}(t) \right| \le K_i \left[ \omega_1\left(E_k[\cos W]^{(j)}, \frac{1}{n^{\beta}}\right) + \hat{\beta}_{i,n} \left\| E_k[\cos W]^{(j)} \right\|_{\infty, [t_1, t_2]} \right] =: \rho_i,$$
and
(ii)
$$\left\| {}^{i}L_n\left(E_k[\cos W]^{(j)}\right) - E_k[\cos W]^{(j)} \right\|_{\infty, [t_1, t_2]} \le \rho_i.$$
We observe that $\lim_{n \to \infty} {}^{i}L_n\left(E_k[\cos W]^{(j)}\right) = E_k[\cos W]^{(j)}$, pointwise and uniformly.
Proof. 
From Theorems 17 and 19. □
Next, we present
Corollary 9.
Let $0 < \alpha, \beta < 1$, $q, \lambda > 0$, $t \in [t_1, t_2]$, where $t_1, t_2 > 0$ with $t_1 < t_2$, and $n \in \mathbb{N}: n^{1-\beta} > 2$. Then, for $i = 1, 2$,
(i)
$$\left| {}^{i}L_n(E_k[\cos W])(t) - E_k[\cos W](t) \right| \le \frac{K_i}{\Gamma(\alpha + 1)} \Bigg\{ \frac{\omega_1\left(D_{t-}^{\alpha} E_k[\cos W], \frac{1}{n^{\beta}}\right)_{[t_1, t]} + \omega_1\left(D_{*t}^{\alpha} E_k[\cos W], \frac{1}{n^{\beta}}\right)_{[t, t_2]}}{n^{\alpha \beta}} + \frac{\hat{\beta}_{i,n}}{2} \left[ \left\| D_{t-}^{\alpha} E_k[\cos W] \right\|_{\infty, [t_1, t]} (t - t_1)^{\alpha} + \left\| D_{*t}^{\alpha} E_k[\cos W] \right\|_{\infty, [t, t_2]} (t_2 - t)^{\alpha} \right] \Bigg\},$$
and
(ii)
$$\left\| {}^{i}L_n(E_k[\cos W]) - E_k[\cos W] \right\|_{\infty, [t_1, t_2]} \le \frac{K_i}{\Gamma(\alpha + 1)} \Bigg\{ \frac{\sup_{t \in [t_1, t_2]} \omega_1\left(D_{t-}^{\alpha} E_k[\cos W], \frac{1}{n^{\beta}}\right)_{[t_1, t]} + \sup_{t \in [t_1, t_2]} \omega_1\left(D_{*t}^{\alpha} E_k[\cos W], \frac{1}{n^{\beta}}\right)_{[t, t_2]}}{n^{\alpha \beta}} + (t_2 - t_1)^{\alpha}\, \frac{\hat{\beta}_{i,n}}{2} \left[ \sup_{t \in [t_1, t_2]} \left\| D_{t-}^{\alpha} E_k[\cos W] \right\|_{\infty, [t_1, t]} + \sup_{t \in [t_1, t_2]} \left\| D_{*t}^{\alpha} E_k[\cos W] \right\|_{\infty, [t, t_2]} \right] \Bigg\}.$$
Proof. 
From Theorem 18. □
When $\alpha = \frac{1}{2}$, we derive
Corollary 10.
Let $0 < \beta < 1$, $q, \lambda > 0$, $t \in [t_1, t_2]$, and $n \in \mathbb{N}: n^{1-\beta} > 2$. Then, for $i = 1, 2$,
(i)
$$\left| {}^{i}L_n(E_k[\cos W])(t) - E_k[\cos W](t) \right| \le \frac{2 K_i}{\sqrt{\pi}} \Bigg\{ \frac{\omega_1\left(D_{t-}^{1/2} E_k[\cos W], \frac{1}{n^{\beta}}\right)_{[t_1, t]} + \omega_1\left(D_{*t}^{1/2} E_k[\cos W], \frac{1}{n^{\beta}}\right)_{[t, t_2]}}{n^{\beta/2}} + \frac{\hat{\beta}_{i,n}}{2} \left[ \left\| D_{t-}^{1/2} E_k[\cos W] \right\|_{\infty, [t_1, t]} \sqrt{t - t_1} + \left\| D_{*t}^{1/2} E_k[\cos W] \right\|_{\infty, [t, t_2]} \sqrt{t_2 - t} \right] \Bigg\},$$
and
(ii)
$$\left\| {}^{i}L_n(E_k[\cos W]) - E_k[\cos W] \right\|_{\infty, [t_1, t_2]} \le \frac{2 K_i}{\sqrt{\pi}} \Bigg\{ \frac{\sup_{t \in [t_1, t_2]} \omega_1\left(D_{t-}^{1/2} E_k[\cos W], \frac{1}{n^{\beta}}\right)_{[t_1, t]} + \sup_{t \in [t_1, t_2]} \omega_1\left(D_{*t}^{1/2} E_k[\cos W], \frac{1}{n^{\beta}}\right)_{[t, t_2]}}{n^{\beta/2}} + \sqrt{t_2 - t_1}\, \frac{\hat{\beta}_{i,n}}{2} \left[ \sup_{t \in [t_1, t_2]} \left\| D_{t-}^{1/2} E_k[\cos W] \right\|_{\infty, [t_1, t]} + \sup_{t \in [t_1, t_2]} \left\| D_{*t}^{1/2} E_k[\cos W] \right\|_{\infty, [t, t_2]} \right] \Bigg\} < \infty.$$
Proof. 
From Corollary 4. □
Let us now consider the function $g: \mathbb{R} \to \mathbb{R}$, where $g(x) = \tanh x$ for every $x \in \mathbb{R}$. Let also $W = X_t$ be the Brownian motion on simple graphs. Then, the expectation
$$E_k[\tanh W](t) = \int_0^{\infty} \tanh(y_k)\, p(t, x_k, y_k)\, dy_k + \sum_{\substack{j = 1 \\ j \ne k}}^{n} \int_0^{\infty} \tanh(y_j)\, p(t, x_k, y_j)\, dy_j$$
is continuous in $t$.
Moreover,
Corollary 11.
Let $0 < \beta < 1$, $n \in \mathbb{N}: n^{1-\beta} > 2$, $q, \lambda > 0$, and $t \in [t_1, t_2]$, where $t_1, t_2 > 0$ with $t_1 < t_2$; then, for $i = 1, 2$ and $j = 0, 1$,
(i)
$$\left| {}^{i}L_n\left(E_k[\tanh W]^{(j)}\right)(t) - E_k[\tanh W]^{(j)}(t) \right| \le K_i \left[ \omega_1\left(E_k[\tanh W]^{(j)}, \frac{1}{n^{\beta}}\right) + \hat{\beta}_{i,n} \left\| E_k[\tanh W]^{(j)} \right\|_{\infty, [t_1, t_2]} \right] =: \rho_i,$$
and
(ii)
$$\left\| {}^{i}L_n\left(E_k[\tanh W]^{(j)}\right) - E_k[\tanh W]^{(j)} \right\|_{\infty, [t_1, t_2]} \le \rho_i.$$
We observe that $\lim_{n \to \infty} {}^{i}L_n\left(E_k[\tanh W]^{(j)}\right) = E_k[\tanh W]^{(j)}$, pointwise and uniformly.
Proof. 
From Theorems 17 and 19. □
Next, we present
Corollary 12.
Let $0 < \alpha, \beta < 1$, $q, \lambda > 0$, $t \in [t_1, t_2]$, where $t_1, t_2 > 0$ with $t_1 < t_2$, and $n \in \mathbb{N}: n^{1-\beta} > 2$. Then, for $i = 1, 2$,
(i)
$$\left| {}^{i}L_n(E_k[\tanh W])(t) - E_k[\tanh W](t) \right| \le \frac{K_i}{\Gamma(\alpha + 1)} \Bigg\{ \frac{\omega_1\left(D_{t-}^{\alpha} E_k[\tanh W], \frac{1}{n^{\beta}}\right)_{[t_1, t]} + \omega_1\left(D_{*t}^{\alpha} E_k[\tanh W], \frac{1}{n^{\beta}}\right)_{[t, t_2]}}{n^{\alpha \beta}} + \frac{\hat{\beta}_{i,n}}{2} \left[ \left\| D_{t-}^{\alpha} E_k[\tanh W] \right\|_{\infty, [t_1, t]} (t - t_1)^{\alpha} + \left\| D_{*t}^{\alpha} E_k[\tanh W] \right\|_{\infty, [t, t_2]} (t_2 - t)^{\alpha} \right] \Bigg\},$$
and
(ii)
$$\left\| {}^{i}L_n(E_k[\tanh W]) - E_k[\tanh W] \right\|_{\infty, [t_1, t_2]} \le \frac{K_i}{\Gamma(\alpha + 1)} \Bigg\{ \frac{\sup_{t \in [t_1, t_2]} \omega_1\left(D_{t-}^{\alpha} E_k[\tanh W], \frac{1}{n^{\beta}}\right)_{[t_1, t]} + \sup_{t \in [t_1, t_2]} \omega_1\left(D_{*t}^{\alpha} E_k[\tanh W], \frac{1}{n^{\beta}}\right)_{[t, t_2]}}{n^{\alpha \beta}} + (t_2 - t_1)^{\alpha}\, \frac{\hat{\beta}_{i,n}}{2} \left[ \sup_{t \in [t_1, t_2]} \left\| D_{t-}^{\alpha} E_k[\tanh W] \right\|_{\infty, [t_1, t]} + \sup_{t \in [t_1, t_2]} \left\| D_{*t}^{\alpha} E_k[\tanh W] \right\|_{\infty, [t, t_2]} \right] \Bigg\}.$$
Proof. 
From Theorem 18. □
When $\alpha = \frac{1}{2}$, we derive
Corollary 13.
Let $0 < \beta < 1$, $q, \lambda > 0$, $t \in [t_1, t_2]$, and $n \in \mathbb{N}: n^{1-\beta} > 2$. Then, for $i = 1, 2$,
(i)
$$\left| {}^{i}L_n(E_k[\tanh W])(t) - E_k[\tanh W](t) \right| \le \frac{2 K_i}{\sqrt{\pi}} \Bigg\{ \frac{\omega_1\left(D_{t-}^{1/2} E_k[\tanh W], \frac{1}{n^{\beta}}\right)_{[t_1, t]} + \omega_1\left(D_{*t}^{1/2} E_k[\tanh W], \frac{1}{n^{\beta}}\right)_{[t, t_2]}}{n^{\beta/2}} + \frac{\hat{\beta}_{i,n}}{2} \left[ \left\| D_{t-}^{1/2} E_k[\tanh W] \right\|_{\infty, [t_1, t]} \sqrt{t - t_1} + \left\| D_{*t}^{1/2} E_k[\tanh W] \right\|_{\infty, [t, t_2]} \sqrt{t_2 - t} \right] \Bigg\},$$
and
(ii)
$$\left\| {}^{i}L_n(E_k[\tanh W]) - E_k[\tanh W] \right\|_{\infty, [t_1, t_2]} \le \frac{2 K_i}{\sqrt{\pi}} \Bigg\{ \frac{\sup_{t \in [t_1, t_2]} \omega_1\left(D_{t-}^{1/2} E_k[\tanh W], \frac{1}{n^{\beta}}\right)_{[t_1, t]} + \sup_{t \in [t_1, t_2]} \omega_1\left(D_{*t}^{1/2} E_k[\tanh W], \frac{1}{n^{\beta}}\right)_{[t, t_2]}}{n^{\beta/2}} + \sqrt{t_2 - t_1}\, \frac{\hat{\beta}_{i,n}}{2} \left[ \sup_{t \in [t_1, t_2]} \left\| D_{t-}^{1/2} E_k[\tanh W] \right\|_{\infty, [t_1, t]} + \sup_{t \in [t_1, t_2]} \left\| D_{*t}^{1/2} E_k[\tanh W] \right\|_{\infty, [t, t_2]} \right] \Bigg\} < \infty.$$
Proof. 
From Corollary 4. □
In the following, we consider the function $g: \mathbb{R} \to \mathbb{R}$, where $g(x) = e^{-\mu x}$, $\mu > 0$, for every $x \in \mathbb{R}$. Let also $W = X_t$ be the Brownian motion on simple graphs. Then, the expectation
$$E_k\left[e^{-\mu W}\right](t) = \int_0^{\infty} e^{-\mu y_k}\, p(t, x_k, y_k)\, dy_k + \sum_{\substack{j = 1 \\ j \ne k}}^{n} \int_0^{\infty} e^{-\mu y_j}\, p(t, x_k, y_j)\, dy_j$$
is continuous in $t$.
Moreover,
Corollary 14.
Let $0 < \beta < 1$, $n \in \mathbb{N}: n^{1-\beta} > 2$, $q, \lambda > 0$, and $t \in [t_1, t_2]$, where $t_1, t_2 > 0$ with $t_1 < t_2$; then, for $i = 1, 2$ and $j = 0, 1$,
(i)
$$\left| {}^{i}L_n\left(E_k\left[e^{-\mu W}\right]^{(j)}\right)(t) - E_k\left[e^{-\mu W}\right]^{(j)}(t) \right| \le K_i \left[ \omega_1\left(E_k\left[e^{-\mu W}\right]^{(j)}, \frac{1}{n^{\beta}}\right) + \hat{\beta}_{i,n} \left\| E_k\left[e^{-\mu W}\right]^{(j)} \right\|_{\infty, [t_1, t_2]} \right] =: \rho_i,$$
and
(ii)
$$\left\| {}^{i}L_n\left(E_k\left[e^{-\mu W}\right]^{(j)}\right) - E_k\left[e^{-\mu W}\right]^{(j)} \right\|_{\infty, [t_1, t_2]} \le \rho_i.$$
We observe that $\lim_{n \to \infty} {}^{i}L_n\left(E_k\left[e^{-\mu W}\right]^{(j)}\right) = E_k\left[e^{-\mu W}\right]^{(j)}$, pointwise and uniformly.
Proof. 
From Theorems 17 and 19. □
Next, we present
Corollary 15.
Let $0 < \alpha, \beta < 1$, $q, \lambda > 0$, $t \in [t_1, t_2]$, where $t_1, t_2 > 0$ with $t_1 < t_2$, and $n \in \mathbb{N}: n^{1-\beta} > 2$. Then, for $i = 1, 2$,
(i)
$$\left| {}^{i}L_n\left(E_k\left[e^{-\mu W}\right]\right)(t) - E_k\left[e^{-\mu W}\right](t) \right| \le \frac{K_i}{\Gamma(\alpha + 1)} \Bigg\{ \frac{\omega_1\left(D_{t-}^{\alpha} E_k\left[e^{-\mu W}\right], \frac{1}{n^{\beta}}\right)_{[t_1, t]} + \omega_1\left(D_{*t}^{\alpha} E_k\left[e^{-\mu W}\right], \frac{1}{n^{\beta}}\right)_{[t, t_2]}}{n^{\alpha \beta}} + \frac{\hat{\beta}_{i,n}}{2} \left[ \left\| D_{t-}^{\alpha} E_k\left[e^{-\mu W}\right] \right\|_{\infty, [t_1, t]} (t - t_1)^{\alpha} + \left\| D_{*t}^{\alpha} E_k\left[e^{-\mu W}\right] \right\|_{\infty, [t, t_2]} (t_2 - t)^{\alpha} \right] \Bigg\},$$
and
(ii)
$$\left\| {}^{i}L_n\left(E_k\left[e^{-\mu W}\right]\right) - E_k\left[e^{-\mu W}\right] \right\|_{\infty, [t_1, t_2]} \le \frac{K_i}{\Gamma(\alpha + 1)} \Bigg\{ \frac{\sup_{t \in [t_1, t_2]} \omega_1\left(D_{t-}^{\alpha} E_k\left[e^{-\mu W}\right], \frac{1}{n^{\beta}}\right)_{[t_1, t]} + \sup_{t \in [t_1, t_2]} \omega_1\left(D_{*t}^{\alpha} E_k\left[e^{-\mu W}\right], \frac{1}{n^{\beta}}\right)_{[t, t_2]}}{n^{\alpha \beta}} + (t_2 - t_1)^{\alpha}\, \frac{\hat{\beta}_{i,n}}{2} \left[ \sup_{t \in [t_1, t_2]} \left\| D_{t-}^{\alpha} E_k\left[e^{-\mu W}\right] \right\|_{\infty, [t_1, t]} + \sup_{t \in [t_1, t_2]} \left\| D_{*t}^{\alpha} E_k\left[e^{-\mu W}\right] \right\|_{\infty, [t, t_2]} \right] \Bigg\}.$$
Proof. 
From Theorem 18. □
When $\alpha = \frac{1}{2}$, we derive
Corollary 16.
Let $0 < \beta < 1$, $q, \lambda > 0$, $t \in [t_1, t_2]$, and $n \in \mathbb{N}: n^{1-\beta} > 2$. Then, for $i = 1, 2$,
(i)
$$\left| {}^{i}L_n\left(E_k\left[e^{-\mu W}\right]\right)(t) - E_k\left[e^{-\mu W}\right](t) \right| \le \frac{2 K_i}{\sqrt{\pi}} \Bigg\{ \frac{\omega_1\left(D_{t-}^{1/2} E_k\left[e^{-\mu W}\right], \frac{1}{n^{\beta}}\right)_{[t_1, t]} + \omega_1\left(D_{*t}^{1/2} E_k\left[e^{-\mu W}\right], \frac{1}{n^{\beta}}\right)_{[t, t_2]}}{n^{\beta/2}} + \frac{\hat{\beta}_{i,n}}{2} \left[ \left\| D_{t-}^{1/2} E_k\left[e^{-\mu W}\right] \right\|_{\infty, [t_1, t]} \sqrt{t - t_1} + \left\| D_{*t}^{1/2} E_k\left[e^{-\mu W}\right] \right\|_{\infty, [t, t_2]} \sqrt{t_2 - t} \right] \Bigg\},$$
and
(ii)
$$\left\| {}^{i}L_n\left(E_k\left[e^{-\mu W}\right]\right) - E_k\left[e^{-\mu W}\right] \right\|_{\infty, [t_1, t_2]} \le \frac{2 K_i}{\sqrt{\pi}} \Bigg\{ \frac{\sup_{t \in [t_1, t_2]} \omega_1\left(D_{t-}^{1/2} E_k\left[e^{-\mu W}\right], \frac{1}{n^{\beta}}\right)_{[t_1, t]} + \sup_{t \in [t_1, t_2]} \omega_1\left(D_{*t}^{1/2} E_k\left[e^{-\mu W}\right], \frac{1}{n^{\beta}}\right)_{[t, t_2]}}{n^{\beta/2}} + \sqrt{t_2 - t_1}\, \frac{\hat{\beta}_{i,n}}{2} \left[ \sup_{t \in [t_1, t_2]} \left\| D_{t-}^{1/2} E_k\left[e^{-\mu W}\right] \right\|_{\infty, [t_1, t]} + \sup_{t \in [t_1, t_2]} \left\| D_{*t}^{1/2} E_k\left[e^{-\mu W}\right] \right\|_{\infty, [t, t_2]} \right] \Bigg\} < \infty.$$
Proof. 
From Corollary 4. □
Next, let $g: \mathbb{R} \to \mathbb{R}$ be the generalized logistic sigmoid function $g(x) = \left(1 + e^{-x}\right)^{-\delta}$, $\delta > 0$, for every $x \in \mathbb{R}$. Let also $W = X_t$ be the Brownian motion on simple graphs. Then, the expectation
$$E_k\left[\left(1 + e^{-W}\right)^{-\delta}\right](t) = \int_0^{\infty} \left(1 + e^{-y_k}\right)^{-\delta} p(t, x_k, y_k)\, dy_k + \sum_{\substack{j = 1 \\ j \ne k}}^{n} \int_0^{\infty} \left(1 + e^{-y_j}\right)^{-\delta} p(t, x_k, y_j)\, dy_j$$
is continuous in $t$.
Moreover,
Corollary 17.
Let $0 < \beta < 1$, $n \in \mathbb{N}: n^{1-\beta} > 2$, $q, \lambda > 0$, and $t \in [t_1, t_2]$, where $t_1, t_2 > 0$ with $t_1 < t_2$; then, for $i = 1, 2$ and $j = 0, 1$,
(i)
$$\left| {}^{i}L_n\left(E_k\left[\left(1 + e^{-W}\right)^{-\delta}\right]^{(j)}\right)(t) - E_k\left[\left(1 + e^{-W}\right)^{-\delta}\right]^{(j)}(t) \right| \le K_i \left[ \omega_1\left(E_k\left[\left(1 + e^{-W}\right)^{-\delta}\right]^{(j)}, \frac{1}{n^{\beta}}\right) + \hat{\beta}_{i,n} \left\| E_k\left[\left(1 + e^{-W}\right)^{-\delta}\right]^{(j)} \right\|_{\infty, [t_1, t_2]} \right] =: \rho_i,$$
and
(ii)
$$\left\| {}^{i}L_n\left(E_k\left[\left(1 + e^{-W}\right)^{-\delta}\right]^{(j)}\right) - E_k\left[\left(1 + e^{-W}\right)^{-\delta}\right]^{(j)} \right\|_{\infty, [t_1, t_2]} \le \rho_i.$$
We observe that $\lim_{n \to \infty} {}^{i}L_n\left(E_k\left[\left(1 + e^{-W}\right)^{-\delta}\right]^{(j)}\right) = E_k\left[\left(1 + e^{-W}\right)^{-\delta}\right]^{(j)}$, pointwise and uniformly.
Proof. 
From Theorems 17 and 19. □
Next, we present
Corollary 18.
Let $0 < \alpha, \beta < 1$, $q, \lambda > 0$, $t \in [t_1, t_2]$, where $t_1, t_2 > 0$ with $t_1 < t_2$, and $n \in \mathbb{N}: n^{1-\beta} > 2$. Then, for $i = 1, 2$,
(i)
$$\left| {}^{i}L_n\left(E_k\left[\left(1 + e^{-W}\right)^{-\delta}\right]\right)(t) - E_k\left[\left(1 + e^{-W}\right)^{-\delta}\right](t) \right| \le \frac{K_i}{\Gamma(\alpha + 1)} \Bigg\{ \frac{\omega_1\left(D_{t-}^{\alpha} E_k\left[\left(1 + e^{-W}\right)^{-\delta}\right], \frac{1}{n^{\beta}}\right)_{[t_1, t]} + \omega_1\left(D_{*t}^{\alpha} E_k\left[\left(1 + e^{-W}\right)^{-\delta}\right], \frac{1}{n^{\beta}}\right)_{[t, t_2]}}{n^{\alpha \beta}} + \frac{\hat{\beta}_{i,n}}{2} \left[ \left\| D_{t-}^{\alpha} E_k\left[\left(1 + e^{-W}\right)^{-\delta}\right] \right\|_{\infty, [t_1, t]} (t - t_1)^{\alpha} + \left\| D_{*t}^{\alpha} E_k\left[\left(1 + e^{-W}\right)^{-\delta}\right] \right\|_{\infty, [t, t_2]} (t_2 - t)^{\alpha} \right] \Bigg\},$$
and
(ii)
$$\left\| {}^{i}L_n\left(E_k\left[\left(1 + e^{-W}\right)^{-\delta}\right]\right) - E_k\left[\left(1 + e^{-W}\right)^{-\delta}\right] \right\|_{\infty, [t_1, t_2]} \le \frac{K_i}{\Gamma(\alpha + 1)} \Bigg\{ \frac{\sup_{t \in [t_1, t_2]} \omega_1\left(D_{t-}^{\alpha} E_k\left[\left(1 + e^{-W}\right)^{-\delta}\right], \frac{1}{n^{\beta}}\right)_{[t_1, t]} + \sup_{t \in [t_1, t_2]} \omega_1\left(D_{*t}^{\alpha} E_k\left[\left(1 + e^{-W}\right)^{-\delta}\right], \frac{1}{n^{\beta}}\right)_{[t, t_2]}}{n^{\alpha \beta}} + (t_2 - t_1)^{\alpha}\, \frac{\hat{\beta}_{i,n}}{2} \left[ \sup_{t \in [t_1, t_2]} \left\| D_{t-}^{\alpha} E_k\left[\left(1 + e^{-W}\right)^{-\delta}\right] \right\|_{\infty, [t_1, t]} + \sup_{t \in [t_1, t_2]} \left\| D_{*t}^{\alpha} E_k\left[\left(1 + e^{-W}\right)^{-\delta}\right] \right\|_{\infty, [t, t_2]} \right] \Bigg\}.$$
Proof. 
From Theorem 18. □
When $\alpha = \frac{1}{2}$, we derive
Corollary 19.
Let $0 < \beta < 1$, $q, \lambda > 0$, $t \in [t_1, t_2]$, and $n \in \mathbb{N}: n^{1-\beta} > 2$. Then, for $i = 1, 2$,
(i)
$$\left| {}^{i}L_n\left(E_k\left[\left(1 + e^{-W}\right)^{-\delta}\right]\right)(t) - E_k\left[\left(1 + e^{-W}\right)^{-\delta}\right](t) \right| \le \frac{2 K_i}{\sqrt{\pi}} \Bigg\{ \frac{\omega_1\left(D_{t-}^{1/2} E_k\left[\left(1 + e^{-W}\right)^{-\delta}\right], \frac{1}{n^{\beta}}\right)_{[t_1, t]} + \omega_1\left(D_{*t}^{1/2} E_k\left[\left(1 + e^{-W}\right)^{-\delta}\right], \frac{1}{n^{\beta}}\right)_{[t, t_2]}}{n^{\beta/2}} + \frac{\hat{\beta}_{i,n}}{2} \left[ \left\| D_{t-}^{1/2} E_k\left[\left(1 + e^{-W}\right)^{-\delta}\right] \right\|_{\infty, [t_1, t]} \sqrt{t - t_1} + \left\| D_{*t}^{1/2} E_k\left[\left(1 + e^{-W}\right)^{-\delta}\right] \right\|_{\infty, [t, t_2]} \sqrt{t_2 - t} \right] \Bigg\},$$
and
(ii)
$$\left\| {}^{i}L_n\left(E_k\left[\left(1 + e^{-W}\right)^{-\delta}\right]\right) - E_k\left[\left(1 + e^{-W}\right)^{-\delta}\right] \right\|_{\infty, [t_1, t_2]} \le \frac{2 K_i}{\sqrt{\pi}} \Bigg\{ \frac{\sup_{t \in [t_1, t_2]} \omega_1\left(D_{t-}^{1/2} E_k\left[\left(1 + e^{-W}\right)^{-\delta}\right], \frac{1}{n^{\beta}}\right)_{[t_1, t]} + \sup_{t \in [t_1, t_2]} \omega_1\left(D_{*t}^{1/2} E_k\left[\left(1 + e^{-W}\right)^{-\delta}\right], \frac{1}{n^{\beta}}\right)_{[t, t_2]}}{n^{\beta/2}} + \sqrt{t_2 - t_1}\, \frac{\hat{\beta}_{i,n}}{2} \left[ \sup_{t \in [t_1, t_2]} \left\| D_{t-}^{1/2} E_k\left[\left(1 + e^{-W}\right)^{-\delta}\right] \right\|_{\infty, [t_1, t]} + \sup_{t \in [t_1, t_2]} \left\| D_{*t}^{1/2} E_k\left[\left(1 + e^{-W}\right)^{-\delta}\right] \right\|_{\infty, [t, t_2]} \right] \Bigg\} < \infty.$$
Proof. 
From Corollary 4. □
When $\delta = 1$, we have the usual logistic sigmoid function.
For the last application, we consider the Gompertz function $g: \mathbb{R} \to \mathbb{R}$, where $g(x) = e^{\mu e^{-x}}$, $\mu < 0$, for every $x \in \mathbb{R}$. The Gompertz function is also a sigmoid function, which describes growth as being slowest at the start and end of a given time period. Let also $W = X_t$ be the Brownian motion on simple graphs. Then, the expectation
$$E_k\left[e^{\mu e^{-W}}\right](t) = \int_0^{\infty} e^{\mu e^{-y_k}}\, p(t, x_k, y_k)\, dy_k + \sum_{\substack{j = 1 \\ j \ne k}}^{n} \int_0^{\infty} e^{\mu e^{-y_j}}\, p(t, x_k, y_j)\, dy_j$$
is continuous in $t$.
Moreover,
Corollary 20.
Let $0 < \beta < 1$, $n \in \mathbb{N}: n^{1-\beta} > 2$, $q, \lambda > 0$, and $t \in [t_1, t_2]$, where $t_1, t_2 > 0$ with $t_1 < t_2$; then, for $i = 1, 2$ and $j = 0, 1$,
(i)
$$\left| {}^{i}L_n\left(E_k\left[e^{\mu e^{-W}}\right]^{(j)}\right)(t) - E_k\left[e^{\mu e^{-W}}\right]^{(j)}(t) \right| \le K_i \left[ \omega_1\left(E_k\left[e^{\mu e^{-W}}\right]^{(j)}, \frac{1}{n^{\beta}}\right) + \hat{\beta}_{i,n} \left\| E_k\left[e^{\mu e^{-W}}\right]^{(j)} \right\|_{\infty, [t_1, t_2]} \right] =: \rho_i,$$
and
(ii)
$$\left\| {}^{i}L_n\left(E_k\left[e^{\mu e^{-W}}\right]^{(j)}\right) - E_k\left[e^{\mu e^{-W}}\right]^{(j)} \right\|_{\infty, [t_1, t_2]} \le \rho_i.$$
We observe that $\lim_{n \to \infty} {}^{i}L_n\left(E_k\left[e^{\mu e^{-W}}\right]^{(j)}\right) = E_k\left[e^{\mu e^{-W}}\right]^{(j)}$, pointwise and uniformly.
Proof. 
From Theorems 17 and 19. □
Next, we present
Corollary 21.
Let $0 < \alpha, \beta < 1$, $q, \lambda > 0$, $t \in [t_1, t_2]$, where $t_1, t_2 > 0$ with $t_1 < t_2$, and $n \in \mathbb{N}: n^{1-\beta} > 2$. Then, for $i = 1, 2$,
(i)
$$\left| {}^{i}L_n\left(E_k\left[e^{\mu e^{-W}}\right]\right)(t) - E_k\left[e^{\mu e^{-W}}\right](t) \right| \le \frac{K_i}{\Gamma(\alpha + 1)} \Bigg\{ \frac{\omega_1\left(D_{t-}^{\alpha} E_k\left[e^{\mu e^{-W}}\right], \frac{1}{n^{\beta}}\right)_{[t_1, t]} + \omega_1\left(D_{*t}^{\alpha} E_k\left[e^{\mu e^{-W}}\right], \frac{1}{n^{\beta}}\right)_{[t, t_2]}}{n^{\alpha \beta}} + \frac{\hat{\beta}_{i,n}}{2} \left[ \left\| D_{t-}^{\alpha} E_k\left[e^{\mu e^{-W}}\right] \right\|_{\infty, [t_1, t]} (t - t_1)^{\alpha} + \left\| D_{*t}^{\alpha} E_k\left[e^{\mu e^{-W}}\right] \right\|_{\infty, [t, t_2]} (t_2 - t)^{\alpha} \right] \Bigg\},$$
and
(ii)
$$\left\| {}^{i}L_n\left(E_k\left[e^{\mu e^{-W}}\right]\right) - E_k\left[e^{\mu e^{-W}}\right] \right\|_{\infty, [t_1, t_2]} \le \frac{K_i}{\Gamma(\alpha + 1)} \Bigg\{ \frac{\sup_{t \in [t_1, t_2]} \omega_1\left(D_{t-}^{\alpha} E_k\left[e^{\mu e^{-W}}\right], \frac{1}{n^{\beta}}\right)_{[t_1, t]} + \sup_{t \in [t_1, t_2]} \omega_1\left(D_{*t}^{\alpha} E_k\left[e^{\mu e^{-W}}\right], \frac{1}{n^{\beta}}\right)_{[t, t_2]}}{n^{\alpha \beta}} + (t_2 - t_1)^{\alpha}\, \frac{\hat{\beta}_{i,n}}{2} \left[ \sup_{t \in [t_1, t_2]} \left\| D_{t-}^{\alpha} E_k\left[e^{\mu e^{-W}}\right] \right\|_{\infty, [t_1, t]} + \sup_{t \in [t_1, t_2]} \left\| D_{*t}^{\alpha} E_k\left[e^{\mu e^{-W}}\right] \right\|_{\infty, [t, t_2]} \right] \Bigg\}.$$
Proof. 
From Theorem 18. □
When $\alpha = \frac{1}{2}$, we derive
Corollary 22.
Let $0 < \beta < 1$, $q, \lambda > 0$, $t \in [t_1, t_2]$, and $n \in \mathbb{N}: n^{1-\beta} > 2$. Then, for $i = 1, 2$,
(i)
$$\left| {}^{i}L_n\left(E_k\left[e^{\mu e^{-W}}\right]\right)(t) - E_k\left[e^{\mu e^{-W}}\right](t) \right| \le \frac{2 K_i}{\sqrt{\pi}} \Bigg\{ \frac{\omega_1\left(D_{t-}^{1/2} E_k\left[e^{\mu e^{-W}}\right], \frac{1}{n^{\beta}}\right)_{[t_1, t]} + \omega_1\left(D_{*t}^{1/2} E_k\left[e^{\mu e^{-W}}\right], \frac{1}{n^{\beta}}\right)_{[t, t_2]}}{n^{\beta/2}} + \frac{\hat{\beta}_{i,n}}{2} \left[ \left\| D_{t-}^{1/2} E_k\left[e^{\mu e^{-W}}\right] \right\|_{\infty, [t_1, t]} \sqrt{t - t_1} + \left\| D_{*t}^{1/2} E_k\left[e^{\mu e^{-W}}\right] \right\|_{\infty, [t, t_2]} \sqrt{t_2 - t} \right] \Bigg\},$$
and
(ii)
$$\left\| {}^{i}L_n\left(E_k\left[e^{\mu e^{-W}}\right]\right) - E_k\left[e^{\mu e^{-W}}\right] \right\|_{\infty, [t_1, t_2]} \le \frac{2 K_i}{\sqrt{\pi}} \Bigg\{ \frac{\sup_{t \in [t_1, t_2]} \omega_1\left(D_{t-}^{1/2} E_k\left[e^{\mu e^{-W}}\right], \frac{1}{n^{\beta}}\right)_{[t_1, t]} + \sup_{t \in [t_1, t_2]} \omega_1\left(D_{*t}^{1/2} E_k\left[e^{\mu e^{-W}}\right], \frac{1}{n^{\beta}}\right)_{[t, t_2]}}{n^{\beta/2}} + \sqrt{t_2 - t_1}\, \frac{\hat{\beta}_{i,n}}{2} \left[ \sup_{t \in [t_1, t_2]} \left\| D_{t-}^{1/2} E_k\left[e^{\mu e^{-W}}\right] \right\|_{\infty, [t_1, t]} + \sup_{t \in [t_1, t_2]} \left\| D_{*t}^{1/2} E_k\left[e^{\mu e^{-W}}\right] \right\|_{\infty, [t, t_2]} \right] \Bigg\} < \infty.$$
Proof. 
From Corollary 4. □