Article

Periodic and Almost Periodic Solutions of Stochastic Inertial Bidirectional Associative Memory Neural Networks on Time Scales

College of Mathematics and Systems Science, Shandong University of Science and Technology, Qingdao 266590, China
*
Author to whom correspondence should be addressed.
Axioms 2023, 12(6), 574; https://doi.org/10.3390/axioms12060574
Submission received: 6 May 2023 / Revised: 30 May 2023 / Accepted: 31 May 2023 / Published: 9 June 2023
(This article belongs to the Special Issue Advances in Mathematics and Its Applications)

Abstract

The stochastic inertial bidirectional associative memory neural networks (SIBAMNNs) on time scales are considered in this paper, which can unify and generalize both continuous and discrete systems. It is of primary importance to derive the criteria for the existence and uniqueness of both periodic and almost periodic solutions of SIBAMNNs on time scales. Based on that, the criteria for their exponential stability on time scales are studied. Meanwhile, the effectiveness of all proposed criteria is demonstrated by numerical simulation. The above study proposes a new way to unify and generalize both continuous and discrete SIBAMNNs systems, and is applicable to some other practical neural network systems on time scales.

1. Introduction

Bidirectional associative memory neural networks (BAMNNs) are a special class of recurrent neural networks that can store bipolar vector pairs; they were originally introduced and investigated by Kosko in 1988 [1]. BAMNNs have been widely studied due to their important applications in many fields [2,3,4,5,6,7], such as signal processing, image processing, associative memories and pattern recognition. Physically, BAMNNs can be construed as super-damped systems (i.e., systems whose damping tends to infinity), but BAMNNs with weak damping play more important roles in actual applications. Therefore, weak-damping BAMNNs, also named inertial BAMNNs, were proposed [8].
Modern applications require artificial intelligence that is more capable and better suited to complex cases. As the most important branch of modern artificial intelligence, neural networks possess the advantages of massive parallel processing, self-learning abilities and nonlinear mapping, which has motivated the development of intelligent control systems [9,10,11]. It is widely known that neural networks can be divided into continuous [12,13] and discrete cases [14,15], which have been thoroughly studied and widely applied. However, many complex systems include both continuous and discrete dynamics, and previous research on these two cases was carried out separately [16]. Therefore, it is challenging to study neural networks under a unified framework that can unify and generalize both continuous and discrete systems.
Fortunately, the calculus of time scales initiated by Hilger [17] can perfectly solve the problem of unifying and generalizing continuous and discrete cases. Many related works have sprung up that generalized classical methods under the time-scale framework, including the Darboux transformation method [18], the Hirota bilinear method [19], the Lie symmetry method [20] and the Lyapunov function method [21]. Recently, the research on neural networks on time scales has aroused wide attention, especially on the solutions of neural networks on time scales involving equilibrium [22], periodic [23], almost periodic [24], pseudo-almost periodic [25], almost automorphic solutions [26], etc. Moreover, some results on the dynamical properties of solutions have been achieved, such as their existence, stability and synchronization and periodic and almost periodic oscillatory behaviors. For instance, Yang et al. [23] studied the existence and exponential stability of periodic solutions of stochastic high-order Hopfield neural networks on time scales. Zhou et al. [24] investigated the existence and exponential stability of almost periodic solutions of neutral-type BAM neural networks on time scales.
The study of stochastic inertial BAM neural networks (SIBAMNNs) on time scales is still new, although there have been some studies of the continuous and discrete cases. Considering the importance of SIBAMNNs and the significance of studying the system under a unified framework, we explore the existence, uniqueness and stability of both periodic and almost periodic solutions of SIBAMNNs on time scales with discrete and distributed delays, which unify and generalize the continuous and discrete cases. The analysis of SIBAMNNs on time scales builds a bridge between continuous and discrete analyses and provides an effective way of studying complex models that include both continuous and discrete factors, such as hybrid systems. The remainder of this paper is organized as follows: In Section 2, we give some definitions and lemmas for time scales. In Section 3, the criteria for the existence and uniqueness of periodic and almost periodic solutions of SIBAMNNs on time scales are explored. The exponential stability of SIBAMNNs on time scales is discussed in Section 4. The effectiveness of the proposed criteria for SIBAMNNs is demonstrated by numerical simulation in Section 5.

2. Preliminaries

Definition 1
([27] (periodic time scale)). A time scale $\mathbb{T}$ is called a periodic time scale if
$\Pi := \{ p \in \mathbb{R} : t \pm p \in \mathbb{T}, \ \forall t \in \mathbb{T} \} \neq \{ 0 \}. \quad (1)$
Definition 2
([27] (periodic function)). Let $\mathbb{T}$ be a periodic time scale with period p. The function $f : \mathbb{T} \to \mathbb{R}$ is called a periodic function with period $\omega > 0$ if there exists $n \in \mathbb{N}^*$ such that $\omega = np$, $f(t + \omega) = f(t)$ for all $t \in \mathbb{T}$, and ω is the smallest positive number satisfying $f(t + \omega) = f(t)$.
Definition 3
([28] (almost periodic function)). Let $\mathbb{T}$ be a periodic time scale. The function $x : \mathbb{T} \to \mathbb{R}$ is called almost periodic if the ε-translation set of x,
$E\{\varepsilon, x\} = \{ \tau \in \Pi : E|x(t + \tau) - x(t)|^2 < \varepsilon, \ \forall t \in \mathbb{T} \},$
is relatively dense for all $\varepsilon > 0$; i.e., for any $\varepsilon > 0$, there exists a constant $l(\varepsilon) > 0$ such that each interval of length $l(\varepsilon)$ contains a $\tau(\varepsilon) \in E\{\varepsilon, x\}$ such that
$E|x(t + \tau) - x(t)|^2 < \varepsilon, \quad \forall t \in \mathbb{T}.$
τ is called the ε-translation number of x, and $l(\varepsilon)$ is the inclusion length of $E\{\varepsilon, x\}$.
Definition 4
([29,30] (jump operator)). Let T be a time scale.
1
The forward jump operator $\sigma : \mathbb{T} \to \mathbb{T}$ is defined by
$\sigma(t) := \inf\{ s \in \mathbb{T} : s > t \}.$
2
The graininess function $\mu : \mathbb{T} \to [0, \infty)$ is defined by
$\mu(t) := \sigma(t) - t.$
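Definition 4 can be made concrete on a purely discrete time scale. The following is a minimal sketch (the list representation and the helper names `sigma` and `mu` are illustrative, not from the paper): for $\mathbb{T} = \mathbb{Z}$ one recovers $\sigma(t) = t + 1$ and $\mu(t) = 1$, while for $\mathbb{T} = \mathbb{R}$ one has $\sigma(t) = t$ and $\mu(t) = 0$.

```python
def sigma(T, t):
    """Forward jump operator: inf{s in T : s > t} (returns t itself at the maximum of T)."""
    later = [s for s in T if s > t]
    return min(later) if later else t

def mu(T, t):
    """Graininess function: mu(t) = sigma(t) - t."""
    return sigma(T, t) - t

T = [0, 1, 2, 4, 7]           # a nonuniform discrete time scale
print(sigma(T, 2), mu(T, 2))  # -> 4 2
print(sigma(T, 7), mu(T, 7))  # -> 7 0 (the maximum point)
```

At the point t = 2 the scale jumps to 4, so the graininess is 2; at the maximum the forward jump returns the point itself.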
Definition 5
([29,30] (rd-continuous and regressive)).
1
$f : \mathbb{T} \to \mathbb{R}$ is called rd-continuous provided it is continuous at right-dense points in $\mathbb{T}$ and its left-sided limits exist (finite) at left-dense points in $\mathbb{T}$. The set of rd-continuous functions $f : \mathbb{T} \to \mathbb{R}$ is denoted by $C_{rd}$.
2
A function $p : \mathbb{T} \to \mathbb{R}$ is regressive provided
$1 + \mu(t) p(t) \neq 0$
for all $t \in \mathbb{T}^\kappa$, where $\mathbb{T}^\kappa = \mathbb{T} \setminus \{m\}$ if the maximum m of $\mathbb{T}$ is left-scattered; otherwise, $\mathbb{T}^\kappa = \mathbb{T}$. A function $p : \mathbb{T} \to \mathbb{R}$ is positively regressive provided
$1 + \mu(t) p(t) > 0$
for all $t \in \mathbb{T}^\kappa$.
3
The set of all regressive and rd-continuous functions $f : \mathbb{T} \to \mathbb{R}$ is denoted by
$\mathcal{R} = \mathcal{R}(\mathbb{T}, \mathbb{R}).$
The set of all positively regressive and rd-continuous functions $f : \mathbb{T} \to \mathbb{R}$ is denoted by
$\mathcal{R}^+ = \mathcal{R}^+(\mathbb{T}, \mathbb{R}).$
Definition 6
([29]). A function $f : \mathbb{T} \to \mathbb{R}$ is delta-differentiable at $t \in \mathbb{T}^\kappa$ if there is a number $f^\Delta(t)$ such that, for all $\varepsilon > 0$, there exists a neighborhood U of t (i.e., $U = (t - \delta, t + \delta) \cap \mathbb{T}$ for some $\delta > 0$) such that
$|f(\sigma(t)) - f(s) - f^\Delta(t)(\sigma(t) - s)| \le \varepsilon |\sigma(t) - s|, \quad \text{for all } s \in U.$
$f^\Delta(t)$ is called the delta derivative of f at t, and f is delta-differentiable on $\mathbb{T}^\kappa$ provided $f^\Delta(t)$ exists for all $t \in \mathbb{T}^\kappa$.
Lemma 1
([28]). Let $\mathbb{T}$ satisfy (1) and $\tau \in \Pi$; then
$\sigma(t + \tau) = \sigma(t) + \tau, \quad \mu(t + \tau) = \mu(t).$
Lemma 2
([29]). If p R , then
$(e_p(t, s))^\Delta = p(t)\, e_p(t, s). \quad (3)$
Proof. 
Using Theorem 2.62 in [29], (3) can be easily derived. □
Lemma 3
([29]). If $p \in \mathcal{R}$ and $a, b, c \in \mathbb{T}$, then
1
$(e_p(c, t))^\Delta = -p(t)\, e_p(c, \sigma(t)).$
2
$\int_a^b p(t)\, e_p(c, \sigma(t))\, \Delta t = e_p(c, a) - e_p(c, b).$
3
The function $\ominus p$ defined by $(\ominus p)(t) = -\dfrac{p(t)}{1 + \mu(t) p(t)}$ for all $t \in \mathbb{T}^\kappa$ is also an element of $\mathcal{R}$.
4
$e_p(\sigma(t), s) = (1 + \mu(t) p(t))\, e_p(t, s).$
 For more properties of exponential functions on time scales, please refer to Section 2.2 in [29].
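On a purely discrete time scale the exponential function reduces to the finite product $e_p(t, s) = \prod_{r \in [s, t)} (1 + \mu(r) p(r))$, which makes properties such as item 4 of Lemma 3 easy to check numerically. A hedged sketch (the helper name `exp_ts` and the list representation are illustrative, not from the paper):

```python
def exp_ts(T, p, t, s):
    """e_p(t, s) on a sorted discrete time scale T, for t >= s in T,
    computed as the product of (1 + mu(r) p(r)) over grid points r in [s, t)."""
    prod = 1.0
    for r in T:
        if s <= r < t:
            nxt = min(x for x in T if x > r)  # sigma(r)
            prod *= 1 + (nxt - r) * p(r)      # mu(r) = sigma(r) - r
    return prod

T = [0, 1, 2, 4, 7]
p = lambda t: 0.5                             # a regressive constant function
# Item 4 of Lemma 3: e_p(sigma(t), s) = (1 + mu(t) p(t)) e_p(t, s), here t = 2, s = 0,
# sigma(2) = 4 and mu(2) = 2 on this time scale.
lhs = exp_ts(T, p, 4, 0)
rhs = (1 + 2 * p(2)) * exp_ts(T, p, 2, 0)
print(abs(lhs - rhs) < 1e-12)                 # -> True
```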
Lemma 4
([31]). For any $F \in \mathbb{R}_+^N$ and $p > 0$,
$|F|^p \le N^{(\frac{p}{2} - 1) \vee 0} \sum_{l=1}^N F_l^p, \qquad \Big( \sum_{l=1}^N F_l \Big)^p \le N^{(p - 1) \vee 0} \sum_{l=1}^N F_l^p.$
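The second inequality of Lemma 4 can be spot-checked numerically: for $p \ge 1$ it is the power-mean inequality, and for $0 < p < 1$ it reduces to subadditivity of $x \mapsto x^p$ (with the exponent of N equal to 0). A small randomized check (assuming the reconstruction above; the sampling ranges are arbitrary):

```python
import random

random.seed(0)
for _ in range(1000):
    N = random.randint(1, 8)
    p = random.uniform(0.1, 4.0)
    F = [random.uniform(0.0, 5.0) for _ in range(N)]
    lhs = sum(F) ** p
    rhs = N ** max(p - 1, 0) * sum(f ** p for f in F)
    assert lhs <= rhs + 1e-9   # (sum F_l)^p <= N^{(p-1) v 0} sum F_l^p
print("inequality holds on all samples")
```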
Lemma 5
([32]). For the $\Delta$-stochastic integral, if $E(\int_a^b x^2(t)\, \Delta t) < \infty$, then $E(\int_a^b x(t)\, \Delta w(t)) = 0$ and the Itô isometry
$E\Big( \int_a^b x(t)\, \Delta w(t) \Big)^2 = E \int_a^b x^2(t)\, \Delta t$
holds.

3. Existence and Uniqueness of Solution for SIBAMNNs on Time Scales

 Consider the SIBAMNNs [33] on time scales
$\begin{cases} x_i^{\Delta^2}(t) = -\beta_i(t) x_i^{\Delta}(t) - a_i(t) x_i(t) + \sum_{j=1}^m c_{ij}(t) f_j(y_j(t)) + \sum_{j=1}^m d_{ij}(t) f_j(y_j(t - \tau_{ij}^{(1)}(t))) \\ \qquad + \sum_{j=1}^m h_{ij}(t) \int_{t - \tau_{ij}^{(2)}(t)}^{t} f_j(y_j(s))\, \Delta s + I_i(t) + \sum_{j=1}^m \delta_{ij}(y_j(t))\, w_{1j}^{\Delta}(t), \\ y_j^{\Delta^2}(t) = -\gamma_j(t) y_j^{\Delta}(t) - b_j(t) y_j(t) + \sum_{i=1}^n l_{ji}(t) g_i(x_i(t)) + \sum_{i=1}^n p_{ji}(t) g_i(x_i(t - \zeta_{ji}^{(1)}(t))) \\ \qquad + \sum_{i=1}^n q_{ji}(t) \int_{t - \zeta_{ji}^{(2)}(t)}^{t} g_i(x_i(s))\, \Delta s + J_j(t) + \sum_{i=1}^n \varphi_{ji}(x_i(t))\, w_{2i}^{\Delta}(t), \end{cases} \quad (6)$
with initial data
$x_i(t) = \theta_i^{(1)}(t), \quad y_j(t) = \theta_j^{(2)}(t), \quad t \in [t_0 - \vartheta, t_0]_{\mathbb{T}}, \quad (7)$
where $\vartheta = \max\{ \max_{i,j} \{ \tau_{ij}^{(\rho)}(t), \zeta_{ji}^{(\rho)}(t) \} \}$, $\rho = 1, 2$, $i = 1, 2, \ldots, n$, $j = 1, 2, \ldots, m$. Here, m and n are the numbers of neurons in the two layers; $x_i, y_j$ denote the states of the i-th and j-th neurons, respectively; $a_i, b_j$ denote the rates with which the i-th and j-th neurons reset their potential to the resting state in isolation when disconnected from the networks and external inputs; $f_j, g_i$ are activation functions; $c_{ij}, l_{ji}$ are the connection weight parameters, and $d_{ij}, h_{ij}, p_{ji}, q_{ji}$ are delayed connection weight parameters; $\tau_{ij}^{(1)}(t), \zeta_{ji}^{(1)}(t), \tau_{ij}^{(2)}(t), \zeta_{ji}^{(2)}(t)$ correspond to the discrete and distributed time-varying delays along the axons of the i-th and j-th units, satisfying $t - \tau_{ij}^{(\rho)}(t) \in \mathbb{T}$ and $t - \zeta_{ji}^{(\rho)}(t) \in \mathbb{T}$ for $t \in \mathbb{T}$; $I_i, J_j$ are the external biases on the i-th and j-th units at time t; $w_{1j}, w_{2i}$ are Brownian motions defined on the complete probability space $(\Omega, \mathcal{F}, P)$, where Ω is a nonempty set and $\mathcal{F}$ is the associated σ-algebra on Ω with probability measure P on $(\Omega, \mathcal{F})$; and $\delta_{ij}, \varphi_{ji}$ denote the noise intensities.
Using variable substitution
$x_i^{\Delta}(t) + \xi_i x_i(t) = u_i(t), \quad y_j^{\Delta}(t) + \kappa_j y_j(t) = v_j(t),$
(6) and (7) are equivalent to
$x_i^{\Delta}(t) = -\xi_i x_i(t) + u_i(t), \quad (8)$
$u_i^{\Delta}(t) = -(a_i(t) - \beta_i(t)\xi_i + \xi_i^2) x_i(t) - (\beta_i(t) - \xi_i) u_i(t) + \sum_{j=1}^m c_{ij}(t) f_j(y_j(t)) + \sum_{j=1}^m d_{ij}(t) f_j(y_j(t - \tau_{ij}^{(1)}(t))) + \sum_{j=1}^m h_{ij}(t) \int_{t - \tau_{ij}^{(2)}(t)}^{t} f_j(y_j(s))\, \Delta s + I_i(t) + \sum_{j=1}^m \delta_{ij}(y_j(t))\, w_{1j}^{\Delta}(t), \quad (9)$
$y_j^{\Delta}(t) = -\kappa_j y_j(t) + v_j(t), \quad (10)$
$v_j^{\Delta}(t) = -(b_j(t) - \gamma_j(t)\kappa_j + \kappa_j^2) y_j(t) - (\gamma_j(t) - \kappa_j) v_j(t) + \sum_{i=1}^n l_{ji}(t) g_i(x_i(t)) + \sum_{i=1}^n p_{ji}(t) g_i(x_i(t - \zeta_{ji}^{(1)}(t))) + \sum_{i=1}^n q_{ji}(t) \int_{t - \zeta_{ji}^{(2)}(t)}^{t} g_i(x_i(s))\, \Delta s + J_j(t) + \sum_{i=1}^n \varphi_{ji}(x_i(t))\, w_{2i}^{\Delta}(t), \quad (11)$
with initial data
$x_i(t) = \theta_i^{(1)}(t), \quad y_j(t) = \theta_j^{(2)}(t), \quad t \in [t_0 - \vartheta, t_0]_{\mathbb{T}}. \quad (12)$
Denote
$\underline{X}_i = \inf_{t \in \mathbb{T}} |X_i(t)|, \quad \overline{X}_i = \sup_{t \in \mathbb{T}} |X_i(t)|, \quad \Psi_i = \sup_{t \in \mathbb{T}} (\beta_i \xi_i - a_i - \xi_i^2)^2, \quad \Gamma_j = \sup_{t \in \mathbb{T}} (\gamma_j \kappa_j - b_j - \kappa_j^2)^2$
for i = 1 , 2 , . . . , n , j = 1 , 2 , . . . , m .

3.1. Periodic Solution

Assumption 1.
$f_j, \delta_{ij}, g_i$ and $\varphi_{ji}$ are Lipschitz continuous with positive constants $L_j^f, L_{ij}^\delta, L_i^g$ and $L_{ji}^\varphi$, respectively, and $f_j(0) = g_i(0) = \delta_{ij}(0) = \varphi_{ji}(0) = 0$ for $i = 1, 2, \ldots, n$, $j = 1, 2, \ldots, m$.
Assumption 2.
$\xi_i, \kappa_j, \beta_i(t) - \xi_i, \gamma_j(t) - \kappa_j > 0$ with $-\xi_i, -\kappa_j, -(\beta_i(t) - \xi_i), -(\gamma_j(t) - \kappa_j) \in \mathcal{R}$. $a_i(t), b_j(t), \beta_i(t), \gamma_j(t), c_{ij}(t), l_{ji}(t), d_{ij}(t), p_{ji}(t), h_{ij}(t), q_{ji}(t), I_i(t), J_j(t)$ ($i = 1, 2, \ldots, n$, $j = 1, 2, \ldots, m$) are periodic functions with period $\omega > 0$.
Lemma 6.
Suppose Assumptions 1 and 2 hold. For $i = 1, 2, \ldots, n$, $j = 1, 2, \ldots, m$, $(x_i(t), y_j(t))$ is an ω-periodic solution of (6) if and only if it is a solution of
$x_i(t) = \int_t^{t+\omega} A_i(t, s) u_i(s)\, \Delta s, \quad (13)$
$u_i(t) = \int_t^{t+\omega} B_i(t, s) \Big[ -(a_i(s) - \beta_i(s)\xi_i + \xi_i^2) x_i(s) + \sum_{j=1}^m c_{ij}(s) f_j(y_j(s)) + \sum_{j=1}^m d_{ij}(s) f_j(y_j(s - \tau_{ij}^{(1)}(s))) + \sum_{j=1}^m h_{ij}(s) \int_{s - \tau_{ij}^{(2)}(s)}^{s} f_j(y_j(r))\, \Delta r + I_i(s) \Big] \Delta s + \int_t^{t+\omega} B_i(t, s) \sum_{j=1}^m \delta_{ij}(y_j(s))\, \Delta w_{1j}(s), \quad (14)$
$y_j(t) = \int_t^{t+\omega} F_j(t, s) v_j(s)\, \Delta s, \quad (15)$
$v_j(t) = \int_t^{t+\omega} G_j(t, s) \Big[ -(b_j(s) - \gamma_j(s)\kappa_j + \kappa_j^2) y_j(s) + \sum_{i=1}^n l_{ji}(s) g_i(x_i(s)) + \sum_{i=1}^n p_{ji}(s) g_i(x_i(s - \zeta_{ji}^{(1)}(s))) + \sum_{i=1}^n q_{ji}(s) \int_{s - \zeta_{ji}^{(2)}(s)}^{s} g_i(x_i(r))\, \Delta r + J_j(s) \Big] \Delta s + \int_t^{t+\omega} G_j(t, s) \sum_{i=1}^n \varphi_{ji}(x_i(s))\, \Delta w_{2i}(s), \quad (16)$
where
$A_i(t, s) = \dfrac{e_{-\xi_i}(t + \omega, \sigma(s))}{1 - e_{-\xi_i}(t + \omega, t)}, \quad B_i(t, s) = \dfrac{e_{-(\beta_i - \xi_i)}(t + \omega, \sigma(s))}{1 - e_{-(\beta_i - \xi_i)}(t + \omega, t)},$
$F_j(t, s) = \dfrac{e_{-\kappa_j}(t + \omega, \sigma(s))}{1 - e_{-\kappa_j}(t + \omega, t)}, \quad G_j(t, s) = \dfrac{e_{-(\gamma_j - \kappa_j)}(t + \omega, \sigma(s))}{1 - e_{-(\gamma_j - \kappa_j)}(t + \omega, t)}.$
Proof. 
Multiplying both sides of (8) by $e_{-\xi_i}(\theta, \sigma(t))$ and using Lemma 2, we have
$[e_{-\xi_i}(\theta, t)\, x_i(t)]^\Delta = e_{-\xi_i}(\theta, \sigma(t))\, u_i(t). \quad (17)$
Integrating both sides of (17) from t to t + ω and choosing θ = t + ω, we obtain
$x_i(t + \omega) = e_{-\xi_i}(t + \omega, t)\, x_i(t) + \int_t^{t+\omega} e_{-\xi_i}(t + \omega, \sigma(s))\, u_i(s)\, \Delta s. \quad (18)$
Since $x_i(t + \omega) = x_i(t)$, $x_i(t)$ satisfies (13). Similarly, we can obtain that $u_i(t), y_j(t), v_j(t)$ satisfy (14)–(16). Conversely, (18) can be derived from (13); taking the delta derivative of both sides of (18), we obtain that $x_i(t)$ satisfies (8). Similarly, $u_i(t), y_j(t), v_j(t)$ satisfy (9)–(11). □
Let $\mathbb{X} = \{ z \in BC_{\mathcal{F}_0}^b(\mathbb{T}, \mathbb{R}^{2n+2m}) \mid z_k(t) = z_k(t + \omega), \ \|z\|_{\mathbb{X}} \le R, \ k = 1, 2, \ldots, 2n + 2m \}$ with norm
$\|z\|_{\mathbb{X}} = \max_{1 \le k \le 2n+2m} \sup_{t \in [t_0, t_0 + \omega]_{\mathbb{T}}} E(|z_k(t)|^2),$
where $z = (z_1, \ldots, z_n, z_{n+1}, \ldots, z_{2n}, z_{2n+1}, \ldots, z_{2n+m}, \ldots, z_{2n+2m})^T = (x_1, \ldots, x_n, u_1, \ldots, u_n, y_1, \ldots, y_m, v_1, \ldots, v_m)^T$, $BC_{\mathcal{F}_0}^b(\mathbb{T}, \mathbb{R}^{2n+2m})$ is the family of bounded $\mathcal{F}_0$-measurable processes, $E(\cdot)$ is the corresponding expectation operator, and R is a finite constant to be determined later; then $\mathbb{X}$ is a Banach space [34].
Theorem 1
(periodic solution). Let Assumptions 1 and 2 hold. If
$\Lambda = \max_{1 \le i \le n,\, 1 \le j \le m} \{ \Lambda_{1i}, \Lambda_{2i}, \Lambda_{3j}, \Lambda_{4j} \} < 1,$
where
$\Lambda_{1i} = \omega^2 \sup_{t,s} \{ A_i^2(t, s) \},$
$\Lambda_{2i} = 5\omega^2 \sup_{t,s} \{ B_i^2(t, s) \} \Big[ \Psi_i + \sum_{j=1}^m (L_j^f)^2 \big( \bar{c}_{ij}^2 + \bar{d}_{ij}^2 + (\bar{\tau}_{ij}^{(2)} \bar{h}_{ij})^2 \big) + \frac{1}{\omega} \Big( \sum_{j=1}^m L_{ij}^\delta \Big)^2 \Big],$
$\Lambda_{3j} = \omega^2 \sup_{t,s} \{ F_j^2(t, s) \},$
$\Lambda_{4j} = 5\omega^2 \sup_{t,s} \{ G_j^2(t, s) \} \Big[ \Gamma_j + \sum_{i=1}^n (L_i^g)^2 \big( \bar{l}_{ji}^2 + \bar{p}_{ji}^2 + (\bar{q}_{ji} \bar{\zeta}_{ji}^{(2)})^2 \big) + \frac{1}{\omega} \Big( \sum_{i=1}^n L_{ji}^\varphi \Big)^2 \Big],$
with $t, s \in [t_0, t_0 + \omega]_{\mathbb{T}}$, $t \le s$, then (6) and (7) have a unique ω-periodic solution in $\mathbb{X}$.
Proof. 
Define an operator Φ on X by
Φ z = ( ( Φ z ) 1 , . . . , ( Φ z ) n , ( Φ z ) n + 1 , . . . , ( Φ z ) 2 n , ( Φ z ) 2 n + 1 , . . . , ( Φ z ) 2 n + m , . . . , ( Φ z ) 2 n + 2 m ) T ,
in which
$(\Phi z)_i(t) = \int_t^{t+\omega} A_i(t, s)\, z_{n+i}(s)\, \Delta s, \quad (24)$
$(\Phi z)_{n+i}(t) = \int_t^{t+\omega} B_i(t, s) \Big[ -(a_i(s) - \beta_i(s)\xi_i + \xi_i^2) z_i(s) + \sum_{j=1}^m c_{ij}(s) f_j(z_{2n+j}(s)) + \sum_{j=1}^m d_{ij}(s) f_j(z_{2n+j}(s - \tau_{ij}^{(1)}(s))) + \sum_{j=1}^m h_{ij}(s) \int_{s - \tau_{ij}^{(2)}(s)}^{s} f_j(z_{2n+j}(r))\, \Delta r + I_i(s) \Big] \Delta s + \int_t^{t+\omega} B_i(t, s) \sum_{j=1}^m \delta_{ij}(z_{2n+j}(s))\, \Delta w_{1j}(s), \quad (25)$
$(\Phi z)_{2n+j}(t) = \int_t^{t+\omega} F_j(t, s)\, z_{2n+m+j}(s)\, \Delta s, \quad (26)$
$(\Phi z)_{2n+m+j}(t) = \int_t^{t+\omega} G_j(t, s) \Big[ -(b_j(s) - \gamma_j(s)\kappa_j + \kappa_j^2) z_{2n+j}(s) + \sum_{i=1}^n l_{ji}(s) g_i(z_i(s)) + \sum_{i=1}^n p_{ji}(s) g_i(z_i(s - \zeta_{ji}^{(1)}(s))) + \sum_{i=1}^n q_{ji}(s) \int_{s - \zeta_{ji}^{(2)}(s)}^{s} g_i(z_i(r))\, \Delta r + J_j(s) \Big] \Delta s + \int_t^{t+\omega} G_j(t, s) \sum_{i=1}^n \varphi_{ji}(z_i(s))\, \Delta w_{2i}(s). \quad (27)$
Using Assumption 2, it follows from (24)–(27) that $(\Phi z)_k(t) = (\Phi z)_k(t + \omega)$ for $k = 1, 2, \ldots, 2n + 2m$.
Next, we prove $\|\Phi z\|_{\mathbb{X}} \le R$. It follows from (24)–(27) that
$E|(\Phi z)_i(t)|^2 \le E \Big| \sup_{t,s} \{ A_i(t, s) \} \int_t^{t+\omega} z_{n+i}(s)\, \Delta s \Big|^2 \le \Lambda_{1i} \|z\|_{\mathbb{X}}, \quad (28)$
$E|(\Phi z)_{n+i}(t)|^2 \le \frac{6}{5} \Lambda_{2i} \|z\|_{\mathbb{X}} + 6 E \Big| \int_t^{t+\omega} B_i(t, s)\, I_i(s)\, \Delta s \Big|^2, \quad (29)$
$E|(\Phi z)_{2n+j}(t)|^2 \le \Lambda_{3j} \|z\|_{\mathbb{X}}, \quad (30)$
$E|(\Phi z)_{2n+m+j}(t)|^2 \le \frac{6}{5} \Lambda_{4j} \|z\|_{\mathbb{X}} + 6 E \Big| \int_t^{t+\omega} G_j(t, s)\, J_j(s)\, \Delta s \Big|^2. \quad (31)$
Let
$\Lambda_5 = \max_{1 \le i \le n,\, 1 \le j \le m} \Big\{ \Lambda_{1i}, \frac{6}{5} \Lambda_{2i}, \Lambda_{3j}, \frac{6}{5} \Lambda_{4j} \Big\}, \quad \Lambda_6 = \max_{1 \le i \le n,\, 1 \le j \le m} \Big\{ 6 E \Big| \int_t^{t+\omega} B_i(t, s)\, I_i(s)\, \Delta s \Big|^2, \ 6 E \Big| \int_t^{t+\omega} G_j(t, s)\, J_j(s)\, \Delta s \Big|^2 \Big\};$
then
$\|\Phi z\|_{\mathbb{X}} \le \Lambda_5 \|z\|_{\mathbb{X}} + \Lambda_6.$
Choosing $R \ge \Lambda_6 / (1 - \Lambda_5)$, we have $\|\Phi z\|_{\mathbb{X}} \le R$, which means $\Phi \mathbb{X} \subset \mathbb{X}$.
Then, we prove that Φ is a contraction mapping. For any $z, z^* \in \mathbb{X}$, it follows from (28)–(31) that
$E|(\Phi z - \Phi z^*)_i(t)|^2 \le E \Big| \int_t^{t+\omega} \sup_{t,s} \{ A_i(t, s) \} (z_{n+i}(s) - z_{n+i}^*(s))\, \Delta s \Big|^2 \le \Lambda_{1i} \|z - z^*\|_{\mathbb{X}}.$
Similarly,
$E|(\Phi z - \Phi z^*)_{n+i}(t)|^2 \le \Lambda_{2i} \|z - z^*\|_{\mathbb{X}}, \quad E|(\Phi z - \Phi z^*)_{2n+j}(t)|^2 \le \Lambda_{3j} \|z - z^*\|_{\mathbb{X}}, \quad E|(\Phi z - \Phi z^*)_{2n+m+j}(t)|^2 \le \Lambda_{4j} \|z - z^*\|_{\mathbb{X}}.$
We obtain
$\|\Phi z - \Phi z^*\|_{\mathbb{X}} \le \Lambda \|z - z^*\|_{\mathbb{X}} < \|z - z^*\|_{\mathbb{X}}$
for Λ = max 1 i n , 1 j m { Λ 1 i , Λ 2 i , Λ 3 j , Λ 4 j } < 1 , which means Φ is a contraction mapping in X . Thus, Φ possesses a unique fixed point and there exists a unique periodic solution for systems (6) and (7) in X . □
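The proof above rests on the Banach fixed-point theorem: a contraction Φ with constant Λ < 1 has a unique fixed point, reachable by iterating Φ from any starting point. A toy scalar illustration of this mechanism (a hypothetical map, not the paper's operator Φ):

```python
def fixed_point(phi, z0, tol=1e-12, max_iter=10_000):
    """Iterate a contraction phi from z0 until successive iterates agree to tol."""
    z = z0
    for _ in range(max_iter):
        z_next = phi(z)
        if abs(z_next - z) < tol:
            return z_next
        z = z_next
    return z

phi = lambda z: 0.5 * z + 1.0   # contraction with constant 0.5; fixed point z* = 2
print(fixed_point(phi, 0.0))     # -> approximately 2.0, from any starting point
```

The iteration converges geometrically at rate Λ, which is why the criteria of Theorem 1 demand Λ < 1.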

3.2. Almost Periodic Solution

Assumption 3.
$\xi_i, \kappa_j, \beta_i(t) - \xi_i, \gamma_j(t) - \kappa_j > 0$ with $-\xi_i, -\kappa_j, -(\beta_i(t) - \xi_i), -(\gamma_j(t) - \kappa_j) \in \mathcal{R}^+$. $a_i(t), b_j(t), \beta_i(t), \gamma_j(t), c_{ij}(t), l_{ji}(t), d_{ij}(t), p_{ji}(t), h_{ij}(t), q_{ji}(t), I_i(t), J_j(t)$ ($i = 1, 2, \ldots, n$, $j = 1, 2, \ldots, m$) are almost periodic functions.
Lemma 7
([35]). If $p \in \mathcal{R}^+$ and $p(t) < 0$ for $t \in \mathbb{T}$, then for $s < t$ with $s, t \in \mathbb{T}$,
$0 < e_p(t, s) \le \exp\Big( \int_s^t p(r)\, \Delta r \Big) < 1.$
Lemma 8
([36]). If $\alpha \in \mathcal{R}^+$, then
$e^{-\alpha(t - s)} \le e_{\ominus \alpha}(t, s), \quad s \le t, \ s, t \in \mathbb{T}.$
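On a discrete time scale, $e_{\ominus\alpha}(t, s) = \prod_{r \in [s, t)} \frac{1}{1 + \mu(r)\alpha}$, so Lemma 8 reduces to the elementary bound $e^{-x} \le \frac{1}{1+x}$ applied factor by factor. A numerical spot-check (the helper name `exp_ominus` and the discrete-scale reduction are assumptions of this sketch, consistent with item 3 of Lemma 3):

```python
import math

def exp_ominus(T, alpha, t, s):
    """e_{ominus alpha}(t, s) on a sorted discrete time scale T, for t >= s in T."""
    prod = 1.0
    for r in T:
        if s <= r < t:
            mu_r = min(x for x in T if x > r) - r   # graininess at r
            prod *= 1.0 / (1.0 + mu_r * alpha)
    return prod

T = [0, 1, 2, 4, 7]
alpha = 0.8
for s in T:
    for t in T:
        if t >= s:
            # Lemma 8: exp(-alpha (t - s)) <= e_{ominus alpha}(t, s)
            assert math.exp(-alpha * (t - s)) <= exp_ominus(T, alpha, t, s) + 1e-12
print("bound verified on all pairs")
```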
We now give an important lemma to be used in the study of the existence of almost periodic solutions of (6) and (7).
Lemma 9.
If $\xi_i, \kappa_j, \beta_i(t) - \xi_i, \gamma_j(t) - \kappa_j > 0$ with $-\xi_i, -\kappa_j, -(\beta_i(t) - \xi_i), -(\gamma_j(t) - \kappa_j) \in \mathcal{R}^+$, and $\beta_i(t), \gamma_j(t)$ are almost periodic, then
$e_{-\xi_i}(t + \tau, \sigma(s) + \tau) - e_{-\xi_i}(t, \sigma(s)) = 0, \quad e_{-\kappa_j}(t + \tau, \sigma(s) + \tau) - e_{-\kappa_j}(t, \sigma(s)) = 0. \quad (35)$
$E \Big| \int_{-\infty}^t \big( e_{-(\beta_i - \xi_i)}(t + \tau, \sigma(s + \tau)) - e_{-(\beta_i - \xi_i)}(t, \sigma(s)) \big)\, \Delta s \Big|^2 \le \hat{\varepsilon} \Big( \frac{2}{(\underline{\beta}_i - \xi_i)^2} + \frac{\bar{\mu}}{\underline{\beta}_i - \xi_i} \Big)^2 e^{2(\underline{\beta}_i - \xi_i)\bar{\mu}}. \quad (36)$
$E \Big| \int_{-\infty}^t \big( e_{-(\gamma_j - \kappa_j)}(t + \tau, \sigma(s + \tau)) - e_{-(\gamma_j - \kappa_j)}(t, \sigma(s)) \big)\, \Delta s \Big|^2 \le \hat{\varepsilon} \Big( \frac{2}{(\underline{\gamma}_j - \kappa_j)^2} + \frac{\bar{\mu}}{\underline{\gamma}_j - \kappa_j} \Big)^2 e^{2(\underline{\gamma}_j - \kappa_j)\bar{\mu}}. \quad (37)$
$E \int_{-\infty}^t \big( e_{-(\beta_i - \xi_i)}(t + \tau, \sigma(s + \tau)) - e_{-(\beta_i - \xi_i)}(t, \sigma(s)) \big)^2\, \Delta s \le \hat{\varepsilon} \Big( \frac{4}{(\underline{\beta}_i - \xi_i)^3} + \frac{4\bar{\mu}}{(\underline{\beta}_i - \xi_i)^2} \Big) e^{2(\underline{\beta}_i - \xi_i)\bar{\mu}}. \quad (38)$
$E \int_{-\infty}^t \big( e_{-(\gamma_j - \kappa_j)}(t + \tau, \sigma(s + \tau)) - e_{-(\gamma_j - \kappa_j)}(t, \sigma(s)) \big)^2\, \Delta s \le \hat{\varepsilon} \Big( \frac{4}{(\underline{\gamma}_j - \kappa_j)^3} + \frac{4\bar{\mu}}{(\underline{\gamma}_j - \kappa_j)^2} \Big) e^{2(\underline{\gamma}_j - \kappa_j)\bar{\mu}}. \quad (39)$
Proof. 
Using Lemmas 2 and 3, we obtain
$\big( e_{-\xi_i}(r + \tau, \sigma(s) + \tau) \big)^{\Delta_r} = -\xi_i\, e_{-\xi_i}(r + \tau, \sigma(s) + \tau), \quad (40)$
and
$\big( e_{-\xi_i}(t, r) \big)^{\Delta_r} = \xi_i\, e_{-\xi_i}(t, \sigma(r)). \quad (41)$
Multiplying both sides of (40) by $e_{-\xi_i}(t, \sigma(r))$ and using (41),
$\big( e_{-\xi_i}(r + \tau, \sigma(s) + \tau)\, e_{-\xi_i}(t, r) \big)^{\Delta_r} = 0. \quad (42)$
Integrating (42) from σ(s) to t, we have
$e_{-\xi_i}(r + \tau, \sigma(s) + \tau)\, e_{-\xi_i}(t, r) \Big|_{r = \sigma(s)}^{r = t} = 0.$
Similarly, we can obtain
$e_{-\kappa_j}(r + \tau, \sigma(s) + \tau)\, e_{-\kappa_j}(t, r) \Big|_{r = \sigma(s)}^{r = t} = 0,$
which means (35) holds.
For (36), it follows from Lemma 2 that
$\big( e_{-(\beta_i - \xi_i)}(r + \tau, \sigma(s) + \tau) \big)^{\Delta_r} = -(\beta_i(r + \tau) - \xi_i)\, e_{-(\beta_i - \xi_i)}(r + \tau, \sigma(s) + \tau) = -(\beta_i(r) - \xi_i)\, e_{-(\beta_i - \xi_i)}(r + \tau, \sigma(s) + \tau) + (\beta_i(r) - \beta_i(r + \tau))\, e_{-(\beta_i - \xi_i)}(r + \tau, \sigma(s) + \tau). \quad (43)$
Multiplying both sides of (43) by $e_{-(\beta_i - \xi_i)}(t, \sigma(r))$ and integrating from σ(s) to t, using Lemma 3,
$\big( e_{-(\beta_i - \xi_i)}(t, r) \big)^{\Delta_r} = (\beta_i(r) - \xi_i)\, e_{-(\beta_i - \xi_i)}(t, \sigma(r)),$
we have
E t e ( β i ξ i ) ( r + τ , σ ( s ) + τ ) e ( β i ξ i ) ( t , r ) r = σ ( s ) r = t Δ s 2 = E t e ( β i ξ i ) ( t + τ , σ ( s ) + τ ) e ( β i ξ i ) ( t , σ ( s ) ) Δ s 2 E t σ ( s ) t e ( β i ξ i ) ( t , σ ( r ) ) e ( β i ξ i ) ( r + τ , σ ( s ) + τ ) ( β i ( r ) β i ( r + τ ) ) Δ r Δ s 2 ε ^ E t σ ( s ) t e ( β i ξ i ) ( t , σ ( r ) ) e ( β i ξ i ) ( r + τ , σ ( s ) + τ ) Δ r Δ s 2 ,
where
$e_{-(\beta_i - \xi_i)}(t, \sigma(r))\, e_{-(\beta_i - \xi_i)}(r + \tau, \sigma(s) + \tau) \le e^{-(\underline{\beta}_i - \xi_i)(t - \sigma(r))}\, e^{-(\underline{\beta}_i - \xi_i)(r - \sigma(s))} = e^{-(\underline{\beta}_i - \xi_i)(t - r)}\, e^{(\underline{\beta}_i - \xi_i)(\sigma(r) - r)}\, e^{-(\underline{\beta}_i - \xi_i)(r - \sigma(s))} \le e^{-(\underline{\beta}_i - \xi_i)(t - \sigma(s))}\, e^{(\underline{\beta}_i - \xi_i)\bar{\mu}}$
by using Lemma 7. With $\sigma(s + \tau) = \sigma(s) + \tau$ derived from Lemma 1, it follows from (44) that
E t e ( β i ξ i ) ( t + τ , σ ( s + τ ) ) e ( β i ξ i ) ( t , σ ( s ) ) Δ s 2 = E t e ( β i ξ i ) ( t + τ , σ ( s ) + τ ) e ( β i ξ i ) ( t , σ ( s ) ) Δ s 2 ε ^ E t e ( β ̲ i ξ i ) μ ¯ e ( β ̲ i ξ i ) ( t σ ( s ) ) ( t σ ( s ) ) Δ s 2 .
Using $(t - \sigma(s)) \le \frac{2}{\underline{\beta}_i - \xi_i}\, e^{\frac{\underline{\beta}_i - \xi_i}{2}(t - \sigma(s))}$ and Lemmas 3 and 8, it follows from (45) that
E t e ( β i ξ i ) ( t + τ , σ ( s + τ ) ) e ( β i ξ i ) ( t , σ ( s ) ) Δ s 2 4 ε ^ ( β ̲ i ξ i ) 2 e 2 ( β ̲ i ξ i ) μ ¯ E t e 1 2 ( β ̲ i ξ i ) ( t σ ( s ) ) Δ s 2 4 ε ^ ( β ̲ i ξ i ) 2 e 2 ( β ̲ i ξ i ) μ ¯ E t e ( 1 2 ( β ̲ i ξ i ) ) ( t , σ ( s ) ) Δ s 2 ε ^ ( 2 ( β ̲ i ξ i ) 2 + μ ¯ β ̲ i ξ i ) 2 e 2 ( β ̲ i ξ i ) μ ¯ .
Thus (36) holds. Similarly, (37) can be derived.
For (38), using $(t - \sigma(s)) \le \frac{2}{\underline{\beta}_i - \xi_i}\, e^{\frac{\underline{\beta}_i - \xi_i}{2}(t - \sigma(s))}$ and Lemmas 3 and 8,
E t ( e ( β i ξ i ) ( t + τ , σ ( s + τ ) ) e ( β i ξ i ) ( t , σ ( s ) ) ) 2 Δ s ε ^ E t [ e ( β ̲ i ξ i ) μ ¯ e ( β ̲ i ξ i ) ( t σ ( s ) ) ( t σ ( s ) ) ] 2 Δ s ε ^ 4 ( β ̲ i ξ i ) 2 e 2 ( β ̲ i ξ i ) μ ¯ E t e ( β ̲ i ξ i ) ( t σ ( s ) ) Δ s ε ^ 4 ( β ̲ i ξ i ) 2 e 2 ( β ̲ i ξ i ) μ ¯ E t e ( β ̲ i ξ i ) ( t , σ ( s ) ) Δ s ε ^ ( 4 ( β ̲ i ξ i ) 3 + 4 μ ( β ̲ i ξ i ) 2 ) e 2 ( β ̲ i ξ i ) μ ¯ .
Similarly, (39) can be derived. □
Let $\mathbb{Y} = \{ z \in BC_{\mathcal{F}_0}^b(\mathbb{T}, \mathbb{R}^{2n+2m}) \mid z_k(t) \text{ is almost periodic}, \ \|z\|_{\mathbb{Y}} \le \tilde{R}, \ k = 1, 2, \ldots, 2n + 2m \}$ with norm
$\|z\|_{\mathbb{Y}} = \max_{1 \le k \le 2n+2m} \sup_{t \in \mathbb{T}} E(|z_k(t)|^2),$
where $\tilde{R}$ is a finite constant to be determined later; then $\mathbb{Y}$ is a Banach space [37].
Theorem 2
(almost periodic solution). Let Assumptions 1 and 3 hold. If
$M = \max_{1 \le i \le n,\, 1 \le j \le m} \{ M_{1i}, M_{2j}, M_{3i}, M_{4j} \} < 1,$
where
$M_{1i} = \frac{1}{\xi_i^2}, \quad M_{2j} = \frac{1}{\kappa_j^2},$
$M_{3i} = \frac{5}{(\underline{\beta}_i - \xi_i)^2} \Big[ \Psi_i + \Big( \sum_{j=1}^m \bar{c}_{ij} L_j^f \Big)^2 + \Big( \sum_{j=1}^m \bar{d}_{ij} L_j^f \Big)^2 + \Big( \sum_{j=1}^m \bar{h}_{ij} \bar{\tau}_{ij}^{(2)} L_j^f \Big)^2 + (\underline{\beta}_i - \xi_i) \Big( \sum_{j=1}^m L_{ij}^\delta \Big)^2 \Big],$
$M_{4j} = \frac{5}{(\underline{\gamma}_j - \kappa_j)^2} \Big[ \Gamma_j + \Big( \sum_{i=1}^n \bar{l}_{ji} L_i^g \Big)^2 + \Big( \sum_{i=1}^n \bar{p}_{ji} L_i^g \Big)^2 + \Big( \sum_{i=1}^n \bar{q}_{ji} \bar{\zeta}_{ji}^{(2)} L_i^g \Big)^2 + (\underline{\gamma}_j - \kappa_j) \Big( \sum_{i=1}^n L_{ji}^\varphi \Big)^2 \Big],$
then (6) and (7) have a unique almost periodic solution in Y .
Proof. 
From (8), we have
$x_i(t) = e_{-\xi_i}(t, t_0)\, x_i(t_0) + \int_{t_0}^t e_{-\xi_i}(t, \sigma(s))\, u_i(s)\, \Delta s, \quad (50)$
where $t_0, t \in \mathbb{T}$, $t \ge t_0$. Letting $t_0 \to -\infty$ in (50), we have
$x_i(t) = \int_{-\infty}^t e_{-\xi_i}(t, \sigma(s))\, u_i(s)\, \Delta s. \quad (51)$
Similarly, it follows from (9)–(11) that
u i ( t ) = t e ( β i ξ i ) ( t , σ ( s ) ) ( a i ( s ) β i ( s ) ξ i + ξ i 2 ) x i ( s ) Δ s + t e ( β i ξ i ) ( t , σ ( s ) ) j = 1 m ( c i j ( s ) f j ( y j ( s ) ) + d i j ( s ) f j ( y j ( s τ i j ( 1 ) ( s ) ) ) ) Δ s + t e ( β i ξ i ) ( t , σ ( s ) ) j = 1 m h i j ( s ) s τ i j ( 2 ) ( s ) s f j ( y j ( r ) ) Δ r Δ s + t e ( β i ξ i ) ( t , σ ( s ) ) I i ( s ) Δ s + t e ( β i ξ i ) ( t , σ ( s ) ) j = 1 m δ i j ( y j ( s ) ) Δ w 1 j ( s ) ,
$y_j(t) = \int_{-\infty}^t e_{-\kappa_j}(t, \sigma(s))\, v_j(s)\, \Delta s, \quad (53)$
v j ( t ) = t e ( γ j κ j ) ( t , σ ( s ) ) ( b j ( s ) γ j ( s ) κ j + κ j 2 ) y j ( s ) Δ s + t e ( γ j κ j ) ( t , σ ( s ) ) i = 1 n ( l j i ( s ) g i ( x i ( s ) ) + p j i ( s ) g i ( x i ( s ζ j i ( 1 ) ( s ) ) ) ) Δ s + t e ( γ j κ j ) ( t , σ ( s ) ) i = 1 n q j i ( s ) s ζ j i ( 2 ) ( s ) s g i ( x i ( r ) ) Δ r Δ s + t e ( γ j κ j ) ( t , σ ( s ) ) J j ( s ) Δ s + t e ( γ j κ j ) ( t , σ ( s ) ) i = 1 n φ j i ( x i ( s ) ) Δ w 2 i ( s ) .
Define an operator Θ on Y with
Θ z = ( ( Θ z ) 1 , . . . , ( Θ z ) n , ( Θ z ) n + 1 , . . . , ( Θ z ) 2 n , ( Θ z ) 2 n + 1 , . . . , ( Θ z ) 2 n + m , . . . , ( Θ z ) 2 n + 2 m ) T ,
in which
$(\Theta z)_i(t) = \int_{-\infty}^t e_{-\xi_i}(t, \sigma(s))\, u_i(s)\, \Delta s, \quad (55)$
( Θ z ) n + i ( t ) = t e ( β i ξ i ) ( t , σ ( s ) ) ( a i ( s ) β i ( s ) ξ i + ξ i 2 ) x i ( s ) Δ s + t e ( β i ξ i ) ( t , σ ( s ) ) j = 1 m ( c i j ( s ) f j ( y j ( s ) ) + d i j ( s ) f j ( y j ( s τ i j ( 1 ) ( s ) ) ) ) Δ s + t e ( β i ξ i ) ( t , σ ( s ) ) j = 1 m h i j ( s ) s τ i j ( 2 ) ( s ) s f j ( y j ( r ) ) Δ r Δ s + t e ( β i ξ i ) ( t , σ ( s ) ) I i ( s ) Δ s + t e ( β i ξ i ) ( t , σ ( s ) ) j = 1 m δ i j ( y j ( s ) ) Δ w 1 j ( s ) ,
$(\Theta z)_{2n+j}(t) = \int_{-\infty}^t e_{-\kappa_j}(t, \sigma(s))\, v_j(s)\, \Delta s, \quad (57)$
( Θ z ) 2 n + m + j ( t ) = t e ( γ j κ j ) ( t , σ ( s ) ) ( b j ( s ) γ j ( s ) κ j + κ j 2 ) y j ( s ) Δ s + t e ( γ j κ j ) ( t , σ ( s ) ) i = 1 n ( l j i ( s ) g i ( x i ( s ) ) + p j i ( s ) g i ( x i ( s ζ j i ( 1 ) ( s ) ) ) ) Δ s + t e ( γ j κ j ) ( t , σ ( s ) ) i = 1 n q j i ( s ) s ζ j i ( 2 ) ( s ) s g i ( x i ( r ) ) Δ r Δ s + t e ( γ j κ j ) ( t , σ ( s ) ) J j ( s ) Δ s + t e ( γ j κ j ) ( t , σ ( s ) ) i = 1 n φ j i ( x i ( s ) ) Δ w 2 i ( s ) .
First, we prove that $\Theta z$ is almost periodic. Using Lemma 9, we obtain
$e_{-\xi_i}(t + \tau, \sigma(s) + \tau) - e_{-\xi_i}(t, \sigma(s)) = 0. \quad (59)$
Using Lemma 4, it follows from (59) that
$E|(\Theta z)_i(t + \tau) - (\Theta z)_i(t)|^2 \le 2 E \Big| \int_{-\infty}^t \big( e_{-\xi_i}(t + \tau, \sigma(s + \tau)) - e_{-\xi_i}(t, \sigma(s)) \big)\, u_i(s + \tau)\, \Delta s \Big|^2 + 2 E \Big| \int_{-\infty}^t e_{-\xi_i}(t, \sigma(s)) \big( u_i(s + \tau) - u_i(s) \big)\, \Delta s \Big|^2 \le \frac{2}{\xi_i^2}\, \hat{\varepsilon}. \quad (60)$
Since
E ( Θ z ) n + i ( t + τ ) ( Θ z ) n + i ( t ) 2 12 E | t ( e ( β i ξ i ) ( t + τ , σ ( s + τ ) ) e ( β i ξ i ) ( t , σ ( s ) ) ) × ( β i ( s ) ξ i a i ( s ) ξ i 2 ) x i ( s + τ ) Δ s | 2 + 12 E t e ( β i ξ i ) ( t , σ ( s ) ) ( ( β i ( s + τ ) ξ i a i ( s + τ ) ξ i 2 ) x i ( s + τ ) ( β i ( s ) ξ i a i ( s ) ξ i 2 ) x i ( s ) ) Δ s 2 + 12 E t ( e ( β i ξ i ) ( t + τ , σ ( s + τ ) ) e ( β i ξ i ) ( t , σ ( s ) ) ) j = 1 m c i j ( s + τ ) f j ( y j ( s + τ ) ) Δ s 2 + 12 E t e ( β i ξ i ) ( t , σ ( s ) ) ( j = 1 m c i j ( s + τ ) f j ( y j ( s + τ ) ) j = 1 m c i j ( s ) f j ( y j ( s ) ) ) Δ s 2 + 12 E t ( e ( β i ξ i ) ( t + τ , σ ( s + τ ) ) e ( β i ξ i ) ( t , σ ( s ) ) ) × j = 1 m d i j ( s + τ ) f j ( y j ( s + τ τ i j ( 1 ) ( s ) ) ) Δ s 2 + 12 E t e ( β i ξ i ) ( t , σ ( s ) ) × ( j = 1 m d i j ( s + τ ) f j ( y j ( s + τ τ i j ( 1 ) ( s ) ) ) j = 1 m d i j ( s ) f j ( y j ( s τ i j ( 1 ) ( s ) ) ) ) Δ s 2 + 12 E t ( e ( β i ξ i ) ( t + τ , σ ( s + τ ) ) e ( β i ξ i ) ( t , σ ( s ) ) ) × j = 1 m h i j ( s + τ ) s + τ τ i j ( 2 ) ( s ) s + τ f j ( y j ( r ) ) Δ r Δ s 2 + 12 E t e ( β i ξ i ) ( t , σ ( s ) ) × ( j = 1 m h i j ( s + τ ) s + τ τ i j ( 2 ) ( s ) s + τ f j ( y j ( r ) ) Δ r j = 1 m h i j ( s ) s τ i j ( 2 ) ( s ) s f j ( y j ( r ) ) Δ r Δ s 2 + 12 E t ( e ( β i ξ i ) ( t + τ , σ ( s + τ ) ) e ( β i ξ i ) ( t , σ ( s ) ) ) I i ( s + τ ) Δ s 2 + 12 E t e ( β i ξ i ) ( t , σ ( s ) ) ( I i ( s + τ ) I i ( s ) ) Δ s 2 + 12 E t ( e ( β i ξ i ) ( t + τ , σ ( s + τ ) ) e ( β i ξ i ) ( t , σ ( s ) ) ) j = 1 m δ i j ( y j ( s + τ ) ) Δ w 1 j ( s ) 2 + 12 E t e ( β i ξ i ) ( t , σ ( s ) ) ( j = 1 m δ i j ( y j ( s + τ ) ) j = 1 m δ i j ( y j ( s ) ) ) Δ w 1 j ( s ) 2 = υ = 1 12 Ω υ ,
where
Ω 1 12 ε ^ R ˜ Ψ i ( 2 ( β ̲ i ξ i ) 2 + μ ¯ β ̲ i ξ i ) 2 e 2 ( β ̲ i ξ i ) μ ¯ , Ω 2 12 ε ^ ( β ̲ i ξ i ) 2 ( 2 Ψ i + 2 ( ξ i 1 ) 2 R ˜ ) , Ω 3 12 ε ^ R ˜ ( j = 1 m c ¯ i j L j f ) 2 ( 2 ( β ̲ i ξ i ) 2 + μ ¯ β ̲ i ξ i ) 2 e 2 ( β ̲ i ξ i ) μ ¯ , Ω 4 12 ε ^ ( β ̲ i ξ i ) 2 [ 2 ( j = 1 m c ¯ i j L j f ) 2 + 2 ( j = 1 m L j f ) 2 R ˜ ] , Ω 5 12 ε ^ R ˜ ( j = 1 m d ¯ i j L j f ) 2 ( 2 ( β ̲ i ξ i ) 2 + μ ¯ β ̲ i ξ i ) 2 e 2 ( β ̲ i ξ i ) μ ¯ , Ω 6 12 ε ^ ( β ̲ i ξ i ) 2 [ 2 ( j = 1 m d ¯ i j L j f ) 2 + 2 ( j = 1 m L j f ) 2 R ˜ ] , Ω 7 12 ε ^ R ˜ ( j = 1 m h ¯ i j τ ¯ i j ( 2 ) L j f ) 2 ( 2 ( β ̲ i ξ i ) 2 + μ ¯ β ̲ i ξ i ) 2 e 2 ( β ̲ i ξ i ) μ ¯ , Ω 8 12 ε ^ ( β ̲ i ξ i ) 2 [ 2 ( j = 1 m h ¯ i j τ ¯ i j ( 2 ) L j f ) 2 + 2 ( j = 1 m τ ¯ i j ( 2 ) L j f ) 2 R ˜ ] , Ω 9 12 ε ^ I ¯ i 2 ( 2 ( β ̲ i ξ i ) 2 + μ ¯ β ̲ i ξ i ) 2 e 2 ( β ̲ i ξ i ) μ ¯ , Ω 10 12 ε ^ ( β ̲ i ξ i ) 2 .
Using Lemma 5 and (38),
$\Omega_{11} \le 12 \hat{\varepsilon} \tilde{R} \Big( \sum_{j=1}^m L_{ij}^\delta \Big)^2 \Big( \frac{4}{(\underline{\beta}_i - \xi_i)^3} + \frac{4\bar{\mu}}{(\underline{\beta}_i - \xi_i)^2} \Big) e^{2(\underline{\beta}_i - \xi_i)\bar{\mu}}. \quad (62)$
Using Lemmas 5 and 7,
$\Omega_{12} \le \frac{12 \hat{\varepsilon}}{\underline{\beta}_i - \xi_i} \Big( \sum_{j=1}^m L_{ij}^\delta \Big)^2. \quad (63)$
It follows from (61)–(63) that
E ( Θ z ) n + i ( t + τ ) ( Θ z ) n + i ( t ) 2 12 ε ^ R ˜ ( 2 ( β ̲ i ξ i ) 2 + μ ¯ β ̲ i ξ i ) 2 e 2 ( β ̲ i ξ i ) μ ¯ [ Ψ i + ( j = 1 m c ¯ i j L j f ) 2 + ( j = 1 m d ¯ i j L j f ) 2 + ( j = 1 m h ¯ i j τ ¯ i j ( 2 ) L j f ) 2 + 1 R ˜ I ¯ i 2 ] + 12 ε ^ R ˜ ( j = 1 m L i j δ ) 2 ( 4 ( β ̲ i ξ i ) 3 + 4 μ ¯ ( β ̲ i ξ i ) 2 ) e 2 ( β ̲ i ξ i ) μ ¯ + 24 ε ^ ( β ̲ i ξ i ) 2 [ Ψ i + ( ξ i 1 ) 2 R ˜ + ( j = 1 m c ¯ i j L j f ) 2 + ( j = 1 m d ¯ i j L j f ) 2 + 2 ( j = 1 m L j f ) 2 R ˜ + ( j = 1 m h ¯ i j τ ¯ i j ( 2 ) L j f ) 2 + ( j = 1 m τ ¯ i j ( 2 ) L j f ) 2 R ˜ + 1 2 + 1 2 ( β ̲ i ξ i ) ( j = 1 m L i j δ ) 2 ] .
Similarly to (60) and (64),
$E|(\Theta z)_{2n+j}(t + \tau) - (\Theta z)_{2n+j}(t)|^2 \le 2 E \Big| \int_{-\infty}^t \big( e_{-\kappa_j}(t + \tau, \sigma(s + \tau)) - e_{-\kappa_j}(t, \sigma(s)) \big)\, v_j(s + \tau)\, \Delta s \Big|^2 + 2 E \Big| \int_{-\infty}^t e_{-\kappa_j}(t, \sigma(s)) \big( v_j(s + \tau) - v_j(s) \big)\, \Delta s \Big|^2 \le \frac{2}{\kappa_j^2}\, \hat{\varepsilon}, \quad (65)$
E ( Ψ z ) 2 n + m + j ( t + τ ) ( Ψ z ) 2 n + m + j ( t ) 2 12 ε ^ R ˜ ( 2 ( γ ̲ j κ j ) 2 + μ ¯ γ ̲ j κ j ) 2 e 2 ( γ ̲ j κ j ) μ ¯ [ Γ j + ( i = 1 n l ¯ j i L i g ) 2 + ( i = 1 n p ¯ j i L i g ) 2 + ( i = 1 n q ¯ j i ζ ¯ j i ( 2 ) L i g ) 2 + 1 R ˜ J ¯ j 2 ] + 12 ε ^ R ˜ ( i = 1 n L j i φ ) 2 ( 4 ( γ ̲ j κ j ) 3 + 4 μ ¯ ( γ ̲ j κ j ) 2 ) e 2 ( γ ̲ j κ j ) μ ¯ + 24 ε ^ ( γ ̲ j κ j ) 2 [ Γ j + ( κ j 1 ) 2 R ˜ + ( i = 1 n l ¯ j i L i g ) 2 + ( i = 1 n p ¯ j i L i g ) 2 + 2 ( i = 1 n L i g ) 2 R ˜ + ( i = 1 n q ¯ j i ζ ¯ j i ( 2 ) L i g ) 2 + ( i = 1 n ζ ¯ j i ( 2 ) L i g ) 2 R ˜ + 1 2 + 1 2 ( γ ̲ j κ j ) ( i = 1 n L j i φ ) 2 ] .
From (60) and (64)–(66), for any $\varepsilon > 0$, there exists a corresponding $\hat{\varepsilon}$ such that
$E|(\Theta z)_k(t + \tau) - (\Theta z)_k(t)|^2 \le \varepsilon \quad \text{for } k = 1, 2, \ldots, 2n + 2m.$
Therefore, $\Theta z$ is almost periodic.
Second, we prove $\|\Theta z\|_{\mathbb{Y}} \le \tilde{R}$. It follows from (55)–(58) that
$E|(\Theta z)_i(t)|^2 \le E \Big| \int_{-\infty}^t e_{-\xi_i}(t, \sigma(s))\, u_i(s)\, \Delta s \Big|^2 \le \frac{1}{\xi_i^2} \|z\|_{\mathbb{Y}}, \quad (67)$
E ( Θ z ) n + i ( t ) 2 6 E t e ( β i ξ i ) ( t , σ ( s ) ) ( β i ξ i a i ξ i 2 ) x i ( s ) Δ s 2 + 6 E t e ( β i ξ i ) ( t , σ ( s ) ) j = 1 m c i j ( s ) f j ( y j ( s ) ) Δ s 2 + 6 E t e ( β i ξ i ) ( t , σ ( s ) ) j = 1 m d i j ( s ) f j ( y j ( s τ i j ( 1 ) ( s ) ) ) Δ s 2 + 6 E t e ( β i ξ i ) ( t , σ ( s ) ) j = 1 m h i j ( s ) s τ i j ( 2 ) ( s ) s f j ( y j ( r ) ) Δ r Δ s 2 + 6 E t e ( β i ξ i ) ( t , σ ( s ) ) I i ( s ) Δ s 2 + 6 E t e ( β i ξ i ) ( t , σ ( s ) ) j = 1 m δ i j ( y j ( s ) ) Δ w 1 j ( s ) 2 6 z Y ( β ̲ i ξ i ) 2 [ Ψ i + ( j = 1 n c ¯ i j L j f ) 2 + ( j = 1 n d ¯ i j L j f ) 2 + ( j = 1 n h ¯ i j τ ¯ i j ( 2 ) L j f ) 2 + ( β ̲ i ξ i ) ( j = 1 n L i j δ ) 2 ] + 6 E t e ( β i ξ i ) ( t , σ ( s ) ) I i ( s ) Δ s 2 ,
E ( Θ z ) 2 n + j ( t ) 2 E t e κ j ( t , σ ( s ) ) v j ( s ) Δ s 2 1 κ j 2 z Y ,
E ( Θ z ) 2 n + m + j ( t ) 2 6 z Y ( γ ̲ j κ j ) 2 [ Γ i + ( i = 1 n l ¯ j i L i g ) 2 + ( i = 1 n p ¯ j i L i g ) 2 + ( i = 1 n q ¯ j i ζ ¯ j i ( 2 ) L i g ) 2 + ( γ ̲ j κ j ) ( i = 1 n L j i φ ) 2 ] + 6 E t e ( γ j κ j ) ( t , σ ( s ) ) J j ( s ) Δ s 2
Let
\[
\begin{aligned}
M_5&=\max_{1\le i\le n,\,1\le j\le m}\Big\{M_{1i},\ \tfrac{6}{5}M_{3i},\ M_{2j},\ \tfrac{6}{5}M_{4j}\Big\},\\
M_6&=\max_{1\le i\le n,\,1\le j\le m}\Big\{6\,\mathbb{E}\Big\|\int_{-\infty}^{t}e_{\ominus(\beta_i-\xi_i)}(t,\sigma(s))I_i(s)\,\Delta s\Big\|^2,\ 6\,\mathbb{E}\Big\|\int_{-\infty}^{t}e_{\ominus(\gamma_j-\kappa_j)}(t,\sigma(s))J_j(s)\,\Delta s\Big\|^2\Big\}.
\end{aligned}
\]
From (67)–(70),
\[
\|\Theta z\|_Y\le M_5\tilde{R}+M_6.
\]
Choosing $\tilde{R}\ge M_6/(1-M_5)$, we obtain $\|\Theta z\|_Y\le\tilde{R}$, which means $\Theta(Y_{\tilde{R}})\subseteq Y_{\tilde{R}}$.
Next, we prove that $\Theta$ is a contraction mapping. It follows from (55) that
\[
\big\|(\Theta z-\Theta z^*)_i(t)\big\|\le\int_{-\infty}^{t}e_{\ominus\xi_i}(t,\sigma(s))\big\|u_i(s)-u_i^*(s)\big\|\,\Delta s,
\]
\[
\mathbb{E}\big\|(\Theta z-\Theta z^*)_i(t)\big\|^2\le\mathbb{E}\Big\|\int_{-\infty}^{t}e_{\ominus\xi_i}(t,\sigma(s))\big(u_i(s)-u_i^*(s)\big)\,\Delta s\Big\|^2<\frac{1}{\xi_i^2}\|z-z^*\|_Y.
\]
Similarly, we have
\[
\begin{aligned}
\mathbb{E}\big\|(\Theta z-\Theta z^*)_{n+i}(t)\big\|^2
&\le 5\,\mathbb{E}\Big\|\int_{-\infty}^{t}e_{\ominus(\beta_i-\xi_i)}(t,\sigma(s))\big(\beta_i\xi_i-a_i-\xi_i^2\big)\big(x_i(s)-x_i^*(s)\big)\,\Delta s\Big\|^2\\
&\quad+5\,\mathbb{E}\Big\|\int_{-\infty}^{t}e_{\ominus(\beta_i-\xi_i)}(t,\sigma(s))\sum_{j=1}^m c_{ij}(s)\big(f_j(y_j(s))-f_j(y_j^*(s))\big)\,\Delta s\Big\|^2\\
&\quad+5\,\mathbb{E}\Big\|\int_{-\infty}^{t}e_{\ominus(\beta_i-\xi_i)}(t,\sigma(s))\sum_{j=1}^m d_{ij}(s)\big(f_j(y_j(s-\tau_{ij}^{(1)}(s)))-f_j(y_j^*(s-\tau_{ij}^{(1)}(s)))\big)\,\Delta s\Big\|^2\\
&\quad+5\,\mathbb{E}\Big\|\int_{-\infty}^{t}e_{\ominus(\beta_i-\xi_i)}(t,\sigma(s))\sum_{j=1}^m\int_{s-\tau_{ij}^{(2)}(s)}^{s}h_{ij}(r)\big(f_j(y_j(r))-f_j(y_j^*(r))\big)\,\Delta r\,\Delta s\Big\|^2\\
&\quad+5\,\mathbb{E}\Big\|\int_{-\infty}^{t}e_{\ominus(\beta_i-\xi_i)}(t,\sigma(s))\sum_{j=1}^m\big(\delta_{ij}(y_j(s))-\delta_{ij}(y_j^*(s))\big)\,\Delta w_{1j}(s)\Big\|^2\\
&<\frac{5}{(\underline{\beta}_i-\xi_i)^2}\Big[\Psi_i+\sum_{j=1}^m\big(\bar{c}_{ij}L_j^f\big)^2+\sum_{j=1}^m\big(\bar{d}_{ij}L_j^f\big)^2+\sum_{j=1}^m\big(\bar{\tau}_{ij}^{(2)}\bar{h}_{ij}L_j^f\big)^2+(\underline{\beta}_i-\xi_i)\Big(\sum_{j=1}^m L_{ij}^{\delta}\Big)^2\Big]\|z-z^*\|_Y,
\end{aligned}
\]
\[
\mathbb{E}\big\|(\Theta z-\Theta z^*)_{2n+j}(t)\big\|^2\le\mathbb{E}\Big\|\int_{-\infty}^{t}e_{\ominus\kappa_j}(t,\sigma(s))\big(v_j(s)-v_j^*(s)\big)\,\Delta s\Big\|^2<\frac{1}{\kappa_j^2}\|z-z^*\|_Y,
\]
\[
\mathbb{E}\big\|(\Theta z-\Theta z^*)_{2n+m+j}(t)\big\|^2<\frac{5}{(\underline{\gamma}_j-\kappa_j)^2}\Big[\Gamma_j+\sum_{i=1}^n\big(\bar{l}_{ji}L_i^g\big)^2+\sum_{i=1}^n\big(\bar{p}_{ji}L_i^g\big)^2+\sum_{i=1}^n\big(\bar{\zeta}_{ji}^{(2)}\bar{q}_{ji}L_i^g\big)^2+(\underline{\gamma}_j-\kappa_j)\Big(\sum_{i=1}^n L_{ji}^{\varphi}\Big)^2\Big]\|z-z^*\|_Y.
\]
Let $M_{1i}$, $M_{2j}$, $M_{3i}$, $M_{4j}$ be as defined in (49). If $M=\max_{1\le i\le n,\,1\le j\le m}\{M_{1i},M_{2j},M_{3i},M_{4j}\}<1$, then
\[
\|\Theta z-\Theta z^*\|_Y\le M\|z-z^*\|_Y<\|z-z^*\|_Y.
\]
Hence, $\Theta$ is a contraction mapping on $Y_{\tilde{R}}$. Therefore, $\Theta$ possesses a unique fixed point, and there exists a unique almost periodic solution of systems (6) and (7) in $Y_{\tilde{R}}$. □
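The Banach fixed-point argument used above can be illustrated with a small numerical sketch. The map below is a toy contraction on $\mathbb{R}^2$ (illustrative only, not the operator $\Theta$ of the paper, whose iterates live in the function space $Y_{\tilde{R}}$); iterating it converges geometrically to the unique fixed point, exactly as the contraction property $\|\Theta z-\Theta z^*\|_Y\le M\|z-z^*\|_Y$ with $M<1$ guarantees.

```python
import numpy as np

def banach_iterate(T, z0, tol=1e-12, max_iter=1000):
    """Iterate z_{k+1} = T(z_k) until successive iterates differ by less than tol."""
    z = np.asarray(z0, dtype=float)
    for k in range(max_iter):
        z_next = T(z)
        if np.linalg.norm(z_next - z) < tol:
            return z_next, k + 1
        z = z_next
    return z, max_iter

# A toy affine contraction on R^2: the matrix A has infinity-norm 0.4 < 1,
# so T is a contraction with constant M = 0.4 and has a unique fixed point.
A = np.array([[0.3, 0.1],
              [0.0, 0.4]])
b = np.array([1.0, -2.0])
T = lambda z: A @ z + b

fixed_point, iters = banach_iterate(T, np.zeros(2))
# The fixed point solves z = A z + b, i.e. (I - A) z = b.
exact = np.linalg.solve(np.eye(2) - A, b)
assert np.allclose(fixed_point, exact)
```

The error after $k$ iterations is bounded by $M^k$ times the initial error, which is the geometric convergence rate that the contraction constant $M<1$ delivers.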

4. Exponential Stability

Theorem 3.
If
\[
\max_{1\le i\le n,\,1\le j\le m}\Big\{2M_{1i},\ 2M_{2j},\ \tfrac{6}{5}M_{3i},\ \tfrac{6}{5}M_{4j}\Big\}<1,\tag{77}
\]
then the solution of SIBAMNNs (6) and (7) on time scales is exponentially stable.
Proof. 
Let $z_{k1}(t)$ and $z_{k2}(t)$ be two solutions of systems (6) and (7) with initial data $(\theta_{i1}^{(1)}(s),u_{i1}(t_0),\theta_{j1}^{(2)}(s),v_{j1}(t_0))$ and $(\theta_{i2}^{(1)}(s),u_{i2}(t_0),\theta_{j2}^{(2)}(s),v_{j2}(t_0))$, and set $\tilde{z}_k(t)=z_{k1}(t)-z_{k2}(t)$, $\tilde{x}_i(t)=x_{i1}(t)-x_{i2}(t)$, $\tilde{u}_i(t)=u_{i1}(t)-u_{i2}(t)$, $\tilde{y}_j(t)=y_{j1}(t)-y_{j2}(t)$, $\tilde{v}_j(t)=v_{j1}(t)-v_{j2}(t)$. For the initial states, set $\psi_i(s)=\theta_{i1}^{(1)}(s)-\theta_{i2}^{(1)}(s)$, $\psi_{n+i}(t_0)=u_{i1}(t_0)-u_{i2}(t_0)$, $\psi_{2n+j}(s)=\theta_{j1}^{(2)}(s)-\theta_{j2}^{(2)}(s)$, $\psi_{2n+m+j}(t_0)=v_{j1}(t_0)-v_{j2}(t_0)$ for $i=1,2,\ldots,n$, $j=1,2,\ldots,m$, $k=1,2,\ldots,2m+2n$. From (8), we have
\[
\tilde{z}_i(t)=e_{\ominus\xi_i}(t,t_0)\tilde{x}_i(t_0)+\int_{t_0}^{t}e_{\ominus\xi_i}(t,\sigma(s))\tilde{u}_i(s)\,\Delta s.\tag{78}
\]
Let $0<\alpha_1<\min_{1\le i\le n}\xi_i$. It follows from (78) that
\[
e_{\alpha_1}(t,t_0)\,\mathbb{E}\big\|\tilde{z}_i(t)\big\|^2\le 2\,\mathbb{E}\big\|\tilde{x}_i(t_0)\big\|^2+2e_{\alpha_1}(t,t_0)\,\mathbb{E}\Big\|\int_{t_0}^{t}e_{\ominus\xi_i}(t,\sigma(s))\tilde{u}_i(s)\,\Delta s\Big\|^2,\tag{79}
\]
where
\[
\mathbb{E}\Big\|\int_{t_0}^{t}e_{\ominus\xi_i}(t,\sigma(s))\tilde{u}_i(s)\,\Delta s\Big\|^2\le\frac{1}{\xi_i^2}\,\mathbb{E}\big\|\tilde{u}_i(t)\big\|^2.
\]
Then, from (79), we have
\[
e_{\alpha_1}(t,t_0)\,\mathbb{E}\big\|\tilde{z}_i(t)\big\|^2\le 2\,\mathbb{E}\big\|\tilde{z}_i(t_0)\big\|^2+\frac{2}{\xi_i^2}\,e_{\alpha_1}(t,t_0)\,\mathbb{E}\big\|\tilde{u}_i(t)\big\|^2.\tag{80}
\]
Let $0<\alpha_2<\min_{1\le i\le n}(\underline{\beta}_i-\xi_i)$, $0<\alpha_3<\min_{1\le j\le m}\kappa_j$ and $0<\alpha_4<\min_{1\le j\le m}(\underline{\gamma}_j-\kappa_j)$. Similarly, we have
\[
\begin{aligned}
e_{\alpha_2}(t,t_0)\,\mathbb{E}\big\|\tilde{z}_{n+i}(t)\big\|^2
&\le 6\,\mathbb{E}\big\|\tilde{z}_{n+i}(t_0)\big\|^2+\frac{6}{(\underline{\beta}_i-\xi_i)^2}\Psi_i\,e_{\alpha_2}(t,t_0)\,\mathbb{E}\big\|\tilde{x}_i(t)\big\|^2\\
&\quad+\frac{6}{(\underline{\beta}_i-\xi_i)^2}e_{\alpha_2}(t,t_0)\Big[\Big(\sum_{j=1}^m\bar{c}_{ij}L_j^f\Big)^2+\Big(\sum_{j=1}^m\bar{d}_{ij}L_j^f\Big)^2+\Big(\sum_{j=1}^m\bar{h}_{ij}\bar{\tau}_{ij}^{(2)}L_j^f\Big)^2+(\underline{\beta}_i-\xi_i)\Big(\sum_{j=1}^m L_{ij}^{\delta}\Big)^2\Big]\mathbb{E}\big\|\tilde{y}_j(t)\big\|^2,
\end{aligned}\tag{81}
\]
\[
e_{\alpha_3}(t,t_0)\,\mathbb{E}\big\|\tilde{z}_{2n+j}(t)\big\|^2\le 2\,\mathbb{E}\big\|\tilde{z}_{2n+j}(t_0)\big\|^2+\frac{2}{\kappa_j^2}\,e_{\alpha_3}(t,t_0)\,\mathbb{E}\big\|\tilde{v}_j(t)\big\|^2,\tag{82}
\]
\[
\begin{aligned}
e_{\alpha_4}(t,t_0)\,\mathbb{E}\big\|\tilde{z}_{2n+m+j}(t)\big\|^2
&\le 6\,\mathbb{E}\big\|\tilde{z}_{2n+m+j}(t_0)\big\|^2+\frac{6}{(\underline{\gamma}_j-\kappa_j)^2}\Gamma_j\,e_{\alpha_4}(t,t_0)\,\mathbb{E}\big\|\tilde{y}_j(t)\big\|^2\\
&\quad+\frac{6}{(\underline{\gamma}_j-\kappa_j)^2}e_{\alpha_4}(t,t_0)\Big[\Big(\sum_{i=1}^n\bar{l}_{ji}L_i^g\Big)^2+\Big(\sum_{i=1}^n\bar{p}_{ji}L_i^g\Big)^2+\Big(\sum_{i=1}^n\bar{q}_{ji}\bar{\zeta}_{ji}^{(2)}L_i^g\Big)^2+(\underline{\gamma}_j-\kappa_j)\Big(\sum_{i=1}^n L_{ji}^{\varphi}\Big)^2\Big]\mathbb{E}\big\|\tilde{x}_i(t)\big\|^2.
\end{aligned}\tag{83}
\]
From (80)–(83), choose
\[
C>\max_{1\le i\le n,\,1\le j\le m}\Big\{\frac{2}{1-2M_{1i}},\ \frac{2}{1-2M_{2j}},\ \frac{6}{1-\frac{6}{5}M_{3i}},\ \frac{6}{1-\frac{6}{5}M_{4j}}\Big\},
\]
and $0<\alpha<\min_{1\le l\le 4}\alpha_l$. If (77) holds, then $C>1$. Since $e_{\ominus\alpha}(t,t_0)\ge 1$ for $t\in[t_0-\vartheta,t_0]_{\mathbb{T}}$, we obtain
\[
\max_{k}\mathbb{E}\big\|\tilde{z}_k(t)\big\|^2\le Ce_{\ominus\alpha}(t,t_0)\|\psi\|_X,\quad k=1,2,\ldots,2m+2n,
\]
for the periodic solution and
\[
\max_{k}\mathbb{E}\big\|\tilde{z}_k(t)\big\|^2\le Ce_{\ominus\alpha}(t,t_0)\|\psi\|_Y,\quad k=1,2,\ldots,2m+2n,
\]
for the almost periodic solution. We claim that, for $t\in(t_0,+\infty)_{\mathbb{T}}$,
\[
\max_{k}\mathbb{E}\big\|\tilde{z}_k(t)\big\|^2\le Ce_{\ominus\alpha}(t,t_0)\|\psi\|_X,\qquad \max_{k}\mathbb{E}\big\|\tilde{z}_k(t)\big\|^2\le Ce_{\ominus\alpha}(t,t_0)\|\psi\|_Y;
\]
that is, for any $p>1$, the periodic solution satisfies
\[
\max_{1\le k\le 2n+2m}\mathbb{E}\big\|\tilde{z}_k(t)\big\|^2<pCe_{\ominus\alpha}(t,t_0)\|\psi\|_X,\quad t\in(t_0,+\infty)_{\mathbb{T}},\tag{84}
\]
and the almost periodic solution satisfies
\[
\max_{1\le k\le 2n+2m}\mathbb{E}\big\|\tilde{z}_k(t)\big\|^2<pCe_{\ominus\alpha}(t,t_0)\|\psi\|_Y,\quad t\in(t_0,+\infty)_{\mathbb{T}}.\tag{85}
\]
We give the proof by contradiction. Without loss of generality, assume that (84) does not hold; then there exist $t_1\in(t_0,+\infty)_{\mathbb{T}}$ and $i_0\in\{1,2,\ldots,2m+2n\}$ such that
\[
\mathbb{E}\big\|\tilde{z}_{i_0}(t_1)\big\|^2\ge pCe_{\ominus\alpha}(t_1,t_0)\|\psi\|_X,\qquad \mathbb{E}\big\|\tilde{z}_{i_0}(t)\big\|^2<pCe_{\ominus\alpha}(t,t_0)\|\psi\|_X,\quad t\in(t_0,t_1)_{\mathbb{T}},
\]
\[
\mathbb{E}\big\|\tilde{z}_l(t)\big\|^2<pCe_{\ominus\alpha}(t,t_0)\|\psi\|_X,\quad t\in(t_0,t_1]_{\mathbb{T}},\ l=1,2,\ldots,2m+2n,\ l\ne i_0.
\]
Hence, there exists a constant $\lambda\ge 1$ such that
\[
\mathbb{E}\big\|\tilde{z}_{i_0}(t_1)\big\|^2=\lambda pCe_{\ominus\alpha}(t_1,t_0)\|\psi\|_X,\qquad \mathbb{E}\big\|\tilde{z}_{i_0}(t)\big\|^2<\lambda pCe_{\ominus\alpha}(t,t_0)\|\psi\|_X,\quad t\in(t_0,t_1)_{\mathbb{T}},
\]
\[
\mathbb{E}\big\|\tilde{z}_l(t)\big\|^2<\lambda pCe_{\ominus\alpha}(t,t_0)\|\psi\|_X,\quad t\in(t_0,t_1]_{\mathbb{T}},\ l=1,2,\ldots,2m+2n,\ l\ne i_0.
\]
It follows from (78) that
\[
\begin{aligned}
\mathbb{E}\big\|\tilde{z}_{i_0}(t_1)\big\|^2
&=\mathbb{E}\Big\|e_{\ominus\xi_{i_0}}(t_1,t_0)\tilde{z}_{i_0}(t_0)+\int_{t_0}^{t_1}e_{\ominus\xi_{i_0}}(t_1,\sigma(s))\tilde{z}_{n+i_0}(s)\,\Delta s\Big\|^2\\
&\le 2\,\mathbb{E}\big\|e_{\ominus\xi_{i_0}}(t_1,t_0)\tilde{z}_{i_0}(t_0)\big\|^2+2\,\mathbb{E}\Big\|\int_{t_0}^{t_1}e_{\ominus\xi_{i_0}}(t_1,\sigma(s))\tilde{z}_{n+i_0}(s)\,\Delta s\Big\|^2\\
&<2\|\psi\|_X+\frac{2\lambda pC}{\xi_{i_0}^2}\|\psi\|_X<\lambda pC\|\psi\|_X\Big(\frac{2}{C}+\frac{2}{\xi_{i_0}^2}\Big).
\end{aligned}\tag{86}
\]
Since $C>2/(1-2M_{1i_0})$, we obtain $\mathbb{E}\|\tilde{z}_{i_0}(t_1)\|^2<\lambda pC\|\psi\|_X$ from (86), which is a contradiction. Therefore, (84) holds, and (85) follows similarly. Therefore,
\[
\|\tilde{z}\|_X<pCe_{\ominus\alpha}(t,t_0)\|\psi\|_X,\qquad \|\tilde{z}\|_Y<pCe_{\ominus\alpha}(t,t_0)\|\psi\|_Y,
\]
which establishes the exponential stability of the solutions of systems (6) and (7). □

5. Numerical Example

The accuracy and effectiveness of the obtained results are demonstrated by a numerical example in this section.
Let $n=m=2$ and consider the following SIBAMNNs on time scales:
\[
x_i^{\Delta^2}(t)=-\beta_i(t)x_i^{\Delta}(t)-a_i(t)x_i(t)+\sum_{j=1}^{2}c_{ij}(t)f_j(y_j(t))+\sum_{j=1}^{2}d_{ij}(t)f_j\big(y_j(t-\tau_{ij}^{(1)}(t))\big)+\sum_{j=1}^{2}h_{ij}(t)\int_{t-\tau_{ij}^{(2)}(t)}^{t}f_j(y_j(s))\,\Delta s+I_i(t)+\sum_{j=1}^{2}\delta_{ij}(y_j(t))w_{1j}^{\Delta}(t),\tag{87}
\]
\[
y_j^{\Delta^2}(t)=-\gamma_j(t)y_j^{\Delta}(t)-b_j(t)y_j(t)+\sum_{i=1}^{2}l_{ji}(t)g_i(x_i(t))+\sum_{i=1}^{2}p_{ji}(t)g_i\big(x_i(t-\zeta_{ji}^{(1)}(t))\big)+\sum_{i=1}^{2}q_{ji}(t)\int_{t-\zeta_{ji}^{(2)}(t)}^{t}g_i(x_i(s))\,\Delta s+J_j(t)+\sum_{i=1}^{2}\varphi_{ji}(x_i(t))w_{2i}^{\Delta}(t).\tag{88}
\]
Choose
\[
C=(c_{ij})_{2\times2}=\begin{pmatrix}0.5\cos t&0.3\cos t\\0.3\sin t&0.5\sin t\end{pmatrix},\quad
D=(d_{ij})_{2\times2}=\begin{pmatrix}0.1\cos t&0.08\cos t\\0.08\sin t&0.1\sin t\end{pmatrix},
\]
\[
H=(h_{ij})_{2\times2}=\begin{pmatrix}0.12\sin t&0.1\sin t\\0.1\cos t&0.12\cos t\end{pmatrix},\quad
P=(p_{ji})_{2\times2}=\begin{pmatrix}0.1\sin t&0.08\sin t\\0.08\cos t&0.1\cos t\end{pmatrix},
\]
\[
L=(l_{ji})_{2\times2}=\begin{pmatrix}0.5\sin t&0.3\sin t\\0.3\cos t&0.5\cos t\end{pmatrix},\quad
Q=(q_{ji})_{2\times2}=\begin{pmatrix}0.12\cos t&0.1\cos t\\0.1\sin t&0.12\sin t\end{pmatrix},
\]
\[
\begin{gathered}
\beta_1(t)=6.8+0.3\sin t,\quad \beta_2(t)=7+0.4\sin t,\quad \gamma_1(t)=6.8+0.3\cos t,\quad \gamma_2(t)=7+0.4\cos t,\\
a_1(t)=6.5+0.3\sin t,\quad a_2(t)=6.6+0.4\sin t,\quad b_1(t)=6.5+0.3\cos t,\quad b_2(t)=6.6+0.4\cos t,\\
f_1(y_1(t))=0.06\sin(y_1(t)),\quad f_2(y_2(t))=0.04\sin(y_2(t)),\quad g_1(x_1(t))=0.03\sin(2x_1(t)),\quad g_2(x_2(t))=0.02\sin(2x_2(t)),\\
\xi_1=\xi_2=1.5,\quad \kappa_1=\kappa_2=1.6,\quad I_1=J_2=0.85\sin t,\quad I_2=J_1=0.85\cos t,\\
\tau_{ij}^{(1)}=\zeta_{ji}^{(1)}=0.03,\quad \tau_{ij}^{(2)}=\zeta_{ji}^{(2)}=0,\quad \delta_{ij}(y_j(t))=0.01\sin(y_j(t)),\quad \varphi_{ji}(x_i(t))=0.01\sin(x_i(t)).
\end{gathered}
\]
(1) For $\mathbb{T}=\mathbb{R}$, Assumptions 1 and 2 and conditions (19) and (77) are satisfied under the above parameters, which means that (87) and (88) have a periodic solution that is exponentially stable (Figure 1 and Figure 2).
(2) For $\mathbb{T}=\bigcup_{k\in\mathbb{Z}}[a_k,b_k]$ with $a_k=2k\pi$ and $b_k=(2k+1)\pi$, Assumptions 1 and 2 and conditions (19) and (77) are satisfied under the above parameters, which means that (87) and (88) have a periodic solution that is exponentially stable.
(3) For general time scales, Assumptions 1 and 3 and condition (77) are satisfied under the above parameters, which means that (87) and (88) have an almost periodic solution that is exponentially stable.
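For case (1), $\mathbb{T}=\mathbb{R}$, the conclusions can be cross-checked by direct simulation. The sketch below integrates (87) and (88) with the parameters above using an Euler–Maruyama scheme (a standard discretization, not the method used in the paper); the step size, horizon, random seed, initial states and the constant initial history for the delay are our own illustrative choices. Two solutions driven by the same noise but started from different initial states are computed, and the gap between them contracts, as Theorem 3 predicts.

```python
import numpy as np

# Euler-Maruyama sketch for (87) and (88) on T = R (illustrative discretization).
rng = np.random.default_rng(0)
dt, t_end, tau = 1e-3, 10.0, 0.03
n_steps, n_delay = int(t_end / dt), int(tau / dt)

def f(y):   # y-layer activations f_1, f_2 from Section 5
    return np.array([0.06 * np.sin(y[0]), 0.04 * np.sin(y[1])])

def g(x):   # x-layer activations g_1, g_2
    return np.array([0.03 * np.sin(2 * x[0]), 0.02 * np.sin(2 * x[1])])

def coeffs(t):
    beta = np.array([6.8 + 0.3 * np.sin(t), 7.0 + 0.4 * np.sin(t)])
    gamma = np.array([6.8 + 0.3 * np.cos(t), 7.0 + 0.4 * np.cos(t)])
    a = np.array([6.5 + 0.3 * np.sin(t), 6.6 + 0.4 * np.sin(t)])
    b = np.array([6.5 + 0.3 * np.cos(t), 6.6 + 0.4 * np.cos(t)])
    C = np.array([[0.50 * np.cos(t), 0.30 * np.cos(t)],
                  [0.30 * np.sin(t), 0.50 * np.sin(t)]])
    D = np.array([[0.10 * np.cos(t), 0.08 * np.cos(t)],
                  [0.08 * np.sin(t), 0.10 * np.sin(t)]])
    L = np.array([[0.50 * np.sin(t), 0.30 * np.sin(t)],
                  [0.30 * np.cos(t), 0.50 * np.cos(t)]])
    P = np.array([[0.10 * np.sin(t), 0.08 * np.sin(t)],
                  [0.08 * np.cos(t), 0.10 * np.cos(t)]])
    I = np.array([0.85 * np.sin(t), 0.85 * np.cos(t)])
    J = np.array([0.85 * np.cos(t), 0.85 * np.sin(t)])
    return beta, gamma, a, b, C, D, L, P, I, J

def simulate(x0, y0, dW1, dW2):
    x, xd = np.array(x0, float), np.zeros(2)
    y, yd = np.array(y0, float), np.zeros(2)
    hx, hy, traj = [x.copy()], [y.copy()], []
    for k in range(n_steps):
        t = k * dt
        beta, gamma, a, b, C, D, L, P, I, J = coeffs(t)
        y_del = hy[max(k - n_delay, 0)]   # constant initial history (assumption)
        x_del = hx[max(k - n_delay, 0)]
        # tau^(2) = 0, so the distributed-delay integral terms vanish
        xdd = -beta * xd - a * x + C @ f(y) + D @ f(y_del) + I
        ydd = -gamma * yd - b * y + L @ g(x) + P @ g(x_del) + J
        # delta_ij(y_j) = 0.01 sin(y_j): the same scalar noise increment
        # sum_j 0.01 sin(y_j) dW_1j enters both components (likewise phi_ji)
        xd = xd + dt * xdd + (0.01 * np.sin(y)) @ dW1[k]
        yd = yd + dt * ydd + (0.01 * np.sin(x)) @ dW2[k]
        x, y = x + dt * xd, y + dt * yd
        hx.append(x.copy()); hy.append(y.copy())
        traj.append(np.concatenate([x, y]))
    return np.array(traj)

dW1 = rng.normal(0.0, np.sqrt(dt), (n_steps, 2))
dW2 = rng.normal(0.0, np.sqrt(dt), (n_steps, 2))
# two solutions driven by the SAME noise but with different initial states
traj_a = simulate([0.5, -0.5], [0.3, -0.3], dW1, dW2)
traj_b = simulate([-0.4, 0.6], [-0.2, 0.4], dW1, dW2)
gap = np.max(np.abs(traj_a - traj_b), axis=1)
assert np.max(np.abs(traj_a)) < 5.0      # trajectories stay bounded
assert gap[-1] < 1e-3 * gap[0]           # the gap contracts, as in Theorem 3
```

The strong damping coefficients $\beta_i,\gamma_j\approx 7$ dominate the weak couplings (Lipschitz constants at most $0.06$), which is exactly the regime in which condition (77) is satisfied and the trajectories attract each other.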
Remark 1.
It follows from the above analysis that a change in the time scale can alter the periodicity and the stability of the solutions of SIBAMNNs.
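The remark can be made concrete with the simplest dynamic equation $x^{\Delta}(t)=-\beta x(t)$: on $\mathbb{T}=\mathbb{R}$ the solution $e^{-\beta t}$ decays for every $\beta>0$, while on $\mathbb{T}=h\mathbb{Z}$ the time-scale exponential is $e_{-\beta}(t,0)=(1-h\beta)^{t/h}$, which decays only when $|1-h\beta|<1$. A minimal check (our own illustration, reusing $\beta=6.8$ from the example above):

```python
# x^Delta(t) = -beta * x(t): on T = hZ the time-scale exponential is
# e_{-beta}(t, 0) = (1 - h*beta)^(t/h), so stability depends on the graininess h.
beta = 6.8          # the constant part of beta_1(t) in Section 5

def final_value(h, t_end=10.0, x0=1.0):
    x = x0
    for _ in range(int(t_end / h)):
        x = x + h * (-beta * x)   # one Delta-derivative step on hZ
    return x

small_h = abs(final_value(0.01))   # |1 - 0.068| < 1: the solution decays
large_h = abs(final_value(0.4))    # |1 - 2.72| = 1.72 > 1: the solution blows up
assert small_h < 1e-6
assert large_h > 1e3
```

With $\beta=6.8$ the continuous-time equation is stable, yet the same equation on $h\mathbb{Z}$ with graininess $h=0.4$ diverges, which is precisely the time-scale sensitivity that Remark 1 describes.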

6. Conclusions

We derive criteria for the existence, uniqueness and exponential stability of both periodic and almost periodic solutions of SIBAMNNs on time scales, which unify and generalize the continuous and discrete cases and provide greater flexibility in handling time scales of practical importance. The methods for continuous-time inertial BAM neural networks in [38] and for discrete-time inertial neural networks in [39] cannot be applied directly to system (6). The exponential stability of the solutions on time scales is established without constructing a Lyapunov function, which offers a way to study the stability of neural networks on time scales for which Lyapunov functions are difficult to construct. Moreover, the inertial term, the stochastic perturbation and the distributed time delay on time scales are considered simultaneously, which makes the model more widely applicable. Furthermore, new estimates for exponential functions with almost periodic parameters on time scales are derived in Lemma 9. It is meaningful to study the existence and stability of periodic and almost periodic solutions of systems on time scales, since such systems unify the continuous and discrete situations. A possible future direction is to extend the techniques used in this paper to other types of neural networks on time scales. The study of pseudo-almost periodic and almost automorphic solutions of neural networks on time scales is another possible direction.

Author Contributions

Conceptualization, methodology, writing—original draft, validation, M.L.; supervision, funding acquisition, H.D.; resources, validation, Y.Z.; writing—reviewing and editing, validation, funding acquisition, project administration, Y.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Nos. 12105161, 11975143) and the Natural Science Foundation of Shandong Province (No. ZR2019QD018).

Data Availability Statement

Not applicable.

Acknowledgments

We would like to express our great appreciation to the editors and reviewers.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kosko, B. Bidirectional associative memories. IEEE Trans. Syst. Man Cybern. 1988, 18, 49–60.
  2. Raja, R.; Anthoni, S.M. Global exponential stability of BAM neural networks with time-varying delays: The discrete-time case. Commun. Nonlinear Sci. Numer. Simulat. 2011, 16, 613–622.
  3. Shao, Y.F. Existence of exponential periodic attractor of BAM neural networks with time-varying delays and impulses. Neurocomputing 2012, 93, 1–9.
  4. Yang, W.G. Existence of an exponential periodic attractor of periodic solutions for general BAM neural networks with time-varying delays and impulses. Appl. Math. Comput. 2012, 219, 569–582.
  5. Zhang, Z.Q.; Liu, K.Y.; Yang, Y. New LMI-based condition on global asymptotic stability concerning BAM neural networks of neutral type. Neurocomputing 2012, 81, 24–32.
  6. Zhu, Q.X.; Rakkiyappan, R.; Chandrasekar, A. Stochastic stability of Markovian jump BAM neural networks with leakage delays and impulse control. Neurocomputing 2014, 136, 136–151.
  7. Lin, F.; Zhang, Z.Q. Global asymptotic synchronization of a class of BAM neural networks with time delays via integrating inequality techniques. J. Syst. Sci. Complex. 2020, 33, 366–392.
  8. Zhou, F.Y.; Yao, H.X. Stability analysis for neutral-type inertial BAM neural networks with time-varying delays. Nonlinear Dynam. 2018, 92, 1583–1598.
  9. Roshani, G.H.; Hanus, R.; Khazaei, A.; Zych, M.; Nazemi, E.; Mosorov, V. Density and velocity determination for single-phase flow based on radiotracer technique and neural networks. Flow Meas. Instrum. 2018, 61, 9–14.
  10. Azimirad, V.; Ramezanlou, M.T.; Sotubadi, S.V.; Janabi-Sharifi, F. A consecutive hybrid spiking-convolutional (CHSC) neural controller for sequential decision making in robots. Neurocomputing 2022, 490, 319–336.
  11. Mozaffari, H.; Houmansadr, A. E2FL: Equal and equitable federated learning. arXiv 2022, arXiv:2205.10454.
  12. Lakshmanan, S.; Lim, C.P.; Prakash, M.; Nahavandi, S.; Balasubramaniam, P. Neutral-type of delayed inertial neural networks and their stability analysis using the LMI approach. Neurocomputing 2016, 230, 243–250.
  13. Kumar, R.; Das, S. Exponential stability of inertial BAM neural network with time-varying impulses and mixed time-varying delays via matrix measure approach. Commun. Nonlinear Sci. Numer. Simulat. 2019, 81, 105016.
  14. Xiao, Q.; Huang, T.W. Quasisynchronization of discrete-time inertial neural networks with parameter mismatches and delays. IEEE Trans. Cybern. 2021, 51, 2290–2295.
  15. Sun, G.; Zhang, Y. Exponential stability of impulsive discrete-time stochastic BAM neural networks with time-varying delay. Neurocomputing 2014, 131, 323–330.
  16. Pan, L.J.; Cao, J.D. Stability of bidirectional associative memory neural networks with Markov switching via ergodic method and the law of large numbers. Neurocomputing 2015, 168, 1157–1163.
  17. Hilger, S. Analysis on measure chains: A unified approach to continuous and discrete calculus. Results Math. 1990, 18, 18–56.
  18. Cieśliński, J.L.; Nikiciuk, T.; Waśkiewicz, K. The sine-Gordon equation on time scales. J. Math. Anal. Appl. 2015, 423, 1219–1230.
  19. Hovhannisyan, G. 3 soliton solution to sine-Gordon equation on a space scale. J. Math. Phys. 2019, 60, 103502.
  20. Zhang, Y. Lie symmetry and invariants for a generalized Birkhoffian system on time scales. Chaos Solitons Fractals 2019, 128, 306–312.
  21. Federson, M.; Grau, R.; Mesquita, J.G.; Toon, E. Lyapunov stability for measure differential equations and dynamic equations on time scales. J. Differ. Equ. 2019, 267, 4192–4223.
  22. Gu, H.B.; Jiang, H.J.; Teng, Z.D. Existence and global exponential stability of equilibrium of competitive neural networks with different time scales and multiple delays. J. Franklin Inst. 2010, 347, 719–731.
  23. Yang, L.; Fei, Y.; Wu, W.Q. Periodic solution for ∇-stochastic high-order Hopfield neural networks with time delays on time scales. Neural Process. Lett. 2019, 49, 1681–1696.
  24. Zhou, H.; Zhou, Z.; Jiang, W. Almost periodic solutions for neutral type BAM neural networks with distributed leakage delays on time scales. Neurocomputing 2015, 157, 223–230.
  25. Arbi, A.; Cao, J.D. Pseudo-almost periodic solution on time-space scales for a novel class of competitive neutral-type neural networks with mixed time-varying delays and leakage delays. Neural Process. Lett. 2017, 46, 719–745.
  26. Dhama, S.; Abbas, S. Square-mean almost automorphic solution of a stochastic cellular neural network on time scales. J. Integral Equ. Appl. 2020, 32, 151–170.
  27. Kaufmann, E.R.; Raffoul, Y.N. Periodic solutions for a neutral nonlinear dynamical equation on a time scale. J. Math. Anal. Appl. 2006, 319, 315–325.
  28. Lizama, C.; Mesquita, J.G.; Ponce, R. A connection between almost periodic functions defined on time scales and ℝ. Appl. Anal. 2014, 93, 2547–2558.
  29. Bohner, M.; Peterson, A. Dynamic Equations on Time Scales: An Introduction with Applications; Springer Science & Business Media: New York, NY, USA, 2001.
  30. Bohner, M.; Peterson, A. Advances in Dynamic Equations on Time Scales; Birkhäuser: Boston, MA, USA, 2003.
  31. Wu, F.; Hu, S.; Liu, Y. Positive solution and its asymptotic behaviour of stochastic functional Kolmogorov-type system. J. Math. Anal. Appl. 2010, 364, 104–118.
  32. Bohner, M.; Stanzhytskyi, O.M.; Bratochkina, A.O. Stochastic dynamic equations on general time scales. Electron. J. Differ. Equ. 2013, 2013, 1215–1230.
  33. Ke, Y.Q.; Miao, C.F. Stability and existence of periodic solutions in inertial BAM neural networks with time delay. Neural Comput. Appl. 2013, 23, 1089–1099.
  34. Yang, L.; Li, Y.K. Existence and exponential stability of periodic solution for stochastic Hopfield neural networks on time scales. Neurocomputing 2015, 167, 543–550.
  35. Adıvar, M.; Raffoul, Y.N. Existence of periodic solutions in totally nonlinear delay dynamic equations. Electron. J. Qual. Theory Differ. Equ. 2009, 1, 1–20.
  36. Wang, C. Almost periodic solutions of impulsive BAM neural networks with variable delays on time scales. Commun. Nonlinear Sci. Numer. Simulat. 2014, 19, 2828–2842.
  37. Li, Y.K.; Yang, L.; Wu, W.Q. Square-mean almost periodic solution for stochastic Hopfield neural networks with time-varying delays on time scales. Neural Comput. Appl. 2015, 26, 1073–1084.
  38. Qi, J.; Li, C.; Huang, T. Stability of inertial BAM neural network with time-varying delay via impulsive control. Neurocomputing 2015, 161, 162–167.
  39. Zhang, T.; Liu, Y.; Qu, H. Global mean-square exponential stability and random periodicity of discrete-time stochastic inertial neural networks with discrete spatial diffusions and Dirichlet boundary condition. Comput. Math. Appl. 2023, 141, 116–128.
Figure 1. Numerical simulations of state variables x 1 ( t ) , x 2 ( t ) of models (87) and (88) for parameter values given in Section 5.
Figure 2. Numerical simulations of state variables y 1 ( t ) , y 2 ( t ) of models (87) and (88) for parameter values given in Section 5.