Article

An Analytical Study on the Synchronization of a Two-Cell Inhibitory Neural Dynamics

by Julia V. Chaparova 1, Dimitar R. Chaparov 2 and Teodor G. Georgiev 1,*
1 Department of Mathematics, Faculty of Natural Sciences and Education, “Angel Kanchev” University of Ruse, 8 Studentska Str., 7017 Ruse, Bulgaria
2 Faculty of Mathematics and Informatics, Sofia University “St. Kliment Ohridski”, 5 James Bourchier Blvd., 1164 Sofia, Bulgaria
* Author to whom correspondence should be addressed.
Mathematics 2026, 14(1), 8; https://doi.org/10.3390/math14010008
Submission received: 30 October 2025 / Revised: 30 November 2025 / Accepted: 9 December 2025 / Published: 19 December 2025
(This article belongs to the Special Issue Advances in Mathematical Biology and Applications)

Abstract

Biophysical observations in different brain regions have revealed synchronous firing in inhibitory neural networks. Although inhibition reduces the postsynaptic neuron’s activity, coordinated synchronous rhythms can be predicted provided that an inhibition delay is incorporated into the biophysical models. In this article, we study the local dynamics of two mutually coupled neurons connected via inhibitory synapses. The key assumption in the model is that neurotransmitter release activates secondary synaptic processes that give the postsynaptic neuron additional time before it feels the inhibition. Stability conditions for synchrony are derived for this simple inhibitory network, and numerical experiments are presented that corroborate the theoretical conclusions. Geometric singular perturbation theory is used, together with a variational argument.

1. Introduction

Inhibition is an essential mechanism for proper functioning of the brain and entire nervous system [1,2,3,4,5,6]. Inhibitory neural networks regulate the activity of excitatory neurons, control spike timing and play a central role in the behavior of the brain [7,8,9]. Inhibitory neural activities are related to functions such as locomotion [10,11] and to pathological brain states such as anxiety-driven behaviors [12,13]. Although it is still not fully understood how inhibitory neurons like GABAergic neurons modulate behavioral states, it is clear now that “in every single psychiatric disease, there is some kind of an inhibitory oscillation problem” [7].
Oscillations [14] are one of the main phenomena studied in order to better understand the central nervous system. The types of dynamical behavior include synchronization [15,16], where every cell in the network becomes active simultaneously; clustering [17,18,19], where cells split into groups that are synchronized within each group but not across groups; and more complicated behaviors, such as traveling waves [20,21].
After the pioneering works of Ermentrout, Kopell, Rinzel, Wang, and Terman [22,23,24,25,26,27,28], efforts have been focused on creating biophysical models to control the synchrony in inhibitory oscillations. While excitatory networks amplify electrical signals [29], inhibitory networks tend to suppress them, which makes synchronization a delicate problem [30]. Typically, inhibitory oscillation models exhibit chaotic behaviors [31,32,33,34]. In [35], the mechanisms of neural synchronization in hippocampal networks are explored, focusing on the role of spike doublets in inhibitory neurons. The study investigates how conduction delays between neurons affect synchronization and how the timing of spikes within doublets contributes to this process. The authors of [32] established two synchronization mechanisms in inhibitory neural networks: inhibition delay and high-frequency entrainment based on the neurotransmitter kinetics. In [36], synchronization of a two-cell neural network in the presence of multiple synaptic connections is examined. Analytical and numerical results are obtained regarding how synaptic strength changes and transmission delays between neurons impact synchronization.
Motivated by the above works, we consider a two-cell neural network mutually coupled by inhibitory synapses. The aim is to obtain stability conditions for synchronous oscillations. For this purpose, inhibition delay needs to be incorporated into the model. Using the framework in [27], but examined in a novel way utilizing a single parameter δ for the time and position estimations between the two neurons, we analytically obtain a new parameter set in which stable synchronization occurs. Precise sufficient conditions are presented for adequate timing in both silent and active phases of the neurons, for the delay from the time one oscillator activates until the time the other oscillator feels the inhibition and for the stable synchronization of the two cells. The delay is biologically relevant, as was pointed out above, and is crucial for the synchronization.
This paper is organized as follows. In Section 2, the problem is introduced, and using the geometric singular perturbation theory it is reduced to fast and slow subsystems in Section 3. In Section 4, general assumptions about the nonlinear functions in the model are stated and two lemmas are proved concerning properties of the jump-up curve. In Section 5 the problem is restated and the main stability conditions are listed. Section 6 contains preliminary results about the timing during the cycle. Section 7 contains the analytical results and an example in Section 8 illustrates them.

2. The Model

Following [27], we describe the dynamics of a single neuron without any coupling by the equation
$$\frac{dv}{dt} = f(v, w), \qquad \frac{dw}{dt} = \epsilon\, g(v, w), \tag{1}$$
known as a relaxation oscillator. Here v denotes the membrane potential of the cell, while w is a relaxational variable that evolves on a slow time scale, since ϵ > 0 is chosen to be small. We assume that the v-nullcline f(v, w) = 0 is a cubic-shaped curve and that the w-nullcline g(v, w) = 0 is a monotone increasing curve intersecting the cubic at a single point, denoted by p_0. We suppose that p_0 is situated on the middle branch of the cubic nullcline. Also, we assume that f > 0 (f < 0) below (above) the cubic v-nullcline and g > 0 (g < 0) below (above) the w-nullcline. According to the Poincaré–Bendixson theorem, for ϵ sufficiently small, (1) has a limit cycle, which approaches a singular limit cycle as ϵ → 0.
The singular limit cycle is shown in Figure 1. One part of the cycle lies along the left branch of the cubic nullcline and corresponds to the silent phase of the neuron. Another part lies along the right branch of the cubic nullcline and corresponds to the active phase of the neuron. The other two lie along horizontal lines in the phase plane vw and connect the left and right branches. The “jump-up” to the active phase occurs at the minimum of the cubic and the “jump-down” occurs at the maximum.
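As a concrete illustration, system (1) can be integrated numerically. The sketch below assumes FitzHugh–Nagumo-type nonlinearities f(v, w) = v − v³/3 − w + I and g(v, w) = v + a − bw; these particular f, g and all parameter values are illustrative assumptions, not the functions used later in the paper:

```python
import numpy as np

# Relaxation oscillator (1) with assumed FitzHugh-Nagumo-type nonlinearities:
#   f(v, w) = v - v**3/3 - w + I   (cubic-shaped v-nullcline)
#   g(v, w) = v + a - b*w          (monotone increasing w-nullcline)
def simulate(eps=0.08, I=0.5, a=0.7, b=0.8, dt=0.01, T=300.0):
    n = int(T / dt)
    v, w = -1.0, -0.5
    vs = np.empty(n)
    for k in range(n):
        dv = v - v**3 / 3.0 - w + I      # dv/dt = f(v, w): fast variable
        dw = eps * (v + a - b * w)       # dw/dt = eps*g(v, w): slow variable
        v += dt * dv
        w += dt * dw
        vs[k] = v
    return vs

vs = simulate()
tail = vs[len(vs) // 2:]   # discard the transient
```

For these assumed values the unique equilibrium lies on the middle branch, and v keeps alternating between large negative (silent) and large positive (active) values, tracing out the relaxation limit cycle described above.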
In our work, we consider a model of two mutually coupled inhibitory neurons, which, without coupling, are modeled by (1) ([27]):
$$v_i' = f(v_i, w_i) - s_i\, g_{syn}\,(v_i - v_{syn}), \qquad w_i' = \epsilon\, g(v_i, w_i), \qquad i = 1, 2, \tag{2}$$
$$x_i' = \epsilon\,\alpha\,(1 - x_i)\, H(v_j - \theta_v) - \epsilon\,\beta\, x_i, \quad j \ne i, \qquad s_i' = \phi\,(1 - s_i)\, H(x_i - \theta_{syn}) - \epsilon\, K s_i. \tag{3}$$
Here (v_1, w_1) and (v_2, w_2) are the two oscillators, g_syn > 0 corresponds to the maximal conductance of the synapse, and the reversal potential v_syn is such that v > v_syn along each bounded singular solution; thus the coupling term s_i g_syn(v_i − v_syn) > 0 and the synapse is inhibitory. The functions 0 ≤ s_i(t) ≤ 1, i = 1, 2, give the synaptic strengths (the inhibitions). Note that synapses activate quickly and deactivate slowly. Here ϕ, K, α, β > 0 are rate constants, H is the Heaviside step function, and θ_syn > 0 and θ_v are threshold constants. The additional functions 0 < x_i(t) < α/(α + β) ensure a delay from the time one oscillator jumps up (the membrane potential v_j rises and crosses θ_v) until the time the other oscillator feels the inhibition (x_i increases until it crosses θ_syn).
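A direct Euler integration of (2)–(3) makes the role of the gating variables x_i and s_i explicit. In the sketch below, f, g and every parameter value are illustrative assumptions:

```python
import numpy as np

# Euler integration of the coupled system (2)-(3); f, g and all parameters are
# assumptions for the sketch, with H the Heaviside step function.
def simulate_pair(eps=0.05, gsyn=0.3, vsyn=-2.0, alpha=2.0, beta=0.3,
                  phi=4.0, K=0.1, th_v=0.0, th_syn=0.6, dt=0.005, T=400.0):
    H = lambda u: 1.0 if u > 0 else 0.0
    f = lambda v, w: v - v**3 / 3.0 - w + 0.5
    g = lambda v, w: v + 0.7 - 0.8 * w
    v = np.array([-1.2, -1.0]); w = np.array([-0.6, -0.5])
    x = np.array([0.1, 0.1]);   s = np.array([0.0, 0.0])
    for _ in range(int(T / dt)):
        dv = np.array([f(v[i], w[i]) - s[i] * gsyn * (v[i] - vsyn)
                       for i in range(2)])
        dw = np.array([eps * g(v[i], w[i]) for i in range(2)])
        dx = np.array([eps * alpha * (1 - x[i]) * H(v[1 - i] - th_v)
                       - eps * beta * x[i] for i in range(2)])
        ds = np.array([phi * (1 - s[i]) * H(x[i] - th_syn) - eps * K * s[i]
                       for i in range(2)])
        v += dt * dv; w += dt * dw; x += dt * dx; s += dt * ds
    return v, w, x, s

v, w, x, s = simulate_pair()
```

Along the integration the gating variables respect the bounds stated above: 0 ≤ s_i ≤ 1 and 0 < x_i < α/(α + β), so x_i must climb for a while after the other cell activates before s_i can switch on.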

3. Reduced Systems—Fast and Slow Regimes

The reduction of problem (2)–(3) into fast and slow equations is given in [27]; we present it here for completeness and the reader’s convenience. Using the method of singular perturbation theory, we obtain fast equations from the above system by setting ϵ = 0 . In this regime, w is a constant along each solution. Fast equations determine the evolution of the trajectories of the two cells along the jump up and jump down.
The behavior of solutions near the cubic nullcline during the silent and active phases of the oscillators is governed by the slow equations. They are obtained by introducing the slow time scale τ = ϵ t into (2)–(3) and then setting ϵ = 0 in the resulting system. The derivative d w / d τ is denoted by w ˙ . In what follows, only the slow equations are considered with respect to the slow time τ , since the jump up and jump down happen instantaneously with respect to τ . There are three slow regimes.
i. 
The two cells are silent
From the first two equations in (2), it follows that trajectories lie on the left branch of the cubic-like surface
$$f(v, w) - s\, g_{syn}\,(v - v_{syn}) = 0, \tag{4}$$
$$\dot w_i = g(v_i, w_i), \qquad i = 1, 2, \tag{5}$$
and since v_j < θ_v, j = 1, 2, in this regime
$$\dot x_i = -\beta x_i, \qquad \begin{cases} \dot s_i = -K s_i & \text{when } x_i < \theta_{syn}, \\ s_i = 1 & \text{when } x_i > \theta_{syn}, \end{cases} \qquad i = 1, 2. \tag{6}$$
Equations (4) and (5) can be reduced further. If the left branch of (4) is given by v = h_L(w, s), then substituting into (5) and denoting G_L(w, s) = g(h_L(w, s), w), we reach
$$\dot w_i = G_L(w_i, s_i), \qquad i = 1, 2.$$
In this regime the two cells are not coupled. Coupling happens at the moment one of the cells reaches the jump-up point to the right branch of the cubic surface (4), entering the active phase. The jump-up curve w = w_L(s) in the phase plane of the slow variables ws (Figure 2) corresponds to the curve of minima on the left branch of (4). Later, we will see that the (reciprocal) slope of the jump-up curve is negative,
$$\frac{dw_L}{ds} < 0$$
for s ∈ [0, 1], which means that the bigger the value of the inhibition s_i, the smaller the value to which w_i must decrease before the cell can jump.
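For an assumed cubic f the jump-up curve, and the sign of its slope, can be computed explicitly. The sketch below takes f(v, w) = v − v³/3 − w + I (so f_w = −1 < 0) together with assumed values of I, g_syn and v_syn; the knee of the left branch of (4) solves ∂/∂v [f − s g_syn(v − v_syn)] = 0:

```python
import numpy as np

# Jump-up curve for the assumed cubic f(v, w) = v - v**3/3 - w + I.
# On the surface f(v, w) - s*gsyn*(v - vsyn) = 0 the knees satisfy
# 1 - v**2 - s*gsyn = 0, so the left knee is v_L(s) = -sqrt(1 - s*gsyn).
I, gsyn, vsyn = 0.5, 0.25, -2.5   # assumed values; note vsyn < v_L(s) (inhibitory)

def knee(s):
    vL = -np.sqrt(1.0 - s * gsyn)
    wL = vL - vL**3 / 3.0 + I - s * gsyn * (vL - vsyn)
    return vL, wL

s = np.linspace(0.0, 1.0, 201)
vL, wL = knee(s)
lam_numeric = np.gradient(wL, s)            # numerical slope dw_L/ds
lam_formula = -gsyn * (vL - vsyn)           # g_syn*(v_L - v_syn)/f_w with f_w = -1
```

Both agree and are negative for every s, confirming that a larger inhibition s lowers the jump-up level w_L(s); the closed-form expression is exactly the formula for λ(s) derived in Lemma 1 below.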
ii. 
The two cells are active
In this regime the trajectories lie on the right branch of the cubic surface (4), and
$$\dot w_i = g(v_i, w_i), \qquad i = 1, 2,$$
with v_j > θ_v, j = 1, 2; hence in this regime
$$\dot x_i = (\alpha + \beta)\left(\frac{\alpha}{\alpha + \beta} - x_i\right), \qquad \begin{cases} \dot s_i = -K s_i & \text{when } x_i < \theta_{syn}, \\ s_i = 1 & \text{when } x_i > \theta_{syn}, \end{cases} \qquad i = 1, 2. \tag{8}$$
We denote the right branch of (4) by v = h_R(w, s); substituting into (5) and denoting G_R(w, s) = g(h_R(w, s), w), we reach
$$\dot w_i = G_R(w_i, s_i), \qquad i = 1, 2.$$
If the synapses were direct and the two cells started close to each other in their silent phases, the jump up of one cell to the active phase would immediately switch on the inhibition s = 1 for the other cell, setting the two cells wide apart, and synchronization would not be possible. In our case of indirect synapses, the jump up of cell 1 switches on the inhibition s_2 = 1 with a delay, due to the time necessary for x_2 to increase up to θ_syn, as is clear from Equation (8). During that delay it is possible (within a suitable parameter set) for cell 2 to reach the curve w_L(s) as well and to jump up before its inhibition s_2 is set to 1.
A cell leaves its active phase when its trajectory reaches the local maximum of the cubic f(v, w) − g_syn(v − v_syn) = 0, obtained from (4) with s = 1. Hence, cells jump down to their silent phases through the same point of the phase plane ws, denoted by (w_R(1), 1).
iii. 
One cell is silent and the other cell is active—the equations in this case are presented below.

4. General Assumptions and Properties of the Jump-Up Curve

In this section, we state assumptions on the nonlinear functions in (2) and (3) [27].
Suppose the following conditions are satisfied: f, g ∈ C^1; f(v, w) = 0 is a cubic-like curve, while g(v, w) = 0 is increasing and intersects the cubic nullcline at a single point situated on the middle branch. As before, we assume that f > 0 (f < 0) below (above) the cubic v-nullcline, g > 0 (g < 0) below (above) the w-nullcline, and:
H1. 
f_w < 0 and g_w < 0 near the v-nullcline.
H1’. 
f_w increases in v and decreases in w.
H2. 
g_v > 0 near the left branches of (4) for s ∈ [0, 1].
H3. 
g_v = 0 near the right branches of (4) for s ∈ [0, 1].
H4. 
a := min(−g_w) > 0 on the left branches of (4) for s ∈ [0, 1], and 0 < K < a.
H5. 
θ_syn < α/(α + β).
As (H4) shows, the inhibition decay rate K must be small, which means that after the inhibition of an oscillator stops being maximal (s = 1), it nevertheless remains close to the maximum value for a long time. The next two lemmas concern properties of the jump-up curve. The first is proved in [27], Remark 4; we present it here for the reader’s convenience.
Lemma 1 
([27]). Suppose that condition (H1) is fulfilled. Let the curve of minima on the left branch of the cubic surface (4) be denoted by (v_L(s), w_L(s)). Then the (reciprocal) slope of the jump-up curve λ(s) := dw_L/ds < 0 for s ∈ (0, 1).
Proof. 
Let Φ(v, w, s) = f(v, w) − s g_syn(v − v_syn) be the left-hand side of (4). Since the left branch of (4) is given by v = h_L(w, s),
$$\frac{\partial h_L}{\partial w} = -\frac{\Phi_w}{\Phi_v}, \qquad \frac{\partial h_L}{\partial s} = -\frac{\Phi_s}{\Phi_v}.$$
Substituting (v_L(s), w_L(s)) into (4) and then differentiating with respect to s yields
$$\Phi_v\, v_L'(s) + \Phi_w\, w_L'(s) + \Phi_s = 0.$$
On the other hand, since ∂h_L/∂w becomes unbounded at the points of minimum on the left branch of (4), it follows that Φ_v = 0 along the jump-up curve. Then the last equation yields
$$w_L'(s) = \lambda(s) = -\frac{\Phi_s}{\Phi_w} = \frac{g_{syn}\,(v_L(s) - v_{syn})}{f_w(v_L(s), w_L(s))}.$$
The result follows from (H1) and the fact that the synapses are inhibitory, v_syn < v_L(s), s ∈ (0, 1). □
The following lemma concerns a property of the jump-up curve under the backward flow of the differential equation during the phase when the two cells are silent. This lemma is the tool for proving the main results in the next section.
Lemma 2. 
Suppose that conditions (H1), (H1’), (H2) and (H4) are satisfied. Consider the image of the jump-up curve under the backward flow of
$$\dot w = G_L(w, s), \qquad \dot s = -K s. \tag{10}$$
Then the slope of the image of w_L(s) under the flow of (10) stays negative for all τ < 0 and s ∈ (0, 1).
Proof. 
Consider the variational equations associated with (10):
$$\dot W = -aW - bS, \qquad \dot S = -KS \tag{11}$$
(cf. Tancredi et al., 2001 [37]), where a = −∂G_L(w, s)/∂w and b = −∂G_L(w, s)/∂s are calculated along the solutions of (10). Denote m := W/S. If we take m(0) = λ, the slope of w_L(s) at some s̄ ∈ (0, 1), then m(τ) for τ < 0 represents the evolution of this slope along the backward flow of (10). From Lemma 1 it follows that λ < 0.
From (11), the slope m(τ) is governed by
$$\dot m = \frac{d}{d\tau}\left(\frac{W}{S}\right) = \frac{\dot W S - W \dot S}{S^2} = \frac{(-aW - bS)S + KSW}{S^2} = -b + (K - a)\, m. \tag{12}$$
Thus, using conditions (H1) and (H2), the definition G_L(w, s) = g(h_L(w, s), w) and the equations obtained in the proof of Lemma 1, we have
$$\frac{\partial h_L}{\partial w} = -\frac{\Phi_w}{\Phi_v} = -\frac{f_w}{\Phi_v} < 0, \qquad \frac{\partial h_L}{\partial s} = -\frac{\Phi_s}{\Phi_v} = \frac{g_{syn}\,(v - v_{syn})}{\Phi_v} < 0,$$
$$a = -\frac{\partial G_L}{\partial w} = -g_v\, \frac{\partial h_L}{\partial w} - g_w > 0, \qquad b = -\frac{\partial G_L}{\partial s} = -g_v\, \frac{\partial h_L}{\partial s} > 0$$
along the flow of (10). Here we used Φ_v < 0 on the left branch of (4) and v_syn < v along the flow of (10), since the synapses are inhibitory. Also, condition (H4) ensures that K − a < 0 along the flow of (10).
We prove that
$$m(\tau) < \lambda \quad \text{for } \tau < 0. \tag{13}$$
For this, it is enough to show that
$$-b + (K - a)\,\lambda > 0 \tag{14}$$
along the backward flow of (10) for all τ < 0. Indeed, (12) and (14) imply that m(τ) increases near τ = 0. Now, arguing by contradiction, we denote t_1 = sup{τ < 0 s.t. m(τ) = λ}. By the definition of t_1, we have ṁ(t_1) ≤ 0, which is equivalent to
$$-b\big|_{\tau = t_1} + (K - a)\big|_{\tau = t_1}\, m(t_1) = -b\big|_{\tau = t_1} + (K - a)\big|_{\tau = t_1}\, \lambda \le 0,$$
contradicting (14) and proving (13).
Finally, in order to prove (14), we note that from (9) it is equivalent to
$$\frac{b}{a - K} < |\lambda| = \frac{g_{syn}\,(v^* - v_{syn})}{-f_w(v^*, w^*)},$$
where v^* = v_L(s̄), w^* = w_L(s̄). The left-hand side of the last inequality reduces to
$$\frac{g_v\, g_{syn}\,(v - v_{syn})}{g_v\,(-f_w) + (K + g_w)\,\Phi_v} < \frac{g_v\, g_{syn}\,(v - v_{syn})}{g_v\,(-f_w)} = \frac{g_{syn}\,(v - v_{syn})}{-f_w},$$
and the desired result follows since v_syn < v < v^*, w > w^*, and f_w is increasing in v and decreasing in w. □
Remark 1. 
In [27], Lemma 3, the same property of the jump-up curve is proved for f(v, w) given in the form
$$f(v, w) = f_1(v) - g_c\, w\,(v - v_R), \tag{15}$$
where g_c > 0 and v_R represent the maximal conductance and the reversal potential, respectively ([27], (2.2)). Thus, (1) includes the well-known Morris–Lecar equations [38]. Although (15) does not satisfy (H1’), all the results below still apply to (2)–(3) with f(v, w) given by (15): the monotonicity of ∂f/∂w in v and w is used only in the proof of Lemma 2, and for (15) the corresponding property is supplied by [27], Lemma 3.

5. The Problem Within One Complete Cycle 0 τ T 0

In this section, we proceed with introducing notations for the important moments of time during the movement of the two cells along their trajectories within a complete cycle. Without loss of generality, we suppose cell 1 at τ = 0 is at the point of jump down ( w R ( 1 ) , 1 ) in the phase plane ws, while cell 2 at τ = 0 is still in its active phase on the right branch of (4), and the two cells are maximally inhibited, s i ( 0 ) = 1 , i = 1 , 2 . Also, we assume
$$\theta_{syn} < x_i(0) < \frac{\alpha}{\alpha + \beta}, \qquad i = 1, 2. \tag{16}$$
We denote by δ the time needed for cell 2 to reach the jump-down position and assume δ is small. Until that time, the variable x 1 increases according to (8). At τ = δ , the membrane potential v 2 decreases rapidly and crosses the threshold θ v ; thus x 1 starts to decrease according to (3) and (6).
Let t_i be the moment at which x_i(τ) crosses the threshold θ_syn for the first time. Thus,
$$x_i(t_i) = \theta_{syn}, \qquad i = 1, 2, \tag{17}$$
and cell i is in its silent phase. After t_i, according to (6), the inhibition s_i starts to decrease. Let T_i be the time at which the trajectory of cell i reaches the jump-up curve w_L(s) (Figure 2, blue lines). At T_i, the membrane potential v_i rises rapidly and crosses the threshold θ_v again. At that moment T_i, the variable x_j of the other cell j starts to increase according to (8). Let t_j′ be the moment at which x_j(τ) crosses the threshold θ_syn for the second time. Thus,
$$x_j(t_j') = \theta_{syn}, \qquad j = 1, 2. \tag{18}$$
At the time t_j′ the inhibition s_j is set to 1 (Figure 2, red lines).
After T i , cell i is in its active phase. Let T 0 , i denote the time at which cell i reaches the jump-down point ( w R ( 1 ) , 1 ) , and T 0 = min ( T 0 , 1 , T 0 , 2 ) . At T 0 , the cycle completes.
Note that under assumptions (H1) and (H3), it follows that ∂G_R/∂s = 0. Indeed, by definition G_R(w, s) = g(h_R(w, s), w), where v = h_R(w, s) denotes the right branch of (4); thus we have
$$\frac{\partial G_R}{\partial s} = g_v\, \frac{\partial h_R}{\partial s} = 0.$$
Accordingly, in what follows we write G_R(w).
Hence, the problem under consideration within one complete cycle is as follows:
$$\dot w_i = G_L(w_i, s_i), \quad \sigma_i < \tau < T_i, \quad i = 1, 2, \qquad \sigma_1 = 0,\ \sigma_2 = \delta, \tag{19}$$
$$w_i(\sigma_i) = w_R(1), \qquad i = 1, 2, \tag{20}$$
$$\dot w_i = G_R(w_i), \quad T_i < \tau < T_{0,i}, \quad i = 1, 2, \ \text{and also } 0 < \tau < \delta \ \text{for } i = 2, \tag{21}$$
$$w_i(T_i) = w_L(s_i(T_i)), \qquad w_i(T_{0,i}) = w_R(1), \qquad i = 1, 2, \tag{22}$$
$$\begin{cases} \dot s_i = -K s_i & \text{when } x_i < \theta_{syn}, \\ s_i = 1 & \text{when } x_i > \theta_{syn}, \end{cases} \qquad s_i(0) = 1, \quad i = 1, 2, \tag{23}$$
$$\dot x_i = (\alpha + \beta)\left(\frac{\alpha}{\alpha + \beta} - x_i\right), \quad 0 < \tau < \bar\sigma_i \ \text{and} \ T_j < \tau < T_{0,j}, \tag{24}$$
$$\dot x_i = -\beta x_i, \quad \bar\sigma_i < \tau < T_j, \qquad \bar\sigma_1 = \delta,\ \bar\sigma_2 = 0, \quad i, j = 1, 2, \ i \ne j. \tag{25}$$
Note that T_0 depends on the parameters and the nonlinear functions, and in general T_0 is not constant from one cycle to the next. Let us summarize the conditions on the nonlinearities obtained so far. Since g > 0 (g < 0) below (above) the w-nullcline of (1), from Lemmas 1 and 2 we have
$$G_L, G_R \in C^1(D), \quad w_L \in C^1([0,1]), \quad G_L(w, s) < 0, \quad G_R(w) > 0, \quad \frac{\partial G_P}{\partial w} < 0,\ P = L, R, \quad \frac{\partial G_L}{\partial s} < 0 \tag{26}$$
on the phase space D = {(w, s) s.t. 0 ≤ s ≤ 1, w_L(s) ≤ w ≤ w_R(1)};
$$\lambda(s) := w_L'(s) < 0 \quad \text{for } s \in [0, 1]. \tag{27}$$
Suppose additionally that the following conditions are satisfied:
H6. 
$$\max_{0 \le s \le 1} |\lambda(s)|\, K < G_R(w_R(1)).$$
H7. 
$$\delta + \frac{1}{\alpha + \beta} \ln \frac{\alpha}{\alpha - (\alpha + \beta)\theta_{syn}} < \int_{w_L(0)}^{w_R(1)} \frac{dw}{G_R(w)}.$$
Note that inequality (H7) remains valid when δ is replaced by a smaller δ_1 > 0.
H8. 
$$\delta + \frac{1}{\beta} \ln \frac{\alpha}{(\alpha + \beta)\theta_{syn}} < 2\, \frac{w_R(1) - w_L(0)}{d_0} - \frac{w_R(1) - w_L(1)}{d_1},$$
with d_0 := max_D |G_L|, d_1 := min_D |G_L|. Let us note that (H8) yields
H8’. 
$$\delta + \frac{1}{\beta} \ln \frac{\alpha}{(\alpha + \beta)\theta_{syn}} < \frac{w_R(1) - w_L(0)}{d_0}.$$
H9. 
$$\delta + \frac{w_R(1) - w_L(1)}{d_1} - \frac{w_R(1) - w_L(0)}{d_0} < \frac{1}{\alpha + \beta} \ln \frac{\frac{\alpha}{\alpha + \beta}\left(1 - \exp\!\left(-\beta\, \frac{w_R(1) - w_L(0)}{d_0}\right)\right)}{\frac{\alpha}{\alpha + \beta} - \theta_{syn}}.$$
We note that (H8’) implies that the logarithm on the right-hand side of (H9) is positive. Indeed,
$$\frac{\alpha}{\alpha + \beta} \exp\!\left(-\beta\, \frac{w_R(1) - w_L(0)}{d_0}\right) < \theta_{syn} \iff \frac{1}{\beta} \ln \frac{\alpha}{(\alpha + \beta)\theta_{syn}} < \frac{w_R(1) - w_L(0)}{d_0}.$$
Again, (H8), (H8’) and (H9) remain true when δ is replaced by a smaller δ_1 > 0.
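The implication (H8) ⟹ (H8’) holds because (w_R(1) − w_L(1))/d_1 ≥ (w_R(1) − w_L(0))/d_0 (larger numerator, smaller denominator), so the right-hand side of (H8) never exceeds that of (H8’). A quick numerical check, with assumed sample values for all the constants involved:

```python
import numpy as np

# All constants below are assumed sample values, not taken from the paper.
alpha, beta, th_syn, delta = 2.0, 0.3, 0.8, 0.05
wR1, wL0, wL1 = 1.0, 0.2, 0.0          # w_R(1), w_L(0), w_L(1); note wL1 < wL0
d0, d1 = 0.5, 0.45                     # d0 = max|G_L| >= d1 = min|G_L|

A = (wR1 - wL0) / d0                   # lower bound for T_1 in Proposition 1
B = (wR1 - wL1) / d1                   # upper bound for T_1
lhs = delta + np.log(alpha / ((alpha + beta) * th_syn)) / beta
rhs_H8, rhs_H8p = 2 * A - B, A         # right-hand sides of (H8) and (H8')

holds_H8 = lhs < rhs_H8
holds_H8p = lhs < rhs_H8p
```

Since A ≤ B always, rhs_H8 ≤ rhs_H8p, so whenever (H8) holds, (H8’) holds automatically, as claimed in the text.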
H10. 
1 α + β ln 1 C 1 1 α 2 β θ s y n + α ( α + β ) α α + β θ s y n 1 C 1 < w L ( 0 ) w R ( 1 ) d w G R ( w ) .
where C_1 := 1 − max_{0≤s≤1} |λ(s)| K / G_R(w_R(1)). From (H6) we have 0 < C_1 < 1. Let us note that (H10) yields (H7) for sufficiently small δ.
H11. 
w L ( 0 ) w R ( 1 ) d w G R ( w ) < 1 α + β ln 1 C 1 1 α β θ s y n + 2 α ( α + β ) α α + β θ s y n 1 C 1 .

6. Preliminary Results

Proposition 1 
(A priori estimates for T_1 and T_2). Denote d_0 := max_D |G_L|, d_1 := min_D |G_L|. Then the following inequalities hold:
$$\frac{w_R(1) - w_L(0)}{d_0} \le T_1 \le \frac{w_R(1) - w_L(1)}{d_1},$$
$$\delta + \frac{w_R(1) - w_L(0)}{d_0} \le T_2 \le \delta + \frac{w_R(1) - w_L(1)}{d_1}.$$
Proof. 
From (19) and (26) we have ẇ_2 = G_L(w_2, s_2) ≥ −d_0. Integrating for δ < τ < T_2 and using (20), we obtain
$$w_R(1) - w_2(T_2) \le d_0\, (T_2 - \delta).$$
On the other hand, from (22) and Lemma 1 we have w_2(T_2) < w_L(0); thus
$$T_2 \ge \delta + \frac{w_R(1) - w_L(0)}{d_0}.$$
Similarly, (19) and (26) yield ẇ_2 = G_L(w_2, s_2) ≤ −d_1. Integrating for δ < τ < T_2 and using (20), we obtain
$$w_R(1) - w_2(T_2) \ge d_1\, (T_2 - \delta).$$
Now (22) and Lemma 1 give us w_2(T_2) > w_L(1); thus
$$T_2 \le \delta + \frac{w_R(1) - w_L(1)}{d_1}.$$
The inequalities for T_1 follow in the same way. □
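The bounds of Proposition 1 are easy to check numerically: integrate ẇ = G_L(w, s), ṡ = −Ks from w_R(1) down to a target level standing in for the jump-up value, and compare the hitting time with the two quotients. G_L and all constants below are assumptions for the sketch:

```python
# Assumed G_L with -d0 <= G_L <= -d1 on the relevant region.
K, delta = 0.1, 0.05
GL = lambda w, s: -(0.3 + 0.4 * s)     # so d1 = 0.3 and d0 = 0.7
d0, d1 = 0.7, 0.3
wR1, target = 1.0, 0.2                 # target plays the role of w_2(T_2)

dt = 1e-4
w, s, tau = wR1, 1.0, delta            # cell 2 starts at tau = delta with s = 1
while w > target:                      # Euler integration of the silent phase
    w += dt * GL(w, s)
    s -= dt * K * s
    tau += dt

lower = delta + (wR1 - target) / d0
upper = delta + (wR1 - target) / d1
```

The hitting time tau lands strictly between the two a priori bounds, exactly as the proposition predicts.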
The next two propositions give sufficient conditions for x 1 and x 2 to cross the threshold θ s y n both in the silent and in the active phases of the oscillators. From the point of view of (23), this means that the inhibition s i will switch off and on during the cycle.
Proposition 2. 
Suppose that (H8’) is satisfied. Then
$$t_1 < T_1, \qquad t_2 < T_2, \qquad t_2 < T_1, \qquad t_1 < T_2.$$
Proof. 
From (16), (17), (24) and (25), we have
$$t_1 = \delta + \frac{1}{\beta} \ln \frac{x_1(\delta)}{\theta_{syn}} < \delta + \frac{1}{\beta} \ln \frac{\alpha}{(\alpha + \beta)\theta_{syn}}, \qquad t_2 = \frac{1}{\beta} \ln \frac{x_2(0)}{\theta_{syn}} < \frac{1}{\beta} \ln \frac{\alpha}{(\alpha + \beta)\theta_{syn}}. \tag{28}$$
Now (H8’) and Proposition 1 give the result. □
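The explicit crossing times in (28) come from solving ẋ = −βx in closed form. A numerical cross-check with assumed values:

```python
import numpy as np

# First crossing time of x_1, per the formula in the proof of Proposition 2.
beta, th_syn, delta = 0.3, 0.6, 0.05
x1_delta = 0.8                       # assumed value of x_1(delta), above th_syn

t1_formula = delta + np.log(x1_delta / th_syn) / beta

dt, x, tau = 1e-5, x1_delta, delta   # Euler integration of x' = -beta*x
while x > th_syn:
    x -= dt * beta * x
    tau += dt
```

The integrated crossing time tau reproduces t1_formula up to the discretization error.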
The following proposition presents conditions that guarantee a delay from the time one oscillator jumps up until the time at which the inhibition of the other oscillator is set to s = 1 .
Proposition 3. 
Suppose that (H8’) and (H9) are satisfied. Then
$$T_1 < t_1', \qquad T_2 < t_2'.$$
Proof. 
From (18) and (24) we have
$$t_1' = T_2 + \frac{1}{\alpha+\beta} \ln \frac{\frac{\alpha}{\alpha+\beta} - x_1(T_2)}{\frac{\alpha}{\alpha+\beta} - \theta_{syn}}, \qquad t_2' = T_1 + \frac{1}{\alpha+\beta} \ln \frac{\frac{\alpha}{\alpha+\beta} - x_2(T_1)}{\frac{\alpha}{\alpha+\beta} - \theta_{syn}}. \tag{29}$$
By (17), (25), (H8’) and Proposition 2, it follows that x_1(T_2) < θ_syn and x_2(T_1) < θ_syn. Thus
$$T_2 < t_1', \qquad T_1 < t_2'.$$
On the other hand, from (16), (25) and Proposition 1, we have
$$x_2(T_1) = x_2(0)\, e^{-\beta T_1} < \frac{\alpha}{\alpha+\beta} \exp\!\left(-\beta\, \frac{w_R(1) - w_L(0)}{d_0}\right).$$
Hence (H9) implies
$$t_2' - T_2 = T_1 - T_2 + \frac{1}{\alpha+\beta} \ln \frac{\frac{\alpha}{\alpha+\beta} - x_2(T_1)}{\frac{\alpha}{\alpha+\beta} - \theta_{syn}} > \frac{1}{\alpha+\beta} \ln \frac{\frac{\alpha}{\alpha+\beta}\left(1 - \exp\!\left(-\beta\,\frac{w_R(1) - w_L(0)}{d_0}\right)\right)}{\frac{\alpha}{\alpha+\beta} - \theta_{syn}} + \frac{w_R(1) - w_L(0)}{d_0} - \frac{w_R(1) - w_L(1)}{d_1} - \delta > 0.$$
The first inequality follows in a similar way. □
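Similarly, the formulas (29) invert the explicit solution of ẋ = (α + β)(α/(α + β) − x). A numerical cross-check with assumed values:

```python
import numpy as np

# Delay from a jump-up (taken as time 0 here) until x crosses th_syn, per the
# formula in the proof of Proposition 3.  All values are assumptions.
alpha, beta, th_syn = 2.0, 0.3, 0.8
A = alpha / (alpha + beta)           # saturation level; th_syn < A as in (H5)
xT = 0.3                             # assumed x value at the jump-up moment

delay_formula = np.log((A - xT) / (A - th_syn)) / (alpha + beta)

dt, x, tau = 1e-5, xT, 0.0           # Euler integration of x' = (alpha+beta)*(A - x)
while x < th_syn:
    x += dt * (alpha + beta) * (A - x)
    tau += dt
```

The integrated delay tau agrees with delay_formula up to the discretization error, confirming the logarithmic expression for the inhibition delay.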

7. Main Results

In what follows, we impose additional restrictions on the initial conditions x_i(0), i = 1, 2, and on δ in order to obtain synchronization between the two oscillators. By synchronization we mean that the time lag between the two cells at the end of the cycle is smaller than the lag δ between them at the beginning of the cycle (we sometimes call this compression), and that each time one of the cells jumps up (down), the other cell does the same almost simultaneously with the leading cell. The next theorem establishes the ordering of the two oscillators at the important moments along their trajectories within the cycle and estimates the time lags at the jumps up and down.
Theorem 1.
Suppose that conditions (H1)–(H6), (H1’), (H8), (H9), (16) and
$$x_2(0)\, e^{-2\beta\delta} < x_1(\delta) < x_2(0)\, e^{-\beta\delta} \tag{30}$$
are satisfied. Then the following inequalities are fulfilled:
(i) 0 < t_2 − t_1 < δ;
(ii) T_1 < T_2;
(iii) T_2 − T_1 < δ;
(iv) 0 < T_{0,2} − T_{0,1} < δ.
Proof. 
(i) From (28) and (30) the inequalities follow: indeed, t_2 − t_1 = (1/β) ln(x_2(0)/x_1(δ)) − δ, and (30) places this difference strictly between 0 and δ. As (16), (17) and (23) show, the inequality t_2 > t_1 means that the inhibition s_1(τ) of cell 1 starts to decrease from 1 before the inhibition s_2(τ) of cell 2 does.
(ii) First, we show that w_1(t_1) < w_2(t_2) (Figure 2). From (19), (20) and (i), we have ẇ_1 = G_L(w_1, 1) for 0 < τ < t_1, ẇ_2 = G_L(w_2, 1) for δ < τ < t_2, w_1(0) = w_2(δ) = w_R(1) and t_2 − t_1 < δ. Then
$$w_2(\tau) = w_1(\tau - \delta) \quad \text{for } \delta \le \tau \le t_2, \tag{31}$$
and in particular
$$w_2(t_2) = w_1(t_2 - \delta) > w_1(t_1),$$
since w_1 decreases on (0, T_1) according to (26).
We consider the trajectories of ẇ = G_L(w, s), ṡ = −Ks, starting from the initial points (w_1(t_1), 1) and (w_2(t_2), 1) in D. From (17), (23), (25), (27) and Proposition 3, we deduce that
$$s_1(T_1) > s_2(T_2), \qquad w_1(T_1) < w_2(T_2) \tag{32}$$
(Figure 2, blue lines). From (i) we have s_1(τ) < s_2(τ) for t_1 < τ ≤ T_1, and in particular s_1(T_1) < s_2(T_1). Thus, from (32) it follows that s_2(T_2) < s_2(T_1), and since s_2(τ) decreases according to (23), we conclude that T_1 < T_2. Hence, cell 1 reaches the jump-up curve first, and cell 2 lags behind.
(iii) In order to show compression in the silent phases of the oscillators, we consider the image of the jump-up curve w_L(s) under the backward flow of (10) through the point (w_1(t_1), 1), and denote by τ̄ < 0 the time needed for that backward translation. Also, we denote by t_3 the time at which the trajectory (w_2(τ), s_2(τ)) crosses the backward-translated curve. Thus t_3 is such that the point (w_2(t_3), s_2(t_3)) lies on the translated jump-up curve, and
$$T_2 - T_1 = t_3 - t_1.$$
Since T_1 < T_2, we have t_3 − t_1 > 0. We denote by
$$m(\bar\tau) = \frac{w_2(t_3) - w_1(t_1)}{s_2(t_3) - s_1(t_1)}$$
the (reciprocal) slope of the straight line connecting the points (w_1(t_1), 1) and (w_2(t_3), s_2(t_3)). By the mean value theorem, there is a point (w(τ), s(τ)) lying on the translated backward jump-up curve (between (w_1(t_1), 1) and (w_2(t_3), s_2(t_3))) such that the slope of the tangent line at that point equals m(τ̄). From Lemma 2 we have m(τ̄) < λ < 0, where λ is the corresponding (reciprocal) slope obtained from m(τ̄) under the forward flow of (10) at τ = 0. Thus
$$0 < -m(\bar\tau)\,(1 - s_2(t_3)) = w_2(t_3) - w_1(t_1).$$
From t_3 − t_1 > 0 there are two possibilities: either t_1 < t_3 ≤ t_2 or t_1 < t_2 < t_3. In the first case it follows immediately from (i) that T_2 − T_1 = t_3 − t_1 ≤ t_2 − t_1 < δ, and the proof of (iii) is complete. In the latter case, δ < t_1 < t_2 < t_3, and (i), (26), (31) and the last inequality give us
$$w_2(t_2) - w_2(t_3) < w_2(t_2) - w_1(t_1) = w_1(t_2 - \delta) - w_1(t_1).$$
Thus, the mean value theorem applied to the first and last terms yields
$$0 < t_3 - t_2 < M\,(t_1 - t_2 + \delta) < M\delta, \tag{33}$$
where M = max_D |G_L| / min_D |G_L|.
On the other hand,
$$0 < -m(\bar\tau)\,(1 - s_2(t_3)) = w_2(t_3) - w_1(t_1) = \big(w_2(t_3) - w_1(t_3 - \delta)\big) + \big(w_1(t_3 - \delta) - w_1(t_1)\big),$$
and the mean value theorem gives
$$w_1(t_3 - \delta) - w_1(t_1) = \dot w_1(\xi)\,(t_3 - t_1 - \delta)$$
for some ξ between t_1 and t_3 − δ. Thus we obtain
$$0 < t_3 - t_1 \le \delta - \frac{E}{|\dot w_1(\xi)|}, \tag{34}$$
where
$$E = -m(\bar\tau)\,(1 - s_2(t_3)) - \big(w_2(t_3) - w_1(t_3 - \delta)\big).$$
Next we show that E > 0. For this, we first verify that t_3 − δ < T_1. Indeed, from the definition of t_3, (28), Proposition 1 and (H8), we have
$$T_1 - t_3 + \delta = 2T_1 - T_2 - t_1 + \delta > 2T_1 - T_2 - \frac{1}{\beta}\ln\frac{\alpha}{(\alpha+\beta)\theta_{syn}} \ge 2\,\frac{w_R(1) - w_L(0)}{d_0} - \frac{w_R(1) - w_L(1)}{d_1} - \delta - \frac{1}{\beta}\ln\frac{\alpha}{(\alpha+\beta)\theta_{syn}} > 0.$$
Also, Proposition 2 yields t_3 = T_2 − T_1 + t_1 < T_2. Now, from ẇ_1 = G_L(w_1, s_1) for t_2 − δ < τ < t_3 − δ, we have
$$w_1(t_3 - \delta) = w_1(t_2 - \delta) + \int_{t_2 - \delta}^{t_3 - \delta} G_L(w_1(\tau), s_1(\tau))\, d\tau.$$
Also, from ẇ_2 = G_L(w_2, s_2) for t_2 < τ < t_3, it follows that
$$w_2(t_3) = w_2(t_2) + \int_{t_2}^{t_3} G_L(w_2(\tau), s_2(\tau))\, d\tau.$$
Thus, subtracting the last two equalities and using (31), we obtain
$$w_2(t_3) - w_1(t_3 - \delta) = \int_{t_2}^{t_3} \big[ G_L(w_2(\tau), s_2(\tau)) - G_L(w_1(\tau - \delta), s_1(\tau - \delta)) \big]\, d\tau = \int_{t_2}^{t_3} \left[ \frac{\partial G_L}{\partial w}(\bar w, \bar s)\,\big(w_2(\tau) - w_1(\tau - \delta)\big) + \frac{\partial G_L}{\partial s}(\bar w, \bar s)\,\big(s_2(\tau) - s_1(\tau - \delta)\big) \right] d\tau.$$
From (23) and (i) we have s_2(τ) < s_1(τ − δ) ≤ 1 for t_2 < τ < t_3; thus
$$|w_2(t_3) - w_1(t_3 - \delta)| \le a_0 \int_{t_2}^{t_3} |w_2(\tau) - w_1(\tau - \delta)|\, d\tau + b_0 \int_{t_2}^{t_3} \big(1 - s_2(\tau)\big)\, d\tau \le \frac{b_0 K}{2}\,(t_3 - t_2)^2 + a_0 \int_{t_2}^{t_3} |w_2(\tau) - w_1(\tau - \delta)|\, d\tau,$$
with a_0 = max |∂G_L/∂w|, b_0 = max |∂G_L/∂s| on a compact subset of the interior of D. Then the Gronwall inequality yields
$$|w_2(t_3) - w_1(t_3 - \delta)| \le \frac{b_0 K}{2}\,(t_3 - t_2)^2\, e^{a_0 (t_3 - t_2)}.$$
Hence,
$$E = -m(\bar\tau)\,(1 - s_2(t_3)) - \big(w_2(t_3) - w_1(t_3 - \delta)\big) \ge |\lambda|\,\big(1 - e^{-K(t_3 - t_2)}\big) - \frac{b_0 K}{2}\,(t_3 - t_2)^2\, e^{a_0(t_3 - t_2)}.$$
We denote
$$\psi(x) = |\lambda|\,\big(1 - e^{-Kx}\big) - \frac{b_0 K}{2}\, x^2\, e^{a_0 x}, \qquad 0 \le x < +\infty,$$
and note that ψ ∈ C^∞, ψ(0) = 0, and there is a unique x^* > 0 such that ψ(x) > 0 on (0, x^*) and ψ(x) < 0 on (x^*, +∞). From (33), it follows that E ≥ ψ(t_3 − t_2) > 0 for δ sufficiently small. From T_2 − T_1 = t_3 − t_1 and (34), we conclude that T_2 − T_1 < δ.
(iv) From (21) and (22) we have
$$T_{0,i} = T_i + \int_{w_i(T_i)}^{w_R(1)} \frac{dw}{G_R(w)}, \qquad i = 1, 2. \tag{35}$$
We show that T_{0,2} − T_{0,1} > 0. Indeed, from w_i(T_i) = w_L(s_i(T_i)), i = 1, 2, (17), (23), (27), (32), Proposition 3 and the mean value theorem, we have
$$0 < w_2(T_2) - w_1(T_1) = |\lambda(\bar s)|\,\big(s_1(T_1) - s_2(T_2)\big) = |\lambda(\bar s)|\,\big(e^{-K(T_1 - t_1)} - e^{-K(T_2 - t_2)}\big) = |\lambda(\bar s)|\, K e^{-K\xi}\,\big((T_2 - T_1) - (t_2 - t_1)\big) \tag{36}$$
for some s̄ between s_2(T_2) and s_1(T_1) and some ξ between T_1 − t_1 and T_2 − t_2. Thus, (26) and condition (H6) yield
$$T_{0,2} - T_{0,1} = T_2 - T_1 - \int_{w_1(T_1)}^{w_2(T_2)} \frac{dw}{G_R(w)} \ge T_2 - T_1 - \frac{1}{G_R(w_R(1))}\,\big(w_2(T_2) - w_1(T_1)\big) \ge \left(1 - \frac{\max_{0 \le s \le 1} |\lambda(s)|\, K}{G_R(w_R(1))}\right)(T_2 - T_1) > 0. \tag{37}$$
Inequality (37) shows that at the end of the cycle, cell 1 is still the leading cell. The last inequality follows directly:
$$T_{0,2} - T_{0,1} < T_2 - T_1 < \delta. \tag{38}$$
Thus, we obtain compression of the time lag between the two cells both in the silent and in the active phases of the oscillators. □
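The sign structure of the auxiliary function ψ used in part (iii) is easy to confirm numerically; the constants below are assumed sample values:

```python
import numpy as np

# psi(x) = |lambda|*(1 - exp(-K*x)) - (b0*K/2)*x**2*exp(a0*x), assumed constants.
lam_abs, K, a0, b0 = 0.4, 0.1, 0.5, 0.3
psi = lambda x: (lam_abs * (1.0 - np.exp(-K * x))
                 - 0.5 * b0 * K * x**2 * np.exp(a0 * x))

x = np.linspace(0.0, 20.0, 200001)[1:]   # skip x = 0, where psi vanishes
vals = psi(x)
neg_start = int(np.argmax(vals < 0))     # index of the first negative value
```

On this grid ψ is positive up to a single crossing point x* and negative beyond it, matching the claim that E = ψ(t₃ − t₂) > 0 once t₃ − t₂ < Mδ is small.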
In view of synchronization, it is enough to show compression of the time at the end of the cycle, and for the next cycle, all the conditions of Theorem 1 are satisfied with the new initial values for δ , x 1 ( 0 ) , x 2 ( 0 ) . From (37) we have T 0 = T 0 , 1 . Thus, we introduce notations for the new initial data as follows, δ 1 = T 0 , 2 T 0 , 1 , x ^ 1 ( 0 ) = x 1 ( T 0 ) and x ^ 2 ( 0 ) = x 2 ( T 0 ) . From (37) and (38) we have 0 < δ 1 < δ , so compression is available. Hence, it remains to show that the set
X = ( δ , x 1 ( 0 ) , x 2 ( 0 ) ) R + 3 | θ s y n < x i ( 0 ) < α α + β , i = 1 , 2 x 2 ( 0 ) e 2 β δ < x 1 ( δ ) < x 2 ( 0 ) e β δ
is invariant under the map
\[
\Pi : (\delta, x_1(0), x_2(0)) \mapsto (\delta_1, \hat{x}_1(0), \hat{x}_2(0)),
\]
i.e., that $\Pi(X) \subseteq X$.
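The membership conditions defining $X$ can be made concrete with a small numerical test. The sketch below is ours, not from the paper: it uses the parameter values and initial data of Section 8 and computes $x_1(\delta)$ by assuming that $x_1$ follows the active-phase kinetics $\dot{x} = \alpha(1 - x) - \beta x$ on $[0, \delta]$, which is consistent with the closed-form expressions appearing later in the proof and reproduces the value $x_1(\delta) \approx 0.05146$ quoted in Section 8.

```python
import math

# Parameter values from Section 8 (example values)
alpha, beta, theta_syn = 0.496541, 7.370830, 0.05

def x_active(x0, t):
    """Solution of x' = alpha*(1 - x) - beta*x with x(0) = x0
    (our assumption for the evolution of x_1 on [0, delta])."""
    xs = alpha / (alpha + beta)   # the attracting value alpha/(alpha+beta)
    return xs - (xs - x0) * math.exp(-(alpha + beta) * t)

def in_X(delta, x1_0, x2_0):
    """Membership test for the set X defined above."""
    xs = alpha / (alpha + beta)
    x1_d = x_active(x1_0, delta)
    return (theta_syn < x1_0 < xs and theta_syn < x2_0 < xs
            and x2_0 * math.exp(-2 * beta * delta) < x1_d < x2_0 * math.exp(-beta * delta))

print(x_active(0.051, 0.005))        # close to the 0.05146 quoted in Section 8
print(in_X(0.005, 0.051, 0.055))     # True: the example data lie in X
print(in_X(0.005, 0.049, 0.055))     # False: x_1(0) below the threshold theta_syn
```

A point of $X$ encodes one admissible configuration at the start of a cycle; invariance under $\Pi$ then lets Theorems 1 and 2 be applied cycle after cycle.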
The following theorem presents conditions ensuring (16) and the left inequality of (30), with $\delta$, $x_1(0)$, $x_2(0)$ replaced by $\delta_1$, $\hat{x}_1(0)$, $\hat{x}_2(0)$, respectively. As for the right inequality of (30), only a necessary condition is known at present.
Theorem 2. 
Suppose (H1)–(H9), (H1’), (16) and (30) are fulfilled. Then
\[
\theta_{syn} < \hat{x}_1(0) < \hat{x}_1(\delta_1) < \hat{x}_2(0) < \frac{\alpha}{\alpha + \beta}.
\]
Moreover, (H10) is a sufficient condition for
\[
\hat{x}_2(0)\,e^{-2\beta\delta_1} < \hat{x}_1(\delta_1)
\]
and (H11) is a necessary condition for
\[
\hat{x}_1(\delta_1) \le \hat{x}_2(0)\,e^{-\beta\delta_1}.
\]
Proof. 
From (24) it is clear that $\hat{x}_1(0) < \hat{x}_1(\delta_1)$ and $\hat{x}_2(0) < \alpha/(\alpha + \beta)$.
Now, we show that $\theta_{syn} < \hat{x}_1(0)$. From (18), (24) and Proposition 3, it is enough to show that $t_1 < T_{0,1}$.
From (24), (25), (30) and $T_1 < T_2$, we have $x_1(\tau) < x_2(\tau)$ for $0 \le \tau \le T_0$; thus $\hat{x}_1(0) < \hat{x}_2(0)$ and
\[
x_1(T_2) < x_2(T_1).
\]
From (22), (35) and Lemma 1, we have
\[
T_{0,1} > T_1 + \int_{w_L(0)}^{w_R(1)} \frac{dw}{G_R(w)}.
\]
From (25) and (29) we have
\[
t_1 < T_2 + \frac{1}{\alpha + \beta} \ln \frac{\alpha}{\alpha - (\alpha + \beta)\theta_{syn}}.
\]
Thus (H7) and $T_2 - T_1 < \delta$ yield
\[
T_{0,1} - t_1 > T_1 - T_2 + \int_{w_L(0)}^{w_R(1)} \frac{dw}{G_R(w)} - \frac{1}{\alpha + \beta} \ln \frac{\alpha}{\alpha - (\alpha + \beta)\theta_{syn}} > 0.
\]
Next, we see that
\[
\hat{x}_1(\delta_1) < \hat{x}_2(0).
\]
Indeed, from (24) we have
\[
\begin{aligned}
\hat{x}_1(\delta_1) &= x_1(T_{0,2}) = \frac{\alpha}{\alpha + \beta} - \left(\frac{\alpha}{\alpha + \beta} - x_1(T_2)\right) e^{-(\alpha + \beta)(T_{0,2} - T_2)}, \\
\hat{x}_2(0) &= x_2(T_{0,1}) = \frac{\alpha}{\alpha + \beta} - \left(\frac{\alpha}{\alpha + \beta} - x_2(T_1)\right) e^{-(\alpha + \beta)(T_{0,1} - T_1)}.
\end{aligned}
\]
Also, (38) implies $T_{0,2} - T_2 < T_{0,1} - T_1$, and (42) follows from (41). We next prove (39). From (29) and (41) we have
\[
0 < t_1 - t_2 \le (T_2 - T_1) + \frac{1}{\alpha + \beta} \cdot \frac{x_2(T_1) - x_1(T_2)}{\frac{\alpha}{\alpha + \beta} - \theta_{syn}}.
\]
Using the notation $C_1 := 1 - \max_{0 \le s \le 1} |\lambda(s)|\, K / G_R(w_R(1))$ introduced in (H10), we rewrite (37) as
\[
\delta_1 = T_{0,2} - T_{0,1} \ge C_1 (T_2 - T_1) > 0.
\]
Hence
\[
T_2 - T_1 \le \frac{1}{C_1}\,\delta_1.
\]
On the other hand, from (17), (25) and Theorem 1 (i), (ii), we have $x_2(\tau) = x_1(\tau + t_1 - t_2)$ for $\tau \in [t_2, T_1]$. From $x_1(\tau) = \theta_{syn} e^{-\beta(\tau - t_1)}$ in $[t_1, T_2]$, it follows that $\dot{x}_1(\tau) = -\beta\theta_{syn} e^{-\beta(\tau - t_1)}$, and combining this with (41) yields
\[
0 < x_2(T_1) - x_1(T_2) = x_1(T_1 + t_1 - t_2) - x_1(T_2) = \beta\theta_{syn}\, e^{-\beta(\xi - t_1)}\,\big((T_2 - T_1) + (t_2 - t_1)\big)
\]
for some $\xi$ with $T_1 + t_1 - t_2 < \xi < T_2$.
Also, we have $x_1(\tau) = x_2(\tau - (t_1 - t_2))$ for $\tau \in [t_1, T_{0,2}]$. Indeed, from (38), (41) and (43), we have $T_{0,2} - T_{0,1} < t_1 - t_2$, thus $T_{0,2} - (t_1 - t_2) < T_{0,1}$. From
\[
x_2(\tau) = \frac{\alpha}{\alpha + \beta} - \left(\frac{\alpha}{\alpha + \beta} - \theta_{syn}\right) e^{-(\alpha + \beta)(\tau - t_2)}
\]
in $[T_1, T_{0,1}]$, it follows that $\dot{x}_2(\tau) = (\alpha + \beta)\left(\frac{\alpha}{\alpha + \beta} - \theta_{syn}\right) e^{-(\alpha + \beta)(\tau - t_2)}$. Thus
\[
x_2(T_{0,1}) - x_1(T_{0,2}) = x_2(T_{0,1}) - x_2\big(T_{0,2} - (t_1 - t_2)\big) = (\alpha + \beta)\left(\frac{\alpha}{\alpha + \beta} - \theta_{syn}\right) e^{-(\alpha + \beta)(\eta - t_2)}\,\big(T_{0,1} - T_{0,2} + t_1 - t_2\big)
\]
for some $\eta$ with $T_{0,2} - (t_1 - t_2) < \eta < T_{0,1}$. Hence
\[
x_2(T_{0,1}) - x_1(T_{0,2}) < (\alpha + \beta)\left(\frac{\alpha}{\alpha + \beta} - \theta_{syn}\right) e^{-(\alpha + \beta)(T_{0,2} - t_1)}\,(t_1 - t_2 - \delta_1).
\]
It is clear that (39) is equivalent to
\[
\frac{1}{\beta} \ln \frac{\hat{x}_2(0)}{\hat{x}_1(\delta_1)} < 2\delta_1,
\]
which we are going to prove. From (42) and (46) we have
\[
0 < \frac{1}{\beta} \ln \frac{\hat{x}_2(0)}{\hat{x}_1(\delta_1)} = \frac{1}{\beta} \ln\left(1 + \frac{\hat{x}_2(0) - \hat{x}_1(\delta_1)}{\hat{x}_1(\delta_1)}\right) < \frac{1}{\beta} \cdot \frac{\hat{x}_2(0) - \hat{x}_1(\delta_1)}{\hat{x}_1(\delta_1)} < \frac{\hat{x}_2(0) - \hat{x}_1(\delta_1)}{\beta\theta_{syn}} = \frac{x_2(T_{0,1}) - x_1(T_{0,2})}{\beta\theta_{syn}} < \frac{(\alpha + \beta)\left(\frac{\alpha}{\alpha + \beta} - \theta_{syn}\right) e^{-(\alpha + \beta)(T_{0,2} - t_1)}}{\beta\theta_{syn}}\,(t_1 - t_2 - \delta_1).
\]
On the other hand, from (36) and (45) it follows that
\[
x_2(T_1) - x_1(T_2) \le \beta\theta_{syn}\,(T_2 - T_1 + t_2 - t_1) \le 2\beta\theta_{syn}\,(T_2 - T_1).
\]
Thus (43) yields
\[
t_1 - t_2 \le T_2 - T_1 + \frac{x_2(T_1) - x_1(T_2)}{(\alpha + \beta)\left(\frac{\alpha}{\alpha + \beta} - \theta_{syn}\right)} \le \left(1 + \frac{2\beta\theta_{syn}}{(\alpha + \beta)\left(\frac{\alpha}{\alpha + \beta} - \theta_{syn}\right)}\right)(T_2 - T_1).
\]
Hence
\[
\frac{1}{\beta} \ln \frac{\hat{x}_2(0)}{\hat{x}_1(\delta_1)} < \frac{(\alpha + \beta)\left(\frac{\alpha}{\alpha + \beta} - \theta_{syn}\right) e^{-(\alpha + \beta)(T_{0,2} - t_1)}}{\beta\theta_{syn}}\,(t_1 - t_2 - \delta_1) \le \frac{(\alpha + \beta)\left(\frac{\alpha}{\alpha + \beta} - \theta_{syn}\right) e^{-(\alpha + \beta)(T_{0,2} - t_1)}}{\beta\theta_{syn}}\left[\left(1 + \frac{2\beta\theta_{syn}}{(\alpha + \beta)\left(\frac{\alpha}{\alpha + \beta} - \theta_{syn}\right)}\right)(T_2 - T_1) - \delta_1\right].
\]
We denote $A := (\alpha + \beta)\left(\frac{\alpha}{\alpha + \beta} - \theta_{syn}\right)/(\beta\theta_{syn})$. Thus, from (44) we have
\[
\frac{1}{\beta} \ln \frac{\hat{x}_2(0)}{\hat{x}_1(\delta_1)} < A\left[\left(1 + \frac{2}{A}\right)\frac{1}{C_1} - 1\right] e^{-(\alpha + \beta)(T_{0,2} - t_1)}\,\delta_1.
\]
Now, we need to estimate $e^{-(\alpha + \beta)(T_{0,2} - t_1)}$. From (29) and (35) we have
\[
T_{0,2} - t_1 = \int_{w_2(T_2)}^{w_R(1)} \frac{dw}{G_R(w)} - \frac{1}{\alpha + \beta} \ln \frac{\frac{\alpha}{\alpha + \beta} - x_1(T_2)}{\frac{\alpha}{\alpha + \beta} - \theta_{syn}}.
\]
Thus,
\[
e^{-(\alpha + \beta)(T_{0,2} - t_1)} = \frac{\frac{\alpha}{\alpha + \beta} - x_1(T_2)}{\frac{\alpha}{\alpha + \beta} - \theta_{syn}}\; e^{-(\alpha + \beta)\int_{w_2(T_2)}^{w_R(1)} \frac{dw}{G_R(w)}} < \frac{\frac{\alpha}{\alpha + \beta}}{\frac{\alpha}{\alpha + \beta} - \theta_{syn}}\; e^{-(\alpha + \beta)\int_{w_2(T_2)}^{w_R(1)} \frac{dw}{G_R(w)}}.
\]
Hence,
\[
\frac{1}{\beta} \ln \frac{\hat{x}_2(0)}{\hat{x}_1(\delta_1)} < \frac{\alpha}{\beta\theta_{syn}}\left[\left(1 + \frac{2}{A}\right)\frac{1}{C_1} - 1\right] e^{-(\alpha + \beta)\int_{w_2(T_2)}^{w_R(1)} \frac{dw}{G_R(w)}}\,\delta_1.
\]
In order to prove (39), we need to show that
\[
\frac{\alpha}{\beta\theta_{syn}}\left[\left(1 + \frac{2}{A}\right)\frac{1}{C_1} - 1\right] e^{-(\alpha + \beta)\int_{w_2(T_2)}^{w_R(1)} \frac{dw}{G_R(w)}} < 2,
\]
or equivalently,
\[
\frac{1}{\alpha + \beta} \ln\left[\left(\frac{1}{C_1} - 1\right)\frac{\alpha}{2\beta\theta_{syn}} + \frac{\alpha}{(\alpha + \beta)\left(\frac{\alpha}{\alpha + \beta} - \theta_{syn}\right)} \cdot \frac{1}{C_1}\right] < \int_{w_2(T_2)}^{w_R(1)} \frac{dw}{G_R(w)}.
\]
According to (22), $w_2(T_2)$ lies on the jump-up curve $w_L(s)$, and since this curve has negative slope, we conclude that $w_2(T_2) < w_L(0)$. Thus, (39) follows from (H10).
Finally, we note that
\[
\frac{1}{\alpha + \beta} \ln\left[\left(\frac{1}{C_1} - 1\right)\frac{\alpha}{\beta\theta_{syn}} + \frac{2\alpha}{(\alpha + \beta)\left(\frac{\alpha}{\alpha + \beta} - \theta_{syn}\right)} \cdot \frac{1}{C_1}\right] < \int_{w_2(T_2)}^{w_R(1)} \frac{dw}{G_R(w)}
\]
is a sufficient condition for $\hat{x}_2(0)\,e^{-\beta\delta_1} < \hat{x}_1(\delta_1)$, which proves the necessity of (H11) for (40). □

8. Example

The aim of this section is twofold. First, we demonstrate the consistency of all the Hypotheses (H1)–(H11). Second, we illustrate the stability analysis proved in Theorems 1 and 2.
As was noted in the Remark, our results are applicable to problems (2)–(3) in which $f(v, w)$ is given in the form (15). For the numerical experiments with (2)–(3), we take ([27], Appendix A)
\[
\begin{aligned}
f(v, w) &= 0.5(v + 0.5) - 3w(v + 0.72) - \left[1 + \tanh\left(\frac{v + 0.01}{0.15}\right)\right](v - 1) + 0.2, \\
g(v, w) &= 13.64568310\,P(v) + 5\,Q(v) - 4.88781214\,w
\end{aligned}
\]
with
\[
P(v) = \int_{-1}^{v} S(t)\,dt, \qquad Q(v) = \frac{1}{1 + \exp\left(-(v + 0.18)/0.005\right)}, \qquad
S(v) = \begin{cases} \exp\left(-\dfrac{1}{(0.15 - v)^2}\right), & v < 0.15, \\ 0, & v \ge 0.15.\end{cases}
\]
$g_{syn} = 0.3$, $v_{syn} = -0.72$, $\theta_{syn} = 0.05$, $\theta_v = -0.01$, $\epsilon = 0.003$, $\phi = 0.3$ ([27], Remark 5), $\alpha = 0.496541$, $\beta = 7.370830$, $K = 0.148978$, such that all the assumptions are satisfied. Indeed, (H1)–(H11) can easily be verified using
$v_L(0) = -0.358901$, $w_L(0) = 0.27346$, $v_L(1) = -0.358901$, $w_L(1) = 0.173461$, $v_R(1) = 0.103569$, $w_R(1) = 0.697901$, $(w_R(1) - w_L(0))/d_0 = 0.10611$, $(w_R(1) - w_L(1))/d_1 = 0.174813$, $C_1 = 0.99$, $G_R(w) = 2.18768\,w$.
As is seen in Figure 3, $\Phi(v, w, s)$ for $s \in [0, 1]$ and $g(v, w)$ define a relaxation oscillator for (2). The jump-up curve $w = w_L(s)$, corresponding to the curve of minima $(v_L(s), w_L(s))$ on the left branches of $\Phi(v, w, s) = 0$ for $s \in [0, 1]$, is $w = w_L(0) - 0.1s$ with $w_L(0) = 0.27346$, and the corresponding function $v_L(s) = -0.358901$ is constant. Thus, the slope is $\lambda(s) := dw_L/ds = -0.1$ for $s \in [0, 1]$. The jump-down point $(w_R(1), 1)$ corresponds to the local maximum of $\Phi(v, w, s) = 0$ at $s = 1$, and $w_R(1) = 0.697901$. Note that $v_L(s) = -0.358901$ is the threshold above which the neurons generate their action potentials. If $v_L(s) = -0.358901 \times 10^{2}$ mV, it is biologically plausible.
Condition (H1) is satisfied. Indeed, $g_w = -4.88781214 < 0$ and $f_w = -3(v - v_{syn}) < 0$, since $v > v_{syn} = -0.72$ for the inhibitory synapses. Again, if $v_{syn} = -0.72 \times 10^{2}$ mV, it is biologically relevant. The condition $f_w < 0$ means that the smaller the relaxation variable $w$ is, the larger the rate of change of the membrane potential $v$ is.
Conditions (H2) and (H3) can be checked explicitly and are confirmed graphically in Figure 4. In particular, (H3) means that the rate of change of the relaxation variables $w_i$, $i = 1, 2$, does not depend on the membrane potentials $v_i$ in the active phases of the oscillators. Since $a = 1$, condition (H4) is also satisfied.
The constant $\alpha/(\alpha + \beta)$ is the least upper bound for the auxiliary variables $x_i(t) > 0$, $i = 1, 2$. The inequality $\theta_{syn} < \alpha/(\alpha + \beta)$ therefore allows $x_i(t)$, $i = 1, 2$, to cross the threshold $\theta_{syn}$ from above and from below during the cycles of the two oscillators. Equivalently, the maximal inhibition $s_i = 1$, $i = 1, 2$, switches on and off according to (6) and (8) (or (23)) during the cycles. Clearly, (H5) is satisfied with the values of $\theta_{syn}$, $\alpha$, $\beta$ chosen above. All the remaining conditions (H6)–(H11) can also easily be verified.
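As an illustration, two of these checks reduce to direct arithmetic. The sketch below is our own verification, not code from the paper: it recomputes $C_1$ from its definition and checks our transcription of the integral inequality derived in the proof of Theorem 2, bounding the integral from below by starting it at $w_L(0)$ (legitimate, since the integrand is positive and $w_2(T_2) < w_L(0)$).

```python
import math

# Example values from this section
alpha, beta, theta_syn = 0.496541, 7.370830, 0.05
K, lam_max = 0.148978, 0.1                  # K and max |lambda(s)| = |-0.1|
wL0, wR1, c = 0.27346, 0.697901, 2.18768    # jump values and G_R(w) = c*w

# C1 = 1 - max|lambda(s)| * K / G_R(w_R(1))
C1 = 1.0 - lam_max * K / (c * wR1)
print(round(C1, 2))  # 0.99, as quoted above

# Lower bound for the integral: G_R(w) = c*w integrates to a logarithm
I = math.log(wR1 / wL0) / c

# Left-hand side of the (H10)-type inequality (our transcription)
xs = alpha / (alpha + beta)
lhs = math.log((1 / C1 - 1) * alpha / (2 * beta * theta_syn)
               + alpha / ((alpha + beta) * (xs - theta_syn)) / C1) / (alpha + beta)
print(lhs < I)  # True: the sufficient condition holds for these parameters
```

Here the comfortable margin between the two sides indicates that the synchronization regime is not knife-edge for this parameter set.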
In order to verify the stability analysis of Theorems 1 and 2, we computed a numerical solution of the system (19)–(25). To this end, we needed to calculate the function $G_L(w, s) := g(h_L(w, s), w)$, where $v = h_L(w, s)$ represents the left branches of $\Phi(v, w, s) = 0$, on the phase space $D = \{(w, s) : 0 \le s \le 1,\ w_L(s) \le w \le w_R(1)\}$ (Figure 5). On the other hand, the functions $G_L(w, s)$ and $G_R(w)$ are essential for conditions (H6)–(H11).
We take $\delta = 0.005$, $x_1(0) = 0.051$, $x_2(0) = 0.055$; then $x_1(\delta) = 0.05146$, and it is easy to check that (16) and (30) are fulfilled. The advantage of solving (19)–(25) instead of (2)–(3) is that it directly yields the key times $T_i$, $T_{0,i}$, $i = 1, 2$, of the two oscillators during the cycles. The graphs of the membrane potentials $v_1(t)$, $v_2(t)$ obtained from the solutions of (19)–(25) are presented in Figure 6, and a plot of the trajectory in the $v$–$w$ phase space is shown in Figure 7. Stable synchronization of the two cells over 23 cycles can be seen in Table 1, as predicted by Theorems 1 and 2. Moreover, the change in $\delta$ per cycle over 68 cycles (Figure 8) further confirms the stability.
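For readers who want to reproduce qualitatively similar traces without the slow–fast reduction (19)–(25), a direct time-stepping of a system with the coupling structure of (2)–(3) can be sketched as follows. This is emphatically not the paper's model: we substitute a standard FitzHugh–Nagumo-style relaxation oscillator for $f$, $g$ and a smooth first-order synapse for the $s$-dynamics (the parameters `I_ext`, `a`, `b`, `eps`, `a_s`, `b_s`, `theta` are all our assumptions), keeping only the inhibitory coupling term $-s_j\, g_{syn}(v_i - v_{syn})$.

```python
# Two relaxation oscillators with mutual inhibition (forward Euler).
# f, g below are FitzHugh-Nagumo stand-ins, NOT the f, g of this paper;
# only the coupling term -s_j * g_syn * (v_i - v_syn) mirrors (2)-(3).
a, b, eps, I_ext = 0.7, 0.8, 0.08, 0.5   # assumed FHN parameters
g_syn, v_syn, theta = 0.2, -2.5, 0.0     # inhibitory synapse (assumed)
a_s, b_s = 5.0, 0.5                      # synapse rise/decay rates (assumed)

def step(state, dt):
    v1, w1, s1, v2, w2, s2 = state
    def dv(v, w, s_other):               # voltage equation with inhibition
        return v - v**3 / 3 - w + I_ext - s_other * g_syn * (v - v_syn)
    def dw(v, w):                        # slow recovery variable
        return eps * (v + a - b * w)
    def ds(v, s):                        # s rises while the cell is above threshold
        on = 1.0 if v > theta else 0.0
        return a_s * (1 - s) * on - b_s * s
    return (v1 + dt * dv(v1, w1, s2), w1 + dt * dw(v1, w1), s1 + dt * ds(v1, s1),
            v2 + dt * dv(v2, w2, s1), w2 + dt * dw(v2, w2), s2 + dt * ds(v2, s2))

state = (-1.2, -0.6, 0.0, -1.0, -0.6, 0.0)   # slightly offset initial data
dt, vs1 = 0.01, []
for _ in range(30000):                        # integrate to t = 300
    state = step(state, dt)
    vs1.append(state[0])

print(max(vs1) > 1.0 and min(vs1) < -1.0)     # both voltage excursions occur
```

Such a direct simulation shows spiking traces like Figure 6 qualitatively, but it does not expose the jump times $T_i$, $T_{0,i}$, which is precisely why the reduced system (19)–(25) is solved in the paper.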
In Table 1, the relative times of the two oscillators at the jumps up ($T_1$, $T_2$) and the jumps down ($T_{0,1}$, $T_{0,2}$), measured from the beginning of each cycle, are given. First, note that cell 1 is the leading cell in all the cycles, since the numbers in the columns $T_2 - T_1$ and $T_{0,2} - T_{0,1}$ are all positive. Second, each time cell 1 jumps up (down), cell 2 jumps up (down) almost immediately, because in each cycle $T_2 - T_1 < \delta$ and $T_{0,2} - T_{0,1} < \delta$, and the numbers in the columns $T_2 - T_1$ and $T_{0,2} - T_{0,1}$ decrease from cycle to cycle. Finally, we observe compression both in the silent and in the active phases of the oscillators in each cycle, as predicted by Theorems 1 and 2: the numbers in the columns $(T_{0,2} - T_{0,1})_{\text{previous row}} - (T_2 - T_1)$ and $(T_2 - T_1) - (T_{0,2} - T_{0,1})$ are all positive. Thus, we establish stable synchronization. Note that the trajectories of the two oscillators are not exactly periodic, since after the second cycle the numbers in the column $T_{0,1}$ decrease while those in the column $T_{0,2}$ increase.
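The compression pattern described above can be checked mechanically. The snippet below hard-codes the $T_2 - T_1$ and $T_{0,2} - T_{0,1}$ columns of the first six rows of Table 1 and verifies the two positivity claims.

```python
# (T2 - T1, T02 - T01) per cycle, copied from the first six rows of Table 1
rows = [
    (0.004996, 0.004989),
    (0.004947, 0.004877),
    (0.004835, 0.004765),
    (0.004724, 0.004655),
    (0.004616, 0.004548),
    (0.004510, 0.004444),
]

# Compression in the active phase: (T2 - T1) - (T02 - T01) > 0 in each cycle
assert all(d_up - d_down > 0 for d_up, d_down in rows)

# Compression in the silent phase: previous (T02 - T01) minus current (T2 - T1) > 0
assert all(rows[i - 1][1] - rows[i][0] > 0 for i in range(1, len(rows)))

print("compression confirmed on the first six cycles")
```

The same check passes on all 23 tabulated cycles; the per-cycle gain is small (a few times $10^{-5}$), which is why synchronization is gradual rather than immediate.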

9. Concluding Remarks

In this article, the dynamics of two mutually coupled inhibitory neurons is examined. A parameter regime for stable synchronous behavior is obtained analytically and verified numerically. In our study two factors play a crucial role in synchronization: the inhibition delay, as well as the inhibition decay; the latter must be small. These observations agree with those known in the literature [32,36] and, as was mentioned before, are applicable to a Morris–Lecar model [38].
Although only two cells are synchronized, which is a limitation of this study, the results obtained here could be relevant to neural clusters or to the synchronization of many more nerve cells with direct inhibition instead of synaptic inhibition [39]. On the other hand, the considered model is rather general, since it includes two arbitrary nonlinear functions; they only need to satisfy our conditions, which makes the results useful beyond purely biological applications.
Let us mention explicitly that the dynamics of (2)–(3) is much richer than that captured by Theorems 1 and 2. There are parameter regimes in which one oscillator fires several times while the other remains in its silent state; antiphase solutions are also possible [27]. Such behavior is also described in other inhibitory models [31,34].

Author Contributions

Conceptualization, J.V.C.; methodology, J.V.C., D.R.C. and T.G.G.; investigation, J.V.C., D.R.C. and T.G.G.; resources, J.V.C., D.R.C. and T.G.G.; writing—original draft preparation, T.G.G.; writing—review and editing, J.V.C., D.R.C. and T.G.G.; visualization, D.R.C.; software, D.R.C.; validation, J.V.C., D.R.C. and T.G.G.; formal analysis, J.V.C., D.R.C. and T.G.G.; data curation, D.R.C.; project administration, J.V.C.; funding acquisition, J.V.C. All authors have read and agreed to the published version of the manuscript.

Funding

This study was financed by the European Union-NextGenerationEU through the National Recovery and Resilience Plan of the Republic of Bulgaria, project number BG-RRP-2.013-0001, and by the Scientific Research Fund of the University of Ruse “Angel Kanchev” under project 2025-FNSE-03.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors are very grateful to the anonymous reviewers, whose valuable comments and suggestions improved the quality of this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Thayer, J.F. On the Importance of Inhibition: Central and Peripheral Manifestations of Nonlinear Inhibitory Processes in Neural Systems. Dose Response 2006, 4, 2–21.
  2. Buzsáki, G.; Kaila, K.; Raichle, M. Inhibition and Brain Work. Neuron 2007, 56, 771–783.
  3. Lenin, D.; de la Paz, O.; Gulias-Cañizo, R.; D’Abril Ruíz-Leyja, E.; Sánchez-Castillo, H.; Parodí, J. The role of GABA neurotransmitter in the human central nervous system, physiology, and pathophysiology. Rev. Mex. Neurocienc. 2021, 22, 67–76.
  4. Papatheodoropoulos, C. Compensatory Regulation of Excitation/Inhibition Balance in the Ventral Hippocampus: Insights from Fragile X Syndrome. Biology 2025, 14, 363.
  5. Young, G. Activation-Inhibition Coordination in Neuron, Brain, and Behavior Sequencing/Organization: Implications for Laterality and Lateralization. Symmetry 2022, 14, 2051.
  6. Baldwin, K.T.; Giger, R.J. Insights into the physiological role of CNS regeneration inhibitors. Front. Mol. Neurosci. 2015, 8, 2015.
  7. Buzsáki, G. Rhythms of the Brain; Oxford University Press: Oxford, UK, 2006.
  8. Mann, E.O.; Paulsen, O. Role of GABAergic inhibition in hippocampal network oscillations. Trends Neurosci. 2007, 30, 343–349.
  9. Swanson, O.; Maffei, A. From Hiring to Firing: Activation of Inhibitory Neurons and Their Recruitment in Behavior. Front. Mol. Neurosci. 2019, 12, 168.
  10. Giordano, N.; Alia, C.; Fruzzetti, L.; Pasquini, M.; Palla, G.; Mazzoni, A.; Micera, S.; Fogassi, L.; Bonini, L.; Caleo, M. Fast-Spiking Interneurons of the Premotor Cortex Contribute to Initiation and Execution of Spontaneous Actions. J. Neurosci. 2023, 43, 4234–4250.
  11. Branson, K.; Freeman, J. Imaging the Neural Basis of Locomotion. Cell 2015, 163, 541–542.
  12. Freund, T.F. Interneuron Diversity series: Rhythm and mood in perisomatic inhibition. Trends Neurosci. 2003, 26, 489–495.
  13. Takagi, Y.; Sakai, Y.; Abe, Y.; Nishida, S.; Harrison, B.J.; Martínez-Zalacaín, I.; Soriano-Mas, C.; Narumoto, J.; Tanaka, S.C. A common brain network among state, trait, and pathological anxiety from whole-brain functional connectivity. NeuroImage 2018, 172, 506–516.
  14. Stiefel, K.; Ermentrout, B. Neurons as oscillators. J. Neurophysiol. 2016, 116, 2950–2960.
  15. Kopell, N.; Ermentrout, B. Chemical and electrical synapses perform complementary roles in the synchronization of interneuronal networks. Proc. Natl. Acad. Sci. USA 2004, 101, 15482–15487.
  16. Ryu, H.; Campbell, S.A. Geometric analysis of synchronization in neuronal networks with global inhibition and coupling delays. Philos. Trans. R. Soc. A 2019, 377, 20180129.
  17. Wang, Y.; Shi, X.; Si, B.; Cheng, B.; Chen, J. Synchronization and oscillation behaviors of excitatory and inhibitory populations with spike-timing-dependent plasticity. Cogn. Neurodynamics 2022, 17, 715–727.
  18. Gelastopoulos, A.; Kopell, N. Interactions of multiple rhythms in a biophysical network of neurons. J. Math. Neurosci. 2020, 10, 19.
  19. Miller, J.; Ryu, H.; Wang, X.; Booth, V.; Campbell, S.A. Patterns of synchronization in 2D networks of inhibitory neurons. Front. Mol. Neurosci. 2022, 16, 903883.
  20. Haigh, Z.J.; Tran, H.; Berger, T.; Shirinpour, S.; Alekseichuk, I.; Koenig, S.; Zimmermann, J.; McGovern, R.; Darrow, D.; Herman, A.; et al. Modulation of motor excitability reflects traveling waves of neural oscillations. Cell Rep. 2025, 44, 115864.
  21. Arbi, A. Novel traveling waves solutions for nonlinear delayed dynamical neural networks with leakage term. Chaos Solitons Fractals 2021, 152, 111436.
  22. Kopell, N.; Ermentrout, G.B. Coupled oscillators and the design of central pattern generators. Math. Biosci. 1988, 90, 87–109.
  23. Wang, X.J.; Rinzel, J. Alternating and synchronous rhythms in reciprocally inhibitory model neurons. Neural Comput. 1992, 4, 84–97.
  24. Wang, X.J.; Rinzel, J. Spindle rhythmicity in the reticularis thalami nucleus: Synchronization among mutually inhibitory neurons. Neuroscience 1993, 53, 899–904.
  25. Golomb, D.; Rinzel, J. Dynamics of globally coupled inhibitory neurons with heterogeneity. Phys. Rev. E 1993, 48, 4810–4814.
  26. Skinner, F.K.; Kopell, N.; Marder, E. Mechanisms for oscillation and frequency control in reciprocally inhibitory model neural networks. J. Comput. Neurosci. 1994, 1, 69–87.
  27. Terman, D.; Kopell, N.; Bose, A. Dynamics of Two Mutually Coupled Slow Inhibitory Neurons. Phys. D 1998, 117, 241–275.
  28. van Vreeswijk, C.; Abbott, L.F.; Ermentrout, B. When Inhibition not Excitation Synchronizes Neural Firing. J. Comput. Neurosci. 1994, 1, 313–321.
  29. Hansel, D.; Mato, G.; Meunier, C. Synchrony in Excitatory Neural Networks. Neural Comput. 1995, 7, 307–337.
  30. Karbowski, J.; Kopell, N. Multispikes and synchronization in a large neural network with temporal delays. Neural Comput. 2000, 12, 1573–1606.
  31. Pusuluri, K.; Ju, H.; Shilnikov, A. Chaotic dynamics in neural systems. In Synergetics; Encyclopedia of Complexity and Systems Science; Springer: New York, NY, USA, 2020; pp. 197–209.
  32. Chauhan, A.S.; Taylor, J.D.; Nogaret, A. Dual Mechanism for the Emergence of Synchronization in Inhibitory Neural Networks. Sci. Rep. 2018, 8, 11431.
  33. Brunel, N. Dynamics of Sparsely Connected Networks of Excitatory and Inhibitory Spiking Neurons. J. Comput. Neurosci. 2000, 8, 183–208.
  34. Matveev, V.; Bose, A.; Nadim, F. Capturing the bursting dynamics of a two-cell inhibitory network using a one-dimensional map. J. Comput. Neurosci. 2007, 23, 169–187.
  35. Ermentrout, G.B.; Kopell, N. Fine structure of neural spiking and synchronization in the presence of conduction delays. Proc. Natl. Acad. Sci. USA 1998, 95, 1259–1264.
  36. Shavikloo, M.; Esmaeili, A.; Valizadeh, A.; Madadi, A.M. Synchronization of delayed coupled neurons with multiple synaptic connections. Cogn. Neurodynamics 2024, 18, 631–643.
  37. Tancredi, G.; Sanchez, A.; Roig, F. A comparison between methods to compute Lyapunov exponents. Astron. J. 2001, 121, 1171–1179.
  38. Morris, C.; Lecar, H. Voltage oscillations in the barnacle giant muscle fiber. Biophys. J. 1981, 35, 193–213.
  39. Wnuk, A. How Inhibitory Neurons Shape the Brain’s Code. Available online: https://www.brainfacts.org/ (accessed on 6 October 2021).
Figure 1. Nullclines and singular limit cycle (red) of a relaxation oscillator.
Figure 2. The phase planes ws both in the silent phase and in the active phase—one possibility. In the silent phase the trajectories of the two cells are given in blue (cell 1 starts at τ = 0 while cell 2 starts at τ = δ ); in the active phase the trajectories are given in red. The jump-up curve w = w L ( s ) is presented in green.
Figure 3. $\Phi(v, w, s) := f(v, w) - s\,g_{syn}(v - v_{syn}) = 0$ are cubic-shaped curves for different values of $s \in [0, 1]$ (red $s = 1$, green $s = 0.9$, and so on in different colors), and $g(v, w) = 0$ is increasing and intersects the middle branches of the cubics. This defines a relaxation oscillator for (2).
Figure 4. g ( v , w ) decreases in w and increases in v on the left branches of Φ ( v , w , s ) = 0 and is a constant in v on the right branches of Φ ( v , w , s ) = 0 .
Figure 5. $G_L(w, s)$ on the phase space $D = \{(w, s) : 0 \le s \le 1,\ w_L(s) \le w \le w_R(1)\}$.
Figure 6. The graphs of the membrane potentials v 1 ( t ) (blue), v 2 ( t ) (orange) of the two neurons.
Figure 7. A graph plotting the trajectories of the two cells in the vw phase plane.
Figure 8. A graph depicting the variation in $\delta$ across successive cycles. The data for this figure are derived from the column $T_{0,2} - T_{0,1}$ in Table 1.
Table 1. Relative times of the two oscillators.
| Cycle | $T_1$ | $T_2$ | $T_2 - T_1$ | $T_{0,1}$ | $T_{0,2}$ | $T_{0,2} - T_{0,1}$ | $(T_{0,2} - T_{0,1})_{\text{prev}} - (T_2 - T_1)$ | $(T_2 - T_1) - (T_{0,2} - T_{0,1})$ |
|---|---|---|---|---|---|---|---|---|
| 1 | 0.145978 | 0.145973 | 0.004996 | 0.446579 | 0.446567 | 0.004989 |  | 0.000007 |
| 2 | 0.146055 | 0.146013 | 0.004947 | 0.446787 | 0.446675 | 0.004877 | 0.000041 | 0.000070 |
| 3 | 0.146057 | 0.146015 | 0.004835 | 0.446792 | 0.446680 | 0.004765 | 0.000041 | 0.000071 |
| 4 | 0.146056 | 0.146016 | 0.004724 | 0.446791 | 0.446681 | 0.004655 | 0.000041 | 0.000069 |
| 5 | 0.146056 | 0.146016 | 0.004616 | 0.446789 | 0.446683 | 0.004548 | 0.000040 | 0.000067 |
| 6 | 0.146055 | 0.146016 | 0.004510 | 0.446788 | 0.446684 | 0.004444 | 0.000039 | 0.000066 |
| 7 | 0.146055 | 0.146017 | 0.004406 | 0.446787 | 0.446685 | 0.004342 | 0.000038 | 0.000064 |
| 8 | 0.146054 | 0.146017 | 0.004305 | 0.446786 | 0.446686 | 0.004242 | 0.000037 | 0.000063 |
| 9 | 0.146054 | 0.146018 | 0.004206 | 0.446785 | 0.446687 | 0.004144 | 0.000036 | 0.000061 |
| 10 | 0.146053 | 0.146018 | 0.004109 | 0.446784 | 0.446688 | 0.004049 | 0.000035 | 0.000060 |
| 11 | 0.146053 | 0.146019 | 0.004015 | 0.446782 | 0.446689 | 0.003956 | 0.000034 | 0.000059 |
| 12 | 0.146053 | 0.146019 | 0.003922 | 0.446781 | 0.446691 | 0.003865 | 0.000034 | 0.000057 |
| 13 | 0.146052 | 0.146019 | 0.003832 | 0.446780 | 0.446692 | 0.003776 | 0.000033 | 0.000056 |
| 14 | 0.146052 | 0.146020 | 0.003744 | 0.446779 | 0.446693 | 0.003690 | 0.000032 | 0.000055 |
| 15 | 0.146052 | 0.146020 | 0.003658 | 0.446778 | 0.446694 | 0.003605 | 0.000031 | 0.000053 |
| 16 | 0.146051 | 0.146020 | 0.003574 | 0.446777 | 0.446695 | 0.003522 | 0.000031 | 0.000052 |
| 17 | 0.146051 | 0.146021 | 0.003492 | 0.446776 | 0.446695 | 0.003441 | 0.000030 | 0.000051 |
| 18 | 0.146050 | 0.146021 | 0.003412 | 0.446776 | 0.446696 | 0.003362 | 0.000029 | 0.000050 |
| 19 | 0.146050 | 0.146022 | 0.003333 | 0.446775 | 0.446697 | 0.003285 | 0.000029 | 0.000049 |
| 20 | 0.146050 | 0.146022 | 0.003257 | 0.446774 | 0.446698 | 0.003209 | 0.000028 | 0.000048 |
| 21 | 0.146049 | 0.146022 | 0.003182 | 0.446773 | 0.446699 | 0.003136 | 0.000027 | 0.000046 |
| 22 | 0.146049 | 0.146022 | 0.003109 | 0.446772 | 0.446700 | 0.003063 | 0.000027 | 0.000045 |
| 23 | 0.146049 | 0.146023 | 0.003037 | 0.446771 | 0.446701 | 0.002993 | 0.000026 | 0.000044 |