Article

Hysteresis in Neuron Models with Adapting Feedback Synapses

by
Sebastian Thomas Lynch
and
Stephen Lynch
*
Department of Computer Science, Loughborough University, Loughborough LE11 3TU, UK
*
Author to whom correspondence should be addressed.
AppliedMath 2025, 5(2), 70; https://doi.org/10.3390/appliedmath5020070
Submission received: 2 May 2025 / Revised: 5 June 2025 / Accepted: 9 June 2025 / Published: 13 June 2025

Abstract
Despite its significance, hysteresis remains underrepresented in mainstream models of plasticity. In this work, we propose a novel framework that explicitly models hysteresis in simple one- and two-neuron models. Our models capture key feedback-dependent phenomena such as bistability, multistability, periodicity, quasi-periodicity, and chaos, offering a more accurate and general representation of neural adaptation. This opens the door to new insights in computational neuroscience and neuromorphic system design. Synaptic weights change through several mechanisms, including Bienenstock–Cooper–Munro (BCM) synaptic modification, where synaptic changes depend on the level of post-synaptic activity; homeostatic plasticity, where all of a neuron's synapses simultaneously scale up or down to maintain stability; metaplasticity, or plasticity of plasticity; neuromodulation, where neurotransmitters influence synaptic weights; developmental processes, where synaptic connections are actively formed, pruned, and refined; disease or injury, where neurological conditions can induce maladaptive synaptic changes; spike-timing-dependent plasticity (STDP), where changes depend on the precise timing of pre- and postsynaptic spikes; and structural plasticity, where changes in dendritic spines and axonal boutons can alter synaptic strength. The ability of synapses and neurons to change in response to activity is fundamental to learning, memory formation, and cognitive adaptation. This paper presents simple continuous and discrete neuro-modules with adapting feedback synapses which are, in turn, subject to feedback. The dynamics of continuous periodically driven Hopfield neural networks with adapting synapses have been investigated since the 1990s in terms of periodicity and chaotic behavior. For the first time, one- and two-neuron models are considered as parameters are varied using a feedback mechanism, which more accurately represents real-world behavior.
It is shown that these models are history dependent. A simple discrete two-neuron model with adapting feedback synapses is analyzed in terms of stability, and bifurcation diagrams are plotted as parameters are increased and decreased. This work has the potential to improve learning algorithms, increase understanding of neural memory formation, and inform neuromorphic engineering research.

1. Introduction

In 1992, Dong and Hopfield [1] considered neural networks with two kinds of time-dependent dynamics: one involving the action potentials of neurons and the other the change in synaptic strengths between those neurons. Adaptive synaptic connections play a crucial role in brain plasticity, or neuro-plasticity, which is the brain’s ability to reorganize itself by forming new neural connections throughout life in response to new experiences or learning. This adaptability allows the brain to compensate for injuries, adapt to new situations, and learn new skills; see, for example, [2]. Elasto-plasticity was first observed in solids by Tresca in 1864 [3], and the term “hysteresis” was first coined by Sir James Alfred Ewing in 1885 when describing magnetic materials [4]. Hysteresis is a phenomenon in which the state of a system depends not only on its current conditions but also on its past history. In other words, the output or response of the system lags behind changes in the input. This creates a sort of memory effect, often visible as a bistable loop when input–output relationships are graphed. The author has demonstrated hysteresis in a wide range of phenomena, including single muscle fibers, non-linear electrical circuits, non-linear optical resonators and microfiber ring resonators, mechanical engineering, chemical kinetics, population dynamics of blood cells, and angiogenesis and hematopoiesis; for example, see [5]. Hysteresis is prevalent in biology, as shown in [6], so it should come as no surprise that it occurs in these neuronal models with adaptive feedback synapses.
Synaptic weights can change in a variety of ways, but hysteretic effects are rarely covered. BCM synaptic modification, particularly in the visual cortex, is studied in more biophysical detail in [7], and implementation has recently been achieved with memristors in [8]. Homeostatic plasticity in the mouse primary visual cortex is investigated in [9], where it is shown that homeostatic forms of plasticity can be recruited in a structured way according to the evolving needs of a developing neural circuit. A metaplasticity view of the interaction between homeostatic and Hebbian plasticity is covered in [10]. Neuromodulation is important in human movement control, and an imbalance of neurotransmitters, such as dopamine and serotonin, can be associated with various neurological disorders causing tremors or spasms [11]. There are over one hundred different types of neurotransmitter in the brain, with 80% being excitatory and 20% inhibitory. Neurotransmitter concentrations in the human brain vary throughout the day; in fact, this variation is essential for regulating sleep, mood, attention, and overall brain function. In the womb, autism spectrum disorders can manifest from developmental synaptic processes involving genes and other elements such as epigenetic factors, hormonal and inflammatory signals, microbiota, and stress [12]. Adaptive plasticity after damage to the human brain is discussed in [13], and behavioral training has been shown to induce synaptic plasticity in both intact and injured animals [14]. STDP was successfully implemented with Ta2O5-based memristors and CMOS-based neurons, with applications in unsupervised learning tasks [15]. Synaptic plasticity in optoelectronic devices provides a promising route in the development of energy-efficient and adaptive optoelectronic neuromorphic systems [16].
Multistable synaptic plasticity and its influence on collective synchronization are shown to give rise to co-existing chimera-like and bump-like states in a network [17]. In 2021, Bao et al. studied a memristive single-neuron model with an adapting synapse and constructed an analog circuit implementation [18]. The phase portraits on their oscilloscopes are similar to those shown in this paper. More recently, Seralan et al. [19] considered a regular network of neurons connected through memristive synapses. They begin with one- and two-neuron models before looking at simple networks of one hundred neurons, and they observed that collective behavior and synchrony depend on the number of synaptic connections, coupling strengths, and rewiring probabilities. Ma et al. [20] investigate a discrete memristor-coupled Rulkov neuron map, using discrete memristors to model synaptic functionality, and present both numerical and experimental results.
For the first time, this paper presents hysteretic models of simple continuous and discrete neuronal systems where a single parameter is increased and decreased and synaptic connections adapt. In the real-world examples listed above, a number of parameters increase and decrease, and hysteresis will inevitably follow. Future research should concentrate on how hysteresis affects those models.
For definitions, theory, programming, and applications in dynamical systems, the author recommends his books based on Maple™ [21], Mathematica® [22], MATLAB® [23], and Python [24]. The reader may find it useful to experiment with the MATLAB R2025a and Python 3 program files listed in this paper. Appendix A lists a MATLAB program to produce an animation of a phase portrait as a parameter increases. Appendix B lists a MATLAB program, using the second iterative method, to plot a bifurcation diagram. Appendix C gives a Python program for plotting a stability diagram. Finally, Appendix D shows a Python program, using the second iterative method, for plotting a bifurcation diagram.

2. Continuous Neuron Models with Adapting Feedback Synapse

When modeling simple neural networks, two variables are often employed: the action potential of the neuron and the synaptic strength of the connections between the neurons. First, consider the single neuron model, as described by Li and Chen [25]:
$$\frac{du}{dt} = -u + \phi(u)\,\phi(s) + I(t), \qquad \frac{ds}{dt} = -\alpha s + \alpha\,\phi^{2}(u), \tag{1}$$
where u is the action potential of the neuron and s represents the adapting synapse strength. The activation functions are defined by ϕ(u) = f(au) and ϕ(s) = f(bs), where a and b are positive constants, and f(u) = 3u e^(−u²/2). The input is given by I(t) = ϵ sin(ωt), making this a non-autonomous, periodically driven system. The function f is the same as that used in [25]; its shape is reminiscent of the derivative of a Gaussian function and resembles the receptive fields of certain biological neurons, especially those in the visual cortex. The function also captures a strong response to moderate levels of stimulation and a diminishing response to high input, which mimics the saturation behavior of biological neurons. This is useful in pattern recognition, function approximation, and unsupervised learning contexts such as self-organizing maps. The parameter α determines how quickly or slowly the synapse adapts. In biophysical mechanisms, it might correspond to, for example, calcium dynamics, enzyme kinetics, or receptor trafficking. In artificial models, it often reflects a design choice that controls learning speed and memory trace duration.
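As a quick numerical check (not part of the original programs), the following Python sketch confirms the stated shape of f: an odd, Gaussian-derivative-like kernel whose response peaks at the moderate input u = 1 and decays toward zero for large input.

```python
import numpy as np

def f(u):
    # Activation kernel used throughout the paper: f(u) = 3u * exp(-u^2 / 2).
    return 3 * u * np.exp(-u**2 / 2)

u = np.linspace(-5, 5, 1001)
vals = f(u)
peak_u = u[np.argmax(vals)]   # strongest response at moderate input (u = 1)
peak_value = f(1.0)           # peak value is 3 / sqrt(e)
tail = f(5.0)                 # diminishing response at high input
```

Here `peak_u` comes out at u = 1 and f(5) is already below 10⁻⁴, matching the saturation behavior described above.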
Figure 1 shows how the activation function changes as the parameter a changes, either stretching or contracting in the horizontal direction.
Figure 2 shows phase portraits for the system (1) as the parameter α increases from α = 0.5 to α = 5 . One can animate this process using a suitable mathematical package as referred to earlier. Appendix A lists a program for animating a single trajectory as α increases. There are periodic and chaotic attractors that connect for certain parameter values. Of course, this behavior can be summarized in a bifurcation diagram, see Figure 3. In Figure 2a, when α = 0.5 , there are four period-1 orbits; as α increases to one, two of the period-1 cycles become period-2. When α = 2 , a connected chaotic cycle forms for two of the cycles, which persists when α = 3 . When α = 3.7 , there are again two period-1 cycles and two period-2 cycles. Finally, when α = 5 , there are four period-1 cycles again.
Comparing Figure 2 and Figure 3, it is easy to see how the bifurcation diagram is a summary of the phase portraits; however, there is no feedback in α as yet.
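The first iterative method behind Figure 3 amounts to stroboscopic (Poincaré) sampling: integrate the driven system and record (u, s) once per forcing period T = 2π/ω. A minimal Python sketch of this sampling, assuming scipy and the Figure 2 parameter values (the paper's own programs for these figures are in MATLAB):

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b, eps, omega, alpha = 5.0, 5.0, 0.2, 2*np.pi, 0.5

def f(x):
    return 3 * x * np.exp(-x**2 / 2)

def neuron(t, z):
    # System (1): action potential u and adapting synapse strength s.
    u, s = z
    du = -u + f(a*u) * f(b*s) + eps * np.sin(omega * t)
    ds = -alpha * s + alpha * f(a*u)**2
    return [du, ds]

T = 2 * np.pi / omega                       # forcing period (T = 1 here)
sol = solve_ivp(neuron, [0, 200*T], [-0.1, 1.0],
                t_eval=np.arange(0, 200*T, T), rtol=1e-8, atol=1e-8)
poincare = sol.y[:, 100:]                   # drop transients; one point per period
```

Plotting the sampled `poincare[0]` values over a sweep of α reproduces the structure summarized in the bifurcation diagram of Figure 3.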
Figure 4 shows a gallery of bifurcation diagrams obtained using the second iterative method (see the author's books [21,22,23,24] for an explanation). For the first time, hysteresis is shown in a simple continuous neuron model with adapting feedback synapses. In Figure 4a, the neuron remains in a steady state as α increases and decreases. In Figure 4b,c, the behavior is sensitive to the step length used in the program, and different hysteretic curves are formed. In Figure 4d–f, the outcome is affected by different initial conditions, step lengths, or the maximum number of iterations.
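In outline, the second iterative method ramps the parameter in small steps, integrates over one forcing period per step, and feeds the final state back as the next initial condition; plotting one point per parameter value then exposes any hysteresis. A condensed Python sketch of this procedure (the full MATLAB version is in Appendix B; the step count here is reduced for speed):

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b, eps, omega = 5.0, 5.0, 0.2, 2*np.pi

def f(x):
    return 3 * x * np.exp(-x**2 / 2)

def neuron(t, z, alpha):
    u, s = z
    return [-u + f(a*u) * f(b*s) + eps * np.sin(omega * t),
            -alpha * s + alpha * f(a*u)**2]

def ramp(alphas, z0):
    # One forcing period per parameter value; the final state is fed back
    # as the next initial condition -- this is the feedback mechanism.
    out, z = [], np.asarray(z0, float)
    for alpha in alphas:
        sol = solve_ivp(neuron, [0, 2*np.pi/omega], z, args=(alpha,), rtol=1e-6)
        z = sol.y[:, -1]
        out.append(z.copy())
    return np.array(out)

alphas = np.linspace(0.025, 5, 200)
up = ramp(alphas, [0.1, 1.0])       # ramp alpha up (red branches in Figure 4)
down = ramp(alphas[::-1], up[-1])   # ramp alpha down (blue branches)
```

Where `up[:, 0]` and the reversed `down[:, 0]` disagree, the response depends on the direction of the sweep, i.e., the system is history dependent.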
To conclude this section, consider a simple two-neuron model:
$$\frac{du}{dt} = -u + \phi(v)\,\phi(s) + I(t), \qquad \frac{dv}{dt} = -v + \phi(u)\,\phi(s) + I(t), \qquad \frac{ds}{dt} = -\alpha s + \alpha\,\phi(u)\,\phi(v), \tag{2}$$
where u, v are the action potentials of the neurons and s is the adapting synapse strength, with ϕ(u) = f(au), ϕ(v) = f(bv), and ϕ(s) = f(cs), where f(u) = 3u e^(−u²/2). The inputs are equal, I(t) = ϵ sin(ωt).
Figure 5 shows a gallery of three-dimensional phase portraits for system (2). There are more attractors than for the single neuron, but the dynamics are very similar. The bifurcation diagrams for system (2) are similar to those for the one-neuron model and will not be shown here.
A neuron can form synaptic connections with itself in certain circumstances, a phenomenon known as an autapse, or autaptic synapse. Like other types of synapses, autaptic connections can undergo changes in strength over time, which could play a role in learning and memory processes. In the next section, we consider a discrete two-neuron model with self-synaptic connections.

3. Discrete Neuron Models with Adapting Feedback Synapse

Consider the two-neuron model depicted in Figure 6a, consisting of two neurons with activations u_n and v_n. The adaptive synaptic weights are labeled w11, w12, w21, and w22, and the biases are b1 and b2. The discrete two-neuron model is given as follows:
$$u_{n+1} = b_1 + w_{11}\,\phi_1(u_n) + w_{12}\,\phi_2(v_n), \qquad v_{n+1} = b_2 + w_{21}\,\phi_1(u_n) + w_{22}\,\phi_2(v_n), \tag{3}$$
where the activation functions are as follows:
$$\phi_1(u) = f(au), \qquad \phi_2(u) = f(bu),$$
and f(u) = 3u e^(−u²/2), as in the previous examples. The effect of changing the parameters a and b is to stretch and contract the activation functions in the horizontal direction (see Figure 1), and this is how the synaptic connections adapt.
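To make the discrete dynamics concrete, map (3) can be iterated directly. The sketch below uses the parameter values from the stability analysis of this section, with w22 = 0 and an illustrative bias b1 = 0.5 (any value in the ramped range of Figure 7 could be substituted):

```python
import numpy as np

b1, b2 = 0.5, -1.0                       # b1 chosen here for illustration
w11, w12, w21, w22 = 1.5, -2.0, 5.0, 0.0
a, b = 0.3, 0.1

def f(x):
    return 3 * x * np.exp(-x**2 / 2)

def step(u, v):
    # One iteration of the discrete two-neuron module (3).
    return (b1 + w11 * f(a*u) + w12 * f(b*v),
            b2 + w21 * f(a*u) + w22 * f(b*v))

u, v = -7.5, 5.0
for _ in range(1000):                    # iterate past the transient
    u, v = step(u, v)
```

Since f is bounded, the orbit stays bounded; depending on the weights and bias, the long-run behavior is a fixed point, a periodic cycle, quasi-periodicity, or chaos, as mapped out in Figure 6b.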
In order to simplify the analysis, assume that w22 = 0 in Equation (3). The fixed points of period one, or steady states, satisfy the equations u_{n+1} = u_n = u, say, and v_{n+1} = v_n = v, say. Thus,
$$b_1 = u - w_{11}\,f(au) - w_{12}\,f(bv), \qquad v = b_2 + w_{21}\,f(au). \tag{4}$$
Taking P = u_{n+1} and Q = v_{n+1}, the Jacobian matrix is
$$J = \begin{pmatrix} \dfrac{\partial P}{\partial u} & \dfrac{\partial P}{\partial v} \\[4pt] \dfrac{\partial Q}{\partial u} & \dfrac{\partial Q}{\partial v} \end{pmatrix} = \begin{pmatrix} w_{11}\,f'(au) & w_{12}\,f'(bv) \\ w_{21}\,f'(au) & 0 \end{pmatrix},$$
where f′(au) = 3a e^(−a²u²/2) − 3a³u² e^(−a²u²/2). The characteristic equation is given by
$$\lambda^{2} - w_{11}\,f'(au)\,\lambda - w_{12}\,w_{21}\,f'(au)\,f'(bv) = 0. \tag{5}$$
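The characteristic equation can be sanity-checked numerically: at any point (u, v), the eigenvalues of J returned by numpy must be roots of the quadratic above. A sketch using this section's parameter values (w12 taken as −2, as in Appendix D) and an arbitrary test point:

```python
import numpy as np

w11, w12, w21 = 1.5, -2.0, 5.0
a, b = 0.3, 0.1

def df(x, k):
    # d/dx f(kx) = 3k exp(-k^2 x^2 / 2) - 3k^3 x^2 exp(-k^2 x^2 / 2).
    return 3*k*np.exp(-k**2*x**2/2) - 3*k**3*x**2*np.exp(-k**2*x**2/2)

u, v = 0.4, -1.2                     # arbitrary test point
J = np.array([[w11 * df(u, a), w12 * df(v, b)],
              [w21 * df(u, a), 0.0]])
lam = np.linalg.eigvals(J)
# Every eigenvalue must satisfy the characteristic quadratic above.
residual = lam**2 - w11*df(u, a)*lam - w12*w21*df(u, a)*df(v, b)
```

The residual is zero to machine precision, confirming the quadratic term-by-term against the Jacobian.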
The equations used to plot Figure 6b are derived below. Using Equations (4) and (5), when λ = +1, the bistable bounding curve B+1 is given by the parametric equations:
$$w_{12} = \frac{1 - w_{11}\,f'(au)}{w_{21}\,f'(au)\,f'(bv)}, \qquad b_1 = u - w_{11}\,f(au) - w_{12}\,f(bv).$$
When λ = −1, the quasi-periodic bounding curve B−1 is given by the parametric equations:
$$w_{12} = \frac{1 + w_{11}\,f'(au)}{w_{21}\,f'(au)\,f'(bv)}, \qquad b_1 = u - w_{11}\,f(au) - w_{12}\,f(bv).$$
Finally, when det(J) = 1 and |trace(J)| < 2, the eigenvalues are complex and lie on the unit circle, and the bounding curve B|J|=1 is given by the parametric equations:
$$w_{12} = \frac{-1}{w_{21}\,f'(au)\,f'(bv)}, \qquad b_1 = u - w_{11}\,f(au) - w_{12}\,f(bv).$$
Figure 7 shows how the stability diagram in Figure 6b relates to the bistable, unstable, and quasi-periodic regions of this dynamical system. In Figure 7a, there is a large bistable cycle isolated from any instabilities. In Figure 7b, there is chaos with periodic windows, and in Figure 7c, quasi-periodic behavior is present in the upper and lower branches of the bistable cycle.
Figure 8 shows how the bifurcation diagram changes as the parameter a increases from zero up to a = 0.9. For the first time, it is shown how the hysteresis is affected as synaptic strengths are varied. The parameter a alters the activation function, stretching and contracting it horizontally, and this adapting feedback synapse greatly affects the behavior of the neuronal system. When a = 0.05, there is no hysteresis; one path is followed as the bias is ramped up and down. When a = 0.1, there is clearly an isolated counterclockwise bistable cycle, which grows as a passes through a = 0.2 and a = 0.3. When a = 0.4, a quasi-periodic bubble appears in the top branch of the bistable loop, and when a = 0.5 and a = 0.6, there is quasi-periodic behavior in the upper and lower branches. When a = 0.8, the bistable loop has vanished and period-3 behavior predominates. When a = 0.9, there is periodic and quasi-periodic behavior on the two branches again.
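The bifurcation diagrams above come from exactly this kind of ramped sweep with feedback. A condensed Python version of the Appendix D loop is sketched below (with w12 = −2, as in Appendix D; plot `up` against `b1s` and `down` against `b1s[::-1]` to see whether the branches separate):

```python
import numpy as np

b2, w11, w21, a, b = -1.0, 1.5, 5.0, 0.3, 0.1

def f(x):
    return 3 * x * np.exp(-x**2 / 2)

def sweep(b1_values, w12, u, v):
    # Ramp the bias with feedback: the state (u, v) carries over between
    # steps, and one point is recorded per parameter value.
    out = []
    for b1 in b1_values:
        u, v = (b1 + w11 * f(a*u) + w12 * f(b*v),
                b2 + w21 * f(a*u))
        out.append(u)
    return np.array(out), u, v

b1s = np.linspace(-4, 4, 2000)
up, u, v = sweep(b1s, -2.0, -7.5, 5.0)      # ramp b1 up (red)
down, _, _ = sweep(b1s[::-1], -2.0, u, v)   # ramp b1 down (blue)
```

Wherever the up and down branches disagree, the response depends on the sweep direction, which is the hysteresis signature seen in Figure 8.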

4. Conclusions

We show that simple one- and two-neuron models with adaptive feedback synapses are history dependent. Hysteresis is important in neuro-plasticity because it reflects the brain’s ability to “remember” past states or experiences, which influences how it responds to new stimuli. In simple terms, hysteresis means that the history of activity in a neural circuit affects its future behavior, not just its current inputs. Hysteresis is vitally important for stability in memory formation, and threshold effects might mean that the network requires a stronger stimulus to change states.
For both models, the second iterative method is adopted, where a parameter is varied and the solution of the previous iterate is used as the initial condition for the next iterate. In this way, a feedback mechanism is introduced, a history is associated with the process, and only one point is plotted for each value of the parameter; see Appendix B for a program for a continuous model and Appendix D for a program for a discrete model.
Figure 4 and Figure 8 show that the systems are history dependent; in both cases, the parameters are ramped up in red and down in blue. Figure 6, Figure 7 and Figure 8 demonstrate how the analysis supports the main findings: bistable and unstable regions can be predicted from a stability analysis.
This study shows that learning depends on the neural state history, that is, on what the brain has experienced before. Hysteresis adds robustness to neural processing, filtering out minor, irrelevant inputs that could otherwise disrupt memory or learned behavior, making the network resistant to noise.
Future work will look at hysteresis in BCM synaptic modification, homeostatic plasticity, metaplasticity, neuromodulation, developmental processes, disease and injury in the brain, STDP, and structural plasticity.
Finally, some of the MATLAB and Python programs are listed in the Appendices so readers can experiment with the code.

Author Contributions

Conceptualization, S.T.L. and S.L.; methodology, S.T.L. and S.L.; software, S.T.L. and S.L.; validation, S.T.L. and S.L.; formal analysis, S.T.L. and S.L.; investigation, S.T.L. and S.L.; writing—original draft preparation, S.T.L. and S.L.; writing—review and editing, S.T.L. and S.L.; supervision, S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

The authors thank the anonymous referees for their valuable comments and suggestions that led to the improvement of this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

% MATLAB Program.
% See Figure 2. Animated Single Neuron with Adaptive Synapse.
% Start with u=-0.1 and s=1.
% Plot a single trajectory as the parameter alpha increases from 0.01 to 5.
clear
xmin=-0.2;xmax=0.2;
ymin=0.2;ymax=2.6;
q=5;p=5;epsilon=0.2;w=2*pi;
t_int=500;
for j=1:500
alpha=0.01*j;
sys = @(t,x) [-x(1)+f1(q*x(2))*f1(p*x(1))+epsilon*sin(w*t);
    -alpha*x(2)+alpha*f1(p*x(1))*f1(p*x(1))];
options = odeset("RelTol",1e-4,"AbsTol",1e-4);
[t,xa]=ode45(sys,[0 100],[-0.1 1],options);
plot(xa(end-t_int:end,1),xa(end-t_int:end,2),"b")
axis([xmin xmax ymin ymax])
fsize=20;
set(gca,"FontSize",fsize);
xlabel("u(t)");
ylabel("s(t)");
F(j)=getframe;
end
movie(F,2)
% Activation function.
function y = f1(x)
y = 3*x*exp(-x^2/2);
end
% End Program.

Appendix B

% MATLAB Program.
% See Figure 4c.
% Bifurcation Diagram using the Second Iterative Method with Feedback.
clear
figure
global alpha;
xmin=0;xmax=5;
ymin=-0.2;ymax=0.2;
% Parameters.
Max=10000;step=0.0005;interval=Max*step;
u=0.1;s=1; % Initial conditions.
% Ramp alpha up. Integrate over one forcing period (T=2*pi/w=1)
% and feed the final state back as the next initial condition.
for n=1:Max
alpha=step*n;
[t,x]=ode45(@Sys1,[0 1],[u,s]);
u=x(end,1);
s=x(end,2);
rup(n)=u;
end
% Ramp alpha down.
for n=1:Max
alpha=interval-step*n;
[t,x]=ode45(@Sys1,[0 1],[u,s]);
u=x(end,1);
s=x(end,2);
rdown(n)=u;
end
% Plot the bifurcation diagram.
hold on;
rr=step:step:interval;
plot(rr,rup,'r.','MarkerSize',1); % Ramp up, red dots.
plot(interval-rr,rdown,'b.','MarkerSize',1); % Ramp down, blue dots.
hold off;
fsize=20;
set(gca,'FontSize',fsize);
axis([xmin xmax ymin ymax]);
xlabel('\alpha');
ylabel('u(t)');
% Activation function.
function y = f1(x)
y = 3*x*exp(-x^2/2);
end
% Single Neuron System.
function xdot=Sys1(t,x)
global alpha;
q=5;p=5;epsilon=0.2;w=2*pi;
xdot=[-x(1)+f1(q*x(2))*f1(p*x(1))+epsilon*sin(w*t);
      -alpha*x(2)+alpha*f1(p*x(1))*f1(p*x(1))];
end
% End of Program.

Appendix C

# Python Program.
# Stability diagram. See Figure 6b.
import numpy as np
import matplotlib.pyplot as plt

b2, w11, w21, a, b = -1, 1.5, 5, 0.3, 0.1

def f(x):
    # Activation kernel f(x) = 3x exp(-x^2 / 2).
    return 3 * x * np.exp(-x**2 / 2)

def df(x, k):
    # Derivative d/dx f(kx) = 3k exp(-k^2 x^2 / 2) - 3k^3 x^2 exp(-k^2 x^2 / 2).
    return 3*k*np.exp(-k**2*x**2/2) - 3*k**3*x**2*np.exp(-k**2*x**2/2)

x = np.linspace(-2.5, 3.2, 1000)
y = b2 + w21 * f(a * x)
plt.axis([-5, 5, -5, 5])
plt.rcParams["font.size"] = 30

# Bistable bounding curve B_{+1} (lambda = +1).
w12 = (1 - w11 * df(x, a)) / (w21 * df(x, a) * df(y, b))
b1 = x - w11 * f(a * x) - w12 * f(b * y)
plt.plot(b1, w12)

# Bounding curve B_{-1} (lambda = -1).
w12 = (1 + w11 * df(x, a)) / (w21 * df(x, a) * df(y, b))
b1 = x - w11 * f(a * x) - w12 * f(b * y)
plt.plot(b1, w12)

# Bounding curve B_{|J|=1} (det(J) = 1).
w12 = -1 / (w21 * df(x, a) * df(y, b))
b1 = x - w11 * f(a * x) - w12 * f(b * y)
plt.plot(b1, w12)

plt.xlabel("$b_1$")
plt.ylabel("$w_{12}$")
plt.savefig("stability.png", dpi=400)
plt.show()

Appendix D

# Python Program.
# See Figure 7b. Unstable case.
# Bifurcation Diagrams for a Two-Neuron Module with Adaptive Synapses.
from matplotlib import pyplot as plt
import numpy as np

w12 = -2
b2, w11, w21, a, b = -1, 1.5, 5, 0.3, 0.1
start, span = -4, 8          # b1 is ramped over [start, start + span]
half_N = 9999
N = 2 * half_N + 1
N1 = 1 + half_N
xs_up, xs_down = [], []
x0, y0 = -7.5, 5
ns_up = np.arange(half_N)
ns_down = np.arange(N1, N)

def f(x):
    return 3 * x * np.exp(-x**2 / 2)

# Ramp b1 up; the state (x0, y0) carries over between steps (feedback).
for n in ns_up:
    b1 = start + n * span / half_N
    x = b1 + w11 * f(a * x0) + w12 * f(b * y0)
    y = b2 + w21 * f(a * x0)
    x0, y0 = x, y
    xs_up.append([n, x])
xs_up = np.array(xs_up)

# Ramp b1 down.
for n in ns_down:
    b1 = start + 2 * span - n * span / half_N
    x = b1 + w11 * f(a * x0) + w12 * f(b * y0)
    y = b2 + w21 * f(a * x0)
    x0, y0 = x, y
    xs_down.append([N - n, x])
xs_down = np.array(xs_down)

fig, ax = plt.subplots()
xtick_labels = np.linspace(start, start + span, 7)
ax.set_xticks([(x - start) / span * N1 for x in xtick_labels])
ax.set_xticklabels(["{:.1f}".format(xtick) for xtick in xtick_labels])
plt.rcParams["font.size"] = 30
plt.plot(xs_up[:, 0], xs_up[:, 1], "r.", markersize=1)
plt.plot(xs_down[:, 0], xs_down[:, 1], "b.", markersize=1)
plt.xlabel(r"$b_1$")
plt.ylabel(r"$u_n$")
plt.show()

References

  1. Dong, D.W.; Hopfield, J.J. Dynamic properties of neural networks with adapting synapses. Netw. Comput. Neural Syst. 1992, 3, 267–283. [Google Scholar] [CrossRef]
  2. Abbott, L.F.; Nelson, S.B. Synaptic plasticity: Taming the beast. Nat. Neurosci. 2000, 3, 1178–1183. [Google Scholar] [CrossRef] [PubMed]
  3. Tresca, H.E. Mémoire sur l’écoulement des corps solides. Mémoire Présentés par Divers Savants Acad. Sci. Paris 1872, 20, 75–135. [Google Scholar]
  4. Ewing, J.A. Experimental researches in magnetism. Philos. Trans. R. Soc. Lond. 1885, 176, 523–640. [Google Scholar]
  5. Lynch, S. Python for Scientific Computing and Artificial Intelligence; CRC Press: Boca Raton, FL, USA, 2023. [Google Scholar]
  6. Noori, H.R. Hysteresis Phenomena in Biology; Springer: New York, NY, USA, 2014. [Google Scholar]
  7. Albesa-González, A.; Froc, M.; Williamson, O.; van Rossum, M.C.W. Weight dependence in BCM leads to adjustable synaptic competition. J. Comput. Neurosci. 2022, 50, 431–444. [Google Scholar] [CrossRef] [PubMed]
  8. Hu, S.M.; Liu, J.X.; Yao, L.Y.; Song, H.J.; Zhong, X.L.; Wang, J.B. Enlarging the frequency threshold range of Bienenstock-Cooper-Munro rules in WOx-based memristive synapses by Al doping. J. Mater. Chem. C 2025, 13, 3311–3319. [Google Scholar] [CrossRef]
  9. Wen, W.; Turrigiano, G.G. Developmental Regulation of Homeostatic Plasticity in Mouse Primary Visual Cortex. J. Neurosci. 2021, 41, 9891–9905. [Google Scholar] [CrossRef] [PubMed]
  10. Yee, A.X.; Hsu, Y.T.; Chen, L. A metaplasticity view of the interaction between homeostatic and Hebbian plasticity. Philos. Trans. R. Soc. B Biol. Sci. 2017, 372, 1715. [Google Scholar] [CrossRef] [PubMed]
  11. Schmidt, A.; Meindl, T.; Albu-Schäffer, A.; Franklin, D.W.; Stratmann, P. Influence of serotonin on the long-term muscle contraction of the Kohnstamm phenomenon. Sci. Rep. 2025, 15, 16588. [Google Scholar] [CrossRef] [PubMed]
  12. Ben-Ari, Y.; Danchin, É.É. Limitations of genomics to predict and treat autism: A disorder born in the womb. J. Med. Genet. 2025, 62, 303–310. [Google Scholar] [CrossRef] [PubMed]
  13. Briones, T.L.; Suh, E.; Jozsa, L.; Rogozinska, M.; Woods, J.; Wadowska, M. Changes in number of synapses and mitochondria in pre-synaptic terminals in the dentate gyrus following cerebral ischemia and rehabilitation training. Brain Res. 2005, 1033, 51–57. [Google Scholar] [CrossRef] [PubMed]
  14. Briones, T.L.; Suh, E.; Jozsa, L.; Hattar, H.; Chai, J.; Wadowska, M. Behaviorally-induced ultrastructural plasticity in the hippocampal region after cerebral ischemia. Brain Res. 2004, 997, 137–146. [Google Scholar] [CrossRef] [PubMed]
  15. Liu, Y.Z.; Zhao, J.Y.; Xiao, Y.; Chen, P.; Chen, H.S.; He, E.H.; Lin, P.; Pan, G. Variation-resilient spike-timing-dependent plasticity in memristors using bursting neuron circuit. Neuromorphic Comput. Eng. 2025, 5, 024013. [Google Scholar] [CrossRef]
  16. Lee, S.H. Plasticity and Memory Retention in ZnO-CNT Nanocomposite Optoelectronic Synaptic Devices. Materials 2025, 18, 2293. [Google Scholar] [CrossRef] [PubMed]
  17. Provata, A.; Almirantis, Y.; Li, W.T. Multistable Synaptic Plasticity Induces Memory Effects and Cohabitation of Chimera and Bump States in Leaky Integrate-and-Fire Networks. Entropy 2025, 27, 257. [Google Scholar] [CrossRef] [PubMed]
  18. Bao, B.; Zhu, Y.X.; Ma, J.; Bao, H.; Wu, H.G.; Chen, M. Memristive neuron model with an adapting synapse and its hardware implementations. Sci. China Technol. Sci. 2021, 64, 1107–1117. [Google Scholar] [CrossRef]
  19. Seralan, V.; Chandrasekhar, D.; Pakiriswamy, S.; Rajagopal, K. Collective behavior of an adapting synapse-based neuronal network with memristive effect and randomness. Cogn. Neurodyn. 2024, 18, 4071–4087. [Google Scholar] [CrossRef] [PubMed]
  20. Ma, T.; Al-Barakati, A.A.; Jahanshahi, H.; Miao, M. Hidden dynamics of memristor-coupled neurons with multi-stability and multi-transient hyperchaotic behavior. Phys. Scr. 2023, 98, 105202. [Google Scholar] [CrossRef]
  21. Lynch, S. Dynamical Systems with Applications Using Maple, 2nd ed.; Springer International Publishing: New York, NY, USA, 2010. [Google Scholar]
  22. Lynch, S. Dynamical Systems with Applications Using Mathematica, 2nd ed.; Springer International Publishing: New York, NY, USA, 2017. [Google Scholar]
  23. Lynch, S. Dynamical Systems with Applications Using MATLAB, 3rd ed.; Springer International Publishing: New York, NY, USA, 2025. [Google Scholar]
  24. Lynch, S. Dynamical Systems with Applications Using Python; Springer International Publishing: New York, NY, USA, 2018. [Google Scholar]
  25. Li, C.; Chen, G. Coexisting chaotic attractors in a single neuron model with adapting feedback synapse. Chaos Solitons Fractals 2005, 23, 1599–1604. [Google Scholar] [CrossRef]
Figure 1. The function ϕ(u) = f(au) when: (a) a = 0.5; (b) a = 1; (c) a = 2.
Figure 2. Phase portraits for system (1) when a = 5, b = 5, ϵ = 0.2, and ω = 2π. Initial points were taken across the plane, and transient trajectories were removed. Plots when: (a) α = 0.5, there are four period-1 cycles; (b) α = 1, two period-2 cycles and two period-1 cycles; (c) α = 2, a connected chaotic attractor; (d) α = 3, a connected chaotic attractor; (e) α = 3.7, two period-2 cycles and two period-1 cycles; (f) α = 5, four period-1 cycles. Compare with Figure 3.
Figure 3. A bifurcation diagram for system (1) using the first iterative method, where a = 5, b = 5, ϵ = 0.2, and ω = 2π. A Poincaré map is taken once per forcing period, as the system is periodically driven. There are four attractors; the period-1 and period-2 cycles are clear, and the chaotic attractors merge for a range of α values. The figure shows that the system is multi-stable. Compare with Figure 2.
Figure 4. Bifurcation diagrams for system (1) using the second iterative method, where a = 5, b = 5, ϵ = 0.2, and ω = 2π. For each value of α, one point is plotted. In all cases, red dots are ramped up and blue dots are ramped down. (a) u0 = 0.5, s0 = 0.1, step = 0.0005, period-1 ramping up and down. (b) u0 = 0.1, s0 = 1, step = 0.0005. (c) u0 = 0.1, s0 = 1, step = 0.001. (d) u0 = 0.1, s0 = 0.09, step = 0.0005. (e) u0 = 0.1, s0 = 0.1, step = 0.001, Max = 5110. (f) u0 = 0.1, s0 = 1, step = 0.01, Max = 5060. Compare with Figure 3.
Figure 5. Three-dimensional phase portraits for system (2) when a = 5, b = 5, c = 5, ϵ = 0.2, and ω = 2π. Initial points were taken across three-dimensional space, and transient trajectories were removed. Plots when: (a) α = 0.5, there are four period-1 cycles and two period-2 cycles; (b) α = 1, four period-2 cycles and two period-1 cycles; (c) α = 2, a connected chaotic attractor and four period-1 cycles; (d) α = 3, six periodic cycles; (e) α = 4, two chaotic cycles and four period-1 cycles; (f) α = 5, six period-1 cycles.
Figure 6. (a) Two-neuron module with adaptive feedback synapses. (b) Stability diagram in the ( b 1 , w 12 ) plane when b 2 = 1 , w 11 = 1.5 , w 21 = 5 , a = 0.3 and b = 0.1 . The curves denote the boundaries where the system is bistable ( B + 1 ), unstable ( B | J | = 1 ), and quasi-periodic ( B − 1 ). Compare with Figure 7.
Figure 7. For all figures set b 2 = 1 , w 11 = 1.5 , w 21 = 5 , a = 0.3 , b = 0.1 , and vary the parameter b 1 . (a) Set w 12 = 2 , and ramp the bias b 1 up and down, − 5 ≤ b 1 ≤ 5 . There is a clear counterclockwise hysteresis (bistable) cycle as b 1 is ramped up (red dots) and ramped down (blue dots). (b) Set w 12 = − 2 , and ramp the bias b 1 up and down, − 5 ≤ b 1 ≤ 5 . There is a clear instability (period-doubling and un-doubling into/out of chaos) as b 1 is ramped up and down. (c) Set w 12 = − 3 ; there is quasi-periodic behavior as b 1 is ramped up and down. Compare with Figure 6.
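The bias-ramping behind Figure 7 can be sketched with a minimal discrete model. The exact form of the two-neuron module (system (2) in the paper) is not reproduced in these captions, so the map below, and the names step and ramp_b1, are assumptions: a Lynch-style neuromodule with tanh activations, used only to illustrate the red/blue up-and-down sweeps of the bias b 1 .

```python
import numpy as np

# Parameter values from the Figure 7 caption; the map itself is an
# assumed stand-in for system (2), for illustration only.
A, B = 0.3, 0.1
B2, W11, W21 = 1.0, 1.5, 5.0

def step(x, y, b1, w12):
    """One iterate of the assumed two-neuron module."""
    xn = b1 + W11 * np.tanh(A * x) + w12 * np.tanh(B * y)
    yn = B2 + W21 * np.tanh(A * x)
    return xn, yn

def ramp_b1(b1_values, w12, settle=500):
    """One plotted point per bias value; the final state is carried
    forward to the next bias, so the sweep direction matters."""
    x = y = 0.1
    out = []
    for b1 in b1_values:
        for _ in range(settle):      # let transients decay at this bias
            x, y = step(x, y, b1, w12)
        out.append(x)                # record the first neuron's state
    return np.array(out)

b1s = np.arange(-5.0, 5.0, 0.01)
up = ramp_b1(b1s, w12=2.0)           # red dots: b1 ramped up
down = ramp_b1(b1s[::-1], w12=2.0)   # blue dots: b1 ramped down
# Bistable (hysteresis) windows are where up and down[::-1] differ.
```

The same driver, with w 12 changed per panel, reproduces the qualitative protocol of Figure 7: a bistable loop, period-doubling into chaos, or quasi-periodicity, depending on where ( b 1 , w 12 ) sits in the stability diagram of Figure 6b.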
Figure 8. For all figures set b 2 = 1 , w 11 = 1.5 , w 12 = 2 , w 21 = 5 , b = 0.1 , and vary the parameter a. (a) When a = 0.05 , there is no hysteresis. (b) A small bistable cycle when a = 0.1 . (c) The bistable cycle is growing, a = 0.2 . (d) The bistable cycle is large, a = 0.3 . (e) Quasi-periodic behavior appears in the top branch, a = 0.4 . (f,g) There is quasi-periodic behavior in the upper and lower branches, a = 0.6 . (h) Periodic and quasi-periodic behavior on one branch, a = 0.8 . (i) Periodic and quasi-periodic behavior on two branches, a = 0.9 .
Lynch, S.T.; Lynch, S. Hysteresis in Neuron Models with Adapting Feedback Synapses. AppliedMath 2025, 5, 70. https://doi.org/10.3390/appliedmath5020070