Article

Generalized Synchronization of Hindmarsh–Rose Neurons with Memristive Couplings

by Illiani Carro-Pérez and Juan Gonzalo Barajas-Ramírez *
División de Control y Sistemas Dinámicos, Instituto Potosino de Investigación Científica y Tecnológica A.C. (IPICYT), Camino a la Presa San José 2255, Lomas 4ta. Sección, San Luis Potosí 78216, SLP, Mexico
* Author to whom correspondence should be addressed.
Dynamics 2025, 5(4), 50; https://doi.org/10.3390/dynamics5040050
Submission received: 1 October 2025 / Revised: 31 October 2025 / Accepted: 1 November 2025 / Published: 1 December 2025

Abstract

In this study, we explore the emergence of generalized synchronization (GS) in arrays of Hindmarsh–Rose (HR) neurons that are coupled through memristive synapses. We design coupling functions utilizing active memristors to facilitate GS in a bidirectionally coupled two-neuron memristive neural network (MNN). Our analysis employs a nearest neighbor (NN) approach. Our findings indicate that there is a threshold coupling strength for the active memristive synapses required to achieve GS. Additionally, we investigate how memristor parameters affect the temporal characteristics of synchronized neuronal firing patterns. Specifically, we discover that the interburst interval (IBI) is directly proportional to the coupling strength of the memristive synapses, while the interspike interval (ISI) is inversely proportional to this strength.

1. Introduction

A straightforward way to model brains is to represent them as networks of neurons that communicate via synapses. The collective behavior of these networks is determined by the electrochemical transmission of signals between neurons at synapses [1]. A key dynamical feature of neuron models is the presence of an action potential, which is a spike-shaped output voltage signal. The Hindmarsh–Rose (HR) model is a simplified neuron model that effectively captures the emergence of action potentials [2]. Additionally, the HR model can exhibit various types of oscillations, including chaotic behavior, when specific parameter values are chosen [3].
Communication between neurons involves both electrical and chemical components. The electrical aspect describes the ion currents triggered by voltage differences across neuronal membranes. In this context, signals can move through gap junction channels in either direction in accordance with Ohm’s law. The chemical part of synaptic transmission involves the presynaptic neuron releasing neurotransmitters into the synaptic cleft. When these chemicals bind to receptors on the postsynaptic neuron, they generate a signal that modifies the electrical properties of the postsynaptic neuron. As a result, the signal transmission is directed and subject to time-varying parameters, saturating at a constant value as the synaptic cleft becomes filled with neurotransmitters [4].
In recent years, several circuit implementations of neurons and synapses based on memristors have been proposed to emulate neural system behavior, such as channel opening and closing driven by ionic density [5,6,7,8]. A generalization of memristors was introduced in 1976 [9], highlighting a property in which devices can exhibit negative memristance over specific intervals of their internal variable; this phenomenon classifies the device as locally active. These locally active memristors have been utilized to model synaptic behavior [10], capturing nonlinear responses and adaptive plasticity through their internal-state evolution. Several studies [6,8,11,12] have implemented locally active memristors with memductance described by a hyperbolic tangent function, ensuring bounded memristance characteristics. However, the exploration of fully active memristors as synaptic elements remains an open research avenue, particularly regarding their influence on the firing patterns and the synchronization properties of interconnected neural circuits.
In this contribution, we focus on a simplified memristive neural network (MNN) consisting of identical HR neurons that interact through active memristive synapses. Our model comprises two HR neurons bidirectionally coupled via locally active memristors. Although the array is small, it has physical significance, since it illustrates how the dependence of each connection on the memristor's internal state affects the emergence of coherent firing patterns in larger arrays. In this regard, previous work has investigated the emergence of identical synchronization between two neurons connected by a locally active memristor [13]. An analytical proof of exponential synchronization in a two-neuron MNN coupled via a locally active memristor was established under suitable conditions on the memristor coupling coefficient and the initial state [11]. Another study explored a network of three HR neurons connected by memristive synapses in a ring topology, revealing that identical synchronization occurs when the coupling strength exceeds a certain threshold [8]. Furthermore, ref. [12] has shown that the transition from synchronization to desynchronization depends on the MNN's connection structure. Notably, in the exponential synchronization approach, analytical criteria provided by the Lyapunov method establish sufficient stability conditions concerning the memristor's initial conditions and coupling strength. These criteria differ from those used in other synchronization regimes, such as generalized synchronization (GS) in mutually coupled systems, where the existence of a synchronization manifold is typically confirmed numerically, e.g., via the nearest-neighbor approach [14].
In this work, we investigate the emergence of coherence in firing patterns of MNNs consisting of identical HR neurons, focusing on the emergence of GS and the dependence of firing patterns, such as the interspike interval (ISI) and interburst interval (IBI), on the strength of the memristive connections. We propose a fully active memristive synapse with a consistently negative memductance that is bounded above and below [6,8,11,12]. Taking inspiration from previous research [13,15,16,17,18,19], we propose a quadratic dependence of the memductance on the memristor's internal state. Additionally, we demonstrate the presence of a pinched hysteresis loop (PHL) in the second and fourth quadrants, as in [12], which persists across a range of driving frequencies, with its lobe area decreasing as the frequency increases.
In the following section, we will present our MNN model of HR neurons with memristive connections and describe its synchronization challenges.

2. Preliminaries

Consider an MNN where the following equations model each node:
\dot{x}_1(t) = -a x_1^3(t) + b x_1^2(t) + x_2(t) - x_3(t) + I_{syn}(t)
\dot{x}_2(t) = c - d x_1^2(t) - x_2(t)
\dot{x}_3(t) = \varepsilon \left[ \sigma \left( x_1(t) + x_0 \right) - x_3(t) \right]
(1)
Here, the variable x_1(t) represents the membrane voltage, x_2(t) the recovery variable, and x_3(t) the slow adaptation current of the neural model. This model effectively captures the dynamics of ionic currents through membrane channels, specifically potassium (K⁺) and sodium (Na⁺) for the fast subsystem, while the ionic fluxes related to chlorine (Cl⁻) and other leaking ions pertain to the slow variable x_3(t). The input I_{syn}(t) accounts for inter-neuronal connections. Notably, for an isolated neuron the HR model expressed in (1) exhibits chaotic behavior when the parameters are set as a = 1, b = 3, c = 1, d = 5, σ = 4, x_0 = 1.6, ε = 0.0021, and the input is the constant external current I_{ext}(t) = 3.29 [3].
The parameter 0 < ε ≪ 1 describes the separation of the fast and slow time scales in the neural model, allowing for periodic currents that trigger the action potential and display different patterns of bursting and spiking behavior [3].
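To make the fast–slow structure concrete, the following minimal sketch (not the authors' code) integrates the single-neuron model (1) with the chaotic parameter set quoted above, assuming Python with NumPy/SciPy and using the constant external current I_ext in place of the synaptic input; the time horizon and initial condition are arbitrary illustrative choices.

# Minimal sketch: single HR neuron (1) with the chaotic parameter set above.
# Assumption: the isolated neuron is driven by the constant current I_ext = 3.29.
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, d = 1.0, 3.0, 1.0, 5.0
sigma, x0, eps, I_ext = 4.0, 1.6, 0.0021, 3.29

def hr(t, x):
    x1, x2, x3 = x
    dx1 = -a * x1**3 + b * x1**2 + x2 - x3 + I_ext   # membrane voltage
    dx2 = c - d * x1**2 - x2                          # recovery variable
    dx3 = eps * (sigma * (x1 + x0) - x3)              # slow adaptation current
    return [dx1, dx2, dx3]

sol = solve_ivp(hr, (0.0, 2000.0), [0.1, 0.0, 0.0], max_step=0.05)
print(sol.y[0, -5:])  # last few samples of the voltage variable x1(t)

Plotting sol.y[0] against sol.t reproduces the bursting/spiking time series qualitatively.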
The interconnection between neurons is modeled as a current given by memristors of the following form:
I_{syn}(t) = g(z(t), \nu(t)) \, \nu(t)
\dot{z}(t) = f(z(t), \nu(t))
(2)
where z(t) ∈ R^m is the internal state of the memristor, ν(t) ∈ R is the voltage input, and I_{syn}(t) ∈ R is the current output of the memristive device, with f: R^m × R → R^m and g: R^m × R → R continuous functions describing, respectively, the internal dynamics and the memductance function, which is zero at zero and corresponds to the derivative of the flux–charge characteristic with respect to the input variable.
We simplify the internal dynamics of the memristor:
\dot{z}(t) = \nu(t)
(3)
In this simplified formulation, the vector field f ( · ) lacks any leakage term, meaning that no dissipative mechanisms counterbalance the growth of z ( t ) . This implies the absence of processes that would gradually diminish the internal state (memory trace). By neglecting such dissipation, we isolate the intrinsic effect of memristive coupling on neuronal excitability and synchronization.
In this contribution, we consider that all connections are modeled by memristors with the following memductance function:
g(z(t), \nu(t)) = \frac{\beta}{\alpha z^2(t) + 1} - (\beta + \gamma)
(4)
where α, β, γ > 0, with β representing the coupling strength coefficient. Since 0 < β / (α z^2(t) + 1) ≤ β for every z(t), the memductance is bounded as follows:
-(\beta + \gamma) \le g(z(t), \nu(t)) \le -\gamma, \quad \forall \, z(t) \in \mathbb{R}
(5)
As shown in Figure 1, under a periodic input the current–voltage diagram is a frequency-dependent pinched hysteresis loop. At the same time, the memductance function depends quadratically on the internal state and is consistently negative; therefore, the model (2)–(4) is an active memristor [8].
It is worth noting that, since the mathematical generalization of the memristor concept in 1976 [9], a more flexible interpretation allows for memristive devices that are not strictly passive; i.e., for some intervals of the memristor's internal variable its memristance is negative, and the device is therefore said to be locally active. This feature is directly related to ionic current channels in physiological models of the electrical activity of the neuronal membrane. Locally active memristors have recently been proposed to model synaptic behavior with different memductance functions, including the hyperbolic tangent [6,8,10,11,12]; the motivation for modeling the memductance this way is to keep its value bounded. Taking inspiration from these works, a memductance function that saturates as a function of the memristor's internal variable can be proposed in the form (4), with the bounds shown in (5). In addition to this saturation, the proposed quadratic dependence of the memductance on the internal state follows the reasoning of [13,15,16,17,18,19], which implies a cubic relation in the charge–flux interpretation of the memristor. This results in the preservation of the pinched hysteresis loop (PHL) in the second and fourth quadrants, as observed in Figure 1.
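To illustrate the pinched hysteresis loop of Figure 1 numerically, the following sketch drives the memristor (2)–(4) with ν(t) = sin(ωt); the values of α, β, γ, and ω used here are arbitrary illustrative assumptions (only α, β, γ > 0 is required by the text).

# Sketch of the PHL of the memristor (2)-(4) under a sinusoidal voltage input.
# alpha, beta, gamma_ and omega are assumed values chosen only for illustration.
import numpy as np

alpha, beta, gamma_, omega = 1.0, 1.0, 0.1, 0.5

def g(z):
    # memductance (4): always negative, confined to [-(beta+gamma_), -gamma_]
    return beta / (alpha * z**2 + 1.0) - (beta + gamma_)

t = np.linspace(0.0, 4.0 * np.pi / omega, 20000)
nu = np.sin(omega * t)
z = (1.0 - np.cos(omega * t)) / omega   # closed-form solution of z' = nu with z(0) = 0
I = g(z) * nu                            # memristor current

print(g(z).min(), g(z).max())            # stays inside the bounds of (5)

Since g is negative for every z, the current and voltage always have opposite signs, so the loop traced by (ν, I) lies in the second and fourth quadrants and is pinched at the origin, in agreement with Figure 1.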
Using the memristive synapses described above to connect two HR neurons, we have the following MNN:
\dot{x}_{11}(t) = x_{12}(t) - x_{13}(t) - a x_{11}^3(t) + b x_{11}^2(t) + I_{ext}(t) + M_{21}(t)
\dot{x}_{12}(t) = c - x_{12}(t) - d x_{11}^2(t)
\dot{x}_{13}(t) = \varepsilon \left[ \sigma \left( x_{11}(t) + x_0 \right) - x_{13}(t) \right]
\dot{x}_{21}(t) = x_{22}(t) - x_{23}(t) - a x_{21}^3(t) + b x_{21}^2(t) + I_{ext}(t) + M_{12}(t)
\dot{x}_{22}(t) = c - x_{22}(t) - d x_{21}^2(t)
\dot{x}_{23}(t) = \varepsilon \left[ \sigma \left( x_{21}(t) + x_0 \right) - x_{23}(t) \right]
(6)
M_{21}(t) = k \, g(z_{21}(t), \nu_{21}(t)) \, \nu_{21}(t), \quad \dot{z}_{21}(t) = \nu_{21}(t)
M_{12}(t) = k \, g(z_{12}(t), \nu_{12}(t)) \, \nu_{12}(t), \quad \dot{z}_{12}(t) = \nu_{12}(t)
(7)
where x_i(t) = [x_{i1}(t), x_{i2}(t), x_{i3}(t)] ∈ R^3 represents the state of the i-th HR neuron; M_{21}(t) represents the memristive synapse acting on neuron one from neuron two, and M_{12}(t) refers to the connection in the opposite direction. The input to each memristive synapse is the voltage difference between the neurons, ν_{21}(t) = x_{11}(t) − x_{21}(t) and ν_{12}(t) = x_{21}(t) − x_{11}(t), with k > 0 the network's coupling strength; the memductance function (4) is identical for all connections.
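A compact way to explore (6) and (7) numerically is sketched below (not the authors' code); β = 1.5 and k = 1 follow the figure captions and text, whereas α, γ, the initial condition, and the time horizon are illustrative assumptions.

# Sketch: two HR neurons (6) coupled through the memristive synapses (7).
# State vector: (x11, x12, x13, x21, x22, x23, z21, z12).
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, d = 1.0, 3.0, 1.0, 5.0
sigma, x0, eps, I_ext = 4.0, 1.6, 0.0021, 3.29
alpha, gamma_ = 1.0, 0.1          # assumed values
beta, k = 1.5, 1.0                # from the figure captions and the text

def g(z):
    return beta / (alpha * z**2 + 1.0) - (beta + gamma_)   # memductance (4)

def hr_node(x, I_in):
    x1, x2, x3 = x
    return np.array([
        -a * x1**3 + b * x1**2 + x2 - x3 + I_ext + I_in,
        c - d * x1**2 - x2,
        eps * (sigma * (x1 + x0) - x3),
    ])

def mnn(t, s):
    x1, x2, z21, z12 = s[0:3], s[3:6], s[6], s[7]
    nu21 = x1[0] - x2[0]          # input to the synapse acting on neuron 1
    nu12 = x2[0] - x1[0]          # input to the synapse acting on neuron 2
    M21 = k * g(z21) * nu21
    M12 = k * g(z12) * nu12
    return np.concatenate([hr_node(x1, M21), hr_node(x2, M12), [nu21, nu12]])

s0 = [0.1, 0.0, 0.0, -0.1, 0.0, 0.0, 0.0, 0.0]   # arbitrary initial condition
sol = solve_ivp(mnn, (0.0, 3000.0), s0, max_step=0.05)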
The MNN (6) and (7) is said to achieve GS if after a transient time t > t T , its states are related through a static function F ( · ) that holds uniformly in time, such that
x_1(t) = F_{12}(x_2(t))
(8)
In implicit form, this can be written as
F(x_1(t), x_2(t)) = x_1(t) - F_{12}(x_2(t)) = 0, \quad \forall \, t > t_T
(9)
Notice that the functional relation F(·) must be static; that is, its form does not depend explicitly on time or change along the trajectories of the state variables.
The main difference between generalized and identical synchronization is that in GS the relationship between the states is not the identity; their temporal coordination follows a more general relation, which must nonetheless be static and independent of the systems' trajectories. As such, the stability conditions are essentially the same but apply to a different error dynamics: instead of the difference between the states, the error is the difference between the state of one system and the image of the static function that describes the relationship between the states. A simple physical interpretation is that GS appears when a system, instead of exactly copying the motion of another system, does the exact opposite. This phenomenon, sometimes called antisynchronization, corresponds to F_{12}(x_2(t)) = −x_2(t) and is in fact one form of GS [14].
An alternative way to describe GS is in terms of manifolds. The dynamics of (6) and (7) evolves in the manifold:
\mathcal{M} = \left\{ [x_1(t), x_2(t), z_{21}(t), z_{12}(t)] \in \mathbb{R}^8 : \text{solutions of the system (6) and (7)} \right\}
(10)
For MNN (6) and (7) to achieve GS, the manifold M ¯ must be at least locally asymptotically stable:
\bar{\mathcal{M}} = \left\{ [x_1(t), x_2(t), z_{21}(t), z_{12}(t)] \in \mathbb{R}^8 : F(x_1(t), x_2(t)) = 0, \; z_{21}(t) = \bar{c}_1, \; z_{12}(t) = \bar{c}_2, \text{ with } \bar{c}_1 \text{ and } \bar{c}_2 \text{ constants} \right\}
(11)
Notice that, since the states of one system map onto those of the other once GS is achieved, the manifold M̄ effectively has the lower dimension of R^3 rather than that of the entire state space R^8. Therefore, GS is achieved if the manifold M̄ is locally stable, that is, if all transverse directions are contracting. One way to determine the local stability of the GS manifold is to characterize all its transverse directions via Lyapunov exponents (LEs). If all transverse directions have negative LEs, then the GS manifold is locally stable [14]. The LEs can be calculated using the well-known algorithm proposed by Wolf et al. [20]. However, since these calculations are complex and demanding, a simplified indicator of GS is the nearest-neighbor method [21]. The nearest-neighbor method measures the distance between points on the solution trajectories of the systems as they evolve in time; if this distance remains approximately constant, the systems are coordinated in time. A significant advantage of this method is that the number of distances to calculate scales linearly with the number of nodes in the network, unlike Lyapunov-based methods, which require evaluating variational equations and Jacobian matrices. The nearest-neighbor method is computationally efficient, as it only involves storing and comparing state vectors, operations that are considerably faster and more amenable to parallelization. Consequently, the method is well suited for the analysis of large and even heterogeneous networks.

3. Numerical Results

To identify the emergence of GS in our MNN model, we apply the false neighbors approach. To illustrate its calculation, we take a set of points x_{1,2}^j, j = 1, …, M, randomly chosen from the trajectory of each neuron (Figure 2a,b). Then, after sufficient time has passed for a full oscillation to be completed (n iteration steps later), we identify their corresponding neighbors x_1^{j,n} (Figure 2b). For these points, we measure their normalized average distance d as
d = \frac{1}{\delta \bar{K}} \sum_{j=0}^{M-1} \sum_{n=0}^{K_j - 1} \left\| x_2^j - x_2^{j,n} \right\|^2, \quad \bar{K} = \sum_{j=0}^{M-1} K_j
(12)
where M is the number of randomly chosen points in the trajectory and δ is the average distance between the chosen points and their neighbors for the first neuron, while K j is the number of nearest neighbors for point j. On average, we use N = 153 points across all experiments. Allowing K j to vary has practical advantages: it provides a more reliable estimate of local distances.
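A schematic implementation of this indicator is given below (an illustrative sketch, not the authors' code; the neighborhood radius, temporal exclusion window, number of sample points, and random seed are assumed choices).

# Sketch of the nearest-neighbor indicator d in (12).
# traj1, traj2: arrays of shape (T, 3) holding the sampled states of neurons 1 and 2
# at the same time instants, after discarding transients.
import numpy as np
from scipy.spatial import cKDTree

def nn_distance(traj1, traj2, M=153, radius=0.05, min_gap=200, seed=0):
    rng = np.random.default_rng(seed)
    tree = cKDTree(traj1)
    picks = rng.choice(len(traj1), size=M, replace=False)
    num, delta_sum, K_bar = 0.0, 0.0, 0
    for j in picks:
        # K_j spatial neighbors of x1^j on neuron 1's trajectory (temporally close samples excluded)
        idx = [n for n in tree.query_ball_point(traj1[j], r=radius) if abs(n - j) > min_gap]
        for n in idx:
            delta_sum += np.linalg.norm(traj1[j] - traj1[n])   # accumulates the normalizer delta
            num += np.linalg.norm(traj2[j] - traj2[n]) ** 2    # spread of the image points on neuron 2
        K_bar += len(idx)
    if K_bar == 0:
        return np.nan
    delta = delta_sum / K_bar
    return num / (delta * K_bar)   # d close to zero indicates a static functional relation (GS)

Sweeping β, integrating (6) and (7), and evaluating nn_distance on the resulting trajectories yields a curve of the type shown in Figure 7.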
As shown in Figure 2, the trajectory moves from x_1^j to x_1^{j,n}, with both contained in a small vicinity (ρ ≪ 1); in our illustration, each such region of the state space is represented with the same color. In the second neuron, however, our initial points move, after the same amount of time, to places where they are no longer neighbors. Hence, there is no static functional relationship between these systems; in other words, there is no GS in our MNN model. Alternatively, in Figure 3, where the coupling memristors have the value β = 1.14, all neighbors in the second neuron also remain in a small vicinity (δ ≪ 1 and d ≈ 0). Therefore, there is GS between these neurons.
The MNN model (6) and (7) with bursting HR neuron parameters [3], active memristive synapses (β = 1.14) [8], and unitary coupling strength (k = 1) results in the trajectories shown in Figure 4.
Notice that “burstings” appear at regular intervals but at different times for each neuron. This is more clearly shown in their third coordinate, where the anti-synchronized nature of the GS generated is easily observed. An alternative way to express this form of GS is the changes in its IBI; if it is periodic with a fixed period, then GS is achieved. In this contribution, we are particularly interested in determining the effect of memristive synapses on the emergence of GS. To this end, we evaluated the distance d to false neighbors for different values of the parameter β .
Notice that for small β values the distance is large, and therefore no GS is detected. For β ≳ 1.4, the neighbors' distance is nearly zero, indicating that GS is achieved (see Figure 7).
Additionally, we consider the effects of β on the MNN's IBI and ISI. For each value of β, we register the maximum, minimum, and average values. Figure 5 shows that as β increases, the IBI also grows. Conversely, the average ISI decreases approximately linearly as β increases (see Figure 6).
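The ISI and IBI statistics in Figures 5 and 6 can be extracted from a voltage trace along the following lines (a rough sketch; the spike threshold and the burst-separating gap are assumed values, not taken from the paper).

# Sketch: extract ISI and IBI from a membrane-voltage trace x1(t).
# spike_thr and burst_gap are assumed values and must be tuned to the trace at hand.
import numpy as np

def isi_ibi(t, v, spike_thr=1.0, burst_gap=50.0):
    # spike times: upward crossings of spike_thr
    up = np.where((v[:-1] < spike_thr) & (v[1:] >= spike_thr))[0]
    spikes = t[up]
    if spikes.size < 2:
        return np.array([]), np.array([])
    isi = np.diff(spikes)                          # interspike intervals
    # gaps larger than burst_gap separate bursts; IBI = time between burst onsets
    onsets = spikes[np.r_[True, isi > burst_gap]]
    ibi = np.diff(onsets)
    return isi, ibi

# e.g., isi, ibi = isi_ibi(sol.t, sol.y[0]); report min/mean/max of each for every beta.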

4. Discussion of Results

Different approaches can be used to determine the emergence of GS in dynamical networks. In this contribution, we use the nearest-neighbor approach for a small network of HR neurons coupled via active memristive synapses. A significant advantage of the nearest-neighbor approach is that, unlike Lyapunov-based methods, it does not require the evaluation of variational equations or Jacobian matrices. This makes it computationally efficient, as it only involves storing and comparing state vectors, operations that are considerably faster and more amenable to parallelization. Consequently, the method is well suited for the analysis of large or heterogeneous networks composed of nonidentical nodes. The primary concern in this contribution was to identify the effects of the memristive synapse parameters on the emergence of GS in the resulting MNN. We show that for a pair of HR neurons in an MNN, GS is achieved when the memristor parameter β is above a particular threshold value (see Figure 7). Furthermore, we identify that the memristor strength coefficient influences the temporal characteristics of their firing patterns. Within this operational regime, an increase in the memristor coefficient leads to a proportional increment in the average IBI, while simultaneously resulting in a corresponding decrease in the average ISI. To extend the presented results, it is necessary to include the effects of larger populations of neurons and the features of their connection structure. Initial efforts in this regard indicate that additional activation patterns beyond GS emerge in the MNN when small-world or scale-free networks are considered; however, these results will be reported elsewhere.

Author Contributions

Conceptualization, analysis, and validation, I.C.-P. and J.G.B.-R.; original draft preparation, J.G.B.-R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the SECIHTI of Mexico under the 968050 grant for doctoral studies.

Data Availability Statement

Programs and simulation data are available via direct contact with the authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Rabinovich, M.I.; Varona, P.; Selverston, A.I.; Abarbanel, H.D.I. Dynamical principles in neuroscience. Rev. Mod. Phys. 2006, 78, 1213–1265. [Google Scholar] [CrossRef]
  2. Hindmarsh, J.L.; Rose, R.M. A model of neuronal bursting using three coupled first order differential equations. Proc. R. Soc. Lond. Ser. B Biol. Sci. 1984, 221, 87–102. [Google Scholar] [CrossRef]
  3. Innocenti, G.; Morelli, A.; Genesio, R.; Torcini, A. Dynamical phases of the Hindmarsh-Rose neuronal model: Studies of the transition from bursting to spiking chaos. Chaos Interdiscip. Nonlinear Sci. 2007, 17, 043128. [Google Scholar] [CrossRef] [PubMed]
  4. Sterratt, D.; Graham, B.; Gillies, A.; Willshaw, D. Principles of Computational Modeling in Neuroscience; Cambridge University Press: Cambridge, UK, 2011. [Google Scholar] [CrossRef]
  5. Chua, L.O. Memristor-The missing circuit element. IEEE Trans. Circuit Theory 1971, 18, 507–519. [Google Scholar] [CrossRef]
  6. Wu, J.; Li, Z.; Lan, Y. Coexistence and control of firing patterns in a heterogeneous neuron-coupled network by memristive synapses. Nonlinear Dyn. 2025, 113, 13715–13726. [Google Scholar] [CrossRef]
  7. Mannan, Z.I.; Kim, H.; Chua, L.O. Implementation of Neuro-Memristive Synapse for Long- and Short-Term Bio-Synaptic Plasticity. Sensors 2021, 21, 644. [Google Scholar] [CrossRef] [PubMed]
  8. Hu, X.; Jiang, B.; Chen, J.; Liu, C. Synchronization behavior in a memristive synapse-connected neuronal network. Eur. Phys. J. Plus 2022, 137, 895. [Google Scholar] [CrossRef]
  9. Chua, L.O.; Kang, S.M. Memristive devices and systems. Proc. IEEE 1976, 64, 209–223. [Google Scholar] [CrossRef]
  10. Li, R.; Wang, Z.; Dong, E. A new locally active memristive synapse-coupled neuron model. Nonlinear Dyn. 2021, 104, 4459–4475. [Google Scholar] [CrossRef]
  11. Bao, H.; Zhang, Y.; Lu, W.; Bao, B. Memristor synapse-coupled memristive neuron network: Synchronization transition and occurrence of chimera. Nonlinear Dyn. 2020, 100, 937–950. [Google Scholar] [CrossRef]
  12. Kanagaraj, S.; Durairaj, P.; Sampath, S.; Karthikeyan, A.; Rajagopal, K. Collective dynamics of a coupled Hindmarsh–Rose neurons with locally active memristor. Biosystems 2023, 232, 105010. [Google Scholar] [CrossRef] [PubMed]
  13. Xu, F.; Zhang, J.; Fang, T.; Huang, S.; Wang, M. Synchronous dynamics in neural system coupled with memristive synapse. Nonlinear Dyn. 2018, 92, 1395–1402. [Google Scholar] [CrossRef]
  14. Moskalenko, O.I.; Koronovskii, A.A.; Hramov, A.E.; Boccaletti, S. Generalized synchronization in mutually coupled oscillators and complex networks. Phys. Rev. E 2012, 86, 036216. [Google Scholar] [CrossRef] [PubMed]
  15. Ma, J.; Lv, M.; Zhou, P.; Xu, Y.; Hayat, T. Phase synchronization between two neurons induced by coupling of electromagnetic field. Appl. Math. Comput. 2017, 307, 321–328. [Google Scholar] [CrossRef]
  16. Cheng, X.; Song, X.; Wang, R. Self-organization collective dynamics of heterogeneous neurons with memristive and plastic chemical synapses. Int. J. Mod. Phys. B 2022, 36, 2250030. [Google Scholar] [CrossRef]
  17. Zandi-Mehran, N.; Jafari, S.; Hashemi, G.S.; Nazarimehr, F.; Perc, M. Different synaptic connections evoke different firing patterns in neurons subject to an electromagnetic field. Nonlinear Dyn. 2020, 100, 1809–1824. [Google Scholar] [CrossRef]
  18. Usha, K.; Subha, P.A. Hindmarsh-Rose neuron model with memristors. Biosystems 2019, 178, 1–9. [Google Scholar] [CrossRef] [PubMed]
  19. Xu, Y.; Xia, Y.; Ma, J.; Alsaedi, A.; Ahmad, B. Synchronization between neurons coupled by memristor. Chaos Solitons Fractals 2017, 104, 435–442. [Google Scholar] [CrossRef]
  20. Wolf, A.; Swift, J.B.; Swinney, H.L.; Vastano, J.A. Determining Lyapunov exponents from a time series. Phys. D Nonlinear Phenom. 1985, 16, 285–317. [Google Scholar] [CrossRef]
  21. Rulkov, N.F.; Sushchik, M.M.; Tsimring, L.S.; Abarbanel, H.D.I. Generalized synchronization of chaos in directionally coupled chaotic systems. Phys. Rev. E 1995, 51, 980–994. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Memristor (2)–(4). (a) Pinched hysteresis loop of the memristor for ν ( t ) = sin ( ω t ) , (b) the memductance evaluated for different values of z ( t ) .
Figure 2. Phase portraits for two HR neurons coupled by a memristive synapse (6) and (7) with β = 0.1 . In this case, since neighbors are mostly false, no GS was detected.
Figure 3. The phase portraits for two HR neurons coupled by a memristive synapse (6) and (7) with β = 1.5 . In this case, GS was detected between neurons 1 and 2.
Figure 4. Trajectories of the two HR neurons coupled via the memristive synapses (6) and (7), with initial condition (0.3945, 0.5858, 4.709, 1.361, 8.26, 3.11, 0, 0) and β = 1.5.
Figure 5. Influence of memristor parameter β on the IBI of the MNN (6) and (7).
Figure 6. Influence of memristor parameter β on the ISI of the MNN (6) and (7).
Figure 7. The nearest neighbor distance versus the memristor parameter β for two HR neurons coupled by memristive synapse (6) and (7).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
