Abstract
Specific neural coding (SNC) forms the basis of information processing in the bio-brain, which generates distinct patterns of neural coding in response to corresponding exterior forms of stimulus. The performance of SNC depends strongly on the underlying brain-inspired model. However, the bio-rationality of existing brain-inspired models remains inadequate. The purpose of this paper is to investigate a more bio-rational brain-inspired model and the SNC of this model. In this study, we construct a complex spiking neural network (CSNN) whose topology has both the small-world property and the scale-free property. We then investigate the SNC of the CSNN under various strengths of various stimuli and discuss its mechanism. Our results indicate that (1) the CSNN exhibits similar neural time coding under the same kind of stimulus; (2) the CSNN exhibits significant SNC based on time coding under various exterior stimuli; and (3) our discussion implies that the inherent factor of SNC is synaptic plasticity.
1. Introduction
The bio-brain exhibits remarkable information processing ability [1]. Neural information coding is crucial for facilitating this processing. Distinct responses are formed to different stimuli, which is essential for the brain to differentiate between stimuli and achieve advanced brain functions [2,3]. The performance of SNC depends strongly on the underlying brain-inspired model. However, the bio-rationality of existing brain-inspired models remains inadequate, as they do not fully capture the structure and function of real neural systems. This research gap limits the ability of current models to replicate the complexities of biological neural coding mechanisms. Spiking neural networks (SNNs), inspired by the brain [4,5], can drive advancements in artificial intelligence through research on their specific neural coding. An SNN architecture consists of a neuron model, a synaptic plasticity model, and a network topology.
Bio-neurons act as units for processing neural information within the brain, while neuron models are mathematical representations that replicate the electrophysiological characteristics of bio-neurons. Researchers have developed many types of neuron models, including the Hodgkin–Huxley (H-H) neuron model, the leaky integrate-and-fire (LIF) neuron model, and the Izhikevich neuron model. The H-H neuron model incurs significant computational expense [6]; the LIF neuron model reduces this expense but fails to accurately replicate the firing behavior observed in bio-neurons [7]. The Izhikevich neuron model reflects the firing activity of bio-neurons well while maintaining a low computational expense [8]. Therefore, it is extensively utilized in the construction of SNNs [9,10].
Bio-synapses serve as the fundamental components for transmitting neural information among bio-neurons [11]. Excitatory synapses (ESs) can enhance the effectiveness of neural information transfer [12], whereas inhibitory synapses (ISs) can decrease the speed and sensitivity of neural information transfer [13]. Biological studies have shown that ISs, in conjunction with ESs, play a crucial role in regulating neural activity in the bio-brain. For instance, Xue et al. [14] found that the proportion of ISs rose alongside heightened neuronal excitability by measuring how neurons of young mice responded to optical stimuli, which reflects the excitation–inhibition balance in anatomy. Inspired by bio-synapses, synaptic plasticity models co-regulated by ESs and ISs have been utilized to construct SNNs. Zhao et al. [15] constructed an SNN with both ES and IS models and discovered that it outperforms an SNN without an inhibitory synapse model on the MNIST dataset. In bio-synapses, neurotransmitter dispersion induces a synaptic time delay (STD), which is stochastically distributed within the interval [0.1, 40] ms [16]. Hence, synaptic plasticity models with STD have been introduced into SNNs. Zhang et al. [17] developed an SNN with STD, which accurately reproduced the expected spike sequence and achieved better accuracy than an SNN without STD on the TIDIGITS speech recognition task. However, in that study, the STD was set to a fixed value that did not conform to the bio-STD interval [0.1, 40] ms. Hence, on the basis of the co-regulation of ESs and ISs, a synaptic plasticity model with an STD conforming to the bio-STD can improve the performance of SNNs.
The topology determines the form of connection between neurons in a brain network. According to complex network theory, networks are classified by their topological structure into regular, random, and complex networks. Among these, complex networks encompass small-world (SW) networks and scale-free (SF) networks [18]. SW networks exhibit both a high mean clustering coefficient (CC) and a low mean shortest path length (SPL) [19]. In an SF network, the node degree distribution follows a power law, characterized by significant heterogeneity, which endows the network with robust fault tolerance [20]. Biological research has shown that bio-functional brain networks (FBNs) are complex networks with SW and/or SF properties. Van den Heuvel et al. [21] generated the FBNs of 28 healthy subjects, finding that these networks exhibited SW and SF properties. Liu et al. [22] examined the SW property of FBNs in 31 schizophrenia patients and 31 healthy individuals, revealing that the SW property was absent in the schizophrenia patients. Stylianou et al. [23] generated FBNs for 15 Parkinson’s disease patients to investigate the effect of dopaminergic treatment on the SF property and found that the SF property was restored to normal by the treatment. Based on these results for FBNs, researchers have constructed SNNs with topologies exhibiting SW and/or SF properties. Tsakalos et al. [24] developed a small-world spiking neural network (SWSNN) by utilizing the Watts–Strogatz (WS) algorithm for topology generation and discovered that this SWSNN achieved superior recognition accuracy compared to a two-layered SNN on an image dataset. Reis et al. [25] constructed an SFSNN utilizing the Barabási–Albert (BA) algorithm and investigated its synchronization, discovering that the SFSNN could inhibit burst synchronization under the exterior disturbance of light pulses while maintaining low synchronization long after the disturbance had stopped. In a previous study [26], we constructed an SWSNN and an SFSNN to explore their anti-interference capacity under pulse noise. Our findings revealed that the SWSNN was more robust than the SFSNN, which indicates that the anti-interference capacity of these two SNNs is affected by topology. Based on the topological characteristics of the bio-FBN, a complex spiking neural network (CSNN) exhibiting both the SW property and the SF property can enhance the bio-rationality of brain-inspired models.
The bio-brain responds to exterior stimuli through neural coding [27]. Callier et al. [28] investigated the reaction of neuronal populations in the monkey brain to skin indentations. Their study revealed that these neurons can encode the timing, location, and magnitude of skin indentation quickly and consistently, indicating that population coding can effectively encode the features of contact events. Inspired by these biological findings, researchers have examined the coding of brain-inspired models. Zhu et al. [29] examined the energy consumption of an SNN and discovered that the distribution of network energy is positively associated with the strength of coupling between neurons, implying that energy coding is a useful tool for assessing the cost-effectiveness of a brain-inspired model. Du et al. [30] studied the firing activity of an SNN under a sinusoidal induced electric field (IEF) using inter-spike intervals (ISIs). They found that the ISI gradually became an integer multiple of the firing period over time, forming a stable neural coding for an IEF with random phase noise. Biological results have demonstrated that the bio-brain can form specific patterns of neural coding in response to various forms of exterior stimulus. For example, Staszko et al. [31] observed neural activity in the brains of mice in response to different forms of taste stimulus and found obvious specificity in the population coding in the gustatory cortex, indicating that different taste stimuli cause mice to produce different coding patterns. Therefore, SNC can generate distinctive coding patterns that correspond to specific exterior stimuli. The performance of SNC depends strongly on the brain-inspired model. However, the bio-rationality of the topology in current brain-inspired models remains deficient. This gap hinders the potential of neural coding in applications.
In order to deal with the above challenges, we propose a brain-inspired model with bio-rationality and investigate the SNC of the CSNN under various strengths of various stimuli. In this study, we construct a CSNN with a topology that exhibits the SW and SF properties, where the nodes are represented by Izhikevich neuron models and the edges consist of synaptic plasticity models incorporating a bio-STD, co-regulated by ESs and ISs. This design enhances the bio-rationality of the model, aligning it more closely with the bio-brain and its information-processing capabilities. To investigate the SNC of the CSNN under various strengths of different stimuli, the K-means clustering algorithm and the cosine similarity algorithm are employed. The K-means clustering algorithm efficiently groups coding patterns into clusters, enabling a clear analysis of coding differences across stimuli. Meanwhile, the cosine similarity algorithm quantifies the similarity between coding patterns, enabling a deeper understanding of the similarity within classes. Since neural coding primarily relies on time coding patterns rather than absolute spike counts, cosine similarity provides an effective measure for evaluating the consistency of SNC under different stimuli. These two strategies complement each other and offer a way to analyze the time coding of CSNNs. Furthermore, we discuss the mechanism of the SNC by analyzing the correlation between the SNC and synaptic plasticity.
The main contributions of this paper are as follows:
- To enhance the bio-rationality of brain-inspired models, a CSNN is proposed with a topology that incorporates both SW and SF properties. The nodes are Izhikevich neuron models, and the edges are represented by synaptic plasticity models with a bio-STD co-regulated by ESs and ISs.
- To investigate the SNC of the CSNN under various strengths of various stimuli, the K-means clustering algorithm and the cosine similarity algorithm are used. Our results indicate that the CSNN exhibits marked time coding similarity under various strengths of the same stimulus, and that the CSNN exhibits marked SNC under various stimuli.
- To elucidate the mechanism of the SNC based on the CSNN, we discuss the relationship between synaptic weight and SNC, which indicates that the inherent factor of the SNC is synaptic plasticity.
The remaining sections are organized as follows: The method of constructing the CSNN is described in Section 2. The SNC of the CSNN under various stimuli is presented in Section 3. The mechanism of the SNC is discussed in Section 4. Finally, the conclusion is presented in Section 5. Our flowchart is shown in Figure 1.
Figure 1.
Flowchart of our study.
2. Materials and Methods
In this section, we describe the construction of the CSNN and its neural coding approach.
2.1. Construction of CSNN
To increase the bio-rationality of brain-inspired models, the CSNN is constructed. We then investigate its topological characteristics, including the CC and SPL.
2.1.1. Izhikevich Neuron Model
Based on the description in Section 1, the Izhikevich neuron model has a low computational expense and precisely reflects the firing activity of bio-neurons, compared with the H-H neuron model [6] and the LIF neuron model [7]. Thus, we use the Izhikevich neuron model to represent the nodes in the CSNN, which is expressed as follows [32]:

$$\frac{dv}{dt} = 0.04v^2 + 5v + 140 - u + I + I_{syn}, \qquad \frac{du}{dt} = a(bv - u), \tag{1}$$

with the after-spike reset: if $v \geq 30$ mV, then $v \leftarrow c$ and $u \leftarrow u + d$, where $v$ represents the membrane voltage; $u$ represents the recovery variable for $v$; $I$ represents the exterior current; $I_{syn}$ represents the synaptic current; and $a$, $b$, $c$, and $d$ are dimensionless parameters whose values allow the model to simulate both excitatory and inhibitory neurons.
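For concreteness, the following is a minimal Python sketch of Euler integration of the Izhikevich model above; the time step, the regular-spiking parameter set (a, b, c, d) = (0.02, 0.2, −65, 8), and the constant input current are illustrative assumptions rather than this paper's exact settings.

```python
import numpy as np

def izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0, T=1000.0, dt=0.5):
    """Euler integration of one Izhikevich neuron; returns spike times (ms)."""
    n = int(T / dt)
    v, u = c, b * c                  # initial membrane voltage and recovery variable
    spikes = []
    for k in range(n):
        v += dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                # action potential: reset v, bump u
            spikes.append(k * dt)
            v, u = c, u + d
    return spikes

print(izhikevich()[:5])              # first few spike times under constant input
```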
2.1.2. Synaptic Plasticity Model
A synaptic plasticity model with bio-STD and co-regulation of ES and IS can enhance the performance of SNNs. Thus, we use this synaptic plasticity model.
An STD is introduced to represent the diffusion of neurotransmitters in bio-synapses, defined as follows [33]:

$$I_{syn}(t) = w \, r(t) \, \big( V_{post}(t) - E_{syn} \big),$$

where $I_{syn}$ denotes the synaptic current; $V_{post}$ denotes the postsynaptic membrane potential; $E_{syn}$ denotes the synaptic reversal potential; $w$ denotes the synaptic weight; and $r$ reflects changes in the concentration of the neurotransmitter, which can be described as follows:

$$\frac{dr}{dt} = \alpha \, [T]\big( V_{pre}(t - \tau) \big) (1 - r) - \beta r,$$

where $V_{pre}$ denotes the presynaptic membrane potential; $\alpha$ and $\beta$ denote the constants for the positive and negative combination speeds of the neurotransmitter, respectively; and $\tau$ is the STD, which conforms to the bio-STD. The ES weight $w_E$ and the IS weight $w_I$ are regulated by the following rules [34].
When a postsynaptic neuron $j$ does not detect an action potential from a presynaptic neuron $i$, the excitatory synaptic conductance $g_E$ and the inhibitory synaptic conductance $g_I$ slump as follows:

$$\tau_E \frac{dg_E}{dt} = -g_E, \qquad \tau_I \frac{dg_I}{dt} = -g_I,$$

where $\tau_E$ and $\tau_I$ represent the decay constants of the excitatory synaptic conductance and the inhibitory synaptic conductance, respectively.
When $j$ detects an action potential from $i$, $w_E$ and $w_I$ change as follows:

$$w_E \leftarrow w_E + \Delta w_E, \qquad w_I \leftarrow w_I + \Delta w_I.$$

If a weight exceeds the maximum weight $w_{max}$, it is set to $w_{max}$; whereas if it falls below 0, it is set to 0. The ES weight increment $\Delta w_E$ and the IS weight increment $\Delta w_I$ are regulated by $F_E(\Delta t)$ and $F_I(\Delta t)$, which are the modification functions for STDP and can be defined as follows:

$$F(\Delta t) = \begin{cases} A_{+} \, e^{-\Delta t / \tau_{+}}, & \Delta t \geq 0, \\ -A_{-} \, e^{\Delta t / \tau_{-}}, & \Delta t < 0, \end{cases}$$

where $\Delta t$ denotes the firing interval between the presynaptic and postsynaptic spikes, and $A_{+}$, $A_{-}$, $\tau_{+}$, and $\tau_{-}$ denote the amplitudes and time constants of potentiation and depression, respectively.
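As a rough illustration of these plasticity rules, the sketch below combines the exponential conductance decay with a pair-based STDP weight update and weight clipping; the window parameters `A_plus`, `A_minus`, `tau_plus`, `tau_minus`, and `w_max` are placeholder values, not this paper's settings.

```python
import numpy as np

def decay_conductances(g_E, g_I, dt, tau_E=5.0, tau_I=10.0):
    # Exponential slump of excitatory/inhibitory conductances between spikes
    return g_E * np.exp(-dt / tau_E), g_I * np.exp(-dt / tau_I)

def stdp_window(delta_t, A_plus=0.01, A_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    # Pair-based STDP modification function F(dt), dt = t_post - t_pre
    if delta_t >= 0:
        return A_plus * np.exp(-delta_t / tau_plus)    # potentiation
    return -A_minus * np.exp(delta_t / tau_minus)      # depression

def update_weight(w, t_pre, t_post, w_max=1.0):
    # Apply the STDP increment on a spike pair, then clip to [0, w_max]
    w += stdp_window(t_post - t_pre)
    return min(max(w, 0.0), w_max)

print(update_weight(0.5, t_pre=10.0, t_post=15.0))     # pre-before-post -> LTP
```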
2.1.3. Complex Network Topology
A complex network topology exhibiting the SW and SF properties can enhance the bio-rationality of brain-inspired models. Thus, we generate such a topology. The Barrat–Barthelemy–Vespignani (BBV) algorithm is capable of simulating the dynamic evolution of local edge weights caused by adding new nodes and of regulating the mean CC over a wide range [35]. It can construct complex networks of various topologies with the SW and SF properties by adjusting the reconnection probability parameter $p$. In this study, the BBV algorithm is utilized to generate the complex network topology; selecting an appropriate value for $p$ is essential to achieve a network with bio-rationality through analysis of its SW property [36] and SF property [37].
To obtain an appropriate $p$ for generating a complex network with the SW and SF properties, we generate complex networks with $p$ in the range [0.1, 1] in steps of 0.1, with a network size of 500 nodes. The SW and SF properties of the complex networks for various $p$ are presented in Table 1.
Table 1.
CC and power-law exponent $\gamma$ of the complex network with different $p$.
From these simulations, we observed that at the selected $p$, the corresponding power-law exponent $\gamma$ is 2.15, which closely aligns with the SF property of the human FBN, for which $\gamma$ is approximately 2 [38]. Similarly, at this $p$, the mean CC falls within the range of the SW property of the human FBN [39]. Thus, we select this $p$ to generate a bio-rational network topology to construct the CSNN.
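For readers who wish to reproduce the topology generation, the following is a hedged sketch of BBV-style weighted growth; the triad-formation step controlled by `p` is our assumption for how the reconnection probability enters, and the parameter values `m`, `w0`, and `delta` are illustrative, not the paper's.

```python
import random
import networkx as nx

def bbv(n=500, m=3, w0=1.0, delta=1.0, p=0.5, seed=0):
    """BBV-style weighted growth with a triad-formation step of probability p."""
    rng = random.Random(seed)
    G = nx.complete_graph(m + 1)
    nx.set_edge_attributes(G, w0, "weight")
    for new in range(m + 1, n):
        strengths = {v: G.degree(v, weight="weight") for v in G}
        total = sum(strengths.values())
        targets, last = set(), None
        while len(targets) < m:
            if last is not None and rng.random() < p:
                cand = rng.choice(list(G[last]))        # triad formation
            else:                                       # strength-preferential attachment
                r, acc = rng.uniform(0, total), 0.0
                for v, s in strengths.items():
                    acc += s
                    if acc >= r:
                        cand = v
                        break
            if cand not in targets:
                targets.add(cand)
                last = cand
        for t in targets:
            s_t = G.degree(t, weight="weight")
            for nb in list(G[t]):                       # local weight rearrangement
                G[t][nb]["weight"] += delta * G[t][nb]["weight"] / s_t
            G.add_edge(new, t, weight=w0)
    return G

G = bbv()
print(G.number_of_nodes(), G.number_of_edges())
```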
2.1.4. Topological Characteristics of CSNN
In order to evaluate CSNN, we investigated its topological characteristics including CC and SPL.
1. CC
The mean CC denotes the tightness of nodes in a network, thereby illustrating the effectiveness of local information transfer within an SNN [40]. Since the edges of the SNN are weighted, it is necessary to use the mean weighted CC ($C^w$), with the following expression:

$$C^w = \frac{1}{N} \sum_{i} \frac{1}{s_i (k_i - 1)} \sum_{j,h} \frac{w_{ij} + w_{ih}}{2} \, a_{ij} a_{ih} a_{jh},$$

where $s_i$ denotes the node strength; $w_{ij}$ and $w_{ih}$ denote the synaptic weights; $k_i$ denotes the node degree; and $a_{ij}$, $a_{ih}$, and $a_{jh}$ denote elements of the adjacency matrix.
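This weighted CC matches Barrat's definition, which can be implemented directly as below; note that networkx's built-in weighted clustering uses the different Onnela definition, so we compute it explicitly.

```python
from itertools import combinations
import networkx as nx

def mean_weighted_cc(G):
    # Barrat-style weighted clustering coefficient, averaged over all nodes
    total = 0.0
    for i in G:
        k = G.degree(i)
        s = G.degree(i, weight="weight")      # node strength
        if k < 2 or s == 0:
            continue                          # leaves contribute zero
        c = sum((G[i][j]["weight"] + G[i][h]["weight"]) / 2.0
                for j, h in combinations(G[i], 2) if G.has_edge(j, h))
        total += c / (s * (k - 1))
    return total / G.number_of_nodes()
```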
2. SPL
The mean SPL denotes the mean of the shortest distances between all pairs of nodes in a network, thereby illustrating the effectiveness of global information transfer within an SNN [40]. Since the edges of the SNN are weighted, it is necessary to use the mean weighted SPL ($L^w$), with the following expression:

$$L^w = \frac{1}{N(N-1)} \sum_{i \neq j} d^w_{ij}, \qquad d^w_{ij} = \min_{\rho_{ij}} \sum_{(u,v) \in \rho_{ij}} \frac{1}{w_{uv}},$$

where $w_{uv}$ denotes the synaptic weight; $(u, v)$ denotes the connection between nodes $u$ and $v$; and $\rho_{ij}$ denotes a possible pathway from node $i$ to node $j$.
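A minimal sketch of this quantity follows, under the assumption (consistent with the formula above) that edge length is the reciprocal of synaptic weight, so that a stronger synapse implies a shorter effective path; it requires a connected graph.

```python
import networkx as nx

def mean_weighted_spl(G):
    # Convert synaptic weight to distance: stronger weight -> shorter path
    for u, v, data in G.edges(data=True):
        data["length"] = 1.0 / data["weight"]
    return nx.average_shortest_path_length(G, weight="length")
```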
2.2. The Time Coding of CSNN Under Exterior Stimuli
To evaluate the effectiveness of the brain-inspired model, we investigated the SNC of the CSNN based on time coding under exterior stimuli.
2.2.1. Exterior Stimuli
To investigate the SNC of the CSNN under exterior stimuli, three stimuli were employed: a white Gaussian stimulus, an impulse stimulus, and an AC magnetic field stimulus.
1. White Gaussian stimulus
A Gaussian distribution governs the instantaneous intensity $I_w$ of white Gaussian noise, which is expressed as follows:

$$f(I_w) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left( -\frac{(I_w - \mu)^2}{2\sigma^2} \right),$$

where $\sigma$ denotes the standard deviation of $I_w$, and $\mu$ is the mean value of $I_w$. In this study, the instantaneous intensity $I_w$ is continuously added to the exterior current $I$ throughout the 1000 ms simulation.
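A minimal sketch of generating such a noise current in Python follows; the mean, standard deviation, and time step are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 0.5, 1000.0                        # ms; illustrative time step
mu, sigma = 0.0, 5.0                       # assumed mean and standard deviation
I_w = rng.normal(mu, sigma, int(T / dt))   # one noise sample per time step
# I_w[k] is added to the exterior current I of every neuron at step k
```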
2. Impulse stimulus
Impulse noise is made up of irregular, discontinuous impulse spikes with short duration, large amplitude, and burst patterns. The mathematical description is as follows:

$$I_p(t) = \begin{cases} A_p, & t_0 \leq t \leq t_0 + D, \\ 0, & \text{otherwise}, \end{cases}$$

where $t_0$ denotes the onset time of the stimulus; $D$ stands for the duration of the stimulus; and $A_p$ stands for the impulse intensity. In this study, $t_0$ remains consistent across all neurons, indicating that the impulse stimulus is introduced to each neuron model concurrently at the same initial moment, and $D$ is 200 ms. All neuron models experience the impulse noise, which is treated as a current disturbance on the exterior input current $I$ in Equation (1).
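The rectangular impulse can be sketched as below; only the 200 ms duration comes from the text, while the onset time `t0` and intensity `A_p` are assumed values.

```python
import numpy as np

dt, T = 0.5, 1000.0                  # ms
t0, D, A_p = 100.0, 200.0, 15.0      # onset (assumed), 200 ms duration, intensity (assumed)
t = np.arange(0.0, T, dt)
I_p = np.where((t >= t0) & (t <= t0 + D), A_p, 0.0)   # rectangular impulse train
```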
3. AC magnetic field stimulus
The alternating magnetic field stimulus is described as follows:

$$B(t) = A_B \sin(2 \pi f t),$$

where $A_B$ stands for the amplitude of the AC magnetic field stimulus, and $f$ is the magnetic field frequency. All of the neuron models are subjected to the AC magnetic field stimulus, which is represented in Equation (1) as a voltage disturbance on the membrane potential $v$.
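A sketch of this sinusoidal disturbance follows; the amplitude and frequency values are assumptions, and the sinusoid form matches the reconstruction above.

```python
import numpy as np

dt, T = 0.5, 1000.0            # ms
A_B, f = 15.0, 50.0            # amplitude and frequency (Hz); assumed values
t = np.arange(0.0, T, dt)
V_ac = A_B * np.sin(2 * np.pi * f * (t / 1000.0))   # t converted from ms to s
# V_ac[k] perturbs the membrane potential v of every neuron at step k
```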
2.2.2. Time Coding Method of CSNN
Time coding encodes neural information by capturing the precise moments of neuronal action potentials. One kind of time coding is ISI coding, which is based on the disparity between adjacent firing times and can accurately reflect dynamic neural information [30]. When a neuron model fires its $n$-th action potential at time $t_n$, the $n$-th element of the ISI series is defined as follows:

$$\mathrm{ISI}_n = t_{n+1} - t_n.$$
The ISI coding can be analyzed through three aspects: the ISI time domain diagram, the ISI histogram, and the joint ISI distribution. The ISI time domain diagram shows the dynamic variation of the ISIs of all neurons over time. The ISI histogram shows the rate distribution of each ISI within the entire set of ISIs. The joint ISI distribution shows the disparities between neighboring ISIs among all neurons.
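As a simplified single-neuron illustration, the sketch below derives the ISI series and the three features used later as eigenvectors (highest ISI, highest ISI percentage, and highest neighboring-ISI disparity); the histogram binning is an assumption.

```python
import numpy as np

def isi_features(spike_times):
    """Derive the ISI series and three scalar features from one neuron's spikes (ms)."""
    isi = np.diff(np.asarray(spike_times, float))   # ISI_n = t_{n+1} - t_n
    hist, _ = np.histogram(isi, bins=20)
    ev1 = isi.max()                                 # highest ISI (time domain diagram)
    ev2 = hist.max() / hist.sum()                   # highest ISI percentage (histogram)
    ev3 = np.abs(np.diff(isi)).max()                # highest neighboring-ISI disparity
    return ev1, ev2, ev3

print(isi_features([0, 8, 18, 30, 45, 70]))
```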
1. Coding pattern of CSNN under white Gaussian stimulus
Each neuron in the CSNN received a white Gaussian stimulus with a strength range of 5–25 dBW in steps of 2.5 dBW. Using white Gaussian stimuli of 5, 15, and 25 dBW as examples, the three aspects of their ISI coding are shown in Figure 2.
Figure 2.
ISI coding pattern of the CSNN under white Gaussian stimulus: (a1–a3) ISI time domain diagrams; (b1–b3) ISI histograms; (c1–c3) joint ISI distributions.
Figure 2(a1–a3) show the ISI time domain diagrams under white Gaussian stimuli of 5, 15, and 25 dBW; Figure 2(b1–b3) show the ISI histograms under the same strengths; and Figure 2(c1–c3) show the corresponding joint ISI distributions. The neural coding pattern illustrated in Figure 2 reflects the dynamic variation of the ISI, the rate distribution of the ISI, and the changes in adjacent ISIs in our CSNN when exposed to the white Gaussian stimulus. Three eigenvectors are extracted from the three patterns under the white Gaussian stimulus: the highest ISI is extracted from the ISI time domain diagram; the highest percentage of ISI is extracted from the ISI histogram; and the highest disparity between neighboring ISIs is extracted from the joint ISI distribution.
2. Coding pattern of CSNN under impulse stimulus
Each neuron in the CSNN received an impulse stimulus with a strength range of 5–25 mA in steps of 2.5 mA. Using impulse stimulus strengths of 5, 15, and 25 mA as examples, the three aspects of their ISI coding are shown in Figure 3.
Figure 3.
ISI coding pattern of the CSNN under impulse stimulus: (a1–a3) ISI time domain diagrams; (b1–b3) ISI histograms; (c1–c3) joint ISI distributions.
Figure 3(a1–a3) show the ISI time domain diagrams under impulse stimuli of 5, 15, and 25 mA; Figure 3(b1–b3) show the ISI histograms under the same strengths; and Figure 3(c1–c3) show the corresponding joint ISI distributions. The neural coding pattern illustrated in Figure 3 reflects the dynamic variation of the ISI, the rate distribution of the ISI, and the changes in adjacent ISIs in our CSNN when exposed to the impulse stimulus. Three eigenvectors are extracted from the three patterns under the impulse stimulus: the highest ISI is extracted from the ISI time domain diagram; the highest percentage of ISI is extracted from the ISI histogram; and the highest disparity between neighboring ISIs is extracted from the joint ISI distribution.
3. Coding pattern of CSNN under AC magnetic field stimulus
Each neuron in the CSNN received an AC magnetic field stimulus with an amplitude range of 5–25 mV in steps of 2.5 mV. Using AC magnetic field stimulus amplitudes of 5, 15, and 25 mV as examples, the three aspects of their ISI coding are shown in Figure 4.
Figure 4.
ISI coding pattern of the CSNN under AC magnetic field stimulus. (a1–a3) ISI time domain diagrams; (b1–b3) ISI histograms; (c1–c3) joint ISI distributions.
Figure 4(a1–a3) show the ISI time domain diagrams under AC magnetic field stimuli of 5, 15, and 25 mV; Figure 4(b1–b3) show the ISI histograms under the same amplitudes; and Figure 4(c1–c3) show the corresponding joint ISI distributions. The neural coding pattern illustrated in Figure 4 reflects the dynamic variation of the ISI, the rate distribution of the ISI, and the changes in adjacent ISIs in our CSNN when exposed to the AC magnetic field stimulus. Three eigenvectors are extracted from the three patterns under the AC magnetic field stimulus: the highest ISI is extracted from the ISI time domain diagram; the highest percentage of ISI is extracted from the ISI histogram; and the highest disparity between neighboring ISIs is extracted from the joint ISI distribution.
2.3. The SNC of CSNN
Based on the time coding of the CSNN, we employ the cosine similarity algorithm to explore the similarity within classes, and the K-means clustering algorithm to explore the disparity between classes. In this subsection, we introduce these two algorithms and provide details on their parameters.
2.3.1. Cosine Similarity Algorithm
The cosine similarity algorithm is chosen because it effectively captures the similarity between eigenvectors, ensuring that the structural resemblance of ISI coding patterns is accurately measured; it is widely used for measuring the similarity between eigenvectors. This method is as follows [41]:

$$\cos(\theta) = \frac{A \cdot B}{\|A\| \, \|B\|} = \frac{\sum_{k=1}^{H} A_k B_k}{\sqrt{\sum_{k=1}^{H} A_k^2} \sqrt{\sum_{k=1}^{H} B_k^2}}, \tag{13}$$

where $A$ and $B$ denote two different samples containing the chosen eigenvectors. In this study, we set a similarity threshold of 0.8 based on empirical testing. If the cosine similarity between two eigenvectors exceeds this threshold, we consider them to belong to the same class. This threshold ensures that only highly similar responses are clustered together, which helps avoid misclassification in the analysis.
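As a minimal illustration, the cosine similarity and the 0.8 threshold can be applied as follows; the two 3-D eigenvectors shown are hypothetical values.

```python
import numpy as np

def cosine_similarity(A, B):
    A, B = np.asarray(A, float), np.asarray(B, float)
    return A @ B / (np.linalg.norm(A) * np.linalg.norm(B))

a = [42.0, 0.31, 18.5]          # hypothetical eigenvectors (ev1, ev2, ev3)
b = [45.0, 0.29, 20.0]
same_class = cosine_similarity(a, b) > 0.8   # the paper's similarity threshold
print(cosine_similarity(a, b), same_class)
```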
2.3.2. K-Means Clustering Algorithm
The K-means clustering algorithm efficiently groups coding patterns into clusters, enabling the analysis of coding differences across stimuli and the identification of different responses to varying stimuli. It was utilized to calculate the time coding disparity of the CSNN under various stimuli. The steps of the algorithm are the following [42] (a minimal sketch is given after the steps):
- Decide the number of clusters K: We set K = 3 because we aim to categorize the data into three distinct clusters corresponding to the different time coding patterns of CSNN.
- Decide the number of eigenvectors H: We set H = 3 because each sample has three eigenvectors. A sample, with its three eigenvectors, is then randomly chosen as the initial cluster center of each cluster.
- Assign samples to clusters: For every sample, the Euclidean distance between the sample and each cluster center is computed; then, each sample is assigned to the cluster closest to its center.
- Recompute the cluster centers: Compute each new cluster center by averaging each eigenvector across all samples assigned to that cluster.
- Repeat until convergence: Repeat these steps until the classification outcomes for each sample remain unchanged.
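Following these steps, a minimal sketch using scikit-learn's KMeans with K = 3 and H = 3 is given below; the eigenvector values are synthetic stand-ins, since the actual features come from the simulations.

```python
import numpy as np
from sklearn.cluster import KMeans

# 27 samples (9 strengths x 3 stimuli), each a 3-D eigenvector; the values are
# synthetic stand-ins for the features extracted from the ISI coding patterns
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(9, 3))
               for c in ([1.0, 2.0, 1.0], [4.0, 1.0, 3.0], [2.0, 5.0, 2.0])])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(labels.reshape(3, 9))   # one row per stimulus; each row should be uniform
```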
3. Results
This section first introduces the experimental settings. Then, we compute the CC and SPL of the CSNN. Finally, we investigate the SNC of the CSNN based on time coding under various exterior stimuli.
3.1. Experimental Settings
In this subsection, the experimental settings are introduced, including the parameters of the Izhikevich neuron model and the synaptic plasticity model.
3.1.1. Parameters of Izhikevich Neuron Model
Following [43], the proportion of excitatory to inhibitory Izhikevich neuron models is 4:1. Hence, we stochastically distribute them in the same 4:1 ratio within the SNN. Following [44], Table 2 presents the dimensionless parameters $a$, $b$, $c$, and $d$ and their corresponding values for the excitatory and inhibitory Izhikevich neuron models utilized. These parameters are crucial for modeling the firing behavior of neurons and are the key components of the Izhikevich neuron model employed in this study.
Table 2.
Parameters of the Izhikevich neuron models.
3.1.2. Parameters of the Synaptic Plasticity Model
Following [33,34], Table 3 shows the parameters and corresponding values used in the synaptic plasticity model. These parameters help simulate changes in synaptic weights in response to neuronal activity.
Table 3.
Parameters of the synaptic plasticity model.
Biological research indicates that the bio-STD is stochastically distributed in the interval [0.1, 40] ms [16]. We therefore incorporate an STD conforming to this interval into the synaptic plasticity model.
3.2. Results of Topological Characteristics of CSNN
To evaluate the topological characteristics of the CSNN, we computed its $C^w$ and $L^w$, as shown in Table 4.
Table 4.
The $C^w$ and $L^w$ of CSNN.
Table 4 presents the $C^w$ and $L^w$ of the CSNN. The $C^w$ falls within the range of the CC of the human brain (up to 0.88) [21]; the $L^w$ falls within the range of the SPL of the human brain (1.7–5.1) [21]. These results indicate that the CSNN is a bio-rational brain-inspired model that effectively balances local and global connectivity.
3.3. Time Coding of CSNN
In this subsection, we investigate the similarity within classes and the disparity between classes of the CSNN's time coding.
3.3.1. The Time Coding Similarity Under the Same Stimulus
The cosine similarity algorithm serves to assess the similarity among time codings under various strengths of each stimulus. For the white Gaussian stimulus (5–25 dBW, 2.5 dBW/step), the impulse stimulus (5–25 mA, 2.5 mA/step), and the AC magnetic field stimulus (5–25 mV, 2.5 mV/step), each strength value corresponds to a unique time coding, which we consider as one sample. Hence, nine samples were collected for each stimulus. For every sample, we extracted the highest ISI (eigenvector 1), the highest percentage of ISI (eigenvector 2), and the highest disparity between neighboring ISIs (eigenvector 3). Each sample’s three eigenvectors together form a three-dimensional eigenvector, which serves as the vector $A$ (or $B$) in Equation (13).
For each type of stimulus, the cosine similarity between the samples under every pair of different strengths is computed, and the mean value is calculated. This ensures that the cosine similarity calculation fully reflects the overall features of the time coding, rather than just the similarities of individual features. These means are presented in Table 5.
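A sketch of this pairwise mean computation over the nine per-strength samples follows; the eigenvector values are hypothetical stand-ins.

```python
from itertools import combinations
import numpy as np

def mean_pairwise_cosine(samples):
    """Mean cosine similarity over all pairs of per-strength eigenvector samples."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    samples = [np.asarray(s, float) for s in samples]
    return float(np.mean([cos(a, b) for a, b in combinations(samples, 2)]))

# nine hypothetical samples for one stimulus, one per strength value
samples = [[40 + k, 0.30 + 0.01 * k, 18 + k] for k in range(9)]
print(mean_pairwise_cosine(samples))
```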
Table 5.
Cosine similarity.
Table 5 illustrates that the cosine similarity of the CSNN under all three stimuli was close to one, which suggests that the CSNN demonstrates significant time coding similarity when subjected to varying strengths of the same stimulus.
Figure 2, Figure 3 and Figure 4 illustrate the ISI coding patterns of CSNN under different stimuli. The cosine similarity values in Table 5 further support this observation: under the same stimulus condition, the CSNN exhibits highly similar SNC patterns (cosine similarity > 0.99), indicating that SNC is characterized by similarity within the classes.
3.3.2. Time Coding Disparity Under Various Stimuli
We applied the K-means clustering algorithm to analyze the differences in time coding under various stimuli. The simulation parameters align with those used for the similarity assessment above. We computed the mean clustering accuracies of the CSNN’s time coding under various stimuli. The classification accuracies can be found in Table 6.
Table 6.
The classification accuracies.
As shown in Table 6, the classification accuracy for ISI coding patterns utilizing Eigenvector 1 + Eigenvector 2 + Eigenvector 3 is the highest, which indicates that the ISI coding patterns of the CSNN have the largest disparity under various stimuli.
Figure 2, Figure 3 and Figure 4 illustrate ISI coding patterns of CSNN under different stimuli. The classification accuracies in Table 6 further support this observation: different stimuli lead to distinct coding patterns, as evidenced by the varying classification performances. Specifically, individual eigenvectors achieve moderate accuracy (around 72–76%), whereas the combination of all three eigenvectors significantly enhances classification performance (97.32%). This suggests that SNC patterns exhibit notable differences across stimuli.
Overall, our findings indicate that the CSNN exhibits a marked SNC characterized by similarity within the classes and disparity between classes.
4. Discussion
In this section, we first elucidate the mechanism of SNC. We then compare our work with other research and discuss its limitations.
4.1. The Mechanism of SNC
To elucidate the mechanism of SNC, we explore the relationship between the SNC and synaptic plasticity.
4.1.1. The Synaptic Weight
The SNN is connected via synaptic plasticity in accordance with its topological connections. Consequently, we investigated the synaptic weight. The mean synaptic weight (MSW) refers to the mean weight of all the synapses within an SNN. Based on the same strengths of the three stimuli mentioned above, we illustrate the evolution of the MSWs of the CSNN under the various stimuli, as presented in Figure 5.
Figure 5.
Evolution of MSW.
Based on Figure 5, the three evolution processes of the MSW exhibit a comparable tendency. The MSW shows a considerable decrease in the initial 300 ms, followed by a stabilization period from 300 ms to 1000 ms.
In our study, the MSW decreases in the initial 300 ms. This behavior occurs due to the continuous exterior stimuli applied from the start of the simulation; once the network has fully adapted to the exterior input, the MSW stabilizes. All three exterior stimuli exert a consistent influence on the CSNN throughout the simulation.
4.1.2. Relevance Analysis
The CSNN generates the relevant time coding under a specific stimulus, which forms the SNC. To elucidate the mechanism of the SNC, we established the relationship between the MSW and the mean ISI. The mean ISI represents the mean of all neuronal ISIs within an SNN over a given period. Based on the same strengths of the three stimuli mentioned earlier, we first show the evolution of the mean ISI of the CSNN under the various stimuli in 100 ms intervals, as depicted in Figure 6.
Figure 6.
Evolution of mean ISI.
Based on Figure 6, the evolution processes of the mean ISI exhibit a comparable tendency. The mean ISI shows a considerable increase in the initial 300 ms, followed by a stabilization period from 300 ms to 1000 ms. The continuous exterior stimuli applied throughout the simulation also drive the observed increase and stabilization of the mean ISI. Initially, the network adapts to the exterior stimuli, increasing the mean ISI. Once the network reaches equilibrium, the mean ISI stabilizes.
Subsequently, we established the relationship between SNC and synaptic plasticity through the Pearson correlation coefficient $r$, which is expressed as follows:

$$r = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2} \sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^2}},$$

where $x_i$ and $y_i$ represent the samples, and $n$ denotes the size of the sample. In order to assess the significance of $r$, the $t$-test is utilized, which is described as follows:

$$t = r \sqrt{\frac{n - 2}{1 - r^2}},$$

which follows a $t$-distribution with $n - 2$ degrees of freedom. The 0.01 level of significance is indicated by “**”, and the 0.05 level of significance is indicated by “*”.
In this research, $x$ represents the MSW over 100 ms intervals; $y$ represents the mean ISI over the same intervals; and $n$ is set to 10. The $r$ values between the MSW and the mean ISI in the CSNN under the three stimuli with various strengths are presented in Table 7.
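Under these definitions, the correlation and its significance can be computed as below; the MSW and mean-ISI series here are hypothetical stand-ins whose trends merely mimic Figures 5 and 6.

```python
import numpy as np
from scipy import stats

# Ten 100 ms windows: hypothetical MSW and mean-ISI series (n = 10)
msw      = np.array([1.00, 0.82, 0.70, 0.64, 0.62, 0.61, 0.61, 0.60, 0.60, 0.60])
mean_isi = np.array([ 8.0, 10.5, 12.9, 14.0, 14.6, 14.8, 14.9, 15.0, 15.0, 15.1])

r, p_value = stats.pearsonr(msw, mean_isi)   # returns r and the two-sided p-value
flag = "**" if p_value < 0.01 else "*" if p_value < 0.05 else ""
print(f"r = {r:.3f}{flag}, p = {p_value:.4g}")
```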
Table 7.
The $r$ values for various stimuli.
Table 7 indicates a significant correlation at the 0.01 level between the three mean ISIs and their respective MSWs across the three different stimuli, which suggests that the inherent factor of SNC is synaptic plasticity.
Table 4 presents the CC and SPL of CSNN. These structural metrics reflect the organization of synaptic connectivity and support the observed SNC formation. Additionally, Table 5 demonstrates the cosine similarity of the ISI coding patterns under different stimuli strengths. The high similarity values indicate that the SNC maintains stability despite variations in exterior stimulus intensity. Furthermore, the classification accuracy results in Table 6 highlight the discriminative power of SNC across different stimulus conditions. By integrating Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6 and Table 4, Table 5, Table 6 and Table 7, we establish a comprehensive connection between the ISI, synaptic plasticity, and network topology. The results demonstrate that the adaptation of synaptic weights directly impacts the coding pattern, reinforcing the role of synaptic plasticity in SNC formation and stability.
4.2. Comprehensive Discussion
In this subsection, we compare our approach with other methods and discuss the limitations of the CSNN.
4.2.1. Comparison with Other Methods
Our findings contribute significantly to brain-inspired models, especially regarding neural coding. The performance of SNC depends strongly on the brain-inspired model. Thus, we propose a bio-rational brain-inspired model. In terms of topology, our CSNN is inspired by the topological characteristics of the bio-FBN, in contrast to previous studies [24,25], which investigated topologies exhibiting only the SW or SF property. In terms of the synaptic plasticity model, our CSNN has a bio-STD and co-regulation of ES and IS, in contrast to previous studies [15,17], which lacked an STD or used a fixed STD. In terms of neural coding, our CSNN exhibits significant SNC, in contrast to the previous study [45], which focused on the SNC of an SFSNN. These comparisons illustrate the superiority of our method.
4.2.2. Limitations
While this study provides insights into the SNC of the CSNN, several limitations should be acknowledged. First, although the topology of the CSNN aligns with bio-rationality, it does not fully capture the true connectivity patterns observed in biological neural networks. Furthermore, real-world implementations often encounter continuously changing, time-varying stimuli; however, this study does not investigate how the CSNN responds to such dynamic inputs. Finally, while a strong correlation between the MSW and the ISI is established, the study does not provide a causal explanation. A more detailed mechanistic analysis is necessary to confirm how synaptic plasticity directly affects SNC.
5. Conclusions
This study proposed the CSNN, a brain-inspired model characterized by bio-rationality, whose topology combines the SW property and the SF property. To investigate the SNC of the CSNN under various strengths of various stimuli, we used the K-means clustering algorithm and the cosine similarity algorithm. To elucidate its mechanism, we discussed the relationship between SNC and synaptic plasticity.
The following main conclusions are drawn from this work: (1) The CSNN exhibits marked time coding similarity under various strengths of the same stimulus; the CSNN exhibits marked SNC under various stimuli. (2) Our discussion indicates that the inherent factor of the SNC is synaptic plasticity. Investigating SNC and its underlying mechanisms can enhance our comprehension of intricate brain cognitive functions and the processes involved in information handling, while also fostering advancements in artificial intelligence.
In future work, we first aim to construct a more bio-rational brain-inspired model and to apply this brain-inspired model based on SNC to other pattern recognition tasks, such as image recognition and speech recognition. Second, investigating how CSNNs respond to time-varying exterior stimuli remains an essential avenue, which could provide insights into their robustness and efficiency in processing real-world signals. Finally, while the present study identifies a significant correlation between the MSW and the ISI, future work should focus on uncovering the causal mechanisms underlying this relationship.
Author Contributions
Conceptualization, L.G.; methodology, L.G.; software, Z.W.; formal analysis, Z.W.; investigation, Z.W.; writing—original draft preparation, Z.W.; writing—review and editing, Y.S.; supervision, H.L.; funding acquisition, L.G. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by National Natural Science Foundation of China, grant number 52477232.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author(s).
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Norman-Haignere, S.V.; Feather, J.; Boebinger, D.; Brunner, P.; Ritaccio, A.; McDermott, J.H.; Schalk, G.; Kanwisher, N. A Neural Population Selective for Song in Human Auditory Cortex. Curr. Biol. 2022, 32, 1470–1484.e12. [Google Scholar] [CrossRef] [PubMed]
- Panzeri, S.; Harvey, C.D.; Piasini, E.; Latham, P.E.; Fellin, T. Cracking the Neural Code for Sensory Perception by Combining Statistics, Intervention, and Behavior. Neuron 2017, 93, 491–507. [Google Scholar] [CrossRef] [PubMed]
- Baez-Santiago, M.A.; Reid, E.E.; Moran, A.; Maier, J.X.; Marrero-Garcia, Y.; Katz, D.B. Dynamic taste responses of parabrachial pontine neurons in awake rats. J. Neurophysiol. 2016, 115, 1314–1323. [Google Scholar] [CrossRef]
- Tang, F.; Zhang, J.; Zhang, C.; Liu, L. Brain-Inspired Architecture for Spiking Neural Networks. Biomimetics 2024, 9, 646. [Google Scholar] [CrossRef] [PubMed]
- Passias, A.; Tsakalos, K.A.; Kansizoglou, I.; Kanavaki, A.M.; Gkrekidis, A.; Menychtas, D.; Aggelousis, N.; Michalopoulou, M.; Gasteratos, A.; Sirakoulis, G.C. A Biologically Inspired Movement Recognition System with Spiking Neural Networks for Ambient Assisted Living Applications. Biomimetics 2024, 9, 296. [Google Scholar] [CrossRef]
- Yu, D.; Wang, G.; Li, T.; Ding, Q.; Jia, Y. Filtering Properties of Hodgkin–Huxley Neuron on Different Time-Scale Signals. Commun. Nonlinear Sci. Numer. Simul. 2023, 117, 106894. [Google Scholar] [CrossRef]
- Kamal, N.; Singh, J. A Highly Scalable Junctionless FET Leaky Integrate-and-Fire Neuron for Spiking Neural Networks. IEEE Trans. Electron Devices 2021, 68, 1633–1638. [Google Scholar] [CrossRef]
- Elkaranshawy, H.A.; Aboukelila, N.M.; Elabsy, H.M. Suppressing the Spiking of a Synchronized Array of Izhikevich Neurons. Nonlinear Dyn. 2021, 104, 2653–2670. [Google Scholar] [CrossRef]
- Wang, G.; Yang, L.; Zhan, X.; Li, A.; Jia, Y. Chaotic Resonance in Izhikevich Neural Network Motifs under Electromagnetic Induction. Nonlinear Dyn. 2022, 107, 3945–3962. [Google Scholar] [CrossRef]
- Liu, D.; Guo, L.; Wu, Y.; Lv, H.; Xu, G. Antiinterference Function of Scale-Free Spiking Neural Network under AC Magnetic Field Stimulation. IEEE Trans. Magn. 2021, 57, 3400205. [Google Scholar] [CrossRef]
- She, X.; Long, Y.; Kim, D.; Mukhopadhyay, S. ScieNet: Deep Learning with Spike-Assisted Contextual Information Extraction. Pattern Recognit. 2021, 118, 108002. [Google Scholar] [CrossRef]
- He, H.; Shen, W.; Zheng, L.; Guo, X.; Cline, H.T. Excitatory Synaptic Dysfunction Cell-Autonomously Decreases Inhibitory Inputs and Disrupts Structural and Functional Plasticity. Nat. Commun. 2018, 9, 2893. [Google Scholar] [CrossRef]
- Scekic-Zahirovic, J.; Sanjuan-Ruiz, I.; Kan, V.; Megat, S.; De Rossi, P.; Dieterlé, S.; Cassel, R.; Jamet, M.; Kessler, P.; Wiesner, D.; et al. Cytoplasmic FUS Triggers Early Behavioral Alterations Linked to Cortical Neuronal Hyperactivity and Inhibitory Synaptic Defects. Nat. Commun. 2021, 12, 3028. [Google Scholar] [CrossRef]
- Xue, M.; Atallah, B.V.; Scanziani, M. Equalizing Excitation–Inhibition Ratios across Visual Cortical Neurons. Nature 2014, 511, 596–600. [Google Scholar] [CrossRef] [PubMed]
- Zhao, D.; Zeng, Y.; Li, Y. BackEISNN: A Deep Spiking Neural Network with Adaptive Self-Feedback and Balanced Excitatory–Inhibitory Neurons. Neural Netw. 2022, 154, 68–77. [Google Scholar] [CrossRef] [PubMed]
- Taherkhani, A.; Belatreche, A.; Li, Y.; Cosma, G.; Maguire, L.P.; McGinnity, T.M. A Review of Learning in Biologically Plausible Spiking Neural Networks. Neural Netw. 2020, 122, 253–272. [Google Scholar] [CrossRef]
- Zhang, M.; Wu, J.; Belatreche, A.; Pan, Z.; Xie, X.; Chua, Y.; Li, G.; Qu, H.; Li, H. Supervised Learning in Spiking Neural Networks with Synaptic Delay-Weight Plasticity. Neurocomputing 2020, 409, 103–118. [Google Scholar] [CrossRef]
- Barthelemy, M. Morphogenesis of Spatial Networks; Springer International Publishing: Cham, Switzerland, 2018. [Google Scholar]
- Nemzer, L.R.; Cravens, G.D.; Worth, R.M.; Motta, F.; Placzek, A.; Castro, V.; Lou, J.Q. Critical and Ictal Phases in Simulated EEG Signals on a Small-World Network. Front. Comput. Neurosci. 2021, 14, 583350. [Google Scholar] [CrossRef]
- Keerthana, G.; Anandan, P.; Nandhagopal, N. Enhancing the Robustness and Security Against Various Attacks in a Scale-Free Network. Wirel. Pers. Commun. 2021, 117, 3029–3050. [Google Scholar] [CrossRef]
- Van Den Heuvel, M.P.; Stam, C.J.; Boersma, M.; Hulshoff Pol, H.E. Small-World and Scale-Free Organization of Voxel-Based Resting-State Functional Connectivity in the Human Brain. NeuroImage 2008, 43, 528–539. [Google Scholar] [CrossRef]
- Liu, Y.; Liang, M.; Zhou, Y.; He, Y.; Hao, Y.; Song, M.; Yu, C.; Liu, H.; Liu, Z.; Jiang, T. Disrupted Small-World Networks in Schizophrenia. Brain 2008, 131, 945–961. [Google Scholar] [CrossRef] [PubMed]
- Stylianou, O.; Kaposzta, Z.; Czoch, A.; Stefanovski, L.; Yabluchanskiy, A.; Racz, F.S.; Ritter, P.; Eke, A.; Mukli, P. Scale-Free Functional Brain Networks Exhibit Increased Connectivity, Are More Integrated and Less Segregated in Patients with Parkinson’s Disease Following Dopaminergic Treatment. Fractal Fract. 2022, 6, 737. [Google Scholar] [CrossRef]
- Tsakalos, K.-A.; Sirakoulis, G.C.; Adamatzky, A.; Smith, J. Protein Structured Reservoir Computing for Spike-Based Pattern Recognition. IEEE Trans. Parallel Distrib. Syst. 2022, 33, 322–331. [Google Scholar] [CrossRef]
- Reis, A.S.; Brugnago, E.L.; Caldas, I.L.; Batista, A.M.; Iarosz, K.C.; Ferrari, F.A.S.; Viana, R.L. Suppression of Chaotic Bursting Synchronization in Clustered Scale-Free Networks by an External Feedback Signal. Chaos Interdiscip. J. Nonlinear Sci. 2021, 31, 83128. [Google Scholar] [CrossRef] [PubMed]
- Guo, L.; Song, Y.; Wu, Y.; Xu, G. Anti-Interference of a Small-World Spiking Neural Network against Pulse Noise. Appl. Intell. 2023, 53, 7074–7092. [Google Scholar] [CrossRef]
- Steinmetz, N.A.; Zatka-Haas, P.; Carandini, M.; Harris, K.D. Distributed Coding of Choice, Action and Engagement across the Mouse Brain. Nature 2019, 576, 266–273. [Google Scholar] [CrossRef] [PubMed]
- Callier, T.; Suresh, A.K.; Bensmaia, S.J. Neural Coding of Contact Events in Somatosensory Cortex. Cereb. Cortex 2019, 29, 4613–4627. [Google Scholar] [CrossRef]
- Zhu, Z.; Wang, R.; Zhu, F. The Energy Coding of a Structural Neural Network Based on the Hodgkin–Huxley Model. Front. Neurosci. 2018, 12, 122. [Google Scholar] [CrossRef]
- Du, L.; Cao, Z.; Lei, Y.; Deng, Z. Electrical Activities of Neural Systems Exposed to Sinusoidal Induced Electric Field with Random Phase. Sci. China Technol. Sci. 2019, 62, 1141–1150. [Google Scholar] [CrossRef]
- Staszko, S.M.; Boughter, J.D.; Fletcher, M.L. The Impact of Familiarity on Cortical Taste Coding. Curr. Biol. 2022, 32, 4914–4924.e4. [Google Scholar] [CrossRef]
- Izhikevich, E.M. Simple Model of Spiking Neurons. IEEE Trans. Neural Netw. 2003, 14, 1569–1572. [Google Scholar] [CrossRef]
- Destexhe, A.; Mainen, Z.F.; Sejnowski, T.J. An Efficient Method for Computing Synaptic Conductances Based on a Kinetic Model of Receptor Binding. Neural Comput. 1994, 6, 14–18. [Google Scholar] [CrossRef]
- Kleberg, F.I.; Fukai, T.; Gilson, M. Excitatory and Inhibitory STDP Jointly Tune Feedforward Neural Circuits to Selectively Propagate Correlated Spiking Activity. Front. Comput. Neurosci. 2014, 8, 53. [Google Scholar] [CrossRef] [PubMed]
- Dan, W.; Xiao-Zheng, J. On Weighted Scale-Free Network Model with Tunable Clustering and Congestion. Acta Phys. Sin. 2012, 61, 228901. [Google Scholar] [CrossRef]
- Hartmann, B.; Sugár, V. Searching for Small-World and Scale-Free Behaviour in Long-Term Historical Data of a Real-World Power Grid. Sci. Rep. 2021, 11, 6575. [Google Scholar] [CrossRef]
- Eguíluz, V.M.; Chialvo, D.R.; Cecchi, G.A.; Baliki, M.; Apkarian, A.V. Scale-Free Brain Functional Networks. Phys. Rev. Lett. 2005, 94, 18102. [Google Scholar] [CrossRef]
- Piersa, J.; Piekniewski, F.; Schreiber, T. Theoretical Model for Mesoscopic-Level Scale-Free Self-Organization of Functional Brain Networks. IEEE Trans. Neural Netw. 2010, 21, 1747–1758. [Google Scholar] [CrossRef]
- Li, G.; Luo, Y.; Zhang, Z.; Xu, Y.; Jiao, W.; Jiang, Y.; Huang, S.; Wang, C. Effects of Mental Fatigue on Small-World Brain Functional Network Organization. Neural Plast. 2019, 2019, 1716074. [Google Scholar] [CrossRef]
- Diykh, M.; Li, Y.; Wen, P. Classify epileptic EEG signals using weighted complex networks based community structure detection. Expert Syst. Appl. 2017, 90, 87–100. [Google Scholar] [CrossRef]
- Liu, Y.; Zeng, Y.; Li, R.; Zhu, X.; Zhang, Y.; Li, W.; Li, T.; Zhu, D.; Hu, G. A Random Particle Swarm Optimization Based on Cosine Similarity for Global Optimization and Classification Problems. Biomimetics 2024, 9, 204. [Google Scholar] [CrossRef] [PubMed]
- Liu, G.; Li, M.; Wang, H.; Lin, S.; Xu, J.; Li, R.; Tang, M.; Li, C. D3K: The Dissimilarity-Density-Dynamic Radius K-Means Clustering Algorithm for scRNA-Seq Data. Front. Genet. 2022, 13, 912711. [Google Scholar] [CrossRef] [PubMed]
- Vogels, T.P.; Sprekeler, H.; Zenke, F.; Clopath, C.; Gerstner, W. Inhibitory Plasticity Balances Excitation and Inhibition in Sensory Pathways and Memory Networks. Science 2011, 334, 1569–1573. [Google Scholar] [CrossRef] [PubMed]
- Izhikevich, E.M. Which Model to Use for Cortical Spiking Neurons? IEEE Trans. Neural Netw. 2004, 15, 1063–1070. [Google Scholar] [CrossRef] [PubMed]
- Guo, L.; Hou, L.; Wu, Y.; Lv, H.; Yu, H. Encoding specificity of scale-free spiking neural network under different external stimulations. Neurocomputing 2020, 418, 126–138. [Google Scholar] [CrossRef]