Article

Brain-Inspired Spatio-Temporal Associative Memories for Neuroimaging Data Classification: EEG and fMRI

by Nikola K. Kasabov 1,2,3,4,5,6,*, Helena Bahrami 1,7,8,9, Maryam Doborjeh 1 and Alan Wang 5,10,11,*
1 Knowledge Engineering and Discovery Research Innovation, School of Engineering, Computer and Mathematical Sciences, Auckland University of Technology, Auckland 1010, New Zealand
2 Intelligent Systems Research Center, University of Ulster, Londonderry BT48 7JL, UK
3 Institute for Information and Communication Technologies, Bulgarian Academy of Sciences, 1113 Sofia, Bulgaria
4 Computer Science and Engineering Department, Dalian University, Dalian 116622, China
5 Auckland Bioengineering Institute, University of Auckland, Auckland 1010, New Zealand
6 Knowledge Engineering Consulting Ltd., Auckland 1071, New Zealand
7 Core & Innovation, Wine-Searcher, Auckland 0640, New Zealand
8 Royal Society Te Apārangi, Wellington 6011, New Zealand
9 Research Association New Zealand (RANZ), Auckland 1010, New Zealand
10 Faculty of Medical and Health Sciences, University of Auckland, Auckland 1010, New Zealand
11 Centre for Brain Research, University of Auckland, Auckland 1010, New Zealand
* Authors to whom correspondence should be addressed.
Bioengineering 2023, 10(12), 1341; https://doi.org/10.3390/bioengineering10121341
Submission received: 18 August 2023 / Revised: 16 October 2023 / Accepted: 14 November 2023 / Published: 21 November 2023
(This article belongs to the Special Issue Artificial Intelligence in Biomedical Imaging)

Abstract

Humans learn from many sources of information to make decisions. Once this information is learned in the brain, spatio-temporal associations are made that connect all these sources (variables) in space and time, represented as brain connectivity. In reality, to make a decision we usually have only part of the information: a limited number of variables, limited time to make the decision, or both. The brain functions as a spatio-temporal associative memory. Inspired by this ability of the human brain, a brain-inspired spatio-temporal associative memory (STAM) was proposed earlier that utilizes the NeuCube brain-inspired spiking neural network framework. Here we apply the STAM framework to develop STAM for neuroimaging data, for the cases of EEG and fMRI, resulting in STAM-EEG and STAM-fMRI. This paper shows that once a NeuCube STAM classification model is trained on complete spatio-temporal EEG or fMRI data, it can be recalled using only part of the time series and/or only part of the variables used. We evaluate both temporal and spatial association and generalization accuracy accordingly. This pilot study opens the field for the development of classification systems on other neuroimaging data, such as longitudinal MRI data, trained on complete data but recalled on partial data. Future research includes STAM that works on data collected across different settings, in different labs and clinics, that may vary in terms of the variables, the time of data collection, and other parameters. The proposed STAM will be further investigated for early diagnosis and prognosis of brain conditions and for diagnostic/prognostic marker discovery.

1. Introduction

Memory refers to the brain’s ability to recall experiences or information that was encountered or learned previously. If this information is recalled using only partial inputs, we refer to it as Associative Memory (AM) [1,2]. There are three main types of memory in the brain, namely sensory memory, short-term (working) memory, and long-term memory, which function in different ways. However, each of these types is manifested through brain activities in space (areas of the brain) and time (spiking sequences), stored as connection weights, and always recalled with only partial input information in time and space. AM in the brain is always spatio-temporal.
Humans can learn and understand many categories and objects from spatio-temporal stimuli by creating spatial and temporal associations between them. Inspired by this capability of the human brain, AM has been introduced to the machine learning field to memorize information and retrieve it from partial or noisy data. For example, neural network models for associative pattern recognition were proposed by J. Hopfield [3] and B. Kosko [4]. In 2019, Haga and Fukai [5] introduced a memory system for neural networks based on an attractor network, which is a group of connected nodes that display patterns of activity and tend towards certain states. They applied the concept of excitatory and inhibitory nodes to their proposed network to mimic the role of the hippocampus in balancing networks to form new associations. The work above relates to vector-based data (e.g., static images) rather than to spatio-temporal data and, more specifically, not to neuroimaging (NI) data.
Spatio-temporal associative memory (STAM) is defined here as a system that is trained for classification or prediction on all available spatio-temporal variables and data and recalled only on part of the spatial or/and temporal components.
The idea of using a brain-inspired and brain-structured spiking neural network (SNN) as a spatio-temporal associative memory (STAM) was first introduced in [6] as part of the NeuCube SNN architecture, but the main concepts and definitions of STAM were introduced in [7], where a NeuCube model, trained on complete spatio-temporal data and creating spatio-temporal patterns in its connections, was recalled when only partial spatial and/or temporal information was provided as input.
In this paper, we introduce a general model of STAM for the classification of neuroimaging (NI) data and then apply it to the development of STAMs for EEG and fMRI spatio-temporal data. The paper is organized as follows. Section 2 presents the background concepts of spiking neural networks (SNN), NeuCube, and STAM on NeuCube [7]. Section 3 presents STAM-NI as a general NI classification model, while Section 4 presents a STAM-EEG model and Section 5 presents a STAM-fMRI classification model. Section 6 offers a discussion of using the STAM-NI framework across bioengineering applications, including multimodal neuroimaging data, and of the next challenges in the development and use of STAM as a new AI technique for the future.

2. SNN, the NeuCube Framework, and the STAM on the NeuCube Concept

2.1. Spiking Neural Networks (SNN)

Spiking neural networks (SNN) are biologically inspired ANNs where information is represented as binary events (spikes), similar to the event potentials in the brain, and learning is also inspired by principles in the brain. SNNs are also universal computational mechanisms [8,9]. Learning in SNN relates to changes in connection weights between two spatially located spiking neurons over time (Figure 1) so that both “time” and “space” are learned in the spatially distributed connections.
A well-known unsupervised learning paradigm inspired by the Hebbian learning principle is spike-time dependent plasticity (STDP) [8], in which the synaptic weights are adjusted based on the temporal order of the incoming (pre-synaptic) spike and the output (post-synaptic) spike. STDP is expressed in Equation (1), where τ+ and τ− are time constants and A+ and A− are the amplitudes of the synaptic adjustment:
$$\Delta W(t_{pre}, t_{post}) = \begin{cases} A_{+} \, e^{(t_{pre}-t_{post})/\tau_{+}} & \text{if } t_{pre} < t_{post}, \\ -A_{-} \, e^{-(t_{pre}-t_{post})/\tau_{-}} & \text{if } t_{pre} > t_{post}. \end{cases} \quad (1)$$
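As a minimal illustration, the STDP rule of Equation (1) can be sketched in Python. The parameter values below are arbitrary placeholders for demonstration, not values used in the paper:

```python
import numpy as np

def stdp_update(t_pre, t_post, a_plus=0.01, a_minus=0.01,
                tau_plus=20.0, tau_minus=20.0):
    """STDP weight change for one pre/post spike pair (Equation (1)).

    Potentiation when the pre-synaptic spike precedes the post-synaptic
    one (t_pre < t_post), depression otherwise. The magnitude decays
    exponentially with the spike-time difference.
    """
    dt = t_pre - t_post
    if dt < 0:    # pre before post -> potentiate
        return a_plus * np.exp(dt / tau_plus)
    elif dt > 0:  # post before pre -> depress
        return -a_minus * np.exp(-dt / tau_minus)
    return 0.0
```

Repeated application of such updates over a spike stream is what strengthens the connection pathways described below.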
Many computational models and architectures have been developed with the use of SNN (see for a review [9]). One of them, NeuCube [6,10,11] has been used for the proposed STAM-NeuCube model and also for the STAM-NI, STAM-EEG, and STAM-fMRI models developed here.

2.2. The NeuCube Framework

The NeuCube architecture is depicted in Figure 2 [6]. It consists of the following functional modules:
  • Input data encoding module.
  • 3D SNN reservoir module (SNNcube) that is designed according to a spatial brain template [12,13,14].
  • Output function (classification) module, such as deSNN [11].
  • Gene regulatory network (GRN) module (optional).
  • Parameter optimization module (optional).

2.3. The STAM on NeuCube Concept

The main thrust of the STAM on NeuCube concept proposed in [7] is that, since a SNNcube learns functional pathways of spiking activities represented as structural pathways of connections, when only a small initial part of the input data is entered, the SNN will ‘synfire’ and ‘chain-fire’ the learned connection pathways [15] to reproduce learned functional pathways as polychronisation of neuronal clusters [16]. Some studies defined the state of a SNN as a dynamic chaotic attractor [17] that can be reached with partial input information. In [18,19], polychronous neuronal groups that are activated from partial inputs are studied.
Spatio-temporal input data was first encoded into spike sequences and then spatio-temporal patterns of these sequences were learned in a SNNcube of the NeuCube framework that was structured according to a spatial template representing spatial information of the modeled data. For brain data, templates such as Talairach [12], MNI [13], personal MRI [14], etc., can be used. For multisensory streaming data modeling, the location of the sensors is used [9]. Connections are created and strengthened in the SNNcube through STDP learning. Once data is learned, the SNNcube retains the connections as a long-term memory.
To validate a STAM model, several types of accuracy tests were introduced in [7]:
  • Temporal association accuracy: validating the full model on partial temporal data of the same variables.
  • Spatial association accuracy: validation of the full model on full or partial temporal data, but on a subset of variables.
  • Temporal generalization accuracy: validation of the full model on partial temporal data of the same variables or a subset of them, but on a new data set.
  • Spatial generalization accuracy: validation of the full model on full or partial temporal data and a subset of variables, using a new data set.
Based on the general STAM-NeuCube concept, here we developed a specific STAM for NI data, called STAM-NI and then applied it for the development of STAM-EEG and STAM-fMRI, demonstrated on case study NI data.

3. The Proposed STAM-NI Classification Model and Its Mathematical Description

Spatio-temporal NI data are collected from specific brain locations over time. It is important to incorporate both the spatial and temporal information from the NI data across all measured variables over time in order to capture a meaningful pattern from the data in a computational model.
SNN and the brain-inspired NeuCube architecture have proved to be efficient in learning spatio-temporal NI data and capturing meaningful spatio-temporal patterns from the data [9]. The challenge now is how to utilize this feature for the development of STAM for the classification of NI data.
The following procedures and mathematical equations describe the proposed STAM-NI classification framework:
(1) Spatial information from the NI data, e.g., the 3D locations of electrodes, was used to structure a SNNcube and to define the locations of the input neurons that map the NI variables. Suitable brain templates were used for this purpose [12,13,14].
(2) Every spatio-temporal NI sequence, measured as a variable Vi, was encoded into a spike sequence using one of the existing methods [9]. This is illustrated in Figure 3.
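One commonly used encoding approach, given here as a sketch (the paper references several possible methods [9] without fixing one), is threshold-based encoding of signal changes:

```python
import numpy as np

def threshold_encode(signal, threshold=0.1):
    """Encode a continuous time series into spike events.

    A positive spike (+1) is emitted whenever the signal increase between
    consecutive time points exceeds the threshold, a negative spike (-1)
    when the decrease exceeds it, and 0 means no spike. The threshold
    value is an illustrative choice.
    """
    diffs = np.diff(np.asarray(signal, dtype=float))
    spikes = np.zeros_like(diffs, dtype=int)
    spikes[diffs > threshold] = 1
    spikes[diffs < -threshold] = -1
    return spikes
```
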
(3) The encoded sequences of all NI variables V were used to train a SNNcube in unsupervised mode using the STDP rule (Equation (1)), creating a connectionist structure. Before training, the SNNcube was initialized using the small-world connectivity model, where a neuron a is connected to another neuron b with a probability p_{a,b} that depends on the closeness of the two neurons: the closer they are (the smaller the distance D_{a,b} between them), the higher the probability of connection (Equation (2)).
$$p_{a,b} = C \cdot e^{-D_{a,b}^{2}/\lambda^{2}} \quad (2)$$
where C and λ are parameters.
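A minimal sketch of this small-world initialization; the values of C and λ below are illustrative, as the paper does not specify them:

```python
import numpy as np

def connection_probability(distance, C=0.5, lam=2.5):
    """Connection probability of Equation (2): decreases with the
    squared distance between two neurons."""
    return C * np.exp(-(distance ** 2) / lam ** 2)

def small_world_connections(coords, C=0.5, lam=2.5, seed=0):
    """Initialise a SNNcube adjacency matrix: each ordered pair of
    neurons (a, b) is connected with probability p_{a,b}."""
    rng = np.random.default_rng(seed)
    n = len(coords)
    adj = np.zeros((n, n), dtype=bool)
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            d = np.linalg.norm(coords[a] - coords[b])
            adj[a, b] = rng.random() < connection_probability(d, C, lam)
    return adj
```
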
(4) The trained SNNcube was recalled (activated) by all NI spatio-temporal data samples, one by one, using all variables as in step 3. For every sample Pi, a state Si of the SNNcube was defined during the propagation of the input sequence. The state Si was defined as a sequence of activated neurons Ni1, Ni2, …, Nil over time, which was used to train a deSNN classifier in supervised mode [11], forming an l-element vector Wi of connection weights of an output neuron Oi assigned to the class of the input sequence Pi. For the supervised learning in the deSNN classifier, Equations (3) and (4) were used:
$$W_{j,i} = \alpha \cdot Mod^{\,order(j,i)} \quad (3)$$
$$\Delta W_{j,i}(t) = e_{j}(t) \cdot D \quad (4)$$
where Mod is a modulation factor defining the importance of the order of the spike arriving at synapse j of output neuron Oi; ej(t) = 1 if there is a consecutive spike at synapse j at time t during the presentation of the learned pattern by output neuron Oi, and (−1) otherwise. In general, the drift parameter D can be different for ‘up’ and ‘down’ drifts, and α is a parameter.
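A compact sketch of the deSNN rank-order initialization (Equation (3)) and the spike-driven drift (Equation (4)); the parameter values and function names are illustrative only:

```python
def desnn_init_weights(spike_order, alpha=1.0, mod=0.8):
    """Rank-order weight initialization (Equation (3)):
    W_j = alpha * mod ** order(j), where order(j) is the rank of the
    first spike arriving at synapse j (0 for the earliest)."""
    return {synapse: alpha * mod ** rank
            for rank, synapse in enumerate(spike_order)}

def desnn_drift(w, has_spike, drift=0.005):
    """Spike-driven adjustment (Equation (4)): the weight drifts up when
    a further spike arrives at the synapse, down otherwise."""
    return w + drift if has_spike else w - drift
```
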
(5) When a new input sequence Pnew is presented, either as a full sequence in time and/or space (number of input variables) or as a partial one for STAM, a new SNNcube state Snew is learned as a new output neuron Onew. Its weight vector Wnew is compared with the weight vectors of the existing output neurons for the classification task using the k-nearest neighbor method. The new sample Pnew is classified into the pre-defined output class of the closest output weight vector Wi according to the Euclidean distance (Equation (5)):
$$Class(P_{new}) = Class(P_{i}), \ \text{if } \|W_{new} - W_{i}\| < \|W_{new} - W_{k}\| \ \text{for all other output neurons } O_{k}. \quad (5)$$
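The classification step of Equation (5) can be sketched as follows (for k = 1, as the equation implies):

```python
import numpy as np

def classify_nearest(w_new, stored):
    """Classify a new output-neuron weight vector by the class label of
    the closest stored weight vector under Euclidean distance
    (Equation (5)). `stored` is a list of (weight_vector, label) pairs."""
    best_label, best_dist = None, float("inf")
    for w_i, label in stored:
        d = np.linalg.norm(np.asarray(w_new, dtype=float)
                           - np.asarray(w_i, dtype=float))
        if d < best_dist:
            best_dist, best_label = d, label
    return best_label
```
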
(6) The temporal and spatial association and generalization accuracies were calculated.

4. The Proposed STAM-EEG Classification Method and Experimental Case Study

4.1. The Proposed STAM-EEG Classification Method

i. Defining the spatial and temporal components of the EEG data for the classification task, e.g., EEG channels and EEG time series data.
ii. Designing a SNNcube that is structured according to a brain template suitable for the EEG data (e.g., Talairach, MNI, etc.).
iii. Defining the mapping of the input EEG channels into the SNNcube 3D structure (see Figure 4a for an example of mapping 14 EEG channels in a Talairach-structured SNNcube).
iv. Encoding the data and training a NeuCube model to classify complete spatio-temporal EEG data, having K EEG channels measured over the full time T.
v. Analyzing the model through cluster analysis, spiking activity, and the EEG channel spiking proportional diagram (see for example Figure 4b,c).
vi. Recalling the STAM-EEG model on the same data and same variables, but measured over a time T1 < T, to calculate the classification temporal association accuracy.
vii. Recalling the STAM-EEG model on K1 < K EEG channels to evaluate the classification spatial association accuracy.
viii. Recalling the model on the same variables, measured over time T or T1 < T, on new data to calculate the classification temporal generalization accuracy.
ix. Recalling the NeuCube model on K1 < K EEG channels to evaluate the classification spatial generalization accuracy using a new EEG dataset.
x. Evaluating the K1 EEG channels as potential EEG classification biomarkers for early diagnosis or prognosis according to the problem at hand.
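Steps vi and viii above amount to recalling a trained model on truncated time series. A sketch of such an evaluation loop, assuming a hypothetical `model.predict` interface as a stand-in for a trained NeuCube model:

```python
def temporal_association_accuracy(model, samples, labels, fraction):
    """Recall a trained STAM model on truncated time series: each sample
    keeps only the first `fraction` of its time points, and the accuracy
    over all samples is returned. `model` is any classifier exposing
    predict(sample) -> label (a hypothetical interface for this sketch)."""
    correct = 0
    for sample, label in zip(samples, labels):
        t_cut = max(1, int(len(sample) * fraction))  # T1 < T
        if model.predict(sample[:t_cut]) == label:
            correct += 1
    return correct / len(samples)
```

Running this for several values of `fraction` reproduces the kind of T1-versus-accuracy tables reported below.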

4.2. Experimental Results

The experimental EEG data consisted of 60 recordings of 14 EEG channels of a subject who was moving a wrist: up (class 1), straight (class 2), and down (class 3). The data included 20 samples per class, each sample measured at 128 time points used to discretize a 1000 ms signal. First, a full NeuCube STAM-EEG classification model was trained on all 60 samples and 14 variables. The parameter settings of the STAM-EEG NeuCube model are shown in Table 1 (for explanation, see [6,11]).
Parameter values of a NeuCube model greatly influence its performance. There are several ways to deal with this. A first step is to use domain knowledge related to the data and the problem at hand to inform some of the parameter values. Different combinations of the values of the other parameters can then be explored using either a grid search or evolutionary computation methods, with an objective function to minimize the classification error [9]. The parameter values in Table 1 are default parameters for a NeuCube model, used with the sole aim of demonstrating the methods; they can be further optimized.
The fully trained NeuCube STAM-EEG classification model was first analyzed for connectivity and neuronal spiking activity (Figure 4a–c) and then tested for different accuracies (Table 2, Table 3, Table 4 and Table 5), also using the Retained Memory Accuracy (RMA), a measure newly introduced here and calculated using Equation (6) below:
$$RMA = \frac{A_r}{A_f} \quad (6)$$
where Af is the classification accuracy of the full STAM model, and Ar is the retained accuracy of the model when validated for association or generalization on shorter time windows of data or a smaller number of variables.
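Equation (6) is computed directly from the two accuracies:

```python
def retained_memory_accuracy(a_retained, a_full):
    """Retained Memory Accuracy (Equation (6)): the fraction of the full
    model's accuracy that survives when the model is recalled on partial
    temporal or spatial data."""
    return a_retained / a_full
```

For example, a model with a full accuracy of 90% that retains 85.5% accuracy on partial data gives RMA = 0.95.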
Table 2 presents the temporal association classification accuracy of the model. Table 3 shows the temporal generalization accuracy when 50% of the data was used for training the full model and 50% for validation. It shows that RMA = 1 when the model was validated on a time T1 of 95% of the full time, and RMA = 0.95 when 80% of the time was used. Similar experiments are shown in Table 4 and Table 5, evaluating the spatial association and generalization accuracy of the model, respectively. When one of the input variables (T7, ranked lowest according to Figure 4c) was removed during validation, the RMA was still very high.
The proposed STAM-EEG classification method is illustrated here on a simple EEG problem, but its applicability is much wider across various studies involving EEG or ECoG data. A large STAM-EEG model can be developed for a particular problem. This model can be validated for its temporal and spatial association and generalization accuracy on a particular subset of EEG channels, measured at shorter times. If the validation accuracies are acceptable, then the model can be successfully used on smaller EEG data. This method can be used for the early detection of brain events in an online mode, using only a shorter time of activity of a small number of channels. Further applicability of the proposed STAM-EEG classification method is discussed in Section 6.

5. STAM-fMRI for Classification

5.1. The Proposed STAM-fMRI Classification Method

i. Defining the spatial and temporal components of the fMRI data for the classification task, e.g., fMRI voxels and the time series measurement.
ii. Designing a SNNcube that is structured according to a brain template suitable for the fMRI data. This could be a direct mapping of the fMRI voxel coordinates or a transformation of the voxel coordinates from the fMRI image to another template, such as Talairach, MNI, etc. [20] (Figure 5a).
iii. Selecting K voxel features/variables from the full set of voxels (Figure 5b) and defining their mapping as input neurons in the 3D SNNcube (Figure 5c).
iv. Encoding the data and training a NeuCube model to classify complete spatio-temporal fMRI data, having K variables as inputs measured over time T.
v. Analyzing the model through connectivity and spiking activity analysis around the input voxels (Table 3).
vi. Recalling the STAM-fMRI model on the same data and same variables, but measured over a time T1 < T, to calculate the classification temporal association accuracy.
vii. Recalling the STAM-fMRI model on K1 < K fMRI variables to evaluate the classification spatial association accuracy.
viii. Recalling the model on the same variables, measured over time T or T1 < T, on new data to calculate the classification temporal generalization accuracy.
ix. Recalling the NeuCube model on K1 < K variables to evaluate the classification spatial generalization accuracy using a new fMRI dataset.
x. Ranking and evaluating the K1 fMRI features/variables as potential classification biomarkers (Section 5.5).
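Steps vii and ix above recall the model on a subset of the input variables. A sketch of such a spatial evaluation loop, again assuming a hypothetical `model.predict` interface standing in for a trained NeuCube model:

```python
def spatial_association_accuracy(model, samples, labels, keep_vars):
    """Recall a trained STAM model on a subset K1 of the K input
    variables: variables not in `keep_vars` are dropped from each sample
    before recall. Samples are dicts mapping variable name -> time series;
    `model.predict` is a hypothetical interface for this sketch."""
    correct = 0
    for sample, label in zip(samples, labels):
        reduced = {v: series for v, series in sample.items() if v in keep_vars}
        if model.predict(reduced) == label:
            correct += 1
    return correct / len(samples)
```
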

5.2. STAM-fMRI for Classification of Experimental fMRI Data

The experimental fMRI data set used here was originally collected by Marcel Just and his colleagues at Carnegie Mellon University’s Center for Cognitive Brain Imaging (CCBI) [21]. The fMRI recording captured 5062 voxels from the whole brain volume while a subject was performing a cognitive reading task. There were two categories of sentences (affirmative and negative), each remaining on the screen for 8 s, corresponding to 16 measured brain images. There was a total of 40 sentences.
A full STAM-fMRI model was developed for the classification of fMRI samples into two classes (class 1: affirmative sentences; class 2: negative sentences). A signal-to-noise ratio (SNR) feature selection method was applied to the fMRI data to select the fMRI variables with the highest power of discrimination between the defined classes. As shown in Figure 5b, we selected the top 20 voxels, which had SNR values above a 0.4 threshold. These 20 fMRI features were used as input variables to train the STAM-fMRI model for classification.
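The SNR-based voxel selection described above can be sketched as follows; the exact SNR formula used in the paper is not given, so the common two-class definition is assumed here:

```python
import numpy as np

def snr_rank_voxels(data, labels, top_k=20, threshold=0.4):
    """Rank voxels by two-class signal-to-noise ratio,
    |mean_class1 - mean_class2| / (std_class1 + std_class2),
    and keep the top_k voxels whose SNR exceeds the threshold.
    `data` has shape (samples, voxels); `labels` holds 0/1 class labels.
    The formula is the common two-class SNR definition (an assumption)."""
    data = np.asarray(data, dtype=float)
    labels = np.asarray(labels)
    c1, c2 = data[labels == 0], data[labels == 1]
    snr = np.abs(c1.mean(0) - c2.mean(0)) / (c1.std(0) + c2.std(0) + 1e-12)
    ranked = np.argsort(snr)[::-1]          # most discriminative first
    selected = [int(v) for v in ranked[:top_k] if snr[v] > threshold]
    return selected, snr
```
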
Figure 5a shows how the 5062 voxel coordinates were mapped into a 3-dimensional SNNcube. Table 6 shows the brain regions of interest (RoI) associated with the top-20 selected fMRI features and the evolved connectivity in the 3D SNN STAM model around the input features, as follows: LT (3), LOPER (3), LIPL (1), LDLPFC (6), RT (2), CALC (1), LSGA (1), RDLPFC (1), RSGA (1), RIT (1). The full names of the areas are: left temporal lobe (LT); left opercularis (LOPER); left inferior parietal lobule (LIPL); left and right dorsolateral prefrontal cortex (LDLPFC, RDLPFC); right temporal lobe (RT); calcarine sulcus (CALC); left and right supramarginal gyrus (LSGA, RSGA); right inferior temporal lobe (RIT).
The training classification accuracy of the full STAM-fMRI classification model was 100% (Figure 5c,d), and the associative temporal and spatial testing accuracy of the model was further tested and is presented below.
Figure 6a presents three snapshots of deep learning of eight-second fMRI data in a SNNcube when a subject was reading a negative sentence (time in seconds). Figure 6b captures the internal structural pattern, represented as spatio-temporal connectivity in the SNN model trained with eight-second fMRI data streams. The corresponding functional pattern is illustrated in Figure 6c as a sequence of spiking activity of clusters of neurons in a trained SNNcube. The internal functional dimensionality of the SNN model shows that while the subject was reading a negative sentence, the activated cognitive functions were initiated from the Spatial Visual Processing function. Then it was followed by the Executive functions, including decision-making and working memory. From there, the Logical and Emotional Attention functions were involved. Finally, the Emotional Memory formation and Perception functions were evoked.

5.3. The Full STAM-fMRI Classification Model Is Recalled on Partial Temporal fMRI Data

Here the trained full STAM-fMRI model in Section 5.2 was recalled on 70% and 50% of the time length of the same data used for the training (Figure 7).
The classification temporal association accuracy for both experiments was 100%. Using less than 50% of the time series resulted in an accuracy of less than 100%.

5.4. Testing the Full STAM-fMRI Model on a Smaller Portion of the Spatial Information (a Smaller Number of fMRI Variables/Features)

Here, the STAM-fMRI model from Section 5.2, trained on 20 features, was validated only on 18 of them, by removing the last two from the SNR ranking (Figure 5b). The spatial classification association accuracy was again 100% (Figure 8). The accuracy decreases when less than 18 input variables are used.

5.5. Potential Bio-Marker Discovery from the STAM-fMRI

A fully trained STAM-fMRI classification model can be analyzed in terms of the most activated brain regions related to reading affirmative and negative sentences. Figure 9 shows the distribution of the average connection weights around the input features located in the left and right hemispheres of the trained SNN models related to reading different sentences.

5.6. STAM for Longitudinal MRI Neuroimaging

STAM systems can also be developed for longitudinal MRI data (STAM-longMRI), such as the data used in [22], where 6 years of MRI data from a large cohort were modeled to predict dementia and AD 2 and 4 years ahead. A STAM-longMRI system can be trained on the full length of longitudinal MRI data and recalled on a shorter time span for early prediction of future events.

6. Discussions, Conclusions, and Directions for Further Research

6.1. Potential Applications of the Proposed STAM-NI Classification Methods

The potential applications of the STAM-NI classification methods proposed here become evident in various fields, including post-stroke recovery prediction, early diagnosis and prognosis of mild cognitive impairment (MCI) and Alzheimer’s disease (AD), as well as depression and other mental health conditions. These applications can use NI techniques such as EEG and fMRI to analyze spatio-temporal patterns of brain activity and to make accurate and early predictions or classifications.
One notable application is in post-stroke recovery prediction. By training a STAM model on NI data collected from stroke patients, the model can learn the spatio-temporal patterns associated with successful recovery. Subsequently, the model can be recalled using only partial NI variables or time points to predict the recovery trajectory of the same patient or a new stroke patient. This capability can assist clinicians in personalized treatment planning and rehabilitation strategies [23,24].
Another application lies in the early diagnosis and prognosis of MCI and Alzheimer’s disease. By training a STAM model on longitudinal NI data, such as EEG and fMRI recordings, from individuals with and without MCI/AD, the model can learn the complex spatio-temporal patterns indicative of disease progression. The model can then be utilized to classify new individuals based on their NI data, enabling early detection and intervention for improved patient outcomes [25,26].
Depression is another mental health condition that can benefit from the STAM framework. By training a STAM-NI model on NI data, such as resting-state fMRI, from individuals with depression, it can capture the spatio-temporal associations related to the disorder. This trained model can subsequently be used to classify new individuals as either depressed or non-depressed based on their NI data, aiding in early diagnosis and treatment planning [27].
Furthermore, the STAM systems hold potential for applications in neurodevelopmental disorders, such as autism spectrum disorder (ASD). By training a STAM model on EEG data, it can identify distinctive spatio-temporal patterns associated with ASD, contributing to early diagnosis and intervention [28]. Similarly, the framework can be applied to investigate brain disorders related to aging, such as Parkinson’s disease or age-related cognitive decline [29].
By incorporating multimodal spatio-temporal data, including clinical, genetic, cognitive, and demographic information, during the training phase, a STAM model can enable comprehensive analyses. This integration of multiple modalities aims to enhance the model’s ability to make accurate predictions or classifications, even when only a subset of the modalities is available for recall. Such a capability can provide valuable insights for personalized medicine, treatment planning, and patient management [30].
One challenge in the STAM system design is how it can effectively associate different data modalities during learning, enabling successful recall even when only one modality is available. For instance, can a STAM model learn brain data from synesthetic subjects who experience auditory sensations when they see colors? Addressing this challenge requires leveraging prior knowledge about brain structural and functional pathways, as well as stimuli data and corresponding spatio-temporal data from subjects. Current understanding of structural connectivity and functional pathways during perception can be utilized to initialize the connectivity of the SNN Cube before training [31,32,33].
Another open question pertains to how sound, image, and brain response data (e.g., EEG, fMRI) can be inputted as associated spatio-temporal patterns into dedicated groups of neurons. This concept aligns with the principles employed in neuroprosthetics, where stimulus signals are delivered to specific brain regions to compensate for damage, effectively “skipping” damaged areas [34,35]. Experiments conducted using the STAM-NI framework have the potential to provide insights and ideas for the development of new types of neuroprosthetics that leverage spatio-temporal associations in neural activity.
Wider applications of the proposed STAM models can be anticipated, such as predicting air pollution [36] with the use of neuromorphic hardware [37,38,39,40].

6.2. Future Development and Challenges of the STAM-NI Methods

STAM-NI methods can be developed in the future to address the following challenges:
-
Developing new functions in the NeuCube SNN, enabling better STAM system designs inspired by neurogenetics [41] and brain cognition [42,43,44], and also enhancing existing SNN models for transfer learning and knowledge discovery [45,46,47].
-
Normalizing or/and harmonizing NI data across various data sources [48]. Establishing an effective “mapping” between training variables and synchronized time units will be crucial.
-
Implementation of STAM models on neuromorphic microchips, consuming much less energy and being implantable for online adaptive learning and control [37,38,39,40,49]. The choice of a hardware platform for the implementation of a practical STAM system would depend on the specific task requirements.
-
STAM-NI, which works under different temporal conditions, e.g., with data collected at varying intervals. At present, the same time unit (e.g., milliseconds, seconds) is used for training and for recall. If the recall data are measured at different time intervals, we can apply interpolation between the data points so that they match the training temporal units. Such data interpolation has been successfully used in brain data analysis using the NeuCube SNN [22].
-
STAM-NI, for different spatial settings. At this stage, we have explored the model when data for training and recall are in the same spatial setting and same context. We can explore further the ability of the model for incremental learning of new variables, that can be mapped spatially. In this case, the network of connections in the 3D SNN will form new clusters that connect spatially the new variables and may also develop links with the “old” variables.
-
STAM-NI, which accounts for the variability of the variables themselves. In real-world scenarios, variables may have different characteristics, and their relationships may evolve. We will consider how a STAM framework performs with diverse types of variables, including those with different temporal dynamics and spatial distributions.
-
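The temporal-interpolation step described above can be sketched as follows. The function name and the 4 Hz/10 Hz sampling rates are illustrative assumptions, not part of the NeuCube implementation:

```python
import numpy as np

def resample_to_training_units(signal, src_rate_hz, dst_rate_hz):
    """Linearly interpolate a 1-D signal recorded at src_rate_hz so that
    its samples fall on the time grid used for training (dst_rate_hz)."""
    duration = (len(signal) - 1) / src_rate_hz              # recording time, s
    src_t = np.arange(len(signal)) / src_rate_hz            # original time stamps
    dst_t = np.arange(0, duration + 1e-9, 1.0 / dst_rate_hz)  # training grid
    return np.interp(dst_t, src_t, signal)

# Recall data sampled at 4 Hz, model trained on 10 Hz data (hypothetical rates)
recall = np.sin(np.linspace(0, 2 * np.pi, 9))   # 9 samples = 2 s at 4 Hz
matched = resample_to_training_units(recall, src_rate_hz=4, dst_rate_hz=10)
print(len(matched))   # 21 samples = 2 s at 10 Hz
```

The resampled sequence can then be spike-encoded and presented to the trained model in the same temporal units as the training data.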
In conclusion, the proposed STAM-NI classification framework and its specific models, STAM-EEG and STAM-fMRI, are not intended to substitute for existing methods and systems for NI data analysis. Rather, they extend their functionality towards better NI data modeling, data understanding, and early event diagnosis and prognosis.

Author Contributions

Conceptualization, N.K.K.; methodology, N.K.K., H.B., M.D. and A.W.; software, N.K.K., H.B. and M.D.; validation, N.K.K., H.B., M.D. and A.W.; formal analysis, N.K.K., H.B., M.D. and A.W.; investigation, N.K.K., H.B., M.D. and A.W.; resources, N.K.K. and M.D.; data curation, N.K.K., H.B. and M.D.; writing—original draft preparation, N.K.K., H.B., M.D. and A.W.; writing—review and editing, N.K.K., H.B., M.D. and A.W.; visualization, N.K.K., H.B. and M.D.; supervision, N.K.K.; project administration, N.K.K.; funding acquisition, N.K.K., M.D. and A.W. All authors have read and agreed to the published version of the manuscript.

Funding

N.K.K., M.D. and A.W. are partially supported by projects co-funded by the Ministry of Business, Innovation and Employment (MBIE) of New Zealand and the Singapore Data Science Consortium (SDSC), projects 13287/AUTX2001 (N.K.K. and M.D.) and UOAX2001 (A.W.), 2021–2023. A.W. was also funded by the Marsden Fund Project 22-UOA-120 and the Health Research Council of New Zealand’s project 21/144.

Institutional Review Board Statement

Not applicable, as data is publicly available.

Informed Consent Statement

Not applicable, as data has already been made publicly available.

Data Availability Statement

All data is publicly available as described in the paper.

Acknowledgments

First, we would like to acknowledge the significant contribution of the two reviewers to the final version of the paper. For the experiments in the paper, the publicly available NeuCube development software environment was used: https://kedri.aut.ac.nz/research-groups/neucube (accessed on 13 November 2023).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Squire, L.R. Memory and Brain Systems: 1969–2009. J. Neurosci. 2009, 29, 12711–12716.
  2. Squire, L.R. Memory systems of the brain: A brief history and current perspective. Neurobiol. Learn. Mem. 2004, 82, 171–177.
  3. Hopfield, J.J. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 1982, 79, 2554–2558.
  4. Kosko, B. Bidirectional Associative Memories. IEEE Trans. Syst. Man Cybern. 1988, 18, 49–60.
  5. Haga, T.; Fukai, T. Extended Temporal Association Memory by Modulations of Inhibitory Circuits. Phys. Rev. Lett. 2019, 123, 078101.
  6. Kasabov, N.K. NeuCube: A spiking neural network architecture for mapping, learning and understanding of spatio-temporal brain data. Neural Netw. 2014, 52, 62–76.
  7. Kasabov, N. Spatio-Temporal Associative Memories in Brain-inspired Spiking Neural Networks: Concepts and Perspectives. TechRxiv 2023, preprint.
  8. Song, S.; Miller, K.D.; Abbott, L.F. Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nat. Neurosci. 2000, 3, 919–926.
  9. Kasabov, N. Time-Space, Spiking Neural Networks and Brain-Inspired Artificial Intelligence; Springer Nature: Berlin/Heidelberg, Germany, 2019; 750p. Available online: https://www.springer.com/gp/book/9783662577134 (accessed on 12 November 2023).
  10. Kasabov, N. NeuroCube EvoSpike Architecture for Spatio-Temporal Modelling and Pattern Recognition of Brain Signals. In ANNPR; Mana, N., Schwenker, F., Trentin, E., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7477, pp. 225–243.
  11. Kasabov, N.; Dhoble, K.; Nuntalid, N.; Indiveri, G. Dynamic evolving spiking neural networks for on-line spatio- and spectro-temporal pattern recognition. Neural Netw. 2013, 41, 188–201.
  12. Talairach, J.; Tournoux, P. Co-Planar Stereotaxic Atlas of the Human Brain: 3-Dimensional Proportional System—An Approach to Cerebral Imaging; Thieme Medical Publishers: New York, NY, USA, 1988.
  13. Zilles, K.; Amunts, K. Centenary of Brodmann’s map—Conception and fate. Nat. Rev. Neurosci. 2010, 11, 139–145.
  14. Mazziotta, J.C.; Toga, A.W.; Evans, A.; Fox, P.; Lancaster, J. A Probabilistic Atlas of the Human Brain: Theory and Rationale for Its Development. NeuroImage 1995, 2, 89–101.
  15. Abeles, M. Corticonics; Cambridge University Press: New York, NY, USA, 1991.
  16. Izhikevich, E. Polychronization: Computation with Spikes. Neural Comput. 2006, 18, 245–282.
  17. Neftci, E.; Chicca, E.; Indiveri, G.; Douglas, R. A systematic method for configuring VLSI networks of spiking neurons. Neural Comput. 2011, 23, 2457–2497.
  18. Szatmáry, B.; Izhikevich, E.M. Spike-Timing Theory of Working Memory. PLoS Comput. Biol. 2010, 6, e1000879.
  19. Humble, J.; Denham, S.; Wennekers, T. Spatio-temporal pattern recognizers using spiking neurons and spike-timing-dependent plasticity. Front. Comput. Neurosci. 2012, 6, 84.
  20. Kasabov, N.K.; Doborjeh, M.G.; Doborjeh, Z.G. Mapping, learning, visualisation, classification and understanding of fMRI data in the NeuCube Evolving Spatio Temporal Data Machine of Spiking Neural Networks. IEEE Trans. Neural Netw. Learn. Syst. 2017, 28, 887–899.
  21. Mitchell, T.M.; Hutchinson, R.; Niculescu, R.S.; Pereira, F.; Wang, X.; Just, M.; Newman, S. Learning to Decode Cognitive States from Brain Images. Mach. Learn. 2004, 57, 145–175.
  22. Doborjeh, M.; Doborjeh, Z.; Merkin, A.; Bahrami, H.; Sumich, A.; Krishnamurthi, R.; Medvedev, O.N.; Crook-Rumsey, M.; Morgan, C.; Kirk, I.; et al. Personalised Predictive Modelling with Spiking Neural Networks of Longitudinal MRI Neuroimaging Cohort and the Case Study of Dementia. Neural Netw. 2021, 144, 522–539.
  23. Chong, B.; Wang, A.; Borges, V.; Byblow, W.D.; Alan Barber, P.; Stinear, C. Investigating the structure-function relationship of the corticomotor system early after stroke using machine learning. NeuroImage Clin. 2022, 33, 102935.
  24. Karim, M.; Chakraborty, S.; Samadiani, N. Stroke Lesion Segmentation using Deep Learning Models: A Survey. IEEE Access 2021, 9, 44155–44177.
  25. Li, H.; Shen, D.; Wang, L. A Hybrid Deep Learning Framework for Alzheimer’s Disease Classification Based on Multimodal Brain Imaging Data. Front. Neurosci. 2021, 15, 625534.
  26. Niazi, F.; Bourouis, S.; Prasad, P.W.C. Deep Learning for Diagnosis of Mild Cognitive Impairment: A Systematic Review and Meta-Analysis. Front. Aging Neurosci. 2020, 12, 244.
  27. Sona, C.; Siddiqui, S.A.; Mehmood, R. Classification of Depression Patients and Healthy Controls Using Machine Learning Techniques. IEEE Access 2021, 9, 26804–26816.
  28. Fanaei, M.; Davari, A.; Shamsollahi, M.B. Autism Spectrum Disorder Diagnosis Based on fMRI Data Using Deep Learning and 3D Convolutional Neural Networks. Sensors 2020, 20, 4600.
  29. Zhang, X.; Liu, T.; Qian, Z. A Comprehensive Review on Parkinson’s Disease Using Deep Learning Techniques. Front. Aging Neurosci. 2021, 13, 702474.
  30. Hjelm, R.D.; Calhoun-Sauls, A.; Shiffrin, R.M. Deep Learning and the Audio-Visual World: Challenges and Frontiers. Front. Neurosci. 2020, 14, 219.
  31. Poline, J.-B.; Poldrack, R.A. Frontiers in brain imaging methods grand challenge. Front. Neurosci. 2012, 6, 96.
  32. Pascual-Leone, A.; Hamilton, R. The metamodal organization of the brain. Prog. Brain Res. 2001, 134, 427–445.
  33. Honey, C.J.; Kötter, R.; Breakspear, M.; Sporns, O. Network structure of cerebral cortex shapes functional connectivity on multiple time scales. Proc. Natl. Acad. Sci. USA 2007, 104, 10240–10245.
  34. Nicolelis, M. Mind in Motion. Sci. Am. 2012, 307, 44–49.
  35. Paulun, V.C.; Beer, A.L.; Thompson-Schill, S.L. Distinct contributions of functional and deep neural network features to representational similarity of scenes in human brain and behavior. eLife 2019, 8, e42848.
  36. Liu, H.; Lu, G.; Wang, Y.; Kasabov, N. Evolving spiking neural network model for PM2.5 hourly concentration prediction based on seasonal differences: A case study on data from Beijing and Shanghai. Aerosol Air Qual. Res. 2021, 21, 200247.
  37. Furber, S. To Build a Brain. IEEE Spectr. 2012, 49, 39–41.
  38. Indiveri, G.; Stefanini, F.; Chicca, E. Spike-based learning with a generalized integrate and fire silicon neuron. In Proceedings of the 2010 IEEE International Symposium on Circuits and Systems (ISCAS 2010), Paris, France, 30 May–2 June 2010; pp. 1951–1954.
  39. Indiveri, G.; Chicca, E.; Douglas, R.J. Artificial cognitive systems: From VLSI networks of spiking neurons to neuromorphic cognition. Cogn. Comput. 2009, 1, 119–127.
  40. Delbruck, T. jAER Open-Source Project. 2007. Available online: https://sourceforge.net/p/jaer/wiki/Home/ (accessed on 12 November 2023).
  41. Benuskova, L.; Kasabov, N. Computational Neuro-Genetic Modelling; Springer: New York, NY, USA, 2007.
  42. BrainwaveR Toolbox. Available online: http://www.nitrc.org/projects/brainwaver/ (accessed on 12 November 2023).
  43. Buonomano, D.; Maass, W. State-dependent computations: Spatio-temporal processing in cortical networks. Nat. Rev. Neurosci. 2009, 10, 113–125.
  44. Kang, H.J.; Kawasawa, Y.I.; Cheng, F.; Zhu, Y.; Xu, X.; Li, M.; Sousa, A.M.; Pletikos, M.; Meyer, K.A.; Sedmak, G.; et al. Spatio-temporal transcriptome of the human brain. Nature 2011, 478, 483–489.
  45. Kasabov, N.; Tan, Y.; Doborjeh, M.; Tu, E.; Yang, J.; Goh, W.; Lee, J. Transfer Learning of Fuzzy Spatio-Temporal Rules in the NeuCube Brain-Inspired Spiking Neural Network: A Case Study on EEG Spatio-temporal Data. IEEE Trans. Fuzzy Syst. 2023, 1–12.
  46. Kumarasinghe, K.; Kasabov, N.; Taylor, D. Brain-inspired spiking neural networks for decoding and understanding muscle activity and kinematics from electroencephalography signals during hand movements. Sci. Rep. 2021, 11, 2486. Available online: https://www.nature.com/articles/s41598-021-81805-4 (accessed on 12 November 2023).
  47. Wu, X.; Feng, Y.; Lou, S.; Zheng, H.; Hu, B.; Hong, Z.; Tan, J. Improving NeuCube spiking neural network for EEG-based pattern recognition using transfer learning. Neurocomputing 2023, 529, 222–235.
  48. Wen, G.; Shim, V.; Holdsworth, S.J.; Fernandez, J.; Qiao, M.; Kasabov, N.; Wang, A. Artificial Intelligence for Brain MRI Data Harmonization: A Systematic Review. Bioengineering 2023, 10, 397.
  49. Dey, S.; Dimitrov, A. Mapping and Validating a Point Neuron Model on Intel’s Neuromorphic Hardware Loihi. Front. Neuroinform. 2022, 16, 883360.
Figure 1. Learning in SNN relates to changes in the connection weights between two spatially located spiking neurons over time so that both “time” and “space” are learned in the spatially distributed connections (http://en.m.wikipedia.org/wiki/neuron, accessed on 13 November 2023).
Figure 2. The NeuCube brain-inspired SNN architecture (from [6], (©Elsevier, reproduced with permission from Kasabov, N., NeuCube: A spiking neural network architecture for mapping, learning and understanding of spatio-temporal brain data, Neural Networks, vol. 52, 2014)).
Figure 3. Original EEG signal (top), encoded into spike sequence (middle) and a reconstruction of the signal from the spike sequence, back to real values (bottom) (from [9], (©Springer-Nature 2019, reproduced with permission from Kasabov, N., Time-Space, Spiking Neural Networks and Brain-Inspired Artificial Intelligence, 2019)).
Figure 4. (a) Training the NeuCube STAM-EEG model on full data (60 EEG samples) and validating it on T1 = 80% of the time of the data (see Table 2). Different input neurons, representing corresponding EEG channels, are presented in different colors; (b) Post-training neuronal connectivity and cluster formations. (c) (Left): The size of the segments represents the spiking activity of the corresponding input neuron to an EEG channel; the larger the segment, the higher the impact this channel has on the model; (Right): EEG electrode layout.
Figure 5. (a) Mapping of the 5062 fMRI voxels into a 3D SNN model of the NeuCube framework; (b) selecting top-20 voxels as input variables using SNR ranking (on the y-axis) of top voxels (on the x-axis) related to the affirmative versus negative sentences. The top features are selected according to their SNR values that were greater than a threshold = 0.4. (c) a full STAM-fMRI model implemented in NeuCube trained and tested on 100% of the data using all 20 features; (d) its training accuracy is 100%, but the validation association and generalization accuracies are further tested below.
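The SNR-based voxel ranking used for feature selection in Figure 5b can be illustrated with a minimal sketch. It assumes the common two-class signal-to-noise ratio score |μ1 − μ2| / (σ1 + σ2) and the 0.4 threshold from the figure; whether NeuCube computes exactly this SNR variant is an assumption:

```python
import numpy as np

def snr_rank(X, y, threshold=0.4):
    """Rank features by two-class signal-to-noise ratio.
    X: (samples, features); y: binary labels (0/1).
    Returns indices of features whose SNR exceeds threshold (best first)
    and the full SNR vector."""
    a, b = X[y == 0], X[y == 1]
    snr = np.abs(a.mean(0) - b.mean(0)) / (a.std(0) + b.std(0) + 1e-12)
    keep = np.where(snr > threshold)[0]
    return keep[np.argsort(snr[keep])[::-1]], snr

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))          # toy data: 40 samples, 5 "voxels"
X[:20, 2] += 2.0                      # make feature 2 discriminative
y = np.array([0] * 20 + [1] * 20)
selected, scores = snr_rank(X, y, threshold=0.4)
print(selected)                       # feature 2 should rank first
```

In the paper's setting, X would hold the voxel time-aggregated values for the affirmative and negative sentence classes, and the selected indices would correspond to the top-ranked voxels.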
Figure 6. (a) Three snapshots of learning of 8-s fMRI data in a STAM-fMRI model when a subject is reading a negative sentence (time in seconds); Positive connections are colored in blue and negative connections in red. (b) Internal structural pattern represented as spatio-temporal connectivity in the SNN model trained with 8-s fMRI data stream; (c) A functional pattern represented as a sequence of spiking activity of clusters of spiking neurons in a trained SNN model. The arrows show the order of activation of different spatially distributed neuronal areas after fMRI data is presented to an already trained SNNcube.
Figure 7. Parameters for spike encoding and validation of the STAM-fMRI model from Section 5.2. Left panel: For validation, only 70% (0.7) of the initial time points of the fMRI samples, equal to 5.6 s of data, are used, rather than the 8 s of data used for training the full model. Right panel: The model is tested/validated on only 50% of the temporal length (4 s) of the training data. The classification temporal association accuracy for both experiments is 100%. Using less than 50% of the time series results in an accuracy of less than 100%.
Figure 8. Classification spatial association accuracy is 100% when 18 input features are used. The panel on the right shows the correct classification of all input fMRI samples in class 1 (in green) and class 2 (in blue).
Figure 9. Distribution of the average connection weights around the input voxels located in the left and right hemispheres of the trained SNN models related to negative sentences (in (a)) and affirmative sentences (in (b)). The dominant voxels for the discrimination of the negative from the affirmative sentences are LDLPFC, LIPL, LT, and LSGA.
Table 1. STAM-EEG parameter settings of a NeuCube model.
Dataset information: 60 samples; 14 channels (features); time length 128; 3 classes.
Encoding method and parameters: Step Forward (SF); spike threshold 0.5; window size 5; filter type SS.
NeuCube model: 1471 neurons; Talairach brain template; LIF neuron model.
STDP parameters: potential leak rate 0.002; STDP rate 0.01; firing threshold 0.5; training iterations 1; refractory time 6; LDC probability 0.
deSNN classifier parameters: mod 0.8; drift 0.005; K 1; sigma 1.
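The Step Forward (SF) encoding listed in Table 1 (spike threshold 0.5) can be sketched as follows, together with the approximate signal reconstruction illustrated in Figure 3. This follows the commonly published SF algorithm (threshold crossings against a tracked baseline); the exact NeuCube internals may differ:

```python
import numpy as np

def step_forward_encode(signal, threshold=0.5):
    """Step Forward (SF) encoding: emit a +1/-1 spike whenever the signal
    moves more than `threshold` above/below a tracked baseline."""
    spikes = np.zeros(len(signal), dtype=int)
    base = signal[0]
    for t in range(1, len(signal)):
        if signal[t] > base + threshold:
            spikes[t] = 1              # positive (excitatory) spike
            base += threshold
        elif signal[t] < base - threshold:
            spikes[t] = -1             # negative (inhibitory) spike
            base -= threshold
    return spikes

def step_forward_decode(spikes, start, threshold=0.5):
    """Approximate reconstruction of the signal from SF spikes (cf. Figure 3)."""
    return start + threshold * np.cumsum(spikes)

sig = np.array([0.0, 0.6, 1.3, 1.2, 0.2])
spikes = step_forward_encode(sig, threshold=0.5)
print(spikes)   # [ 0  1  1  0 -1]
```

The decode step shows why the reconstruction in Figure 3 tracks the original signal only to within the spike threshold: each spike moves the estimate by a fixed step of 0.5.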
Table 2. Temporal association accuracy of the STAM-EEG model from Figure 4a–c.
Time T1 (% of full time for training) | Number of input variables | Training/validation % of data samples | Temporal association accuracy | RMA
100% (full) | 14 (100%) | 100/100 | 100% | 1
95% | 14 (100%) | 100/100 | 100% | 1
90% | 14 (100%) | 100/100 | 98% | 0.98
80% | 14 (100%) | 100/100 | 95% | 0.95
Table 3. Temporal generalization accuracy of the STAM-EEG model from Figure 4a–c.
Time T1 (% of full time) | Number of input variables used | Training/validation % of data samples | Temporal generalization accuracy | RMA
100% (full) | 14 (100%) | 50/50 | 80% | 1
95% | 14 (100%) | 50/50 | 80% | 1
90% | 14 (100%) | 50/50 | 76% | 0.95
Table 4. Spatial association accuracy of the STAM-EEG model from Figure 4a–c when feature T7 was removed.
Time T1 (% of full time) | Number of input variables | Training/validation % of data samples | Spatial association accuracy | RMA
100% (full) | 14 (100%) | 100/100 | 100% | 1
100% (full) | 13 (93%) | 100/100 | 100% | 1
95% | 13 (93%) | 100/100 | 100% | 1
90% | 13 (93%) | 100/100 | 86% | 0.86
Table 5. Spatial generalization accuracy of the STAM-EEG model from Figure 4a–c when feature T7 was removed.
Time T1 (% of full time) | Number of input variables | Training/validation % of data samples | Spatial generalization accuracy | RMA
100% (full) | 14 (100%) | 50/50 | 80% | 1
100% (full) | 13 (93%) | 50/50 | 100% | 1
95% | 13 (93%) | 50/50 | 80% | 1
90% | 13 (93%) | 50/50 | 76% | 0.95
Table 6. The level of evolved connectivity of each input feature neuron, representing a local brain area, when a person is reading a negative (Neg) vs. affirmative (Aff) sentence; this can be used for feature selection and biomarker discovery. The higher the value, the more important the input feature is.
Area | LT | LOPER | LIPL | LOPER | LDLPFC | LOPER | LT | LDLPFC | RT | CALC
Neg | 1.4 | 0.92 | 1.87 | 1.03 | 2.08 | 1.12 | 1.48 | 0.44 | 0.2 | 0.89
Aff | 0.9 | 0.56 | 1.01 | 0.87 | 1.03 | 0.65 | 0.89 | 0.23 | 0.1 | 0.43
Area | LSGA | LDLPFC | LT | LDLPFC | RT | LDLPFC | LDLPFC | RDLPFC | RSGA | RIT | Avg
Neg | 1.84 | 1.03 | 1.9 | 0.45 | 1.1 | 1.26 | 0.56 | 0.19 | 0.43 | 1.4 | 1.7
Aff | 1.04 | 0.68 | 1.1 | 0.17 | 0.8 | 0.24 | 0.22 | 0.11 | 0.32 | 0.9 | 0.6
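The connectivity-based feature ranking behind Table 6 can be illustrated with a toy computation. Averaging the absolute connection weights in the neighbourhood of each input neuron is an assumed simplification of how such importance values are aggregated, not the exact NeuCube procedure; the network layout below is hypothetical:

```python
import numpy as np

def input_importance(weights, input_neurons, neighbours):
    """Average absolute connection weight in the neighbourhood of each
    input neuron of a trained SNN; a higher score suggests the mapped
    feature contributed more to the learned connectivity.
    weights: (n, n) synaptic weight matrix; neighbours: dict neuron -> list."""
    scores = {}
    for inp in input_neurons:
        nb = neighbours[inp]
        scores[inp] = np.abs(weights[np.ix_(nb, nb)]).mean()
    return scores

# Toy 6-neuron network with two input neurons (hypothetical layout)
rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(6, 6))
W[np.ix_([0, 1, 2], [0, 1, 2])] += 0.5      # strong cluster around input 0
scores = input_importance(W, input_neurons=[0, 3],
                          neighbours={0: [0, 1, 2], 3: [3, 4, 5]})
print(scores)       # input 0 scores higher than input 3
```

In the paper's setting, the input neurons correspond to the brain areas in Table 6, and the per-class scores (Neg vs. Aff) come from two separately trained models.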