Article

Cortico-Hippocampal Computational Modeling Using Quantum Neural Networks to Simulate Classical Conditioning Paradigms

Mustafa Khalid, Jun Wu, Taghreed M. Ali, Thaair Ameen, Ahmed A. Moustafa, Qiuguo Zhu and Rong Xiong

1 The State Key Laboratory of Industrial Control Technology, Institute of Cyber-Systems and Control, Zhejiang University, Hangzhou 310027, China
2 The Binhai Industrial Technology Research Institute of Zhejiang University, Tianjin 300301, China
3 Electrical Engineering Department, University of Baghdad, Baghdad 10071, Iraq
4 The Institute of Computer Science, Zhejiang University, Hangzhou 310027, China
5 The Marcs Institute for Brain and Behaviour and School of Psychology, Western Sydney University, Sydney 1797, Australia
6 The Department of Human Anatomy and Physiology, the Faculty of Health Sciences, University of Johannesburg, Johannesburg 2198, South Africa
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Brain Sci. 2020, 10(7), 431; https://doi.org/10.3390/brainsci10070431
Submission received: 5 June 2020 / Revised: 26 June 2020 / Accepted: 3 July 2020 / Published: 7 July 2020
(This article belongs to the Section Computational Neuroscience and Neuroinformatics)

Abstract

Most existing cortico-hippocampal computational models use different artificial neural network topologies. These conventional approaches, which simulate various biological paradigms, can suffer from slow training and inadequate conditioned responses for two reasons: increases in the number of conditioned stimuli and in the complexity of the simulated biological paradigms across phases. In this paper, a cortico-hippocampal computational quantum (CHCQ) model is proposed for modeling intact and lesioned systems. The CHCQ model is the first computational model that uses quantum neural networks to simulate biological paradigms. The model consists of two entangled quantum neural networks: an adaptive single-layer feedforward quantum neural network and an autoencoder quantum neural network. The CHCQ model adaptively updates all the weights of its quantum neural networks using the quantum instar, outstar, and Widrow–Hoff learning algorithms. Our model successfully simulated several biological processes and maintained the output conditioned responses quickly and efficiently. Moreover, the results were consistent with prior biological studies.

1. Introduction

The perceptron, proposed by Rosenblatt in 1958, is the first fundamental model of artificial neural networks (ANNs). Since then, the assumption of weighted connections and neurons has been used to simulate biological brain behavior and to find optimal solutions for multivariate problems [1].
For decades, ANNs have been considered the dominant approach to tasks requiring intelligence, such as object classification, natural language processing, data recommendation, and facial recognition. Topologies such as feedforward, recurrent, convolutional, classical, and deep neural networks have been used with various modifications in different applications. The learning process of an ANN is based on the optimization of an assigned performance function over a sequence of iterations to map the output vectors to the related inputs [2,3].
Classical and deep neural networks have been used for decades to simulate human and animal brain regions with distinct functions. Researchers have proposed various ANN-based models to mimic specific regions of the brain [4]. They validated these models against empirical biological experiments using the conditioned stimulus (CS), unconditioned stimulus (US), and conditioned response (CR) [3,5]. Such models exploit the powerful ability of ANNs to probe brain activities in humans and animals [6,7].
ANNs, which mimic many real and well-defined systems, have become popular and practical, particularly in biological fields such as computational neuroscience and cognitive modeling. However, ANNs still have some severe deficiencies, such as inadequate modeling of memory, dependency on the convergence of iterative learning, and the diversity of optimization techniques needed to find optimal network parameters [2,8]. To address these problems, Schrödinger’s quantum equation has been used to develop the quantum neural network (QNN) approach [9].
Consequently, several pioneering quantum computing models have been proposed, such as the quantum computational network of Deutsch [10], the factoring algorithm of Shor [11], and the search algorithm of Grover [12]. Kak introduced the first quantum network that depends on neural network principles [9]. Since the first QNN model was introduced, various other models have been proposed [13].
The possibility of using quantum mechanics for computational modeling was first proposed by Feynman. In 1985, he examined a fundamental quantum model that represents the elementary logical truth tables by changing the spin directions of electrons in quantum-mechanical terms. Such features inspired researchers to consider QNN implementations [14].
Brain memory modeling has opened the door to entangled QNNs because quantum computing is considered a powerful tool for accelerating the performance of computational neural network models [15]. Furthermore, the entangled QNN has introduced the concept of using a quantum bit (qubit) in place of the neurons of a neural network [16,17]. Neural networks are based on the idea of interconnected units, which represent biological neurons. These units feed the input signal to one another using two states: “active” and “inactive” [18]. Rather than the bits of classical computers or the neurons of ANNs, QNNs use qubits, the smallest units that store a circuit’s state during computation in a quantum network [19], which likewise have two states of behavior [18].
In this paper, we propose a cortico-hippocampal computational quantum (CHCQ) model based on two entangled QNNs to simulate different biological paradigms in intact and lesioned systems. Studying the behavior of both intact and hippocampal-lesioned systems is important for multiphase learning paradigms: most of these paradigms produce a similar response in the first phase but respond differently in the later phases. The CHCQ model mainly consists of an adaptive single-layer feedforward QNN (ASLFFQNN), which models the cortical module, and an autoencoder QNN (AQNN), which represents the hippocampal module. The AQNN forwards its internal representations for adaptive learning in the cortical module. We compare the CHCQ model with the Green model [20], our recently published model based on classical neural networks. We further compare the results of the CHCQ model with those of the Gluck/Myers model [21] and the model of Moustafa et al. [22] by simulating the biological paradigms described in Table 1. These comparisons confirm that the CHCQ model performs a wide range of biological paradigms and classical conditioning tasks effectively.
The use of quantum computation makes the CHCQ model more powerful for simulating the cortico-hippocampal region than classical computation. Instead of representing an $n$-bit input cue as a single state, as in classical computation, quantum computation represents the same cue over $2^n$ possible states. The quantum circuit (e.g., quantum rotation gates) provides a computational speedup over ANNs in classical conditioning simulations. This speedup enables the CHCQ model to simulate many biological paradigms efficiently.
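To make the $2^n$ scaling concrete, the following minimal NumPy sketch (ours, not taken from the paper's implementation) prepares three qubits from $|000\rangle$ by applying a Hadamard gate to each qubit, yielding $2^3 = 8$ equal amplitudes held simultaneously:

```python
import numpy as np

# Hadamard gate and the |0> basis state.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)
ket0 = np.array([1.0, 0.0])

n = 3
state = ket0
U = H
for _ in range(n - 1):
    state = np.kron(state, ket0)   # build the n-qubit state |000>
    U = np.kron(U, H)              # H applied to every qubit

superposed = U @ state             # 2**n amplitudes, each 1/sqrt(2**n)
print(superposed)                  # eight entries of about 0.3536
```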
This paper proceeds as follows: Initially, we explain the structure of the qubit, quantum rotation gate, and qubit neuron model. Then, the proposed model for both intact and lesioned systems is introduced. Subsequently, we discuss the results of the simulated biological paradigms in Table 1 and compare them with the Gluck/Myers, Moustafa et al., and Green models. Finally, the conclusion summarizes our contributions.

2. Materials and Methods

2.1. Qubit

The smallest unit that stores information in a quantum computer is called a qubit; it is the counterpart of the bit in conventional computers. The two quantum physical states $|0\rangle$ and $|1\rangle$ denote the classical bit values 0 and 1, respectively. The $|\cdot\rangle$ notation, called Dirac notation, is used to represent quantum states. In contrast to a bit, a qubit state $|\phi\rangle$ forms a superposition of states as a linear combination, expressed as follows:

$$|\phi\rangle = \alpha|0\rangle + \beta|1\rangle, \tag{1}$$

where $\alpha$ and $\beta$ are two complex amplitudes whose squared magnitudes give the probabilities of the $|0\rangle$ and $|1\rangle$ states, respectively. Equation (1) can be rewritten in functional form, in which the real and imaginary parts carry the relative probabilities of $|0\rangle$ and $|1\rangle$:

$$f(\theta) = e^{i\theta} = \cos(\theta) + i\sin(\theta). \tag{2}$$

If there are $n$ qubits, then the qubit system $|\psi\rangle$ is a superposition of the $2^n$ (or $N$) ground states:

$$|\psi\rangle = \sum_{n=1}^{N} A_n |n\rangle, \tag{3}$$

where $A_n$ is the amplitude of the related quantum state $|n\rangle$. Naturally, the squared amplitudes sum to one, as described in the following equation:

$$\sum_{n=1}^{N} |A_n|^2 = 1. \tag{4}$$

As a result, $|\phi\rangle$ in (1) collapses into either the $|0\rangle$ state with probability $|\alpha|^2$ or the $|1\rangle$ state with probability $|\beta|^2$. In particular,

$$|\alpha|^2 + |\beta|^2 = 1. \tag{5}$$
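As a minimal sketch of Equations (1)–(5) (our illustration, not the paper's code), a qubit is a pair of complex amplitudes, and measurement collapses it with the squared magnitudes as probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)

# A qubit |phi> = alpha|0> + beta|1>, Equation (1), with the
# normalization |alpha|^2 + |beta|^2 = 1 of Equation (5).
alpha, beta = 0.6 + 0.0j, 0.0 + 0.8j
assert np.isclose(abs(alpha) ** 2 + abs(beta) ** 2, 1.0)

# Measurement collapses the state to |0> or |1> with probabilities
# |alpha|^2 and |beta|^2.
outcome = rng.choice([0, 1], p=[abs(alpha) ** 2, abs(beta) ** 2])
print(f"collapsed to |{outcome}>")
```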

2.2. Quantum Rotation Gate

Just as conventional computation has logic gates, quantum computation has an analogous construct called a quantum gate: a unitary transform applied to a qubit state over a particular interval. In this paper, we use the Walsh–Hadamard transform, or Hadamard gate (H). This gate produces equal relative probabilities for each qubit, putting the qubit system into superposition as follows:

$$H(\alpha|0\rangle + \beta|1\rangle) = \alpha\frac{|0\rangle + |1\rangle}{\sqrt{2}} + \beta\frac{|0\rangle - |1\rangle}{\sqrt{2}}. \tag{6}$$
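In matrix form (again a sketch of ours), the Hadamard gate of Equation (6) is a 2×2 unitary acting on the amplitude vector:

```python
import numpy as np

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)   # Hadamard gate, Equation (6)

ket0 = np.array([1.0, 0.0])            # |0>
ket1 = np.array([0.0, 1.0])            # |1>

print(H @ ket0)   # (|0> + |1>)/sqrt(2): amplitudes [0.707, 0.707]
print(H @ ket1)   # (|0> - |1>)/sqrt(2): amplitudes [0.707, -0.707]
```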

2.3. Qubit Neuron Model

The proposed qubit neuron model shown in Figure 1 has input–output stages that behave according to the following equations:

$$z = f(y), \tag{7}$$
$$y = \frac{\pi}{2} S(\varepsilon) - \mathrm{Arg}(v), \tag{8}$$
$$S(\varepsilon) = \frac{1}{1 + e^{-\varepsilon}}, \tag{9}$$
$$v_j = \sum_{i}^{R} w_{ij} \cdot f(\tilde{p}_i^{I}) - f(\vartheta), \tag{10}$$
$$\tilde{p}_i = \frac{\pi}{2} p_i, \tag{11}$$

where the function $f$ is described in (2); $y$ is the output of the network; $S$ is the logsigmoid activation function of the reversal parameter $\varepsilon$, with range [0, 1]; $\mathrm{Arg}(v)$ is the argument function, which returns the phase of the complex number $v$; $w_{ij}$ is the weight from the $i$th input ($p_i$) to the $j$th hidden qubit; the tilde notation denotes the input qubit produced by the Hadamard gate; and $\vartheta$ is the phase parameter acting as a threshold.
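The following sketch implements Equations (7)–(11) for a single qubit neuron in NumPy. It is our illustrative reading of the model, assuming real-valued weights and scalar $\varepsilon$ and $\vartheta$, as in the equations above:

```python
import numpy as np

def f(theta):
    """Qubit state function of Equation (2): e^{i*theta}."""
    return np.exp(1j * theta)

def logsigmoid(eps):
    """Equation (9): maps the reversal parameter eps into [0, 1]."""
    return 1.0 / (1.0 + np.exp(-eps))

def qubit_neuron(p, w, eps, theta):
    """One qubit neuron, Equations (7)-(11).
    p: input cue in [0, 1] (length R); w: real weights (length R);
    eps, theta: the neuron's reversal and phase (threshold) parameters."""
    p_tilde = (np.pi / 2) * p                        # Equation (11)
    v = np.sum(w * f(p_tilde)) - f(theta)            # Equation (10)
    y = (np.pi / 2) * logsigmoid(eps) - np.angle(v)  # Equations (8)-(9)
    return f(y)                                      # Equation (7)

z = qubit_neuron(np.array([1.0, 0.0]), np.array([0.4, 0.6]), 0.1, 0.2)
print(z)  # a unit-magnitude complex number encoding the neuron's phase
```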

2.4. Proposed Model

In Table 1, A and B are the two input CSs, and the associated numbers are their amplitudes. The suffixes X, Y, and Z denote the contexts of the related input cues, the plus (+) and minus (−) signs represent the paired and unpaired status between CSs and USs, and the prime sign stands for the related CSs.
The CHCQ model, as shown in Figure 2, principally consists of two QNNs as follows:
  • The hippocampal region has an AQNN that uses both instar and outstar rules to reproduce the inputs and generate the internal representations, which are forwarded to the cortical region in intact systems but not in lesioned ones.
  • The cortical region has an ASLFFQNN that uses both the instar and Widrow–Hoff rules to update its weights and quantum parameters.
The instar and outstar rules are learning algorithms developed by Grossberg [23] and are used with normalized input cues; the Widrow–Hoff rule is a supervised learning algorithm developed by Widrow and Hoff [24] that depends on the desired output.
The CHCQ model was implemented for two systems, as shown in Figure 3. The intact system is the whole proposed CHCQ model, representing both the cortical and hippocampal networks, whereas the lesioned system is the same model after lesioning the hippocampal module, i.e., omitting the connective link that forwards the internal representations from the hidden layer of the AQNN to the ASLFFQNN.
The output response of the ASLFFQNN is the measured CR, which takes a value between 0 and 1. During the learning procedure, the output CR varies depending on the task and condition. Learning ends when the CR reaches its desired constant state, which is the steady state (either 0 or 1).
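In pseudocode terms, training a task therefore reduces to running trials until the CR settles. The sketch below is hypothetical (the `trial` interface is our own, not an API from the paper):

```python
def train_until_steady(model, cue, us, tol=1e-3, max_trials=500):
    """Run learning trials until the CR reaches its desired steady
    state (the US value, 0 or 1). `model.trial(cue, us)` is assumed to
    perform one training trial and return the current CR in [0, 1]."""
    cr = None
    for trial in range(1, max_trials + 1):
        cr = model.trial(cue, us)
        if abs(cr - us) < tol:      # steady state reached
            return trial, cr        # trials-to-criterion, final CR
    return max_trials, cr
```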

2.4.1. General QNN Architecture

The QNNs of both the cortical and hippocampal regions consist of three layers: an input layer, a hidden layer, and an output layer.

Input Layer (I)

As shown in Figure 4, this layer converts the values of the input cue $p$, which lie in the range $[0, 1]$, into the range $[0, \pi/2]$, which is suitable for quantum states, using the Hadamard gate. It then computes the output according to (7) and (11) as follows:

$$z_i^{I} = f(\tilde{p}_i^{I}), \tag{12}$$
$$\tilde{p}_i^{I} = \frac{\pi}{2} p_i, \tag{13}$$

where $p_i$ is the $i$th item of the input cue and $\tilde{p}_i^{I}$ is the quantum input.

Hidden Layer (H)

The hidden layer applies the aforementioned set of Equations (7)–(11) to obtain its output as follows:

$$z_j^{H} = f(y_j^{H}), \tag{14}$$

$$v_j^{H} = \sum_{i=1}^{R} w_{ij} \cdot f(\tilde{p}_i^{I}) - f(\vartheta_j^{H}) = \sum_{i=1}^{R} w_{ij} \cdot e^{i\tilde{p}_i^{I}} - f(\vartheta_j^{H}) = \sum_{i=1}^{R} w_{ij}\left(\cos(\tilde{p}_i^{I}) + i\sin(\tilde{p}_i^{I})\right) - \cos(\vartheta_j^{H}) - i\sin(\vartheta_j^{H}), \tag{15}$$

$$y_j^{H} = \frac{\pi}{2} S(\varepsilon_j^{H}) - \mathrm{Arg}(v_j^{H}) = \frac{\pi}{2} S(\varepsilon_j^{H}) - \arctan\frac{\sum_{i=1}^{R} w_{ij}\sin(\tilde{p}_i^{I}) - \sin(\vartheta_j^{H})}{\sum_{i=1}^{R} w_{ij}\cos(\tilde{p}_i^{I}) - \cos(\vartheta_j^{H})}, \tag{16}$$

where $w_{ij}$ is the hidden-layer weight from the $i$th input node to the $j$th hidden qubit, for $R$ inputs and $Q$ qubits.

Output Layer (O)

This layer follows a scheme similar to that of the previous layer, using the corresponding set of equations to obtain the final output:

$$z^{O} = f(y^{O}), \tag{17}$$

$$v^{O} = \sum_{j=1}^{Q} w_{j} \cdot f(y_j^{H}) - f(\vartheta^{O}) = \sum_{j=1}^{Q} w_{j} \cdot e^{i y_j^{H}} - f(\vartheta^{O}) = \sum_{j=1}^{Q} w_{j}\left(\cos(y_j^{H}) + i\sin(y_j^{H})\right) - \cos(\vartheta^{O}) - i\sin(\vartheta^{O}), \tag{18}$$

$$y^{O} = \frac{\pi}{2} S(\varepsilon^{O}) - \mathrm{Arg}(v^{O}) = \frac{\pi}{2} S(\varepsilon^{O}) - \arctan\frac{\sum_{j=1}^{Q} w_{j}\sin(y_j^{H}) - \sin(\vartheta^{O})}{\sum_{j=1}^{Q} w_{j}\cos(y_j^{H}) - \cos(\vartheta^{O})}, \tag{19}$$

where $w_{j}$ is the output-layer weight from the $j$th hidden qubit to the output.
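Putting Equations (12)–(19) together, each layer maps incoming phases to outgoing phases. The sketch below (ours, under the same assumptions as the neuron sketch above) chains the input conversion, hidden layer, and output layer. The final readout of the CR from $z^O$ is not fully specified above; here we assume, as one common choice for qubit neuron networks, the squared imaginary part $\sin^2(y^O)$, which lies in [0, 1]:

```python
import numpy as np

def f(theta):
    return np.exp(1j * theta)                       # Equation (2)

def logsigmoid(eps):
    return 1.0 / (1.0 + np.exp(-eps))               # Equation (9)

def layer(phases_in, W, eps, theta):
    """One QNN layer, Equations (14)-(16) / (17)-(19).
    phases_in: (n_in,) phases; W: (n_in, n_out) real weights;
    eps, theta: (n_out,) reversal and phase parameters."""
    v = f(phases_in) @ W - f(theta)                 # complex, one per output qubit
    return (np.pi / 2) * logsigmoid(eps) - np.angle(v)

rng = np.random.default_rng(1)
R, Q = 3, 4                                         # inputs, hidden qubits
W_h = rng.normal(size=(R, Q))
eps_h, th_h = rng.normal(size=Q), rng.normal(size=Q)
W_o = rng.normal(size=(Q, 1))
eps_o, th_o = rng.normal(size=1), rng.normal(size=1)

p = np.array([1.0, 0.0, 1.0])                       # example input cue
p_tilde = (np.pi / 2) * p                           # Equations (12)-(13)
y_h = layer(p_tilde, W_h, eps_h, th_h)              # hidden phases
y_o = layer(y_h, W_o, eps_o, th_o)                  # output phase
cr = np.sin(y_o) ** 2                               # assumed CR readout in [0, 1]
print(cr)
```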

2.4.2. Hippocampal Module Network

The AQNN represents the hippocampal region using fully connected qubits in a single-hidden-layer network. The AQNN encodes the input data and reproduces it at the output layer to generate the internal representations. The weights and QNN circuit parameters ($\varepsilon$ and $\vartheta$) are updated by instar learning in the input layer and outstar learning in the output layer, as follows:

$$W_{ij}^{instar}(k+1) = W_{ij}^{instar}(k) + \mu\, a_j^{h1}(k)\left(p_i^{T}(k) - W_{ij}^{h1}(k)\right), \tag{20}$$
$$\varepsilon_{ij}^{instar}(k+1) = \varepsilon_{ij}^{instar}(k) + \mu\, a_j^{h1}(k)\left(p_i^{T}(k) - \varepsilon_{ij}^{h1}(k)\right), \tag{21}$$
$$\vartheta_{ij}^{instar}(k+1) = \vartheta_{ij}^{instar}(k) + \mu\, a_j^{h1}(k)\left(p_i^{T}(k) - \vartheta_{ij}^{h1}(k)\right), \tag{22}$$

where $\mu$ is a small positive number representing the learning rate; $a_j^{h1}$ is the hippocampal internal-representation output vector with $Q$ nodes; $p$ is the input cue with $R$ qubits; the superscript $T$ denotes the transpose of the relevant matrix; and $W_{ij}^{h1}$ is the hippocampal internal-layer weight from the $i$th input node to the $j$th hidden qubit, with $R$ qubits and $Q$ nodes. Note that $(k)$ refers to the current state, whereas $(k+1)$ is the succeeding state.
The internal representations are then gathered to reproduce the input-cue similarities at the output layer. The AQNN updates the weights and QNN circuit parameters of the output layer using the outstar learning algorithm through the following set of equations, completing this unsupervised learning process:

$$W_{ij}^{outstar}(k+1) = W_{ij}^{outstar}(k) + \mu\left(a_j^{h2}(k) - W_{ij}^{h2}(k)\right) a_i^{h1\,T}(k), \tag{23}$$
$$\varepsilon_{ij}^{outstar}(k+1) = \varepsilon_{ij}^{outstar}(k) + \mu\left(a_j^{h2}(k) - \varepsilon_{ij}^{h2}(k)\right) a_i^{h1\,T}(k), \tag{24}$$
$$\vartheta_{ij}^{outstar}(k+1) = \vartheta_{ij}^{outstar}(k) + \mu\left(a_j^{h2}(k) - \vartheta_{ij}^{h2}(k)\right) a_i^{h1\,T}(k), \tag{25}$$

where $a_j^{h2}$ is the actual output vector of the hippocampal side with $R$ outputs, and $W_{ij}^{h2}$ is the weight from the $i$th hippocampal hidden qubit to the $j$th corresponding output.
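A vectorized sketch of the instar and outstar updates of Equations (20)–(25) follows (our illustration; by the equations above, the same form applies to the $\varepsilon$ and $\vartheta$ parameters):

```python
import numpy as np

def instar_update(W, p, a, mu=0.1):
    """Instar rule, Equations (20)-(22): each hidden unit's incoming
    weights move toward the input cue, gated by that unit's activity.
    W: (R, Q) input-to-hidden weights; p: (R,) cue; a: (Q,) activities."""
    return W + mu * a[None, :] * (p[:, None] - W)

def outstar_update(W, a_hidden, a_out, mu=0.1):
    """Outstar rule, Equations (23)-(25): each hidden unit's outgoing
    weights move toward the layer's output, gated by the hidden activity.
    W: (Q, R) hidden-to-output weights; a_hidden: (Q,); a_out: (R,)."""
    return W + mu * (a_out[None, :] - W) * a_hidden[:, None]
```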

2.4.3. Cortical Module Network

This region has a fully connected, supervised ASLFFQNN. The input layer is trained by the instar learning rule to adapt the internally mapped representations to the input cue; these representations are generated by the hippocampal side as an adaptive learning signal. In the lesioned-system case, the cortical input layer's elements (weights and quantum parameters) are initialized randomly. Otherwise, the internal elements are initialized by mapping them to their counterparts in the hippocampal module through an adaptive signal, which carries the adaptive weights and the other quantum parameters, as follows:

$$W_{ij}^{instar}(k+1) = W_{ij}^{instar}(k) + \mu\, a_j^{c1}(k)\left(p_i^{T}(k) - W_{ij}^{c1}(k)\right), \tag{26}$$
$$\varepsilon_{ij}^{instar}(k+1) = \varepsilon_{ij}^{instar}(k) + \mu\, a_j^{c1}(k)\left(p_i^{T}(k) - \varepsilon_{ij}^{c1}(k)\right), \tag{27}$$
$$\vartheta_{ij}^{instar}(k+1) = \vartheta_{ij}^{instar}(k) + \mu\, a_j^{c1}(k)\left(p_i^{T}(k) - \vartheta_{ij}^{c1}(k)\right), \tag{28}$$

where $a_j^{c1}$ is the cortical internal-representation output vector with $Q$ qubits, $p$ is the input cue with $R$ inputs, and $W_{ij}^{c1}$ is the cortical internal-layer weight from the $i$th input node to the $j$th hidden qubit, with $R$ inputs and $Q$ qubits.
Finally, the cortical upper layer uses the Widrow–Hoff learning algorithm to update the weights and quantum parameters at the output layer as follows:

$$W_{ij}^{c2}(k+1) = W_{ij}^{c2}(k) + \mu\, a_i^{c1}(k)\, e_j(k), \tag{29}$$
$$\varepsilon_{ij}^{c2}(k+1) = \varepsilon_{ij}^{c2}(k) + \mu\, a_i^{c1}(k)\, e_j(k), \tag{30}$$
$$\vartheta_{ij}^{c2}(k+1) = \vartheta_{ij}^{c2}(k) + \mu\, a_i^{c1}(k)\, e_j(k), \tag{31}$$
$$MAE = \frac{1}{2}\sum_{j=1}^{iter} e_j = \frac{1}{2}\sum_{j=1}^{iter}\left(y_j - d_j\right), \tag{32}$$

where $W_{ij}^{c2}$ is the top-layer weight matrix of the cortical side from input $p_i$ to the output nodes $y_j$, $e_j$ is the error of the $j$th output, $d_j$ is the desired output (the US), and $MAE$ is the mean absolute error between the actual and desired outputs.
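The top-layer update of Equations (29)–(31) is the classic LMS rule. In the sketch below (ours), the error is taken as $e_j = d_j - y_j$ so that the additive update reduces the error:

```python
import numpy as np

def widrow_hoff_update(W, a, y, d, mu=0.1):
    """Widrow-Hoff (LMS) update, Equations (29)-(31).
    W: (Q, n_out) top-layer weights; a: (Q,) hidden activities;
    y: (n_out,) actual outputs; d: (n_out,) desired outputs (US).
    Error convention here: e = d - y, so W moves to reduce |e|."""
    e = d - y
    return W + mu * np.outer(a, e)

def mean_abs_error(y, d):
    """Mean absolute error between actual and desired outputs,
    in the spirit of Equation (32)."""
    return np.mean(np.abs(y - d))
```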

3. Results

The CHCQ model was tested using the tasks listed in Table 1 for both intact and lesioned systems. The results are compared with those of the Moustafa et al., Gluck/Myers, and Green models. For each task, we measured the CR value after reaching the steady state in each trial to obtain the final response. The final stable computed CR values should be either 1 or 0 for CS+ or CS− learning, respectively.

3.1. Primitive Tasks

The CHCQ model was first trained to pair or unpair a single CS with a US, in addition to the context. For both simple tasks (AX+ and AX−), only one learning phase was used to obtain the final output response. The CR of the A+ stimulus was obtained rapidly and reached the steady state in a shorter time than with the model of Moustafa et al., as shown in Figure 5 and confirmed by the experiments of [25,26,27]. Likewise, Figure 6 shows how quickly the CR of the A− stimulus and the related context reached the zero final state. The CHCQ model needed only a few trials to reach the steady state for A+, and even fewer for A−, compared with the other models, as shown in Table 2, Table 3 and Table 4.
Similarly, Figure 7 compares the CR of the A+ stimulus for both intact and lesioned systems obtained by the CHCQ model and the model of Moustafa et al. Clearly, our model completes learning efficiently, directly, and quickly. In addition, the CR of the lesioned system was obtained even more quickly than that of the intact system, as confirmed by the experiments of [28,29,30,31,32].

3.2. Stimulus Discrimination

As in the previous task, the stimulus discrimination task consisted of one phase, but it required discriminating between two different stimuli according to their relations to the US. The CSs A+ and B− (i.e., <A+, B−>) were simulated for both intact and lesioned systems. The CHCQ model was capable of discriminating the two different CSs, whose CRs reached their final states quickly and efficiently, as shown in Figure 8.
We note that the CHCQ model not only discriminated the two CSs successfully but also responded quickly in the lesioned system. Moreover, the CRs of both A+ and B− reached their final states in fewer trials than their counterparts in the intact system. Learning was thus accelerated in the lesioned system, making it more responsive than the intact system, as confirmed by the experiments of [33,34,35]. The output CR of the CHCQ model was also simulated with noisy and disrupted internal representations to test its stimulus discrimination ability under such conditions. Although the CHCQ model required more trials than for regular discrimination, it discriminated the two CSs successfully and reached the desired final state efficiently, as shown in Figure 9.

3.3. Discrimination Reversal

Building on the previous task, it is worth studying the effect of reversing the training set on the output CR after successful discrimination. Thus, the discrimination reversal task was simulated by first training for stimulus discrimination (<A+, B−>) and then beginning the reversal task <A−, B+>.
The intact system was initially trained with the <A+, B−> training set, then with <A−, B+>. The results show that discrimination between A and B was completed in fewer trials than in the stimulus discrimination task. This indicates that after training the network to differentiate A and B and then reversing the contingencies, the exchange in CS status plus the additional learning led to fewer trials than in the CS discrimination task, as shown in Figure 10.
Performing the same task in the lesioned system revealed that the lesioned system learned more quickly than the intact system. In contrast to the intact system, however, its second phase took longer to discriminate the two CSs, as is evident in Figure 10. The CHCQ model's results for this task match the empirical conclusions of [36,37].

3.4. Blocking

The blocking task consists of three consecutive phases: <A+>, <AB+>, and <B−>. The first phase initiates the model with a pre-exposed CS. The second phase presents a compound of different CSs, one of which becomes blocked, whereas the last phase tests the blocked CS.
Figure 11 shows the CR of the final learning phase in the intact system, which exhibits the blocking effect caused by the prior conditioning in the preceding phases, as confirmed experimentally by [38,39,40]. The blocking effect can be eliminated by extending the conditioning of the second phase, as shown in Figure 12 and confirmed experimentally by [38,41,42]. Lesioning the CHCQ model did not affect the output CR, as shown in Figure 13; this is supported by the experimental findings of [43,44,45].

3.5. Overshadowing

The overshadowing task consists of only two phases. The first phase is compound learning of two or more CSs, one of which is more salient than the others and produces a notable CR that overshadows those of the remaining CSs.
The model was trained with <AB+> first, then with <A+> and <B+> together. As a result, the intact system performed this task efficiently: A required fewer trials than B after both were successfully paired with the US, as shown in Figure 14 and confirmed experimentally by [46,47,48].
As in the blocking task, the lesioned system was not affected by overshadowing, as shown in Figure 15 and confirmed by [43]. Expanding the pre-exposure phase could also eliminate the overshadowing effect, as shown in Figure 16, which is supported by the experimental findings of [49,50].

3.6. Easy–Hard Transfer

This task follows the same concept as the simple and stimulus discrimination tasks, but uses an input cue with different amplitudes within the range [0, 1]. Such cues could be heating levels, a spectrum of frequencies, or any related inputs with graded values.
Suppose our model needs to recognize two similar CSs: a stimulus with an amplitude greater than 0.5 is treated as a CS–US paired condition (e.g., <A+>), whereas one with an amplitude less than 0.5 is treated as a CS–US unpaired condition for the same stimulus (e.g., <A−>).
We started with an easy task by assigning a stimulus value of 0.9 for the paired condition and 0.1 for the unpaired one. Figure 17 and Figure 18 show the output CRs of both intact and lesioned systems for the easy transfer task. We then simulated a harder transfer task by narrowing the difference between the two levels to 0.6 and 0.4, respectively.
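For concreteness, the two training sets can be written as amplitude–target pairs (our encoding for illustration, not code from the study):

```python
# Each pair is (stimulus amplitude, US target): amplitudes above 0.5 are
# paired trials (target 1), amplitudes below 0.5 unpaired (target 0).
easy_set = [(0.9, 1), (0.1, 0)]   # <A = 0.9+, A = 0.1->
hard_set = [(0.6, 1), (0.4, 0)]   # <A = 0.6+, A = 0.4->
```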
After training the CHCQ model with a training set of <A = 0.9+, A = 0.1−> (easy transfer learning), the expected output CRs of the intact and lesioned systems in Figure 17 and Figure 18 were approximately similar to the outputs of the stimulus discrimination task shown in Figure 8.
Both intact and lesioned systems successfully completed the easy transfer stimulus discrimination within a short time or a few trials. In addition, they successfully finished the hard transfer task with a more difficult training set of <A = 0.6+, A = 0.4−>, as shown in Figure 19 and Figure 20, which are approximately similar to the outputs in Figure 8. Moreover, for this task, Figure 21 compares the speed and efficiency of the control and experimental results of the CHCQ model with those of the Green model. Generally, the CHCQ model’s results for this task are confirmed by the experimental studies of [51,52,53,54,55,56,57].

3.7. Latent Inhibition

The latent inhibition task consists of two learning phases: an unreinforced pre-exposure (<A−>) followed by a CS–US pairing phase (<A+>). Figure 22 shows the output CRs of the intact system for A+ learning with and without pre-exposure to <A−>. Lesioning the model did not affect the output CR of the second phase after the initial pre-exposure, as shown in Figure 23. All of these results were confirmed experimentally by [58,59,60].
Figure 24 and Figure 25 show the CRs of the second phase for the intact and lesioned systems, respectively, and compare them with the results obtained by the Green model. Clearly, the CR of the CHCQ model required fewer trials and reached the final state directly. Moreover, the lesioned CR results required fewer trials than those of the intact system.

3.8. Generic Feedforward Multilayer Network

As mentioned in the latent inhibition task, lesioning the model did not affect the output CR. To verify this, the CHCQ output of the lesioned system was compared with that of a generic supervised multilayer feedforward network [61].
As shown in Figure 26, the CHCQ model successfully obtained the desired output. Additionally, the CHCQ output showed a similar response to the output of the generic feedforward network regardless of the pretraining phase at the beginning.

3.9. Sensory Preconditioning

We simulated this task by combining two CSs, say A and B, and presenting them together in the same unreinforced training phase <AB−>. In the second phase, conditioning was applied so that A alone predicts the US, using the <A+> training set. Finally, a <B−> training phase was added to isolate the effect of the <A+> training set.
Figure 27 shows the output CRs of all three phases, as confirmed by [62,63,64,65,66,67,68,69]. The effect of preconditioning on the CR value of the last trial of the third phase is clear; Figure 28 specifically shows the CR of the last phase. Lesioning the CHCQ model had no effect on the sensory preconditioning task, as shown in Figure 29 and confirmed by [45,70].

3.10. Compound Preconditioning

The compound preconditioning task was simulated in the intact system by compounding the second and third phases of the sensory preconditioning task and integrating them into a single second phase. Discriminating A and B became more difficult in this task: Figure 30 shows how much longer the compound discrimination took without the preconditioning phase. Although the task took relatively longer to complete, the proposed model required fewer trials than the model of Moustafa et al., as shown in Figure 31. In addition, as in the previous task, lesioning the CHCQ model had no effect on the compound preconditioning task. The results of the compound preconditioning task are confirmed by the experimental research of [68,69,71].

3.11. Context Sensitivity

The context of the input cue was changed after a number of training trials by shifting its value randomly to obtain a different context in the second phase. The context shift slowed learning in the intact system without affecting the speed of the lesioned one, as shown in Figure 32. The output CR values shown in Figure 33 demonstrate that the CHCQ model behaved more efficiently. Figure 34 shows the effect of extending the first-phase training, which abolished the context sensitivity effect. All of these results are confirmed by the experimental research of [72,73,74,75].
In addition, the CR response was affected when simulating the context sensitivity of latent inhibition in the intact system. The context shift reduces the learning carried over from the acquisition phase, which increases the predicted CR value in the first trial of the second phase, as shown in Figure 35.

4. Conclusions

In this paper, we proposed the adaptive CHCQ model shown in Figure 2 and Figure 3. The CHCQ model is the first computational model to use quantum computation techniques to simulate biological paradigms of classical conditioning. It consists of two main parts: a hippocampal region, represented by an AQNN, and a cortical region, represented by an ASLFFQNN. The AQNN uses the quantum instar and outstar learning algorithms to update the weights and QNN circuit parameters and to generate the internal representations that are adaptively forwarded to the internal layer of the ASLFFQNN. The ASLFFQNN uses the instar learning algorithm in its input layer and the Widrow–Hoff learning algorithm in its upper layer to update the weights and QNN circuit parameters. The CHCQ model simulated all the biological paradigms listed in Table 1 successfully, with efficient and rapid output CRs. The quantum circuit provides a computational speedup over ANNs in classical conditioning simulations, and the parallelism inherent in quantum computation gives the QNN an advantage over the ANN.
The results show notable enhancements, supported by experimental studies, across various tasks, outperforming the previously published models. A comparison of the CHCQ model with M1 (the Gluck/Myers model) [21], M2 (the model of Moustafa et al.) [22], and G (the Green model) [20] showed that the CHCQ model has a fast and reliable output CR. Table 2 shows that all the CRs of the CHCQ model reached the final desired states directly, after fewer trials than the Gluck/Myers model needed. Similarly, Table 3 shows that the CHCQ model requires even fewer trials than the model of Moustafa et al. Moreover, the CHCQ model completed all of its tasks more quickly and with more reliable results than the Green model, as shown in Table 4. It is worth mentioning that the multiphase learning paradigms of the CHCQ model show no improvement in the first phase compared with the other models, because the first phase of such tasks acts as a pre-exposure phase, which takes a similar period in all models.
Our future work will focus on building a quantum controller to obtain more stable output CRs using delayed match-to-sample task theory.

Author Contributions

Conceptualization, M.K. and J.W.; methodology, M.K., J.W., and T.M.A.; software, M.K.; validation, M.K. and J.W.; formal analysis, M.K. and J.W.; investigation, M.K. and T.A.; resources, M.K., T.A., and A.A.M.; data curation, M.K., T.A., and A.A.M.; writing—original draft preparation, M.K.; writing—review and editing, M.K., J.W., and A.A.M.; visualization, M.K. and J.W.; supervision, J.W.; project administration, J.W., Q.Z., and R.X.; funding acquisition, J.W., Q.Z., and R.X. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Key R&D Program of China (2017YFB1300400), and in part by the Science and Technology Project of Zhejiang Province (2019C01043).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CHCQ: Cortico-hippocampal computational quantum
ANNs: Artificial neural networks
CS: Conditioned stimulus
US: Unconditioned stimulus
CR: Conditioned response
QNN: Quantum neural network
ASLFFQNN: Adaptive single-layer feedforward quantum neural network
AQNN: Autoencoder quantum neural network
I: Input layer
H: Hidden layer
O: Output layer
qubit: Quantum bit

References

  1. Daskin, A. A quantum implementation model for artificial neural networks. Quanta 2018, 7, 7–18.
  2. Liu, C.Y.; Chen, C.; Chang, C.T.; Shih, L.M. Single-hidden-layer feed-forward quantum neural network based on Grover learning. Neural Netw. 2013, 45, 144–150.
  3. Lukac, M.; Abdiyeva, K.; Kameyama, M. CNOT-Measure Quantum Neural Networks. In Proceedings of the 2018 IEEE 48th International Symposium on Multiple-Valued Logic (ISMVL), Linz, Austria, 16–18 May 2018; pp. 186–191.
  4. Li, F.; Xiang, W.; Wang, J.; Zhou, X.; Tang, B. Quantum weighted long short-term memory neural network and its application in state degradation trend prediction of rotating machinery. Neural Netw. 2018, 106, 237–248.
  5. Janson, N.B.; Marsden, C.J. Dynamical system with plastic self-organized velocity field as an alternative conceptual model of a cognitive system. Sci. Rep. 2017, 7, 1–15.
  6. Kriegeskorte, N.; Douglas, P. Cognitive computational neuroscience. Nat. Neurosci. 2018, 21, 1148–1160.
  7. Liu, X.; Liu, W.; Liu, Y.; Wang, Z.; Zeng, N.; Alsaadi, F.E. A survey of deep neural network architectures and their applications. Neurocomputing 2017, 234, 11–26.
  8. Da Silva, A.J.; Ludermir, T.B.; de Oliveira, W.R. Quantum perceptron over a field and neural network architecture selection in a quantum computer. Neural Netw. 2016, 76, 55–64.
  9. Altaisky, M.V.; Zolnikova, N.N.; Kaputkina, N.E.; Krylov, V.A.; Lozovik, Y.E.; Dattani, N.S. Entanglement in a quantum neural network based on quantum dots. Photonics Nanostruct. Appl. 2017, 24, 24–28.
  10. Deutsch, D. Quantum Computational Networks. Proc. R. Soc. Lond. Ser. A Math. Phys. Sci. 1989, 425, 73–90.
  11. Shor, P.W. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM J. Comput. 1997, 26, 1484–1509.
  12. Grover, L. A fast quantum mechanical algorithm for database search. In Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing, STOC'96, Philadelphia, PA, USA, 22–24 May 1996; Volume 129452, pp. 212–219.
  13. Altaisky, M.V.; Zolnikova, N.N.; Kaputkina, N.E.; Krylov, V.A.; Lozovik, Y.E.; Dattani, N.S. Decoherence and Entanglement Simulation in a Model of Quantum Neural Network Based on Quantum Dots. In EPJ Web of Conferences; EDP Sciences: Les Ulis, France, 2016; Volume 108, p. 2006.
  14. Clark, T.; Murray, J.S.; Politzer, P. A perspective on quantum mechanics and chemical concepts in describing noncovalent interactions. Phys. Chem. Chem. Phys. 2018, 2, 376–382.
  15. Ganjefar, S.; Tofighi, M.; Karami, H. Fuzzy wavelet plus a quantum neural network as a design base for power system stability enhancement. Neural Netw. 2015, 71, 172–181.
  16. Cui, Y.; Shi, J.; Wang, Z. Complex Rotation Quantum Dynamic Neural Networks (CRQDNN) using Complex Quantum Neuron (CQN): Applications to time series prediction. Neural Netw. 2015, 71, 11–26.
  17. Gandhi, V.; Prasad, G.; Coyle, D.; Behera, L.; McGinnity, T.M. Evaluating Quantum Neural Network filtered motor imagery brain-computer interface using multiple classification techniques. Neurocomputing 2015, 170, 161–167.
  18. Schuld, M.; Sinayskiy, I.; Petruccione, F. The quest for a Quantum Neural Network. Quantum Inf. Process. 2014, 13, 2567–2586.
  19. Takahashi, K.; Kurokawa, M.; Hashimoto, M. Multi-layer quantum neural network controller trained by real-coded genetic algorithm. Neurocomputing 2014, 134, 159–164.
  20. Khalid, M.; Wu, J.; Ali, T.M.; Moustafa, A.A.; Zhu, Q.; Xiong, R. Green model to adapt classical conditioning learning in the hippocampus. Neuroscience 2019.
  21. Gluck, M.A.; Myers, C.E. Hippocampal mediation of stimulus representation: A computational theory. Hippocampus 1993, 3, 491–516.
  22. Moustafa, A.A.; Myers, C.E.; Gluck, M.A. A neurocomputational model of classical conditioning phenomena: A putative role for the hippocampal region in associative learning. Brain Res. 2009, 1276, 180–195.
  23. Grossberg, S. Embedding fields: A theory of learning with physiological implications. J. Math. Psychol. 1969, 6, 209–239.
  24. Widrow, B.; Hoff, M.E. Adaptive Switching Circuits. In Proceedings of the 1960 IRE WESCON Convention Record, Los Angeles, CA, USA, 23–26 August 1960; Reprinted in Neurocomputing; MIT Press: Cambridge, MA, USA, 1988; pp. 96–104.
  25. Zhu, H.; Paschalidis, I.C.; Hasselmo, M.E. Neural circuits for learning context-dependent associations of stimuli. Neural Netw. 2018, 107, 48–60.
  26. Kuchibhotla, K.V.; Gill, J.V.; Lindsay, G.W.; Papadoyannis, E.S.; Field, R.E.; Sten, T.A.H.; Miller, K.D.; Froemke, R.C. Parallel processing by cortical inhibition enables context-dependent behavior. Nat. Neurosci. 2017, 20, 62–71.
  27. Newman, S.E.; Nicholson, L.R. The Effects of Context Stimuli on Paired-Associate Learning. Am. J. Psychol. 1976, 89, 293–301.
  28. Bliss-Moreau, E.; Moadab, G.; Santistevan, A.; Amaral, D.G. The effects of neonatal amygdala or hippocampus lesions on adult social behavior. Behav. Brain Res. 2017, 322, 123–137.
  29. Ito, R.; Robbins, T.W.; McNaughton, B.L.; Everitt, B.J. Selective excitotoxic lesions of the hippocampus and basolateral amygdala have dissociable effects on appetitive cue and place conditioning based on path integration in a novel Y-maze procedure. Eur. J. Neurosci. 2006, 23, 3071–3080.
  30. Ito, R.; Everitt, B.J.; Robbins, T.W. The hippocampus and appetitive Pavlovian conditioning: Effects of excitotoxic hippocampal lesions on conditioned locomotor activity and autoshaping. Hippocampus 2005, 15, 713–721.
  31. Eichenbaum, H.; Fagan, A.; Mathews, P.; Cohen, N.J. Hippocampal System Dysfunction and Odor Discrimination Learning in Rats: Impairment or Facilitation Depending on Representational Demands. Behav. Neurosci. 1988, 102, 331–339.
  32. Schmaltz, L.W.; Theios, J. Acquisition and extinction of a classically conditioned response in hippocampectomized rabbits (Oryctolagus cuniculus). J. Comp. Physiol. Psychol. 1972, 79, 328–333.
  33. Clawson, W.P.; Wright, N.C.; Wessel, R.; Shew, W.L. Adaptation towards scale-free dynamics improves cortical stimulus discrimination at the cost of reduced detection. PLoS Comput. Biol. 2017, 13, e1005574.
  34. Lonsdorf, T.B.; Haaker, J.; Schümann, D.; Sommer, T.; Bayer, J.; Brassen, S.; Bunzeck, N.; Gamer, M.; Kalisch, R. Sex differences in conditioned stimulus discrimination during context-dependent fear learning and its retrieval in humans: The role of biological sex, contraceptives and menstrual cycle phases. J. Psychiatry Neurosci. 2015, 40, 368–375.
  35. Hanggi, E.B.; Ingersoll, J.F. Stimulus discrimination by horses under scotopic conditions. Behav. Process. 2009, 82, 45–50.
  36. McDonald, R.J.; Ko, C.H.; Hong, N.S. Attenuation of context-specific inhibition on reversal learning of a stimulus–response task in rats with neurotoxic hippocampal damage. Behav. Brain Res. 2002, 136, 113–126.
  37. McDonald, R.J.; King, A.L.; Hong, N.S. Context-specific interference on reversal learning of a stimulus-response habit. Behav. Brain Res. 2001, 121, 149–165.
  38. Azorlosa, J.L.; Cicala, G.A. Increased conditioning in rats to a blocked CS after the first compound trial. Bull. Psychon. Soc. 1988, 26, 254–257.
  39. Chang, H.P.; Ma, Y.L.; Wan, F.J.; Tsai, L.Y.; Lindberg, F.P.; Lee, E.H.Y. Functional blocking of integrin-associated protein impairs memory retention and decreases glutamate release from the hippocampus. Neuroscience 2001, 102, 289–296.
  40. Maes, E.; Boddez, Y.; Alfei, J.M.; Krypotos, A.; D'Hooge, R.; De Houwer, J.; Beckers, T. The elusive nature of the blocking effect: 15 failures to replicate. J. Exp. Psychol. Gen. 2016, 145, e49–e71.
  41. Sanderson, D.J.; Jones, W.S.; Austen, J.M. The effect of the amount of blocking cue training on blocking of appetitive conditioning in mice. Behav. Process. 2016, 122, 36–42.
  42. Pineno, O.; Urushihara, K.; Stout, S.; Fuss, J.; Miller, R.R. When more is less: Extending training of the blocking association following compound training attenuates the blocking effect. Learn. Behav. 2006, 34, 21–36.
  43. Holland, P.C.; Fox, G.D. Effects of Hippocampal Lesions in Overshadowing and Blocking Procedures. Behav. Neurosci. 2003, 117, 650–656.
  44. Todd Allen, M.; Padilla, Y.; Myers, C.E.; Gluck, M.A. Selective hippocampal lesions disrupt a novel cue effect but fail to eliminate blocking in rabbit eyeblink conditioning. Cogn. Affect. Behav. Neurosci. 2002, 2, 318–328.
  45. Gallo, M.; Cándido, A. Dorsal Hippocampal Lesions Impair Blocking but Not Latent Inhibition of Taste Aversion Learning in Rats. Behav. Neurosci. 1995, 109, 413–425.
  46. Kamin, L.J. Predictability, Surprise, Attention, and Conditioning. In Punishment Aversive Behavior; Campbell, B.A., Church, R.M., Eds.; Appleton-Century-Crofts: New York, NY, USA, 1969; pp. 279–296.
  47. Sherratt, T.N.; Whissell, E.; Webster, R.; Kikuchi, D.W. Hierarchical overshadowing of stimuli and its role in mimicry evolution. Anim. Behav. 2015, 108, 73–79.
  48. Stockhorst, U.; Hall, G.; Enck, P.; Klosterhalfen, S. Effects of overshadowing on conditioned and unconditioned nausea in a rotation paradigm with humans. Exp. Brain Res. 2014, 232, 2651–2664.
  49. Stout, S.; Arcediano, F.; Escobar, M.; Miller, R.R. Overshadowing as a function of trial number: Dynamics of first- and second-order comparator effects. Learn. Behav. 2003, 31, 85–97.
  50. Urushihara, K.; Miller, R.R. CS-duration and partial-reinforcement effects counteract overshadowing in select situations. Learn. Behav. 2007, 35, 201–213.
  51. Wisniewski, M.G.; Church, B.A.; Mercado, E.; Radell, M.L.; Zakrzewski, A.C. Easy-to-hard effects in perceptual learning depend upon the degree to which initial trials are "easy". Psychon. Bull. Rev. 2019.
  52. Sanjuan, M.D.C.; Nelson, J.B.; Alonso, G. An easy-to-hard effect after nonreinforced preexposure in a sweetness discrimination. Learn. Behav. 2014, 42, 209–214.
  53. Liu, E.H.; Mercado, E.; Church, B.A.; Orduña, I. The Easy-to-Hard Effect in Human (Homo sapiens) and Rat (Rattus norvegicus) Auditory Identification. J. Comp. Psychol. 2008, 122, 132–145.
  54. Scahill, V.; Mackintosh, N. The easy to hard effect and perceptual learning in flavor aversion conditioning. J. Exp. Psychol. Anim. Behav. Process. 2004, 30, 96–103.
  55. Williams, D.I. Discrimination learning in the pigeon with two relevant cues, one hard and one easy. Br. J. Psychol. 1972, 63, 407–409.
  56. Doan, H.M.K. Effects of Correction and Non-Correction Training Procedures on 'Easy' and 'Hard' Discrimination Learning in Children. Psychol. Rep. 1970, 27, 459–466.
  57. Terrace, H.S. Discrimination learning with and without "errors". J. Exp. Anal. Behav. 1963, 6, 1–27.
  58. Revillo, D.A.; Gaztañaga, M.; Aranda, E.; Paglini, M.G.; Chotro, M.G.; Arias, C. Context-dependent latent inhibition in preweanling rats. Dev. Psychobiol. 2014, 56, 1507–1517.
  59. Swerdlow, N.R.; Braff, D.L.; Hartston, H.; Perry, W.; Geyer, M.A. Latent inhibition in schizophrenia. Schizophr. Res. 1996, 20, 91–103.
  60. Lubow, R.E.; Markman, R.E.; Allen, J. Latent inhibition and classical conditioning of the rabbit pinna response. J. Comp. Physiol. Psychol. 1968, 66, 688–694.
  61. Rumelhart, D.E.; McClelland, J.L.; PDP Research Group (Eds.) Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1: Foundations; MIT Press: Cambridge, MA, USA, 1986.
  62. Renaux, C.; Riviere, V.; Craddock, P.; Miller, R. Role of spatial contiguity in sensory preconditioning with humans. Behav. Process. 2017, 142, 141–145.
  63. Holmes, N.M.; Westbrook, R.F. A dangerous context changes the way that rats learn about and discriminate between innocuous events in sensory preconditioning. Learn. Mem. 2017, 24, 440–448.
  64. Robinson, S.; Todd, T.P.; Pasternak, A.R.; Luikart, B.W.; Skelton, P.D.; Urban, D.J.; Bucci, D.J. Chemogenetic silencing of neurons in retrosplenial cortex disrupts sensory preconditioning. J. Neurosci. 2014, 34, 10982–10988.
  65. Cerri, D.H.; Saddoris, M.P.; Carelli, R.M. Nucleus accumbens core neurons encode value-independent associations necessary for sensory preconditioning. Behav. Neurosci. 2014, 128, 567–578.
  66. Matsumoto, Y.; Hirashima, D.; Mizunami, M. Analysis and modeling of neural processes underlying sensory preconditioning. Neurobiol. Learn. Mem. 2013, 101, 103–113.
  67. Rodriguez, G.; Alonso, G. Stimulus comparison in perceptual learning: Roles of sensory preconditioning and latent inhibition. Behav. Process. 2008, 77, 400–404.
  68. Espinet, A.; González, F.; Balleine, B.W. Inhibitory sensory preconditioning. Q. J. Exp. Psychol. Sect. B 2004, 57, 261–272.
  69. Muller, D.; Gerber, B.; Hellstern, F.; Hammer, M.; Menzel, R. Sensory preconditioning in honeybees. J. Exp. Biol. 2000, 203, 1351–1364.
  70. Nicholson, D.A.; Freeman, J.H., Jr. Lesions of the perirhinal cortex impair sensory preconditioning in rats. Behav. Brain Res. 2000, 112, 69–75.
  71. Rodriguez, G.; Angulo, R. Simultaneous stimulus preexposure enhances human tactile perceptual learning. Psicologica 2014, 35, 139–148.
  72. Hayes, S.M.; Baena, E.; Truong, T.K.; Cabeza, R. Neural Mechanisms of Context Effects on Face Recognition: Automatic Binding and Context Shift Decrements. J. Cogn. Neurosci. 2010, 22, 2541–2554.
  73. Weiner, I. The 'two-headed' latent inhibition model of schizophrenia: Modeling positive and negative symptoms and their treatment. Psychopharmacology 2003, 169, 257–297.
  74. Talk, A.; Stoll, E.; Gabriel, M. Cingulate Cortical Coding of Context-Dependent Latent Inhibition. Behav. Neurosci. 2005, 119, 1524–1532.
  75. Gray, N.S.; Williams, J.; Fernandez, M.; Ruddle, R.A.; Good, M.A.; Snowden, R.J. Context dependent latent inhibition in adult humans. Q. J. Exp. Psychol. Sect. B 2001, 54, 233–245.
Figure 1. The general structure of the qubit neuron model.
Figure 2. The CHCQ model framework.
Figure 3. Intact and lesioned systems of the CHCQ model.
Figure 4. The qubit neuron model in the input layer of CHCQ model.
Figure 5. The cue contains context and stimulus A as <A+> task.
Figure 6. The cue contains context and stimulus A only as <A−> task.
Figure 7. <A+> task for both intact and lesioned systems.
Figure 8. Stimulus discrimination learning <A+, B−> for both intact and lesioned systems of CHCQ model.
Figure 9. Stimulus discrimination task of the CHCQ-disrupted system.
Figure 10. Discrimination reversal for both intact and lesioned systems of CHCQ model.
Figure 11. Blocking task for the intact system.
Figure 12. Extended blocking task for the intact system.
Figure 13. Blocking task for the lesioned system.
Figure 14. Overshadowing task for the intact system.
Figure 15. Overshadowing task for the lesioned system.
Figure 16. Extended overshadowing task for the intact system.
Figure 17. Easy transfer task for the intact system.
Figure 18. Easy transfer task for the lesioned system.
Figure 19. Hard transfer task for the intact system.
Figure 20. Hard transfer task for the lesioned system.
Figure 21. Easy–Hard transfer task of CHCQ model compared with the model of Moustafa et al.
Figure 22. Latent inhibition task for the intact system of CHCQ model with and without a pre-exposure.
Figure 23. Latent inhibition task for the lesioned system of CHCQ model with and without a pre-exposure.
Figure 24. Latent inhibition task for the intact system.
Figure 25. Latent inhibition task for the lesioned system.
Figure 26. Generic feedforward multilayer network compared to the CHCQ lesioned system.
Figure 27. All three phases of sensory preconditioning task for the intact system of the CHCQ.
Figure 28. The response of the last phase of sensory preconditioning task for the intact system.
Figure 29. The response of the last phase of sensory preconditioning task for the lesioned system.
Figure 30. Two phases of compound preconditioning task for the intact system of CHCQ.
Figure 31. Only the last phase of compound preconditioning task for the intact system.
Figure 32. Context sensitivity task for both intact and lesioned systems of the CHCQ model.
Figure 33. Context sensitivity task for the intact system of the last and first trial values of the two phases, respectively.
Figure 34. Extended context sensitivity task for the intact system.
Figure 35. Context shift due to latent inhibition task for the intact system.
Table 1. All the simulated tasks.

| No. | Task Name | Phase 1 | Phase 2 | Phase 3 |
| --- | --- | --- | --- | --- |
| 1A | A+ | AX+ | | |
| 1B | A− | AX− | | |
| 2 | Stimulus discrimination | AX+, BX− | | |
| 3 | Discrimination reversal | AX+, BX− | AX−, BX+ | |
| 4 | Blocking | AX+ | ABX+ | BX− |
| 5 | Overshadowing | ABX+ | AX+; BX+ | |
| 6 | Easy–Hard transfer | A1X+, A2X− | A3X+, A4X− | |
| 7 | Latent inhibition | AX− | AX+ | |
| 8 | Sensory preconditioning | ABX− | AX+ | BX− |
| 9 | Compound preconditioning | ABX− | AX+, BX− | |
| 10A | Context sensitivity (Context shift) | AX+ | AY+ | |
| 10B | Context sensitivity of latent inhibition | AX− | AY+ | |
| 11 | Generic feedforward multilayer network | AX− | AX+ | |
Table 2. A comparison between M1 (Gluck/Myers model) and the CHCQ model, in terms of the number of trials needed to reach the final state of the conditioned response (CR). For each phase, (a)/(b) are the intact M1/CHCQ trial counts, (c)/(d) the lesioned M1/CHCQ trial counts, and the improvements are (b vs. a) and (d vs. c); "–" marks conditions without reported values.

Phase 1:
| No. | Task Name | M1 (a) | CHCQ (b) | M1 (c) | CHCQ (d) | b vs. a | d vs. c |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Stimulus discrimination | >200 | 24 | >200 | 17 | 88.0% | 91.5% |
| 2 | Reversal learning | >200 | 24 | >200 | 17 | 88.0% | 91.5% |
| 3 | Easy–Hard transfer learning | >200 | 27 | >200 | 19 | 86.5% | 90.5% |
| 4 | Latent inhibition | 50 | 50 | 50 | 50 | 0.0% | 0.0% |
| 5 | Sensory preconditioning | 200 | 50 | – | – | 75.0% | – |
| 6 | Compound preconditioning | 20 | 20 | – | – | 0.0% | – |
| 7 | Generic feedforward multilayer network | 100 | 50 | – | – | 50.0% | – |
| 8 | Contextual sensitivity | >200 | 24 | >200 | 17 | 88.0% | 91.5% |

Phase 2:
| No. | Task Name | M1 (e) | CHCQ (f) | M1 (g) | CHCQ (h) | f vs. e | h vs. g |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 2 | Reversal learning | >200 | 22 | >400 | 32 | 89.0% | 92.0% |
| 3 | Easy–Hard transfer learning | >1000 | 34 | >1000 | 20 | 96.6% | 98% |
| 4 | Latent inhibition | >100 | 31 | >100 | 18 | 69.0% | 82.0% |
| 5 | Sensory preconditioning | >100 | 31 | – | – | 69.0% | – |
| 6 | Compound preconditioning | >100 | 32 | – | – | 68.0% | – |
| 7 | Generic feedforward multilayer network | >200 | 21 | – | – | 89.5% | – |
| 8 | Contextual sensitivity | >200 | 1 | >200 | 1 | 99.5% | 99.5% |

Phase 3:
| No. | Task Name | M1 (i) | CHCQ (j) | M1 (k) | CHCQ (l) | j vs. i | l vs. k |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 5 | Sensory preconditioning | 50 | 50 | – | – | 0.0% | – |
Table 3. A comparison between M2 (the model of Moustafa et al.) and the CHCQ model, in terms of the number of trials needed to reach the final state of the CR. Column conventions are as in Table 2.

Phase 1:
| No. | Task Name | M2 (a) | CHCQ (b) | M2 (c) | CHCQ (d) | b vs. a | d vs. c |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | A+ | >100 | 23 | >100 | 18 | 77.0% | 82.0% |
| 2 | A− | >100 | 2 | >100 | 2 | 98.0% | 98.0% |
| 3 | Sensory preconditioning | 100 | 50 | – | – | 50.0% | – |
| 4 | Latent inhibition | 50 | 50 | 50 | 50 | 0.0% | 0.0% |
| 5 | Context shift | 100 | 50 | – | – | 50.0% | – |
| 6 | Context sensitivity of latent inhibition | 100 | 50 | – | – | 50.0% | – |
| 7 | Easy–Hard transfer learning | >100 | 17 | – | – | 83.0% | – |
| 8 | Blocking | >100 | 23 | >100 | 18 | 77.0% | 82.0% |
| 9 | Compound preconditioning | 100 | 20 | – | – | 80.0% | – |
| 10 | Overshadowing | 100 | 20 | 100 | 20 | 80.0% | 80.0% |

Phase 2:
| No. | Task Name | M2 (e) | CHCQ (f) | M2 (g) | CHCQ (h) | f vs. e | h vs. g |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 3 | Sensory preconditioning | >100 | 31 | – | – | 69.0% | – |
| 4 | Latent inhibition | >100 | 31 | >100 | 18 | 69.0% | 82.0% |
| 5 | Context shift | 1 | 1 | – | – | 0.0% | – |
| 6 | Context sensitivity of latent inhibition | 1 | 1 | – | – | 0.0% | – |
| 7 | Easy–Hard transfer learning | >100 | 19 | – | – | 81.0% | – |
| 8 | Blocking | >100 | 24 | >100 | 17 | 76.0% | 83.0% |
| 9 | Compound preconditioning | >200 | 32 | – | – | 84.0% | – |
| 10 | Overshadowing | >100 | 25 | >100 | 22 | 75.0% | 78.0% |

Phase 3:
| No. | Task Name | M2 (i) | CHCQ (j) | M2 (k) | CHCQ (l) | j vs. i | l vs. k |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 3 | Sensory preconditioning | 50 | 50 | – | – | 0.0% | – |
| 8 | Blocking | >100 | 12 | >100 | 3 | 88.0% | 97.0% |
Table 4. A comparison between G (Green model) and the CHCQ model, in terms of the number of trials needed to reach the final state of the CR. Column conventions are as in Table 2.

Phase 1:
| No. | Task Name | G (a) | CHCQ (b) | G (c) | CHCQ (d) | b vs. a | d vs. c |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1A | A+ | 32 | 23 | 28 | 18 | 28.1% | 35.7% |
| 1B | A− | 2 | 2 | 2 | 2 | 0.0% | 0.0% |
| 2 | Stimulus discrimination | 33 | 24 | 20 | 17 | 27.2% | 15.0% |
| 3 | Discrimination reversal | 33 | 24 | 20 | 17 | 27.2% | 15.0% |
| 4 | Blocking | 32 | 23 | 28 | 18 | 28.1% | 35.7% |
| 5 | Overshadowing | 20 | 20 | 20 | 20 | 0.0% | 0.0% |
| 6 | Easy–Hard transfer | 35 | 27 | 25 | 19 | 82.5% | 87.5% |
| 7 | Latent inhibition | 50 | 50 | 50 | 50 | 0.0% | 0.0% |
| 8 | Sensory preconditioning | 50 | 50 | – | – | 0.0% | – |
| 9 | Compound preconditioning | 20 | 20 | – | – | 0.0% | – |
| 10A | Context sensitivity | 33 | 24 | 20 | 17 | 27.7% | 15.0% |
| 10B | Context sensitivity of latent inhibition | 50 | 50 | – | – | 0.0% | – |
| 11 | Generic feedforward multilayer network | 50 | 50 | – | – | 0.0% | – |

Phase 2:
| No. | Task Name | G (e) | CHCQ (f) | G (g) | CHCQ (h) | f vs. e | h vs. g |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 3 | Discrimination reversal | 31 | 22 | 38 | 32 | 29.0% | 15.7% |
| 4 | Blocking | 32 | 24 | 28 | 17 | 25.0% | 39.2% |
| 5 | Overshadowing | 26 | 25 | 27 | 22 | 3.8% | 18.5% |
| 6 | Easy–Hard transfer | 38 | 34 | 27 | 20 | 10.5% | 25.9% |
| 7 | Latent inhibition | 41 | 31 | 24 | 18 | 24.3% | 25.0% |
| 8 | Sensory preconditioning | 37 | 31 | – | – | 16.2% | – |
| 9 | Compound preconditioning | 34 | 32 | – | – | 5.8% | – |
| 10A | Context sensitivity | 1 | 1 | 1 | 1 | 0.0% | 0.0% |
| 10B | Context sensitivity of latent inhibition | 1 | 1 | – | – | 0.0% | – |
| 11 | Generic feedforward multilayer network | 24 | 21 | – | – | 12.5% | – |

Phase 3:
| No. | Task Name | G (i) | CHCQ (j) | G (k) | CHCQ (l) | j vs. i | l vs. k |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 4 | Blocking | 23 | 12 | 7 | 3 | 47.8% | 57.1% |
| 8 | Sensory preconditioning | 50 | 50 | – | – | 0.0% | – |
