Emulating Cued Recall of Abstract Concepts via Regulated Activation Networks

Abstract: Abstract concepts play a vital role in decision-making and recall operations because the associations among them are essential for contextual processing. Abstract concepts are complex and difficult to represent (conceptually, formally, or computationally), which makes them hard to comprehend and recall. This contribution reports the computational simulation of the cued recall of abstract concepts by exploiting their learned associations. The cued recall operation is realized via a novel geometric back-propagation algorithm that emulates the recall of abstract concepts learned through regulated activation network (RAN) modeling. During the recall operation, another algorithm uniquely regulates the activation of concepts (nodes) by injecting excitatory, neutral, and inhibitory signals into other concepts of the same level. A Toy-data problem is used to illustrate the RAN modeling and recall procedure. The results show how regulation enables contextual awareness among abstract nodes during the recall process. The MNIST dataset is used to show how recall operations retrieve intuitive and non-intuitive blends of abstract nodes. We show that every recall process converges to an optimal image. With more cues, better images are recalled, and every intermediate image obtained during the recall iterations corresponds to a varying cognitive state of the recognition procedure.


Introduction
Concepts are an important object of study in cognitive and psychological research. Usually, conceptual representations are process-oriented, symbolic or distributed, and knowledge-based [1][2][3]. In general, a hierarchical structure defines an organization of concepts in which concrete concepts are placed at the lower levels and abstract concepts occupy the higher levels (see the example of the vehicle and cars in Figure 1). Abstract concepts can therefore also be seen as generalizations of concrete concepts [4,5]. Abstract concepts have been studied mathematically [6] and theoretically [7,8], but computational studies are scarce [1]. This article uses a computational model, the regulated activation network (RAN) [9][10][11], capable of building representations of convex abstract concepts, which are later used in recall simulations.
The prime aspect of this article is to emulate the recall procedure, which can be viewed as the cognitive process of remembering. Context plays an eminent role in the recall of concrete concepts (such as the word "table"), which is often termed the concreteness effect. The concreteness effect is expressed through the dual-coding theory [12,13] and the context availability hypothesis [14]. According to these theories, abstract concepts taken individually are abstruse in context retrieval when compared with concrete concepts; therefore, their recall procedure is complex. However, an interesting work suggests that context retrieval of abstract concepts is possible when response pairs of abstract concepts are related to one another, thus providing context for the abstract stimuli [15]. In RAN modeling, we can learn associations among concepts (including abstract concepts). These learned associations provide adequate context relations among the abstract concepts. In this work, we exploit these learned associations among the abstract concepts to simulate a regulated recall operation.

In recent years, advances in technology have played an essential role in cognitive and psychological research, e.g., the use of devices like GP3 [16], TheEyeTribe [17], and electroencephalography (EEG) [18,19] to study visual attention. In this article, we focus on the representations of computational models, which are also very useful for understanding psychological and cognitive phenomena, validating existing cognitive theories, and helping to formulate fresh ideas related to cognition [20][21][22][23]. The representations of these computational approaches are either symbolic (amodal), distributed (multimodal), or hybrid [24], which helps in simulating or understanding various cognitive phenomena.
Below are a few examples of computational models or architectures used to study psychological and cognitive phenomena, grouped by their representation:
• Symbolic: Adaptive Control of Thought-Rational (ACT-R) [25] is a symbolic architecture intended to model memory [26] and simulate attention [25,27], decision-making [28], recognition [29], and forgetting [29].
• Distributed: Multimodal approaches such as artificial neural networks (ANNs), including the restricted Boltzmann machine (RBM) [30], deep neural networks [31], stacked auto-encoders [32], and convolutional neural networks (CNNs) [33], have made significant contributions to feature recognition [34] and distributed memory representation [35]. Methods like random forests have also been used in studies related to visual attention [36].
• Hybrid: Cognitive architectures like Connectionist Learning with Adaptive Rule Induction On-line (CLARION) [37] simulate scenarios related to cognitive and social psychology.
This article makes use of RAN's hybrid nature and modeling to build representations of convex abstract concepts and then simulate the recall of abstract concepts. First, model generation takes place with the four basic steps of the RAN approach [9], i.e., concept identification, concept creation, inter-layer learning, and upward activation propagation. An intra-layer procedure also takes place at all layers to identify the associations among the concepts at the same level. These learned associations are then uniquely interpreted to determine whether the impact of their learned weights is inhibitory, excitatory, or neutral. Later, these impacts are applied to obtain a regulatory effect on peer concepts (abstract concepts or input-layer concepts) during the recall operation. A Toy-data problem is used for modeling with RAN and for demonstrating the novel geometric back-propagation algorithm that simulates the cued recall operation. The benchmark image dataset MNIST is also used to demonstrate the cued recall experiment. These experiments also show how blends of abstract concepts can be recalled. To summarize, the main contributions of this article are: first, the impact factor calculation to determine the inhibitory, excitatory, or neutral effect of one node over another; second, the novel intra-layer regulation algorithm that uses the impact factor to regulate the activation of other concepts; third, the novel geometric back-propagation algorithm and the recall simulations that use it.
The remainder of this article is organized in the following way: Section 2 puts forward the state of the art related to recall operations; RAN modeling, the intra-layer regulation algorithm, and the geometric back-propagation algorithm are detailed in Section 3 using a Toy-data problem; the cued recall demonstration with the MNIST dataset is reported in Section 4; Section 5 concludes the article.

Related Work
Recall or retrieval is a cognitive process [38] of remembering a thing or an event. While recalling, the brain activates a neural assembly that was created when the original event occurred [38]. In psychology, there is a plethora of articles studying the recall process. Psychologists have used free recall, cued recall, and serial recall as tools to investigate memory processes [39]. Recall has been used to study the effect of cognitive strategies, such as chunking and the use of mnemonics, on the memorization of things such as large numbers [40]. One interesting study reported the benefits of subsequent recall in retrieval operations where memories are related or competing [41]. The proverb "practice makes a man perfect" relates to the fortification of memory, and an investigation shows how retrieval plays an important role in this memory strengthening [42]. Technologies such as functional magnetic resonance imaging (fMRI), magnetic resonance imaging (MRI), positron emission tomography (PET), and electroencephalography (EEG) have played an active role in validating many recall-related hypotheses [41,43-45].
Notable contributions to the modeling of memory recall procedures are observed. Based on the temporal context model [46,47] of human behavior, human memory performance was modeled using a probabilistic approach during free-recall experiments [48]. A computational model of interaction between the prefrontal cortex and medial temporal lobe in memory usage was designed to study the prefrontal control in a recall process [49]. The model was a simple neural network with quick and flexible reinforcement learning exhibiting strategic recall. Another computational model differentiates recall from the recognition process depending upon the number of cues involved in the retrieval procedure [50]. For encoding, the model used an inference-based model of memory [51], and retrieval was carried out using a Bayesian observer model [52]. A large number of computational psychology contributions examining the recall process and recognition using the neural networks are available [53][54][55][56].
An interesting study simulated the free-recall process using the ACT-R architecture [57], showing that the classical effects of primacy and recency can be recreated through rehearsal theory based upon ACT-R and Baddeley's phonological loop [58]. The ACT-R architecture was also used to propose a new theory of memory retrieval to predict intricate serial and free-recall operations [59]. This research also focused on the prospects of associative learning by introducing a strengthening and decaying mechanism that depends upon the similarity of the input stimulus. Serial recall has been modeled in a scientific contribution using the ACT-R architecture to explain the processes involved while recalling a list of words [60]. Traditional ACT-R recall operations had a limitation: memory access depends upon a limited-capacity activation process, consequently inducing errors in the contents being recalled. This theory overcomes that limitation by predicting the latency and errors in a serial recall process.
The free-recall process was also modeled using CLARION to determine the role of distractions in an incubation task [61]. This study made the striking observation that rehearsals play an important role in memory consolidation during the free-recall procedure, and that distractions can hinder free recall and eventually affect memory strengthening. CLARION was also used to emulate, acquire, and expound human-centric data relevant to incubation and insight through free-recall, lexical-decision, and problem-solving tasks [62].
This article introduces a novel algorithm named geometric back-propagation, which enables us to simulate the recall operation using RAN modeling. The main objective of the experiments in this paper is to demonstrate the role of cues (activations) injected at abstract concepts (nodes) in recall operations. An additional goal is to show that when a larger number of abstract concepts participate in recall operations (i.e., more cues are available), better recall is observed in the experiments.

RAN Methodology to Simulate Recall Operations
Here, we describe the emulation of the recall process using RAN modeling. For background, Section 3.2 describes RAN modeling along with its two learning mechanisms, i.e., inter-layer and intra-layer learning. Having explained RAN modeling, the two contributions of this article are elucidated: first, the regulation mechanism is described, with the biological inspiration behind this operation given in Section 3.1; second, a novel geometric back-propagation algorithm is proposed that propagates activations from the abstract level to the input level. The RAN methodology and the article's contributions are illustrated using a Toy-data set. At the end of this section, the RAN modeling experiments with the Toy-data are also reported, demonstrating the recall operation.

Biological Inspiration of Regulation Operation on RAN's Modeling
The nerve cell (neuron) consists of several main components: the dendrites, the cell body, and the axon, as shown in Figure 2. When an electric signal traverses the whole axon and reaches one of its terminations, it releases chemicals called neurotransmitters, which diffuse across the synaptic gap and are absorbed by the receptive neuron's dendrites. Depending on the neurotransmitter, this absorption can either enhance or inhibit the receptive neuron's activation. Other kinds of synapses occur in biological brains, such as axoaxonic synapses, shown in Figure 2, which occur when the axon of a neuron connects to the axon of another neuron instead of to its dendrites. Such a configuration usually plays a regulatory role by mediating presynaptic inhibition and presynaptic facilitation [63]. By means of artificial axoaxonic synapses, this contribution realizes the inhibitory, excitatory, and neutral activation propagation phenomenon in RAN modeling, which is used to induce a regulatory effect on the activation of nodes during the recall operation.

Abstract Concept Modeling with RAN
This section is dedicated to describing convex abstract concept modeling with RAN [11]. To demonstrate the RAN methodology, a Toy-data problem is used (see Figure 3). The Toy-data are synthetically produced by generating a 2D dataset with five classes. In Figure 3, we can see that, of the five clusters, three are far apart from one another, whereas two clusters are very close to each other. This arrangement of clusters was introduced into the Toy-data problem to demonstrate the excitatory and inhibitory impact of the concepts representing each cluster at an abstract level. The dataset consists of 1800 data instances equally distributed across the classes. RAN modeling is performed using the four basic steps, where step 1 and step 4 each consist of two concurrent operations.

3.2.1. Step 1a: Convex Concept Identification (CCI)

CCI is a method to determine convex groups in a given dataset. In RAN, each data instance is considered a point in an n-dimensional geometric feature space, inspired by the theory of conceptual spaces [64]. This method also determines the cluster centers that are used in the inter-layer learning operation (see Section 3.2.4).
Step 1a in Figure 3 shows input layer 0; here, nodes S_1 and S_2 correspond to the dimensions of the input Toy-data. To identify the five convex groups, the K-means [65] clustering algorithm is chosen as the concept identifier (CI), and the value of K is set to 5 to determine five clusters. Five cluster centers are also identified in this process, shown in Figure 3 as cluster representative data points (CRPDs). Any clustering algorithm can act as a concept identifier, provided that it enables the identification of convex regions along with their respective cluster centers.
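The CCI step can be sketched with any clustering routine that yields centers. Below is a minimal K-means in NumPy acting as the concept identifier; the farthest-point initialization and the synthetic two-cluster data are illustrative choices, not prescribed by the paper:

```python
import numpy as np

def init_centers(data, k):
    """Deterministic farthest-point initialization (an implementation choice)."""
    centers = [data[0]]
    for _ in range(k - 1):
        d = np.min(np.linalg.norm(data[:, None, :] - np.array(centers)[None, :, :],
                                  axis=2), axis=1)
        centers.append(data[d.argmax()])
    return np.array(centers)

def concept_identifier(data, k, iters=50):
    """Minimal K-means as the concept identifier (CI): returns the cluster
    label of every instance and the cluster centers, which RAN keeps as
    cluster representative data points (CRPDs)."""
    centers = init_centers(data, k)
    for _ in range(iters):
        # assign each point to its nearest center
        dists = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each center to the mean of its assigned points
        for j in range(k):
            if (labels == j).any():
                centers[j] = data[labels == j].mean(axis=0)
    return labels, centers

# Illustrative 2D data with two well-separated convex groups
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0.2, 0.02, (50, 2)),
                  rng.normal(0.8, 0.02, (50, 2))])
labels, crpd = concept_identifier(data, k=2)
```

In the paper's experiment, K = 5 and the resulting five centers become the CRPDs used later in inter-layer learning.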

3.2.2. Step 1b and Step 4b: Concept Similarity Relation Learning (CSRL)

CSRL is an intra-layer operation in RAN modeling, explained in [66]. The main purpose of this process is to determine the alikeness among the concepts and to associate them through a similarity relation. This relation also simulates the activation behavior found in biological neurons; i.e., affine neurons are activated concurrently upon receiving input stimuli, whereas unrelated neurons remain relatively inactive for the same input stimulus. This phenomenon is expressed mathematically through Equation (1), which calculates a pair-wise relation/weight w_m→n between node m and node n at a layer:

w_m→n = (1/N) * Σ_I [ (1 − |A_m^I − A_n^I|) / (1 + (1 − A_m^I) * (1 − A_n^I)) ], (1)

The numerator (1 − |A_m^I − A_n^I|) calculates the similarity of activation of node m w.r.t. node n (A_m^I is the activation of the Ith of the N propagated data instances at node C_m in a layer, and, similarly, A_n^I is the activation of the Ith instance at node C_n; m ≠ n, and m, n are integers), and the product (1 − A_m^I) * (1 − A_n^I) is used to reduce the impact of similarity on the weight w_m→n when both activations (i.e., A_m^I and A_n^I) are very close to 0, though similar. Consequently, we obtain a symmetric k × k matrix of learned concept similarity relation weights (CSRW) among the nodes within the layer,
where m, n ∈ {1, ..., k} and m ≠ n. This learning mechanism is performed twice while modeling with the Toy-data: first, in step 1b at input layer 0 (see Figure 4, step 1b), and second, when the input data are propagated upward to the convex concept layer 1. The learning at layer 0 has a size of 2 × 2, as the input layer has two nodes. In contrast, the learning at layer 1 has a size of 5 × 5 (see Figure 5 for the CSRL weights).

Figure 4. Regulated activation network (RAN) convex abstract concept modeling process. The procedure displays the four steps in RAN modeling. The figure shows the three learning procedures, i.e., two similarity relation learning procedures (one per layer) and one inter-layer learning procedure between layer 1 and layer 0. In step 1, concept identification (step 1a) is performed along with similarity relation learning (step 1b). Similarly, in step 4, upward activation propagation (step 4a) is performed together with similarity relation learning (step 4b).
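The CSRL computation described above can be sketched as follows. The per-instance averaging and the exact placement of the damping product (here in the denominator, which pulls pairs that are similar only because both are inactive toward the neutral value 0.5) are an assumed reading of Equation (1), not a verbatim reproduction:

```python
import numpy as np

def csrl_weights(acts):
    """acts: (N, k) matrix of activations in [0, 1] (N propagated instances,
    k nodes). Returns the symmetric k x k CSRL weight matrix. For each pair,
    the activation similarity (1 - |A_m - A_n|) is divided by a term that
    grows when both activations are near 0, so such pairs land near the
    neutral weight 0.5 -- an assumed reading of Equation (1)."""
    N, k = acts.shape
    W = np.zeros((k, k))
    for m in range(k):
        for n in range(k):
            if m != n:
                Am, An = acts[:, m], acts[:, n]
                sim = 1.0 - np.abs(Am - An)          # activation similarity
                damp = 1.0 + (1.0 - Am) * (1.0 - An) # damping for inactive pairs
                W[m, n] = np.mean(sim / damp)
    return W
```

Under this reading, two nodes that are always co-active get weight 1 (excitatory), two nodes that are never simultaneously active get weight 0 (inhibitory), and two nodes that are both always inactive get the neutral weight 0.5, matching the impact-factor interpretation given later in the text.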

3.2.3. Step 2: Convex Abstract Concept Creation (CACC)
Convex abstract concept creation is a method of dynamically creating a layer whose nodes are abstract representatives of the categories determined in the convex concept identification process (see Section 3.2.1). For instance, step 2 in Figure 4 depicts the creation of a new convex abstract concept (CAC) layer 1 with five nodes (C_1, ..., C_5). These five nodes (C_1, C_2, C_3, C_4, and C_5) represent clusters 3, 1, 4, 2, and 5 in Figure 3, respectively. The number of CAC-layer nodes depends upon the number of clusters identified in the CCI operation at the input layer; i.e., if k clusters were determined in the CCI mechanism, then in step 2 a new layer is created consisting of k nodes.

3.2.4. Step 3: Convex Concept Inter-Layer Learning (CCILL)
Besides the intra-layer learning in steps 1b and 4b, a second learning mechanism in RAN modeling is used to identify the association between the nodes at the CAC layer and the input layer. The nodes are the abstract representatives of the clusters identified in the CCI process, and, according to prototype theory, the cluster center (i.e., the CRPD) is the most probable representative of a cluster [67][68][69]. Therefore, the cluster centers are taken as the association between the CAC-layer nodes and the input-layer nodes, as depicted by Equation (2).
W_k,i = CRPD_k,i, (2)

where k = 1, 2, ..., n_A indexes the CAC-layer nodes and i = 1, 2, ..., n_a indexes the input-layer nodes. In Equation (2), we can see that the coordinates (feature values of the data) of the centers identified in the CCI process are assigned as the learned weights W, i.e., the convex concept inter-layer weights (CCILW). In the experiment with the Toy-data, we learned a 5 × 2 weight matrix between the five nodes at CAC layer 1 and the two nodes of input layer 0, as shown in Figure 4, step 3. Having completed step 3, a basic RAN model is obtained, consisting of input layer 0, CAC layer 1, the learning between the two layers, and the learning among the nodes at input layer 0.

3.2.5. Step 4a: Convex Abstract Concept Upward Activation Propagation (CACUAP)
This step is used to propagate an i-dimensional input data vector a_i to the CAC layer and to obtain a k-dimensional activation vector A_k. The mechanism works in two stages. In the first stage, the Euclidean distance between the input data a_i and each of the CCILWs W_k,i is calculated. This distance is further normalized (using the denominator in Equation (3)) to obtain a distance in the range [0, 1]:

d_k = sqrt( Σ_i (a_i − W_k,i)^2 ) / sqrt(n), (3)

since, in RAN modeling, the activation values are, by definition, real values in the [0, 1] interval, and in such a setting the maximum possible Euclidean distance between any two points of an n-dimensional space is sqrt(n). In the second stage, the normalized distance obtained from Equation (3) is transformed nonlinearly, establishing a similarity relation conforming to the following three conditions: (1) f(d = 0) = 1, i.e., when the distance is 0, the similarity is 100%; (2) f(d = 1) = 0, i.e., when the distance is 1, the similarity is 0%; and (3) f(d = x) is continuous, monotonic, and differentiable in the [0, 1] interval. This similarity relation (Equation (4)) transforms the distance value observed at each node in the CAC layer into its similarity value. These similarity values act as degree of confidence (DoC) values to recognize the category represented by each node at the CAC layer. Upon propagating all input values to the CAC layer, the observed outputs A_k are used to perform concept similarity relation learning (CSRL), as shown in Figure 4, step 4b. After completing step 4b, the RAN modeling procedure terminates, and a model is obtained, as shown in step 3 in Figure 4. In order to build more than one layer, all of the steps are repeated iteratively, and the output of all the intermediate CAC layers is pipelined as input to the new layer being built.
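The two-stage propagation can be sketched as below. The exact nonlinearity of Equation (4) is not reproduced in this excerpt, so a quadratic f(d) = (1 − d)^2, which satisfies all three stated conditions, is used as an assumed stand-in:

```python
import numpy as np

def upward_propagate(a, W):
    """a: (i,) input vector in [0,1]^i; W: (k, i) CCILW matrix whose rows
    are the cluster centers (CRPDs). Returns k degree-of-confidence (DoC)
    values, one per CAC-layer node."""
    n = a.shape[0]
    # Stage 1 (Eq. (3)): Euclidean distance to each center, normalized by
    # sqrt(n), the maximum distance between two points of the unit hypercube.
    d = np.linalg.norm(W - a, axis=1) / np.sqrt(n)
    # Stage 2 (stand-in for Eq. (4)): continuous, monotonic f with
    # f(0) = 1 and f(1) = 0; the quadratic form is an assumption.
    return (1.0 - d) ** 2

W = np.array([[0.1, 0.1],    # hypothetical CRPD of one abstract concept
              [0.9, 0.9]])   # hypothetical CRPD of another
doc = upward_propagate(np.array([0.1, 0.1]), W)
```

An input that coincides with a node's CRPD yields a DoC of 1 at that node, while more distant nodes receive proportionally lower DoC values.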

Regulation Mechanism
The regulation operation in RAN modeling is performed in three steps: first, an impact factor is deduced from the CSRL matrix; second, the intra-layer (IL) contribution to the activation at a node by the other nodes in the same layer is determined; third, the actual activation at a node is obtained as a function of its self-activation and the intra-layer activation induced on it by the other nodes.

Impact Factor (σ) Construction and Interpretation
The impact factor is a function that interprets the CSRL weight values (in the range [0, 1]) as excitatory, inhibitory, or neutral weights. The purpose of the CSRL weights is to determine how concurrently two nodes (e.g., S_1 and S_2) are active. If the CSRL weight is intermediate, i.e., 0.5, it signifies that the two nodes are 50% concurrently active (depicting a state of confusion); therefore, these nodes have no impact on each other in the same layer. If the CSRL weight of two nodes is "0", then the two nodes were never active simultaneously, which indicates that the two nodes are inhibitors of each other. Finally, if the CSRL weight of two nodes is "1", then the nodes were always active conjointly, exciting each other's activation.
This comprehension of the CSRL weights W is expressed by Equation (5), where σ_m→n is the impact of node m over node n. Figure 6 shows a graphical view of the impact factor σ (Equation (5)), depicting the excitatory, inhibitory, or neutral interpretations of the CSRL weights. Figure 5 shows the CSRL weights and their respective σ weights for both layers. At layer 0, the nodes S_1 and S_2 have a very minimal excitatory impact on each other. At layer 1 (where, in Figure 3, each node C_1, ..., C_5 corresponds to one of the five clusters), node C_1 has no impact on node C_2 (and vice versa). There are many negative weights in the σ matrix of layer 1, indicating that these nodes inhibit each other. In Figure 3, we see that the clusters represented by C_2 and C_5 are very close, so the activations observed at both nodes must be very similar. Hence, a high CSRL weight is learned between nodes C_2 and C_5; notably, both exhibit a strong excitatory behavior towards each other.
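The interpretation above pins down three anchor points: W = 0 is fully inhibitory, W = 0.5 is neutral, and W = 1 is fully excitatory. A linear mapping through those points can serve as a stand-in for Equation (5); the linear form is an assumption, since Figure 6 (not reproduced here) may show a different curve:

```python
def impact_factor(w):
    """Assumed stand-in for Equation (5): map a CSRL weight w in [0, 1] to
    an impact in [-1, 1] so that w = 0 -> -1 (inhibitory), w = 0.5 -> 0
    (neutral), and w = 1 -> +1 (excitatory). Only these anchor points are
    fixed by the text; the linear interpolation between them is an
    assumption."""
    return 2.0 * w - 1.0
```

Applied element-wise to a CSRL matrix, this yields the σ matrix whose negative entries mark mutually inhibiting nodes and whose positive entries mark mutually exciting ones.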

Intra-Layer Activation
The objective of calculating the intra-layer (IL) activation is to determine the amount of activation a node n receives from all the other m nodes of the same layer. To obtain the intra-layer activation at node n, the approach must address three requirements. First, the intra-layer activation must consider the excitatory, inhibitory, or neutral impact (σ) of all m nodes over node n. Second, the current activation of the m nodes and their CSRL weights (W_m→n) to node n should be considered in calculating the activation of node n. Third, the intra-layer activation computed for node n must be in the range [0, 1]. Equation (8) conforms to all three requirements.
Figure 6. Excitatory, inhibitory, and neutral effect of CSRL weights (W) when transformed using the impact factor σ.

Intra-Layer Regulation
To identify the actual activation AA(a_n) at node n, this operation uses a regulation factor (ρ) to decide the shares contributed by the self-activation of node n and by the intra-layer activation at node n, i.e., IL(a_n). Equation (9) shows the mathematical function for the intra-layer regulation operation. From Equation (9), we can observe that if ρ is 0, i.e., without any regulation, only the self-activation of node n contributes to the actual activation.
AA(a_n) = (1 − ρ) * a_n + ρ * IL(a_n) (9)

Algorithm 1 presents the intra-layer regulation operation in algorithmic form. This regulation operation is important when propagating the activation from an abstract concept layer to the input layer, as described in Section 3.4.

Algorithm 1: Intra-Layer Regulation
Input: current activation a_n at each node n of layer L
Input: CSRL weights W at layer L
Input: impact matrix σ at layer L
Initialization: regulation factor ρ ∈ [0, 1]
foreach a_n in L do
    calculate IL(a_n), using Equation (8)
    calculate the actual activation AA(a_n), using Equation (9)
end
return AA(a_n)
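Algorithm 1 can be sketched as below. Since the body of Equation (8) is not reproduced in this excerpt, IL(a_n) is implemented as a stand-in satisfying the three stated requirements (it uses σ, the peer activations, and the CSRL weights, and stays in [0, 1]); Equation (9) is implemented as written:

```python
import numpy as np

def intra_layer_activation(a, W, sigma, n):
    """Stand-in for Equation (8): activation that node n receives from its
    peers. Each peer m contributes a[m], weighted by W[m, n] and signed by
    sigma[m, n]; the normalized sum is mapped into [0, 1]. The exact
    combination is an assumption, not the paper's equation."""
    k = len(a)
    num = sum(sigma[m, n] * W[m, n] * a[m] for m in range(k) if m != n)
    den = sum(W[m, n] for m in range(k) if m != n) or 1.0
    return float(np.clip(0.5 + 0.5 * num / den, 0.0, 1.0))

def intra_layer_regulation(a, W, sigma, rho):
    """Algorithm 1: for every node n of the layer, blend self-activation
    with peer-induced activation using Equation (9)."""
    return np.array([(1.0 - rho) * a[n] + rho * intra_layer_activation(a, W, sigma, n)
                     for n in range(len(a))])

a = np.array([0.9, 0.1, 0.5])   # current activations of a three-node layer
W = np.full((3, 3), 0.5)        # hypothetical neutral CSRL weights
sigma = 2.0 * W - 1.0           # impacts under an assumed linear Eq. (5)
```

With ρ = 0, the output equals the input activations (no regulation), exactly as Equation (9) prescribes; increasing ρ shifts each node toward the activation induced by its peers.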

Geometric Back-Propagation Operation
Geometric back-propagation (GBP) is a downward propagation mechanism in RAN modeling. This method enables us to determine an activation vector a_m = <a_1, ..., a_i, ..., a_m> at layer L−1 for an expected activation (E-A) vector A_n = <A_1, ..., A_j, ..., A_n> at layer L. This operation is a window operation that takes place between two adjacent layers, i.e., layers L and L−1. For instance, if the RAN model has three layers, L_0 (input layer), L_1, and L_2 (output layer), then two GBP operations take place, first between L_2 and L_1, and then between L_1 and L_0. Figure 7 shows the single-window operation between two layers. The GBP mechanism commences with an expected activation vector A_n at layer L. Next, a starting input vector a_m is injected into layer L−1. We then enter a cycle in which we propagate the activations of the nodes in layer L−1 upwards to layer L and determine the observed activation (O-A) vector Â_n = <Â_1, ..., Â_j, ..., Â_n> at layer L. Furthermore, an error vector e is calculated from the expected and observed activation vectors A_n and Â_n through Equation (10). The error vector e is used to determine an accumulated delta value Δa_i (see Equation (12)) based upon the function expressed by Equation (11). This Δa_i value is then added to the activation vector at layer L−1 (see Equation (13)) to obtain a new input vector a_m^new. The cycle is repeated with the new input a_m^new at layer L−1 until the error is minimized or the cycle count reaches the user-defined maximum iteration threshold. Algorithm 2 presents the detailed geometric back-propagation algorithm. As mentioned earlier, the GBP operation takes place between two consecutive layers; however, if the hierarchy has more than two layers, then, with the window operation, it is possible to propagate the E-As injected at the nodes of the top-most layer L down to input layer 0.
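The cycle above can be sketched for a single window between two adjacent layers. The upward pass uses the normalized-distance similarity with an assumed linear f(d) = 1 − d, and the update rule, which moves the layer L−1 vector toward or away from each center in proportion to that node's signed error, is an assumed geometric reading of Equations (10)-(13), not the paper's exact Algorithm 2:

```python
import numpy as np

def forward(a, W):
    """Upward propagation: normalized Euclidean distance to each CCILW row,
    turned into a similarity with an assumed f(d) = 1 - d."""
    n = a.shape[0]
    return 1.0 - np.linalg.norm(W - a, axis=1) / np.sqrt(n)

def geometric_backprop(expected, W, start, max_iter=1000, step=0.05):
    """Single-window GBP sketch between layers L and L-1.
    expected: (k,) E-A vector at layer L; W: (k, i) CCILW matrix (centers);
    start: (i,) starting input vector at layer L-1."""
    a = start.astype(float).copy()
    for _ in range(max_iter):
        observed = forward(a, W)              # O-A at layer L
        e = expected - observed               # Eq. (10): error vector
        # Assumed delta rule: each node pulls the input toward its center
        # (positive error) or pushes it away (negative error).
        delta = step * (e[:, None] * (W - a)).sum(axis=0)
        a = np.clip(a + delta, 0.0, 1.0)      # new input at layer L-1
    return a

W = np.array([[0.1, 0.1],     # hypothetical centers of two abstract concepts
              [0.9, 0.9]])
expected = forward(np.array([0.9, 0.9]), W)   # an E-A achievable at (0.9, 0.9)
recalled = geometric_backprop(expected, W, start=np.array([0.5, 0.5]))
```

Each intermediate value of a traces one point of the recall trajectory discussed in the Toy-data experiments; when the E-A is achievable, the trajectory converges to the input vector that produces it.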

Recall Demonstration with Toy-Data
Two types of experiments are performed in this section: first, the single-cue recall (SCR) operation, where the recall is performed based upon the expected activation of one node in the abstract concept layer, and, second, the multiple-cue recall (MCR) mechanism, where the recall procedure is carried out for expected activations at all the nodes in the abstract concept layer. The experiments demonstrated in this section use the two-layered model generated with the RAN methodology (see the two-layered model obtained at step 3 in Figure 4 in Section 3.2). In the generated RAN model, the five abstract concept nodes (C_1, C_2, C_3, C_4, and C_5) correspond to clusters 3, 1, 4, 2, and 5, respectively (see Figure 3 for the clusters). In both SCR and MCR operations, the five abstract concept nodes at layer 1 are injected with an expected activation set. The geometric back-propagation algorithm (Algorithm 2) then performs one thousand iterations of downward activation propagation to obtain appropriate values at input layer 0 as the recalled activation. In all of the recall simulations, the GBP operation is initialized with the values 0 and 0.60 as the starting point, and maxIter is set to 1000 iterations. The expected activation varies with the experiment, and two sets of regulation factors were determined empirically to demonstrate both recall procedures.

Single-Cue Recall (SCR) Experiment
In the SCR experiments, the objective was to determine the recalled activation at input layer 0 by injecting binary activation values as the expected activation in abstract concept layer 1. The expected activation vector contains the value 1 for only one abstract concept node, and 0 is assigned to the remaining nodes. In all SCR experiments, six regulation factors ρ (0%, 0.5%, 0.75%, 1%, 1.25%, and 1.5%) were used. Table 1 logs the E-A and the thousandth-iteration value of the O-A for the five SCR experiments. The following are the experiments demonstrating the SCR operation, along with observations:
• Exp-1: In the first experiment, we injected an E-A vector [0, 0, 0, 0, 1] at abstract concept layer 1. The objective was to recall activations at input layer 0 for which a very high activation is observed at node C_5 at layer 1 and comparatively lower activations at the other four nodes. The GBP algorithm was executed six times with an E-A of [0, 0, 0, 0, 1], once for each of the six regulation factors (ρ). The observations for Exp-1 (see Table 1) show that, with a ρ of 0.75%, the maximum activation of 0.85 was observed at node C_5. As expected, node C_2 also received good activation, because nodes C_2 and C_5 represent clusters 1 and 5 (see Figure 3), which are close to one another. Figure 8a shows the six trajectories for the six regulation factors; each trajectory is formed by one thousand iterations. In Figure 8a, the yellow marker shows the CRPD of the cluster of C_5, and the trajectory with a ρ of 0.75% converges closest to this CRPD. Thus, an activation vector [0.1, 0.24] is recalled at the input nodes [S_1, S_2] for the given E-A vector [0, 0, 0, 0, 1].
• Exp-2: In this experiment, the E-A provided to the GBP algorithm was [0, 0, 0, 1, 0], to recall the activation at layer 0 that is most strongly represented by node C_4. For each regulation factor, the GBP algorithm was run; the O-As obtained at layer 1 are listed in Table 1, and the corresponding recalled activations at layer 0 are shown in Figure 8b. From the observations, it can be deduced that the experiment with a ρ of 0.75% produced the best outcome and recalled the activation [0.9, 0.9] for input layer 0.
• Exp-3: In this experiment, the GBP algorithm was supplied with an E-A vector of [0, 0, 1, 0, 0] to recall the input layer 0 vector represented by node C_3 at layer 1. Figure 8c and Table 1 show the recall trajectories at layer 0 and the O-A vector at layer 1, respectively. The experiment with a regulation of 0.75% displayed the best representation. A vector [0.92, 0.11] was recalled at layer 0 for the injected E-A vector.
• Exp-4: The aim of this experiment was to recall an input vector that closely represents the abstract concept node C_2 by feeding the GBP algorithm an E-A vector of [0, 1, 0, 0, 0]. After applying the six regulation factors, one per GBP run, it was observed that the experiment with a ρ of 0.75% displayed the best result. Table 1 shows the O-A for this E-A. Figure 8d shows the trajectories of the recalled values; the best outcome, with a ρ of 0.75%, converges to an activation vector [0.14, 0.07].

Multiple-Cue Recall (MCR) Experiment
The MCR experiments were carried out to determine the recall vector at input layer 0 for an E-A vector at layer 1. The constituents of the E-A vector are degree of confidence (DoC) values that define the expected representation of each abstract concept node at layer 1. To demonstrate MCR, five experiments were performed, and in every experiment six regulation factors ρ (0%, 0.1%, 0.2%, 0.3%, 0.4%, and 0.5%) were used to make inferences. Table 2 lists the observations of the O-A for the respective E-A in each MCR experiment. In one experiment, the injected E-A was meant to recall an activation at input layer 0 that is 57%, 16%, 6%, 15%, and 25% represented by nodes C_1, C_2, C_3, C_4, and C_5, respectively (see Figure 9e for all trajectories). The same observations were made for the O-A vectors for ρ of 0% and 0.1% at layer 1 (see Table 2).

Discussion
The experiments in Sections 3.5.1 and 3.5.2 demonstrate a notable behavior of RAN by simulating the cued recall operation through a Toy-data problem. The intra-layer learning (i.e., CSRL) is uniquely utilized by RAN modeling to interpret the associations among the concepts as inhibitory, excitatory, or neutral. Furthermore, the intra-layer regulation (Algorithm 1) uses intra-layer learning (CSRL) and its interpretations to produce a regulatory effect over the activations of the concepts (at the same layer). The geometric back-propagation operation (Algorithm 2) is a method analogous to remembering something learned in an abstract form and recalling its concrete features. For example, while remembering the abstract concept "house", we recall concrete features related to the house, such as "mother", "father", "wife", and "pets".
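The regulation step described above can be sketched as follows. This is a minimal illustration, assuming signed CSRL weights in which positive entries act as excitatory, near-zero entries as neutral, and negative entries as inhibitory, combined through a simple additive update; it is not the exact Algorithm 1.

```python
import numpy as np

def regulate(activations, csrl_weights, rho):
    """Adjust each node's activation by a regulation factor rho using the
    signed intra-layer (CSRL) weights of its peers (hypothetical sketch)."""
    w = csrl_weights.copy()
    np.fill_diagonal(w, 0.0)            # a node does not regulate itself
    influence = w @ activations         # net excitatory/inhibitory peer effect
    regulated = activations + rho * influence
    return np.clip(regulated, 0.0, 1.0) # keep activations in [0, 1]
```

With a positive (excitatory) peer weight the activation rises, and with a negative (inhibitory) weight it falls, mirroring the three kinds of regulatory signals described in the text.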
In the graphs in Figures 8 and 9, we can see that all the trajectories commence from a starting point (red dot) and converge to a point after one thousand iterations. Each point in a trajectory represents a temporal mental state while recalling a concrete concept. Every time a concrete concept (activation vector in layer 0) is recalled, its corresponding abstract concept (at layer 1) is compared with the expected abstract concepts. The difference between expected and observed activation is propagated back as the error to the previously recalled activations at layer 0. In the next instance, the corrected recalled activation at layer 0 repeats the process until one thousand iterations are completed.
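The iterative recall loop described in this paragraph can be sketched roughly as below. The upward propagation is modelled here as an averaged linear projection through the inter-layer weights, which is an assumption standing in for RAN's actual activation-propagation step; the function and its parameters are illustrative, not the paper's exact Algorithm 2.

```python
import numpy as np

def geometric_back_propagation(expected_abs, ilw, n_iter=1000, lr=0.1):
    """Sketch of the recall loop: start from a layer-0 vector, propagate it
    up through the inter-layer weights (ILW), compare the observed abstract
    activation with the expected one, and push the error back onto the
    layer-0 activations (assumed linear model, not RAN's exact operation)."""
    n_inputs = ilw.shape[1]
    x = np.ones(n_inputs)                 # initial layer-0 activation
    trajectory = [x.copy()]               # the temporal "mental states"
    for _ in range(n_iter):
        observed = ilw @ x / n_inputs     # assumed upward propagation
        error = expected_abs - observed   # error at the abstract layer
        x = np.clip(x + lr * (ilw.T @ error), 0.0, 1.0)
        trajectory.append(x.copy())
    return x, trajectory
```

Each entry of `trajectory` corresponds to one point on the recall trajectories plotted in Figures 8 and 9.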
It was observed that without regulation, i.e., with a ρ of 0%, the trajectory converges to a point, but with a minimal amount of regulation, the result improves. For instance, in the graphs in Figure 8, only one abstract concept was being recalled, and the results improved when regulation was introduced. In the two experiments (SCR and MCR), two different sets of regulation factors were considered. These sets were obtained empirically, but the set of regulation factors for the SCR experiment has higher values. This is because the GBP algorithm strives to minimize the error at each abstract concept node at layer 1, and in the geometric context, the similarity cannot be the same for more than one abstract concept. In the MCR experiment, the best outcome is observed with little or no regulation because the expected similarities (the DoC values in the E-A) are non-zero. The other reason is that these are feasible expected similarity vectors, unlike those in the SCR experiments.

Cued Recall Demonstration with MNIST Data
The MNIST [70] dataset is a collection of handwritten images of the digits 0 through 9, where each image is grayscale and has a size of 28 × 28 pixels. This image dataset is used to demonstrate the cued recall operation of learned abstract concepts representing the different digits. Two types of investigations were conducted with this dataset: first, multiple binary valued cue recall (MBVCR), where the E-A vector is a binary-valued ({0, 1}) vector, and second, multiple-cue recall (MCR).
For this experiment, one thousand images were selected randomly from the MNIST dataset. Each 28 × 28 image was transformed into a single vector of 784 attributes, where each attribute corresponds to a pixel of the image. Additionally, the attribute values of the data were normalized between 0 and 1 using min-max normalization (a black pixel is the minimum, i.e., 0, and a white pixel is the maximum, i.e., 255). Having preprocessed the data, the RAN modeling procedure was instantiated by selecting the K-means clustering algorithm as the concept identifier. K was initialized to 30 to determine thirty categories in the input space. The model was configured to grow one level deep and build convex abstract concept (CAC) layer 1. After carrying out all four steps of RAN modeling (see Section 3.2), a model was obtained (see Figure 10). In Figure 10, layer 0 has 784 nodes, one for each pixel; CAC layer 1 has 30 nodes representing the thirty categories identified during the CCI process in RAN modeling. The inter-layer weights (ILWs) are the cluster centers (CRDPs) of the thirty clusters. Figure 11 shows the ILWs reconstructed as 28 × 28 pixel images. In RAN modeling, a CRDP is the optimum representative of an input-level category at CAC layer 1. Therefore, the images in Figure 11a-ad are best represented by CAC nodes N1, ..., N30, respectively. In Figure 11, it is noticeable that each digit is represented by at least two CAC nodes of layer 1. The digit 9 is represented by the largest number of nodes, i.e., N2, N15, N18, and N24. In contrast, digit 4 is represented by two nodes, N1 and N10. Figure 11d,m,u,x show that the CAC nodes N4, N13, N21, and N24 do not represent an individual digit. Nodes N4 and N13 jointly represent digits 3 and 8; node N21 looks like the two digits 3 and 5; and N24 depicts digits 7 and 9. For simplicity, Figure 10 shows only inter-layer learning; intra-layer learning (CSRL weights) was also performed on both input layer 0 and CAC layer 1.
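The preprocessing and concept-identification steps above can be sketched as follows. The plain K-means implementation and its deterministic initialization are illustrative assumptions, not the exact CCI procedure used in RAN modeling.

```python
import numpy as np

def build_cac_layer(images, k=30, n_iter=20):
    """Sketch: flatten 28x28 images into 784-attribute vectors, min-max
    normalize to [0, 1], and run a plain K-means so each cluster centre
    (CRDP) becomes the inter-layer weight vector of one CAC node."""
    x = images.reshape(len(images), -1).astype(float) / 255.0
    # deterministic init: spread the k initial centres across the dataset
    centres = x[np.linspace(0, len(x) - 1, k).astype(int)].copy()
    for _ in range(n_iter):
        # assign each sample to its nearest centre
        d = np.linalg.norm(x[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute each centre; keep the old one if its cluster went empty
        for j in range(k):
            if np.any(labels == j):
                centres[j] = x[labels == j].mean(axis=0)
    return centres  # k x 784 inter-layer weight matrix (ILW)
```

Reshaping any row of the returned matrix back to 28 × 28 yields an image like those in Figure 11.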
The CSRL weights at input layer 0 formed a 784 × 784 matrix, and at CAC layer 1, a 30 × 30 matrix was learned. These two intra-layer learning procedures were utilized by the GBP algorithm to simulate the recall operations. In all of the experiments, the GBP algorithm was configured to iterate five hundred times and was initialized with a vector with an activation of 1 for all 784 nodes of input layer 0. The image at Iter-0 (see Tables 3 and 4) is white because an activation of 1 corresponds to a pixel value of 255, i.e., white. In each experiment, the two cued recall demonstrations, MBVCR and MCR, use the expected activation (E-A) vectors listed in Table 5. The single-digit and combined-digit experiments for the MBVCR operation used an E-A vector of binary values, where a binary 1 was assigned to a node N with respect to the digit(s) being recalled. For instance, the E-A vector of digit 2 was formed by initializing the E-A vector with binary 1 for nodes N16, N22, and N29 and binary 0 for the remaining 27 CAC nodes (see Table 5). The E-A vectors of the MCR experiments are the actual activation values obtained by propagating the inter-layer weights (CRDPs, see Figure 11) upwards as input. The weights represented by Figure 11a-c,e,f,i,l-n,p were provided as input to the CCUAP operation to observe their respective activations at CAC layer 1. These observed activation vectors were used as the E-A for each digit recall operation (see the last ten MCR E-As in Table 5).
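Forming a binary E-A vector for an MBVCR experiment can be illustrated as below. The helper `binary_ea` is hypothetical; the node indices follow the paper's example for digit 2 (nodes N16, N22, and N29).

```python
import numpy as np

def binary_ea(active_nodes, n_nodes=30):
    """Hypothetical helper: build a binary E-A vector with 1 at the CAC
    nodes representing the digit being recalled and 0 elsewhere."""
    ea = np.zeros(n_nodes)
    ea[[i - 1 for i in active_nodes]] = 1.0  # the paper numbers nodes from N1
    return ea

# E-A for digit 2: nodes N16, N22, and N29 expect an activation of 1
ea_digit2 = binary_ea([16, 22, 29])
```

The resulting vector has binary 1 at three positions and binary 0 at the remaining 27, matching the digit-2 row of Table 5.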

Multiple Binary Valued Cue Recall (MBVCR) Operation
For the MBVCR operation, the RAN model generated with MNIST data (see Figure 10) was used in order to obtain the recalled activation at input layer 0 for a given expected activation vector at CAC layer 1. As described earlier, the E-A vector for MBVCR is a vector of binary values, which is provided as input to the GBP algorithm to perform the recall operation. The experiments themselves are divided into two categories, i.e., intuitive and non-intuitive recall.

Intuitive MBVCR Experiment
In this experiment, by intuition, we hypothesize that if all the CAC nodes representing a digit are activated with a value of 1, then the recall at layer 0 must depict that digit. For example, if the CAC nodes N6, N11, and N26 (see Figure 11) are activated with a value of 1 (and 0 for the others), then the recall operation should yield an image depicting the digit 0 as a blend of these nodes. We performed this intuitive recall experiment for all ten digits. The binary E-A vectors of all ten digits for the intuitive MBVCR operation are listed in Table 5. Table 3 displays the recalled images of all twenty experiments. For every digit, two investigations were made: the first without regulation, i.e., ρ = 0, and the second with a regulation of 0.009%.
The first observation is that there is only an insignificant difference between the images recalled with and without regulation. After the second iteration, the digit being recalled begins to appear. Beyond the 80th iteration, no significant change is observed in the recalled images. The recalled images of digits 0, 1, 2, 3, 7, and 8 are recognizable after the 500th iteration. However, digits 4, 5, and 9 are not very discernible in their last iteration; this is because these digits are cross-represented by CAC nodes (see Figure 11). All the images recalled in this experiment contain noise (i.e., gray shades) because the E-A vector has only two values, 0 or 1, and a node can be 100% similar to only one other node. Therefore, the GBP algorithm adjusts the activations at the CAC nodes such that the best representation of the E-A is achieved.

Non-Intuitive MBVCR Experiment
In these experiments, the E-A vector contains an activation value of 1 for CAC nodes representing two different digits. The objective of the experiment was to determine what is recalled at input layer 0 when the CAC nodes representing two different digits expect high activation. The three E-As used in this experiment combine the activations of 2-with-5, 3-with-5, and 0-with-1 (see Table 5 for the E-A vectors with the coupled digits). The observations without regulation and with regulation are similar (see Table 6). The blend of 2-with-5 recalls an image that looks like the letter x. The fusion of 3-with-5 recalls an image similar to the digit 3. The combination of 0-with-1 looked like the symbol Φ in the beginning, but this was distorted later. It was also observed that the images obtained after all the iterations had less noise compared to those of the intuitive MBVCR experiments. This is probably because a larger number of CAC nodes were expecting activation, i.e., more cues were provided.

Multiple-Cue Recall (MCR) Experiment
This experiment is the same as the one discussed in Section 3.5.2. The E-A vectors are the activation values observed at the CAC nodes by propagating the inter-layer weights upwards using the CCUAP operation of RAN modeling. Figure 11 shows the images reconstructed from each inter-layer weight vector. The E-As corresponding to Figure 11a-c,e,f,i,l-n,p are listed in Table 5 and are used in the MCR demonstrations of this section.
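Deriving an MCR E-A vector from a CRDP can be sketched as follows. The distance-based similarity used here is an assumption standing in for RAN's CCUAP operation, chosen only to illustrate how a CRDP fed back as input yields graded activations at all CAC nodes.

```python
import numpy as np

def mcr_expected_activation(crdp, ilw):
    """Sketch: feed one CRDP (a row of the inter-layer weights) back in as
    a layer-0 input and observe the activation it produces at every CAC
    node, modelled as a distance-based similarity in [0, 1] (assumed stand-in
    for the CCUAP operation)."""
    d = np.linalg.norm(ilw - crdp, axis=1)  # distance to every node's CRDP
    return 1.0 - d / (d.max() + 1e-12)      # the nearest CRDP activates fully
```

The node whose CRDP was fed in receives an activation of 1, while the remaining nodes receive graded non-zero values, which is what distinguishes these MCR E-As from the binary MBVCR cues.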
The objective of this experiment was the same as that of MBVCR experiments, i.e., obtaining an activation vector at input layer 0 that corresponds to an E-A vector. However, in this experiment, an expected recall (E-R) was already known. Therefore, the E-As of ten digits (see MCR E-As in Table 5) were expected to recall the images in Figure 11a-c,e,f,i,l-n,p.
In this experiment, the observations with and without regulation are identical. It is also worth noting that after the 500th iteration, the recalled images of all ten digits were similar to the E-R images of each digit.

Discussion
There are a few things worth mentioning about the recall demonstrations of RAN modeling with the MNIST dataset. First, we can reconstruct cognizable images of a digit by activating the CAC nodes representing that digit. Second, it is possible to recall both intuitive and non-intuitive blends of learned abstract concepts (in these experiments, the abstract concepts are generic representations of digits). Third, the recalled activations, with and without regulation, are similar for a complex dataset like MNIST. Last, the more cues we provide in the E-A vector, the more accurate the recall operation becomes. The recall capability of RAN modeling was applied to the reconstruction of an encoded image, where the image was encoded using RAN convex concept modeling and reconstruction was performed via the geometric back-propagation (or recall) operation [71].

Conclusions
Recall is a cognitive process that can also be seen as an act of remembering a concept. Concepts are normally perceived in a hierarchical form, where concrete concepts occupy the lower levels and abstract concepts take up the relatively higher levels of the hierarchy. According to context availability theory, the context among concrete concepts is more easily determined than that among abstract concepts; hence, the comprehension and recall of abstract concepts are more difficult. However, if we can relate abstract concepts to one another, it is possible to deduce a contextual relationship among them. In this article, we exploited the intra-layer associations learned among the concepts (including abstract concepts) using RAN modeling to establish context among the concepts. We used this context-related information to induce a regulatory effect on the concepts and, further, to simulate the recall operations.
To demonstrate the effect of regulation on the recall process, a Toy-data problem was considered. First, we modeled the Toy-data to identify five abstract concepts. The proposed regulation algorithm utilized the learned intra-layer weights to determine the excitatory, neutral, and inhibitory impact induced by peer nodes on one another. Two types of cued recall experiments were performed using the unique geometric back-propagation algorithm: first, a single-cue recall (SCR) simulation, where the recall was simulated by activating only one abstract concept; second, a multiple-cue recall (MCR) operation to retrieve the activation vector at the input level by injecting multiple cues at the abstract nodes. In the SCR experiments, the regulation induced by peer nodes improved the recalled values. The observations with the MCR operations were promising because they retrieved activations identical to those expected.
The benchmark MNIST dataset was used to exhibit cued recall as blends of learned abstract concepts. A two-layered model was generated with RAN to obtain thirty abstract concepts generically representing the digits. In the multiple binary valued cue recall (MBVCR) experiments, multiple abstract nodes were injected with high activation to recall blends of digits. Interestingly, it was observed in all the experiments that the blend of abstract nodes recalled an image of the digit that they represent at the abstract level. The blends of different digits also produced some intriguing outcomes; for example, a blend of 2 and 5 recalled an image resembling the letter x, and a blend of 0 with 1 looked like the symbol Φ. The MCR operations were interesting because, upon injecting multiple cues, the recalled image was very similar to the expected recalled image.
Both experiments displayed how concepts can be contextually associated and impact each other's activation through regulation. Furthermore, from the cued recall operations, it can be concluded that the more cues are injected at the abstract concepts, the better the obtained recall results. For future work, we intend to perform conceptual combination experiments and study aspects of creative concept retrieval with the geometric back-propagation algorithm.
Author Contributions: R.S. performed the state-of-the-art review, developed and implemented the methodology, carried out data selection and methodology validation, and prepared the original draft of the article. B.R. supervised the research work, performed the formal analysis, review, and editing, and took care of funding. A.M.P. conceived the study plan and methodology and supervised the investigation and the methodology development and implementation. A.C. supervised the research work, performed formal analysis, review, and editing, and managed funding. All authors have read and agreed to the published version of the manuscript.

Funding:
The work presented in this paper was partially carried out in the scope of the SOCIALITE Project (PTDC/EEI-SCR/2072/2014), co-financed by COMPETE 2020, Portugal 2020-Operational Program for Competitiveness and Internationalization (POCI), European Union's ERDF (European Regional Development Fund), and the Portuguese Foundation for Science and Technology (FCT). This work was also partially funded by project ConCreTe. ConCreTe acknowledges the financial support of the Future and Emerging Technologies (FET) program within the Seventh Framework Programme for Research of the European Commission, under FET grant number 611733.
Acknowledgments: I would like to express special thanks to Christine Zhang, Assistant Editor of the Applied Sciences journal, for her help in finding funds to support this article's publication cost.

Conflicts of Interest:
The authors declare no conflict of interest.

Abbreviations
The following abbreviations and notations are used in this manuscript: