# Biologically-Based Computation: How Neural Details and Dynamics Are Suited for Implementing a Variety of Algorithms

## Abstract

## 1. Introduction

## 2. A General Approach to Spatiotemporal Modelling

1. assigning tuning curves to individual neurons (representation);
2. defining relationships between the variables the neurons are tuned over (computation and dynamics); and
3. finding synaptic weights such that both the tuning and the desired computation emerge in the final network (implementation, which was originally combined with the other three principles).

#### 2.1. Tuning Curves as Building Blocks for Models of Neurobiological Systems

#### 2.2. A Linear-Nonlinear Atemporal Tuning Curve Model

#### 2.3. Spatiotemporal Tuning Curves

#### 2.4. Transforming Represented Values

#### 2.5. Solving for Synaptic Weights

## 3. Time

#### 3.1. Time Cells and Temporal Bases

#### 3.2. Predicting Dynamic Systems

#### 3.3. Learned Legendre Predictor: Online Multi-Step Prediction

#### 3.4. Applications of Time Cell Tuning and Temporal Bases

## 4. Space

#### 4.1. Spatial Semantic Pointers

#### 4.2. Place, Grid, and Border Cells

#### 4.3. Sparse Representations

## 5. Space and Time

#### 5.1. Path Integration

#### 5.2. Temporal Integration

## 6. Quasi-Probability Representations

## 7. Discussion

## 8. Conclusions

## 9. Patents

## Author Contributions

## Funding

## Institutional Review Board Statement

## Data Availability Statement

## Conflicts of Interest

## References

- Eliasmith, C.; Anderson, C.H. Neural Engineering: Computation, Representation, and Dynamics in Neurobiological Systems; MIT Press: Cambridge, MA, USA, 2003.
- Eliasmith, C.; Stewart, T.C.; Choo, X.; Bekolay, T.; DeWolf, T.; Tang, Y.; Rasmussen, D. A Large-Scale Model of the Functioning Brain. Science **2012**, 338, 1202–1205.
- Choo, X. Spaun 2.0: Extending the World’s Largest Functional Brain Model. Ph.D. Thesis, University of Waterloo, Waterloo, ON, Canada, 2018.
- Reed, S.; Zolna, K.; Parisotto, E.; Colmenarejo, S.G.; Novikov, A.; Barth-Maron, G.; Gimenez, M.; Sulsky, Y.; Kay, J.; Springenberg, J.T.; et al. A generalist agent. arXiv **2022**, arXiv:2205.06175.
- Silver, D.; Huang, A.; Maddison, C.J.; Guez, A.; Sifre, L.; Van Den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; et al. Mastering the game of Go with deep neural networks and tree search. Nature **2016**, 529, 484–489.
- Voelker, A.; Kajić, I.; Eliasmith, C. Legendre memory units: Continuous-time representation in recurrent neural networks. Adv. Neural Inf. Process. Syst. **2019**, 32, 15570–15579.
- Chilkuri, N.R.; Eliasmith, C. Parallelizing Legendre Memory Unit Training. In Proceedings of the 38th International Conference on Machine Learning, Virtual Event, 18–24 July 2021; Meila, M., Zhang, T., Eds.; Volume 139, pp. 1898–1907.
- Anderson, J.R.; Matessa, M.; Lebiere, C. ACT-R: A Theory of Higher Level Cognition and Its Relation to Visual Attention. Hum.–Comput. Interact. **1997**, 12, 439–462.
- Laird, J.E.; Newell, A.; Rosenbloom, P.S. SOAR: An architecture for general intelligence. Artif. Intell. **1987**, 33, 1–64.
- McClelland, J.L.; Rumelhart, D.E.; PDP Research Group. Parallel Distributed Processing; MIT Press: Cambridge, MA, USA, 1986.
- Eliasmith, C. How to Build a Brain: A Neural Architecture for Biological Cognition; Oxford Series on Cognitive Models and Architectures; Oxford University Press: Oxford, UK, 2013.
- Dumont, N.S.Y.; Eliasmith, C. Accurate representation for spatial cognition using grid cells. In Proceedings of the 42nd Annual Meeting of the Cognitive Science Society, Online, 29 July–1 August 2020; Cognitive Science Society: Toronto, ON, Canada, 2020; pp. 2367–2373.
- Komer, B. Biologically Inspired Spatial Representation. Ph.D. Thesis, University of Waterloo, Waterloo, ON, Canada, 2020.
- Bartlett, M.; Stewart, T.C.; Orchard, J. Biologically-Based Neural Representations Enable Fast Online Shallow Reinforcement Learning. In Proceedings of the Annual Meeting of the Cognitive Science Society, Toronto, ON, Canada, 27–30 July 2022; Volume 44.
- Hubel, D.H.; Wiesel, T.N. Receptive Fields of Single Neurones in the Cat’s Striate Cortex. J. Physiol. **1959**, 148, 574–591.
- Gayler, R. Vector Symbolic Architectures Answer Jackendoff’s Challenges for Cognitive Neuroscience. In Proceedings of the ICCS/ASCS International Conference on Cognitive Science, Sydney, Australia, 13–17 July 2003; pp. 133–138.
- Bekolay, T.; Bergstra, J.; Hunsberger, E.; DeWolf, T.; Stewart, T.C.; Rasmussen, D.; Choo, X.; Voelker, A.R.; Eliasmith, C. Nengo: A Python tool for building large-scale functional brain models. Front. Neuroinform. **2014**, 7, 48.
- Stöckel, A.; Eliasmith, C. Computational properties of multi-compartment LIF neurons with passive dendrites. Neuromorphic Comput. Eng. **2022**.
- Duggins, P.; Eliasmith, C. Constructing functional models from biophysically-detailed neurons. PLoS Comput. Biol. **2022**, 18, e1010461.
- Hofmann, T.; Schölkopf, B.; Smola, A.J. Kernel Methods in Machine Learning. Ann. Stat. **2008**, 36, 1171–1220.
- Watson, A.B.; Ahumada, A.J., Jr. A Look at Motion in the Frequency Domain; NASA Technical Memorandum 84352; NASA Ames Research Center: Moffett Field, CA, USA, 1983.
- Adelson, E.H.; Bergen, J.R. Spatiotemporal Energy Models for the Perception of Motion. J. Opt. Soc. Am. A Opt. Image Sci. **1985**, 2, 284–299.
- Carandini, M.; Heeger, D.J.; Movshon, J.A. Linearity and Gain Control in V1 Simple Cells. In Models of Cortical Circuits; Ulinski, P.S., Jones, E.G., Peters, A., Eds.; Springer: Berlin/Heidelberg, Germany, 1999; pp. 401–443.
- Stöckel, A. Harnessing Neural Dynamics as a Computational Resource. Ph.D. Thesis, University of Waterloo, Waterloo, ON, Canada, 2022.
- Marĉelja, S. Mathematical Description of the Responses of Simple Cortical Cells. J. Opt. Soc. Am. **1980**, 70, 1297–1300.
- Field, D.J.; Tolhurst, D.J.; Campbell, F.W. The Structure and Symmetry of Simple-Cell Receptive-Field Profiles in the Cat’s Visual Cortex. Proc. R. Soc. Lond. Ser. B Biol. Sci. **1986**, 228, 379–400.
- Roth, A.; van Rossum, M.C.W. Modeling Synapses. In Computational Modeling Methods for Neuroscientists; De Schutter, E., Ed.; MIT Press: Cambridge, MA, USA, 2009; pp. 139–159.
- Stöckel, A.; Eliasmith, C. Passive Nonlinear Dendritic Interactions as a Computational Resource in Spiking Neural Networks. Neural Comput. **2021**, 33, 96–128.
- Stöckel, A.; Stewart, T.C.; Eliasmith, C. Connecting Biological Detail with Neural Computation: Application to the Cerebellar Granule-Golgi Microcircuit. Top. Cogn. Sci. **2021**, 13, 515–533.
- Voelker, A.R.; Eliasmith, C. Improving Spiking Dynamical Networks: Accurate Delays, Higher-Order Synapses, and Time Cells. Neural Comput. **2018**, 30, 569–609.
- Pastalkova, E.; Itskov, V.; Amarasingham, A.; Buzsáki, G. Internally Generated Cell Assembly Sequences in the Rat Hippocampus. Science **2008**, 321, 1322–1327.
- Lusk, N.A.; Petter, E.A.; MacDonald, C.J.; Meck, W.H. Cerebellar, hippocampal, and striatal time cells. Curr. Opin. Behav. Sci. **2016**, 8, 186–192.
- MacDonald, C.J.; Lepage, K.Q.; Eden, U.T.; Eichenbaum, H. Hippocampal “Time Cells” Bridge the Gap in Memory for Discontiguous Events. Neuron **2011**, 71, 737–749.
- Howard, M.W.; MacDonald, C.J.; Tiganj, Z.; Shankar, K.H.; Du, Q.; Hasselmo, M.E.; Eichenbaum, H. A Unified Mathematical Framework for Coding Time, Space, and Sequences in the Hippocampal Region. J. Neurosci. **2014**, 34, 4692–4707.
- Tripp, B. A Search For Principles of Basal Ganglia Function. Ph.D. Thesis, University of Waterloo, Waterloo, ON, Canada, 2009.
- Medina, J.F.; Mauk, M.D. Computer Simulation of Cerebellar Information Processing. Nat. Neurosci. **2000**, 3, 1205–1211.
- Chilkuri, N.; Hunsberger, E.; Voelker, A.; Malik, G.; Eliasmith, C. Language Modeling using LMUs: 10× Better Data Efficiency or Improved Scaling Compared to Transformers. arXiv **2021**, arXiv:2110.02402.
- Widrow, B.; Hoff, M.E. Adaptive Switching Circuits. In Proceedings of the 1960 IRE WESCON Convention Record (Pt. 4), Los Angeles, CA, USA, 23–26 August 1960; pp. 96–104.
- MacNeil, D.; Eliasmith, C. Fine-Tuning and the Stability of Recurrent Neural Networks. PLoS ONE **2011**, 6, e0022885.
- Furlong, P.M.; Stöckel, A.; Stewart, T.; Eliasmith, C. Learned Legendre Predictor: Learning with Compressed Representations for Efficient Online Multistep Prediction; Technical Report; Centre for Theoretical Neuroscience: Waterloo, ON, Canada, 2022.
- Stöckel, A.; Stewart, T.C.; Eliasmith, C. A Biologically Plausible Spiking Neural Model of Eyeblink Conditioning in the Cerebellum. In Proceedings of the Annual Meeting of the Cognitive Science Society, Online, 29 July–1 August 2020.
- De Jong, J.; Voelker, A.R.; van Rijn, H.; Stewart, T.C.; Eliasmith, C. Flexible Timing with Delay Networks—The Scalar Property and Neural Scaling. In Proceedings of the 17th Annual Meeting of the International Conference on Cognitive Modelling (ICCM), Montreal, QC, Canada, July 2019.
- Kominsky, J.F.; Baker, L.; Keil, F.C.; Strickland, B. Causality and continuity close the gaps in event representations. Mem. Cogn. **2021**, 49, 518–531.
- Tolman, E.C. Cognitive maps in rats and men. Psychol. Rev. **1948**, 55, 189.
- Thorndyke, P.W. Distance estimation from cognitive maps. Cogn. Psychol. **1981**, 13, 526–550.
- Izard, V.; Pica, P.; Spelke, E.S.; Dehaene, S. Flexible intuitions of Euclidean geometry in an Amazonian indigene group. Proc. Natl. Acad. Sci. USA **2011**, 108, 9782–9787.
- Fernandez, J.M.; Farell, B. Is perceptual space inherently non-Euclidean? J. Math. Psychol. **2009**, 53, 86–91.
- O’Keefe, J.; Dostrovsky, J. The hippocampus as a spatial map: Preliminary evidence from unit activity in the freely-moving rat. Brain Res. **1971**, 34, 171–175.
- Solstad, T.; Boccara, C.N.; Kropff, E.; Moser, M.B.; Moser, E.I. Representation of geometric borders in the entorhinal cortex. Science **2008**, 322, 1865–1868.
- Høydal, O.; Skytøen, E.R.; Andersson, S.O.; Moser, M.B.; Moser, E.I. Object-vector coding in the medial entorhinal cortex. Nature **2019**, 568, 400–404.
- Hafting, T.; Fyhn, M.; Molden, S.; Moser, M.B.; Moser, E.I. Microstructure of a spatial map in the entorhinal cortex. Nature **2005**, 436, 801–806.
- Dang, S.; Wu, Y.; Yan, R.; Tang, H. Why grid cells function as a metric for space. Neural Netw. **2021**, 142, 128–137.
- Komer, B.; Stewart, T.C.; Voelker, A.R.; Eliasmith, C. A neural representation of continuous space using fractional binding. In Proceedings of the 41st Annual Meeting of the Cognitive Science Society; Cognitive Science Society: Montreal, QC, Canada, 2019.
- Rahimi, A.; Recht, B. Random features for large-scale kernel machines. Adv. Neural Inf. Process. Syst. **2007**, 20, 1177–1184.
- Plate, T.A. Distributed Representations and Nested Compositional Structure. Ph.D. Thesis, University of Toronto, Toronto, ON, Canada, 1994.
- Frady, E.P.; Kleyko, D.; Kymn, C.J.; Olshausen, B.A.; Sommer, F.T. Computing on functions using randomized vector representations. arXiv **2021**, arXiv:2109.03429.
- Brun, V.H.; Solstad, T.; Kjelstrup, K.B.; Fyhn, M.; Witter, M.P.; Moser, E.I.; Moser, M.B. Progressive increase in grid scale from dorsal to ventral medial entorhinal cortex. Hippocampus **2008**, 18, 1200–1212.
- Sargolini, F.; Fyhn, M.; Hafting, T.; McNaughton, B.L.; Witter, M.P.; Moser, M.B.; Moser, E.I. Conjunctive representation of position, direction, and velocity in entorhinal cortex. Science **2006**, 312, 758–762.
- Cerasti, E.; Treves, A. How informative are spatial CA3 representations established by the dentate gyrus? PLoS Comput. Biol. **2010**, 6, e1000759.
- Chevalier-Boisvert, M.; Willems, L.; Pal, S. Minimalistic Gridworld Environment for Gymnasium. 2018. Available online: https://github.com/Farama-Foundation/Minigrid (accessed on 31 December 2022).
- Markus, E.J.; Barnes, C.A.; McNaughton, B.L.; Gladden, V.L.; Skaggs, W.E. Spatial information content and reliability of hippocampal CA1 neurons: Effects of visual input. Hippocampus **1994**, 4, 410–421.
- Dumont, N.S.Y.; Orchard, J.; Eliasmith, C. A model of path integration that connects neural and symbolic representation. In Proceedings of the Annual Meeting of the Cognitive Science Society, Toronto, ON, Canada, 27–30 July 2022; Volume 44.
- Voelker, A.R.; Blouw, P.; Choo, X.; Dumont, N.S.Y.; Stewart, T.C.; Eliasmith, C. Simulating and predicting dynamical systems with spatial semantic pointers. Neural Comput. **2021**, 33, 2033–2067.
- Orchard, J. Oscillator-interference models of path integration do not require theta oscillations. Neural Comput. **2015**, 27, 548–560.
- Furlong, P.M.; Eliasmith, C. Fractional Binding in Vector Symbolic Architectures as Quasi-Probability Statements. In Proceedings of the Annual Meeting of the Cognitive Science Society, Toronto, ON, Canada, 27–30 July 2022; Volume 44.
- Furlong, P.M.; Stewart, T.C.; Eliasmith, C. Fractional Binding in Vector Symbolic Representations for Efficient Mutual Information Exploration. In Proceedings of the ICRA Workshop: Towards Curious Robots: Modern Approaches for Intrinsically-Motivated Intelligent Behavior, Virtual Event, 1–5 June 2022.
- Glad, I.K.; Hjort, N.L.; Ushakov, N.G. Correction of density estimators that are not densities. Scand. J. Stat. **2003**, 30, 415–427.
- Tzilivaki, A.; Kastellakis, G.; Poirazi, P. Challenging the point neuron dogma: FS basket cells as 2-stage nonlinear integrators. Nat. Commun. **2019**, 10, 3664.
- Zang, Y.; Hong, S.; De Schutter, E. Firing rate-dependent phase responses of Purkinje cells support transient oscillations. eLife **2020**, 9, e60692.
- Zang, Y.; De Schutter, E. The cellular electrophysiological properties underlying multiplexed coding in Purkinje cells. J. Neurosci. **2021**, 41, 1850–1863.

**Figure 1.** Illustration of the original Hubel and Wiesel experiment [15]. (**A**) Measuring the activity $a_i$ of a neuron in response to a bar of light projected onto a screen. (**B**) The neuron produces different levels of activity depending on the orientation of the bar of light $\mathbf{x}$ (data from [15]). (**C**) Plotting the activity $a_i(\mathbf{x})$ (dashed line) and a least-squares fit $G[J(\mathbf{x})]$ (gray line; $G$ is a LIF response curve).

**Figure 2.** Examples of tuning curves in the linear-nonlinear tuning-curve family with a LIF nonlinearity $G$. (**A**) Using $d=1$ results in monotonic tuning over a quantity $x$. Coloured lines correspond to individual neurons $i$. (**B**) Bell-shaped tuning curves can be obtained with $d=2$ and the transformation $\mathbf{x}=(\sin(\pi x),\cos(\pi x))$. (**C**) Tuning curve of a single hexagonal grid cell constructed using spatial semantic pointers (SSPs) with $d=7$ (deep blue corresponds to a firing rate of 100 $\mathrm{Hz}$).
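The linear-nonlinear tuning curves of panels (A,B) can be sketched in a few lines. The LIF rate equation below is the standard steady-state expression; the specific gain, bias, and time-constant values are illustrative assumptions, not the parameters used in the figure.

```python
import numpy as np

def lif_rate(J, tau_rc=0.02, tau_ref=0.002):
    """Steady-state firing rate G[J] of a LIF neuron for input current J."""
    J = np.asarray(J, dtype=float)
    rate = np.zeros_like(J)
    m = J > 1.0  # only currents above the threshold current produce spikes
    rate[m] = 1.0 / (tau_ref - tau_rc * np.log(1.0 - 1.0 / J[m]))
    return rate

def tuning_curve(x, e, alpha, J_bias):
    """a_i(x) = G[alpha * <e, x> + J_bias]: linear encoding + LIF nonlinearity."""
    return lif_rate(alpha * np.dot(x, e) + J_bias)

# One monotonic tuning curve over x in [-1, 1] (illustrative parameters)
x = np.linspace(-1.0, 1.0, 201)
a = tuning_curve(x[:, None], np.array([1.0]), alpha=2.0, J_bias=1.5)
```

With a positive encoder the curve is monotonically increasing, as in panel (A); bell-shaped curves follow from applying the same encoding to the $(\sin(\pi x), \cos(\pi x))$ transformation.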

**Figure 3.** Model of a neuron in visual cortex tuned to downwards motion. The spatiotemporal encoder $e$ is a two-dimensional Gabor filter sampled in two spatial dimensions $\xi_1$, $\xi_2$ that correspond to coordinates in the visual field [25,26]. (**A**,**B**) Different slices through $e$. Blue corresponds to positive, red to negative values. (**C**) Computing the neural activity for a grating pattern $\mathbf{x}$ moving in the arrow direction. Downwards motion results in the strongest response amplitude.

**Figure 4.** Transforming signals. (**A**) Two LIF neuron populations (blue, green) are tuned to variables $x$, $y$. The first population projects onto the second; we impose the relationship $y=f(x)=x^2$ when solving for weights. The black “neuron” is a linear readout with $G[J]=J$. (**B**) Tuning curves of the neurons depicted in (A). The left column depicts the tuning curves over $x$, the right column the tuning curves over $y$. The tuning of the first (blue) population is undefined with respect to $y$. When controlling the stimulus variable $x$, the network implicitly computes $f(x)=x^2$. (**C**) Spike raster of the two LIF populations when varying the stimulus $x$ over time; although we solve for weights using a rate approximation $G[J]$, the resulting network is compatible with spiking neurons.
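The weight-solving step behind this figure can be sketched as a regularised least-squares problem: sample the rate-model activities of a population over the represented range, then solve for decoders that reconstruct the target function $f(x)=x^2$. The random gains, biases, and the regularisation magnitude below are illustrative assumptions, not the figure's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def lif_rate(J, tau_rc=0.02, tau_ref=0.002):
    """Steady-state LIF firing rate G[J]."""
    J = np.asarray(J, dtype=float)
    rate = np.zeros_like(J)
    m = J > 1.0
    rate[m] = 1.0 / (tau_ref - tau_rc * np.log(1.0 - 1.0 / J[m]))
    return rate

# A population of 50 neurons tuned to x in [-1, 1] with random encoders
n = 50
x = np.linspace(-1.0, 1.0, 101)
enc = rng.choice([-1.0, 1.0], size=n)
gain = rng.uniform(0.5, 2.0, size=n)
bias = rng.uniform(0.5, 2.0, size=n)
A = lif_rate(gain * np.outer(x, enc) + bias)  # activities, shape (101, n)

# Regularised least squares for decoders d such that A @ d ~= f(x) = x**2;
# full connection weights would follow as the outer product of the next
# population's encoders with these decoders.
target = x ** 2
sigma = 0.1 * A.max()  # regularisation proportional to the maximum rate
d = np.linalg.solve(A.T @ A + sigma**2 * len(x) * np.eye(n), A.T @ target)
rmse = np.sqrt(np.mean((A @ d - target) ** 2))
```

The regularisation term keeps the decoders small, which is what makes the rate-model solution transfer to the spiking network of panel (C).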

**Figure 5.** Realising time cells in NEF networks. (**A**) Top: Manually selected temporal encoders $e_i$ modelling core properties of biological time cells: a bias towards shorter delays $\theta_i$, and a larger spread in activity for larger $\theta_i$. Bottom: Activities of 200 recurrently connected integrate-and-fire neurons in response to a positive pulse after solving for weights realising the $e_i$. Activities are normalised to the maximum activity of each neuron (yellow). Only active neurons are depicted; 50% of the neurons are “off”-neurons that react to negative input pulses. (**B**,**C**) Qualitatively similar activities can be obtained when selecting a linear combination of temporal basis functions as temporal encoders. The basis functions depicted here are the impulse responses of the Legendre Delay Network (LDN) and the Modified Fourier (MF) Linear Time Invariant (LTI) systems for $q=7$. Having closed-form state-space LTI systems with matrices ($\mathbf{A}$, $\mathbf{B}$) simplifies solving for recurrent weights in the NEF.
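The closed-form LDN state-space matrices referenced in the caption can be written down directly (following the construction in the Legendre Memory Unit literature cited above); the Euler step size below is an illustrative choice for simulating the $q=7$ impulse response.

```python
import numpy as np

def ldn_system(q, theta):
    """Closed-form state-space matrices (A, B) of the Legendre Delay Network
    for a window of length theta."""
    i = np.arange(q)[:, None]
    j = np.arange(q)[None, :]
    A = np.where(i < j, -1.0, (-1.0) ** (i - j + 1)) * (2.0 * i + 1.0) / theta
    B = ((2.0 * np.arange(q) + 1.0) * (-1.0) ** np.arange(q) / theta)[:, None]
    return A, B

# Impulse response for q = 7 via forward-Euler integration; the memory state m
# approximates the Legendre coefficients of the last theta seconds of input.
q, theta, dt = 7, 1.0, 1e-3
A, B = ldn_system(q, theta)
m = np.zeros((q, 1))
states = []
for step in range(int(theta / dt)):
    u = 1.0 / dt if step == 0 else 0.0  # unit-area impulse at t = 0
    m = m + dt * (A @ m + B * u)
    states.append(m.ravel().copy())
states = np.array(states)
```

Linear combinations of the columns of `states` give exactly the kind of temporal encoders shown in panels (B,C).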

**Figure 6.** Using neurons with time cell tuning to predict nonlinear pendulum dynamics. (**A**,**B**) Overview of the experimental setup. The torque $\tau(t)$ and a delayed angle $\phi(t-\theta')$ are fed into a recurrent neural network with time-cell tuning over two dimensions. We use the delta learning rule to learn connection weights online that recombine the neural activities to predict the angle $\theta'$ seconds into the future. (**C**) The system learns to predict the pendulum angle with a normalized RMSE of about 20%.
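The delta rule used here is the classic Widrow-Hoff update on a linear readout. As a minimal sketch, the stand-in random features and the linear target below replace the network's time-cell activities and the future pendulum angle (both are assumptions for illustration); only the update rule itself is the point.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins: feature activities a(t) and a target z(t) that is a fixed linear
# function of them (in the model, a(t) are time-cell activities and z(t) a
# future pendulum angle).
n, steps, lr = 100, 5000, 1e-3
w_true = rng.normal(size=n) / np.sqrt(n)  # hypothetical "ideal" decoders
w = np.zeros(n)
sq_err = []
for t in range(steps):
    a = rng.normal(size=n)
    z = w_true @ a
    err = z - w @ a       # prediction error
    w += lr * err * a     # delta (LMS) update: dw = eta * err * a
    sq_err.append(err ** 2)

early = float(np.mean(sq_err[:100]))
late = float(np.mean(sq_err[-100:]))
```

Because the update only needs the momentary error and the momentary activities, it is local and can run online while the network is being driven, which is what panel (C) measures.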

**Figure 7.** One trial of the LLP learning to predict the motion of a ball bouncing off the walls of a box with lossless collisions. (**A**) Initially, the system cannot predict the future motion of the ball. (**B**) Using an LDN dimensionality of $q=10$, the LLP learns to predict the future motion of the ball. (**C**) The windowed mean (window of 1 s) of the root mean square error of the predicted path for the LLP algorithm with three different context representations over 100 trials. The LDN context uses an LDN to summarize the recent motion of the ball, the SSP context encodes the current position of the ball, and the SSP Speed context encodes the position and velocity of the ball. For each context encoding we used the largest learning rate that yielded a stable learning rule. The solid line is the average performance, and the shaded regions (not visible in the plot) represent a 95% confidence interval. While the SSP algorithms learn more slowly than the LLP with the LDN context, they ultimately reach lower prediction error. In all cases, by working in the LDN’s compressed representation we can learn to predict delayed signals, updating historical predictions with simple linear operations.

**Figure 8.** (**A**) Illustration of the projection to the frequency domain of the SSP space given in Equation (4). The dot products between a 2D variable $\mathbf{x}$ and a set of three vectors $\{\mathbf{a}_j\}$ are cast as the phases of a set of phasors, $\{e^{i\mathbf{a}_j \cdot \mathbf{x}}\}$, which reside on the unit circle in the complex plane. The IDFT of the vector $[e^{i\mathbf{a}_0 \cdot \mathbf{x}}, e^{i\mathbf{a}_1 \cdot \mathbf{x}}, e^{i\mathbf{a}_2 \cdot \mathbf{x}}]$ is an SSP representation of $\mathbf{x}$, which resides in a higher-dimensional vector space. In this example, the SSP is only 3-dimensional, but in practice a much larger set $\{\mathbf{a}_j\}$ is used to produce high-dimensional SSPs. (**B**) Consider how these phasors change for a point $\mathbf{x}'$ traversing 2D space. The banded heat maps show how the real part of the vectors $\{e^{i\mathbf{a}_j \cdot \mathbf{x}'}\}$ repeats over a 2D region of $\mathbf{x}'$ values. Each component of the SSP in the Fourier domain is a plane wave with wave vector $\mathbf{a}_j$. The gridded heat map is the similarity between the SSP representation of $\mathbf{x}$ from (A) and the SSPs of neighbouring points: $\varphi(\mathbf{x}) \cdot \varphi(\mathbf{x}')$. The similarity map is periodic due to the interference pattern of all the plane waves. Here, a hexagonally gridded similarity pattern is obtained.
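The phasor-to-SSP construction in panel (A) can be sketched directly: pick wave vectors $\mathbf{a}_j$, form unit-modulus phasors, and take an inverse DFT. The random Gaussian wave vectors below are an illustrative assumption; hexagonal grid patterns as in panel (B) require $\{\mathbf{a}_j\}$ chosen with a specific structure.

```python
import numpy as np

rng = np.random.default_rng(2)
m = 64
A_freq = 4.0 * rng.normal(size=(m, 2))  # random wave vectors a_j (assumption)

def ssp(x):
    """SSP of a 2D point x: inverse DFT of unit-modulus phasors exp(i a_j . x).
    A DC term and the conjugate-symmetric half are appended so that the
    inverse DFT yields a real vector of dimension 2m + 1 = 129."""
    F = np.exp(1j * (A_freq @ x))
    spectrum = np.concatenate(([1.0 + 0.0j], F, np.conj(F[::-1])))
    return np.fft.ifft(spectrum).real * len(spectrum)

phi = ssp(np.array([0.3, -0.2]))
# Similarity with neighbouring points oscillates with distance and is
# maximal at the encoded point itself.
sims = [float(phi @ ssp(np.array([0.3 + dx, -0.2]))) for dx in (0.0, 0.5, 1.0)]
```

Because every Fourier coefficient has unit modulus, all SSPs have the same norm, and the dot product $\varphi(\mathbf{x}) \cdot \varphi(\mathbf{x}')$ is a sum of plane waves evaluated at $\mathbf{x} - \mathbf{x}'$, which is exactly the interference pattern plotted in panel (B).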

**Figure 9.** Map encoding using SSPs. (**A**) A 2D environment consisting of a rat, walls, and cheese. Information about the objects and their locations was encoded in a single vector $E$, as per Equation (5). (**B**) The vector $E$ was queried for the location of the rat by approximate unbinding: $E \circledast R^{-1} \approx \varphi(x_1, y_1) + \text{noise}$. The cosine similarity between the query output and SSP representations of points gridded over 2D space was computed and plotted to produce the above heat map. (**C**) The similarity map obtained from querying the map $E$ for the location of the cheese. (**D**) The similarity map obtained from querying for the wall area.
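The bind-then-query pattern behind this figure can be sketched with circular convolution as the binding operator and its approximate inverse (the involution) for unbinding. The random Gaussian vectors below are stand-ins for the symbols and SSPs actually used, so the recovered location is noisy by construction.

```python
import numpy as np

rng = np.random.default_rng(3)

def bind(a, b):
    """Circular convolution: the HRR/VSA binding operator, via the FFT."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

def unbind(e, a):
    """Approximate unbinding: bind with the pseudo-inverse (involution) of a."""
    a_inv = np.concatenate(([a[0]], a[:0:-1]))  # index-reversed vector
    return bind(e, a_inv)

d = 512
R, C = rng.normal(scale=1 / np.sqrt(d), size=(2, d))        # "rat", "cheese"
loc_r, loc_c = rng.normal(scale=1 / np.sqrt(d), size=(2, d))  # location stand-ins

E = bind(R, loc_r) + bind(C, loc_c)  # map vector, as in Equation (5)

est = unbind(E, R)  # ~= loc_r + noise
sim_correct = float(est @ loc_r / (np.linalg.norm(est) * np.linalg.norm(loc_r)))
sim_wrong = float(est @ loc_c / (np.linalg.norm(est) * np.linalg.norm(loc_c)))
```

Sweeping the cosine similarity of `est` against SSPs gridded over 2D space is what produces the heat maps in panels (B-D).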

**Figure 10.** Firing patterns of LIF neurons representing SSPs. (**A**) A grid cell from a population encoding $\varphi(\mathbf{x}(t))$, where $\mathbf{x}(t)$ is the path shown in grey (obtained from [58]). Red dots indicate the positions at which the cell fired. (**B**) A place cell from a population encoding $\varphi(\mathbf{x}(t))$. (**C**) An object vector cell from a population encoding the SSP representation of the vector between $\mathbf{x}(t)$ and any objects in view. Object locations are marked with an ‘x’. (**D**) A border cell from a population encoding the SSP representation of the vector between $\mathbf{x}(t)$ and a wall along the right side of the environment.

**Figure 11.** Mean reward gained over 200 learning trials for each configuration of the Actor-Critic network, exploring how sparsity (the proportion of neurons active at any given time) and the number of neurons impact network performance on a spatial reinforcement learning task (MiniGrid).

**Figure 12.** Path integration model results on a 60 s long 2D path (a rat’s trajectory running in a cylinder with a diameter of 180 cm; obtained from [51]). The grey line is the ground truth. As input, the model received an initial position and the velocity along the path (computed via finite differences). The output of the model was a position estimate, in the form of an SSP, over time. The 2D path estimate plotted as a black dashed line was decoded from the raw SSP output.

**Figure 13.** Results from temporal integration of SSPs to obtain trajectory representations. Path integration was performed on a 2D path (the black line in (**B**)). The output of the path integrator was fed into a temporal integrator with dynamics given by Equation (10). (**A**) Two panels show the x-dimension of the trajectory output at different points (indicated by black stars) over the simulation time (the x axis). The output $\Phi(t)$ is visualized by a contour plot of its similarity with SSP representations across x-space. This is analogous to a probability distribution of the x position at different points in the past (see Section 6). (**B**) The 2D trajectory estimate, decoded from $\Phi(t)$ at the end of the simulation, plotted as a blue line that fades with how far the estimate lies in the past.

**Figure 14.** Kernel Density Estimators (KDEs; green line) approximate probability distributions (shaded region). Using the Spatial Semantic Pointer representation we can approximate the Fourier Integral Estimator (FIE), a density estimator using a $\mathrm{sinc}$ kernel function. More importantly, we can represent probability with finite neural resources, and interpret operations on that representation as probability statements. Figure adapted from [66].
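The Fourier Integral Estimator mentioned in the caption is an ordinary KDE with a $\mathrm{sinc}$ kernel. A minimal sketch follows; the bandwidth $L=8$, the sample count, and the standard-normal data are illustrative choices, and, as the Glad et al. reference in the bibliography discusses, the raw estimate can dip below zero and may need correction to be a proper density.

```python
import numpy as np

rng = np.random.default_rng(4)

def fie_density(x_eval, samples, L=8.0):
    """Fourier Integral Estimator: a KDE with a sinc kernel of bandwidth L.
    np.sinc(u) = sin(pi*u)/(pi*u), so (L/pi)*sinc(L*d/pi) = sin(L*d)/(pi*d)."""
    d = x_eval[:, None] - samples[None, :]
    return np.mean((L / np.pi) * np.sinc(L * d / np.pi), axis=1)

samples = rng.normal(size=2000)       # stand-in data: standard normal draws
xs = np.linspace(-4.0, 4.0, 161)
p = fie_density(xs, samples)
mass = float(p.sum() * (xs[1] - xs[0]))  # Riemann sum; approximately 1
```

Because the sinc kernel is the inverse transform of a box in the frequency domain, truncating an SSP-style Fourier representation at bandwidth $L$ yields exactly this estimator, which is what lets the figure interpret SSP operations probabilistically.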

**Figure 15.** The regret performance of Bayesian optimization implemented using Gaussian processes with a Matern kernel (GP-BO Matern) and implemented using the Hexagonal Spatial Semantic Pointer representation (SSP-BO Hex) on the Himmelblau standard optimization test function (**A**). The regret performance of the SSP-based algorithms is statistically equivalent to that of the GP methods; however, by working in the neurally-plausible feature spaces, the computation time becomes constant in the number of samples collected (**B**). Figure adapted from [65].

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Dumont, N.S.-Y.; Stöckel, A.; Furlong, P.M.; Bartlett, M.; Eliasmith, C.; Stewart, T.C.
Biologically-Based Computation: How Neural Details and Dynamics Are Suited for Implementing a Variety of Algorithms. *Brain Sci.* **2023**, *13*, 245.
https://doi.org/10.3390/brainsci13020245
