# Implementation of Kalman Filtering with Spiking Neural Networks


## Abstract


## 1. Introduction

## 2. Materials and Methods

#### 2.1. Neuron Modeling

#### Frequency Response of the Neuron

#### 2.2. Synapse Modeling

#### 2.3. Reward-Modulated STDP (RSTDP)

#### 2.4. Encoding and Decoding in Spiking Neural Networks

#### Encoding Algorithm

#### 2.5. Discrete Extended Kalman Filter

- 1. Prediction: First, preliminary estimates ${\widehat{x}}_{k|k-1}$ and ${\widehat{y}}_{k|k-1}$ are computed: $${\widehat{x}}_{k|k-1}=f({\widehat{x}}_{k-1|k-1},{u}_{k})$$ $${\widehat{y}}_{k|k-1}=h\left({\widehat{x}}_{k|k-1}\right)$$ Then, the covariance estimates ${P}_{k|k-1}$ and ${S}_{k|k-1}$ are computed, taking into account the noise covariance matrices $Q,R$ and the covariance ${P}_{k-1|k-1}$ from the previous timestep: $${P}_{k|k-1}=A\cdot {P}_{k-1|k-1}\cdot {A}^{T}+Q$$ $${S}_{k|k-1}=C\cdot {P}_{k|k-1}\cdot {C}^{T}+R$$
- 2. Update: The second step computes the Kalman gain matrix $\kappa \in {\mathbb{R}}^{n\times m}$ and the innovation $\Delta {y}_{k}$: $$\kappa ={P}_{k|k-1}\cdot {C}^{T}\cdot {S}_{k|k-1}^{-1}$$ $$\Delta {y}_{k}={y}_{k}-{\widehat{y}}_{k|k-1}$$ The final estimate ${\widehat{x}}_{k|k}$, which accounts for measurement errors and noise statistics, is then $${\widehat{x}}_{k|k}={\widehat{x}}_{k|k-1}+\kappa \cdot \Delta {y}_{k}$$ Finally, the estimate covariance ${P}_{k|k}$, which will be used in the prediction step of the next timestep, is computed: $${P}_{k|k}={P}_{k|k-1}-\kappa \cdot {S}_{k|k-1}\cdot {\kappa}^{T}$$
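The two steps above can be sketched in NumPy as a single prediction/update cycle. This is a minimal illustration, not the authors' implementation: the transition function `f`, output function `h`, and their Jacobians `A`, `C` are assumed to be supplied by the caller (for a nonlinear system, `A` and `C` would be re-linearized around the current estimate at every step).

```python
import numpy as np

def ekf_step(x_est, P, u, y, f, h, A, C, Q, R):
    """One prediction/update cycle of the discrete extended Kalman filter.

    x_est : previous state estimate x_{k-1|k-1}
    P     : previous estimate covariance P_{k-1|k-1}
    A, C  : Jacobians of f and h (re-evaluated per step for nonlinear systems)
    Q, R  : process and measurement noise covariance matrices
    """
    # --- Prediction ---
    x_pred = f(x_est, u)                 # preliminary state estimate x_{k|k-1}
    y_pred = h(x_pred)                   # predicted output y_{k|k-1}
    P_pred = A @ P @ A.T + Q             # covariance estimate P_{k|k-1}
    S = C @ P_pred @ C.T + R             # innovation covariance S_{k|k-1}

    # --- Update ---
    K = P_pred @ C.T @ np.linalg.inv(S)  # Kalman gain
    innovation = y - y_pred              # Δy_k
    x_new = x_pred + K @ innovation      # corrected estimate x_{k|k}
    P_new = P_pred - K @ S @ K.T         # covariance P_{k|k} for the next step
    return x_new, P_new
```

Called in a loop over the measurement sequence, `x_new` and `P_new` are fed back in as `x_est` and `P` at the next timestep.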

#### 2.6. Proposed Kalman-Filtering SNN Structure

## 3. Results

#### 3.1. Van der Pol Simulation

#### 3.2. Lorenz System Simulation

## 4. Discussion

## 5. Conclusions and Future Work

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## Abbreviations

Abbreviation | Meaning |
---|---|
ANN | Artificial Neural Network |
AWGN | Additive White Gaussian Noise |
CBA | Crossbar Array |
CMOS | Complementary Metal-Oxide-Semiconductor |
EKF | Extended Kalman Filter |
GPU | Graphics Processing Unit |
GRU | Gated Recurrent Unit |
IBM | International Business Machines |
IK | Inverse Kinematics |
KF | Kalman Filter |
LIF | Leaky Integrate and Fire |
LTD | Long-Term Depression |
LTP | Long-Term Potentiation |
PINN | Physics-Informed Neural Network |
RSTDP | Reward-Modulated STDP |
SINDY | Sparse Identification of Nonlinear Dynamics |
SNN | Spiking Neural Network |
STDP | Spike-Timing-Dependent Plasticity |
TSMC | Taiwan Semiconductor Manufacturing Company |
VLSI | Very-Large-Scale Integration |

## References


**Figure 1.** (**a**) Membrane voltage ${v}_{m}\left(t\right)$ and spike voltage ${v}_{s}\left(t\right)$ of an LIF neuron for an excitatory input current of ${I}_{syn}=1.5001$ nA. (**b**) Tuning curve of the neuron, showing the rheobase value for the parameters given in Table 1.

**Figure 2.** (**a**) An SNN with three LIF neurons in the input layer and one neuron in the output layer. (**b**) Spiking activity of the first layer. (**c**) Evolution of the synapse weight. (**d**) Neural activity (input current, membrane voltage, and spike voltage) of the output neuron.

**Figure 3.** Signal reconstruction using neurons and encoding/decoding algorithms. (**a**) Assembly of the encoder/decoder, which alternates the input currents of two different neurons. (**b**) Comparison between the original signal $x\left(t\right)$ and the reconstructed signal $\widehat{x}\left(t\right)$. (**c**) Spiking activity response of each neuron. (**d**) Input currents ${I}_{syn}^{+},{I}_{syn}^{-}$ of the neurons (blue) versus the rheobase (red dotted). (**e**) Output spikes of each neuron in the assembly.

**Figure 4.** Block diagram of the Kalman filter in which the conventional computation of the Kalman gain is replaced by an SNN.

**Figure 6.** Time evolution of the reconstruction of the Van der Pol oscillator using the proposed architecture, compared with the ground truth. (**a**) Comparison of the ground truth $x$ (blue) with the reconstruction $\widehat{x}$ (orange) produced by the proposed architecture. (**b**) Reconstruction error $x-\widehat{x}$ over time for the two states of the system. (**c**) Time evolution of each entry of the resulting Kalman gain matrix. (**d**,**e**) Weight evolution over time of the $3\times 2$ synapse set (multiple colors) for $Ens+$ and $Ens-$, respectively. (**f**) State estimation error over time for the Van der Pol system using the standard discrete EKF algorithm without knowledge of the covariance matrices $Q,R$.

**Figure 7.** Time evolution of the reconstruction of the Lorenz system using the proposed architecture, compared with the ground truth. (**a**) Comparison of the ground truth $x$ (blue) with the reconstruction $\widehat{x}$ (orange) produced by the proposed SNN architecture. (**b**) Reconstruction error $x-\widehat{x}$ over time for the three states of the Lorenz system. (**c**) Evolution of each entry of the Kalman gain matrix. (**d**,**e**) Weight evolution over time of the $4\times 3$ synapse set (multiple colors) for $Ens+$ and $Ens-$, respectively. (**f**) State estimation over time for the Lorenz system using the standard discrete EKF algorithm without knowledge of the covariance matrices $Q,R$.

LIF Model | Parameter Value |
---|---|
Membrane charging constant | ${\tau}_{m}=10$ ms |
Membrane resistance | ${R}_{m}=10$ M$\mathsf{\Omega}$ |
Capacitance of the neuron | ${C}_{m}=1$ nF |
Threshold voltage of the neuron | ${v}_{th}=-55$ mV |
Resting potential of the neuron | ${E}_{L}=-70$ mV |
Reset potential of the neuron | ${v}_{reset}=-70$ mV |
Spike amplitude | ${v}_{spk}=20$ mV |
Postsynaptic current decay time | ${\tau}_{pstc}=10$ ms |
Refractory period | ${\tau}_{ref}=2$ ms |
**Conductance-Based LIF** | |
Time decay of the injection current | ${\tau}_{syn}=10$ ms |
Temporal injection current constant | ${C}_{syn}=1\times {10}^{-5}$ |
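As an illustration of the LIF parameters listed above, the membrane dynamics of Figure 1 can be sketched with a simple Euler integration of $\tau_m \dot{v} = -(v-E_L) + R_m I_{syn}$. This is our own minimal sketch under those table values (function name, step size, and constant-current drive are our choices, not the authors' implementation):

```python
import numpy as np

def simulate_lif(I_syn, t_end=0.2, dt=1e-4,
                 tau_m=10e-3, R_m=10e6, E_L=-70e-3,
                 v_th=-55e-3, v_reset=-70e-3, tau_ref=2e-3):
    """LIF neuron driven by a constant input current I_syn (SI units).

    Euler-integrates tau_m * dv/dt = -(v - E_L) + R_m * I_syn; when v
    crosses v_th a spike is registered and v is held at v_reset for the
    refractory period tau_ref.
    """
    n = int(t_end / dt)
    v = np.full(n, E_L)            # membrane voltage trace
    spikes = np.zeros(n, dtype=bool)
    refractory_until = -1.0        # end time of the current refractory window
    for k in range(1, n):
        t = k * dt
        if t < refractory_until:   # clamp during the refractory period
            v[k] = v_reset
            continue
        dv = (-(v[k - 1] - E_L) + R_m * I_syn) * dt / tau_m
        v[k] = v[k - 1] + dv
        if v[k] >= v_th:           # threshold crossing: spike and reset
            spikes[k] = True
            v[k] = v_reset
            refractory_until = t + tau_ref
    return v, spikes
```

With these parameters the rheobase is $(v_{th}-E_L)/R_m = 1.5$ nA, so the $I_{syn}=1.5001$ nA drive of Figure 1 sits just above it and produces very sparse firing, while any current below 1.5 nA produces none.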

RSTDP Synapse Model | Parameter Value |
---|---|
Long-term potentiation constant | ${A}_{+}=1$ $\mathsf{\mu}$S/ms |
Long-term depression constant | ${A}_{-}=-1$ $\mathsf{\mu}$S/ms |
Transient memory decay time | ${\tau}_{E}=10$ ms |
Max. conductance value | ${w}_{max}=1$ mS |
Min. conductance value | ${w}_{min}=1$ $\mathsf{\mu}$S |
**SF Encoding and Decoding** | |
Encoding sensitivity threshold value in the Van der Pol test | ${x}_{th}=1\times {10}^{-4}$ |
Encoding sensitivity threshold value in the Lorenz test | ${x}_{th}=1\times {10}^{-5}$ |
Decoding sensitivity threshold value in both tests | ${x}_{th}=1\times {10}^{-5}$ |
Slope modulation constant | $c=1$ |
**Noise Parameters** | |
Measurement noise standard deviation | $r=0.1$ |
System uncertainty standard deviation | $q=0.0316$ |
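The role of the SF sensitivity threshold ${x}_{th}$ listed above can be illustrated with a minimal step-forward encoder/decoder pair. This follows the common SF formulation (a running baseline that moves by $x_{th}$ whenever the signal drifts past it), which we assume here for illustration; it is not necessarily the authors' exact implementation:

```python
import numpy as np

def sf_encode(x, x_th):
    """Step-forward (SF) encoding: emit a +1/-1 spike whenever the signal
    drifts more than x_th away from a running baseline, which then moves
    one x_th step toward the signal."""
    base = x[0]
    spikes = np.zeros(len(x), dtype=int)
    for k in range(1, len(x)):
        if x[k] > base + x_th:
            spikes[k] = 1
            base += x_th
        elif x[k] < base - x_th:
            spikes[k] = -1
            base -= x_th
    return spikes

def sf_decode(spikes, x0, x_th):
    """Reconstruct the signal by accumulating x_th-sized spike steps onto
    the initial value x0 (the decoder's copy of the baseline)."""
    return x0 + x_th * np.cumsum(spikes)
```

For a signal whose per-sample change stays below $x_{th}$, the reconstruction error of this scheme is bounded by roughly $x_{th}$, which is why the thresholds in the table are chosen small relative to the state amplitudes.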

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Juárez-Lora, A.; García-Sebastián, L.M.; Ponce-Ponce, V.H.; Rubio-Espino, E.; Molina-Lozano, H.; Sossa, H.
Implementation of Kalman Filtering with Spiking Neural Networks. *Sensors* **2022**, *22*, 8845.
https://doi.org/10.3390/s22228845
