Implementation of Kalman Filtering with Spiking Neural Networks

A Kalman filter can be used to perform full space-state reconstruction of a system's dynamics based on knowledge of the system and partial measurements. However, its performance relies on accurate modeling of the system dynamics and a proper characterization of the uncertainties, which can be hard to obtain in real-life scenarios. In this work, we explore how the values of a Kalman gain matrix can be estimated by spiking neural networks through a combination of biologically plausible neuron models with spike-time-dependent plasticity learning algorithms. The performance of the proposed neural architecture is verified with simulations of some representative nonlinear systems, which show promising results. This approach traces a path for its implementation in neuromorphic analog hardware that can learn and reconstruct the partial and changing dynamics of a system without the massive power consumption that is typically needed in a Von Neumann-based computer architecture.


Introduction
System dynamics can be represented as a set of differential equations in a space-state manner, and they are defined by using several techniques that explore the system's energetic relationships, such as Newtonian, Lagrangian, or Hamiltonian mechanics. However, trying to describe some phenomena correctly without knowing the governing equations or without a proper selection of the space-state variables results in inaccurate representations or a complex set of equations that could be represented in a more straightforward but unknown form [1]. Data-driven system modeling refers to a set of optimization techniques intended to obtain a system's description based on data observations and measurements of the system's evolution. For example, the sparse identification of nonlinear dynamics (SINDY) [2,3] builds a matrix of candidate functions and a coefficient matrix, the latter of which must be obtained by using well-documented optimization techniques, such as least-squares optimization, to replicate the provided data as closely as possible.
Artificial neural networks (ANNs) have tackled this challenge on multiple frontiers. Physics-informed neural networks (PINNs) use prior knowledge of the laws of physics as a regularization agent during their training process, thus limiting the space of admissible solutions [4]. In turn, the Kalman filter (KF) is a model-based technique that allows sensor fusion in order to achieve a full space-state recovery based on preliminary knowledge of the system's model and the nature of the perturbation noise, which is useful under unknown perturbations or noisy sensor measurements [5,6]. In [7], the KalmanNet proposal replaced parts of the equations of the extended Kalman filter (EKF) with an ANN built on gated recurrent units (GRUs) to find the proper Kalman gain matrix that would allow a full state recovery.
For example, robotic systems exhibit changing dynamics during their lifespans due to the attrition of joints or their interactions within a changing environment. Therefore, compact and energy-efficient learning platforms are required for autonomous robotic systems to test the capabilities of the architecture. Section 4 discusses the results, while Section 5 closes this work by presenting our conclusions and proposing future research.

Materials and Methods
In this section, we start by reviewing how neurons, synapses, and learning rules are modeled. Then, we show encoding/decoding algorithms to determine the current input for the neurons in order to represent the signals used in our proposal. Next, the Kalman filter algorithm is illustrated. After the building blocks are introduced, our proposal is shown at the end of this section.

Neuron Modeling
Leaky Integrate and Fire (LIF) is one of the simplest models available for neuron modeling. It resembles the dynamics of a low-pass filter [25], as it considers a neuron as a switching resistance-capacitance circuit that is governed by:

τ_m · dv_m(t)/dt = E_L − v_m(t) + R_m · I_syn(t) (1)

In (1), v_m(t) represents the membrane's potential, E_L is the membrane's potential at rest, τ_m = R_m C_m stands for the membrane's temporal charging constant, R_m is the membrane's resistance, and C_m is the membrane's capacitance. I_syn(t) acts as an excitatory input current for the neuron, which charges the membrane's potential v_m(t) until it passes a threshold voltage value v_th, at which point a spike is emitted. The spike's voltage, v_s(t), is shaped as follows:

v_s(t) = v_spk · δ(t − t_f) (2)

where t_f is the last moment at which a spike was produced, whereas δ(·) ∈ [0, 1] is the Dirac delta function that models the impulse's decay alongside the synapses, decaying from a maximum value v_spk at t = t_f to zero at the post-synaptic rate τ_pstc:

δ(t − t_f) = e^{−(t − t_f)/τ_pstc} (3)

Once the spike is produced, v_m(t) resets to E_L. The neuron will not spike again during a refractory period τ_ref, as it does not admit an excitatory input current. Given a connection between the j-th and k-th neurons by a synapse with a certain conductance value w_jk (the modeling of which is reviewed in Section 2.2), the input current for the postsynaptic neuron is a function of each spike from the presynaptic neuron and its propagation through the corresponding synapse. For j presynaptic neurons, the current I_syn(t) for the k-th neuron is modeled by the following expression:

I_syn(t) = C_syn Σ_j Σ_{t_f^pre} w_jk · e^{−(t − t_f^pre)/τ_syn} (4)

where t_f^pre is the firing time of each presynaptic neuron. Equations (1) and (4) make up the conductance-based LIF model [26], where τ_syn is the injection current time decay and C_syn stands for the temporal injection current constant, which models the scale of the current injection of the presynaptic impulses. Figure 1 shows a step impulse of 1.5 nA fed to a single neuron modeled by Equation (1), showing its internal state v_m(t) and the produced spike voltage v_s(t). The parameters used for the neuron are provided in Table 1.
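To make the dynamics concrete, the following is a minimal forward-Euler sketch of Equation (1) with the Table 1 parameters; the integration scheme, the 3 nA test current (twice the rheobase value discussed below), and all variable names are our own assumptions rather than the paper's published code:

```python
import numpy as np

# Minimal forward-Euler sketch of the LIF dynamics in Equation (1),
# using the Table 1 parameters. Scheme and names are illustrative.
tau_m, R_m = 10e-3, 10e6                       # time constant [s], resistance [Ohm]
E_L, v_th, v_reset = -70e-3, -55e-3, -70e-3    # rest, threshold, reset [V]
tau_ref = 2e-3                                 # refractory period [s]
dt, T = 1e-4, 0.1                              # timestep and horizon [s]

t = np.arange(0.0, T, dt)
I_syn = np.full_like(t, 3e-9)                  # 3 nA step: twice the rheobase current
v_m = np.full_like(t, E_L)
spike_times, last_spike = [], -np.inf

for i in range(1, len(t)):
    if t[i] - last_spike < tau_ref:            # refractory: input is not admitted
        v_m[i] = v_reset
        continue
    dv = (E_L - v_m[i - 1] + R_m * I_syn[i - 1]) / tau_m
    v_m[i] = v_m[i - 1] + dt * dv
    if v_m[i] >= v_th:                         # threshold crossing: emit a spike
        spike_times.append(t[i])
        v_m[i] = v_reset
        last_spike = t[i]

print(f"{len(spike_times)} spikes in {T:.1f} s")  # roughly 110 Hz at this current
```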

Figure 1. (a) Neuronal activity for a given impulse; (b) tuning curve of the neuron, with the rheobase current marked.

Table 1. Neuron, synapse, and encoding parameters.

LIF Model
Parameter | Value
Membrane charging constant | τ_m = 10 ms
Membrane resistance | R_m = 10 MΩ
Capacitance of the neuron | C_m = 1 nF
Threshold voltage of the neuron | v_th = −55 mV
Resting potential of the neuron | E_L = −70 mV
Reset potential of the neuron | v_reset = −70 mV
Spike amplitude | v_spk = 20 mV
Postsynaptic current decay time | τ_pstc = 10 ms
Refractory period | τ_ref = 2 ms

Conductance-Based LIF
Time decay of the injection current | τ_syn = 10 ms
Temporal injection current constant | C_syn = 1 × 10⁻⁵

Frequency Response of the Neuron
To compute how much current has to be fed into the neuron to obtain a given frequency response, we first need to compute how much time it takes for the neuron to pass from a resting stage to a firing stage by analytically solving the differential Equation (1):

v_m(t) = E_L + R_m I_syn(t) + C_1 e^{−t/τ_m} (5)

For t = 0, we can rewrite C_1 = v_m(0) − E_L − R_m I_syn(t). Setting the initial conditions to v_m(0) = E_L and v_m(t) = v_th in Equation (5), we can solve for t to obtain the expression of the membrane's potential charging time t_spk:

t_spk = −τ_m ln(1 − (v_th − E_L)/(R_m I_syn)) (6)

As the firing frequency f_spk = 1/T_spk, where T_spk = τ_ref + t_spk, we have:

f_spk = 1 / (τ_ref − τ_m ln(1 − (v_th − E_L)/(R_m I_syn))) (7)

Equation (7) computes the frequency response of a neuron given a certain current. The inverse function computes the opposite, namely the amount of current needed for a given frequency:

I_syn = (v_th − E_L) / (R_m (1 − e^{−(1/f_spk − τ_ref)/τ_m})) (8)

Figure 1b shows the firing frequency response of a neuron for a given excitatory input current; this is called a tuning curve, and it was obtained using Equation (7) (analytical solution) and a numerical simulation of Equation (1), with a sweep from 0 A to 6 nA, using the neuron parameter values in Table 1. Setting f = 1 Hz, we obtain I_r = 1.5 nA; this is called the rheobase current of the neuron.
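As a quick numerical check of these expressions, the sketch below evaluates Equations (6)-(8) with the Table 1 parameters (the function names are ours):

```python
import numpy as np

# Sketch of the analytical tuning curve, Equations (6)-(8), with the
# LIF parameters of Table 1; function names are illustrative.
tau_m, R_m, tau_ref = 10e-3, 10e6, 2e-3
E_L, v_th = -70e-3, -55e-3

def firing_rate(I_syn):
    """Equation (7): spike frequency for a given input current."""
    arg = 1.0 - (v_th - E_L) / (R_m * I_syn)
    if arg <= 0:                       # at or below rheobase: no sustained firing
        return 0.0
    t_spk = -tau_m * np.log(arg)       # Equation (6): charging time to v_th
    return 1.0 / (tau_ref + t_spk)

def required_current(f):
    """Equation (8): current needed for a target frequency f."""
    return (v_th - E_L) / (R_m * (1.0 - np.exp(-(1.0 / f - tau_ref) / tau_m)))

print(required_current(1.0))   # ~1.5e-9 A: the rheobase current I_r
print(firing_rate(3e-9))       # ~110 Hz at twice the rheobase current
```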

Synapse Modeling
Spike-time-dependent plasticity (STDP) is a Hebbian learning algorithm that reflects how a synapse's conductivity increases or decreases according to the neurons' spiking activity [27]. Given the j-th layer of N presynaptic neurons and the k-th layer of M postsynaptic neurons, a matrix of W = [w_jk] ∈ R^{N×M} synapses forms between them, and its weight values are modified by:

∆w_jk = A_+ e^{−∆t/τ_+} if ∆t ≥ 0;  ∆w_jk = −A_− e^{∆t/τ_−} if ∆t < 0 (9)

In Equation (9), ∆t = t_f^post − t_f^pre is the difference between the firing times of the postsynaptic and presynaptic neurons. τ_+, τ_− are the long-term potentiation (LTP) and long-term depression (LTD) constants, which map the decay effect of a spike in the modification of the weight. For each spike, the synaptic weight is then modified by a learning rate of A_+, A_−. When A_+ = A_− and τ_+ = τ_−, the response is symmetrical; that is, the synapse modifies its value equally for presynaptic or postsynaptic spikes. STDP belongs to the unsupervised learning paradigm [8], as no teaching signal is involved other than the input and output spike trains themselves.
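The pair-based rule of Equation (9) reduces to a few lines; the sketch below uses illustrative parameter values, as the paper's learning rates are not restated here:

```python
import numpy as np

# Hedged sketch of the pair-based STDP rule in Equation (9);
# the A and tau values here are illustrative only.
def stdp_dw(delta_t, A_plus=5e-3, A_minus=5e-3, tau_plus=20e-3, tau_minus=20e-3):
    """Weight change for a pre/post spike pair, delta_t = t_post - t_pre."""
    if delta_t >= 0:                               # pre before post: potentiation
        return A_plus * np.exp(-delta_t / tau_plus)
    return -A_minus * np.exp(delta_t / tau_minus)  # post before pre: depression

# Symmetric parameters (A_plus == A_minus, tau_plus == tau_minus) give
# equal-magnitude LTP and LTD for mirrored spike timings:
print(stdp_dw(5e-3), stdp_dw(-5e-3))
```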

Reward-Modulated STDP (RSTDP)
In order to introduce a teaching signal, some modifications to the STDP algorithm were described in [8] based on dopamine's modulation of the learning ability of synapses observed in biological systems. Starting from Equation (9), an eligibility trace E can be defined by taking into account only the last pre- and postsynaptic spike potentials at time t:

τ_E · dE(t)/dt = −E(t) + ∆w_jk(∆t) · δ(t − t_f^{pre/post}) (11)

The eligibility trace is intended to model the tendency of the change in the synaptic weight value as a transient memory of all of the spiking activity, where τ_E depicts its decay time. The rate of change in the synaptic weights w is then obtained as follows:

dw(t)/dt = R(t) · E(t) (12)

where R(t) ∈ [−1, 1] is a reward signal, which is defined according to the network's objectives. It is worth mentioning that when R = 0, learning is deactivated, as no change in the synapses is produced. When R = −1, the weights are forced to converge in the opposite direction. Finally, when R = 1, the eligibility trace is applied unaltered.
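A hedged one-step sketch of how Equations (11) and (12) can be discretized is shown below; the Euler scheme and the τ_E value are illustrative assumptions:

```python
# One-step sketch of the RSTDP update in Equations (11) and (12): the
# STDP tendency from Equation (9) charges an eligibility trace E, which
# decays with tau_E, and the reward R gates its transfer to the weight.
def rstdp_step(w, E, stdp_dw_now, R, dt, tau_E=50e-3):
    """stdp_dw_now is Equation (9)'s contribution at this timestep
    (zero when no new spike pair occurred); tau_E is illustrative."""
    E = E + dt * (-E / tau_E) + stdp_dw_now   # transient memory of spiking
    w = w + dt * R * E                        # R = 0 freezes learning,
    return w, E                               # R = -1 pushes weights oppositely
```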
Three presynaptic neurons and one postsynaptic neuron were arranged as shown in Figure 2a, and they produced different spiking activities (Figure 2b), showing how the output neuron's membrane voltage accumulated with each arriving spike (Figure 2d). As each neuron spiked with a different frequency, the synaptic weights evolved into different values (Figure 2c).

Encoding and Decoding in Spiking Neural Networks
Given an analog input signal that is intended to be processed by an SNN, a proper excitatory input current that represents every possible value of the input signal must be computed (encoding). Conversely, the spiking activity of a neuron must be interpreted back from the spiking domain into the analog domain in order to interact with external systems (decoding).

Encoding Algorithm
Several encoding and decoding algorithms have been proposed in the literature. Some aim to reflect biological plausibility or to ease the construction of neuromorphic devices. Rate-based encoding takes an input signal x(t) ∈ [x_min, x_max] and a minimum and maximum spiking frequency operation of the neuron F = [F_min, F_max], and it uses Equations (7) and (8) to encode/decode, respectively. Alternatively, the encoding process can be performed as a function of the variability of the signal, which includes phase encoding and time-to-first-spike encoding, among others [27,28].
Step-forward encoding, described in [29], is a temporal encoding algorithm that harnesses the low-pass filter dynamics of the LIF neuron in conjunction with a temporal encoding methodology. The input signal x(t) is compared with an initial baseline signal x_b(t) and a sensibility encoding threshold value x_th. If x(t) > x_b + x_th, a certain current I_syn^+ is fed into an LIF neuron, which is denoted as N^+. However, if x(t) < x_b − x_th, a current I_syn^− is fed into another LIF neuron (denoted as N^−). Therefore, N^+ will only spike for a growing signal, while N^− will spike for decreasing signals. In this work, the conditional part of this encoding algorithm is replaced with differentiable functions with the aim of easing future mathematical convergence analyses:
I_syn^+(t) = I_r [1 + tanh(c (x(t) − x_b(t)))] (13)

I_syn^−(t) = I_r [1 − tanh(c (x(t) − x_b(t)))] (14)

where c is a slope modulation constant, which, for high values, approximates the tanh(·) function as closely as the hardlim function. The baseline signal for the encoding is then updated by:

x_b(t) = x_b(t^−) + x_th (δ(t − t_f^+) − δ(t − t_f^−)) (15)

For decoding, the output signal x̂(t) is computed with the following expression:

x̂(t) = x̂(0) + x_th ( Σ_{t_f^+ ≤ t} 1 − Σ_{t_f^− ≤ t} 1 ) (16)

where t_f^+ stands for the spiking time of the N^+ neuron and t_f^− is the firing time of the N^− neuron. Figure 3a shows a simple configuration for reconstructing an input sine signal, which is shown in Figure 3b, by using the spiking activities of two neurons (Figure 3c) that are fed by an encoding block composed of Equations (13) and (14), which feeds N^+ and N^− with the current levels shown in Figure 3d.
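The sketch below follows the reconstruction of Equations (13)-(16) given above; the exact functional forms, names, and constants are our reading of the scheme rather than the authors' code:

```python
import numpy as np

# Sketch of the differentiable step-forward encoder/decoder as
# reconstructed in Equations (13)-(16); I_r is the rheobase current,
# c the slope constant, x_th the sensibility threshold (illustrative).
I_r, c, x_th = 1.5e-9, 1e3, 1e-4

def encode(x, x_b):
    """Equations (13)-(14): excitatory currents for N+ and N-."""
    d = np.tanh(c * (x - x_b))
    return I_r * (1.0 + d), I_r * (1.0 - d)

def update_baseline(x_b, spiked_plus, spiked_minus):
    """Equation (15): move the baseline by x_th on each spike (booleans)."""
    return x_b + x_th * (float(spiked_plus) - float(spiked_minus))

def decode(x_hat, spiked_plus, spiked_minus):
    """Equation (16): accumulate +/- x_th per N+/N- spike."""
    return x_hat + x_th * (float(spiked_plus) - float(spiked_minus))
```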

Discrete Extended Kalman Filter
The discrete extended Kalman filter (EKF) allows full state estimation of system dynamics based on partial and/or noisy measurements. Consider a system represented in a discrete space-state manner [5]:

x_k = f(x_{k−1}, u_k) + w_k (17)
y_k = h(x_k) + v_k (18)

where x_k ∈ R^n is the state vector of the system, and f(·) is nonlinear and describes the evolution of the dynamics given the state value at the previous timestep x_{k−1} and a control input u_k ∈ R^n. y_k ∈ R^m is the available output of the system, which is described by h(·). w ∼ N(0, Q) and v ∼ N(0, R) are additive white Gaussian noise (AWGN) with covariance matrices Q ∈ R^{n×n} and R ∈ R^{m×m}, respectively, representing the system uncertainties given by perturbations or noisy measurements. The EKF algorithm retrieves an estimation x̂_{k|k} that ideally tends to x̂_{k|k} → x_k. As f(·), h(·) are nonlinear, the EKF uses a linearized version of the system's model by obtaining their respective Jacobians:

A = ∂f/∂x |_{x̂_{k−1|k−1}} (19)
C = ∂h/∂x |_{x̂_{k|k−1}} (20)

where x̂_{k−1|k−1} is the estimation of the EKF at the previous timestep. The discrete EKF is a two-step procedure involving a prediction and an update:

1. Prediction: First, a preliminary estimation x̂_{k|k−1}, ŷ_{k|k−1} is computed by:

x̂_{k|k−1} = f(x̂_{k−1|k−1}, u_k) (21)
ŷ_{k|k−1} = h(x̂_{k|k−1}) (22)

Then, the covariance estimates P_{k|k−1}, S_{k|k−1} are computed, taking into account the noise covariance matrices Q, R and the estimate from the previous timestep P_{k−1|k−1}:

P_{k|k−1} = A · P_{k−1|k−1} · A^T + Q (23)
S_{k|k−1} = C · P_{k|k−1} · C^T + R (24)

2. Update: The second step consists of computing the Kalman gain matrix κ ∈ R^{n×m} with

κ = P_{k|k−1} · C^T · S^{−1}_{k|k−1} (25)

in which the difference between the measurable output and the estimated output of the prediction step is used:

∆y_k = y_k − ŷ_{k|k−1} (26)

We can then obtain a final estimation x̂_{k|k} that considers errors in measurement and noise statistics:

x̂_{k|k} = x̂_{k|k−1} + κ · ∆y_k (27)

Finally, the second moment of the prediction P_{k|k}, which will be used for the prediction at the next timestep, is computed:

P_{k|k} = P_{k|k−1} − κ · S_{k|k−1} · κ^T (28)

In order to successfully reconstruct the full state, both the KF and EKF demand full knowledge of the system dynamics, and the correct characterization of perturbations and measurement noise can become cumbersome or unavailable in real scenarios.
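For reference, the full two-step procedure maps onto a few lines of linear algebra; the following generic sketch (function names and signatures are ours) mirrors Equations (21)-(28):

```python
import numpy as np

# Compact sketch of one discrete-EKF iteration, Equations (21)-(28).
# f, h are the model functions; F_jac, H_jac return their Jacobians
# (Equations (19)-(20)); all names are illustrative.
def ekf_step(x_est, P, y, u, f, h, F_jac, H_jac, Q, R):
    # Prediction
    x_pred = f(x_est, u)                 # Equation (21)
    A = F_jac(x_est, u)                  # Equation (19)
    P_pred = A @ P @ A.T + Q             # Equation (23)
    y_pred = h(x_pred)                   # Equation (22)
    C = H_jac(x_pred)                    # Equation (20)
    S = C @ P_pred @ C.T + R             # Equation (24)
    # Update
    K = P_pred @ C.T @ np.linalg.inv(S)  # Equation (25): the Kalman gain
    dy = y - y_pred                      # Equation (26): the innovation
    x_new = x_pred + K @ dy              # Equation (27)
    P_new = P_pred - K @ S @ K.T         # Equation (28)
    return x_new, P_new
```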

Proposed Kalman-Filtering SNN Structure
An SNN structure of an EKF that replaces Equations (23), (24), and (28) is shown in Figure 4. First, the error between the current and prior estimations is defined for all of the space-state variables:

∆x_t = x̂_{k|k} − x̂_{k|k−1},  ∆y_t = y_k − ŷ_{k|k−1}

Then, ∆x_t and ∆y_t are stacked into an input vector s_in for the SNN as follows:

s_in = [∆x_t, ∆y_t]^T ∈ R^{n+m}

This vector is encoded using Equations (13) and (14), which produce excitatory input current vectors for two ensembles of neurons inside the SNN, called Ens+ and Ens−, which spike for increasing and decreasing input signals, respectively. Both ensembles comprise two densely connected LIF neuron layers: the j-th layer with n + m neurons, which is modeled by Equation (1), and the k-th layer with n × m LIF neurons, which is modeled by Equations (1) and (4). These are connected by RSTDP synapses, as depicted in Equations (11) and (12), with the reward signal set to R(t) = 1. The spikes of the k-th layer from Ens+ and Ens− are finally decoded with Equation (16) to obtain each value of the Kalman gain matrix in order to properly reconstruct the full state vector of the system. Figure 5 shows the described SNN structure.
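A high-level sketch of the resulting estimation loop is given below; `snn` abstracts the encode/Ens+/Ens−/decode pipeline of Figure 5, and all names are our own stand-ins:

```python
import numpy as np

# High-level sketch of the proposed loop: the covariance recursions of
# Equations (23), (24), and (28) are dropped, and the Kalman gain is
# read out of the SNN instead. `snn.step` stands in for the encode ->
# Ens+/Ens- -> decode pipeline of Figure 5.
def snn_ekf_step(x_est, x_pred_prev, y, u, f, h, snn, n, m):
    x_pred = f(x_est, u)                # prediction, Equation (21)
    y_pred = h(x_pred)                  # Equation (22)
    dx = x_est - x_pred_prev            # error between current/prior estimates
    dy = y - y_pred                     # innovation, Equation (26)
    s_in = np.concatenate([dx, dy])     # stacked SNN input vector
    K = snn.step(s_in).reshape(n, m)    # spikes decoded into the Kalman gain
    x_new = x_pred + K @ dy             # Equation (27), unchanged
    return x_new, x_pred
```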

Results
In order to show the performance of the proposal, two nonlinear systems were used. For each system, the nonlinear equations were simulated to create noiseless ground-truth data x(t). Then, the resulting vector was noised as described in (17) and (18) by setting w_k, v_k with the diagonal covariance matrices

Q = q² · I_n,  R = r² · I_m,  ν = q²/r²

where ν = 1 would imply that the state noise and the observation noise have the same variance, i.e., q² = r². The resulting contaminated data then corresponded to a system with noisy measurements and unknown perturbations. The simulation was intended to compare the performance of a standard EKF against the SNN proposal under equal conditions; that is, only noisy measurements were provided. The SNN had to be able to recover this information, while for the EKF, Q, R were set as identity matrices, as these were supposed to be unknown. To create the system's synthetic data, as the models used were shaped as ẋ = A(x, u) · x, the solution of the nonlinear system could be expressed as a Taylor series expansion with five terms, as in [7], assuming that for a small timestep ∆t, f(x(t)) ≈ f(x(t + ∆t)). By doing this, we obtained a discrete system of the form described in Equation (17).
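The contamination step can be sketched as follows, with shapes, seeding, and the output matrix chosen for illustration (here a three-state, single-output system):

```python
import numpy as np

# Sketch of the data-contamination step: diagonal covariances built
# from q and r (Table 2), with AWGN added to the ground truth.
q, r, n, m = 0.0316, 0.1, 3, 1
Q = q ** 2 * np.eye(n)                  # state-noise covariance
R = r ** 2 * np.eye(m)                  # measurement-noise covariance
C = np.array([[1.0, 0.0, 0.0]])         # only x1 is measured (illustrative)
rng = np.random.default_rng(0)

def contaminate(x_true):
    """x_true: (steps, n) noiseless trajectory -> noisy states/outputs."""
    w = rng.multivariate_normal(np.zeros(n), Q, size=len(x_true))
    v = rng.multivariate_normal(np.zeros(m), R, size=len(x_true))
    x_noisy = x_true + w                # w_k ~ N(0, Q), Equation (17)
    y_noisy = x_noisy @ C.T + v         # v_k ~ N(0, R), Equation (18)
    return x_noisy, y_noisy
```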
For the SNN, the neuron parameters in Table 1 were used. The synapse, encoding/decoding, and simulation parameters are found in Table 2. The synapses were randomly initialized in the range of [w_min, w_max]. To display the neural activity, the observed spike frequency for each neuron, f_obs, was computed as follows:

f_obs = n_obs / T_obs

where n_obs counts how many spikes were produced inside a period of length T_obs = 50 ms. The procedure was repeated for the whole simulation timeline of t = 60 s, with a simulation timestep of ∆t = 1 × 10⁻⁴ s.

Table 2. Encoding/decoding and RSTDP parameters in the simulation.

SF Encoding and Decoding
Parameter | Value
Encoding sensibility threshold value in the Van der Pol test | x_th = 1 × 10⁻⁴
Encoding sensibility threshold value in the Lorenz test | x_th = 1 × 10⁻⁵
Decoding sensibility threshold value in both tests |

Noise Parameters
Measurement noise's standard deviation | r = 0.1
System uncertainties' standard deviation | q = 0.0316

The simulation scripts were coded from scratch using Python (v3.8+) [30] with the Numpy (v1.20+) and Sympy (v1.8+) [31] libraries. Additionally, during our testing, the Lorenz system's SNN was also coded using the SNNtorch (v0.5.3+) library [32]. The resulting code is available in the Data Availability Statement section.

Van der Pol Simulation
Proposed by the electrical engineer and physicist Balthasar Van der Pol, this nonlinear model describes oscillations in electric circuits that use vacuum tubes, and it can be written in the ẋ = Ax form as follows:

ẋ = [ 0, 1; −1, µ(1 − x₁²) ] · x

where µ = 3 refers to the damping strength of the oscillations. For this test, we set our output to y = [1, 0] x; that is, only x₁ was available for measurement, while x₂ was to be recovered from the system. Figure 6a shows a correct estimation of x₂. This can also be seen in the difference x − x̂ shown in Figure 6b. The κ ∈ R^{2×1} matrix values estimated by the SNN are displayed in Figure 6c; these were obtained by using the spikes of the output layer. The evolution of the synaptic weights is also shown for both ensembles (Ens+, Ens−) in Figure 6d,e, respectively. While the SNN's estimation became noisier as time moved forward, it can be seen in Figure 6f that the EKF was not able to properly reconstruct the missing state at any point.
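A minimal rollout of this system in the ẋ = A(x)·x shape, with a forward-Euler integrator and an illustrative initial condition, could look as follows:

```python
import numpy as np

# Sketch of the Van der Pol system in the x_dot = A(x) x shape used in
# this test, with mu = 3 and only x1 measured (y = [1, 0] x).
mu = 3.0

def A_vdp(x):
    return np.array([[0.0, 1.0],
                     [-1.0, mu * (1.0 - x[0] ** 2)]])

# Ground-truth rollout with a simple Euler integrator; the initial
# condition and the shortened 10 s horizon are illustrative (the
# paper's simulations run for 60 s).
dt, steps = 1e-4, 100_000
x = np.array([0.5, 0.0])
trajectory = np.empty((steps, 2))
for k in range(steps):
    trajectory[k] = x
    x = x + dt * (A_vdp(x) @ x)
y = trajectory[:, 0]                     # only x1 is observed
```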

Lorenz System Simulation
A typical dynamic system for testing the recovery of unknown or partial dynamics is the Lorenz attractor, which is composed of the following nonlinear dynamics:

ẋ₁ = σ(x₂ − x₁),  ẋ₂ = x₁(ρ − x₃) − x₂,  ẋ₃ = x₁x₂ − βx₃

For this system, the EKF can be implemented by using five Taylor series approximation terms, as in [7]. In this test, we set the output to y = [1, 0, 0] x, which meant that only the x₁ state was available for measurement; therefore, x₂, x₃ had to be recovered. Figure 7a shows the estimation of x₂, x₃. The error x − x̂ is shown in Figure 7b for the three states. The κ ∈ R^{3×1} matrix values estimated by the SNN are displayed in Figure 7c. The weight evolution is also shown for both ensembles (Ens+, Ens−) in Figure 7d,e, respectively. In this test, while the SNN's error estimation converged to close to zero for the three states (Figure 7b), Figure 7f shows that the EKF quickly diverged to infinity at t = 6.9 s due to the missing noise characterization of the system.
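A sketch of the Lorenz dynamics in the same A(x)·x shape, together with the five-term Taylor discretization used for the synthetic data, is given below; the σ, ρ, β values shown are the classic chaotic choice and are an assumption of this sketch, as the paper does not restate them here:

```python
import numpy as np

# Sketch of the Lorenz dynamics in the x_dot = A(x) x shape; sigma,
# rho, beta are the classic chaotic values (an assumption here).
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0

def A_lorenz(x):
    return np.array([[-sigma, sigma, 0.0],
                     [rho, -1.0, -x[0]],
                     [0.0, x[0], -beta]])

# Discrete transition f(x) via a 5-term Taylor expansion of exp(A dt),
# following the approximation used to generate the synthetic data.
def f_taylor(x, dt, terms=5):
    A = A_lorenz(x)
    F = np.eye(3)
    Ak = np.eye(3)
    for j in range(1, terms):
        Ak = Ak @ (A * dt) / j           # accumulate (A dt)^j / j!
        F = F + Ak
    return F @ x
```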

Discussion
A proper full state reconstruction of the space state was achieved. However, some considerations should be addressed. On the one hand, in the KalmanNet structure, the intention behind the usage of GRUs is to use them as storage for the ANN's internal memory in order to jointly track the underlying second-order statistical moments required for implicitly computing the KG [7]. In our SNN proposal, the intention is to replace them with the eligibility traces defined by the RSTDP weight update mechanism (Equation (11)), as E collects the weight changes proposed by STDP; thus, the traces represent the potentiation/depression tendency of the synaptic weight [8].
The energy consumption of an SNN relies on the spiking activity; therefore, only the spikes necessary to represent our signals should be produced. Rate-based encoding mechanisms return a constant excitatory input current for a constant input signal (no matter its magnitude), resulting in spiking activity for non-changing signals. In temporal encoding schemes, such as the one used in this work, the neurons are only excited based on the rate of change in the input signal. The introduction of Equations (13) and (14) is intended to restrain the excitatory input current of the neurons to minimum and maximum values. As tanh(·) ∈ [−1, 1], for high rates of change, the maximum input current is I_syn = 2I_r = 2(1.5 nA) = 3 nA; according to Equation (7) and the neuron parameters in Table 1, this corresponds to a spike frequency of f ≈ 120 Hz. This can be seen in the resulting spike frequency graphs for both the Van der Pol and Lorenz tests (see the Data Availability section).
However, while the neuron parameters were selected to resemble biological plausibility, a proper selection of the encoding/decoding sensibility and of the learning rate values is fundamental. x_th should be proportionate to I_syn so as to produce suitable spiking activity, while A_+, A_− should be high enough to appropriately modify the synaptic weights with the supplied spikes. Low learning rates demand a higher spike frequency but offer a higher precision, leading to slow convergence. In contrast, high learning rate values require less spiking activity but lead to a lower precision, which may result in divergence. In addition, to translate this SNN structure into a hardware implementation, the min/max synaptic weight values might need to be restricted to the values observed in available memristive devices.
A mathematical convergence analysis would determine the boundary conditions for selecting proper parameters. However, the LIF reset condition makes the dynamics non-differentiable, which precludes such an analysis, as well as the direct adaptation of back-propagation for SNNs. One way to deal with this is to move the analysis to the frequency domain by solving the LIF model and obtaining the tuning curve produced by Equation (7) and its corresponding graph (Figure 1b). It can be seen that the function is only differentiable in the range of [I_r, ∞). In [15], the authors used a polynomial differentiable tuning curve (which can be obtained through least-squares regression) to avoid this restriction. In this work, the introduction of bounded and differentiable encoding/decoding functions and the usage of two neuron ensembles (Ens+, Ens−) allowed this approximation to be avoided, as the dynamics of Ens+ are only affected by the growth of the input signals, while Ens− only processes their decay, thus creating a switching dynamical system [33] that might allow us to propose a Lyapunov candidate function whose derivative is negative definite.

Conclusions and Future Work
An SNN architecture embedded inside the extended Kalman filter algorithm was used to perform the full state recovery of a nonlinear dynamic system based on partial knowledge while assuming unknown but bounded perturbations. Numerical simulations showed the feasibility of the approach. While in other works the encoding/decoding process was performed by using a function approximation relating the input current to the spike frequency, the proposed modifications avoid this by setting a switched current designation that lets each ensemble of neurons and their respective synapses evolve with the growth/decay of the SNN input signals while bounding the excitatory input current, thus limiting the spike frequency.
In order to move towards a hardware construction, a very-large-scale integration (VLSI) neuron design, the replacement of synapses with memristive devices, and a VLSI design of the encoding/decoding modules would define the building blocks for a system-on-a-chip proposal. However, moving to a hardware implementation in currently available technologies might require modifications, such as changes in the memristive value range or in the achievable spike frequencies. Therefore, a framework for mathematical convergence analysis should be defined to study the SNN's performance under these new parameters. Nonetheless, it was shown that few resources (in terms of the number of neurons, synapses, and energy consumption) were needed to achieve proper performance by taking advantage of existing explainable PINN architectures.