1. Introduction
In contemporary computing, the majority of tasks necessitate some form of digital signal processing [1]. With the escalating computational capabilities of digital processing systems, there is a concurrent surge in power consumption. This is further exacerbated by the rapid advancements in the field of artificial intelligence. Neuromorphic sampling, or time encoding, is an alternative to traditional digital encoding that transforms an analog signal into a sequence of low-power time-based pulses, often referred to as a spike train. Neuromorphic sampling draws its inspiration from neuroscience and introduces a paradigm shift by significantly reducing power consumption during encoding and transmission [2,3]. Despite these advantages, there currently exist no equivalents of digital signal processing operations tailored to neuromorphic sampling. This unexplored territory holds the promise of groundbreaking developments in low-power and efficient signal processing.
In this paper, we address the problem of filtering a signal using its neuromorphic measurements, thus extending the principle of digital signal filtering to the case of neuromorphic sampling. In posing this problem, we do not seek alternatives to spiking neural networks, but rather theoretically validated analytical approaches. Moreover, the proposed problem is not meant to replace existing conventional digital signal processing; rather, it is posed under the assumption that the communication protocol emits and receives spike trains [4,5]. In this case, the proposed problem is to perform the mathematical operation of filtering on the analog signals encoded in these spike trains.
In the literature, the concept of spike train filtering predominantly refers to convolving the filter function with a sequence of Diracs centered at the spike times [6]. This is not an operation on the analog signal that generated those neuromorphic samples. Moreover, in the context of neuromorphic measurements generated from multidimensional signals, such as those produced by event cameras [7], filtering also refers to performing multidimensional spatial convolution [8,9]. The case of filtering the analog signal via its neuromorphic measurements was proposed in [10], but the process also involves signal reconstruction. A filtering approach for neuromorphic signals without reconstruction was first studied in [11].
Therefore, conventionally, to process the signal underlying a sequence of spike measurements, the signal is recovered in the analog domain, followed by filtering and neuromorphic sampling. There are a number of drawbacks to this approach. First, this method does not exploit the power consumption advantage of neuromorphic measurements. Second, this approach is computationally heavier due to the complexity of signal reconstruction from neuromorphic samples [12,13]. Third, reconstruction is only possible if the input satisfies some restrictive smoothness or sparsity constraints. For conventional sampling, by contrast, the process of digital filtering is independent of the characteristics of the signal that generated the measurements.
In this paper, we derive a direct mapping between the time-encoded inputs and outputs of an analog filter. The proposed mapping forms the basis of a practical algorithm for filtering the underlying signal corresponding to some given neuromorphic measurements, without direct access to the signal. We introduce theoretical guarantees and error bounds for the measurements generated with the proposed algorithm. Through numerical simulations, we demonstrate the performance of our method in terms of speed, as well as the reduced restrictions on the input signal, in comparison with the existing conventional method.
This paper is structured as follows. Section 2 presents a brief review of the time encoding model used in this paper and the associated input reconstruction methods. Section 3 introduces the proposed problem. Section 4 describes the proposed filtering method. Numerical results are presented in Section 5. Section 6 presents the concluding remarks.
2. Time Encoding
The time encoding machine (TEM) is a conceptualization of neuromorphic sampling that maps an input $u(t)$ into a sequence of samples $\{t_k\}_{k\in\mathbb{Z}}$. The particular TEM considered here is the integrate-and-fire (IF) TEM, which is inspired by neuroscience. Consequently, the sequence $\{t_k\}_{k\in\mathbb{Z}}$ is called a spike train, where a spike refers to the firing of an action potential, representing the information transmission method in the mammalian cortex. Previously, the IF model was used for system identification of biological neurons [14,15], to perform machine learning tasks [16,17], but also for input reconstruction [13,18,19,20,21]. The IF model adds input $u(t)$ with a bias parameter $b$, and subsequently integrates the sum to generate a strictly increasing function $y(t)$. When $y(t)$ crosses a threshold $\delta$, the integrator is reset and the IF generates an output spike time $t_k$. The IF TEM is described by the following equations:
Without loss of generality, it is assumed that $t_0=0$. A common assumption is that the input is bounded by $c\in\mathbb{R}_+$, such that $|u(t)|\leq c<b$. This bound enables the derivation of the following density guarantees [2]:
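As a concrete illustration, the encoding and its density guarantees can be checked numerically. The sketch below assumes the standard IF t-transform $\int_{t_k}^{t_{k+1}}(u(\tau)+b)\,d\tau=\delta$ and verifies that the inter-spike gaps lie between $\frac{\delta}{b+c}$ and $\frac{\delta}{b-c}$, which follows directly from $|u(t)|\leq c<b$; the test signal and all parameter values are hypothetical.

```python
import numpy as np

# Minimal IF TEM encoder sketch: spike when the integral of (u + b)
# crosses a multiple of delta; signal and parameters are hypothetical.
b, c, delta = 2.0, 0.9, 0.1
u = lambda t: c*np.sin(2*np.pi*t)            # bounded test input, |u| <= c

tt = np.linspace(0, 5, 500001)               # dense grid for the integrator
F = np.concatenate(([0.0], np.cumsum((u(tt[:-1]) + b)*np.diff(tt))))
tk = tt[np.searchsorted(F, delta*np.arange(1, int(F[-1]/delta) + 1))]

gaps = np.diff(tk)
print(gaps.min() >= delta/(b + c) - 1e-4)    # density guarantee, lower bound
print(gaps.max() <= delta/(b - c) + 1e-4)    # density guarantee, upper bound
```

Both checks print `True` up to the grid resolution, matching the density guarantees quoted above.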
Signal $u(t)$ is, in the general case, not recoverable from $\{t_k\}_{k\in\mathbb{Z}}$. To ensure it can be reconstructed, restrictive assumptions must be imposed. A common assumption is that $u(t)$ belongs to $PW_{\Omega}$, the space of functions bandlimited to $\Omega\,\mathrm{rad/s}$ that are also square integrable, i.e., $u\in L^2(\mathbb{R})$. If this assumption is satisfied, then $u(t)$ can be recovered from $\{t_k\}_{k\in\mathbb{Z}}$ if
To offer an intuition on the recovery condition and its link to the Nyquist rate, we note that TEM Equation (1) can be rewritten as
where $\langle\cdot,\cdot\rangle_{L^2}$ denotes the inner product in the Hilbert space $L^2(\mathbb{R})$ of square integrable functions and $\mathbb{1}_{[t_k,t_{k+1}]}$ is the characteristic function of the interval $[t_k,t_{k+1}]$. We note that although $\mathbb{1}_{[t_k,t_{k+1}]}\in L^2(\mathbb{R})$, it is not bandlimited, i.e., $\mathbb{1}_{[t_k,t_{k+1}]}\notin PW_{\Omega}$. However, due to the properties of the inner product [22],
where $\phi_k(t)\triangleq \frac{\sin(\Omega\,\cdot)}{\pi\,\cdot}\ast\mathbb{1}_{[t_k,t_{k+1}]}(t)$ denotes the projection of $\mathbb{1}_{[t_k,t_{k+1}]}$ onto $PW_{\Omega}$. In the case of the Nyquist rate condition, the uniform samples can be described as
The Nyquist rate criterion [23] ensures that $\{u(kT)\}_{k\in\mathbb{Z}}$ uniquely identifies $u(t)$ if $T<\frac{\pi}{\Omega}$, which is guaranteed by the fact that the functions $\left\{\frac{\sin(\Omega(\cdot-kT))}{\pi(\cdot-kT)}\right\}_{k\in\mathbb{Z}}$ form a basis in $PW_{\Omega}$. The same is not true in the case of time encoding, which is a form of non-uniform sampling [24,25,26], where $\{\phi_k\}_{k\in\mathbb{Z}}$ do not form a basis. They can, however, form the more general structure of a frame [27], which guarantees that $u(t)$ is uniquely determined by $\langle u,\phi_k\rangle_{L^2}$ if the sequence $\{t_k\}_{k\in\mathbb{Z}}$ is dense enough. Via (2), we can use $\frac{\delta}{b-c}$ as a measure of the density of sequence $\{t_k\}_{k\in\mathbb{Z}}$, yielding the Nyquist-like criterion (3).
Input $u(t)$ is then recovered from $\{t_k\}_{k\in\mathbb{Z}}$ as
where $s_k=\frac{t_k+t_{k+1}}{2}$ are the midpoints of the intervals $[t_k,t_{k+1}]$ and $\tilde{c}_k$ are the solution, in the least squares sense, of the following system [22]:
Unlike uniform sampling, input recovery for an IF TEM is much more computationally complex because, for each new input, the functions $\frac{\sin\Omega(t-s_k)}{\pi(t-s_k)}$ have to be computed and System (6) needs to be solved. This becomes very demanding computationally for long sequences $\{t_k\}$. Alternative recovery approaches are based on optimizing a smoothness-based criterion instead of aiming to uniquely recover the input [28,29]. Moreover, the problem of input recovery was shown to be equivalent to that of system identification of a filter in series with an IF TEM in the case of linear [30,31] and nonlinear filters [32].
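To illustrate the computational burden of this recovery step, the least-squares system (6) can be sketched numerically: for every new spike train, a full system matrix must be assembled and solved. The sketch below assumes the IF measurement identity $\int_{t_k}^{t_{k+1}}u(\tau)\,d\tau=\delta-b(t_{k+1}-t_k)$ and the antiderivative $\int\frac{\sin(\Omega x)}{\pi x}\,dx=\frac{1}{\pi}\mathrm{Si}(\Omega x)$ for the matrix entries; the test signal and all parameter values are hypothetical.

```python
import numpy as np
from scipy.special import sici

# Hypothetical bandlimited input and IF parameters satisfying (3)
Omega, b, c, delta = 2*np.pi, 2.0, 0.9, 0.15
u = lambda t: c*np.sin(Omega*t/3)*np.cos(Omega*t/7)     # bandlimited below Omega

tt = np.linspace(0, 4, 400001)                          # dense-grid IF encoding
F = np.concatenate(([0.0], np.cumsum((u(tt[:-1]) + b)*np.diff(tt))))
tk = tt[np.searchsorted(F, delta*np.arange(1, int(F[-1]/delta) + 1))]

q = delta - b*np.diff(tk)                               # q_k = integral of u over [t_k, t_{k+1}]
s = (tk[:-1] + tk[1:])/2                                # midpoints s_k

Si = lambda x: sici(x)[0]                               # sine integral Si(x)
# Gm[k, l] = integral of sin(Omega(t - s_l))/(pi(t - s_l)) over [t_k, t_{k+1}]
Gm = (Si(Omega*(tk[1:, None] - s[None, :]))
      - Si(Omega*(tk[:-1, None] - s[None, :])))/np.pi

ck = np.linalg.lstsq(Gm, q, rcond=1e-8)[0]              # solve System (6)
u_rec = lambda t: np.sum(ck*np.sin(Omega*(t - s))/(np.pi*(t - s)))

t0 = np.sqrt(2)
print(abs(u_rec(t0) - u(t0)))                           # small error mid-window
```

Note that the matrix build and solve scale poorly with the spike count, which is precisely the cost the direct method of Section 4 avoids; the truncated `rcond` handles the ill-conditioning caused by oversampling.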
The methods presented so far assume the input is bandlimited. Further generalizations were introduced for the case where $u(t)$ is a function in a shift-invariant space [13,19] or in a space with a finite rate of innovation [20,21]. However, if $u(t)$ does not belong to one of the classes above, or if it is bandlimited but does not satisfy (3), then the conventional theory does not allow any processing of signal $u(t)$ via its samples $t_k$. The same is not true for conventional digital signals, which can be processed even when they are not sampled at the Nyquist rate. We show that some types of processing, such as filtering, are still possible even when (3) does not hold.
3. Problem Statement
Here, we formulate the proposed signal filtering problem as follows. We assume the neuron input is continuous, i.e., $u\in C(\mathbb{R})$. To satisfy the neuron encoding requirement in Section 2, we assume the input is bounded, such that $|u(t)|\leq c<b,\ \forall t\in\mathbb{R}$. Furthermore, we assume that the input is absolutely integrable, $u\in L^1(\mathbb{R})$, and square integrable, $u\in L^2(\mathbb{R})$. Following the same idea as in digital filtering, we do not impose any general conditions on the bandwidth, smoothness, or sparsity of the analog signal in order to compute the filter output.
The filter is assumed to be linear, with an impulse response $g(t)$ that is continuous, $g\in C(\mathbb{R})$, and absolutely integrable, $g\in L^1(\mathbb{R})$. The output of the filter then satisfies
where the last inequality assumes that $\|g\|_{L^1}\leq 1$, which is introduced to ensure that $|y(t)|\leq c$, which in turn allows $y(t)$ to be sampled by the same neuron. According to the properties of the convolution operator, we also have $y\in L^2(\mathbb{R})\cap L^1(\mathbb{R})$, where $L^2(\mathbb{R})$ denotes the space of square integrable functions.
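The output bound in (7) can be checked numerically. A minimal sketch, with a hypothetical normalized low-pass kernel satisfying $\|g\|_{L^1}=1$ so that Young's inequality gives $|y(t)|\leq c$:

```python
import numpy as np

# Hypothetical filter: truncated Gaussian, normalized so that ||g||_{L^1} = 1
dt = 1e-3
t = np.arange(-2, 2, dt)
g = np.exp(-t**2/(2*0.1**2))
g /= np.sum(g)*dt                          # enforce unit L^1 norm

c = 0.9
tu = np.arange(0, 10, dt)
u = c*np.sin(2*np.pi*tu)*np.exp(-0.1*tu)   # bounded input, |u| <= c

y = np.convolve(u, g, mode="same")*dt      # y = (g * u)(t) on the grid
print(np.max(np.abs(y)) <= c)              # Young's inequality: True
```

Since the discrete weights sum exactly to one, the output is a weighted average of input values, so the bound holds exactly on the grid.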
We let $\{t_k^u\}_{k\in\mathbb{Z}}$ and $\{t_k^y\}_{k\in\mathbb{Z}}$ be the neuromorphic samples of signals $u$ and $y$, respectively, computed using an IF neuron with parameters $\delta, b$. The proposed problem is to compute $t_k^y$ knowing $t_k^u$, the sampling parameters $\delta, b$, and the filter $g(t)$. This problem, illustrated in Figure 1, is inspired by digital signal processing, where a digital filter is applied directly to the samples of a signal. The conventional way to address this problem would be to recover $u(t)$ from $t_k^u$, apply the filter $g(t)$ in the analog domain, and subsequently sample the output $y(t)$ with the same IF model to obtain $t_k^y$. We refer to this as the indirect method for filtering. The first step of recovery, however, is not possible unless we impose some further restrictive conditions on $u(t)$, such as being bandlimited [22], belonging to a shift-invariant space [13,19], or having a finite rate of innovation [18,20]. Therefore, the proposed problem is not solvable in its full generality using conventional approaches.
However, if we replace neuromorphic sampling with conventional uniform sampling, this problem reduces to the widely used operation of digital filtering. That operation does not require any special conditions on the analog signal that generated the samples. Therefore, an equivalent of this solution for the case of neuromorphic sampling is highly desirable.
4. The Proposed Neuromorphic Direct Filtering Method
In this section, we describe the proposed direct filtering method. To compute $t_k^y$ from $t_k^u$, we need to create an analytical link between the integrals of the underlying analog signals $u(t)$ and $y(t)$, due to the integral operators in (1) and (7). To this end, we define the following auxiliary functions:
We note that $U$ satisfies $|U(t)|\leq\|u\|_{L^1}$. Using Young's convolution inequality, we obtain $|Y(t)|\leq\|y\|_{L^1}\leq\|u\|_{L^1}\cdot\|g\|_{L^1}$. Using these functions, we derive the equivalent of the t-transform Equations (1) as
Assuming we know $Y(t)$, the target spike train $t_k^y$ satisfies $f_k(t_k^y)=t_k^y$, where
The following result shows that $t_k^y$ can be uniquely computed using $f_k(t)$.
Lemma 1 (Exact Output Samples Computation).
Function $f_k(t)$ has a unique fixed point $t=t_k^y$. Furthermore, we let $t_{k,m}^y$ be computed recursively such that $t_{k,0}^y\in\mathbb{R}$ is arbitrary and $t_{k,m+1}^y=f_k(t_{k,m}^y)$. Then, $\lim_{m\to\infty}t_{k,m}^y=t_k^y$ and $|t_{k,m}^y-t_k^y|<|t_k^y-t_{k,0}^y|\left(\frac{c}{b}\right)^m$.
Proof. We assume, by contradiction, that $\exists\,\overline{t}\neq t_k^y$ such that $f_k(\overline{t})=\overline{t}$. It follows that
On the other hand, we know that $|y(t)|\leq c<b$ due to (7), and thus $\int_0^{\overline{t}}(y(\tau)+b)\,d\tau$ is a strictly increasing function of $\overline{t}$, which ensures that $\int_0^{\overline{t}}(y(\tau)+b)\,d\tau=k\delta$ has a unique solution. Using (9), we obtain $\overline{t}=t_k^y$, which invalidates our initial assumption and proves uniqueness.
From (9), it follows that $\forall k\in\mathbb{Z},\ f_k(t_k^y)=t_k^y$. The following holds:
and thus
We let $\zeta=|t_k^y-t_{k,0}^y|$ and $t=t_{k,0}^y$. It follows that
Similarly, by choosing $\zeta=|t_k^y-t_{k,1}^y|$ and $t=t_{k,1}^y$, we obtain
and the process continues recursively, which completes the proof via $c<b$. □
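The contraction underlying Lemma 1 is easy to visualize numerically. The sketch below assumes $Y(t)$ is known in closed form and, via (9), takes $f_k(t)=\frac{1}{b}(k\delta-Y(t))$; the signal $y$, the index $k$, and all parameter values are hypothetical. Each iteration shrinks the error by a factor of at most $c/b$.

```python
import numpy as np

# Hypothetical filter output y(t) = c*sin(t), so Y(t) = c*(1 - cos(t))
b, c, delta = 2.0, 0.9, 0.1
Y = lambda t: c*(1 - np.cos(t))
f = lambda k, t: (k*delta - Y(t))/b      # fixed-point map from (9)

k = 7
t = 0.0                                  # arbitrary initial condition t_{k,0}
for m in range(60):
    t = f(k, t)                          # t_{k,m+1} = f_k(t_{k,m}), rate c/b

# the fixed point satisfies the t-transform for y: Y(t) + b*t = k*delta
print(abs(Y(t) + b*t - k*delta))         # ~0 after convergence
```

With $c/b=0.45$, sixty iterations drive the residual below machine precision, consistent with the geometric rate $\left(\frac{c}{b}\right)^m$ in the lemma.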
Therefore, $t_k^y$ can be computed by solving the fixed-point equation $f_k(t)=t$. This equation requires knowing $Y(t)$, which satisfies
where the last equality uses the variable change $\tau\to t-\tau$. In reality, however, $Y(t)$ is unknown, since it could only be precisely computed using $U(t)$. Given that we only know $t_k^u$ and do not impose any smoothness or sparsity conditions on $u(t)$, we do not have access to $U(t)$, but only to its samples $U(t_k^u)$ via (9). In the following, we show that $Y(t)$, $f_k(t)$, and subsequently $t_k^y$ can be estimated using a piecewise constant approximation of $U(t)$ at points $t_k^u$. We let $\tilde{f}_k:\mathbb{R}\to\mathbb{R}$ be defined by
where
where $I_1U(t)$ is the piecewise constant interpolant of $U$ at points $\{t_k^u\}_{k\in\mathbb{Z}}$, such that $I_1U(t)=U(t_k^u)$ for $t\in[t_k^u,t_{k+1}^u)$. The next proposition derives some properties of $\tilde{Y}(t)$.
Proposition 1. Function $\tilde{Y}(t)$ is continuous and satisfies
where $G(t)\triangleq\int_0^t g(s)\,ds$.
Proof. Using (15), we obtain
which proves (16). It follows that $\tilde{Y}(t)$ is a linear combination of continuous functions; thus, it is itself continuous. □
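The key approximation error behind this construction is that of the interpolant itself: since $|U'(t)|=|u(t)|\leq c$, the deviation $|U(t)-I_1U(t)|$ is at most $c\Delta$, where $\Delta$ is the largest inter-spike gap (the proofs below use the related constant $E=2\Delta c\|g\|_{L^1}$). A quick numerical check, assuming the standard t-transform and a hypothetical input:

```python
import numpy as np

# Check |U - I_1 U| <= c * Delta over the encoded interval
b, c, delta = 2.0, 0.9, 0.1
u = lambda t: c*np.cos(3*t)                  # hypothetical bounded input

tt = np.linspace(0, 5, 500001)
U = np.concatenate(([0.0], np.cumsum(u(tt[:-1])*np.diff(tt))))  # U(t)
F = U + b*tt                                                    # integral of u + b
tk = np.concatenate(([0.0],
     tt[np.searchsorted(F, delta*np.arange(1, int(F[-1]/delta) + 1))]))

mask = tt <= tk[-1]                                             # encoded range only
idx = np.searchsorted(tk, tt[mask], side="right") - 1           # interval index
I1U = np.interp(tk, tt, U)[idx]                                 # I_1 U = U(t_k)
Delta = np.diff(tk).max()
print(np.abs(U[mask] - I1U).max() <= c*Delta + 1e-6)            # True
```

The bound follows from $|U(t)-U(t_k^u)|=\left|\int_{t_k^u}^{t}u(\tau)\,d\tau\right|\leq c(t-t_k^u)$ on each interval.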
Proposition 1 shows that, unlike $Y(t)$ and $f_k(t)$, function $\tilde{Y}(t)$ and, consequently, also $\tilde{f}_k(t)$, are fully known from the IF parameters and the input samples $t_k^u$. The remaining challenge is to show that the fixed-point equation $\tilde{f}_k(t)=t$ can be solved and to provide an error bound for estimating $t_k^y$. This challenge is addressed rigorously in the next theorem. Moreover, the result allows recursively computing a sequence of estimates $\tilde{t}_{k,m}^y$ that converges to a vicinity of $t_k^y$.
Theorem 1 (Estimating Output Samples from Input Samples). We let $u(t)$ be a signal satisfying $u\in L^1(\mathbb{R})\cap L^2(\mathbb{R})\cap C(\mathbb{R})$ and $|u(t)|\leq c<b$. Furthermore, we let $g(t)$ be the impulse response of a filter satisfying $g\in L^1(\mathbb{R})\cap C(\mathbb{R})$, $\|g\|_{L^1}\leq 1$, and let $y(t)$ be the filter output in response to input $u(t)$. Signals $u(t)$ and $y(t)$, sampled with an IF neuron with parameters $\delta, b$, generate values $\{t_k^u\}_{k\in\mathbb{Z}}$ and $\{t_k^y\}_{k\in\mathbb{Z}}$, respectively. Then, the following hold true:
(a) $\forall k\in\mathbb{Z},\ \exists\,\tilde{t}_k^y\in\left[t_k^y-\frac{2\delta c}{(b-c)^2},\,t_k^y+\frac{2\delta c}{(b-c)^2}\right]$ such that $\tilde{f}_k(\tilde{t}_k^y)=\tilde{t}_k^y$, where $\tilde{f}_k(t)$ satisfies (14).
(b) We let $\tilde{t}_{k,m}^y$ be a sequence defined recursively as $\tilde{t}_{k,m+1}^y=\tilde{f}_k(\tilde{t}_{k,m}^y)$, where $\tilde{t}_{k,0}^y\in\mathbb{R}$. Then,
(c) For $\tilde{t}_{k,m}^y$ defined above, $\exists\,m_0\in\mathbb{Z}$ such that $\tilde{t}_{k,m}^y\in\left[t_k^y-\frac{2\delta c}{(b-c)^2},\,t_k^y+\frac{2\delta c}{(b-c)^2}\right],\ \forall m>m_0$.
Proof. (a) Function $\tilde{Y}$ satisfies
where $E=2\Delta c\|g\|_{L^1}$ and $\Delta=\sup_{k\in\mathbb{Z}}\left(t_{k+1}^u-t_k^u\right)$. From (12), the following holds:
Using $|f_k(t)-\tilde{f}_k(t)|\leq\frac{E}{b},\ \forall t\in\mathbb{R}$,
Unlike in the case of Lemma 1, applying $\tilde{f}_k(t)$ recursively does not guarantee the exact computation of $t_k^y$. However, we observe that by picking $\zeta=\frac{E}{b-c}$, we obtain identical intervals for $t$ and $\tilde{f}_k(t)$:
This observation is very useful, as it enables applying Brouwer's fixed-point theorem, which states that for any continuous function $f:\mathbb{S}\to\mathbb{S}$, where $\mathbb{S}$ is a nonempty compact convex set, there is a point $t_0$ such that $f(t_0)=t_0$. Given that $\tilde{Y}(t)$ is continuous due to Proposition 1, it follows that $\tilde{f}_k$ is also continuous. By applying Brouwer's fixed-point theorem for $f(t)=\tilde{f}_k(t)$ and $\mathbb{S}=\left[t_k^y-\frac{E}{b-c},\,t_k^y+\frac{E}{b-c}\right]$, it follows that $\tilde{f}_k(t)$ has a fixed point in $\mathbb{S}$. We recall that $E=2\Delta c\|g\|_{L^1}$, which, using (2), leads to
It follows that $\mathbb{S}\subseteq\left[t_k^y-\frac{2\delta c}{(b-c)^2},\,t_k^y+\frac{2\delta c}{(b-c)^2}\right]$, which yields the required result.
(b) We approach this proof using mathematical induction. We select $t=\tilde{t}_{k,0}^y$ in (21). Using (23), it follows that
We note that $\tilde{t}_{k,0}^y\in[t_k^y-\zeta,\,t_k^y+\zeta]$ is always true for $\zeta=|t_k^y-\tilde{t}_{k,0}^y|$, which yields
This demonstrates that (18) is true for $m=1$. To finalize the induction, we assume (18) to be true, and show it is true for $m+1$ as follows:
Finally, as before, we use the fact that $\zeta=|t_k^y-\tilde{t}_{k,m}^y|$ guarantees $\tilde{t}_{k,m}^y\in[t_k^y-\zeta,\,t_k^y+\zeta]$. We also use the fact that $\zeta$ is bounded by (18), which, when substituted into (26), leads to the desired result via (23).
(c) Equation (18) can be expanded into
The required result follows from $\lim_{m\to\infty}\left(\frac{c}{b}\right)^m=0$ via (23). □
Theorem 1 shows that one can construct a sequence $\tilde{t}_{k,m}^y$ that approximates $t_k^y$ with error $\frac{2\delta c}{(b-c)^2}$. We note that this error depends only on the neuron parameters, and thus can be made arbitrarily small by changing the IF model. In practice, we use a finite sequence of input measurements $\{t_l^u\}_{l=0,\dots,L}$ to approximate the output samples $\{t_k^y\}_{k=0,\dots,K}$ that satisfy $t_0^u\leq t_k^y\leq t_L^u,\ \forall k=0,\dots,K$. The proposed direct filtering method is summarized in Algorithm 1.
We note that, in practice, convergence was achieved in Algorithm 1 within $M\leq 4$ iterations in all examples we evaluated. Moreover, we note that computing $\tilde{t}_{k+1}^y$ uses $\tilde{t}_k^y$ as an initial condition. When new input data samples become available, Algorithm 1 incorporates them in computing new output data samples, but does not need to recompute the output samples that are already known. In the next section, we numerically evaluate the proposed algorithm.
Algorithm 1 Computing the neuromorphic output data samples via the proposed method.
Data: $\{t_k^u\}$, $g(t)$, $\delta$, $b$;
Result: $\{\tilde{t}_k^y\}$;
Step 1. Set $k=1$ and $\tilde{t}_0^y=0$. While $\tilde{t}_{k-1}^y<t_L^u$,
Step 1a. $\tilde{t}_{k,0}^y=\tilde{t}_{k-1}^y$;
Step 1b. Compute $\tilde{t}_{k,m+1}^y=\tilde{f}_k(\tilde{t}_{k,m}^y)$ for $m=0,\dots,M-1$, where $\tilde{f}_k(t)=\frac{1}{b}\left(k\delta-\tilde{Y}(t)\right)$,
$\tilde{Y}(t)=\sum_{l=0}^{L-1}U(t_l^u)\left[G(t-t_l^u)-G(t-t_{l+1}^u)\right],$
and $G(t)=\int_0^t g(\tau)\,d\tau$, $U(t_l^u)=l\delta-bt_l^u$, $l=0,\dots,L$;
Step 1d. $\tilde{t}_k^y=\tilde{t}_{k,M}^y$;
Step 1e. $k=k+1$;
Step 2. Set $K=k-2$.
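An end-to-end numerical sketch of Algorithm 1 follows. It assumes the t-transform identity $U(t_l^u)=l\delta-bt_l^u$ and the closed form of $\tilde{Y}$ from Proposition 1; the input $u$, the causal filter $g(t)=a e^{-at}$ (for which $G(t)=1-e^{-at}$ and $\|g\|_{L^1}=1$), and all parameter values are hypothetical. The reference train $\{t_k^y\}$ is obtained by densely sampling $y=g\ast u$, only to measure the error against the bound $\frac{2\delta c}{(b-c)^2}$ of Theorem 1.

```python
import numpy as np

b, c, delta, a, T = 2.0, 0.9, 0.05, 4.0, 10.0
u = lambda t: c*np.sin(3*t)*np.exp(-0.05*t)            # |u(t)| <= c
G = lambda t: np.where(t > 0, 1 - np.exp(-a*np.maximum(t, 0)), 0.0)

def if_encode(x, T, n=200001):
    """Dense-grid IF encoding: spike when the integral of (x+b) crosses k*delta."""
    tt = np.linspace(0, T, n)
    F = np.concatenate(([0.0], np.cumsum((x(tt[:-1]) + b)*np.diff(tt))))
    return tt[np.searchsorted(F, delta*np.arange(1, int(F[-1]/delta) + 1))]

tu = np.concatenate(([0.0], if_encode(u, T)))          # input spikes, t_0 = 0
L = len(tu) - 1
U = delta*np.arange(L + 1) - b*tu                      # U(t_l^u) = l*delta - b*t_l^u

def Y_tilde(t):                                        # closed form via Proposition 1
    return np.sum(U[:-1]*(G(t - tu[:-1]) - G(t - tu[1:])))

ty, k, t, M = [], 1, 0.0, 4                            # Algorithm 1 main loop
while t < tu[-1]:
    for _ in range(M):
        t = (k*delta - Y_tilde(t))/b                   # t_{k,m+1} = f~_k(t_{k,m})
    ty.append(t); k += 1
ty = np.array(ty[:-1])                                 # discard spike beyond t_L^u

# Reference output spikes: filter y = g*u on a dense grid, then encode directly
dt = 1e-3
grid = np.arange(0, T, dt)
kern = a*np.exp(-a*(np.arange(0, 3, dt) + dt/2))       # midpoint-sampled kernel
y_grid = np.convolve(u(grid), kern)[:len(grid)]*dt
ty_ref = if_encode(lambda t: np.interp(t, grid, y_grid), T)

n = min(len(ty), len(ty_ref))
err = np.max(np.abs(ty[:n] - ty_ref[:n]))
print(err, 2*delta*c/(b - c)**2)                       # error vs. Theorem 1 bound
```

In this sketch the empirical spike-time error stays below the theoretical bound, and, as noted above, a small number of fixed-point iterations per spike ($M=4$) suffices because the previous output spike is reused as the initial condition.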
6. Conclusions
In this work, we introduced a new method to filter an analog signal via its neuromorphic measurements. Unlike existing approaches, the method does not require imposing smoothness-type assumptions on the analog input and filter output, such as a limited bandwidth. We introduced recovery guarantees, showing that it is possible to approximate the output spike train with arbitrary accuracy for an appropriate choice of the sampling model. We compared the proposed method numerically against the conventional solution to this problem, which involves reconstruction of the analog signal. The results show that the accuracy of the proposed method is comparable to that of the conventional approach. However, the computing time was smaller for the proposed method in all examples, ranging from 2–3 times up to more than one order of magnitude smaller.
Conceptually, the proposed method has the advantage of not depending on the characteristics of the analog signal, and therefore it is not required to satisfy any reconstruction guarantees. As demonstrated numerically, the method works well in the case of random inputs, as well as when the input and output of the filter are sampled below the Nyquist rate. Moreover, given that it bypasses input reconstruction, the proposed method is not affected by known artefacts of recovery methods such as boundary errors.
This work can be extended in several directions. First, the theoretical results can be extended to work with higher-order interpolation rather than a piecewise constant, which may lead to better error bounds. Second, the results can be extended to the more general scenarios of multi-channel or nonlinear filters. Third, the proposed algorithm could be implemented in hardware and tested in practical communication scenarios. This work has the potential to lead to the development of neuromorphic filters that would facilitate a faster transition towards a power-efficient computing infrastructure.