We introduce an ultra-compact, single-neuron-per-class, end-to-end readout for binary classification of noisy, image-encoded sensor time series. The approach compares a linear single-unit perceptron (E2E-MLP-1) with a resonate-and-fire (RAF) neuron (E2E-RAF-1), which merges feature selection and decision-making in a single block. Beyond empirical evaluation, we provide a mathematical analysis of the RAF readout: starting from its subthreshold ordinary differential equation, we derive the transfer function $H(\omega)$, characterize the frequency response, and relate the output signal-to-noise ratio (SNR) to $|H(\omega)|^2$ and the noise power spectral density $S(f)$ (brown, pink, and blue noise). We present a stable discrete-time implementation compatible with surrogate gradient training and discuss the associated stability constraints. As a case study, we classify walk-in-place (WIP) locomotion in a virtual reality (VR) environment using a vision-based motion encoding (72 × 56 grayscale images) derived from 3D trajectories, comprising 44,084 samples from 15 participants. On clean data, both single-neuron-per-class models approach ceiling accuracy, whereas under colored noise the RAF readout yields consistent gains (typically +5–8% absolute accuracy at medium and high perturbation levels), indicative of the intrinsic band-selective filtering induced by resonance. With ∼8k parameters and sub-2 ms inference on commodity graphics processing units (GPUs), the RAF readout provides a mathematically grounded, robust, and efficient alternative for stochastic signal processing across domains, with VR locomotion used here as an illustrative validation.
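For concreteness, a minimal sketch of the derivation, assuming the standard damped-oscillator form of the RAF subthreshold dynamics (the paper's exact parameterization of the damping $\lambda$ and natural frequency $\omega_0$ may differ):

```latex
% Assumed subthreshold RAF dynamics (damped harmonic oscillator):
%   \ddot{u}(t) + 2\lambda\,\dot{u}(t) + \omega_0^2\, u(t) = I(t)
% Fourier transforming both sides yields the transfer function
\[
  H(\omega) = \frac{\hat{u}(\omega)}{\hat{I}(\omega)}
            = \frac{1}{\omega_0^2 - \omega^2 + 2i\lambda\omega},
  \qquad
  |H(\omega)|^2 = \frac{1}{(\omega_0^2 - \omega^2)^2 + 4\lambda^2\omega^2},
\]
% a band-pass response peaking near \omega_r = \sqrt{\omega_0^2 - 2\lambda^2}
% (for \omega_0^2 > 2\lambda^2). For a narrowband signal at \omega_s with
% power P_s embedded in noise of power spectral density S(f), the output
% SNR weights the noise by |H|^2:
\[
  \mathrm{SNR}_{\mathrm{out}}
    = \frac{|H(\omega_s)|^2\, P_s}
           {\int_0^\infty |H(2\pi f)|^2\, S(f)\, \mathrm{d}f},
\]
% with S(f) \propto f^{-2} (brown), f^{-1} (pink), and f (blue).
```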
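The colored-noise conditions can be reproduced with a standard FFT-shaping recipe (a sketch under the usual convention $S(f) \propto f^{\alpha}$ with $\alpha = -2, -1, +1$ for brown, pink, and blue noise; the paper's exact noise-injection procedure may differ):

```python
import numpy as np

def colored_noise(n, exponent, rng=None):
    """Sample n points of noise with PSD S(f) ~ f**exponent by shaping
    white Gaussian noise in the frequency domain.
    exponent = -2 -> brown, -1 -> pink, +1 -> blue."""
    rng = np.random.default_rng(rng)
    spectrum = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                       # avoid division by zero at DC
    spectrum *= f ** (exponent / 2)   # amplitude scales as sqrt(PSD)
    x = np.fft.irfft(spectrum, n)
    return x / x.std()                # normalize to unit variance
```

For example, `pink = colored_noise(10_000, -1)` produces a unit-variance pink-noise trace that can be scaled and added to the encoded inputs.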
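A minimal PyTorch sketch of such a discrete-time RAF step with a surrogate gradient, assuming the complex-state formulation $z = u + iv$ with $\dot{z} = (b + i\omega)z + I(t)$ and an exponential-Euler update (the paper's exact update, threshold, and reset rules are not given here and may differ; `RAFNeuron` and `spike_fn` are illustrative names):

```python
import torch
import torch.nn.functional as F

class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass; fast-sigmoid surrogate
    gradient in the backward pass, enabling training through spikes."""
    @staticmethod
    def forward(ctx, x, slope):
        ctx.save_for_backward(x)
        ctx.slope = slope
        return (x > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        surrogate = ctx.slope / (1.0 + ctx.slope * x.abs()) ** 2
        return grad_out * surrogate, None

spike_fn = SpikeFn.apply

class RAFNeuron(torch.nn.Module):
    """Discrete-time resonate-and-fire readout (illustrative sketch).

    Subthreshold state z = u + iv follows dz/dt = (b + i*omega)*z + I(t).
    The exponential-Euler step z <- exp((b + i*omega)*dt)*z + I_t is
    stable whenever b < 0 (|exp((b + i*omega)*dt)| < 1), sidestepping
    the forward-Euler constraint |1 + (b + i*omega)*dt| <= 1.
    """
    def __init__(self, damping=1.0, omega=10.0, dt=1e-3,
                 threshold=1.0, slope=10.0):
        super().__init__()
        # Raw damping parameter, mapped to b = -softplus(damping) < 0
        # so the discrete update stays stable throughout training.
        self.damping = torch.nn.Parameter(torch.tensor(damping))
        self.omega = torch.nn.Parameter(torch.tensor(omega))  # rad/s
        self.dt, self.threshold, self.slope = dt, threshold, slope

    def forward(self, I):
        """I: real input currents, shape (time, batch); returns spikes."""
        b = -F.softplus(self.damping)
        decay = torch.exp((b + 1j * self.omega) * self.dt)
        z = torch.zeros_like(I[0], dtype=torch.cfloat)
        spikes = []
        for I_t in I:
            z = decay * z + I_t                      # exponential-Euler step
            s = spike_fn(z.imag - self.threshold, self.slope)
            z = z * (1.0 - s)                        # reset fired units
            spikes.append(s)
        return torch.stack(spikes)
```

Keeping the damping strictly negative through the softplus reparameterization is one way to satisfy the stability constraint during gradient descent; how the paper enforces it is not specified in the abstract.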