Article

Reservoir Computation with Networks of Differentiating Neuron Ring Oscillators

1 College of Computer & Information Sciences, University of Massachusetts Amherst, Amherst, MA 01003, USA
2 Department of Mathematics, Virginia Tech, Blacksburg, VA 24061, USA
3 Allen Discovery Center, Tufts University, Medford, MA 02155, USA
* Author to whom correspondence should be addressed.
Analytics 2025, 4(4), 28; https://doi.org/10.3390/analytics4040028
Submission received: 29 July 2025 / Revised: 15 September 2025 / Accepted: 14 October 2025 / Published: 20 October 2025

Abstract

Reservoir computing is an approach to machine learning that leverages the dynamics of a complex system alongside a simple, often linear, machine learning model for a designated task. While many efforts have previously focused their attention on integrating neurons, which produce an output in response to large, sustained inputs, we focus on using differentiating neurons, which produce an output in response to large changes in input. Here, we introduce a small-world graph built from rings of differentiating neurons as a Reservoir Computing substrate. We find the coupling strength and network topology that enable these small-world networks to function as an effective reservoir. The dynamics of differentiating neurons naturally give rise to oscillatory dynamics when arranged in rings, where we study their computational use in the Reservoir Computing setting. We demonstrate the efficacy of these networks in the MNIST digit recognition task, achieving a performance of 90.65%, comparable to existing Reservoir Computing approaches. Beyond accuracy, we conduct a systematic analysis of our reservoir’s internal dynamics using three complementary complexity measures that quantify neuronal activity balance, input dependence, and effective dimensionality. Our analysis reveals that optimal performance emerges when the reservoir operates with intermediate levels of neural entropy and input sensitivity, consistent with the edge-of-chaos hypothesis, where the system balances stability and responsiveness. The findings suggest that differentiating neurons can be a potential alternative to integrating neurons and can provide a sustainable future alternative for power-hungry AI applications.

1. Introduction

The current AI revolution is driven by deep neural networks (DNNs) that are powered by fast processing hardware accelerators. As these DNNs grow in size and complexity, yielding improved performance, so does the computational cost of training and inference [1]. This motivates the need for alternative models that are capable of similar performance while consuming less energy. One culprit for the high energy demand of DNNs is their fundamental building block—the integrating neuron. Integrating neurons respond to inputs based on a decaying history of past inputs. They can be modeled by a resistor–capacitor circuit, where the internal state evolves over time to integrate past inputs, resulting in an output that resembles a smoothed and slightly time-shifted version of the input signal. This behavior arises from the integrating neuron’s tendency to accumulate input over time, producing a large output in response to sustained, gradual changes.
In contrast to integrating neurons, differentiating neurons produce large outputs in response to large changes in input. This characteristic allows them to act as detectors for rapid transitions rather than cumulative changes. Moreover, when these neurons are arranged in a ring configuration, they naturally exhibit oscillatory dynamics characterized by pulses of neuron activity propagating from neuron to neuron. As these pulses travel around the ring, the system effectively encodes a form of dynamic memory in its oscillatory state. This characteristic oscillation is a key property that makes rings of differentiating neurons useful for computational tasks involving temporal data.
Differentiating neurons also offer a compelling alternative to integrating neurons in terms of energy efficiency, as they activate only in response to changes in input rather than requiring a continuous current. While early applications of small, manually designed networks of differentiating neurons focused on tasks such as robotic control [2] and motion tracking [3], expanding their use to larger networks unlocks new possibilities for machine learning, particularly in resource-constrained environments or low-power hardware implementations.
The fundamental distinction in dynamics and power usage raises important questions regarding the computational utility of differentiating neurons within neural network architectures. Given their distinct oscillatory properties, it is essential to systematically evaluate their computational capacity as functional units in network-based systems. The inherent oscillatory dynamics of these neurons suggest particular compatibility with Reservoir Computing paradigms, which naturally accommodate temporal dynamics and nonlinear transformations. Consequently, this study employs Reservoir Computing as the theoretical and computational framework to investigate the processing capabilities of differentiating neuron networks.
Building upon past studies that saw success using abstract coupled oscillator networks as computation substrates [4], in this work, we study the computational capabilities of differentiating ring oscillator networks. The differentiating neural network we propose is organized in two levels, as depicted in Figure 1. In the first level, the neurons are organized in a ring where the current flows in one direction, naturally giving rise to oscillatory dynamics. In the second level, these rings are organized into small-world graphs [5] and connected via weak coupling between individual neurons in each ring. The second-level organization enables the ring oscillators to interact and produce complex dynamical behavior necessary for reservoir computing.
To investigate these computational capabilities, we apply the principles of Reservoir Computing (RC) [6], an unconventional machine learning paradigm particularly suited for systems with intrinsic autonomous dynamics. Since differentiating neurons arranged in small-world graph topologies exhibit natural oscillatory behavior, Reservoir Computing emerges as the most suitable computational framework for evaluating their processing capabilities. We develop a differentiating ring oscillator network architecture and employ the MNIST digit recognition benchmark [7] as a proof of concept to demonstrate the computational ability of these neurons. Our investigation encompasses a comprehensive analysis of key architectural parameters, including the number of ring oscillators, inter–oscillator coupling strengths, and network topological configurations, to determine their respective influences on computational performance. Through systematic hyperparameter optimization, we identify the optimal parameter regimes that enable differentiating neural networks to achieve competitive performance relative to established Reservoir Computing models.
To gain deeper insights into the mechanisms underlying effective computation in these networks, we introduce a theoretical framework for analyzing reservoir quality through three complementary complexity measures: mean neural entropy, input-driven entropy deviation, and neural diversity. These metrics allow us to characterize the internal dynamics of the reservoir independently of readout training, providing principled insight into how different network configurations influence computational capacity. Our analysis reveals that optimal performance emerges when the reservoir operates at intermediate levels of neural activity balance and input sensitivity, supporting the edge-of-chaos hypothesis, where dynamical systems achieve maximal computational expressiveness by balancing stability and responsiveness [8]. This theoretical understanding not only validates our empirical findings but also provides design principles for optimizing differentiating neuron networks in future applications.

2. Related Work

In this section, we begin with an overview of the Reservoir Computing framework and its reliance on dynamical systems for computation, motivated by the Principle of Computational Equivalence [9]. Following this, we detail Reservoir Computation pipelines and highlight key implementations, including differentiating neuron architectures. Finally, we explore the dynamics of differentiating neurons and their feasibility as a reservoir.

2.1. Dynamical Systems as Reservoir Computers

The Principle of Computational Equivalence (PCE), introduced by Stephen Wolfram [9], posits that a wide range of natural systems can achieve computational equivalence to universal computers. This concept broadens the scope of computation to include physical and dynamical systems such as excitable media, programmable matter, and biological substrates [10,11]. Reservoir Computing leverages this principle by utilizing dynamical systems as computational substrates by encoding input signals into high-dimensional, nonlinear representations. These reservoirs, whether physical or digital, offer a computational paradigm distinct from traditional logical operations, in which a model must explicitly learn to encode inputs into higher dimensions, and they have also been demonstrated to be universal computers [12].
Reservoirs vary widely in their physical or simulated realizations. Examples include digital echo state networks [13], physical systems such as programmable quantum matter [14], and hybrid approaches employing high-precision numerical simulations. A notable hybrid example is the skyrmion lattice reservoir simulated via the classical Heisenberg model by Lee and Mochizuki [15]. Given its similarity to our approach, we later compare our implementation with Lee and Mochizuki [15] to contextualize our results.
RC architectures, first introduced by Maass et al. [11] and Jaeger [16], share a common pipeline. An input signal, typically representing time series data, perturbs a reservoir, where the inherent dynamics of the system apply a nonlinear transformation. Then a simple, often linear, machine learning model is trained on the output for downstream tasks. This simplicity in training, relying on the dynamics of a nonlinear system as opposed to backpropagation, is a key distinction between Reservoir Computing and traditional artificial neural networks.
Dynamical systems near the edge of chaos, as discussed by Langton [8] and Dambre et al. [17], are particularly effective reservoirs. Operating at this boundary balances stability and responsiveness, maximizing the expressiveness of the system. For instance, networks of oscillatory circuits can optimize performance by maintaining weak coupling and avoiding over-saturation, as demonstrated in Choi and Kim [4]. In our implementation, we tune the hyperparameters of our differentiating ring oscillator networks to achieve optimal dynamics.

2.2. Differentiating Neuron Architectures

Differentiating neurons are artificial neurons described by differential equations in which the output of the neuron is a function of the time derivative of the voltage across its capacitor. First developed by Hasslacher and Tilden [18], these neurons have been widely applied across various domains, including satellite control [19], robotic control inspired by living organisms [20], and pattern recognition [21].
From the perspective of electronic circuits, differentiating neurons are modeled as a resistor–capacitor circuit, where v describes the voltage across the capacitor, y denotes the output voltage, and u denotes the input voltage. Then the internal state of the neuron v, modulated by the time-constant τ , is updated according to the following differential equation:
$$\tau \frac{dv(t)}{dt} = u(t) - v(t)$$
In contrast to differentiating neurons, traditional neural networks rely on integrating neurons, whose outputs represent a decaying integral of past inputs over time. This means that integrating neurons respond primarily to sustained input magnitudes. Differentiating neurons, on the other hand, compute their output as the time derivative of their internal state, making them particularly sensitive to rapid changes in input rather than steady-state values. As a result, the outputs of integrating neurons capture long-term trends in their inputs, whereas differentiating neurons’ outputs emphasize transient dynamics. In our case, this is described as:
$$y(t) = \phi(\tau \dot{v}) = \phi\big(u(t) - v(t)\big)$$
In our simulations, we use binary output differentiating neurons modeled as a Schmitt trigger logic gate, which enables short-term memory retention through hysteresis [20]. As a result, changes in output for each neuron are parameterized by crossing an upper and lower threshold of the Schmitt trigger, denoted as $v_{thh}$ and $v_{thl}$, respectively. Thus, our output function y is defined as:
$$y_i(t) = \begin{cases} 0, & \tau \dot{v}_i(t) \geq v_{thh} \\ 0, & \tau \dot{v}_i(t) \in [v_{thl}, v_{thh}] \text{ and } y_i(t^-) = 0 \\ 1, & \text{otherwise} \end{cases}$$
where $y_i(t^-)$ denotes the output of neuron i immediately before it crosses one of the thresholds. Since these architectures use an inverter, a value of 0 denotes that a neuron is firing, while a value of 1 denotes dormancy.

2.3. Networks of Differentiating Neuron Oscillators as Reservoir Computers

Hasslacher and Tilden [18] first described the dynamics of differentiating neurons composed into ring oscillators while also highlighting some potential applications. We intend to build on this work, using differentiating ring oscillator networks as reservoirs for machine learning tasks. Figure 2 illustrates the structure and dynamics of the ring oscillator networks we consider in this paper.
An n-ring oscillator consists of n neurons arranged in a ring, where the output of the i-th neuron serves as the input to the ( i + 1 ) -th neuron (modulo n). The system’s behavior is governed by the following differential equations:
$$\tau_i \frac{dv_i}{dt} = y_{i-1}(t) - v_i(t), \qquad i = 1, 2, \ldots, n,$$
where $y_i(t) = \phi(\tau_i \dot{v}_i)$ is the output of neuron i, and $i-1$ is taken modulo n. With a Schmitt trigger as the activation function $\phi$, the neurons can transmit pulses that persist and circulate indefinitely around the ring [18].
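To make these dynamics concrete, the following minimal Python sketch simulates a single ring of differentiating neurons. It discretizes the ring equation above using the exponential-decay update of Algorithm A1 (Appendix A) and a binary Schmitt-trigger output with hysteresis; the threshold values, time step, and initial firing pattern are illustrative assumptions rather than the settings used in our experiments.

```python
import numpy as np

V_THL, V_THH = -0.3, 0.3  # assumed hysteresis thresholds (illustrative values)

def schmitt_trigger(prev_out, drive):
    """Binary output with hysteresis (convention of Algorithm A1):
    0 when the drive falls below the lower threshold, 1 when it exceeds
    the upper threshold, and the previous output otherwise."""
    out = prev_out.copy()
    out[drive <= V_THL] = 0.0
    out[drive >= V_THH] = 1.0
    return out

def simulate_ring(n=8, steps=200, dt=0.1, seed=0):
    """One ring: tau_i dv_i/dt = y_{i-1}(t) - v_i(t), with tau_i = 1."""
    rng = np.random.default_rng(seed)
    v = np.full(n, 0.9)            # capacitor voltages
    y = np.ones(n)                 # all neurons dormant (output 1) ...
    y[rng.integers(n)] = 0.0       # ... except one seeded to fire (output 0)
    alpha = np.exp(-dt)            # exact one-step decay for tau = 1
    outputs = []
    for _ in range(steps):
        u = np.roll(y, 1)                  # input of neuron i is output of neuron i-1 (mod n)
        v = alpha * v + (1.0 - alpha) * u  # relax v toward its current input
        y = schmitt_trigger(y, u - v)      # u - v is proportional to tau * dv/dt
        outputs.append(y.copy())
    return np.array(outputs)

if __name__ == "__main__":
    trace = simulate_ring()
    # fraction of time each neuron spends firing (output 0 = firing)
    print(1.0 - trace.mean(axis=0))
```

Whether a sustained circulating pulse emerges depends on the thresholds, time step, and seeding, which is exactly the parameter sensitivity explored in the rest of the paper.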
To extend this mechanism into a Reservoir Computing architecture, we investigated how to interconnect individual ring oscillators into a larger network. Prior work, such as DelMastro et al. [22], found that grids of ring oscillators saturated too quickly for pattern recognition. To address this, we explored small-world networks in which nodes correspond to rings of random sizes [5]. The transition from a reservoir of oscillators with a grid topology to a small-world topology substantially enhances the computational capabilities of the differentiating neural network, and allows for parameterized control of the network’s connectivity. In a grid topology, information propagation is constrained by the nearest-neighbor coupling, resulting in a finite velocity of information transfer through the system. This configuration limits the mixing of information across distant spatial regions of the reservoir, requiring many time steps before signals can interact across the full domain. Consequently, the system’s memory capacity and ability to learn complex temporal features from the input signals are restricted by this rigid communication structure.
Small-world topologies overcome these limitations by introducing strategic, long-range connections that maintain most of the local coupling structure while adding critical shortcuts across the domain. These shortcuts reduce the effective path length between any two points in the reservoir, enabling rapid information mixing throughout the system without requiring a proportional increase in physical size or number of oscillators [5]. Moreover, small-world topologies have been demonstrated to have numerous advantages in other oscillator networks [23]. Utilizing this architecture preserves the energy conservation constraints while significantly expanding its expressivity, allowing it to learn more complex temporal dependencies and improving classification accuracy on tasks such as digit recognition.

3. Materials and Methods

Our network consists of $n_{\mathrm{rings}}$ oscillatory rings. Each ring contains between three and ten neurons, sampled uniformly. Rings of fewer than three neurons are excluded, since they cannot sustain circulating pulses. Within each ring, neurons interact via intra-ring connections, while inter-ring connectivity is governed by a small-world topology generated by the Watts–Strogatz graph construction [5]. This method introduces a degree of randomness in connectivity, governed by a rewiring probability p. Additionally, the strength of inter-ring couplings is controlled by a parameter ϵ that modulates the influence of one ring on another.
A subset of rings is reserved for input and is externally stimulated by a sequence of inputs with frequency controlled by the connectivity parameter p above.
Thus, the input voltage $u_i^{(j)}$ for neuron $i$ in the $j$-th ring is given by:
$$u_i^{(j)} = y_{(i-1) \bmod n}^{(j)} + \epsilon \sum_{k \in N(i,j)} y_{(i-1) \bmod n}^{(k)} + \sum_{u^{(\mathrm{ext})} \in I(i,j)} u^{(\mathrm{ext})}$$
where
  • $y_{(i-1)\bmod n}^{(j)}$ accounts for the intra-ring input contribution received from the $(i-1)$-th neuron in ring j;
  • $N(i,j)$ denotes the set of external rings connected to neuron i in ring j via the small-world coupling;
  • $I(i,j)$ represents the set of external signal(s) to neuron i in oscillator j. This set may be empty if the neuron is not chosen to receive input.
Note that since every oscillator in our network must contain at least three neurons, we arbitrarily select a first and second neuron in each oscillator to send outputs to and receive inputs from neurons in other oscillators, modulated by a coupling constant, as denoted by $N(i,j)$ above.
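As a concrete illustration of this two-level construction, the sketch below assembles the recurrent weight matrix for a small network: ring sizes are drawn uniformly, intra-ring connections follow the one-directional ring wiring, and inter-ring connections are placed between the designated first and second neurons of rings that are adjacent in a Watts–Strogatz graph, scaled by the coupling constant ϵ. The helper name, the use of networkx, and the small-world neighbor degree k are our own assumptions (the text fixes only the rewiring probability p), so this is a sketch of the idea rather than our exact implementation.

```python
import numpy as np
import networkx as nx

def build_reservoir(n_rings=20, lower=3, upper=10, eps=0.2, p=0.4, k=4, seed=0):
    """Construct the coupled ring-oscillator reservoir's weight matrix.

    Returns W (n_neurons x n_neurons), where W[i, j] is the weight from
    neuron j to neuron i, together with the ring sizes and the index of
    each ring's first neuron. The neighbor degree k is an assumed value."""
    rng = np.random.default_rng(seed)
    sizes = rng.integers(lower, upper + 1, size=n_rings)   # ring sizes in [lower, upper]
    start = np.concatenate(([0], np.cumsum(sizes)[:-1]))   # index of first neuron per ring
    n = int(sizes.sum())
    W = np.zeros((n, n))

    # Intra-ring wiring: neuron i receives the output of neuron i-1 (mod ring size).
    for s, sz in zip(start, sizes):
        for i in range(sz):
            W[s + i, s + (i - 1) % sz] = 1.0

    # Inter-ring wiring from a Watts-Strogatz small-world graph over rings:
    # the "first" neuron of one ring listens to the "second" neuron of the
    # adjacent ring (and vice versa), scaled by the coupling constant eps.
    G = nx.watts_strogatz_graph(n_rings, k, p, seed=seed)
    for a, b in G.edges():
        W[start[b], start[a] + 1] = eps
        W[start[a], start[b] + 1] = eps

    return W, sizes, start

if __name__ == "__main__":
    W, sizes, start = build_reservoir()
    print("neurons:", W.shape[0], "| first ring sizes:", sizes[:5])
```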
Our networks exhibit organization on two levels. On the more granular level, oscillators are connected through the coupling mechanisms described above. Figure 3 shows how two rings are coupled to each other, where one of the rings has four differentiating neurons and the other has six differentiating neurons. On a broader level, we determine which oscillators are coupled to one another by arranging our network in a small-world topology, generated by the Watts–Strogatz method [5]. Figure 4 depicts a network that exemplifies this overall structure.

3.1. Initialization and Simulation

The network is initialized by assigning random states to neurons, with a small number randomly selected to fire at the start. This then begins a warm-up phase where the initial dynamics of each ring are simulated independently of one another for a random duration of up to 20 timesteps. This warm-up phase is conducted to stabilize oscillations and ensure coherent dynamics before coupling.
Following the warm-up phase, the state of each neuron is updated at discrete timesteps, incorporating contributions from intra-ring, inter-ring, and external inputs. We chose the time constant τ for all simulations to be equal to one since we were primarily interested in exploring the role of network topologies and coupling strengths in our networks.
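For completeness, the discrete-time update used in these simulations (cf. Algorithm A1 in Appendix A) follows from solving the neuron equation exactly over one step of length dt, assuming the input is held constant during that step; with τ = 1 this yields
$$v(t + dt) = e^{-dt}\, v(t) + \left(1 - e^{-dt}\right) u(t) = \alpha\, v(t) + (1 - \alpha)\, u(t), \qquad \alpha = e^{-dt},$$
and the quantity passed to the Schmitt trigger at each step is the residual $u(t) - v(t + dt)$, which plays the role of $\tau \dot{v}$.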
The role of coupling in shaping network behavior is analyzed in Section 4, while implementation details of our simulation, time-series preprocessing, and network construction are provided in Appendix A.

3.2. Data

We evaluated our networks on the MNIST handwritten digit recognition dataset, consisting of ten classes of grayscale images (digits 0–9) [7]. The dataset contains 60,000 training images and 10,000 test images, each of size 28 × 28 pixels. We used the torchvision.datasets.MNIST implementation from PyTorch (stable release, ver. 2.9.0), as documented at https://docs.pytorch.org/vision/stable/generated/torchvision.datasets.MNIST.html (accessed on 17 October 2025).
Each image is resized from 28 × 28 to 32 × 32 pixels using bilinear interpolation via the ndimage zoom function for compatibility with the reservoir input. Pixel values are normalized to the range [ 0 , 1 ] and augmented with Gaussian noise to improve robustness. The image is then sequentially traversed along a Hilbert curve. This traversal yields a one-dimensional sequence of pixel intensities while maintaining spatial relationships between neighboring pixels, which motivates our use of the Hilbert curve as opposed to alternatives such as a raster scan (Figure 5).
To further enhance temporal structure, the Hilbert-ordered sequence undergoes a sliding-window embedding. A fixed-size window, determined by the number of input neurons ($n_{\mathrm{in}}$), is applied to extract overlapping subsequences from the traversal. These subsequences form the initial temporal inputs to the reservoir. To match the desired length of the time-series data ($n_{\mathrm{ts}}$), each subsequence is repeated proportionally, with any remainder filled by duplicating entries from the start of the sequence. This process ensures that the input to the reservoir is both length-consistent and spatially meaningful.
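A simplified version of this preprocessing pipeline is sketched below in Python. It resizes an image with scipy.ndimage.zoom, orders the pixels along a Hilbert curve, and extracts overlapping windows of length n_in; the Hilbert-indexing helper, the noise level, and the way windows are repeated to reach n_ts steps are simplified stand-ins for our actual implementation.

```python
import numpy as np
from scipy.ndimage import zoom

def hilbert_d2xy(order, d):
    """Map distance d along a Hilbert curve covering a 2**order x 2**order grid
    to (x, y) coordinates (standard iterative construction)."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                       # rotate the quadrant when needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def preprocess(image_28x28, n_in=16, n_ts=256, noise_std=0.05, seed=0):
    """Resize to 32x32, add Gaussian noise, traverse along a Hilbert curve,
    and slide a window of n_in pixels to build an (n_ts, n_in) input sequence.
    The noise level and repetition scheme are illustrative simplifications."""
    rng = np.random.default_rng(seed)
    img = zoom(image_28x28.astype(float), 32 / 28, order=1)   # bilinear resize to 32x32
    img = img / max(img.max(), 1e-9)                          # normalize to [0, 1]
    img = np.clip(img + rng.normal(0, noise_std, img.shape), 0, 1)
    order = 5                                                  # 2**5 = 32
    path = [hilbert_d2xy(order, d) for d in range(32 * 32)]
    seq = np.array([img[y, x] for x, y in path])               # 1-D Hilbert-ordered intensities
    windows = np.array([seq[i:i + n_in] for i in range(len(seq) - n_in + 1)])
    reps = int(np.ceil(n_ts / len(windows)))
    return np.tile(windows, (reps, 1))[:n_ts]                  # repeat/truncate to n_ts steps

if __name__ == "__main__":
    ts = preprocess(np.random.rand(28, 28))
    print(ts.shape)  # (256, 16)
```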
From the original 60,000 training images, we created a split of 40,000 for training and 20,000 for validation. The 10,000 test images were held out and used only for final evaluation.

3.3. Training and Prediction

For classification, we employed a single-layer perceptron readout trained on the reservoir states. The readout consisted of a fully connected linear layer mapping from the reservoir state dimension (equal to the number of neurons in the network) to an output dimension of 10 (corresponding to the digit classes). A bias term was included. The output logits were passed through a softmax activation, and training was performed using the cross-entropy loss function.
Reservoir states were normalized to the range $[0, 1]$ prior to training. We did not apply temporal pooling; instead, readout training was performed directly on the reservoir states at each time step. Model parameters were optimized with AdamW (learning rate $10^{-3}$, no weight decay) [24], with a MultiStepLR scheduler ($\gamma = 0.1$) for learning rate decay. Training was conducted with a batch size of 32 for a maximum of 20 epochs, and each experiment was repeated across five random seeds. All experiments were run on an Nvidia Tesla V100 GPU with dual Intel(R) Xeon(R) Gold 5118 CPUs (24 cores).
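The readout described above amounts to a linear classifier trained on the recorded reservoir states. A minimal PyTorch sketch is given below, assuming the states have already been collected into a tensor X of shape (samples, state dimension) with integer labels y; the milestone epochs for the learning-rate schedule are an assumed choice, since only the decay factor γ = 0.1 is specified above.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def train_readout(X, y, n_classes=10, epochs=20, batch_size=32, milestones=(10, 15)):
    """Single-layer perceptron readout on (already normalized) reservoir states.
    X: float tensor (n_samples, state_dim) in [0, 1]; y: long tensor of labels."""
    readout = nn.Linear(X.shape[1], n_classes, bias=True)
    optimizer = torch.optim.AdamW(readout.parameters(), lr=1e-3, weight_decay=0.0)
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=list(milestones), gamma=0.1)
    criterion = nn.CrossEntropyLoss()          # applies log-softmax to the logits internally
    loader = DataLoader(TensorDataset(X, y), batch_size=batch_size, shuffle=True)
    for _ in range(epochs):
        for xb, yb in loader:
            optimizer.zero_grad()
            loss = criterion(readout(xb), yb)
            loss.backward()
            optimizer.step()
        scheduler.step()
    return readout

if __name__ == "__main__":
    X = torch.rand(1000, 300)                  # placeholder reservoir states
    y = torch.randint(0, 10, (1000,))
    model = train_readout(X, y, epochs=2)
    print((model(X).argmax(dim=1) == y).float().mean())
```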

3.4. Metrics for Measuring Reservoir Quality

To characterize the internal dynamics of the reservoir independently of the readout, we introduce three complementary complexity measures. These metrics quantify the balance of neuronal activity, the degree of input dependence, and the effective dimensionality of the network dynamics. Given a network with n neurons, let $y_i^{(j)}(t_k) \in \{0,1\}$ denote the output of the i-th neuron at time $t_k$ in response to the j-th input sequence. Assume the state is recorded at K timesteps $t_1, \ldots, t_K$ for J different input sequences.
The first measure is the mean neural entropy $\mathcal{H}$, defined as
$$\mathcal{H} = \frac{1}{n}\sum_{i=1}^{n} H_i, \qquad H_i = -p_i \log p_i - (1-p_i)\log(1-p_i),$$
where $p_i = \frac{1}{JK}\sum_{j=1}^{J}\sum_{k=1}^{K} y_i^{(j)}(t_k)$ is the mean firing probability of neuron i. This metric captures the overall balance of activity in the network. Low values indicate frozen or trivial dynamics, while values near 1 reflect highly disordered, noise-like behavior. Intermediate values correspond to reservoirs in which neurons exhibit a balanced mixture of activity and dormancy. Observing this balance in a network might indicate that a reservoir is behaving in a way that is optimal for computation [8].
To quantify the sensitivity of the reservoir to external inputs, we compute the mean input-driven entropy deviation $\delta\mathcal{H}$, defined as
$$\delta\mathcal{H} = \frac{1}{n}\sum_{i=1}^{n} \mathbb{V}_j\!\left[H_i^{(j)}\right],$$
where $H_i^{(j)}$ is the entropy of neuron i in response to input sequence j:
$$H_i^{(j)} = -p_i^{(j)} \log p_i^{(j)} - \left(1 - p_i^{(j)}\right)\log\left(1 - p_i^{(j)}\right), \qquad p_i^{(j)} = \frac{1}{K}\sum_{k=1}^{K} y_i^{(j)}(t_k),$$
and $\mathbb{V}_j[\cdot]$ denotes the variance taken over input sequences.
This statistic measures the extent to which the input sequence affects neuronal entropy. Small values indicate dynamics largely independent of input, while large values reflect strong input modulation. Intermediate values represent reservoirs that balance autonomous change with sensitivity to external signals.
Finally, we measure the effective dimensionality of our reservoir’s dynamics using the neural diversity score $\mathcal{D}$. Let
$$C_{i,i'} = \frac{1}{JK}\sum_{j=1}^{J}\sum_{k=1}^{K} \left( y_i^{(j)}(t_k) - p_i \right)\left( y_{i'}^{(j)}(t_k) - p_{i'} \right)$$
denote the covariance between neurons i and i′, and let $\{\sigma_1^2, \ldots, \sigma_n^2\}$ be the singular values of C. We define
$$\log_{10}(\mathcal{D}) = \frac{1}{n-1}\sum_{i=2}^{n} \log_{10}\frac{\sigma_i^2}{\sigma_1^2}.$$
This measure decreases when neural correlations increase, and values near machine precision signal a collapse in effective dimensionality, which can cause instability in readout training [17]. Larger values correspond to more decorrelated and diverse dynamics, which support richer linear readout mappings.
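All three measures can be computed directly from the recorded binary activity. The sketch below follows the definitions above, with Y an array of shape (J, K, n) holding the outputs $y_i^{(j)}(t_k)$; the small constant added inside the logarithms to avoid log(0) is an implementation detail of the sketch, not part of the definitions.

```python
import numpy as np

def binary_entropy(p, eps=1e-12):
    """Elementwise binary entropy H(p) = -p log p - (1-p) log(1-p)."""
    p = np.clip(p, eps, 1 - eps)
    return -p * np.log(p) - (1 - p) * np.log(1 - p)

def reservoir_quality(Y):
    """Y: binary activity of shape (J, K, n) = (inputs, timesteps, neurons).
    Returns mean neural entropy, input-driven entropy deviation, and
    log10 of the neural diversity score."""
    J, K, n = Y.shape

    # Mean neural entropy: entropy of each neuron's overall firing probability.
    p_i = Y.mean(axis=(0, 1))                  # shape (n,)
    mean_entropy = binary_entropy(p_i).mean()

    # Input-driven entropy deviation: variance over inputs of per-input entropies.
    p_ij = Y.mean(axis=1)                      # shape (J, n)
    H_ij = binary_entropy(p_ij)                # entropy per (input, neuron)
    delta_entropy = H_ij.var(axis=0).mean()

    # Neural diversity: spectrum of the neuron-neuron covariance matrix.
    centered = (Y - p_i).reshape(J * K, n)     # subtract each neuron's mean rate
    C = centered.T @ centered / (J * K)        # covariance between neurons
    sv = np.linalg.svd(C, compute_uv=False)    # singular values, descending
    log10_diversity = np.mean(np.log10(sv[1:] / sv[0]))

    return mean_entropy, delta_entropy, log10_diversity

if __name__ == "__main__":
    Y = (np.random.rand(50, 100, 30) < 0.4).astype(float)
    print(reservoir_quality(Y))
```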

4. Results

The performance of the reservoir was analyzed as a function of network size and hyperparameter configurations: the coupling constant ( ε ) and the connectivity constant (p). These analyses provide insights into the trade-offs between model complexity, generalization, and overfitting.

4.1. Hyperparameter Sensitivity

To assess the robustness of our reservoir architecture to network-level design choices, we conducted a grid search over the coupling strength ε and connectivity parameter p, each varied from 0.1 to 0.9 in increments of 0.1. Each configuration was evaluated across five randomly initialized trials using networks of 300 ring oscillators.
Training accuracy exhibited considerable variability across the hyperparameter grid, with no single region emerging as consistently optimal. This variability suggests that convergence during training may be influenced by factors beyond ε and p alone, such as initial conditions or stochastic noise in the readout training pipeline. These findings underscore the importance of robustness to hyperparameter variation in practical Reservoir Computing applications.
In contrast, test accuracy showed more structured behavior. Performance was generally highest when the coupling strength ε was in the lower to intermediate range (e.g., 0.2–0.4), and remained relatively stable across a broad range of p. This suggests that generalization performance is more sensitive to coupling dynamics than to global connectivity in the small-world regime. Notably, the best-performing configuration achieved a test accuracy of 90.65% for ε = 0.2 and p = 0.4 , with a corresponding validation accuracy of 88.47%.
These results support the hypothesis that optimal performance emerges near the edge of chaos—a dynamical regime in which the reservoir balances high sensitivity to input perturbations with sufficient stability to avoid signal degradation [8,17]. Excessively high coupling or dense connectivity tends to push the system toward oversaturation, limiting its ability to generalize to unseen data. By contrast, operating in an intermediate regime allows the system to exhibit rich but tractable dynamics, thereby enhancing its expressiveness.
Together, these findings affirm the value of small-world reservoir topologies with well-tuned coupling parameters, as they provide a flexible substrate for modeling temporally extended input patterns while remaining robust to network variation.

4.2. Impact of Network Size

Figure 6 illustrates the relationship between network size and performance, with all networks using identical hyperparameters (ε = p = 0.5). These values were chosen as an intermediate setting to ensure both hyperparameters had equal influence across varying ring sizes. Interestingly, test accuracy showed diminishing returns beyond approximately 300 ring oscillators, after which it did not increase significantly. At 300 ring oscillators, the mean test accuracy was 87.9%, whereas the highest overall test accuracy occurred at a network size of 450 with a mean test set performance of 88.5%. This stagnation suggests potential saturation as network size grows beyond this threshold.
The error bars in Figure 6, which represent the standard deviation across all trials, show the variability in performance. The increase in variability for larger network sizes indicates that the test performance of larger networks may be more sensitive to initialization or specific data characteristics.

4.3. Analysis of Reservoir Quality

To complement the accuracy-based evaluations presented above, we conducted a systematic analysis of reservoir quality by computing a set of complexity measures that directly quantify the richness, input dependence, and dimensionality of the reservoir dynamics. These metrics, defined in Section 3, are computed from the binary activity traces of all neurons in response to MNIST sequences. Importantly, they allow us to characterize the internal dynamics of the reservoir independently of readout training, providing a more principled view of the conditions under which a given parameterization ( ε , p ) is likely to yield useful computational substrates.
The three metrics considered are: the mean neural entropy H , the input-driven entropy deviation δ H , and the neural diversity D . Together, these statistics provide a multidimensional perspective on reservoir quality, and their dependence on ( ε , p ) is shown in Figure 7 alongside test accuracy. Below, we analyze each measure in turn and discuss their relevance to the edge-of-chaos hypothesis [8].
We observe that regions of high test accuracy coincide with intermediate values of mean neural entropy H , typically between 0.5 and 0.6. This balance indicates that neurons are neither frozen in a constant state nor dominated by noise. Instead, they fluctuate in a way that creates an information-rich reservoir. Here, balanced entropy corresponding with areas of high test accuracy further reinforces the edge-of-chaos hypothesis [8], as the neural entropy metric would indicate that our reservoir performs best when its dynamics represent a happy medium between order and disorder.
The input-driven entropy deviation δ H further refines this picture by quantifying the degree to which reservoir variability depends on external input. Here, we again find that the regions of high test accuracy correspond to those with intermediate input-driven entropy deviation. This suggests that the reservoir is most effective when its internal fluctuations (entropy) are not only balanced, but also reliably modulated by the input. Too little input dependence leads to poor separability of signals, while excessive sensitivity risks instability. The intermediate δ H regime corresponds to reservoirs that amplify meaningful differences without overwhelming stability, further reinforcing the edge-of-chaos interpretation.
The neural diversity score D remains above critical thresholds across all hyperparameters, indicating stable readout training throughout. Nonetheless, regions with elevated D overlap with those of highest accuracy, suggesting that effective computation requires not only balanced entropy and input sensitivity, but also sufficiently diverse internal representations. Greater diversity reduces redundancy and expands the subspace of operators that can be reliably learned by the linear readout [17].
The neural diversity score D remains well above values close to machine precision across the entire hyperparameter grid. As highlighted by Dambre et al. [17], reservoirs with diversity approaching numerical precision can suffer from ill-conditioned readout training and poor effective dimensionality. Our results indicate that this risk is not present in the oscillator network reservoirs studied here, since D consistently exceeds this threshold for all hyperparameters. In this sense, the reservoirs are generally suitable for Reservoir Computation with respect to diversity. However, because performance varies substantially across coupling strengths and connectivity probabilities despite consistently safe values of D , this measure alone does not explain accuracy differences. Instead, metrics such as mean neural entropy and input-driven entropy deviation provide more distinguishable insight into how hyperparameters influence reservoir quality.
Taken together, these results suggest that reservoir quality is governed by a balance of dynamical features. Neural diversity D consistently remains at values well above machine precision, ensuring that the effective dimensionality of the reservoir is sufficient for stable readout training. By contrast, mean neural entropy H and input-driven entropy deviation δ H vary substantially across hyperparameters and more directly track differences in test accuracy. High performance emerges when the reservoir achieves both balanced internal activity and sensitivity to input, consistent with the edge-of-chaos hypothesis.

5. Discussion and Future Works

Our work demonstrates that differentiating neuron oscillators can serve as effective reservoir computers. By leveraging small-world topologies and weak coupling strengths, we achieved competitive performance, reaching up to 90.65% accuracy on MNIST digit recognition. This performance places our system on par with other in-material (in materio) reservoirs, such as the PZT cubes studied by Rietman et al. [14] (80%), diffusive memristors implemented by Midya et al. [25] (83%), and magnetic skyrmion reservoirs described by Lee and Mochizuki [15] (90%). Schaetti et al. [13] achieved a higher performance of 95% when using digital Echo State networks of comparable size.
The analysis of our reservoir revealed key insights into the dynamics of coupled oscillators and their influence on Reservoir Computing. The ideal network configurations found from our experiments, along with the insights gained from analyzing the internal dynamics of our reservoirs, align with previous work by Acebrón et al. [26], which found performance peaks near the edge of chaos, where oscillatory behavior balances between order and disorder. Our systematic analysis using complexity measures provides empirical validation of this theoretical framework, demonstrating that optimal computation emerges when neural entropy and input sensitivity operate at intermediate levels. The observed decline in accuracy at higher coupling thresholds suggests that excessive homogeneity may diminish diversity, while insufficient coupling fails to create the rich dynamics necessary for effective information processing.

The introduction of three complementary complexity measures (mean neural entropy, input-driven entropy deviation, and neural diversity) represents a significant methodological contribution to Reservoir Computing analysis. These metrics provide a principled framework for understanding reservoir quality independently of task performance, offering insights into the fundamental mechanisms that govern computational effectiveness. Our findings demonstrate that high-performing reservoirs consistently exhibit balanced neural activity, appropriate input sensitivity, and sufficient diversity to support stable readout training. This analytical framework establishes design principles that extend beyond our specific implementation, providing guidance for optimizing reservoir architectures across different substrates and applications.

Another perspective on the coupling–performance relationship is based on the spatial extent enabled by weak couplings. Recent studies [27,28] have shown that spatial organization enables structured memory storage in the propagation of neural activity waves. These propagating waves enable the storage of recent history [29] and encode temporal information [30]. We suggest that understanding the relationship between this form of spatial computation and the efficacy of information processing is necessary to better design Reservoir Computing systems. This relationship remains underexplored and warrants further investigation.
Additionally, our investigation into network size revealed a plateau in test performance beyond 300 oscillators. This may indicate that additional network complexity does not translate to meaningful improvements in generalization. Instead, it highlights the importance of architectural efficiency and the risk of diminishing returns as network size increases. The variability observed across different hyperparameter settings further underscores the importance of tuning network dynamics to avoid over-saturation and ensure optimal performance.

Future Works

Building on these findings, we identify several avenues for future research:
  • Application to New Tasks: Extending this reservoir architecture to time-series data, such as spoken digits [31], EMNIST [32], and predictions of physical systems like Lorenz attractors, may reveal its broader utility in machine learning tasks.
  • Network Topology: Conducting experiments and similar analyses of reservoir quality with different topologies may deepen our understanding of these kinds of oscillator networks under different configurations, potentially uncovering structures that boost performance [33]. Applying the metrics that quantify information diversity to experiments with alternative topologies (e.g., Erdős–Rényi graphs) could refine our understanding.
  • Energy-Efficient Hardware Implementations: Translating this architecture into physical systems opens pathways for energy-efficient computation inspired by natural dynamical systems, such as central pattern generators. Investigating the energy efficiency of these networks may reveal their utility in resource-constrained computing environments.
  • Theoretical Framework Extension: The complexity measures introduced in this work could be applied to other Reservoir Computing systems to establish universal principles for reservoir quality assessment. This could lead to standardized metrics for comparing diverse reservoir implementations.
  • Scaling Behavior: Future work should investigate why performance plateaus beyond a network size of 300 oscillators, and whether modifications in coupling structure or readout design can mitigate this limitation.

6. Conclusions

This study demonstrates that networks of differentiating neuron oscillators can serve as effective reservoir computers when structured with small-world connectivity and weak coupling strengths. Our experiments revealed that test accuracy stagnates beyond a network size of 300 oscillators, suggesting that larger networks do not necessarily yield better generalization. Furthermore, we observed that optimal performance and internal dynamics emerge at intermediate coupling and connectivity values, aligning with the edge-of-chaos hypothesis [8], where reservoirs balance stability and expressiveness.
A key contribution of this work is the introduction of a systematic framework for analyzing reservoir quality through three complementary complexity measures: mean neural entropy, input-driven entropy deviation, and neural diversity. This analytical approach provides principled insights into the mechanisms underlying effective computation in dynamical systems, demonstrating that high performance emerges when neural activity is balanced, input sensitivity is appropriately tuned, and internal representations maintain sufficient diversity. These findings establish design principles that extend beyond differentiating neuron networks, offering guidance for optimizing reservoir architectures across diverse implementations.
These findings contribute to a growing understanding of how network dynamics influence Reservoir Computing. The convergence of empirical performance results with theoretical predictions from complexity analysis strengthens the foundation for designing and optimizing unconventional computing systems. Future research should explore how these principles extend to tasks beyond static image classification, such as time-series forecasting and control applications. Additionally, further investigation into the role of network homogeneity, synchronization, and alternative topologies could provide deeper insights into optimizing reservoir architectures. The potential for hardware implementations also presents an exciting avenue for leveraging the efficiency of physical dynamical systems for computation.
By demonstrating competitive performance with other in-material reservoirs while providing a theoretical framework for understanding reservoir quality, our results reinforce the viability of oscillatory networks for neuromorphic computing. As research in this area progresses, refining the interplay between network structure, coupling dynamics, and task-specific requirements will be key to unlocking the full potential of reservoir computing.

Author Contributions

Conceptualization, A.Y., P.D., H.H. and E.R.; Methodology, A.Y., P.D., H.H. and E.R.; Software, A.Y. and A.K.; Validation, A.Y., P.D., A.K., H.H. and E.R.; Formal Analysis, A.Y. and A.K.; Investigation, A.Y. and A.K.; Resources, E.R.; Data Curation, A.Y. and A.K.; Writing—Original Draft Preparation, A.Y.; Writing—Review and Editing, A.Y., P.D., A.K., H.S., H.H. and E.R.; Visualization, A.Y. and P.D.; Supervision, H.H. and E.R.; Project Administration, E.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

We appreciate the assistance of Mohsin Shah and his early work exploring the dynamics of the differentiating ring oscillator networks used in this work.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Reservoir Simulation

Algorithm A1 Reservoir Computing Simulation

Parameters:
  n_rings: Number of ring oscillators
  lower, upper: Range of ring sizes
  ϵ: Coupling constant
  p: Adjacency constant
  n_in: Number of inputs
  dt: Length of each time step
  T: Simulation time
  v_thh: Upper voltage threshold
  v_thl: Lower voltage threshold
  v_in: Input as time-series data

Outputs:
  v_out: History of neuron outputs, where v_out[i, j] is the output at time step i of neuron j
  v_cap: History of the neuron capacitor values, where v_cap[i, j] is the voltage at the capacitor at time step i of neuron j

Initialize Network:
  Generate ring sizes R_i uniformly over [lower, upper].
  n_neurons ← Σ_i R_i
  idx ← indices of the first neuron in each ring.
  Create W as an adjacency matrix for intra- and inter-ring connections using the Watts–Strogatz model, scaled by ϵ.
  Create W_in for input connections using Bernoulli trials with parameter p, scaled by ϵ.
  Initialize v_cap[0] ← 0.9 and v_out[0] ← 1.
  Randomly select initial firing nodes in each ring.
  Simulate each ring independently for a random duration.

function SchmittTrigger(o, v)
  if v ≤ v_thl then return 0
  else if v ≥ v_thh then return 1
  else return o
  end if
end function

Simulate Network:
  for t = 1 to T/dt do
    α ← e^(−dt)
    Update capacitor voltages:
      u ← W · v_out[t] + W_in · v_in[t]
      v_cap[t+1] ← α · v_cap[t] + (1 − α) · u
    Update outputs:
      v_out[t+1] ← SchmittTrigger(v_out[t], u − v_cap[t+1])
  end for

Main Function:
  Initialize the network.
  Simulate the network with input v_in.
  Return v_out, v_cap.
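For readers who prefer code, a direct Python translation of the simulation loop in Algorithm A1 is sketched below. It assumes the weight matrices W and W_in, the input time series, and the initial state have already been prepared as described in Section 3; the threshold values are placeholders, and the per-ring warm-up phase is omitted for brevity.

```python
import numpy as np

def simulate_network(W, W_in, v_in, dt=0.1, v_thl=-0.3, v_thh=0.3):
    """Vectorized translation of the simulation loop in Algorithm A1.
    W: (n, n) recurrent weights; W_in: (n, n_in) input weights;
    v_in: (T_steps, n_in) input time series. Thresholds are placeholders."""
    n = W.shape[0]
    steps = v_in.shape[0]
    v_cap = np.full(n, 0.9)          # initial capacitor voltages (as in Algorithm A1)
    v_out = np.ones(n)               # initial outputs (all dormant)
    alpha = np.exp(-dt)
    outputs = np.zeros((steps, n))
    for t in range(steps):
        u = W @ v_out + W_in @ v_in[t]              # total drive to each neuron
        v_cap = alpha * v_cap + (1 - alpha) * u     # capacitor update
        drive = u - v_cap
        new_out = v_out.copy()                      # Schmitt trigger with hysteresis
        new_out[drive <= v_thl] = 0.0
        new_out[drive >= v_thh] = 1.0
        v_out = new_out
        outputs[t] = v_out
    return outputs
```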

References

  1. Stiefel, K.M.; Coggan, J.S. The energy challenges of artificial superintelligence. Front. Artif. Intell. 2023, 6, 1240653. [Google Scholar] [CrossRef] [PubMed]
  2. Still, S.; Tilden, M.W. Controller for a Four-Legged Walking Machine. In Neuromorphic Systems; World Scientific: Hackensack, NJ, USA, 1998; pp. 138–148. Available online: https://www.worldscientific.com/doi/pdf/10.1142/9789812816535_0012 (accessed on 8 June 2025). [CrossRef]
  3. Frigo, J.R.; Tilden, M.W. Biologically inspired neural network controller for an infrared tracking system. In Proceedings of the Mobile Robots XIII and Intelligent Transportation Systems, Boston, MA, USA, 1 November 1998; Choset, H., Gage, D.W., Kachroo, P., Kourjanski, M., de Vries, M.J., Eds.; SPIE Proceedings. Volume 3525, pp. 393–403. [Google Scholar] [CrossRef]
  4. Choi, J.; Kim, P. Critical Neuromorphic Computing based on Explosive Synchronization. Chaos Interdiscip. J. Nonlinear Sci. 2019, 29, 043110. [Google Scholar] [CrossRef] [PubMed]
  5. Watts, D.J.; Strogatz, S.H. Collective dynamics of ‘small-world’ networks. Nature 1998, 393, 440–442. [Google Scholar] [CrossRef]
  6. Jaeger, H. Towards a generalized theory comprising digital, neuromorphic and unconventional computing. Neuromorphic Comput. Eng. 2021, 1, 012002. [Google Scholar] [CrossRef]
  7. Deng, L. The MNIST Database of Handwritten Digit Images for Machine Learning Research [Best of the Web]. IEEE Signal Process. Mag. 2012, 29, 141–142. [Google Scholar] [CrossRef]
  8. Langton, C.G. Computation at the edge of chaos: Phase transitions and emergent computation. Phys. D Nonlinear Phenom. 1990, 42, 12–37. [Google Scholar] [CrossRef]
  9. Wolfram, S. A New Kind of Science; Wolfram Media: Champaign, IL, USA, 2002. [Google Scholar]
  10. Stepney, S. Programming Unconventional Computers: Dynamics, Development, Self-Reference. Entropy 2012, 14, 1939–1952. [Google Scholar] [CrossRef]
  11. Maass, W.; Natschläger, T.; Markram, H. Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Comput. 2002, 14, 2531–2560. [Google Scholar] [CrossRef]
  12. Grigoryeva, L.; Ortega, J.-P. Echo state networks are universal. arXiv 2018, arXiv:1806.00797. [Google Scholar] [CrossRef]
  13. Schaetti, N.; Salomon, M.; Couturier, R. Echo State Networks-Based Reservoir Computing for MNIST Handwritten Digits Recognition. In Proceedings of the 2016 IEEE Intl Conference on Computational Science and Engineering (CSE) and IEEE Intl Conference on Embedded and Ubiquitous Computing (EUC) and 15th Intl Symposium on Distributed Computing and Applications for Business Engineering (DCABES), Paris, France, 24–26 August 2016; pp. 484–491. [Google Scholar] [CrossRef]
  14. Rietman, E.; Schuum, L.; Salik, A.; Askenazi, M.; Siegelmann, H. Machine Learning with Quantum Matter: An Example Using Lead Zirconate Titanate. Quantum Rep. 2022, 4, 418–433. [Google Scholar] [CrossRef]
  15. Lee, M.K.; Mochizuki, M. Handwritten digit recognition by spin waves in a Skyrmion reservoir. Sci. Rep. 2023, 13, 19423. [Google Scholar] [CrossRef]
  16. Jaeger, H. The “echo state” approach to analysing and training recurrent neural networks-with an erratum note. Bonn Ger. Natl. Res. Cent. Inf. Technol. GMD Tech. Rep. 2001, 148, 13. [Google Scholar]
  17. Dambre, J.; Verstraeten, D.; Schrauwen, B.; Massar, S. Information Processing Capacity of Dynamical Systems. Sci. Rep. 2012, 2, 514. [Google Scholar] [CrossRef] [PubMed]
  18. Hasslacher, B.; Tilden, M.W. Theoretical foundations for nervous networks. AIP Conf. Proc. 1997, 411, 179–184. [Google Scholar] [CrossRef]
  19. Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef] [PubMed]
  20. Hasslacher, B.; Tilden, M.W. Living machines. Robot. Auton. Syst. 1995, 15, 143–169. [Google Scholar] [CrossRef]
  21. Zhang, T.; Haider, M.R.; Alexander, I.D.; Massoud, Y. A Coupled Schmitt Trigger Oscillator Neural Network for Pattern Recognition Applications. In Proceedings of the 2018 IEEE 61st International Midwest Symposium on Circuits and Systems (MWSCAS), Windsor, ON, Canada, 5–8 August 2018; pp. 238–241. [Google Scholar] [CrossRef]
  22. DelMastro, P.; Karuvally, A.; Hazan, H.; Siegelmann, H.T.; Rietman, E.A. Transient Dynamics in Lattices of Differentiating Ring Oscillators. arXiv 2025, arXiv:2506.07253. [Google Scholar]
  23. Bassett, D.S.; Bullmore, E.T. Small-world brain networks revisited. Neuroscientist 2017, 23, 499–516. [Google Scholar] [CrossRef]
  24. Loshchilov, I.; Hutter, F. Decoupled Weight Decay Regularization. arXiv 2019, arXiv:1711.05101. [Google Scholar] [CrossRef]
  25. Midya, R.; Wang, Z.; Asapu, S.; Zhang, X.; Rao, M.; Song, W.; Zhuo, Y.; Upadhyay, N.; Xia, Q.; Yang, J.J. Reservoir Computing Using Diffusive Memristors. Adv. Intell. Syst. 2019, 1, 1900084. [Google Scholar] [CrossRef]
  26. Acebrón, J.A.; Bonilla, L.L.; Pérez Vicente, C.J.; Ritort, F.; Spigler, R. The Kuramoto model: A simple paradigm for synchronization phenomena. Rev. Mod. Phys. 2005, 77, 137–185. [Google Scholar] [CrossRef]
  27. Budzinski, R.C.; Nguyen, T.T.; Benigno, G.B.; Đoàn, J.; Mináč, J.; Sejnowski, T.J.; Muller, L.E. Analytical prediction of specific spatiotemporal patterns in nonlinear oscillator networks with distance-dependent time delays. Phys. Rev. Res. 2022, 5, 013159. [Google Scholar] [CrossRef]
  28. Benigno, G.B.; Budzinski, R.C.; Davis, Z.W.; Reynolds, J.H.; Muller, L.E. Waves traveling over a map of visual space can ignite short-term predictions of sensory input. Nat. Commun. 2023, 14, 3409. [Google Scholar] [CrossRef] [PubMed]
  29. Keller, T.A.; Muller, L.E.; Sejnowski, T.J.; Welling, M. Traveling Waves Encode the Recent Past and Enhance Sequence Learning. arXiv 2023, arXiv:2309.08045. [Google Scholar]
  30. Karuvally, A.; Sejnowski, T.; Siegelmann, H.T. Hidden Traveling Waves bind Working Memory Variables in Recurrent Neural Networks. In Proceedings of the 41st International Conference on Machine Learning, Vienna, Austria, 21–27 July 2024; Salakhutdinov, R., Kolter, Z., Heller, K., Weller, A., Oliver, N., Scarlett, J., Berkenkamp, F., Eds.; Proceedings of Machine Learning Research. Volume 235, pp. 23266–23290. [Google Scholar]
  31. Jackson, Z.; Souza, C.; Flaks, J.; Pan, Y.; Nicolas, H.; Thite, A. Jakobovski/Free-Spoken-Digit-Dataset; V1.0.8 (v1.0.8); Zenodo: Geneva, Switzerland, 2018. [Google Scholar] [CrossRef]
  32. Cohen, G.; Afshar, S.; Tapson, J.; van Schaik, A. EMNIST: An extension of MNIST to handwritten letters. arXiv 2017, arXiv:1702.05373. [Google Scholar] [CrossRef]
  33. Hazan, H.; Manevitz, L.M. Topological constraints and robustness in liquid state machines. Expert Syst. Appl. 2012, 39, 1597–1606. [Google Scholar] [CrossRef]
Figure 1. Reservoir architecture for computing with networks of differentiating ring oscillators. The j-th ring oscillator in the reservoir is characterized by its time-dependent internal state $v^{(j)}(t) \in \mathbb{R}^{n_j}$ and output vector $y^{(j)}(t) \in \{0,1\}^{n_j}$, where $n_j$ denotes the number of differentiating neurons forming that ring. The R ring oscillators in the reservoir are organized into a small-world network, resulting in a reservoir with $N = \sum_{j=1}^{R} n_j$ neurons overall. The recurrent weights, therefore, consist of both intra-ring and inter-ring connections, depicted as black and blue in the diagram, respectively. The network is driven by external input $u(t) \in \mathbb{R}^{B}$ for an interval of time $[0, T]$, during which multiple output snapshots $y(t_i) = [y^{(1)}(t_i), \ldots, y^{(R)}(t_i)] \in \{0,1\}^{N}$ are recorded. For image classification, as presented in this paper, the input signal $u(t)$ is a sliding-window embedding of a Hilbert curve traversal of the image, so that B pixels of the image are input to the network at each point in time (cf. Section 3.2 for more details). The class probability predictions $p \in [0,1]^{\#\mathrm{classes}}$ are obtained by a linear+softmax layer on top of the aggregated snapshot matrix $Y = [y(t_1), \ldots, y(t_n)]$.
Figure 2. (top) Dynamics of a ring of 8 differentiating neurons. The snapshots of oscillator dynamics are shown at initialization, and again after 10 and 20 time steps. Colored circles indicate the neurons firing at each time step, with different colors delineating between different snapshots. (bottom) The dynamics of each neuron in the ring $y_i$ are displayed. Shaded regions indicate neuron firing.
Figure 3. Example of the coupling of two oscillators: one with four neurons and one with six. Edges are labeled with connection strengths. The coupling constant ε is set to 0.7.
Figure 4. Example of a small-world network constructed via the Watts–Strogatz method. The individual oscillators are labeled according to the number of differentiating neurons they contain. The connections are coupled, as shown in Figure 3.
Figure 5. An MNIST digit resized and read along a Hilbert curve. Original images are 28 × 28, but are resized to 32 × 32 to fit the curve. The resized image is then read with a sliding window to generate the time-series input to the reservoir.
Figure 6. Test accuracy of ring oscillator networks ranging from 50 to 500 rings. All networks were initialized with the same hyperparameters, ε and p, both set at 0.5. The points depict the mean of the 10 trials of each network size, and the error bars display the standard deviation of the trials.
Figure 7. Reservoir quality metrics defined in Section 3 evaluated across coupling strength ε and connectivity probability p. Top left: test accuracy. Top right: neural diversity $\mathcal{D}$. Bottom left: mean neural entropy $\mathcal{H}$. Bottom right: input-driven entropy deviation $\delta\mathcal{H}$. Each panel reports the average of five trials per configuration. For all graphs, lighter colors indicate higher values of the reported metric and darker colors indicate lower values.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

