Article

Reservoir Computing with a Single Oscillating Gas Bubble: Emphasizing the Chaotic Regime

Artificial Intelligence and Cyber Futures Institute, Charles Sturt University, Bathurst, NSW 2795, Australia
*
Author to whom correspondence should be addressed.
AppliedMath 2025, 5(3), 101; https://doi.org/10.3390/appliedmath5030101
Submission received: 13 June 2025 / Revised: 10 July 2025 / Accepted: 5 August 2025 / Published: 7 August 2025
(This article belongs to the Topic A Real-World Application of Chaos Theory)

Abstract

The rising computational and energy demands of artificial intelligence systems urge the exploration of alternative software and hardware solutions that exploit physical effects for computation. According to machine learning theory, a neural network-based computational system must exhibit nonlinearity to effectively model complex patterns and relationships. This requirement has driven extensive research into various nonlinear physical systems to enhance the performance of neural networks. In this paper, we propose and theoretically validate a reservoir-computing system based on a single bubble trapped within a bulk of liquid. By applying an external acoustic pressure wave to both encode input information and excite the complex nonlinear dynamics, we showcase the ability of this single-bubble reservoir-computing system to forecast a Hénon benchmarking time series and undertake classification tasks with high accuracy. Specifically, we demonstrate that a chaotic physical regime of bubble oscillation—where tiny differences in initial conditions lead to wildly different outcomes, making the system unpredictable despite following clear rules, yet still suitable for accurate computations—proves to be the most effective for such tasks.

1. Introduction

The 2024 Nobel Prize in Physics, awarded to John J. Hopfield and Geoffrey E. Hinton for their foundational contributions to artificial neural networks, demonstrates the importance of physics in advancing artificial intelligence (AI). In modern machine learning (ML) and AI, physics plays an increasingly vital role, particularly through physics-informed neural networks [1], which, for example, are widely applied in areas such as fluid mechanics [2,3], quantum neural networks [4,5], and neuromorphic computing (NC) systems [6,7,8,9,10]. Many novel NC systems have been proposed, employing physical principles from diverse fields such as nanomagnetism [11,12,13], quantum mechanics [14,15,16,17], fluid mechanics [18,19,20,21,22], soft matter [23,24], photonics [8,25], electronics [26,27,28,29,30], acoustics [31,32], and human-made objects [4,19,33].
In this paper, we theoretically demonstrate that a single gas bubble trapped within a bulk of liquid can function as a reservoir-computing (RC) system—a type of NC system that employs the nonlinear dynamics of physical systems for unconventional computation [7,8,10]. Unlike traditional RC models, which are governed by mathematical principles, and ML frameworks relying on differential equations where output variations are not directly proportional to input changes, RC systems based on fluid-mechanical physical objects [18,19,20,21,22] offer a more energy-efficient and computationally efficient alternative. While fluidic computational systems are not always ideal candidates for practical, especially large-scale, commercial applications, their study remains highly valuable from a fundamental perspective and often provides insights that find niche applications within the modern technological landscape [34,35,36,37,38,39,40].
In fact, energy efficiency is a critical consideration in resolving the growing challenge of rapidly increasing energy consumption by AI systems [19,41]. In the case of a single bubble located far from boundaries—as in our study—nonlinear oscillations of the bubble can lead to intriguing phenomena such as chemical reactions and the conversion of sound into light (sonoluminescence) [42,43,44,45]. While our work may not necessarily exploit all these effects to their full extent, by using the inherent nonlinearity and energy efficiency of bubble dynamics, our approach offers a promising avenue for developing sustainable and scalable computing solutions.
Many RC systems draw inspiration from the functioning of a biological brain that operates through vast, intricate networks of neural connections [46,47]. Like the brain, an RC system is dynamic, meaning that it evolves over time and exhibits complex, nonlinear, and sometimes chaotic behavior [48,49,50]. Consequently, this work specifically focuses on a bubble oscillation regime that induces chaotic dynamical behavior, optimizing the computational capabilities of the RC system.
A previous theoretical study [31] explored the nonlinear dynamics of an acoustically excited bubble cluster, treating each bubble as a node within a virtual network of interconnected oscillators, analogous to neural network architectures typically employed in RC systems [40]. Building on an earlier investigation of bubble clusters with similar equilibrium radii [51], it was proposed that, for a short yet sufficient duration to conduct measurements, the cluster would maintain stability, making practical implementation feasible. Experimental follow-up research (unpublished) confirmed the feasibility of this approach in principle. However, it was found that more reproducible results could be achieved using only a few interacting bubbles rather than a larger cluster. These findings, combined with a theoretical demonstration of substantial computational memory possessed by an oscillating bubble [52], further motivate the present study on a single-bubble RC system, highlighting its potential for practical applications using relatively simple equipment well-suited for an in-depth analysis of single-bubble properties [43,53,54].
The remainder of this paper is organized as follows. In Section 2, we present the conventional RC algorithm, providing key reference information to aid in understanding both the computational and physical aspects of the single-bubble RC system. Additionally, we introduce the mathematical model of a single oscillating bubble and discuss the relevant physical operating regime for this study. In Section 3, we present the bubble-based RC algorithm and justify the choice of benchmarking tasks adopted in this work. The main results and their analysis are presented in Section 4, followed by the conclusions and recommendations for future work. The analysis of the energy consumption by the single-bubble RC system, in comparison with a modern ML algorithm for nonlinear time-series prediction, as well as the evaluation of the output of the RC system against non-ML-based methods, are presented in Appendix A.

2. Theory

2.1. Traditional Reservoir Computing

Figure 1a illustrates the structure of the reference traditional RC system, where the states of the randomly connected nodes evolve over time under nonlinear dynamics described by the following differential equation [6,55]:
$$\mathbf{x}_n = (1 - \alpha)\,\mathbf{x}_{n-1} + \alpha \tanh\left(\mathbf{W}^{\mathrm{in}}\mathbf{u}_n + \mathbf{W}\mathbf{x}_{n-1}\right),$$
where $n$ is the sequential index of the discrete, equally spaced time steps $t_n$, $\mathbf{u}_n$ is a vector of $N_u$ input values, and $\mathbf{x}_n$ is a vector of $N_x$ neural activations. The element-wise operator $\tanh(\cdot)$ serves as the activation function. The input weight matrix $\mathbf{W}^{\mathrm{in}}$, randomly generated with dimensions $N_x \times N_u$, represents the input weights, while the recurrent weight matrix $\mathbf{W}$, of size $N_x \times N_x$, defines the interconnections among the network nodes. The parameter $\alpha \in (0, 1]$ is the leaking rate that regulates the temporal dynamics of the system.
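For illustration, the leaky-integrator update of Equation (1) can be sketched in a few lines of Python; the dimensions, weight ranges, and input values below are illustrative choices, not those used in this work.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

N_u, N_x = 1, 100       # input and reservoir dimensions (illustrative)
alpha = 0.3             # leaking rate, alpha in (0, 1]

W_in = rng.uniform(-0.5, 0.5, size=(N_x, N_u))   # input weight matrix
W = rng.uniform(-0.5, 0.5, size=(N_x, N_x))      # recurrent weight matrix

def reservoir_step(x_prev, u_n):
    """One step of Equation (1): leaky-integrator tanh update."""
    return (1.0 - alpha) * x_prev + alpha * np.tanh(W_in @ u_n + W @ x_prev)

# Drive the reservoir with a short illustrative input sequence.
x = np.zeros(N_x)
for u_n in [np.array([0.1]), np.array([0.5]), np.array([0.9])]:
    x = reservoir_step(x, u_n)
```

Because $\tanh$ is bounded and the update is a convex combination, the activations remain bounded, which is one ingredient of the stable reservoir dynamics discussed below.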
To compute the output weights $\mathbf{W}^{\mathrm{out}}$, we solve the linear equation $\mathbf{Y}^{\mathrm{target}} = \mathbf{W}^{\mathrm{out}}\mathbf{X}$, where the state matrix $\mathbf{X}$ is constructed from the neural activations $\mathbf{x}_n$ and the target matrix $\mathbf{Y}^{\mathrm{target}}$ contains the corresponding target outputs $y_n^{\mathrm{target}}$ for each time step $t_n$. The output weights $\mathbf{W}^{\mathrm{out}}$ are computed as
$$\mathbf{W}^{\mathrm{out}} = \mathbf{Y}^{\mathrm{target}}\mathbf{X}^{\top}\left(\mathbf{X}\mathbf{X}^{\top} + \beta\mathbf{I}\right)^{-1},$$
where $\mathbf{I}$ is the identity matrix, $\beta$ is a regularization coefficient, and $\mathbf{X}^{\top}$ denotes the transpose of $\mathbf{X}$ [55]. Once the output weights $\mathbf{W}^{\mathrm{out}}$ have been determined, they are used to compute the output vector $\mathbf{y}_n$ for new input data $\mathbf{u}_n$ according to the equation [55]
$$\mathbf{y}_n = \mathbf{W}^{\mathrm{out}}\,[1; \mathbf{u}_n; \mathbf{x}_n].$$
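The ridge-regression readout of Equation (2) reduces to a few lines of linear algebra. The sketch below uses synthetic random data standing in for real reservoir states; for brevity, the readout acts on the states only, whereas Equation (3) additionally concatenates a bias term and the input $\mathbf{u}_n$.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Synthetic stand-ins: 200 time steps, 50 reservoir nodes, 1 output channel.
T, N_x = 200, 50
X = rng.standard_normal((N_x, T))          # state matrix (activations as columns)
Y_target = rng.standard_normal((1, T))     # target outputs y_n^target
beta = 1e-6                                # regularization coefficient

# Equation (2): closed-form ridge-regression solution for the output weights.
W_out = Y_target @ X.T @ np.linalg.inv(X @ X.T + beta * np.eye(N_x))

# Linear readout for a single time step (states-only variant of Equation (3)).
y_0 = W_out @ X[:, 0]
```

The regularizer $\beta\mathbf{I}$ keeps the matrix inversion well conditioned even when reservoir states are strongly correlated.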
Several additional requirements must be met to create an efficient RC system [55,56]. Firstly, the reservoir weights $\mathbf{W}$ have to be scaled to ensure the echo state property, which guarantees that the neural states of the reservoir are stable and dependent on the input history [6,55]. This is typically achieved by setting the spectral radius $\rho$ of $\mathbf{W}$ to be less than 1: $\rho = \max|\lambda|$, where $\lambda$ are the eigenvalues of $\mathbf{W}$. In this context, we note that the nonlinear dynamics of a bubble inherently satisfies the echo state property [31,40], enabling the RC system proposed in this paper to function as an efficient computational framework.
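The spectral-radius scaling described above can be sketched as follows; the matrix size and the target radius of 0.9 are illustrative values.

```python
import numpy as np

rng = np.random.default_rng(seed=2)
N_x = 100
W = rng.uniform(-0.5, 0.5, size=(N_x, N_x))

# Spectral radius rho = max|lambda| of the recurrent weight matrix.
rho = np.max(np.abs(np.linalg.eigvals(W)))

# Rescale W so that its spectral radius is below 1, a standard
# heuristic for ensuring the echo state property.
W_scaled = W * (0.9 / rho)
rho_scaled = np.max(np.abs(np.linalg.eigvals(W_scaled)))
```

Since eigenvalues scale linearly with the matrix, the rescaled spectral radius lands exactly on the chosen target.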
Furthermore, software implementing the traditional RC algorithm must ensure that different inputs are mapped to distinct reservoir states, while similar states map to the same output, thereby enhancing robustness against minor variations. These properties are typically achieved using a large network of reservoir nodes (often more than 1000) [55]. However, as will be demonstrated later in this paper, a physical implementation of the RC system relaxes the requirement for a large number of nodes, making the physical system more computationally efficient and energy-efficient.

2.2. Nonlinear Dynamics of a Single Bubble

2.2.1. Keller–Miksis Equation

The study of the dynamics of a single gas bubble has a rich history rooted in fluid mechanics and acoustics. Pioneering work by Lord Rayleigh laid the foundation for understanding bubble collapse [57], where he derived an equation to describe the radial motion of a spherical bubble in an incompressible fluid. This was later extended by Plesset and others to include effects such as surface tension [58,59], viscosity, and compressibility, resulting in the well-known Rayleigh–Plesset equation and its modifications, including the Keller–Miksis equation [43,60]. Additionally, researchers like Minnaert explored the acoustic oscillations of bubbles, leading to the formulation of the Minnaert frequency, which characterizes the natural resonance of a gas bubble in a liquid [61].
To date, the nonlinear dynamical properties of oscillating bubbles have remained a subject of substantial theoretical and experimental research [43,60,62,63,64,65,66,67], including phenomena such as cavitation [68], sonoluminescence [42,43,44,45], bubble collapse-induced shock waves [69], and the translational motion of bubbles [70,71]. These studies have profound implications across various fields, ranging from medical applications like ultrasound imaging and drug delivery [72,73] to industrial processes such as sonochemistry [44,74,75].
Numerical models of bubbles trapped in a bulk of liquid and forced by an external acoustic pressure field are well-known and thoroughly documented [59,60,65,68,72,76,77]. In the following, we consider a single bubble with an equilibrium radius R 0 suspended in an incompressible viscous liquid. When exposed to an external time-varying pressure field P ( t ) , the instantaneous radius of the bubble R ( t ) undergoes an oscillatory motion, with dynamics governed by the Keller–Miksis equation [60,72]
$$\left(1 - \frac{\dot{R}}{c}\right) R\ddot{R} + \frac{\dot{R}^2}{2}\left(3 - \frac{\dot{R}}{c}\right) = \frac{1}{\rho}\left(1 + \frac{\dot{R}}{c} + \frac{R}{c}\frac{d}{dt}\right)\left[P(R,\dot{R}) - P(t)\right],$$
where $\dot{R}$ and $\ddot{R}$ are the first and second time derivatives of $R(t)$, $c$ is the speed of sound in the liquid, and $\rho$ is the fluid density. The internal pressure of the bubble is defined as
$$P(R,\dot{R}) = \left(P_0 - P_v + \frac{2\sigma}{R_0}\right)\left(\frac{R_0}{R}\right)^{3\kappa} - \frac{2\sigma}{R} - \frac{4\mu\dot{R}}{R}.$$
The external pressure is
$$P(t) = P_0 - P_v + P_a\sin(\omega t),$$
where $P_0$ denotes the ambient pressure, $P_v$ is the vapor pressure inside the bubble, $P_a$ represents the amplitude of the acoustic pressure, and $\omega = 2\pi f_a$ is the angular frequency of the driving acoustic pressure wave. The initial conditions are given by $R(0) = R_0 + \tilde{R}_0$ and $\dot{R}(0) = V$. The remaining model parameters are the dynamic viscosity of the liquid $\mu$, the polytropic exponent $\kappa$ for the gas inside the bubble, and the surface tension $\sigma$ at the gas–liquid interface (Table 1).
The natural frequency of the bubble is [54]
$$f_{\mathrm{nat}} = \frac{1}{2\pi R_0\sqrt{\rho}}\sqrt{3\kappa\left(P_0 - P_v + \frac{2\sigma}{R_0}\right) - \frac{2\sigma}{R_0} - \frac{4\mu^2}{\rho R_0^2}}.$$
This expression can be recast as
$$f_{\mathrm{nat}} = f_M\sqrt{1 + \frac{2(3\kappa - 1)\,\sigma}{3\kappa R_0 (P_0 - P_v)} - \frac{4\mu^2}{3\kappa\rho R_0^2 (P_0 - P_v)}},$$
where $f_M = \frac{1}{2\pi R_0}\sqrt{\frac{3\kappa(P_0 - P_v)}{\rho}}$ is the well-known Minnaert frequency [61].
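As a quick numerical check of Equation (7) and the Minnaert frequency, the following sketch evaluates both for an air bubble in water with $R_0 = 0.8\,\mu$m (the value used in Section 2.2.2). The material constants are assumed representative values and do not necessarily coincide with Table 1.

```python
import math

# Assumed representative parameters: air bubble in water at room temperature.
R0 = 0.8e-6        # equilibrium radius, m
rho = 998.0        # liquid density, kg/m^3
P0 = 101.325e3     # ambient pressure, Pa
Pv = 2.33e3        # vapor pressure, Pa
sigma = 0.0725     # surface tension, N/m
mu = 1.0e-3        # dynamic viscosity, Pa s
kappa = 1.4        # polytropic exponent

# Minnaert frequency f_M (surface tension and viscosity neglected).
f_M = math.sqrt(3.0 * kappa * (P0 - Pv) / rho) / (2.0 * math.pi * R0)

# Natural frequency f_nat of Equation (7), including sigma and mu.
arg = (3.0 * kappa * (P0 - Pv + 2.0 * sigma / R0)
       - 2.0 * sigma / R0
       - 4.0 * mu**2 / (rho * R0**2)) / rho
f_nat = math.sqrt(arg) / (2.0 * math.pi * R0)
```

For such a micrometer-sized bubble, surface tension raises the natural frequency noticeably above the Minnaert estimate, which is why the full Equation (7) is used in this regime.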
We nondimensionalize Equation (4) to reduce the number of governing parameters [72]. This is done by using the equilibrium radius $R_0$ and the inverse of the ultrasound frequency, $\omega^{-1}$, as the characteristic length and time scales, respectively. We define the nondimensional bubble radius $r$ and time $\tau$ as $r = R(t)/R_0$ and $\tau = \omega t$, respectively. Thus, we obtain
$$\ddot{r}\left[(1 - \Omega\dot{r})\,r + \Omega R\right] = \frac{\Omega\dot{r}^3 - 3\dot{r}^2}{2} - \frac{W + R\dot{r}}{r} + (M + W)\,\frac{1 + (1 - K)\,\Omega\dot{r}}{r^{K}} - (1 + \Omega\dot{r})(M + M_e\sin\tau) - M_e\,\Omega\, r\cos\tau,$$
where the nondimensional parameters are defined as
$$\Omega = \frac{\omega R_0}{c}, \quad R = \frac{4\mu}{\rho\omega R_0^2}, \quad W = \frac{2\sigma}{\rho\omega^2 R_0^3}, \quad M = \frac{P_0 - P_v}{\rho\omega^2 R_0^2}, \quad M_e = \frac{P_a}{\rho\omega^2 R_0^2}, \quad K = 3\kappa.$$
Each of the nondimensional groups listed above has a straightforward physical meaning. The parameter $\Omega$, which is proportional to the ratio of the equilibrium bubble radius to the acoustic wavelength, characterizes the bubble size. Parameters $R$ and $W$ characterize the viscous dissipation and surface tension effects, respectively, and can be treated as inverse Reynolds and Weber numbers. The parameter $M$ represents the elastic properties of the gas and its compressibility, while $M_e$ measures the external acoustic excitation. Additionally, the nondimensional Minnaert frequency can conveniently be expressed as $\omega_0 = \sqrt{KM}$ [54].
These parameters enable us to systematically analyze the behavior of bubbles under different physical conditions, making the study of bubble dynamics in a liquid more general and widely applicable. We numerically solve Equation (9) using the odeint procedure from the SciPy library for Python, with the material parameters listed in Table 1.
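As an illustration of this numerical procedure, the following sketch integrates Equation (9) with odeint. The material parameters are assumed representative values for an air bubble in water and may differ from Table 1; the driving frequency is chosen so that the nondimensional Minnaert frequency comes out close to the value $\omega_0 \approx 0.86$ quoted in Section 2.2.2, and the driving pressure sits at the lower edge of the range studied in Figure 2.

```python
import numpy as np
from scipy.integrate import odeint

# Assumed physical parameters (illustrative, not necessarily Table 1).
R0, rho, c = 0.8e-6, 998.0, 1482.0
P0, Pv = 101.325e3, 2.33e3
sigma, mu, kappa = 0.0725, 1.0e-3, 1.4
f_a = 4.7e6                     # driving frequency, Hz (assumed)
P_a = 300.0e3                   # peak driving pressure, Pa
omega = 2.0 * np.pi * f_a

# Nondimensional groups of Equation (10).
Om = omega * R0 / c                          # Omega
Re_inv = 4.0 * mu / (rho * omega * R0**2)    # R (inverse Reynolds number)
We_inv = 2.0 * sigma / (rho * omega**2 * R0**3)   # W (inverse Weber number)
M = (P0 - Pv) / (rho * omega**2 * R0**2)
Me = P_a / (rho * omega**2 * R0**2)
K = 3.0 * kappa

def keller_miksis(y, tau):
    """Right-hand side of the nondimensional Keller-Miksis Equation (9)."""
    r, v = y
    num = ((Om * v**3 - 3.0 * v**2) / 2.0
           - (We_inv + Re_inv * v) / r
           + (M + We_inv) * (1.0 + (1.0 - K) * Om * v) / r**K
           - (1.0 + Om * v) * (M + Me * np.sin(tau))
           - Me * Om * r * np.cos(tau))
    den = (1.0 - Om * v) * r + Om * Re_inv
    return [v, num / den]

# Integrate over 65 driving cycles with tight tolerances (cf. Section 2.3).
tau = np.linspace(0.0, 65.0 * 2.0 * np.pi, 13000)
sol = odeint(keller_miksis, [1.0, 0.0], tau, rtol=1e-8, atol=1e-10)
r = sol[:, 0]
```

Note that, with $M_e = 0$, the right-hand side vanishes at $r = 1$, $\dot{r} = 0$, confirming that the equilibrium radius is a fixed point of the unforced equation.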

2.2.2. Physical Operating Regimes of Interest

Figure 2a presents the bifurcation diagram [43,65,72], depicting the normalized bubble radius $R/R_0$ as a function of the peak driving acoustic pressure $P_a$, systematically varied from 300 kPa to 450 kPa in 0.1 kPa increments. The equilibrium bubble radius, $R_0 = 0.8\,\mu$m, is chosen to reflect typical conditions in both research and industrial applications, ensuring a realistic study context [72]. The diagram visually depicts how the response of the system evolves as $P_a$ is varied at a constant value of the driving acoustic frequency $f_a$ (Table 1).
Providing complementary information [78], Figure 2b presents the evolution of the acoustic spectrum of the bubble, where the x-axis corresponds to the normalized frequency $f/f_a$, the y-axis represents the peak pressure $P_a$ of the incident wave, and the false color encodes the amplitude (in dB) of the acoustic pressure scattered by the bubble [54]. We can see that, at lower pressures, the spectrum shows frequency peaks at $f/f_a = 1, 2, 3$, and so on (the nondimensional Minnaert frequency is $\omega_0 = 0.8624$). As the peak pressure increases, the subharmonic peak at $f/f_a = 1/2$ and its ultraharmonic components at $f/f_a = 3/2, 5/2$, and so on become more prominent. We also observe that the emergence of additional peaks in the spectrum correlates with the bifurcation regions identified in the diagram in Figure 2a.
In Figure 2a, we can see that, at low values of $P_a$, the oscillations exhibit simple periodic behavior, referred to in this paper as single-period oscillations. As the pressure increases to 330 kPa, the bubble transitions into a more complex double-periodic oscillation regime. Further increasing the driving pressure to approximately 385 kPa leads to a quadruple-periodic (period-four) regime. When the pressure reaches approximately 400 kPa, the system enters a chaotic regime, where the radial oscillations become highly irregular and unpredictable, as is especially evident for $P_a \gtrsim 430$ kPa.
We also established that the so-called edge-of-chaos regime, occurring within the pressure range of $P_a = 400$–$410$ kPa and marked by the transition from a periodic state to a chaotic one, is particularly promising for applications in RC systems. In this regime, the response of the bubble exhibits a trade-off between order and chaos, enabling it to respond dynamically to inputs without becoming entirely unpredictable or overly sensitive to small perturbations.
We now discuss the significance of these physical regimes in the context of ML. The specific nonlinear regimes suitable for the operation of both traditional and physical RC systems have been extensively discussed in the literature [6,7,79]. At the same time, it has been shown that RC systems [48,80,81,82,83,84,85,86], particularly those based on fluid-mechanical systems [40], can operate using a broader range of nonlinear phenomena, including chaos.
In the periodic regime of single-bubble oscillations, the physical system exhibits high stability and predictability, making it resistant to small variations in initial conditions. However, when used as an RC system, this stability reduces flexibility and limits the ability of the system to capture complex patterns, leading to the undesired effect of underfitting.
In contrast, the chaotic regime of the system is highly sensitive to initial conditions, where small differences in input can lead to vastly different outputs. This sensitivity enables the system to model diverse behaviors, enhancing pattern recognition, but it also increases the risk of amplifying noise and overfitting. While this sensitivity improves pattern recognition, it makes the system less reliable when generalizing to new data.
Thus, the choice of operating regime significantly influences the performance of an RC system. A highly stable system may struggle to learn effectively, while a purely chaotic one becomes unreliable in terms of reservoir computing. Consequently, in this paper, the edge-of-chaos regime is regarded as the optimal regime, as it strikes a crucial balance, enabling the reservoir to model and predict real-world data efficiently.

2.3. Relevance of the Theoretical Framework to Experimental Realization

Potential practical implementations of the proposed theoretical framework are discussed throughout the main body of this work. Here, we specifically highlight the relevance of the theoretical framework to experimental realization, with particular focus on potential challenges such as sensitivity and stability.
The same theoretical framework was previously employed to explain findings from an experiment involving oscillating bubbles in water [54]. It was shown that careful post-processing of raw experimental traces using standard techniques, such as digital filtering, produces clean results that can be readily compared with theoretical spectra, like those plotted in Figure 2. It is noteworthy that only standard inexpensive laboratory equipment was used in Ref. [54], indicating that even better results could be achieved, allowing direct fitting of experimental data with theoretical models.
In the context of the present work, this means that the theoretical results presented here have been obtained under conditions closely mimicking those of a potential experiment. Specifically, the accuracy and tolerance parameters of the differential equation solvers used in the computations were set to ensure that the theoretical results would closely reproduce a properly post-processed experimental counterpart. This, in turn, implies that the provided theoretical framework not only captures the key physical processes but also produces outcomes stable and accurate enough to be observed in practical scenarios.

3. Bubble-Based Reservoir Computing

In this section, we introduce the algorithm for the bubble-based RC system. Since our approach builds on specific computational steps from the traditional RC procedure outlined above, we reference the computational aspects of the traditional system, as summarized in Figure 1. Additionally, we explain the rationale behind the choice of benchmarking tasks and discuss the computational operating regimes of the RC system suitable for these tasks.

3.1. Physico-Computational Framework

From a theoretical perspective, implementing a single bubble as a reservoir involves replacing the core update equation of the traditional RC system, Equation (1), with Equation (9), which governs the nonlinear dynamics of the bubble. This step is complemented by the proper encoding of the input data as the driving signal that forces the bubble oscillations; the computation and simultaneous processing of the acoustic pressure waves emitted by the bubble also play a key role in forming the output data (Figure 1b). Additionally, as demonstrated in prior work [31,40], such a replacement enables relaxing certain computational requirements, such as the calculation of the spectral radius. This is because, similar to other physical RC systems [6,7], the nonlinear dynamics of the bubble inherently satisfies the echo state property, which is central to the concept of reservoir computing.
To implement the aforementioned procedures, we encode the input signal $u(t)$ of interest, with discrete time steps $t_n$ implied, as the peak pressure amplitude of a sinusoidal acoustic wave (Figure 1b). The input signal $u(t)$ is first normalized to the interval $[0, 1]$ and then mapped to the peak acoustic pressure amplitude in the range $[\min(P_a), \max(P_a)]$, which represents the boundary values of the physical operating regime inferred from the analysis of the bifurcation diagram and Fourier spectra, as discussed above.
The normalization procedure is implemented by computing
$$u_{\mathrm{norm}}(t) = \frac{u(t) - u_{\min}}{u_{\max} - u_{\min}},$$
where $u_{\max}$ and $u_{\min}$ are the maximum and minimum values in the dataset of interest, respectively. The so-obtained signal is then used in the transformation that produces the discrete-time function
$$U(t) = \min(P_a) + \left[\max(P_a) - \min(P_a)\right] u_{\mathrm{norm}}(t)$$
to ensure the proper scaling of the input signal required to match the operational parameters, such that $U(t) \in [\min(P_a), \max(P_a)]$.
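A minimal sketch of the encoding of Equations (11) and (12) follows; the pressure window is illustrative and, in the paper, is inferred from the bifurcation diagram (e.g., the edge-of-chaos range discussed in Section 2.2.2).

```python
import numpy as np

# Illustrative pressure window for the physical operating regime.
P_min, P_max = 400.0e3, 410.0e3   # Pa

def encode_input(u):
    """Map an arbitrary input series onto peak-pressure values."""
    u = np.asarray(u, dtype=float)
    u_norm = (u - u.min()) / (u.max() - u.min())      # Equation (11)
    return P_min + (P_max - P_min) * u_norm           # Equation (12)

U = encode_input([-1.0, 0.0, 2.0])
```

By construction, the smallest input value maps exactly to $\min(P_a)$ and the largest to $\max(P_a)$.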
To implement the RC system, independent simulations of bubble dynamics are conducted, with the number of simulations corresponding to the number of discrete points in $U(t)$ (see the inset table in Figure 1b). The function $U(t)$ contains the values of the peak pressure of the sinusoidal acoustic wave that drives the oscillations of the bubble; each value of $U(t)$ is used in one simulation.
In each individual computational run of the model that numerically solves Equation (9), the bubble is subjected to a fixed number of empirically chosen (typically 65) cycles of the sinusoidal forcing signal (the choice of initial conditions for each run is discussed below). A temporal signal associated with the numerically simulated radial oscillations of the bubble is recorded, and transient effects are removed to obtain the stable system dynamics. The steady-state portion of each signal is then divided into $N = 5$ segments, and $k = 10$ evenly spaced points are chosen within each segment, thereby producing a total of $k \times N = 50$ discrete data points (these data points are visualized in Figure 1b as the dot markers superimposed on the representative acoustic response of the bubble). Importantly, when arranged into the vector $\mathbf{r}_n = [r_1, r_2, \ldots, r_{k \times N}]$, the so-prepared data points serve as the neural activations of the reservoir and correspond to the activation states $\mathbf{x}_n$ of the traditional reservoir system [see Equation (1)].
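The sampling procedure can be sketched as follows. The fraction of the trace discarded as transient is an assumption made for this sketch (the paper only states that transients are removed), and a synthetic oscillatory trace stands in for a simulated bubble response.

```python
import numpy as np

def extract_activations(signal, n_segments=5, k=10, transient_fraction=0.5):
    """Form the activation vector r_n from a simulated radial response:
    drop the transient, split the steady state into N segments, and take
    k evenly spaced samples from each (k * N = 50 points here)."""
    signal = np.asarray(signal, dtype=float)
    steady = signal[int(len(signal) * transient_fraction):]
    segments = np.array_split(steady, n_segments)
    points = [seg[np.linspace(0, len(seg) - 1, k).astype(int)]
              for seg in segments]
    return np.concatenate(points)

# Synthetic oscillatory trace standing in for a bubble response.
t = np.linspace(0.0, 65.0 * 2.0 * np.pi, 13000)
trace = 1.0 + 0.2 * np.sin(t) + 0.05 * np.sin(2.0 * t)
r_n = extract_activations(trace)
```

The resulting 50-element vector plays the role of the state vector $\mathbf{x}_n$ in the traditional algorithm.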
Then, following the training procedure employed in the traditional RC algorithm, we calculate the output weights $\mathbf{W}^{\mathrm{out}}$ between the target points $y_n^{\mathrm{target}}$ and the corresponding points contained in the vector $\mathbf{r}_n$ using a linear regression procedure. To this end, we solve the linear equation $\mathbf{Y}^{\mathrm{target}} = \mathbf{W}^{\mathrm{out}}\mathbf{R}$, where the state matrix $\mathbf{R}$ contains all the neural activations $\mathbf{r}_n$ derived from the nonlinear response of the oscillating bubble, while the target matrix $\mathbf{Y}^{\mathrm{target}}$ contains the corresponding target outputs $y_n^{\mathrm{target}}$ for each discrete time step $t_n$.
The output weights $\mathbf{W}^{\mathrm{out}}$ are then computed as
$$\mathbf{W}^{\mathrm{out}} = \mathbf{Y}^{\mathrm{target}}\mathbf{R}^{\top}\left(\mathbf{R}\mathbf{R}^{\top} + \beta\mathbf{I}\right)^{-1},$$
i.e., by recasting Equation (2) and applying it to the data derived from the numerical model of the oscillating bubble. Analogously to the traditional RC algorithm, once the output weights $\mathbf{W}^{\mathrm{out}}$ have been determined, we use them to compute the predicted output vector $\hat{y}_n$ for new input data $u_n$, i.e., data unseen by the trained RC system, as
$$\hat{y}_n = \mathbf{W}^{\mathrm{out}}\mathbf{r}_n.$$
Thus, the goal of the training process is to determine $\mathbf{W}^{\mathrm{out}}$ by minimizing a loss function, which is typically achieved by computing the normalized mean squared error between the predicted outputs and the actual target values as
$$\mathrm{NMSE} = \frac{1}{N}\sum_{i=1}^{N}\left(y_i^{\mathrm{target}} - \hat{y}_i\right)^2,$$
where N is the number of training samples. It is worth noting that, in many test problems, including those discussed below, the target data are obtained by splitting a known time series into two parts: the first part is used for training, and the second part is used for testing. Unless discussed otherwise, during training, the system is presented only with the training portion, while the testing portion remains unknown to it. However, the testing portion is known to the human operator, who may use it to compute NMSE to evaluate the accuracy of the model.
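The error metric of Equation (15) can be implemented directly:

```python
import numpy as np

def nmse(y_target, y_pred):
    """Error of Equation (15): mean squared deviation over N samples."""
    y_target = np.asarray(y_target, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean((y_target - y_pred) ** 2))

err = nmse([1.0, 2.0, 3.0], [1.0, 2.5, 3.0])
```

In this small example, only the middle sample deviates (by 0.5), so the error is $0.25/3$.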
We also discuss the initial physical conditions used in each individual computational run, as outlined in the algorithm above. When the bubble-based RC system operates in a periodic regime, the initial conditions are set once in the very first computational run, ensuring that the system dynamics remains continuous throughout the operation. However, when the system operates in a chaotic regime, the same input can lead to different trajectories due to sensitivity to the initial conditions. In our proposed RC scheme, we address this challenge by enforcing the same initial conditions for each input. This strict control ensures reproducibility of the output for a given input, thereby satisfying the practical echo state property required for reservoir computing [6,55,56,80,86].

3.2. Benchmarking Tasks

Typically, the accuracy of forecasts conducted by RC systems has been assessed by evaluating their ability to learn and predict highly nonlinear and chaotic time series of natural, mathematical, and synthetic origin. Examples of such benchmarking tasks include the Mackey–Glass time series [40,55,86], the Lorenz [87,88] and Rössler attractors [31,88], and the Hénon [30,89] and Ikeda [31] maps. Each of these tests has its own advantages and limitations, and their selection is often task- and RC-system-specific, relying mostly on heuristics.
In this paper, as the principal benchmarking task, we employ the Hénon map, a two-dimensional discrete-time dynamical system known for its chaotic behavior [90]. The recursive equations of this mathematical model are
$$x_{n+1} = 1 - a x_n^2 + y_n, \qquad y_{n+1} = b\,x_n,$$
where the typical parameters $a = 1.4$ and $b = 0.3$ are used alongside the initial values $x_0 = 0.1$ and $y_0 = 0.1$.
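For reference, Equation (16) can be iterated directly to generate the benchmark series:

```python
def henon(n_steps, a=1.4, b=0.3, x0=0.1, y0=0.1):
    """Iterate the Henon map of Equation (16) with the standard parameters."""
    xs, ys = [x0], [y0]
    for _ in range(n_steps - 1):
        x, y = xs[-1], ys[-1]
        xs.append(1.0 - a * x * x + y)   # x_{n+1} = 1 - a x_n^2 + y_n
        ys.append(b * x)                 # y_{n+1} = b x_n
    return xs, ys

xs, ys = henon(1000)
```

Despite its chaotic dynamics, the trajectory remains confined to a bounded attractor, which is what makes the map a convenient forecasting benchmark.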
In the context of RC systems, the Hénon map test offers distinct advantages over the other commonly used benchmarks. Firstly, the Hénon map has a lower dimensionality since it is a discrete-time system governed by only two variables (e.g., the Lorenz and Rössler attractors consist of a set of three coupled differential equations [91]). This reduced dimensionality lowers computational costs, making this benchmarking task particularly suitable for efficiently evaluating RC performance in autonomous systems [19].
Secondly, the nonlinear and chaotic dynamics of the Hénon map is captured through a relatively simple recurrence relation, compared with the Lorenz and Rössler systems that require numerical integration due to their continuous nature. This feature aligns well with many practical applications of RC systems where data are naturally sampled at discrete time intervals (e.g., sequence prediction, financial time series forecasting, and digital signal processing [92]).
Thirdly, the Hénon map exhibits sharp transitions and sudden regime shifts, providing a valuable test for assessing the short-term memory capacity of an RC system [6,55]. This characteristic makes this test task particularly useful for evaluating architectures designed to handle rapid fluctuations and abrupt changes in dynamical systems [20]. Additionally, the well-defined structure and relatively low-dimensional nature of the Hénon map facilitate interpretability and benchmarking, enabling a human operator of an RC system to isolate and analyze key aspects of RC performance without the added complexity of high-dimensional attractors [40].
Fourthly, the Hénon map serves as a key model for studying advanced forecasting methods in irregularly sampled time series, which frequently occur in healthcare, finance, and physics [93]. While the present study does not address irregular sampling, our results suggest promising avenues for fluid-mechanical physical reservoir computing to expand into this research domain [4,94].
Of course, the Lorenz system and the other popular test tasks remain an important benchmark for RC systems, particularly for tasks that require long-term stability in chaotic forecasting or involve continuous system dynamics [87]. Nevertheless, when the focus is on computational efficiency, discrete-time compatibility, and the ability to capture sharp nonlinear transitions, which is the case of this present paper, the Hénon map provides a compelling and practical alternative for evaluating RC systems.
It is also worth noting that, unless considerable computational resources are employed and judicious fine-tuning of the hyperparameters in the RC algorithm is performed [6,40,87,88], an RC system cannot be universally applied to all benchmarking time series. This limitation arises because the temporal behavior of each time series is inherently unique [31] and even sophisticated algorithms require additional adjustments—such as the leaking rate α—to effectively capture the underlying dynamics. Thus, the choice of hyperparameters significantly influences the ability of the reservoir to generalize across different types of nonlinear and chaotic datasets, reinforcing the necessity of problem-specific optimization when applying RC models.
Nevertheless, to address the challenge of problem specificity, we extend our evaluation beyond the Hénon map test by demonstrating the capability of the bubble-based RC system to successfully perform classification tasks—a less conventional application for RC systems but one that represents a valuable and intriguing extension of their capabilities [17,95]. Unlike traditional RC applications, which primarily focus on time-series prediction and forecasting, classification tasks require the system to identify patterns and assign inputs to distinct categories based on learned features.

3.3. Computational Operating Regimes

In addition to the ability of the bubble-based RC system to operate in either a periodic or chaotic regime—depending on physical system parameters and input conditions—its performance can be evaluated through various computationally defined metrics.
Typically, the first computational test involves the predictive mode, where the RC system is presented with a previously unseen data point and then predicts the next point, continuing iteratively [20,55]. While this test is relatively simple, it helps to assess the quality of the training process and prepares the RC system for more challenging tasks.
The second test used in this paper is the generative mode, also known as the free-running forecast [55]. In this mode, the output produced by the trained reservoir at the previous time step serves as the input for the next time step [55], effectively making the reservoir a self-generator [96]. Operating in generative mode is a more challenging task compared with the predictive mode, but it holds greater practical significance as generative reservoirs can be applied to a wide range of problems, including the prediction of complex and hard-to-analyze processes such as financial market behavior and climate variations [40].
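The distinction between the two modes can be illustrated with a minimal software reservoir. The echo-state-style network below is purely illustrative; the network size, the tanh nonlinearity, and the sinusoidal teacher signal are our assumptions, not the bubble model used in this paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy echo-state-style reservoir; all sizes, weights, and the teacher
# signal are illustrative assumptions, not the bubble model of the paper.
N = 50                                   # number of (virtual) neurons
W_in = rng.uniform(-0.5, 0.5, N)         # fixed input weights
W = rng.uniform(-0.5, 0.5, (N, N))       # fixed recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

def step(x, u):
    """Advance the reservoir state x under a scalar input u."""
    return np.tanh(W @ x + W_in * u)

# Train a linear readout W_out by ridge regression on a teacher series.
u_train = np.sin(0.3 * np.arange(300))
x = np.zeros(N)
states = []
for u in u_train[:-1]:
    x = step(x, u)
    states.append(x)
X = np.array(states)
Y = u_train[1:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ Y)

# Predictive mode: each true data point is fed in and the next point
# is predicted from the resulting reservoir state.
preds = X @ W_out

# Generative (free-running) mode: the previous OUTPUT is fed back as
# the next INPUT, making the reservoir a self-generator.
u_fb = u_train[-1]
gen = []
for _ in range(50):
    x = step(x, u_fb)
    u_fb = float(W_out @ x)              # feedback closes the loop
    gen.append(u_fb)
```

The only structural difference between the two modes is the final loop, where the readout value replaces the external input.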
It is also worth noting that RC systems, in general, are inherently less prone to overfitting by design [6,56]. In particular, this robustness arises from the fixed reservoir structure and the simplicity of training. That said, overfitting can still occur under certain conditions—for example, if the reservoir is excessively large, the dataset is too noisy, or the readout layer is unnecessarily complex. None of these concerns apply to this study. We also note that physical RC systems can successfully operate with smaller training datasets, as demonstrated both in this paper and in previous published works [12,17,20,31,86,95]. Specifically, this ability enables an RC system to perform real-time prediction of an input signal’s future behavior using only a short recorded dataset [19].

4. Results and Discussion

4.1. Predictive Mode

We begin by discussing the performance of the RC system in the predictive mode, evaluating its accuracy in forecasting the Hénon map. This is quantified by computing the NMSE values in both the periodic and chaotic physical regimes of bubble oscillations.
Figure 3a shows a typical predictive-mode output of the bubble-based RC system operating in the chaotic regime, presenting a one-dimensional plot of the x-components of the Hénon map, with an NMSE value of approximately 8 × 10⁻³. Figure 3b presents the corresponding two-dimensional representation of the Hénon map, with the ground truth data points shown as black dots and predicted points marked in magenta. It is worth noting that, in both panels of Figure 3, the numerical values of the x-component of the Hénon map have been rescaled according to the procedure described above. However, this rescaling does not affect the subsequent discussion and is therefore inconsequential for our analysis.
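For reference, the benchmark series and the error metric can be reproduced in a few lines. The Hénon map is generated with its standard parameters a = 1.4 and b = 0.3, and the NMSE is taken here as the mean squared error normalized by the variance of the target, a common convention; the paper's exact rescaling procedure is not repeated here:

```python
import numpy as np

def henon(n, a=1.4, b=0.3, x0=0.0, y0=0.0):
    """Generate n points of the Hénon map in the form
    x_{k+1} = 1 - a x_k^2 + b y_k, y_{k+1} = x_k,
    an equivalent formulation of the standard two-variable map."""
    x, y = x0, y0
    xs = np.empty(n)
    for i in range(n):
        xs[i] = x
        x, y = 1.0 - a * x**2 + b * y, x
    return xs

def nmse(target, prediction):
    """Mean squared error normalized by the variance of the target."""
    target = np.asarray(target)
    prediction = np.asarray(prediction)
    return np.mean((target - prediction) ** 2) / np.var(target)
```

With this convention, an NMSE of 8 × 10⁻³ means the squared prediction error is less than one percent of the signal variance.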
Although a visually similar result was also achieved by selecting the periodic physical oscillation regime, as shown in Figure 4, the NMSE obtained in the chaotic regime is not only lower than that in the periodic regime but also converges more quickly. Importantly, the result in Figure 4 demonstrates that a minimal configuration of just k × N = 15 virtual neurons is sufficient to achieve reliable RC system performance in the chaotic regime.
Such a low number of neurons—compared to at least 1000 in highly optimized traditional algorithmic RC systems [55]—was previously observed in quantum-mechanical [17,95], spin wave-based [12], and water wave-based [20] computational reservoirs. Recall that the paradigm of RC relies fundamentally on the nonlinearity of the underlying dynamical system, which enables the reservoir to transform input data into a high-dimensional state space suitable for solving complex tasks [6,56]. In the case of the bubble-based reservoir, the dynamics is governed by the nonlinear oscillations of a bubble under acoustic excitation. Such a system exhibits particularly rich and diverse behavior that significantly enhances its computational capacity [31,40,43]. In particular, this dynamical richness improves the reservoir’s learning process by ensuring that similar inputs produce correlated but distinguishable responses, while dissimilar inputs are mapped to well-separated regions of the reservoir state space [6]. As a result, the system can achieve accurate predictions and low error rates, even with a relatively small number of virtual neurons.
This finding confirms the ability of a single oscillating bubble to function as an efficient classical neuromorphic computing unit. In fact, the requirement of more than 1000 neurons in a traditional algorithmic RC system implies that computations must handle large matrices that need to be stored in computer memory and processed using complex linear algebra algorithms in a loop spanning all the input and output data points [55]. Therefore, the introduction of virtual neurons, extracted from the nonlinear dynamical behavior of the oscillating bubble using the procedure proposed in this paper, significantly reduces the computational requirements compared to traditional algorithms.

4.2. Free-Running Mode

The computational advantage of the bubble-based reservoir extends to tests conducted in the free-running mode, where the RC system autonomously generates future time-series points without external input, relying solely on its internal state dynamics. We reveal that, in the chaotic regime [Figure 5a], the bubble reservoir accurately predicts a substantial segment of the Hénon time series before a noticeable deviation from the target data emerges. We can see that the predicted trajectory closely follows the ground truth within this range, indicating short-term predictive capability of the reservoir. However, beyond this range, error accumulation and sensitivity to initial conditions lead to divergence, which is a common physical characteristic of chaotic systems and a fundamental computational behavior of ML algorithms designed to predict highly nonlinear time series [31,40,55].
Interestingly, we can also see that, after a region of significant divergence from the ground truth, the output of the RC system returns to convergence for a short period of time before diverging again. This behavior has also been reported in other physical RC systems [40]. Prolonged regions of good convergence occur when the dynamics of the reservoir temporarily aligns with the true behavior of the time series of interest. Since the memory of the reservoir is fundamentally limited [56], as well as because noise and numerical instabilities amplify small deviations over time, divergence from the ground truth occurs. Nevertheless, intermittent reconvergence can occur when the dynamics of the reservoir realigns with that of the time series, particularly when the input data exhibit recurrent dynamical patterns [87,97,98].
We reconstruct the two-dimensional Hénon map using an iterative windowing computational approach. In this method, a sliding window of 20 consecutive points is used to generate predictions. In the first iteration, the RC system is trained on the original training data, the data points to the left of the vertical line in Figure 5a, resulting in a partially converged prediction compared to the ground truth. In subsequent iterations, the first 20 points of the training dataset are omitted but the first 20 points correctly predicted by the RC system are appended to the end of the training dataset. This process is repeated iteratively, continuously updating the dataset with newly generated points, until the two-dimensional Hénon map is reconstructed (Figure 5b). As shown, this approach yields feasible results using only a few iterations, requiring minimal additional computer memory and only slightly increasing the computational time needed for forecasting.
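The iterative windowing procedure above can be sketched in Python. Here `predict_fn` is a hypothetical stand-in for the trained RC system, returning its next forecast points; the dummy predictor in the usage line merely echoes the last window and exists only to make the sketch executable:

```python
import numpy as np

def iterative_windowing(train, predict_fn, window=20, iterations=5):
    """Sliding-window reconstruction sketch.

    `predict_fn(dataset)` is a placeholder for a trained RC system that
    returns at least `window` forecast points; its real form depends on
    the reservoir and is not specified here.
    """
    data = list(train)
    generated = []
    for _ in range(iterations):
        new_points = list(predict_fn(data))[:window]
        # Drop the oldest window and append the newly predicted one,
        # continuously updating the dataset with generated points.
        data = data[window:] + new_points
        generated.extend(new_points)
    return np.array(generated)

# Usage with a dummy predictor that simply echoes the last window:
dummy = lambda d: d[-20:]
out = iterative_windowing(np.arange(100, dtype=float), dummy)
```

Because only one window of points is appended per iteration, the memory footprint of the dataset stays constant while the reconstruction grows.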
Of course, advanced algorithmic RC systems implemented on high-performance hardware [87,88] may maintain agreement between the forecast and the ground truth over a broader range before high divergence sets in. However, they require a significantly larger number of neurons and input data points to train the reservoir. In contrast, as indicated by the vertical dashed line in Figure 5a, the bubble-based reservoir can operate effectively with just 100 training points. This capability is particularly advantageous for unconventional computing systems designed to operate onboard autonomous vehicles [19].

4.3. Classification Task

In addition to the tests on chaotic time series, we evaluate the performance of the bubble-based RC system by tasking it with a binary classification problem involving a synthetic periodic waveform. The input signal is constructed by randomly alternating between two types of 10-sample segments: a sinusoidal waveform defined by sin(2πn/10) and a square wave taking the value +1 for the first five samples and −1 for the next five. Each segment spans ten samples, where each sample represents a temporal resolution of 10 μs, corresponding to a sampling rate of 100 kHz, which means that each waveform segment spans 100 μs in real time.
The label assigned to each sample is 0 for the sinusoidal segments and 1 for the square wave segments, resulting in a time-labeled binary sequence. This signal is used to drive the bubble-based RC system. The reservoir output is evaluated against the known labels at each time step, and correct classification indicates that the reservoir possesses both sufficient nonlinearity and short-term memory, which are crucial for performing temporal tasks such as pattern recognition and signal classification [6,17,95].
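The construction of the labeled input signal can be sketched as follows; the 50/50 segment-selection probability and the random seed are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

SEG = 10                    # samples per segment (10 us each at 100 kHz)
n = np.arange(SEG)
sine_seg = np.sin(2 * np.pi * n / SEG)                 # label 0
square_seg = np.concatenate([np.ones(5), -np.ones(5)]) # label 1

def make_signal(n_segments):
    """Randomly alternate sine and square segments with per-sample labels."""
    signal, labels = [], []
    for _ in range(n_segments):
        if rng.random() < 0.5:
            signal.append(sine_seg)
            labels.append(np.zeros(SEG))
        else:
            signal.append(square_seg)
            labels.append(np.ones(SEG))
    return np.concatenate(signal), np.concatenate(labels)

# 40 segments = 400 samples = 4 ms of signal at the stated sampling rate.
u, y = make_signal(40)
```

The resulting pair `(u, y)` drives the reservoir and provides the per-sample targets for training the readout.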
Figure 6b,c present the classification results produced by the bubble-based RC system equipped with six and sixteen neurons, respectively. The solid black line represents the target output, while the red line shows the output of the reservoir. We note that the target signal is displayed in this figure only for comparison purposes and is not provided to the RC system during the exploitation stage. For the configuration with six neurons, the system achieved an NMSE of 6.77 × 10⁻², while, for 16 neurons, the NMSE decreased to 8.94 × 10⁻⁴, resulting in nearly perfect graphical accuracy.

4.4. Impact of Training Data Noise

An active area of current research within the broader field of RC systems is the investigation of the noise robustness of RC algorithms and physical implementations [99,100,101]. Generally speaking, an RC architecture is expected to be more robust to noise than competing ML algorithms because the internal dynamics of the reservoir is fixed and only the readout layer is trained [6,7,56]. Additionally, the temporal memory and low-pass filtering characteristics of many reservoir designs (like liquid-state machines [48] and echo-state networks [6]) should help to suppress short-term high-frequency noise while preserving the underlying dynamics of the signal of interest [56].
However, in practice, each specific implementation of the RC algorithm has its own strengths and limitations in handling noisy inputs, which continues to motivate further research in this area [99,100,101]. For instance, it has been demonstrated that the so-called next-generation RC systems [87] exhibit different noise-handling characteristics compared with more traditional RC approaches [100]. Another recent study has demonstrated that any physical analog RC system exposed to noise can only support a limited amount of learning, even when aided by exponential post-processing efforts [101].
We analyzed the impact of noise on the bubble-based RC system operating in three modes: predictive, free-running, and classification. In these tests, random noise was added to the training dataset, with the noise level expressed as a percentage of the amplitude of the input signal. Our results show that, in the predictive mode—both in the periodic and chaotic regimes—the system maintains its computational efficiency for moderate noise levels of up to 4%. In contrast, the free-running mode is more sensitive to noise, with performance degrading significantly at noise levels above 1%. In the classification mode, the reservoir remains largely reliable. However, under a 1% noise level, the output contrast between logical ‘0’ and ‘1’ decreases from approximately 1 and 0.9 in Figure 6b,c, respectively, to approximately 0.3 in Figure 7, where achieving accurate classification may necessitate an additional standard signal post-processing step.
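A minimal sketch of the noise-injection step is given below; the noise level is specified as a percentage of the input amplitude as stated above, while the uniform distribution used here is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def add_noise(signal, level_percent):
    """Add random noise scaled to a percentage of the signal amplitude.

    The uniform distribution on [-1, 1] is an illustrative assumption;
    only the percentage scaling is taken from the text.
    """
    signal = np.asarray(signal, dtype=float)
    amplitude = np.max(np.abs(signal))
    noise = rng.uniform(-1.0, 1.0, signal.shape)
    return signal + (level_percent / 100.0) * amplitude * noise
```

For example, `add_noise(u_train, 4)` perturbs every training sample by at most 4% of the peak input amplitude.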
The observed impact of noise on the performance of the RC system correlates with the physical origin of its nonlinear computational dynamics. Specifically, the sensitivity to input noise arises from the physical design of the reservoir, where the input signal amplitude is mapped to the peak acoustic pressure that drives bubble oscillations. While this sensitivity may require mitigation—such as through signal filtering, a widely used and well-established strategy [99]—addressing such measures lies beyond the scope of the present study. Nevertheless, as discussed in the following section, this same design feature can be advantageous. Indeed, the inherent nonlinearity of the bubble-based RC system and signal-to-pressure mapping may contribute to a degree of robustness against out-of-distribution inputs, highlighting its potential utility in dynamic or uncertain real-world environments.

5. Conclusions and Outlook

In this work, we have presented a detailed study on the performance of a bubble-based RC system for both time-series prediction and classification tasks. Our results confirm that this physical reservoir can effectively handle nonlinear and chaotic data while requiring fewer computational resources compared to traditional RC approaches.
The bubble-based RC system exhibits strong predictive performance for chaotic time series, as demonstrated by the Hénon map test, while requiring only a minimal number of virtual neurons. It operates effectively in both periodic and chaotic physical regimes, with the chaotic regime providing computational advantages by capturing rich temporal dependencies. The results from the free-running mode confirm that the bubble-based RC system can autonomously generate future time-series points. Furthermore, the classification test highlights its high accuracy in pattern recognition, showcasing its potential for applications beyond conventional RC frameworks.
The potential to implement the proposed RC system in hardware has been discussed throughout this paper in various contexts. An earlier attempt was made to investigate an RC system based on a cluster of oscillating bubbles (unpublished). While using a cluster may be advantageous as it produces a strong signal compared with that originating from a single bubble, post-processing signals from multiple bubbles simultaneously has proven technically challenging, although it has produced practically valuable by-product results [51]. Since the acoustic response of a single bubble remains strong enough for practical use, potentially requiring only basic amplification and filtering when measured electronically or optically, we anticipate that standard laboratory equipment typically used in experiments with acoustically oscillating bubbles will suffice to attempt implementing the proposed RC system in hardware.
As also discussed in Appendix A, a carefully planned and optimized experimental hardware implementation of the proposed bubble-based RC system may yield an energy-efficient solution, whose consumption is comparable to that of a single computational run of a state-of-the-art long short-term memory (LSTM) model, a technique that is highly optimized for predicting complex time series, implemented using modern software ML libraries, and executed on an expensive high-performance workstation. Although the LSTM software model can produce a single forecast at a minimal cost—approximately USD 0.000008—with very high repeatability, it is important to recognize that the optimized hardware system, despite costing around USD 0.0001 per forecast, offers benefits that extend well beyond energy efficiency. Indeed, both approaches achieve reliable predictive performance, but the value of a physical reservoir-computing system lies not just in its accuracy or cost-effectiveness. Rather, it resides in its potential to generate new scientific insights and drive discovery in ways that purely digital models cannot.
In fact, from a philosophical perspective, even highly optimized and efficient ML algorithms like LSTM, while powerful, are ultimately limited by the assumptions and constraints imprinted into their software frameworks. Physical experiments, on the other hand, possess the unique capacity to reveal unexpected nonlinear phenomena, provide robust ground truth for validating models, and uncover emergent behaviors arising from complex multi-physics interactions. These are aspects that software, no matter how sophisticated, often fails to capture. Therefore, while ML can replicate patterns with great efficiency, physical systems remain irreplaceable in advancing fundamental understanding and inspiring new lines of scientific inquiry.
Future research will focus on optimizing the parameter space of the bubble-based physical system, exploring additional nonlinear datasets, and investigating practical implementations in real-world scenarios such as environmental monitoring, autonomous vehicle guiding, and biomedical signal processing. Furthermore, future work will explore the use of different bubble sizes to predict various time series and the implementation of multiple reservoirs to enhance the performance of bubble-based prediction tasks.
Finally, we note that the proposed physics-based reservoir-computing algorithm can be extended to operate with irregularly sampled input data, as discussed in Ref. [102]. In fact, the interpolation echo-state network framework introduced therein can, in principle, be implemented using the rich nonlinear dynamical behavior of a physical oscillating bubble system. This extension has the potential to broaden the applicability of our approach, particularly to real-world scenarios where data are often sampled non-uniformly—such as biomedical signals, environmental monitoring, and financial time series [102].
In addition, the inherent nonlinearity of the proposed bubble-based reservoir may provide a degree of robustness to out-of-distribution inputs. This is particularly due to the physical design of the RC system, where the input signal amplitude is mapped to the peak acoustic pressure that drives the oscillations of the bubble. Generally speaking, while conventional deep learning models are often sensitive to deviations from the training distribution [103], physical reservoir systems—owing to their continuous state space—may exhibit improved generalization in such scenarios [7,31,56]. These characteristics suggest that the bubble-based reservoir could be a promising candidate for applications in uncertain or dynamically changing environments, where inputs are unpredictable or only partially observed [19].

Author Contributions

Conceptualization, H.A.-G., A.H.A. and I.S.M.; Methodology, H.A.-G., A.H.A. and I.S.M.; Software, H.A.-G., A.H.A. and I.S.M.; Validation, H.A.-G., A.H.A. and I.S.M.; Formal analysis, H.A.-G., A.H.A. and I.S.M.; Investigation, H.A.-G., A.H.A. and I.S.M.; Resources, H.A.-G., A.H.A. and I.S.M.; Data curation, H.A.-G., A.H.A. and I.S.M.; Writing—original draft, H.A.-G., A.H.A. and I.S.M.; Writing—review & editing, H.A.-G., A.H.A. and I.S.M.; Visualization, H.A.-G. and I.S.M.; Supervision, I.S.M.; Project administration, I.S.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

    The following abbreviations are used in this manuscript:
AI: artificial intelligence
AR: autoregression
LSTM: long short-term memory
ML: machine learning
NC: neuromorphic computing
NMSE: normalized mean squared error
PF: polynomial fitting
RC: reservoir computing

Appendix A. Energy Consumption Analysis and Comparative Evaluation with Non-Machine-Learning Methods and Long Short-Term Memory Model

In the following, we first compare the output of the bubble-based RC system with predictions made using two non-ML-based models—polynomial fitting (PF) [104] and autoregression (AR) [105]. Then, demonstrating an advantage of the bubble-based RC system over PF and AR, we compare its performance with a state-of-the-art long short-term memory (LSTM) [106,107] neural network implemented using the PyTorch 2.7 ML library in Python 3.1. Showing that the bubble RC system can be successfully trained and exploited on an approximately five times shorter dataset than that required to construct an efficient LSTM model, we also compare the energy consumed by the bubble-based RC system with that consumed by a single run of the LSTM software implemented on a high-end workstation computer.
Figure A1. Predictions of the x-component of the Hénon map obtained using the PF method and the AR method are compared against the ground truth. The full time series comprises 200 discrete time steps (in arbitrary units), of which the first 150 points were used for training and the remaining 50 for testing. The vertical dashed line indicates the train–test split.

Appendix A.1. Non-Machine-Learning-Based Methods

Figure A1 shows the predictions of the x-component of the Hénon map obtained using the PF method (ridge-regularized polynomial regression with a window size of 5 and polynomial degree of 3) and the AR method, and compares them with the ground truth. The NMSE is approximately 11.73 for the PF method and 0.97 for the AR method. We can see that, while PF and AR can capture some features of the Hénon map, the overall performance of these techniques—both in terms of NMSE and visual comparison—is inferior compared with the bubble-based RC system.
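The two baselines can be sketched as follows. The PF model is a ridge-regularized regression on polynomial features of a 5-sample sliding window (cross terms are omitted here for brevity, which is our simplification), and the AR model is an ordinary least-squares fit on lagged values:

```python
import numpy as np

def windows(series, w):
    """Stack sliding windows of length w as rows; targets are the next value."""
    series = np.asarray(series, dtype=float)
    X = np.array([series[i:i + w] for i in range(len(series) - w)])
    y = series[w:]
    return X, y

def fit_ridge_poly(series, w=5, degree=3, lam=1e-3):
    """PF baseline: ridge regression on per-lag powers up to `degree`.

    Cross terms between lags are omitted, an assumption on our part;
    window size 5 and degree 3 follow the settings quoted above.
    """
    X, y = windows(series, w)
    Phi = np.hstack([X ** d for d in range(1, degree + 1)])
    Phi = np.hstack([np.ones((len(Phi), 1)), Phi])   # bias column
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)

def fit_ar(series, p=5):
    """AR baseline: ordinary least squares on p lagged values."""
    X, y = windows(series, p)
    X = np.hstack([np.ones((len(X), 1)), X])         # bias column
    return np.linalg.lstsq(X, y, rcond=None)[0]
```

Both models fit a fixed functional form to lagged observations, which is precisely the limitation discussed below in the context of chaotic dynamics.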
Indeed, in general, PF and AR models are not the ideal approaches to predict the Hénon map because they are inherently designed for smooth, continuous, and often linear or mildly nonlinear systems. The Hénon map, however, is a chaotic dynamical system characterized by sensitive dependence on initial conditions and nonlinearity. In particular, PF attempts to capture data behavior by fitting a static curve to observations, but chaotic systems like the Hénon map exhibit dynamics that evolve over time and cannot be well described by a single polynomial equation of fixed degree. Higher-degree polynomials can approximate parts of the trajectory but often lead to overfitting, numerical instability, and poor generalization beyond the training interval, especially when attempting long-term predictions—which is demonstrated in Figure A1.
Similarly, standard AR models assume a linear relationship between current and past values, making them inadequate for capturing the complex nonlinear feedback mechanisms present in the Hénon map. While AR can model short-term trends in simple time series, it lacks the computational power to reconstruct the folding and stretching mechanisms typical of the Hénon time series. This also concerns nonlinear AR models, where the choice regarding lag, embedding dimension, and model structure becomes critical and highly sensitive.

Appendix A.2. Long Short-Term Memory Model

LSTM neural networks are well-suited for modeling systems like the Hénon map because they are specifically designed to capture complex temporal dependencies, including nonlinearity and long-range correlations in time series data [106,107]. Unlike PF and AR models, which have fixed functional forms and limited memory, LSTMs use a recurrent architecture with gated memory cells that can learn when to remember or forget information over time. This dynamic memory mechanism enables LSTMs to effectively model the recursive and sensitive nature of chaotic dynamics, where future states depend in intricate ways on long sequences of past states. Moreover, LSTMs can generalize better to unseen data when trained properly, making them useful for forecasting chaotic signals over short to medium horizons. This combination of memory, nonlinearity, and adaptability gives LSTMs a significant advantage over non-ML-based models in learning and predicting complex dynamical behavior.
Nevertheless, as shown in the top panel of Figure A2, LSTM requires at least 800 training data points to conduct a feasible long-term forecast. Indeed, the result presented in the bottom panel of Figure A2 demonstrates that, unlike the bubble-based RC system, the LSTM model cannot be trained to create a feasible forecast when it is presented with just 150 data points. (A similar observation holds for other types of hardware-based fluid-dynamical RC systems when compared with algorithmic RC systems [108].) In fact, LSTMs, due to their large number of trainable parameters, are prone to instability and poor generalization when trained on small datasets, especially those with complex nonlinear behavior [109].
Figure A2. Predictions of the x-component of the Hénon map obtained using the LSTM model. In the top panel, the full time series comprises 1000 discrete time steps (in arbitrary units), of which the first 800 points were used for training and the remaining 200 for testing (only the testing part is shown). In the bottom panel, the LSTM model is trained on 150 data points—the number of points used to train the bubble-based RC system. The titles of each panel also show the energy consumption per one run of the respective LSTM models.

Appendix A.3. Energy Consumption Analysis

The benchmark computer used in this work is a Mac Studio M2 Ultra 20-core CPU workstation. When idle, it consumes approximately 10 W of electric power but can reach more than 200 W at maximum performance [110]. Using customized software that implements the LSTM model, we estimated the energy consumption by the workstation based on its processor type and workload activity. Upon initialization, using Python’s os library, the software detects the CPU type by querying the system information and then uses the power consumption profiles predefined for both idle and maximum performance states [110]. Then, the algorithm calculates the average power draw by interpolating between idle and maximum active power based on a parameter representing workload intensity. The total energy consumption in watt-hours (Wh) is then derived by scaling the average power by the operational duration (in seconds) and converting to hours.
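The interpolation step described above amounts to the following calculation; the 10 W idle and 200 W peak figures are taken from the text, and `workload` stands for the workload-intensity parameter:

```python
IDLE_W, MAX_W = 10.0, 200.0   # Mac Studio M2 Ultra figures quoted above

def run_energy_wh(duration_s, workload, idle_w=IDLE_W, max_w=MAX_W):
    """Estimate energy per run by interpolating between idle and peak power.

    `workload` in [0, 1] is the workload-intensity parameter; the linear
    interpolation mirrors the procedure described in the text.
    """
    avg_power_w = idle_w + workload * (max_w - idle_w)
    return avg_power_w * duration_s / 3600.0   # W*s -> Wh
```

For instance, a one-hour run at full load yields 200 Wh, while the same run at idle yields 10 Wh.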
To estimate the electrical power consumption of the bubble-based RC system (labeled as “Macro Bubble” in Table A1), we consider a typical experimental setup used in studies investigating the dynamical properties of acoustic bubbles [54]. Such a setup typically comprises an imaging system, including a digital camera or a low-power light-emitting diode. The acoustic excitation system consists of a piezoelectric transducer, a piezoelectric driver, and a signal generator. Assuming the experimental equipment is pre-calibrated and operational, a standard experiment would require 3–5 min to complete. Additionally, we estimate the power consumption of a potential microfluidic implementation of the aforementioned setup [111,112], which is expected to draw significantly less power while remaining physically consistent with the underlying physics of the macro-scale scheme.
Furthermore, although the linear readout of a physical RC system can generally be implemented using electronic circuits [113] or other hardware-based approaches [40,114,115], this computational step is often delegated to a digital computer system [7,12] for research convenience. However, in a practical bubble-based RC system scenario analyzed in this section, it would be sufficient to use a micro-computer such as Raspberry Pi 4B [116] (or an even less computationally powerful microcontroller [108]) that would additionally consume approximately 50 J per one computational run.
As shown in Table A1, a single run of the LSTM model consumes approximately 130 J of energy, compared to at least 10,000 J required by the macro-scale experimental setup implementing the bubble-based RC system. In contrast, in the best-case scenario, the microfluidic version of the setup—which uses a significantly smaller volume of liquid and offers a higher energy density figure of merit—requires approximately the same amount of energy as the LSTM model.
We also calculate the electricity cost of one run based on the energy per run and an average industrial electricity price of USD 0.12 per kWh (which varies by region). Thus, one LSTM run costs approximately USD 0.000008. In turn, one computational run of the macro-scale and microfluidic system would cost at least USD 0.03 and USD 0.0001, respectively. Finally, we estimate that the LSTM model produces results with very high repeatability, compared to the moderate and high repeatability expected from the macro-scale and microfluidic setups.
Table A1. Comparison of the LSTM and macro-scale and microfluidic bubble-based RC systems.
Metric | Macro Bubble | Microfluidic | LSTM Model
Energy per run (J) | 10,000–500,000 | 150–1000 | 130
Volume | 10–100 mL | 10–100 μL | N/A
Energy density | 0.1–5 MJ/L | 1–10 MJ/L | N/A
Run cost (USD) | 0.03–0.60 | 0.0001–0.003 | 0.000008
Repeatability | Moderate | High | Very High

References

  1. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 2019, 378, 686–707. [Google Scholar] [CrossRef]
  2. Doan, N.A.K.; Polifke, W.; Magri, L. Short- and long-term predictions of chaotic flows and extreme events: A physics-constrained reservoir computing approach. Proc. R. Soc. A 2021, 477, 20210135. [Google Scholar] [CrossRef]
  3. Pfeffer, P.; Heyder, F.; Schumacher, J. Hybrid quantum-classical reservoir computing of thermal convection flow. Phys. Rev. Res. 2022, 4, 033176. [Google Scholar] [CrossRef]
  4. Perrusquía, A.; Guo, W. Reservoir computing for drone trajectory intent prediction: A physics informed approach. IEEE Trans. Cybern. 2024, 54, 4939–4948. [Google Scholar] [CrossRef] [PubMed]
  5. Choi, S.; Salamin, Y.; Roques-Carmes, C.; Dangovski, R.; Luo, D.; Chen, Z.; Horodynski, M.; Sloan, J.; Uddin, S.Z.; Soljačić, M. Photonic probabilistic machine learning using quantum vacuum noise. Nat. Commun. 2024, 15, 7760. [Google Scholar] [CrossRef] [PubMed]
  6. Lukoševičius, M.; Jaeger, H. Reservoir computing approaches to recurrent neural network training. Comput. Sci. Rev. 2009, 3, 127–149. [Google Scholar] [CrossRef]
  7. Tanaka, G.; Yamane, T.; Héroux, J.B.; Nakane, R.; Kanazawa, N.; Takeda, S.; Numata, H.; Nakano, D.; Hirose, A. Recent advances in physical reservoir computing: A review. Neural Netw. 2019, 115, 100–123. [Google Scholar] [CrossRef] [PubMed]
  8. Nakajima, K.; Fisher, I. Reservoir Computing; Springer: Berlin/Heidelberg, Germany, 2021. [Google Scholar]
  9. Marković, D.; Mizrahi, A.; Querlioz, D.; Grollier, J. Physics for neuromorphic computing. Nat. Rev. Phys. 2020, 2, 499–510. [Google Scholar] [CrossRef]
  10. Cucchi, M.; Abreu, S.; Ciccone, G.; Brunner, D.; Kleemann, H. Hands-on reservoir computing: A tutorial for practical implementation. Neuromorph. Comput. Eng. 2022, 2, 032002. [Google Scholar] [CrossRef]
  11. Furuta, T.; Fujii, K.; Nakajima, K.; Tsunegi, S.; Kubota, H.; Suzuki, Y.; Miwa, S. Macromagnetic simulation for reservoir computing utilizing spin dynamics in magnetic tunnel junctions. Phys. Rev. Appl. 2018, 10, 034063. [Google Scholar] [CrossRef]
  12. Watt, S.; Kostylev, M. Reservoir computing using a spin-wave delay-line active-ring resonator based on yttrium-iron-garnet film. Phys. Rev. Appl. 2020, 13, 034057. [Google Scholar] [CrossRef]
  13. Allwood, D.A.; Ellis, M.O.A.; Griffin, D.; Hayward, T.J.; Manneschi, L.; Musameh, M.F.K.; O’Keefe, S.; Stepney, S.; Swindells, C.; Trefzer, M.A.; et al. A perspective on physical reservoir computing with nanomagnetic devices. Appl. Phys. Lett. 2023, 122, 040501. [Google Scholar] [CrossRef]
  14. Marković, D.; Grollier, J. Quantum neuromorphic computing. Appl. Phys. Lett. 2020, 117, 150501. [Google Scholar] [CrossRef]
  15. Bravo, R.A.; Najafi, K.; Gao, X.; Yelin, S.F. Quantum Reservoir Computing Using Arrays of Rydberg Atoms. PRX Quantum 2022, 3, 030325. [Google Scholar] [CrossRef]
  16. Govia, L.C.G.; Ribeill, G.J.; Rowlands, G.E.; Krovi, H.K.; Ohki, T.A. Quantum reservoir computing with a single nonlinear oscillator. Phys. Rev. Res. 2021, 3, 013077. [Google Scholar] [CrossRef]
  17. Abbas, A.H.; Maksymov, I.S. Reservoir computing using measurement-controlled quantum dynamics. Electronics 2024, 13, 1164. [Google Scholar] [CrossRef]
  18. Goto, K.; Nakajima, K.; Notsu, H. Twin vortex computer in fluid flow. New J. Phys. 2021, 23, 063051. [Google Scholar] [CrossRef]
  19. Abbas, A.H.; Abdel-Ghani, H.; Maksymov, I.S. Classical and quantum physical reservoir computing for onboard artificial intelligence systems: A perspective. Dynamics 2024, 4, 643–670. [Google Scholar] [CrossRef]
  20. Maksymov, I.S.; Pototsky, A. Reservoir computing based on solitary-like waves dynamics of liquid film flows: A proof of concept. EPL 2023, 142, 43001. [Google Scholar] [CrossRef]
  21. Marcucci, G.; Caramazza, P.; Shrivastava, S. A new paradigm of reservoir computing exploiting hydrodynamics. Phys. Fluids 2023, 35, 071703. [Google Scholar] [CrossRef]
  22. Matsuo, T.; Sato, D.; Koh, S.G.; Shima, H.; Naitoh, Y.; Akinaga, H.; Itoh, T.; Nokami, T.; Kobayashi, M.; Kinoshita, K. Dynamic nonlinear behavior of ionic liquid-based reservoir computing devices. ACS Appl. Mater. Interfaces 2022, 14, 36890–36901. [Google Scholar] [CrossRef] [PubMed]
  23. Nakajima, K.; Hauser, H.; Kang, R.; Guglielmino, E.; Caldwell, D.; Pfeifer, R. A soft body as a reservoir: Case studies in a dynamic model of octopus-inspired soft robotic arm. Front. Comput. Neurosci. 2013, 7, 91. [Google Scholar] [CrossRef]
  24. Nakajima, K.; Fischer, I.; Hauser, H. Exploiting physical reservoir computing with soft materials. J. R. Soc. Interface 2018, 15, 20180282. [Google Scholar] [CrossRef]
  25. Chembo, Y.K. Machine learning based on reservoir computing with time-delayed optoelectronic and photonic systems. Chaos 2020, 30, 013111. [Google Scholar] [CrossRef]
  26. Penkovsky, B.; Larger, L.; Brunne, D. Efficient design of hardware-enabled reservoir computing in FPGAs. J. Appl. Phys. 2018, 124, 162101. [Google Scholar] [CrossRef]
  27. Sun, L.; Wang, Z.; Jiang, J.; Kim, Y.; Joo, B.; Zheng, S.; Lee, S.; Yu, W.J.; Kong, B.S.; Yang, H. In-sensor reservoir computing for language learning via two-dimensional memristors. Sci. Adv. 2021, 7, eabg1455. [Google Scholar] [CrossRef]
  28. Cao, J.; Zhang, X.; Cheng, H.; Qiu, J.; Liu, X.; Wang, M.; Liu, Q. Emerging dynamic memristors for neuromorphic reservoir computing. Nanoscale 2022, 14, 289–298. [Google Scholar] [CrossRef]
  29. Ikeda, S.; Awano, H.; Sato, T. Modular DFR: Digital delayed feedback reservoir model for enhancing design flexibility. ACM Trans. Embed. Comput. Syst. 2023, 22, 014008. [Google Scholar] [CrossRef]
  30. Wang, R.; Liang, Q.; Wang, S.; Cao, Y.; Ma, X.; Wang, H.; Hao, Y. Deep reservoir computing based on self-rectifying memristor synapse for time series prediction. Appl. Phys. Lett. 2023, 123, 042109. [Google Scholar] [CrossRef]
  31. Maksymov, I.S.; Pototsky, A.; Suslov, S.A. Neural echo state network using oscillations of gas bubbles in water. Phys. Rev. E 2022, 105, 044206. [Google Scholar] [CrossRef] [PubMed]
  32. Henderson, A.; Yakopcic, C.; Harbour, S.; Taha, T.M. Detection and classification of drones through acoustic features using a spike-based reservoir computer for low power applications. In Proceedings of the 2022 IEEE/AIAA 41st Digital Avionics Systems Conference (DASC), Portsmouth, VA, USA, 18–22 September 2022; pp. 1–7. [Google Scholar] [CrossRef]
  33. Lymburn, T.; Algar, S.D.; Small, M.; Jüngling, T. Reservoir computing with swarms. Chaos 2020, 31, 033121. [Google Scholar] [CrossRef] [PubMed]
  34. Natschläger, T.; Maass, W.; Markram, H. The “Liquid Computer”: A novel strategy for real-time computing on time series. Telematik 2002, 8, 39. [Google Scholar]
  35. Jones, B.; Stekel, D.; Rowe, J.; Fernando, C. Is there a Liquid State Machine in the Bacterium Escherichia coli? In Proceedings of the 2007 IEEE Symposium on Artificial Life, Honolulu, HI, USA, 1–5 April 2007; pp. 187–191. [Google Scholar]
  36. Adamatzky, A. Advances in Unconventional Computing. Volume 2: Prototypes, Models and Algorithms; Springer: Berlin/Heidelberg, Germany, 2017. [Google Scholar]
  37. Adamatzky, A. A brief history of liquid computers. Philos. Trans. R. Soc. B 2019, 374, 20180372. [Google Scholar] [CrossRef]
  38. Marcucci, G.; Pierangeli, D.; Conti, C. Theory of neuromorphic computing by waves: Machine learning by rogue waves, dispersive shocks, and solitons. Phys. Rev. Lett. 2020, 125, 093901. [Google Scholar] [CrossRef]
  39. Kheirabadi, N.R.; Chiolerio, A.; Szaciłowski, K.; Adamatzky, A. Neuromorphic liquids, colloids, and gels: A review. ChemPhysChem 2023, 24, e202200390. [Google Scholar] [CrossRef] [PubMed]
  40. Maksymov, I.S. Analogue and physical reservoir computing using water waves: Applications in power engineering and beyond. Energies 2023, 16, 5366. [Google Scholar] [CrossRef]
  41. Verdecchia, R.; Sallou, J.; Cruz, L. A systematic review of Green AI. WIREs Data Min. Knowl. 2023, 13, e1507. [Google Scholar] [CrossRef]
  42. Akhatov, I.; Gumerov, N.; Ohl, C.D.; Parlitz, U.; Lauterborn, W. The role of surface tension in stable single-bubble sonoluminescence. Phys. Rev. Lett. 1997, 78, 227–230. [Google Scholar] [CrossRef]
  43. Lauterborn, W.; Kurz, T. Physics of bubble oscillations. Rep. Prog. Phys. 2010, 73, 106501. [Google Scholar] [CrossRef]
  44. Tandiono; Ohl, S.W.; Ow, D.S.W.; Klaseboer, E.; Wong, V.V.; Dumke, R.; Ohl, C.D. Sonochemistry and sonoluminescence in microfluidics. Proc. Natl. Acad. Sci. USA 2011, 108, 5996–5998. [Google Scholar] [CrossRef]
  45. Maksymov, I.S. Gas Bubble Photonics: Manipulating Sonoluminescence Light with Fluorescent and Plasmonic Nanoparticles. Appl. Sci. 2022, 12, 8790. [Google Scholar] [CrossRef]
  46. McKenna, T.M.; McMullen, T.A.; Shlesinger, M.F. The brain as a dynamic physical system. Neuroscience 1994, 60, 587–605. [Google Scholar] [CrossRef]
  47. Korn, H.; Faure, P. Is there chaos in the brain? II. Experimental evidence and related models. Comptes Rendus Biol. 2003, 326, 787–840. [Google Scholar] [CrossRef]
  48. Maass, W.; Natschläger, T.; Markram, H. Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Comput. 2002, 14, 2531–2560. [Google Scholar] [CrossRef]
  49. Jaeger, H.; Haas, H. Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication. Science 2004, 304, 78–80. [Google Scholar] [CrossRef] [PubMed]
  50. Krauhausen, I.; Coen, C.T.; Spolaor, S.; Gkoupidenis, P.; van de Burgt, Y. Brain-inspired organic electronics: Merging neuromorphic computing and bioelectronics using conductive polymers. Adv. Funct. Mater. 2024, 34, 2307729. [Google Scholar] [CrossRef]
  51. Nguyen, B.Q.H.; Maksymov, I.S.; Suslov, S.A. Spectrally wide acoustic frequency combs generated using oscillations of polydisperse gas bubble clusters in liquids. Phys. Rev. E 2021, 104, 035104. [Google Scholar] [CrossRef]
  52. Maksymov, I. Artificial musical creativity enabled by nonlinear oscillations of a bubble acting as a physical reservoir computing system. Int. J. Unconvent. Comput. 2023, 18, 249–269. [Google Scholar]
  53. Gaitan, D.F.; Crum, L.A.; Church, C.C.; Roy, R.A. Sonoluminescence and bubble dynamics for a single, stable, cavitation bubble. J. Acoust. Soc. Am. 1992, 91, 3166–3183. [Google Scholar] [CrossRef]
  54. Nguyen, B.Q.H.; Maksymov, I.S.; Suslov, S.A. Acoustic frequency combs using gas bubble cluster oscillations in liquids: A proof of concept. Sci. Rep. 2021, 11, 38. [Google Scholar] [CrossRef]
  55. Lukoševičius, M. A Practical Guide to Applying Echo State Networks. In Neural Networks: Tricks of the Trade, Reloaded; Montavon, G., Orr, G.B., Müller, K.R., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 659–686. [Google Scholar]
  56. Jaeger, H. Short Term Memory in Echo State Networks; GMD Report 152; German National Research Center for Information Technology: Birlinghoven, Germany, 2001. [Google Scholar]
  57. Rayleigh, L. On the pressure developed in a liquid during the collapse of a spherical cavity. Philos. Mag. 1917, 34, 94–98. [Google Scholar] [CrossRef]
  58. Cole, R.H. Underwater Explosions; Princeton University Press: New York, NY, USA, 1948. [Google Scholar]
  59. Plesset, M.S. The dynamics of cavitation bubbles. J. Appl. Mech. 1949, 16, 228–231. [Google Scholar] [CrossRef]
  60. Keller, J.B.; Miksis, M. Bubble oscillations of large amplitude. J. Acoust. Soc. Am. 1980, 68, 628–633. [Google Scholar] [CrossRef]
  61. Minnaert, M. On musical air-bubbles and the sound of running water. Phil. Mag. 1933, 16, 235–248. [Google Scholar] [CrossRef]
  62. Prosperetti, A. Nonlinear oscillations of gas bubbles in liquids: Steady-state solutions. J. Acoust. Soc. Am. 1974, 56, 878–885. [Google Scholar] [CrossRef]
  63. Prosperetti, A. Application of the subharmonic threshold to the measurement of the damping of oscillating gas bubbles. J. Acoust. Soc. Am. 1977, 61, 11–16. [Google Scholar] [CrossRef]
  64. Prosperetti, A. Bubble phenomena in sound fields: Part one. Ultrasonics 1984, 22, 69–77. [Google Scholar] [CrossRef]
  65. Parlitz, U.; Englisch, V.; Scheffczyk, C.; Lauterborn, W. Bifurcation structure of bubble oscillators. J. Acoust. Soc. Am. 1990, 88, 1061–1077. [Google Scholar] [CrossRef]
  66. Lauterborn, W.; Mettin, R. Nonlinear Bubble Dynamics. In Sonochemistry and Sonoluminescence; Crum, L.A., Mason, T.J., Reisse, J.L., Suslick, K.S., Eds.; Springer: Dordrecht, The Netherlands, 1999; pp. 63–72. [Google Scholar]
  67. Leighton, T. The Acoustic Bubble; Academic Press: Cambridge, MA, USA, 2012. [Google Scholar]
  68. Brennen, C.E. Cavitation and Bubble Dynamics; Oxford University Press: Oxford, UK, 1995. [Google Scholar]
  69. Mettin, R.; Akhatov, I.; Parlitz, U.; Ohl, C.D.; Lauterborn, W. Bjerknes forces between small cavitation bubbles in a strong acoustic field. Phys. Rev. E 1997, 56, 2924–2931. [Google Scholar] [CrossRef]
  70. Doinikov, A.A. Translational motion of two interacting bubbles in a strong acoustic field. Phys. Rev. E 2001, 64, 026301. [Google Scholar] [CrossRef]
  71. Doinikov, A.A. Translational motion of a spherical bubble in an acoustic standing wave of high intensity. Phys. Fluids 2002, 14, 1420. [Google Scholar] [CrossRef]
  72. Suslov, S.A.; Ooi, A.; Manasseh, R. Nonlinear dynamic behavior of microscopic bubbles near a rigid wall. Phys. Rev. E 2012, 85, 066309. [Google Scholar] [CrossRef]
  73. Sojahrood, A.; Earl, R.; Kolios, M.; Karshafian, R. Investigation of the 1/2 order subharmonic emissions of the period-2 oscillations of an ultrasonically excited bubble. Ultrason. Sonochem. 2021, 72, 105423. [Google Scholar] [CrossRef]
  74. Sojahrood, A.J.; Kolios, M.C. Classification of the nonlinear dynamics and bifurcation structure of ultrasound contrast agents excited at higher multiples of their resonance frequency. Phys. Lett. A 2012, 376, 2222–2229. [Google Scholar] [CrossRef]
  75. Haghi, H.; Sojahrood, A.; Kolios, M. Collective nonlinear behavior of interacting polydisperse microbubble clusters. Ultrason. Sonochem. 2019, 58, 104708. [Google Scholar] [CrossRef]
  76. Maksymov, I.S.; Nguyen, B.Q.H.; Suslov, S.A. Biomechanical sensing using gas bubbles oscillations in liquids and adjacent technologies: Theory and practical applications. Biosensors 2022, 12, 624. [Google Scholar] [CrossRef] [PubMed]
  77. Maksymov, I.S.; Nguyen, B.Q.H.; Pototsky, A.; Suslov, S.A. Acoustic, phononic, Brillouin light scattering and Faraday wave-based frequency combs: Physical foundations and applications. Sensors 2022, 22, 3921. [Google Scholar] [CrossRef] [PubMed]
  78. Zandi-Mehran, N.; Nazarimehr, F.; Rajagopal, K.; Ghosh, D.; Jafari, S.; Chen, G. FFT bifurcation: A tool for spectrum analyzing of dynamical systems. Appl. Math. Comput. 2022, 422, 126986. [Google Scholar] [CrossRef]
  79. Nakajima, K.; Fischer, I. (Eds.) Reservoir Computing: Theory, Physical Implementations, and Applications; Springer: Berlin/Heidelberg, Germany, 2020. [Google Scholar] [CrossRef]
  80. Bertschinger, N.; Natschläger, T. Real-time computation at the edge of chaos in recurrent neural networks. Neural Comput. 2004, 16, 1413–1436. [Google Scholar] [CrossRef]
  81. Carroll, T.L. Do reservoir computers work best at the edge of chaos? Chaos 2020, 30, 121109. [Google Scholar] [CrossRef]
  82. Bollt, E. On explaining the surprising success of reservoir computing forecaster of chaos? The universal machine learning dynamical system with contrast to VAR and DMD. Chaos 2021, 31, 013108. [Google Scholar] [CrossRef]
  83. Liao, Z.; Wang, Z.; Yamahara, H.; Tabata, H. Echo state network activation function based on bistable stochastic resonance. Chaos Solitons Fract. 2021, 153, 111503. [Google Scholar] [CrossRef]
  84. Nishioka, D.; Tsuchiya, T.; Namiki, W.; Takayanagi, M.; Imura, M.; Koide, Y.; Higuchi, T.; Terabe, K. Edge-of-chaos learning achieved by ion-electron–coupled dynamics in an ion-gating reservoir. Sci. Adv. 2022, 8, eade1156. [Google Scholar] [CrossRef]
  85. Baccetti, V.; Zhu, R.; Kuncic, Z.; Caravelli, F. Ergodicity, lack thereof, and the performance of reservoir computing with memristive networks. Nano Express 2024, 5, 015021. [Google Scholar] [CrossRef]
  86. Abbas, A.H.; Abdel-Ghani, H.; Maksymov, I.S. Edge-of-chaos and chaotic dynamics in resistor-inductor-diode-based reservoir computing. IEEE Access 2025, 13, 18191–18199. [Google Scholar] [CrossRef]
  87. Gauthier, D.J.; Bollt, E.; Griffith, A.; Barbosa, W.A.S. Next generation reservoir computing. Nat. Commun. 2021, 12, 5564. [Google Scholar] [CrossRef]
  88. Harding, S.; Leishman, Q.; Lunceford, W.; Passey, D.J.; Pool, T.; Webb, B. Global forecasts in reservoir computers. Chaos 2024, 34, 023136. [Google Scholar] [CrossRef] [PubMed]
  89. Sun, X.; Gao, J.; Wang, Y. Towards fault tolerance of reservoir computing in time series prediction. Information 2023, 14, 266. [Google Scholar] [CrossRef]
  90. Hénon, M. A two-dimensional mapping with a strange attractor. Commun. Math. Phys. 1976, 50, 69–77. [Google Scholar] [CrossRef]
  91. Lorenz, E.N. Deterministic nonperiodic flow. J. Atmos. Sci. 1963, 20, 130–141. [Google Scholar] [CrossRef]
  92. Watt, S.; Kostylev, M.; Ustinov, A.B.; Kalinikos, B.A. Implementing a magnonic reservoir computer model based on time-delay multiplexing. Phys. Rev. Appl. 2021, 15, 064060. [Google Scholar] [CrossRef]
  93. Lee, J.; De Brouwer, E.; Hamzi, B.; Owhadi, H. Learning dynamical systems from data: A simple cross-validation perspective, Part III: Irregularly-sampled time series. Phys. D 2023, 443, 133546. [Google Scholar] [CrossRef]
  94. Sun, C.; Hong, S.; Song, M.; Chou, Y.H.; Sun, Y.; Cai, D.; Li, H. TE-ESN: Time Encoding Echo State Network for Prediction Based on Irregularly Sampled Time Series Data. arXiv 2021. [Google Scholar] [CrossRef]
  95. Dudas, J.; Carles, B.; Plouet, E.; Mizrahi, F.A.; Grollier, J.; Marković, D. Quantum reservoir computing implementation on coherently coupled quantum oscillators. NPJ Quantum Inf. 2023, 9, 64. [Google Scholar] [CrossRef]
  96. Shougat, M.R.E.U.; Perkins, E. The van der Pol physical reservoir computer. Neuromorph. Comput. Eng. 2023, 3, 024004. [Google Scholar] [CrossRef]
  97. Pathak, J.; Lu, Z.; Hunt, B.R.; Girvan, M.; Ott, E. Using machine learning to replicate chaotic attractors and calculate Lyapunov exponents from data. Chaos 2017, 27, 121102. [Google Scholar] [CrossRef]
  98. Pathak, J.; Hunt, B.R.; Girvan, M.; Lu, Z.; Ott, E. Model-free prediction of large spatiotemporally chaotic systems from data: A reservoir computing approach. Phys. Rev. Lett. 2018, 120, 24102. [Google Scholar] [CrossRef]
  99. Nathe, C.; Pappu, C.; Mecholsky, N.A.; Hart, J.; Carroll, T.; Sorrentino, F. Reservoir computing with noise. Chaos 2023, 33, 041101. [Google Scholar] [CrossRef]
  100. Liu, S.; Xiao, J.; Yan, Z.; Gao, J. Noise resistance of next-generation reservoir computing: A comparative study with high-order correlation computation. Nonlinear Dyn. 2023, 111, 14295–14308. [Google Scholar] [CrossRef]
  101. Polloreno, A.M. Limits to Analog Reservoir Learning. arXiv 2025, arXiv:2307.14474. [Google Scholar]
  102. Li, Z.; Andreev, A.; Hramov, A.; Blyuss, O.; Zaikin, A. Novel efficient reservoir computing methodologies for regular and irregular time series classification. Nonlinear Dyn. 2025, 113, 4045–4062. [Google Scholar] [CrossRef] [PubMed]
  103. Chen, Q.; Li, K.; Chen, Z.; Maul, T.; Yin, J. Exploring feature sparsity for out-of-distribution detection. Sci. Rep. 2024, 14, 28444. [Google Scholar] [CrossRef] [PubMed]
  104. Regonda, S.; Rajagopalan, B.; Lall, U.; Clark, M.; Moon, Y.I. Local polynomial method for ensemble forecast of time series. Nonlinear Process. Geophys. 2005, 12, 397–406. [Google Scholar] [CrossRef]
  105. Shumway, R.H.; Stoffer, D.S. Time Series Analysis and Its Applications; Springer: Berlin/Heidelberg, Germany, 2000. [Google Scholar]
  106. Sangiorgio, M.; Dercole, F.; Guariso, G. Forecasting of noisy chaotic systems with deep neural networks. Chaos Solitons Fractals 2021, 153, 111570. [Google Scholar] [CrossRef]
  107. Vismaya, V.S.; Hareendran, A.; Nair, B.V.; Muni, S.S.; Lellep, M. Comparative Analysis of Predicting Subsequent Steps in Hénon Map. arXiv 2024, arXiv:2405.10190. [Google Scholar]
  108. Maksymov, I.S. Physical reservoir computing enabled by solitary waves and biologically inspired nonlinear transformation of input data. Dynamics 2024, 4, 119–134. [Google Scholar] [CrossRef]
  109. Malashin, I.; Tynchenko, V.; Gantimurov, A.; Nelyub, V.; Borodulin, A. Applications of long short-term memory (LSTM) networks in polymeric sciences: A review. Polymers 2024, 16, 2607. [Google Scholar] [CrossRef] [PubMed]
  110. Apple Inc. Mac Mini (2018)—Technical Specifications. Apple Support. 2020. Available online: https://support.apple.com/en-au/102027 (accessed on 15 March 2024).
  111. Shang, X.; Huang, X. Investigation of the dynamics of cavitation bubbles in a microfluidic channel with actuations. Micromachines 2022, 13, 203. [Google Scholar] [CrossRef]
  112. González, I.; Candil, M.; Luzuriaga, J. Acoustophoretic trapping of particles by bubbles in microfluidics. Front. Phys. 2023, 11, 1062433. [Google Scholar] [CrossRef]
  113. Pedretti, G.; Ielmini, D. In-memory computing with resistive memory circuits: Status and outlook. Electronics 2021, 10, 1063. [Google Scholar] [CrossRef]
  114. Shirmohammadli, V.; Bahreyni, B. Physics-based approach to developing physical reservoir computers. Phys. Rev. Res. 2024, 6, 033055. [Google Scholar] [CrossRef]
  115. Picco, E.; Jaurigue, L.; Lüdge, K.; Massar, S. Efficient optimisation of physical reservoir computers using only a delayed input. Commun. Eng. 2025, 4, 3. [Google Scholar] [CrossRef] [PubMed]
  116. Yu, Z.; Sadati, S.M.H.; Perera, S.; Hauser, H.; Childs, P.R.N.; Nanayakkara, T. Tapered whisker reservoir computing for real-time terrain identification-based navigation. Sci. Rep. 2023, 13, 5213. [Google Scholar] [CrossRef] [PubMed]
Figure 1. (a) Schematic illustration of a traditional RC system. The reservoir consists of a network of interconnected artificial neurons that generate a vector of neural activations x_n from a dataset of input values u_n. Only the linear readout is trained to produce the output y_n. (b) Bubble-based RC system. The input data are encoded in the peak amplitude of the acoustic pressure waves. Neural activations are extracted by sampling the acoustic response of the oscillating bubble, as detailed in the main text and summarized in the inset table. The training and exploitation procedures for the physical RC system mirror those of the traditional algorithmic RC system.
Figure 2. (a) Theoretical bifurcation curve of a single bubble functioning as an RC system and excited with a sinusoidal acoustic wave with the peak pressure P_a. (b) Spectral representation of the nonlinear dynamical regimes of oscillation.
Figure 3. Predictive-mode output of the bubble-based RC system operating in the chaotic regime. (a) The x-component of the Hénon map. The data points to the left and right of the vertical dashed line represent the performance of the RC system in the training and exploitation regimes, respectively. (b) Two-dimensional representation of the predicted Hénon map, with the ground-truth data points shown as black dots and predicted points marked in magenta.
Figure 4. Predictive-mode NMSE plotted as a function of the reservoir size, defined as the total number of virtual nodes k × N , for both periodic and chaotic physical regimes of bubble oscillation.
Figure 5. Free-running mode output of the bubble-based RC system operating in the chaotic regime. (a) A one-dimensional plot of the x-component of the Hénon map. The RC system is trained on the training portion of the time series (up to the vertical dashed line), and an iterative windowing approach is used to generate predictions. (b) Two-dimensional phase-space representation of the Hénon map, where the ground-truth data points are shown as black dots and the points generated by the RC system are marked with magenta dots.
Figure 6. (a) Classification task distinguishing between sinusoidal and square waveforms using the bubble-based RC system with (b) 6 and (c) 16 neurons, respectively.
Figure 7. Result of the classification test under a 1% noise level, for comparison with the noiseless results presented in Figure 6b,c. Panel layout and line color coding follow the same conventions as in Figure 6.
Table 1. Model parameters used in this study.
| Parameter | Value | Unit |
|---|---|---|
| Density of water (ρ) | 998 | kg/m³ |
| Static pressure (P_stat) | 100 × 10³ | Pa |
| Vapor pressure (P_v) | 2.33 × 10³ | Pa |
| Surface tension (σ) | 7.25 × 10⁻² | N/m |
| Gas polytropic exponent (κ) | 1.4 | – |
| Driving acoustic frequency (f_a) | 6.2362 × 10⁶ | Hz |
| Dynamic viscosity (μ) | 1 × 10⁻³ | kg/(m·s) |
| Equilibrium bubble radius (R₀) | 0.8 × 10⁻⁶ | m |
| Velocity of sound in water (c) | 1.50 × 10³ | m/s |
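As a quick consistency check on these parameters, the chosen driving frequency f_a sits essentially at the bubble's linear resonance. A hedged sketch, assuming the standard surface-tension-corrected Minnaert formula for the linear resonance of a gas bubble (this formula is not stated explicitly in this section, and vapor pressure is neglected here for simplicity):

```python
import math

# Model parameters from Table 1 (micron-sized gas bubble in water).
rho    = 998.0      # density of water, kg/m^3
P_stat = 100e3      # static pressure, Pa
sigma  = 7.25e-2    # surface tension, N/m
kappa  = 1.4        # gas polytropic exponent
R0     = 0.8e-6     # equilibrium bubble radius, m

# Surface-tension-corrected Minnaert resonance frequency:
#   f0 = (1 / (2*pi*R0)) * sqrt((3*kappa*(P_stat + 2*sigma/R0) - 2*sigma/R0) / rho)
laplace = 2.0 * sigma / R0   # Laplace pressure inside the bubble, Pa
f0 = math.sqrt((3.0 * kappa * (P_stat + laplace) - laplace) / rho) / (2.0 * math.pi * R0)

print(f"Linear resonance frequency: {f0 / 1e6:.2f} MHz")  # ~6.30 MHz
```

With these values, f0 comes out near 6.3 MHz, i.e. close to the tabulated driving frequency f_a = 6.2362 MHz, consistent with the bubble being driven near resonance.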
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Abdel-Ghani, H.; Abbas, A.H.; Maksymov, I.S. Reservoir Computing with a Single Oscillating Gas Bubble: Emphasizing the Chaotic Regime. AppliedMath 2025, 5, 101. https://doi.org/10.3390/appliedmath5030101
