Article

Necessary and Sufficient Reservoir Condition for Universal Reservoir Computing

1 Department of Mechanical and Aerospace Engineering, Graduate School of Engineering, Nagoya University, Nagoya 464-8603, Japan
2 Department of Mechanical Systems Engineering, Tokyo University of Agriculture and Technology, Koganei 184-8588, Japan
3 Department of Mechanical Engineering, College of Engineering, Chubu University, Kasugai 487-8501, Japan
4 Department of Informatics, Graduate School of Informatics, Kyoto University, Kyoto 606-8501, Japan
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(21), 3440; https://doi.org/10.3390/math13213440
Submission received: 18 September 2025 / Revised: 20 October 2025 / Accepted: 27 October 2025 / Published: 28 October 2025
(This article belongs to the Special Issue Machine Learning: Mathematical Foundations and Applications)

Abstract

We discuss necessary and sufficient conditions for universal approximation by reservoir computing. Reservoir computing is a machine learning method that trains a dynamical system model by tuning only the static part of the model. Universality is the ability of the model to approximate any dynamical system with any precision. In previous studies, we provided two sufficient conditions for universality, employing the definition of universality that has been discussed since the earliest studies on reservoir computing. In the present paper, we prove that these two conditions and universality are equivalent to one another. Using this equivalence, we show that a universal model must have a “pathological” property that can only be achieved or approached by chaotic reservoirs.

1. Introduction

Reservoir computing (RC) is a computationally cheap method for training dynamical system models, initially proposed for recurrent neural networks (RNNs) [1]. RNNs are generally trained by gradient descent, which is computationally expensive because the signal flow must be tracked through the network over a certain period. Jaeger [2] and Maass et al. [3] found that RNNs can achieve approximation tasks when only a static function, which converts the network state to the output, is trained. Their methods were unified as RC, a framework that trains dynamical models by training static functions [4,5].
An RC model is a dynamical system model designed to be trained through RC; it consists of a dynamical system called the “reservoir” and a static function called the “readout.” As shown in Figure 1, the reservoir processes the input to the model first, and the readout maps the reservoir state to the output. In supervised learning, the model aims to approximate a given target system. During RC model training, only the readout is tuned for each target; the reservoir does not adapt to targets. Empirically, an RC model performs better with a reservoir exhibiting complex dynamics. For example, echo state networks [2] use an RNN with random parameters as the reservoir.
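For concreteness, the following is a minimal echo state network sketch in Python: the recurrent reservoir weights are drawn at random and frozen, and only the linear readout is trained, here by ridge regression. All hyperparameters (reservoir size, spectral radius, regularization) and the toy one-step-prediction task are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal echo state network: the reservoir (W, W_in) is random and fixed;
# only the linear readout W_out is trained. All values are illustrative.
rng = np.random.default_rng(0)
n_res, n_in = 200, 1
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))          # fixed input weights
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))       # spectral radius 0.9

def run_reservoir(u_seq):
    """Drive the fixed reservoir with a scalar input sequence; return states."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))  # reservoir update
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
u_train = np.sin(0.1 * np.arange(1000))
y_train = np.roll(u_train, -1)
X = run_reservoir(u_train)

# Readout-only training by ridge regression (the "static part" of the model).
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y_train)
print("train RMSE:", np.sqrt(np.mean((X @ W_out - y_train) ** 2)))
```

Note that no gradient ever propagates through the recurrent weights; this is what makes RC training cheap.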
Another advantage of RC is that various dynamical systems can be used as the reservoir, even those that are difficult to train or adjust. Recently, physical RC, which uses a reservoir implemented in hardware, has been drawing attention for its energy efficiency and computation speed [6,7,8]. Many implementations of reservoirs have been studied, e.g., electric and electronic circuits [9,10,11,12,13], optical networks [14] and delay-feedback systems using optical elements [15,16], and spin-torque oscillators [17,18,19].
An RC model and its reservoir are said to be universal if the model can approximate an arbitrary target with arbitrary precision. The concept of universality concerning RC appeared simultaneously with RC itself [3]. Maass et al. [3] also proposed a sufficient condition for a continuous-time RC model to be universal. The sufficient condition in [3] is the combination of the continuity and injectivity of the reservoir, which is a functional from input functions to output values. The injectivity of the reservoir is also a necessary condition for universality.
Sugiura et al. [20] proposed a relaxed condition called the neighborhood separation property (NSP), which can be applied to more complex reservoirs with multiple equilibrium states. The authors of [21] showed that a reservoir with a finite-dimensional output can satisfy the NSP but not the condition in [3]. In [21], another sufficient condition was proposed: the existence of a continuous inverse of the reservoir. This condition is also sufficient for the NSP and was used to show that the NSP is satisfiable. The relationships among universality and the reservoir conditions explained here are summarized in Figure 2.
As mentioned above, some necessary conditions and some sufficient conditions for universality are known, but a condition that is both necessary and sufficient, i.e., equivalent to universality, has remained unknown. Such a condition is critical because it gives an essential answer to the question of which properties of a reservoir enable approximation with an RC model. In this paper, we show that the NSP and the continuous inverse, which are sufficient conditions for a reservoir to be universal, are in fact equivalent to universality itself. Moreover, using the obtained equivalence, we show that a universal reservoir has a “pathological” property. As in previous studies, we consider a continuous-time RC model with a polynomial readout and evaluate the approximation by the maximum error. A dynamical system, e.g., a model, reservoir, or target, is treated as a functional from input functions to output values.
Our result concerning the conditions equivalent to universality can be extended to a general case where the input function space is compactifiable. Hence, our result may be applied to various types of RC not discussed in this paper, such as discrete-time ones.
The pathological property of a universal reservoir is that its discontinuous points are dense. As we show later, a universal reservoir has a continuous inverse map. Hence, if a universal reservoir were also continuous, it would be a homeomorphism onto its image. However, the infinite-dimensional space of input functions cannot be homeomorphic to a subset of the finite-dimensional space of output values. The same argument holds if we restrict the reservoir domain to an arbitrary open subset, i.e., a universal reservoir has a discontinuous point in every open subset. This result suggests that a universal reservoir is highly sensitive to its inputs and that chaotic reservoirs, such as those described in [19,22,23,24], are necessary to achieve universality. These facts support the empirical rule that complex reservoirs tend to be effective and provide significant insight into the development of high-performance reservoirs.
Although considering noise in inputs and observations is important in practice, we focus on the deterministic and noiseless case for the following reasons. First, theoretical research on continuous-time RC remains limited even in the deterministic setting. Second, the definition of universality in the stochastic case is not straightforward and has not yet been established.
The remainder of this paper is structured as follows: Section 2 provides preliminaries and describes RC and previous results. In Section 3, we prove that the NSP and the continuous inverse of a reservoir are equivalent to universality. In Section 4, we prove that a universal reservoir has dense discontinuous points. The main symbols used in this paper are summarized in Table 1.

2. Preliminaries

We discuss a dynamical system represented as a functional on functions of time. Let $A \subseteq \mathbb{R}^n$ be a compact and convex set of input values and $K > 0$ be the limit of the speed of input change. We define the set $V$ of input functions as follows:

$$ V = \{ v : \mathbb{R}_- \to A \mid \forall t_1, t_2 \le 0, \ \| v(t_1) - v(t_2) \| \le K | t_1 - t_2 | \}, \tag{1} $$

where $\|\cdot\|$ is the Euclidean norm and $\mathbb{R}_-$ is defined as $(-\infty, 0]$. One must set $A$ and $K > 0$ large and wide enough for the input functions that one considers. In reality, such $A$ and $K$ may be unknown or may not exist, but we do not consider these cases here.
Because input functions are given on a finite time interval in practice, we define another set of input functions by restricting the domain of functions in $V$. For a function $v$ and $t \ge 0$, we write the restriction of $v$ to $[-t, 0]$ as $v|_{[-t, 0]}$. We define the set of input functions on finite time intervals as

$$ V_{\mathrm{res}} = \{ v|_{[-t, 0]} \mid v \in V, \ t \ge 0 \}. \tag{2} $$
Note that $V_{\mathrm{res}}$ is not defined for a specific $t \ge 0$ and contains input functions on time intervals of various lengths. The dynamical systems that we discuss are functionals from $V_{\mathrm{res}}$ to $\mathbb{R}^m$. In the real world, such a functional is a machine or a device that processes an input $v$ over a period $[-t, 0]$ in real time and outputs its state at time 0.
For example, we define a functional using the following state-space system:

$$ \dot{x}(t) = \phi(x(t), u(t)), \quad x(0) = x_{\mathrm{init}} \quad (u : \mathbb{R}_+ \to \mathbb{R}^n, \ t \ge 0), \tag{3} $$

where $x(t) \in \mathbb{R}^m$ and $u(t) \in \mathbb{R}^n$ are the system state and input at time $t \in \mathbb{R}_+ = [0, \infty)$. The initial state is $x_{\mathrm{init}} \in \mathbb{R}^m$, and the derivative of the state is given by the function $\phi$ of the state and input. For a fixed $x_{\mathrm{init}}$, System (3) determines $x(t)$ from $\{u(\tau)\}_{\tau \in [0, t]}$. Hence, we can define the functional $f : V_{\mathrm{res}} \to \mathbb{R}^m$ as

$$ f(v|_{[-t, 0]}) = x(0), \quad \dot{x}(\tau) = \phi(x(\tau), v(\tau)), \quad x(-t) = x_{\mathrm{init}} \quad (v \in V, \ t \ge 0, \ \tau \in [-t, 0]). \tag{4} $$

Note that time is shifted so that the input signal starts at time $-t$ and ends at time 0. The functional output $f(v|_{[-t, 0]})$ is the state to which the system transitions from the initial state $x_{\mathrm{init}} \in \mathbb{R}^m$ under the input $v|_{[-t, 0]}$ of length $t$. State-space systems can represent many physical systems, and they can in turn be represented as functionals, as in this example. Therefore, our discussion of functionals covers a fairly wide range of dynamical systems.
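As a concrete illustration of (4), the following sketch numerically evaluates $f(v|_{[-t, 0]})$ for a state-space system by integrating the ODE from $x(-t) = x_{\mathrm{init}}$ up to time 0. The explicit Euler scheme and the example $\phi$ are illustrative assumptions, not part of the paper.

```python
import numpy as np

# Sketch of the functional f in (4): integrate the state-space system (3)
# from x(-t) = x_init under the input v|[-t,0] and return the state at time 0.
def f_functional(v, t, phi, x_init, dt=1e-3):
    x = np.asarray(x_init, dtype=float)
    tau = -t
    while tau < 0.0:
        x = x + dt * phi(x, v(tau))      # Euler step of x' = phi(x, u)
        tau += dt
    return x                             # f(v|[-t,0]) = x(0)

# Example: a stable linear system x' = -x + u driven by a Lipschitz input.
phi = lambda x, u: -x + np.atleast_1d(u)
v = lambda tau: np.array([np.sin(tau)])  # input function on (-inf, 0]
print(f_functional(v, t=5.0, phi=phi, x_init=np.zeros(1)))
```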
In supervised learning, reservoir computing (RC) trains a model to approximate a functional that we call the “target.” Let $f^*$ and $\hat{f} : V_{\mathrm{res}} \to \mathbb{R}$ be the target and the model, respectively. We consider uniform approximation, which is evaluated by the worst-case error $\sup_{v \in V_{\mathrm{res}}} | f^*(v) - \hat{f}(v) |$. The RC model is represented as the composition of two maps, $\hat{f} = p \circ f$. The map $f : V_{\mathrm{res}} \to \mathbb{R}^m$ is the dynamical part of the model, called the “reservoir,” and the map $p : \mathbb{R}^m \to \mathbb{R}$ is the static part, called the “readout.” To approximate the target, RC trains only the readout and fixes the reservoir because training the dynamical part of the model is technically difficult and computationally expensive. Because the reservoir is fixed, we can implement it not only as software on a general-purpose computer but also as hardware. In the field of physical reservoir computing, many physical phenomena are studied as reservoirs and are promising in terms of computation speed and energy consumption.
Let $F^*$ be a set of targets, which are functionals from $V_{\mathrm{res}}$ to $\mathbb{R}$. An RC model and its reservoir are said to be universal in $F^*$ if the model can approximate an arbitrary $f^* \in F^*$. Let $P_m$ be the set of polynomial functions from $\mathbb{R}^m$ to $\mathbb{R}$. If we use a polynomial readout, training the model involves selecting a polynomial function from $P_m$, and universality in $F^*$ is defined as follows.
Definition 1. 
Reservoir $f : V_{\mathrm{res}} \to \mathbb{R}^m$ is said to be universal for uniform approximations in $F^*$ if

$$ \forall f^* \in F^*, \ \forall \varepsilon > 0, \ \exists p \in P_m, \ \forall v \in V_{\mathrm{res}}, \quad | f^*(v) - p(f(v)) | < \varepsilon. \tag{5} $$
Polynomial readouts are not practical because of their limited generalization ability. However, they are theoretically tractable, and the discussion of them extends to other types of readouts that can approximate continuous functions. Hence, it suffices to assume polynomial readouts in the theoretical discussion, even if other types of continuous readouts, such as feed-forward neural networks, are used in practice. The definition of universality using a polynomial readout and uniform approximation, as in Definition 1, has been discussed since the earliest studies on reservoir computing [3].
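As an illustration of selecting $p \in P_m$, the sketch below fits a degree-bounded polynomial readout to sampled reservoir outputs by least squares. The reservoir outputs and targets are random placeholders, and the monomial feature expansion is a standard construction assumed for illustration, not anything prescribed by the paper.

```python
import numpy as np
from itertools import combinations_with_replacement

# Sketch of readout selection from P_m: expand reservoir outputs into all
# monomials up to a chosen total degree and fit coefficients by least squares.
def monomial_features(Z, degree):
    """Map outputs Z (N x m) to all monomials of total degree <= degree."""
    N, m = Z.shape
    cols = [np.ones(N)]                                  # degree-0 term
    for deg in range(1, degree + 1):
        for idx in combinations_with_replacement(range(m), deg):
            cols.append(np.prod(Z[:, list(idx)], axis=1))
    return np.column_stack(cols)

rng = np.random.default_rng(1)
Z = rng.normal(size=(500, 3))                            # placeholder f(v), m = 3
y = np.sin(Z[:, 0]) + Z[:, 1] * Z[:, 2]                  # placeholder f*(v)
Phi = monomial_features(Z, degree=4)
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)           # choose p in P_m
print("max residual:", np.max(np.abs(Phi @ coef - y)))
```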
In previous studies, two sufficient conditions for universality were provided. One condition [20] is the weakest known so far and has been shown to be satisfiable using the other [21]. To explain these conditions, we need a metric on $V_{\mathrm{res}} \cup V$. Let $w : \mathbb{R}_+ \to (0, 1]$ be a non-increasing function that satisfies $\lim_{t \to \infty} w(t) = 0$. Taking the supremum over the domain, we define the weighted norm $\|v\|_w$ of the function $v$ as

$$ \|v\|_w = \begin{cases} \sup_{\tau \in [-t, 0]} \| v(\tau) \| \, w(-\tau) & (v : [-t, 0] \to \mathbb{R}^n), \\ \sup_{\tau \le 0} \| v(\tau) \| \, w(-\tau) & (v : \mathbb{R}_- \to \mathbb{R}^n), \end{cases} \tag{6} $$
where $t \in \mathbb{R}_+$. Let $v|_{(-\infty, 0]} = v$ for any $v \in V$ and define the map $\lambda : V_{\mathrm{res}} \cup V \to [0, \infty]$ as

$$ \lambda(v|_{[-t, 0]}) = t \quad (v \in V, \ t \in [0, \infty]). \tag{7} $$
For an input function, the map λ returns the length of the interval on which that function is defined.
Let $\theta : \mathbb{R}_+ \to \mathbb{R}_+$ be a strictly increasing, bounded, and continuous function. Using the weighted norm $\|\cdot\|_w$ and the function $\theta$, we define the map $d(v_1, v_2)$ as

$$ d(v_1, v_2) = \left\| v_1|_{[-t_{\min}, 0]} - v_2|_{[-t_{\min}, 0]} \right\|_w + | \theta(t_1) - \theta(t_2) | \quad (v_1, v_2 \in V_{\mathrm{res}} \cup V), \tag{8} $$

where $t_1 = \lambda(v_1)$, $t_2 = \lambda(v_2)$, $t_{\min} = \min(t_1, t_2)$, and $\theta(\infty) = \lim_{t \to \infty} \theta(t)$. The map $d$ is a metric on $V_{\mathrm{res}} \cup V$ under the conditions we describe later. The first term of the distance (8) compares the inputs on the intersection of their domains; the function $w$ assigns greater weight to differences in the newer part of the inputs. The second term of (8) compares the lengths of the two inputs via the function $\theta$. By the definition of $\theta$, the second term is negligible if $t_1$ and $t_2$ are sufficiently large. The following proposition [20] provides the condition that makes $d$ a metric:
Proposition 1. 
Suppose that the following triangle inequality holds for any $v_1, v_2 \in V$, $t_1 \ge 0$, and $t_2 \ge t_1$:

$$ d(v_1|_{[-t_2, 0]}, v_2|_{[-t_2, 0]}) \le d(v_1|_{[-t_1, 0]}, v_2|_{[-t_2, 0]}) + d(v_1|_{[-t_1, 0]}, v_1|_{[-t_2, 0]}). \tag{9} $$

Then, $(V_{\mathrm{res}} \cup V, d)$ is a compact metric space, and $V_{\mathrm{res}}$ is dense in $V_{\mathrm{res}} \cup V$.
The triangle inequality (9) guarantees that $d$ satisfies the other triangle inequalities. The density of $V_{\mathrm{res}}$ is confirmed as follows: for any $v \in V$, $v|_{[-t, 0]} \in V_{\mathrm{res}}$ converges to $v \in V$ as $t \to \infty$. From the density of $V_{\mathrm{res}}$, we have $V_{\mathrm{res}} \cup V = \overline{V_{\mathrm{res}}}$, where $\overline{\,\cdot\,}$ denotes the closure. An example of a pair $(w, \theta)$ that makes $d$ a metric is given by (38) in [20].
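For intuition, the following sketch evaluates the metric (8) numerically for sampled scalar inputs. The choices $w(t) = e^{-t}$ and $\theta(t) = 1 - e^{-t}$ are plausible functions satisfying the stated monotonicity and boundedness requirements, but we do not claim they are the pair (38) of [20].

```python
import numpy as np

# Numerical sketch of the metric d in (8) for sampled scalar inputs.
# w and theta below are illustrative choices, not necessarily (38) of [20].
w = lambda t: np.exp(-t)
theta = lambda t: 1.0 - np.exp(-t)

def metric_d(v1, t1, v2, t2, n_grid=10_000):
    """Approximate d(v1|[-t1,0], v2|[-t2,0]) with the sup taken on a grid."""
    t_min = min(t1, t2)
    tau = np.linspace(-t_min, 0.0, n_grid)          # shared part of the domains
    weighted = np.abs(v1(tau) - v2(tau)) * w(-tau)  # ||.||_w term (scalar case)
    return weighted.max() + abs(theta(t1) - theta(t2))

v1 = lambda tau: np.sin(tau)
v2 = lambda tau: np.sin(tau + 0.1)
print(metric_d(v1, 3.0, v2, 5.0))
```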
In the rest of this paper, we assume that targets in $F^*$ are uniformly continuous, following the previous study [3]. This assumption enables us to extend a functional $f^* \in F^*$ onto $\overline{V_{\mathrm{res}}}$, i.e., there is a continuous functional $\bar{f}^* : \overline{V_{\mathrm{res}}} \to \mathbb{R}$ such that $f^*(v) = \bar{f}^*(v)$ for any $v \in V_{\mathrm{res}}$. Such continuity on a compact set is needed for a theoretical discussion of approximation.
The weakest known sufficient condition for universality is called the neighborhood separation property (NSP) [20]. The NSP means that the reservoir maps the neighborhoods of distinct points to images whose closures are disjoint. For $\delta > 0$ and $v \in \overline{V_{\mathrm{res}}}$, we define the set $N_\delta(v) \subseteq V_{\mathrm{res}}$ as

$$ N_\delta(v) = \{ v' \in V_{\mathrm{res}} \mid d(v, v') < \delta \}. \tag{10} $$
Although the set $N_\delta(v)$ is similar to an ordinary neighborhood, note that $N_\delta(v) \subseteq V_{\mathrm{res}}$ holds even if $v \in V$. The mathematical definition of the NSP of the reservoir $f : V_{\mathrm{res}} \to \mathbb{R}^m$ is the following.
Condition 1. 
For any distinct $v_1, v_2 \in \overline{V_{\mathrm{res}}}$, some $\delta > 0$ satisfies $\overline{f(N_\delta(v_1))} \cap \overline{f(N_\delta(v_2))} = \emptyset$.
Another sufficient condition for universality is that the reservoir f has a uniformly continuous inverse, as shown below [21].
Condition 2. 
There is a uniformly continuous map $g : f(V_{\mathrm{res}}) \to V_{\mathrm{res}}$ that satisfies $g \circ f = \mathrm{id}_{V_{\mathrm{res}}}$.
Condition 2 means that the input function can be continuously reconstructed from the reservoir output, so there is a continuous map from reservoir outputs to target outputs through input functions. Condition 2 is sufficient for Condition 1, which shows that a universal reservoir exists. The Hahn–Mazurkiewicz theorem [25] states that there is a continuous surjection $g : [0, 1] \to \overline{V_{\mathrm{res}}}$, called a space-filling curve. Hence, we can obtain a reservoir $f : V_{\mathrm{res}} \to \mathbb{R}$ satisfying Condition 2 by restricting the domain of $g$ so that its image is $V_{\mathrm{res}}$ and taking a right inverse of the restriction. This result extends easily to $f : V_{\mathrm{res}} \to \mathbb{R}^m$ with general $m \in \mathbb{N}$.

3. Necessary and Sufficient Condition for Universality

In this section, we prove the following theorem:
Theorem 1. 
Let $F^*$ be the set of uniformly continuous functionals from $V_{\mathrm{res}}$ to $\mathbb{R}$. Let $f : V_{\mathrm{res}} \to \mathbb{R}^m$ be a bounded functional. Then, Equation (5), Condition 1, and Condition 2 are equivalent to one another.
Theorem 1 means that a reservoir’s universality, NSP, and uniformly continuous inverse are equivalent. This is the first result that provides a necessary and sufficient condition for a reservoir to be universal. Theorem 1 is proved as a corollary of the following generalization:
Theorem 2. 
Let $X$ be a metric space with metric $d$ and $\bar{X}$ be a compactification of $X$. Suppose that $m \in \mathbb{N}$ and that $f : X \to \mathbb{R}^m$ is bounded. Then, the following three conditions are equivalent to one another:
(i)
For any uniformly continuous $f^* : X \to \mathbb{R}$ and $\varepsilon > 0$, there is some polynomial function $p : \mathbb{R}^m \to \mathbb{R}$ that satisfies

$$ \sup_{x \in X} | f^*(x) - p(f(x)) | < \varepsilon. \tag{11} $$
(ii)
For any distinct $x_1, x_2 \in \bar{X}$, there is some $\delta > 0$ that satisfies

$$ \overline{f(N_\delta(x_1))} \cap \overline{f(N_\delta(x_2))} = \emptyset, \tag{12} $$

where $N_\delta(x) = \{ x' \in X \mid d(x, x') < \delta \}$ for $x \in \bar{X}$.
(iii)
There is a uniformly continuous map $g : f(X) \to X$ that satisfies

$$ g \circ f = \mathrm{id}_X, \tag{13} $$

where $\mathrm{id}_X$ is the identity mapping on $X$.
From Proposition 1, Theorem 1 is the corollary of Theorem 2 obtained by taking $X = V_{\mathrm{res}}$. Conditions (i)–(iii) correspond to the universality of Definition 1 and to Conditions 1 and 2, respectively. The set $X$ does not have to be a strict subset of $\bar{X}$, i.e., $X = \bar{X}$ is allowed. Hence, instead of $V_{\mathrm{res}}$, we can take the compact set $V$ or $V \cup V_{\mathrm{res}}$ as $X$; in this case, we obtain the same result as Theorem 1 for input functions on an infinite time interval. We prove Theorem 2 via the four propositions (iii)⇒(i), ¬(iii)⇒¬(i), ¬(iii)⇒¬(ii), and ¬(ii)⇒¬(iii). To prove the propositions premised on ¬(iii), we use the following lemma:
Lemma 1. 
Suppose that Condition (iii) does not hold under the same premise as Theorem 2. Then, there exist Cauchy sequences $\{x_{1,k}\}_{k \in \mathbb{N}}$ and $\{x_{2,k}\}_{k \in \mathbb{N}} \subseteq X$ that satisfy

$$ \lim_{k \to \infty} x_{1,k} \ne \lim_{k \to \infty} x_{2,k}, \qquad \lim_{k \to \infty} f(x_{1,k}) = \lim_{k \to \infty} f(x_{2,k}). \tag{14} $$
Because $x_{1,k} = g(f(x_{1,k}))$ and $x_{2,k} = g(f(x_{2,k}))$, Equation (14) means that even if two inputs to $g$ are close, the outputs are not necessarily close. More precisely, $g$ is not uniformly continuous if (14) holds. Lemma 1 claims the converse of this proposition.
Proof of Lemma 1. 
First, we consider the case where no map $g : f(X) \to X$ satisfies $g \circ f = \mathrm{id}_X$, i.e., $f$ has no left inverse. Having no left inverse is equivalent to not being injective. Hence, there exist distinct $x_1, x_2 \in X$ that satisfy $f(x_1) = f(x_2)$, and the constant sequences $x_{1,k} = x_1$ and $x_{2,k} = x_2$ ($k \in \mathbb{N}$) satisfy (14).
Next, we consider the case where a map $g : f(X) \to X$ satisfying $g \circ f = \mathrm{id}_X$ exists but is not uniformly continuous, i.e.,

$$ \exists \varepsilon > 0, \ \forall \delta > 0, \ \exists \alpha_1, \alpha_2 \in f(X), \quad \| \alpha_1 - \alpha_2 \| < \delta, \ d(g(\alpha_1), g(\alpha_2)) \ge \varepsilon. \tag{15} $$
Then, there exist $\varepsilon > 0$ and sequences $\{\alpha_{1,k}\}_{k \in \mathbb{N}}, \{\alpha_{2,k}\}_{k \in \mathbb{N}} \subseteq f(X)$ that satisfy

$$ \| \alpha_{1,k} - \alpha_{2,k} \| < \frac{1}{k}, \qquad d(g(\alpha_{1,k}), g(\alpha_{2,k})) \ge \varepsilon \quad (k \in \mathbb{N}). \tag{16} $$
Because $f$ is bounded, $\overline{f(X)} \subseteq \mathbb{R}^m$ is bounded and closed, i.e., compact. Every infinite sequence in a compact metric space includes a subsequence that converges in that space. Hence, there exist an infinite set $N' \subseteq \mathbb{N}$ and $\alpha \in \overline{f(X)}$ such that the subsequence $\{\alpha_{1,k}\}_{k \in N'}$ converges to $\alpha$. From the first inequality in (16), we have

$$ \alpha_{1,k} \to \alpha, \qquad \alpha_{2,k} \to \alpha \quad (k \to \infty, \ k \in N'). \tag{17} $$
Because $\alpha_{1,k}$ and $\alpha_{2,k}$ are included in $f(X)$, there exist sequences $\{x_{1,k}\}_{k \in N'}$ and $\{x_{2,k}\}_{k \in N'} \subseteq X$ that satisfy the following:

$$ f(x_{1,k}) = \alpha_{1,k}, \qquad f(x_{2,k}) = \alpha_{2,k} \quad (k \in N'). \tag{18} $$
We show that $\{x_{1,k}\}$ and $\{x_{2,k}\}$ include subsequences that satisfy (14). From (17) and (18), the subsequences indexed by $N'$ satisfy the second equation in (14). Because $\bar{X}$ is compact, there is some infinite set $N'' \subseteq N'$ such that the subsequence $\{x_{1,k}\}_{k \in N''}$ converges. Similarly, there is some infinite set $N''' \subseteq N''$ such that the subsequence $\{x_{2,k}\}_{k \in N'''}$ converges. Applying $g$ to (18) and substituting $g \circ f = \mathrm{id}_X$, the following holds:

$$ x_{1,k} = g(\alpha_{1,k}), \qquad x_{2,k} = g(\alpha_{2,k}) \quad (k \in N'''). \tag{19} $$
From (19) and the second inequality in (16), we have $d(x_{1,k}, x_{2,k}) \ge \varepsilon$ for any $k \in N'''$. Therefore, the Cauchy sequences $\{x_{1,k}\}_{k \in N'''}$ and $\{x_{2,k}\}_{k \in N'''}$ satisfy the first relation in (14), which proves Lemma 1. □
Theorem 2 is proved as follows:
Proof of Theorem 2. 
As explained above, we prove the propositions (iii)⇒(i), ¬(iii)⇒¬(i), ¬(iii)⇒¬(ii), and ¬(ii)⇒¬(iii).
Proof of (iii)⇒(i): Let $f^* : X \to \mathbb{R}$ be an arbitrary uniformly continuous functional and $\varepsilon > 0$ an arbitrary real number. We define a function $q : f(X) \to \mathbb{R}$ as $q = f^* \circ g$. Because $f^* = f^* \circ g \circ f = q \circ f$, (11) is equivalent to

$$ \sup_{x \in X} | q(f(x)) - p(f(x)) | < \varepsilon. \tag{20} $$

The function $q$ is uniformly continuous because $f^*$ and $g$ are uniformly continuous. Hence, there is a unique continuous extension $\bar{q} : \overline{f(X)} \to \mathbb{R}$ satisfying $\bar{q}(\alpha) = q(\alpha)$ for any $\alpha \in f(X)$. From the Stone–Weierstrass theorem [26], there is some polynomial function $p : \mathbb{R}^m \to \mathbb{R}$ that satisfies $| \bar{q}(\alpha) - p(\alpha) | < \varepsilon$ for any $\alpha$ in the compact set $\overline{f(X)}$. This polynomial function $p$ also satisfies (20), which proves (iii)⇒(i). The relationships among the maps in this proof are shown in Figure 3.
Proof of ¬(iii)⇒¬(i): From Lemma 1, there exist Cauchy sequences $\{x_{1,k}\}_{k \in \mathbb{N}}$ and $\{x_{2,k}\}_{k \in \mathbb{N}} \subseteq X$ that satisfy (14). We define distinct $x_1, x_2 \in \bar{X}$ as

$$ x_1 = \lim_{k \to \infty} x_{1,k}, \qquad x_2 = \lim_{k \to \infty} x_{2,k}. \tag{21} $$
Let $f^* : X \to \mathbb{R}$ be a uniformly continuous function that satisfies

$$ \lim_{k \to \infty} f^*(x_{1,k}) = 0, \qquad \lim_{k \to \infty} f^*(x_{2,k}) = 1. \tag{22} $$
A specific definition of $f^*$ is not necessary for the proof, but it can be defined, for example, as

$$ f^*(x) = \frac{d(x_1, x)}{d(x_1, x_2)} \quad (x \in X). \tag{23} $$
To prove by contradiction, we suppose that (i) holds and let $\varepsilon = 1/3$. Then, there is some polynomial function $p : \mathbb{R}^m \to \mathbb{R}$ that satisfies

$$ | f^*(x_{1,k}) - p(f(x_{1,k})) | < \frac{1}{3}, \qquad | f^*(x_{2,k}) - p(f(x_{2,k})) | < \frac{1}{3} \quad (k \in \mathbb{N}). \tag{24} $$
From (14) and the continuity of $p$, the following holds:

$$ \lim_{k \to \infty} p(f(x_{1,k})) = \lim_{k \to \infty} p(f(x_{2,k})). \tag{25} $$
However, from (22) and the limit of (24) as $k \to \infty$, the following also holds:

$$ \lim_{k \to \infty} p(f(x_{1,k})) \le \frac{1}{3}, \qquad \frac{2}{3} \le \lim_{k \to \infty} p(f(x_{2,k})). \tag{26} $$
This contradicts (25), and we have ¬(i). The relationship among sets and sequences in the proof is shown in Figure 4.
Proof of ¬(iii)⇒¬(ii): From Lemma 1, there exist Cauchy sequences $\{x_{1,k}\}_{k \in \mathbb{N}}$ and $\{x_{2,k}\}_{k \in \mathbb{N}} \subseteq X$ that satisfy (14). We define distinct $x_1, x_2 \in \bar{X}$ and $\alpha \in \overline{f(X)}$ as

$$ x_1 = \lim_{k \to \infty} x_{1,k}, \qquad x_2 = \lim_{k \to \infty} x_{2,k}, \qquad \alpha = \lim_{k \to \infty} f(x_{1,k}) = \lim_{k \to \infty} f(x_{2,k}). \tag{27} $$
Let $\delta > 0$ be an arbitrary real number. From the first and second equations in (27), there is a sufficiently large $k \in \mathbb{N}$ that satisfies $f(x_{1,k}) \in f(N_\delta(x_1))$ and $f(x_{2,k}) \in f(N_\delta(x_2))$. Because both $f(x_{1,k})$ and $f(x_{2,k})$ converge to $\alpha$ as $k \to \infty$, we have $\alpha \in \overline{f(N_\delta(x_1))} \cap \overline{f(N_\delta(x_2))}$, which proves ¬(iii)⇒¬(ii). The relationships among the sets and sequences in this proof are shown in Figure 5.
Proof of ¬(ii)⇒¬(iii): Suppose that distinct $x_1, x_2 \in \bar{X}$ satisfy $\overline{f(N_\delta(x_1))} \cap \overline{f(N_\delta(x_2))} \ne \emptyset$ for any $\delta > 0$. Let $\delta = d(x_1, x_2)/3$. We consider sequences $\{\alpha_{1,k}\}_{k \in \mathbb{N}} \subseteq f(N_\delta(x_1))$ and $\{\alpha_{2,k}\}_{k \in \mathbb{N}} \subseteq f(N_\delta(x_2))$ that converge to some $\alpha \in \overline{f(N_\delta(x_1))} \cap \overline{f(N_\delta(x_2))}$. To prove by contradiction, suppose that a uniformly continuous map $g : f(X) \to X$ exists and satisfies $g \circ f = \mathrm{id}_X$. Then, we have $g(\alpha_{1,k}) \in N_\delta(x_1)$ and $g(\alpha_{2,k}) \in N_\delta(x_2)$. Therefore, although $\alpha_{1,k}$ and $\alpha_{2,k}$ converge to the same point $\alpha$ as $k \to \infty$, we have $d(g(\alpha_{1,k}), g(\alpha_{2,k})) > d(x_1, x_2)/3$ for any $k \in \mathbb{N}$. This means that $g$ is not uniformly continuous, and we have ¬(iii). The relationships among the sets and sequences in this proof are shown in Figure 6.
From the above four propositions, we have (iii)⇔(i) and (iii)⇔(ii), which proves Theorem 2. □

4. Pathological Property of Universal Reservoir

Although a previous study [21] showed that a universal reservoir exists mathematically, whether one can be physically constructed is still unknown. The authors of [21] suggested that their universal reservoir based on a space-filling curve has an infinite number of discontinuous points and is difficult to implement. In this section, we show that every universal reservoir has the same problem as the example in [21]. The main result of this section is the following:
Theorem 3. 
Let $F^*$ be the set of uniformly continuous functionals from $V_{\mathrm{res}}$ to $\mathbb{R}$. Let $f : V_{\mathrm{res}} \to \mathbb{R}^m$ be a functional that is bounded and universal for uniform approximations in $F^*$. Then, the set of points $v \in V_{\mathrm{res}}$ at which $f$ is discontinuous is dense in $V_{\mathrm{res}}$.
Theorem 3 says that a universal reservoir has a discontinuous point in every neighborhood of every point and is thus extremely sensitive to input changes. Note that the discontinuity of the functional $f : V_{\mathrm{res}} \to \mathbb{R}$ does not mean the discontinuity of the time derivative $\phi$ of the states in the state-space representation (4) of $f$; the same holds for $f : V_{\mathrm{res}} \to \mathbb{R}^m$ with $m$ outputs. Theorem 3 is proved by deriving the contradiction that, if $f$ were continuous on some open set, two spaces of different dimensions would be homeomorphic. To this end, we use the following lemma about a topological embedding:
Lemma 2. 
For any $v \in V_{\mathrm{res}}$, $\delta > 0$, and $l \in \mathbb{N}$, there is some topological embedding $h : [0, 1]^l \to N_\delta(v)$.
Proof of Lemma 2. 
Let $v \in V_{\mathrm{res}}$ be an arbitrary function. Let $\delta > 0$ and $l \in \mathbb{N}$ be arbitrary real and natural numbers, respectively. Let $t = \lambda(v)$, i.e., the domain of $v$ is $[-t, 0]$. Because the bounded function $\theta : \mathbb{R}_+ \to \mathbb{R}_+$ in (8) is continuous and monotonically increasing, there is some $T > 0$ satisfying $\theta(t + lT) - \theta(t) < \delta$. Let an input value $a \in A$ satisfy $0 < \| a - v(-t) \| \le KT$, where $A$ is the input value space and $K$ is the maximum Lipschitz constant in (1). Using $T$ and $a$, we define the map $h : [0, 1]^l \to N_\delta(v)$ as

$$ h(\beta) = v_\beta, \qquad v_\beta(\tau) = \begin{cases} v(\tau) & (-t < \tau \le 0), \\ v(-t) + (a - v(-t)) \, r_\beta(\tau) & (-t - lT \le \tau \le -t) \end{cases} \quad (\beta \in [0, 1]^l), \tag{28} $$

where $r_\beta : [-t - lT, -t] \to [0, 1]$ is defined for $\beta = (\beta_1, \dots, \beta_l)$ as

$$ r_\beta(\tau) = \sum_{i=1}^{l} \beta_i \max\left( 1 - \frac{| \tau + iT + t |}{T}, \, 0 \right) \quad (-t - lT \le \tau \le -t). \tag{29} $$
As shown in Figure 7, $v_\beta$ is an extension of $v$ to $[-t - lT, 0]$ whose values on the added domain are internal division points between $v(-t)$ and $a$. The division rate $r_\beta$ is piecewise linear and takes the value $\beta_i$ at $\tau = -t - iT$ for $i \in \{1, \dots, l\}$, which are the borders of the linear pieces.
First, we show that $v_\beta \in N_\delta(v)$ holds for any $\beta \in [0, 1]^l$. Because the convex set $A$ includes $a$ and $v(-t)$, the image of $v_\beta$ is included in $A$. Because the Lipschitz constant of $r_\beta$ is at most $1/T$ and $\| a - v(-t) \| \le KT$ holds, the Lipschitz constant of $v_\beta$ is at most $K$. Hence, we have $v_\beta \in V_{\mathrm{res}}$. The functions $v$ and $v_\beta$ take the same values on the intersection of their domains. Hence, from the definition of $T$, we have $d(v, v_\beta) = \theta(t + lT) - \theta(t) < \delta$, i.e., $v_\beta \in N_\delta(v)$.
Next, we show that $h$ is an embedding. A continuous injection from a compact space to a metric space is a homeomorphism onto its image, and the domain $[0, 1]^l$ of $h$ is compact. Hence, if $h$ is a continuous injection, it is homeomorphic to its image, i.e., an embedding. As explained above, we have $r_\beta(-t - iT) = \beta_i$ for $i \in \{1, \dots, l\}$. Hence, different $\beta$ give different $r_\beta$ and $h(\beta)$, i.e., the map $h$ is injective. Let $v_{\beta_1} = h(\beta_1)$ and $v_{\beta_2} = h(\beta_2)$ for $\beta_1, \beta_2 \in [0, 1]^l$. Then, the distance between $v_{\beta_1}$ and $v_{\beta_2}$ is bounded as

$$ d(v_{\beta_1}, v_{\beta_2}) = \sup_{\tau \in [-t - lT, 0]} \| v_{\beta_1}(\tau) - v_{\beta_2}(\tau) \| \, w(-\tau) \le \| a - v(-t) \| \max_i | \beta_{1,i} - \beta_{2,i} |. \tag{30} $$

This means that the map $h$ is continuous. Therefore, the map $h : [0, 1]^l \to N_\delta(v)$ is an embedding, which proves Lemma 2. □
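For intuition, the following sketch numerically constructs $r_\beta$ of (29) and $v_\beta$ of (28) for a scalar input. The concrete values of $v$, $t$, $T$, $a$, and $\beta$ are illustrative assumptions; $a$ must belong to $A$ and satisfy $0 < \| a - v(-t) \| \le KT$ for the Lipschitz bound $K$ of the input class.

```python
import numpy as np

# Numerical sketch of the embedding in Lemma 2 for a scalar input: build the
# piecewise-linear rate r_beta of (29) and the extension v_beta of (28).
# Concrete values are illustrative; a must satisfy 0 < |a - v(-t)| <= K*T.
def r_beta(tau, beta, t, T):
    """Tent-function sum taking the value beta[i-1] at tau = -t - i*T."""
    return sum(b * np.maximum(1.0 - np.abs(tau + i * T + t) / T, 0.0)
               for i, b in enumerate(beta, start=1))

def v_beta(tau, beta, v, t, T, a):
    """v on (-t, 0]; an interpolation between v(-t) and a on [-t - l*T, -t]."""
    tau = np.asarray(tau, dtype=float)
    ext = v(-t) + (a - v(-t)) * r_beta(tau, beta, t, T)
    return np.where(tau > -t, v(np.clip(tau, -t, 0.0)), ext)

v = lambda tau: np.sin(tau)        # original input on [-t, 0]
t, T, a = 2.0, 0.5, 0.3
beta = (0.2, 0.9, 0.5)             # l = 3, as in Figure 7
tau = np.linspace(-t - len(beta) * T, 0.0, 8)
print(v_beta(tau, beta, v, t, T, a))
```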
Using Lemma 2, we prove Theorem 3 as follows:
Proof of Theorem 3. 
Let the bounded functional $f : V_{\mathrm{res}} \to \mathbb{R}^m$ be universal for uniform approximations in $F^*$. To prove by contradiction, suppose that the discontinuous points of $f$ are not dense in $V_{\mathrm{res}}$, i.e., there exist $v \in V_{\mathrm{res}}$ and $\delta > 0$ such that $f$ is continuous on $N_\delta(v)$. From Theorem 1, universality is equivalent to Condition 2, so $f$ has a continuous inverse map. Hence, the restriction $f|_{N_\delta(v)}$ of $f$ to $N_\delta(v)$ is a topological embedding because it and its inverse are continuous. From Lemma 2, there is a topological embedding $h : [0, 1]^{m+1} \to N_\delta(v)$. The composition $f|_{N_\delta(v)} \circ h$ is also an embedding, i.e., $[0, 1]^{m+1}$ and $(f|_{N_\delta(v)} \circ h)([0, 1]^{m+1})$ are homeomorphic. The relationships among the domains and images of $h$ and $f$ are shown in Figure 8.
We refer to the small inductive dimension simply as the dimension and write the dimension of a space $A$ as $\mathrm{ind}\, A$. Dimensions have the following two properties (see pages 3–4 of [27]): first, two homeomorphic topological spaces have the same dimension; second, the dimension of a topological space is at least that of any of its subspaces. Therefore, we have the contradiction

$$ m + 1 = \mathrm{ind}\, [0, 1]^{m+1} = \mathrm{ind}\, (f|_{N_\delta(v)} \circ h)([0, 1]^{m+1}) \le \mathrm{ind}\, \mathbb{R}^m = m, \tag{31} $$
which proves Theorem 3. □

5. Conclusions

We have shown that the universality of a reservoir and its two sufficient conditions, the NSP and the continuous inverse, are equivalent to one another. We have also shown that a reservoir has dense discontinuous points if it has a continuous inverse. Our results indicate that universality, discussed since [3], is an ideal approximation ability that is difficult to achieve in practice.
The behavior of a universal reservoir shown in this study indicates that chaotic systems are necessary to achieve or approach universality. The difference between two inputs remains in a state-space system as a difference between states; hence, a system sensitive to inputs is also sensitive to its initial state. As we showed, a universal reservoir has dense discontinuous points and is therefore extremely sensitive to inputs. In this sense, universality can be related to chaotic systems, and some studies support this interpretation [19,22,23,24].
Our results also provide insight into guaranteeing a weaker but more practical approximation ability. We reveal that a reservoir’s “resolution,” i.e., its ability to distinguish inputs, is directly linked to the approximation ability of the RC model. The NSP can be seen as an infinite resolution that distinguishes two input functions regardless of how similar they are. We can also consider reservoirs with finite resolution, and it may be possible to guarantee approximation abilities for them according to their resolution. Such reservoirs behave more gently and are easier to implement than those with the NSP.

Author Contributions

Conceptualization, S.S. and R.A.; methodology, S.S.; investigation, S.S., R.A., T.A. and S.-i.A.; writing—original draft preparation, S.S.; writing—review and editing, S.S., R.A., T.A. and S.-i.A.; supervision, R.A., T.A. and S.-i.A.; project administration, R.A.; funding acquisition, R.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by JSPS KAKENHI, Grant Number JP25K03203.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning Internal Representations by Error Propagation; Technical Report; California Univ San Diego La Jolla Inst For Cognitive Science: La Jolla, CA, USA, 1985. [Google Scholar] [CrossRef]
  2. Jaeger, H. The “echo state” approach to analysing and training recurrent neural networks—With an erratum note. In Technical Report GMD Report 148; German National Research Center for Information Technology: Bonn, Germany, 2001. [Google Scholar]
  3. Maass, W.; Natschläger, T.; Markram, H. Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Comput. 2002, 14, 2531–2560. [Google Scholar] [CrossRef] [PubMed]
  4. Verstraeten, D.; Schrauwen, B.; D’Haene, M.; Stroobandt, D. An experimental unification of reservoir computing methods. Neural Netw. 2007, 20, 391–403. [Google Scholar] [CrossRef] [PubMed]
  5. Lukoševičius, M.; Jaeger, H. Reservoir computing approaches to recurrent neural network training. Comput. Sci. Rev. 2009, 3, 127–149. [Google Scholar] [CrossRef]
  6. Tanaka, G.; Yamane, T.; Héroux, J.B.; Nakane, R.; Kanazawa, N.; Takeda, S.; Numata, H.; Nakano, D.; Hirose, A. Recent advances in physical reservoir computing: A review. Neural Netw. 2019, 115, 100–123. [Google Scholar] [CrossRef]
  7. Nakajima, K. Physical reservoir computing–an introductory perspective. Jpn. J. Appl. Phys. 2020, 59, 6. [Google Scholar] [CrossRef]
  8. Nakajima, K.; Fischer, I. Reservoir Computing; Springer: Berlin/Heidelberg, Germany, 2021. [Google Scholar]
  9. Soriano, M.C.; Brunner, D.; Escalona-Morán, M.; Mirasso, C.R.; Fischer, I. Minimal approach to neuro-inspired information processing. Front. Comput. Neurosci. 2015, 9, 68. [Google Scholar] [CrossRef]
  10. Antonik, P.; Smerieri, A.; Duport, F.; Haelterman, M.; Massar, S. FPGA implementation of reservoir computing with online learning. In Proceedings of the 24th Belgian-Dutch Conference on Machine Learning, Delft, The Netherlands, 19 June 2015. [Google Scholar]
  11. Lin, C.; Liang, Y.; Yi, Y. FPGA-based reservoir computing with optimized reservoir node architecture. In Proceedings of the 23rd International Symposium on Quality Electronic Design, Santa Clara, CA, USA, 6–7 April 2022; pp. 1–6. [Google Scholar] [CrossRef]
  12. Schürmann, F.; Meier, K.; Schemmel, J. Edge of chaos computation in mixed-mode VLSI: A hard liquid. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 13–18 December 2004; p. 17. [Google Scholar]
  13. Kulkarni, M.S.; Teuscher, C. Memristor-based reservoir computing. In Proceedings of the 2012 IEEE/ACM International Symposium on Nanoscale Architectures, Amsterdam, The Netherlands, 4–6 July 2012; pp. 226–232. [Google Scholar] [CrossRef]
  14. Vandoorne, K.; Dierckx, W.; Schrauwen, B.; Verstraeten, D.; Baets, R.; Bienstman, P.; Campenhout, J.V. Toward optical signal processing using photonic reservoir computing. Opt. Express 2008, 16, 11182–11192. [Google Scholar] [CrossRef] [PubMed]
  15. Larger, L.; Soriano, M.C.; Brunner, D.; Appeltant, L.; Gutiérrez, J.M.; Pesquera, L.; Mirasso, C.R.; Fischer, I. Photonic information processing beyond Turing: An optoelectronic implementation of reservoir computing. Opt. Express 2012, 20, 3241–3249. [Google Scholar] [CrossRef] [PubMed]
  16. Guo, X.; Zhou, H.; Xiang, S.; Yu, Q.; Zhang, Y.; Han, Y.; Wang, T.; Hao, Y. Experimental demonstration of a photonic reservoir computing system based on Fabry–Pérot laser for multiple tasks processing. Nanophotonics 2024, 13, 1569–1580. [Google Scholar] [CrossRef] [PubMed]
  17. Torrejon, J.; Riou, M.; Araujo, F.A.; Tsunegi, S.; Khalsa, G.; Querlioz, D.; Bortolotti, P.; Cros, V.; Yakushiji, K.; Fukushima, A.; et al. Neuromorphic computing with nanoscale spintronic oscillators. Nature 2017, 547, 428–431. [Google Scholar] [CrossRef] [PubMed]
  18. Allwood, D.A.; Ellis, M.O.; Griffin, D.; Hayward, T.J.; Manneschi, L.; Musameh, M.F.; O’Keefe, S.; Stepney, S.; Swindells, C.; Trefzer, M.A.; et al. A perspective on physical reservoir computing with nanomagnetic devices. Appl. Phys. Lett. 2023, 122, 040501. [Google Scholar] [CrossRef]
  19. Namiki, W.; Nishioka, D.; Nomura, Y.; Tsuchiya, T.; Yamamoto, K.; Terabe, K. Iono-Magnonic Reservoir Computing With Chaotic Spin Wave Interference Manipulated by Ion-Gating. Adv. Sci. 2025, 12, 2411777. [Google Scholar] [CrossRef] [PubMed]
  20. Sugiura, S.; Ariizumi, R.; Asai, T.; Azuma, S. Nonessentiality of Reservoir’s Fading Memory for Universality of Reservoir Computing. IEEE Trans. Neural Netw. Learn. Syst. 2023, 35, 16801–16815. [Google Scholar] [CrossRef] [PubMed]
  21. Sugiura, S.; Ariizumi, R.; Asai, T.; Azuma, S. Existence of reservoir with finite-dimensional output for universal reservoir computing. Sci. Rep. 2024, 14, 8448. [Google Scholar] [CrossRef] [PubMed]
  22. Jensen, J.H.; Tufte, G. Reservoir computing with a chaotic circuit. In Artificial Life Conference Proceedings; MIT Press: Cambridge, MA, USA, 2017; pp. 222–229. [Google Scholar] [CrossRef]
  23. Choi, J.; Kim, P. Reservoir computing based on quenched chaos. Chaos Solitons Fractals 2020, 140, 110131. [Google Scholar] [CrossRef]
  24. Fukuda, K.; Horio, Y. Analysis of dynamics in chaotic neural network reservoirs. Nonlinear Theory Its Appl. IEICE 2021, 12, 639–661. [Google Scholar] [CrossRef]
  25. Hocking, J.G.; Young, G.S. Topology; Addison-Wesley Publishing Company: Reading, MA, USA, 1961. [Google Scholar]
  26. Dieudonne, J. Foundations of Modern Analysis; Academic Press: Cambridge, MA, USA, 1969. [Google Scholar]
  27. Engelking, R. Dimension Theory; North-Holland Publishing Company: Amsterdam, The Netherlands, 1978. [Google Scholar]
Figure 1. Structure of an RC model.
Figure 2. Reservoir conditions relating to universality. Arrows (⇒) indicate implication.
Figure 3. Maps used in the proof of (iii)⇒(i).
Figure 4. Sets and sequences used in the proof of ¬(iii)⇒¬(i). The dotted arrows indicate convergence.
Figure 5. Sets and sequences used in the proof of ¬(iii)⇒¬(ii). The dotted arrows indicate convergence.
Figure 6. Sets and sequences used in the proof of ¬(ii)⇒¬(iii). The dotted arrows indicate convergence.
Figure 7. Examples of the functions $v$, $v_\beta$, and $r_\beta$ when $A \subseteq \mathbb{R}$ and $l = 3$.
Figure 8. Domains and images of the maps in the proof of Theorem 3.
Table 1. Main symbols and their definitions.

Symbol: Definition
$\mathbb{R}_-$: $(-\infty, 0]$
$\mathbb{R}_+$: $[0, \infty)$
$P_m$: Set of all polynomial functions from $\mathbb{R}^m$ to $\mathbb{R}$
$A \subseteq \mathbb{R}^n$: Compact and convex set of input values
$K > 0$: Bound on the Lipschitz constants of input functions
$V$: Set of input functions on $\mathbb{R}_-$ defined by (1)
$v|_S$: Restriction of the map $v$ to the set $S$
$V_{\mathrm{res}}$: Set of input functions on finite time intervals defined by (2)
$\overline{S}$: Closure of the set $S$
$\overline{V_{\mathrm{res}}}$: Compactification of $V_{\mathrm{res}}$ defined as $\overline{V_{\mathrm{res}}} = V_{\mathrm{res}} \cup V$
$d : \overline{V_{\mathrm{res}}} \times \overline{V_{\mathrm{res}}} \to \mathbb{R}$: Metric on $\overline{V_{\mathrm{res}}}$ defined by (8)
$N_\delta(v) \subseteq V_{\mathrm{res}}$: $\{ v' \in V_{\mathrm{res}} \mid d(v, v') < \delta \}$ for any $\delta > 0$ and $v \in \overline{V_{\mathrm{res}}}$
$\mathrm{id}_S$: Identity mapping on the set $S$
$\mathrm{ind}\, S$: Small inductive dimension of the set $S$

Symbol (only in Theorem 2): Definition
$X$: General compactifiable metric space
$\bar{X}$: Compactification of $X$
$d : \bar{X} \times \bar{X} \to \mathbb{R}$: Metric on $\bar{X}$
$N_\delta(x) \subseteq X$: $\{ x' \in X \mid d(x, x') < \delta \}$ for any $\delta > 0$ and $x \in \bar{X}$