Stochastic Analysis of the Time Continuum

Abstract: This paper considers the time continuum of Brouwer in terms of complex system physics. It is based upon a processual definition of real numbers which concerns the measurement problem. The multiresolution hierarchy of the measurement process is represented by the time operator acting on continuous signals. The wavelet domain hidden Markov model, which recapitulates statistical properties of the hierarchy, is verified experimentally on a wide range of signal ensembles. It indicates a novel method that has already proved tremendously useful in applied mathematics.


Introduction
Chaitin has announced the decline and fall of reductionism in mathematics, considering the randomness in arithmetic elucidated by results of computation theory. His concluding remark concerns experimental mathematics, stressing the impact of computers, which have made an enormous contribution to the mathematical experience and impel people to proceed in a more pragmatic fashion. Mathematicians are compelled to proceed ahead of proof, postulating hypotheses based upon the results of computer experiments. He particularly points out a relation to contemporary physics, wherein randomness is a crucial agent regarded to be the core of an emergent science paradigm. Chaitin finally remarked that the question of how one should actually do mathematics requires at least another generation of work [1] (pp. 156-159).
Randomness that emerges in a formal theory set upon deterministic assumptions concerns complex system physics, which considers systems wherein the best method of description is not clear a priori [2] (p. 203). The statistical complexity suggested by Grassberger [3], which corresponds to stochastic computing in terms of the Bernoulli-Turing machine, is analogous to the deterministic one defined on the Turing machine [4]. Within the stochastic computation theory, deterministic and random behaviors are regarded to be elemental extremes deprived of a vital component, since they share a common failure to support emergent properties. Being an amalgam of both, complex patterns have an inherent tendency towards hierarchical organization [2] (p. 200).
Such a hierarchy has substantial implications concerning cognition, since observation and intellection are related to the neural architecture, whose structure is reflected by cognitive complexity [2] (pp. 204-206). It corresponds to evolution in a hierarchical manner [5] (pp. 470-477), indicating a concept of time that operates not only physically or biologically but also in terms of organization theory. In this regard, time corresponds to a primordial intuition which is the very base of conscious life, as stated by Brouwer's intention to establish the continuum upon such intuitionism [6] (pp. 36-45). Intuitionistic mathematics is considered to be the intellection of increasingly complex features, which relates to the self-organization of complex systems by Shalizi's definition of self-organization as an increase in complexity over time [7]. The issue requires a formulation of complex system physics based upon the time operator acting on the space of continuous signals, which is a straightforward generalization of the multiresolution hierarchy relating to the measurement process [8] (p. 107).
The paper is intended to elucidate intuitionistic mathematics in terms of complex systems. The next section briefly outlines Brouwer's intuitionism and presents the time operator formulation of complex system physics, which was developed by the Brussels school of thermodynamics. The third section concerns the continuum structure, presenting a processual definition of real numbers corresponding to the measurement. In order to consider the time operator of a multiresolution hierarchy, the concept of a signal ensemble is derived in analogy to that of quantum theory. The fourth section elaborates on results in regard to a signal processing model that recapitulates statistical properties of the hierarchy, obtained by methods of experimental mathematics. The last section contains concluding remarks.
The article contents have mostly been published in the form of a conference paper by M.M. and S.V. [9]. The main advancement corresponds to the discussion of the measurement problem, wherein some corroborations of the statistical model are presented. Proofs are indicated using the theory of quantum ensembles, which has not appeared in relation to signal processing so far. The paper also contains an elaboration of the measurement process in terms of the signal space and its dual, which is seldom encountered in the literature. Novel illustrations have been added as well in order to make the article more comprehensible.

The Intuitionistic Mathematics
Brouwer's contribution to the foundations of mathematics belongs to the context of the XIX and XX centuries, whose eminent personalities were Hilbert, Russell and Whitehead. Russell and Whitehead's view was based upon the statement that logic represents the fundament of mathematical thought, whereas Hilbert stated that a formal language was the design of mathematics. Brouwer referred to both viewpoints as platonic ones, since they use timeless conceptions which make them deterministic theories.
He advocated the irreducibility of mathematics to a language, aiming to separate it from formalism and logicism alike [10]. In his opinion, the base of consciousness is the time continuum transcending any language in order to provide the original creation. It represents a continual activity of the creative subject that is not formally determined, which means that for mathematics there is no certain language [11]. He used the term choice sequence, thereby also separating intuitionism from constructive mathematics based upon deterministic decisions [6] (pp. 38-40).
Considering the intuitionistic logic, one discerns its deviation from the formal laws of the excluded middle and of the double negation. Brouwer regarded a structure to be discrete if the law of excluded middle x = y ∨ x ≠ y holds, which is not generally true [12]. Intuitionism is actually a logic of the continuum, unlike formal logic, which is a discrete one. The double negation law ¬(x ≠ y) ⇒ x = y is also violated, which makes the existence of infinitesimals possible, since the negation of diversity from zero ¬(x ≠ 0) is not reduced to the identity x = 0 [13]. The continuum in such a regard does not reduce to a pointwise set, which means that the primordial intuition is not dependent on a formal spatiality. However, the sense of time has been seriously damaged by treating it as an additional dimension of the formal space that modern science peddles for an ultimate reality [6] (pp. 36-37). Brouwer actually refers to the elimination of time stated by Meyerson, due to his definition of modern science in terms of progressively realizing the fundamental bias in human reasoning, which concerns the reduction of difference and change onto identity and constancy [14]. As early as 1754, d'Alembert noted that one should consider duration as a fourth dimension supplementing the common three-dimensional design [15], and Lagrange went so far as to term it the four-dimensional geometry [16] (p. 223). Such a historical trail reached its climax in Einstein, who categorically rejected the existence of change, considered by him to be a mere illusion [17] (pp. 201-203). Einstein's intention to reach the timeless world of supreme rationality was definitely manifested in the celebrated Einstein-Bohr debate on the foundations of quantum theory. The core of the debate concerned the fundamental role of randomness in specifying a system's state, which was emphatically denied by Einstein, who supported an objective epistemology of science.
Although he overturned Newtonian mechanics, Einstein firmly held the Cartesian view of reducing physics to geometry in terms of the formal spatiality that was but a deterministic assumption. In that respect, contemporary restoration of time indicates postmodernism in science, whose forerunner has been Brouwer [6] (p. 49).

The Time Operator
The occurrence of postmodern science is related to quantum theory, whose origin dates back to the beginning of the XX century [18]. The statistical formulation concerning the evolution of probabilities has given rise to the operator mechanics by Koopman and von Neumann, in which a system corresponds to the evolutionary group of transformations U_t acting on the Hilbert space L²_μ(Ω). Considering that the transformation is induced by pointwise dynamics X_t : Ω → Ω, which preserves the probability measure μ in the phase space Ω, the operator U_t = X_t ∘ · should be unitary and, according to the Stone theorem, the group has an infinitesimal generator termed the Liouvillian L [19].
The uncertainty principle concerns a pair of complementary observables, which consist of the position Q = q· and the momentum operator P = (1/i) ∂·/∂q. Formulated in terms of the commutator [Q, P] = QP − PQ, the uncertainty implies the relation [Q, P] = iI. In the same manner, the time operator T is defined to satisfy the uncertainty relation [T, L] = iI (1) concerning the Liouvillian L [17]. The variable F, whose domain is Ω, evolves by action of the group U_t = e^{−iLt}, whilst the distribution density ρ is governed by its adjoint U_t† = e^{iLt}, which implies the Liouville equation i ∂ρ/∂t = Lρ. Consequently, the Liouvillian L and the time operator T correspond to complementary observables, in analogy to the position and the momentum in quantum theory. In terms of the group action, the relation (1) is equivalent to [T, U_t] = t U_t, which comes down to [T, U] = U, (2) supposing the cyclic group generated by U ≡ U_1. The existence of the time operator in the system induces a change in representation Λ = λ(T), transfiguring, with no loss of information, the group to a semigroup action W_t = Λ U_t Λ⁻¹, t ≥ 0. (3) The semigroup corresponds to an irreversible evolution of the complex system, conjugated to the reversible one of the group. It addresses a stochastic process that is irreducible to a deterministic description, whose existence is analogous to the Gödel theorem concerning the incompleteness of arithmetic [21].
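The step from the uncertainty relation to the commutator with the evolution group can be spelled out; a short derivation under the convention U_t = e^{−iLt}, writing the uncertainty relation as [T, L] = iI:

```latex
[T, L] = iI
\;\Longrightarrow\;
[T, L^{n}] = i\,n\,L^{n-1}
\;\Longrightarrow\;
[T, U_t] = \sum_{n \ge 1} \frac{(-it)^{n}}{n!}\, i\, n\, L^{n-1}
         = t \sum_{m \ge 0} \frac{(-it)^{m}}{m!}\, L^{m}
         = t\, U_t .
```

Setting t = 1 gives [T, U] = U for the unit-time element U ≡ U_1 of the cyclic group.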
In modern science, based upon the elimination of time, irreversibility is cognized through the measurement problem that demands a departure from determinism in favor of statistical causality [17] (pp. 65-67). Quantum measurement corresponds to a reduction in the distribution density, which is a nonunitary transformation. Von Neumann expressed its difference from the unitary evolution by the Liouville equation in terms of the entropy increase, invoking a substantial role of the observer [22] (pp. 347-416). Irreversibility therefore appears to be at the very core of physics.

Real Numbers
The concept of continuum corresponds to real numbers, which designate the measurement process originating from Euclid's geometrical algebra. In Book V of the Elements, he elaborates on the doctrine of proportion, which concerns the commensuration of magnitudes. According to the Euclidean algorithm, magnitudes a and b measure each other through

a/b = 1/(n_1 + 1/(n_2 + ⋯)), (4)

which is termed the continued fraction, having the spectrum n_1, n_2, …; one assumes that a ≤ b, i.e., a/b ≤ 1. The proportion a/b = c/d, which is indicated by the matching of respective terms in both spectra, induces identity on the continuum of reals. Consequently, a real number corresponds to the fraction expansion (4), implying a measurement process that takes place step by step over time (Figure 1). The time evolution is represented by the Ford diagram of circles [23], whose intersections with a vertical line correspond to the sequence identifying a real number x (Figure 2). The element ξ_i = h_i/k_i is termed the Diophantine approximation, being the closest to the real number x among fractions h/k, k ≤ k_i, whose denominators are not greater than that of ξ_i. Denominators and numerators of the sequence are obtained by the recurrence equations h_i = n_i h_{i−1} + h_{i−2} and k_i = n_i k_{i−1} + k_{i−2}, considering the initial conditions h_0 = 0, h_1 = 1 and k_0 = 1, k_1 = n_1. The difference of successive members is ξ_i − ξ_{i−1} = (−1)^{i−1}/(k_{i−1}k_i), keeping in mind h_i k_{i−1} − h_{i−1}k_i = (−1)^{i−1}. In this respect, the continued fraction concerning a real number takes the form of an alternating series

x = 1/(k_0 k_1) − 1/(k_1 k_2) + 1/(k_2 k_3) − ⋯, (10)

which is a sparse representation [24] composed of terms from the redundant dictionary 1/1, 1/2, 1/3, …
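The recurrences and the alternating-series form described above can be sketched in code; a minimal implementation with exact rational arithmetic (the function names are ours):

```python
from fractions import Fraction

def cf_spectrum(x, depth=30):
    """Spectrum n1, n2, ... of the continued fraction of a rational x in (0, 1]."""
    terms = []
    while x != 0 and len(terms) < depth:
        inv = 1 / x                              # exact Fraction arithmetic
        n = inv.numerator // inv.denominator     # floor(1/x): the next term
        terms.append(n)
        x = inv - n                              # Gauss-map step: 1/x - floor(1/x)
    return terms

def convergents(terms):
    """Diophantine approximations xi_i = h_i/k_i from the recurrences
    h_i = n_i h_{i-1} + h_{i-2}, k_i = n_i k_{i-1} + k_{i-2},
    with the initial conditions h_0 = 0, h_1 = 1 and k_0 = 1, k_1 = n_1."""
    h0, h1 = 0, 1
    k0, k1 = 1, terms[0]
    out = [Fraction(h1, k1)]
    for n in terms[1:]:
        h0, h1 = h1, n * h1 + h0
        k0, k1 = k1, n * k1 + k0
        out.append(Fraction(h1, k1))
    return out

x = Fraction(3, 7)
terms = cf_spectrum(x)                  # spectrum of 3/7
xi = convergents(terms)                 # Diophantine approximations
# Alternating-series form x = 1/(k0 k1) - 1/(k1 k2) + ...; since each
# convergent h_i/k_i is already reduced, its denominator equals k_i.
ks = [1] + [c.denominator for c in xi]
series = sum(Fraction((-1) ** i, ks[i] * ks[i + 1]) for i in range(len(ks) - 1))
assert series == x
```

For x = 3/7 the spectrum is [2, 3], the approximations are 1/2 and 3/7, and the series 1/2 − 1/14 recovers the number exactly.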

The Continuum Structure
Equation (10) corresponds to a binary code wherein 0 is assigned to the terms of the dictionary that do not participate in the series and 1 to those that do, preceded by an alternating ± sign. Such a representation of the real number is highly redundant, since the entire dictionary cannot be involved in a series. One should therefore eliminate excess zeros, which is achieved by coding the spectrum n_1, n_2, … A binary code like this is composed of alternating ± values different from zero at the positions n_1, n_1 + n_2, …, which gives rise to the Minkowski question mark function (Figure 3)

? : x = 1/(n_1 + 1/(n_2 + ⋯)) ↦ Σ_{k≥1} (−1)^{k+1} 2^{1−(n_1+⋯+n_k)}, (11)

that is a procedure transfiguring the continued fraction to the binary code. It represents an automorphism of the continuum, mapping the real number x onto its binary-coded image ?(x). Under the term continuum, one assumes the skeletal category that is specified up to an isomorphism. The transformation ? is therefore considered to be an automorphism of the structure, since an isomorphism (12) hereinafter is also the identity. The Ford diagram is structured by the hierarchy of scales, each one corresponding to the insertion of circles tangent to two of them at the previous scales, as well as to the number line (Figure 4). In such a hierarchy, each circle is attributed to an irreducible fraction that represents its contact with the line. If the designators of two circles are r_1/s_1 and r_2/s_2, the inserted circle between them corresponds to the fraction (r_1 + r_2)/(s_1 + s_2). One denotes it as r_1/s_1 ⊕ r_2/s_2, which is an operation termed the mediant or the Farey sum. It is due to John Farey, who noticed that successive fractions ζ < ξ < η, whose denominators in the reduced form are up to a given value, relate by ξ = ζ ⊕ η. For instance, the fractions up to the denominator value 5 form the order 0/1, 1/5, 1/4, 1/3, 2/5, 1/2, 3/5, 2/3, 3/4, 4/5, 1/1, where each successive threesome is related by the Farey sum [25].
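Farey's observation for denominators up to 5 can be verified directly; a minimal sketch:

```python
from fractions import Fraction

def farey(n):
    """Farey sequence F_n: reduced fractions of [0, 1] with denominator <= n."""
    return sorted({Fraction(p, q) for q in range(1, n + 1) for p in range(q + 1)})

def mediant(a, b):
    """Farey sum r1/s1 (+) r2/s2 = (r1 + r2)/(s1 + s2).  Fraction reduces the
    result, which agrees with the classical mediant of Farey neighbours,
    since that mediant is already in lowest terms."""
    return Fraction(a.numerator + b.numerator, a.denominator + b.denominator)

# the order quoted in the text: 0/1, 1/5, 1/4, 1/3, 2/5, 1/2, 3/5, 2/3, 3/4, 4/5, 1/1
F5 = farey(5)
# every interior fraction is the Farey sum of its two neighbours
assert all(F5[i] == mediant(F5[i - 1], F5[i + 1]) for i in range(1, len(F5) - 1))
```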
The question mark function maps the mediant to the arithmetic mean [26]

?(x ⊕ y) = (?(x) + ?(y))/2, (12)

which is an isomorphism of topological quasigroups, whose action turns circles of the Ford diagram into square-like diamonds (Figure 5). Such a diagram has the binary tree structure wherein nodes, coordinated by x = (2k−1)/2^{j+1} and y = 1/2^{j+1}, correspond to paracomplex numbers whose coding designates binary digits of the real number; in this regard, the hierarchy of the continuum is related to such a binary coding. The Rényi map, which concerns a shift in terms of binary digits, is a self-similarity of the structure, mapping both the left and the right subtree to the entire one.
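The mediant-to-mean identity (12) can be checked with exact arithmetic; a minimal sketch computing ? from the continued-fraction spectrum as in (11):

```python
from fractions import Fraction

def question_mark(x):
    """Minkowski ? function of a rational x in [0, 1), computed from the
    continued-fraction spectrum n1, n2, ... via
    ?(x) = sum_k (-1)^(k+1) * 2^(1 - (n1 + ... + nk))."""
    total, partial, sign = Fraction(0), 0, 1
    while x != 0:
        inv = 1 / x
        n = inv.numerator // inv.denominator    # next spectrum term
        x = inv - n                             # Gauss-map step
        partial += n                            # n1 + ... + nk
        total += sign * Fraction(1, 2 ** (partial - 1))
        sign = -sign
    return total

def mediant(a, b):
    """Farey sum r1/s1 (+) r2/s2 = (r1 + r2)/(s1 + s2)."""
    return Fraction(a.numerator + b.numerator, a.denominator + b.denominator)

# the mediant of the Farey neighbours 1/3 and 2/3 is 1/2, and ? maps it
# to the arithmetic mean of ?(1/3) = 1/4 and ?(2/3) = 3/4
a, b = Fraction(1, 3), Fraction(2, 3)
assert question_mark(mediant(a, b)) == (question_mark(a) + question_mark(b)) / 2
```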

The Signal Space
In order to regard the measurement process in terms of the time operator, one requires the space of continuous signals, which should be discussed in a dual manner. If one considers the measurement states Σ, the dual space Σ′ = Δ corresponds to the devices evaluating them. If one, on the other hand, considers the measurement devices Δ, states appear in the form of the dual space Δ′ = Σ. These options may differ in more than a conceptual sense: taking the dual of the dual does not necessarily lead back to the point of departure. Even if it does, there may be reasons to favor one over the other, since an aspect of the continuum structure is obscured [28] (pp. 16-23). The intention presented in the paper considers continuous signals to be both states and devices concurrently, which should lead to a source-detector interchangeability, termed crossing in quantum theory [28] (p. 20). This is the reason the signal space is an autodual one, Σ = Δ = L²(I), emphasizing the Lebesgue measure on the domain I = [0, 1]. The topology of the Hilbert space L²(I) is not a trivial issue, since it is governed by an equality that tolerates continuous signals differing on domains of measure zero. This makes it possible to establish the norm ‖·‖ and the scalar product ⟨·|·⟩, playing an essential role in autoduality.
The continuum structure is reflected by wavelet bases of L²(I), which are hierarchically indexed according to the nodes of the binary tree. The signal decomposition considering a wavelet base takes place in the representational form

f = A₀φ + Σ_j Σ_k D_{j,k} ψ_{j,k},

whereat A₀ is the approximation coefficient and D_{j,k} are detail coefficients corresponding to tree nodes [29] (pp. 304-307). The Haar base is paradigmatically designed by translations and normalized dilatations of the mother wavelet

ψ(x) = 1 for x ∈ [0, 1/2), ψ(x) = −1 for x ∈ [1/2, 1), (15)

meaning that the basic elements have a value of zero elsewhere. A wavelet base is also interpreted in a dual manner concerning states and devices of the measurement process. It corresponds to a distribution density, which implies the unit norm ‖ψ_{j,k}‖ = 1. A variable X_{j,k}, which is distributed according to the density |ψ_{j,k}|², has the expectation value EX_{j,k} = (2k−1)/2^{j+1}, which is actually the spatial position of a tree node. On the other hand, the base consists of variables ψ_{j,k} that are mutually independent, since (j, k) ≠ (l, m) ⇒ Eψ_{j,k}ψ_{l,m} = Eψ_{j,k} Eψ_{l,m} = 0, which means the orthogonality ⟨ψ_{j,k}|ψ_{l,m}⟩ = δ_{(j,k)}^{(l,m)}. Moreover, their expectations are zero and they are equally distributed within each scale. Such an interpretation of the wavelet base represents measurement devices Δ, which are modelled due to hierarchical structuring. In that respect, measurement states are represented by the detail coefficients D_{j,k} = ⟨ψ_{j,k}|·⟩ and the approximation coefficient A₀ = ⟨φ|·⟩. The hierarchical structure of a wavelet base is reflected in detail coefficients, which means that each of them at a scale j of the binary tree has two successors at the next one, j + 1 (Figure 6). The succession is related to the measurement process, whose steps correspond to scales of the hierarchy. In that respect, the time concerning a wavelet base is presented by the operator

T ψ_{j,k} = j ψ_{j,k}, (16)

whose eigenvalues are the scales of basic eigenfunctions [30]. It acts on a dense subset of L²(I) ⊖ 1, which is the signal space reduced by the subspace of constant signals.
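The Haar decomposition described above can be sketched in discrete form; a minimal implementation on a dyadically sampled signal (the sample values are arbitrary):

```python
import numpy as np

def haar_transform(x):
    """Full discrete Haar transform of a length-2^J vector.
    Returns (A0, details): the approximation coefficient and the detail
    coefficients grouped by scale, coarsest first (binary-tree levels)."""
    s = np.asarray(x, dtype=float)
    details = []
    while len(s) > 1:
        a = (s[0::2] + s[1::2]) / np.sqrt(2.0)   # approximation, one scale up
        d = (s[0::2] - s[1::2]) / np.sqrt(2.0)   # details at the current scale
        details.append(d)
        s = a
    details.reverse()                            # scale j = 0, 1, ... order
    return s[0], details

# a toy signal sampled on 2^3 points of I = [0, 1]
x = np.array([4.0, 2.0, 5.0, 5.0, 1.0, 3.0, 0.0, 2.0])
a0, dets = haar_transform(x)
```

Each scale holds 2^j coefficients, matching the binary-tree indexing, and orthonormality of the base shows up as exact energy preservation.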
The time operator domain includes finite sums Σ_{j=0}^{J} Σ_{k=1}^{2^j} D_{j,k}ψ_{j,k}, whose components Σ_{k=1}^{2^j} D_{j,k}ψ_{j,k} constitute detail subspaces D_j wandering by the unilateral shift induced by the Rényi map (13). It generates the time succession of L²(I) ⊖ 1, establishing the multiresolution hierarchy whose main axiom is the shift property

U D_j = D_{j+1}. (18)

Represented through the spectral decomposition T = Σ_j j P_{D_j}, whereby each projector P_{D_j} corresponds to details at a resolution scale, the time operator appears to be a straightforward generalization of a multiresolution hierarchy [8] (p. 107). The axiom (18) is equivalent to the uncertainty relation [T, U] = U, which concerns the definition of time in complex systems (2).
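The equivalence of the shift axiom and the commutation relation can be checked on a detail subspace directly; in the notation above, for f ∈ D_j:

```latex
T U f = (j+1)\, U f = U\,(T + I)\, f
\quad\Longrightarrow\quad
(TU - UT)\, f = U f
\quad\Longleftrightarrow\quad
[T, U] = U ,
```

since U maps D_j onto D_{j+1}, where T acts as multiplication by j + 1.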

Signal Ensembles
In order to generate the evolutionary group, U should be extended to an invertible operator. The natural extension concerns the baker map inducing the bilateral shift U_χ = B ∘ ·, which is invertible [31] (pp. 35-38). To that end, the signal space is extended to L²(I × I), the baker map being

B(x, y) = (2x, y/2) for x < 1/2, B(x, y) = (2x − 1, (y + 1)/2) for x ≥ 1/2. (22)

In that respect, variables of the extended space come to be signal ensembles. On the other hand, L²(I × I) = Σ ⊗ Δ is regarded to be the tensor product of devices and states acting on them. One therefore implements the matrix multiplication in the space of signal ensembles, which results in the representational form considering a wavelet base. The baker map crosses information between coordinates of the domain in such a manner that the first binary digit of x, which has been lost by the Rényi map, becomes the first digit of y. The induced operator crosses between components of the space, whereby devices of the measurement process become states in the next step. A concise discussion of the measurement problem considering signal ensembles is given in the following section. The time operator T_χ of the system whose evolutionary group is generated by U_χ has been explicitly constructed, and the relation (21) is generalized [8] (pp. 47-60). Its projection onto the signal space L²(I) corresponds to the multiresolution hierarchy generated by the Haar wavelet (15). The time operator of any hierarchy is obtained through the conjugation T = CT_χC⁻¹ by an operator C, which transforms the Haar base to the other one. It corresponds to the system whose evolutionary group is generated by Ũ = CU_χC⁻¹, which is also an extension of the operator U. However, it is not a natural extension, in the manner that it is not induced by any pointwise dynamics of the domain I × I. A problem might occur concerning the preservation of positivity, since the approximation operators Σ_{j≤J} P_{D_j} do not preserve it any more [32] (pp. 16-18). The distribution density, however, does not correspond to the state but to its absolute square, which evolves in another manner.
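The digit-crossing mechanism of the baker map can be sketched directly; a minimal pointwise implementation (an illustration only, away from the dyadic boundary x = 1/2):

```python
def renyi(x):
    """Rényi (doubling) map on [0, 1): drops the first binary digit of x."""
    return (2.0 * x) % 1.0

def baker(x, y):
    """Baker map on the unit square: the binary digit of x lost by the
    Rényi map is pushed onto the front of y, making the map invertible."""
    if x < 0.5:
        return 2.0 * x, y / 2.0
    return 2.0 * x - 1.0, (y + 1.0) / 2.0

def baker_inv(x, y):
    """Inverse baker map: pops the leading digit of y back onto x."""
    if y < 0.5:
        return x / 2.0, 2.0 * y
    return (x + 1.0) / 2.0, 2.0 * y - 1.0

# x = 0.75 = 0.11 in binary: its leading digit 1 crosses over to y
assert baker(0.75, 0.0) == (0.5, 0.5)
```

Doubling and halving are exact in binary floating point, so the round trip through the baker map and its inverse reproduces a point exactly.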
The evolution shares the time operator, which is corroborated in the Discussion, and accordingly the change in representation is constructed analogously. The fact that the Haar wavelet is privileged in that respect carries no special significance. The Rényi map (13), which has been extended to the baker one (22), is actually not defined pointwise, since there is no definite value corresponding to the point x = 1/2. The Brouwer continuum is not a pointwise set and therefore does not reduce to the identity x = 1/2, also involving the choice sequences 0.011… and 0.100… in the binary code, which correspond to diverse values of the Rényi map. Since this is a zero-measured domain, there is no obstruction to inducing the operator U considering the topology of L²(I). However, it is not induced by pointwise dynamics, nor is the operator U for any wavelet base. Elaborating on the relation between wavelets and stochastic processes, Antoniou asserted that wavelets are not motivated by any pointwise dynamics of the phase space. He concludes that the ergodic theory is richer than the wavelet one, since the former fundamentally involves an underlying dynamical system of point trajectories [31] (p. 96). Such a remark is irrelevant in regard to the wavelet multiresolution hierarchy, considering that none of them is defined pointwise. The evolutionary operator is not affected at all by the dynamics of particular points, but by that of domains which are continuum powered. The main advantage of the operator mechanics is just the avoidance of single trajectories that concern points of the phase space in order to distinguish a common behavior. The pointwise dynamics is actually an incomplete description of the system, which is analogous to the Gödel theorem [21]. The completion required gives rise to the concept of the point operator acting on the space of continuous signals [28] (pp. 57-65).
The operator U has been extended to Ũ, which acts on signal ensembles, generating the evolutionary group that corresponds to a wavelet multiresolution hierarchy. The time operator T̃, which is the extension of (16), induces a change in representation that should transfigure the group evolution to the semigroup one (3), representing the Markov process [20] (p. 9). The semigroup action concerns blurring of the signal, which is related to the expansion of the spatial domain due to the action of U on the signal space (Figure 7). It extends to diverse operators in the space of signal ensembles, depending on the wavelet that is represented by the time operator. In that manner, the optimality issue arises, which means a multiresolution hierarchy that is best fitted to the ensemble.

Local and Global Complexity
In the poem On the Nature of Things, Lucretius describes not only how things vanish at a distance, but also how they appear to change [33] (p. 156). For instance, distant square towers look rounded. A pair of distinct islands appear to merge into a single one. When the distance is increased, details become generalized and distinctions may merge or vanish [34] (p. xv). The effect concerns the blurring of the signal, wherewith details are successively suppressed. The opposite process, in which there is an emergence of details unfolding the time of a system, is termed self-organization, implying an increase in complexity [7]. The concept originates from Grassberger, who defined statistical complexity as the minimal information required for an optimal prediction of the system's behavior [3]. Crutchfield and Young extended the conception by an accurate definition of the optimal predictor. The causal structure has been established in that manner, relating to the intrinsic computability of a process in terms of the Bernoulli-Turing machine [4].
The optimal base requires minimizing the correlation between scales in a signal ensemble. In that respect, detail coefficients D = (D_{j,k}) are regarded as a joint variable whose component is obtained by applying a state D_{j,k} from Σ to the ensemble form Σ ⊗ Δ. However, such an application should also result in a state of Σ, which means that the energy of a coefficient |D_{j,k}|² corresponds to the distribution density. Random realizations of detail coefficients are therefore distorted measurements of a physical entity whose appearance corresponds to hidden variables S_{j,k} evolving in a stochastic manner. Elaboration of the measurement problem has been left for the following subsection.
In this section, a statistical model of signal ensembles is presented, which has been obtained in the manner of experimental mathematics. The model is based upon the statistical properties of the wavelet transform, among which the most significant one is approximate decorrelation. It claims that correlations between detail coefficients are realized entirely through hidden variables S_{j,k} forming a Markovian tree, due to inheritance along branches which follow the continuum structure (Figure 8). In that manner, the wavelet domain hidden Markov model has been established, which has been extremely useful in a variety of applications, including speech recognition and artificial intelligence [35] (p. 887). Exhausting all correlations in the ensemble, the Markovian tree S = (S_{j,k}) is proven to be a causal structure of the system [36]. It is appropriate to assume statistical stationarity, which means that detail coefficients D_{j,k} and causal variables S_{j,k} are equally distributed within each scale j, independent of the spatial position k. Statistical stationarity of the system enables a reduction in the model parameters, which is the practice known as tying [37]. It is about sharing statistical information between related variables at certain scales, whose distribution parameters are tied to a common value, with the aim of performing a robust estimation. The Baum-Welch algorithm, given an observation from the signal space, usually converges in as few as ten iterations, supposing a locally two-state causal structure of the model [35] (p. 893).
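The generative side of the model can be sketched as follows; a minimal two-state hidden Markov tree sampler, where the transition matrix, the state deviations and the scale-dependent decay are illustrative assumptions rather than the paper's fitted parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_hmt(J, pi1=0.5, A=((0.9, 0.1), (0.2, 0.8)), sigma=(0.1, 1.0)):
    """Sample detail coefficients from a two-state wavelet-domain hidden
    Markov tree: hidden states S_{j,k} in {0 (small), 1 (large)} are
    inherited from parent to two children along the binary tree, and
    D_{j,k} | S_{j,k} = s is a zero-mean Gaussian whose deviation
    sigma[s] * 2^(-j/2) is tied within each scale j."""
    A, sigma = np.asarray(A), np.asarray(sigma)
    states = [np.array([int(rng.random() < pi1)])]       # root state
    for _ in range(1, J):
        parents = np.repeat(states[-1], 2)               # two children per node
        states.append(np.array([rng.choice(2, p=A[s]) for s in parents]))
    details = [rng.normal(0.0, sigma[s] * 2.0 ** (-j / 2))
               for j, s in enumerate(states)]
    return states, details

states, details = sample_hmt(5)
```

Tying shows up in the fact that all nodes at a scale share one transition row and one pair of deviations, which is what makes a robust Baum-Welch estimation feasible.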
The information entropy of local variables, dependent on the scale only, is termed the local complexity, whose increase in the temporal domain represents self-organization. Time unfolds in the sense of the complexity increase, and so it is essential to find the optimal base wherein self-organization is the most prominent. The entropy of the causal structure, termed the global complexity, is proven to be a measure of the representation optimality [36]. The information of an ensemble is decomposed through the canonical relation

H(D) = H(S) + H(D|S),

wherein H(S) is the entropy of the causal variable and the conditional entropy H(D|S) is an irreducible randomness that remains even after all correlations are subsumed. When adding white noise to the signal ensemble, only the extensive term H(D|S) should increase, while the complexity H(S) remains unchanged. The optimal base performs superior denoising since it best respects the self-organization of a system corresponding to the time operator. The multiresolution hierarchy it provides temporally decomposes the ensemble, specifying its significance through a complexity insight. In that regard, multiscale pyramids are proposed to be likely models of visual perception [34] (p. xx). An incisive phenomenology of the fact has been presented by Ruskin, who gave it a complete description [38] (p. 174): Go to the top of Highgate Hill on a clear summer morning at five o'clock, and look at Westminster Abbey. You will receive an impression of a building enriched with multitudinous vertical lines. Try to distinguish one of these lines all the way down from the one next to it: You cannot. Try to count them: You cannot. Try to make out the beginning or end of any of them: You cannot. Look at it generally, and it is all symmetry and arrangement. Look at it in its parts, and it is all inextricable confusion.
Yet, he adamantly insists that the draughtsman should render such a confusion veridically, meaning that the complexity is optimally represented. Rendering like that is carried out in the hierarchical manner, since it is a description of the complex system [5] (p. 477).
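The canonical decomposition of an ensemble's information into complexity and irreducible randomness can be checked numerically; a minimal sketch on a toy joint distribution (the probability values are arbitrary):

```python
import numpy as np

def H(p):
    """Shannon entropy (bits) of a probability array."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# toy joint distribution p(s, d) of a causal variable S and a coefficient D
p = np.array([[0.30, 0.10, 0.00],
              [0.05, 0.25, 0.30]])
pS = p.sum(axis=1)                                       # marginal of S
H_D_given_S = sum(pS[s] * H(p[s] / pS[s]) for s in range(len(pS)))
# chain rule: H(S, D) = H(S) + H(D|S); when the causal states are a
# function of the coefficients, H(S, D) = H(D), which gives the
# decomposition H(D) = H(S) + H(D|S) used in the text
```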

Dynamical Identity
Koenderink has indicated that one is faced with a fundamental and important, though unfortunately ill-understood, aspect of perception [34] (pp. xvii-xx). Having taken a first look at the subject, he admitted a shock at the fact that there existed essentially no science on the topic. The only discipline that considered such phenomena turned out to be cartography [39]. Although there is certainly a lot of science in cartography, its arguably most important aspect has always remained an art conducted largely on intuition. It corresponds to an esthetical criterion relating truth to the original creation [40], which has been termed by Bachelard the poetics of space [41].
Concerning physical reality, Koenderink concludes the same as Mandelbrot did about fractal geometry, i.e., that a complex description of nature is required [42]. The phrase experimental mathematics comes up a lot in the field of chaos, fractals and nonlinear dynamics [1] (p. 158). Fractal signals appear in the spectral decomposition of significant operators, including the one induced by the Rényi map (17). Results like this should be relevant for exploring the wavelet theory and its relationship to stochastic processes [43] (p. 243). The conception is concisely exposed in the book Powers of Ten, which has assigned a significance of the multiresolution hierarchy to such a number [44]. A link between the number ten and multiscaling is the time continuum designed upon the measurement process. The information entropy of the continued fraction expansion (4), evolving by the Gauss map, is π²/(6 log 2), which approximates log 10; this means that each term of the fraction should designate a decimal digit almost certainly [45]. The result was disclosed by Lochs, but it has been subsequently elaborated in the ergodic theory of continued fractions [46][47][48]. However, a distinction between the continued fraction and the binary coding is evident. Scales of the binary hierarchy evolving by the Rényi map correspond to the time that counts, one by one, the squares of Figure 1, while the continued fractions consider all squares of the same size to be one step in the evolution by the Gauss map. In that regard, the dynamical systems are not equivalent and the results are incomparable, since their time operators are diverse. Nevertheless, the importance of the decimal system in coding numbers is also related to the continuum structure, since the real number corresponds to a choice sequence of the continued fraction terms, as well as of the binary ones unfolding in time.
It is easy to see that the Gauss map is equivalent to a power Rⁿ of the Rényi one in the domain of real numbers x whose first term of the continued fraction is n = ⌊1/x⌋. The isomorphism is realized through the question mark function (11).
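The numerical closeness invoked here can be checked directly; a minimal sketch (the variable names are ours):

```python
import math

# Entropy of the Gauss map with respect to the invariant Gauss measure:
# h = pi^2 / (6 ln 2), numerically close to ln 10, so one continued-fraction
# term carries roughly the information of one decimal digit.
h_gauss = math.pi ** 2 / (6 * math.log(2))

# Lochs-type ratio: decimal digits determined per continued-fraction term
digits_per_term = h_gauss / math.log(10)
print(h_gauss, math.log(10), digits_per_term)
```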
The time continuum in that manner appears to be the paradigm of intuitionistic logic, wherein the excluded middle x = y ∨ x ≠ y does not hold, considering a dynamical identity unfolded by choice [49]. Such an identity has also appeared in Jungian psychology, wherein the natural number emerges as a timestamp [50]. It is a feature inherent to complex systems, since the time operator is contingent on the mixing property, which means that the states of the system become indiscernible as it evolves [8] (p. 99).
Since Brouwer defines a discrete structure by the validity of the excluded middle, the complex system physics inevitably falls under the continuum category. In that respect, the time continuum is a categorical skeleton of complex systems.
In terms of intuitionism, the negation of identity = is diversity ≠, whose negation is undiversity ≈, which is a discrete relation. The negation of undiversity ≈ is diversity ≠, and therefore the law of the excluded middle (28) holds in the form x ≈ y ∨ x ≠ y, indicating a discrete structure of the formal logic. It is obtained through negative translation of the intuitionistic one [51], which means that formalism is a reduction of the intuitionistic continuum onto a discrete method. The reduction concerns deterministic computation based upon the Turing machine, which is analogous to the stochastic one using the Bernoulli-Turing machine. In that manner, the concept of statistical complexity is reduced onto the algorithmic one [4].
The discretization due to the double negation of identity makes a pointwise structure based upon undiversity of elements, which also emerges in JavaScript as the legendary cast-to-bool operator !! written in the form of a double negation. In terms of the signal space, it gives rise to the point operator corresponding to the Markov process (24), which needs to be sufficiently smooth in order to transfer the concepts of continuity and differentiation to discrete signals [28] (pp. 57-65). In the time continuum domain, however, all signals are considered continuous since they represent morphisms of the structure [52].

Quantum Measurement
The space of signal ensembles L²(I × I) = Σ ⊗ ∆ is regarded as the tensor product of devices and states acting on them. One therefore implements the matrix multiplication FG(x, y) = ∫₀¹ F(t, y)G(x, t) dt (30), which is a semigroup operation corresponding to the action. Measurement devices ψ j,k from ∆ embed into the space in the form 1 ⊗ ψ j,k , and states D j,k from Σ are embedded as D j,k ⊗ 1. The baker map (22) crosses information between coordinates of the domain in such a manner that the first binary digit of x, which has been lost by the Rényi map, becomes the first digit of y. The induced operator crosses between space components, which is evident in the relation U χ χ = ⟨χ|·⟩ that transforms a device into a state considering the embedding, and likewise for other wavelets. At the beginning of a measurement, there are devices ∆ and states Σ. The interaction between them takes place according to the action of an operator U on Σ ⊗ ∆. Such a procedure is completed as soon as all devices have become states from Σ ⊗ Σ acting on ∆ ⊗ ∆ through the inner product, which means sampling in signal processing terminology [28] (p. 40). In that respect, all states have also become devices that the psychophysics of the observer is involved in. Both states Σ and devices ∆ are therefore embedded into Σ ⊗ Σ = ∆ ⊗ ∆ in the form of projectors P j,k = D j,k ⊗ ψ j,k . It concerns crossing between these options, which is what characterizes quantum measurement.
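The semigroup operation (30) can be illustrated by discretizing kernels on a grid, where the integral becomes an ordinary matrix product up to a factor 1/N. A minimal numerical sketch; the rank-one test kernels a(x)b(y) and c(x)d(y), the grid size, and all names are illustrative assumptions, not part of the original construction:

```python
import numpy as np

N = 400
t = (np.arange(N) + 0.5) / N            # midpoint grid on I = [0, 1]

def compose(F, G):
    """(FG)(x, y) = ∫₀¹ F(t, y) G(x, t) dt, approximated by a Riemann sum.
    Kernels are stored as K[i, j] ≈ K(t[i], t[j]), first index being x."""
    return (G @ F) / N

# Rank-one kernels F(x, y) = a(x) b(y) and G(x, y) = c(x) d(y):
a, b = np.sin(np.pi * t), t**2
c, d = np.cos(np.pi * t), np.ones(N)
F = np.outer(a, b)                      # F[i, j] = a(t_i) b(t_j)
G = np.outer(c, d)

# Analytically, (FG)(x, y) = (∫ a(t) d(t) dt) · c(x) b(y):
FG = compose(F, G)
pred = (a @ d / N) * np.outer(c, b)
```

The agreement of `FG` with `pred` confirms that the discretized operation reproduces the kernel composition defining the action.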
The distribution of a quantum ensemble is given by a probability P → p(P), which maps each projector onto I, such that p(0) = 0, p(1) = 1 and p(∑ᵢ Pᵢ) = ∑ᵢ p(Pᵢ) for mutually orthogonal projectors of the sum [8] (pp. 131-134). According to the Gleason theorem, it corresponds to p(P) = ⟨ρ|P⟩, wherein ρ is the density operator, which is positive semidefinite and Hermitian. It follows that ρ = FF†, wherein F is the root operator having the unit norm ||F|| = 1. The measurement in a wavelet base is represented by the operation M(ρ) = ∑ j,k P j,k ρP j,k , whereby a density is reduced to the sum ∑ j,k |d j,k |² P j,k of projectors multiplied by probabilities |d j,k |² = ⟨ρ|P j,k ⟩ = ⟨P j,k F|P j,k F⟩ = ||D j,k F||² (32), which are equal to the expectations E|D j,k F|². The measurement problem concerns the issue of how such a reduction has taken place. Considering the representational form, the variables |D j,k F|² are interpreted as distribution densities and the reduction is a collapse to the expected values |d j,k |². The process implies the time operator whose eigenfunctions constitute the base ψ j,k (16).
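A finite-dimensional analogue of (32) can be sketched numerically: with the Haar basis of R^8 standing in for the wavelets ψ j,k , a root operator F of unit Hilbert–Schmidt norm yields a density ρ = FF† whose probabilities ⟨ρ|P k⟩ sum to one, and the measurement M(ρ) = ∑ k P k ρ P k reduces ρ to its diagonal in that basis. The dimensions and the random root are illustrative assumptions:

```python
import numpy as np

def haar_matrix(n):
    """Rows form the orthonormal Haar basis of R^(2^n)."""
    H = np.array([[1.0]])
    for _ in range(n):
        top = np.kron(H, [1.0, 1.0])                 # coarse (scaling) part
        bot = np.kron(np.eye(len(H)), [1.0, -1.0])   # detail (wavelet) part
        H = np.vstack([top, bot]) / np.sqrt(2)
    return H

rng = np.random.default_rng(0)
W = haar_matrix(3)                          # 8 x 8 Haar basis
F = rng.standard_normal((8, 8))
F /= np.linalg.norm(F)                      # unit norm, so tr(rho) = 1
rho = F @ F.T                               # density operator rho = F F†
p = np.array([w @ rho @ w for w in W])      # probabilities <rho | P_k> = ||P_k F||²
M = sum(np.outer(w, w) @ rho @ np.outer(w, w) for w in W)   # measurement M(rho)
```

In the Haar coordinates, M(ρ) is exactly the diagonal matrix of the probabilities p, which is the reduction of the density described above.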
Since the density operator ρ = FF† is Hermitian, there is an orthonormal base of its eigenfunctions, which is assumed to be the optimal wavelet base ψ o j,k corresponding to projectors P o j,k . If the root operator F has an optimal wavelet base, the two should be the same, and in that instance there is no reduction of the signal ensemble due to the measurement process. Detail coefficients are represented by the variables D o j,k F = d o j,k ψ o j,k , which are mutually independent, considering their orthogonality. In that manner, the signal ensemble is decorrelated, which suits well the maximization of the statistical complexity. Regarding another base ψ l,m , detail coefficients of the same ensemble involve the overlaps ⟨ψ l,m |ψ o j,k ⟩. Since the basic elements ψ o j,k and ψ l,m are almost entirely localized in the segments [(k−1)/2^j, k/2^j] and [(m−1)/2^l, m/2^l], respectively, the values ⟨ψ l,m |ψ o j,k ⟩ are negligible if these segments do not intersect. The consequence is an approximate decorrelation of the ensemble, which means that the dependence between coefficients predominantly concerns inheritance along branches of the tree representing the continuum structure. The signal ensemble is statistically stationary if detail coefficients are equally distributed within each scale, which means that the eigenvalues d j = d o j,k do not depend on the spatial position k, and both the root F = d(T) and the density ρ = |d(T)|² are operator functions of the optimal time operator, whereby P D o j = ∑ k P o j,k are projectors onto the subspaces D o j that constitute the optimal multiresolution hierarchy.
Discussing quantum theory foundations, von Neumann noted that the measurement process should not be regarded in terms of a temporal evolution [22] (pp. 351-354). The reason for this is the determinism of modern science, which has eliminated time by reducing it to a linear parameter. The elimination makes not only optimal measurements but any measurement impossible, since the process implies the time operator constituting a multiresolution hierarchy. Although the classical formulation of quantum theory leaves no room for such a construction, the time operator is consistent with the Liouville–von Neumann formulation, which elaborates on the evolution of the density operator [8] (pp. 131-134).
If a root F is governed by the operator U, the density ρ = FF† evolves according to the operation Uρ = (UF)(UF)† = UρU⁻¹, wherein U is the unitary operator whose adjoint is U† = U⁻¹. The time operator T of the evolution by U is also relevant to the evolution of densities, since the uncertainty relation (2) is satisfied considering that [T, U]ρ = [T, U]ρU⁻¹ = Uρ. It induces a change in representation Λ = λ(T), which should transfigure the evolutionary group generated by U† into the semigroup (3) generated by W† = ΛU†Λ⁻¹, whose adjoint is the Markov process acting on a density ρ = FF† in the manner of Wρ = (WF)(WF)†. In that respect, W = Λ⁻¹†UΛ† represents the irreversible evolution (24) of the root operator F = ∑ j,k D j,k F ⊗ ψ j,k . The measurement reduces the density operator to the sum ∑ j,k |d j,k |² ψ j,k ⊗ ψ j,k , wherein the probabilities |d j,k |² = E|D j,k F|² are the expected values of the variables |D j,k F|², which are interpreted as distribution densities. In order to consider their evolution, one regards the adjoint root F† = ∑ j,k ψ j,k ⊗ D j,k F and the sum F†F = ∑ j,k D j,k F ⊗ D j,k F, whose components correspond to the density operators ρ j,k . Due to the change in representation, they become densities Λρ j,k of hidden variables S j,k that are governed by the semigroup generator W†. Prigogine discerned that the theory of quantum ensembles should be both complete and probabilistic, whereby the celebrated Einstein-Bohr debate began to take new shapes [17] (p. 255). The link between reversible and irreversible evolution, established due to the time operator, is fundamental for the elucidation of the measurement problem. The process concerns the statistical causality of complex systems, which operates through the Markovian tree S = (S j,k ) of hidden variables that evoke a stochastic computation. It corresponds to an experimental mathematics whose paradigmatic framework is the time continuum of Brouwer.

The Euclidean Paradigm
A paradigm of the measurement process is the commensuration of magnitudes due to the Euclidean algorithm, which is transfigured into the binary code through the question mark function (11). Binary digits c j of the measurement F = d(T) correspond to eigenvalues Fχ j,k = d j χ j,k , wherein one has preferably assumed the Haar eigenbase to represent devices. It is thought to be the root operator of an ensemble that is the embedding of the signal ∑ j c j U^j χ = ∑ j,k 2^{−j/2} c j χ j,k = ∑ j,k d j χ j,k , which makes each digit correspond to an eigenvalue in the manner of d j = 2^{−j/2} c j . The density operator ρ = FF† suggests normalization by the divisor ||F||² = ∑ j 2^{−j} c j , and in that respect the normalized squares 2^{−j} c j /||F||² are probabilities representing the contribution of a digit to the real number. It suits well the expected value of |D j,k F|², that is, the normalized square of a detail coefficient D j,k F = d j χ j,k , which means that the variables |D j,k F|² are interpreted as distribution densities and the reduction concerns a collapse to the expectations |d j |² = E|D j,k F|². In order to elucidate the evolution, one should consider the density operator ρ = ∑ j,k D j,k F ⊗ D j,k F, which is the sum of tensor squares corresponding to the densities ρ j,k = χ j,k ⊗ χ j,k . They evolve according to Uρ j,k = ρ j+1,k + ρ j+1,k+2^j , which is reminiscent of the density evolution by the Rényi map, Uχ² j,k = ½ χ² j+1,k + ½ χ² j+1,k+2^j . The detail coefficient D j,k F diverges from zero only in the range of the variable distributed by χ² j,k . This relates to a hidden variable evolving in a stochastic manner, due to a change in representation that should delocalize distribution densities [21]. The transformation Λ corresponds to describing a system in novel terms of elementary events that cover both the states in which the detail coefficient has a nonzero value and those in which it does not.
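The normalization above can be made concrete: for x ∈ (0, 1) the weights 2^{−j} c j of its binary digits, divided by ||F||² = ∑ j 2^{−j} c j = x, form a probability distribution describing each digit's contribution to the number. A small Python sketch; the function name and the depth cutoff (imposed by floating-point precision) are illustrative assumptions:

```python
def digit_probabilities(x, depth=40):
    """Probabilities 2^-j c_j / sum_j 2^-j c_j for the binary digits c_j of x."""
    c, y = [], x
    for _ in range(depth):
        y *= 2                      # Rényi (doubling) map exposes the next digit
        c.append(int(y))
        y -= int(y)
    weights = [cj / 2**(j + 1) for j, cj in enumerate(c)]
    norm = sum(weights)             # truncation of ||F||^2 = x
    return [w / norm for w in weights]

p = digit_probabilities(0.625)      # 0.625 = 0.101 in binary
# p[0] = (1/2)/0.625 = 0.8 and p[2] = (1/8)/0.625 = 0.2; all other entries vanish
```

The exponential decay of the weights with the scale j is the same decay invoked later for the Fechner law.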
In terms of the novel description, one speaks of irreversible processes which concern an inheritance of states in the temporal domain constituted by the time operator of a wavelet. Considering that the measurement has been performed in the optimal base, detail coefficients are mutually independent. If it had been performed suboptimally, such an inheritance with respect to the time operator would have presented a causal structure that suggests correlation of detail coefficients.
The Euclidean algorithm corresponds to the optimal measurement, which concerns the fact that the signal ensemble is a function of the time operator T = ∑ j,k jP j,k . The process is represented by the operation M(ρ) = ∑ j,k P j,k ρP j,k , of which the density ρ = |d(T)|² is an invariant. One should adopt a conception of the Euclidean paradigm that concerns ensembles invariant with regard to such an operation. Such ensembles are therefore operator functions of the optimal time, which means an equal distribution of detail coefficients within each scale. Respecting that, any measurement process is represented by M(FF†) = ∑ j 2^{−j} E|P D j F|² P D j , which actually comes down to the application of the projectors P D j = ∑ k P j,k to the root operator of a signal ensemble.
Supposing the Haar multiresolution hierarchy, projectors evolve in the manner of P D j+1 = U χ P D j U χ ⁻¹, which makes them reducible to the measurement operator P 0 that corresponds to a primary device χ 0 = 1 ⊗ χ whose evolution χ j = U χ ^j χ 0 generates the base ∏ j∈(j 1 <···<j n ) χ j of signal ensembles [8] (pp. 29-32). Each element χ ȷ⃗ of the base is specified by an increasing sequence of integers ȷ⃗ = (j 1 < · · · < j n ), and it evolves as U χ χ ȷ⃗ = χ ȷ⃗+1 , wherein ȷ⃗ + 1 = (j 1 + 1 < · · · < j n + 1). The measurement operator P 0 should fix the element χ ȷ⃗ = χ j 1 · · · χ j n if it starts with χ j 1 = χ 0 and annihilate it if it does not. The operators P D j = U χ ^j P 0 U χ ^{−j} first of all imply the process U χ ^{−j} χ ȷ⃗ = χ ȷ⃗−j , due to which some states have become devices. The measurement operator P 0 annihilates all devices except the primary one χ 0 . The terminal step concerns the evolution U χ ^j χ ȷ⃗ = χ ȷ⃗+j , in which some devices become states. In that respect, crossing between these options through the evolutionary operator is fundamental for the multiresolution hierarchy.
Tracking the evolution of an ensemble requires its decomposition into elements of the base, which evolve in a common manner. Such a procedure corresponds to the untangling of entangled ensembles, since each element χ ȷ⃗ = ∏ 0≤j∈ȷ⃗ χ j · ∏ 0>j∈ȷ⃗ χ j is the tensor product of a state and a device. The entanglement does not communicate between states and devices, which is an implementation of the no-signalling principle. The only communication in that respect occurs through their crossing due to the action of the evolutionary operator on signal ensembles.

Psychophysical Parallelism
Apart from the temporal evolution, the process comes down to the measurement projector P 0 , which annihilates signal ensembles out of a display, determining the boundary between states and devices, which is arbitrary to a very large extent. The autoduality of the signal space, representing both states and devices concurrently, concerns the principle of psychophysical parallelism, as noticed by von Neumann [22] (p. 420). The problem is that the principle is violated so long as it is not demonstrated that the display has been placed in an arbitrary manner, which is achieved by crossing due to the action of an evolutionary operator. In that regard, the projector evolution P D j+1 = U χ P D j U χ ⁻¹ corresponds to its displacement by designating another device as the primary one. Devices of the measurement are continually crossing into states, and the concept of psychophysics is used in order to transcend any separation between the two. Von Neumann made a reference to Bohr, who was the first to point out that the dual description of quantum theory relates to the principle of psychophysical parallelism [22] (p. 207). Although Bohr never mentioned it in print, he adopted Fechner's psychophysics as taught by his mentor Harald Høffding [53] (pp. 244-245). The most significant source for the psychophysical parallelism of Fechner is the foreword and introduction of the book Elements of Psychophysics [54] (pp. vii-xiii, 1-20). His attitude is termed the identity view, since the observer is not to be considered a conglomeration of two substances but one single entity. The outer psychophysics, which is a link between sensation and stimulation, is realized through the neuroesthetical computation, which relates sensation and neural activity. That relation is regarded by Fechner as the inner psychophysics [40] (pp. 2147-2149).
An important consequence of von Neumann's solution to the measurement problem is that irreversibility takes place in the presence of the observer's mind, which seems to play an active role in the process. The only manner to make such an unpleasant situation compatible with psychophysical parallelism is to switch into the inner psychophysics through a change in representation, which should elucidate the irreversible evolution of the causal variable. In that regard, the self-organization of the time continuum appears in relation to the entropy production that characterizes a measurement [22] (pp. 398-416). The optimal measurement corresponds to the most significant increase in the information entropy, considering that the statistical complexity is maximized.
The outer psychophysical information of a signal ensemble is independent of the wavelet base, since H(CD) = H(D) + log |det C| = H(D) for any unitary matrix C designating the coordinate transformation related to a base substitution. The canonical relation (27) separates the inner psychophysical information H(S) contained in the causal variable from an irreducible randomness H(D|S) having the noise property [40] (p. 2150). An innate component of the wavelet domain hidden Markov model is the denoising procedure, which has proven advantageous over other methods [35] (pp. 894-895). It is performed in a superior manner by the use of the optimal base, and in that regard the inner psychophysics corresponds to such a denoising procedure [36] (pp. 4-5). The optimal measurement is therefore related to recognition of the causal structure that should maximize the information it contains. The process is represented by the time operator constituting an optimal multiresolution hierarchy of the signal space. Statistical properties of the wavelet domain hidden Markov model reproduce the Fechner law that relates the outer psychophysical stimulus to the inner psychophysical sensation scale [40] (p. 2150). The logarithmic dependence between them is a consequence of the exponential decay concerning detail coefficients, which applies as well to the Euclidean paradigm of signal ensembles. It has been demonstrated that eigenvalues d j = 2^{−j/2} c j correspond to binary digits c j of a real number. The exponential decay should follow from the fact that the digits almost certainly meet the uniform distribution [1] (pp. 151-153). Some form of the Fechner law is therefore satisfied by almost all ensembles of the Euclidean paradigm. Its formulation in terms of statistical causality is a completion of former studies in the area concerned [55].
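The base-independence H(CD) = H(D) + log|det C| = H(D) can be verified numerically for a Gaussian ensemble, whose differential entropy is ½ log((2πe)^n det Σ); an orthogonal transformation leaves det Σ unchanged. A sketch under that Gaussian assumption, with illustrative dimensions and a random orthogonal matrix standing in for C:

```python
import numpy as np

def gaussian_entropy(S):
    """Differential entropy of N(0, S): 0.5 * log((2*pi*e)^n * det S)."""
    n = len(S)
    return 0.5 * (n * np.log(2 * np.pi * np.e) + np.linalg.slogdet(S)[1])

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n))
Sigma = A @ A.T + n * np.eye(n)                    # a positive-definite covariance
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # an orthogonal C, |det C| = 1
h_D = gaussian_entropy(Sigma)                      # entropy of D
h_CD = gaussian_entropy(Q @ Sigma @ Q.T)           # entropy of CD, equal to h_D
```

Since the entropy is blind to the choice of base, only the separation (27) into causal and noise components, not the base itself, carries the inner psychophysical information.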

Conclusions
The main advantage of the operator mechanics is that it avoids the concept of single trajectories, which concern points of the phase space, in order to distinguish a common behavior. This has been demonstrated by operators corresponding to the evolution of wavelets, which are not induced by any pointwise dynamics. A wavelet multiresolution hierarchy gives rise to the evolutionary group acting on the space of signal ensembles. Due to the time operator, the group action comes down to the semigroup one, which is a blurring related to the expansion of the spatial domain.
According to Brouwer's view, time is a primordial intuition that is the base of conscious life. Mathematics is regarded as the paradigm of self-organization, i.e., an intellection of increasingly complex features. In that respect, the main structure is the time continuum, which is regarded to be the categorical skeleton of complex systems. The dynamical identity it implies is unfolded by choice, similar to that of Jungian psychology, wherein the natural number is found to be a timestamp.
The complex description of nature following an evolution of the continuum is designed by fractal geometry, wherewith time is established in terms of the multiresolution hierarchy. The wavelet domain hidden Markov model, considering the statistics of continuous signals, has been extremely useful in a variety of applications. It is obtained in the manner of experimental mathematics, which makes the time continuum a paradigmatic framework for such an activity. Referring to Chaitin's remark from the beginning of the paper, one concludes that another generation of mathematics has arrived, aligned with Brouwer's method.