Old Game, New Rules: Rethinking The Form of Physics

We investigate the modeling capabilities of sets of coupled classical harmonic oscillators (CHO) in the form of a modeling game. The application of simple but restrictive rules of the game leads to conditions for an isomorphism between Lie algebras and real Clifford algebras. We show that the correlations between two coupled classical oscillators find their natural description in the Dirac algebra and allow us to model aspects of special relativity, inertial motion, electromagnetism and quantum phenomena including spin in one go. The algebraic properties of Hamiltonian motion of low-dimensional systems can generally be related to certain types of interactions and hence to the dimensionality of emergent space-times. We describe the intrinsic connection between the phase space volumes of a 2-dimensional oscillator and the Dirac algebra. In this version of a phase space interpretation of quantum mechanics the (components of the) spinor wave function in momentum space are abstract canonical coordinates, and the integrals over the squared wave function represent second moments in phase space. The wave function in ordinary space-time can be obtained via Fourier transformation. Within this modeling game, 3+1-dimensional space-time is interpreted as a structural property of electromagnetic interaction. A generalization selects a series of Clifford algebras of specific dimensions with similar properties, specifically also 10- and 26-dimensional real Clifford algebras.


I. INTRODUCTION
D. Hestenes had the joyful idea to describe physics as a modeling game 1 . We intend to play a modeling game with (ensembles of) classical harmonic oscillators (CHO). The CHO is certainly one of the most discussed and analyzed systems in physics and one of the few exactly solvable problems. One would not expect any substantially new discoveries related to this subject. Nevertheless there are aspects that are less well known than others. One of these aspects concerns the transformation group of the symplectic transformations of n coupled oscillators, Sp(2n). We invite the reader to join us in playing "a modeling game" and to discover some fascinating features related to possible reinterpretations of systems of two (or more) coupled oscillators. We will show that special relativity can be reinterpreted as a transformation theory of the second moments of the abstract canonical variables of coupled oscillator systems 1 . We extend the application beyond pure Lorentz transformations (LTs) and show that the Lorentz force can be reinterpreted in terms of the second moments of two coupled oscillators in proper time. Lorentz transformations can be modeled as symplectic transformations 5 . We shall show how Maxwell's equations find their place within the game.
a) Electronic mail: christian-baumgarten@gmx.net
1 The connection of the Dirac matrices to the symplectic group has been mentioned by Dirac in Ref. 3. For the connection of oscillators and Lorentz transformations (LTs) see also the papers of Kim and Noz 4-6 and references therein. The use of CHOs to model quantum systems has been described recently, for instance, by Briggs and Eisfeld 7.
The motivation for this game is to show that many
aspects of modern physics can be understood on the basis of the classical notions of harmonic oscillation if these notions are appropriately reinterpreted. In Sec. II we introduce the rules of our game and in Sec. III the algebraic notions of the Hamiltonian formalism. In Sec. IV we describe how geometry emerges from coupled oscillator systems; in Sec. V we describe the use of symplectic transformations and introduce the Pauli and the Dirac algebra. In Sec. VI we introduce a physical interpretation of oscillator moments and in Sec. VIII we relate the phase space of coupled oscillators to the real Dirac algebra. Sec. IX contains a short summary.

II. THE RULES OF THE GAME
The first rule of our game is the principle of reason (POR): No distinction without reason - we should not add or remove something specific (an asymmetry, a concept, a distinction) from our model without having a clear and explicit reason. If there is no reason for a specific asymmetry or choice, then all possibilities are considered on an equal footing. The second rule is the principle of variation (POV): We postulate that change is immanent to all fundamental quantities in our game. From these two rules we take that the mathematical object of our theory is a list (n-tuple) of quantities (variables) ψ, each of which varies at all times. The third rule is the principle of objectivity (POO): Any law within this game refers to measurements, defined as comparisons of quantities (object properties) with other object properties of the same type (i.e. unit). Measurements require rulers. A measurement standard (ruler) has to be objective, i.e. based on properties of the objects of the game. This apparent self-reference is unavoidable, as it models the real situation of physics as an experimental science. Since all fundamental objects (quantities) in our model vary at all times, the only option for constructing a constant quantity that might serve as a ruler is given by constants of motion (COM). Hence the principle of objectivity means that measurement standards are based on constants of motion.
This third rule implies that the fundamental variables cannot be measured directly; only functions of the fundamental variables with the dimension (unit) of a COM can be. Thus the model has two levels: the level of the fundamental variable list ψ, which is experimentally not directly accessible, and a level of observables, which are (as we shall argue) even moments of the fundamental variables.
A. Discussion of the Rules

E.T. Jaynes wrote that "Because of their empirical origins, QM and QED are not physical theories at all. In contrast, Newtonian celestial mechanics, Relativity, and Mendelian genetics are physical theories, because their mathematics was developed by reasoning out the consequences of clearly stated physical principles which constrain the possibilities". And he continues: "To this day we have no constraining principle from which one can deduce the mathematics of QM and QED; [...] In other words, the mathematical system of the present quantum theory is [...] unconstrained by any physical principle" 9 . This remarkably harsh criticism of quantum mechanics raises the question of what we consider to be a physical principle. Are the rules of our game physical principles? We believe that they are not substantial physical principles but formal first principles: they are preconditions of a sensible theory. They contain no immediate physical content, but they define the form or the idea of physics.
It is to a large degree immanent to science and specifically to physics to presuppose the existence of reason: Apples do not fall down by chance - there is a reason for this tendency. Usually this belief in reason implies the belief in causality, i.e. that we can also (at least in principle) explain why a specific apple falls at a specific time; but practically this latter belief can rarely be confirmed experimentally and therefore remains to some degree metaphysical. Thus, if, as scientists, we postulate that things have a reason, then this is no physical principle but a precondition, a first principle.
The second rule, the principle of variation (POV), is specific to the form (or idea) of physics: it is the sense of physics to recognize patterns of motion and to predict the future. Therefore the notion of time in the form of change is indeed immanent to any physical description of reality. The principle of objectivity is immanent to the very idea of physics: A measurement is the comparison of properties of objects with compatible properties of reference objects, i.e. it requires "constant" rulers. Hence the rules of the game are to a large degree unavoidable: They follow from the very form of physics, and therefore certain laws of physics are not substantial results of a physical theory. For instance, a consistent "explanation" of the stability of matter is impossible, as we have already presumed it within the idea of measurement. More precisely: if this presumption does not follow within the framework of a physical theory, then the theory is fundamentally flawed, since it cannot even reproduce its own presumptions.
Einstein wrote with respect to relativity that "It is striking that the theory (except for the four-dimensional space) introduces two kinds of things, i.e. (1) measuring rods and clocks, (2) all other things, e.g., the electromagnetic field, the material point, etc. This, in a certain sense, is inconsistent; strictly speaking, measuring rods and clocks should emerge as solutions of the basic equations [...], not, as it were, as theoretically self-sufficient entities." 10 . It may then be the more surprising that the stability of matter cannot be obtained from classical physics, as remarked by Elliott H. Lieb: "A fundamental paradox of classical physics is why matter, which is held together by Coulomb forces, does not collapse" 11 . This single sentence seems to rule out the possibility of a fundamental classical theory and uncovers the uncomfortable situation of theoretical physics today: Despite the overwhelming experimental and technological success, there is a deep-seated confusion concerning the theoretical foundations. Our game is therefore a meta-experiment. The primary goal is not to find "new" laws of nature or new experimental predictions; rather it is a conceptual "experiment" that aims to further develop our understanding of the consequences of principles: which ones are really required to derive central "results" of contemporary physics. In this short essay final answers cannot be given, but maybe some new insights are possible.

B. What about Space-Time?
A theory has to make the choice between postulate and proof. If a 3+1 dimensional space-time is presumed, then it cannot be proven within the same theoretical framework - or at least the value of such a proof remains questionable. This is a sufficient reason to refuse any postulate concerning the dimensionality of space-time. Another - even stronger - reason to avoid a direct postulate of space-time and its geometry has been given above: The fundamental variables that we postulated above cannot be directly measured. This excludes space-time coordinates as primary variables (which can be directly measured), and with them almost all other a priori assumed concepts like velocity, acceleration, momentum, energy and so on. At some point these concepts certainly have to be introduced, but we suggest an approach to the formation of concepts that differs from the Newtonian axiomatic method. The POR does not allow us to introduce a distinction of the fundamental variables into coordinates and momenta without reason. Therefore we are forced to use an interpretational method, which one might summarize as function follows form. We shall first derive equations and then we shall interpret the equations according to some formal criteria. This implies that we have to refer to already existing notions if we want to identify quantities according to their appearance within a certain formalism. The consequence for the game is that we have to show how geometrical notions arise: If we do not postulate space-time, then we apparently have to construct it.
A consequence of our conception is that both objects and fields have to be identified with dynamical structures, as there is simply nothing else available. This fits nicely into the framework of structure-preserving (symplectic) dynamics that follows from the described principles.

III. THEORY OF SMALL OSCILLATIONS
In this section we shall derive the theory of coupled oscillators from the rules of our game. According to the POO there exists a function (COM) H(ψ) such that 2:

dH/dt = Σ_k (∂H/∂ψ_k) ψ̇_k = 0,    (1)

or in vector notation:

dH/dt = (∇_ψ H)^T ψ̇ = 0.    (2)

The simplest solution is given by an arbitrary skew-symmetric matrix X:

ψ̇ = X ∇_ψ H.    (3)

Note that it is only the skew-symmetry of X which ensures that it is always a solution to Eqn. 2 and which ensures that H is constant. If we now consider a state vector ψ of dimension k, then there is a theorem in linear algebra which states that for any skew-symmetric matrix X there exists a non-singular matrix Q such that we can write 13:

Q^T X Q = diag(η0, η0, ..., η0, 0, ..., 0),    (4)

where η0 is the matrix

η0 = [[0, 1], [-1, 0]].    (5)

If we restrict ourselves to orthogonal matrices Q, then we may still write

Q^T X Q = diag(λ_1 η0, λ_2 η0, ..., λ_m η0, 0, ..., 0)    (6)

with λ_k ≥ 0. In both cases we may leave away the zeros, since they correspond to non-varying variables, which is in conflict with the second rule of our modeling game. Hence k = 2n must be even and the square matrix X has dimension 2n × 2n. As we have no specific reason to assume asymmetries between the different degrees of freedom (DOF), we have to choose all λ_k = 1 in Eqn. 6 and return to Eqn. 4 without zeros, and define the block-diagonal so-called symplectic unit matrix (SUM) γ0:

γ0 = diag(η0, η0, ..., η0).    (7)

These few basic rules thus lead us directly to Hamiltonian mechanics: Since the state vector has even dimension and due to the form of γ0, we can interpret ψ as an ensemble of n classical DOF, each DOF represented by a canonical pair of coordinate and momentum: ψ = (q_1, p_1, q_2, p_2, ..., q_n, p_n)^T. In this notation and after the application of the transformation Q, Eqn. 3 can be written in the form of the Hamiltonian equations of motion (HEQOM):

q̇_i = ∂H/∂p_i
ṗ_i = -∂H/∂q_i.    (8)

The validity of the HEQOM is of fundamental importance as it allows for the use of the results of Hamiltonian mechanics, of statistical mechanics and of thermodynamics - but without the intrinsic presupposition that the q_i have to be understood as positions in real space and the p_i as the corresponding canonical momenta.
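The construction above is easy to check numerically. The following sketch (in numpy; the choice n = 2 and all variable names are our own illustrative assumptions, not part of the paper) builds the block-diagonal γ0 of Eq. 7 and verifies that ψ̇ = γ0 ∇_ψ H reproduces the HEQOM and conserves H:

```python
import numpy as np

# Symplectic unit matrix (SUM) gamma_0 for n degrees of freedom:
# block-diagonal with eta_0 = [[0, 1], [-1, 0]], matching the state
# ordering psi = (q1, p1, ..., qn, pn)^T.
def gamma0(n):
    eta0 = np.array([[0., 1.], [-1., 0.]])
    return np.kron(np.eye(n), eta0)

n = 2
g0 = gamma0(n)

# gamma_0 is skew-symmetric, orthogonal and squares to -1:
assert np.allclose(g0.T, -g0)
assert np.allclose(g0 @ g0.T, np.eye(2 * n))
assert np.allclose(g0 @ g0, -np.eye(2 * n))

# psi-dot = gamma_0 grad(H) reproduces Hamilton's equations.
# Illustrative quadratic H = psi^T A psi / 2 with random symmetric A:
rng = np.random.default_rng(0)
A = rng.normal(size=(2 * n, 2 * n))
A = (A + A.T) / 2
psi = rng.normal(size=2 * n)
gradH = A @ psi
psidot = g0 @ gradH

# For each canonical pair (q_i, p_i) = (psi[2i], psi[2i+1]):
for i in range(n):
    assert np.isclose(psidot[2 * i], gradH[2 * i + 1])    # qdot =  dH/dp
    assert np.isclose(psidot[2 * i + 1], -gradH[2 * i])   # pdot = -dH/dq

# H is a constant of motion: dH/dt = grad(H) . psi-dot = 0
assert np.isclose(gradH @ psidot, 0.0)
print("gamma_0 and HEQOM checks passed")
```

The ordering (q_1, p_1, ..., q_n, p_n) is the convention chosen in Eq. 7; the alternative ordering mentioned below leads to a different but equivalent γ0.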
This is legitimate as the theory of canonical transformations is independent of any specific physical interpretation of what the coordinates and momenta represent physically. The canonical pairs are coordinates q_i, p_i in an abstract phase space, and they can be interpreted as canonical coordinates and momenta due to the form of the HEQOM. The choice of the specific form of γ0 is for n > 1 DOF not unique. It could for instance be written as

γ0 = [[0, 1_n], [-1_n, 0]],    (9)

which corresponds to a state vector of the form ψ = (q_1, ..., q_n, p_1, ..., p_n)^T, or by

γ0 = diag(η0, η0, ..., η0),    (10)

as in Eq. 7. Therefore we are forced to make an arbitrary choice 3. But in all cases the SUM γ0 must be skew-symmetric and have the following properties:

3 But we should keep in mind that other "systems" with a different choice are possible. If we cannot exclude their existence, then they should exist as well. With respect to the form of the SUM, we suggest that different "particle" types (different types of fermions, for instance) have a different SUM.
γ0^T = γ0^{-1} = -γ0,  γ0² = -1,    (11)

which also implies that γ0 is orthogonal and has unit determinant. Note also that all eigenvalues of γ0 are purely imaginary. However, once we have chosen a specific form of γ0, we have specified a set of canonical pairs (q_i, p_i) within the state vector. This choice fixes the set of possible canonical (structure preserving) transformations. Now we write the Hamiltonian H(ψ) as a Taylor series, remove the rule-violating constant term and cut the series after the second-order term. We do not claim that higher terms may not appear, but we delay the discussion of higher orders to a later stage. All this is well known in the theory of small oscillations. There is only one difference to the conventional treatment: We have no direct macroscopic interpretation for ψ and, following our first rule, we have to write the second-order Hamiltonian H(ψ) in the most general form:

H(ψ) = ½ ψ^T A ψ,    (12)

where A is only restricted to be symmetric, as all non-symmetric terms do not contribute to H. Since it is not unlikely to find more than a single constant of motion in systems with multiple DOFs, we distinguish systems with a singular matrix A from those with a positive or negative definite matrix A. Positive definite matrices are favoured in the sense that they allow us to identify H with the amount of a substance or an amount of energy 4. Before we try to interpret the elements of A, we will explore some general algebraic properties of the Hamiltonian formalism. If we plug Eqn. 12 into Eqn. 3, then the equations of motion can be written in the general form

ψ̇ = γ0 A ψ ≡ F ψ.    (13)

The matrix F = γ0 A is the product of the symmetric (positive semi-definite) matrix A and the skew-symmetric matrix γ0. As known from linear algebra, the trace of such products is zero:

Tr(F) = Tr(γ0 A) = 0.    (14)

It is obvious that pure harmonic oscillation of ψ is given for matrices F that have purely imaginary eigenvalues. Furthermore these are the only stable solutions 14. Note that Eq. 13 may represent a tremendous number of different types of systems: all linearly coupled systems in any dimension, chains or d-dimensional lattices of linearly coupled oscillators, and wave propagation 5. One quickly derives from the properties of γ0 and A that

F^T = (γ0 A)^T = -A γ0 = γ0 F γ0.    (15)

4 It is immanent to the concept of substance that it is understood as something positive semi-definite.
5 However, the linear approximation does not allow for the description of the transport of heat.
Since any square matrix can be written as the sum of a symmetric and a skew-symmetric matrix, it is natural to also consider the properties of the product of γ0 with a skew-symmetric real square matrix B. If C = γ0 B, then

C^T = (γ0 B)^T = -B γ0 = -γ0 C γ0.    (16)

Symmetric 2n × 2n matrices contain 2n(2n + 1)/2 different matrix elements and skew-symmetric ones 2n(2n - 1)/2 elements, so that there are ν_s linearly independent symplices,

ν_s = n(2n + 1),    (17)

and ν_c cosymplices, with

ν_c = n(2n - 1).    (18)

In the theory of linear Hamiltonian dynamics, matrices of the form of F are known as "Hamiltonian" or "infinitesimal symplectic" matrices and those of the form of C as "skew-Hamiltonian" matrices. This convention is a bit odd, as F does not appear in the Hamiltonian and is in general not symplectic. Furthermore, the term "Hamiltonian matrix" has a different meaning in quantum mechanics - more in analogy to A. But it is known that this type of matrix is closely connected to symplectic matrices, as every symplectic matrix is a matrix exponential of a matrix of type F 14. We consider the matrices defined by Eqn. 15 and Eqn. 16 as too important and fundamental to have no meaningful and unique names: Therefore we speak of a symplex (plural symplices) if a matrix satisfies Eqn. 15, and of a cosymplex if it satisfies Eqn. 16.
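The defining conditions of symplices and cosymplices (Eqn. 15 and Eqn. 16), the vanishing trace of F, and the stability statement can be checked numerically. The sketch below (again n = 2; the random test matrices are our own illustrative choice) also confirms that F = γ0 A has purely imaginary eigenvalues when A is positive definite:

```python
import numpy as np

# F = gamma_0 A with A symmetric is a symplex,
# C = gamma_0 B with B skew-symmetric is a cosymplex.
eta0 = np.array([[0., 1.], [-1., 0.]])
g0 = np.kron(np.eye(2), eta0)
rng = np.random.default_rng(1)

M = rng.normal(size=(4, 4))
A = (M + M.T) / 2                 # symmetric part
B = (M - M.T) / 2                 # skew-symmetric part
F = g0 @ A                        # symplex
C = g0 @ B                        # cosymplex

# Defining conditions: F^T = g0 F g0 and C^T = -g0 C g0
assert np.allclose(F.T, g0 @ F @ g0)
assert np.allclose(C.T, -g0 @ C @ g0)

# The trace of a symplex vanishes
assert np.isclose(np.trace(F), 0.0)

# For positive definite A the eigenvalues of F = g0 A are purely
# imaginary (stable harmonic oscillation):
A_pd = M @ M.T + 1e-3 * np.eye(4)
ev = np.linalg.eigvals(g0 @ A_pd)
assert np.allclose(ev.real, 0.0, atol=1e-8)

# Counting for n = 2: symmetric 4x4 matrices have n(2n+1) = 10 free
# elements (symplices), skew-symmetric ones n(2n-1) = 6 (cosymplices).
print("symplex / cosymplex checks passed")
```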

A. Symplectic Motion and Second Moments
So what is a symplectic matrix anyway? The concept of symplectic transformations is a specific formulation of the theory of canonical transformations. Consider we define a new state vector (or new coordinates) φ(ψ), with the additional requirement that the transformation is reversible. Then the Jacobian matrix of the transformation is given by

M_{ij} = ∂φ_i/∂ψ_j,    (19)

and the transformation is said to be symplectic if the Jacobian matrix satisfies 14

M γ0 M^T = γ0.    (20)

Let us see what this implies in the linear case φ = M ψ:

φ̇ = M ψ̇ = M F ψ = (M F M^{-1}) φ ≡ F̃ φ,    (21)

and by the use of Eqn. 20 one finds that F̃ is still a symplex:

F̃^T = γ0 F̃ γ0.    (22)

Hence a symplectic transformation is first of all a similarity transformation, but secondly it preserves the structure of all involved equations. Therefore the transformation is said to be canonical or structure preserving. The distinction between canonical and non-canonical transformations can therefore be traced back to the skew-symmetry of γ0 and the symmetry of A - both of them consequences of the rules of our physics modeling game.
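These statements can be illustrated with a short numerical sketch (our own minimal example; expm_taylor is a hand-rolled matrix exponential, adequate for the small matrices used here): the matrix exponential of a symplex satisfies the symplectic condition Eqn. 20, and the similarity-transformed generator remains a symplex:

```python
import numpy as np

def expm_taylor(X, terms=40):
    # plain Taylor series of the matrix exponential; fine for small X
    Y = np.eye(X.shape[0])
    T = np.eye(X.shape[0])
    for k in range(1, terms):
        T = T @ X / k
        Y = Y + T
    return Y

eta0 = np.array([[0., 1.], [-1., 0.]])
g0 = np.kron(np.eye(2), eta0)
rng = np.random.default_rng(3)

def random_symplex():
    A = rng.normal(size=(4, 4))
    return g0 @ (A + A.T) / 2

# a symplectic matrix, constructed as exponential of a symplex
M = expm_taylor(0.3 * random_symplex())

# symplectic condition (Eqn. 20): M g0 M^T = g0
assert np.allclose(M @ g0 @ M.T, g0)

# the transformed generator F-tilde = M F M^{-1} is still a symplex
F = random_symplex()
Ft = M @ F @ np.linalg.inv(M)
assert np.allclose(Ft.T, g0 @ Ft @ g0)
print("symplectic transformation checks passed")
```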
Recall that we argued that the matrix A should be symmetric because skew-symmetric terms do not contribute to the Hamiltonian. Let us have a closer look at what this means. Consider the matrix of second moments Σ that can be built from the variables ψ:

Σ ≡ ⟨ψ ψ^T⟩,    (23)

in which the angles indicate some (yet unspecified) sort of average. The equation of motion of this matrix is given by

Σ̇ = ⟨ψ̇ ψ^T⟩ + ⟨ψ ψ̇^T⟩.    (24)

Now, as long as F does not depend on ψ, we obtain

Σ̇ = F Σ + Σ F^T,    (25)

so that with the matrix S ≡ Σ γ0 this yields:

Ṡ = F S - S F.    (26)

This equation is a key to our theory. Firstly, it contains only observables: though we started with unmeasurable abstract quantities, we reach here a stage in which the original quantities ψ are completely hidden. In accelerator physics this equation is called the envelope equation, as it describes the dynamics of the so-called "envelope" of the beam, which is nothing but a parametrization by statistical (here: second) moments. For completeness we introduce the "adjoint" spinor ψ̄ ≡ ψ^T γ0, so that we may write

S = ⟨ψ ψ̄⟩.    (27)

Note that S is also a symplex. The matrix S (i.e. all second moments) is constant iff S and F commute. Now we define an observable to be an operator O with a (potentially) non-vanishing expectation value, defined by:

⟨O⟩ ≡ ⟨ψ̄ O ψ⟩ = ⟨ψ^T γ0 O ψ⟩.    (28)

Thus, if the product γ0 O is not skew-symmetric, i.e. if O contains a product of γ0 with a symmetric matrix B, then the expectation value is potentially non-zero:

⟨γ0 B⟩ = ⟨ψ^T γ0 γ0 B ψ⟩ = -⟨ψ^T B ψ⟩.    (29)

This means that only the symplex part of an operator is "observable", while cosymplices yield a vanishing expectation value. Hence Eq. 26 delivers the blueprint for the general definition of observables. Furthermore we find in the last line the constituting equation for Lax pairs 17. Peter Lax has shown that for such pairs of operators S and F that obey Eqn. 26 there are the following constants of motion,

I_k = Tr(S^k),    (30)

for arbitrary integer k > 0. Since S is a symplex and therefore by definition the product of a symmetric matrix and the skew-symmetric γ0, Eqn. 30 is always zero and hence trivially true for k = 1.
The same is true for any odd power of S, as it can be easily shown that any odd power of a symplex is again a symplex (see Eq. 36), so that the only non-trivial general constants of motion correspond to even powers of S, which implies that all observables are functions of even powers of the fundamental variables.
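Both claims - the conservation of Tr(S^k) along the flow of Eqn. 26 and the vanishing of odd traces of a symplex - can be illustrated numerically. In the sketch below (our own assumptions: n = 2, S built from a random moment matrix Σ = ΨΨ^T) the formal solution S(t) = e^{Ft} S(0) e^{-Ft} is used:

```python
import numpy as np

def expm_taylor(X, terms=40):
    # plain Taylor series of the matrix exponential; fine for small X
    Y = np.eye(X.shape[0])
    T = np.eye(X.shape[0])
    for k in range(1, terms):
        T = T @ X / k
        Y = Y + T
    return Y

eta0 = np.array([[0., 1.], [-1., 0.]])
g0 = np.kron(np.eye(2), eta0)
rng = np.random.default_rng(4)

# second moments Sigma = Psi Psi^T (positive definite), S = Sigma g0
Psi = rng.normal(size=(4, 4))
S0 = (Psi @ Psi.T) @ g0
A = rng.normal(size=(4, 4))
F = g0 @ (A + A.T) / 2            # a symplex generator

# formal solution of S-dot = [F, S]
t = 0.7
M = expm_taylor(t * F)
St = M @ S0 @ np.linalg.inv(M)

for k in range(1, 5):
    trace0 = np.trace(np.linalg.matrix_power(S0, k))
    tracet = np.trace(np.linalg.matrix_power(St, k))
    assert np.isclose(trace0, tracet)     # Lax constants of motion
    if k % 2 == 1:
        # odd powers of a symplex are symplices and hence traceless
        assert np.isclose(trace0, 0.0)

print("Lax pair checks passed")
```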
To see the validity for k > 1 we have to consider the general algebraic properties of the trace operator. Let λ be an arbitrary real constant and τ be a real parameter; then

Tr(A + B) = Tr(A) + Tr(B)
Tr(λ A) = λ Tr(A)
Tr(A^T) = Tr(A).    (31)

It follows that

Tr(A B) = Tr(B A)
d/dτ Tr(A^k) = k Tr(A^{k-1} dA/dτ).    (32)

From the last line of Eqn. 32 it follows for dA/dτ = A B - B A that

d/dτ Tr(A^k) = k Tr(A^{k-1}(A B - B A)) = k [Tr(A^k B) - Tr(A^{k-1} B A)] = 0.    (33)

Remark: This conclusion is not limited to symplices. However, for single spinors ψ and their second moments S = Σ γ0 = ψ ψ^T γ0 we find:

S^k = ψ (ψ^T γ0 ψ)^{k-1} ψ^T γ0 = 0 for k > 1,    (34)

since each single factor (ψ^T γ0 ψ) vanishes due to the skew-symmetry of γ0. Therefore the constants of motion as derived from Eqn. 30 are non-zero only for even k and after averaging over some kind of distribution, such that S = ⟨ψ ψ^T⟩ γ0 has non-zero eigenvalues as in Eq. 35 below.
The symmetric 2n × 2n matrix Σ (and also A) is positive definite if it can be written as a product Σ = Ψ Ψ^T, where Ψ is a full-rank matrix of size 2n × m with m ≥ 2n.
For n = m/2 = 1, the form of Ψ may be chosen as

Ψ = (ψ, η0 ψ),    (35)

so that for k = 2 the average over the two "orthogonal" column vectors ψ and η0 ψ gives a non-zero constant of motion via Lax pairs, as γ0² = -1. These findings have numerous consequences for the modeling game. The first is that we have found constants of motion - though some of them are physically meaningful only for a non-vanishing volume in phase space, i.e. for a combination of several spinors ψ. Secondly, a stable state Ṡ = 0 implies that the matrix operators forming the Lax pair have the same eigenvectors: a density distribution in phase space (as described by the matrix of second moments) is stable if it is adapted or matched to the symplex F. The phase space distribution as represented by S and the driving terms (the components of F) must fit to each other in order to obtain a stable "eigenstate". But we also found a clear reason why generators (of symplectic transformations) are always observables and vice versa: Both the generators and the observables are symplices of the same type. There is a one-to-one correspondence between them, not only as generators of infinitesimal transformations, but also algebraically.
Furthermore, we may conclude that (anti-) commutators are an essential part of "classical" Hamiltonian mechanics and secondly that the matrix S has the desired properties of observables: Though S is based on continuously varying fundamental variables, it is constant, if it commutes with F, and it varies otherwise 6 .
6 In accelerator physics, Eqn. 26 describes the envelope of a beam in linear optics. The matrix of second moments Σ is a covariance matrix, and therefore our modeling game is connected to probability theory exactly at the stage where we define observables.
Hence it appears sensible to take a closer look at the (anti-)commutation relations of (co-)symplices. Though the definitions of (co-)symplices are quite plain, the (anti-)commutator algebra that emerges from them has a surprisingly rich structure. If we denote symplices by S_k and cosymplices by C_k, then the following rules
can quickly be derived:

[S_i, S_j] → symplex
{S_i, S_j} → cosymplex
[C_i, C_j] → symplex
{C_i, C_j} → cosymplex
{S_i, C_j} → symplex
[S_i, C_j] → cosymplex.    (36)

This Hamiltonian algebra of (anti-)commutators is of fundamental importance insofar as we derived it in a few steps from first principles (i.e. the rules of the game), and it defines the structure of Hamiltonian dynamics in phase space. The distinction between symplices and cosymplices is also the distinction between observables and non-observables. It is the basis of essential parts of the following considerations.
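All six rules of Eqn. 36 can be verified numerically; the following sketch (our own random 4 × 4 test matrices, n = 2) checks each case:

```python
import numpy as np

eta0 = np.array([[0., 1.], [-1., 0.]])
g0 = np.kron(np.eye(2), eta0)
rng = np.random.default_rng(5)

# predicates for the defining conditions (Eqns. 15 and 16)
def is_symplex(X):   return np.allclose(X.T, g0 @ X @ g0)
def is_cosymplex(X): return np.allclose(X.T, -g0 @ X @ g0)

def rand_symplex():
    A = rng.normal(size=(4, 4))
    return g0 @ (A + A.T) / 2

def rand_cosymplex():
    B = rng.normal(size=(4, 4))
    return g0 @ (B - B.T) / 2

S1, S2 = rand_symplex(), rand_symplex()
C1, C2 = rand_cosymplex(), rand_cosymplex()
comm = lambda X, Y: X @ Y - Y @ X
anti = lambda X, Y: X @ Y + Y @ X

assert is_symplex(comm(S1, S2))     # [S, S] -> symplex
assert is_cosymplex(anti(S1, S2))   # {S, S} -> cosymplex
assert is_symplex(comm(C1, C2))     # [C, C] -> symplex
assert is_cosymplex(anti(C1, C2))   # {C, C} -> cosymplex
assert is_symplex(anti(S1, C1))     # {S, C} -> symplex
assert is_cosymplex(comm(S1, C1))   # [S, C] -> cosymplex
print("Eqn. 36 rules verified")
```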

IV. GEOMETRY FROM HAMILTONIAN MOTION
In the following we will demonstrate the geometrical content of the algebra of (co-)symplices (Eqn. 36), which emerges for specific numbers of DOF n. As shown above, pairs of canonical variables (DOFs) are a direct consequence of the abstract rules of our game. Though single DOFs are poor "objects", it is remarkable to find physical structures emerging from our abstract rules at all. This suggests that there might be more structure to discover when n DOF are combined, for instance geometrical structures. The following considerations obey the rules of our game, since they are based purely on symmetry considerations like those that guided us towards Hamiltonian dynamics. The objects of interest in our algebraic interpretation of Hamiltonian dynamics are matrices. The first matrix (besides A) with a specific form that we found is γ0. It is a symplex:

γ0^T = γ0 γ0 γ0 = -γ0.

According to Eq. 17 there are ν_s = n(2n + 1) (i.e. ν_s ≥ 3) symplices. Hence it is natural to ask whether other symplices with similar properties like γ0 exist, and if so, what the relations between these matrices are. According to Eq. 36 the commutator of two symplices is again a symplex, while the anti-commutator is a cosymplex. Hence, as we are interested in observables and components of the Hamiltonian (i.e. symplices), we would like to find other symplices that anti-commute with γ0 and with each other. In this case the product of two such matrices is also a symplex, i.e. another potential contribution to the general Hamiltonian matrix F. Assume we had a set of N mutually anti-commuting orthogonal symplices γ0 and γ_k with k ∈ [1 ... N - 1]; then a Hamiltonian matrix F might look like

F = E γ0 + Σ_{k=1}^{N-1} P_k γ_k.    (37)

The γ_k are symplices and anti-commute with γ0:

γ_k^T = γ0 γ_k γ0    (38)
γ_k γ0 + γ0 γ_k = 0.    (39)

Inserting the anti-commutation relation into the symplex condition gives:

γ_k^T = γ0 γ_k γ0 = -γ_k γ0 γ0 = γ_k.    (40)

Hence all other possible symplices γ_k that anti-commute with γ0 are symmetric - and hence they square to 1, as they are also assumed to be orthogonal.
This is an extremely important finding for what follows, as it can (within our game) be interpreted as a classical proof of the uniqueness of the (observable) time dimension: Time is one-dimensional, as there is no other skew-symmetric symplex that anti-commutes with γ0. We can choose different forms for γ0, but the emerging algebra allows for no second "direction of time". The second-order derivative of ψ is (for constant F) given by ψ̈ = F² ψ, which yields:

F² = E² γ0² + Σ_k E P_k (γ0 γ_k + γ_k γ0) + Σ_{j<k} P_j P_k (γ_j γ_k + γ_k γ_j) + Σ_k P_k² γ_k².    (41)

Since the matrices on the right anticommute, the mixed terms vanish and we are left with:

F² = (-E² + Σ_k P_k²) 1.    (42)

Thus we find a set of (coupled) oscillators, if ω² ≡ E² - Σ_k P_k² > 0, such that

ψ̈ = F² ψ = -ω² ψ.    (43)

Given that such matrix systems exist, they generate a Minkowski type "metric" as in Eq. 42 7. The appearance of such a metric may guide us towards further aspects of physics to be modeled. It should be possible to construct a Minkowski type geometry from the driving terms of oscillatory motion. This is indeed possible, at least for symplices of certain dimensions, as we will show below.

7 Indeed it appears that Dirac derived his system of matrices from this requirement 18.

The first thing needed is some kind of measure to define the length of a "vector". Since length is a measure that is invariant under certain transformations, specifically under rotations, we prefer to use a quantity with
certain invariance properties to define a length. The only one we have at hand is given by Eqn. 30. Accordingly we define the (squared) length of a matrix A representing a "vector" by

‖A‖² ≡ Tr(A²)/(2n).    (44)

The division by 2n is required to make the unit matrix have unit norm:

‖1‖² = Tr(1)/(2n) = 1.    (45)

Besides the norm we need a scalar product, i.e. a definition of orthogonality. Consider the Pythagorean theorem, which says that two vectors a and b are orthogonal iff

(a + b)² = a² + b².    (46)

The general expression is

(a + b)² = a² + b² + 2 a · b.    (47)

The equations are equal iff a · b = 0. Hence the Pythagorean theorem yields a reasonable definition of orthogonality. However, we had as yet no method to define vectors within our game. Using matrices A and B we may then write

(A + B)² = A² + B² + (A B + B A).    (48)

If we compare this to Eqn. 46 and Eqn. 47, respectively, then the inner product can be defined as follows:

A · B ≡ ½ (A B + B A) = ½ {A, B}.    (49)

Since the anticommutator does in general not yield a scalar, we have to distinguish between the inner product and the scalar product:

(A · B)_S ≡ Tr(A B + B A)/(4n),    (50)

where we indicate the scalar part by the subscript "S". Accordingly we define the exterior product by the commutator:

A ∧ B ≡ ½ [A, B] = ½ (A B - B A).    (51)

Now that we have defined the products, we should come back to the unit vectors. The only "unit vector" that we explicitly defined so far is the symplectic unit matrix γ0. If it represents anything at all, then it must be "the direction" of change, the direction of evolution in time, as it was derived in this context and is the only "dimension" found so far. As we have already shown, all other unit vectors γ_k must be symmetric if they are symplices. And vice versa: If γ_k is symmetric and anti-commutes with γ0, then it is a symplex. As only symplices represent observables and are generators of symplectic transformations, we can have only a single "time" direction γ0 and a yet unknown number of symmetric unit vectors 8. However, for n > 1, there might be different equivalent choices of γ0.
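A concrete example may help at this point. The following sketch constructs one possible set of four mutually anti-commuting real unit vectors (a Majorana-type choice built from Kronecker products of 2 × 2 matrices; this specific construction is our own illustration and is not unique) and verifies the Minkowski-type relation of Eq. 42:

```python
import numpy as np

# One real representation of four anti-commuting unit vectors: one
# skew-symmetric "time" direction gamma_0 (squares to -1) and three
# symmetric "space" directions (square to +1).
I2 = np.eye(2)
eta0 = np.array([[0., 1.], [-1., 0.]])
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])

g0 = np.kron(I2, eta0)
g1 = np.kron(sx, sx)
g2 = np.kron(sz, sx)
g3 = np.kron(I2, sz)
gammas = [g0, g1, g2, g3]

# mutual anti-commutation and signature (-1, +1, +1, +1)
for i, gi in enumerate(gammas):
    for j, gj in enumerate(gammas):
        if i == j:
            sq = -1.0 if i == 0 else 1.0
            assert np.allclose(gi @ gi, sq * np.eye(4))
        else:
            assert np.allclose(gi @ gj + gj @ gi, np.zeros((4, 4)))

# gamma_0 is skew; the space-like gammas are symmetric symplices
assert np.allclose(g0.T, -g0)
for g in gammas[1:]:
    assert np.allclose(g.T, g)
    assert np.allclose(g.T, g0 @ g @ g0)   # symplex condition

# F = E g0 + p.gamma  =>  F^2 = -(E^2 - p^2) 1  (Minkowski "metric")
E, p = 2.0, np.array([0.3, -0.4, 0.5])
F = E * g0 + p[0] * g1 + p[1] * g2 + p[2] * g3
omega2 = E**2 - p @ p
assert np.allclose(F @ F, -omega2 * np.eye(4))
print("Cl(3,1) checks passed")
```

Since ω² = E² - p⃗² > 0 in this example, ψ̈ = F²ψ = -ω²ψ describes stable oscillation, as in Eq. 43.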
Whatever the specific form of γ0 is, we will show that in combination with some general requirements like completeness, normalizability and observability it determines the structure of the complete algebra. Though we do not yet know how many symmetric and pairwise anti-commuting unit vectors γ_k exist, we have to interpret them as unit vectors in "spatial directions" 9. Of course unit vectors must have unit length, so that we have to demand that

‖γ_k‖² = Tr(γ_k²)/(2n) = ±1.    (52)

Note that (since our norm is not positive definite) we explicitly allow for unit vectors with negative "length", as we find it for γ0. Note furthermore that all skew-symmetric unit vectors square to -1, while the symmetric ones square to 1 20. Indeed, systems of N = p + q anti-commuting real matrices are known as real representations of Clifford algebras Cl_{p,q}. The index p is the number of unit elements ("vectors") that square to +1 and q is the number of unit vectors that square to -1. Clifford algebras are not necessarily connected to Hamiltonian motion; rather they can be regarded as purely mathematical "objects". They can be defined without reference to matrices whatsoever. Hence, in mathematics, sets of matrices are merely "representations" of Clifford algebras. But our game is about physics, and due to the proven one-dimensionality of time we concentrate on Clifford algebras Cl_{N-1,1}, which link CHOs in the described way with the generators of a Minkowski type metric. Further below it will turn out that the representation by matrices is - within the game - indeed helpful, since it leads to an overlap of certain symmetry structures. The unit elements (or unit "vectors") of a Clifford algebra, e_k, are called the generators of the Clifford algebra. They pairwise anti-commute and they square to ±1 10. Since the inverse of the unit elements e_k of a Clifford algebra must be unique, the products of different unit vectors form new elements, and all possible products including the unit matrix form a group.
There are (N choose k) possible combinations (products without repetition) of k elements from a set of N generators. We therefore find (N choose 2) bi-vectors, which are products of 2 generators, (N choose 3) tri-vectors, and so on. The product of all N basic matrices is called the pseudoscalar. The total number of all k-vectors then is 2^N 11. If we desire to construct a complete system, then the number of variables of the Clifford algebra has to match the number of variables of the used matrix system: 2^N = (2n)². Note that the root of this equation gives an even integer 2^(N/2) = 2n, so that N must be even. Hence all Hamiltonian Clifford algebras have an even number of dimensions. Of course not all elements of the Clifford algebra are symplices. The unit matrix (for instance) is a cosymplex. Consider the Clifford algebra Cl 1,1 with N = 2, which has two generators, say γ0 with γ0² = −1 and γ1 with γ1² = +1.
8 Thus we found a simple answer to the question of why only a single time direction is possible, a question also debated in Ref. 19.
9 The meaning of what a spatial direction is, especially in contrast to the direction of time γ0, has to be derived from the form of the emerging equations, of course. As meaning follows form, we do not define space-time, but we identify structures that fit the known concept of space-time.
10 The role as generator of the Clifford algebra should not be confused with the role as generator of symplectic transformations (i.e. as symplex). Though we are especially interested in Clifford algebras in which all generators are symplices, not all symplices are generators of the Clifford algebra. Bi-vectors for instance are symplices, but not generators of the Clifford algebra.
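The counting argument above can be sketched numerically. This is a minimal illustration (not the paper's code): for the Dirac case N = 4 the grades follow Pascal's triangle, the total is 2^N, and completeness forces 2^N to be the square of the matrix size 2n.

```python
from math import comb

# Number of k-vectors in a Clifford algebra with N generators is C(N, k);
# the total number of elements is sum_k C(N, k) = 2^N.
N = 4  # Dirac case: generators gamma_0 .. gamma_3
grades = [comb(N, k) for k in range(N + 1)]
total = sum(grades)

# Completeness: the 2^N elements must span the real (2n)x(2n) matrices,
# i.e. 2^N = (2n)^2, so 2n = 2^(N/2) and N must be even.
two_n = int(round(total ** 0.5))

print(grades, total, two_n)   # [1, 4, 6, 4, 1] 16 4
```

For N = 2 (the Pauli case) the same count gives 1-2-1 with total 4 = 2², consistent with the 2 × 2 matrices used below.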
Since these two anticommute (by definition of the Clifford algebra), we find (besides the unit matrix) a fourth matrix formed by the product γ0 γ1. The completeness of the Clifford algebras as we use them here implies that any 2n × 2n-matrix M with (2n)² = 2^N can be written as a linear combination of all elements of the Clifford algebra. The coefficients can be computed from the scalar product of the unit vectors with the matrix M. Recall that skew-symmetric γk have a negative length; therefore we included a factor s k, which represents the "signature" of γk, in order to get the correct sign of the coefficients m k. Can we derive more properties of the constructable space-times? One restriction results from representation theory: a theorem from the theory of Clifford algebras states that Cl p,q has a representation by real matrices if (and only if) 22 p − q = 0 or 2 mod 8.
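The coefficient extraction can be sketched with an explicit 2 × 2 real representation of Cl(1,1). The basis below is our own choice (the paper's explicit matrices may differ); the point is the trace-orthogonality of the basis and the signature factors s_k.

```python
import numpy as np

# A possible real 2x2 representation of Cl(1,1) (our own choice of basis):
e0  = np.array([[0., 1.], [-1., 0.]])   # skew-symmetric, e0 @ e0 = -1
e1  = np.array([[1., 0.], [0., -1.]])   # symmetric,      e1 @ e1 = +1
e01 = e0 @ e1                           # bi-vector, squares to +1
one = np.eye(2)

basis = [one, e0, e1, e01]
sig   = [1., -1., 1., 1.]               # s_k = sign of the matrix square

# Any real 2x2 matrix M decomposes as M = sum_k m_k basis_k with
# m_k = Tr(basis_k @ M) / (2 s_k)  (trace-orthogonality of the basis).
M = np.array([[1.2, -0.7], [3.1, 0.4]])
coeffs = [np.trace(b @ M) / (2. * s) for b, s in zip(basis, sig)]
M_rec = sum(m * b for m, b in zip(coeffs, basis))
assert np.allclose(M_rec, M)
```

The same recipe generalizes to 2n × 2n with the divisor 2n instead of 2, which is exactly the role of the signature factor s_k mentioned in the text.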
The additional requirement that all generators must be symplices, so that p = N − 1 and q = 1, then restricts N to N mod 8 ∈ {2, 4}. Hence the only matrix systems that have the required symmetry properties within our modeling game are those that represent Clifford algebras with the dimensions 1+1, 3+1, 9+1, 11+1, 17+1, 19+1, 25+1, 27+1 and so on. These correspond to matrix representations of size 2 × 2, 4 × 4, 32 × 32, 64 × 64, 512 × 512 and so on. The first of them is called the Pauli algebra, the second one the Dirac algebra. Do these two have special properties that the higher-dimensional algebras do not have? Yes, indeed. Firstly, since dynamics is based on canonical pairs, the real Pauli algebra describes the motion of a single DOF and the Dirac algebra describes the simplest system with interaction between two DOF. This suggests the interpretation that, within our game, objects (Dirac particles) are not located "within space-time", since we did not define space at all up to this point, but that space-time can be modeled as an emergent phenomenon.
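The selection rule can be enumerated directly; a short sketch (assuming the mod-8 condition quoted in the text) reproduces the dimension series and the matrix sizes:

```python
# Real representations of Cl(p,q) exist iff (p - q) mod 8 in {0, 2}.
# With p = N - 1 and q = 1 this restricts N - 2 mod 8 to {0, 2}.
allowed = [N for N in range(2, 29) if (N - 2) % 8 in (0, 2)]
sizes   = [2 ** (N // 2) for N in allowed]   # matrix size 2n = 2^(N/2)

print(allowed)  # [2, 4, 10, 12, 18, 20, 26, 28]
print(sizes)    # [2, 4, 32, 64, 512, 1024, 8192, 16384]
```

Note that the 10- and 26-dimensional algebras mentioned in the abstract (N = 10 and N = 26) appear in this series.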
Secondly, if we equate the number of fundamental variables (2n) of the oscillator phase space with the dimension of the Clifford space N, then Eqn. 54 leads to N² = 2^N, which allows for N = 2 and N = 4 only. But why should it be meaningful to assume N = 2n? The reason is quite simple: if 2n > N, as for all higher-dimensional state vectors, there are fewer generators of the algebra than variables. This discrepancy increases with n. Hence the described objects cannot be pure vectors anymore, but must contain tensor-type components (k-vectors) 12. But before we describe a formal way to interpret Eqn. 60, let us first investigate the physical and geometrical implications of the game as described so far.

A. Matrix Exponentials
We said that the unit vectors γ0 and γk are symplices and therefore generators of symplectic transformations. All symplectic matrices are matrix exponentials of symplices 14. The computation of matrix exponentials is in the general case non-trivial. However, in the special case of matrices that square to ±1 13, the exponentials are readily evaluated: for s = −1 (γa² = −1) one finds exp (γa τ) = 1 cos (τ) + γa sin (τ), and for s = 1 (γa² = 1) one finds exp (γa τ) = 1 cosh (τ) + γa sinh (τ), where s = ±1 is the sign of the matrix square of γa. We can identify skew-symmetric generators with rotations and (as we will show in more detail below) symmetric generators with boosts. The (hyperbolic) sine/cosine structure of symplectic matrices is not limited to the generators but is a general property of the matrix exponential M(t) = exp (F t) of a symplex F 14, which splits into a symplex part S (the sum of all odd powers) and a cosymplex part C (the sum of all even powers), since (the linear combination of) all odd powers of a symplex is again a symplex and the sum of all even powers is a cosymplex. The inverse transfer matrix M −1 (t) = exp (−F t) is then given by C − S. The physical meaning of the matrix exponential results from Eqn. 13, which states that (for constant symplices F) the solutions are given by the matrix exponential of F. A symplectic transformation can be regarded as the result of a possible evolution in time. There is no proof that non-symplectic processes are forbidden by nature, but only symplectic transformations are structure-preserving. Non-symplectic transformations are then structure-defining. Both play a fundamental role in the physics of our model reality, because fundamental particles are, according to our model, represented by dynamical structures. Therefore symplectic transformations describe those processes and interactions in which structure is preserved, i.e. in which the type of the particle is not changed. The fundamental variables are just "carriers" of the dynamical structures.
12 For a deeper discussion of the dimensionality of space-time, see Ref. 20 and references therein.
13 E.g. along the "axis" γk of the coordinate system.
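The two closed forms of the exponential can be checked numerically. This sketch uses a naive Taylor-series exponential (adequate here) and our own choice of 2 × 2 generators squaring to −1 and +1:

```python
import numpy as np

def expm(A, terms=30):
    """Taylor-series matrix exponential (adequate for small matrices/arguments)."""
    result, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        term = term @ A / k
        result = result + term
    return result

# Generator squaring to -1 (rotation-like) and to +1 (boost-like):
g_rot   = np.array([[0., 1.], [-1., 0.]])   # g_rot @ g_rot = -1
g_boost = np.array([[0., 1.], [1., 0.]])    # g_boost @ g_boost = +1
tau = 0.7

# exp(g tau) = cos(tau) 1 + sin(tau) g       for g^2 = -1
# exp(g tau) = cosh(tau) 1 + sinh(tau) g     for g^2 = +1
assert np.allclose(expm(g_rot * tau),   np.cos(tau) * np.eye(2) + np.sin(tau) * g_rot)
assert np.allclose(expm(g_boost * tau), np.cosh(tau) * np.eye(2) + np.sinh(tau) * g_boost)
```

The rotation-like case gives bounded (oscillatory) evolution, the boost-like case unbounded (hyperbolic) evolution, matching the rotation/boost identification in the text.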
Non-symplectic transformations can be used to transform the structure. This could also be described by a rotation of the direction of time. Another interpretation is that of a gauge transformation 21.

V. THE SIGNIFICANCE OF (DE-)COUPLING
In physics it is a standard technique to reduce the complexity of problems by a suitable change of variables. In the case of linear systems, the change of variables is a linear canonical transformation. The goal of such transformations is usually to replace the solution of a complicated problem by the solution of multiple simpler systems. This technique is known under various names: one of them is decoupling, but it is also known as principal component analysis or (as we will show later) as transformation into the "rest frame". In other branches of science one might refer to it as pattern recognition.
In the following we investigate how to compute (or recognize) the eigenvectors and eigenvalues of a general 2n × 2n-dimensional symplex. Certainly it would be preferable to find a "physical method", i.e. a method that matches the concepts we introduced so far and that has inherently physical significance, or at least explanatory power with respect to our modeling game. Let us start from the simplest systems, i.e. with the Pauli and Dirac algebras, which correspond to matrices of size 2 × 2 and 4 × 4, respectively.

A. The Pauli Algebra
The fundamental significance of the Pauli algebra is based on the even dimensionality of (classical) phase space. The algebra of 2 × 2 matrices describes the motion of a single (isolated) DOF. Besides η0, the real Pauli algebra includes three further matrices η1, η2 and η3. All except the unit matrix η3 are symplices. If η0 and η1 are chosen to represent the generators of the corresponding Clifford algebra Cl 1,1, then η2 is the only possible bi-vector. A general symplex has the form F = a η0 + b η1 + c η2. The characteristic equation is given by Det(F − λ 1) = 0. The eigenvalues λ± are both either real, for a² < c² + b², or both imaginary, for a² > c² + b² (or both zero). Systems in stable oscillation have purely imaginary eigenvalues. This case is the most interesting for our modeling game.
Decoupling is usually understood in the more general sense of treating the interplay of several (at least two) DOF, but here we ask whether all possible oscillating systems with n = 1 are isomorphic to normal-form oscillators. Since there are 3 parameters in F and only one constant of motion (COM), namely the frequency ω, we need at least two parameters in the transformation matrix. Let us see if we can choose these two transformations along the axes of the Clifford algebra. In this case we subsequently apply two symplectic transformations along the axes η0 and η2. Applying the symplectic transformation matrix exp (η0 τ/2) we obtain transformed coefficients a′, b′ and c′ 15, so that, depending on the "duration of the pulse", we can choose to transform into a coordinate system in which either b′ = 0 or c′ = 0. If we choose t = arctan (−c/b), then c′ = 0. If we choose the next generator to be η2, then we have to distinguish between the cases a′ > b′ and a′ < b′. The former is the oscillatory system, and in this case the transformation with τ = artanh(b′/a′) leads to the normal form of a 1-dim. oscillator, in which the matrix F′′ has the form F′′ = ω η0. If the eigenvalues are imaginary, then λ = ± i ω, and the solution is, for constant frequency, given by the matrix exponential of F′′. This shows that, in the context of stable oscillators, the real Pauli algebra can be reduced to the complex number system. This becomes evident if we consider possible representations of the complex numbers. Clearly we need two basic elements: the unit matrix and η0, i.e. a matrix that commutes with the unit matrix and squares to −1. If we write "i" instead of η0, then it is easily verified that 16 the usual rules of complex arithmetic hold. The theory of holomorphic functions is based on series expansions and can be equally well formulated with matrices.
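The two-step reduction to normal form can be sketched numerically. The matrices below are our own choice of representation, and the signs in the exponents are one consistent convention (the paper's may differ by a sign); the end result F′′ = ω η0 with ω = √(a² − b² − c²) is what matters.

```python
import numpy as np

eta0 = np.array([[0., 1.], [-1., 0.]])   # skew-symmetric, squares to -1
eta1 = np.array([[1., 0.], [0., -1.]])   # symmetric, squares to +1
eta2 = eta0 @ eta1                       # symmetric bi-vector, squares to +1

a, b, c = 2.0, 0.5, 0.3                  # oscillatory case: a^2 > b^2 + c^2
F = a * eta0 + b * eta1 + c * eta2

# Step 1: "rotation" exp(eta0 t/2) turns the pair (b, c) until c' = 0.
t = np.arctan2(-c, b)
R = np.cos(t / 2) * np.eye(2) + np.sin(t / 2) * eta0
F1 = R @ F @ np.linalg.inv(R)
b1 = np.trace(eta1 @ F1) / 2.            # remaining eta1 coefficient b'

# Step 2: "boost" along eta2 removes the remaining b'.
tau = np.arctanh(b1 / a)
B = np.cosh(tau / 2) * np.eye(2) - np.sinh(tau / 2) * eta2
F2 = B @ F1 @ np.linalg.inv(B)

omega = np.sqrt(a**2 - b**2 - c**2)
assert np.allclose(F2, omega * eta0)     # normal form of a 1-dim. oscillator
```

The invariant frequency ω = √(a² − b² − c²) is untouched by both (similarity) transformations, illustrating that it is the single COM of the 2 × 2 symplex.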
Viewed from our perspective, the complex numbers are a special case of the real Pauli algebra, since we have shown above that any one-dimensional oscillator can be canonically transformed into a system of the form of Eqn. 77. Nevertheless we emphasize that the complex numbers interpreted this way can only represent the normal form of an oscillator. The normal form excludes a different scaling of coordinates and momenta as used in classical mechanics, i.e. it intrinsically avoids the appearance of different "spring constants" and masses 17.

B. The Dirac Algebra
In this subsection we consider the oscillator algebra for two coupled DOF, the algebra of 4 × 4 matrices. In contrast to the real Pauli algebra, where the parameters a, b and c did not suggest a specific physical meaning, the structure of the Dirac algebra bears geometrical significance, as has been pointed out by David Hestenes and others [29][30][31]. The (real) Dirac algebra is the simplest real algebra that allows for a description of two DOF and the interaction between them. Furthermore, the eigenfrequencies of a general symplex F may be complex, while the spectrum of the Pauli matrices does not include complex numbers off the real and imaginary axes. The spectrum of general 2n × 2n-symplices has a certain structure.
16 See also Refs. 22, 23 and Eqn. 35 in combination with Ref. 43.
17 There have been several attempts to explain the appearance of the complex numbers in quantum mechanics 8,12,[24][25][26][27][28]. A general discussion of the use of complex numbers in physics is beyond the scope of this essay, therefore we add just a remark. Gary W. Gibbons wrote that "In particular there can be no evolution if ψ is real" 27. We agree with Gibbons that the unit imaginary can be related to evolution in time as it implies oscillation, but we do not agree with his conclusion. Physics was able to describe evolution in time without imaginaries before quantum mechanics and it still is. The unconscious use of the unit imaginary did not prevent quantum mechanics from being experimentally successful. But it prevents physicists from understanding its structure.
This structure arises since the coefficients of the characteristic polynomial are real: if λ is an eigenvalue of F, then its complex conjugate λ̄ as well as −λ and −λ̄ are also eigenvalues. As we will show, this is the spectrum of the Dirac algebra, and therefore any 2n × 2n-system can, at least in principle, be block-diagonalized using 4 × 4-blocks. The Pauli algebra is therefore not sufficient to cover this general case. The structure of Clifford algebras follows Pascal's triangle. The Pauli algebra has the structure 1 − 2 − 1 (scalar-vector-bivector), the Dirac algebra has the structure 1 − 4 − 6 − 4 − 1, standing for unit element (scalar), vectors, bi-vectors, tri-vectors and pseudoscalar. The vector elements are by convention indexed as γµ with µ = 0 . . . 3, i.e. they are the generators of the algebra 18. We define a numbering scheme for the remaining matrices 19. According to Eq. 17 we expect 10 symplices, and since the 4 vectors and 6 bi-vectors are symplices, all other elements are cosymplices. With this ordering, the general 4 × 4-symplex F can be written (instead of Eq. 56) as a linear combination of the vector and bi-vector elements. In Ref. 15 we presented a detailed survey of the Dirac algebra with respect to symplectic Hamiltonian motion. The essence of this survey is the insight that the real Dirac algebra describes the Hamiltonian motion of ensembles of two-dimensional oscillators, but also the motion of a "point particle" in 3-dimensional space, i.e. that Eqn. 26 is, when expressed by the real Dirac algebra, isomorphic to the Lorentz force equation, as we are going to show in Sec. VI C. In other words, the Dirac algebra allows us to model a point particle and its interaction with an electromagnetic field in terms of a classical statistical ensemble of abstract oscillators.
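The structure described above can be verified with an explicit real 4 × 4 representation. The Kronecker-product construction below is our own choice (the paper's matrices are listed in its Ref. 15); it satisfies the Clifford relations with metric (−,+,+,+) and reproduces the count of 10 symplices among the 16 basis elements:

```python
import numpy as np
from itertools import combinations

# One possible real 4x4 representation (our own choice via Kronecker products):
s0 = np.array([[0., 1.], [-1., 0.]])
s1 = np.array([[1., 0.], [0., -1.]])
s2 = np.array([[0., 1.], [1., 0.]])
I2 = np.eye(2)

g = [np.kron(s0, I2),                  # gamma_0: skew-symmetric, squares to -1
     np.kron(s1, I2),                  # gamma_1..gamma_3: symmetric, square to +1
     np.kron(s2, s1),
     np.kron(s2, s2)]

# Clifford relations: {gamma_mu, gamma_nu} = 2 eta_{mu nu}, metric (-,+,+,+)
metric = np.diag([-1., 1., 1., 1.])
for mu in range(4):
    for nu in range(4):
        anti = g[mu] @ g[nu] + g[nu] @ g[mu]
        assert np.allclose(anti, 2. * metric[mu, nu] * np.eye(4))

# Count symplices (S with (gamma_0 S)^T = gamma_0 S) among the 16 basis elements.
basis = [np.eye(4)]
for r in range(1, 5):
    for idx in combinations(range(4), r):
        m = np.eye(4)
        for i in idx:
            m = m @ g[i]
        basis.append(m)
n_symplices = sum(np.allclose((g[0] @ m).T, g[0] @ m) for m in basis)
assert n_symplices == 10   # 4 vectors + 6 bi-vectors
```

The 6 cosymplices are the unit matrix, the 4 tri-vectors and the pseudoscalar, in agreement with the text.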
For a = 9 (γa = γ1 γ2), for instance, we find γ̃1 = γ1 cos(τ) + γ1 γ2 γ1 sin(τ) = γ1 cos(τ) − γ2 sin(τ) and γ̃2 = γ2 cos(τ) + γ1 γ2 γ2 sin(τ) = γ2 cos(τ) + γ1 sin(τ), which is formally equivalent to a rotation of P about the "z-axis". If the generator γa of the transformation is symmetric, we obtain hyperbolic functions instead, so that γk remains unchanged if γa and γk commute, and if γa and γk anticommute the result is formally equivalent to a boost, as it allows for a parametrization by the "rapidity" τ. A complete survey of these transformations and the (anti-)commutator tables can be found in Ref. 15. However, this formalism corresponds exactly to the relativistic invariance of a Dirac spinor in QED as described for instance in Ref. 41. The "spatial" rotations are generated by the bi-vectors associated with B and the Lorentz boosts by the components associated with E. The remaining 4 generators of symplectic transformations correspond to E and P. They were named phase rotation (generated by γ0) and phase boosts (generated by γ = (γ1, γ2, γ3)) and have been used for instance for symplectic decoupling as described in Ref. 32. It is natural (and already suggested by our notation) to consider the possibility that the EMEQ (Eq. 83) allows us to model a relativistic particle, represented by energy E and momentum P, either in an external electromagnetic field given by E and B or, alternatively, in an accelerating and/or rotating reference frame, where the elements E and B correspond to the axes of acceleration and rotation, respectively. We assumed at the beginning that all components of the state vector ψ are equivalent in meaning and unit. Though we found that the state vector is formally composed of canonical pairs, the units are unchanged and identical for all elements of ψ. From Eq. 13 we take that the symplex F (and also A) has the unit of a frequency. If the Hamiltonian H is supposed to represent energy, then the components of ψ have the unit of the square root of an action.
If the coefficients are supposed to represent electromagnetic fields, then we need to express these fields in units of frequency. This can be done, but it requires natural conversion factors like ℏ, the charge e, the velocity c and a mass, for instance the electron mass me. The magnetic field (for instance) is related to a "cyclotron frequency" ωc ∝ (e/me) B. However, according to the rules of the game, the distinction between particle properties and "external" fields requires a reason, an explanation, especially as it is physically meaningless for macroscopic coupled oscillators. In Refs. 15, 32 we used this nomenclature in a merely formal way, namely to find a descriptive scheme to order the symplectic generators, so to speak an equivalent circuit to describe the general possible coupling terms of two-dimensional coupled linear optics as required for the description of charged-particle beams.
Here we play the reversed modeling game: instead of using the EMEQ as an equivalent circuit to describe ensembles of oscillators, we now use ensembles of oscillators as an equivalent circuit to describe point particles. The motivation for Eqn. 83 is nevertheless similar, i.e. it follows the formal structure of the Dirac Clifford algebra. The grouping of the coefficients comes along with the number of vector and bi-vector elements, 4 and 6, respectively. The second criterion is to distinguish between generators of rotations and boosts, i.e. between symmetric and skew-symmetric symplices, which separates energy from momentum and electric from magnetic elements. Thirdly, we note that the even elements 20 (scalar, bi-vectors, 4-vectors etc.) of even-dimensional Clifford algebras form a sub-algebra. This means that we can generate the complete Clifford algebra from the vector elements by matrix multiplication (this is why we call them generators), but we cannot generate vectors from bi-vectors by multiplication. Therefore the vectors are the particles (which are understood as the sources of fields) and the bi-vectors are the fields, which are generated by the objects and influence their motion. The full Dirac symplex-algebra includes the description of a particle (vector) in a field (bi-vector). But why would the field be external? Simply because it is impossible to generate bi-vectors from a single vector-type object, since any single vector-type object written as E γ0 + P · γ squares to a scalar. Dirac aimed for this result when he invented the Dirac matrices. Therefore the fields must be the result of interaction with other particles, and hence we call them "external". This is in some way a "first-order" approach, since there might be higher-order processes that we have not yet considered. But in the linear approach (i.e. for second-order Hamiltonians) this distinction is reasonable and hence a legitimate move in the game.
Besides the Hamiltonian structure (symplices vs. cosymplices) and the Clifford algebraic structure (distinguishing vectors, bi-vectors, tri-vectors etc.), there is a third essential symmetry, which is connected to the real matrix representation of the Dirac algebra and to the fact that it describes the general Hamiltonian motion of coupled oscillators: the distinction of the even from the odd elements with respect to the block-diagonal matrix structure. We used this property in Ref. 32 to develop a general geometrical decoupling algorithm (see also Sec. VI B).
Now it may appear that we are cheating somehow, as relativity is usually "derived" from the constancy of the speed of light, while in our modeling game we introduced neither spatial notions nor light at all. Instead we directly arrive at notions of quantum electrodynamics (QED). How can this be? The definition of "velocity" within wave mechanics usually involves the dispersion relation of waves, i.e. the velocity of a wave packet is given by the group velocity vgr = dω/dk and the so-called phase velocity vph = ω/k. It is then typically mentioned that the product of these two velocities is a constant, vgr vph = c². By the use of the EMEQ and Eq. 30, the eigenvalues of F can be written in terms of these coefficients. Since symplectic transformations are similarity transformations, they do not alter the eigenvalues of the matrix F, and since all possible evolutions in time (which can be described by the Hamiltonian) are symplectic transformations, the eigenvalues (of closed systems) are conserved. If we consider a "free particle", then we obtain from Eq. 95 the eigenfrequencies ω1,2 = √(E² − P²). As mentioned before, both energy and momentum have (within this game) the unit of a frequency. If we take into account that ω1,2 ≡ m is fixed, then the dispersion relation for "the energy" E = ω is ω² = m² + P², which is indeed the correct relativistic dispersion. But how do we make the step from pure oscillations to waves? 21
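The conserved eigenfrequency can be checked directly. Using the same (self-chosen) real Dirac representation as before, the "free particle" symplex F = E γ0 + P · γ squares to (P² − E²) 1, so its eigenvalues are ± i m with m = √(E² − P²), independent of the boost state:

```python
import numpy as np

# Same real Dirac representation as assumed before (our own choice):
s0 = np.array([[0., 1.], [-1., 0.]]); s1 = np.array([[1., 0.], [0., -1.]])
s2 = np.array([[0., 1.], [1., 0.]]);  I2 = np.eye(2)
g0, g1, g2, g3 = np.kron(s0, I2), np.kron(s1, I2), np.kron(s2, s1), np.kron(s2, s2)

m = 1.0                                  # fixed eigenfrequency ("mass")
for P in [np.array([0., 0., 0.]), np.array([0.3, -0.2, 0.5])]:
    E = np.sqrt(m**2 + P @ P)            # relativistic dispersion E^2 = m^2 + P^2
    F = E * g0 + P[0] * g1 + P[1] * g2 + P[2] * g3
    lam = np.linalg.eigvals(F)
    # F @ F = (P^2 - E^2) 1, so the eigenvalues are +/- i m (each twice):
    assert np.allclose(np.sort(lam.imag), [-m, -m, m, m])
    assert np.allclose(lam.real, 0.)
```

Imposing E² = m² + P² is thus exactly the statement that the oscillator's eigenfrequency m is a symplectic invariant.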

A. Moments and the Fourier Transform
In the case of "classical" probability distribution functions (PDFs) φ(x) we may use the Taylor terms of the characteristic function φ̂x(t) = ⟨exp (i t x)⟩, which is the Fourier transform of φ(x), at the origin. The k-th moment is then given by ⟨x^k⟩ = (−i)^k φ̂x^(k)(0), where φ̂x^(k) is the k-th derivative of φ̂x(t).
A similar method would be of interest for our modeling game. Since a (phase-space) density is positive definite, we can always take the square root of the density instead of the density itself: φ = √ρ. The square root can also be defined as a complex function, so that the density is ρ = φ φ⋆ = |φ|², and, if mathematically well-defined (convergent), we can also define the Fourier transform of the complex root, i.e.
and vice versa. In principle, we may define the density not only by real and imaginary parts, but by an arbitrary number of components. Thus, if we consider a four-component spinor, we may of course mathematically define its Fourier transform. But in order to see why this might be more than a mathematical "trick" and could be physically meaningful, we need to go back to the notions of classical statistical mechanics. Consider that we replace the single state vector by an "ensemble", where we leave open the question of whether the ensemble should be understood as a single phase-space trajectory, averaged over time, or as some (presumably large) number of different trajectories. It is well known that the phase-space density ρ(ψ) is stationary if it depends only on constants of motion, for instance if it depends only on the Hamiltonian itself. With the Hamiltonian of Eq. 12, the density could for example have the form ρ(ψ) ∝ exp (−H(ψ)/ε), which corresponds to a multivariate Gaussian. But more important is the insight that the density exclusively depends on the second moments of the phase-space variables as given by the Hamiltonian, i.e. in the case of a "free particle" it depends on E and P. Therefore we should be able to use energy and momentum as frequency ω and wave vector k.
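The moment-from-characteristic-function recipe of Sec. VI A can be sketched numerically for a Gaussian density (our own toy example, not from the paper): the second moment is recovered as minus the second derivative of the characteristic function at the origin.

```python
import numpy as np

# Characteristic function of a density rho(x): phi_hat(t) = <exp(i t x)>;
# the k-th moment is <x^k> = (-i)^k d^k phi_hat / dt^k at t = 0.
sigma = 1.3
x  = np.linspace(-10 * sigma, 10 * sigma, 20001)
dx = x[1] - x[0]
rho = np.exp(-x**2 / (2 * sigma**2))
rho /= rho.sum() * dx                      # normalized Gaussian density

def phi_hat(t):
    return np.sum(rho * np.exp(1j * t * x)) * dx

# Second moment via a central finite difference of phi_hat at t = 0:
h = 1e-3
second_moment = (-(phi_hat(h) - 2 * phi_hat(0) + phi_hat(-h)) / h**2).real
assert abs(second_moment - sigma**2) < 1e-4
```

This is the sense in which a stationary density "knows" only its second moments: for the Gaussian above, the characteristic function is entirely determined by σ².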
But there are more indications in our modeling game that suggest the use of a Fourier transform as we will show in the next section.

B. The Geometry of (De-)Coupling
In the following we give a (very) brief summary of Ref. 32. As already mentioned, decoupling is, despite the use of the EMEQ, first of all meant in a purely technical-mathematical sense. Let us delay the question of whether the notions that we define in the following have any physical relevance. Here we refer first of all to block-diagonalization, i.e. we treat the symplex F just as a "Hamiltonian" matrix. From the definition of the real Dirac matrices we obtain F in explicit 4 × 4 matrix form. If we find a (sequence of) symplectic similarity transformations that allows us to reduce the 4 × 4-form to a block-diagonal form, then we obtain two separate systems of size 2 × 2 and we can continue with the transformations of Sec. V A. Inspection of Eqn. 102 reveals that F is block-diagonal if the coefficients Ey, Py, Bx and Bz vanish. Obviously this implies that E · B = 0 and P · B = 0. Vice versa, if we find a symplectic method that transforms into a system in which E · B = 0 and P · B = 0, then we only need to apply appropriate rotations to achieve block-diagonal form. As shown in Ref. 32 this can be done in different ways, but in general it requires the use of the "phase rotation" γ0 and the "phase boosts" γ. Within the conceptual framework of our game, the application of these transformations equals the use of "matter fields". Furthermore, this shows that block-diagonalization also has geometric significance within the Dirac algebra, and, with respect to the Fourier transformation, the requirement P · B = 0 indicates a divergence-free magnetic field, as the replacement of P by ∇ yields ∇ · B = 0. The additional requirement E · B = 0 also fits well to our physical picture of e.m. waves. Note furthermore that there is no analogous requirement to make P · E equal to zero. Thus (within this analogy) we can allow for ∇ · E ≠ 0.
But this is not everything to be taken from this method. If we analyze in more detail which expressions are required to vanish and which may remain, then it appears that P · B is explicitly given by products of coefficients that generate cosymplices. That means that exactly those products have to vanish which yield cosymplices. This can be interpreted via the structure-preserving properties of symplectic motion. Since, within our game, the particle type can only be represented by the structure of the dynamics, and since electromagnetic processes do not change the type of a particle, these processes are quite obviously structure-preserving, which implies the non-appearance of cosymplices. In other words, electromagnetism is of Hamiltonian nature. We will come back to this point in Sec. VII A.

C. The Lorentz Force
In the previous section we established the distinction between the "mechanical" elements P = E γ0 + γ · P of the general matrix F and the electrodynamical elements F = γ0 γ · E + γ14 γ0 γ · B. Since the matrix S = Σ γ0 is a symplex, let us assume it to be equal to P and apply Eqn. 26. With the appropriate relative scaling between P and F (as explained above), this yields, written with the coefficients of the real Dirac matrices, dE/dτ = P · E and dP/dτ = E E + P × B, where τ is the proper time. If we convert to the lab-frame time t using dt = γ dτ, Eq. 104 yields (setting c = 1) the general form of the Lorentz force. Therefore the Lorentz force acting on a charged particle in 3 spatial dimensions can be modeled by an ensemble of 2-dimensional CHOs. The isomorphism between the observables of the perceived 3-dimensional world and the second moments of density distributions in the phase space of 2-dimensional oscillators is remarkable. In any case, Eq. 104 clarifies three things within the game: firstly, that both energy E and momentum p have to be interpreted as mechanical (and not canonical) energy and momentum; secondly, that the relative normalization between fields and mechanical momentum is fixed; and thirdly, it clarifies the relation between the time related to mass (proper time) and the time related to γ0 and energy, which appears to be the laboratory time.
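The commutator form of the equation of motion can be checked explicitly. With our self-chosen real Dirac representation and the normalization dP/dτ = [F, P]/2 (one consistent convention; the paper fixes the scale as discussed above), the vector coefficients of the commutator reproduce the proper-time Lorentz force:

```python
import numpy as np

# Same real Dirac representation as assumed before (our own choice):
s0 = np.array([[0., 1.], [-1., 0.]]); s1 = np.array([[1., 0.], [0., -1.]])
s2 = np.array([[0., 1.], [1., 0.]]);  I2 = np.eye(2)
g0, g1, g2, g3 = np.kron(s0, I2), np.kron(s1, I2), np.kron(s2, s1), np.kron(s2, s2)
gv = [g1, g2, g3]

def coeff(M, base):
    """Coefficient of a trace-orthogonal basis element in M."""
    s = np.trace(base @ base) / 4.            # +1 or -1, the "signature"
    return np.trace(base @ M) / (4. * s)

E, p   = 1.5, np.array([0.3, -0.2, 0.5])      # "mechanical" energy-momentum
Ef, Bf = np.array([0.1, 0.4, -0.3]), np.array([0.2, -0.1, 0.6])  # fields

P = E * g0 + sum(p[k] * gv[k] for k in range(3))
F = (sum(Ef[k] * g0 @ gv[k] for k in range(3))      # boost bi-vectors ~ E field
     + Bf[0] * g2 @ g3 + Bf[1] * g3 @ g1 + Bf[2] * g1 @ g2)  # rotations ~ B

dP = 0.5 * (F @ P - P @ F)                    # equation of motion dP/dtau

dE = coeff(dP, g0)
dp = np.array([coeff(dP, gv[k]) for k in range(3)])
assert np.isclose(dE, p @ Ef)                       # dE/dtau = p . E
assert np.allclose(dp, E * Ef + np.cross(p, Bf))    # dp/dtau = E E + p x B
```

Note that the commutator of the field bi-vectors with a vector is again a pure vector, which is the algebraic reason why the "particle" stays a particle under electromagnetic interaction.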

VII. COLLECTIVE MOTION AND THE "SPIN"
Eqn. 26 describes the motion of the second moments of the phase-space distribution. Since the equation is linear, it is possible to bring it into the form ḟ = B f with a ten-component "vector" f 15. The frequencies Ωk of the modes of collective motion are given by the ten eigenvalues λk = i Ωk of the 10 × 10 matrix B. Two of these eigenvalues are zero and correspond to a matched phase-space distribution. The frequencies of the other modes are given by combinations of the single-particle eigenfrequencies. The frequencies Ω1 and Ω2 are identical to the frequencies of motion of a single phase-space point (Eqn. 95).
If they are excited, the averages of the complete distribution oscillate like single spinors. We might call them coherent modes, as the complete ensemble coherently oscillates in these modes, much like the coherent Glauber states or like the betatron oscillation of a non-centered beam in a particle accelerator. A matched beam (or density distribution) is centered if all first moments of ψ are constantly zero. If we add a fixed centering error δψ to a matched ψ, then the combined distribution yields an additional contribution δS. The envelope, which can be represented by the second moments, oscillates as a whole about the center. However, the oscillation generates a symplex δS with vanishing determinant. As we will show in the next section, this can be interpreted as the generation of a light-like spinor. At the same time, this is a possible Ansatz for the development of a "mechanical" model of how the oscillatory motion of S = P generates electromagnetic waves, since we can describe oscillations by positions, but also as oscillations in momentum space. In the latter case we may use the proper time (the absolute time in the comoving frame).
The intrinsic collective modes are those related to the frequencies Ω3 and Ω4. There is a high-frequency mode with frequency Ω3 and a low-frequency mode with frequency Ω4. Note further that in the limit K2 → 0 these modes degenerate: the low-frequency mode vanishes while the high-frequency mode has the same frequency as the modes of individual motion. However, the collective modes depend on the same two invariants K1 and K2 as the single modes. Since these are invariant under (symplectic) transformations, we can describe collective motion as well in normal coordinates, where the matrix F takes its normal form. The normal coordinates are characterized by E = P = 0 and B = By e_y. Hence only mass, energy and a gyroscopic quantity |B| are non-zero. Then we have K1 = E² + By² and K2 = E² By², so that the frequencies of the stationary states are E ± By. The first two states, the stationary states, show the frequency spectrum of a spin-1/2 particle in an external magnetic field. The last two, the collective modes, correspond to the average frequency of the two oscillators and half of the frequency difference, respectively. Hence the energy of the spin in an external magnetic field is proportional to the frequency difference between the two oscillators.
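The frequency splitting can be illustrated with the same (self-chosen) real Dirac representation: adding a single rotation-type bi-vector, which we take as the "gyroscopic" By term, to the energy term splits the doubly degenerate eigenfrequency E into E ± By.

```python
import numpy as np

# Same real Dirac representation as assumed before (our own choice):
s0 = np.array([[0., 1.], [-1., 0.]]); s1 = np.array([[1., 0.], [0., -1.]])
s2 = np.array([[0., 1.], [1., 0.]]);  I2 = np.eye(2)
g0, g1, g2, g3 = np.kron(s0, I2), np.kron(s1, I2), np.kron(s2, s1), np.kron(s2, s2)

E, By = 1.0, 0.15
F = E * g0 + By * g3 @ g1        # energy plus one "gyroscopic" rotation bi-vector

lam = np.linalg.eigvals(F)
freqs = np.sort(np.abs(lam.imag))
# The two eigenfrequencies split to E - By and E + By:
assert np.allclose(freqs, [E - By, E - By, E + By, E + By])
assert np.allclose(lam.real, 0.)
```

The frequency difference between the two oscillators is 2 By, consistent with the statement that the spin energy in the field is proportional to this difference.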
The energy levels, and hence the measurable frequencies ωk, of the modes described by Eqn. 110 are given accordingly. However, up to now the mass m (and the energy) are understood as properties of the particle, while By is an external magnetic field for which By ≪ m 22, so that the measurable frequencies are simply E ± By. If the oscillator energy has a contribution that is proportional to an external field, then we say that the system has a magnetic moment, which is usually interpreted classically by a current loop and hence by an intrinsic angular momentum called spin S. In section VI C we found the correct scale for the Lorentz force if the fields E and B are scaled by q/(2 m), so that
22 The magnetic frequency ωb = (e/(2 me)) B equals the "eigenfrequency" of the electron ωe = me c² in case that B ≈ 10^10 T. Magnetic fields of this size would, according to our model, significantly reduce the electron self-energy, though the energy difference between "spin up" and "spin down" remains unchanged.
where ge is the so-called g-factor of the electron. Since the two energy levels are interpreted by the "spin component" parallel to the field, Sy = ±ℏ/2, we may conclude that ge = 2, as expected. But the idea of a "charge" that really "rotates" in space-time, thus generating a magnetic moment, is, in view of our oscillator model, at best a metaphor.
The "classical" picture in space-time requires either a distributed charge density in some rotary motion or a point particle in circular motion to explain the magnetic moment. The former implies a finite radius of the charge density; the latter fails since an oscillating charge must, according to electrodynamics, radiate and therefore continuously lose energy. Both pictures are intrinsically inconsistent, while there are no contradictions within our modeling game with respect to "spin", as space-time is only defined by the interaction of several "objects" involving a "distance". In our abstract oscillator model, "spin" is a "tune shift" between coupled oscillators caused by the "gyroscopic force" By. But the oscillation itself is a motion in an abstract phase space; it is a consequence of classical oscillator algebra and does not imply any "real" motion in any "real" space-time.

A. The Maxwell Equations
As we already pointed out, waves are (within this game) the result of a Fourier transformation (FT). But there are different ways to argue this. In Ref. 20 we argued that Maxwell's equations can be derived within our framework from (a) the postulate that space-time emerges from interaction, i.e. that the fields E and B have to be constructed from the 4-vectors X = t γ_0 + x · γ, J = ρ γ_0 + j · γ and A = Φ γ_0 + A · γ, and (b) the requirement that no co-symplices emerge. But we can also argue with the FT of the density (see Sec. VI A).
The non-abelian nature of matrix multiplication requires us to distinguish differential operators acting to the right from those acting to the left: besides the 4-derivative ∂ as defined in Eq. 113, we have →∂ and ←∂, where the latter is written to the right of the operand, thus indicating the order of the matrix multiplication (Eq. 114). We then find general rules (see Eq. 36) that prevent non-zero cosymplices (Eq. 115). Application of these derivatives yields Eq. 116. The first row of Eq. 116 corresponds to the usual definition of the bi-vector fields from a vector potential A, written by components; the second row corresponds to the usual definition of the 4-current J as the source of the fields; and the last three rows express the impossibility of the appearance of cosymplices: explicitly, they represent the homogeneous Maxwell equations, the continuity equation and the so-called "Lorentz gauge". The simplest idea about the 4-current within QED is to assume that it is proportional to the "probability current", which is within our game given by the vector components of S = Σ γ_0.
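As a quick symbolic cross-check (ours, written in ordinary vector calculus rather than the Clifford-algebra notation of the text; the potentials Phi, Ax, Ay, Az are generic placeholders): once the fields are built from a 4-potential, the homogeneous Maxwell equations hold identically.

```python
import sympy as sp

# Fields from potentials: E = -grad(Phi) - dA/dt, B = curl(A).
# Then div B = 0 and curl E + dB/dt = 0 hold as identities.
t, x, y, z = sp.symbols('t x y z')
Phi = sp.Function('Phi')(t, x, y, z)
A = sp.Matrix([sp.Function(f'A{i}')(t, x, y, z) for i in 'xyz'])

def grad(f):
    return sp.Matrix([sp.diff(f, c) for c in (x, y, z)])

def curl(V):
    return sp.Matrix([sp.diff(V[2], y) - sp.diff(V[1], z),
                      sp.diff(V[0], z) - sp.diff(V[2], x),
                      sp.diff(V[1], x) - sp.diff(V[0], y)])

def div(V):
    return sum(sp.diff(V[i], c) for i, c in enumerate((x, y, z)))

E = -grad(Phi) - sp.diff(A, t)
B = curl(A)

print(sp.simplify(div(B)))                   # 0
print(sp.simplify(curl(E) + sp.diff(B, t)))  # zero vector
```

The cancellation rests only on the symmetry of mixed partial derivatives, which mirrors the statement that these rows of Eq. 116 carry no independent information once the fields are derived from A.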

VIII. THE PHASE SPACE
Up to now, our modeling game has referred to the second moments; the elements of S are second moments, such that the observables are given by (averages over) the corresponding quadratic forms. If we analyze the real Dirac matrix coefficients of S = ψ ψ^T γ_0 in terms of the EMEQ and evaluate the quadratic relations between those coefficients, then we obtain, up to a missing normalization, equations that describe an object without mass but with the geometric properties of light as described by electrodynamics, e.g. by the electrodynamic description of electromagnetic waves: E · B = 0, P ∝ E × B, E² = B² and so on. Hence single spinors are light-like and cannot represent massive particles.
Consider the spinor as a vector in a four-dimensional Euclidean space. We write the symmetric matrix A (or Σ, respectively) as a product in the form of a Gramian, or - componentwise - such that the matrix element A_ij is the conventional 4-dimensional scalar product of column vector B_i with column vector B_j. From linear algebra we know that Eqn. 123 yields a non-singular matrix A iff the column vectors of the matrix B are linearly independent. In the orthonormal case, the matrix A is simply the purest form of a non-singular matrix, i.e. the unit matrix. Hence, if we want to construct a massive object from spinors, we need several spinors to fill the columns of B. The simplest case is the orthogonal one: the combination of four mutually orthogonal vectors. Given a general 4-component Hamiltonian spinor ψ = (q_1, p_1, q_2, p_2), how do we find a spinor that is orthogonal to this one? In 3 (i.e. odd) space dimensions, we know that there are two vectors perpendicular to any vector (x, y, z)^T, but without fixing the first vector, we cannot define the others. In even dimensions this is different: if we fix a non-singular skew-symmetric matrix like γ_0, then we have a general expression for a vector that is perpendicular to ψ, namely γ_0 ψ. As in Eq. 3, it is the pure form of the matrix that ensures the orthogonality. Any third vector γ_k ψ, perpendicular to ψ and to γ_0 ψ, requires γ_k to be skew-symmetric and to fulfil ψ^T γ_k^T γ_0 ψ = 0, which means that the product γ_k^T γ_0 must be skew-symmetric and hence that γ_k must anti-commute with γ_0. Now let us for a moment return to the question of dimensionality. There are in general 2n(2n − 1)/2 independent non-zero elements in a skew-symmetric square 2n × 2n matrix. But how many matrices are there in the considered phase space dimensions, i.e. in 1 + 1, 3 + 1 and 9 + 1 (etc.) dimensions, which anti-commute with γ_0?
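The orthogonality guaranteed by skew-symmetry can be illustrated with a short numeric sketch (our own minimal choice of a skew-symmetric J standing in for γ_0, not the representation used in the text):

```python
import numpy as np

# For any skew-symmetric J (J = -J^T) the quadratic form psi^T J psi
# vanishes identically, so J @ psi is perpendicular to psi.
eps = np.array([[0., 1.], [-1., 0.]])
J = np.kron(np.eye(2), eps)  # a 4x4 nonsingular skew-symmetric matrix

rng = np.random.default_rng(0)
psi = rng.normal(size=4)
print(np.isclose(psi @ J @ psi, 0.0))  # True for every psi
```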
We need at least 2n − 1 skew-symmetric anticommuting elements to obtain a diagonal A. However, this implies at least N − 1 anticommuting elements of the Clifford algebra that square to −1. Hence the ideal case is 2n = N, which holds only for the Pauli and Dirac algebras. For the Pauli algebra, there is one skew-symmetric element, namely η_0. In the Dirac algebra there are 6 skew-symmetric generators, which contain two sets of mutually anti-commuting skew-symmetric matrices: γ_0, γ_10 and γ_14 on the one hand and γ_7, γ_8 and γ_9 on the other hand. The next considered Clifford algebra, with N = 9 + 1 dimensions, requires a representation by 2n = 32 = √(2^10)-dimensional real matrices. Hence this algebra may not represent a Clifford algebra with more than 10 unit elements - certainly not 2n. Hence we cannot use the algebra to generate purely massive objects (e.g. diagonal matrices) without further restrictions (i.e. projections) of the spinor ψ.
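This counting can be reproduced in an explicit real 4 × 4 representation. The construction below is our own (Kronecker products of I, σ_x, σ_z and the skew unit ε = iσ_y), so the matrix labels differ from the γ-indexing of the text, but the counts match: 6 skew-symmetric basis elements, falling into exactly two disjoint triples of mutually anticommuting matrices.

```python
import itertools
import numpy as np

# A real basis of the 4x4 Dirac algebra from Kronecker products
# of the real matrices I, sigma_x, sigma_z and eps = i*sigma_y.
I2 = np.eye(2)
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
eps = np.array([[0., 1.], [-1., 0.]])

basis = [np.kron(a, b) for a in (I2, sx, eps, sz) for b in (I2, sx, eps, sz)]
skew = [m for m in basis if np.allclose(m, -m.T)]
print(len(skew))  # 6

def anticommute(a, b):
    return np.allclose(a @ b + b @ a, 0.0)

# all triples of mutually anticommuting skew-symmetric elements
triples = [t for t in itertools.combinations(range(len(skew)), 3)
           if all(anticommute(skew[i], skew[j])
                  for i, j in itertools.combinations(t, 2))]
print(len(triples))  # 2
```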
But what exactly does this mean? Of course we can easily find 32 linearly independent spinors to generate an orthogonal matrix B. So what exactly is special about the Pauli and Dirac algebras? To see this, we need to understand what it means that we can use the matrix B of mutually orthogonal column-spinors B = (ψ, γ_0 ψ, γ_10 ψ, γ_14 ψ).
This form implies that we can define the mass of the "particle" algebraically, and since we have N − 1 = 3 anticommuting skew-symmetric matrices in the Dirac algebra, we can find a multispinor B for any arbitrary point in phase space. This does not seem sensational at first sight, since it appears to be a property of any Euclidean space. The importance comes from the fact that ψ is a "point" in a very special space - a point in phase space. In fact, we will argue in the following that this possibility to factorize ψ and the density ρ is anything but self-evident. If we want to simulate a phase space distribution, we can either define a phase space density ρ(ψ) or use the technique of Monte-Carlo simulations and represent the phase space by (a huge number of random) samples. If we generate a random sample and would like to implement a certain exact symmetry of the density in phase space, then we would (for instance) form a symmetric sample by appending not only a column vector ψ to B, but also its negative −ψ. In this way we obtain a sample with an exact symmetry. In a more general sense: if a phase space symmetry can be represented by a skew-symmetric matrix γ_s that allows us to associate to an arbitrary phase space point ψ a second point γ_s ψ, then we have a certain continuous linear rotational symmetry in this phase space. As we have shown, phase spaces are intrinsically structured by γ_0 and are insofar much more restricted than Euclidean spaces. This is due to the distinction of symplectic from non-symplectic transformations and due to the intrinsic relation to Clifford algebras: phase spaces are spaces structured by time. According to the rules and results of the game, they appear to be the only physically fundamental spaces at all.
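That B diagonalizes A = B^T B for an arbitrary phase space point can be tested numerically. The sketch below borrows the labels γ_0, γ_10, γ_14 from the text but uses our own real representation of three mutually anticommuting skew-symmetric matrices squaring to −1.

```python
import numpy as np

# Our stand-ins for gamma_0, gamma_10, gamma_14: skew-symmetric,
# mutually anticommuting, each squaring to -1.
I2 = np.eye(2)
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
eps = np.array([[0., 1.], [-1., 0.]])
g0, g10, g14 = np.kron(eps, I2), np.kron(sx, eps), np.kron(sz, eps)

psi = np.random.default_rng(1).normal(size=(4, 1))
B = np.hstack([psi, g0 @ psi, g10 @ psi, g14 @ psi])

A = B.T @ B
print(np.allclose(A, (psi.T @ psi) * np.eye(4)))  # True: A is diagonal
```

Since every column has the same norm as ψ and all columns are pairwise orthogonal, A is proportional to the unit matrix for any ψ - the "massive" case described in the text.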
We may imprint the mentioned symmetry onto an arbitrary phase space density ρ by taking all phase space samples collected so far and adding the same number of samples, each column multiplied by γ_s. Thus we have a single such rotation in the Pauli algebra and two of them in the Dirac algebra. Note that order and sign of the column vectors in B are irrelevant - at least with respect to the autocorrelation matrix B B^T. Thus we find that there are two fundamental ways to represent a positive mass in the Dirac algebra and one in the Pauli algebra: the 4-dimensional phase space of the Dirac algebra is self-matched in two independent ways. Our starting point was the statement that 2n linearly independent vectors are needed to generate mass. If we cannot find 2n vectors in the way described above for the Pauli and Dirac algebras, then this does (of course) not automatically imply that there are no 2n linearly independent vectors.
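The imprinting procedure can be sketched as a Monte-Carlo toy (with our own choice of γ_s, not the paper's): doubling the sample with its γ_s-rotated copy makes the autocorrelation matrix exactly invariant under the rotation.

```python
import numpy as np

# Toy sample: 100 random phase-space points as columns of a 4 x 100 matrix.
rng = np.random.default_rng(2)
B = rng.normal(size=(4, 100))

# Our choice of a skew-symmetric symmetry generator with gs @ gs = -1.
eps = np.array([[0., 1.], [-1., 0.]])
gs = np.kron(eps, np.eye(2))

# Append the rotated copy of every sample column.
B_sym = np.hstack([B, gs @ B])
Sigma = B_sym @ B_sym.T

# The second moments are now symmetric under gs:
print(np.allclose(gs @ Sigma @ gs.T, Sigma))  # True
```

The invariance follows algebraically from γ_s² = −1: the appended term γ_s B B^T γ_s^T maps back onto B B^T under conjugation with γ_s.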
But what does it mean that the dimension N of the Clifford algebra of observables does not match the dimension 2n of the phase space in higher dimensions? Different physical descriptions can be given. Classically we would say that a positive definite 2n-component spinor describes a system of n (potentially) coupled oscillators with n frequencies. If B is orthogonal, then all oscillators have the same frequency, i.e. the system is degenerate. But for n > 2 we find that not all eigenmodes can involve the complete 2n-dimensional phase space. This phenomenon is already known in 3 dimensions: the trajectory of the isotropic three-dimensional oscillator always lies in a 2-dimensional plane, i.e. in a subspace. If it did not, then the angular momentum would not be conserved, and the isotropy of space would be broken. Hence one may say in some sense that the isotropy of space is the reason for a 4-dimensional phase space and hence for the 3 + 1-dimensional observable space-time of objects. Or in other words: higher-dimensional spaces are incompatible with isotropy, i.e. with the conservation of angular momentum. There is an intimate connection between these findings and the impossibility for Clifford algebras Cl_{p,1} with p > 3 to create a homogeneous "Euclidean" space. Let γ_0 represent time and γ_k with k ∈ [1, . . . , N − 1] the spatial coordinates. The spatial rotators are products of two spatial basis vectors; the generator of rotations in the (1, 2)-plane is γ_1 γ_2. Then we have 6 rotators in 4 spatial dimensions. However, we find that some of these generators commute while others anticommute, and it follows from combinatorics that only sets of 3 mutually anti-commuting rotators can be formed from a set of symmetric anticommuting γ_k: for instance γ_1 γ_2, γ_2 γ_3 and γ_3 γ_1 mutually anticommute, but γ_1 γ_2 and γ_3 γ_4 commute.
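The commutation pattern of the rotators is representation independent and easy to verify. The sketch below uses a standard set of four mutually anticommuting 4 × 4 matrices (complex for convenience; not the real γ_k of the text):

```python
import numpy as np

# Four mutually anticommuting "spatial" generators, each squaring to +1.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)
g = [np.kron(sx, sx), np.kron(sx, sy), np.kron(sx, sz), np.kron(sz, I2)]

def commute(a, b):
    return np.allclose(a @ b, b @ a)

r12, r34 = g[0] @ g[1], g[2] @ g[3]   # rotators in disjoint planes
r13 = g[0] @ g[2]                     # rotator sharing index 1 with r12

print(commute(r12, r34))  # True: disjoint-plane rotators commute
print(commute(r12, r13))  # False: rotators sharing an index anticommute
```

Rotators in disjoint planes commute because interchanging them involves an even number of anticommutations; this is what spoils the formation of more than 3 mutually anticommuting rotators in 4 (or more) spatial dimensions.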
Furthermore, in 9 + 1 dimensions the spinors are either projections into 4-dimensional subspaces or there are non-zero off-diagonal terms in A, i.e. there is "internal interaction". Another way to express the above considerations is the following: only in 4 phase space dimensions may we construct a massive object from a matrix B that represents a multispinor Ψ of exactly N = 2n single spinors and construct a wave-function such that ρ = φ² is the phase space density.
It is easy to prove, and has been shown in Ref. 20, that the elements γ_0, γ_10 and γ_14 represent parity, time reversal and charge conjugation. The combination of these operators to form a multispinor may lead (with normalization) to the construction of symplectic matrices M; some examples are given in Eq. 132. Hence the combination of the identity and the CPT operators can be arranged such that the multispinor M is symplectic with respect to the directions of time γ_0, γ_10 and γ_14, but not with respect to γ_7, γ_8 or γ_9. As we tried to explain, the specific choice of the skew-symmetric matrix γ_0 is determined by a structure-defining transformation. Since particles are nothing but dynamical structures in this game, the 6 possible SUMs should stand for 6 different particle types. However, for each direction of time there are also two choices of the spatial axes: for γ_0 we have chosen γ_1, γ_2 and γ_3, but we could have used γ_4 = γ_0 γ_1, γ_5 = γ_0 γ_2 and γ_6 = γ_0 γ_3 as well.
Thus there should be either 6 or 12 different types of structures (types of fermions) that can be constructed within the Dirac algebra. The above construction allows for 3 different types corresponding to 3 different forms of the symplectic unit matrix; a further 3 types are expected to be related to γ_7, γ_8 and γ_9 (Eq. 133). These matrices describe specific symmetries of the 4-dimensional phase space, i.e. geometric objects in phase space. Therefore massive multispinors can be described as volumes in phase space. If we deform the figure by stretching parameters a, b, c and d, then one obtains, with the f_k taken from Eq. 121, a result that reproduces the quadratic forms f_k; furthermore, the phase space radii a, b, c and d reproduce the structure of the Clifford algebra, i.e. the classification into the 4 types of observables E, P, E and B. This means that a deformation of the phase space "unit cell" represents momenta and fields, i.e. the dimensions of the phase space unit cell are related to the appearance of certain symplices, while for a = b = c = d all vectors but E vanish. Only in this latter case, for a = b = c = d = 1, is the matrix M symplectic. These relations confirm the intrinsic connection between a classical 4-dimensional Hamiltonian phase space and Clifford algebras in dimension 3 + 1.

IX. SUMMARY AND DISCUSSION
Based on three fundamental principles, we have shown that the algebraic structure of coupled classical degrees of freedom is (depending on the number of DOFs) isomorphic to certain Clifford algebras, which allows us to explain the dimensionality of space-time and to model Lorentz transformations, the relativistic energy-momentum relation and even Maxwell's equations.
It is usually assumed that we have to define the properties of space-time in the first place: "In Einstein's theory of gravitation matter and its dynamical interaction are based on the notion of an intrinsic geometric structure of the space-time continuum" 37. However - as we have shown within this "game" - it has far more explanatory power to derive and explain space-time from the principles of interaction. Hence we propose to reverse the above statement: the intrinsic geometric structure of the space-time continuum is based on the dynamical interaction of matter. A rigorous consequence of this reversal of perspective is that "space-time" does not need to have a fixed and unique dimensionality at all; this dimensionality appears to be relative to the type of interaction. If higher-dimensional space-times (see Ref. 20) were to emerge in analogy to the method presented here, for instance in nuclear interaction, then these space-times would not simply be Euclidean spaces of higher dimension. Clifford algebras - especially if they are restricted by symplectic conditions imposed by a Hamiltonian function - have a surprisingly complicated intrinsic structure. As we pointed out, if all generators of a Clifford algebra are symplices, then in 9 + 1 dimensions we find k-vectors with k ∈ [0..10], but k-vectors generated from symplices are themselves symplices only for k ∈ [1, 2, 5, 6, 9, 10, . . .]. However, if space-time is constrained by Hamiltonian motion, then ensembles of oscillators may also clump together to form "objects" with 9 + 1- or 25 + 1-dimensional interactions, despite the fact that we gave strong arguments for the fundamentality of the 3 + 1-dimensional Hamiltonian algebra.
There is no a priori reason to exclude higher-order terms - whenever they include constants of motion. However, as the Hamiltonian then involves terms of higher order, we might need to consider higher-order moments of the phase space distribution. In this case we would have to introduce an action constant in order to scale ψ.
Our game is based on a few general rules and symmetry considerations. The mathematics used in our derivation - taking the results of representation theory for granted - is simple and can be understood at an undergraduate level. And though we never intended to find a connection to string theory, we found - besides the 3 + 1-dimensional interactions - a list of possible higher-dimensional candidates, two of which are also in the focus of string theories, namely 9 + 1 = 10-dimensional and 25 + 1 = 26-dimensional theories 38.
We understand this modeling game as a contribution to the demystification (and unification) of our understanding of space-time, relativity, electrodynamics and quantum mechanics. Despite the fact that it has become tradition to write all equations of motion of QED and QM in a way that requires the use of the unit imaginary, our model seems to indicate that it does not have to be that way. Though it is frequently postulated that evolution in time has to be unitary within QM, it appears that symplectic motion does not only suffice, but is superior, as it yields the correct number of relevant operators: in the unitary case one would expect 16 (15) unitary (traceless) operators for a 4-component spinor, while the natural number of generators in the corresponding symplectic treatment is 10, as found by Dirac himself in QED 2,3. If a theory contains things which are not required, then we have added something arbitrary and artificial. The theory as we described it indicates that in momentum space, which is used here, there is no immediate need for the unit imaginary and no need for more than 10 fundamental generators. The use of the unit imaginary, however, appears unavoidable when we switch via Fourier transform to "real space".
There is a dichotomy in physics. On the one hand all causes are considered to inhabit space-time (local causality), but on the other hand physical reasoning mostly happens in energy- or momentum-space: there are no Feynman graphs, no scattering amplitudes, no fundamental physical relations that do not refer in some way to energy or momentum (conservation). We treat problems in solid state physics as well as in high energy physics mostly in Fourier space (reciprocal lattice).
We are aware that the rules of the game are, in their rigour, difficult to accept. However, maybe it does not suffice to speculate that the world might be a hologram 23 - we really should play modeling games that might help to decide if and how it could be like that. Hence we find that f_0 (energy), f_8 (one spin component) and H (mass) are "sharp" (i.e. operators with an eigenvalue), while the other "expectation values" have a non-vanishing variance. The fact that spin always has a "direction of quantization", i.e. that only a single component is "sharp", can therefore be nicely modelled within our game; it is a consequence of symplectic motion. Note also that the squared expectation values of all even (γ_1, γ_3, γ_4 and γ_6, except γ_0 and γ_8) and all odd (γ_2, γ_5, γ_7 and γ_9) operators are equal 24.
Assume the coordinates are given by the fields (E ∝ x), Q = (f_4, f_5, f_6)^T, and the momenta as usual by P = (f_1, f_2, f_3)^T; then the angular momentum L should be given by L = Q × P. We obtain the corresponding expectation values from the microcanonical ensemble: up to a common scale factor of ε (or ε², respectively), they are the same results as in Eq. A18.

Footnote 24: The even Dirac matrices are block-diagonal, the odd ones are not. There are six even symplices and four odd ones (γ_2, γ_5, γ_7 and γ_9) 15. Obviously this pattern is the reason for the grouping in Eq. A18.

Consider now the quantum mechanical postulates that
the spin component of a fermion is s_z = ±s = ±1/2 and |s|² = s_x² + s_y² + s_z² = s(s + 1) = 3/4. We can "derive" this result (up to a factor) from an isotropy requirement for the 4th-order moments, i.e. from the condition that ⟨P_x²⟩ = ⟨P_y²⟩ = ⟨P_z²⟩. This condition reads

sin²(2α) = 2 cos²(2α),   (A20)

or equivalently

1 + 3 cos(4α) = 0,   (A22)

from which we obtain cos²(2α) = 1/3. With respect to the symplex F as defined in Eq. A2, and using Eq. A18, one then finds the total spin from the 4th-order moments as S² = ε², so that for a spin-1/2 particle we would have to normalize to ε² = (s(s + 1))² = (3/4)². To conclude, classical statistical mechanics allows for a description of spin if the rules of symplectic motion are taken into account. This alone is remarkable. Secondly, assuming that the microcanonical ensemble is the right approach, the isotropy of the emergent 3 + 1-dimensional space-time (with respect to 4th-order moments) apparently requires a certain ratio between the frequencies and amplitudes of the two coupled oscillators, i.e. an asymmetry on the fundamental level.
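The equivalence of the two forms of the isotropy condition is elementary arithmetic; α = arctan(√2)/2 below is our explicit solution of Eq. A20:

```python
import math

# alpha solves sin^2(2a) = 2 cos^2(2a), i.e. tan^2(2a) = 2,
# which is the same condition as 1 + 3 cos(4a) = 0.
a = math.atan(math.sqrt(2.0)) / 2.0

print(abs(math.sin(2*a)**2 - 2*math.cos(2*a)**2) < 1e-12)  # True
print(abs(1.0 + 3.0*math.cos(4*a)) < 1e-12)                # True
print(abs(math.cos(2*a)**2 - 1.0/3.0) < 1e-12)             # True
```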

Entropy and Heat Capacity
The entropy S of the microcanonical ensemble can be written down in closed form 35. The temperature T of the system is then given by 1/T = ∂S/∂U, so that the energy becomes a function of temperature, and the heat capacity C_V = ∂U/∂T is (per particle) C_V = (3/2) k. This important result demonstrates - according to statistical mechanics - the 3-dimensionality of the "particle", as the energy per DOF is kT/2: a two-dimensional harmonic oscillator of fundamental variables is equivalent to a free 3-dimensional "point particle". To our knowledge this is the first real physical model of a relativistic point particle.
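The chain of steps can be sketched symbolically. This is a minimal sketch under the assumption (hypothetical, chosen only to reproduce the stated result) that the microcanonical entropy has the form S(U) = (3/2) k ln U + const:

```python
import sympy as sp

# Hypothetical entropy S(U) = (3/2) k ln(U); then T = (dS/dU)^(-1),
# U(T) follows by inversion, and C_V = dU/dT.
U, k, T = sp.symbols('U k T', positive=True)
S = sp.Rational(3, 2) * k * sp.log(U)

temperature = 1 / sp.diff(S, U)                 # T(U) = 2U/(3k)
U_of_T = sp.solve(sp.Eq(temperature, T), U)[0]  # U(T) = (3/2) k T
C_V = sp.diff(U_of_T, T)
print(C_V)  # 3*k/2
```

Any entropy proportional to (3/2) ln U yields the same C_V, i.e. the heat capacity of three translational degrees of freedom.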