Article

Is the Basic Structure of the Universe Simple?

Department of Mathematics, University of Stockholm, 106 91 Stockholm, Sweden
Axioms 2025, 14(12), 922; https://doi.org/10.3390/axioms14120922
Submission received: 30 September 2025 / Revised: 27 November 2025 / Accepted: 5 December 2025 / Published: 14 December 2025
(This article belongs to the Special Issue Recent Advances in Quantum Mechanics and Mathematical Physics)

Abstract

Can our extremely complicated world be explained, starting from simple conditions and laws? Such questions easily become metaphysical, but at the same time they have historically served science well by promoting new and fruitful ideas. Of particular interest in this paper is the unification of general relativity and quantum mechanics. It is suggested that a way to find a common basis for these theories could be to view them both as stochastic theories which, among all possible macroscopic developments, promote the simplest ones, with the difference that general relativity uses real probabilities, whereas quantum mechanics uses complex weights. In a certain sense this approach goes back to the Principle of Least Action, although the perspective on this principle in the present paper is different from the one which is commonly used in contemporary physics. It is also suggested that a more general principle, which applies to both theories and which goes beyond both stationarity and minimization, could give us a better starting point for the unification.

1. Introduction

A very central reason for doing physics is to find a description of reality which is as simple as possible. It may not always be obvious what simplicity should mean, but often it is valued much higher at a fundamental level than in connection with practical applications.
For example, the Copernican revolution was to a high degree driven by the wish to simplify Ptolemy’s complicated machinery on a fundamental level, even if it was only long after Copernicus’ first steps that the simplification actually resulted in better predictions from a computational point of view (Kuhn [1]).
Similarly, general relativity is usually considered as a huge step forward from the Newtonian theory of gravity, not because it is easier to use in applications, but because it is simpler on a fundamental level (see e.g., [2], pp. 302–303).
It goes without saying that our judgement of a scientific theory must be based on the accuracy of its predictions. Nevertheless, it is remarkable how often fundamental progress has been made possible by the kind of striving for simplicity seen in the above examples.
Perhaps the greatest problem in fundamental physics today is the unification of general relativity and quantum mechanics. So far, the most common method has been to try to incorporate general relativity into quantum mechanics, often by recasting it as a quantum field theory.
In this paper, my starting point will be a different one. Rather than trying to find technical tools for pressing gravity into quantum mechanics, let us try to see how our enormously complicated universe could have come into existence, starting from genuinely simple principles. The idea that the universe is somehow driven by simplicity is a very old one in physics, often associated with the Principle of Least Action. However, this aspect of the principle has been more or less forgotten, and the version in use nowadays should more accurately be called the “Stationary Action Principle”.
Here I will argue that returning to the original idea of this principle could be a starting point for a common perspective on general relativity and quantum mechanics, where the two theories play similar roles: Both can be perceived as mechanisms for promoting the simplest kind of development.
It may be that this will not lead directly to quick and effective computational methods for treating problems which contain elements from both theories (such methods to a large part already exist). But it may be beneficial for our understanding in the long run, both from a theoretical and a computational point of view.
Is such an approach to the concept of simplicity in itself metaphysical? Certainly, it may lead to encounters in the borderland between ordinary physical science and metaphysical ideas about the ultimate nature of reality, sometimes referred to as “philosophical cosmology”.
A lot more could be said about the reasons for striving towards simplicity, both from a scientific and a philosophical point of view. In the following, however, I will leave such questions aside as far as possible and rather concentrate on the question of where such ideas may lead us. Let us also keep in mind that the goal should always be a simple, consistent and mathematically manageable theory, nothing more and nothing less.
In Sections 2–6, I review the history of the role of simplicity in physics from the perspective of this paper. Everything here is based on well-known results, but the presentation is set up to reflect my own viewpoint. In Sections 7–9, on the other hand, I briefly sketch a way to make general relativity into a kind of simple stochastic theory, based on real probabilities/weights. The methods come from classical statistical mechanics, but I take full responsibility for the way these are applied to general relativity. The results presented here are based on earlier work of mine (references are given), and they represent the best I can do so far. But it should be remembered that this is a very difficult area of science, which is still far from being put into a final rigorous form. In Sections 10–12, I then discuss possible implications for a future unification of general relativity and quantum mechanics. Again, this is my own attempt to find a common ground for the real-valued probabilities of general relativity and the complex-valued probabilities used in quantum mechanics. It is not claimed that the idea of making general relativity into a stochastic theory is new. But one of the interesting points here is that such a common ground for the two theories could be seen as a way to connect the quantum formalism with the essentially classical theory of minimizing curvature. Our lack of understanding of the true role of curvature in quantum mechanics may in fact be one of the most central obstacles on the road to a grand unification.
Without claiming to have reached any kind of final answer to the mysteries of the unification, I discuss the question of a common generalized Principle of Least Action in some detail. Rather than as a final answer, this should be seen as a starting point and an attempt to initiate a broader discussion about the role of minimizing and probabilities in physics. In general, this paper is about ideas and perspectives, rather than about absolute rigor and computations. I have done my best to avoid technicalities as far as possible, but some references to more technical papers are given.

2. The Classical Universe

To give a historical perspective on the question of simplicity in physics, let us start from classical Newtonian physics and its interpretation by Laplace and his famous demon (see [3]). Laplace’s claim was that the time development of our universe is completely deterministic. In somewhat modernized language, this could also be expressed by saying that time development gives an isomorphism between the states of the universe at any two moments of time. And as a consequence, these states must in a certain sense be equally complicated.
If we now add the contemporary view that the universe started with a Big Bang approximately 14 billion years ago, then the conclusion must be that the structure of the universe at the Big Bang (or rather arbitrarily close to it) must have been just as complex as the universe we experience now. And in particular, it must have included the seed of all of Shakespeare’s dramas and... well, just everything we know of.
The task of explaining the coming into existence of something so horribly complicated may even make assuming it to be the work of an almighty God look like an easy way out.
Although Laplace’s point of view may in a sense have been reasonable in principle, fortunately the premises are wrong. The world is not governed by classical mechanics. And other theories, like quantum mechanics, give much better hope for finding a simple origin.

3. The Quantum Mechanical Universe

Why should a quantum mechanical universe give us better hope for a simple explanation? At first sight, quantum mechanics is a deterministic theory: the time development given by, e.g., the Schrödinger equation gives the same kind of isomorphism between the states at two different times as the Newtonian equations do in the classical case. This is definitely a relevant objection, and it is also a point where the answers may depend on our underlying beliefs and our interpretations of quantum mechanics: different interpretations may lead to different answers and different conclusions.
Having said this, however, quantum theory does in one way or another contain a stochastic element. Exactly what this means and how it should be interpreted is something that divides the scientific community, but assuming such a stochastic element can open up entirely different possibilities, as compared to Laplace’s determinism. As a highly simplified example, without any ambition to describe the real Big Bang, we may consider the universe to have started from just one single completely ordered state with zero volume. Statistical fluctuations could then have led to more and more complicated states, following well-known principles from well-established physical theories.
In other words, a complicated universe could originate from essentially trivial initial conditions. However, explanations using such simple initial conditions come at a price: What are the mechanisms behind such a stochastic process? It must definitely be something different from what our usual theories of probability, whether classical or Bayesian, have been created to handle. Although quantum mechanics is an extremely accurate theory, its ontology is mysterious. According to a recent survey (see Nature [4]), physicists disagree wildly on what quantum mechanics says about reality: Is the wave function something real or just a kind of abstract mathematical tool? Do virtual particles exist or do they not? And so on.
Abandoning the need for extremely complicated initial conditions, in favor of an ontology which we do not understand, could mean that we are just trying to avoid one kind of philosophical obstacle by replacing it with another kind of philosophical obstacle.

4. The Everett Interpretation

One way to avoid the difficulties with the stochastic element, without returning to the problems of Laplace’s determinism, is given by the multiverse interpretation. This point of view originates from the Everett interpretation of quantum mechanics (see [5]), in which all possible outcomes of random events or, more accurately, all possible developments of the universe, are considered real and together constitute the so-called multiverse. In this interpretation the stochastic element is not a part of the physical world itself, but rather has to do with our perception of it. What we see as randomness just comes from the fact that we can only observe a very small part of the multiverse: If we observe that a certain particle after a measurement has spin-up, it just means that spin-up is the true value of the spin in the world where we find ourselves, but this does not exclude that the spin may at the same time be down in another world which is also real, but no longer observable to us.
The Everett interpretation is usually considered to be both an interpretation of quantum mechanics and a consequence of it. To the mind of the author, it offers a way of understanding the concept of quantum randomness which is much better suited for explaining the origin of our complex world than the Copenhagen interpretation is.
But this does not necessarily mean that it is unproblematic. For example, although the Schrödinger equation is essentially invariant with respect to the reversal of the direction of time, the multiverse interpretation is not: A simple measurement of the spin of a particle may cause the world to split into two in the multiverse. But the world never seems to split into two when we go backwards in time. This again leads to problems about time’s arrow and causality, where the scientific community is divided.
It should be noted that the elimination of the stochastic element in the multiverse interpretation also comes at a price in another way: In classical ontology, things either exist or they do not. In the multiverse interpretation, all developments which are logically possible actually exist, but with different “weights”, and we are still far from understanding the meaning of these weights, how many macroscopic developments actually receive substantial weights, and what this means for our understanding of the multiverse as a whole.
Having said this, it is still the belief of the author of this paper that the Everett interpretation may be the best way forward if we do want a description of reality which has developed from something simple.
But something in this picture is still missing. In fact, it may be that simply considering all possible developments still lacks both an appropriate perspective and a corresponding physical principle. To say that every development is equally real would otherwise just leave us with a very chaotic picture of the world where everything is possible but nothing is predictable. Needless to say, this would be extremely far from the reality which we can all perceive and which is somehow governed by remarkably simple laws of nature.

5. Universal Simplicity and the Democracy of All Histories

A possible way to avoid this chaotic picture at the end of the previous section would be to argue that somehow the process which determines the weights of different developments tends to favor the simplest ones: The chaos on the micro-level remains, but the macroscopic reality that emerges could still be a very ordered one. As already mentioned in the introduction, this is very closely related to the old idea in physics which is most often referred to as “the Principle of Least Action”. This idea is often associated with Maupertuis (see [6]), who in 1744 launched it as a “principle of ultimate economy”, in the sense that nature always develops with the least possible effort. Although his argumentation may have been rather metaphysical, action can actually in a certain sense be seen as a starting point for a way of determining weights for different developments which favors simplicity. And in addition, minimizing the action can also in general be used to derive the full equations of motion.
But what has this got to do with the multiverse? Even if the Principle of Least Action, as a principle of stationary action, rather quickly became an indispensable instrument in every physicist’s toolbox, it took a long time before it was discovered that it could have something to say about multiple developments. A major step forward came with Feynman’s idea of “democracy of all histories” (see [7]).
The essence of Feynman’s approach is to assign to each history a complex weight $e^{iL/\hbar}$, where $L$ is the action of the development, and the total probability amplitude for a certain given (macroscopic) history, leading from a state $A$ to a state $B$, is then obtained by summing over all nearby histories. As it turns out, for almost all macroscopic histories, this summing leads to essentially vanishing probability, due to the fact that the complex weights, as opposed to real probabilities, will tend to cancel out. Only histories which are very close to being stationary with respect to the action can contribute with non-negligible probabilities.
Since action-minimizing histories are also stationary, the theory actually promotes macro-states with as small action as possible, very much in the spirit of Maupertuis’ original idea.
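To make the cancellation mechanism concrete, here is a minimal numerical sketch (my own illustration, not taken from [7]): histories are represented by a single parameter x with the toy action S(x) = x², and we compare the summed complex weights near the stationary point x = 0 with those far away. All names and numbers are illustrative.

import numpy as np

hbar = 0.01                       # a small toy "Planck's constant"

def S(x):
    return x**2                   # toy action of a one-parameter family of histories

def summed_weight(center, width=0.2, n=20_000):
    # |average of the complex weights e^{iS/hbar} over histories near 'center'|
    x = np.linspace(center - width, center + width, n)
    return abs(np.mean(np.exp(1j * S(x) / hbar)))

print(summed_weight(0.0))   # near the stationary point: phases align, result is O(1)
print(summed_weight(2.0))   # far from stationarity: phases cancel, result is ~0.01

Only parameter regions around the stationary point of S contribute with non-negligible amplitude, which is exactly the stationary phase mechanism described above.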

6. The Pre-Multiverse

Feynman’s approach has been enormously successful. It can be said to have been the key to the success of quantum field theory. At the same time it fits very well into the general scheme of this paper: It does present a very general mechanism which explains how nature, among all possible macro developments, somehow favors the simplest ones (i.e., the ones with small action).
On the other hand, this mechanism, although it works well in a quantum mechanical context, has so far been unable to handle gravitation. In spite of enormous efforts ever since Feynman’s book [7], there is still little progress when it comes to including general relativity.
It can very well be argued that quantum mechanics is the most exact theory that physics has ever come up with, in spite of its inner contradictions. This has led many physicists to leave conceptual understanding behind, and simply accept the mathematical framework as a tool without further questions.
This may not be a bad strategy for exploring the realm of quantum physics. But at the same time, it could actually be that it is this strategy which blocks the road to a unification with, e.g., general relativity. You cannot really argue with a theory which gives constants of nature with 15 significant digits or more, so attempting to create a synthesis between quantum mechanics and more or less any other physical theory has so far tended to end up with quantum mechanics alone.
In this paper, I want to suggest a different and more general perspective: Let us start, in the spirit of Feynman’s approach, by taking as our basic object of study the set of all possible developments in a broad sense. Everett’s multiverse is usually concerned with all possible quantum mechanical developments. Here it will be more natural to start from the idea of the set of all possible developments, leading from a given initial state to a given final state, without necessarily satisfying any particular laws of motion along the way.
In the following I will refer to such a set as a pre-multiverse. The name may not be ideal, since it has already been used in various more or less scientific contexts. But it is hard to come up with something better. With or without a name, it has been with us for a long time.
In the present situation, it is important to note that, as opposed to the usual Everett multiverse, a pre-multiverse is not considered to be a consequence of quantum mechanics. Rather, quantum mechanics is regarded as an instrument, or a mechanism, for selecting the simplest developments.
We may now ask what a corresponding instrument in general relativity would look like. Comparing these instruments could then hopefully lead to a new and perhaps also more fruitful view on how quantum mechanics and general relativity could be linked to each other.

7. The Principle of Least Action in General Relativity from a Stochastic Point of View

In the following, I want to briefly sketch how general relativity could be perceived as an instrument for selecting the simplest elements in a pre-multiverse, in a certain sense similar to the mechanism which we associate with the Feynman path-integral formulation of quantum mechanics, but now with real probabilities instead of complex ones. At this stage, it may appear non-obvious how this should be related to quantum theory, but I will come back to this question in Section 10 and Section 11.
The Principle of Least Action has played a central role in general relativity, ever since Hilbert’s derivation of the field equations (see [8] or e.g., [2]) from the Einstein-Hilbert action integral,
$$\int_\Omega R \, dV, \tag{1}$$
and it is also central in most alternative theories of gravity. However, in all these cases, it works essentially as a technical tool for finding the stationary metrics, and has little to do with Maupertuis’ original ideas about minimizing.
Following the strategy of this paper, let us now instead ask: what kind of closed universe is the simplest? More precisely, we may take vanishing curvature as the simplest state of the geometry. Thus, curvature becomes a kind of measure of the deviation from the ground state. The simplest universe can then be interpreted as the one which is least curved.
The most obvious and also most natural way of measuring curvature is to make use of the integral
$$\int_\Omega R^2 \, dV. \tag{2}$$
This choice shares with the traditional theory the view that the scalar curvature is a fundamental concept and a natural starting point for general relativity. But it is nevertheless an odd choice from the classical point of view: If we take (2) as a Lagrangian (as is usually done with (1) above), it does not give the right field equations when these are computed in the traditional deterministic way. In fact, it is for example easy to see that every metric with $R \equiv 0$ will be stationary for (2), which would then force us to include many unphysical metrics in the theory.
So what is wrong with (2)? Although (2) can be a good measure of simplicity, it still misses the stochastic element. In the next two sections, I will discuss how this can be included in a natural way. The result will be a different version of the Principle of Least Action which does imply the usual field equations, although in a way which is different from the usual one, starting from (1).
For more information about various aspects of this in cosmology, see [9,10].

8. The Ensemble

The approach to be used here comes from classical statistical mechanics. This is one of the most successful and reliable branches of mathematical physics, and its major mission consists in deriving macroscopic results from microscopic assumptions, which is exactly what is needed here. But it is notoriously difficult to define concepts and to prove results with the same kind of rigor as in other areas of physics.
A preliminary issue concerns the meaning of the words macro and micro themselves. In the following “macro” will refer to phenomena on the kind of cosmic scale where we usually apply general relativity, whereas “micro” will refer to phenomena on a scale where quantum phenomena are becoming important. In practice, it is seldom difficult to see the difference between the two cases, even if it is not possible to give an entirely appropriate definition.
In the following, we will consider the probability space of all possible metrics on a certain space-time manifold $\Omega$, which all give the same fixed value for the total volume. This is a typical situation where an exact definition is out of reach: We know very little about what happens to the metric at a scale where quantum mechanics comes into play. We do not know if such a probability space would be finite or infinite, or even if the classical concept of a metric has a well-defined meaning at all.
But it is also an example of a situation where statistical mechanics can be put to work without such exact knowledge. What would be the most natural way to assign a probability to a given metric g?
Clearly, the total scalar curvature is additive in separate regions. Starting from this, what can be said about the probability for a given value of the mean scalar curvature in a region D that is a disjoint union of many smaller regions?
We may not know what the probability distributions for the mean scalar curvatures look like in all these smaller regions, but the point is that under quite general assumptions, this will not be important. Let us just suppose, starting from the idea that zero curvature is the most natural and simplest state, that the mean values of these distributions are zero. The mean scalar curvature R in D will be the mean of the contributions from many much smaller subregions, and if we roughly treat these contributions as independent variables, then the central limit theorem (see Fischer [11]) will tell us that the probability for a certain value of the mean scalar curvature R in D is
$$\sim \exp\{-\mu_D R^2\}, \tag{3}$$
where $\mu_D$ is some constant depending on the volume of D. In the following, I will simply take this as the natural probability weight for the metric g in D.
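As a sanity check of this use of the central limit theorem, here is a minimal numerical sketch (my own, with an arbitrary stand-in for the unknown micro-level distribution): the mean of many independent zero-mean micro-contributions follows the Gaussian weight in (3), regardless of the micro-level details.

import numpy as np

rng = np.random.default_rng(0)
n_sub, n_samples = 1_000, 10_000

# Zero-mean micro-contributions to the curvature; the uniform distribution
# is an arbitrary placeholder for the unknown micro-level physics.
micro = rng.uniform(-1.0, 1.0, size=(n_samples, n_sub))
R = micro.mean(axis=1)     # mean scalar curvature over D = union of the subregions

# The central limit theorem predicts the density ~ exp{-mu_D R^2},
# with mu_D = 1/(2 Var(R)), independently of the micro-distribution.
mu_D = 1.0 / (2.0 * R.var())
hist, edges = np.histogram(R, bins=40, density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
gauss = np.sqrt(mu_D / np.pi) * np.exp(-mu_D * centers**2)
print(abs(hist - gauss).max() / gauss.max())   # small: the Gaussian weight fits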
Now how about the probability weight p of the metric g on a disjoint union $\Omega = \bigcup_\alpha D_\alpha$ of sets like in (3) above? Using multiplicativity (which again means that different regions are treated as more or less independent) and assuming that all the regions have roughly the same volume, we get the (unnormalized) probability:
$$p \sim \prod_\alpha \exp\{-\mu_{D_\alpha} R_{g,\alpha}^2\} = \exp\Big\{-\sum_\alpha \mu_{D_\alpha} R_{g,\alpha}^2\Big\} \approx \exp\Big\{-\mu \int_\Omega R_g^2 \, dV\Big\}, \tag{4}$$
where $R_{g,\alpha}$ is the mean scalar curvature of g in $D_\alpha$. Here we have, in the transition from sum to integral, made use of the additive property of the variance in normal distributions, which in this context means that
$$\mu_{D_\alpha} \approx \mu \cdot \mathrm{Vol}(D_\alpha), \tag{5}$$
for some fixed constant $\mu$. Summing up, what we have obtained is a kind of Ensemble of all possible metrics in $\Omega$, where each metric gets a probability weight as in (4) above.
It is very much the essence of the central limit theorem that the distribution in (4) will be more or less the same, independently of the actual distributions on the micro-level. For more details on this and the following, see [9]. (For the general formalism of Ensembles, see e.g., [12,13]).
So what kind of metrics will dominate the probability distribution given by this Ensemble? This is one of the points where the discussion about macro-states and micro-states becomes important. A macro-state in this context will essentially be, as stated above, an ordinary metric g of the kind we work with in general relativity. However, what dominates the probability distribution in (4) on the macroscale does not only depend on the probability of the metric/macro-state itself; equally important is the number of corresponding micro-states. These can loosely be defined as those micro-states which are macroscopically indistinguishable from the given macro-state and which also have approximately the same probability according to (4).
If we, for a given neighborhood D of a given point $x \in \Omega$, denote by $\omega_g(D)$ the number of all such micro-states, the approximate independence of disjoint sets implies that $\omega_g(D)$ is an exponential function of the volume of D (at least when D is so small that g can be considered to be more or less constant in D). It follows that we can naturally write
$$\log \omega_g(D) \approx \log \omega_g(x) \cdot \mathrm{Vol}(D), \tag{6}$$
where $\log \omega_g(x)$ is now a macroscopic function, independent of D, which measures the density of states itself at x.
The standard method in statistical mechanics for computing the macro-state which dominates the Ensemble, under the condition of, in this case, constant 4-volume, looks as follows:
First compute the “state sum”:
$$\Xi = \sum_g \exp\Big\{-\int_\Omega \mu R_g^2 \, dV\Big\}. \tag{7}$$
The negative logarithm of the state sum, $L = -\log \Xi$, is what in the usual three-dimensional theory is referred to as the “Helmholtz Free Energy”. According to standard wisdom in statistical mechanics, the macro-states which minimize L (among all states with a given volume) are by far the most probable ones, i.e., the ones which we can actually observe in nature.
To compute the free energy L exactly is a very complicated task. However, again according to standard wisdom in statistical mechanics, the sum in (7) above is usually dominated by its largest term, corresponding to a certain macro-metric g, together with its density of states function $\log \omega_g$.
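The dominance of the largest term is easy to illustrate numerically; the following minimal sketch (an illustration of the standard argument, with arbitrary numbers) shows that the logarithm of a sum of widely spread exponentials is reproduced by its largest exponent alone.

import numpy as np

rng = np.random.default_rng(1)
# Toy version of the state sum (7): a million terms exp{x_a} whose
# exponents are spread over a wide range, as in statistical mechanics.
exponents = 50.0 * rng.normal(size=1_000_000)

log_Xi  = np.logaddexp.reduce(exponents)   # log of the full sum, computed stably
log_max = exponents.max()                  # log of the largest single term

# The difference is negligible on the scale of the exponents themselves,
# so the free energy -log(Xi) is governed by the dominant macro-state.
print(log_Xi - log_max, "out of", log_max)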
Assuming this, we can now heuristically compute the state sum (writing $h = g + \delta g$),
$$\Xi \sim \sum_h \prod_\alpha \exp\{-\mu_{D_\alpha} R_{h,\alpha}^2\} = \sum_{\delta g} \prod_\alpha \exp\{-\mu_{D_\alpha} R_{g+\delta g,\alpha}^2\} \approx \sum_{\delta g} \prod_\alpha \exp\{-\mu_{D_\alpha} R_{g,\alpha}^2\}, \tag{8}$$
in the following way. First, note that according to the above definition of the “density-of-states”, for all terms in the sum to the right in (8) that significantly contribute, the exponential factors for a given $\alpha$ will be essentially the same. If we in addition suppose that the metric $g + \delta g$ can be viewed as given by an independent choice $\delta g_\alpha$ in each $D_\alpha$, then $\Xi$ can formally be rewritten as:
$$\Xi \approx \sum_{\delta g} \prod_\alpha \exp\{-\mu_{D_\alpha} R_{g,\alpha}^2\} \approx \prod_\alpha \Big(\underbrace{1 + 1 + \dots + 1}_{\omega_g(D_\alpha)\ \text{terms}}\Big) \exp\{-\mu_{D_\alpha} R_{g,\alpha}^2\} \tag{9}$$
$$= \prod_\alpha \omega_g(D_\alpha) \exp\{-\mu_{D_\alpha} R_{g,\alpha}^2\} = \prod_\alpha \exp\{-\mu_{D_\alpha} R_{g,\alpha}^2 + \log \omega_g(D_\alpha)\} = \exp\Big\{-\sum_\alpha \big(\mu_{D_\alpha} R_{g,\alpha}^2 - \log \omega_g(D_\alpha)\big)\Big\}. \tag{10}$$
Summing up, after a transition to an integral as in (4), using (5) and also (6), we arrive at:
$$\Xi \approx \exp\Big\{-\sum_\alpha \big(\mu R_{g,\alpha}^2 - \log \omega_{g,\alpha}\big) \, \mathrm{Vol}(D_\alpha)\Big\} \approx \exp\Big\{-\int_\Omega \big(\mu R_g^2 - \log \omega_g\big) \, dV\Big\}, \tag{11}$$
or equivalently:
$$L = -\log \Xi \approx \int_\Omega \big(\mu R_g^2 - \log \omega_g\big) \, dV. \tag{12}$$
The principle of minimizing the free energy now gives us a natural, although of course still heuristic, foundation for the following
Principle of Least Action.
The metric g which is realized in $\Omega$ must minimize
$$L = \int_\Omega \mu R_g^2 \, dV - \int_\Omega \log \omega_g \, dV. \tag{13}$$

9. The Field Equations

Understanding the interplay between the two terms in (13) can in general be very difficult. However, the case of the vacuum equations turns out to be significantly easier, even if exact computations may still be out of reach.
Assume therefore that there is a metric g in $\Omega$ which satisfies the vacuum equations $R_{ij} = 0$ and also gives the prescribed volume. The claim is then that this metric is exactly the one which minimizes L as given by the formula in (13).
As it turns out, in this case the metric g which minimizes the second term in (13) will automatically have $R \equiv 0$ (in fact even $R_{ij} \equiv 0$, see below), which means that it will also minimize the first term. Hence, we may concentrate on minimizing the second one or, alternatively, on maximizing
$$\int_\Omega \log \omega_g \, dV. \tag{14}$$
Under the simplifying but reasonable assumption that the fluctuations in different regions can be treated as stochastically independent, the problem of maximizing the integral in (14) becomes a local one: What we need to do is to find the conditions on g which make $\log \omega_g(x)$ (or $\omega_g(x)$ itself) as large as possible at each point x. And as it turns out, the answer is that this will happen if and only if $g_{ij}$ satisfies the field equations.
The idea underlying this statement is surprisingly simple: To make $\omega_g$ as large as possible means to make the minimum of $R_g^2$ at g as flat as possible in the space of perturbations $g + \delta g$ of g. In fact, the flatter the minimum, the more room there will be for perturbations which do not significantly change $R^2$. But this will happen precisely when the derivatives of R in all directions $\delta g$ vanish:
$$\frac{\delta R}{\delta g} = 0. \tag{15}$$
And this statement is in a certain sense equivalent to the vacuum equations $R_{ij} = 0$.
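The following one-dimensional caricature (my own illustration, not part of the original argument) may make the step from “flat minimum” to “vanishing derivative” more explicit. Restrict attention to a line of perturbations $g_s = g + s \, \delta g$ and measure the room for perturbations by the range of $s$ over which the weight (4) stays essentially unchanged:
$$\omega_g \sim \big| \{\, s : \mu R_{g_s}^2 \lesssim 1 \,\} \big|.$$
If $\frac{dR}{ds}(0) = a \neq 0$, then $R_{g_s} \approx a s$ near $s = 0$ and this room is only of order $\mu^{-1/2}$; if instead the derivative vanishes as in (15), then $R_{g_s} = O(s^2)$ and the room is of the much larger order $\mu^{-1/4}$. For large $\mu$, maximizing $\omega_g$ therefore forces the derivative of R to vanish in every direction $\delta g$.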
Remark 1.
To make a complete argument out of this last statement is not an entirely trivial undertaking. The exact statement from differential geometry which underlies the claim looks as follows: Writing $h$ and $h_{ij}$ for $\delta g$ and $\delta g_{ij}$, we can compute the $s$-derivative of the scalar curvature $R(s)$ along the differentiable, volume-preserving one-parameter family $g_s = g_0 + s \cdot h + \dots$ of metrics passing through a given extremal metric $g = g_0$. The result is that at $s = 0$,
$$\frac{dR}{ds} = -\sum_{i,j} h^{ij} R_{ij} + \text{divergence terms}, \quad \text{where } h^{ij} = \sum_{k,l} g^{ik} g^{jl} h_{kl}. \tag{16}$$
Using the divergence theorem and the distinction between the macro-metric and the micro-perturbations, the divergence terms can be eliminated, and the rest is essentially simple linear algebra. (For the formula in (16) above, see, e.g., [14]). See [9] and the appendix in [15] for some more details about the connection to the Ensemble in the previous section.

10. A Generalized Form of the Principle of Least Action

One of the purposes of this paper so far has been to argue that both general relativity and quantum mechanics can be seen as natural mechanisms for promoting the probabilities of those universes which in a certain sense are the simplest ones on the macro-level, and in both cases the Principle of Least Action plays a crucial role. This naturally leads to the question: Can these two mechanisms be united into a common principle? Even if this may turn out to be possible, it will not be an easy task.
The most obvious difference between the general relativistic and the quantum mechanical perspectives in this paper is that the above approach to general relativity is based on real probabilities, whereas the quantum mechanical approach uses complex probabilities (or rather complex phases).
But there is also a perhaps deeper problem, which has to do with the nature of the Principle of Least Action itself. From the traditional point of view, this is a macroscopic and deterministic law of nature. Often it is even more or less identified with the deterministic equations of motion which it gives rise to.
It is the belief of the author that this kind of deterministic framework can never successfully be united with the probabilistic framework of quantum mechanics. What we can do is to try to formulate a general stochastic principle which includes the probabilistic framework in Section 7, Section 8 and Section 9 as well as basic quantum mechanics, and then hopefully re-obtain the classical theory in the limit $h \to 0$ (where h should be thought of as representing Planck’s constant).
A successful formulation of such a principle could actually be a major step towards a unification of general relativity and quantum mechanics, but I make no claim to have come that far. Rather, the following should be seen as a first attempt in this direction.
The reader will probably note that neither stationarity nor minimization is mentioned here. Even if the machinery from the calculus of variations may still be our main tool for finding the relevant states, it is not the only tool. This was illustrated in Section 9, where the argument leading to the field equations was rather a result of a study of the density of states function $\log \omega_g$.
The distinction between micro and macro underlying this discussion is extremely important, but it is not a sharp one. This is a very common problem when trying to formulate and solve problems in statistical mechanics. A standard solution is to replace the large but finite system under study by an infinite one (the so-called “thermodynamic limit”, see [13]). This makes it possible to formulate and prove sharp theorems. Unfortunately, this is not a possible strategy for uniting general relativity with quantum mechanics, since the difference in length scales is huge but not infinite. But the limit $h \to 0$ may in a certain sense work as a substitute.
In the following, I will start from an informal model where the difference between the sizes of the two length scales is very large but finite. In Remark 2, I will then briefly return to the question of how this can be made mathematically more precise.
The informal definition below attempts to find a way to handle the idea of the “density of states” in the following discussion:
Definition 1.
Given a macro-state $G$ in a suitable pre-multiverse $F$, a $\delta$-neighborhood $Q_G$ of $G$ is defined as a set of micro-states corresponding to $G$ which are macroscopically indistinguishable from $G$ and at the same time contribute approximately the same probability weight as $G$ itself.
Based on this definition, we can now formulate an informal but more general Principle of Action as follows:
General Principle of Action.
A sufficient condition for the macroscopic state $G$ in a suitable pre-multiverse $F$ to occur is that there is a $\delta$-neighborhood $Q_G$ of $G$ in $F$ (as in Definition 1) which dominates the probability space in the sense that
$$P(F \setminus Q_G) \ll P(Q_G). \tag{17}$$
Here $P(S)$ stands for the absolute value of the sum of the weights of all the elements of the set $S$. Hence, this formulation can be applied to both real and complex probabilities.
In the ordinary Principle of Least Action, the most central role is played by the stationary states. In the more general principle above, these states correspond to the following:
Definition 2.
A state $G$ which has a $\delta$-neighborhood $Q_G$ satisfying (17) is called stable.
Example 1.
An ideal gas in a container. The pre-multiverse $F$ is the set of all possible configurations (micro-states) of the gas, and $G$ is the macro-state where the gas is homogeneously distributed over the container. $Q_G$ is the set of all micro-states corresponding to the macro-state $G$.
Example 2.
General relativity. The pre-multiverse $F$ is the set of all possible metrics in some space-time region $\Omega$, satisfying certain given boundary conditions and the condition of fixed volume, $G$ is the macro-state given by a corresponding solution g of the field equations, and $Q_G$ is the set of all perturbations $h = g + \delta g$, corresponding to this macro-state as in Section 8.
Example 3.
Quantum mechanics. The pre-multiverse $F$ is the set of all possible quantum mechanical histories satisfying certain initial and final conditions, $G$ is the solution of the macroscopic equations of motion, and $Q_G$ is the set of all histories, macroscopically indistinguishable from $G$.
These three examples all illustrate how simple macroscopic phenomena can arise as stable states from large pre-multiverses. The first example is of course introduced just to emphasize the connection to classical statistical mechanics and is not directly connected to the unification problem. Note that the stable states arise according to definite stochastic principles, but that these principles are different in the three cases (a toy numerical sketch follows after the list):
  • In the first example, the stable homogeneous state emerges simply because the corresponding micro-states are the by far most common ones.
  • In the second example, it is the high probability weights associated with low curvature which cause the stable macro-state to emerge.
  • In the third example, all histories are equally probable, and it is the cancellation of the complex weights for non-stationary histories which causes the stable macro-state to emerge.
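The following minimal numerical sketch (entirely my own toy model, not part of any of the theories above) illustrates how condition (17) singles out the same stable state under both kinds of weights. Histories are labeled by a single parameter x with the toy “action” S(x) = x², minimal and stationary at the macro-state x = 0; all numbers are illustrative.

import numpy as np

# Toy pre-multiverse: histories labeled by x, with "action" S(x) = x^2,
# which is minimal (and stationary) at the macro-state G: x = 0.
x = np.linspace(-3.0, 3.0, 200_001)
S = x**2
mu = 2000.0

def P(weights, mask):
    # |sum of the weights of the selected histories|;
    # works for real probabilities and complex weights alike.
    return abs(weights[mask].sum())

Q_G = np.abs(x) < 0.2                      # a delta-neighborhood of G

for w in (np.exp(-mu * S),                 # real weights, of the type in (4)
          np.exp(1j * mu * S)):            # complex weights, of the type in (19) in Section 11
    print(P(w, ~Q_G) / P(w, Q_G))          # << 1 in both cases, so (17) holds for G

The mechanisms differ (decay of the real weights versus cancellation of the complex phases away from stationarity), but the dominating neighborhood is the same.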
Remark 2.
The informal approach above, using symbols like $\approx$ and $\ll$, is in a sense quite close to the actual problem: What distinguishes the micro- and macro-levels is a huge but still finite scale-factor.
For mathematical purposes however, it may be preferable to use a more exact, although idealized model. So what would a mathematically more precise version of the generalized Principle of Least Action look like? There can of course be several different answers to this question, but here is one attempt in the case of general relativity:
Let $\Omega$ be a region in space-time, which we may think of as bounded, and let $d(g_1, g_2)$ be a distance function, measuring the distance between different metrics $g_1$ and $g_2$ on $\Omega$.
Definition 3.
A metric $g \in F$ is called stable if there is a map $h \mapsto Q_h$ from small positive $h$ to neighborhoods of $g$ in $F$, which is monotone in the sense that $0 < h_1 < h_2 \Rightarrow Q_{h_1} \subset Q_{h_2}$, such that $\bigcap_h Q_h = \{g\}$ and
$$\lim_{h \to 0^+} \frac{P(F \setminus Q_h)}{P(F)} = 0. \tag{18}$$
A possible reformulation of the analysis in Section 9 is now the following
Principle. 
The stable states (in the sense of Definition 3 above) correspond exactly to the solutions of the vacuum equations $R_{ij} = 0$.
The reader may wonder about the role of the distance function $d(g_1, g_2)$ above. On the one hand, the above definition is quite insensitive to the exact form of d, and one may think of it as measuring the maximum deviation when $g_1$ and $g_2$ are applied to the same vectors. On the other hand, it should be remembered that the starting point for the Ensemble in Section 8 was that in fact we may not want to specify exactly the micro structure of space-time. As long as we do not know more about the true nature of the quantum mechanical fluctuations on the micro level, it may be better to leave the door open for many different models. The same may then also be said about the distance function d.

11. Can the Probabilistic View on GR and Feynman’s View on QM Be United?

Should we consider the examples in the previous section as belonging to completely different paradigms? Or should we try to find a perspective where they are all parts of something more general?
It is in no way my intention to claim that the general principle in Section 10 offers any kind of final answer to this question. What I do think is that it offers a somewhat different perspective on the unification problem.
But rather than speculating further about this in general terms, let me here briefly discuss an interesting connection between the two kinds of probabilities, which may also have something essential to say about the unification of general relativity and quantum mechanics.
A very naive attempt to formulate general relativity in quantum mechanical terms would be to attach to each metric g the “complex probability”
$$\Phi \sim \exp\Big\{ i \mu \int_\Omega R_g^2 \, dV \Big\}, \tag{19}$$
where $\mu$ is a large number related to Planck’s constant ($\mu \sim 1/\hbar$). In other words, we may consider a kind of “democracy of all geometries”, similar to Feynman’s democracy of all histories: All geometries will have the same probability amplitude ($|\Phi| = 1$), but for almost all g these phase factors will essentially cancel out. Only for those metrics g for which the integrand in (19) is very flat will the summing over an appropriate $\delta$-neighborhood $Q_\delta$ give something significantly different from zero.
Remark 3.
Note that it is not claimed that all these metrics together with their complex probabilities add up to some kind of total wave-function for the geometry. Whether or not such an interpretation is possible in a more sophisticated model is a complicated question which I leave open. Here, the $\Phi$’s are in fact just treated as probabilities, although with the extra twist that they are complex numbers.
Claim. 
In the case of the vacuum pre-multiverse $F$ discussed in Section 8, the complex probabilities in (19) and the real probabilities in (4) lead to the same stable states (i.e., to the solutions of $R_{ij} = 0$).
In other words, it may be that it is not the real-valued stochastic theory in Section 7 which is the natural foundation of general relativity, and at the same time it may be that it is not the complex probability theory based on (19) which is the true natural foundation. But it could be that both points of view emerge from something deeper, perhaps better understood using methods from the theory of information.
A rigorous interpretation and proof of this Claim is beyond the reach of this paper, and such a proof would in fact also demand a much more precise model for how the complex probabilities and the metrics interact. Nevertheless, the underlying idea is (as most ideas in this paper) simple: Given a macro-state/metric g which satisfies the field equations, the condition for the phase factor in (19) of a metric $h = g + \delta g$ to line up with g, rather than canceling, is in this case (since $R_g \equiv 0$) that
$$\arg\Big( \exp\Big\{ \mu i \int_\Omega R_h^2 \, dV \Big\} \Big) \approx 0. \tag{20}$$
In the real-valued case, the condition for a metric to contribute significantly is that
$$\mu \int_\Omega R_h^2 \, dV \approx 0. \tag{21}$$
These two conditions turn out to be equivalent in essence, but not in a completely trivial way. The argument in (20) will be $\approx 0$, and hence the phase $\Phi$ will be close to real and positive, if (21) is fulfilled. But this could also happen if $i \mu \int_\Omega R_h^2 \, dV$ were close to a non-zero multiple of $2\pi i$. However, this could only happen if $R_h \not\equiv 0$, in which case the integrand in (19) would be non-stationary and hence could not contribute significantly to the complex probability.
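Schematically, and in my own condensed notation, the argument can be summarized as follows:
$$\arg\Big( \exp\Big\{ i \mu \int_\Omega R_h^2 \, dV \Big\} \Big) \approx 0 \iff \mu \int_\Omega R_h^2 \, dV \approx 2\pi k \ \text{ for some } k \in \mathbb{Z},$$
and since $k \neq 0$ requires $R_h \not\equiv 0$, which makes the phase non-stationary under further perturbations (so that such metrics cancel out of the complex sum), only $k = 0$ survives. The alignment condition (20) thus collapses to the real-valued condition (21).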

12. Conclusions

In this paper I have tried to suggest a new perspective on the unification of general relativity and quantum mechanics. Although the Claim in Section 11 suggests that the classical probabilistic methods in Section 7 could be equivalent to a quantum mechanical approach in the case of the vacuum equations, this is still very far from a full unification, in particular in the case where there is mass. For any attempt in that direction, a much deeper understanding of what curvature really stands for in quantum physics appears to be necessary.
A final question: Could the possible connection between general relativity and quantum mechanics suggested by the Claim in Section 11 also be extended to other theories of gravity given by other Lagrangians (including the usual one based on (1))? In my personal opinion, this can very well be so. What may be unique about the theory in Section 7, based on the probability distribution (4), is the coupling between quantum theory and the classical geometrical interpretation as a principle of minimal curvature.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Data Availability Statement

Data sharing is not applicable.

Conflicts of Interest

The author declares that there are no conflicts of interest.

References

  1. Kuhn, T.S. The Copernican Revolution; Harvard University Press: Cambridge, MA, USA, 1957; ISBN 0-674-17103-9.
  2. Misner, C.W.; Thorne, K.S.; Wheeler, J.A. Gravitation; W. H. Freeman and Company: San Francisco, CA, USA, 1973.
  3. van Strien, M. On the origins and foundations of Laplacian determinism. Stud. Hist. Philos. Sci. 2014, 45, 24–31.
  4. Gibney, E. Physicists Disagree Wildly on What Quantum Mechanics Says About Reality, Nature Survey Shows. 2025. Available online: https://www.nature.com/articles/d41586-025-02342-y (accessed on 8 August 2025).
  5. Everett, H. Relative State Formulation of Quantum Mechanics. Rev. Mod. Phys. 1957, 29, 454.
  6. Maupertuis, P. Accord de différentes lois de la Nature qui avaient jusqu’ici paru incompatibles. Hist. Acad. Sci. Paris 1748, 1744, 417–426.
  7. Feynman, R.; Hibbs, A.R. Quantum Mechanics and Path Integrals; McGraw-Hill: New York, NY, USA, 1965; ISBN 978-0-07-020650-2.
  8. Hilbert, D. Die Grundlagen der Physik. (Erste Mitteilung). In Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, Mathematisch-physikalische Klasse; Gesellschaft der Wissenschaften zu Göttingen: Göttingen, Germany, 1915; pp. 395–407.
  9. Tamm, M. Natural Lagrangians. Universe 2021, 7, 74.
  10. Tamm, M. Should the evolution of our universe be treated as an initial value problem? Symmetry 2025, 17, 1258.
  11. Fischer, H. A History of the Central Limit Theorem: From Classical to Modern Probability Theory; Sources and Studies in the History of Mathematics and Physical Sciences; Springer: New York, NY, USA, 2011.
  12. Huang, K. Statistical Mechanics, 2nd ed.; John Wiley & Sons, Inc.: New York, NY, USA, 1987.
  13. Ruelle, D. Thermodynamic Formalism; Cambridge University Press: Cambridge, UK, 1978.
  14. Schoen, R.M. Variational theory for the total scalar curvature functional for Riemannian metrics and related topics. In Topics in Calculus of Variations; Giaquinta, M., Ed.; Lecture Notes in Mathematics, Vol. 1365; Springer: Berlin/Heidelberg, Germany, 1989.
  15. Tamm, M. Newton’s First Law and the Grand Unification. Symmetry 2024, 16, 1694.
