Postulating the Unicity of the Macroscopic Physical World

We argue that a clear view of quantum mechanics is obtained by considering that the unicity of the macroscopic world is a fundamental postulate of physics, rather than an issue that must be mathematically justified or demonstrated. This postulate allows for a framework in which quantum mechanics can be constructed in a completely mathematically consistent way. This is made possible by using general operator algebras to extend the mathematical description of the physical world toward macroscopic systems. Such an approach goes beyond the usual type-I operator algebras used in standard textbook quantum mechanics. This avoids a major pitfall, which is the temptation to make the usual type-I formalism 'universal'. This may also provide a meta-framework for both classical and quantum physics, shedding new light on ancient conceptual antagonisms and clarifying the status of quantum objects. Beyond exploring remote corners of quantum physics, we expect these ideas to be helpful to better understand and develop quantum technologies.


I. INTRODUCTION
A. Our world is unique...

Obvious empirical evidence tells us that we live and die in a single world, which has a history before and after our individual existence. Alternative histories of the universe are possible but counterfactual, and remain a subject for fiction. Along the same lines, the past can be forgotten but not changed, and the future can be predicted and depends to some extent on our actions, or absence of action. It is actually a mix of quasi-certainties (the sun will rise tomorrow) and inherent uncertainties (it is likely to rain tomorrow). And once something has happened, good or bad, there is no way back.
This obvious empirical evidence has set the framework in which mankind has evolved, and it has basically not changed, even though our abilities to record the past and to predict the future have changed tremendously over the centuries and millennia. In the modern world there is no longer any need to kill Iphigenia, but propaganda is more active than ever. Actually, most of our current techniques for recording, sending, processing, and also manipulating information use microelectronics, which is itself based on quantum physics.
Here we want to argue that considering the unicity of the macroscopic world as a basic postulate for physics is not only possible, but appears as a firm basis on which quantum mechanics (QM) can be built. So our approach does not contradict QM as it is currently used, but it embeds the usual quantum formalism in a framework where it remains valid, but where some misleading extrapolations do not show up [1]. From the mathematical side this extended framework is not really new, and may be traced back to John von Neumann at the end of the 1930's [2,3]. This framework is also used in mathematical physics when addressing some aspects of statistical physics and quantum field theory [4]. However it is conceptually and technically demanding, and has been mostly ignored by physicists in standard QM, especially in elementary textbooks. But as we will see below, it may be extremely useful to establish the overall consistency of QM.

B. ... but is it classical and/or quantum ?
Ever since the theoretical framework of QM for describing the microscopic world was initiated by Heisenberg and Schrödinger and completed by Dirac and von Neumann, the tremendous reliability of its predictions, combined with the exotic phenomena it unveils, has raised many questions in the scientific community and beyond. The exotic aspect has led to interpretations that many consider far-fetched, and mainstream quantum physicists tend to stick to a 'don't bother and calculate!' approach. We will call 'textbook quantum mechanics' (TBQM) the formalism on which the mainstream approach relies [5]. Despite its efficiency, this attitude may not be the one that will allow much future development, especially in the frame of e.g. the emerging quantum technologies, as well as quantum gravity.
The main problem that motivates non-mainstream interpretations lies in the non-deterministic aspect of quantum projective measurements. This aspect, when related to quantum superpositions, leads to such wordings as 'the system is in two states at the same time' in popular texts close to mainstream thought. More divergent schools consider that QM could in fact be deterministic, at the cost of hidden mechanisms (Bohmian mechanics, pilot wave) or at the expense of a proliferation of replicas of the Universe every time a projective measurement happens (Everett's multiverse interpretation). It is true that the contrast between the smooth, predictable unitary evolution of quantum systems without measurement on the one hand, and the abrupt, discontinuous, dissipative, random effect of a measurement on the other hand, creates shock waves that leave no one indifferent.

arXiv:2310.06099v1 [quant-ph] 9 Oct 2023
A second problem lies in the significant difference between the laws that apply to microscopic vs. macroscopic systems, all the more because macroscopic systems are made of a very large number of microscopic systems. This leads to the idea that macroscopic laws should emerge from the laws of quantum physics as a sort of averaging out of quantum effects. This is the essence of a form of 'reductionism', and it is true that Ehrenfest's theorem or the properties of coherent states seem to point at a path in this direction. Nevertheless another view, coming mostly from Bohr and Heisenberg and adopted in TBQM, holds that there is an irreducible cut between the microscopic and the macroscopic worlds. TBQM can be considered 'dualist' from this point of view.
On the reductionist side, Zeh [6] and Zurek [7] have elaborated the concept of decoherence, which explains how, in the process of measurement, the microscopic degrees of freedom of the quantum system of interest become entangled with a huge number of external degrees of freedom (in the measurement device and in its environment). The global state cannot be realistically tracked, thus leading to a leak of information outside the system. This leak can however be handled computationally in the density operator of the system and environment, as a partial trace over the environment degrees of freedom. This leads to a situation where this operator becomes diagonal on pairs of associated system and device states. Though this does not explain how a precise measurement result is selected among the possible ones, at least it shows how system and device may couple to produce a result.
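To make this partial-trace mechanism concrete, here is a minimal numerical sketch (ours, not from the references; the qubit state and the environment dimension are illustrative assumptions). The off-diagonal coherence of the reduced density operator is set by the overlap of the two environment states, and vanishes as they become orthogonal:

```python
import numpy as np

def partial_trace_env(rho, d_sys, d_env):
    """Trace out the environment from a (d_sys*d_env)-dimensional density matrix."""
    rho = rho.reshape(d_sys, d_env, d_sys, d_env)
    return np.trace(rho, axis1=1, axis2=3)

# System qubit (|0> + |1>)/sqrt(2), each branch entangled with a distinct
# environment state; their overlap <e1|e0> shrinks as the environment grows.
d_env = 8
e0 = np.zeros(d_env); e0[0] = 1.0
e1 = np.full(d_env, 1 / np.sqrt(d_env))          # nearly orthogonal to e0
psi = (np.kron([1, 0], e0) + np.kron([0, 1], e1)) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

rho_sys = partial_trace_env(rho, 2, d_env)
# Diagonal terms stay 1/2; the off-diagonal coherence equals <e1|e0>/2,
# i.e. decoherence becomes complete as the environment states decouple.
print(rho_sys)
```

Increasing `d_env` shows the coherence dropping toward zero while the populations are untouched, which is the content of the einselection argument above.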
A third problem of a different nature has to be mentioned at this point. Our civilisation is on the verge of developing quantum technologies, which rely on using all the specific aspects of quantum physics, and not only some of them as in electronics and photonics, where part of the weirdness is hidden by the large number of electrons or photons involved. These technologies will be developed by engineers, who will be all the more efficient if they can develop an intuitive understanding of QM. Developing intuition requires clearing away the various ad hoc and often wobbly explanations that are given for this or that basic property, often by awkwardly trying to connect them with a usual, classical view of the world. This means that clarifying 'what exists' (ontology) and what happens whatever the observer (objectivity) is not only a philosophical question; it is a condition for the progress of these technologies.
The purpose of the present article is first to summarise the main features of CSM (section 2), which starts from a few empirical postulates, including the unicity of the physical macroscopic world and the contextuality of measurements, in order to build up the formalism of TBQM. In order to present an overview of CSM, we will not give here the details of the proofs, but refer to published papers. Some major steps in the CSM framework are to show that quantum theory must be probabilistic, and also that the description of macroscopic systems requires an operator-algebraic framework broader than the one traditionally used in TBQM. This theoretical extension actually provides an understanding of the classical-quantum transition, and allows building a comprehensive overall framework to embed both realms. In section 3, we discuss several implications of this approach on a more general level, i.e., on the validity of considering mathematically infinite systems, and on hazardous predictions related to the so-called 'universal unitarity'. We will also argue that the reductionist vs. dualist views of physics may not be so antagonistic after all.

II. OVERVIEW OF THE RECONSTRUCTION OF NON-FULLY-UNITARY QM
A. Introduction and motivation.
The heart of the conceptual difficulties of quantum mechanics lies in the measurement process, which is framed in TBQM by the von Neumann projection postulate. What is the problem with this postulate? Certainly not that it is wrong, since it is at the core of the functioning of TBQM, and it has been found correct whenever the relevant observations could be made. The main usual criticism is that it introduces a projection, that is, a non-unitary evolution during a measurement, in apparent contradiction with the fact that the system and measurement apparatus should globally evolve according to the Schrödinger equation, which predicts a unitary evolution.
There are many ways to deal with this unsatisfactory dichotomy, most of them based on decoherence theory [6,7], which basically tells that measurement devices are large systems coupled to an even larger environment, where interactions ultimately create an extremely large entangled system. Therefore it is not possible to keep track of all degrees of freedom, and unitarity is broken by taking a partial trace over the degrees of freedom that get out of control. In addition to this loss of information, the structure of the interaction with the environment selects the measurement basis by environment-induced selection, or 'einselection' [7].
After some reasonable approximations this leads to a classical probability distribution over the possible results in the einselected basis, but one more problem shows up: only one result is observed, not a probability distribution of results, so what does 'select the winner' in an individual measurement event? This question is trickier, because it assumes that there should be such a winner-selection process, which does not exist in basic QM. So a simple way to avoid the problem is to say that the result is fundamentally a probability distribution, and there is nothing like a winner-selection process. Then the unique result can simply be observed at the macroscopic level, and the probability distribution is actualized as usual.

B. The proposed reconstruction.
Clearly this observation-plus-actualisation process is fully consistent with a postulate on the unicity of the physical macroscopic world, and thus of the macroscopic measurement result. The CSM framework [8][9][10][11][12][13][14][15][16][17][18][19][20][21] is built upon this idea, with the following postulates:

P0 Unicity of the macroscopic world - There is a unique macroscopic physical world, in which a given measurement gives a single result.
D1 Contexts, systems and modalities - Given a microscopic physical system, a modality is defined as the values of a complete set of physical quantities that can be measured on this system. This complete set of physical quantities is called a context, and a modality is attributed to a system within a context. Contexts are concretely defined by the settings of macroscopic measurement devices.
P1 Predictability and extravalence - Once a context is defined and the system is prepared in this context, modalities can ideally be predicted with certainty and measured repeatedly on this system. When changing the context, modalities change, but some modalities in different contexts may be connected with certainty; this is called extracontextuality. It defines an equivalence class between modalities, called extravalence [8,11].
P2 Contextual quantisation - For a given system and context, there exist at most D distinguishable modalities, which are mutually exclusive: if one modality is realised in an experiment, yielding a result in the macroscopic world, the other ones are not realised. The value of D, called the dimension, is a characteristic property of the quantum system, and is the same in all relevant contexts.

P3 Changing context - Given P1 and P2, the different contexts relative to a given quantum system are related between themselves by continuous transformations (e.g. rotating a polarisation beamsplitter) which are associative, have a neutral element (no change), and an inverse. Therefore the set of context transformations has the structure of a continuous group, generally non-commutative.
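As an aside, the group structure claimed in P3 is easy to check numerically for a spin-1/2, where context changes are rotations generated by the Pauli matrices. This is an illustrative sketch of ours (the angles are arbitrary), not part of the CSM papers:

```python
import numpy as np
from scipy.linalg import expm

# Context changes for a spin-1/2: rotations generated by the Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def rot(axis, theta):
    """Unitary rotation by angle theta about the given Pauli axis."""
    return expm(-1j * theta * axis / 2)

a, b = 0.4, 0.9
# Group properties: neutral element, inverse, closure under composition
assert np.allclose(rot(sx, 0.0), np.eye(2))
assert np.allclose(rot(sx, a) @ rot(sx, -a), np.eye(2))
assert np.allclose(rot(sx, a) @ rot(sx, b), rot(sx, a + b))
# ... and the group is non-commutative for rotations about different axes
assert not np.allclose(rot(sx, a) @ rot(sz, b), rot(sz, b) @ rot(sx, a))
print("group structure checks passed")
```

The last line is the point of the postulate: composing context changes about different axes depends on their order, so the group is continuous but non-commutative.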
For the sake of clarity, let us note that, within the usual QM formalism (which has not been introduced yet at this point), a complete set of commuting observables (CSCO) corresponds to a context, and a state vector (or the projector onto that vector) corresponds to an extravalence class of modalities, but not to a particular modality, since the specification of the context is missing in the state vector.
The crucial postulate P2 (contextual quantisation) can be understood as the consequence of dealing with the smallest bits of reality, which for this reason have only a finite quantity of information to give, whatever the way they are interrogated [23]. This is also related to the existence of truly indiscernible microscopic objects: if an infinite quantity of information could be carried by individual quantum objects, they would all end up differing in at least one of these pieces of information; but both theoretical and empirical evidence tells us that QM does not work that way.
Then the leading idea of CSM is to start from these physical postulates, which involve the quantisation of the properties of microscopic systems within macroscopic contexts, and to show first that a probabilistic description is required to avoid contradiction [10]. The basic idea is that the results from different contexts cannot be gathered together, as this would lead to more than D mutually exclusive modalities, contradicting P2. Therefore the link between modalities in different contexts must be probabilistic: given a modality in a first context, only the probability to get another modality in a different context can be predicted [12]. Additionally, it appears that this probabilistic aspect and its underlying indeterminism are key to maintaining relativistic causality when spatially extended contexts are considered. As a matter of fact, thanks to them only randomness (i.e. non-information, entropy) can be transferred over space-like intervals. This is related to the discussion on predictive incompleteness presented in [15], see e.g. Fig. 1 in that reference.
It can also be shown that usual classical probabilities are not suitable to warrant that (i) there is a fixed number of mutually exclusive modalities in any context, and (ii) the certainty of extravalent modalities can be transferred between contexts; in TBQM this is most directly shown by the Kochen-Specker theorem [17,24]. A more suitable framework (worth trying!) is then to associate mutually orthogonal projectors to the events corresponding to mutually exclusive modalities [14,22]:
Each exclusive modality m_i, with i = 1, ..., D, of a system is represented by a projector Π_i in a Hilbert space of dimension D, all the Π_i's being mutually orthogonal.
Then the above postulates can be used to justify the hypotheses of powerful mathematical theorems, namely Uhlhorn's theorem to connect different contexts with unitary transformations, and Gleason's theorem to derive Born's rule [8]. The projectors are thus the basic mathematical tools, related to probabilities, and from them it is easy to construct standard observables: one thus recovers the standard definition of observables in a complete set of commuting observables (CSCO), in relation with the spectral theorem. In this framework, traces over products of projectors and observables emerge naturally as the way to compute the experimental expectation value of a given observable.
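As a toy illustration of these trace formulas (a numerical sketch of ours; the dimension D = 3 and the eigenvalues are arbitrary assumptions), one can check that the projectors of a context sum to the identity, that the Born probabilities p_i = Tr(ρ Π_i) sum to one, and that Tr(ρ M) equals the weighted average Σ_i m_i p_i:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 3

# A context: an orthonormal basis defines D mutually exclusive modalities,
# each represented by a rank-1 projector Pi_i (orthogonal, summing to identity)
basis = np.linalg.qr(rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D)))[0]
projectors = [np.outer(basis[:, i], basis[:, i].conj()) for i in range(D)]

# A pure state as a density operator rho = |psi><psi|
psi = rng.normal(size=D) + 1j * rng.normal(size=D)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

# Born's rule: p_i = Tr(rho Pi_i); observable M = sum_i m_i Pi_i
p = np.array([np.trace(rho @ P).real for P in projectors])
m = np.array([1.0, 2.0, 3.0])                    # eigenvalues m_i (arbitrary)
M = sum(mi * P for mi, P in zip(m, projectors))
expectation = np.trace(rho @ M).real

print(p, expectation)
```

Here `np.linalg.qr` is just a convenient way to generate a random unitary, i.e. a random context reached from the canonical one by a context transformation.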
Again, the intuitive idea behind the postulates is that making more measurements in QM (by changing the context) cannot provide more details about the system, because this would increase the number of mutually exclusive modalities, contradicting P2. One might conclude that changing context totally randomises all results and that nothing can be predicted, but this is not true either: some modalities may be related with certainty between different contexts, which is why extravalence is an essential feature of the construction. Actually, extravalent modalities tie the other, non-extravalent modalities to a predictable probability distribution, through Gleason's theorem [8,14].
The final step, moving out of usual TBQM, is to apply this formalism to a countable infinity of systems, with infinite tensor products (ITP) of elementary Hilbert spaces, to model the macroscopic limit. Doing so, one moves from the separable Hilbert spaces and type-I operators considered in TBQM towards the non-separable Hilbert spaces and type-II and type-III operators described by Murray and von Neumann in the 1930's and 1940's [3]. In this limit, unitary equivalence and unitarity overall are lost [4], and the predicted behaviour looks very much like the classical one [18-20, 25, 26]. Therefore the overall mathematical description fits with the initially postulated separation between microscopic systems and macroscopic contexts (Heisenberg cut), closing the loop of the reconstruction (Fig. 1).

C. Discussion.
We note that in the above approach there is no need to call for partial traces, or loss of information, since decoherence is built in initially by the postulates, and finally recovered from the (mathematically) infinite character of the context; this full loop is thus self-consistent. It is also quite possible to make type-I calculations, for instance to calculate decoherence times in a given experiment; but it should be made explicit that they are approximations, able to get very close to the actual non-unitary jump, but unable to manage it. On the other hand, the overall framework sets a clear separation between the microscopic (system) level and the macroscopic (context) level, and makes sure that there is nothing like super-contexts, or some variety of Wigner's friend, that would be able to turn an unbounded context back into a system [21]. Similarly, reasonings based upon a universally extended unitary evolution do not fit in our framework.

D. The crucial role of unitary transformations.
Given the above statements, it is important to give more details on what unitary transformations are, and are not, according to the CSM approach. In standard QM, unitary transformations appear with a variety of different roles. A standard one is time evolution, to which we will come back below. In relation with the previous sections, one may look at a related issue, namely the role of unitary transformations in quantum measurements. This turns out to be quite important in the CSM framework, since a modality must be associated with a certain and repeatable result in a given context. However, this is not the most frequent situation in practical QM: actually, in most cases a unitary transformation is inserted in the quantum measurement itself.
For instance, let us ask how to make a measurement that gives a certain and repeatable modality in the following situations: (i) a coherent state |α⟩ of a harmonic oscillator (or quantized electromagnetic field mode), (ii) a Bell state for two spins, and (iii) an arbitrary state of a quantum register. Looking first at a coherent state |α⟩, it is clear that neither photon counting nor coherent detection will do the job: the first gives a Poisson distribution of photo-counts, and the second gives an amplitude value, with some probability distribution depending on how the measurement is implemented. But how to get the required certainty can be guessed easily [9]: let us deterministically translate |α⟩ by (−α), get the vacuum |0⟩, count zero photons with certainty, and translate back by (+α) to the initial state. The modality criterion is thus respected, but it is clear that the irreversible part of the measurement (the photon counting) is inserted between two reversible unitary transformations, here translations.
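The displacement trick can be checked numerically in a truncated Fock space. This is a sketch under our own assumptions (truncation at N = 40 levels, α = 1.5, and the usual displacement operator D(β) = exp(βa† − β*a)), not a prescription from the references:

```python
import numpy as np
from scipy.linalg import expm

N = 40                                           # Fock-space truncation (assumption)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)       # annihilation: a|n> = sqrt(n)|n-1>

def displace(beta):
    """Displacement operator D(beta) = exp(beta a^dag - beta* a)."""
    return expm(beta * a.conj().T - np.conj(beta) * a)

alpha = 1.5
vac = np.zeros(N); vac[0] = 1.0
coh = displace(alpha) @ vac                      # coherent state |alpha>

# Direct photon counting on |alpha> is Poissonian: no certainty there
p_n = np.abs(coh) ** 2
print(p_n[0])                                    # ~ exp(-|alpha|^2), far from 1

# Translate by -alpha first: the state becomes the vacuum, counting zero
# photons is now certain, and translating back by +alpha restores |alpha>
back = displace(-alpha) @ coh
print(abs(back[0]) ** 2)                         # ~ 1: a certain, repeatable result
```

The second print illustrates the modality criterion: the irreversible count (zero photons) sits between two reversible translations, exactly as described above.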
A similar situation appears when carrying out a measurement in the Bell-state basis, with the four entangled states {|++⟩ ± |−−⟩, |+−⟩ ± |−+⟩}. One then has to use a CNOT gate, then a Hadamard gate, measure the spins along z in the factorized basis {|++⟩, |−−⟩, |+−⟩, |−+⟩}, and go back to the Bell basis by using the reverse unitary transform. This can obviously be generalized to an arbitrary quantum state of a register with many qubits: if such a state is prepared from the unitary transform Û, then apply Û†, check that the register is back to its initial state (all zeros for instance), and get back to the arbitrary quantum state by applying Û again.
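A numerical sketch of this Bell-basis measurement (ours; qubits are written in the z basis and the first qubit is taken as the CNOT control, which is an assumption about the circuit layout) shows that CNOT followed by a Hadamard on the first qubit maps each Bell state to a single computational-basis state, where a certain, repeatable z measurement exists:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                  # control = first qubit
I2 = np.eye(2)

# The four Bell states, as amplitude vectors over {|00>, |01>, |10>, |11>}
bell = np.array([[1, 0, 0, 1],                   # |00> + |11>
                 [1, 0, 0, -1],                  # |00> - |11>
                 [0, 1, 1, 0],                   # |01> + |10>
                 [0, 1, -1, 0]]) / np.sqrt(2)    # |01> - |10>

# CNOT, then Hadamard on the first qubit: each Bell state lands on exactly
# one computational-basis state, so the z measurement becomes certain.
U = np.kron(H, I2) @ CNOT
for b in bell:
    print(np.round(np.abs(U @ b) ** 2, 10))
```

Applying the inverse transform U† after the z measurement brings the register back to the Bell basis, completing the unitary-measurement-unitary sandwich described in the text.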
Obviously these examples are highly idealized, since they assume perfect unitary transforms, and perfect quantum non-demolition (QND) measurements when the irreversible check is performed. They are however perfectly legitimate from a quantum point of view, and actually match how a perfect quantum computer should be working. With current gate fidelities, the successive application of Û and Û† cannot efficiently bring the system back to its initial state unless a very small number of qubits is used; doing it with a very large register is extremely challenging, though not forbidden in principle.
In the logic of CSM, these examples make clear that unitary transformations describe the deterministic evolution or manipulation of isolated quantum systems within classical contexts, outside the measurement periods [18,19]. However, they do not apply to the universe as a whole. The contexts themselves are classically described, and do not correspond to mathematical entities that can be the object of unitary transformations; quite the opposite, in an algebraic framework they correspond to separate sectors, which are not connected by any operator constructed from the systems' level [20]. This illustrates again the overall consistency of CSM, from the previous physical postulates to the mathematical formalism, and back.

III. HIGHER LEVEL IMPLICATIONS
The scheme summarized above builds the TBQM formalism from a few postulates and then closes the loop by showing how the ITP limit recovers the key assumptions. This construction calls for examination from a higher-level perspective, on at least three aspects: (i) the acceptability of the infinite-system limit; (ii) the key role of unitarity and where it cannot apply; and (iii) a new light on the debate between those who consider that classical physics should emerge from quantum physics (reductionism) and those who think that there is a fundamental distinction (dualism); actually, both positions might be equivalent. We discuss these three aspects in this section.

A. Is infinity acceptable at all?
Using infinity in a physical theory legitimately raises relevant questions. These questions relate intuitively to whether our universe is infinite or not, the answer to which we do not know. Moreover, in the specific case we are considering, something even more puzzling occurs. In the infinite-subsystem-number limit N → ∞, the ITP of N elementary Hilbert spaces, H = ⊗_{α=1}^N H_α, appears to suddenly become non-separable, i.e. qualitatively different, which could lead to thinking that anything valid at N < ∞ does not hold anymore in the limit. Yet, there are two points one can make to justify taking the limit seriously. One is mathematical, the other one is more epistemological.
On the mathematical side, looking at the details of von Neumann's breakdown theorem [19,20], things are much more subtle than just a function that would be discontinuous at the limit. As a matter of fact, two key properties of the non-separable ITP Hilbert space are (1) the breakdown into non-unitarily-equivalent orthogonal sectors, corresponding to an infinite number of changes in the elementary subsystem states, and (2) the fact that sectors are not connected by the ring B# of operators, built as the extension to the full ITP of the operators acting on elementary subsystem Hilbert spaces, together with their products, sums, and topological completions. Quite importantly, it can be shown that these two properties build up gradually. More precisely, [2] shows that if |Ψ⟩ := ⊗_{α=1}^N |ψ_α⟩ and |Φ⟩ := ⊗_{α=1}^N |ϕ_α⟩ in H are not in the same sector when N → ∞, then for any operator Â in B# and any ε > 0, one can find a finite set J ⊂ {1, ..., N} of M distinct indices α, so as to build the truncated states |Ψ_M⟩ := ⊗_{α∈J} |ψ_α⟩ and |Φ_M⟩ := ⊗_{α∈J} |ϕ_α⟩, and the restriction Â_M of Â to ⊗_{α∈J} H_α, such that |⟨Φ|Â|Ψ⟩ − ⟨Φ_M|Â_M|Ψ_M⟩| < ε. This gradual onset of the properties means that the non-separable, broken-down limit is reached in a controlled manner, at least in the weak topology, which is the relevant one for von Neumann (W*-)algebras. This controlled approach to the limit is very much reminiscent of the controlled approach to the central limit theorem at the thermodynamic limit, on which much of equilibrium statistical mechanics relies. Another example is the ubiquitous derivative of a function, which considers infinitesimal elements even though an ultraviolet cutoff at the Planck scale might make them no more valid than the thermodynamic limit. On top of this, this gradual onset can be understood in a 'for all practical purposes' way, in the sense that for a large enough system, the inter-sector coherences in the density operator are so weak that it would take experimental repetitions over more than the age of the Universe to observe a quantum effect in such a system.
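To get a feel for this gradual sector separation, consider a toy sketch (ours, with an arbitrary per-subsystem rotation angle): two product states differing by a small rotation on every subsystem have a global overlap cos(θ)^N, so any inter-sector coherence dies exponentially with N:

```python
import numpy as np

theta = 0.1                        # small per-subsystem rotation (assumption)
single_overlap = np.cos(theta)     # overlap <psi_alpha|phi_alpha> on each factor

# Global overlap <Psi|Phi> = prod_alpha <psi_alpha|phi_alpha> = cos(theta)^N
for N in (10, 100, 1000, 10000):
    print(N, single_overlap ** N)
# The overlap decays exponentially: for large N the two product states are,
# for all practical purposes, in orthogonal sectors, as they become exactly
# in the infinite-tensor-product limit.
```

Even a tiny per-subsystem difference thus produces, for macroscopic N, coherences far too small ever to be observed, which is the 'for all practical purposes' reading given above.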
From the epistemological point of view, this limit can be validated too. As a matter of fact, however generic a conceptual representation of reality may be, it remains a model of reality [27]. Generally, physicists assume that:
• there is a mapping between concepts in the representation (which can be expressed in mathematical language) and the target elements of reality;
• this mapping allows conducting surrogative reasoning [28] on the concepts, to yield (falsifiable) claims about the elements of reality they are meant to describe.
Though this might be blurred in the daily exercise of physics, representations and reality are elements of two different worlds, and thus the conceptual elements of a model are not elements of reality. Moreover, models are by definition approximate, valid until proven wrong by an experiment. So models do not need to have all the properties of reality to be relevant, especially in the most remote corners of their application domain.
Overall we consider that these two arguments in two distinct domains validate the consequences which can be derived from taking the N → ∞ limit in QM.

B. Unitarity relevance and multiverse interpretation
We come back here to a standard role of unitary transformations in QM, namely time evolution. Assuming a Hamiltonian Ĥ(t) that describes the energy of an otherwise isolated system, it is well known that the system's evolution can be described by a unitary operator Û(t), solution of the equation iℏ dÛ(t)/dt = Ĥ(t) Û(t). More generally, when a transformation (rotation, translation...) is applied at the classical level, one can define a corresponding unitary transformation to be applied to the states or observables of the system. From this point of view, time evolution is just a translation in time, and the Hamiltonian is the infinitesimal generator of such translations. This important subject can be developed in great detail, and it shows the importance of Uhlhorn's theorem for building representations of symmetry groups [29]. This role of unitarity in time evolution, contrasted with the abrupt evolution during a measurement, creates a disturbing situation given the tremendous success of QM. All the more so when considerations on measurements are extended to the whole Universe, where they would lead to the 'many worlds' conclusion that there is an infinity of parallel, different universes, in which each possible outcome of any measurement is realised. In our approach, spelled out above, these extrapolations are unwarranted, and result from a misunderstanding and misuse of the quantum formalism.
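For a time-independent Ĥ, the solution of this equation is Û(t) = exp(−iĤt/ℏ), whose unitarity is straightforward to check numerically (a sketch of ours, with an arbitrary random Hamiltonian and natural units ℏ = 1):

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0                                   # natural units (assumption)
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Hham = (A + A.conj().T) / 2                  # a random hermitian Hamiltonian

t = 0.7
U = expm(-1j * Hham * t / hbar)              # solves i*hbar dU/dt = H U, U(0) = 1

# Unitarity: U^dag U = 1, so norms (hence total probability) are preserved
print(np.allclose(U.conj().T @ U, np.eye(4)))
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
print(np.linalg.norm(U @ psi))               # remains 1 at all times
```

This smooth, norm-preserving evolution is precisely what contrasts with the abrupt, non-unitary effect of a measurement discussed in the text.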
Nevertheless, in defense of the idea of multiple parallel universes, it may be said that science has already made many counter-intuitive predictions, establishing e.g. that the Earth is round and moving quite fast, despite the 'obvious empirical evidence' that it is flat and motionless. But actually this view raises two issues of different natures.
• For it to be a scientific statement, it would need to yield a falsifiable experimental prediction, analogous to Bell's inequalities for local hidden variables. Such a prediction is not available yet; actually, this idea only shows up as a consequence of extrapolating the type-I quantum formalism, by carelessly applying it to macroscopic systems and then to the whole universe. This is in contrast with the round and moving aspects of the Earth, which led immediately to many practical predictions, e.g. sailing around it, that have been largely vindicated.
• The above considerations on ITP show (if the model holds) that there is no reason to expect any unitarity whatsoever at a macroscopic scale, and thus that the very motivation for parallel universes collapses. But all this is not a real surprise, as the previous section explained that TBQM can be derived from a set of postulates that include the unicity of the Universe.

C. Reductionism vs dualism
Despite all classical objects being built out of quantum objects, the radical difference between classical and quantum behaviours raised, very early in the history of quantum physics, the question of their compatibility. Two antagonistic positions have emerged. Bohr and his followers have claimed that there are two fundamentally different levels of reality, separated by a 'Heisenberg cut', and that physicists have to live with this dual description of reality. Let us call this position 'dualist'. The other position aims at looking for a mechanism by which classical behaviours would emerge from quantum ones. One could see Ehrenfest's theorem or intense coherent states [30] as first hints in this direction, which has since been much developed in the frame of decoherence theory [6,7]. Classical physics would reduce to a part of quantum physics, in this 'reductionist' perspective.
The closing of the CSM loop with infinite tensor products sheds new light on this antagonism. In the language of CSM, the dualist approach translates into considering that there is always a (classical) context around a quantum system, with a Heisenberg cut separating them. This assumption is the prerequisite of the contextual quantisation postulate [II.B]. These postulates allow deriving usual TBQM, so duality and contextuality lead to QM. But now, forgetting that this usual QM formalism can result from dualism, and just taking it for granted, one sees that key properties of contextuality (the Kochen-Specker theorem) and the difference between quantum and classical behaviours (through von Neumann's breakdown theorem on ITP) result from usual QM. So QM implies duality and contextuality. In other words, dualism implies reductionism and reductionism implies dualism. In logical terms, this means that both positions can be viewed as equivalent, not antagonistic.

IV. CONCLUSION: QM FOR ENGINEERS, AND BEYOND ?
There are now many engineers working on quantum technologies, and for engineering it is clear that a reliable physical ontology is extremely useful, to tell which objects and which properties one is working with. In such a framework, speaking about inaccessible multiple worlds or dead-and-alive cats is not very enlightening; invoking only abstract equations is not much better.
So coming back within our unique world, for better or worse, may be quite useful in practice. Also, rather than saying that a quantum superposition is 'being in two states at the same time', it is better to say that one can get a result with certainty for some measurement in some context, and a random result for some other measurements in another context determined at measurement time. Maybe the strangest feature of quantum randomness is that it can be turned into a certainty, for well-chosen measurements. But when seen from the engineering side, this is a quite manageable idea, likely to orient thinking in a practically usable direction. For simple accounts see also [13,16].
From a more foundational perspective, it is clear that the views presented here have a strong Bohrian flavour, though they are quite distinct from Bohr's ideas; for instance, we never speak about complementarity, which is too vague in our opinion. Also, it can be said that our approach is close to the so-called Copenhagen point of view, certainly closer to it than to any other 'interpretation' of QM. However, the Copenhagen framework is also loosely defined, and it does not include topics like non-type-I operator algebras, which are essential for our construction. As a matter of fact, considering the use of non-type-I algebras allows proposing a global model where the classical and the quantum realms have a clearly articulated relationship, clarifying the way each one relates to the other.
So overall we call for an extension of the QM formalism towards operator algebras, despite the known mathematical difficulties of this topic.But maybe this field went too quickly to mathematics, so it should be reconsidered by physicists, and brought back into their realm.

D2 Observables as operators - From the orthogonal rays given by the Π_i's, hermitian operators on a D-dimensional Hilbert space can be constructed by considering each Π_i as the projector onto the eigenspace associated with m_i, the corresponding eigenvalue. If m_i corresponds to a single observable quantity, this yields an operator M̂ = Σ_{i=1}^D m_i Π_i. If m_i is a tuple of several observable quantities, a tuple of operators can be constructed in a similar way.