An order-theoretic quantification of contextuality

In this essay, I develop order-theoretic notions of determinism and contextuality on domains and topoi. In the process, I develop a method for quantifying contextuality and show that the order-theoretic sense of contextuality is analogous to the sense embodied in the topos-theoretic statement of the Kochen-Specker theorem. Additionally, I argue that this leads to a relation between the entropy associated with measurements on quantum systems and the second law of thermodynamics. The idea that the second law has its origin in the ordering of quantum states and processes dates to at least 1958, and possibly earlier. The suggestion that the mechanism behind this relation is contextuality is made here for the first time.

Let us now consider a set of such sub-system state-objects Σ together with a partial order ⊑ that includes certain intrinsic notions of completeness and approximation that are defined by this order. Together, they form a domain, (Σ, ⊑). Given two objects ρ, σ ∈ Σ, the statement ρ ⊑ σ is interpreted as saying that ρ contains (or carries) some information about σ, i.e., σ is "more informative" than ρ [8]. We take "information" to be anything that may be represented by a state ρ. In the event that ρ contains complete information about σ, then ρ = σ and ρ is said to be a maximal element (object) of the domain, in which case it is an example of an ideal element. An object that is not ideal is said to be partial.

The order ⊑ is interpreted as an information order in the sense that if a process generates some ordered sequence (ρ_n) of elements that increases with respect to the information order, then ρ_n ⊑ ρ_{n+1} for all n, i.e., ρ_{n+1} is more informative than ρ_n. To be clear, information is defined as anything that can be represented by ρ; no ordering relation is necessary. As such, not all information necessarily has an inherent order. A special type of information order is an approximation, ≪, where the statement ρ ≪ σ is interpreted as saying that ρ approximates σ [12]. We take this to mean that ρ carries essential information about σ. In other words, any "information path" leading to σ must pass through ρ. An element ρ of a poset is compact if ρ ≪ ρ.

If one takes a neo-realist viewpoint, information about reality exists independent of any agent or observer. On the other hand, it often only makes sense to talk about ordering information in the context of an agent or observer who is obtaining that information. The ordering relation ⊑ has to do with potential evolutions of a state and depends directly on the agent, whereas ≪ has to do with the intrinsic information about a state. For example, consider a set of three closed boxes. We are told that there is a ball in one of the three boxes, A, B and C. Our task is to determine the color of the ball. Clearly, in order to determine its color, we must first locate it. Suppose we open box A and find that it does not contain the ball. We only have two boxes left; we have eliminated one possibility. Thus, the states of the other two boxes are clearly more informative than the state of the box we just opened, i.e., ρ_A ⊑ ρ_B, ρ_C. If we open box B and do not find the ball, we immediately know it is in box C. Thus, ρ_A ⊑ ρ_B ≪ ρ_C. We may now observe the ball to determine its color. Suppose, instead, that we opened box B first. In that case, ρ_B ⊑ ρ_A ≪ ρ_C. The neo-realist view assumes that the presence of the ball in box C is independent of any agent: ρ_C contains essential information about the ball's color, regardless of whether or not the box is opened. Thus, the relation ≪ is independent of any agent, whereas ⊑ is not.
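The box search above can be sketched numerically. As a hedged illustration (the encoding is my own, not part of the formalism), suppose an agent's knowledge is a probability vector over the boxes; opening an empty box refines the state, and each refined state sits higher in the information order ⊑:

```python
def open_box(knowledge, box):
    """Update the agent's knowledge after finding `box` empty:
    zero out that box and renormalize over the remaining ones."""
    updated = {b: (0.0 if b == box else p) for b, p in knowledge.items()}
    total = sum(updated.values())
    return {b: p / total for b, p in updated.items()}

# The ball is actually in C; the agent starts maximally ignorant.
rho_bottom = {"A": 1/3, "B": 1/3, "C": 1/3}
rho_A = open_box(rho_bottom, "A")   # opened A, found it empty
rho_AB = open_box(rho_A, "B")       # opened B, empty: ball located in C

assert rho_A == {"A": 0.0, "B": 0.5, "C": 0.5}
assert rho_AB["C"] == 1.0           # a maximal (ideal) element is reached
```

Swapping the order in which A and B are opened produces the second chain described in the text, ending at the same maximal element.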

Both ordering relations can be thought of as a collection of arrows where each arrow points from one state (mapping) ρ_i to another. As such, the domain consisting of the objects ρ_i, along with the set of arrows associated with the information order, forms a category. Each object ρ_i by itself is, of course, an arrow ρ_i : Q_i → Ê_i in a topos with a state-object Q_i and a quantity-value object Ê_i. Each of the objects ρ_i represents some amount of partial information about the larger system, as noted above. It is the amount of partiality that helps to define the order. In order to quantify this, we now more formally define a measurement µ on a domain Σ as a function µ : Σ → Ê+ that assigns to each informative object on the domain, ρ_i ∈ Σ, a number µρ_i ∈ Ê+ that measures the amount of partiality contained in ρ_i.

This number is referred to as the information content of the object ρ_i. Thus, moving "up" a given information order (such as the order in which we opened the boxes in search of the ball) represents decreasing partiality, i.e.,

ρ ⊑ σ ⇒ µρ ≥ µσ. (1)

Further, if ρ is a maximal element of a domain, then µρ = 0 [7]. Such a state is said to be pure. Intuitively, this means that if full knowledge of an object is known, then there is obviously no partiality. It is important to remember that µ may take any form as long as (1) and the condition:

µρ = 0 ⇒ ρ ∈ max(Σ) (2)

are satisfied. This requirement then constrains the possible functional forms that µ may take. There are several conditions that are implicit in (1) and (2). The key is recognizing that we are quantifying an ordering relation of mappings based on how informative each mapping is relative to the others, i.e., we seek to essentially rank these mappings. For example, (1) tells us that σ is more informative than ρ and, thus, essentially ranks ρ and σ according to the information that they provide. Consider a measurement µρ on a state ρ. The measurement will presumably have a number of possible outcomes. Whatever those outcomes may be, the information supplied by the measurement provides a certain amount of insight into the object or system under consideration. Suppose we now add additional outcomes, all of which have zero probability. It seems reasonable to assume that adding these zero-probability outcomes will neither add to nor subtract from the information provided by a measurement, i.e., that the measurement should not supply us with any more or any less information than it would without the additional zero-probability outcomes. This assumption is known as expansibility, and it implies that adding zero-probability outcomes to a given measurement will not change the ordering relation. In the example involving the boxes and the ball, this would be akin to adding a third outcome that has zero likelihood of occurring, say, for example, the possibility that the ball is both in the box and not in the box simultaneously (which, for classical systems, such as balls and boxes, is clearly absurd). The addition of this outcome does not fundamentally alter the ordering relation of the states of the various boxes. (Note that this is especially true in a neo-realist interpretation.)
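Expansibility is easy to check numerically for the Shannon entropy used later in the text. A minimal sketch (with the standard convention 0·log 0 = 0):

```python
from math import log2

def shannon(p):
    """Shannon entropy in bits; 0 * log(0) is taken to be 0."""
    return -sum(x * log2(x) for x in p if x > 0)

p = (0.5, 0.5)                 # ball equally likely in box A or box B
p_expanded = (0.5, 0.5, 0.0)   # add an impossible third outcome

# Expansibility: the zero-probability outcome adds no information.
assert shannon(p) == shannon(p_expanded) == 1.0
```

Because zero-probability terms contribute nothing to the sum, the entropy (and hence any ranking of states built from it) is unchanged.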

It is also natural for us to assume that there is symmetry in the outcomes of any given measurement, i.e., permuting them does not change the amount of information given by the measurement. In other words, it is irrelevant whether or not we find the ball in a given box; the presence and absence of the ball are of equal "worth" to us as far as an individual measurement is concerned. Now, consider two measurements µρ and µσ on a pair of states ρ and σ, respectively. We assume that the information gained from a joint measurement on the two systems is less than or equal to the sum of the information gained from individual measurements on the two systems. This, of course, is the well-known property of subadditivity (cf. [13,14]). When the information gained from the joint measurement is identical to the sum of the information from the individual measurements (i.e., when equality holds), the information is said to be additive [15]. While subadditivity appears intuitive, there strangely do exist systems that may be additive, but not subadditive [16].
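Subadditivity and additivity can be illustrated with two small joint distributions (the example pairs are my own, chosen only to show the two regimes):

```python
from math import log2

def shannon(p):
    """Shannon entropy in bits; zero-probability terms are skipped."""
    return -sum(x * log2(x) for x in p if x > 0)

# Two perfectly correlated binary outcomes (they always agree)...
joint_correlated = {(0, 0): 0.5, (1, 1): 0.5}
# ...versus two independent fair binary outcomes.
joint_independent = {(a, b): 0.25 for a in (0, 1) for b in (0, 1)}

def marginals(joint):
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return list(px.values()), list(py.values())

for joint in (joint_correlated, joint_independent):
    px, py = marginals(joint)
    assert shannon(joint.values()) <= shannon(px) + shannon(py)  # subadditivity

# Additivity (equality) holds exactly in the independent case.
px, py = marginals(joint_independent)
assert shannon(joint_independent.values()) == shannon(px) + shannon(py)
```

The correlated pair gains strictly less information jointly (1 bit) than the sum of its marginals (2 bits), while the independent pair saturates the bound.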

Thus, an appropriate functional form for µ would satisfy both (1) and (2), as well as the conditions of expansibility, symmetry, subadditivity and additivity (as well as a standard normalization condition). Aczél, Forte and Ng have shown that only linear combinations of the Shannon and Hartley entropies satisfy the conditions of expansibility, symmetry, subadditivity, additivity and normalization. Coecke and Martin have shown that Shannon entropy satisfies (1) and (2). Thus, we are justified in choosing the Shannon entropy:
µρ = −∑_{i=1}^{n} ρ_i log ρ_i (3)
as a functional form of µ, where the individual mappings ρ_i must be chosen such that they yield real-number representations that allow (3) to satisfy (1) and (2) [18]. Note also that Aczél et al. employ Forte's interpretation of "experiments (measurements)" and "outcomes" as partitions of a set [17]. This is particularly appropriate for our purposes, given that the domain-, category- and topos-theoretic approach presented here is fundamentally built on a set-theoretic foundation. Note also that Forte shows that Shannon entropy is the only function defined on n-tuples that fully satisfies the conditions of expansibility, symmetry, subadditivity and additivity (as well as of probabilistic normalization) [17]. Thus, though the Shannon entropy is technically a scalar, we are interested in its ability to rank the mappings, which allows us to preserve the structure of the n-tuple in the ordering relation itself.
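Conditions (1) and (2) can be verified directly for the Shannon entropy (3) on the chain of box states from the earlier example (the probability-vector encoding is illustrative, not part of the formalism):

```python
from math import log2

def mu(rho):
    """Shannon entropy, used as the measurement µ of Equation (3)."""
    return -sum(x * log2(x) for x in rho if x > 0)

# States along the information order of the box search (ball in C):
chain = [(1/3, 1/3, 1/3), (0, 1/2, 1/2), (0, 0, 1)]

# Condition (1): moving up the order, partiality (entropy) never increases.
assert all(mu(a) >= mu(b) for a, b in zip(chain, chain[1:]))
# Condition (2): µρ = 0 only at a maximal (pure) element.
assert mu(chain[-1]) == 0.0
assert mu(chain[0]) > 0.0
```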

A decreasing partiality, then, corresponds to decreasing uncertainty about a physical system, such that when the complete state is known, there is no uncertainty about it. As such, regardless of the functional form that µ takes, we use the term "relative" entropy to describe anything that satisfies (1) and (2). Notice that this is entirely consistent with the usual counter-arguments to Maxwell's demon, since it necessarily requires an agent (hence, the use of the term "relative"). One might assume that in adopting the general domain-theoretic definition of a measurement as a broader definition for entropy, I am taking entropy to be a measure of knowledge where, in this sense, "knowledge" refers to information transferred to an agent. Thus, "knowledge" would necessarily be an agent-dependent concept. In fact, it is the exact combination of an agent and a neo-realist view that raises the false specter of Maxwell's demon in the first place: the only reason the entropy decreases to begin with is precisely because of the presence of the agent ("demon"), whose own entropy is not being properly accounted for. To be clear, I am expressly not adopting this view of entropy. Rather, entropy is interpreted here solely as a method of rank-ordering states based on the knowledge about a system that they represent.

Returning to the mathematical aspects of measurements on domains, consider some information order on a domain, such that ρ_1 ⊑ ρ_2 ⊑ ρ_3 ⊑ … ⊑ ρ_n. The sense of decreasing partiality suggests that the information order is, in Martin's words, "going somewhere" [8]. That is:

ρ_1 ⊑ ρ_2 ⊑ ρ_3 ⊑ … ⟹ ⊔_{n∈ℕ} ρ_n ∈ Σ (4)

where the element ⊔_{n∈ℕ} ρ_n is a maximal element of the domain. Thus, any process that leads to ρ = ⊔_{n∈ℕ} ρ_n will yield an entropy of:

µ(⊔_{n∈ℕ} ρ_n) = lim_{n→∞} µρ_n. (5)
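The limit in (5) can be watched numerically. A minimal sketch, assuming a hypothetical increasing sequence of two-outcome states approaching the pure state (1, 0):

```python
from math import log2

def mu(rho):
    """Shannon entropy as the measurement µ of (3)."""
    return -sum(x * log2(x) for x in rho if x > 0)

def rho(n):
    """A hypothetical increasing sequence rho_n with supremum (1, 0)."""
    return (1 - 1/(2 * n), 1/(2 * n))

entropies = [mu(rho(n)) for n in (1, 10, 100, 1000, 10**6)]

# Entropy decreases along the chain and tends to µ(sup rho_n) = 0, as in (5).
assert all(a > b for a, b in zip(entropies, entropies[1:]))
assert entropies[-1] < 1e-4
```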
With this in mind, it makes sense to ask if there is some way in which an agent can maximize the efficiency associated with obtaining the information about a particular domain. For instance, it might be that under one particular choice of information order, two of the chosen sub-states contain redundant information. Ideally, any set of sub-states considered by an agent should be maximally informative with no redundancy, i.e., no two sub-states should contain duplicate information, but all of the sub-states considered together should give complete information about the state. To better understand this point, consider a simple system that may be completely characterized by a single measurement ("query") that yields a "yes" or a "no" to the query. The state of the system is thus a map ρ : Q → {0, 1}, and the purpose of a measurement by an agent is to distinguish between ρ : Q → 0 and ρ : Q → 1, i.e., to determine which of the two possible states the system is actually in. Maximizing the information requires choosing the correct basis within which to measure the system. In fact, Schumacher and Westmoreland define information as the probability of successfully distinguishing between orthogonal measurements [19].

These ideas are nicely summarized in a mathematical form via a directed-complete partial order, or dcpo. Intuitively, a dcpo is a poset in which every directed sub-set has a supremum. In other words, every sub-state of a state should be maximally knowable, by which I mean that the maximum amount of information for a sub-state should be, under ideal conditions, fully transferable to an agent, if one exists. Formally, a dcpo is defined as follows.

Definition 1 (dcpo). Let (P, ⊑) be a poset. Then, a nonempty subset ∆ ⊆ P is said to be directed if for all x, y ∈ ∆ there exists z ∈ ∆ such that x, y ⊑ z. The supremum ⊔∆ of ∆ ⊆ P is the least of its upper bounds, when it exists. A dcpo is a poset in which every directed set has a supremum.
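Directedness and suprema can be checked by brute force on a small finite example. A sketch, assuming a hypothetical four-element poset encoded as a set of ⊑-pairs (any finite poset could be substituted):

```python
from itertools import product

# A small poset on {bot, a, b, top}: bot ⊑ everything, a ⊑ top, b ⊑ top.
elements = {"bot", "a", "b", "top"}
leq = ({("bot", x) for x in elements} | {(x, x) for x in elements}
       | {("a", "top"), ("b", "top")})

def is_directed(subset):
    """Every pair in `subset` has an upper bound inside `subset`."""
    return bool(subset) and all(
        any((x, z) in leq and (y, z) in leq for z in subset)
        for x, y in product(subset, repeat=2))

def supremum(subset):
    """Least upper bound within `elements`, or None if it does not exist."""
    ubs = [z for z in elements if all((x, z) in leq for x in subset)]
    least = [z for z in ubs if all((z, u) in leq for u in ubs)]
    return least[0] if least else None

assert is_directed({"bot", "a", "top"})
assert not is_directed({"a", "b"})       # a, b have no upper bound inside
assert supremum({"a", "b"}) == "top"     # ...yet a supremum exists in the poset
```

Since every directed subset here has a supremum, this finite poset is (trivially) a dcpo.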

Any continuous dcpo is an example of a domain [7,8]. In the example in which a colored ball was in one of three boxes, we considered a sequence of processes that resulted in one of two possible outcomes in each case: either the ball was in the box or it was not. Any time we have a situation in which a process has more than one possible outcome (be it two or ten), we need a formal way to distinguish between those outcomes. Consider, then, n + 1 boxes and assume that within one of these boxes is a ball [20]. Now, suppose that Alice and Bob are each tasked with locating the ball and determining its color. Prior to opening any boxes, the state of the system, representing both Alice's and Bob's knowledge, is given by the completely mixed state:
ρ_⊥ = (1/(n + 1), …, 1/(n + 1)) (6)
where we are representing the fact that Alice and Bob both initially assume that the ball is equally likely to be in any of the boxes. In this case, each element of the state (each component, if you prefer) is a probability. Once the ball is located, the state is then given by the pure state:
ρ_i = (0, …, 1, …, 0) (7)
where i indicates in which of the n + 1 boxes the ball is found. In a neo-realist view, (7) is independent of any agent. If we say that ρ ∈ Σ^{n+1} represents the state as it appears to Alice and σ ∈ Σ^{n+1} represents the state as it appears to Bob, then the statement ρ ⊑ σ indicates that Bob knows more about the position of the ball than Alice. For example, perhaps Bob was able to look in the boxes faster than Alice. For every box in which the ball is not found, the state can be updated, since one possibility has been eliminated. As such, Bob could eliminate possibilities faster than Alice. In this way, the completely mixed state is the least element of the domain of states Σ^{n+1}. The set of all pure states would thus be the set of all maximal elements. Coecke and Martin use a similar example to show that there exists a unique order on classical two-states given by the set Σ^2 and that a partial order ⊑ on the more general Σ^n respects a mixing law under certain restrictions [7]. The most important point here is that classical states have a unique ordering relation. I will return to this later. Now, let (Σ, ⊑) be a dcpo. For elements ρ, σ ∈ Σ, we write ρ ≪ σ if and only if, for every directed subset ∆ ⊆ Σ with σ ⊑ ⊔∆, we have ρ ⊑ τ for some τ ∈ ∆. In order to simplify notation, we introduce the following sets:
↟ρ := {σ ∈ Σ : ρ ≪ σ} and ↡ρ := {σ ∈ Σ : σ ≪ ρ}
↑ρ := {σ ∈ Σ : ρ ⊑ σ} and ↓ρ := {σ ∈ Σ : σ ⊑ ρ} (8)
where the arrows suggest the "direction" of the information order. In fact, these may be viewed as defining a specific information order. Thus, for example, ↑ρ is the set of states σ that are more informative than ρ, whereas ↓ρ is the set of states σ for which ρ is more informative. Thus, for some dcpo (Σ, ⊑), a pair of elements ρ, σ ∈ Σ are said to be orthogonal if:
µ(↑ρ ∩ ↑σ) ∈ {0} (9)
where {0} is the null set and Σ is assumed to have a least element ρ_⊥. In a way, this formalizes the neo-realist viewpoint, since it says that there can be no reality in which σ is more informative than ρ and, simultaneously, ρ is more informative than σ, i.e., there exists only one reality for a set of processes. This offers another way to distinguish between the relations ⊑ and ≪: the former is a statement about knowledge and, hence, as mentioned above, is agent-dependent, whereas the latter is a statement about processes and is thus agent-independent. Crucially, this is related to the fact that, as Coecke and Martin have shown, there exists a unique order on classical states; I discuss this further in Section II. Now consider a specific agent's lack of information. We can quantify this lack of information via the Shannon entropy (Equation (3)), such that conditions (1) and (2) hold. In other words, for some dcpo (Σ, ⊑) and ρ, σ ∈ Σ, we set:
µρ = −∑_{i=1}^{n} ρ_i log ρ_i with ρ ⊑ σ ⇒ µρ ≥ µσ (10)
where, for some value n = N, we have ρ = σ and µρ = µσ. Thus, as our knowledge of Σ increases, ρ approaches the maximal (ideal) element σ, i.e., ρ → σ. Simultaneously, the entropy decreases, such that µρ → µσ, and µ is said to be monotone. In classical physics, which assumes a neo-realist viewpoint, we typically can infer knowledge about a system given some minimum amount of information. For example, suppose that the information about the state of a system is fully encoded in the number π. Suppose, via some process, that we have been given information about the state in the form of the decimal 3.14 ± 0.01. This is clearly not enough information for us to say with certainty that the state of the system is given by π. For example, the number 3.146… falls within our range of uncertainty. This number is the square root of 9.9 and is algebraic, whereas π is transcendental. While perfect certainty in this example is impossible, since both π and √9.9 are irrational, we can at least establish a limit, such that, at some point, we can be fairly certain that the state is π. In other words, in classical systems, there may be a threshold ρ_min, such that if ρ_min ≤ ρ, then σ may be predicted with near certainty, i.e., once ρ passes some threshold, we may say with confidence that ρ ≈ σ. A system whose future states may be predicted with certainty based on complete knowledge of its prior states may be said to be physically deterministic. Note that this definition of determinism inherently assumes that measurements do not disturb the system [21].

Condition 1 (physical determinism). Let I ≡ σ be the maximal (ideal) element on a domain of physical states ρ_n ∈ (Σ, ⊑), where for some value n = N, ρ_N = σ. For a sequence of measurements µρ_1, µρ_2, …, µρ_N, if ρ_min ≤ ρ_N and N is finite, then if ρ = I and µρ = µI = 0, the system is said to be physically deterministic.

A hypothetically omniscient being who happens to be in possession of ρ_min ≤ ρ_N (i.e., who possesses enough information about a system to fully predict its future states) is known as Laplace's demon. A system for which N is not finite, but that otherwise satisfies all other aspects of the condition, may be said to be approximately deterministic. In the example in which the state of a system is given by π, there is clearly a point at which the system becomes approximately deterministic (though, to some extent, this point is arbitrary). This definition of determinism inherently assumes a neo-realist viewpoint. This is not without its problems, since it assumes that, if the universe is a valid system, then as we obtain more and more information about it, the entropy should decrease. Of course, it is well known that the exact opposite is actually happening. The mechanism behind this, I will argue, is contextuality.


II. CONTEXTUALITY

Consider three states, ρ, σ, τ ∈ Σ, and suppose that ρ ≪ σ. Furthermore, suppose that σ ⊑ τ. This means that ρ carries essential information about σ and σ carries some (not necessarily essential) information about τ. In order for us to conclude from this that ρ ≪ τ, we would need to know that the statement σ ⊑ τ is being made in the same context as ρ ≪ σ. Coecke and Martin define context in the form of the following proposition [7].

Proposition 2 (context). For all states ρ, σ, τ ∈ Σ, if ρ ≪ σ ⊑ τ and ↡τ ∉ {0}, then ρ ≪ τ.

I refer the interested reader to [7] for the proof of this proposition. Intuitively, what this says is that if a unique information order can be established for τ that includes ρ and σ, then ρ and τ have the same "context", which means the former carries essential information about the latter. Considering the example of the ball in the boxes, since the ball is assumed to be in one and only one box, the states of all of the other boxes, whether or not they are opened, contain essential information about the ball: it is not in them. That is the essence of context. As it happens, the results of classical measurements are elements of continuous domains, and approximation on continuous domains is context independent [7]. Note that this is a statement about domains in the order-theoretic sense; measurements here simply refer to a rank ordering of states. What this means is that for classical measurements, it is automatically true that if ρ ≪ σ and σ ⊑ τ, then ρ ≪ τ (for ρ, σ, τ ∈ Σ). This is at the root of the fact that there is a unique order on classical states, as I briefly mentioned in Section I.
Again, this formalizes the fact that classical states naturally admit a neo-realist interpretation. Quantum states are not necessarily representable by measurements on continuous domains and, so, are not necessarily context independent. Recall that on the most fundamental level, we are actually working with topoi and ρ is a mapping from a state object to a quantity-value object. In topos theory, the state object Q has no points (microstates) and, as briefly mentioned above, Döring and Isham show that this is equivalent to the Kochen-Specker (KS) theorem [9]. The KS theorem essentially points out a conflict between two neo-realist assumptions: (i) that measurable quantities have definite values at all times; and (ii) that the values of those measurables are intrinsic and independent of the measurement process used to obtain them [22]. In the language we developed in Section I, Assumption (i) says that a definite state of a sub-system ρ : Q → Ê exists at all times, while Assumption (ii) says that the value corresponding to Ê is unique. For quantum systems, we are forced to abandon one or both of these assumptions. The KS theorem thus establishes the notion of context as fundamental to quantum measurements: the state of the sub-system and/or the real-valued quantity that is associated with that state is dependent on the details of the measurement on that state. The notion of context set forth in Proposition 2 formalizes this idea in an order-theoretic way: for any set of states ρ, σ, τ ∈ Σ, if ρ ≪ σ ⊑ τ, we can only conclude that ρ ≪ τ if ↡τ ∉ {0}. Recall that the relation ⊑ pertains to knowledge and is thus agent-dependent, whereas the relation ≪ pertains to processes and is entirely independent of any agent. In other words, the relation ≪ can only apply to a pair of quantum states if those states exist in the same context. Or, to put it another way, there is no unique order on quantum states. Let us consider two examples. First, let us return to the entirely classical example of the ball in one of three boxes. We established that if the ball
was in box C and the boxes were opened in the order A, then B, then C, the information order would be ρ_A ⊑ ρ_B ≪ ρ_C. Conversely, if we swapped the order in which we opened boxes A and B, then the information order would be ρ_B ⊑ ρ_A ≪ ρ_C. Since this is a classical system and the only information of interest is whether or not the ball is in a particular box, it should be clear that these two cases are essentially equivalent, i.e., ρ_A = ρ_B. As such:
ρ_A ⊑ ρ_B ≪ ρ_C ⇔ ρ_B ⊑ ρ_A ≪ ρ_C (11)
and we may substitute ≪ for ⊑ in both cases. This is the essence of a classical system: it is completely independent of context and, thus, independent of any agent. Now, consider a sequence of spin-1/2 measurements on a certain quantum system, as shown in Figure 1. Due to the nature of quantum mechanical spin, we generally assume that it is an intrinsic property, such that it has a definite value along a given axis if measured along that axis. In other words, a neo-realist interpretation would assume that, if the spin is measured along, say, axis A (corresponding to basis a) and was found to be aligned with that axis, then regardless of any intermediate measurements along other axes, any subsequent measurement along A must necessarily show the spin to be aligned with that axis. Quantum mechanics, however, tells us that the probabilities associated with the two possible outcomes of a measurement along axis C (basis c) in the figure depend solely on the relative alignment of axes B and C and the state as it enters the device measuring along axis C. For example, if, as in the figure, the state exiting the middle device measuring axis B in basis b is |b−⟩, then the probabilities for the outcomes from the third device are Pr(c+) = sin²(θ_BC/2) and Pr(c−) = cos²(θ_BC/2), where θ_BC is the angle between the B and C axes [23]. If, for instance, θ_BC = π/2, then Pr(c+) = Pr(c−) = 0.5, meaning that the state as measured along axis C could equally well be aligned or anti-aligned with that axis. This is independent of the outcome of any previous measurement. That means that if A and C represent the same axis, it is possible for the state to be measured to be |a+⟩ initially, but then later to be found to be |a−⟩. In the classical example with the ball in one of three boxes, this would be the equivalent of opening box A and finding the ball, then opening box B (finding nothing) and, finally, opening box C and finding the ball again.
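The probability formulas above are easy to evaluate. A minimal sketch (the function name is mine; the formulas Pr(c+) = sin²(θ/2), Pr(c−) = cos²(θ/2) are those quoted in the text for an incoming |b−⟩ state):

```python
from math import sin, cos, pi

def probs_after(theta):
    """Outcome probabilities for a spin-1/2 measurement at angle `theta`
    relative to the axis of the previous outcome |b->:
    Pr(c+) = sin^2(theta/2), Pr(c-) = cos^2(theta/2)."""
    return sin(theta / 2) ** 2, cos(theta / 2) ** 2

# theta_BC = pi/2 gives the 50/50 case discussed in the text.
p_plus, p_minus = probs_after(pi / 2)
assert abs(p_plus - 0.5) < 1e-12 and abs(p_minus - 0.5) < 1e-12

# Probabilities always sum to 1, whatever the earlier outcome along A was:
# the intermediate B measurement erases any guarantee from A.
assert abs(sum(probs_after(pi / 3)) - 1.0) < 1e-12
```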

From these examples, it should be clear that no unique ordering relation exists for quantum states. Recall that the statement ρ ≪ σ is interpreted as saying that ρ contains essential information about σ. Therefore, in the classical example, if the ball is in box C, it clearly is not in any of the other boxes. Thus, the other boxes contain essential information about the location of the ball: it is not there. In the quantum example, even though the prior state does affect the probabilities, it does not guarantee anything. It merely establishes whether an information order exists or not. For example, consider just the first two spin measurement devices in Figure 1 and, as in the figure, assume that the measurement along A finds the state aligned with that axis. The probabilities associated with a measurement along axis B are Pr(b+) = cos²(θ_AB/2) and Pr(b−) = sin²(θ_AB/2), respectively. Let us consider three cases [24]:

(i): θ_AB = π/2; (ii): 0 < θ_AB < π/2; (iii): θ_AB = 0.
In Case (i), the probabilities for the outcome of a measurement along axis B will be Pr(b+) = Pr(b−) = 1/2. This is equivalent to a completely random outcome, meaning that we have no information whatsoever about ρ_B prior to making the measurement along B. Order-theoretically, prior to the measurement, ρ_B is a completely mixed state, as in (6), and is said to be a least element on the domain. What this means is that no information order can be established for ρ_A and ρ_B; neither ⊑ nor ≪ apply, since knowledge of the outcome of the measurement along A does nothing to improve our chances of predicting the outcome of a measurement along B. Now consider Case (ii). In this case, the probabilities are not 1/2, and so, knowledge of the outcome of the measurement along A does improve our chances of correctly predicting the outcome of a measurement along B, but it still does not guarantee a specific result. In this way, we may write ρ_A ⊑ ρ_B, since this partial knowledge does allow us to establish some kind of order on the states. Likewise, for Case (iii), if θ_AB = 0, the outcome of the measurement along B is guaranteed to be exactly the same as it was along A. As such, we may write ρ_A ≪ ρ_B.
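The three cases can be packaged as a small classifier. This is only an illustrative sketch (the function and its string labels are mine, and the floating-point tolerance is an implementation detail, not part of the formalism):

```python
from math import cos, pi

def order_relation(theta_ab):
    """Which information order (if any) a prior measurement along A
    establishes for a subsequent measurement along B (Cases (i)-(iii))."""
    p_same = cos(theta_ab / 2) ** 2     # Pr of repeating the A outcome
    if p_same == 1.0:
        return "way-below"              # (iii): rho_A << rho_B, certainty
    if abs(p_same - 0.5) < 1e-12:
        return "no order"               # (i): completely random, no order
    return "information order"          # (ii): rho_A below rho_B, partial info

assert order_relation(0.0) == "way-below"
assert order_relation(pi / 2) == "no order"
assert order_relation(pi / 4) == "information order"
```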

In order to generalize this, it is necessary to introduce the order-theoretic notion of a basis. Consider some subset of states m on a domain (Σ, ⊑), such that m ⊆ Σ. The subset m is said to be a basis when m ∩ ↡ρ is directed with supremum ρ for each ρ ∈ Σ. Recall that a dcpo is a poset in which every directed set has a supremum. Notice that this definition inherently includes the relation ≪ (via ↡ρ). As such, it codifies the notion that a basis is any subset for which the neo-realist assumption holds within that subset. In order to see why the subset must be directed with a supremum, consider just two boxes, one of which contains a ball of some indeterminate color. The state representing the box that contains the ball is the supremum, since it contains more information about the color of the ball than the box that does not contain the ball. It is necessarily directed because, regardless of the order in which the boxes are opened, ρ_no-ball ≪ ρ_ball. Now, consider two quantum states, ρ and σ. If they are measured in the same basis, i.e., if ρ, σ ∈ m ⊆ Σ, then we can establish relationships such as ρ ≪ σ. If they are measured in different bases, the establishment of any kind of information order depends on how "close" they are. We might be tempted to use the term "orthogonal" here to describe two bases for which no information order can be established, based on our example with the spin measurements. This can be a bit confusing, since each of these bases is individually said to be orthogonal, since measurements on different elements within a basis satisfy (9). In fact, it is the orthogonality of various information orders that defines a basis in the first place. Consider the classical example of the three boxes, A, B and C, with a ball in one of them. Without knowledge of the ball's location (i.e., before measurement), there are three possible information orders:
ρ_A, ρ_B ⊑ ρ_C;  ρ_A, ρ_C ⊑ ρ_B;  ρ_B, ρ_C ⊑ ρ_A
given by ↑ρ_C, ↑ρ_B and ↑ρ_A, respectively, depending on where the ball is located. Since only one may be correct in a neo-realist interpretation, a measurement on the intersection of any two must satisfy (9). Thus, we can think of measurements on these boxes (which entails opening them) as representing a method for identifying an orthogonal basis for measurements, where those measurements are aimed at determining the location and color of the ball. In other words, the set {↑ρ_A, ↑ρ_B, ↑ρ_C} =: m ⊆ Σ defines the basis. Classically, of course, this is somewhat irrelevant, but it serves to illustrate the definition. Clearly, then, if a measurement on states at the intersection of any two information orders is anything other than an element of the null set, the two information orders must necessarily be defined on different bases that share some information, i.e., for ρ ∈ m ⊆ Σ and σ ∈ n ⊆ Σ:
µ(↑ρ ∩ ↑σ) ∉ {0} ⇒ m ≠ n. (12)
This adequately generalizes Case (ii) above. Distinguishing between Case (i) and Case (iii) is a bit more problematic, since the result µ(↑ρ ∩ ↑σ) ∈ {0} does not automatically (in a mathematical sense) tell us whether ρ and σ belong to the same basis or not (recall that orthogonality is defined here by an information order).
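The orthogonality condition (9) and the shared-information case (12) can be sketched as set computations. The encoding below is entirely hypothetical — up-sets ↑ρ become Python sets of labeled states, and µ on the intersection is stood in for by a count that is zero exactly when the intersection is empty:

```python
# Hypothetical encoding of two up-sets for the three-box example:
up_A = {"(B,C open; ball in A)", "(B open; ball in A or C)"}
up_B = {"(A,C open; ball in B)", "(A open; ball in B or C)"}

def mu_of_intersection(up_rho, up_sigma):
    """Stand-in for µ(up(rho) ∩ up(sigma)): 0 on an empty intersection,
    positive (here, the number of shared states) otherwise."""
    return len(up_rho & up_sigma)

# Disjoint up-sets: the information orders are orthogonal, as in (9),
# consistent with the states belonging to a single classical basis.
assert mu_of_intersection(up_A, up_B) == 0

# Overlapping up-sets signal distinct bases sharing information, as in (12).
up_shared = up_A | {"(A open; ball in B or C)"}
assert mu_of_intersection(up_shared, up_B) > 0
```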

Consider the set M(↑ρ ∩ ↑σ) of all measurements on states at the intersection of any two information orders, where µ ∈ M. Recall that µ is defined as a function on a domain that assigns to each informative object a number that measures the amount of "partiality", such that continued measurements lead to decreasing partiality (see (1)). The greatest amount of partiality, then, corresponds to the greatest lack of knowledge (which is why µ is often most conveniently expressed as entropy). The greatest lack of knowledge regarding any two information orders is associated with complete randomness. In the example given in Figure 1, this corresponds to Case (i), where we may write the state in one basis in terms of a different basis' least element. For example, suppose that a = z and b = x. The state of the system exiting the second spin measurement device in Figure 1 can be written in terms of the first:
|x−⟩ = (1/√2)|z+⟩ − (1/√2)|z−⟩. (13)
Two points should be clear from this. First, this demonstrates that the set M is partially ordered. Second, it allows us to clearly define a supremum for M as representing the case in which knowledge of a unique basis corresponds to a least element on one of the bases (i.e., as in the example given by (13)). Thus, we may say that

µ(↑ρ ∩ ↑σ) = sup M(↑ρ ∩ ↑σ) ⇒ m ⊥ n, (14)
where I use the symbol "⊥" intentionally to tie this to the concept of a least element and a completely mixed state (e.g., Equation (6)). This, then, adequately generalizes Case (i) above. Thus, Case (iii) is simply the definition of orthogonality given by (9), i.e.,
µ(↑ρ ∩ ↑σ) ∈ {0} ⇒ m = n. (15)
Intuitively, this says that within any given basis, there is a single, unique information order, and thus, neo-realism holds (within that basis; see Section III). Any difference in basis necessarily eliminates the uniqueness of the ordering relation, and a neo-realist interpretation is no longer tenable under these conditions. It is the dependency of quantum systems on a measurement basis (vis-à-vis projective measurements) that is at the heart of this behavior.

Classical systems do not suffer from this complication and, thus, are context independent, as discussed before. This idea may now be related to Proposition 2 as a means of quantifying (quantum) contextuality, since any measurement µ(↑ρ ∩ ↑σ) is a number that necessarily lies on the interval 0 ≤ µ(↑ρ ∩ ↑σ) ≤ sup M(↑ρ ∩ ↑σ). A specific measurement context can then be defined as follows, and the value of µ(↑ρ ∩ ↑σ) is akin to a "distance" measure telling us how "far" one measurement context is from another.

Definition 3 (measurement context). For any three quantum states ρ ∈ j ⊆ Σ, σ ∈ k ⊆ Σ and τ ∈ l ⊆ Σ, where j, k and l are bases, if µ(↑ρ ∩ ↑σ) ∈ {0} and µ(↑σ ∩ ↑τ) ∈ {0}, then it follows that µ(↑ρ ∩ ↑τ) ∈ {0}, and ρ, σ and τ are said to have an identical context.

Note that this definition necessarily ensures that a unique information order exists across all three states. Any values of µ other than elements of the null set imply that, at best, the states only possess a partial context. Measurements on states that represent a supremum on the set of all possible measurements imply that those states share no context. Thus, we have established an order-theoretic method for quantifying contextuality.
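The trichotomy given by (12), (14) and (15) can be summarized in a small sketch. The helper below is purely illustrative: the function name and the normalization against sup M are my own assumptions, not part of the formalism. It classifies a value µ(↑ρ ∩ ↑σ) relative to the supremum of M:

```python
def context_relation(mu_value, sup_M):
    """Classify the shared measurement context of two states from the value of
    mu on the intersection of their up-sets (illustrative helper; the name and
    the normalization are assumptions)."""
    if mu_value == 0:
        # Eq. (15): same basis, identical context, unique information order
        return "identical"
    if mu_value == sup_M:
        # Eq. (14): supremum reached, orthogonal bases, no shared context
        return "none"
    # Eq. (12): different bases that share some information
    return "partial"

print(context_relation(0, 1), context_relation(1, 1), context_relation(0.3, 1))
# identical none partial
```

Intermediate values thus play the role of the "distance" between measurement contexts described above.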


III. CONTEXT, DETERMINISM AND ENTROPY

What does it mean to say that an information order may be defined within a basis? In Figure 1, a single particle always exits one and only one of the output channels of any given spin measurement device. Subsequent measurements of the same particle in that basis (i.e., without ever changing the basis) yield the same result. In other words, as long as a basis does not change, it behaves much as the classical example of a single ball that is inside one of multiple boxes. As such, it possesses a unique information order (just as the ball and box example does), and thus, neo-realism holds, even though only a single measurement is usually required to produce the outcome. For example, in Figure 1, the information order ρb+ ⊑ ρb− remains unique as long as the measurement basis is not changed, even though only a single measurement is required to establish the order. This is simply due to the nature of quantum measurement devices. The same would be true, for example, in the classical case involving the boxes and the ball if all of the boxes could be opened at once; the restriction preventing this is practical and not theoretical. This concept, then, is easily generalized to an n-dimensional basis mn: though only a single measurement may be required in order to determine the state, this measurement nevertheless establishes a unique information order. As such, following the prescription given by (10), µ is monotone, so long as the basis does not change. It is thus trivially true that Condition 1 holds for any sequence of measurements on quantum states in which the basis does not change and the basis itself is finite-dimensional. Equation (10) tells us that this corresponds to a decrease in entropy.
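A minimal simulation can make the fixed-basis case concrete. The sketch below, an illustrative NumPy construction of my own rather than the text's formalism, prepares |x−⟩, performs a projective z-measurement, and then repeats the z-measurement: after the first collapse, every repetition returns the same outcome, so the established information order never changes.

```python
import numpy as np

rng = np.random.default_rng(0)
z_plus, z_minus = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x_minus = (z_plus - z_minus) / np.sqrt(2)

def measure_z(state):
    """Projective z-measurement: sample an outcome and return it together
    with the collapsed post-measurement state."""
    p_up = abs(z_plus @ state) ** 2
    if rng.random() < p_up:
        return "+", z_plus
    return "-", z_minus

outcome1, state = measure_z(x_minus)        # first outcome is random
repeats = [measure_z(state)[0] for _ in range(5)]
print(all(o == outcome1 for o in repeats))  # True
```

Whatever the first draw gives, the collapsed state makes every subsequent same-basis measurement deterministic, which is exactly the monotonicity of µ within a fixed basis.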

Condition 1 does not hold, however, if the basis changes. This is because, as the above examples clearly demonstrate, each set of basis states has its own maximal element. In other words, inherent in the order-theoretic definition of physical determinism given by Condition 1 is the notion of the complete set of states for some physical system. A change of basis in a quantum system introduces a new maximal element associated with the new basis. Thus, the complete set of states for a quantum system does not possess a single, unique maximal element, but, rather, possesses many. Therefore, a full characterization of the complete set of states for a quantum system is not physically deterministic. In addition, for a sequence of measurements on this complete set of states, the entropy will not decrease. I formalize this with the following remark.

Remark 1. For some complete set of quantum states Σ measured on a complete set of finite-dimensional bases, no single, unique maximal element exists.

This result is very closely related to the topos-theoretic statement of the Kochen-Specker (KS) theorem, as given in [9]. The details in [9] are quite involved, but some general remarks are in order.

The topos-theoretic statement of the KS theorem as given in [9] employs the concept of a spectral presheaf. While the details are beyond the scope of this essay, suffice it to say that a spectral presheaf is a particular type of category-theoretic mapping called a contravariant functor. For our purposes, it essentially represents the possible outcomes of measurements on a system, and we will label it Σ̲ (the underlined Σ of [9]) to distinguish it from Σ. Very roughly, a global element on Σ̲ is a function that assigns a unique number to each state on Σ̲. In other words, it guarantees that there should be one and only one maximal element. As stated in [9], the KS theorem applies only to systems described by a finite-dimensional Hilbert space, and it is equivalent to the statement that the spectral presheaf has no global element.

In the notion of contextuality that I have developed here, there is no restriction on the dimensionality of the Hilbert space, per se. There is, however, a restriction to finite-dimensional bases. This prompts the following observation: in both statements (the one here and the one in [9]), complications with infinite-dimensional basis states arise from the fact that they can have continuous parts. As Coecke and Martin show, continuous bases are context independent [7]. Thus, Remark 1 is analogous to the topos-theoretic statement of the KS theorem.

One of the key points related to the lack of a unique maximal element on sets of quantum states, as pointed out earlier, is the fact that if measurements are made on a complete set of bases, the overall entropy of the system will not decrease. Each change of basis essentially "resets" the system and, thus, the entropy. Consider a complete set of N measurement bases on a system of states Σ. If the system behaves classically, then (10) holds and a sequence of measurements will result in decreasing partiality, i.e., increasing knowledge about the system. This corresponds to the existence of a single, unique maximal element. It must necessarily be true, then, that the existence of multiple maximal elements would prevent the entropy from decreasing. We might imagine, though, a system for which there are N maximal elements corresponding to sub-systems, such that the entropy could decrease for any given sub-system individually. This, however, would require neo-realism to hold for a given sub-system regardless of whether the measurements on that sub-system are interrupted by measurements on a different sub-system.
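The "reset" can be made concrete for the Figure 1 sequence (z-measurement, then x, then z). The sketch below is again a NumPy illustration under my own conventions: whatever the first z-outcome was, the intervening x-measurement restores the final z-outcome distribution to maximal entropy.

```python
import numpy as np

def shannon(probs):
    """Shannon entropy, in bits, of a probability distribution."""
    p = np.array([q for q in probs if q > 0])
    return float(-(p * np.log2(p)).sum())

z_plus, z_minus = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x_minus = (z_plus - z_minus) / np.sqrt(2)

# Suppose the first z-measurement gave "+", so the state is |z+> and the
# z-outcome entropy is 0 bits. An x-measurement collapses it to |x+> or |x->;
# on either branch the final z-outcome probabilities are again 1/2 each:
p_up = abs(z_plus @ x_minus) ** 2
print(round(p_up, 3))                        # 0.5
print(round(shannon([p_up, 1 - p_up]), 6))   # 1.0
```

The basis change thus undoes the knowledge gained by the first measurement: the entropy returns to its maximal value rather than continuing to decrease.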

Quantum systems, of course, do not behave in this manner. In the example given in Figure 1, there is no guarantee that the outcome of the third device will match that of the first device, even if they measure along the same axis. As such, it is possible that re-measuring in a given basis may result in a different maximal element for that basis. In other words, in quantum systems, it is possible for the maximal element of a given basis to change. This means that even if the complete set of bases is finite, the number of maximal elements may be infinite. This may at first appear paradoxical, but the paradox is resolved if we consider that the state as measured by the first device is not the same state as that measured by the third device, regardless of the outcome. This is because the object being measured fundamentally possesses a world line in spacetime. In the example given in Figure 1, the object is a localized qubit. However, as pointed out in [25,26], in a strict sense, the "location" of a qubit on a world line is fundamentally a part of its state, i.e., a localized qubit is really best understood as a sequence of quantum states associated with points on a world line. In other words, quantum states are constantly changing, since they are associated with objects that possess world lines. If the world line is infinite, then, regardless of the number of possible measurement bases, the number of maximal elements must be infinite, since one exists for each possible measurement. This, in fact, is the very essence of contextuality: neo-realism fails spectacularly. In this case, the overall entropy of the complete system will tend to increase. This warrants an additional remark.

Remark 2. For a sequence of measurements on a complete set of finite-dimensional bases for some complete set of quantum states ρn ∈ Σ, the entropy must be greater than or equal to zero, i.e., µρn ≥ 0 (recall that complete knowledge of a system corresponds to µρ = 0).

Remark 2 bears a striking resemblance to the second law of thermodynamics, sometimes called the "law of increase of entropy." Indeed, the quantum mechanical origin of this law is not a new suggestion. In the 1958 English translation of their volume on statistical physics, Landau and Lifshitz write that "[i]t is more natural to think of the origin of the law of increase of entropy in its general formulation . . . as being bound up with the quantum mechanical effects". They continue:

[I]f two [quantum mechanical] processes of interaction take place consecutively with a given quantum object (let us call them A and B), then the assertion that the probability of some result of process B is determined by the results of process A can be true only if process A takes place before process B. [27] (p. 31)

This statement is equivalent to the order-theoretic statement that "ρA ⊑ ρB for ρA, ρB ∈ Σ is not a unique information order" (where Σ is understood to represent a quantum mechanical system). As I have argued, this lack of a unique ordering relation on quantum states is an order-theoretic manifestation of the phenomenon of quantum contextuality. Thus, it would appear as if quantum contextuality is at the root of the second law of thermodynamics. However, note that I have also shown that the lack of a unique ordering relation arises from the presence of an agent. This would seem to suggest, then, that the second law itself is somehow agent-dependent [28].


IV. SUMMARY AND CONCLUDING REMARKS

In this essay, I have developed order-theoretic notions of determinism and contextuality on domains and topoi, in the process developing an order-theoretic quantification of contextuality that is compatible with the sense of the term embodied in the Kochen-Specker theorem. The order-theoretic view has allowed me to show that, while a unique ordering relation exists for classical states, no such unique relation exists for quantum states. I have argued that this lack of a unique ordering relation necessarily appears with the introduction of an agent. As such, quantum states do not allow for a neo-realist interpretation. This fact is a result of the contextual nature of quantum states. Thus, contextuality (at least in the sense given by the Kochen-Specker theorem) is deeply connected to the concept of a measuring agent. Contextuality also assures us that no sequence of measurements on quantum states can lead to the complete characterization of a quantum system in the same sense that such a sequence of measurements on classical states could completely characterize a classical system. In fact, the entropy associated with such a sequence of measurements on a quantum system will necessarily never decrease. Incidentally, this is perfectly consistent with the notion of conditional quantum entropy, which can be negative (cf. [31,32]). Entropy, as the term is applied here, simply refers to the entropy associated with measurements on the system that establish an information order. Nevertheless, this non-decrease in entropy for measurements on quantum systems bears a striking resemblance to the second law of thermodynamics when applied to sequential measurements. Indeed, it essentially formalizes a suggestion made by Landau and Lifshitz in 1958. This does seem to suggest that the second law is an agent-dependent phenomenon. Additionally, it suggests some relation between contextuality and thermodynamics.

This essay suggests at least two pieces of additional work. First, Remark 1 should be put on more solid ground by stating it as a proposition, lemma or theorem supported by a formal proof. Second, a deeper relation between Remark 2 and the second law of thermodynamics should be found by formalizing the latter in order-theoretic terms on domains of generalized states. This would necessarily involve additional work related to coarse-graining and could potentially involve extensions to generalized probability theories. It is interesting to note in passing that coarse-graining is inherent in the derivation of the Kochen-Specker theorem in terms of spectral presheaves given by Döring and Isham. Given the close analogy between Remark 1 and their statement of the KS theorem, it stands to reason that additional work on coarse-graining in relation to Remark 2 should yield a deeper connection and should solidify any relation between contextuality and thermodynamics.

FIG. 1. Each box represents a measurement of the spin of a spin-1/2 particle along some axis, with the top output indicating that the state is aligned (+) with the measurement axis and the bottom output indicating that the state is anti-aligned (−) with the measurement axis. Red and blue lights on the top simply indicate to the experimenter which of the two results is obtained (e.g., red might indicate aligned and blue might indicate anti-aligned).

ACKNOWLEDGMENTS

I wish to thank Bob Coecke for introducing me to categories and domains and for sending me a copy of New Structures for Physics, by which I was inspired. I also wish to thank two anonymous referees for very helpful comments that aided in making my arguments more succinct. In particular, I would like to thank one referee for introducing me to the work of Aczél, Forte and Ng, which has yielded additional insights. Finally, I acknowledge financial support from FQXi.
[1] Martin, K. A Foundation for Computation. Ph.D. Thesis, Tulane University, New Orleans, LA, USA, 2000.
[2] Knuth, K.H. Deriving laws from ordering relations. In Bayesian Inference and Maximum Entropy Methods in Science and Engineering; Erickson, G., Zhai, Y., Eds.; American Institute of Physics: Melville, NY, USA, 2004.
[3] Coecke, B. Introducing categories to the practicing physicist. In What Is Category Theory?; Sica, G., Ed.; Advanced Studies in Mathematics and Logic, Vol. 30; Polimetrica: Milan, Italy, 2006.
[4] Abramsky, S.; Coecke, B. Categorical quantum mechanics. In Handbook of Quantum Logic and Quantum Structures, Vol. II; Elsevier: Amsterdam, The Netherlands, 2008.
[5] Isham, C. Topos Methods in the Foundations of Physics. In Deep Beauty; Halvorson, H., Ed.; Cambridge University Press: Cambridge, UK, 2010.
[6] Spivak, D.I. Category Theory for Scientists. arXiv:1302.6946, 2013.
[7] Coecke, B.; Martin, K. A Partial Order on Classical and Quantum States. In New Structures for Physics; Lecture Notes in Physics, Vol. 813; Springer: Berlin/Heidelberg, Germany, 2011.
[8] Martin, K. Domain Theory and Measurement. In New Structures for Physics; Lecture Notes in Physics, Vol. 813; Springer: Berlin/Heidelberg, Germany, 2011.
[9] Döring, A.; Isham, C. "What is a Thing?": Topos Theory in the Foundations of Physics. In New Structures for Physics; Lecture Notes in Physics, Vol. 813; Springer: Berlin/Heidelberg, Germany, 2011.
[10] Awodey, S. Category Theory, 2nd ed.; Oxford University Press: Oxford, UK, 2010.
[11] Eddington, A.S. The Philosophy of Physical Science; Cambridge University Press: Cambridge, UK, 1939.
[12] The notation ≪ is standard, but in order to avoid confusion with other inequalities, we adopt a distinct symbol, so as to clearly distinguish it from the usual meaning of ≪ in inequalities.
[13] Lieb, E.H. Some convexity and subadditivity properties of entropy. Bull. Am. Math. Soc. 1975, 81.
[14] Nielsen, M.A.; Chuang, I.L. Quantum Computation and Quantum Information; Cambridge University Press: Cambridge, UK, 2000.
[15] Aczél, J.; Forte, B.; Ng, C. Why the Shannon and Hartley Entropies are 'Natural'. Adv. Appl. Probab. 1974, 6.
[16] The difference can be better understood by noting that some systems obey the strong subadditivity condition, while others do not [14].
[17] Forte, B. Why Shannon's entropy. In Symposia Mathematica, Vol. XI; Istituto Nazionale di Alta Matematica; Academic Press: New York, NY, USA, 1975.
[18] It is important to remember that, while the functional form of µ generally depends on an agent, the basic conditions set by (1) and (2) always hold under a neo-realist interpretation.
[19] Schumacher, B.; Westmoreland, M. Quantum Processes, Systems, and Information; Cambridge University Press: Cambridge, UK, 2010.
[20] There are, of course, other forms of determinism that may allow for invasive measurements. While I do not consider those here, a domain-theoretic treatment of such an alternate definition of determinism would be an intriguing line of enquiry.
[21] Kochen, S.B.; Specker, E. The problem of hidden variables in quantum mechanics. J. Math. Mech. 1967, 17.
[22] Sakurai, J.J. Modern Quantum Mechanics, revised ed.; Addison Wesley Longman: Reading, MA, USA, 1994.
[23] There are many more cases than these three, corresponding to various multiples of π.
[24] I am simply using these as examples to illustrate how the ordering relations apply in the quantum case.
[25] Palmer, M.C.; Takahashi, M.; Westman, H.F. Localized qubits in curved spacetimes. Ann. Phys. 2012.
[26] Palmer, M.C. Relativistic quantum information theory and quantum reference frames. Ph.D. Thesis, University of Sydney, Australia, 2013.
[27] Landau, L.; Lifshitz, E. Statistical Physics; Course of Theoretical Physics, Vol. 5, 2nd ed.; Addison Wesley: Reading, MA, USA, 1958.
[28] This idea is very similar to one that was relayed to me by Chris Adami, who has suggested that the second law actually refers to relative entropy. He has hinted at the concept in a few of his publications (see, for example, [29,30]) but, to my knowledge, has never stated it explicitly in print.
[29] Cerf, N.J.; Adami, C. Information theory of quantum entanglement and measurement. Physica D 1998, 120.
[30] Adami, C. Quantum Mechanics of Consecutive Measurements. arXiv:0911.1142, 2010.
[31] Cerf, N.J.; Adami, C. Negative Entropy and Information in Quantum Mechanics. Phys. Rev. Lett. 1997, 79. doi:10.1103/PhysRevLett.79.5194.
[32] Cerf, N.J.; Adami, C. Quantum extension of conditional probability. Phys. Rev. A 1999, 60. doi:10.1103/PhysRevA.60.893.