Article

State Entropy and Differentiation Phenomenon

Masanari Asano 1, Irina Basieva 2, Emmanuel M. Pothos 2 and Andrei Khrennikov 3,4,*
1 Liberal Arts Division, National Institute of Technology, Tokuyama College, Gakuendai, Shunan, Yamaguchi 745-8585, Japan
2 Department of Psychology, City University London, London EC1V 0HB, UK
3 International Center for Mathematical Modeling in Physics and Cognitive Sciences, Linnaeus University, 351 95 Växjö-Kalmar, Sweden
4 National Research University of Information Technologies, Mechanics and Optics, St. Petersburg 197101, Russia
* Author to whom correspondence should be addressed.
Entropy 2018, 20(6), 394; https://doi.org/10.3390/e20060394
Submission received: 5 April 2018 / Revised: 17 May 2018 / Accepted: 21 May 2018 / Published: 23 May 2018
(This article belongs to the Special Issue Quantum Mechanics: From Foundations to Information Technologies)

Abstract
In the formalism of quantum theory, a state of a system is represented by a density operator. Mathematically, a density operator can be decomposed into a weighted sum of (projection) operators representing an ensemble of pure states (a state distribution), but such a decomposition is not unique. Various pure-state distributions are mathematically described by the same density operator. These distributions are categorized into classical ones, obtained from the Schatten decomposition, and other, non-classical, ones. In this paper, we define a quantity called the state entropy. It can be considered as a generalization of the von Neumann entropy evaluating the diversity of states constituting a distribution. Further, we apply the state entropy to the analysis of non-classical states created at the intermediate stages of the process of quantum measurement. To do this, we employ the model of differentiation, in which a system experiences step-by-step state transitions under the influence of environmental factors. This approach can be used for modeling various natural and mental phenomena: cell differentiation, evolution of biological populations, and decision making.

1. Introduction

In quantum theory, a state of a system is represented by a density operator. A density operator, e.g., $\rho$, can be decomposed into a weighted sum of (projection) operators representing "pure states". This linear combination represents a statistical distribution of pure states in an ensemble of systems. However, the same density operator $\rho$ can be decomposed in various ways. Hence, numerous statistical state distributions are mathematically encoded by the same $\rho$, unless $\rho$ coincides with a pure state.
One class of these statistical distributions, namely those obtained from "Schatten decompositions" of $\rho$, plays a special role. We remark that, for a density operator with a degenerate spectrum, the Schatten decomposition is not unique: any selection of orthogonal bases in the eigensubspaces of $\rho$ generates some Schatten decomposition. Each Schatten decomposition corresponds to a statistical distribution of eigenstates of $\rho$. The crucial point is that these eigenstates may be distinguished on the basis of a measurement of some physical quantity $X$, because these states are orthogonal to each other. The eigenvalues are interpreted as the frequency probabilities of the measurement outcomes. In this sense, the distribution corresponding to a concrete Schatten decomposition of the density operator $\rho$ is conceptually equivalent to a "classical" or "standard" probability distribution.
On the other hand, other decompositions of the same state $\rho$ are "non-classical" or "non-standard" and represent ensembles of pure states that need not be orthogonal to each other. In Section 2, we discuss these points in more detail.
The main topic of this paper is a quantity that evaluates structural features of the various statistical state distributions encoded in the same density operator $\rho$. It is well known that the von Neumann entropy [1,2], defined as $-\mathrm{Tr}(\rho \log \rho)$, evaluates how far $\rho$ deviates from a pure state, i.e., the degree of mixture of pure states. In fact, $-\mathrm{Tr}(\rho \log \rho)$ can be rewritten as $-\sum_k \lambda_k \log \lambda_k$, where $\{\lambda_k\}$ are the eigenvalues of $\rho$; it equals zero if and only if $\rho$ is a pure state. Note that $-\sum_k \lambda_k \log \lambda_k$ is the Shannon entropy of the classical probability distribution $\{\lambda_k\}$. Thus, the von Neumann entropy evaluates only the classical distribution encoded in $\rho$, not the non-classical ones.
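To make this concrete, here is a minimal Python sketch (our illustration, not part of the original paper) that computes the von Neumann entropy from the spectrum of a density matrix:

```python
import numpy as np

def von_neumann_entropy(rho: np.ndarray) -> float:
    """-Tr(rho log rho) = -sum_k lambda_k log lambda_k over the nonzero eigenvalues."""
    lam = np.linalg.eigvalsh(rho)        # rho is Hermitian, so eigvalsh applies
    lam = lam[lam > 1e-12]               # convention: 0 log 0 = 0
    return float(-np.sum(lam * np.log(lam)))

print(von_neumann_entropy(np.eye(2) / 2))        # maximally mixed qubit: log 2
print(von_neumann_entropy(np.diag([1.0, 0.0])))  # pure state: 0
```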
In this paper, we define a quantity that reflects more detailed information about the structure of statistical state distributions, especially non-classical ones. Our discussion is fundamental but straightforward. First, in Section 3, we describe the "differentiation phenomenon" that an ensemble of pure states experiences under a quantum measurement of some physical observable, say $X$. Each pure state is stochastically differentiated into an eigenstate of $X$. If the pure states in the statistical ensemble are different, the expectation values of $X$ estimated from each of them are also, in general, different. In Section 4, we focus on the dispersion of these expectation values and discuss its mathematical properties, which reflect structural features of the state distribution. Finally, in Section 5, we define a "state entropy" (see Equation (17)). This quantity evaluates the "diversity" of the pure states constituting an ensemble: it grows with the number of pure states and decreases with the similarities among them.
We also point to the interrelation between the state entropy and the von Neumann entropy, which can be briefly described as follows. If a state distribution encoded in $\rho$ is classical, then its state entropy is equal to the von Neumann entropy; the state entropies of non-classical state distributions do not exceed it (see the inequality of Equation (18)). In this sense, the state entropy is a generalization of the von Neumann entropy, which underlies many other quantum entropies, e.g., the conditional, relative, and mutual entropies [3,4,5].
The state entropy evaluates non-classical statistical state distributions. To stress the significance of this notion, we explain the theoretical context of state distributions. We note that classical state distributions are always identified after the completion of quantum measurements. Therefore, non-classical distributions may exist at the stages before measurements are completed and, more generally, in the process of differentiation.
In Section 6, we focus on the model of differentiation that was discussed in Reference [6]. This model describes the accumulation of very small state transitions experienced by the system; each transition is mathematically represented by a map on the state space, i.e., by a "quantum channel" in the terminology of quantum information theory. The quantum channel, denoted by $\Lambda$ and given by Equation (28), involves "environmental elements" around the system. If differentiations of states occur sequentially, these elements interact weakly with the system, causing numerous small state transitions step by step. This picture corresponds to an idealized "open quantum system dynamics". To describe the process of differentiation in the system, we consider a more complicated model, assuming differentiation not only of the system state but also in the elements of the environment. The differentiation in each environmental element is similar to the determination of a "pointer basis" in the theory of quantum decoherence proposed by Zurek [7]. In our approach, the Lindblad equation [8,9], the traditional way to describe open quantum system dynamics, is not employed directly.
We believe that the described model is applicable to a variety of natural and mental phenomena (not only in the micro-world). The process by which a diversity of states is created in an ensemble of systems, originally prepared in the same pure state $\Psi$, through interaction with environmental factors is universal. Originally, the formalism of quantum theory was established to describe microscopic phenomena, but now it is widely used in psychology, decision making, and finance (see [10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38]). It is also applied to model the behavior of biological systems, especially the functioning of genetic and epigenetic systems (see [39,40,41,42,43,44,45,46]). We plan to explore the novel mathematical apparatus developed in this paper (based on the state entropy) for such applications elsewhere.
In psychology, there has been extensive interest in employing classical entropy for quantifying uncertainty, e.g., in decision making (entropy minimization was used to model decision biases in [47]), categorization (as a way to formalize intuitions in spontaneous grouping [48]), and learning [49,50]. We plan to apply the apparatus of the quantum state entropy to these problems.
As shown in Figure 1, the accumulation of transitions generated by the channel $\Lambda$ represents an ideal differentiation process realized in the system. Further, in this modeling, the non-classical state distributions at the intermediate stages are identified (see Equations (29)–(31)). We analyze them by means of the state entropy (see Figure 2 and Figure 3).

2. State Representation by Density Operator

If a physical quantity $X$ is measurable in a system, the frequency probabilities $\{P(x)\}$ of the observed values $\{x\}$ can be estimated. Then, the quantity $X$ is a "stochastic variable" in terms of probability theory, and the distribution $\{P(x)\}$ is a "state of the system", which can be analyzed, e.g., by calculating the expectation value $E(X)$ or the dispersion $V(X) = E(X^2) - (E(X))^2$, as is usual in statistics.
The mathematical framework of quantum theory includes probability theory, where the classical concepts of stochastic variable and probability distribution are extended using the notion of an "operator". First, a physical quantity is defined in the form
$$X = \sum_{k=1}^{M} x_k\, |x_k\rangle\langle x_k|. \qquad (1)$$
This is a Hermitian operator on the Hilbert space $H = \mathbb{C}^M$ with real eigenvalues $x_k \in \mathbb{R}$ ($k = 1, 2, \ldots, M$) and eigenvectors $\{|x_k\rangle\}$. (A vector $|x\rangle \in H$ whose norm is 1 is called a ket-vector, and $\langle x|$, the Hermitian conjugate of $|x\rangle$, is called a bra-vector.) The form of Equation (1) implies that, after a non-degenerate value $x_k$ is observed, the system under measurement has the definite (pure) state represented by the operator $|x_k\rangle\langle x_k|$. Note that the trace of the product of $X$ and $|x_k\rangle\langle x_k|$ is equal to $x_k$:
$$\mathrm{Tr}(X |x_k\rangle\langle x_k|) = \langle x_k| X |x_k\rangle = x_k.$$
This calculation uses the orthogonality of the eigenvectors, i.e., $\langle x_k | x_{k'}\rangle = 0$ if $k \neq k'$. Next, using the pure states $\{|x_k\rangle\langle x_k|\}$, let us construct the operator
$$\rho = \sum_{k=1}^{M} P(x_k)\, |x_k\rangle\langle x_k|, \qquad (2)$$
where $\{P(x_k)\}$ are the frequency probabilities of the observed values $\{x_k\}$. In fact, the trace of $X\rho$ is equal to the expectation value $E(X)$:
$$\mathrm{Tr}(X\rho) = E(X). \qquad (3)$$
Mathematically, $\rho$ is a Hermitian matrix satisfying $\mathrm{Tr}(\rho) = 1$ and $\langle x|\rho|x\rangle \geq 0$ for all $|x\rangle \in H = \mathbb{C}^M$. Such an operator is called a density operator and is used to represent a statistical mixture of pure states (a mixed state). A density operator may be given in the form of a Schatten decomposition, i.e., represented as a diagonal matrix:
$$\rho = \sum_{k=1}^{M} \lambda_k\, |\phi_k\rangle\langle \phi_k|, \qquad (4)$$
where $\{\lambda_k \geq 0\}$ are the eigenvalues of the matrix (corresponding to the probabilities $\{P(x_k)\}$ above), and $\{|\phi_k\rangle \in H = \mathbb{C}^M\}$ are the corresponding eigenvectors (corresponding to the $|x_k\rangle$ above). From Equation (4), one obtains the picture of a statistical mixture of $\{|\phi_k\rangle\langle\phi_k|\}$. (This mixture is denoted by $\{\phi_k, \lambda_k\}$ hereafter.) As can be seen from the construction of $\rho$ in Equation (2), giving a Schatten decomposition is conceptually equivalent to giving a probability distribution for the measurement of some physical quantity. In this sense, the state distribution $\{\phi_k, \lambda_k\}$ is "classical". We have to point out here that the decomposition of a density operator is, in general, not unique: by considering various linear combinations of the $\{|\phi_k\rangle\}$, one can find a set of vectors $\{|\Psi_i\rangle,\ i = 1, \ldots, N\}$ that satisfies
$$\sum_{k=1}^{M} \lambda_k\, |\phi_k\rangle\langle\phi_k| = \sum_{i=1}^{N} P_i\, |\Psi_i\rangle\langle\Psi_i|, \qquad \sum_{i=1}^{N} P_i = 1. \qquad (5)$$
Note that $N \geq M$ is possible, and the vectors $\{|\Psi_i\rangle \in H = \mathbb{C}^M\}$ need not be orthogonal to each other; that is, they need not be eigenstates of a single physical quantity: the state distribution $\{\Psi_i, P_i\}$ is "non-classical". There exist numerous state distributions corresponding to the same density operator, other than $\{\phi_k, \lambda_k\}$ and $\{\Psi_i, P_i\}$, and they are non-classical.
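As an illustration of this non-uniqueness, the following sketch (ours, with arbitrarily chosen state vectors) builds a density matrix from a non-orthogonal ensemble and recovers the same matrix from its Schatten (eigen-)decomposition:

```python
import numpy as np

def proj(v):                                   # |v><v|
    return np.outer(v, v.conj())

ket0 = np.array([1.0, 0.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)       # non-orthogonal to ket0

# A non-classical ensemble: {|0>, |+>} with equal weights
rho = 0.5 * proj(ket0) + 0.5 * proj(plus)

# The classical (Schatten) ensemble: eigenvalues and orthogonal eigenvectors of rho
lam, phi = np.linalg.eigh(rho)
rho_schatten = sum(l * proj(phi[:, k]) for k, l in enumerate(lam))

print(np.allclose(rho, rho_schatten))          # True: two distinct ensembles, one rho
```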

3. Differentiation Phenomenon in Quantum Measurement Process

As shown in Equation (3), for the density operator
$$\rho = \sum_{k=1}^{M} P(x_k)\, |x_k\rangle\langle x_k|,$$
where $\{|x_k\rangle\langle x_k|\}$ are the eigenstates of $X$, the relation $\mathrm{Tr}(X\rho) = \sum_{k=1}^{M} P(x_k)\, x_k = E(X)$ is satisfied. In this section, noting the non-uniqueness of the decomposition of a density operator, we discuss a meaning of $\mathrm{Tr}(X\rho)$ that has not been considered in the classical theory. Let us consider a different decomposition, $\rho = \sum_{i=1}^{N} P_i\, |\Psi_i\rangle\langle\Psi_i|$; that is, we assume the existence of a non-classical state distribution $\{\Psi_i, P_i\}$. Then, $\mathrm{Tr}(X\rho)$ is described as the statistical average of the averages $\{\langle X\rangle_{\Psi_i} = \mathrm{Tr}(X\, |\Psi_i\rangle\langle\Psi_i|)\}$ of the observable $X$ with respect to the pure states $\{|\Psi_i\rangle\}$:
$$\mathrm{Tr}(X\rho) = \sum_{i=1}^{N} P_i\, \langle X\rangle_{\Psi_i}. \qquad (6)$$
Each term $\langle X\rangle_{\Psi}$ in the above is expanded as
$$\langle X\rangle_{\Psi} = \sum_{k=1}^{M} x_k\, |\langle \Psi | x_k\rangle|^2, \qquad (7)$$
where $\sum_{k=1}^{M} |\langle\Psi|x_k\rangle|^2 = 1$. The squared inner product $|\langle\Psi|x_k\rangle|^2$ is frequently called a "transition probability". It is related to the problem of measurement that has been discussed in quantum theory. In the concept of quantum measurement, the existence of a measurement device is considered first, because it is assumed that some interaction between the device and the system realizes the measurement of a physical quantity. Due to this interaction, the initial state of the system, $|\Psi\rangle\langle\Psi|$, is transferred to one of the $\{|x_k\rangle\langle x_k|\}$, and the values $\{x_k\}$ can be read out from the device. If $\langle X\rangle_{\Psi} = \sum_{k=1}^{M} x_k |\langle\Psi|x_k\rangle|^2$ is the average of the outputs, the value $|\langle\Psi|x_k\rangle|^2$ corresponds to the probability of the transition from $|\Psi\rangle\langle\Psi|$ to $|x_k\rangle\langle x_k|$.
We interpret the process of quantum measurement as a sort of "differentiation", in which a group of systems in one initial state is divided into groups having different states by means of external or environmental factors. The expectation value $\langle X\rangle_{\Psi_i}$ comes from one differentiation, denoted by $\Psi_i \to \{x_k\}$, and the value $\mathrm{Tr}(X\rho) = \sum_{i=1}^{N} P_i \langle X\rangle_{\Psi_i}$ is calculated supposing a statistical mixture of $N$ kinds of differentiations, $\{\Psi_i \to \{x_k\}\}$ ($i = 1, \ldots, N$).
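A short numerical check of Equation (6), as a sketch with an arbitrary qubit observable and arbitrarily chosen states:

```python
import numpy as np

def proj(v):
    return np.outer(v, v.conj())

X = np.diag([1.0, -1.0])                       # observable with eigenvalues +1, -1
Psi = [np.array([1.0, 0.0]),                   # two non-orthogonal pure states
       np.array([1.0, 1.0]) / np.sqrt(2)]
P = [0.4, 0.6]
rho = sum(p * proj(v) for p, v in zip(P, Psi))

avgs = [np.trace(X @ proj(v)).real for v in Psi]   # <X>_Psi_i
print(np.trace(X @ rho).real)                      # Tr(X rho)
print(sum(p * a for p, a in zip(P, avgs)))         # the same value: sum_i P_i <X>_Psi_i
```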

4. Characteristic Quantity of State Distribution

We assume that a definite state distribution $\{\Psi_i, P_i\}$ is given and that the calculations of $\{\langle X\rangle_{\Psi_i}\}$ are possible. The average of $\{\langle X\rangle_{\Psi_i}\}$, i.e., $\sum_{i=1}^{N} P_i\langle X\rangle_{\Psi_i} = \mathrm{Tr}(X\rho)$, depends only on the density operator $\rho$ in which $\{\Psi_i, P_i\}$ is encoded. A statistical quantity reflecting more detailed information on the structure of $\{\Psi_i, P_i\}$ is the dispersion of $\{\langle X\rangle_{\Psi_i}\}$, formulated as
$$V(\{\langle X\rangle_{\Psi_i}, P_i\}) = \sum_{i=1}^{N} P_i\, \langle X\rangle_{\Psi_i}^2 - \Big(\sum_{i=1}^{N} P_i\, \langle X\rangle_{\Psi_i}\Big)^2 = \sum_{i=1}^{N} P_i\, \langle X\rangle_{\Psi_i}^2 - \mathrm{Tr}(X\rho)^2. \qquad (8)$$
Below, we prove the inequality
$$V(\{x_i, P(x_i)\}) \;\geq\; V(\{\langle X\rangle_{\Psi_i}, P_i\}), \qquad (9)$$
where $V(\{x_i, P(x_i)\})$ is the dispersion of the observable $X$, i.e., its dispersion with respect to the probability distribution encoded in the Schatten decomposition (see Equation (2)), corresponding to the spectral decomposition of the observable $X$ (see Equation (1)). Thus, the probability distribution corresponding to the spectral decomposition of $X$ maximizes the dispersion over the decompositions in Equation (5). The inequality for dispersions can be interpreted through the theory of weak measurements: the quantities $\langle X\rangle_{\Psi_i}$ can be interpreted as weak values. In this framework, the inequality in Equation (9) simply means that the dispersion of a weak measurement is always majorized by the dispersion of the "maximally disturbing measurement", represented by a Hermitian operator. At the same time, we are aware that the interpretation of weak values is a complex foundational problem in itself.
To prove the inequality in Equation (9), let us consider the first term, given by
$$D(\{\langle X\rangle_{\Psi_i}, P_i\}) = \sum_{i=1}^{N} P_i\, \langle X\rangle_{\Psi_i}^2. \qquad (10)$$
Let us note the following inequality,
$$\sum_{k=1}^{M} \langle x_k|\rho|x_k\rangle\, (x_k)^2 \;\geq\; D(\{\langle X\rangle_{\Psi_i}, P_i\}) \;\geq\; \mathrm{Tr}(X\rho)^2, \qquad (11)$$
which follows from the convexity of $y = x^2$: we have
$$\sum_{i=1}^{N} P_i\, \langle X\rangle_{\Psi_i}^2 \;\geq\; \Big(\mathrm{Tr}\Big(X \sum_{i=1}^{N} P_i\, |\Psi_i\rangle\langle\Psi_i|\Big)\Big)^2 = \mathrm{Tr}(X\rho)^2,$$
and, since
$$\sum_{i=1}^{N} P_i\, \langle X\rangle_{\Psi_i}^2 = \sum_{i=1}^{N} P_i \Big(\sum_{k=1}^{M} x_k\, |\langle x_k|\Psi_i\rangle|^2\Big)^2,$$
where $X = \sum_{k=1}^{M} x_k |x_k\rangle\langle x_k|$, one sees that
$$\sum_{i=1}^{N} P_i\, \langle X\rangle_{\Psi_i}^2 \;\leq\; \sum_{i=1}^{N}\sum_{k=1}^{M} P_i\, |\langle x_k|\Psi_i\rangle|^2 (x_k)^2 = \sum_{k=1}^{M} \langle x_k|\rho|x_k\rangle\, (x_k)^2.$$
Such an inequality can be derived with the use of other convex functions, not only $y = x^2$. Even if the dispersion $V$ is defined as
$$V(\{\langle X\rangle_{\Psi_i}, P_i\}) = \sum_{i=1}^{N} P_i\, f(\langle X\rangle_{\Psi_i}) - f(\mathrm{Tr}(X\rho)),$$
using another convex function $f(x)$, the result holds true; that is, the inequality
$$\sum_{k=1}^{M} \langle x_k|\rho|x_k\rangle\, f(x_k) - f(\mathrm{Tr}(X\rho)) \;\geq\; V(\{\langle X\rangle_{\Psi_i}, P_i\}) \;\geq\; 0$$
is satisfied.
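The inequality of Equation (11) is easy to verify numerically; here is a sketch with $f(x) = x^2$ and the toy qubit ensemble used above (our choice of parameters):

```python
import numpy as np

def proj(v):
    return np.outer(v, v.conj())

X = np.diag([1.0, -1.0])
Psi = [np.array([1.0, 0.0]), np.array([1.0, 1.0]) / np.sqrt(2)]
P = np.array([0.4, 0.6])
rho = sum(p * proj(v) for p, v in zip(P, Psi))

mean = np.trace(X @ rho).real                         # Tr(X rho)
avgs = np.array([np.trace(X @ proj(v)).real for v in Psi])
V_states = np.sum(P * avgs**2) - mean**2              # dispersion of the <X>_Psi_i
diag = np.real(np.diag(rho))                          # <x_k|rho|x_k> in X's eigenbasis
V_classical = np.sum(diag * np.diag(X)**2) - mean**2  # classical dispersion of X
print(V_classical >= V_states >= 0.0)                 # True, as in Equations (9) and (11)
```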
We redefine the first term $D$ of Equation (10) as
$$D(\{\langle X\rangle_{\Psi_i}, P_i\}) = \sum_{i=1}^{N} P_i\, f(\langle X\rangle_{\Psi_i}). \qquad (16)$$
As discussed in the next section, we believe that, under proper choices of $X$ and $f(x)$, this $D$ itself becomes a quantity that captures structural features of $\{\Psi_i, P_i\}$.

5. State Entropy

In this section, we consider the $D$ of Equation (16) in the case $X = \rho$ and $f(x) = -\log x$:
$$D(\{\langle\rho\rangle_{\Psi_i}, P_i\}) = -\sum_{i=1}^{N} P_i\, \log \langle\rho\rangle_{\Psi_i}. \qquad (17)$$
Here, we fix the state distribution $\{\Psi_i, P_i\}$ for the density operator $\rho$, and $y = -\log x$ is our choice of convex function. What does the above $D$ tell us about $\{\Psi_i, P_i\}$? To discuss this question, we first focus on the term $\mathrm{Tr}(\rho\, |\Psi_i\rangle\langle\Psi_i|) = \langle\rho\rangle_{\Psi_i}$. Since
$$\langle\rho\rangle_{\Psi_i} = P_i + \sum_{j\neq i} P_j\, |\langle\Psi_i|\Psi_j\rangle|^2,$$
the inequality $1 \geq \langle\rho\rangle_{\Psi_i} \geq P_i$ is satisfied. One can see that $\langle\rho\rangle_{\Psi_i} = P_i$ if all the vectors $\{\Psi_{j\neq i}\}$ are orthogonal to $\Psi_i$, and $\langle\rho\rangle_{\Psi_i} = 1$ if all the $\{\Psi_{j\neq i}\}$ are parallel to $\Psi_i$. Based on this, we interpret $\langle\rho\rangle_{\Psi_i}$ as a degree of "similarity" between $|\Psi_i\rangle\langle\Psi_i|$ and $\rho$. This interpretation can also be illustrated by representing the operators $|\Psi_i\rangle\langle\Psi_i|$ and $\rho$ as vectors in the Hilbert space of Hilbert–Schmidt operators endowed with the scalar product $\langle A|B\rangle = \mathrm{Tr}(A^{\dagger}B)$. We start with the remark that $\langle A|B\rangle = \cos\theta_{AB}\, \|A\|_2\, \|B\|_2$, where $\|\cdot\|_2$ is the Hilbert–Schmidt norm; we also remark that, for a self-adjoint operator $A$, $\|A\|_2 = \sqrt{\mathrm{Tr}(A^2)}$. In particular, the norm of any pure state, i.e., of any one-dimensional projector, is equal to one. We have
$$\big\langle\, |\Psi_i\rangle\langle\Psi_i|\, \big|\, \rho\, \big\rangle = \mathrm{Tr}\Big(\sum_j P_j\, |\Psi_i\rangle\langle\Psi_i|\, |\Psi_j\rangle\langle\Psi_j|\Big) = \langle\rho\rangle_{\Psi_i}.$$
Hence,
$$\langle\rho\rangle_{\Psi_i} = \cos\theta\, \sqrt{\mathrm{Tr}(\rho^2)},$$
where $\theta$ is the angle between the vectors $|\Psi_i\rangle\langle\Psi_i|$ and $\rho$, and the scaling coefficient $\sqrt{\mathrm{Tr}(\rho^2)}$ is the square root of the purity $\mathrm{Tr}(\rho^2)$ of the state $\rho$.
Further, noting that $y = -\log x$ is a monotonically decreasing function, we interpret $-\log\langle\rho\rangle_{\Psi_i} = -\log\cos\theta - \log\sqrt{\mathrm{Tr}(\rho^2)}$ as a degree of orthogonality between the vectors $|\Psi_i\rangle\langle\Psi_i|$ and $\rho$. We note that the following inequality is satisfied: $-\log P_i \geq -\log\langle\rho\rangle_{\Psi_i} \geq 0$.
In general, any convex and monotonically decreasing function is allowed as $f(x)$. The average of the orthogonality $-\log\langle\rho\rangle_{\Psi_i}$, i.e., $\sum_{i=1}^{N} P_i(-\log\langle\rho\rangle_{\Psi_i})$, corresponds to the $D$ of Equation (17). Generally, the value of $D$ increases with the number of states and decreases with the similarities among them. That is why we call the value $D$ the "state diversity" or "state entropy".
The following inequality shows the significance of the state entropy $D$:
$$-\sum_{k=1}^{M} \lambda_k \log\lambda_k \;\geq\; D(\{\langle\rho\rangle_{\Psi_i}, P_i\}) \;\geq\; -\log(\mathrm{Tr}(\rho^2)). \qquad (18)$$
It can be derived using the convexity of $y = -\log x$, in a way similar to the derivation of Equation (11). Here, $\{\lambda_k\}$ are the eigenvalues of $\rho = \sum_{k=1}^{M}\lambda_k\, |\phi_k\rangle\langle\phi_k|$. The term $\sum_{k=1}^{M}\lambda_k(-\log\lambda_k)$ on the left-hand side is the von Neumann entropy $-\mathrm{Tr}(\rho\log\rho)$. Further, $\mathrm{Tr}(\rho^2)$ on the right-hand side is also a well-known quantity in quantum theory. The von Neumann entropy and $\mathrm{Tr}(\rho^2)$ are frequently used to evaluate the degree of "mixing" in $\rho$: if $\rho$ is pure, then $-\mathrm{Tr}(\rho\log\rho) = 0$ and $\mathrm{Tr}(\rho^2) = 1$. If $\rho$ is a mixed state, then $-\mathrm{Tr}(\rho\log\rho) > 0$ and $\mathrm{Tr}(\rho^2) < 1$; in particular, when $\lambda_1 = \lambda_2 = \cdots = \lambda_M = 1/M$, $-\mathrm{Tr}(\rho\log\rho)$ takes its maximum value $\log M$, and $\mathrm{Tr}(\rho^2)$ takes its minimum value $1/M$. Mathematically, these two quantities satisfy $-\mathrm{Tr}(\rho\log\rho) \geq -\log(\mathrm{Tr}(\rho^2))$. The inequality in Equation (18) implies that the values between these two bounds are the state entropies that can be estimated for the various non-classical state distributions reducing to $\rho$. In other words, the well-known $-\mathrm{Tr}(\rho\log\rho)$ and $-\log(\mathrm{Tr}(\rho^2))$ are newly interpreted as the maximum and minimum values of the state entropy.
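A numerical check of the bounds in Equation (18), again a sketch with an arbitrarily chosen two-state ensemble:

```python
import numpy as np

def proj(v):
    return np.outer(v, v.conj())

Psi = [np.array([1.0, 0.0]), np.array([1.0, 1.0]) / np.sqrt(2)]
P = np.array([0.4, 0.6])
rho = sum(p * proj(v) for p, v in zip(P, Psi))

# State entropy D of this particular (non-classical) decomposition
overlaps = [np.trace(rho @ proj(v)).real for v in Psi]   # <rho>_Psi_i
D = -np.sum(P * np.log(overlaps))

lam = np.linalg.eigvalsh(rho)                            # here both eigenvalues are > 0
S_vn = -np.sum(lam * np.log(lam))                        # von Neumann entropy
lower = -np.log(np.trace(rho @ rho).real)                # -log(Tr(rho^2))
print(S_vn >= D >= lower)                                # True
```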
Note that the state entropy $D$ is different from the generalized quantum entropic measures that have been proposed so far. This point is discussed in Appendix A.

6. Model of Differentiation and Calculation of State Entropy

As mentioned in Section 2, a Schatten decomposition of a density operator, such as Equation (2), represents a probabilistic distribution of orthogonal pure states. Such an ensemble of states is postulated to be the resulting state of the system after the measurement of some physical quantity whose eigenstates are orthogonal. On the other hand, using another decomposition of the density operator, a mixture of non-orthogonal pure states may be obtained, and we call such a mixture non-classical. In Section 3, we pointed out that the essence of quantum measurement is state differentiation caused by external or environmental factors. If the state distribution corresponding to a Schatten decomposition is the goal of differentiation, various non-classical distributions will appear at intermediate stages before the goal is reached. Below, we model this mechanism as proposed in [6]. This model mathematically explains which state distributions may occur in the differentiation process. Our aim in this section is to evaluate their structural features by using the state entropy defined in Section 5.
Let us consider a typical state transition caused by a quantum measurement, denoted by
$$\Psi \to \{\psi_k, P_k\}.$$
Here, $\Psi$ denotes the initial state of the system, represented by $|\Psi\rangle\langle\Psi|$, and $\{\psi_k, P_k\}$ denotes a distribution in which the states $\{|\psi_k\rangle\langle\psi_k|\}$ occur with probabilities $\{P_k\}$. The $\{|\psi_k\rangle\}$ are eigenstates of some physical quantity defined on the Hilbert space $H = \mathbb{C}^M$, and the initial vector $|\Psi\rangle$ is expanded as
$$|\Psi\rangle = \sum_{k=1}^{M} \sqrt{P_k}\, |\psi_k\rangle,$$
where $\sqrt{P_k}$ denotes a complex number satisfying $|\sqrt{P_k}|^2 = P_k$; that is,
$$|\Psi\rangle\langle\Psi| = \sum_{k=1}^{M} P_k\, |\psi_k\rangle\langle\psi_k| + \sum_{k\neq k'} \sqrt{P_k}\, \sqrt{P_{k'}}^{\,*}\, |\psi_k\rangle\langle\psi_{k'}|. \qquad (19)$$
The first term, $\sum_{k=1}^{M} P_k\, |\psi_k\rangle\langle\psi_k|$, corresponds to the distribution $\{\psi_k, P_k\}$; therefore, the vanishing of the second term, the process called "decoherence" in quantum theory, means the accomplishment of the measurement. The relation between $\Psi$ and $\{\psi_k, P_k\}$ is represented as
$$\sum_{k=1}^{M} M_k\, |\Psi\rangle\langle\Psi|\, M_k = \sum_{k=1}^{M} |\langle\psi_k|\Psi\rangle|^2\, |\psi_k\rangle\langle\psi_k| = \sum_{k=1}^{M} P_k\, |\psi_k\rangle\langle\psi_k|, \qquad (20)$$
with the use of the projection operators $M_k = |\psi_k\rangle\langle\psi_k|$. (The transition probability $|\langle\psi_k|\Psi\rangle|^2$ is equal to $P_k$.)
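In code, this projective transition looks as follows (a tiny sketch for $M = 2$; the amplitudes are taken from the Section 6 simulation):

```python
import numpy as np

Psi = np.array([np.sqrt(0.7), np.sqrt(0.3)])    # |Psi> = sqrt(P1)|psi1> + sqrt(P2)|psi2>
rho0 = np.outer(Psi, Psi)                       # |Psi><Psi|, with off-diagonal coherences
M = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]  # projectors M_k = |psi_k><psi_k|
rho_after = sum(Mk @ rho0 @ Mk for Mk in M)     # Equation (20)
print(np.round(rho_after, 3))                   # diag(0.7, 0.3): the goal distribution
```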
If the above transition is interpreted as a sort of differentiation, its development, i.e., which state distributions occur between $\Psi$ and $\{\psi_k, P_k\}$, becomes a crucial concern. The model of differentiation proposed in [6] presents the picture that the initial state $\Psi$ is differentiated into $\{\psi_k, P_k\}$ step by step through many state transitions. Each state transition is described with the use of a map from state to state, denoted by $\Lambda$. Such a map is called a "quantum channel" in quantum information theory. A chain of state transitions given as
$$\rho^{(0)} = |\Psi\rangle\langle\Psi| \;\to\; \rho^{(1)} = \Lambda(\rho^{(0)}) \;\to\; \rho^{(2)} = \Lambda(\rho^{(1)}) \;\to\; \cdots \;\to\; \rho^{(n)} = \Lambda(\rho^{(n-1)}) \;\to\; \cdots$$
is regarded as a process of differentiation if
$$\lim_{n\to\infty} \rho^{(n)} = \sum_{k=1}^{M} P_k\, |\psi_k\rangle\langle\psi_k| \qquad (21)$$
is satisfied. The channel $\Lambda$ is defined based on the following picture: there exist numerous environmental elements around the system, and, initially, the states of the system and of these elements are given independently. Let $|\Phi\rangle\langle\Phi|$ be the initial state of one element, defined on a space $K_1 = \mathbb{C}^L$. The initial compound state of the system and the element, on the space $H \otimes K_1$, is factorized:
$$|\Psi\rangle\langle\Psi| \otimes |\Phi\rangle\langle\Phi|.$$
At the next step, the states of the system and the element become non-separable. Such a compound state is generally written as
$$U\, \big(|\Psi\rangle\langle\Psi| \otimes |\Phi\rangle\langle\Phi|\big)\, U^{\dagger},$$
using a unitary operator $U$ on $H \otimes K_1$. The unitary transformation $U$ specifies the correlation generated between the system and the element, and, in this modeling, the following form is assumed:
$$U = \sum_{k=1}^{M} |\psi_k\rangle\langle\psi_k| \otimes u_k,$$
where $u_k$ is a unitary operator on $K_1$. By this $U$, the vector $|\Psi\rangle \otimes |\Phi\rangle$ is transformed to
$$U\, |\Psi\rangle \otimes |\Phi\rangle = \sum_{k=1}^{M} \sqrt{P_k}\, |\psi_k\rangle \otimes |\Phi_k\rangle,$$
where $|\Phi_k\rangle = u_k|\Phi\rangle$. The states of the system and the element are then "entangled", since the above form cannot be factorized into two vectors independently defined on $H$ and $K_1$ if $|\Phi_k\rangle \neq |\Phi_{k'}\rangle$ for some $k \neq k'$. The compound state at the third step is described as
$$\sum_{j=1}^{L} (I \otimes \bar{M}_j)\, U\, \big(|\Psi\rangle\langle\Psi| \otimes |\Phi\rangle\langle\Phi|\big)\, U^{\dagger}\, (I \otimes \bar{M}_j). \qquad (23)$$
Here, $\{\bar{M}_j\}$ are the projection operators corresponding to a basis set of $K_1 = \mathbb{C}^L$, say $\{|\phi_j\rangle\}$. As can be seen from Equation (20), a state transition given by a projection operator mathematically represents the accomplishment of differentiation. The operation of $\{\bar{M}_j\}$ means that the state of the element is eventually differentiated into $\{|\phi_j\rangle\langle\phi_j|\}$. Note that the states of the system and the element are correlated at the second step; thus, the state of the system is affected by this differentiation. Actually, Equation (23) may be rewritten as
$$\sum_{j=1}^{L} E_j\, |\Psi\rangle\langle\Psi|\, E_j^{\dagger} \otimes |\phi_j\rangle\langle\phi_j|, \qquad (24)$$
by introducing the operators
$$E_j = \sum_{k=1}^{M} \langle\phi_j|\Phi_k\rangle\, |\psi_k\rangle\langle\psi_k| = \sum_{k=1}^{M} \nu_{j|k}\, |\psi_k\rangle\langle\psi_k|. \qquad (25)$$
The above form implies that the state of the environmental element is transformed to $|\phi_j\rangle\langle\phi_j|$ with probability
$$P_j = \mathrm{Tr}\big(E_j\, |\Psi\rangle\langle\Psi|\, E_j^{\dagger} \otimes |\phi_j\rangle\langle\phi_j|\big) = \langle\Psi|E_j^{\dagger}E_j|\Psi\rangle, \qquad (26)$$
and, at the same time, the state of the system transits to
$$|\Psi_j\rangle\langle\Psi_j| = \frac{1}{P_j}\, E_j\, |\Psi\rangle\langle\Psi|\, E_j^{\dagger}. \qquad (27)$$
The operators $E_j$ introduced in Equation (25) are called Kraus operators and satisfy $\sum_{j=1}^{L} E_j^{\dagger}E_j = I$. (In general, a set of Hermitian positive operators $\{F_i\}$ with $\sum_{i=1}^{N} F_i = I$ is called a positive-operator valued measure (POVM).) With the use of $\{E_j\}$, a quantum channel $\Lambda$ is defined:
$$\Lambda(\cdot) = \sum_{j=1}^{L} E_j\, \cdot\, E_j^{\dagger}. \qquad (28)$$
Here, $\Lambda(|\Psi\rangle\langle\Psi|) = \sum_{j=1}^{L} E_j|\Psi\rangle\langle\Psi|E_j^{\dagger}$ is the density operator obtained from the partial trace of the compound state, $\mathrm{Tr}_{K_1}\big(\sum_{j=1}^{L} E_j|\Psi\rangle\langle\Psi|E_j^{\dagger} \otimes |\phi_j\rangle\langle\phi_j|\big)$.
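A minimal implementation of this channel for $M = L = 2$ (a sketch; the overlaps $\nu_{j|k} = \langle\phi_j|\Phi_k\rangle$ are set to the real amplitudes used in the simulation below):

```python
import numpy as np

# Kraus operators E_j = sum_k nu_{j|k} |psi_k><psi_k|, diagonal in the basis {psi_1, psi_2}
nu = np.array([[np.sqrt(0.50), np.sqrt(0.45)],   # nu_{1|1}, nu_{1|2}
               [np.sqrt(0.50), np.sqrt(0.55)]])  # nu_{2|1}, nu_{2|2}
E = [np.diag(nu[j]) for j in range(2)]
assert np.allclose(sum(Ej.conj().T @ Ej for Ej in E), np.eye(2))  # sum_j E_j^dag E_j = I

def Lambda(rho):
    """The quantum channel of Equation (28)."""
    return sum(Ej @ rho @ Ej.conj().T for Ej in E)

Psi = np.array([np.sqrt(0.7), np.sqrt(0.3)])
rho = np.outer(Psi, Psi)
for _ in range(2000):                # the chain rho(n) = Lambda(rho(n-1))
    rho = Lambda(rho)
print(np.round(rho, 3))              # off-diagonals shrink by <Phi_1|Phi_2> at each step
```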
The other environmental elements are defined on Hilbert spaces denoted by $K_2, K_3, \ldots$. If they interact with the system in a similar way,
$$\rho^{(n)} = \Lambda(\rho^{(n-1)}) = \cdots = \Lambda(\Lambda(\cdots\Lambda(|\Psi\rangle\langle\Psi|)\cdots))$$
is the density operator of the system obtained after interaction with $n$ environmental elements. From the definition of $\Lambda$ (see Equation (28)), this $\rho^{(n)}$ is decomposed as
$$\rho^{(n)} = \sum_{\{j_1, j_2, \ldots, j_n\}} P_{\{j_1, j_2, \ldots, j_n\}}\, |\Psi_{\{j_1, j_2, \ldots, j_n\}}\rangle\langle\Psi_{\{j_1, j_2, \ldots, j_n\}}|, \qquad (29)$$
where
$$P_{\{j_1, j_2, \ldots, j_n\}} = \langle\Psi|\, E_{\{j_1, j_2, \ldots, j_n\}}^{\dagger} E_{\{j_1, j_2, \ldots, j_n\}}\, |\Psi\rangle, \qquad (30)$$
and
$$|\Psi_{\{j_1, j_2, \ldots, j_n\}}\rangle = \frac{1}{\sqrt{P_{\{j_1, j_2, \ldots, j_n\}}}}\, E_{\{j_1, j_2, \ldots, j_n\}}\, |\Psi\rangle. \qquad (31)$$
(The notation $E_{\{j_1, j_2, \ldots, j_n\}}$ means $E_{j_n}\cdots E_{j_2}E_{j_1}$.) $P_{\{j_1, j_2, \ldots, j_n\}}$ is the probability that the states of the $n$ environmental elements eventually become $\{\phi_{j_1}, \phi_{j_2}, \ldots, \phi_{j_n}\}$, and $|\Psi_{\{j_1, j_2, \ldots, j_n\}}\rangle\langle\Psi_{\{j_1, j_2, \ldots, j_n\}}|$ is the pure state of the system for this event. It should be noted here that the density operator $\rho^{(n)}$ can also be expanded as
$$\rho^{(n)} = \sum_{k=1}^{M} P_k\, |\psi_k\rangle\langle\psi_k| + \sum_{k\neq k'} \sqrt{P_k}\, \sqrt{P_{k'}}^{\,*}\, \big(\langle\Phi_{k'}|\Phi_k\rangle\big)^n\, |\psi_k\rangle\langle\psi_{k'}|, \qquad (32)$$
by using Equation (19) and the property
$$\Lambda(|\psi_k\rangle\langle\psi_{k'}|) = \sum_{j=1}^{L} \langle\phi_j|\Phi_k\rangle\langle\Phi_{k'}|\phi_j\rangle\, |\psi_k\rangle\langle\psi_{k'}| = \langle\Phi_{k'}|\Phi_k\rangle\, |\psi_k\rangle\langle\psi_{k'}|.$$
Since $|\langle\Phi_{k'}|\Phi_k\rangle| < 1$ for $k \neq k'$, the condition of Equation (21), i.e., $\lim_{n\to\infty}\rho^{(n)} = \sum_{k=1}^{M} P_k\, |\psi_k\rangle\langle\psi_k|$, is clearly satisfied. Thus, the state distribution $\{\Psi_{\{j_1, j_2, \ldots, j_n\}}, P_{\{j_1, j_2, \ldots, j_n\}}\}$, which is encoded in $\rho^{(n)}$, is identified at an intermediate stage of the differentiation process $\Psi \to \{\psi_k, P_k\}$.
Figure 1 shows the result of a computational simulation with $M = L = 2$, $|\Psi\rangle = \sqrt{0.7}\,|\psi_1\rangle + \sqrt{0.3}\,|\psi_2\rangle$ ($P_1 = 0.7$, $P_2 = 0.3$), $\nu_{1|1} = \sqrt{0.5}$ ($\nu_{2|1} = \sqrt{0.5}$) and $\nu_{1|2} = \sqrt{0.45}$ ($\nu_{2|2} = \sqrt{0.55}$). The histograms of the population rates of states with $\frac{l-1}{20} < |\langle\psi_1|\Psi_{\{j_1, j_2, \ldots, j_n\}}\rangle|^2 \leq \frac{l}{20}$ ($l = 1, 2, \ldots, 20$) are calculated for $n = 0, 10, 100, 500$ and $2000$. One can see that, with increasing $n$, the state distribution approaches the goal of differentiation, i.e., $\{\{\psi_1, \psi_2\}, \{0.7, 0.3\}\}$.
Figure 2 shows the behavior of the state entropy $D$, the von Neumann entropy, and $-\log(\mathrm{Tr}(\rho^2))$ for the distribution $\{\Psi_{\{j_1, j_2, \ldots, j_n\}}, P_{\{j_1, j_2, \ldots, j_n\}}\}$, calculated with the same setting of parameters. One can directly see that the inequality of Equation (18) is satisfied at every $n$. Note that the state entropy $D$ takes values close to the von Neumann entropy at very large $n$. In fact, as shown in Figure 3, the difference between the von Neumann entropy and the state entropy is noticeable mostly at the earlier stages. These results imply that the state distributions appearing in the differentiation process are, in general, non-classical.
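For readers who wish to reproduce statistics of this kind, here is a Monte Carlo sketch (our reconstruction; the authors do not publish their code) that samples branch states $\Psi_{\{j_1, \ldots, j_n\}}$ with the probabilities of Equation (30):

```python
import numpy as np

rng = np.random.default_rng(0)
nu = np.array([[np.sqrt(0.50), np.sqrt(0.45)],
               [np.sqrt(0.50), np.sqrt(0.55)]])
E = [np.diag(nu[j]) for j in range(2)]

def sample_branch(Psi, n):
    """Draw one outcome sequence j_1..j_n; return the branch state of Equation (31)."""
    for _ in range(n):
        w = np.array([np.linalg.norm(Ej @ Psi) ** 2 for Ej in E])  # <Psi|E_j^dag E_j|Psi>
        j = rng.choice(len(E), p=w / w.sum())
        Psi = E[j] @ Psi / np.sqrt(w[j])
    return Psi

Psi0 = np.array([np.sqrt(0.7), np.sqrt(0.3)])
samples = np.array([abs(sample_branch(Psi0, 100)[0]) ** 2 for _ in range(2000)])
print(samples.mean())           # stays ~0.7 = P_1 at every n: Tr(|psi_1><psi_1| rho(n)) = P_1
print((samples > 0.95).mean())  # fraction of branches already differentiated to psi_1
```

Histograms of such samples over the bins $(\frac{l-1}{20}, \frac{l}{20}]$ reproduce the qualitative behavior of Figure 1.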

7. Conclusions

The state entropy is a truly non-classical quantity because it depends not only on statistical probabilities but also on the similarities among states. The differentiation phenomenon is also non-classical, because it is interpreted as a dynamics of both the probabilities and the similarities. The definition of the state entropy and the modeling of the differentiation process are impossible in the framework of classical probability theory.
We believe that the evaluation of an ensemble of systems by the state entropy fits empirical reasoning: no matter how many systems are in the ensemble, we may not recognize high diversity if we know that their states are not very different. Further, we believe that, in various areas of nature, the dynamics of character change in a population of individuals is very much like the differentiation phenomenon. This strengthens the prospects of the quantum-like formalism.

Author Contributions

Conceptualization, M.A., I.B., E.M.P. and A.K.; Methodology, M.A. and A.K.; Validation, I.B., E.M.P. and A.K.; Writing-Original Draft Preparation, M.A.; Writing-Review & Editing, I.B., E.M.P. and A.K.

Acknowledgments

I.B. was supported by a Marie Curie Fellowship at City University of London, H2020-MSCA-IF-2015, grant N 696331.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. State Entropy and other Quantum Entropies

In this paper, we propose the state entropy
$$D(\{\langle\rho\rangle_{\Psi_i}, P_i\}) = \sum_{i=1}^{N} P_i\, \big(-\log(\mathrm{Tr}(\rho\, |\Psi_i\rangle\langle\Psi_i|))\big).$$
More generally, it is defined as
$$D(\{\langle\rho\rangle_{\Psi_i}, P_i\}) = \sum_{i=1}^{N} P_i\, f(\mathrm{Tr}(\rho\, |\Psi_i\rangle\langle\Psi_i|)),$$
using a convex and monotonically decreasing function $f(x)$. Mathematically, $D(\{\langle\rho\rangle_{\Psi_i}, P_i\})$ depends on the way $\rho$ is decomposed. As discussed in Section 5, for $\rho = \sum_{i=1}^{N} P_i\, |\Psi_i\rangle\langle\Psi_i|$, $f(\mathrm{Tr}(\rho\, |\Psi_i\rangle\langle\Psi_i|)) = f(\langle\rho\rangle_{\Psi_i})$ is interpreted as the degree of orthogonality between $|\Psi_i\rangle\langle\Psi_i|$ and $\rho$. In this sense, $D$ evaluates a sort of diversity in the state distribution $\{\Psi_i, P_i\}$, and it takes its maximal value, equal to $-\mathrm{Tr}(\rho\log\rho)$, for the Schatten decomposition (see Equation (18)).
On the other hand, there are many mathematical generalizations of the von Neumann entropy. As examples, the quantum version of the Rényi entropy [51],
$$R_{\alpha}(\rho) = \frac{1}{1-\alpha}\, \log(\mathrm{Tr}(\rho^{\alpha})),$$
and that of the Tsallis entropy [52],
$$T_{\alpha}(\rho) = \frac{1}{1-\alpha}\, \big(\mathrm{Tr}(\rho^{\alpha}) - 1\big),$$
are well known. These entropies approach the von Neumann entropy $S(\rho) = -\mathrm{Tr}(\rho\log\rho)$ in the limit $\alpha \to 1$. The index $\alpha$, called the entropic parameter, is nonnegative with $\alpha \neq 1$. Further, such generalized entropies are uniformly represented in the form of the quantum version of the Salicrú entropy [53,54], given by
$$H_{(h,\phi)}(\rho) = h\big(\mathrm{Tr}\, \phi(\rho)\big),$$
where the functions $h : \mathbb{R} \to \mathbb{R}$ and $\phi : [0,1] \to \mathbb{R}$ satisfy either of the following conditions: (i) $h$ is increasing and $\phi$ is concave; or (ii) $h$ is decreasing and $\phi$ is convex. For the Rényi entropy, $h(x) = \frac{\log(x)}{1-\alpha}$ and $\phi(x) = x^{\alpha}$; for the Tsallis entropy, $h(x) = \frac{x-1}{1-\alpha}$ and $\phi(x) = x^{\alpha}$. Of course, the von Neumann entropy is also recovered with $h(x) = x$ and $\phi(x) = -x\log x$.
Here, we have to point out that $H_{(h,\phi)}(\rho)$ is practically calculated as
$$H_{(h,\phi)}(\rho) = h\Big(\sum_{k=1}^{M}\phi(\lambda_k)\Big),$$
using the eigenvalues of $\rho$; that is, any quantum entropic measure reducible to the form $H_{(h,\phi)}(\rho)$ does not depend on the way $\rho$ is decomposed. The state entropy is different in this respect. Indeed, it is clear that $D(\{\langle\rho\rangle_{\Psi_i}, P_i\})$ is not recovered in the form of $H_{(h,\phi)}(\rho)$.
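For comparison, a sketch of the quantum Rényi and Tsallis entropies (standard formulas, with an arbitrary example state); both depend on the spectrum of $\rho$ alone, unlike $D$:

```python
import numpy as np

def renyi(rho, alpha):
    lam = np.linalg.eigvalsh(rho)
    return np.log(np.sum(lam ** alpha)) / (1.0 - alpha)

def tsallis(rho, alpha):
    lam = np.linalg.eigvalsh(rho)
    return (np.sum(lam ** alpha) - 1.0) / (1.0 - alpha)

rho = np.diag([0.7, 0.3])
S_vn = -np.sum(np.diag(rho) * np.log(np.diag(rho)))   # von Neumann entropy, ~0.611
for a in (0.5, 0.999, 2.0):                           # alpha -> 1 recovers S_vn
    print(a, renyi(rho, a), tsallis(rho, a))
```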

References

  1. Von Neumann, J. Thermodynamik quantenmechanischer Gesamtheiten. Gott. Nach. 1927, 1, 273–291. [Google Scholar]
  2. Von Neumann, J. Mathematische Grundlagen der Quantenmechanik; Springer: Berlin, Germany, 1932. [Google Scholar]
  3. Horodecki, M.; Oppenheim, J.; Winter, A. Partial quantum information. Nature 2005, 436, 673–676. [Google Scholar] [CrossRef] [PubMed]
  4. Umegaki, H. Conditional expectations in an operator algebra IV (entropy and information). Kodai Math. Sem. Rep. 1962, 14, 59–85. [Google Scholar] [CrossRef]
  5. Ohya, M. Fundamentals of Quantum Mutual Entropy and Capacity. Open Syst. Inf. Dyn. 1999, 6, 69–78. [Google Scholar] [CrossRef]
  6. Asano, M.; Basieva, I.; Khrennikov, A.; Yamato, I. A model of differentiation in quantum bioinformatics. Prog. Biophys. Mol. Biol. 2017, 130, 88–98. [Google Scholar] [CrossRef] [PubMed]
  7. Zurek, W.H. Decoherence and the Transition from Quantum to Classical. Phys. Today 1991, 44, 36–44. [Google Scholar] [CrossRef]
  8. Lindblad, G. On the generators of quantum dynamical semigroups. Commun. Math. Phys. 1976, 48, 119. [Google Scholar] [CrossRef]
  9. Gorini, V.; Kossakowski, A.; Sudarshan, E.C.G. Completely positive semigroups of N-level systems. J. Math. Phys. 1976, 17, 821. [Google Scholar] [CrossRef]
  10. Khrennikov, A. Classical and quantum mechanics on information spaces with applications to cognitive, psychological, social and anomalous phenomena. Found. Phys. 1999, 29, 1065–1098. [Google Scholar] [CrossRef]
  11. Khrennikov, A. Quantum-like formalism for cognitive measurements. Biosystems 2003, 70, 211–233. [Google Scholar] [CrossRef]
  12. Khrennikov, A. On quantum-like probabilistic structure of mental information. Open Syst. Inf. Dyn. 2014, 11, 267–275. [Google Scholar] [CrossRef]
  13. Khrennikov, A. Information Dynamics in Cognitive, Psychological, Social, and Anomalous Phenomena; Ser.: Fundamental Theories of Physics; Kluwer: Dordrecht, The Netherlands, 2004. [Google Scholar]
  14. Busemeyer, J.B.; Wang, Z.; Townsend, J.T. Quantum dynamics of human decision making. J. Math. Psychol. 2006, 50, 220–241. [Google Scholar] [CrossRef]
  15. Haven, E. Private information and the ‘information function’: A survey of possible uses. Theory Decis. 2008, 64, 193–228. [Google Scholar] [CrossRef]
  16. Yukalov, V.I.; Sornette, D. Processing Information in Quantum Decision Theory. Entropy 2009, 11, 1073–1120. [Google Scholar] [CrossRef]
  17. Khrennikov, A. Ubiquitous Quantum Structure: From Psychology to Finances; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 2010. [Google Scholar]
  18. Asano, M.; Ohya, M.; Tanaka, Y.; Khrennikov, A.; Basieva, I. Quantum-like model of brain’s functioning: Decision making from decoherence. J. Theor. Biol. 2011, 281, 56–64. [Google Scholar] [CrossRef] [PubMed]
  19. Busemeyer, J.R.; Pothos, E.M.; Franco, R.; Trueblood, J. A quantum theoretical explanation for probability judgment errors. Psychol. Rev. 2011, 118, 193–218. [Google Scholar] [CrossRef] [PubMed]
  20. Asano, M.; Ohya, M.; Khrennikov, A. Quantum-Like Model for Decision Making Process in Two Players Game—A Non-Kolmogorovian Model. Found. Phys. 2011, 41, 538–548. [Google Scholar] [CrossRef]
  21. Asano, M.; Ohya, M.; Tanaka, Y.; Khrennikov, A.; Basieva, I. Dynamics of entropy in quantum-like model of decision making. AIP Conf. Proc. 2011, 1327, 63. [Google Scholar]
  22. Bagarello, F. Quantum Dynamics for Classical Systems: With Applications of the Number Operator; Wiley: New York, NY, USA, 2012. [Google Scholar]
  23. Busemeyer, J.R.; Bruza, P.D. Quantum Models of Cognition and Decision; Cambridge Press: Cambridge, UK, 2012. [Google Scholar]
  24. Asano, M.; Basieva, I.; Khrennikov, A.; Ohya, M.; Tanaka, Y. Quantum-like dynamics of decision-making. Phys. A Stat. Mech. Appl. 2010, 391, 2083–2099. [Google Scholar] [CrossRef]
  25. De Barros, J.A. Quantum-like model of behavioral response computation using neural oscillators. Biosystems 2012, 110, 171–182. [Google Scholar] [CrossRef] [PubMed]
  26. Asano, M.; Basieva, I.; Khrennikov, A.; Ohya, M.; Tanaka, Y. Quantum-like generalization of the Bayesian updating scheme for objective and subjective mental uncertainties. J. Math. Psychol. 2012, 56, 166–175. [Google Scholar] [CrossRef]
  27. De Barros, J.A.; Oas, G. Negative probabilities and counter-factual reasoning in quantum cognition. Phys. Scr. 2014, T163, 014008. [Google Scholar] [CrossRef]
  28. Wang, Z.; Busemeyer, J.R. A quantum question order model supported by empirical tests of an a priori and precise prediction. Top. Cogn. Sci. 2013, 5, 689–710. [Google Scholar] [PubMed]
  29. Dzhafarov, E.N.; Kujala, J.V. On selective influences, marginal selectivity, and Bell/CHSH inequalities. Top. Cogn. Sci. 2014, 6, 121–128. [Google Scholar] [CrossRef] [PubMed]
  30. Wang, Z.; Solloway, T.; Shiffrin, R.M.; Busemeyer, J.R. Context effects produced by question orders reveal quantum nature of human judgments. Proc. Natl. Acad. Sci. USA 2014, 111, 9431–9436. [Google Scholar] [CrossRef] [PubMed]
  31. Khrennikov, A. Quantum-like modeling of cognition. Front. Phys. 2015, 3, 77. [Google Scholar] [CrossRef]
  32. Boyer-Kassem, T.; Duchene, S.; Guerci, E. Testing quantum-like models of judgment for question order effect. Math. Soc. Sci. 2016, 80, 33–46. [Google Scholar] [CrossRef]
  33. Asano, M.; Basieva, I.; Khrennikov, A.; Ohya, M.; Tanaka, Y. A Quantum-like Model of Selection Behavior. J. Math. Psychol. 2016. [Google Scholar] [CrossRef]
  34. Yukalov, V.I.; Sornette, D. Quantum Probabilities as Behavioral Probabilities. Entropy 2017, 19, 112. [Google Scholar] [CrossRef]
  35. Igamberdiev, A.U.; Shklovskiy-Kordi, N.E. The quantum basis of spatiotemporality in perception and consciousness. Prog. Biophys. Mol. Biol. 2017, 130, 15–25. [Google Scholar] [CrossRef] [PubMed]
  36. De Barros, J.A.; Holik, F.; Krause, D. Contextuality and indistinguishability. Entropy 2017, 19, 435. [Google Scholar] [CrossRef]
  37. Bagarello, F.; Di Salvo, R.; Gargano, F.; Oliveri, F. (H,ρ)-induced dynamics and the quantum game of life. Appl. Math. Mod. 2017, 43, 15–32. [Google Scholar] [CrossRef]
  38. Takahashi, K.S.-J.; Makoto, N. A note on the roles of quantum and mechanical models in social biophysics. Prog. Biophys. Mol. Biol. 2017, 130 Pt A, 103–105. [Google Scholar] [CrossRef] [PubMed]
  39. Asano, M.; Basieva, I.; Khrennikov, A.; Ohya, M.; Tanaka, Y.; Yamato, I. Quantum-like model of diauxie in Escherichia coli: Operational description of precultivation effect. J. Theor. Biol. 2012, 314, 130–137. [Google Scholar] [CrossRef] [PubMed]
  40. Accardi, L.; Ohya, M. Compound channels, transition expectations, and liftings. Appl. Math. Optim. 1999, 39, 33–59. [Google Scholar] [CrossRef]
  41. Asano, M.; Basieva, I.; Khrennikov, A.; Ohya, M.; Tanaka, Y.; Yamato, I. A model of epigenetic evolution based on theory of open quantum systems. Syst. Synth. Biol. 2013, 7, 161. [Google Scholar] [CrossRef] [PubMed]
  42. Asano, M.; Hashimoto, T.; Khrennikov, A.; Ohya, M.; Tanaka, Y. Violation of contextual generalization of the Leggett-Garg inequality for recognition of ambiguous figures. Phys. Scr. 2014, T163. [Google Scholar] [CrossRef]
  43. Asano, M.; Basieva, I.; Khrennikov, A.; Ohya, M.; Tanaka, Y.; Yamato, I. Quantum Information Biology: From Information Interpretation of Quantum Mechanics to Applications in Molecular Biology and Cognitive Psychology. Found. Phys. 2015, 45, 1362. [Google Scholar] [CrossRef]
  44. Asano, M.; Khrennikov, A.; Ohya, M.; Tanaka, Y.; Yamato, I. Three-body system metaphor for the two-slit experiment and Escherichia coli lactose-glucose metabolism. Philos. Trans. R. Soc. A 2016. [Google Scholar] [CrossRef] [PubMed]
  45. Ohya, M.; Volovich, I. Mathematical Foundations of Quantum Information and Computation and its Applications to Nano- and Bio-Systems; Springer: Berlin, Germany, 2011. [Google Scholar]
  46. Asano, M.; Khrennikov, A.; Ohya, M.; Tanaka, Y.; Yamato, I. Quantum Adaptivity in Biology: From Genetics to Cognition; Springer: Berlin, Germany, 2015. [Google Scholar]
  47. Oaksford, M.; Chater, N. A Rational Analysis of the Selection Task as Optimal Data Selection. Psychol. Rev. 1994, 101, 608–631. [Google Scholar] [CrossRef]
  48. Pothos, E.M.; Chater, N. A simplicity principle in unsupervised human categorization. Cogn. Sci. 2002, 26, 303–343. [Google Scholar] [CrossRef]
  49. Miller, G.A. Free Recall of Redundant Strings of Letters. J. Exp. Psychol. 1958, 56, 485–491. [Google Scholar] [CrossRef] [PubMed]
  50. Jamieson, R.K.; Mewhort, D.J.K. The influence of grammatical, local, and organizational redundancy on implicit learning: An analysis using information theory. J. Exp. Psychol. Learn. Mem. Cogn. 2005, 31, 9–23. [Google Scholar] [CrossRef] [PubMed]
  51. Rényi, A. On measures of entropy and information. In Proceedings of the 4th Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA, USA, 20–30 July 1960; Volume 1, p. 547. [Google Scholar]
  52. Tsallis, C. Possible generalization of Boltzmann–Gibbs statistics. J. Stat. Phys. 1988, 52, 479. [Google Scholar] [CrossRef]
  53. Salicrú, M.; Menéndez, M.L.; Morales, D.; Pardo, L. Asymptotic distribution of (h,ϕ)-entropies. Commun. Stat. Theory Methods 1993, 22, 2015. [Google Scholar] [CrossRef]
  54. Bosyk, G.M.; Zozor, S.; Holik, F.; Portesi, M.; Lamberti, P.W. A family of generalized quantum entropies: Definition and properties. Quantum Inf. Proc. 2016, 15, 3393–3420. [Google Scholar] [CrossRef]
Figure 1. Histograms of the population rates of states with $\frac{l-1}{20} < |\langle\psi_1|\Psi_{\{j_1, j_2, \ldots, j_n\}}\rangle|^2 \leq \frac{l}{20}$ ($l = 1, 2, \ldots, 20$) for $n = 0, 10, 100, 500$ and $2000$. The parameters are $M = L = 2$, $|\Psi\rangle = \sqrt{0.7}\,|\psi_1\rangle + \sqrt{0.3}\,|\psi_2\rangle$ ($P_1 = 0.7$, $P_2 = 0.3$), $\nu_{1|1} = \sqrt{0.5}$ ($\nu_{2|1} = \sqrt{0.5}$) and $\nu_{1|2} = \sqrt{0.45}$ ($\nu_{2|2} = \sqrt{0.55}$). If $\Psi_{\{j_1, j_2, \ldots, j_n\}} \approx \psi_1$ ($\psi_2$), then $|\langle\psi_1|\Psi_{\{j_1, j_2, \ldots, j_n\}}\rangle|^2$ takes a value near 1 (0). With increasing $n$, the state distribution approaches $\{\{\psi_1, \psi_2\}, \{0.7, 0.3\}\}$.
Figure 2. Behavior of the state entropy, the von Neumann entropy, and $-\log(\mathrm{Tr}(\rho^2))$ for the parameters $M = L = 2$, $|\Psi\rangle = \sqrt{0.7}\,|\psi_1\rangle + \sqrt{0.3}\,|\psi_2\rangle$ ($P_1 = 0.7$, $P_2 = 0.3$), $\nu_{1|1} = \sqrt{0.5}$ ($\nu_{2|1} = \sqrt{0.5}$) and $\nu_{1|2} = \sqrt{0.45}$ ($\nu_{2|2} = \sqrt{0.55}$).
Figure 3. Difference between the von Neumann entropy and the state entropy.
