Article

Probability Bracket Notation for Probability Modeling

Xing M. Wang 1 and Tony C. Scott 2
1 Sherman Visual Lab, Sunnyvale, CA 94085, USA
2 Institut für Physikalische Chemie, RWTH Aachen University, 52056 Aachen, Germany
* Author to whom correspondence should be addressed.
Current address: Sathorn St-View, 201/1 St Louise 1 Alley, Khwaeng Yan Nawa, Sathon, Bangkok 10120, Thailand.
Axioms 2024, 13(8), 564; https://doi.org/10.3390/axioms13080564
Submission received: 20 April 2024 / Revised: 16 July 2024 / Accepted: 13 August 2024 / Published: 20 August 2024
(This article belongs to the Special Issue Stochastic Processes in Quantum Mechanics and Classical Physics)

Abstract

Following Dirac’s notation in Quantum Mechanics (QM), we propose the Probability Bracket Notation (PBN), by defining a probability-bra (P-bra), P-ket, P-bracket, P-identity, etc. Using the PBN, many formulae, such as normalizations and expectations in systems of one or more random variables, can now be written in abstract basis-independent expressions, which are easy to expand by inserting a proper P-identity. The time evolution of homogeneous Markov processes can also be formatted in such a way. Our system P-kets are identified with probability vectors and our P-bra system is comparable with Doi’s state function or Peliti’s standard bra. In the Heisenberg picture of the PBN, a random variable becomes a stochastic process, and the Chapman–Kolmogorov equations are obtained by inserting a time-dependent P-identity. Also, some QM expressions in Dirac notation are naturally transformed to probability expressions in PBN by a special Wick rotation. Potential applications show the usefulness of the PBN beyond the constrained domain and range of Hermitian operators on Hilbert Spaces in QM all the way to IT.

1. Introduction

The postulates of Quantum Mechanics (QM) were established in terms of Hilbert spaces and linear Hermitian operators. Values of physical observables such as energy and momentum were considered as eigenvalues, more precisely as spectral values of linear operators in a Hilbert space. Dirac’s Vector Bracket Notation (VBN) is a very powerful tool to manipulate vectors in Hilbert spaces. It has been widely used in QM and quantum field theories. The main beauty of the VBN is that many formulae can be presented in an abstract symbolic fashion, independent of state expansions or basis selections, which, when needed, can be easily performed by inserting an appropriate v-identity operator ([1] p. 96).
$$ \text{v-bracket: } \langle \psi_A | \psi_B \rangle, \qquad \text{v-bra: } \langle \psi_A |, \qquad \text{v-ket: } | \psi_B \rangle $$
$$ \text{where: } \langle \psi_A | \psi_B \rangle = \langle \psi_B | \psi_A \rangle^{*}, \qquad \langle \psi_A | = \big( | \psi_A \rangle \big)^{\dagger} $$
$$ \text{v-basis \& v-identity: } \hat{H}\,|\varepsilon_i\rangle = \varepsilon_i\,|\varepsilon_i\rangle, \qquad \langle \varepsilon_i | \varepsilon_j \rangle = \delta_{ij}, \qquad \hat{I}_H = \sum_i |\varepsilon_i\rangle\langle\varepsilon_i| $$
$$ \text{Normalization: } 1 = \langle \Psi | \Psi \rangle = \langle \Psi |\, \hat{I}_H \,| \Psi \rangle = \sum_i \langle \Psi | \varepsilon_i \rangle \langle \varepsilon_i | \Psi \rangle = \sum_i |c_i|^2 $$
$$ \text{Expectation: } \langle H \rangle \equiv \bar{H} \equiv \langle \Psi |\, \hat{H} \,| \Psi \rangle = \sum_i \langle \Psi |\, \hat{H} \,| \varepsilon_i \rangle \langle \varepsilon_i | \Psi \rangle = \sum_i \varepsilon_i\, |c_i|^2 $$
where † denotes the Hermitian adjoint and ∗ the complex conjugate. However, when applying operators, one must ensure that the results remain within the Hilbert space or an appropriate subspace, and one must also consider whether an operator is bounded or unbounded, etc. Thus, one has to develop an entire spectral theory for Hermitian operators in a Hilbert space [2].
Inspired by the great success of the VBN in QM, we now propose the Probability Bracket Notation (PBN). The latter relies on a sample space, which is less constrained than a Hilbert space. Assume X is a random variable (R.V), Ω is the set of all its outcomes, and P(x_i) is the probability of x_i ∈ Ω. Then we can write an expression like Equation (4):
$$ \sum_i P(x_i) = \sum_i P(x_i|\Omega) = \sum_i P(\Omega|x_i)\, P(x_i|\Omega) = P\Big(\Omega\,\Big|\, \textstyle\sum_i |x_i)P(x_i|\,\Big|\,\Omega\Big) = P(\Omega|\Omega) = 1 $$
Here we have used the definition of conditional probability for A , B Ω ([3] p. 91).
$$ P(A|B) \equiv P(A \cap B)/P(B), \qquad P(x_i|\Omega) = P(x_i), \qquad P(\Omega|x_i) = 1 \ \text{ for } x_i \in \Omega $$
Therefore, we seem to have discovered a probability “identity operator”:
$$ I_X \equiv \sum_i |x_i)\,P(x_i| \;\Rightarrow\; P(\Omega|\Omega) = P(\Omega|\,I_X\,|\Omega) = \sum_i P(\Omega|x_i)\,P(x_i|\Omega) \overset{(6)}{=} \sum_i P(x_i) = 1 $$
Then, following Dirac’s notation, we define the probability bra (P-bra), P-ket, P-bracket (as conditional probabilities by nature), P-basis, the system P-ket, P-identity, normalization, expectation and more, which are similar but not identical to their counterparts in Equations (1)–(5). In Section 2, for systems of one R . V , we show that the PBN has an advantage similar to that of the VBN: miscellaneous probability expressions [3,4,5] now can be presented in an abstract way, independent of P-basis, and can be expanded by inserting a suitable P-identity.
Next, in Section 3, we investigate the time evolution of homogeneous Markov chains (HMC) [3,4,5]. We realize that the time evolution of a continuous-time HMC can be written in a symbolic abstract expression (in Section 3.2), just like the Schrödinger equation of a conserved quantum system in the VBN:
$$ i\hbar\,\partial_t\,|\Psi(t)\rangle = \hat{H}\,|\Psi(t)\rangle = \left[ -\frac{\hbar^2}{2m}\,\partial_x^2 + V(x) \right] |\Psi(t)\rangle, \qquad |\Psi(t)\rangle = U(t)\,|\Psi(0)\rangle = \exp(-i\hat{H}t/\hbar)\,|\Psi(0)\rangle $$
We also find that our time-dependent system P-kets can be identified with probability vectors [3]. Our system P-bra is closely related to the state function or standard bra introduced in Doi–Peliti Techniques [6,7,8]. We show that by transforming from the Schrödinger picture to the Heisenberg picture in the PBN, the time dependence of a system P-ket relocates to the R . V , which becomes a stochastic process; the Chapman–Kolmogorov Theorem [4,5,9] for transition probabilities can be derived by just inserting a time-dependent P-identity. Section 5 shows that a Schrödinger equation transforms to a master equation by making a special Wick rotation. Section 6 showcases the potential applications of the PBN, such as handling non-Hermitian operators and clustering text datasets. Discussion and concluding remarks are made at the end.

2. Probability Bracket Notation and Random Variable (R.V)

2.1. Discrete Random Variable

We define a probability space ( Ω , X , P ) of a discrete random variable ( R.V, or observable) X as follows: the set of all elementary events ω associated with X is the sample space Ω , and
$$ \text{For } \omega_i \in \Omega:\quad X(\omega_i) = x_i, \qquad P:\ \omega_i \mapsto P(\omega_i) = m(\omega_i) \ge 0, \qquad \sum_i m(\omega_i) = 1 $$
Definition 1.
(Probability event-bra and evidence-ket): Let A ⊆ Ω and B ⊆ Ω:
1. 
The symbol P ( A | represents a probability event bra, or P-bra;
2. 
The symbol | B ) represents a probability evidence ket, or P-ket.
Definition 2.
(Probability Event Evidence Bracket): The conditional probability (CP) of event A given evidence B in the sample space Ω can be treated as a P-bracket, and it can be split into a P-bra and a P-ket, similar to a Dirac bracket. For A, B ⊆ Ω, we define:
$$ \text{P-bracket: } P(A|B), \qquad \text{P-bra: } P(A|, \qquad \text{P-ket: } |B) $$
$$ \text{Note: } P(A| \neq \big[\,|A)\,\big]^{\dagger}; \qquad \text{here } P(A|B) \equiv \frac{P(A \cap B)}{P(B)} \ \text{(a conditional probability by nature)} $$
As a CP, the P-bracket has the following properties for A, B ⊆ Ω:
$$ P(A|B) = 1 \quad \text{if } A \supseteq B $$
$$ P(A|B) = 0 \quad \text{if } A \cap B = \varnothing $$
Definition 3.
System P-ket:  For any subset E ⊆ Ω, the probability P ( E ) can be written as a conditional probability, or a P-bracket:
$$ P(E) = P(E|\Omega) $$
Here | Ω ) is called the system P-ket. The P-bracket defined in Equation (9) now becomes:
$$ P(A|B) = P(A \cap B)/P(B) = P(A \cap B\,|\,\Omega)\,/\,P(B\,|\,\Omega) $$
We have the following important property expressed in PBN:
$$ \text{For } B \subseteq \Omega \ \text{and} \ B \neq \varnothing:\qquad P(\Omega|B) = 1 $$
The Bayes formula (see [3] Sec. (2.1)) now can be expressed as:
$$ P(A|B) = P(B|A)\,P(A)/P(B) = P(B|A)\,P(A|\Omega)\,/\,P(B|\Omega) $$
The set of all elementary events in Ω forms a complete mutually disjoint set:
$$ \bigcup_{\omega_i \in \Omega} \omega_i = \Omega, \qquad \omega_i \cap \omega_j = \delta_{ij}\,\omega_i, \qquad \sum_i m(\omega_i) = 1 . $$
Definition 4.
(Discrete P-Basis and P-Identity): Using Equations (8)–(11), we have the following properties for basis elements in ( Ω , X , P ) :
$$ X(\omega_j) = x_j \;\Rightarrow\; X\,|\omega_j) = x_j\,|\omega_j), \qquad P(\Omega|\omega_j) = 1, \qquad P(\omega_i|\Omega) = m(\omega_i) $$
In view of the one-to-one correspondence between x_j and ω_j, from now on we will use x_j to label the basis elements, just like labeling eigenstates in Equation (3) in the VBN for QM:
$$ X(x_j) = x_j \;\Rightarrow\; X\,|x_j) = x_j\,|x_j), \qquad P(\Omega|x_j) = 1, \qquad P(x_i|\Omega) = m(x_i) $$
Here X behaves like a right-acting operator. The complete mutually disjoint events in (16)–(18) form a probability basis (or P-basis) and a P-identity, similar to Equation (3) in QM:
$$ P(x_i|x_k) = \delta_{ik}, \qquad \sum_{x \in \Omega} |x)\,P(x| = \sum_i |x_i)\,P(x_i| = I_X . $$
The system P-ket, | Ω ) , now can be expanded from the left as:
$$ |\Omega) = I_X\,|\Omega) = \sum_i |x_i)\,P(x_i|\Omega) = \sum_i m(x_i)\,|x_i) $$
The system P-bra, on the other hand, has its expansion from the right:
$$ P(\Omega| = P(\Omega|\,I_X = \sum_i P(\Omega|x_i)\,P(x_i| \overset{(14)}{=} \sum_i P(x_i| $$
The two expansions are quite different, and $P(\Omega| \neq [\,|\Omega)\,]^{\dagger}$. But their P-bracket is consistent with the requirement of normalization, similar to Equation (4) in the VBN:
$$ 1 = P(\Omega) \equiv P(\Omega|\Omega) = \sum_{i,j=1}^{N} P(x_i|\, m(x_j)\,|x_j) = \sum_{i,j=1}^{N} m(x_j)\,\delta_{ij} = \sum_{i=1}^{N} m(x_i) $$
Definition 5.
(Expectation Value): Analogous to Equation (5) in QM, the expected value of the R . V or observable X in ( Ω , X , P ) now can be expressed as:
$$ \langle X \rangle \equiv \bar{X} \equiv E[X] = P(\Omega|\,X\,|\Omega) = \sum_{x \in \Omega} P(\Omega|\,X\,|x)\,P(x|\Omega) = \sum_{x \in \Omega} x\, m(x) $$
If f ( X ) is a continuous function of observable X, then it is easy to show that:
$$ \langle f(X) \rangle \equiv E[f(X)] \equiv P(\Omega|\,f(X)\,|\Omega) = \sum_{x \in \Omega} f(x)\, m(x) $$

2.2. Independent Random Variables

Let X = { X_1, X_2, …, X_n } be a vector of independent random variables, where the sample space (i.e., the set of possible outcomes) of X_i is the set Ω_i. Then the joint probability distribution can be denoted as:
$$ P(x_1, \ldots, x_n\,|\,\Omega) = P(X_1 = x_1, \ldots, X_n = x_n\,|\,\Omega), \qquad |\Omega) = |\Omega_1 \cdots \Omega_n) $$
The joint probabilities of independent  R.V are factorizable, e.g.:
$$ P(x_i, x_k\,|\,\Omega) = P(x_i|\Omega_i)\,P(x_k|\Omega_k) = P(x_i)\,P(x_k) $$
A factorizable system has factorizable expectation values, for example:
$$ P(\Omega|\,X_i X_j X_k\,|\Omega) = P(\Omega_i|\,X_i\,|\Omega_i)\,P(\Omega_j|\,X_j\,|\Omega_j)\,P(\Omega_k|\,X_k\,|\Omega_k) $$
As an example, in Fock space, we have the following basis from the occupation numbers:
$$ N_i\,|n) = n_i\,|n), \qquad P(n'|n) = \delta_{n',n} = \prod_i \delta_{n'_i, n_i}, \qquad I_n = \sum_n |n)\,P(n| = \prod_i \sum_{n_i} |n_i)\,P(n_i| $$
The expectation value of an occupation number now is given by:
$$ \langle N_i \rangle \equiv P(\Omega|\,N_i\,|\Omega) = P(\Omega_i|\,N_i\,|\Omega_i) = P(\Omega_i|\,N_i I_i\,|\Omega_i) = \sum_{n_i} n_i\, P(n_i|\Omega_i) $$
Moreover, if sets A and B are mutually independent in Ω , we have the following equivalence:
$$ P(A|B) = P(A|\Omega) \iff P(A \cap B\,|\,\Omega) = P(A|\Omega)\,P(B|\Omega) $$

2.3. Continuous P-Basis and P-Identity

Equations (17)–(19) can be extended to the probability space ( Ω , X , P ) of a continuous random variable X,
$$ X(x) = x \;\Rightarrow\; X\,|x) = x\,|x), \qquad P(\Omega|x) = 1, \qquad P:\ x \mapsto P(x) \equiv P(x|\Omega) $$
$$ P(x'|x) = \delta(x' - x), \qquad \int_{x \in \Omega} |x)\,dx\,P(x| = I_X $$
We see that it is consistent with the normalization requirement:
$$ P(\Omega|\Omega) = P(\Omega|\,I_X\,|\Omega) = \int P(\Omega|x)\,dx\,P(x|\Omega) = \int_{x\in\Omega} dx\, P(x|\Omega) = \int_{x\in\Omega} dx\, P(x) = 1 $$
The expected value E ( X ) can be easily extended from (23):
$$ \langle X \rangle \equiv \bar{X} \equiv E[X] = P(\Omega|\,X\,|\Omega) = \int_{x\in\Omega} P(\Omega|\,X\,|x)\,dx\,P(x|\Omega) = \int_{x\in\Omega} dx\, x\, P(x) $$
The basis-independent expressions in the PBN are similar to those in the Dirac VBN, and among them are the expectation and normalization formulae in the two notations:
$$ \text{PBN:}\quad \langle f(X) \rangle \equiv E[f(X)] = P(\Omega|\,f(X)\,|\Omega), \qquad P(\Omega|\Omega) = 1 , $$
$$ \text{VBN:}\quad \langle f(\hat{X}) \rangle \equiv E[f(\hat{X})] = \langle \psi|\,f(\hat{X})\,|\psi\rangle, \qquad \langle \psi|\psi\rangle = 1 . $$
Of course, there exist differences between the PBN and Dirac VBN. For example, the expansions of bra, ket and normalization have their own expressions:
$$ \text{PBN:}\quad |\Omega) = \int dx\,|x)\,P(x|\Omega), \qquad P(\Omega| = \int dx\,P(x|, \qquad P(\Omega|\Omega) = \int dx\,P(x|\Omega) = 1 , $$
$$ \text{VBN:}\quad |\psi\rangle = \int dx\,|x\rangle\langle x|\psi\rangle, \qquad \langle\psi| = \int dx\,\langle\psi|x\rangle\langle x|, \qquad \langle\psi|\psi\rangle = \int dx\,|\langle\psi|x\rangle|^2 = 1 . $$

2.4. Conditional Probability and Expectation

The conditional expectation of X given H ⊆ Ω in the continuous basis (31) can be expressed as ([5] p. 61):
$$ E[X|H] \equiv P(\Omega|\,X\,|H) = \int P(\Omega|\,X\,|x)\,dx\,P(x|H) = \int x\,dx\,P(x|H) $$
$$ \text{where: } P(x|H) \overset{(9)}{=} \frac{P(x \cap H\,|\,\Omega)}{P(H\,|\,\Omega)} \quad \text{(conditional probability)} $$
To become familiar with the PBN, let us see two simple examples.

2.4.1. Example: Rolling a Die (Ref. [4] Examples 2.6–2.8)

A die is rolled once. We let X denote the outcome of this experiment. Then the sample space for this experiment is the six-element set Ω = { 1, 2, 3, 4, 5, 6 }. We assume the die is fair and choose the distribution function defined by m(i) = 1/6 for i = 1, …, 6. Using the PBN, we have the P-identity for this sample space and the probability of each outcome:
$$ \sum_{i=1}^{6} |i)\,P(i| = I_D , $$
$$ 1 = P(\Omega|\Omega) = \sum_{i=1}^{6} P(\Omega|i)\,P(i|\Omega) = \sum_{i=1}^{6} P(i|\Omega) = 6p . $$
Hence, the probability for each outcome has the same value:
$$ P(i) \equiv P(i|\Omega) = p = 1/6 . $$
Its expectation value can be readily calculated:
$$ P(\Omega|\,X\,|\Omega) = \sum_{i=1}^{6} P(\Omega|\,X\,|i)\,P(i|\Omega) = \sum_{i=1}^{6} P(\Omega|\,i\,|i)\,P(i|\Omega) = \sum_{i=1}^{6} \frac{i}{6} = \frac{21}{6} = \frac{7}{2} , $$
and the variance can be calculated as:
$$ \sigma^2 = \langle X^2 \rangle - \bar{X}^2 = \sum_{i=1}^{6} \frac{i^2}{6} - \frac{49}{4} = \frac{91}{6} - \frac{49}{4} = \frac{35}{12} $$

2.4.2. Example: Rolling a Die, Continued ([4] Example 2.8 Continued)

If E is the event that the result of the roll is an even number, then E = { 2, 4, 6 } and P(E) = m(2) + m(4) + m(6) = 1/6 + 1/6 + 1/6 = 1/2. Using the PBN, the probability of event E can be easily calculated as:
$$ P(E) \equiv P(E|\Omega) = P(E|\,\hat{I}\,|\Omega) = \sum_{i \in E} P(E|i)\,P(i|\Omega) = \sum_{i \in E} P(i|\Omega) = 3p = \frac{3}{6} = \frac{1}{2} $$
Applying Equation (40), we can calculate the conditional probabilities P ( i | E ) as follows:
$$ P(i|E) = \frac{P(E \cap i\,|\,\Omega)}{P(E\,|\,\Omega)} = \begin{cases} P(i|\Omega)/(1/2) = (1/6)/(1/2) = 1/3, & i \ \text{even}, \\ 0, & i \ \text{odd}. \end{cases} $$
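To make the PBN manipulations above concrete, here is a minimal numerical sketch of the two die examples, using plain Python dictionaries as stand-ins for the P-kets; the names (omega, m, E) are illustrative choices, not notation from the paper.

```python
from fractions import Fraction

omega = range(1, 7)                          # sample space of the die
m = {i: Fraction(1, 6) for i in omega}       # P(i|Omega) = m(i) = 1/6

# Normalization: P(Omega|Omega) = sum_i P(i|Omega) = 1
assert sum(m.values()) == 1

# Expectation and variance: P(Omega|X|Omega) = 7/2, sigma^2 = 35/12
x_bar = sum(i * m[i] for i in omega)
variance = sum(i * i * m[i] for i in omega) - x_bar ** 2
print(x_bar, variance)                       # 7/2 35/12

# Conditional probabilities P(i|E) for the even event E = {2, 4, 6}
E = {2, 4, 6}
P_E = sum(m[i] for i in E)                   # = 1/2
P_i_given_E = {i: (m[i] / P_E if i in E else Fraction(0)) for i in omega}
print(P_i_given_E)                           # 1/3 for even i, 0 otherwise
```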
Our discussions above can be easily extended to systems of multiple R . V . For example, we can introduce the following system of three independent discrete R . V :
$$ X\,|x,y,z) = x\,|x,y,z), \qquad Y\,|x,y,z) = y\,|x,y,z), \qquad Z\,|x,y,z) = z\,|x,y,z) $$
$$ P(x,y,z) = P(x,y,z\,|\,\Omega); \qquad P(\Omega|\Omega) = 1; \qquad P(\Omega|H) = 1 \ \text{if} \ \varnothing \neq H \subseteq \Omega \;\Rightarrow\; P(\Omega|x,y,z) = 1 $$
$$ \text{Orthonormality:}\quad P(x',y',z'\,|\,x,y,z) = \delta_{x'x}\,\delta_{y'y}\,\delta_{z'z} $$
$$ \text{Completeness:}\quad \sum_{x,y,z} |x,y,z)\,P(x,y,z| = I_{X,Y,Z} $$

3. Probability Vectors and Homogeneous Markov Chains (HMCs)

For simplicity, from now on, we will only discuss the time evolution for one R.V. We assume our probability space Ω has the following stationary discrete P-basis from a random variable $\hat{N}$ (which may serve for state labeling or occupation-number counting):
$$ \hat{N}\,|i) = i\,|i), \qquad P(i|j) = \delta_{ij}, \qquad \sum_{i=1}^{N} |i)\,P(i| = I $$

3.1. Discrete-Time HMC

The transition matrix element  p i j is defined as ([3] p. 407):
$$ p_{ij} \equiv P(N_{t+1} = j\,|\,N_t = i) \equiv P(j, t+1\,|\,i, t), \qquad \sum_{j=1}^{N} p_{ij} = 1 $$
In matrix form, if we define a probability row vector (PRV) at t = 0 as (u(0)|, then the matrix P acting on it from the right k times gives the PRV at time k ([3] Theorem 11.2):
$$ (u(k)| = (u(0)|\,P^{k}, \qquad \text{or:} \qquad (u(k)|i) = u(k)_i = \sum_j u(0)_j\,(P^{k})_{ji} $$
Definition 6.
(Time-Dependent System P-ket): We use the following system P-ket to represent a probability column vector in the probability space ( Ω , X , P ):
$$ |\Omega_t) = \sum_{i}^{N} |i)\,P(i|\Omega_t) = \sum_{i}^{N} m(i,t)\,|i), \qquad P(\Omega|\Omega_t) = \sum_{i}^{N} m(i,t) = 1 $$
The time evolution Equation (53) now can be written in a basis-independent way:
$$ |\Omega_t) = (P^{T})^{t}\,|\Omega_0) \equiv \hat{U}(t)\,|\Omega_0), \quad t \in \mathbb{N}; \qquad |\Omega_0) \equiv \big[\,(u(0)|\,\big]^{T} $$
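A small numerical sketch of the discrete-time evolution above may help: a hypothetical 3-state transition matrix P (rows summing to 1) evolves a probability row vector as (u(k)| = (u(0)| P^k, and the equivalent column form (P^T)^k |Ω_0) gives the same result. The matrix entries are arbitrary illustrative numbers, not values from the paper.

```python
import numpy as np

P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])              # p_ij = P(j, t+1 | i, t), rows sum to 1
u0 = np.array([1.0, 0.0, 0.0])               # PRV at t = 0

k = 5
u_k = u0 @ np.linalg.matrix_power(P, k)      # (u(k)| = (u(0)| P^k

omega_0 = u0.reshape(-1, 1)                  # |Omega_0) = [(u(0)|]^T
omega_k = np.linalg.matrix_power(P.T, k) @ omega_0   # (P^T)^k |Omega_0)

assert np.allclose(u_k, omega_k.ravel())     # the row and column forms agree
assert np.isclose(u_k.sum(), 1.0)            # P(Omega|Omega_t) = 1
print(u_k)                                   # m(i, k)
```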
Definition 7.
(Time-Dependent Expectation): The expectation value of a continuous function f of occupation number N ^ in ( Ω , N , P ) can be expressed as:
$$ \langle f(\hat{N}) \rangle = P(\Omega|\,f(\hat{N})\,|\Omega_t) = \sum_i P(\Omega|\,f(i)\,|i)\,P(i|\Omega_t) = \sum_i f(i)\,m(i,t) $$
We can map the P-bra and P-ket into the Hilbert space by using Dirac’s notation:
$$ P(\Omega| = \sum_i P(i| \;\mapsto\; \langle\Psi| = \sum_i \langle i|, \qquad |\Omega_t) \;\mapsto\; |\Psi_t\rangle = \sum_i |i\rangle\langle i|\Psi_t\rangle = \sum_i c(i,t)\,|i\rangle $$
Then the expectation Equation (56) can be rewritten in Dirac’s notation as:
$$ \langle f(\hat{N}) \rangle = \langle\Psi_t|\,f(\hat{N})\,|\Psi_t\rangle = \sum_i \langle\Psi_t|\,f(i)\,|i\rangle\langle i|\Psi_t\rangle = \sum_i f(i)\,|c(i,t)|^2 $$
$$ P(i|\Omega_t) \equiv m(i,t) = |c(i,t)|^2 \equiv |\langle i|\Psi_t\rangle|^2, \qquad t \in \mathbb{N} $$
One can see in Equation (59) that the local phases of the quantum states are not carried over to the probability distribution; they make no contribution to it:
$$ \text{VBN:}\ \ |\Psi_t\rangle = \sum_i c_i(t)\,|\psi_i\rangle \ \ \underset{\text{many-to-one}}{\overset{\text{one-to-many}}{\longleftrightarrow}}\ \ \text{PBN:}\ \ |\Omega_t) = \sum_i p(i,t)\,|i); \qquad p(i,t) = |c_i(t)|^2 $$
This means a certain loss of quantum information, and it is well known that these local phase factors are crucial in explaining experiments like quantum interference [10] and the Aharonov–Bohm effect [11].

3.2. Continuous-Time HMC

The time evolution equation of a continuous-time HMC with a discrete basis (see Equation ( 85 ) or Ref. [5] p. 221) can be written as:
$$ \partial_t\, p_j(t) = \sum_k p_k(t)\, q_{kj} \;\Rightarrow\; \partial_t\, P(j|\Omega_t) = \sum_k \hat{Q}^{T}_{jk}\, P(k|\Omega_t) $$
$$ = \sum_k P(j|\,\hat{Q}^{T}\,|k)\,P(k|\Omega_t) = P(j|\,\hat{Q}^{T} I\,|\Omega_t) = P(j|\,\hat{Q}^{T}\,|\Omega_t) \equiv P(j|\,\hat{L}\,|\Omega_t) $$
Equations (60)–(61) lead to a basis-independent master equation (see [8] Eq. (2.12)):
$$ \partial_t\,|\Omega_t) = \hat{L}\,|\Omega_t), \qquad |\Omega_t) = \hat{U}(t)\,|\Omega_0) = e^{\hat{L} t}\,|\Omega_0), \qquad t \ge 0 $$
It looks just like Schrödinger’s Equation (7) of a conserved quantum system in Dirac’s notation. With the discrete P-basis of Fock space in Equation (28), Equations (56)–(59) now can be written as:
$$ |\Omega_t) = \sum_{n} m(n,t)\,|n), \qquad P(\Omega| = \sum_{n} P(n|, \qquad \langle f(n) \rangle = P(\Omega|\,f(n)\,|\Omega_t) $$
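As a minimal numerical sketch of the basis-independent master equation above, assume a small 3-state generator Q̂ whose rows sum to zero (the rates are illustrative only); then the evolution |Ω_t) = e^{L̂t}|Ω_0) with L̂ = Q̂^T can be evaluated directly with a matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

Q = np.array([[-1.0,  1.0,  0.0],
              [ 0.5, -1.5,  1.0],
              [ 0.0,  2.0, -2.0]])           # q_ij: off-diagonal rates >= 0, rows sum to 0
L = Q.T                                      # L-hat acts on column P-kets

omega_0 = np.array([1.0, 0.0, 0.0])          # m(i, 0)
t = 0.8
omega_t = expm(L * t) @ omega_0              # |Omega_t) = e^{L t} |Omega_0)

assert np.isclose(omega_t.sum(), 1.0)        # probability is conserved
assert np.all(omega_t >= 0)                  # and stays non-negative
print(omega_t)                               # m(i, t)
```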
Doi’s definition of the state function and state vector [8,9] correspond to our system P-bra and P-ket respectively:
$$ P(\Omega| = \sum_n P(n| \;\leftrightarrow\; \langle s| \equiv \sum_n \langle n| $$
$$ |F(t)\rangle \equiv \sum_n P(n,t)\,|n\rangle \;\leftrightarrow\; |\Omega_t) = \sum_n |n)\,P(n|\Omega_t) $$
$$ \langle \hat{B}(n) \rangle = \langle s|\,\hat{B}(n)\,|F(t)\rangle = P(\Omega|\,\hat{B}(n)\,|\Omega_t) $$
Note that the vector-basis here corresponds to the P-basis in Equation (28):
$$ \hat{n}_i\,|n\rangle = n_i\,|n\rangle, \qquad \sum_n |n\rangle\langle n| = I_n, \qquad \langle n'|n\rangle = \delta_{n',n} = \prod_{i} \delta_{n'_i, n_i} $$
In Peliti’s formalism ([8] p. 1472), the vector-basis (from population operator n) is normalized in a special way, so the expansion of the system P-bra is also changed:
$$ \sum_n |n\rangle\,\frac{1}{n!}\,\langle n| = I_n, \qquad \langle m|n\rangle = n!\,\delta_{m,n} $$
$$ P(\Omega| = P(\Omega|\,I_n = \sum_n P(\Omega|n)\,\frac{1}{n!}\,P(n| \overset{(2.3)}{=} \sum_n \frac{1}{n!}\,P(n| $$
Equation (67) can be identified with the standard bra, introduced in [8] Eq. (2.27–28):
$$ P(\Omega| = \sum_n \frac{1}{n!}\,P(n| \;\leftrightarrow\; \sum_n \frac{1}{n!}\,\langle n| \equiv \langle\,| $$
$$ |\Omega) = \sum_n |n)\,\frac{1}{n!}\,P(n|\Omega) \;\leftrightarrow\; |\phi\rangle = \sum_m |m\rangle\,\frac{1}{m!}\,\langle m|\phi\rangle $$
$$ E[\hat{A}] \equiv \langle \hat{A} \rangle = \langle\,|\,\hat{A}\,|\phi\rangle = P(\Omega|\,\hat{A}\,|\Omega) $$

3.3. The Heisenberg Picture

We call Equations (55) and (62) the evolution equations in the Schrödinger picture. Now we introduce the Heisenberg picture of a R . V (or observable), similar to what is used in QM ([1] p. 541):
$$ |\Omega_t) = \hat{U}(t)\,|\Omega_0) \;\Rightarrow\; \hat{X}(t) = \hat{U}^{-1}(t)\,\hat{X}\,\hat{U}(t) $$
Based on $\hat{U}(t)$, we can introduce the following time-dependent P-basis:
$$ |x,t) = \hat{U}^{-1}(t)\,|x), \qquad P(x|\,\hat{U}(t) = P(x,t|, \qquad P(x',t\,|\,x,t) = P(x'|x), \qquad P(\Omega|x,t) = 1 $$
$$ P(x',t|\,\hat{X}(t)\,|x,t) = P(x'|\,\hat{U}(t)\,\hat{U}^{-1}(t)\,\hat{X}\,\hat{U}(t)\,\hat{U}^{-1}(t)\,|x) = P(x'|\,\hat{X}\,|x) = x\,P(x'|x) $$
The probability density now can be interpreted in the two pictures:
$$ P(x,t) \equiv P(x|\Omega_t) = P(x|\,\hat{U}(t)\,|\Omega_0) = P(x,t|\Omega_0) = P(x,t|\Omega) $$
In the last step, we have used the fact that in the Heisenberg picture, | Ω 0 ) = | Ω ) .
Definition 8.
Time-Dependent P-Identity: Equations (69)–(71) also provide us with a time-dependent P-identity in the Heisenberg picture:
$$ \text{Discrete:}\quad I_X(t) = \hat{U}^{-1}(t)\,I_X\,\hat{U}(t) = \hat{U}^{-1}(t) \sum_i |x_i)\,P(x_i|\,\hat{U}(t) \overset{(70)}{=} \sum_i |x_i,t)\,P(x_i,t| $$
$$ \text{Continuous:}\quad I_X(t) = \hat{U}^{-1}(t)\,I_X\,\hat{U}(t) = \hat{U}^{-1}(t) \int |x)\,dx\,P(x|\,\hat{U}(t) = \int |x,t)\,dx\,P(x,t| $$
Now the expectation value of the stochastic process X ( t ) can be expressed as:
$$ P(\Omega|\,\hat{X}(t)\,|\Omega) = P(\Omega|\,\hat{X}(t)\,I_X(t)\,|\Omega) = \int dx\, P(\Omega|\,\hat{X}(t)\,|x,t)\,P(x,t|\Omega) = \int dx\, x\, P(x,t|\Omega) = \int dx\, x\, P(x|\Omega_t) = P(\Omega|\,X\,|\Omega_t) $$
This suggests that a Markov stochastic process can be thought of as an operator in the Heisenberg picture, and its expectation value can be found from its Schrödinger picture. Additionally, if a Markov process X ( t ) X t is homogeneous, with independent and stationary increments ([4] p. 15), we can always set X 0 = 0 , and obtain the following useful property:
$$ P(X_{t+s} - X_s = x) = P(X_t - X_0 = x\,|\,\Omega) = P(X_t = x\,|\,\Omega) = P(x,t|\Omega) = P(x|\Omega_t) \equiv P(x,t) $$
Moreover, there is a relation between their transition probability and probability density:
$$ P(x_2,t_2\,|\,x_1,t_1) = P(x_2 - x_1,\ t_2 - t_1\,|\,\Omega) \equiv P(x_2 - x_1,\ t_2 - t_1), \qquad (t_1 < t_2) $$

3.4. Chapman–Kolmogorov Equations of Transition Probability [5,9]

These can now be easily obtained by inserting our time-dependent P-identity in Equation (73). For the HMC of discrete  R . V ([5] p. 174):
$$ p^{\,m+n}_{ij} \equiv P(j, m+n\,|\,i, 0) = P(j, m+n\,|\,\hat{I}(m)\,|\,i, 0) = \sum_k P(j, m+n\,|\,k, m)\,P(k, m\,|\,i, 0) \overset{\text{HMC}}{=} \sum_k P(j, n\,|\,k, 0)\,P(k, m\,|\,i, 0) \overset{(52)}{=} \sum_k p^{\,m}_{ik}\, p^{\,n}_{kj} \quad \text{(discrete time)} $$
$$ p_{ij}(t+s) \equiv P(j, t+s\,|\,i, 0) = P(j, t+s\,|\,\hat{I}(s)\,|\,i, 0) = \sum_k P(j, t+s\,|\,k, s)\,P(k, s\,|\,i, 0) \overset{\text{HMC}}{=} \sum_k P(j, t\,|\,k, 0)\,P(k, s\,|\,i, 0) = \sum_k p_{ik}(s)\,p_{kj}(t) \quad \text{(continuous time)} $$
For a Markov process of both continuous  R . V and time [9] (Eq. (3.9), p. 31):
$$ P(x,t\,|\,y,s) = P(x,t\,|\,\hat{I}(\tau)\,|\,y,s) = \int P(x,t\,|\,z,\tau)\,dz\,P(z,\tau\,|\,y,s), \qquad t > \tau > s $$
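For a homogeneous chain with transition matrices p(t) = e^{Q̂t}, the continuous-time Chapman–Kolmogorov relation above is simply the semigroup property p(t+s) = p(s)p(t), which is easy to check numerically; the generator below is an illustrative example, not one from the paper.

```python
import numpy as np
from scipy.linalg import expm

Q = np.array([[-0.7,  0.7,  0.0],
              [ 0.3, -0.8,  0.5],
              [ 0.2,  0.4, -0.6]])           # illustrative generator, rows sum to 0

def p(t):
    """Transition matrix p_ij(t) = P(j, t | i, 0) of the homogeneous chain."""
    return expm(Q * t)

t, s = 0.9, 0.4
# Inserting the P-identity at time s: p_ij(t+s) = sum_k p_ik(s) p_kj(t)
assert np.allclose(p(t + s), p(s) @ p(t))
print(p(t + s)[0])                           # row i = 0 of p_ij(t + s)
```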

Absolute probability distribution (APD)

Likewise, we can find the time evolution of the APD for an HMC simply by inserting the P-identity I(t = 0) = I(0). For the HMC with discrete states ([5] pp. 174, 214):
$$ \text{Discrete time:}\quad P(i,m) = P(i,m|\Omega) = P_i(m), \qquad P(i,0) \equiv p_i(0) $$
$$ P_i(m) = P(i,m\,|\,I(0)\,|\,\Omega) = \sum_k P(i,m\,|\,k,0)\,P(k,0|\Omega) = \sum_k p_k(0)\, p^{\,m}_{ki} $$
Note that Equation (80) is identical to Equation (53). For the continuous-time case:
$$ \text{Continuous time:}\quad P(x_i,t) = P(x_i,t|\Omega) = P(x_i|\Omega_t), \qquad P(x_i,0) \equiv p_i(0) $$
$$ P(x_i,t) = P(x_i,t\,|\,I(0)\,|\,\Omega) = \sum_k P(x_i,t\,|\,x_k,0)\,P(x_k,0|\Omega) = \sum_k p_k(0)\, p_{ki}(t) $$
For a Markov process of both continuous  R . V and time ([9] p. 31):
$$ P(x,t) = P(x,t|\Omega) = P(x|\Omega_t), \qquad P(x,0) \equiv p^{(0)}(x) $$
$$ P(x,t) = P(x,t\,|\,I(0)\,|\,\Omega) = \int dy\, P(x,t\,|\,y,0)\,P(y,0|\Omega) = \int dy\, P(x,t\,|\,y,0)\, p^{(0)}(y) $$

3.5. Kolmogorov Forward and Backward Equations

If the Markov chains are stochastically continuous, then for infinitesimal h, the transition probability has the Taylor expansion ([4] Sec. (6.8); [5] p. 217):
$$ p_{ij}(h) = p_{ij}(0) + p'_{ij}(0)\,h + O(h^2) = \delta_{ij} + q_{ij}\,h + O(h^2) $$
Then, using the Chapman–Kolmogorov Equation (78), we have:
$$ p_{ij}(t+h) = \sum_k p_{ik}(t)\,p_{kj}(h) = \sum_k p_{ik}(t)\,\big(\delta_{kj} + q_{kj}h + O(h^2)\big) = p_{ij}(t) + \sum_k p_{ik}(t)\,\big(q_{kj}h + O(h^2)\big) $$
Therefore, we obtain the following Kolmogorov Forward equations:
$$ p'_{ij}(t) = \lim_{h \to 0} \frac{p_{ij}(t+h) - p_{ij}(t)}{h} = \sum_k p_{ik}(t)\,q_{kj} $$
Similarly, we can derive the Kolmogorov Backward equations:
$$ p_{ij}(h+t) = \sum_k p_{ik}(h)\,p_{kj}(t) \;\Rightarrow\; p'_{ij}(t) = \sum_k q_{ik}\,p_{kj}(t) $$
Using Equations (81) and (83) one can extend Equation (62) to a continuous-time HMC.

3.6. Transition Probability and Path Integrals

From Equations (62) and (70) the transition probability of a Markov process can be expressed as follows (assuming t a < t 0 < t b ):
$$ P(x_b,t_b\,|\,x_a,t_a) = P(x_b|\,U(t_b,t_0)\,U^{-1}(t_a,t_0)\,|x_a) = P(x_b|\,U(t_b,t_a)\,|x_a) = P\big(x_b\big|\, e^{\int_{t_a}^{t_b} dt\, L}\,\big|x_a\big) $$
We can divide the time interval into small pieces and insert the P-identity N times:
$$ \Delta t = t_{n+1} - t_n = \frac{t_b - t_a}{N+1} > 0, \qquad t_N = t_b,\ t_0 = t_a; \qquad x_N = x_b,\ x_0 = x_a , $$
$$ P(x_b,t_b\,|\,x_a,t_a) = P(x_b,t_b\,|\,I(t_N)\cdots I(t_1)\,|\,x_a,t_a) = \prod_{n=1}^{N} \int dx_n\, P(x_n,t_n\,|\,x_{n-1},t_{n-1}) = \prod_{n=1}^{N} \int dx_n\, P(x_n|\,\hat{U}(t_n,t_{n-1})\,|x_{n-1}) = \prod_{n=1}^{N} \int dx_n\, P(x_n|\,e^{L\Delta t}\,|x_{n-1}) $$
It perfectly matches the starting equation of Feynman’s path integral for the transition amplitude in QM ([12] Eq. (2.4), p. 90):
$$ \langle x_b,t_b\,|\,x_a,t_a\rangle = \prod_{n=1}^{N} \int dx_n\, \langle x_n|\,\hat{U}(t_n,t_{n-1})\,|x_{n-1}\rangle = \prod_{n=1}^{N} \int dx_n\, \langle x_n|\,e^{-i\hat{H}\Delta t/\hbar}\,|x_{n-1}\rangle $$
Since the special Wick rotation directly transforms Equation (90) into Equation (89) and vice versa, it might imply a possibly new connection between the Hilbert space and the probability space for certain quantum systems:
$$ \langle x_b,t_b\,|\,x_a,t_a\rangle \ \ \underset{\,t\,\to\,it\,}{\overset{\,it\,\to\,t\,}{\rightleftarrows}}\ \ P(x_b,t_b\,|\,x_a,t_a) $$
However, at the moment this is just a potential application, and we need to investigate it further. For a free particle, V(x) = 0, the Schrödinger equation and the resulting transition amplitude from Feynman’s path integral are (see [12] Eq. (2.125)):
$$ i\hbar\,\partial_t\, \psi(x,t) = \hat{H}\,\psi(x,t) = -\frac{\hbar^2}{2m}\,\frac{\partial^2}{\partial x^2}\,\psi(x,t), \qquad \psi(x,t) = \langle x|\Psi(t)\rangle $$
$$ \langle x_b,t_b\,|\,x_a,t_a\rangle = \sqrt{\frac{m}{2\pi i \hbar\,(t_b - t_a)}}\ \exp\!\left[ \frac{i\, m\,(x_b - x_a)^2}{2 \hbar\,(t_b - t_a)} \right] $$

4. Examples of Homogeneous Markov Processes

Now let us examine some important examples of HMCs using the PBN.

Poisson Process ([4] p. 250, [5] p. 161)

This is a counting process, N ( t ) , with the following properties:
1.
{ N ( t ) , t 0 } is a non-negative process with independent increments and N ( 0 ) = 0 ;
2.
It is homogeneous and its probability distribution is given by:
$$ m(k,t) \equiv P([N(t+s) - N(s) = k]\,|\,\Omega) \overset{\text{independent increments}}{=} P([N(t) - N(0) = k]\,|\,\Omega) \overset{N(0)=0}{=} P([N(t) = k]\,|\,\Omega) \equiv P(k|\Omega(t)) \overset{\text{Poisson}}{=} \frac{(\lambda t)^k}{k!}\, e^{-\lambda t} $$
It can be shown ([5] p. 161) that:
$$ \mu(t) \equiv \bar{N}(t) = \sum_k k\, m(k,t) = \lambda t; \qquad \sigma^2(t) \equiv P(\Omega|\,[N(t) - \bar{N}(t)]^2\,|\Omega) = \lambda t $$
It can be shown (see [5] p. 215; [13] Theorem 1.5, p. 6) that the Poisson Process has a Markov property, and its transition probability is:
$$ p_{ij}(t) \equiv P([N(s+t) = j]\,|\,N(s) = i) = P([N(s+t) - N(s) = j - i]\,|\,\Omega) = \begin{cases} \dfrac{(\lambda t)^{\,j-i}}{(j-i)!}\, e^{-\lambda t}, & j \ge i, \\[1ex] 0, & j < i. \end{cases} $$
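A quick numerical check of the Poisson distribution above: building m(k, t) iteratively and summing its moments reproduces mean and variance λt. The rate and time values below are arbitrary illustrative choices.

```python
import math

lam, t = 2.5, 3.0
mu = lam * t
m = [math.exp(-mu)]                          # m(0, t) = e^{-lambda t}
for k in range(1, 100):                      # truncate the infinite sum
    m.append(m[-1] * mu / k)                 # m(k, t) = m(k-1, t) * (lambda t) / k

mean = sum(k * mk for k, mk in enumerate(m))
var = sum(k * k * mk for k, mk in enumerate(m)) - mean ** 2
print(mean, var)                             # both ~ lambda * t = 7.5
assert abs(mean - mu) < 1e-6 and abs(var - mu) < 1e-6
```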

4.1. Wiener–Levy Process (see [5] p. 159; [9] sec. (3.6), p. 32)

This is a homogeneous process with independent and stationary increments and W ( 0 ) = 0 . Its probability density is a normal (Gaussian) distribution N ( 0 , t σ 2 ) :
$$ P(x,t) \equiv P([W(t+s) - W(s) = x]\,|\,\Omega) \overset{\text{homogeneous}}{=} P([W(t) - W(0) = x]\,|\,\Omega) $$
The transition probability determined by its stationary increments is:
$$ P(x_2,t_2\,|\,x_1,t_1) = \frac{1}{\sqrt{2\pi (t_2 - t_1)}\ \sigma}\ \exp\!\left[ -\frac{(x_2 - x_1)^2}{2 (t_2 - t_1)\,\sigma^2} \right], \qquad (t_1 < t_2) $$
We see that the Wiener–Levy process satisfies Equation (76).
$$ P(x_2,t_2\,|\,x_1,t_1) = P(x_2 - x_1,\ t_2 - t_1), \qquad (t_1 < t_2) $$
It can be verified ([5] p. 161) that it is an N ( 0 , t σ 2 ) :
μ ˜ ( t ) P ( Ω | W ( t ) | Ω ) = 0 , σ ˜ 2 ( t ) P ( Ω | W ( t ) 2 | Ω ) = t σ 2

4.2. Brownian Motion ([4] Sec. (10.1), p. 524; [9] p. 6, 42)

The stochastic process X ( t ) has X ( 0 ) = 0 , has stationary and independent increments for t 0 , and its density function for t > 0 is a normal distribution N ( t μ , t σ 2 ) , or:
$$ P(x,t) \equiv P(x|\Omega_t) = \frac{1}{\sqrt{2\pi t}\ \sigma}\ \exp\!\left[ -\frac{(x - \mu t)^2}{2 t \sigma^2} \right] $$
Brownian motions have stationary and independent increments ([4] Sec. (10), pp. 524, 529), so they are HMCs and satisfy Equation (76). For the driftless case μ = 0, the density is the solution of the following master equation (Einstein’s diffusion equation, see [9] p. 6):
$$ \partial_t P(x,t) = D\, \partial_x^2 P(x,t), \qquad D = \tfrac{1}{2}\sigma^2 . $$
Here the constant D is called the diffusion coefficient. Equation (101) can be interpreted as a special HMC case of Equation (62), given in the x-basis:
$$ \partial_t P(x|\Omega_t) = \int dx'\, P(x|\,\hat{L}\,|x')\,P(x'|\Omega_t), \qquad P(x|\,\hat{L}\,|x') = D\,\partial_x^2\,\delta(x - x') $$
Equation (102) closely resembles Schrödinger’s Equation (7) for a free particle in the x-basis:
$$ i\hbar\,\partial_t \langle x|\Psi(t)\rangle = \int dx'\, \langle x|\,\hat{H}\,|x'\rangle\,\langle x'|\Psi(t)\rangle, \qquad \langle x|\,\hat{H}\,|x'\rangle = -\frac{\hbar^2}{2m}\,\partial_x^2\,\delta(x - x') $$
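To illustrate Equation (101) numerically, here is a simple explicit finite-difference sketch of the driftless diffusion equation, started from a narrow Gaussian and compared with the exact spreading Gaussian solution; the grid sizes and the value of D are illustrative choices, not values from the paper.

```python
import numpy as np

D, box, nx = 0.5, 20.0, 401
x = np.linspace(-box / 2, box / 2, nx)
dx = x[1] - x[0]
dt = 0.2 * dx**2 / D                         # well inside the stability bound dt <= dx^2 / (2 D)

def gaussian(t):
    """Exact solution of dP/dt = D d^2P/dx^2 for a point source released at t = 0."""
    return np.exp(-x**2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)

t0, t1 = 0.5, 2.0
P = gaussian(t0)                             # start from an already-spread Gaussian
n_steps = int(round((t1 - t0) / dt))
for _ in range(n_steps):
    lap = (np.roll(P, 1) - 2 * P + np.roll(P, -1)) / dx**2
    P = P + dt * D * lap                     # forward-Euler step of the master equation

print(np.max(np.abs(P - gaussian(t0 + n_steps * dt))))   # small discretization error
print(P.sum() * dx)                                       # total probability stays ~ 1
```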

5. Special Wick Rotation, Time Evolution and Induced Diffusions

The induced microscopic diffusion is defined by the following equation:
$$ \partial_t P(x,t) = \hat{G}(x)\, P(x,t), \qquad \hat{G} = \frac{1}{2\mu}\,\partial_x^2 - \mu\, u(x), \qquad \mu \equiv \frac{m}{\hbar} $$
The Special Wick Rotation (SWR), based on the imaginary-time Wick rotation  ( i t → t ) [14], is defined by:
$$ \text{SWR:}\qquad it \to t, \qquad |\psi(t)\rangle \to |\Omega_t), \qquad \langle x_b,t_b\,|\,x_a,t_a\rangle \to P(x_b,t_b\,|\,x_a,t_a) $$
Under an SWR, the Schrödinger Equation (7) is mapped to the induced micro-diffusion Equation (104):
$$ i\,\partial_t\,|\psi(t)\rangle = \frac{1}{\hbar}\,\hat{H}\,|\psi(t)\rangle \;\longrightarrow\; \partial_t\,|\Omega_t) = \hat{G}\,|\Omega_t) $$
$$ -\frac{1}{\hbar}\,\hat{H} = \frac{\hbar\,\partial_x^2}{2m} - \frac{V(x)}{\hbar} = \frac{\partial_x^2}{2\mu} - \mu\,u(x) \equiv \hat{G} = \frac{1}{2\mu}\,\partial_x^2 - \mu\,u(x) $$
The simplest example is the induced micro Einstein–Brownian motion, obtained when u(x) = 0 in Equation (104). Applying (105) to the transition amplitude in Equation (92), we obtain the transition probability:
$$ P(x_b,t_b\,|\,x_a,t_a) = \frac{1}{\sqrt{4\pi \tilde{D}\,(t_b - t_a)}}\ \exp\!\left[ -\frac{(x_b - x_a)^2}{4 \tilde{D}\,(t_b - t_a)} \right] $$
Here we have introduced the induced micro-diffusion coefficient,
$$ \tilde{D} \equiv \frac{1}{2\mu} = \frac{\hbar}{2m} $$
Similarly, applying (105) to the free-particle Schrödinger equation, Equation (103), we obtain Einstein’s diffusion Equation (102), with $D \to \tilde{D}$.
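For a sense of scale, the induced micro-diffusion coefficient can be evaluated numerically; the electron mass is used here purely as an illustrative choice, not a value from the paper.

```python
# Order of magnitude of the induced micro-diffusion coefficient D~ = hbar / (2 m)
hbar = 1.054571817e-34        # reduced Planck constant, J*s
m_e = 9.1093837015e-31        # electron mass, kg
D_tilde = hbar / (2 * m_e)
print(D_tilde)                # ~ 5.8e-5 m^2/s
```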

A Special Non-Hermitian Case

We can apply Dirac notation and the PBN together to solve the following special quantum system with a non-Hermitian Hamiltonian:
$$ \hat{H} = \hat{H}_1 - i\hat{H}_2; \qquad \hat{H}_1^{\dagger} = \hat{H}_1; \qquad \hat{H}_2^{\dagger} = \hat{H}_2; \qquad [\hat{H}_1, \hat{H}_2] = 0; \qquad \partial_t \hat{H} = 0 $$
Then the time evolution equation can be written as:
$$ i\hbar\,\partial_t\,\big[\,|\Psi_t\rangle\,|\Omega_t)\,\big] = \big[\hat{H}_1\,|\Psi_t\rangle\big]\,|\Omega_t) + |\Psi_t\rangle\,\big[-\,i\hat{H}_2\,|\Omega_t)\big] $$
It leads to two equations:
$$ \partial_t\,|\Psi_t\rangle = -\frac{i}{\hbar}\,\hat{H}_1\,|\Psi_t\rangle, \qquad \partial_t\,|\Omega_t) = -\frac{1}{\hbar}\,\hat{H}_2\,|\Omega_t) \overset{(106)}{=} \hat{G}\,|\Omega_t) $$
The first is an ordinary Schrödinger equation while the second is a master equation for an induced micro-diffusion. The product of their solutions is the solution of Equation (111). Suppose we have H ^ 1 ( x ) and H ^ 2 ( y ) ; then the product of Equations (89) and (90) gives the path-integral expression of the system:
$$ \big[\langle x_b,t_b|\ P(y_b,t_b|\big]\ \exp\!\left\{ \int_{t_a}^{t_b} dt\, \Big(-\frac{i}{\hbar}\Big)\big[\hat{H}_1(x) - i\hat{H}_2(y)\big] \right\}\ \big[\,|x_a,t_a\rangle\ |y_a,t_a)\big] $$
$$ = \prod_{m=1}^{N} \int dx_m\, \langle x_m|\, e^{-i\hat{H}_1(x)\Delta t/\hbar}\,|x_{m-1}\rangle \ \prod_{n=1}^{N} \int dy_n\, P(y_n|\, e^{\hat{G}(y)\Delta t}\,|y_{n-1}) $$
Moreover, since both H ^ 1 and H ^ 2 are Hermitian, we assume that they have complete sets of discrete orthonormal eigenstates like:
$$ \hat{H}_1\,|\psi_k\rangle = \varepsilon_k\,|\psi_k\rangle, \qquad \hat{H}_2\,|\phi_\mu) = \lambda_\mu\,|\phi_\mu) $$
Then we obtain the following solutions of Equation (112):
$$ |\psi_k(t)\rangle = e^{-i\varepsilon_k t/\hbar}\,|\psi_k\rangle, \qquad |\phi_\mu(t)) = e^{-\lambda_\mu t/\hbar}\,|\phi_\mu) $$
It leads to the expectation value of $\hat{H}$ in the mixed product state $|\psi_k(t)\rangle\,|\phi_\mu(t))$ as:
$$ E_{k,\mu}[\hat{H}] = \langle\psi_k(t)|\,\hat{H}_1\,|\psi_k(t)\rangle - i\, E[\hat{H}_2\,|\,\phi_\mu, t] = \varepsilon_k - i\lambda_\mu $$
Here we have used the conditional expectation of $\hat{H}_2$ given $\phi_\mu(t)$, defined from Equation (40):
$$ E[\hat{H}_2\,|\,\phi_\mu, t] = \sum_{\alpha} \lambda_\alpha\, \frac{P(\phi_\alpha \cap \phi_\mu,\, t)}{P(\phi_\mu,\, t)} \overset{\alpha = \mu}{=} \lambda_\mu $$
The imaginary part in Equation (116) looks strange, but it would appear in the expectation values of similar complex Hamiltonians, as in [15] p. 290 and [16] p. 3, because $\hat{H}$ is not Hermitian. However, we know that $\hat{H}^{\dagger}\hat{H}$ is Hermitian. Therefore, it is reasonable to use the following expression:
$$ \big(E_{k,\mu}[\hat{H}^{\dagger}\hat{H}]\big)^{1/2} = \big(E_{k,\mu}[(\hat{H}_1)^2 + (\hat{H}_2)^2]\big)^{1/2} = \big(\varepsilon_k^2 + \lambda_\mu^2\big)^{1/2} $$
It is interesting to note that if the initial condition of $|\Omega_t)$ is a linear combination of a set of P-kets $|\phi_\mu)$, then, because each P-ket has its probability proportional to $\exp[-\lambda_\mu t/\hbar]$, the P-ket with the lowest eigenvalue $\lambda_{\min}$ will eventually dominate, and the expectation value of the energy $\hat{H}_2$ will approach $\lambda_{\min}$. Therefore, our master Equation (104) describes processes that ultimately relax to the lowest energy level contained in their initial condition! Of course, further research on this is required.
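A small sketch of this late-time behaviour, assuming an illustrative real symmetric Ĥ₂ and made-up initial mode weights (in units with ħ = 1): each eigenmode decays as exp(−λ_μ t), so the relative weight of the smallest-λ mode tends to one.

```python
import numpy as np

H2 = np.array([[2.0, 0.3, 0.0],
               [0.3, 1.0, 0.2],
               [0.0, 0.2, 0.5]])             # Hermitian (real symmetric), illustrative
lam, phi = np.linalg.eigh(H2)                # eigenvalues in ascending order

c0 = np.array([0.2, 0.3, 0.5])               # initial weights of the eigenmodes
for t in (0.0, 2.0, 10.0):
    c_t = c0 * np.exp(-lam * t)              # each mode decays as exp(-lambda * t)
    weights = c_t / c_t.sum()
    print(t, weights)                        # weight of the smallest-lambda mode -> 1
```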

6. Potential Applications

Hermitian operators are sufficient for “pure” eigenvalue states for closed systems where the energies are conserved and real valued. However, for mixed states and in a number of physical circumstances, non-Hermitian operators have had to be considered. It is well known that any non-Hermitian linear operator C ^ can be expressed in the form:
$$ \hat{C} = \hat{A} + i\hat{B} $$
where $\hat{A}$ and $\hat{B}$ are Hermitian, by letting $\hat{A} = (\hat{C} + \hat{C}^{\dagger})/2$ and $\hat{B} = (\hat{C} - \hat{C}^{\dagger})/(2i)$. Note that $\hat{B} \to i\hat{B}$ can be treated as a Wick rotation in the path-integral application of the PBN. Section 5 dovetails into the matter of general linear operators, for which there have been a number of applications. We mention:
  • The method of “complex scaling” applied to quantum mechanical Hamiltonians was a “hot” topic in atomic and molecular physics during the 1970s and 1980s and involved non-Hermitian linear operators. We cite an application by the mathematician Barry Simon of complex scaling to non-relativistic Hamiltonians for molecules [17].
  • Another application requiring such an operator was performed by Botten et al. [18]. They showed how to solve a practical problem involving wave scattering using a bi-orthogonal basis, where there is a VBN bra basis and a ket basis consisting of different functions. In a unitary problem, these VBN bra and ket basis functions would be the same. Here, the Helmholtz equation $\nabla^2 f + k^2 f = 0$ has a wave number k which is complex; the imaginary part of k indicates loss or gain, depending on its sign.
  • K. G. Zloshchastiev has developed many applications of the general density-operator approach with non-Hermitian Hamiltonians of the form of Equation (118), applied, e.g., to open dissipative systems, which automatically deal with mixed states (see [19,20,21]), and to von Neumann quantum entropy [22]. For non-Hermitian, open/dissipative systems, there is also the work of Refs. [23,24].
  • Consider a rectangular real data matrix Q ^ . Its similarity matrix (or adjacency matrix) S ^ and corresponding row stochastic (Markov) matrix R ^ are defined by:
$$ \hat{S} = \hat{Q}\,\hat{Q}^{T}, \qquad R_{ij} = S_{ij} \Big/ \sum_k S_{ik} $$
    The symmetric matrix $\hat{S}$ has real eigenvalues, as does $\hat{R}$, even though the latter is non-symmetric. $\hat{R}$ is a transition matrix in the language of QM but is also the key operator in the Meila–Shi algorithm of spectral clustering, as well as in quantum clustering, with IT applications in science, engineering, (unstructured) text using a “bag-of-words” model [25,26,27,28] and even medicine [29]. The PBN may well provide some new approaches to quantum clustering; a small numerical sketch follows this list.
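Here is a minimal sketch of Equation (119) on a made-up document-by-keyword count matrix: the similarity matrix Ŝ = Q̂Q̂^T is row-normalized into the stochastic matrix R̂, whose rows sum to one and whose spectrum is real. The counts are toy data, not from any of the cited applications.

```python
import numpy as np

Q = np.array([[2.0, 1.0, 0.0, 0.0],          # rows: documents, columns: keywords
              [1.0, 3.0, 1.0, 0.0],
              [0.0, 0.0, 2.0, 2.0]])

S = Q @ Q.T                                  # similarity (adjacency) matrix
R = S / S.sum(axis=1, keepdims=True)         # R_ij = S_ij / sum_k S_ik

print(np.allclose(R.sum(axis=1), 1.0))       # each row of R sums to 1
print(np.linalg.eigvals(R))                  # eigenvalues are real (up to rounding)
```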
If the real data matrix $\hat{Q}$ in Equation (119) is huge but very sparse (as is often the case in a “bag-of-words” model [27]), we may use the PBN to find alternative algorithms for text document clustering. Suppose that the dataset has a vocabulary of N labeled keywords, which serve as the P-basis in Equation (28), and $q_{\mu,k}$ represents the frequency of the k-th keyword in document $Q_\mu$. Since $q_{\mu,k}$ can be greater than 1, the frequencies behave like the occupation numbers of a boson system in Quantum Field Theory. The conditional probability of finding doc $Q_\mu$ given doc $Q_\nu$ is:
$$ P(Q_\mu|Q_\nu) = \sum_{k}^{N} P(Q_\mu|k)\,P(k|Q_\nu) \overset{(14)}{=} \sum_{k \in Q_\mu} P(k|Q_\nu); \qquad P(k|Q_\mu) = q_{\mu,k} \Big/ \sum_{k' \in Q_\mu} q_{\mu,k'} $$
Now, for example, we can define the relevance of two docs and obtain its expression as:
$$ R_{\mu,\nu} \equiv \frac{1}{2}\big[ P(Q_\mu|Q_\nu) + P(Q_\nu|Q_\mu) \big] = \frac{1}{2}\left[ \sum_{k \in Q_\mu} P(k|Q_\nu) + \sum_{k \in Q_\nu} P(k|Q_\mu) \right] $$
This algorithm may be very effective for huge datasets in numerous applications.
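As a toy illustration of this relevance measure (the frequency matrix below is made-up data): P(k|Q_μ) is the normalized keyword frequency within document Q_μ, P(Q_μ|Q_ν) sums P(k|Q_ν) over the keywords present in Q_μ, and the symmetrized combination gives R_{μ,ν}.

```python
import numpy as np

q = np.array([[2, 1, 0, 0],                  # q[mu, k]: frequency of keyword k in doc mu
              [1, 3, 1, 0],
              [0, 0, 2, 2]], dtype=float)

P_k_given_doc = q / q.sum(axis=1, keepdims=True)        # P(k|Q_mu)

def p_doc_given_doc(mu, nu):
    """P(Q_mu|Q_nu): sum of P(k|Q_nu) over the keywords that occur in Q_mu."""
    keywords_in_mu = q[mu] > 0
    return P_k_given_doc[nu, keywords_in_mu].sum()

n_docs = q.shape[0]
R = np.array([[0.5 * (p_doc_given_doc(m, n) + p_doc_given_doc(n, m))
               for n in range(n_docs)] for m in range(n_docs)])
print(R)                                     # symmetric relevance matrix, R_mu,mu = 1
```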

7. Summary and Discussion

Inspired by Dirac’s notation used in quantum mechanics (QM), we proposed the Probability Bracket Notation (PBN). We demonstrated that the PBN can be a very useful tool for symbolic representation and manipulation in probability modeling, such as the various normalization and expectation formulae for systems of single or multiple random variables, as well as the master equations of homogeneous Markov processes. We also showed that the Schrödinger Equation (103) naturally becomes a master equation under the special Wick rotation (see Equations (106) and (107)).
We have shown the similarities between many QM expressions in VBN and related probabilistic expressions in PBN, which might provide us with a beneficial bridge connecting the quantum world with the classical one. We also showed how the PBN could be applied to problems which require non-Hermitian operators. We make no pretense that the PBN creates new physics. Rather it provides a notational interdisciplinary “umbrella” to various statistical and physical processes and thereby opens up the possibility of their synthesis and integration.

8. Conclusions

Of course, more investigations need to be undertaken to verify the consistency (or correctness), usefulness and limitations of our proposal of the PBN, but it is intriguing in its possibility of expressing formulations from classical statistics and QM and handling non-Hermitian operators.

Author Contributions

Conceptualization, X.M.W. and T.C.S.; Methodology, T.C.S.; Validation, T.C.S.; Formal analysis, X.M.W.; Investigation, X.M.W. and T.C.S.; Writing—original draft, X.M.W.; Writing—review & editing, T.C.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

We are indebted to K. G. Zloshchastiev of Durban University of Technology, South Africa, for his helpful materials concerning non-Hermitian operators. Special thanks to Sujin Suwanna of Mahidol University, Thailand for his insightful comments.

Conflicts of Interest

Author Xingmin Wang was employed by the company Sherman Visual Lab. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
APD: Absolute Probability Distribution
CP: Conditional Probability
HMC: Homogeneous Markov Chains
IT: Information Technology
PBN: Probability Bracket Notation
P-basis: Probability basis
P-bra: Probability (event) bra
P-identity: Probability identity
P-ket: Probability (event) ket
PRV: Probability Row Vector
QM: Quantum Mechanics
R.V: Random Variable
SWR: Special Wick Rotation
VBN: (Dirac) Vector Bracket Notation

References

  1. Liboff, R.L. Introductory Quantum Mechanics; Addison-Wesley Publishing Company: Boston, MA, USA, 1992. [Google Scholar]
  2. Rudin, W. Functional Analysis; McGraw-Hill Series in Higher Mathematics; McGraw-Hill: New York, NY, USA, 1973. [Google Scholar]
  3. Grinstead, C.M.; Snell, J.L. (Eds.) Introduction to Probability; American Mathematical Society: Providence, RI, USA, 1997. [Google Scholar]
  4. Ross, S.M. Introduction to Probability Models, ISE, 9th ed.; Academic Press: San Diego, CA, USA, 2006. [Google Scholar]
  5. Ye, E.; Zhang, D. Probability Theory and Stochastic Processes; Science Publications: Beijing, China, 2005. [Google Scholar]
  6. Doi, M. Second quantization representation for classical many-particle system. J. Phys. A Math. Gen. 1976, 9, 1465. [Google Scholar] [CrossRef]
  7. Trimper, S. Master equation and two heat reservoirs. Phys. Rev. E 2006, 74, 51121. [Google Scholar] [CrossRef] [PubMed]
  8. Peliti, L. Path integral approach to birth-death processes on a lattice. J. Phys. France 1985, 46, 1469–1483. [Google Scholar] [CrossRef]
  9. Garcia-Palacios, J.L. Introduction to the Theory of Stochastic Processes and Brownian motion problems. arXiv 2007, arXiv:cond-mat/0701242. [Google Scholar]
  10. Bach, R.; Pope, D.; Liou, S.H.; Batelaan, H. Controlled double-slit electron diffraction. New J. Physics 2013, 15, 033018. [Google Scholar] [CrossRef]
  11. Aharonov, Y.; Bohm, D. Significance of Electromagnetic Potentials in the Quantum Theory. Phys. Rev. 1959, 115, 485–491. [Google Scholar] [CrossRef]
  12. Kleinert, H. Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 5th ed.; World Scientific: Singapore, 2009; Available online: https://www.worldscientific.com/doi/pdf/10.1142/7305 (accessed on 10 August 2024).
  13. Berestycki, N.; Sousi, P. Applied Probability-Online Lecture Notes. 2007. Available online: http://www.statslab.cam.ac.uk/~ps422/notes-new.pdf (accessed on 10 August 2024).
  14. Kosztin, I.; Faber, B.; Schulten, K. Introduction to the diffusion Monte Carlo method. Am. J. Phys. 1996, 64, 633–644. [Google Scholar] [CrossRef]
  15. Kaushal, R.S. Classical and quantum mechanics of complex hamiltonian systems: An extended complex phase space approach. Pramana 2009, 73, 287–297. [Google Scholar] [CrossRef]
  16. Rajeev, S.G. Dissipative Mechanics Using Complex-Valued Hamiltonians. arXiv 2007, arXiv:quant-ph/0701141. [Google Scholar]
  17. Morgan, J.D.; Simon, B. The calculation of molecular resonances by complex scaling. J. Phys. B At. Mol. Opt. Phys. 1981, 14, L167. [Google Scholar] [CrossRef]
  18. Botten, L.C.; Craig, M.S.; McPhedran, R.C.; Adams, J.L.; Andrewartha, J.R. The Finitely Conducting Lamellar Diffraction Grating. Opt. Acta Int. J. Opt. 1981, 28, 1087–1102. [Google Scholar] [CrossRef]
  19. Zloshchastiev, K.G. Quantum-statistical approach to electromagnetic wave propagation and dissipation inside dielectric media and nanophotonic and plasmonic waveguides. Phys. Rev. B 2016, 94. [Google Scholar] [CrossRef]
  20. Zloshchastiev, K.G. Generalization of the Schrödinger Equation for Open Systems Based on the Quantum-Statistical Approach. Universe 2024, 10, 36. [Google Scholar] [CrossRef]
  21. Zloshchastiev, K. PROJECT Density Operator Approach for non-Hermitian Hamiltonians. Available online: https://www.researchgate.net/publication/369022872_PROJECT_Density_Operator_Approach_for_non-Hermitian_Hamiltonians (accessed on 10 August 2024).
  22. Sergi, A.; Zloshchastiev, K.G. Quantum entropy of systems described by non-Hermitian Hamiltonians. J. Stat. Mech. Theory Exp. 2016, 2016, 033102. [Google Scholar] [CrossRef]
  23. Barreiro, J.T.; Müller, M.; Schindler, P.; Nigg, D.; Monz, T.; Chwalla, M.; Hennrich, M.; Roos, C.F.; Zoller, P.; Blatt, R. An open-system quantum simulator with trapped ions. Nature 2011, 470, 486–491. [Google Scholar] [CrossRef] [PubMed]
  24. Zheng, C. Universal quantum simulation of single-qubit nonunitary operators using duality quantum algorithm. Sci. Rep. 2021, 11, 3960. [Google Scholar] [CrossRef]
  25. Fertik, M.B.; Scott, T.; Dignan, T. Identifying Information Related to a Particular Entity from Electronic Sources, Using Dimensional Reduction and Quantum Clustering. U.S. Patent No. 8,744,197, 3 June 2014. [Google Scholar]
  26. Wang, S.; Dignan, T.G. Thematic Clustering. U.S. Patent No. 8,886,651 B1, 11 November 2014. [Google Scholar]
  27. Scott, T.C.; Therani, M.; Wang, X.M. Data Clustering with Quantum Mechanics. Mathematics 2017, 5, 5. [Google Scholar] [CrossRef]
  28. Maignan, A.; Scott, T.C. A Comprehensive Analysis of Quantum Clustering: Finding All the Potential Minima. Int. J. Data Min. Knowl. Manag. Process 2021, 11, 33–54. [Google Scholar] [CrossRef]
  29. Kumar, D.; Scott, T.C.; Quraishy, A.; Kashif, S.M.; Qadeer, R.; Anum, G. The Gender-Oriented Perspective in the Development of Type 2 Diabetes Mellitus Complications; SGOP Study. JPTCP 2023, 30, 2308–2318. [Google Scholar]