Article

Quantum State Assignment Flows

1 Image and Pattern Analysis Group, Institute for Mathematics, Heidelberg University, 69117 Heidelberg, Germany
2 Physikalisches Institut, Kirchhoff Institute for Physics, Heidelberg University, 69117 Heidelberg, Germany
3 Research Station Geometry & Dynamics, Institute for Mathematics, Heidelberg University, 69117 Heidelberg, Germany
* Author to whom correspondence should be addressed.
Entropy 2023, 25(9), 1253; https://doi.org/10.3390/e25091253
Submission received: 3 July 2023 / Revised: 9 August 2023 / Accepted: 14 August 2023 / Published: 23 August 2023
(This article belongs to the Section Quantum Information)

Abstract: This paper introduces assignment flows for density matrices as state spaces for the representation and analysis of data associated with the vertices of an underlying weighted graph. Determining an assignment flow by geometric integration of the defining dynamical system causes an interaction of the non-commuting states across the graph and the assignment of a pure (rank-one) state to each vertex after convergence. Adopting the Riemannian Bogoliubov–Kubo–Mori metric from information geometry leads to closed-form local expressions that can be computed efficiently and implemented in a fine-grained parallel manner. Restriction to the submanifold of commuting density matrices recovers the assignment flows for categorical probability distributions, which merely assign labels from a finite set to each data point. As shown for these flows in our prior work, the novel class of quantum state assignment flows can also be characterized as Riemannian gradient flows with respect to a non-local, non-convex potential after proper reparameterization and under mild conditions on the underlying weight function. This weight function generates the parameters of the layers of a neural network corresponding to and generated by each step of the geometric integration scheme. Numerical results indicate and illustrate the potential of the novel approach for data representation and analysis, including the representation of correlations of data across the graph by entanglement and tensorization.

1. Introduction

1.1. Overview and Motivation

A basic task of data analysis is the categorization of observed data. We consider the following scenario: on a given undirected, weighted graph $G = (V, E, w)$, data $D_i \in \mathcal{X}$ are observed as points in a metric space $(\mathcal{X}, d_{\mathcal{X}})$ at each vertex $i \in V$. Categorization means to determine an assignment

$$D_i \;\mapsto\; j \in \{1,\dots,c\} =: [c]$$

of a class label $j$ among a finite set of labels to each data point $D_i$. Depending on the application, labels carry a specific meaning, e.g., type of tissue in medical imaging data, object type in computer vision or land use in remote sensing data. The decision at any vertex typically depends on decisions at other vertices. Thus, the overall task of labeling data on a graph constitutes a particular form of structured prediction in the field of machine learning [1].
Assignment flows denote a particular class of approaches for data labeling on graphs [2,3]. The basic idea is to represent each possible label assignment at vertex $i \in V$ by an assignment vector $S_i \in \Delta_c$ in the standard probability simplex, the vertices of which encode the unique label assignment for every label by the corresponding unit vector $e_j$, $j \in [c]$. Data labeling is accomplished by computing the flow $S(t)$ of the dynamical system:
$$\dot S = \mathcal{R}_S[\Omega S],\qquad S(0) = S_0,$$
with the row-stochastic matrix $S(t)$ and row vectors $S_i(t)$ as the state, which, under mild conditions, converges to unique label assignment vectors (unit vectors) at every vertex $i \in V$ [4]. The vector field on the right-hand side of Equation (2) is parameterized by parameters collected in a matrix $\Omega$. These parameters strongly affect the contextual label assignments. They can be learned from data in order to take into account typical relations of data in the current field of application [5]. For a demonstration of the application of this approach to a challenging medical imaging problem, we refer to [6].
From a geometric viewpoint, system (2) can be characterized as a collection of individual flows $S_i(t)$ at each vertex that are coupled by the $\Omega$ parameters. Each individual flow is determined by a replicator equation, which constitutes a basic class of dynamical systems known from evolutionary game theory [7,8]. By restricting each vector $S_i(t)$ to the relative interior $\mathring\Delta_c$ of the probability simplex (i.e., the set of strictly positive discrete probability vectors) and by turning this convex set into a statistical manifold equipped with the Fisher–Rao geometry [9], the assignment flow (2) becomes a Riemannian ascent flow on the corresponding product manifold. The underlying information geometry is not only important for making the flow converge to unique label assignments but also for the design of efficient algorithms that actually determine the assignments [10]. For extensions of the basic assignment flow approach to unsupervised scenarios of machine learning and for an in-depth discussion of connections to other closely related work on structured prediction on graphs, we refer to [11,12,13].
In this paper, we study a novel and substantial generalization of assignment flows from a different point of view: assignment of labels to metric data, where the labels are elements of a continuous set. This requires replacing the simplex $\Delta_c$ as state space, which can only represent assignments of labels from a finite set. The substitutes for the assignment vectors $S_i$, $i \in V$, are Hermitian positive definite density matrices $\rho_i$, $i \in V$, with unit trace:

$$\mathcal{D}_c = \{\rho \in \mathbb{C}^{c\times c} :\ \rho = \rho^* \succ 0,\ \operatorname{tr}\rho = 1\}.$$
Accordingly, the finite set of unit vectors $e_j$, $j \in [c]$ (the vertices of $\Delta_c$) is replaced by rank-one density matrices $\rho$, also known as pure states in quantum mechanics [14]. The resulting quantum state assignment flow (QSAF)

$$\dot\rho = \mathcal{R}_\rho\big[\Omega[\rho]\big],\qquad \rho(0) = \rho_0,$$
consists of a system of nonlinear first-order differential equations whose solution $\rho(t)$ evolves on a corresponding product of state spaces $\mathcal{D}_c$ as given by Equation (3), with a linear averaging operator $\Omega[\cdot]$ and a generalized replicator operator $\mathcal{R}_\rho[\cdot]$ that is linear with respect to the argument $\Omega[\rho]$ and nonlinear with respect to $\rho$ (cf. Equation (64)). The similarity of Equations (4) and (2) can be attributed to the common underlying design strategy. System (4) couples the individual evolutions $\rho_i(t)$ at each vertex $i \in V$ through the $\Omega$ parameters, and the underlying information geometry causes convergence of each $\rho_i(t)$ towards a pure state. Using a different state space $\mathcal{D}_c$ (rather than $\mathring\Delta_c$ in Equation (2)) requires adopting a different Riemannian metric, which results in a corresponding definition of the operator $\mathcal{R}_\rho$.
Our approach is natural in that restricting Equation (4) to diagonal density matrices results in Equation (2) after identifying each vector $\operatorname{diag}(\rho_i)$ of diagonal entries of the density matrix $\rho_i$ with an assignment vector $S_i \in \mathring\Delta_c$. Conversely, Equation (4) considerably generalizes Equation (2) and enhances modeling expressivity due to the non-commutative interaction of the state spaces $\rho_i$, $i \in V$, across the underlying graph $G$ when the quantum state assignment flow is computed by applying geometric numerical integration to Equation (4).
We regard our approach merely as an approach to data representation and analysis rather than a contribution to quantum mechanics. For example, dynamics Equation (4) clearly differs from the Hamiltonian evolution of quantum systems, yet we adopt the term “quantum state”, since not only density matrices as state spaces but also the related information geometry are largely motivated by quantum mechanics and quantum information theory [9,15].

1.2. Contribution and Organization

Section 2 summarizes the information geometry of both the statistical manifold of categorical distributions and the manifold of strictly positive definite density matrices. Section 3 summarizes the assignment flow approach (2) as a reference for the subsequent generalization to Equation (4). This generalization is the main contribution of this paper and is presented in Section 4. Each row of Table 1 specifies the section where an increasingly general version of the original assignment flow (left column) is generalized to the corresponding quantum state assignment flow (right column, same row).
Alternative metrics on the positive definite matrix manifold that have been used in the literature are reviewed in Section 2.3 in order to position our approach from this point of view. In Section 5, we describe some academic experiments that we conducted to illustrate the properties of the novel approach. Working out a particular scenario of data analysis is beyond the scope of this paper. We conclude and indicate directions of further work in Section 6. For ease of reading, proofs are listed in Appendix A.
This paper considerably elaborates the short preliminary conference version [16].

1.3. Basic Notation

For the reader’s convenience, below, we specify the basic notational conventions used in this paper.
$[c]$             $\{1, 2, \dots, c\}$, $c \in \mathbb{N}$
$\mathbb{1}_c$             $(1, 1, \dots, 1)^\top \in \mathbb{R}^c$
$\mathbb{R}^c_+$             $\{x \in \mathbb{R}^c : x_i \geq 0,\ i \in [c]\}$
$\mathbb{R}^c_{++}$             $\{x \in \mathbb{R}^c : x_i > 0,\ i \in [c]\}$
$e_1, e_2, \dots$            Canonical basis vectors of $\mathbb{R}^c$
$\langle u, v\rangle$            Euclidean inner vector product
$\|u\|$            Euclidean norm $\sqrt{\langle u, u\rangle}$
$I_c$            Unit matrix of $\mathbb{R}^{c\times c}$
$p \cdot q$            Component-wise vector multiplication $(p \cdot q)_i = p_i q_i$, $i \in [c]$, $p, q \in \mathbb{R}^c$
$\frac{q}{p}$            Component-wise division $\big(\frac{q}{p}\big)_i = \frac{q_i}{p_i}$, $i \in [c]$, $q \in \mathbb{R}^c$, $p \in \mathbb{R}^c_{++}$
$\mathcal{H}_c$            Space of Hermitian $c \times c$ matrices (cf. (22))
$\operatorname{tr}(A)$            Trace $\sum_i A_{ii}$ of matrix $A$
$\langle A, B\rangle$            Matrix inner product $\operatorname{tr}(A^* B)$, $A, B \in \mathcal{H}_c$
$[A, B]$            Commutator $AB - BA$
$\operatorname{Diag}(v)$            The diagonal matrix with vector $v$ as entries
$\operatorname{diag}(V)$            The vector of the diagonal entries of a square matrix $V$
$\exp_m$            The matrix exponential
$\log_m$            The matrix logarithm $\exp_m^{-1}$
$\Delta_c$            The set of discrete probability vectors of dimension $c$ (cf. (6))
$\mathcal{S}_c$            The relative interior of $\Delta_c$, i.e., the set of strictly positive probability vectors (cf. ( ))
$\mathcal{W}_c$            The product manifold $\mathcal{S}_c \times \dots \times \mathcal{S}_c$ (cf. ( ))
$\mathcal{P}_c$            The set of Hermitian positive definite $c \times c$ matrices (cf. (17))
$\mathcal{D}_c$            The subset of matrices in $\mathcal{P}_c$ whose trace is equal to 1 (cf. (18))
$\mathcal{Q}_c$            The product manifold $\mathcal{D}_c \times \dots \times \mathcal{D}_c$ (cf. (96))
$\mathbb{1}_{\mathcal{S}_c}$            Barycenter $\frac{1}{c}\mathbb{1}_c$ of the manifold $\mathcal{S}_c$
$\mathbb{1}_{\mathcal{W}_c}$            Barycenter $(\mathbb{1}_{\mathcal{S}_c}, \mathbb{1}_{\mathcal{S}_c}, \dots, \mathbb{1}_{\mathcal{S}_c})$ of the manifold $\mathcal{W}_c$
$\mathbb{1}_{\mathcal{D}_c}$            Matrix $\operatorname{Diag}(\mathbb{1}_{\mathcal{S}_c}) \in \mathcal{D}_c \subset \mathbb{C}^{c\times c}$
$g_p, g_W, g_\rho$            The Riemannian metrics on $\mathcal{S}_c$, $\mathcal{W}_c$, $\mathcal{D}_c$ (cf. (8), (54), (25))
$T_{c,0}, \mathcal{T}_{c,0}, \mathcal{H}_{c,0}$            The tangent spaces of $\mathcal{S}_c$, $\mathcal{W}_c$, $\mathcal{D}_c$ (cf. (10), (54), (21))
$\pi_{c,0}, \Pi_{c,0}$            Orthogonal projections onto $T_{c,0}$, $\mathcal{H}_{c,0}$ (cf. (11), (24))
$R_p, \mathcal{R}_W, \mathcal{R}_\rho$            Replicator operators associated with the assignment flows on $\mathcal{S}_c$, $\mathcal{W}_c$, $\mathcal{D}_c$, $\mathcal{Q}_c$ (cf. (12), (58), (64), (105))
$\partial$            Euclidean gradient operator: $\partial f(p) = \big(\partial_{p_1} f(p), \partial_{p_2} f(p), \dots\big)^\top$
$\operatorname{grad}$            Riemannian gradient operator with respect to the Fisher–Rao metric
$\mathcal{R}_W[\cdot], \Omega[\cdot]$, etc.            Square brackets indicate a linear operator that acts in a non-standard way, e.g., row-wise on a matrix argument

2. Information Geometry

Information geometry [17,18] is concerned with the representation of parametric probability distributions from a geometric viewpoint, e.g., the exponential family of distributions [19]. Specifically, an open convex set M of parameters of a probability distribution becomes a Riemannian manifold ( M , g ) when equipped with a Riemannian metric g. The Fisher–Rao metric is the canonical choice due to its invariance properties with respect to reparameterization [20]. A closely related scenario concerns the representation of the interior of compact convex bodies as Riemannian manifolds ( M , g ) due to the correspondence between compactly supported Borel probability measures and an affine equivalence class of convex bodies [21].
A key ingredient of information geometry is the so-called α-family of affine connections introduced by Amari [17], which comprises the so-called e-connection $\nabla$ and m-connection $\nabla^*$ as special cases. These connections are torsion-free and dual to each other in the sense that they jointly satisfy the equation that uniquely characterizes the Levi–Civita connection as a metric connection [17] (Definition 3.1, Theorem 3.1). Regarding numerical computations, working with the exponential map induced by the e-connection is particularly convenient, since its domain is the entire tangent space. We refer to [9,22,23] for further reading and to [24] and [9] (Chapter 7) for the specific case of quantum state spaces.
In this paper, we are concerned with two classes of convex sets:
  • The relative interior of probability simplices, each of which represents the categorical (discrete) distributions of the corresponding dimension; and 
  • The set of positive definite symmetric matrices with trace one.
Section 2.1 and Section 2.2 introduce the information geometry for the former and the latter class of sets, respectively.

2.1. Categorical Distributions

We set

$$[c] := \{1, 2, \dots, c\},\qquad c \in \mathbb{N},$$

and denote the probability simplex of distributions on $[c]$ by

$$\Delta_c := \Big\{p \in \mathbb{R}^c_+ : \langle\mathbb{1}_c, p\rangle = \sum_{i\in[c]} p_i = 1\Big\},\qquad \mathbb{1}_c := (1, 1, \dots, 1)^\top \in \mathbb{R}^c.$$
Its relative interior equipped with the Fisher–Rao metric becomes the Riemannian manifold $(\mathcal{S}_c, g)$,

$$\mathcal{S}_c := \operatorname{rint}\Delta_c = \{p \in \Delta_c : p_i > 0,\ i \in [c]\},$$

$$g_p(u, v) := \sum_{i\in[c]}\frac{u_i v_i}{p_i} = \big\langle u, \operatorname{Diag}(p)^{-1}v\big\rangle,\qquad u, v \in T_{c,0},\quad p \in \mathcal{S}_c,$$

with the trivial tangent bundle given by

$$T\mathcal{S}_c \cong \mathcal{S}_c \times T_{c,0}$$

and the tangent space

$$T_{c,0} := T_{\mathbb{1}_{\mathcal{S}_c}}\mathcal{S}_c = \{v \in \mathbb{R}^c : \langle\mathbb{1}_c, v\rangle = 0\}.$$

The orthogonal projection onto $T_{c,0}$ is denoted by

$$\pi_{c,0} : \mathbb{R}^c \to T_{c,0},\qquad \pi_{c,0}v := v - \tfrac{1}{c}\langle\mathbb{1}_c, v\rangle\mathbb{1}_c = \big(I_c - \mathbb{1}_c\mathbb{1}_{\mathcal{S}_c}^\top\big)v.$$
The mapping defined next plays a major role in all dynamical systems under consideration in this paper.
Definition 1
(replicator operator).  The replicator operator is the linear mapping of the tangent space

$$R : \mathcal{S}_c \times T_{c,0} \to T_{c,0},\qquad R_p v := \big(\operatorname{Diag}(p) - pp^\top\big)v,\qquad p \in \mathcal{S}_c,\ v \in T_{c,0},$$

parameterized by $p \in \mathcal{S}_c$.
The name ‘replicator’ is due to the role of this mapping in evolutionary game theory; see Remark 2 below.
Proposition 1
(properties of $R_p$). Mapping (12) satisfies

$$R_p\mathbb{1}_c = 0,$$

$$\pi_{c,0}R_p = R_p\pi_{c,0} = R_p,\qquad p \in \mathcal{S}_c.$$

Furthermore, let $f : \mathcal{S}_c \to \mathbb{R}$ be a smooth function and $\tilde f : U \to \mathbb{R}$ a smooth extension of $f$ to an open neighborhood $U$ of $\mathcal{S}_c \subset \mathbb{R}^c$ with $\tilde f|_{\mathcal{S}_c} = f$. Then the Riemannian gradient of $f$ with respect to the Fisher–Rao metric (8) is given by

$$\operatorname{grad} f(p) = R_p\,\partial\tilde f(p).$$
Proof. 
Appendix A.1    □
Remark 1.
Equations (15) and (A24) show that the replicator operator R p is the inverse metric tensor with respect to the Fisher–Rao metric (8), as expressed in the ambient coordinates.
The exponential map induced by the e-connection is defined on the entire space $T_{c,0}$ and reads [22]

$$\operatorname{Exp} : \mathcal{S}_c \times T_{c,0} \to \mathcal{S}_c,\qquad \operatorname{Exp}_p(v) := \frac{p \cdot e^{\frac{v}{p}}}{\big\langle p, e^{\frac{v}{p}}\big\rangle},\qquad p \in \mathcal{S}_c,\ v \in T_{c,0}.$$
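For concreteness, the following minimal NumPy sketch (ours, not part of the paper; all function names are hypothetical) implements the projection (11), the replicator operator (12) and the exponential map (16):

```python
import numpy as np

def pi_c0(v):
    """Orthogonal projection onto T_{c,0} = {v : <1_c, v> = 0}, Eq. (11)."""
    return v - v.mean()

def replicator(p, v):
    """R_p v = (Diag(p) - p p^T) v, Eq. (12)."""
    return p * v - np.dot(p, v) * p

def Exp(p, v):
    """Exponential map Exp_p(v) of the e-connection, Eq. (16)."""
    q = p * np.exp(v / p)
    return q / q.sum()

# A tangent vector pointing towards the first vertex moves the barycenter there.
c = 4
p = np.full(c, 1.0 / c)                      # barycenter 1_{S_c}
v = pi_c0(np.array([3.0, 0.0, 0.0, 0.0]))    # tangent direction
print(Exp(p, v))                             # most mass on the first label
```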

2.2. Density Matrices

We denote the open convex cone of Hermitian positive definite matrices by

$$\mathcal{P}_c := \{\rho \in \mathbb{C}^{c\times c} : \rho = \rho^* \succ 0\}$$

and the manifold of strictly positive definite density matrices by

$$\mathcal{D}_c := \{\rho \in \mathcal{P}_c : \operatorname{tr}\rho = 1\},$$

i.e., $\mathcal{D}_c$ is the intersection of $\mathcal{P}_c$ and the hyperplane defined by the trace-one constraint. Its closure $\overline{\mathcal{D}}_c$ is convex and compact. Without loss of generality, we can identify the space $\mathcal{D}_c$ with the space of invertible density operators, in the sense of quantum mechanics, on the finite-dimensional Hilbert space $\mathbb{C}^c$. Any matrix ensemble of the form

$$\{M_i\}_{i\in[n]} \subset \overline{\mathcal{P}}_c :\qquad \sum_{i\in[n]} M_i = I_c$$

induces a probability distribution on $[n]$ via the Born rule

$$p \in \Delta_n :\qquad p_i = \langle M_i, \rho\rangle = \operatorname{tr}(M_i\rho),\qquad i \in [n].$$

An ensemble of the form (19) is called a positive operator-valued measure (POVM). We refer to [14] for the physical background and to [25] and references therein for the mathematical background.
The analog of Equation (10) is the tangent space, which at any point $\rho \in \mathcal{D}_c$ equals the space of traceless Hermitian matrices

$$\mathcal{H}_{c,0} := \mathcal{H}_c \cap \{X \in \mathbb{C}^{c\times c} : \operatorname{tr}X = 0\},$$

where

$$\mathcal{H}_c := \{X \in \mathbb{C}^{c\times c} : X^* = X\}.$$

Therefore, the manifold $\mathcal{D}_c$ has a trivial tangent bundle given by

$$T\mathcal{D}_c = \mathcal{D}_c \times \mathcal{H}_{c,0},$$

with the tangent space $\mathcal{H}_{c,0} = T_{\mathbb{1}_{\mathcal{D}_c}}\mathcal{D}_c$ defined in Equation (21). The corresponding orthogonal projection onto the tangent space $\mathcal{H}_{c,0}$ reads

$$\Pi_{c,0} : \mathcal{H}_c \to \mathcal{H}_{c,0},\qquad \Pi_{c,0}[X] := X - \frac{\operatorname{tr}X}{c}I_c.$$
Equipping the manifold $\mathcal{D}_c$ as defined in Equation (18) with the Bogoliubov–Kubo–Mori (BKM) metric [26] results in a Riemannian manifold $(\mathcal{D}_c, g)$. Using $T_\rho\mathcal{D}_c \cong \mathcal{H}_{c,0}$, this metric can be expressed by

$$g_\rho(X, Y) := \int_0^\infty \operatorname{tr}\big(X(\rho + \lambda I)^{-1}Y(\rho + \lambda I)^{-1}\big)\,d\lambda,\qquad X, Y \in \mathcal{H}_{c,0},\quad \rho \in \mathcal{D}_c.$$

This metric uniquely ensures the existence of a symmetric e-connection $\nabla$ on $\mathcal{D}_c$ that is mutually dual to its m-connection $\nabla^*$ in the sense of information geometry, leading to a dually flat structure $(g, \nabla, \nabla^*)$ [27] and [9] (Theorem 7.1).

The following map and its inverse, defined in terms of the matrix exponential $\exp_m$ and its inverse $\log_m = \exp_m^{-1}$, are convenient:

$$\mathbb{T} : \mathcal{D}_c \times \mathcal{H}_c \to \mathcal{H}_c,$$

$$\mathbb{T}_\rho[X] := \frac{d}{dt}\log_m(\rho + tX)\Big|_{t=0} = \int_0^\infty(\rho + \lambda I)^{-1}X(\rho + \lambda I)^{-1}\,d\lambda,$$

$$\mathbb{T}^{-1}_\rho[X] = \frac{d}{dt}\exp_m(H + tX)\Big|_{t=0} = \int_0^1 \rho^{1-\lambda}X\rho^{\lambda}\,d\lambda,\qquad \rho = \exp_m(H).$$

The inner product (25) may now be written in the form

$$g_\rho(X, Y) = \big\langle\mathbb{T}_\rho[X], Y\big\rangle,$$

since the trace is invariant with respect to cyclic permutations of a matrix product as its argument. Likewise,

$$\langle\rho, X\rangle = \operatorname{tr}(\rho X) = \operatorname{tr}\big(\mathbb{T}^{-1}_\rho[X]\big).$$
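In the eigenbasis $\rho = U\operatorname{Diag}(\lambda)U^*$, both integrals evaluate entry-wise in closed form: $\mathbb{T}_\rho$ divides the transformed entries by the logarithmic mean $m(\lambda_i, \lambda_j) = \frac{\lambda_i - \lambda_j}{\log\lambda_i - \log\lambda_j}$, and $\mathbb{T}^{-1}_\rho$ multiplies by it. The following sketch (ours; assuming a Hermitian input) uses this to evaluate both maps and the BKM metric (30):

```python
import numpy as np

def log_mean(x, y):
    """Logarithmic mean m(x, y) = (x - y)/(log x - log y), with m(x, x) = x."""
    same = np.isclose(x, y)
    num = np.where(same, 1.0, x - y)
    den = np.where(same, 1.0, np.log(x) - np.log(y))
    return np.where(same, x, num / den)

def T(rho, X):
    """T_rho[X] = int_0^oo (rho + s I)^{-1} X (rho + s I)^{-1} ds, Eq. (27)."""
    lam, U = np.linalg.eigh(rho)
    Xp = U.conj().T @ X @ U
    return U @ (Xp / log_mean(lam[:, None], lam[None, :])) @ U.conj().T

def T_inv(rho, X):
    """T_rho^{-1}[X] = int_0^1 rho^{1-s} X rho^s ds, Eq. (28)."""
    lam, U = np.linalg.eigh(rho)
    Xp = U.conj().T @ X @ U
    return U @ (Xp * log_mean(lam[:, None], lam[None, :])) @ U.conj().T

def bkm_metric(rho, X, Y):
    """g_rho(X, Y) = <T_rho[X], Y>, Eq. (30)."""
    return np.trace(T(rho, X).conj().T @ Y).real
```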
We also consider two subspaces of the tangent space $T_\rho\mathcal{D}_c$,

$$T^u_\rho\mathcal{D}_c := \big\{X \in \mathcal{H}_{c,0} : \exists\,\Omega\ \text{with}\ \Omega^* = -\Omega\ \text{such that}\ X = [\Omega, \rho]\big\},$$

$$T^c_\rho\mathcal{D}_c := \big\{X \in \mathcal{H}_{c,0} : [\rho, X] = 0\big\},$$

which yield the decomposition [9]

$$T_\rho\mathcal{D}_c = T^c_\rho\mathcal{D}_c \oplus T^u_\rho\mathcal{D}_c.$$
In Section 4.5, we use this decomposition to recover the assignment flow for categorical distributions from the quantum state assignment flow by restriction to a submanifold of commuting matrices.

2.3. Alternative Metrics and Geometries

The positive definite matrix manifold $\mathcal{P}_c$ (in this subsection, we confine ourselves to the case of real density matrices, as our main references for comparison only deal with real matrix manifolds) has become a tool for data modelling and analysis during the last two decades. Accordingly, a range of Riemannian metrics with varying properties exists. A major subclass is formed by the $O(n)$-invariant metrics, including the log-Euclidean, affine-invariant, Bures–Wasserstein and Bogoliubov–Kubo–Mori (BKM) metrics. We refer to [28] for a comprehensive recent survey.
This section provides a brief comparison of the BKM metric (25) adopted in this paper with two metrics often employed in the literature: the affine-invariant metric and the log-Euclidean metric, which may be regarded as ‘antipodal points’ in the space of metrics from the geometric and the computational viewpoint, respectively.

2.3.1. Affine-Invariant Metrics

The affine-invariant metric has been derived in various ways, e.g., based on the canonical matrix inner product on the tangent space [29] (Section 6) or as the Fisher–Rao metric on the statistical manifold of centered multivariate Gaussian densities [30]. The metric is given by

$$g_\rho(X, Y) = \operatorname{tr}\big(\rho^{-\frac12}X\rho^{-\frac12}\,\rho^{-\frac12}Y\rho^{-\frac12}\big) = \operatorname{tr}\big(\rho^{-1}X\rho^{-1}Y\big),\qquad \rho \in \mathcal{P}_c,\ X, Y \in T_\rho\mathcal{P}_c.$$

The exponential map with respect to the Levi–Civita connection reads

$$\exp^{(\mathrm{aff})}_\rho(X) = \rho^{\frac12}\exp_m\big(\rho^{-\frac12}X\rho^{-\frac12}\big)\rho^{\frac12},\qquad \rho \in \mathcal{P}_c,\ X \in T_\rho\mathcal{P}_c.$$
This Riemannian structure turns P c into a manifold with negative sectional curvature [31] (Chapter II.10), which is convenient from the geometric viewpoint due to uniquely defined Riemannian means and geodesic convexity [32] (Section 6.9). On the other hand, evaluating Equations (34) and (35) is computationally expensive, in particular when computing the quantum state assignment flow, which essentially involves geometric averaging.

2.3.2. Log-Euclidean Metric

The log-Euclidean metric introduced in [33] is the pullback of the canonical matrix inner product under the matrix logarithm and is given by

$$g_\rho(X, Y) = \big\langle d\log_m(\rho)[X],\ d\log_m(\rho)[Y]\big\rangle \overset{(27)}{=} \big\langle\mathbb{T}_\rho[X], \mathbb{T}_\rho[Y]\big\rangle,\qquad \rho \in \mathcal{P}_c,\ X, Y \in T_\rho\mathcal{P}_c.$$

The exponential map reads

$$\exp^{(\log)}_\rho(X) = \exp_m\big(\log_m(\rho) + \mathbb{T}_\rho[X]\big),\qquad \rho \in \mathcal{P}_c,\ X \in T_\rho\mathcal{P}_c,$$

and is much more convenient from the computational viewpoint. Endowed with this metric, the space $\mathcal{P}_c$ is isometric to a Euclidean space. Accordingly, the log-Euclidean metric is flat, and it is invariant merely under orthogonal transformations and dilations [28].

2.3.3. Comparison to the Bogoliubov–Kubo–Mori Metric

The BKM metric (Equations (25) and (30)), given by

$$g_\rho(X, Y) = \big\langle\mathbb{T}_\rho[X], Y\big\rangle,\qquad \rho \in \mathcal{P}_c,\ X, Y \in T_\rho\mathcal{P}_c,$$

looks similar to the log-Euclidean metric (36). Regarding both as members of the class of mean kernel metrics [28] (Definition 4.1) enables an intuitive comparison. For real-valued matrices, mean kernel metrics have the form

$$g_\rho(X, X) = g_D(X', X') = \sum_{i,j\in[c]}\frac{(X'_{ij})^2}{\phi(D_{ii}, D_{jj})},\qquad \rho = VDV^\top,\ V \in O(c),\ X' = V^\top XV,$$

with a diagonal matrix $D = \operatorname{Diag}(D_{11}, \dots, D_{cc})$ and a bivariate function $\phi(x, y) = a\,m(x, y)^\theta$, $a > 0$, in terms of a symmetric homogeneous mean $m : \mathbb{R}_+\times\mathbb{R}_+\to\mathbb{R}_+$. Regarding the log-Euclidean metric, $\phi(x, y) = \big(\frac{x - y}{\log x - \log y}\big)^2$, whereas for the BKM metric, $\phi(x, y) = \frac{x - y}{\log x - \log y}$, the logarithmic mean.
Taking the restriction to density matrices $\mathcal{D}_c \subset \mathcal{P}_c$ into account, one has the relation

$$\exp^{(\log)}_\rho(Y) = \operatorname{Exp}^{(e)}_\rho(X),\qquad \rho \in \mathcal{D}_c,\ X \in \mathcal{H}_{c,0},$$

$$Y = X - \log\Big(\operatorname{tr}\exp_m\big(\log_m(\rho) + \mathbb{T}_\rho[X]\big)\Big)\,\rho,$$

as explained in Remark 4. Here, the left-hand side of Equation (40) is the exponential map (37) induced by the log-Euclidean metric, and $\operatorname{Exp}^{(e)}_\rho$ is the exponential map with respect to the affine e-connection of information geometry, as detailed below in Proposition 4. This close relationship between the e-exponential map $\operatorname{Exp}^{(e)}_\rho$ and the exponential map of the log-Euclidean metric highlights the computational efficiency of using the BKM metric, which we adopt for our approach. This choice is also motivated by the lack of an explicit formula for the exponential map of the BKM metric with respect to the Levi–Civita connection [34]. To date, even the sign of the curvature remains unknown.
We note that to the best of our knowledge, the introduction of the affine connections of information geometry as surrogates of the Riemannian connection for any statistical manifold predates the introduction of the log-Euclidean metric for the specific space P c .

3. Assignment Flows

The assignment flow approach was informally introduced in Section 1. In this section, we summarize the mathematical ingredients of this approach as a reference for the subsequent generalization to quantum states (density matrices) in Section 4. Section 3.1 and Section 3.2 introduce the assignment flow on a single vertex and on an arbitrary graph, respectively. A reparameterization turns the latter into a Riemannian gradient flow (Section 3.3). Throughout this section, we refer to definitions and notions introduced in Section 2.1.

3.1. Single-Vertex Assignment Flow

Let $D = (D_1, \dots, D_c)^\top \in \mathbb{R}^c$ and consider the task of picking the smallest component of $D$. Formulating this operation as an optimization problem amounts to evaluating the negative support function, in the sense of convex analysis [35] (p. 28), of the probability simplex $\Delta_c$ at $D$,

$$\min_{j\in[c]}\{D_1, \dots, D_c\} = -\max_{p\in\Delta_c}\langle -D, p\rangle.$$

In practice, the vector $D$ represents real-valued noisy measurements at some vertex $i \in V$ of an underlying graph $G = (V, E)$ and is therefore in a “general position”, that is, the minimal component is unique. If $j^* \in [c]$ indexes the minimal component $D_{j^*}$, then the corresponding unit vector $p^* = e_{j^*}$ maximizes the right-hand side of (42). Assignment vectors assign a label (index) to observed data vectors.
If $D$ varies, the operation (42) is non-smooth. In view of the desired interaction of label assignments across the graph (cf. Section 3.2), we therefore replace this operation by a smooth dynamical system whose solution converges to the desired assignment vector. To this end, the vector $D$ is represented on $\mathcal{S}_c$ as a likelihood vector

$$L_p(D) := \exp_p(-\pi_{c,0}D) \overset{(14)}{=} \exp_p(-D),\qquad p \in \mathcal{S}_c,$$

where

$$\exp : \mathcal{S}_c \times T_{c,0} \to \mathcal{S}_c,\qquad \exp_p(v) := \operatorname{Exp}_p\circ R_p(v) = \frac{p \cdot e^v}{\langle p, e^v\rangle},\qquad p \in \mathcal{S}_c.$$

The single-vertex assignment flow equation reads

$$\dot p = R_p L_p(D) = p \cdot \big(L_p(D) - \langle p, L_p(D)\rangle\mathbb{1}_c\big),\qquad p(0) = \mathbb{1}_{\mathcal{S}_c}.$$
Its solution p ( t ) converges to the vector that solves the label assignment problem (42) (see Corollary 1 below).
Remark 2
(replicator equation). Differential equations of the form (45), with some $\mathbb{R}^c$-valued function $F(p)$ in place of $L_p(D)$, are known as replicator equations in evolutionary game theory [7].
Lemma 1.
Let $p \in \mathcal{S}_c$. Then the differentials of the mapping (44) with respect to $v$ and $p$ are given by

$$d_v\exp_p(v)[u] = R_{\exp_p(v)}u,$$

$$d_p\exp_p(v)[u] = R_{\exp_p(v)}\frac{u}{p},\qquad p \in \mathcal{S}_c,\ u, v \in T_{c,0}.$$
Proof. 
Appendix A.2.    □
Theorem 1
(single-vertex assignment flow). The single-vertex assignment flow Equation (45) is equivalent to the system

$$\dot p = R_p q,\qquad p(0) = \mathbb{1}_{\mathcal{S}_c},$$

$$\dot q = R_q q,\qquad q(0) = L_{\mathbb{1}_{\mathcal{S}_c}}(D),$$

with solution given by

$$p(t) = \exp_{\mathbb{1}_{\mathcal{S}_c}}\Big(\int_0^t q(\tau)\,d\tau\Big).$$
Proof. 
Appendix A.2.    □
Corollary 1
(single-vertex label assignment). Let $J^* := \arg\min_{j\in[c]}\{D_j : j \in [c]\} \subseteq [c]$. Then the solution $p(t)$ to (45) satisfies

$$\lim_{t\to\infty}p(t) = \frac{1}{|J^*|}\sum_{j\in J^*}e_j \in \arg\max_{p\in\Delta_c}\langle -D, p\rangle.$$

In particular, if $D$ has a unique minimal component $D_{j^*}$, then $p(t) \to e_{j^*}$ as $t \to \infty$.
Proof. 
Appendix A.2.    □
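As a numerical illustration of Corollary 1 (our sketch, not from the paper), plain Euler steps of Equation (45) started at the barycenter converge to the unit vector indexing the smallest component of $D$:

```python
import numpy as np

def exp_map(p, v):
    """exp_p(v) = p * e^v / <p, e^v>, Eq. (44)."""
    q = p * np.exp(v)
    return q / q.sum()

def likelihood(p, D):
    """L_p(D) = exp_p(-D), Eq. (43)."""
    return exp_map(p, -D)

c = 5
D = np.array([0.7, 0.2, 0.9, 0.5, 0.8])   # hypothetical data vector
p = np.full(c, 1.0 / c)                   # p(0) = barycenter
for _ in range(2000):                     # Euler steps of p_dot = R_p L_p(D)
    L = likelihood(p, D)
    p = p + 0.05 * (p * L - np.dot(p, L) * p)
    p = np.clip(p, 1e-15, None); p /= p.sum()   # numerical safeguard
print(np.round(p, 3))   # -> (0, 1, 0, 0, 0), since D_2 = 0.2 is minimal
```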

3.2. Assignment Flows

The assignment flow approach consists of the weighted interaction, as defined below, of single-vertex assignment flows associated with the vertices $i \in V$ of a weighted graph $G = (V, E, \omega)$ with a non-negative weight function

$$\omega : E \to \mathbb{R}_+,\qquad ik \mapsto \omega_{ik}.$$

The assignment vectors are denoted by $W_i$, $i \in V$, and form the row vectors of a row-stochastic matrix

$$W \in \mathcal{W}_c := \underbrace{\mathcal{S}_c \times \dots \times \mathcal{S}_c}_{|V|\ \text{factors}}.$$
The product space $\mathcal{W}_c$ is called the assignment manifold $(\mathcal{W}_c, g)$, where the metric $g$ is defined by applying (8) row-wise,

$$g_W(U, V) := \sum_{i\in V}g_{W_i}(U_i, V_i),\qquad U, V \in \mathcal{T}_{c,0} := T_{c,0} \times \dots \times T_{c,0}.$$
The assignment flow equation generalizing (45) reads

$$\dot W = \mathcal{R}_W[S(W)],$$

where the similarity vectors

$$S_i(W) := \operatorname{Exp}_{W_i}\Big(\sum_{k\in\mathcal{N}_i}\omega_{ik}\operatorname{Exp}^{-1}_{W_i}\big(L_{W_k}(D_k)\big)\Big),\qquad i \in V,$$

form the row vectors of the matrix $S(W) \in \mathcal{W}_c$. The neighborhoods

$$\mathcal{N}_i := \{i\} \cup \{k \in V : ik \in E\}$$

are defined by the adjacency relation of the underlying graph $G$, and $\mathcal{R}_W[\cdot]$ of Equation (55) applies Equation (12) row-wise,

$$\mathcal{R}_W[S(W)]_i = R_{W_i}S_i(W),\qquad i \in V.$$
Note that the similarity vectors $S_i(W)$ given by (56) result from geometric weighted averaging of the velocity vectors $\operatorname{Exp}^{-1}_{W_i}\big(L_{W_k}(D_k)\big)$. The velocities represent the given data $D_i$, $i \in V$, via the likelihood vectors $L_{W_i}(D_i)$ given by (43). Each choice of the weights $\omega_{ik}$ in (56), associated with every edge $ik \in E$, defines an assignment flow $W(t)$ solving (55). Thus, these weight parameters determine how individual label assignments by (43) and (45) are regularized.
Well-posedness, stability and quantitative estimates of basins of attraction to integral label assignment vectors were established in [4]. Reliable and efficient algorithms for numerical computation of the assignment flow were devised in [10].
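For illustration, a sketch of the similarity map (56) (ours; the helper names are hypothetical), using the closed-form inverse $\operatorname{Exp}_p^{-1}(q) = R_p\log\frac{q}{p}$ of the map (16):

```python
import numpy as np

def replicator(p, u):
    return p * u - np.dot(p, u) * p

def Exp(p, v):
    q = p * np.exp(v / p)
    return q / q.sum()

def Exp_inv(p, q):
    """Exp_p^{-1}(q) = R_p log(q/p), the inverse of Eq. (16)."""
    return replicator(p, np.log(q / p))

def likelihood(p, D):
    q = p * np.exp(-D)
    return q / q.sum()

def similarity(W, D, Omega):
    """Row-wise similarity map S(W), Eq. (56); W, D: (n, c), Omega: (n, n)."""
    S = np.empty_like(W)
    for i in range(W.shape[0]):
        V = sum(Omega[i, k] * Exp_inv(W[i], likelihood(W[k], D[k]))
                for k in range(W.shape[0]) if Omega[i, k] > 0)
        S[i] = Exp(W[i], V)
    return S

# Three vertices on a path graph, two labels, uniform weights.
W = np.full((3, 2), 0.5)
D = np.array([[0.1, 0.9], [0.8, 0.2], [0.2, 0.8]])
Omega = np.array([[1/2, 1/2, 0], [1/3, 1/3, 1/3], [0, 1/2, 1/2]])
print(similarity(W, D, Omega))
```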

3.3. Reparameterized Assignment Flows

In [36] (Proposition 3.6), the following parameterization of the general assignment flow Equation (55) was introduced; it generalizes the parameterization (48) and (49) of the single-vertex assignment flow (45):

$$\dot W = \mathcal{R}_W[\bar S],\qquad W(0) = \mathbb{1}_{\mathcal{W}_c},$$

$$\dot{\bar S} = \mathcal{R}_{\bar S}[\Omega\bar S],\qquad \bar S(0) = S(\mathbb{1}_{\mathcal{W}_c}),$$

with the non-negative weight matrix corresponding to the weight function (52),

$$\Omega = (\Omega_1, \dots, \Omega_{|V|})^\top \in \mathbb{R}^{|V|\times|V|},\qquad \Omega_{ik} := \begin{cases}\omega_{ik}, & \text{if } k \in \mathcal{N}_i,\\ 0, & \text{otherwise}.\end{cases}$$
In terms of (60), this formulation reveals the “essential” part of the assignment flow equation, since (59) depends on (60) but not vice versa. Furthermore, the data and weights show up only in the initial point and in the vector field on the right-hand side of (60), respectively.
Henceforth, we solely focus on (60), rewritten for convenience as

$$\dot S = \mathcal{R}_S[\Omega S],\qquad S(0) = S_0,$$

where $S_0$ comprises the similarity vectors (56) evaluated at the barycenter $W = \mathbb{1}_{\mathcal{W}_c}$.

4. Quantum State Assignment Flows

In this section, we generalize the assignment flow Equations (55) and (62) to the product manifold Q c of density matrices as state space. The resulting equations have a similar mathematical form. Their derivation requires:
  • Determination of the form of the Riemannian gradient of functions $f : \mathcal{D}_c \to \mathbb{R}$ with respect to the BKM metric (25), the corresponding replicator operator, and the exponential mappings Exp and exp together with their differentials (Section 4.1);
  • Definition of the single-vertex quantum state assignment flow (Section 4.2);
  • Determination of the general quantum state assignment flow equation for an arbitrary graph (Section 4.3) and of its alternative parameterization (Section 4.4), which generalizes Formulation (62) of the assignment flow accordingly.
A natural question is: what does “label” mean for a generalized assignment flow evolving on the product manifold $\mathcal{Q}_c$ of density matrices? For the single-vertex quantum state assignment flow, i.e., without interaction of these flows on a graph, it turns out that the pure state corresponding to the minimal eigenvalue of the initial density matrix is assigned to the given data point (Proposition 5). Coupling non-commuting density matrices over the graph through the novel quantum state assignment flow therefore generates interesting, complex dynamics, as illustrated in Section 5. In Section 4.5, we show that the restriction of the novel quantum state assignment flow to commuting density matrices recovers the original assignment flow for discrete labels.
Throughout this section, we refer to definitions and notions introduced in Section 2.2.

4.1. Riemannian Gradient, Replicator Operator and Further Mappings

Proposition 2
(Riemannian gradient). Let $f : \mathcal{D}_c \to \mathbb{R}$ be a smooth function defined on the manifold (18) and $\tilde f : U \to \mathbb{R}$ a smooth extension of $f$ to an open neighborhood $U$ of $\mathcal{D}_c \subset \mathbb{C}^{c\times c}$ with $\tilde f|_{\mathcal{D}_c} = f$. Then its Riemannian gradient with respect to the BKM metric (25) is given by

$$\operatorname{grad}_\rho f = \mathbb{T}^{-1}_\rho[\partial\tilde f] - \langle\rho, \partial\tilde f\rangle\rho,$$

where $\mathbb{T}^{-1}_\rho$ is given by (28), and $\partial\tilde f$ is the ordinary gradient with respect to the Euclidean structure of the ambient space $\mathbb{C}^{c\times c}$.
Proof. 
Appendix A.3.    □
Comparing the result (63) with (15) motivates the following definition of the replicator map:

$$\mathcal{R}_\rho : \mathcal{H}_c \to \mathcal{H}_{c,0},\qquad \mathcal{R}_\rho[X] := \mathbb{T}^{-1}_\rho[X] - \langle\rho, X\rangle\rho,\qquad \rho \in \mathcal{D}_c.\qquad(\text{replicator map})$$
The following lemma shows that properties analogous to (14) hold for the map (64).
Lemma 2
(properties of $\mathcal{R}_\rho$). Let $\Pi_{c,0}$ denote the orthogonal projection (24). Then the replicator map (64) satisfies

$$\Pi_{c,0}\circ\mathcal{R}_\rho = \mathcal{R}_\rho\circ\Pi_{c,0} = \mathcal{R}_\rho,\qquad \rho \in \mathcal{D}_c.$$
Proof. 
Appendix A.3.    □
Lemma 2 shows that the replicator map (64) implicitly comprises the orthogonal projection onto the tangent space. This allows for the averaging in (109) without the necessity of explicit projection, which simplifies the notation and explains the larger domain H c of R ρ in (64).
Next, using the tangent space $\mathcal{H}_{c,0}$, we define a parameterization of the manifold $\mathcal{D}_c$ in terms of the mapping

$$\Gamma : \mathcal{H}_{c,0} \to \mathcal{D}_c,\qquad \Gamma(X) := \frac{\exp_m(X)}{\operatorname{tr}\exp_m(X)} = \exp_m\big(X - \psi(X)I\big),\qquad(\Gamma\ \text{map})$$

where

$$\psi(X) := \log\big(\operatorname{tr}\exp_m(X)\big).$$
The following lemma and proposition show that the domain of $\Gamma$ extends to $\mathbb{C}^{c\times c}$.
Lemma 3
(extension of $\Gamma$). The extension to $\mathbb{C}^{c\times c}$ of the mapping $\Gamma$ defined by (66) is well-defined and given by

$$\Gamma : \mathbb{C}^{c\times c} \to \mathcal{D}_c,\qquad \Gamma(Z) = \Gamma\big(\Pi_{c,0}[Z]\big).$$
Proof. 
Appendix A.3.    □
Proposition 3
(inverse of $\Gamma$). The map $\Gamma$ defined by (66) is bijective, with inverse

$$\Gamma^{-1} : \mathcal{D}_c \to \mathcal{H}_{c,0},\qquad \Gamma^{-1}(\rho) = \Pi_{c,0}[\log_m\rho].$$
Proof. 
Appendix A.3.    □
The following lemma provides the differentials of the mappings Γ and Γ 1 .
Lemma 4
(differentials d Γ and d Γ 1 ). Let H , X H c , 0 with Γ ( H ) = ρ and Y T H c , 0 H c , 0 . Then,
d Γ ( H ) [ Y ] = T ρ 1 Y ρ , Y I , ρ = Γ ( H ) ,
d Γ 1 ( ρ ) [ X ] = Π c , 0 T ρ [ X ] .
Proof. 
Appendix A.3.    □
We finally compute a closed-form expression of the e-geodesic, i.e., the geodesic and exponential map induced by the e-connection on the manifold $(\mathcal{D}_c, g)$.
Proposition 4
(e-geodesics). The e-geodesic emanating from $\rho \in \mathcal{D}_c$ in the direction $X \in \mathcal{H}_{c,0}$ and the corresponding exponential map are given by

$$\gamma^{(e)}_{\rho,X}(t) := \operatorname{Exp}^{(e)}_\rho(tX),\qquad t \geq 0,\qquad(e\text{-geodesic})$$

$$\operatorname{Exp}^{(e)}_\rho(X) := \Gamma\big(\Gamma^{-1}(\rho) + d\Gamma^{-1}(\rho)[X]\big)\qquad(\text{exponential map})$$

$$= \Gamma\big(\Gamma^{-1}(\rho) + \Pi_{c,0}\mathbb{T}_\rho[X]\big).$$
Proof. 
Appendix A.3.    □
Corollary 2
(inverse exponential map). The inverse of the exponential mapping (72) is given by

$$\big(\operatorname{Exp}^{(e)}_\rho\big)^{-1} : \mathcal{D}_c \to \mathcal{H}_{c,0},\qquad \big(\operatorname{Exp}^{(e)}_\rho\big)^{-1}(\mu) = d\Gamma\big(\Gamma^{-1}(\rho)\big)\big[\Gamma^{-1}(\mu) - \Gamma^{-1}(\rho)\big].$$
Proof. 
Appendix A.3.    □
Analogous to (44), we next define a mapping denoted by $\exp_\rho$, where both the subscript and the argument disambiguate the meaning of “exp”.
Lemma 5
(exp-map). The mapping defined using (73) and (64) by

$$\exp_\rho : \mathcal{H}_{c,0} \to \mathcal{D}_c,\qquad \exp_\rho(X) := \operatorname{Exp}^{(e)}_\rho\circ\mathcal{R}_\rho[X],\qquad \rho \in \mathcal{D}_c,\qquad(\text{exp map})$$

has the explicit form

$$\exp_\rho(X) = \Gamma\big(\Gamma^{-1}(\rho) + X\big).$$
Proof. 
Appendix A.3.    □
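The mappings $\Gamma$, $\Gamma^{-1}$ and $\exp_\rho$ admit a direct implementation via the matrix exponential and logarithm. A minimal sketch (ours), assuming real symmetric matrices for simplicity:

```python
import numpy as np
from scipy.linalg import expm, logm

def proj0(X):
    """Pi_{c,0}[X] = X - (tr X / c) I, Eq. (24)."""
    return X - (np.trace(X) / X.shape[0]) * np.eye(X.shape[0])

def Gamma(X):
    """Gamma(X) = expm(X) / tr expm(X), Eq. (66); projecting first
    realizes the extension (68) and leaves values on H_{c,0} unchanged."""
    E = expm(proj0(X))
    return E / np.trace(E)

def Gamma_inv(rho):
    """Gamma^{-1}(rho) = Pi_{c,0}[logm(rho)], Eq. (69)."""
    return proj0(logm(rho))

def exp_rho(rho, X):
    """exp_rho(X) = Gamma(Gamma^{-1}(rho) + X), Eq. (77)."""
    return Gamma(Gamma_inv(rho) + X)

# Round-trip check at a random state.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)); A = (A + A.T) / 2
rho = Gamma(A)
assert np.allclose(Gamma(Gamma_inv(rho)), rho)
```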
The following lemma provides the explicit form of the differential of the mapping (76) and (77), which resembles the corresponding Formula (46) of the assignment flow.
Lemma 6
(differential $d\exp_\rho$). The differential of the mapping (76) reads, with $\rho \in \mathcal{D}_c$, $X \in \mathcal{H}_{c,0}$ and $Y \in T_X\mathcal{H}_{c,0} \cong \mathcal{H}_{c,0}$,

$$d\exp_\rho(X)[Y] = \mathcal{R}_{\exp_\rho(X)}[Y].$$
Proof. 
Appendix A.3.    □
Remark 3
(comparing exp-maps, I). Since (78) resembles (46), one may wonder about the connection between (77) and (44). In view of (66), we define

$$\gamma : T_{c,0} \to \mathcal{S}_c,\qquad \gamma(v) := \frac{e^v}{\langle\mathbb{1}, e^v\rangle} = \exp_{\mathbb{1}_{\mathcal{S}_c}}(v)$$

and compute the expression for its inverse (cf. [36])

$$\gamma^{-1}(p) = \pi_{c,0}\log\frac{p}{\mathbb{1}_{\mathcal{S}_c}} = \pi_{c,0}(\log p - \log\mathbb{1}_{\mathcal{S}_c}) = \pi_{c,0}\log p \overset{(11)}{=} \log p - \langle\mathbb{1}_{\mathcal{S}_c}, \log p\rangle\mathbb{1}_c,$$

which resembles (69). Moreover, in view of (77), the analogous expression using $\gamma$ instead of $\Gamma$ reads

$$\gamma\big(\gamma^{-1}(p) + v\big) = \frac{e^{\pi_{c,0}\log p + v}}{\big\langle\mathbb{1}, e^{\pi_{c,0}\log p + v}\big\rangle} = \frac{e^{-\langle\mathbb{1}_{\mathcal{S}_c},\log p\rangle}\,p \cdot e^v}{e^{-\langle\mathbb{1}_{\mathcal{S}_c},\log p\rangle}\langle p, e^v\rangle} = \frac{p \cdot e^v}{\langle p, e^v\rangle} = \exp_p(v).$$
Remark 4
(comparing exp-maps–II). Using the above definitions and relations, we check Equation (40): exp ρ ( log ) ( Y ) = Exp ρ ( e ) ( X ) , where the relation (41) between Y and X can now be written in the following form:
Y = ( 67 ) X ψ log m ( ρ ) + T ρ [ X ] ρ .
Direct computation yields
exp ρ ( log ) ( Y ) = ( 37 ) exp m ( log m ( ρ ) + T ρ [ Y ] )
= ( 41 ) exp m log m ( ρ ) + T ρ [ X ] ψ log m ( ρ ) + T ρ [ X ] T ρ T ρ 1 [ I c ] = ρ = I c
= ( 66 ) ( 68 ) Γ Π c , 0 [ log m ( ρ ) ] + Π c , 0 T ρ [ X ] = Γ Γ 1 ( ρ ) + Π c , 0 T ρ [ X ]
= Exp ρ ( e ) ( X ) .

4.2. Single-Vertex Density Matrix Assignment Flow

We generalize the single-vertex assignment flow Equation (45) to the manifold $(\mathcal{D}_c, g_\rho)$ given by (18), equipped with the BKM metric (25).

In view of (43), the likelihood matrix is defined as

$$L_\rho : \mathcal{H}_c \to \mathcal{D}_c,\qquad L_\rho(D) := \exp_\rho\big(-\Pi_{c,0}[D]\big),\qquad \rho \in \mathcal{D}_c,$$

and the corresponding single-vertex quantum state assignment flow (SQSAF) equation reads

$$\dot\rho = \mathcal{R}_\rho[L_\rho(D)]\qquad(\text{SQSAF})$$

$$\overset{(64)}{=} \mathbb{T}^{-1}_\rho\big[L_\rho(D)\big] - \big\langle\rho, L_\rho(D)\big\rangle\rho,\qquad \rho(0) = \mathbb{1}_{\mathcal{D}_c} = \operatorname{Diag}(\mathbb{1}_{\mathcal{S}_c}).$$
Proposition 5 below specifies its properties after a preparatory Lemma.
Lemma 7.
Assume that

$$D = Q\Lambda_D Q^* \in \mathcal{H}_c\qquad\text{and}\qquad \rho = Q\Lambda_\rho Q^* \in \mathcal{D}_c$$

can be simultaneously diagonalized with unitary $Q$, $\Lambda_D = \operatorname{Diag}(\lambda_D)$, $\Lambda_\rho = \operatorname{Diag}(\lambda_\rho)$ and $\lambda_\rho \in \mathcal{S}_c$ (since $\operatorname{tr}\rho = 1$). Then

$$L_\rho(D) = Q\operatorname{Diag}\big(\exp_{\lambda_\rho}(-\lambda_D)\big)Q^*.$$
Proof. 
Appendix A.3.    □
Proposition 5
(SQSAF limit). Let D = Q Λ D Q be the spectral decomposition of D with eigenvalues λ 1 λ c and orthonormal eigenvectors of Q = ( q 1 , , q c ) . Assume that the minimal eigenvalue λ c is unique. Then, the solution ρ ( t ) to (90) satisfies
lim t ρ ( t ) = Π q c : = q c q c .
Proof. 
Appendix A.3.    □
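A numerical illustration of Proposition 5 (ours): geometric Euler steps of the flow (90), realized via $\Gamma$ as in Section 5.1, drive $\rho(t)$ towards the rank-one projector onto the eigenspace of the minimal eigenvalue of $D$:

```python
import numpy as np
from scipy.linalg import expm, logm

def proj0(X):
    return X - (np.trace(X) / X.shape[0]) * np.eye(X.shape[0])

def Gamma(X):
    E = expm(proj0(X))
    return E / np.trace(E)

def likelihood(rho, D):
    """L_rho(D) = exp_rho(-Pi_{c,0}[D]), Eq. (89)."""
    return Gamma(proj0(logm(rho)) - proj0(D))

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
D = (B + B.T) / 2                 # hypothetical real data matrix in H_c
rho = np.eye(4) / 4               # rho(0) = 1_{D_c}
for _ in range(3000):             # geometric Euler steps (cf. Section 5.1)
    rho = Gamma(proj0(logm(rho)) + 0.05 * likelihood(rho, D))

lam, Q = np.linalg.eigh(D)        # eigenvalues in ascending order
q = Q[:, 0]                       # eigenvector of the minimal eigenvalue
print(np.allclose(rho, np.outer(q, q), atol=1e-4))   # True: rho -> q q^T
```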

4.3. Quantum State Assignment Flow

This section describes our main result, i.e., the definition of a novel flow of coupled density matrices in terms of a parameterized interaction of single-vertex flows of the form (90) on a given graph $G = (V, E, \omega)$.

We assume the weight function $\omega : E \to \mathbb{R}_+$ to be non-negative, with $\omega_{ij} = 0$ if $ij \notin E$, and normalized,

$$\sum_{k\in\mathcal{N}_i}\omega_{ik} = 1,$$
where we adopt the notation (57) for the neighborhoods $\mathcal{N}_i$, $i \in V$. Analogous to (53), we define the product manifold

$$\boldsymbol{\rho} \in \mathcal{Q}_c := \underbrace{\mathcal{D}_c \times \dots \times \mathcal{D}_c}_{|V|\ \text{factors}},$$

where $\mathcal{D}_c$ is given by (18). The corresponding factors of $\boldsymbol{\rho}$ are denoted by

$$\boldsymbol{\rho} = (\rho_i)_{i\in V},\qquad \rho_i \in \mathcal{D}_c,\quad i \in V.$$

$\mathcal{Q}_c$ becomes a Riemannian manifold when equipped with the following metric:

$$g_{\boldsymbol{\rho}}(X, Y) := \sum_{i\in V}g_{\rho_i}(X_i, Y_i),\qquad X, Y \in T\mathcal{Q}_c := \mathcal{H}_{c,0} \times \dots \times \mathcal{H}_{c,0},$$

with $g_{\rho_i}$ given by (25) for each $i \in V$. We set

$$\mathbb{1}_{\mathcal{Q}_c} := (\mathbb{1}_{\mathcal{D}_c})_{i\in V} \in \mathcal{Q}_c,$$
with $\mathbb{1}_{\mathcal{D}_c}$ given by (91). Our next step is to define a similarity mapping analogous to (56),

$$S : V \times \mathcal{Q}_c \to \mathcal{D}_c,\qquad S_i(\boldsymbol{\rho}) := \operatorname{Exp}^{(e)}_{\rho_i}\Big(\sum_{k\in\mathcal{N}_i}\omega_{ik}\big(\operatorname{Exp}^{(e)}_{\rho_i}\big)^{-1}\big(L_{\rho_k}(D_k)\big)\Big),\qquad i \in V,$$

based on the mappings (73) and (89). Thanks to the use of the exponential map of the e-connection, the matrix $S_i(\boldsymbol{\rho})$ can be rewritten and computed in a simpler, more explicit form.
Lemma 8
(similarity map). Equation (100) is equivalent to

$$S_i(\boldsymbol{\rho}) = \Gamma\Big(\sum_{k\in\mathcal{N}_i}\omega_{ik}\big(\log_m\rho_k - D_k\big)\Big).$$
Proof. 
Appendix A.3.    □
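Lemma 8 makes the similarity map directly computable: the geometric average reduces to a weighted arithmetic mean of $\log_m\rho_k - D_k$ followed by $\Gamma$. A sketch (ours, restricted to real symmetric matrices for brevity):

```python
import numpy as np
from scipy.linalg import expm, logm

def Gamma(X):
    c = X.shape[0]
    E = expm(X - (np.trace(X) / c) * np.eye(c))
    return E / np.trace(E)

def similarity_i(rho_nbrs, D_nbrs, w_i):
    """S_i(rho) = Gamma(sum_k w_ik (logm rho_k - D_k)), Eq. (101).

    rho_nbrs, D_nbrs: states and data matrices of the neighbors k in N_i;
    w_i: corresponding weights, summing to one (cf. Eq. (95)).
    """
    A = sum(w * (logm(r) - Dk) for w, r, Dk in zip(w_i, rho_nbrs, D_nbrs))
    return Gamma(A)

# Example: two neighbors, uniform weights; the result is again a state.
rng = np.random.default_rng(0)
def rand_state(c=3):
    A = rng.standard_normal((c, c))
    return Gamma((A + A.T) / 2)
S_i = similarity_i([rand_state(), rand_state()],
                   [np.diag([0.2, 0.8, 0.5]), np.diag([0.3, 0.1, 0.6])],
                   [0.5, 0.5])
print(np.trace(S_i), np.linalg.eigvalsh(S_i).min() > 0)   # 1.0 True
```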
Expression (100), which defines the similarity map, looks like a single iterative step for computing the Riemannian center of mass of the likelihood matrices $\{L_{\rho_k}(D_k) : k \in \mathcal{N}_i\}$ if(!) the exponential map of the Riemannian (Levi–Civita) connection were used. Since the exponential map $\operatorname{Exp}^{(e)}$ is used instead, $S_i(\boldsymbol{\rho})$ may be interpreted as carrying out a single iterative step for the corresponding geometric mean on the manifold $\mathcal{D}_c$.
Therefore, a natural idea is to define the similarity map as this geometric mean itself rather than by a single iterative step. Surprisingly, analogous to the similarity map (56) for categorical distributions (cf. [3]), both definitions coincide, as shown next.
Proposition 6
(geometric mean property). Assume that $\bar\rho \in \mathcal{D}_c$ solves the equation

$$0 = \sum_{k\in\mathcal{N}_i}\omega_{ik}\big(\operatorname{Exp}^{(e)}_{\bar\rho}\big)^{-1}\big(L_{\rho_k}(D_k)\big),$$

which corresponds to the optimality condition for Riemannian centers of mass [32] (Lemma 6.9.4), except that a different exponential map is used. Then

$$\bar\rho = S_i(\boldsymbol{\rho}),$$

with the right-hand side given by (100).
Proof. 
Appendix A.3.    □
We are now in a position to define the quantum state assignment flow along the lines of the original assignment flow (55),

$$\dot{\boldsymbol{\rho}} = \mathcal{R}_{\boldsymbol{\rho}}[S(\boldsymbol{\rho})],\qquad \boldsymbol{\rho}(0) = \mathbb{1}_{\mathcal{Q}_c},\qquad(\text{QSAF})$$

where both the replicator map $\mathcal{R}_{\boldsymbol{\rho}}$ and the similarity map $S(\cdot)$ apply factor-wise,

$$S(\boldsymbol{\rho})_i = S_i(\boldsymbol{\rho}),$$

$$\mathcal{R}_{\boldsymbol{\rho}}[S(\boldsymbol{\rho})]_i = \mathcal{R}_{\rho_i}[S_i(\boldsymbol{\rho})],\qquad i \in V,$$

with the mappings $S_i$ given by (101) and $\mathcal{R}_{\rho_i}$ given by (64).

4.4. Reparameterization and Riemannian Gradient Flow

The reparameterization of the assignment flow (59) and (60) for categorical distributions described in Section 3.3 has proven useful for characterizing and analyzing assignment flows. Under suitable conditions on the parameter matrix $\Omega$, the flow is a Riemannian descent flow with respect to a non-convex potential [36] (Proposition 3.9) and has convenient stability and convergence properties [4].
In this section, we derive a similar reparameterization of the quantum state assignment flow (104).
Proposition 7
(reparametrization). Define the linear mapping

$$\Omega : \mathcal{Q}_c \to \mathcal{Q}_c,\qquad \Omega[\boldsymbol{\rho}]_i := \sum_{k\in\mathcal{N}_i}\omega_{ik}\rho_k.$$

Then the density matrix assignment flow Equation (104) is equivalent to the following system:

$$\dot{\boldsymbol{\rho}} = \mathcal{R}_{\boldsymbol{\rho}}[\boldsymbol{\mu}],\qquad \boldsymbol{\rho}(0) = \mathbb{1}_{\mathcal{Q}_c},$$

$$\dot{\boldsymbol{\mu}} = \mathcal{R}_{\boldsymbol{\mu}}\big[\Omega[\boldsymbol{\mu}]\big],\qquad \boldsymbol{\mu}(0) = S(\mathbb{1}_{\mathcal{Q}_c}).$$
Proof. 
Appendix A.3.    □
For the following, we adopt the symmetry assumption

$$\omega_{ij} = \omega_{ji},\qquad i, j \in V,$$

$$j \in \mathcal{N}_i \iff i \in \mathcal{N}_j,\qquad i, j \in V.$$

As a consequence, the mapping (107) is self-adjoint:

$$\langle\boldsymbol{\mu}, \Omega[\boldsymbol{\rho}]\rangle = \sum_{i\in V}\big\langle\mu_i, \Omega[\boldsymbol{\rho}]_i\big\rangle = \sum_{i\in V}\sum_{k\in\mathcal{N}_i}\omega_{ik}\langle\mu_i, \rho_k\rangle = \sum_{i\in V}\sum_{k\in\mathcal{N}_i}\omega_{ki}\langle\mu_i, \rho_k\rangle$$

$$= \sum_{k\in V}\sum_{i\in\mathcal{N}_k}\omega_{ki}\langle\mu_i, \rho_k\rangle = \sum_{k\in V}\big\langle\Omega[\boldsymbol{\mu}]_k, \rho_k\big\rangle = \langle\Omega[\boldsymbol{\mu}], \boldsymbol{\rho}\rangle.$$
Proposition 8
(Riemannian gradient QSAF flow). Suppose that the mapping $\Omega[\cdot]$ given by (107) is self-adjoint with respect to the canonical matrix inner product. Then the solution $\boldsymbol{\mu}(t)$ to (109) also solves

$$\dot{\boldsymbol{\mu}} = -\operatorname{grad}_{\boldsymbol{\mu}}J(\boldsymbol{\mu})\qquad\text{with}\qquad \big(\operatorname{grad}_{\boldsymbol{\mu}}J(\boldsymbol{\mu})\big)_i = \operatorname{grad}_{\mu_i}J(\boldsymbol{\mu}),$$

with respect to the potential

$$J(\boldsymbol{\mu}) := -\frac12\big\langle\boldsymbol{\mu}, \Omega[\boldsymbol{\mu}]\big\rangle.$$
Proof. 
Appendix A.3.    □
We conclude this section by rewriting the potential in a more explicit, informative form.
Proposition 9
(nonconvex potential). We define

$$L_G : \mathcal{Q}_c \to \mathcal{Q}_c,\qquad L_G := \operatorname{id} - \Omega,$$

with $\Omega$ given by (107). Then the potential (115) can be rewritten as

$$J(\boldsymbol{\mu}) = \frac12\big\langle\boldsymbol{\mu}, L_G[\boldsymbol{\mu}]\big\rangle - \frac12\|\boldsymbol{\mu}\|^2$$

$$= \frac14\sum_{i\in V}\sum_{j\in\mathcal{N}_i}\omega_{ij}\|\mu_i - \mu_j\|^2 - \frac12\|\boldsymbol{\mu}\|^2.$$
Proof. 
Appendix A.3.    □

4.5. Recovering the Assignment Flow for Categorical Distributions

In the following, we show how the assignment flow (62) for categorical distributions arises as a special case of the quantum state assignment flow under suitable conditions, as detailed below.
Definition 2
(commutative submanifold). Let

$$\Pi = \{\pi_i : i \in [l]\},\qquad l \leq c,$$

denote a set of operators that orthogonally project onto disjoint subspaces of $\mathbb{C}^c$,

$$\pi_i^2 = \pi_i,\qquad i \in [l],$$

$$\pi_i\pi_j = 0,\qquad i, j \in [l],\quad i \neq j,$$

and which are complete in the sense that

$$\sum_{i\in[l]}\pi_i = I_c.$$

Given a family $\Pi$ of operators, we define

$$\mathcal{D}_\Pi := \Big\{\sum_{i\in[l]}\frac{p_i}{\operatorname{tr}\pi_i}\pi_i : p \in \mathcal{S}_l\Big\} \subset \mathcal{D}_c,$$

the submanifold of commuting Hermitian matrices, which can be diagonalized simultaneously.
A typical example of a family (119) is

$$\Pi_U = \{\pi_i = u_iu_i^* : i \in [c]\},$$

where $U = \{u_1, \dots, u_c\}$ is an orthonormal basis of $\mathbb{C}^c$. The following lemma elaborates the bijection $\mathcal{D}_\Pi \cong \mathcal{S}_l$.
Lemma 9
(properties of $\mathcal{D}_\Pi$). Let $\mathcal{D}_\Pi \subset \mathcal{D}_c$ be given by (123) and denote the corresponding inclusion map by $\iota : \mathcal{D}_\Pi \to \mathcal{D}_c$. Then:
(a)
The submanifold $(\mathcal{D}_\Pi, \iota^*g^{\mathrm{BKM}})$ with the induced BKM metric is isometric to $(\mathcal{S}_l, g^{\mathrm{FR}})$;
(b)
If $\mu \in \mathcal{D}_\Pi$, then the tangent subspace $T_\mu\mathcal{D}_\Pi$ is contained in the subspace $T^c_\mu\mathcal{D}_c \subset T_\mu\mathcal{D}_c$ defined by (32);
(c)
Let $U = \{u_1, \dots, u_c\}$ denote an orthonormal basis of $\mathbb{C}^c$ such that for every $\pi_i \in \Pi$, $i \in [l]$, there are $u_{i_1}, \dots, u_{i_k} \in U$ that form a basis of $\operatorname{range}(\pi_i)$. Then there is an inclusion of commutative subsets $\mathcal{D}_\Pi \subset \mathcal{D}_{\Pi_U}$ that corresponds to an inclusion $\mathcal{S}_l \subset \mathcal{S}_c$.
Proof. 
Appendix A.3.    □
Now, we establish that a restriction of the QSAF Equation (109) to the commutative product submanifold can be expressed in terms of the AF Equation (62). Analogous to the definition (96) of the product manifold $\mathcal{Q}_c$, we set

$$\mathcal{D}_{\Pi,c} = \underbrace{\mathcal{D}_\Pi \times \dots \times \mathcal{D}_\Pi}_{|V|\ \text{factors}}.$$

If $\Pi$ is given by an orthonormal basis as in (124), we define the unitary matrices

$$U = (u_1, \dots, u_c) \in \mathrm{Un}(c),$$

$$\mathbf{U}_c = \underbrace{\operatorname{Diag}(U, \dots, U)}_{|V|\ \text{block-diagonal entries}}.$$
Proposition 10
(invariance of $\mathcal{D}_{\Pi,c}$). Let $\Pi$ and $\mathcal{D}_\Pi$ be given according to Definition 2. Then the following holds:
(i)
If $\boldsymbol{\mu} \in \mathcal{D}_{\Pi,c} \subset \mathcal{Q}_c$, then $\mathcal{R}_{\boldsymbol{\mu}}\big[\Omega[\boldsymbol{\mu}]\big] \in T_{\boldsymbol{\mu}}\mathcal{D}_{\Pi,c} \subset T_{\boldsymbol{\mu}}\mathcal{Q}_c$.
(ii)
If $\Pi_U$ has the form (124), then

$$\mathcal{R}_{\boldsymbol{\mu}}\big[\Omega[\boldsymbol{\mu}]\big] = \mathbf{U}_c\operatorname{Diag}\big(\mathcal{R}_S[\Omega S]\big)\mathbf{U}_c^*,$$

where $S \in \mathcal{W}_c$ is determined by $\mu_i = U\operatorname{Diag}(S_i)U^*$, $i \in V$.
In particular, the submanifold $\mathcal{D}_{\Pi,c}$ is preserved by the quantum state assignment flow.
Proof. 
Appendix A.3.    □
It remains to be verified that, under suitable conditions on the data matrices $D_i$, $i \in V$, which define the initial point of (109) via the similarity mapping (Lemma 8), the quantum state assignment flow reduces to the ordinary assignment flow.
Corollary 3
(recovery of the AF by restriction). In the situation of Proposition 10, assume that all data matrices $D_i$, $i \in V$, become diagonal in the same basis $U$, i.e.,

$$D_i = U\operatorname{Diag}(\lambda_i)U^*,\qquad \lambda_i \in \mathbb{R}^c,\quad i \in V.$$

Then the solution of the QSAF

$$\dot{\boldsymbol{\mu}} = \mathcal{R}_{\boldsymbol{\mu}}\big[\Omega[\boldsymbol{\mu}]\big],\qquad \boldsymbol{\mu}(0) = S(\mathbb{1}_{\mathcal{Q}_c}),$$

is given by

$$\mu_i(t) = U\operatorname{Diag}\big(S_i(t)\big)U^*,\qquad i \in V,$$

where $S(t)$ satisfies the ordinary AF equation

$$\dot S = \mathcal{R}_S[\Omega S],\qquad S(0) = S(\mathbb{1}_{\mathcal{W}_c}),$$

and the initial point is determined by the similarity map (56) evaluated at the barycenter $W = \mathbb{1}_{\mathcal{W}_c}$ with the vectors $\lambda_i$, $i \in V$, as data points.
Proof. 
Appendix A.3.    □

5. Experiments and Discussion

In this section, we report academic experiments in order to illustrate the novelty of our approach. In comparison to the original formulation, our approach enables a continuous assignment without the need to specify explicitly prototypical labels beforehand. The experiments highlight the following properties of the novel approach, which extend the expressivity of the original assignment flow approach:
  • Geometric adaptive feature vector averaging even when uniform weights are used (Section 5.2);
  • Structure-preserving feature patch smoothing without accessing data at individual pixels (Section 5.3);
  • Seamless incorporation of feature encoding using finite frames (Section 5.3).
In Section 6, we indicate the potential to represent spatial feature context via entanglement. However, working out the potential for various applications more thoroughly is beyond the scope of this paper.

5.1. Geometric Integration

In this section, we focus on the geometric integration of the reparameterized flow Equation (109). For a reasonable choice of a single step-size parameter, the scheme is accurate, stable and amenable to highly parallel implementations.
The e-geodesic from Proposition 4 constitutes a retraction [37] (Definition 4.1.1 and Proposition 5.4.1) onto the state manifold Q c .
Consequently, the iterative step for updating $\mu_t \in \mathcal{Q}_c$, $t \in \mathbb{N}_0$, with step size $\epsilon > 0$, is given by

$$(\mu_{t+1})_i = \operatorname{Exp}^{(e)}_{(\mu_t)_i}\Big(\epsilon\,\mathcal{R}_{(\mu_t)_i}\big[(\Omega[\mu_t])_i\big]\Big) \overset{(76)}{=} \exp_{(\mu_t)_i}\big(\epsilon(\Omega[\mu_t])_i\big),\qquad i \in V.$$

Using (77) and assuming

$$(\mu_t)_i = \Gamma\big((A_t)_i\big),\qquad i \in V,$$

with $A_t \in \mathcal{T}_c$, we obtain

$$\Gamma\big((A_{t+1})_i\big) := \exp_{(\mu_t)_i}\big(\epsilon(\Omega[\mu_t])_i\big) = \Gamma\Big(\Gamma^{-1}\big((\mu_t)_i\big) + \epsilon(\Omega[\mu_t])_i\Big) = \Gamma\Big(\Gamma^{-1}\big(\Gamma((A_t)_i)\big) + \epsilon(\Omega[\mu_t])_i\Big) = \Gamma\big((A_t)_i + \epsilon(\Omega[\mu_t])_i\big),\qquad i \in V,$$

and in view of (68) and (135), we conclude that

$$A_{t+1} = A_t + \epsilon\,\Pi_{c,0}\big[\Omega[\Gamma(A_t)]\big].$$
Remark 5.
We note that the numerical evaluation of the replicator operator (64) is not required. This makes the geometric integration scheme summarized by Algorithm 1 quite efficient.
Algorithm 1: Geometric Integration Scheme
(The algorithm listing is rendered as an image in the published version; it iterates the update $A_{t+1} = A_t + \epsilon\,\Pi_{c,0}\big[\Omega[\Gamma(A_t)]\big]$ derived above, one matrix exponential per vertex and step, until the convergence criterion below is met.)
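A compact reference implementation of this scheme (our sketch, assuming real symmetric states; all function and variable names are ours):

```python
import numpy as np
from scipy.linalg import expm

def proj0(X):
    return X - (np.trace(X) / X.shape[0]) * np.eye(X.shape[0])

def Gamma(X):
    E = expm(proj0(X))
    return E / np.trace(E)

def qsaf_integrate(A0, Omega, eps=0.1, tol=1e-4, max_iter=10000):
    """Iterate A_{t+1} = A_t + eps * Pi_{c,0}[Omega[Gamma(A_t)]] (see above).

    A0: (n, c, c) initial tangent parameters with mu_0 = Gamma(A0);
    Omega: (n, n) row-stochastic weight matrix. Returns the final states.
    """
    A = A0.copy()
    n = A.shape[0]
    for _ in range(max_iter):
        mu = np.stack([Gamma(A[i]) for i in range(n)])
        Om = np.einsum('ik,kab->iab', Omega, mu)     # Omega[mu]_i, Eq. (107)
        A = A + eps * np.stack([proj0(Om[i]) for i in range(n)])
        # stop when all states are nearly rank one (purity criterion below)
        if max(abs(np.trace(m) - np.trace(m @ m)) for m in mu) < tol:
            break
    return np.stack([Gamma(A[i]) for i in range(n)])

# Commuting diagonal data on a 3-vertex chain (simplified initialization).
Omega = np.array([[1/2, 1/2, 0], [1/3, 1/3, 1/3], [0, 1/2, 1/2]])
lam = np.array([[0.1, 0.7, 0.9], [0.8, 0.2, 0.9], [0.2, 0.9, 0.1]])
A0 = np.stack([proj0(-np.diag(l)) for l in lam])
mu = qsaf_integrate(A0, Omega)
print([np.round(np.diag(m), 2) for m in mu])   # near-unit label vectors
```

Note that, in line with Remark 5, neither $\mathcal{R}_\rho$ nor $\mathbb{T}_\rho$ is evaluated; each iteration requires only one matrix exponential per vertex.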
We list a few further implementation details below.
  • A reasonable convergence criterion that measures how close the states are to a rank-one matrix is $\big|\operatorname{tr}(\mu_t)_i - \operatorname{tr}(\mu_t^2)_i\big| \leq \varepsilon$, $i \in V$;
  • A reasonable range for the step-size parameter is $\epsilon \leq 0.1$;
  • In order to remove spurious non-Hermitian numerical rounding errors, we replace each matrix $(\Omega[\mu_t])_i$ with $\frac12\big((\Omega[\mu_t])_i + (\Omega[\mu_t])_i^*\big)$;
  • The constraint $\operatorname{tr}\rho = 1$ of (18) can be replaced by $\operatorname{tr}\rho = \tau$ with any constant $\tau > 1$. This ensures that, for larger matrix dimensions $c$, the entries of $\rho$ vary in a reasonable numerical range, and it stabilizes the iterative updates.
Up to moderate matrix dimensions, such as $c \approx 100$, the matrix exponential in (66) can be computed using any of the basic established algorithms [38] (Chapter 10) or available solvers. In addition, depending on the size of the neighborhoods $\mathcal{N}_i$ induced by the weighted adjacency relation of the underlying graph in (95), Algorithm 1 can be implemented in a fine-grained parallel fashion.

5.2. Labeling 3D Data on Bloch Spheres

For the purpose of visual illustration, we consider the smoothing of 3D color vectors $d = (d_1, d_2, d_3)^\top$ interpreted as Bloch vectors, which parameterize density matrices [14] (Section 5.2),

$$\rho = \rho(d) = \frac12\bigg(I + d_1\begin{pmatrix}0&1\\1&0\end{pmatrix} + d_2\begin{pmatrix}0&-i\\i&0\end{pmatrix} + d_3\begin{pmatrix}1&0\\0&-1\end{pmatrix}\bigg) \in \mathbb{C}^{2\times2},\qquad \|d\| \leq 1.$$

Pure states $\rho$ correspond to unit vectors $d$, $\|d\| = 1$, whereas vectors $d$ with $\|d\| < 1$ parameterize mixed states $\rho$. Given data $d_i = (d_{i,1}, d_{i,2}, d_{i,3})^\top$, $i \in V$, with $\|d_i\| \leq 1$, as illustrated by Figure 1 and explained in the caption, we initialized the QSAF at $\rho_i = \rho(d_i)$, $i \in V$, and integrated the flow. Each integration step involves geometric state averaging across the graph, producing mixed states $\rho_i(t) = \rho(d_i(t))$, $i \in V$, which eventually converge towards pure states. Integration was stopped at time $t = T$ when $\min\{\|d_i(T)\| : i \in V\} \geq 0.999$. The resulting vectors $d_i(T)$ are visualized as explained in the caption of Figure 1. We point out that the two experiments discussed next are supposed to illustrate the behavior of the QSAF and the impact of the underlying geometry rather than to constitute a contribution to the literature on the processing of color images.
Figure 1c shows a noisy version of the image in (b) used to initialize the quantum state assignment flow (QSAF). Panel (d) shows the labeled image, i.e., the assignment of a pure state (depicted as a Bloch vector) to each pixel of the input data (c). Although uniform weights were used and no prior information was involved, the result (d) demonstrates that the QSAF removes the noise and preserves the signal transitions fairly well, both for large-scale local image structure (away from the image center) and for small-scale local image structure (close to the image center). This behavior is quite unusual in comparison to traditional image denoising methods, which inevitably require adapting the regularization to the scale of local image structure. In addition, we note that noise removal is ‘perfect’ for the three extreme points (red, green and blue in panel (a)) but suboptimal only for the remaining non-extreme points.
Panels (f–h) show the same results when the data are encoded in a better way, as depicted by (e) using unit vectors not only on the positive orthant but on the whole unit sphere. These data are illustrated by RGB vectors that result from translating the unit sphere (e) to the center 1 2 ( 1 , 1 , 1 ) of the RGB color cube [ 0 , 1 ] 3 and scaling it by 1 2 . This improved data encoding is clearly visible in panel (g), which displays the same noise level as shown in panel (c). Accordingly, noise removal while preserving signal structure at all local scales is more effectively achieved by the QSAF in (h) in comparison to (d).
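The conversion between Bloch vectors and qubit density matrices used here is elementary; a short sketch (ours):

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]])
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch_to_rho(d):
    """rho(d) = (I + d1*sigma_x + d2*sigma_y + d3*sigma_z)/2, ||d|| <= 1."""
    return 0.5 * (np.eye(2) + d[0] * SX + d[1] * SY + d[2] * SZ)

def rho_to_bloch(rho):
    """Inverse map: d_k = tr(rho sigma_k)."""
    return np.real([np.trace(rho @ S) for S in (SX, SY, SZ)])

d = np.array([0.2, -0.4, 0.1])          # ||d|| < 1: a mixed state
assert np.allclose(rho_to_bloch(bloch_to_rho(d)), d)
print(np.linalg.norm(rho_to_bloch(bloch_to_rho(d))))   # ||d||, purity measure
```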

5.3. Basic Image Patch Smoothing

Figure 2 shows an application of the QSAF to a random spatial arrangement (grid graph) of normalized patches, where each vertex represents a patch, not a pixel. By vectorizing each patch and taking the tensor product with itself, each patch is represented as a pure state in terms of a rank-one matrix $D_i$ at the corresponding vertex $i \in V$, which constitutes the input data of the similarity mapping (100). Integrating the flow causes the non-commutative interaction of the associated state spaces $\rho_i$, $i \in V$, through geometric averaging with uniform weights (95) until convergence towards pure states. The resulting patches are then simply given by the corresponding eigenvector, possibly after reversing the arbitrary sign of each eigenvector, depending on the distance to the input patch.
The result shown in Figure 2 reveals an interesting behavior: structure-preserving patch smoothing without explicitly accessing individual pixels. In particular, the flow induces a partition of the patches without any prior assumption on the data.
Figure 3 shows a variant of the scenario depicted in Figure 2 in order to demonstrate the ability to separate local image structure by geometric smoothing at the patch level in another way.
Figure 4 generalizes the setup in two ways. First, patches were encoded using the harmonic frame given by the two-dimensional discrete Fourier matrix. Second, non-uniform weights $\omega_{ik} = e^{-\tau\|P_i - P_k\|_F^2}$, $\tau > 0$, were used, depending on the distance between adjacent patches $P_i, P_k$.
Specifically, let $P_i$ denote the patch at vertex $i \in V$ after removing the global mean and normalizing with respect to the Frobenius norm. Applying the two-dimensional discrete Fourier matrix $F_2 = F \otimes F$ (Kronecker product) to each vectorized patch, $\hat p_i = F_2\operatorname{vec}(P_i)$, the input data were defined as $D_i = F_2\operatorname{Diag}\big(|\hat p_i|^2\big)F_2^*$, where the squared magnitude $|\cdot|^2$ is computed component-wise. The flow was again integrated until convergence to pure states, which were interpreted and decoded accordingly: the eigenvector was used as a multiplicative filter of the magnitude of the Fourier-transformed patch (keeping its phase), followed by rescaling of the norm and addition of the mean, thereby approximating the original patch in terms of these two parameters.
The results shown in panels (b) and (c) of Figure 4 illustrate the effect of ‘geometric diffusion’ at the patch level through integration of the flow and how the input data are approximated depending on the chosen spatial scale (patch size), subject to significant data reduction.
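A sketch of this patch encoding (ours; the function name is hypothetical), using the unitary 2D DFT matrix so that $\operatorname{tr}D_i = \|P_i\|_F^2 = 1$ after normalization:

```python
import numpy as np

def patch_state(P):
    """Encode a normalized s x s patch P as the input matrix D_i (Section 5.3):
    D_i = F_2 Diag(|p_hat|^2) F_2^*, with p_hat = F_2 vec(P), F_2 = F (x) F."""
    s = P.shape[0]
    P = P - P.mean()                          # remove the global mean
    P = P / np.linalg.norm(P)                 # Frobenius normalization
    F = np.fft.fft(np.eye(s)) / np.sqrt(s)    # unitary 1D DFT matrix
    F2 = np.kron(F, F)                        # 2D DFT via Kronecker product
    p_hat = F2 @ P.reshape(-1)
    return F2 @ np.diag(np.abs(p_hat) ** 2) @ F2.conj().T

P = np.arange(16.0).reshape(4, 4)             # hypothetical patch
D_i = patch_state(P)
print(np.trace(D_i).real, np.allclose(D_i, D_i.conj().T))   # 1.0 True
```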

6. Conclusions

We generalized the assignment flow approach for categorical distributions [2] to density matrices on weighted graphs. While the former flows assign each data point a label selected from a finite set, the latter assign each data point a generalized “label” from the uncountable submanifold of pure states.
Various further directions of research are indicated by numerical experiments. This includes the unusual behavior of feature vector smoothing, which parameterizes complex-valued, non-commutative state spaces (Figure 1), the structure-preserving interaction of spatially indexed feature patches without accessing individual pixels (Figure 2 and Figure 3), the use of frames for signal representation and as observables whose expected values are governed by a quantum state assignment flow (Figure 4) and the representation of spatial correlations by entanglement and tensorization (Figure 5). Extending the representation of the original assignment flow in the broader framework of geometric mechanics to the novel quantum assignment flow approach as recently developed by [39] is another promising research project spurred by established concepts of mathematics and physics.
Based on these viewpoints, this paper adds a novel concrete approach based on information theory to the emerging literature on network design based on concepts from quantum mechanics, e.g., [40] and references therein. Our main motivation is the definition of a novel class of “neural ODEs” [41] in terms of the dynamical systems that generate a quantum state assignment flow. The layered architecture of a corresponding “neural network” is implicitly given by geometric integration. The inherent smoothness of the parameterization allows weight parameters to be learned from data. This will be explored in our future work, along the various lines of research indicated above.

Author Contributions

Investigation, J.S., J.C., B.B., M.G., P.A. and C.S.; Supervision, C.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the Deutsche Forschungsgemeinschaft (DFG), grant SCHN 457/17-1, within the priority programme SPP 2298: Theoretical Foundations of Deep Learning. This work was funded by the Deutsche Forschungsgemeinschaft (DFG) under Germany’s Excellence Strategy EXC-2181/1-390900948 (the Heidelberg STRUCTURES Excellence Cluster).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proofs

Appendix A.1. Proofs of Section 2

Proof of Proposition 1.
We verify (13) and (14) by direct computation. For any $p \in \mathcal{S}_c$,

$$R_p\mathbb{1}_c = \big(\operatorname{Diag}(p) - pp^\top\big)\mathbb{1}_c = p - \langle p, \mathbb{1}_c\rangle p = 0,$$

$$R_p\pi_{c,0} = R_p\big(I - \mathbb{1}_c\mathbb{1}_{\mathcal{S}_c}^\top\big) = R_p,$$

$$\pi_{c,0}R_p = \big(I - \mathbb{1}_c\mathbb{1}_{\mathcal{S}_c}^\top\big)R_p = R_p - \tfrac1c\mathbb{1}_c(R_p\mathbb{1}_c)^\top = R_p.$$
Next, we characterize the geometric role of $R_p$ and show (15). Let $p \in \mathcal{S}_c$ be parameterized by the local coordinates

$$\bar p = \varphi(p) := (p_1, p_2, \dots, p_{c-1})^\top \in \mathbb{R}^{c-1}_{++},$$

$$p = \varphi^{-1}(\bar p) = \big(\bar p_1, \dots, \bar p_{c-1},\ 1 - \langle\mathbb{1}_{c-1}, \bar p\rangle\big)^\top \in \mathcal{S}_c.$$

Choosing the canonical basis $e_1, \dots, e_c$ of $\mathbb{R}^c \supset \mathcal{S}_c$, we obtain a basis of the tangent space $T_{c,0}$,

$$e_j - e_c = d\varphi^{-1}(e_j),\qquad j \in [c-1].$$

Using these vectors as the columns of the matrix

$$B := (e_1 - e_c, \dots, e_{c-1} - e_c) = \begin{pmatrix}I_{c-1}\\ -\mathbb{1}_{c-1}^\top\end{pmatrix} \in \mathbb{R}^{c\times(c-1)},$$
one has, for any $v \in T_{c,0}$,

$$v = B\bar v = \begin{pmatrix}\bar v\\ -\langle\mathbb{1}_{c-1}, \bar v\rangle\end{pmatrix},\qquad v_c = -\langle\mathbb{1}_{c-1}, \bar v\rangle,\qquad \bar v = (v_1, \dots, v_{c-1})^\top,$$

$$\bar v = B^\dagger v,\qquad B^\dagger = \begin{pmatrix}I_{c-1} & 0\end{pmatrix}\pi_{c,0},$$

where $B^\dagger := (B^\top B)^{-1}B^\top$ denotes the Moore–Penrose generalized inverse of $B$. Substituting this parameterization and evaluating the metric (8) gives

$$g_p(u, v) = \big\langle\bar u, B^\top\operatorname{Diag}(p)^{-1}B\bar v\big\rangle = \Big\langle\bar u, \begin{pmatrix}I_{c-1} & -\mathbb{1}_{c-1}\end{pmatrix}\operatorname{Diag}(p)^{-1}\begin{pmatrix}I_{c-1}\\ -\mathbb{1}_{c-1}^\top\end{pmatrix}\bar v\Big\rangle$$

$$= \Big\langle\bar u, \Big(\operatorname{Diag}(\bar p)^{-1} + \frac{1}{1 - \langle\mathbb{1}_{c-1}, \bar p\rangle}\mathbb{1}_{c-1}\mathbb{1}_{c-1}^\top\Big)\bar v\Big\rangle$$

$$=: \big\langle\bar u, G(\bar p)\bar v\big\rangle.$$
Applying the Sherman–Morrison–Woodbury matrix inversion formula [42] (p. 9),

$$(A + xy^\top)^{-1} = A^{-1} - \frac{A^{-1}xy^\top A^{-1}}{1 + \langle y, A^{-1}x\rangle},$$

yields

$$G(\bar p)^{-1} = \operatorname{Diag}(\bar p) - \frac{\operatorname{Diag}(\bar p)\,\mathbb{1}_{c-1}\mathbb{1}_{c-1}^\top\operatorname{Diag}(\bar p)}{\big(1 - \langle\mathbb{1}_{c-1}, \bar p\rangle\big)\Big(1 + \frac{\langle\mathbb{1}_{c-1}, \bar p\rangle}{1 - \langle\mathbb{1}_{c-1}, \bar p\rangle}\Big)} = \operatorname{Diag}(\bar p) - \bar p\bar p^\top = R_{\bar p}.$$
Let $v \in T_{c,0}$. Then, using the equations
$$p_c \overset{(A5)}{=} 1 - \langle 1_{c-1}, \bar p\rangle,\qquad R_{\bar p}1_{c-1} = \bar p - \langle 1_{c-1},\bar p\rangle\,\bar p = p_c\,\bar p,$$
we have
$$R_p v = \begin{pmatrix} R_{\bar p} & -p_c\bar p\\ -p_c\bar p^\top & p_c - p_c^2\end{pmatrix}\begin{pmatrix}\bar v\\ v_c\end{pmatrix} = \begin{pmatrix} R_{\bar p}\bar v - v_c\,R_{\bar p}1_{c-1}\\ -\langle R_{\bar p}1_{c-1}, \bar v\rangle + v_c\,p_c\,\langle 1_{c-1},\bar p\rangle\end{pmatrix}$$
$$= \begin{pmatrix} R_{\bar p}\bar v\\ -\langle 1_{c-1}, R_{\bar p}\bar v\rangle\end{pmatrix} - v_c\begin{pmatrix} R_{\bar p}1_{c-1}\\ -\langle 1_{c-1}, R_{\bar p}1_{c-1}\rangle\end{pmatrix} \overset{(A7)}{=} B\,R_{\bar p}\big(\bar v - v_c\,1_{c-1}\big).$$
Now, consider any smooth function $f: S_c\to\mathbb{R}$. Then,
$$\partial_{\bar p_i}\big(f\circ\varphi^{-1}\big)(\bar p) = \sum_{j\in[c]}\partial_j f(p)\,\partial_{\bar p_i}\varphi_j^{-1}(\bar p) \overset{(A5)}{=} \partial_i f(p) - \partial_c f(p),$$
$$\partial_{\bar p}\big(f\circ\varphi^{-1}\big)(\bar p) = \overline{\partial f(p)} - \partial_c f(p)\,1_{c-1},$$
where $\overline{\partial f(p)}$ collects the first $c-1$ components of $\partial f(p)$. Comparing the last equation and (A21) shows that
$$R_p\,\partial f(p) = B\,R_{\bar p}\,\partial_{\bar p}\big(f\circ\varphi^{-1}\big)(\bar p) \overset{(A14)}{=} B\,G(\bar p)^{-1}\,\partial_{\bar p}\big(f\circ\varphi^{-1}\big)(\bar p),$$
which proves (15). □
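The identities established in this proof are easy to sanity-check numerically. The following self-contained NumPy sketch, our own illustration rather than part of the paper (equation labels appear in the comments), instantiates $R_p$, $\pi_{c,0}$ and $G(\bar p)$ at a random point of the simplex and confirms (13), (14) and the inverse relation $G(\bar p)^{-1} = R_{\bar p}$ derived above.

```python
import numpy as np

rng = np.random.default_rng(0)
c = 5
p = rng.random(c); p /= p.sum()                 # random point p in S_c

R = np.diag(p) - np.outer(p, p)                 # replicator matrix R_p
pi0 = np.eye(c) - np.ones((c, c)) / c           # orthogonal projection pi_{c,0} onto T_{c,0}

assert np.allclose(R @ np.ones(c), 0)                         # (13): R_p 1_c = 0
assert np.allclose(R @ pi0, R) and np.allclose(pi0 @ R, R)    # (14)

pb = p[:-1]                                     # local coordinates p_bar
G = np.diag(1 / pb) + np.ones((c - 1, c - 1)) / (1 - pb.sum())
Rb = np.diag(pb) - np.outer(pb, pb)             # R_{p_bar}
assert np.allclose(np.linalg.inv(G), Rb)        # (A14): G(p_bar)^{-1} = R_{p_bar}
print("Proposition 1: identities (13), (14) and G(p_bar)^{-1} = R_{p_bar} confirmed")
```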

Appendix A.2. Proofs of Section 3

Proof of Lemma 1.
Let $v(t)\in T_{c,0}$ be a smooth curve with $\dot v(t) = u$. Then,
$$\frac{d}{dt}\exp_p\big(v(t)\big) = \frac{d}{dt}\,\frac{p\cdot e^{v(t)}}{\langle p, e^{v(t)}\rangle} = \frac{p\cdot u\cdot e^{v(t)}}{\langle p, e^{v(t)}\rangle} - \frac{\langle p, u\cdot e^{v(t)}\rangle\; p\cdot e^{v(t)}}{\langle p, e^{v(t)}\rangle^2}$$
$$= \exp_p\big(v(t)\big)\cdot u - \big\langle u, \exp_p(v(t))\big\rangle\,\exp_p\big(v(t)\big) = R_{\exp_p(v(t))}\,u.$$
Similarly, for a smooth curve $p(t)\in S_c$ with $\dot p(t) = u$, one has
$$\frac{d}{dt}\exp_{p(t)}(v) = \frac{\dot p(t)\cdot e^v}{\langle p(t), e^v\rangle} - \frac{\langle\dot p(t), e^v\rangle\; p(t)\cdot e^v}{\langle p(t), e^v\rangle^2}$$
$$= \exp_{p(t)}(v)\cdot\frac{u}{p(t)} - \Big\langle\frac{u}{p(t)}, \exp_{p(t)}(v)\Big\rangle\,\exp_{p(t)}(v) = R_{\exp_{p(t)}(v)}\,\frac{u}{p(t)}.\;\square$$
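Both differentiation formulas of Lemma 1 can be checked by finite differences. The sketch below is our own code (vector multiplication and division are componentwise, as in the text) and verifies both formulas at a random base point with random tangent directions.

```python
import numpy as np

rng = np.random.default_rng(1)
c, h = 4, 1e-6
p = rng.random(c); p /= p.sum()                 # base point p in S_c
v = rng.standard_normal(c); v -= v.mean()       # v in T_{c,0}
u = rng.standard_normal(c); u -= u.mean()       # direction u in T_{c,0}

def exp_p(p, v):                                # lifting map: exp_p(v) = p.e^v / <p, e^v>
    q = p * np.exp(v)
    return q / q.sum()

q = exp_p(p, v)
Rq = np.diag(q) - np.outer(q, q)                # replicator matrix at exp_p(v)

# first formula: d/dt exp_p(v(t)) = R_{exp_p(v)} u  for  dv/dt = u
fd_v = (exp_p(p, v + h * u) - exp_p(p, v - h * u)) / (2 * h)
assert np.allclose(fd_v, Rq @ u, atol=1e-6)

# second formula: d/dt exp_{p(t)}(v) = R_{exp_p(v)} (u / p)  for  dp/dt = u
fd_p = (exp_p(p + h * u, v) - exp_p(p - h * u, v)) / (2 * h)
assert np.allclose(fd_p, Rq @ (u / p), atol=1e-6)
print("Lemma 1: both differentiation formulas confirmed")
```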
Proof of Theorem 1.
Let
$$q(t) = L_{p(t)}(D),$$
where $p(t)$ solves (45). Using (43), (47) and (A29), we obtain
$$\dot q = d_pL_{p(t)}(D)[\dot p(t)] = R_{q(t)}\,\frac{\dot p(t)}{p(t)} \overset{(45)}{=} R_{q(t)}\big(q(t) - \langle p(t), q(t)\rangle\,1_c\big) \overset{(13)}{=} R_{q(t)}\,q(t),$$
which shows (49). Conversely, set $r(t) = \int_0^t q(\tau)\,d\tau$ and $p(t) = \exp_{1_{S_c}}\big(r(t)\big)$ as in (50). Differentiating yields
$$\dot p(t) \overset{(46)}{=} R_{\exp_{1_{S_c}}(r(t))}\,\dot r(t) \overset{(50)}{=} R_{p(t)}\,q(t) \overset{(A29)}{=} R_{p(t)}\,L_{p(t)}(D),$$
which proves the equivalence of (45), (48) and (49). □
Proof of Corollary 1.
The solution $p(t)$ to (45) is given by (48) and (49). Proposition 1 and Equation (15) show that (49) is the Riemannian ascent flow of the function $S_c\ni q\mapsto\frac{1}{2}\|q\|^2$. The stationary points satisfy
$$R_q q = \big(q - \|q\|^2\,1_c\big)\cdot q = 0$$
and form the set
$$Q^* := \Big\{q^* = \frac{1}{|J^*|}\sum_{j\in J^*}e_j:\ \emptyset\neq J^*\subseteq[c]\Big\}.$$
The case $J^* = [c]$, i.e., $q^* = 1_{S_c}$, can be ruled out if $D$ is not a constant vector, i.e., $D\notin\mathbb{R}1_c$, which is always the case in practice, where $D$ corresponds to real data (measurement and observation). The global maxima correspond to the vertices of $\Delta_c = \overline{S}_c$, i.e., $|J^*| = 1$. The remaining stationary points are local maxima and degenerate, since vectors $D$ with non-unique minimal components form a negligible null set. In any case, $\lim_{t\to\infty}p(t) \overset{(50)}{=} \lim_{t\to\infty}q(t) = q^*$, depending on the index set $J^*$ determined by $D$. □
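A small simulation illustrates Corollary 1: integrating the single-vertex flow (45) with a plain explicit Euler scheme (our simplification; the paper uses geometric integration) drives $p(t)$ to the vertex $e_{j^*}$ with $j^* = \arg\min_j D_j$.

```python
import numpy as np

rng = np.random.default_rng(2)
c, dt = 5, 0.1
D = rng.random(c)                               # data vector at a single vertex
p = np.full(c, 1 / c)                           # initialization at the barycenter 1_{S_c}

def exp_p(p, v):
    q = p * np.exp(v)
    return q / q.sum()

for _ in range(3000):                           # crude explicit Euler steps for (45)
    q = exp_p(p, -D)                            # likelihood L_p(D) = exp_p(-D)
    Rp = np.diag(p) - np.outer(p, p)
    p = p + dt * Rp @ q                         # p' = R_p L_p(D)

assert np.argmax(p) == np.argmin(D)             # limit: vertex e_{j*}, j* = argmin_j D_j
print("Corollary 1: p(t) converged to", np.round(p, 3))
```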

Appendix A.3. Proofs of Section 4

Proof of Proposition 2.
The Riemannian gradient is defined by [43] (p. 337)
$$0 = df[X] - g_\rho\big(\mathrm{grad}_\rho f, X\big) \overset{(29)}{=} \big\langle\partial f, X\big\rangle - \big\langle T_\rho[\mathrm{grad}_\rho f], X\big\rangle = \big\langle\partial f - T_\rho[\mathrm{grad}_\rho f],\, X\big\rangle,\qquad \forall X\in\mathcal{H}_{c,0}.$$
Choosing the parameterization $X = Y - \frac{\mathrm{tr}(Y)}{c}I\in\mathcal{H}_{c,0}$ with $Y\in\mathcal{H}_c$, we further obtain
$$0 = \big\langle\partial f - T_\rho[\mathrm{grad}_\rho f],\, Y\big\rangle - \frac{\mathrm{tr}(Y)}{c}\,\mathrm{tr}\big(\partial f - T_\rho[\mathrm{grad}_\rho f]\big)$$
$$= \Big\langle\partial f - T_\rho[\mathrm{grad}_\rho f] - \frac{1}{c}\,\mathrm{tr}\big(\partial f - T_\rho[\mathrm{grad}_\rho f]\big)\,I,\; Y\Big\rangle,\qquad\forall Y\in\mathcal{H}_c.$$
The left factor must vanish. Applying the linear mapping $T_\rho^{-1}$ and solving for $\mathrm{grad}_\rho f$ yields
$$\mathrm{grad}_\rho f = T_\rho^{-1}[\partial f] - \frac{1}{c}\,\mathrm{tr}\big(\partial f - T_\rho[\mathrm{grad}_\rho f]\big)\,T_\rho^{-1}[I].$$
Since $\mathrm{grad}_\rho f\in\mathcal{H}_{c,0}$, taking the trace on both sides and using $T_\rho^{-1}[I] = \rho$, $\mathrm{tr}\,\rho = 1$ yields
$$0 = \mathrm{tr}\,T_\rho^{-1}[\partial f] - \frac{1}{c}\big(\mathrm{tr}\,\partial f - \mathrm{tr}\,T_\rho[\mathrm{grad}_\rho f]\big).$$
Substituting the last equation into the previous one yields
$$\mathrm{grad}_\rho f = T_\rho^{-1}[\partial f] - \big(\mathrm{tr}\,T_\rho^{-1}[\partial f]\big)\,\rho = T_\rho^{-1}[\partial f] - \langle\rho, \partial f\rangle\,\rho,$$
where the last equation follows from (30). □
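The gradient formula of Proposition 2 can be tested numerically once $T_\rho^{-1}$ is available; evaluating the integral defining $T_\rho^{-1}$ in the eigenbasis of $\rho$ shows that it acts entrywise through the logarithmic mean of pairs of eigenvalues. The sketch below is our own code (helper names are ours) and verifies $T_\rho^{-1}[I] = \rho$, identity (30) and the trace-free property of the gradient.

```python
import numpy as np

rng = np.random.default_rng(3)
c = 4

def rand_herm():                                # random Hermitian matrix in H_c
    A = rng.standard_normal((c, c)) + 1j * rng.standard_normal((c, c))
    return (A + A.conj().T) / 2

def rand_density():                             # random full-rank density matrix
    A = rng.standard_normal((c, c)) + 1j * rng.standard_normal((c, c))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

def T_inv(rho, X):                              # T_rho^{-1}[X], spectral evaluation of (28)
    lam, U = np.linalg.eigh(rho)
    Xe = U.conj().T @ X @ U
    with np.errstate(divide="ignore", invalid="ignore"):
        M = (lam[:, None] - lam[None, :]) / (np.log(lam)[:, None] - np.log(lam)[None, :])
    M = np.where(np.isclose(lam[:, None], lam[None, :]), lam[:, None], M)
    return U @ (M * Xe) @ U.conj().T

rho, df = rand_density(), rand_herm()
assert np.allclose(T_inv(rho, np.eye(c)), rho)                    # T_rho^{-1}[I] = rho
assert np.isclose(np.trace(T_inv(rho, df)), np.trace(rho @ df))   # (30)

grad = T_inv(rho, df) - np.trace(rho @ df).real * rho             # Proposition 2
assert np.isclose(np.trace(grad), 0)                              # grad_rho f in H_{c,0}
print("Proposition 2: gradient formula consistent")
```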
Proof of Lemma 2.
The equation $\Pi_{c,0}\circ R_\rho = R_\rho$ follows from $R_\rho[X]\in\mathcal{H}_{c,0}$; indeed,
$$\mathrm{tr}\,R_\rho[X] \overset{(64)}{=} \mathrm{tr}\,T_\rho^{-1}[X] - \langle\rho, X\rangle\,\mathrm{tr}\,\rho \overset{(30)}{=} \langle\rho, X\rangle - \langle\rho, X\rangle = 0.$$
Thus,
$$\Pi_{c,0}\circ R_\rho[X] = R_\rho[X] = R_\rho[X] - \frac{\mathrm{tr}\,X}{c}\Big(\rho - \underbrace{\langle\rho, I\rangle}_{=1}\rho\Big) = R_\rho[X] - \frac{\mathrm{tr}\,X}{c}\,R_\rho[I] = R_\rho\Big[X - \frac{\mathrm{tr}\,X}{c}I\Big] \overset{(24)}{=} R_\rho\circ\Pi_{c,0}[X].\;\square$$
Proof of Lemma 3.
Using (24), we compute
$$\exp_m\big(\Pi_{c,0}[Z]\big) = \exp_m\Big(Z - \frac{\mathrm{tr}\,Z}{c}I\Big) = e^{-\frac{\mathrm{tr}Z}{c}}\,\exp_m(Z),$$
where the last equation holds since $Z$ and $I$ commute. Substitution into (66) cancels the scalar factor $e^{-\frac{\mathrm{tr}Z}{c}}$ and shows (68). □
Proof of Proposition 3.
We show $\Gamma\circ\Gamma^{-1} = \mathrm{id}_{\mathcal{D}_c}$ and $\Gamma^{-1}\circ\Gamma = \mathrm{id}_{\mathcal{H}_{c,0}}$. As for the first relation, we compute
$$\Gamma\circ\Gamma^{-1}(\rho) = \exp_m\big(\Gamma^{-1}(\rho) - \psi(\Gamma^{-1}(\rho))\,I\big)$$
$$= \exp_m\Big(\log_m\rho - \frac{\mathrm{tr}(\log_m\rho)}{c}I - \log\Big(\mathrm{tr}\,\exp_m\Big(\log_m\rho - \frac{\mathrm{tr}(\log_m\rho)}{c}I\Big)\Big)I\Big)$$
and, since $\log_m\rho$ and $I$ commute,
$$= \exp_m\Big(\log_m\rho - \frac{\mathrm{tr}(\log_m\rho)}{c}I - \log\big(e^{-\frac{1}{c}\mathrm{tr}(\log_m\rho)}\,\mathrm{tr}\,\rho\big)\,I\Big) \overset{\mathrm{tr}\,\rho = 1}{=} \exp_m\big(\log_m\rho\big) = \rho.$$
As for the second relation, we compute
$$\Gamma^{-1}\circ\Gamma(X) = \Pi_{c,0}\big[\log_m\Gamma(X)\big] = \Pi_{c,0}\big[\log_m\exp_m\big(X - \psi(X)\,I\big)\big] = \Pi_{c,0}[X] - \psi(X)\,\Pi_{c,0}[I] = \Pi_{c,0}[X] = X,$$
since $X\in\mathcal{H}_{c,0}$ by assumption and $\Pi_{c,0}[I] = 0$. □
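Proposition 3 translates directly into code: with the matrix exponential and logarithm from SciPy, both round trips can be confirmed on random inputs. A minimal sketch (our own code):

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(4)
c = 4

def pi0(X):                                     # projection (24) onto H_{c,0}
    return X - (np.trace(X) / c) * np.eye(c)

def Gamma(H):                                   # (66): exp_m(H - psi(H) I) = exp_m(H) / tr exp_m(H)
    E = expm(H)
    return E / np.trace(E)

def Gamma_inv(rho):                             # (69): Pi_{c,0}[log_m rho]
    return pi0(logm(rho))

A = rng.standard_normal((c, c)) + 1j * rng.standard_normal((c, c))
H = pi0((A + A.conj().T) / 2)                   # random element of H_{c,0}
rho = Gamma(H)

assert np.allclose(Gamma_inv(rho), H)           # Gamma^{-1} o Gamma = id on H_{c,0}
assert np.allclose(Gamma(Gamma_inv(rho)), rho)  # Gamma o Gamma^{-1} = id on D_c
print("Proposition 3: round trips confirmed")
```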
Proof of Lemma 4.
In view of Definition (66) of $\Gamma$, we compute using the chain rule
$$d\Gamma(H)[Y] = \frac{d}{dt}\exp_m\big(H + tY - \psi(H + tY)\,I\big)\Big|_{t=0} = d\exp_m\big(H - \psi(H)\,I\big)\big[Y - d\psi(H)[Y]\,I\big] \overset{(28)}{=} T_\rho^{-1}\big[Y - d\psi(H)[Y]\,I\big].$$
Furthermore,
$$d\psi(H)[Y] \overset{(67)}{=} \frac{1}{\mathrm{tr}\,\exp_m(H)}\,\mathrm{tr}\big(d\exp_m(H)[Y]\big) \overset{(28)}{=} \frac{1}{\mathrm{tr}\,\exp_m(H)}\,\mathrm{tr}\,T^{-1}_{\exp_m(H)}[Y] \overset{(30)}{=} \frac{1}{\mathrm{tr}\,\exp_m(H)}\,\big\langle\exp_m(H), Y\big\rangle \overset{(66)}{=} \big\langle\Gamma(H), Y\big\rangle = \langle\rho, Y\rangle,$$
using $\exp_m(H) = \big(\mathrm{tr}\,\exp_m(H)\big)\,\Gamma(H)$ by (66); the last equation follows from the assumption $\rho = \Gamma(H)$. Substitution into (A54) yields (70). Regarding (71), using expression (69) for $\Gamma^{-1}$, we compute
$$d\Gamma^{-1}(\rho)[X] = \Pi_{c,0}\big[d\log_m(\rho)[X]\big] \overset{(27)}{=} \Pi_{c,0}\big[T_\rho[X]\big],$$
which verifies (71). □
Proof of Proposition 4.
The e-geodesic connecting the two points $Q, R\in\mathcal{D}_c$ is given by [24] (Section V)
$$\Gamma(K + tA),\qquad t\in[0,1],\qquad K = \log_m Q,\qquad A = \log_m R - \log_m Q.$$
Setting $\Gamma^{-1}(\rho) = \Pi_{c,0}[K]$ and $T_\rho[X] = A$ yields (74), since the orthogonal projections $\Pi_{c,0}$ onto $\mathcal{H}_{c,0}$ are also implicitly carried out in (A63) due to Lemma 3. Expression (73) is equal to (74) due to (71). It remains to be verified that the geodesic emanates at $\rho$ in the direction of $X$. We compute
$$\gamma^{(e)}_{\rho,X}(0) = \Gamma\big(\Gamma^{-1}(\rho)\big) = \rho,$$
$$\frac{d}{dt}\gamma^{(e)}_{\rho,X}(t)\Big|_{t=0} = \frac{d}{dt}\,\Gamma\big(\Gamma^{-1}(\rho) + t\,d\Gamma^{-1}(\rho)[X]\big)\Big|_{t=0} = d\Gamma\big(\Gamma^{-1}(\rho)\big)\circ d\Gamma^{-1}(\rho)[X] = \mathrm{id}[X] = X.\;\square$$
Proof of Corollary 2.
Setting
$$\mu = \mathrm{Exp}^{(e)}_\rho(X) \overset{(73)}{=} \Gamma\big(\Gamma^{-1}(\rho) + d\Gamma^{-1}(\rho)[X]\big),$$
we solve for $X$:
$$\Gamma^{-1}(\mu) = \Gamma^{-1}(\rho) + d\Gamma^{-1}(\rho)[X],$$
$$d\Gamma^{-1}(\rho)[X] = \Gamma^{-1}(\mu) - \Gamma^{-1}(\rho),$$
$$X = d\Gamma\big(\Gamma^{-1}(\rho)\big)\big[\Gamma^{-1}(\mu) - \Gamma^{-1}(\rho)\big],$$
which shows (75); here, $\big(d\Gamma^{-1}(\rho)\big)^{-1} = d\Gamma\big(\Gamma^{-1}(\rho)\big)$ was used to obtain the last equation. □
Proof of Lemma 5.
We compute
$$\mathrm{Exp}^{(e)}_\rho\circ R_\rho[X] \overset{(73)}{=} \Gamma\big(\Gamma^{-1}(\rho) + \Pi_{c,0}\circ T_\rho\circ R_\rho[X]\big)$$
$$\overset{(64)}{=} \Gamma\big(\Gamma^{-1}(\rho) + \Pi_{c,0}\circ T_\rho\big[T_\rho^{-1}[X] - \langle\rho, X\rangle\,\rho\big]\big)$$
$$\overset{\rho\, =\, T_\rho^{-1}[I]}{=} \Gamma\big(\Gamma^{-1}(\rho) + \Pi_{c,0}\big[X - \langle\rho, X\rangle\,I\big]\big)$$
$$= \Gamma\big(\Gamma^{-1}(\rho) + X\big),$$
where the projection map $\Pi_{c,0}$ and the multiple of $I$ may be omitted in the last equation due to Lemma 2 or Lemma 3. □
Proof of Lemma 6.
We compute
$$d\exp_\rho(X)[Y] \overset{(77)}{=} d\Gamma\big(\Gamma^{-1}(\rho) + X\big)[Y] \overset{(70)}{=} T^{-1}_{\exp_\rho(X)}\big[Y - \langle\exp_\rho(X), Y\rangle\,I\big]$$
$$= T^{-1}_{\exp_\rho(X)}[Y] - \langle\exp_\rho(X), Y\rangle\,\exp_\rho(X) \overset{(64)}{=} R_{\exp_\rho(X)}[Y],$$
using $T_\sigma^{-1}[I] = \sigma$ with $\sigma = \exp_\rho(X)$. □
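Lemma 6 likewise admits a finite-difference check: the differential of $\exp_\rho$ at $X$ applied to $Y$ should match $R_{\exp_\rho(X)}[Y]$. The sketch below is our own code; the spectral implementation of $T_\rho^{-1}$ is an assumption on our part, obtained by evaluating the integral in (28) in the eigenbasis of $\rho$.

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(5)
c, h = 3, 1e-6

def pi0(X):
    return X - (np.trace(X) / c) * np.eye(c)

def Gamma(H):
    E = expm(H)
    return E / np.trace(E)

def exp_rho(rho, X):                            # (77): exp_rho(X) = Gamma(Gamma^{-1}(rho) + X)
    return Gamma(pi0(logm(rho)) + X)

def T_inv(rho, X):                              # spectral evaluation of T_rho^{-1}, cf. (28)
    lam, V = np.linalg.eigh(rho)
    Xe = V.conj().T @ X @ V
    with np.errstate(divide="ignore", invalid="ignore"):
        M = (lam[:, None] - lam[None, :]) / (np.log(lam)[:, None] - np.log(lam)[None, :])
    M = np.where(np.isclose(lam[:, None], lam[None, :]), lam[:, None], M)
    return V @ (M * Xe) @ V.conj().T

def rand_herm0():
    A = rng.standard_normal((c, c)) + 1j * rng.standard_normal((c, c))
    return pi0((A + A.conj().T) / 2)

A = rng.standard_normal((c, c)) + 1j * rng.standard_normal((c, c))
rho = A @ A.conj().T; rho /= np.trace(rho).real
X, Y = rand_herm0(), rand_herm0()

sigma = exp_rho(rho, X)
lhs = (exp_rho(rho, X + h * Y) - exp_rho(rho, X - h * Y)) / (2 * h)   # d exp_rho(X)[Y]
rhs = T_inv(sigma, Y) - np.trace(sigma @ Y).real * sigma              # (64): R_sigma[Y]
assert np.allclose(lhs, rhs, atol=1e-6)
print("Lemma 6: d exp_rho(X)[Y] = R_{exp_rho(X)}[Y] confirmed")
```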
Proof of Lemma 7.
We compute
$$L_\rho(D) = \exp_\rho\big(-\Pi_{c,0}[D]\big) \overset{(65),(76)}{=} \exp_\rho(-D) \overset{(77)}{=} \Gamma\big(\Gamma^{-1}(\rho) - D\big) \overset{(69)}{=} \Gamma\big(\Pi_{c,0}[\log_m\rho] - D\big)$$
$$\overset{(92)}{=} \Gamma\Big(Q\big(\log_m\Lambda_\rho\big)Q^* - \frac{1}{c}\,\mathrm{tr}\big(\log_m\Lambda_\rho\big)\,I_c - Q\,\Lambda_D\,Q^*\Big)$$
$$\overset{(66)}{=} Q\,\mathrm{Diag}\Bigg(\frac{e^{\log\lambda_\rho - \langle 1_c, \log\lambda_\rho\rangle 1_{S_c} - \lambda_D}}{\big\langle 1_c,\, e^{\log\lambda_\rho - \langle 1_c, \log\lambda_\rho\rangle 1_{S_c} - \lambda_D}\big\rangle}\Bigg)Q^* = Q\,\mathrm{Diag}\Bigg(\frac{e^{\log\lambda_\rho - \lambda_D}}{\big\langle 1_c, e^{\log\lambda_\rho - \lambda_D}\big\rangle}\Bigg)Q^*$$
$$\overset{(44)}{=} Q\,\mathrm{Diag}\big(\exp_{\lambda_\rho}(-\lambda_D)\big)\,Q^*.\;\square$$
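The closed-form expression of Lemma 7 is computationally convenient: when $\rho$ and $D$ commute, the likelihood map reduces to the classical lifting map applied to the eigenvalues. The following sketch (our own code) compares the spectral formula against the direct evaluation of $\Gamma(\Gamma^{-1}(\rho) - D)$.

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(6)
c = 4

A = rng.standard_normal((c, c)) + 1j * rng.standard_normal((c, c))
Q, _ = np.linalg.qr(A)                          # common unitary eigenbasis of rho and D
lam_rho = rng.random(c); lam_rho /= lam_rho.sum()
lam_D = rng.random(c)
rho = Q @ np.diag(lam_rho) @ Q.conj().T
D = Q @ np.diag(lam_D) @ Q.conj().T             # rho and D commute by construction

def pi0(X):
    return X - (np.trace(X) / c) * np.eye(c)

def Gamma(H):
    E = expm(H)
    return E / np.trace(E)

lhs = Gamma(pi0(logm(rho)) - D)                 # L_rho(D) = Gamma(Gamma^{-1}(rho) - D)
q = lam_rho * np.exp(-lam_D); q /= q.sum()      # exp_{lambda_rho}(-lambda_D), cf. (44)
rhs = Q @ np.diag(q) @ Q.conj().T               # Q Diag(exp_{lambda_rho}(-lambda_D)) Q^*
assert np.allclose(lhs, rhs)
print("Lemma 7: spectral formula for the likelihood map confirmed")
```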
Proof of Proposition 5.
Writing $\rho_0 = \rho(0) = \mathbb{1}_{\mathcal{D}_c} = \frac{1}{c}I_c$ and $\mathrm{Diag}(\lambda_D) = \Lambda_D$, we have
$$L_{\rho_0}(D) = Q\,\mathrm{Diag}\big(\exp_{1_{S_c}}(-\lambda_D)\big)\,Q^*$$
according to Lemma 7 and
$$\dot\rho(0) \overset{(91)}{=} T^{-1}_{\rho_0}\big[L_{\rho_0}(D)\big] - \big\langle\rho_0, L_{\rho_0}(D)\big\rangle\,\rho_0$$
$$\overset{(28)}{=} \int_0^1\Big(\frac{1}{c}I_c\Big)^{1-s}\,Q\,\mathrm{Diag}\big(\exp_{1_{S_c}}(-\lambda_D)\big)\,Q^*\,\Big(\frac{1}{c}I_c\Big)^s\,ds - \frac{1}{c^2}\,\underbrace{\mathrm{tr}\Big(Q\,\mathrm{Diag}\big(\exp_{1_{S_c}}(-\lambda_D)\big)\,Q^*\Big)}_{=1}\,I_c$$
$$= \frac{1}{c}\,Q\,\mathrm{Diag}\big(\exp_{1_{S_c}}(-\lambda_D)\big)\,Q^* - \frac{1}{c^2}\,I_c = \frac{1}{c}\,Q\,\mathrm{Diag}\big(\exp_{1_{S_c}}(-\lambda_D) - 1_{S_c}\big)\,Q^*.$$
Comparing this equation to the single-vertex flow (45) at time $t = 0$,
$$\dot p(0) = \frac{1}{c}\,\exp_{1_{S_c}}(-D) - \frac{1}{c^2}\,\underbrace{\big\langle 1_c, \exp_{1_{S_c}}(-D)\big\rangle}_{=1}\,1_c = \frac{1}{c}\big(\exp_{1_{S_c}}(-D) - 1_{S_c}\big),$$
shows that
$$\dot\rho(0) = Q\,\mathrm{Diag}\big(\dot\lambda_\rho(0)\big)\,Q^*,$$
that is,
$$\rho(t) = Q\,\mathrm{Diag}\big(\lambda_\rho(t)\big)\,Q^*,$$
where $\lambda_\rho(t)$ solves the single-vertex assignment flow Equation (45) of the form
$$\dot\lambda_\rho = R_{\lambda_\rho}\,L_{\lambda_\rho}(\lambda_D).$$
Corollary 1 completes the proof. □
Proof of Lemma 8.
Let
$$H_i = \Gamma^{-1}(\rho_i) \overset{(69)}{=} \Pi_{c,0}\big[\log_m\rho_i\big],\qquad i\in V.$$
Then,
$$\big(\mathrm{Exp}^{(e)}_{\rho_i}\big)^{-1}\circ L_{\rho_k}(D_k) \overset{(89),(77)}{=} \big(\mathrm{Exp}^{(e)}_{\rho_i}\big)^{-1}\circ\Gamma\big(\Gamma^{-1}(\rho_k) - D_k\big)$$
$$\overset{(75)}{=} d\Gamma\big(\Gamma^{-1}(\rho_i)\big)\big[\Gamma^{-1}(\rho_k) - D_k - \Gamma^{-1}(\rho_i)\big] = d\Gamma(H_i)\big[H_k - D_k - H_i\big].$$
Substituting this expression into (100) yields
$$S_i(\rho) \overset{(73),(95)}{=} \Gamma\Big(H_i + \underbrace{d\Gamma^{-1}(\rho_i)\circ d\Gamma(H_i)}_{=\,\mathrm{id}}\Big[\sum_{k\in\mathcal{N}_i}\omega_{ik}\big(H_k - D_k\big) - H_i\Big]\Big) = \Gamma\Big(\sum_{k\in\mathcal{N}_i}\omega_{ik}\big(H_k - D_k\big)\Big).$$
Substituting (A92) and omitting the projection map $\Pi_{c,0}$ due to Lemma 3 yields (101). □
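Equation (101) makes the similarity map directly computable: average the matrices $H_k - D_k$ over the neighborhood and apply $\Gamma$. A minimal sketch (our own code; random states and data matrices stand in for an actual graph neighborhood):

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(7)
c, n = 3, 4                                     # state dimension, neighborhood size

def pi0(X):
    return X - (np.trace(X) / c) * np.eye(c)

def Gamma(H):
    E = expm(H)
    return E / np.trace(E)

def rand_density():
    A = rng.standard_normal((c, c)) + 1j * rng.standard_normal((c, c))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

rhos = [rand_density() for _ in range(n)]       # neighbor states rho_k
Ds = [rand_density() for _ in range(n)]         # Hermitian data matrices D_k (stand-ins)
w = rng.random(n); w /= w.sum()                 # weights omega_ik with unit sum

# (101): S_i(rho) = Gamma( sum_k omega_ik (H_k - D_k) ),  H_k = Pi_{c,0}[log_m rho_k]
H = [pi0(logm(r)) for r in rhos]
S_i = Gamma(sum(wk * (Hk - Dk) for wk, Hk, Dk in zip(w, H, Ds)))

assert np.isclose(np.trace(S_i), 1) and np.all(np.linalg.eigvalsh(S_i) > 0)
print("Lemma 8: S_i(rho) is a density matrix")
```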
Proof of Proposition 6.
Substituting as in the proof of Lemma 8, we obtain
$$0 = d\Gamma\big(\Gamma^{-1}(\bar\rho)\big)\Big[\sum_{k\in\mathcal{N}_i}\omega_{ik}\big(\Pi_{c,0}[\log_m\rho_k] - D_k\big) - \Gamma^{-1}(\bar\rho)\Big].$$
Since $d\Gamma$ is one-to-one, the expression inside the brackets must vanish. Solving for $\bar\rho$ and omitting the projection map $\Pi_{c,0}$ due to Lemma 3 yields (101). □
Proof of Proposition 7.
Let $\rho(t)$ solve (104), and denote the argument of the replicator operator $R_\rho$ on the right-hand side by
$$\mu(t) := S\big(\rho(t)\big),$$
which yields (108) and (104), respectively, whereas (109) remains to be verified. Differentiation yields
$$\dot\mu_i = dS_i(\rho)[\dot\rho] \overset{(101),(27)}{=} d\Gamma\Big(\sum_{k\in\mathcal{N}_i}\omega_{ik}\big(\log_m\rho_k - D_k\big)\Big)\Big[\sum_{k\in\mathcal{N}_i}\omega_{ik}\,T_{\rho_k}[\dot\rho_k]\Big]$$
$$\overset{(101)}{=} d\Gamma\big(\Gamma^{-1}(S_i(\rho))\big)\Big[\sum_{k\in\mathcal{N}_i}\omega_{ik}\,T_{\rho_k}[\dot\rho_k]\Big]$$
$$\overset{(70)}{=} T^{-1}_{S_i(\rho)}\Big[\sum_{k\in\mathcal{N}_i}\omega_{ik}\,T_{\rho_k}[\dot\rho_k] - \Big\langle S_i(\rho),\, \sum_{k\in\mathcal{N}_i}\omega_{ik}\,T_{\rho_k}[\dot\rho_k]\Big\rangle\, I\Big]$$
$$\overset{\rho\, =\, T_\rho^{-1}[I]}{=} T^{-1}_{S_i(\rho)}\Big[\sum_{k\in\mathcal{N}_i}\omega_{ik}\,T_{\rho_k}[\dot\rho_k]\Big] - \Big\langle S_i(\rho),\, \sum_{k\in\mathcal{N}_i}\omega_{ik}\,T_{\rho_k}[\dot\rho_k]\Big\rangle\, S_i(\rho)$$
$$\overset{(64)}{=} R_{S_i(\rho)}\Big[\sum_{k\in\mathcal{N}_i}\omega_{ik}\,T_{\rho_k}[\dot\rho_k]\Big] \overset{(A64)}{=} \sum_{k\in\mathcal{N}_i}\omega_{ik}\,R_{\mu_i}\big[T_{\rho_k}[\dot\rho_k]\big]$$
$$\overset{(108)}{=} \sum_{k\in\mathcal{N}_i}\omega_{ik}\,R_{\mu_i}\circ T_{\rho_k}\circ R_{\rho_k}[\mu_k]$$
$$\overset{(64)}{=} \sum_{k\in\mathcal{N}_i}\omega_{ik}\,R_{\mu_i}\circ T_{\rho_k}\big[T^{-1}_{\rho_k}[\mu_k] - \langle\rho_k, \mu_k\rangle\,T^{-1}_{\rho_k}[I]\big]$$
$$= \sum_{k\in\mathcal{N}_i}\omega_{ik}\,R_{\mu_i}\big[\mu_k - \langle\rho_k, \mu_k\rangle\,I\big] \overset{(65)}{=} \sum_{k\in\mathcal{N}_i}\omega_{ik}\,R_{\mu_i}[\mu_k]$$
$$\overset{(107)}{=} R_{\mu_i}\big[\Omega[\mu]_i\big].$$
The initial condition for $\rho$ is given by (104); the initial condition for $\mu$ follows from (A99). □
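Proposition 7 suggests integrating the flow directly in the variables $\mu$. The sketch below (our own code) integrates $\dot\mu_i = R_{\mu_i}[\Omega[\mu]_i]$ with plain explicit Euler steps on a complete graph with uniform weights (a crude stand-in for the geometric integration used in the paper) and illustrates the convergence of each $\mu_i$ towards a pure, rank-one state.

```python
import numpy as np

rng = np.random.default_rng(8)
c, n, dt = 3, 5, 0.05                           # state dimension, vertices, step size

def T_inv(rho, X):                              # spectral evaluation of T_rho^{-1}, cf. (28)
    lam, U = np.linalg.eigh(rho)
    Xe = U.conj().T @ X @ U
    with np.errstate(divide="ignore", invalid="ignore"):
        M = (lam[:, None] - lam[None, :]) / (np.log(lam)[:, None] - np.log(lam)[None, :])
    M = np.where(np.isclose(lam[:, None], lam[None, :]), lam[:, None], M)
    return U @ (M * Xe) @ U.conj().T

def R(rho, X):                                  # replicator map (64)
    return T_inv(rho, X) - np.trace(rho @ X).real * rho

def rand_density():
    A = rng.standard_normal((c, c)) + 1j * rng.standard_normal((c, c))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

W = np.full((n, n), 1 / n)                      # uniform weights on a complete graph
mu = [rand_density() for _ in range(n)]

for _ in range(2000):                           # explicit Euler steps for (109)
    Om = [sum(W[i, k] * mu[k] for k in range(n)) for i in range(n)]
    mu = [m + dt * R(m, o) for m, o in zip(mu, Om)]

# after convergence, each mu_i is (close to) a pure, rank-one state
print([np.round(np.linalg.eigvalsh(m), 3) for m in mu[:2]])
```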
Proof of Proposition 8.
We compute, using (110) and (111),
$$J(\mu) = -\frac{1}{2}\sum_{j\in V}\Big\langle\mu_j,\, \sum_{k\in\mathcal{N}_j}\omega_{jk}\,\mu_k\Big\rangle,$$
$$\partial_{\mu_i}J(\mu) = -\frac{1}{2}\Big(\sum_{k\in\mathcal{N}_i}\omega_{ik}\,\mu_k + \sum_{j\in\mathcal{N}_i}\omega_{ji}\,\mu_j\Big) \overset{(110)}{=} -\Omega[\mu]_i.$$
Consequently, according to Proposition 2 and (64),
$$\mathrm{grad}_{\mu_i}J(\mu) = R_{\mu_i}\big[\partial_{\mu_i}J(\mu)\big] = -R_{\mu_i}\big[\Omega[\mu]_i\big],$$
which shows that the right-hand sides of (114) and (119) are equal. □
Proof of Proposition 9.
Starting with (115), we compute
$$J(\mu) = -\frac{1}{2}\big\langle\mu, \Omega[\mu]\big\rangle = -\frac{1}{2}\big\langle\mu, \big(\mathrm{id} - (\mathrm{id} - \Omega)\big)[\mu]\big\rangle = \frac{1}{2}\big\langle\mu, L_G[\mu]\big\rangle - \frac{\|\mu\|^2}{2}.$$
The quadratic form involving $L_G$ remains to be rewritten:
$$\big\langle\mu, L_G[\mu]\big\rangle = \sum_{i\in V}\big\langle\mu_i,\, \mu_i - \Omega[\mu]_i\big\rangle = \sum_{i\in V}\Big(\|\mu_i\|^2 - \Big\langle\mu_i, \sum_{k\in\mathcal{N}_i}\omega_{ik}\,\mu_k\Big\rangle\Big)$$
$$= \sum_{i\in V}\Big(\underbrace{\sum_{k\in\mathcal{N}_i}\omega_{ik}}_{=1}\|\mu_i\|^2 - \sum_{k\in\mathcal{N}_i}\omega_{ik}\,\langle\mu_i, \mu_k\rangle\Big) = \sum_{i\in V}\sum_{k\in\mathcal{N}_i}\omega_{ik}\,\big\langle\mu_i,\, \mu_i - \mu_k\big\rangle$$
$$= \frac{1}{2}\Big(\sum_{i\in V}\sum_{k\in\mathcal{N}_i}\omega_{ik}\,\big\langle\mu_i,\, \mu_i - \mu_k\big\rangle + \sum_{k\in V}\sum_{i\in\mathcal{N}_k}\omega_{ki}\,\big\langle\mu_k,\, \mu_k - \mu_i\big\rangle\Big),$$
where the last sum results from the first one by interchanging the indices $i$ and $k$. Using the symmetry relations (110) and (111), we rewrite the second sum and obtain
$$\big\langle\mu, L_G[\mu]\big\rangle = \frac{1}{2}\sum_{i\in V}\sum_{k\in\mathcal{N}_i}\omega_{ik}\,\big(\langle\mu_i, \mu_i - \mu_k\rangle + \langle\mu_k, \mu_k - \mu_i\rangle\big) = \frac{1}{2}\sum_{i\in V}\sum_{k\in\mathcal{N}_i}\omega_{ik}\,\|\mu_i - \mu_k\|^2,$$
which completes the proof. □
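The identity for $\langle\mu, L_G[\mu]\rangle$ proved above presumes the symmetry relations (110), (111). The following sketch (our own code) checks it on a cyclic nearest-neighbor graph, whose weights are symmetric with unit row sums.

```python
import numpy as np

rng = np.random.default_rng(9)
c, n = 3, 6

W = np.zeros((n, n))                            # symmetric weights with unit row sums,
for i in range(n):                              # cf. (110), (111): cyclic nearest neighbors
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5

def rand_herm():
    A = rng.standard_normal((c, c)) + 1j * rng.standard_normal((c, c))
    return (A + A.conj().T) / 2

mu = [rand_herm() for _ in range(n)]
ip = lambda X, Y: np.trace(X @ Y).real          # <X, Y> for Hermitian matrices

Om = [sum(W[i, k] * mu[k] for k in range(n)) for i in range(n)]
lhs = sum(ip(mu[i], mu[i] - Om[i]) for i in range(n))             # <mu, L_G[mu]>
rhs = 0.5 * sum(W[i, k] * ip(mu[i] - mu[k], mu[i] - mu[k])
                for i in range(n) for k in range(n))
assert np.isclose(lhs, rhs)
print("Proposition 9: Laplacian quadratic form identity confirmed")
```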
Proof of Lemma 9.
We start with claim (b).
(b)
Let $\mu\in\mathcal{D}_\Pi$ and $X\in T_\mu\mathcal{D}_\Pi$. Suppose that the vector $X$ is represented by a smooth curve $\eta: (-\varepsilon, \varepsilon)\to\mathcal{D}_\Pi$ such that $\eta(0) = \mu$ and $\dot\eta(0) = X$. In view of the definition (123) of $\mathcal{D}_\Pi$, we have
$$\eta(t) = \sum_{i\in[l]}\frac{p_i(t)}{\mathrm{tr}\,\pi_i}\,\pi_i\qquad\Longrightarrow\qquad X = \sum_{i\in[l]}\frac{\dot p_i(0)}{\mathrm{tr}\,\pi_i}\,\pi_i.$$
Consequently, if $U = \{u_1, \dots, u_c\}$ is a basis of $\mathbb{C}^c$ that diagonalizes $\mu$, then the tangent vector $X$ is also diagonal in this basis $U$, and $X$ commutes with $\mu$, i.e., $[\mu, X] = 0$, so that $X$ lies in the subspace of $T_\mu\mathcal{D}_c$ of matrices commuting with $\mu$. This proves (b).
(a)
The bijection $\mathcal{D}_\Pi\to S_l$ is explicitly given by
$$\Phi_\Pi:\ \mathcal{D}_\Pi\to S_l,\qquad \sum_{i\in[l]}\frac{p_i}{\mathrm{tr}\,\pi_i}\,\pi_i\ \mapsto\ (p_1, \dots, p_l).$$
This is bijective according to the definition of $\mathcal{D}_\Pi$. It remains to be shown that it is an isometry. Consider another tangent vector $Y\in T_\mu\mathcal{D}_\Pi$. By (b), $\mu, X, Y$ can all be diagonalized in a common eigenbasis, again denoted by $U$. Then, we can write
$$\mu = \sum_{i\in[c]}\tilde p_i\,u_iu_i^*,\qquad X = \sum_{i\in[c]}\tilde x_i\,u_iu_i^*,\qquad Y = \sum_{i\in[c]}\tilde y_i\,u_iu_i^*$$
and compute
$$\iota^*g_{\mathrm{BKM},\mu}(X, Y) = \int_0^\infty\mathrm{tr}\big(X(\mu + \lambda I)^{-1}Y(\mu + \lambda I)^{-1}\big)\,d\lambda = \sum_{i\in[c]}\int_0^\infty\mathrm{tr}\Big(\frac{\tilde x_i\tilde y_i}{(\tilde p_i + \lambda)^2}\,u_iu_i^*\Big)\,d\lambda = \sum_{i\in[c]}\frac{\tilde x_i\tilde y_i}{\tilde p_i}.$$
Note that the vector $\tilde p = (\tilde p_1, \dots, \tilde p_c)$ comes from $\mu\in\mathcal{D}_\Pi$. Therefore, the value $p_j/\mathrm{tr}\,\pi_j$ must occur $\mathrm{tr}\,\pi_j$ times in $\tilde p$, for every $j\in[l]$. This observation also holds for the vectors $\tilde x = (\tilde x_1, \dots, \tilde x_c)$ and $\tilde y = (\tilde y_1, \dots, \tilde y_c)$. Thus, the sum above can be reduced to
$$\sum_{i\in[c]}\frac{\tilde x_i\tilde y_i}{\tilde p_i} = \sum_{j\in[l]}\frac{x_jy_j}{p_j},$$
where $(p_1, \dots, p_l) = \Phi(\mu)$, $(x_1, \dots, x_l) = d\Phi[X]$ and $(y_1, \dots, y_l) = d\Phi[Y]$. Taking into account that $(x_1, \dots, x_l)$ and $(y_1, \dots, y_l)$ are the images of $X, Y$ under the differential $d\Phi$, we conclude
$$\iota^*g_{\mathrm{BKM},\mu}(X, Y) = \sum_{i\in[l]}\frac{x_iy_i}{p_i} \overset{(8)}{=} g_{\mathrm{FR},\Phi(\mu)}\big(d\Phi(X), d\Phi(Y)\big).$$
This proves part (a).
(c)
Part (c) is about the commutativity of the diagram
$$\begin{array}{ccc}\mathcal{D}_\Pi & \overset{\alpha_\Pi}{\hookrightarrow} & \mathcal{D}_\Pi^U\\[2pt] \big\downarrow{\scriptstyle\Phi_\Pi} & & \big\downarrow{\scriptstyle\Phi_\Pi^U}\\[2pt] S_l & \overset{\beta_\Pi}{\hookrightarrow} & S_c.\end{array}$$
The horizontal arrows can be described as follows. Recall that $\Pi = \{\pi_1, \dots, \pi_l\}$ and that $k_i = \mathrm{tr}\,\pi_i$ is the dimension of the image of the projector $\pi_i$. For a fixed $p = (p_1, \dots, p_l)\in S_l$, set
$$P = (P_1, \dots, P_c) := \big(\underbrace{p_1/k_1, \dots, p_1/k_1}_{k_1\ \text{times}}, \dots, \underbrace{p_l/k_l, \dots, p_l/k_l}_{k_l\ \text{times}}\big)\in S_c.$$
Then, $\alpha_\Pi$ and $\beta_\Pi$ are given by
$$\alpha_\Pi\Big(\sum_{i\in[l]}\frac{p_i}{k_i}\,\pi_i\Big) = \sum_{j\in[c]}P_j\,u_ju_j^*\in\mathcal{D}_\Pi^U\qquad\text{and}\qquad \beta_\Pi(p_1, \dots, p_l) = (P_1, \dots, P_c).$$
Diagram (A128) commutes according to the definition of the $\Phi$ maps. □
Proof of Lemma 10.
(i)
Due to the commutativity of the components $\mu_i$ of $\mu$, we can simplify the expression for the vector field of the QSAF as follows:
$$R_\mu\big[\Omega[\mu]\big]_i \overset{(106)}{=} R_{\mu_i}\big[\Omega[\mu]_i\big] \overset{(64),(28)}{=} \sum_{k\in\mathcal{N}_i}\omega_{ik}\Big(\int_0^1\mu_i^{1-\lambda}\,\mu_k\,\mu_i^{\lambda}\,d\lambda - \mathrm{tr}(\mu_i\mu_k)\,\mu_i\Big) = \sum_{k\in\mathcal{N}_i}\omega_{ik}\big(\mu_i\mu_k - \mathrm{tr}(\mu_i\mu_k)\,\mu_i\big).$$
Let $\mu\in\mathcal{D}_{\Pi,c}$, such that all the components $\mu_i$ can be written as
$$\mu_i = \sum_{r\in[l]}\frac{p_r^i}{\mathrm{tr}\,\pi_r}\,\pi_r,\qquad p^i = (p_1^i, \dots, p_l^i)\in S_l,\qquad i\in V.$$
Then, we can further simplify
$$\mu_i\mu_k = \sum_{r\in[l]}\frac{p_r^ip_r^k}{(\mathrm{tr}\,\pi_r)^2}\,\pi_r\qquad\text{and}\qquad\mathrm{tr}(\mu_i\mu_k) = \sum_{r\in[l]}\frac{p_r^ip_r^k}{\mathrm{tr}\,\pi_r}$$
and, consequently,
$$\sum_{k\in\mathcal{N}_i}\omega_{ik}\big(\mu_i\mu_k - \mathrm{tr}(\mu_i\mu_k)\,\mu_i\big) = \sum_{r\in[l]}\frac{x_r}{\mathrm{tr}\,\pi_r}\,\pi_r,$$
where
$$x_r := \sum_{k\in\mathcal{N}_i}\omega_{ik}\Big(\frac{p_r^k}{\mathrm{tr}\,\pi_r} - \sum_{s\in[l]}\frac{p_s^ip_s^k}{\mathrm{tr}\,\pi_s}\Big)\,p_r^i.$$
Thus,
$$R_\mu\big[\Omega[\mu]\big]_i = \sum_{r\in[l]}\frac{x_r}{\mathrm{tr}\,\pi_r}\,\pi_r.$$
This has to be compared with the general form of a tangent vector $X\in T_{\mu_i}\mathcal{D}_\Pi$ given by (A120). The only condition the vector $\dot p(0)$ in (A120) has to satisfy is that its components sum to 0. This also holds for $x = (x_1, \dots, x_l)$. We conclude that $R_\mu[\Omega[\mu]]_i$ lies in $T_{\mu_i}\mathcal{D}_\Pi$ for all $i\in V$ or, equivalently, $R_\mu[\Omega[\mu]]\in T_\mu\mathcal{D}_{\Pi,c}$.
(ii)
We write $\mu_i = U\,\mathrm{Diag}(S_i)\,U^*$ for all $i\in V$ with $S_i\in S_c$, and express $R_\mu[\Omega[\mu]]$ in terms of $S\in\mathcal{W}$ as
$$R_\mu\big[\Omega[\mu]\big]_i = \sum_{k\in\mathcal{N}_i}\omega_{ik}\big(\mu_i\mu_k - \mathrm{tr}(\mu_i\mu_k)\,\mu_i\big) = U\,\mathrm{Diag}\Big(\sum_{k\in\mathcal{N}_i}\omega_{ik}\big(S_i\cdot S_k - \langle S_i, S_k\rangle\,S_i\big)\Big)\,U^* = U\,\mathrm{Diag}\big(R_S[\Omega S]_i\big)\,U^*.\;\square$$
Proof of Corollary 3.
We write $D_i = U\,\mathrm{Diag}(\lambda_i)\,U^*$ with $\lambda_i\in\mathbb{R}^c$, diagonalized in the basis $U$. Then, the initial condition for the QSAF (109) is given by
$$\mu(0)_i = S\big(\mathbb{1}_{\mathcal{Q}}\big)_i \overset{(101)}{=} \Gamma\Big(-\sum_{k\in\mathcal{N}_i}\omega_{ik}\,D_k\Big).$$
Then, set $\tilde D_i := \sum_{k\in\mathcal{N}_i}\omega_{ik}\,D_k = U\,\mathrm{Diag}(\tilde\lambda_i)\,U^*$, where
$$\tilde\lambda_i = \sum_{k\in\mathcal{N}_i}\omega_{ik}\,\lambda_k\in\mathbb{R}^c.$$
Recall further that $\Gamma$ is computed in terms of the matrix exponential as specified by (66). Thus,
$$\mu(0)_i = \Gamma(-\tilde D_i) = \frac{\exp_m(-\tilde D_i)}{\mathrm{tr}\,\exp_m(-\tilde D_i)} = \frac{U\,\exp_m\big(-\mathrm{Diag}(\tilde\lambda_i)\big)\,U^*}{\mathrm{tr}\big(U\,\exp_m\big(-\mathrm{Diag}(\tilde\lambda_i)\big)\,U^*\big)} = U\,\mathrm{Diag}\Big(\frac{e^{-\tilde\lambda_i}}{\langle 1_c, e^{-\tilde\lambda_i}\rangle}\Big)\,U^*.$$
This shows that all the $\mu(0)_i$ are diagonalized by the same basis $U$, i.e., $\mu(0)\in\mathcal{D}^U_{\Pi,c}$, and we can apply Lemma 10 (ii). Therefore, the vector field of the QSAF is also diagonalized in the basis $U$, and we simply solve for the diagonal components. The quantum S-flow equation can be written as
$$\dot\mu_i = U\,\mathrm{Diag}\big(R_{S_i}[\Omega S]_i\big)\,U^*,\qquad \mu(0)_i = U\,\mathrm{Diag}\big(S(\mathbb{1}_{\mathcal{W}})_i\big)\,U^*,$$
with the classical similarity map $S$ defined in terms of the data vectors $\lambda_i$, and $\mu_i$ related to $S_i\in S_c$ by $\mu_i = U\,\mathrm{Diag}(S_i)\,U^*$. The solution to this system is
$$\mu_i(t) = U\,\mathrm{Diag}\big(S_i(t)\big)\,U^*,$$
where $S\in\mathcal{W}$ solves the classical S-flow equation $\dot S = R_S[\Omega S]$ with $S(0) = S(\mathbb{1}_{\mathcal{W}})$. □
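Corollary 3 can be observed numerically: for data matrices sharing an eigenbasis $U$, integrating the full quantum flow and the classical S-flow side by side (both with explicit Euler steps, our simplification) keeps $\mu_i(t) = U\,\mathrm{Diag}(S_i(t))\,U^*$ within numerical precision.

```python
import numpy as np

rng = np.random.default_rng(10)
c, n, dt, steps = 3, 4, 0.02, 400

A = rng.standard_normal((c, c)) + 1j * rng.standard_normal((c, c))
U, _ = np.linalg.qr(A)                          # common eigenbasis of all data matrices
lams = [rng.random(c) for _ in range(n)]        # eigenvalues lambda_i of D_i
W = np.full((n, n), 1 / n)                      # uniform weights

def exp_p(p, v):
    q = p * np.exp(v)
    return q / q.sum()

def T_inv(rho, X):                              # spectral evaluation of T_rho^{-1}, cf. (28)
    lam, V = np.linalg.eigh(rho)
    Xe = V.conj().T @ X @ V
    with np.errstate(divide="ignore", invalid="ignore"):
        M = (lam[:, None] - lam[None, :]) / (np.log(lam)[:, None] - np.log(lam)[None, :])
    M = np.where(np.isclose(lam[:, None], lam[None, :]), lam[:, None], M)
    return V @ (M * Xe) @ V.conj().T

# identical initializations: mu(0)_i = U Diag(S(1_W)_i) U^*
S = [exp_p(np.full(c, 1 / c), -sum(W[i, k] * lams[k] for k in range(n))) for i in range(n)]
mu = [U @ np.diag(s) @ U.conj().T for s in S]

for _ in range(steps):                          # explicit Euler for both flows
    OmS = [sum(W[i, k] * S[k] for k in range(n)) for i in range(n)]
    Om = [sum(W[i, k] * mu[k] for k in range(n)) for i in range(n)]
    S = [s + dt * ((np.diag(s) - np.outer(s, s)) @ o) for s, o in zip(S, OmS)]
    mu = [m + dt * (T_inv(m, o) - np.trace(m @ o).real * m) for m, o in zip(mu, Om)]

assert all(np.allclose(m, U @ np.diag(s) @ U.conj().T, atol=1e-6) for m, s in zip(mu, S))
print("Corollary 3: mu_i(t) = U Diag(S_i(t)) U^* along the whole trajectory")
```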

References

  1. Bakır, G.; Hofmann, T.; Schölkopf, B.; Smola, A.J.; Taskar, B.; Vishwanathan, S.V.N. (Eds.) Predicting Structured Data; MIT Press: Cambridge, MA, USA, 2007. [Google Scholar]
  2. Åström, F.; Petra, S.; Schmitzer, B.; Schnörr, C. Image Labeling by Assignment. J. Math. Imaging Vis. 2017, 58, 211–238. [Google Scholar] [CrossRef]
  3. Schnörr, C. Assignment Flows. In Variational Methods for Nonlinear Geometric Data and Applications; Grohs, P., Holler, M., Weinmann, A., Eds.; Springer: Berlin, Germany, 2020; pp. 235–260. [Google Scholar]
  4. Zern, A.; Zeilmann, A.; Schnörr, C. Assignment Flows for Data Labeling on Graphs: Convergence and Stability. Inf. Geom. 2022, 5, 355–404. [Google Scholar] [CrossRef]
  5. Hühnerbein, R.; Savarino, F.; Petra, S.; Schnörr, C. Learning Adaptive Regularization for Image Labeling Using Geometric Assignment. J. Math. Imaging Vis. 2021, 63, 186–215. [Google Scholar] [CrossRef]
  6. Sitenko, D.; Boll, B.; Schnörr, C. Assignment Flow For Order-Constrained OCT Segmentation. Int. J. Comput. Vis. 2021, 129, 3088–3118. [Google Scholar] [CrossRef]
  7. Hofbauer, J.; Sigmund, K. Evolutionary Game Dynamics. Bull. Am. Math. Soc. 2003, 40, 479–519. [Google Scholar] [CrossRef]
  8. Sandholm, W.H. Population Games and Evolutionary Dynamics; MIT Press: Cambridge, MA, USA, 2010. [Google Scholar]
  9. Amari, S.-I.; Nagaoka, H. Methods of Information Geometry; American Mathematical Society: Providence, RI, USA; Oxford University Press: Oxford, UK, 2000. [Google Scholar]
  10. Zeilmann, A.; Savarino, F.; Petra, S.; Schnörr, C. Geometric Numerical Integration of the Assignment Flow. Inverse Probl. 2020, 36, 034004. [Google Scholar] [CrossRef]
  11. Sitenko, D.; Boll, B.; Schnörr, C. A Nonlocal Graph-PDE and Higher-Order Geometric Integration for Image Labeling. SIAM J. Imaging Sci. 2023, 16, 501–567. [Google Scholar] [CrossRef]
  12. Zern, A.; Zisler, M.; Petra, S.; Schnörr, C. Unsupervised Assignment Flow: Label Learning on Feature Manifolds by Spatially Regularized Geometric Assignment. J. Math. Imaging Vis. 2020, 62, 982–1006. [Google Scholar] [CrossRef]
  13. Zisler, M.; Zern, A.; Petra, S.; Schnörr, C. Self-Assignment Flows for Unsupervised Data Labeling on Graphs. SIAM J. Imaging Sci. 2020, 13, 1113–1156. [Google Scholar] [CrossRef]
  14. Bengtsson, I.; Zyczkowski, K. Geometry of Quantum States: An Introduction to Quantum Entanglement, 2nd ed.; Cambridge University Press: Cambridge, UK, 2017. [Google Scholar]
  15. Petz, D. Quantum Information Theory and Quantum Statistics; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
  16. Schwarz, J.; Boll, B.; Sitenko, D.; Gonzalez-Alvarado, D.; Gärttner, M.; Albers, P.; Schnörr, C. Quantum State Assignment Flows. In Scale Space and Variational Methods in Computer Vision; Calatroni, L., Donatelli, M., Morigi, S., Prato, M., Santacesaria, M., Eds.; Springer: Berlin, Germany, 2023; pp. 743–756. [Google Scholar]
  17. Amari, S.-I. Differential-Geometrical Methods in Statistics, 1990 ed.; Lecture Notes in Statistics; Springer: Berlin/Heidelberg, Germany, 1985; Volume 28. [Google Scholar]
  18. Lauritzen, S.L. Chapter 4: Statistical Manifolds. In Differential Geometry in Statistical Inference; Gupta, S.S., Amari, S.I., Barndorff-Nielsen, O.E., Kass, R.E., Lauritzen, S.L., Rao, C.R., Eds.; Institute of Mathematical Statistics: Hayward, CA, USA, 1987; pp. 163–216. [Google Scholar]
  19. Brown, L.D. Fundamentals of Statistical Exponential Families; Institute of Mathematical Statistics: Hayward, CA, USA, 1986. [Google Scholar]
  20. Čencov, N.N. Statistical Decision Rules and Optimal Inference; American Mathematical Society: Providence, RI, USA, 1981. [Google Scholar]
  21. Brazitikos, S.; Giannopoulos, A.; Valettas, P.; Vritsiou, B.-H. Geometry of Isotropic Convex Bodies; American Mathematical Society: Providence, RI, USA, 2014. [Google Scholar]
  22. Ay, N.; Jost, J.; Lê, H.V.; Schwachhöfer, L. Information Geometry; Springer: Berlin/Heidelberg, Germany, 2017. [Google Scholar]
  23. Calin, O.; Udriste, C. Geometric Modeling in Probability and Statistics; Springer: Berlin/Heidelberg, Germany, 2014. [Google Scholar]
  24. Petz, D. Geometry of Canonical Correlation on the State Space of a Quantum System. J. Math. Phys. 1994, 35, 780–795. [Google Scholar] [CrossRef]
  25. Bodmann, B.G.; Haas, J.I. A Short History of Frames and Quantum Designs. In Topological Phases of Matter and Quantum Computation; Contemporary Mathematics; American Mathematical Society: Providence, RI, USA, 2020; Volume 747, pp. 215–226. [Google Scholar]
  26. Petz, D.; Toth, G. The Bogoliubov Inner Product in Quantum Statistics. Lett. Math. Phys. 1993, 27, 205–216. [Google Scholar] [CrossRef]
  27. Grasselli, M.R.; Streater, R.F. On the Uniqueness of the Chentsov Metric in Quantum Information Geometry. Infin. Dimens. Anal. Quantum Probab. Relat. Top. 2001, 4, 173–182. [Google Scholar] [CrossRef]
  28. Thanwerdas, Y.; Pennec, X. O(n)-invariant Riemannian Metrics on SPD Matrices. Linear Algebra Appl. 2023, 661, 163–201. [Google Scholar] [CrossRef]
  29. Bhatia, R. Positive Definite Matrices; Princeton University Press: Princeton, NJ, USA, 2006. [Google Scholar]
  30. Skovgaard, L.T. A Riemannian Geometry of the Multivariate Normal Model. Scand. J. Stat. 1984, 11, 211–223. [Google Scholar]
  31. Bridson, M.R.; Haefliger, A. Metric Spaces of Non-Positive Curvature; Springer: Berlin/Heidelberg, Germany, 1999. [Google Scholar]
  32. Jost, J. Riemannian Geometry and Geometric Analysis, 7th ed.; Springer: Berlin/Heidelberg, Germany, 2017. [Google Scholar]
  33. Arsigny, V.; Fillard, P.; Pennec, X.; Ayache, N. Geometric Means in a Novel Vector Space Structure on Symmetric Positive-Definite Matrices. SIAM J. Matrix Anal. Appl. 2007, 29, 328–347. [Google Scholar] [CrossRef]
  34. Michor, P.W.; Petz, D.; Andai, A. The Curvature of the Bogoliubov-Kubo-Mori Scalar Product on Matrices. Infin. Dimens. Anal. Quantum Probab. Relat. Top. 2000, 3, 1–14. [Google Scholar]
  35. Rockafellar, R. Convex Analysis; Princeton University Press: Princeton, NJ, USA, 1970. [Google Scholar]
  36. Savarino, F.; Schnörr, C. Continuous-Domain Assignment Flows. Eur. J. Appl. Math. 2021, 32, 570–597. [Google Scholar] [CrossRef]
  37. Absil, P.A.; Mahony, R.; Sepulchre, R. Optimization Algorithms on Matrix Manifolds; Princeton University Press: Princeton, NJ, USA, 2008. [Google Scholar]
  38. Higham, N.J. Functions of Matrices: Theory and Computation; Society for Industrial and Applied Mathematics (SIAM): Philadelphia, PA, USA, 2008. [Google Scholar]
  39. Savarino, F.; Albers, P.; Schnörr, C. On the Geometric Mechanics of Assignment Flows for Metric Data Labeling. arXiv 2021, arXiv:2111.02543; to appear in Information Geometry. [Google Scholar]
  40. Levine, Y.; Yakira, D.; Cohen, N.; Shashua, A. Deep Learning and Quantum Entanglement: Fundamental Connections with Implications to Network Design. In Proceedings of the Sixth International Conference on Learning Representations ICLR, Vancouver, BC, Canada, 30 April–3 May 2018. [Google Scholar]
  41. Chen, R.T.Q.; Rubanova, Y.; Bettencourt, J.; Duvenaud, D. Neural Ordinary Differential Equations. In Advances in Neural Information Processing Systems 31, Proceedings of the 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montreal, QC, Canada, 3–8 December 2018. [Google Scholar]
  42. Horn, R.A.; Johnson, C.R. Matrix Analysis, 2nd ed.; Cambridge University Press: Cambridge, UK, 2013. [Google Scholar]
  43. Kobayashi, S.; Nomizu, K. Foundations of Differential Geometry; Interscience Publishers: New York, NY, USA; John Wiley & Sons: Hoboken, NJ, USA, 1969; Volume II. [Google Scholar]
Figure 1. (a) A range of RGB unit color vectors in the positive orthant. (b) An image with data according to (a). (c) A noisy version of (b): each pixel $i\in V$ displays a Bloch vector $d_i = (d_{i,1}, d_{i,2}, d_{i,3})$ defined by Equation (141) as an initial density matrix $\rho_i(0)$, $i\in V$, of the QSAF. (d) The labels (pure states) generated by integrating the quantum state assignment flow using uniform weights. (e) The vectors depicted in (a) are replaced by the unit vectors corresponding to the vertices of the icosahedron centered at 0. (f–h) Analogous to (b–d), based on (e) instead of (a) and using the same noise level in (g). The colors in (f–h) visualize the Bloch vectors by RGB vectors that result from translating the sphere of (e) to the center $\frac{1}{2}(1,1,1)$ of the RGB cube and scaling it by $\frac{1}{2}$. We refer to the text for a discussion.
Figure 2. Left pair: A random collection of patches with oriented image structure. The colored image of each patch shows its orientation using the color code depicted by the rightmost panel. Each patch is represented by a rank-one matrix $D$ in (89), obtained by vectorizing the patch and taking the tensor product. Center pair: The final state of the QSAF obtained by geometric integration with uniform weighting $\omega_{ik} = \frac{1}{|\mathcal{N}_i|}$, $k\in\mathcal{N}_i$, $i\in V$, of the nearest neighbor states. It represents an image partition but preserves image structure due to geometric smoothing of patches encoded by non-commutative state spaces.
Figure 3. (a) A random collection of patches with oriented image structure. (b) A collection of patches with the same oriented image structure. (c) Pixel-wise mean of the patches (a,b) at each location. (d) The QSAF recovers a close approximation of (b) (color code: see Figure 2) by iteratively smoothing the states $\rho_k$, $k\in\mathcal{N}_i$, corresponding to (c) through geometric integration.
Figure 4. (a) A real image partitioned into patches of size $8\times 8$ and $4\times 4$ pixels, respectively. Each patch is represented as a pure state with respect to a Fourier frame (see text). Instead of the nearest neighbor adjacency on a regular grid, each patch is adjacent to its eight closest patches in the entire collection. Integrating the QSAF and decoding the resulting states (see text) yield (b) ($8\times 8$ patches) and (c) ($4\times 4$ patches), respectively. Result (b) illustrates the effect of smoothing at the patch level in the Fourier domain, whereas the smaller spatial scale used to compute (c) represents the input data fairly accurately, despite achieving significant data reduction.
Figure 5. (a) A $5\times 5$ grid graph. (b) Random Bloch vectors $d_i\in S^2\subset\mathbb{R}^3$ (visualized using pseudocolor) defining states $\rho_i$ according to Equation (141) for each vertex of a $32\times 32$ grid graph. (c) Line graph corresponding to (a). Each vertex corresponds to an edge $ij$ of the graph (a) and an initially separable state $\rho_{ij} = \rho_i\otimes\rho_j$. This defines a simple shallow tensor network. The histograms display the norms of the Bloch vectors of the states $\mathrm{tr}_j(\rho_{ij})$ and $\mathrm{tr}_i(\rho_{ij})$, obtained by partially tracing out one factor, for each state $\rho_{ij}$ indexed by a vertex $ij$ of the line graph of the grid graph in (b). (d) Histogram showing that in the initial state, all states are separable, while (e,f) both display a histogram of the norms of all Bloch vectors after convergence of the quantum state assignment flow with uniform weights towards pure states. (g) Using the center coordinates of each edge of the grid graph (b), the entanglement represented by $\rho_{ij}$ is visualized by a disk and "heat map" colors (blue: low entanglement; red: large entanglement). For visual clarity, (h,i) again display the same information after thresholding, using two colors only: entangled states are marked with red when the norm of the Bloch vectors drops below the thresholds of 0.95 and 0.99, respectively, and otherwise with blue.
Table 1. Components of the Assignment Flow approach and the corresponding components of the novel Quantum State Assignment Flow approach.
Assignment Flow (AF) | Quantum State AF (QSAF)
Single-vertex AF (Section 3.1) | Single-vertex QSAF (Section 4.2)
AF approach (Section 3.2) | QSAF approach (Section 4.3)
Riemannian gradient AF (Section 3.3) | Riemannian gradient QSAF (Section 4.4)
Recovery of the AF from the QSAF by restriction (Section 4.5)