Article

Accelerating the Sinkhorn Algorithm for Sparse Multi-Marginal Optimal Transport via Fast Fourier Transforms

Institute of Mathematics, Technische Universität Berlin, Straße des 17. Juni 136, 10623 Berlin, Germany
*
Author to whom correspondence should be addressed.
Algorithms 2022, 15(9), 311; https://doi.org/10.3390/a15090311
Submission received: 5 August 2022 / Revised: 25 August 2022 / Accepted: 26 August 2022 / Published: 30 August 2022
(This article belongs to the Section Randomized, Online, and Approximation Algorithms)

Abstract
We consider the numerical solution of the discrete multi-marginal optimal transport (MOT) problem by means of the Sinkhorn algorithm. In general, the Sinkhorn algorithm suffers from the curse of dimensionality with respect to the number of marginals. If the MOT cost function decouples according to a tree or circle, its complexity is linear in the number of marginal measures. In this case, we speed up the convolution with the radial kernel required in the Sinkhorn algorithm via non-uniform fast Fourier methods. Each step of the proposed accelerated Sinkhorn algorithm with a tree-structured cost function has a complexity of $\mathcal O(KN)$ instead of the classical $\mathcal O(KN^2)$ for straightforward matrix–vector operations, where $K$ is the number of marginals and each marginal measure is supported on, at most, $N$ points. In the case of a circle-structured cost function, the complexity improves from $\mathcal O(KN^3)$ to $\mathcal O(KN^2)$. This is confirmed through numerical experiments.

1. Introduction

The optimal transport (OT) problem is an optimization problem that deals with the search for an optimal map (plan) that moves mass between two or more measures at low cost [1,2]. OT appears in a wide range of applications such as image and signal processing [3,4,5,6,7], economics [8,9], finance [10,11], and physics [12,13]. The OT problem was first introduced in 1781 by Monge. His objective was to find a map between two probability measures $\mu_1, \mu_2$ on $\mathbb R^d$ that transports $\mu_1$ to $\mu_2$ with minimal cost, where the cost function describes the cost of transporting a mass between two points in $\mathbb R^d$. However, such maps do not always exist, so Kantorovich [14] relaxed the problem in 1942 by looking for a transport plan with two prescribed marginals $\mu_1$ and $\mu_2$ that minimizes a certain cost functional.
Several authors have generalized the formulation to multi-marginal optimal transport (MOT) [15,16,17], where more than two marginal measures are given. For the given probability measures $\mu^k$ on $\Omega_k \subset \mathbb R^d$, $k = 1,\dots,K$, an optimal transport plan $\pi$ is defined as a solution of the MOT problem
$$\min_{\pi \in \Pi(\mu^1,\dots,\mu^K)} \int_{\Omega_1 \times \dots \times \Omega_K} c(x_1,\dots,x_K)\,\mathrm d\pi(x_1,\dots,x_K), \tag{1}$$
where $\Pi(\mu^1,\dots,\mu^K)$ is the convex set of all joint probability measures $\pi$ whose marginals are $\mu^k$, and $c\colon \Omega_1 \times \dots \times \Omega_K \to \mathbb R$ is the cost function.
Since the numerical computation of a transport plan is difficult in general, a regularization term such as the entropy [13,18], Kullback–Leibler divergence [19], general $f$-divergence [20], or $L^2$-regularization [21,22] can be added to make the problem strictly convex. Different approaches such as the Sinkhorn algorithm [1,13], stochastic gradient descent [23], the Gauss–Seidel method [22], or proximal splitting [24] have been used to iteratively determine a minimizing sequence of the MOT problem.
However, the problem suffers from the curse of dimensionality as the complexity grows exponentially with the number K of marginal measures. One way to circumvent this lies in incorporating additional sparsity assumptions into the model. Polynomial-time algorithms to solve certain sparse MOT problems have been studied in [13,25,26]. We will assume that the cost function decouples according to a graph, where the nodes correspond to the marginals and the cost function is the sum of functions that depend only on two variables which correspond to two marginals connected by an edge of the graph. For example, the circle-structured Euclidean cost function reads as
$$c(x_1,\dots,x_K) = \|x_1 - x_2\|_2^2 + \dots + \|x_{K-1} - x_K\|_2^2 + \|x_K - x_1\|_2^2, \qquad x_1,\dots,x_K \in \mathbb R^d.$$
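For concreteness, the circle-structured cost above can be evaluated directly; a minimal Python sketch (the function name `circle_cost` and the sample values are ours, not from the paper):

```python
import numpy as np

def circle_cost(points):
    """Circle-structured squared Euclidean cost: sum of ||x_k - x_{k+1}||_2^2
    over the edges of the circle, closing the last edge K -> 1."""
    K = len(points)
    return sum(float(np.sum((points[k] - points[(k + 1) % K]) ** 2))
               for k in range(K))

# K = 3 points in R^1: (1-2)^2 + (2-4)^2 + (4-1)^2 = 1 + 4 + 9 = 14
c = circle_cost([np.array([1.0]), np.array([2.0]), np.array([4.0])])
```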
MOT problems with graph-structured cost functions of constant tree-width can be solved with the Sinkhorn algorithm in polynomial time [27]. In [25], polynomial-time algorithms were presented for the MOT problem and its regularized counterpart for the cases of a graph structure and a set-optimization structure, as well as a low-rank and sparsely structured cost function. Another sparsity assumption is that the transport plan itself is thinly spread, which is, e.g., the case for the $L^2$-regularized problem [21,22].

1.1. Our Contributions

In the present paper, we study the discrete, entropy-regularized MOT problem with a tree-structured [3,13] or a circle-structured cost function, where all measures are supported on a finite number of points (atoms) in $\mathbb R^d$. Then, the computational time of the Sinkhorn algorithm [28,29,30], which iteratively determines a sequence converging to the solution, depends linearly on the number $K$ of input measures. If the number of atoms is large, however, the Sinkhorn algorithm still requires considerable computational time and memory, which mainly comes from computing a discrete convolution, i.e., a matrix–vector product with a kernel matrix.
This is significantly improved by Fourier-based fast summation methods [31,32]. The key idea is the approximation of the kernel function using a Fourier series, which enables the application of the non-uniform fast Fourier transform (NFFT). Although such fast summation methods are frequently used in different applications such as electrostatic particle interaction [33], tomography [34], image segmentation [35], and, very recently, also OT with two marginals [36], they have not been utilized for MOT so far. Furthermore, a method for accelerating the Sinkhorn algorithm for Wasserstein barycenters via computing the convolution with a different kernel, namely the heat kernel, was discussed in [37].
Our main contribution is the combination of the fast summation method with the sparsity of the tree- or circle-structured cost function in the MOT problem for accelerating the Sinkhorn algorithm. Each iteration step has a complexity of $\mathcal O(KN)$ for a tree- and $\mathcal O(KN^2)$ for a circle-structured cost function, compared to $\mathcal O(KN^2)$ and $\mathcal O(KN^3)$, respectively, with straightforward matrix–vector operations, where $N$ is an upper bound on the number of atoms of each of the $K$ marginal measures. Our numerical tests with both tree- and circle-structured cost functions confirm a considerable acceleration, while the accuracy stays almost the same. A different acceleration of the Sinkhorn algorithm via low-rank approximation for tree-structured costs yields the same asymptotic complexity [38]. We note that MOT problems with tree-structured cost functions are used for the computation of Wasserstein barycenters [4], and those with circle-structured costs for computing Euler flows [39].

1.2. Outline of the Paper

Section 2 introduces the notation. In Section 3, we focus on the discrete MOT problem with squared Euclidean norm cost functions and the numerical solution of the corresponding entropy-regularized problem using the Sinkhorn algorithm. We investigate sparsely structured cost functions that decouple according to a tree or circle in Section 4; then, the complexity of the Sinkhorn algorithm depends only linearly on $K$. Section 5 describes a fast summation method for further accelerating the Sinkhorn algorithm. Finally, in Section 6, we verify the practical performance by applying the method to generalized Euler flows and to finding generalized Wasserstein barycenters. We compare the computational time of the proposed NFFT-based Sinkhorn algorithm with that of the algorithm based on direct matrix multiplication.

2. Notation

Let $K \in \mathbb N$ and $\mathbf n = (n_1,\dots,n_K) \in \mathbb N^K$. We set $[K] := \{1,\dots,K\}$ and consider $K$ finite sets
$$\Omega_k = \bigl\{x^k_{i_k} : i_k \in [n_k]\bigr\} \subset \mathbb R^d, \qquad k \in [K],$$
consisting of points $x^k_{i_k} \in \mathbb R^d$, which are called atoms. We set $\Omega := \Omega_1 \times \dots \times \Omega_K$. Additionally, we define the index set
$$I := \bigl\{i = (i_1,\dots,i_K) : i_k \in [n_k],\ k \in [K]\bigr\}$$
and the set of $K$-dimensional matrices (tensors)
$$\mathbb R^{\mathbf n} := \mathbb R^{n_1 \times \dots \times n_K}.$$
Let $\mathcal P(\Omega_k)$ denote the set of probability measures on $\Omega_k$. In this paper, we consider $K$ discrete probability measures, also called marginal measures, $\mu^k \in \mathcal P(\Omega_k)$, given by
$$\mu^k = \sum_{i_k=1}^{n_k} \mu^k_{i_k}\, \delta_{x^k_{i_k}}, \qquad k \in [K],$$
where the probabilities satisfy
$$\mu^k_{i_k} \ge 0, \qquad \sum_{i_k=1}^{n_k} \mu^k_{i_k} = 1 \qquad \text{for all } i_k \in [n_k],\ k \in [K],$$
and, for all $A \subseteq \Omega_k$, the Dirac measure is given by
$$\delta_{x^k_{i_k}}(A) := \begin{cases} 1 & \text{if } x^k_{i_k} \in A, \\ 0 & \text{otherwise.} \end{cases}$$
For $G, H \in \mathbb R^{\mathbf n}$, we denote their component-wise (Hadamard) product by
$$G \circ H := \bigl[G_i H_i\bigr]_{i \in I} \in \mathbb R^{\mathbf n},$$
and similarly, their component-wise division by $\oslash$, as well as the Frobenius inner product
$$\langle G, H\rangle_F := \sum_{i \in I} G_i H_i \in \mathbb R.$$
The tensor product (Kronecker product) of $u, v \in \mathbb R^m$ is denoted by $u \otimes v \in \mathbb R^{m \times m}$. Analogues can be defined for tensors of different size.
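These component-wise operations map directly onto NumPy; a small illustrative sketch (the matrices are our own example values):

```python
import numpy as np

G = np.array([[1.0, 2.0], [3.0, 4.0]])
H = np.array([[5.0, 6.0], [7.0, 8.0]])

hadamard = G * H            # component-wise (Hadamard) product G ∘ H
quotient = G / H            # component-wise division G ⊘ H
frobenius = np.sum(G * H)   # Frobenius inner product <G, H>_F

u = np.array([1.0, 2.0])
v = np.array([3.0, 4.0])
outer = np.outer(u, v)      # tensor (Kronecker) product u ⊗ v, a 2 x 2 matrix
```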

3. Multi-Marginal Optimal Transport

In the following, we consider the discrete multi-marginal optimal transport (MOT) between $K$ marginal measures $\mu^k \in \mathcal P(\Omega_k)$, $k \in [K]$. We define the set of admissible transport plans by
$$\Pi(\mu^1,\dots,\mu^K) := \bigl\{\Pi \in \mathbb R^{\mathbf n}_{\ge 0} : P_k(\Pi) = \mu^k \text{ for all } k \in [K]\bigr\},$$
where the $k$-th marginal projection is defined as
$$P_k(\Pi) := \Bigl[\,\sum_{\substack{i_\ell \in [n_\ell] \\ \ell \in [K]\setminus\{k\}}} \Pi_{i_1,\dots,i_K}\Bigr]_{i_k \in [n_k]} \in \mathbb R^{n_k}. \tag{2}$$
For a cost function $c\colon \Omega \to \mathbb R_{\ge 0}$ and the samples $x_i = (x^1_{i_1},\dots,x^K_{i_K})$, $i \in I$, we define the respective cost matrix
$$C := [C_i]_{i \in I} = [c(x_i)]_{i \in I} = \bigl[c(x^1_{i_1},\dots,x^K_{i_K})\bigr]_{(i_1,\dots,i_K) \in I} \in \mathbb R^{\mathbf n}_{\ge 0}. \tag{3}$$
The discrete MOT problem reads
$$\min_{\Pi \in \Pi(\mu^1,\dots,\mu^K)} \langle \Pi, C\rangle_F, \tag{4}$$
whose solution $\hat\Pi \in \mathbb R^{\mathbf n}_{\ge 0}$ is called an optimal plan.
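To make these objects concrete, the following sketch assembles the cost tensor $C$ and the marginal projection $P_k$ for a tiny instance; the point sets, sizes, and function names are illustrative, not from the paper:

```python
import numpy as np

# K = 3 marginals, each supported on n_k points in R^1
X = [np.array([0.0, 1.0]), np.array([0.0, 0.5, 1.0]), np.array([0.25, 0.75])]
n = [len(x) for x in X]

# Cost tensor C_i: sum over pairs k1 < k2 of |x^{k1}_{i_{k1}} - x^{k2}_{i_{k2}}|^2
C = np.zeros(n)
for i in np.ndindex(*n):
    C[i] = sum((X[k1][i[k1]] - X[k2][i[k2]]) ** 2
               for k1 in range(3) for k2 in range(k1 + 1, 3))

def marginal(Pi, k):
    """k-th marginal projection P_k: sum over all axes except k."""
    return Pi.sum(axis=tuple(a for a in range(Pi.ndim) if a != k))

# The product plan of uniform marginals is admissible:
Pi = np.full(n, 1.0 / np.prod(n))
```

Here `marginal(Pi, 0)` returns the uniform vector $(1/2, 1/2)$, so `Pi` lies in the admissible set for uniform marginals.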

Entropy Regularization

As the MOT problem (4) is numerically intractable in general, we consider for $\eta > 0$ the entropy-regularized multi-marginal optimal transport (MOT$_\eta$) problem
$$\min_{\Pi \in \Pi(\mu^1,\dots,\mu^K)} \langle \Pi, C\rangle_F + \eta\, \bigl\langle \Pi, \log \Pi - \mathbf 1_{\mathbf n}\bigr\rangle_F = \min_{\Pi \in \Pi(\mu^1,\dots,\mu^K)} \sum_{i \in I} \Pi_i C_i + \eta \sum_{i \in I} \Pi_i (\log \Pi_i - 1), \tag{5}$$
which is a convex optimization problem. It is possible to numerically deduce the optimal transport plan $\hat\Pi$ of (5) from the solution of the corresponding Lagrangian dual problem. The following theorem is a special case of [3] for a constant entropy function.
Theorem 1.
The Lagrangian dual formulation of the discrete MOT$_\eta$ problem (5) states
$$\sup_{\varphi^k \in \mathbb R^{n_k}_{\ge 0},\, k \in [K]} S\bigl(\varphi^1,\dots,\varphi^K\bigr) := \sup_{\varphi^k \in \mathbb R^{n_k}_{\ge 0},\, k \in [K]} \eta \Bigl(\sum_{k \in [K]} \sum_{j \in [n_k]} \mu^k_j \log \varphi^k_j - \sum_{i \in I} K_i \Phi_i\Bigr), \tag{6}$$
where the kernel matrix $K \in \mathbb R^{\mathbf n}$ is defined by
$$K_i := \exp\Bigl(-\frac{C_i}{\eta}\Bigr), \qquad i \in I,$$
and the dual tensor $\Phi = \bigotimes_{k=1}^K \varphi^k$ by
$$\Phi_i := \prod_{k=1}^K \varphi^k_{i_k}, \qquad i \in I.$$
The functional $S\colon \mathbb R^{n_1}_{\ge 0} \times \dots \times \mathbb R^{n_K}_{\ge 0} \to \mathbb R$ is called the Sinkhorn function.
The solutions of the dual and the primal problem are generally not equal: the value of (6) is, in general, a lower bound for the value of the primal problem (5). Equality holds if the cost function $c$ is lower semi-continuous, i.e., $\liminf_{x \to x_0} c(x) \ge c(x_0)$ for every $x_0 \in \Omega$, or Borel measurable and bounded [40]. This is obviously the case for the squared Euclidean norm cost function
$$c\colon \Omega \to [0,\infty), \qquad c(x_i) = \sum_{\substack{k_1, k_2 \in [K] \\ k_1 \ne k_2}} \bigl\|x^{k_1}_{i_{k_1}} - x^{k_2}_{i_{k_2}}\bigr\|_2^2,$$
that we study here.
Proposition 1
([41]). An optimal plan of the MOT$_\eta$ problem (5) is given by
$$\hat\Pi = K \circ \hat\Phi,$$
where $\hat\Phi = \bigotimes_{k \in [K]} \hat\varphi^k$ and $\hat\varphi^k$, $k \in [K]$, are the optimal solutions of the dual problem (6).
A sequence converging to the optimal dual vectors $\hat\varphi^k$, $k \in [K]$, in (6) can be iteratively determined by the Sinkhorn algorithm [13,42] presented in Algorithm 1, where we note that the update in line 6 is obtained by differentiating the Sinkhorn function $S$ with respect to $\varphi^k$, $k \in [K]$, and setting the derivative to zero. The complexity of the algorithm mainly comes from the computation of the marginal $P_k(K \circ \Phi)$, where the projection $P_k$ is defined in (2). In general, the number of operations depends exponentially on $K$.
Algorithm 1 Sinkhorn iterations for the MOT$_\eta$ problem
1: Input: Initial values $(\varphi^k)^{(0)} \in \mathbb R^{n_k}$, $k \in [K]$, regularization parameter $\eta > 0$, threshold $\delta > 0$
2: Set $r \leftarrow 0$
3: do
4:  for $k = 1,\dots,K$ do
5:   Compute $\tilde\Phi_k^{(r+1)} := \bigotimes_{\ell \in [k-1]} \varphi^{\ell,(r+1)} \otimes \bigotimes_{\ell \in [K] \setminus [k-1]} \varphi^{\ell,(r)}$
6:   Compute the dual vectors $\varphi^{k,(r+1)} := \mu^k \circ \varphi^{k,(r)} \oslash P_k\bigl(K \circ \tilde\Phi_k^{(r+1)}\bigr)$
7:  end for
8:  Increment $r \leftarrow r+1$
9: while $\bigl|S\bigl(\varphi^{1,(r)},\dots,\varphi^{K,(r)}\bigr) - S\bigl(\varphi^{1,(r-1)},\dots,\varphi^{K,(r-1)}\bigr)\bigr| \ge \delta$
10: return optimal plan $\hat\Pi = K \circ \hat\Phi$ with $\hat\Phi = \bigotimes_{k \in [K]} \varphi^{k,(r)}$
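For very small instances, Algorithm 1 can be prototyped with dense tensors. The sketch below is our own minimal implementation; it stops on the change of the dual vectors rather than of the Sinkhorn function $S$, and all names are ours:

```python
import numpy as np

def sinkhorn_mot(K_tensor, mus, delta=1e-12, max_iter=10_000):
    """Sinkhorn iterations for the entropy-regularized MOT problem with a
    dense kernel tensor K = exp(-C/eta) of shape (n_1, ..., n_K).
    Only feasible for tiny K: memory and time grow exponentially in K."""
    K = K_tensor.ndim
    phi = [np.ones(m.size) for m in mus]

    def marginal(k):
        # P_k(K ∘ Φ): multiply in all dual vectors, then sum out all axes but k
        T = K_tensor
        for j in range(K):
            shape = [1] * K
            shape[j] = -1
            T = T * phi[j].reshape(shape)
        return T.sum(axis=tuple(a for a in range(K) if a != k))

    for _ in range(max_iter):
        prev = [p.copy() for p in phi]
        for k in range(K):
            phi[k] = mus[k] * phi[k] / marginal(k)
        if max(np.max(np.abs(p - q)) for p, q in zip(phi, prev)) < delta:
            break
    return phi

# Tiny two-marginal example: the resulting plan matches the prescribed marginals
rng = np.random.default_rng(0)
Kt = np.exp(-rng.random((3, 4)) / 0.5)           # kernel for eta = 0.5
mus = [np.full(3, 1 / 3), np.full(4, 1 / 4)]
phi = sinkhorn_mot(Kt, mus)
plan = Kt * np.outer(phi[0], phi[1])
```

After convergence, `plan` (the kernel times the outer product of the returned dual vectors) has, up to the tolerance, the prescribed marginals.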

4. Sparse Cost Functions

In this section, we take a look at sparsely structured cost functions, for which the Sinkhorn algorithm becomes much faster and we overcome the curse of dimensionality. Let $G = (V, E)$ be an undirected graph with vertices $V$ and edges $E$. We say that the MOT$_\eta$ problem has the graph $G$ structure if $V = [K]$ and the cost matrix (3) decouples according to
$$C_i = \sum_{\{k_1,k_2\} \in E} \bigl\|x^{k_1}_{i_{k_1}} - x^{k_2}_{i_{k_2}}\bigr\|_2^2, \qquad i = (i_1,\dots,i_K) \in I.$$
This implies that the kernel $K \in \mathbb R^{\mathbf n}_{\ge 0}$ satisfies
$$K_i = \exp\Bigl(-\frac{C_i}{\eta}\Bigr) = \prod_{\{k_1,k_2\} \in E} K^{(k_1,k_2)}_{i_{k_1}, i_{k_2}}, \qquad i \in I, \tag{7}$$
where the kernel matrix $K^{(k_1,k_2)} \in \mathbb R^{n_{k_1} \times n_{k_2}}_{\ge 0}$ for $\{k_1,k_2\} \in E$ is given by
$$K^{(k_1,k_2)}_{i_{k_1}, i_{k_2}} := \exp\Bigl(-\frac{1}{\eta}\bigl\|x^{k_1}_{i_{k_1}} - x^{k_2}_{i_{k_2}}\bigr\|_2^2\Bigr). \tag{8}$$
We use the indices $k \in V = [K]$ to identify the marginal measures $\mu^k$ in the rest of the paper.
The discrete dual formulation (6) of the MOT$_\eta$ problem has the same form independently of the structure of the graph $G$; only the marginals $P_k(K \circ \Phi)$ differ. If the graph $G$ is complete, i.e., every pair of vertices in $V$ is connected by an edge, then the computational complexity of the Sinkhorn Algorithm 1 depends exponentially on the number $K$ of marginal measures (vertices). For larger values of $K$, it is practically impossible to numerically compute an optimal plan for the MOT$_\eta$ problem. We consider two sparsity assumptions on the MOT$_\eta$ problem, each of them yielding that the Sinkhorn algorithm has a linear complexity in the number $K$ of nodes. It was shown in [25] that MOT$_\eta$ problems with graphically structured cost functions of constant tree-width can be solved in polynomial time. This is the case for the tree and the circle, whose tree-widths are 1 and 2, respectively. In the following, we give an explicit scheme to efficiently compute the Sinkhorn iterations for the tree and circle structures.
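The pairwise kernel matrices in (8) are simple to set up; a short sketch (the function name `kernel_matrix` and the sample points are ours):

```python
import numpy as np

def kernel_matrix(Xk, Xl, eta):
    """K^{(k1,k2)}_{i,j} = exp(-||x^{k1}_i - x^{k2}_j||_2^2 / eta) for point
    sets Xk of shape (n_{k1}, d) and Xl of shape (n_{k2}, d)."""
    sq_dist = np.sum((Xk[:, None, :] - Xl[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dist / eta)

Xk = np.array([[0.0], [1.0]])
Xl = np.array([[0.0], [2.0]])
K12 = kernel_matrix(Xk, Xl, eta=1.0)   # e.g. K12[0, 0] = exp(0) = 1
```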

4.1. Tree Structure

We consider the MOT$_\eta$ problem with the structure of a tree $(V, E)$, which is a connected and circle-free graph with $|E| = |V| - 1$. We define the (non-empty) neighbour set $N_k$ of $k \in V$ as the set of all nodes $\ell \in V$ such that $\{k, \ell\} \in E$. Furthermore, we denote by $L := \{k \in V : |N_k| = 1\}$ the set of all leaves of the tree.
We call $1 \in V$ the root of the tree. For every $k \in V$, there is a unique path between $k$ and the root. For $k \in V \setminus \{1\}$, we define the parent $p(k)$ as the node in $N_k$ that lies on the path between $k$ and the root 1. The root has no parent. Without loss of generality, we assume that $p(k) < k$ holds for all $k \in V \setminus \{1\}$. We define the set of children $C_k := \{\ell \in N_k : \ell > k\}$. Then, we can derive a recursive formula for the $k$-th marginal of the tensor $K \circ \Phi \in \mathbb R^{\mathbf n}$.
Theorem 2.
Let $(V, E)$ be a tree with leaves $L$, and let $c$ be the associated MOT$_\eta$ cost function. Furthermore, let $k \in V$ be an arbitrary node. Then, the $k$-th marginal of the transport plan $K \circ \Phi$ is given by
$$P_k(K \circ \Phi) = \varphi^k \circ \bigcirc_{\ell \in N_k} \alpha^{(k,\ell)},$$
where the vectors $\alpha^{(k,\ell)} \in \mathbb R^{n_k}$ are recursively defined as
$$\alpha^{(k,\ell)} = \begin{cases} K^{(k,\ell)} \varphi^\ell & \text{if } \ell \in L, \\ K^{(k,\ell)} \bigl(\varphi^\ell \circ \bigcirc_{t \in N_\ell \setminus \{k\}} \alpha^{(\ell,t)}\bigr) & \text{otherwise.} \end{cases}$$
A proof of Theorem 2 can be found in [13] (Theorem 3.2). The main idea is to split the rooted tree at the node $k$ into $|N_k|$ subtrees. Accordingly, the kernel matrix factorizes as
$$K_i = \prod_{\{k_1,k_2\} \in E} K^{(k_1,k_2)}_{i_{k_1}, i_{k_2}} = \prod_{\ell \in N_k} \prod_{t \in D_\ell} K^{(p(t),t)}_{i_{p(t)}, i_t}, \qquad i \in I,$$
where $D_\ell$ is the set of descendants of $\ell$, i.e., the nodes $t \in V$ such that $\ell$ lies on the path between $k$ and $t$. Inserting $K_i$ into $P_k(K \circ \Phi)$ and recalling the definition of $P_k$ in (2) yields the result.
In order to perform the Sinkhorn algorithm efficiently, we compute iteratively for $\ell = K,\dots,2$ the vectors $\beta_\ell := \alpha^{(p(\ell),\ell)} \in \mathbb R^{n_{p(\ell)}}$. From Theorem 2, we obtain
$$\beta_\ell = \begin{cases} K^{(p(\ell),\ell)} \varphi^\ell & \text{if } \ell \in L, \\ K^{(p(\ell),\ell)} \bigl(\varphi^\ell \circ \bigcirc_{t \in C_\ell} \beta_t\bigr) & \text{otherwise.} \end{cases}$$
Since we assumed that $p(k) < k$, the computation of $\beta_\ell$ requires only the $\beta_t$ with $t > \ell$. Similarly, the vectors $\gamma_\ell := \alpha^{(\ell,p(\ell))} \in \mathbb R^{n_\ell}$ for $\ell > 1$ and $\gamma_1 := \mathbf 1_{n_1}$ can be computed iteratively by
$$\gamma_\ell = \bigl(K^{(p(\ell),\ell)}\bigr)^\top \Bigl(\varphi^{p(\ell)} \circ \gamma_{p(\ell)} \circ \bigcirc_{t \in C_{p(\ell)} \setminus \{\ell\}} \beta_t\Bigr), \qquad \ell = 2,\dots,K,$$
where the vectors $\beta_t \in \mathbb R^{n_{p(\ell)}}$ are already known from above. Then, the marginals are
$$P_k(K \circ \Phi) = \varphi^k \circ \gamma_k \circ \beta_{C_k}, \qquad k \in [K],$$
where for any subset $U \subseteq C_k$, we define
$$\mathbb R^{n_k} \ni \beta_U := \begin{cases} \mathbf 1_{n_k} & \text{if } U = \emptyset, \\ \bigcirc_{t \in U} \beta_t & \text{otherwise.} \end{cases}$$
The resulting procedure is summarized in Algorithm 2.
Algorithm 2 Sinkhorn algorithm for the tree structure
1: Input: Tree $T = (V, E)$ with leaves $L$ and root $1 \in V = [K]$, initialization $(\varphi^k)^{(0)}$, $k \in [K]$, parameters $\eta, \delta > 0$
2: Initialize $r \leftarrow 0$
3: for $k = K,\dots,2$ do
4:  $\beta_k^{(0)} := \begin{cases} K^{(p(k),k)} \varphi^{k,(0)} & \text{if } k \in L, \\ K^{(p(k),k)} \bigl(\varphi^{k,(0)} \circ \beta_{C_k}^{(0)}\bigr) & \text{otherwise} \end{cases}$
5: end for
6: do
7:  for $k = 1,\dots,K$ do
8:   $\gamma_k^{(r)} := \begin{cases} \mathbf 1_{n_1} & \text{if } k = 1, \\ \bigl(K^{(p(k),k)}\bigr)^\top \bigl(\varphi^{p(k),(r+1)} \circ \gamma_{p(k)}^{(r)} \circ \beta_{C_{p(k)} \setminus \{k\}}^{(r)}\bigr) & \text{otherwise} \end{cases}$
9:   Compute the dual vector $\varphi^{k,(r+1)} := \mu^k \oslash \bigl(\gamma_k^{(r)} \circ \beta_{C_k}^{(r)}\bigr)$
10:  end for
11:  for $k = K,\dots,2$ do
12:   Compute $\beta_k^{(r+1)}$ according to step 4
13:  end for
14:  Set $S^{(r)} := \eta \Bigl(\sum_{k=1}^K \bigl\langle \mu^k, \log \varphi^{k,(r)}\bigr\rangle - \bigl\langle \mu^1, \varphi^{1,(r+1)} \oslash \varphi^{1,(r)}\bigr\rangle\Bigr)$
15:  Increment $r \leftarrow r+1$
16: while $|S^{(r)} - S^{(r-1)}| \ge \delta$
17: return optimal plan $\hat\Pi = K \circ \hat\Phi$, where $\hat\Phi = \bigotimes_{k \in [K]} \varphi^{k,(r)}$
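The $\beta$/$\gamma$ recursion can be checked against the dense definition on a small example. The sketch below treats the special case of a star-shaped tree (root 1, all other nodes leaves), with our own names; the division trick used to leave out one factor assumes strictly positive entries, which holds for Gaussian kernels and positive dual vectors:

```python
import numpy as np

def star_tree_marginals(kernels, phis):
    """All marginals P_k(K ∘ Φ) for a star-shaped tree: root 1, leaves 2,...,K.

    kernels[j]: matrix K^{(1,k)} for leaf k = j + 2, shape (n_1, n_k)
    phis[0]: dual vector of the root; phis[1:]: dual vectors of the leaves
    """
    # β_k = K^{(1,k)} φ^k for every leaf k (all nodes are children of the root)
    betas = [Kk @ ph for Kk, ph in zip(kernels, phis[1:])]
    prod_all = np.prod(betas, axis=0)
    marginals = [phis[0] * prod_all]            # root: P_1 = φ^1 ∘ β_{C_1}
    for j, Kk in enumerate(kernels):
        others = prod_all / betas[j]            # β_{C_1 \ {k}}, entries > 0
        # leaf k: γ_k = (K^{(1,k)})^T (φ^1 ∘ β_{C_1 \ {k}}), then P_k = φ^k ∘ γ_k
        marginals.append(phis[j + 1] * (Kk.T @ (phis[0] * others)))
    return marginals

# Example with K = 3: a root with two leaves and Gaussian kernel matrices
rng = np.random.default_rng(1)
X1, X2, X3 = rng.random((3, 1)), rng.random((2, 1)), rng.random((4, 1))
gauss = lambda A, B: np.exp(-np.sum((A[:, None, :] - B[None, :, :]) ** 2, -1) / 0.5)
K12, K13 = gauss(X1, X2), gauss(X1, X3)
phis = [rng.random(3) + 0.1, rng.random(2) + 0.1, rng.random(4) + 0.1]
margs = star_tree_marginals([K12, K13], phis)
```

Contracting the dense tensor $K \circ \Phi$ with `np.einsum` reproduces the same marginals.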

4.2. Circle Structure

We consider the MOT$_\eta$ problem where the graph $(V, E)$ is a circle. We assume for each $k \in V = [K]$ that $\{k, k+1\} \in E$ is an edge of the circle, where we set $k + 1 = 1$ if $k = K$, and $k - 1 = K$ if $k = 1$. Thus, we can define the distance between two nodes $k_1, k_2 \in V$ as
$$d(k_1, k_2) := \begin{cases} k_2 - k_1 & \text{if } k_2 \ge k_1, \\ K - k_1 + k_2 & \text{otherwise.} \end{cases}$$
Theorem 3.
Let $(V, E)$ be a circle and $c$ the associated MOT cost function. Furthermore, let $k \in V$ be an arbitrary node. The $k$-th marginal of the transport plan $K \circ \Phi$ is given by
$$P_k(K \circ \Phi) = \varphi^k \circ \operatorname{diag}\Bigl(K^{(k,k+1)} \operatorname{diag}\bigl(\varphi^{k+1}\bigr)\, \alpha^{(k+1,k)}\Bigr),$$
where $\operatorname{diag}(v)$ denotes the diagonal matrix with the entries of a vector $v$, $\operatorname{diag}(M)$ the vector of diagonal entries of a square matrix $M$, and the matrices $\alpha^{(\ell,t)} \in \mathbb R^{n_\ell \times n_t}$, $\ell, t \in V$, recursively satisfy
$$\alpha^{(\ell,t)} = \begin{cases} K^{(\ell,t)} & \text{if } d(\ell,t) = 1, \\ K^{(\ell,\ell+1)} \operatorname{diag}\bigl(\varphi^{\ell+1}\bigr)\, \alpha^{(\ell+1,t)} & \text{otherwise,} \end{cases} \tag{9}$$
and we again set $k + 1 = 1$ if $k = K$, and $k - 1 = K$ if $k = 1$.
Proof. 
The kernel $K$ decomposes as in (7). Let $k \in [K]$ and $i_k \in [n_k]$. Expanding the marginal and rearranging the product of the edge kernels so that the summation starts at node $k+1$ and runs once around the circle back to $k$ (all node indices understood cyclically) gives
$$[P_k(K \circ \Phi)]_{i_k} = \sum_{\substack{i_\ell \in [n_\ell] \\ \ell \in V \setminus \{k\}}} K_i \Phi_i = \varphi^k_{i_k} \sum_{i_{k+1}} K^{(k,k+1)}_{i_k, i_{k+1}} \varphi^{k+1}_{i_{k+1}} \cdots \sum_{i_{k-1}} K^{(k-2,k-1)}_{i_{k-2}, i_{k-1}} \varphi^{k-1}_{i_{k-1}} K^{(k-1,k)}_{i_{k-1}, i_k}.$$
Relabeling the nodes cyclically via $\psi^\ell := \varphi^{(k+\ell-1) \bmod K}$ and $\tilde K^{(\ell,\ell+1)} := K^{((k+\ell-1) \bmod K,\, (k+\ell) \bmod K)}$ for every $\ell \in [K]$, with representatives of the residues taken in $[K]$, we obtain
$$[P_k(K \circ \Phi)]_{i_k} = [P_1(\tilde K \circ \tilde\Phi)]_{i_k}.$$
We define
$$\tilde\alpha^{(K,1)} := \tilde K^{(K,1)} = \alpha^{(k-1,k)} \in \mathbb R^{n_{k-1} \times n_k}$$
and recursively, for $\ell = K-1,\dots,2$, the matrices
$$\tilde\alpha^{(\ell,1)} := \tilde K^{(\ell,\ell+1)} \operatorname{diag}\bigl(\psi^{\ell+1}\bigr)\, \tilde\alpha^{(\ell+1,1)}.$$
Inserting this into the marginal and absorbing one inner sum at a time yields
$$[P_1(\tilde K \circ \tilde\Phi)]_{i_k} = \psi^1_{i_k} \sum_{j_2} \tilde K^{(1,2)}_{i_k, j_2} \psi^2_{j_2} \tilde\alpha^{(2,1)}_{j_2, i_k}, \quad \text{i.e.,} \quad P_1(\tilde K \circ \tilde\Phi) = \psi^1 \circ \operatorname{diag}\bigl(\tilde K^{(1,2)} \operatorname{diag}(\psi^2)\, \tilde\alpha^{(2,1)}\bigr).$$
Finally, observing that $\tilde\alpha^{(\ell,1)} = \alpha^{((k+\ell-1) \bmod K,\, k)}$ yields the assertion.    □
To compute the marginals efficiently as for the tree structure, we choose $k = 1$ as the starting point and decompose the marginal into two matrices, which can be computed recursively as follows. For matrices $G, H \in \mathbb R^{n \times m}$, we define the inner product with respect to the second dimension by
$$\langle G, H\rangle := \Bigl[\sum_{j=1}^m G_{ij} H_{ij}\Bigr]_{i \in [n]} \in \mathbb R^n.$$
Theorem 4.
Under the assumptions of Theorem 3, we have
$$P_k(K \circ \Phi) = \begin{cases} \varphi^1 \circ \bigl\langle K^{(1,2)},\, \bigl(\operatorname{diag}(\varphi^2)\, \alpha^{(2,1)}\bigr)^\top \bigr\rangle, & k = 1, \\[2pt] \varphi^k \circ \bigl\langle \alpha^{(k,1)},\, \bigl(\operatorname{diag}(\varphi^1)\, \lambda^{(1,k)}\bigr)^\top \bigr\rangle, & k = 2,\dots,K, \end{cases}$$
where $\alpha^{(k,1)}$ is given in (9) and
$$\lambda^{(1,k)} := \begin{cases} K^{(1,2)}, & k = 2, \\ \lambda^{(1,k-1)} \operatorname{diag}\bigl(\varphi^{k-1}\bigr)\, K^{(k-1,k)}, & k = 3,\dots,K. \end{cases} \tag{10}$$
Proof. 
Let $k \in [K]$ and $i_k \in [n_k]$. For $k = 1$, the assertion follows directly from Theorem 3. For $k > 2$, collecting the factors belonging to the nodes $k+1,\dots,K$ into $\alpha^{(k,1)}$ by (9), we have
$$[P_k(K \circ \Phi)]_{i_k} = \varphi^k_{i_k} \sum_{i_1} \varphi^1_{i_1}\, \underbrace{\sum_{i_2} K^{(1,2)}_{i_1,i_2} \varphi^2_{i_2} \cdots \sum_{i_{k-1}} K^{(k-2,k-1)}_{i_{k-2},i_{k-1}} \varphi^{k-1}_{i_{k-1}} K^{(k-1,k)}_{i_{k-1},i_k}}_{=: \mathcal I_{i_1,i_k}}\; \alpha^{(k,1)}_{i_k,i_1}.$$
By (10), we have
$$\lambda^{(1,3)}_{i_1,i_3} = \sum_{i_2} K^{(1,2)}_{i_1,i_2} \varphi^2_{i_2} K^{(2,3)}_{i_2,i_3}, \qquad i_1 \in [n_1],\ i_3 \in [n_3].$$
Iterating this argument absorbs one inner sum at a time, which implies
$$\mathcal I_{i_1,i_k} = \sum_{i_{k-1}} K^{(k-1,k)}_{i_{k-1},i_k} \varphi^{k-1}_{i_{k-1}} \lambda^{(1,k-1)}_{i_1,i_{k-1}} = \bigl[\lambda^{(1,k-1)} \operatorname{diag}\bigl(\varphi^{k-1}\bigr)\, K^{(k-1,k)}\bigr]_{i_1,i_k} = \lambda^{(1,k)}_{i_1,i_k}.$$
Hence, the assertion holds for all $k > 2$. The case $k = 2$ follows by the same argument.    □
In order to efficiently compute a maximizing sequence for the dual MOT$_\eta$ problem (6), we set for $k \in [K]$ the dual matrices
$$\beta_k := \alpha^{(k,1)} \in \mathbb R^{n_k \times n_1}, \qquad \gamma_k := \lambda^{(1,k)} \in \mathbb R^{n_1 \times n_k},$$
given in Theorems 3 and 4, respectively. The method is shown in Algorithm 3.
Algorithm 3 Sinkhorn algorithm for the circle structure
1: Input: Initialization $(\varphi^k)^{(0)}$, $k \in [K]$, parameters $\eta, \delta > 0$
2: Initialize $r \leftarrow 0$
3: for $k = K,\dots,2$ do
4:  $\beta_k^{(0)} := \begin{cases} K^{(K,1)} & \text{if } k = K, \\ K^{(k,k+1)} \operatorname{diag}\bigl(\varphi^{k+1,(0)}\bigr)\, \beta_{k+1}^{(0)} & \text{otherwise} \end{cases}$
5: end for
6: do
7:  $\varphi^{1,(r+1)} := \mu^1 \oslash \bigl\langle K^{(1,2)},\, \bigl(\operatorname{diag}\bigl(\varphi^{2,(r)}\bigr)\, \beta_2^{(r)}\bigr)^\top \bigr\rangle$
8:  for $k = 2,\dots,K$ do
9:   $\gamma_k^{(r)} := \begin{cases} K^{(1,2)} & \text{if } k = 2, \\ \gamma_{k-1}^{(r)} \operatorname{diag}\bigl(\varphi^{k-1,(r+1)}\bigr)\, K^{(k-1,k)} & \text{otherwise} \end{cases}$
10:   Compute the dual vector $\varphi^{k,(r+1)} := \mu^k \oslash \bigl\langle \beta_k^{(r)},\, \bigl(\operatorname{diag}\bigl(\varphi^{1,(r+1)}\bigr)\, \gamma_k^{(r)}\bigr)^\top \bigr\rangle$
11:  end for
12:  for $k = K,\dots,2$ do
13:   Compute $\beta_k^{(r+1)}$ according to step 4
14:  end for
15:  Compute $S^{(r)} := \eta \Bigl(\sum_{k=1}^K \bigl\langle \mu^k, \log \varphi^{k,(r)}\bigr\rangle - \bigl\langle \mu^1, \varphi^{1,(r+1)} \oslash \varphi^{1,(r)}\bigr\rangle\Bigr)$
16:  Increment $r \leftarrow r+1$
17: while $|S^{(r)} - S^{(r-1)}| \ge \delta$
18: return optimal plan $\hat\Pi = K \circ \hat\Phi$, where $\hat\Phi = \bigotimes_{k \in [K]} \varphi^{k,(r)}$
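On a small circle, the matrix recursions of Theorems 3 and 4 can be validated against a dense contraction. The sketch below (names ours) computes the first marginal for $K = 3$, using random positive matrices in place of the kernel matrices:

```python
import numpy as np

def circle_marginal_1(K12, K23, K31, phis):
    """First marginal P_1(K ∘ Φ) for the circle 1-2-3-1, following Theorem 3:
    alpha^{(2,1)} = K^{(2,3)} diag(φ^3) K^{(3,1)} and
    P_1 = φ^1 ∘ diag(K^{(1,2)} diag(φ^2) alpha^{(2,1)})."""
    alpha_21 = K23 @ (phis[2][:, None] * K31)   # diag(φ^3) applied row-wise
    M = K12 @ (phis[1][:, None] * alpha_21)     # n_1 x n_1 matrix
    return phis[0] * np.diag(M)                 # vector of diagonal entries

# Example instance with n_1 = 3, n_2 = 4, n_3 = 2
rng = np.random.default_rng(2)
K12, K23, K31 = rng.random((3, 4)), rng.random((4, 2)), rng.random((2, 3))
phis = [rng.random(3), rng.random(4), rng.random(2)]
p1 = circle_marginal_1(K12, K23, K31, phis)
```

The result agrees with summing the dense tensor $K \circ \Phi$ over the second and third axes.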
The tree-structured and circle-structured MOT$_\eta$ problems both have a sparse cost function that considerably improves the computational complexity of the Sinkhorn algorithm. In each iteration step, Algorithm 2 requires only $2(K-1)$ matrix–vector products, each with a complexity of $\mathcal O(N^2)$, where $N := \max_{k \in [K]} n_k$, and Algorithm 3 requires $2(K-1)$ matrix–matrix products, each with a complexity of $\mathcal O(N^3)$. This can be considerably improved by employing fast Fourier techniques, as we will see in the next section.

5. Non-Uniform Discrete Fourier Transforms

The main computational cost of the Sinkhorn algorithm comes from the matrix–vector product with the kernel matrix (8). Let $k, \ell \in [K]$ and $\alpha \in \mathbb R^{n_\ell}$. We briefly describe a fast summation method for the computation of $\beta = K^{(k,\ell)} \alpha$, i.e.,
$$\beta_{i_k} = \sum_{i_\ell = 1}^{n_\ell} \alpha_{i_\ell} \exp\Bigl(-\frac{1}{\eta}\bigl\|x^k_{i_k} - x^\ell_{i_\ell}\bigr\|_2^2\Bigr), \qquad i_k \in [n_k]. \tag{11}$$
We refer to [32] for a detailed derivation and error estimates. The main idea is to approximate the univariate kernel function
$$\kappa(x) := \exp\Bigl(-\frac{x^2}{\eta}\Bigr), \qquad x \in \mathbb R, \tag{12}$$
using a Fourier series. In order to ensure the fast convergence of the Fourier series, we extend $\kappa$ to a periodic function of certain smoothness $p \in \mathbb N$. Let $\varepsilon_B > 0$ and $\tau > \varepsilon_B + \max\bigl\{\|x^k_{i_k} - x^\ell_{i_\ell}\|_2 : i_k \in [n_k],\ i_\ell \in [n_\ell]\bigr\}$. For $x \in \mathbb R$, we define the regularized kernel
$$\kappa_R(x) := \begin{cases} \kappa(x), & |x| \le \tau - \varepsilon_B, \\ \kappa_B(x), & \tau - \varepsilon_B < |x| \le \tau, \\ \kappa_B(\tau), & |x| > \tau, \end{cases} \tag{13}$$
where $\kappa_B$ is a polynomial of degree $p$ that fulfills the Hermite interpolation conditions $\kappa_B^{(r)}(\tau - \varepsilon_B) = \kappa^{(r)}(\tau - \varepsilon_B)$ for $r = 0,\dots,p-1$, and $\kappa_B^{(r)}(\tau) = 0$ for $r = 1,\dots,p-1$; see Figure 1.
Then, we define a $2\tau$-periodic function on $\mathbb R^d$ by
$$\tilde\kappa(x) := \kappa_R\bigl(\|x\|_2\bigr), \qquad x \in [-\tau, \tau)^d. \tag{14}$$
By construction, $\tilde\kappa$ is $p$ times continuously differentiable, and we have $\tilde\kappa\bigl(x^k_{i_k} - x^\ell_{i_\ell}\bigr) = \exp\bigl(-\frac{1}{\eta}\|x^k_{i_k} - x^\ell_{i_\ell}\|_2^2\bigr)$ for all $i_k \in [n_k]$, $i_\ell \in [n_\ell]$.
Let $M \in \mathbb N$. We approximate $\tilde\kappa$ using the $2\tau$-periodic Fourier expansion of degree $2M$,
$$\tilde\kappa(x) \approx \sum_{m \in \{-M,\dots,M-1\}^d} \hat\kappa(m)\, \exp\Bigl(\mathrm i \frac{\pi}{\tau}\, m \cdot x\Bigr), \qquad x \in \mathbb R^d,$$
with the discrete Fourier coefficients $\hat\kappa(m) \in \mathbb C$, which can be efficiently computed by the fast Fourier transform (FFT),
$$\hat\kappa(m) := \frac{1}{(2M)^d} \sum_{x \in \frac{\tau}{M}\{-M,\dots,M-1\}^d} \tilde\kappa(x)\, \exp\Bigl(-\mathrm i \frac{\pi}{\tau}\, m \cdot x\Bigr), \qquad m \in \{-M,\dots,M-1\}^d. \tag{15}$$
This yields an approximation of (11) by
$$\beta_{i_k} = \sum_{i_\ell=1}^{n_\ell} \tilde\kappa\bigl(x^k_{i_k} - x^\ell_{i_\ell}\bigr)\, \alpha_{i_\ell} \approx \sum_{m \in \{-M,\dots,M-1\}^d} \hat\kappa(m) \sum_{i_\ell=1}^{n_\ell} \alpha_{i_\ell} \exp\Bigl(-\mathrm i \frac{\pi}{\tau}\, m \cdot x^\ell_{i_\ell}\Bigr) \exp\Bigl(\mathrm i \frac{\pi}{\tau}\, m \cdot x^k_{i_k}\Bigr). \tag{16}$$
The non-uniform discrete Fourier transform (NDFT) of $\hat\kappa := [\hat\kappa(m)]_{m \in \{-M,\dots,M-1\}^d}$ at the nodes $\Omega_k \subset \mathbb R^d$ is defined by
$$[\mathcal F_k \hat\kappa]_{i_k} := \sum_{m \in \{-M,\dots,M-1\}^d} \hat\kappa(m)\, \exp\Bigl(\mathrm i \frac{\pi}{\tau}\, m \cdot x^k_{i_k}\Bigr), \qquad i_k \in [n_k], \tag{17}$$
and the adjoint NDFT of $\alpha \in \mathbb R^{n_\ell}$ on the set $\Omega_\ell$ is given by
$$[\mathcal F_\ell^* \alpha]_m := \sum_{i_\ell=1}^{n_\ell} \alpha_{i_\ell}\, \exp\Bigl(-\mathrm i \frac{\pi}{\tau}\, m \cdot x^\ell_{i_\ell}\Bigr), \qquad m \in \{-M,\dots,M-1\}^d, \tag{18}$$
cf. [43] (Section 7). Therefore, the approximation (16) can be written as
$$\beta = K^{(k,\ell)} \alpha \approx \mathcal F_k \bigl(\hat\kappa \circ \mathcal F_\ell^* \alpha\bigr). \tag{19}$$
The procedure is summarized in Algorithm 4.
Algorithm 4 NFFT-based fast summation
1: Input: Vector $\alpha \in \mathbb R^{n_\ell}$, kernel function $\kappa$ in (12), parameters $\varepsilon_B > 0$, $M \in \mathbb N$
2: precomputation
3:  Compute the regularized kernel $\kappa_R$ by (13)
4:  Compute the periodized kernel $\tilde\kappa$ by (14)
5:  Compute the discrete Fourier coefficients $\hat\kappa(m)$, $m \in \{-M,\dots,M-1\}^d$, by an FFT, see (15)
6: end
7: Compute the adjoint NDFT $\mathcal F_\ell^* \alpha$, see (18)
8: Compute the pointwise product $\hat\beta := \hat\kappa \circ \mathcal F_\ell^* \alpha$
9: Compute the NDFT $\beta := \mathcal F_k \hat\beta$, see (17)
10: return $\beta \approx K^{(k,\ell)} \alpha$
There are fast algorithms, known as the non-uniform fast Fourier transform (NFFT), that compute an NDFT (17) and its adjoint (18) in $\mathcal O(M^d \log M + N)$ steps up to arbitrary numerical precision, where $N$ denotes the number of nodes; see, e.g., [44,45] and [43] (Section 7). Note that the direct implementation of (16) requires $\mathcal O(M^d N)$ operations. We call the Sinkhorn algorithm in which the matrix–vector multiplication is performed via (19) the NFFT-Sinkhorn algorithm. If we fix the Fourier expansion degree $M$, which is possible because of the smoothness of $\kappa$, then we end up with a numerical complexity of $\mathcal O(KN)$ for each iteration step of the NFFT-Sinkhorn algorithm for trees. In the case of a circle (Algorithm 3), we can apply the fast summation column by column for the matrix–matrix product with $K^{(k,k+1)}$, yielding a complexity of $\mathcal O(KN^2)$ for each iteration step.
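The factorization (19) can be illustrated with plain dense NDFT matrices in place of a true NFFT library; the following one-dimensional sketch therefore costs $\mathcal O(MN)$ rather than $\mathcal O(M \log M + N)$, and it skips the boundary regularization $\kappa_B$ because the chosen Gaussian is numerically zero at $|x| = \tau$. All parameter values and variable names are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
eta, tau, M = 0.02, 1.0, 64
x_src = rng.random(50) * 0.25        # source nodes x^l; pairwise distances <= 0.25 << tau
x_tgt = rng.random(40) * 0.25        # target nodes x^k
alpha = rng.random(50)

# Discrete Fourier coefficients of the 2*tau-periodized kernel on a uniform grid, cf. (15)
m = np.arange(-M, M)
l = np.arange(-M, M)
grid = tau * l / M
kappa_hat = np.exp(-grid**2 / eta) @ np.exp(-1j * np.pi * np.outer(l, m) / M) / (2 * M)

# beta ≈ F_k (kappa_hat ∘ F_l^* alpha), cf. (19), with dense NDFT matrices
a_hat = np.exp(-1j * np.pi / tau * np.outer(m, x_src)) @ alpha        # adjoint NDFT (18)
beta_fast = (np.exp(1j * np.pi / tau * np.outer(x_tgt, m)) @ (kappa_hat * a_hat)).real

# Direct O(n_k * n_l) evaluation of the sum (11) for comparison
beta_direct = np.exp(-(x_tgt[:, None] - x_src[None, :])**2 / eta) @ alpha
max_err = np.max(np.abs(beta_fast - beta_direct))   # tiny for this smooth, decayed kernel
```

A production implementation would replace the two dense exponential matrices by NFFT calls, which is exactly what the NFFT-Sinkhorn algorithm does.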

6. Numerical Examples

We illustrate the results from Section 5 concerning the Sinkhorn algorithm and its accelerated version, the NFFT-Sinkhorn. First, we investigate the effect of parameter choices in some artificial examples. Then, we look at the one-dimensional Euler flow problem of incompressible fluids and the fixed support barycenter problem of images (the code for our examples is available at https://github.com/fatima0111/NFFT-Sinkhorn, accessed on 8 August 2022). All computations were performed on an eight-core Intel Core i7-10700 CPU with 32 GB memory. For computing the NFFT of Section 5, we rely on the implementation [46].

6.1. Uniformly Distributed Points

We consider the MOT$_\eta$ problem for uniform measures $\mu^k$ of uniformly distributed points on $\Omega = [-1/2, 1/2]$. We chose the entropy regularization parameter $\eta = 0.1$, so that a boundary regularization (13) for the fast summation method is necessary. For the tree-structured cost function, we set the boundary regularization $\varepsilon_B = 1/16$, the Fourier expansion degree $M = 156$, and the smoothness parameter $p = 3$ (see Section 5). In Figure 2 left, we see the linear dependence of the computational time on the number $K$ of marginals. For a growing number $N$ of points, the NFFT-Sinkhorn algorithm, which requires $\mathcal O(KN)$ steps, clearly outperforms the standard method, which requires $\mathcal O(KN^2)$ steps (see Figure 2 right).
In Figure 3, we show the computation times for the circle-structured cost function, with the parameters $\eta = 0.1$, $\varepsilon_B = 3/32$, $M = 2000$, and $p = 3$. As the Sinkhorn iteration requires matrix–matrix products, it is more costly than for the tree-structured cost function, and so we used a smaller number of points $N$. The advantage of the NFFT-Sinkhorn is smaller than for the tree, but still considerable. We point out that here the fast summation method is applied column by column to the matrix–matrix product, and that the Fourier expansion degree $M$ is larger.
Finally, we investigate how the approximation error between the Sinkhorn and NFFT-Sinkhorn algorithms, $|S^{(r)} - \tilde S^{(r)}|$, depends on the entropy regularization parameter $\eta$ and the Fourier expansion degree $M$ at a fixed iteration $r = 10$, where $S^{(r)}$ denotes the value of the Sinkhorn function computed with the Sinkhorn algorithm and $\tilde S^{(r)}$ its value computed with the NFFT-Sinkhorn algorithm. Since the time differences for different $M$ are very small in the one-dimensional case, we consider two-dimensional uniform marginal measures for the MOT$_\eta$ problems with tree- or circle-structured cost functions. In Figure 4 and Figure 5, we see that for smaller $\eta$, we need a larger expansion degree $M$ to achieve good accuracy. The error stagnates at a certain level and does not decrease further for increasing $M$. This could be improved by increasing the smoothness parameter $p$ and the cutoff parameter of the NFFT, cf. [43] (Section 7). Parameter choice methods for the NFFT-based fast summation were discussed in [47]. For an appropriately chosen $M$, the NFFT-Sinkhorn is usually much faster than the Sinkhorn algorithm. However, for very small $\eta$, the kernel function (12) is concentrated on a small interval, and therefore a simple truncation of the sum (11) might be preferable to the NFFT approximation.

6.2. Fixed-Support Wasserstein Barycenter for General Trees

Let T = V , E be a tree with set of leaves L ; see Section 4.1. For k L , let measures μ k and weights w ˜ k [ 0 , 1 ] be given that satisfy k L w ˜ k = 1 . For any edge e E , we set
w e : =   1 if | e L | 1 , w ˜ k if e L = k .
The generalized barycenters are the minimizers μ k , k V \ L , of
inf μ k P Ω k , k V \ L e = e 1 , e 2 E w e W 2 2 μ e 1 , μ e 2 ,
where W 2 2 ( μ e 1 , μ e 2 ) is the squared Wasserstein distance [48,49] between the measures μ e 1 and μ e 2 . The well-known Wasserstein barycenter problem [50,51] is a special case of (20), where the tree is star-shaped and the barycenter corresponds to the unique internal node. We consider the fixed-support barycenter problem [26,52], where the nodes x k , k V \ L are also given, so that we need to optimize (20) only for μ i k k , k V \ L . This yields an MOT problem with the tree-structured cost
$$C_{\boldsymbol i} = \sum_{e = \{e_1, e_2\} \in E} w_e \big\| x^{e_1}_{i_{e_1}} - x^{e_2}_{i_{e_2}} \big\|_2^2, \qquad \boldsymbol i \in \boldsymbol I,$$
where the marginal constraints of (1) are only imposed for the known measures $\mu^k$, $k \in L$ (see [53]). This barycenter problem can be solved approximately by a modification of the Sinkhorn Algorithm 2, in which we replace line 12 by
$$(\phi^k)^{(r+1)} = \begin{cases} \mathbf 1 & \text{if } k \notin L, \\ \mu^k \oslash \gamma^k\big(\beta_C^{k,(r)}\big) & \text{otherwise}. \end{cases}$$
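The role of this modified update, namely that internal nodes carry no marginal constraint, can be made concrete in the star-shaped special case, where the scheme reduces to the classical fixed-support barycenter iteration via iterative Bregman projections [26]. The following NumPy sketch is our own minimal illustration (not the paper's tree algorithm, and without the NFFT acceleration): the leaves receive the usual Sinkhorn update, while the unconstrained center is recovered as a weighted geometric mean of the leaf messages.

```python
import numpy as np

def barycenter_sinkhorn(mus, C, w, eta, iters=1000):
    """Entropic fixed-support barycenter on a star-shaped tree.
    mus: list of leaf marginals on a common support of size N,
    C: (N, N) squared-distance cost matrix, w: weights summing to 1."""
    K = np.exp(-C / eta)                                  # Gibbs kernel
    v = [np.ones_like(mu) for mu in mus]
    for _ in range(iters):
        # leaves carry marginal constraints: usual Sinkhorn update
        u = [mu / (K @ vk) for mu, vk in zip(mus, v)]
        # unconstrained internal node: weighted geometric mean of messages
        b = np.exp(sum(wk * np.log(K.T @ uk) for wk, uk in zip(w, u)))
        v = [b / (K.T @ uk) for uk in u]
    return b

# toy example: barycenter of two bumps on a 1D grid
x = np.linspace(0, 1, 50)
C = (x[:, None] - x[None, :]) ** 2
mu1 = np.exp(-((x - 0.2) ** 2) / 2e-3); mu1 /= mu1.sum()
mu2 = np.exp(-((x - 0.8) ** 2) / 2e-3); mu2 /= mu2.sum()
b = barycenter_sinkhorn([mu1, mu2], C, [0.5, 0.5], eta=5e-3)
print("mass:", b.sum(), "mode at x =", x[np.argmax(b)])
```

For the symmetric toy inputs, the computed barycenter is a single blurred bump located between the two inputs, near x = 0.5.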
We test our algorithm on a tree consisting of K = 7 nodes; see Figure 6. The four given marginals $\mu^k$, $k \in L$, are dithered images in $\mathbb R^2$ with uniform weights $\mu^k_{i_k} = 1/N$. As support points of the barycenters $\mu^k$, $k \in V \setminus L$, we take the union of the support points $x^k_{i_k}$ of all four input measures $\mu^k$, $k \in L$. Furthermore, we use the barycenter weights $\tilde w_k = 1/4$. The given images and the computed barycenters are shown in Figure 7, where we executed r = 150 iterations of the Sinkhorn algorithm and its accelerated version. We chose the regularization parameter $\eta = 5 \cdot 10^{-3}$ and the fast summation parameters M = 156, p = 3.

6.3. Generalized Euler Flows

We consider the motion of N particles of an incompressible fluid, e.g., water, in a bounded domain Ω at discrete time steps $t_k := (k-1)/(K-1)$, $k \in [K]$. We assume that we know the function $\sigma\colon \Omega \to \Omega$ that connects the N initial positions $x^1_i \in \Omega$ of the particles with their final positions $x^K_i \in \Omega$. At each time step $t_k$, we know an image of the particle distribution, described by the discrete marginal measure $\mu^k$, $k \in [K]$, with uniform weights $\mu^k_{i_k} = 1/N$. We want to find out how the individual particles move, i.e., their trajectories. By the least-action principle, this problem can be formulated as the MOT problem (4) with the circle-structured cost
$$C_{\boldsymbol i} = \big\| x^{K}_{i_K} - \sigma\big(x^{1}_{i_1}\big) \big\|_2^2 + \sum_{k=1}^{K-1} \big\| x^{k+1}_{i_{k+1}} - x^{k}_{i_k} \big\|_2^2, \qquad \boldsymbol i \in \boldsymbol I,$$
see [55,56,57]. Then, the pair marginal
$$\Pi^{1,k} := \sum_{i_j \in [n_j],\; j \in [K] \setminus \{1, k\}} \Pi_{i_1, \dots, i_K}$$
of the optimal plan Π provides the (discrete) probability that a particle initially at position $x^1_{i_1} \in \Omega$ is at position $x^k_{i_k} \in \Omega$ at time $t_k$, $k = 2, \dots, K-1$. The one-dimensional problem has been studied by several authors [25,26,39], where the particles are assumed to lie on a grid. Here, we consider the case where the positions are uniformly distributed: we draw N = 400 uniformly distributed points on Ω = [0, 1]. We use K = 5 marginal constraints and the entropy regularization parameter η = 0.05. Figure 8 and Figure 9 display the probability matrix $\Pi^{1,k}$ describing the motion of the particles from the initial time $t_1 = 0$ to time $t_k$, where we use r = 50 iterations for both the NFFT-Sinkhorn and the Sinkhorn algorithm, and two different connection functions σ.
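The pair marginal above is simply a partial summation of the plan over all other indices. As a sanity check, it can be computed directly for a small dense plan; the function below is our own illustration (the actual algorithm never forms the full tensor Π for large N, but instead contracts the circle of kernel matrices).

```python
import numpy as np

def pair_marginal(Pi, k):
    """Sum out all indices of the K-way plan Pi except the first and the
    k-th (0-based), yielding the joint distribution of marginals 1 and k+1."""
    axes = tuple(j for j in range(Pi.ndim) if j not in (0, k))
    return Pi.sum(axis=axes)

# toy check on a random K = 4 probability tensor
rng = np.random.default_rng(1)
Pi = rng.random((3, 4, 5, 6))
Pi /= Pi.sum()                     # normalize to total mass 1
P = pair_marginal(Pi, 2)           # joint of marginals 1 and 3
print(P.shape)                     # (3, 5)
print(float(P.sum()))              # total mass, ≈ 1
```

For the Euler-flow experiments, applying this to the optimal plan for k = 2, …, K yields exactly the matrices shown in Figure 8 and Figure 9.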

7. Conclusions

We have proposed the NFFT-Sinkhorn algorithm to solve the MOT η problem efficiently. Assuming that the cost function of the multi-marginal optimal transport decouples according to a tree or a circle, we obtain linear complexity in K. The complexity of the algorithm with respect to the numbers $n_k$, $k \in [K]$, of atoms of the discrete marginal measures is further improved by the non-uniform fast Fourier transform. This results in a considerable acceleration over the standard Sinkhorn algorithm in our numerical experiments. The tree-structured MOT η problem admits a much better numerical complexity than the circle-structured one, because the latter requires the matrix–matrix products of Algorithm 3 instead of just the matrix–vector products of Algorithm 2.

Author Contributions

Both authors have contributed equally to this article. All authors have read and agreed to the published version of the manuscript.

Funding

We gratefully acknowledge funding by the German Federal Ministry of Education and Research BMBF 01|S20053B project SAℓE. Furthermore, we gratefully acknowledge funding by the German Research Foundation DFG (STE 571/19-1, project number 495365311).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is available at https://github.com/fatima0111/NFFT-Sinkhorn, accessed on 4 August 2022.

Acknowledgments

We would like to thank Gabriele Steidl for insightful discussions about optimal transport. We also thank the anonymous reviewers for making valuable suggestions to improve the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Peyré, G.; Cuturi, M. Computational Optimal Transport: With Applications to Data Science. Found. Trends Mach. Learn. 2019, 11, 355–607. [Google Scholar] [CrossRef]
  2. Villani, C. Optimal Transport: Old and New; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar] [CrossRef]
  3. Beier, F.; von Lindheim, J.; Neumayer, S.; Steidl, G. Unbalanced multi-marginal optimal transport. arXiv 2021, arXiv:2103.10854. [Google Scholar]
  4. Bonneel, N.; Peyré, G.; Cuturi, M. Wasserstein barycentric coordinates: Histogram regression using optimal transport. ACM Trans. Graph. 2016, 35, 71. [Google Scholar] [CrossRef]
  5. Tartavel, G.; Peyré, G.; Gousseau, Y. Wasserstein loss for image synthesis and restoration. SIAM J. Imaging Sci. 2016, 9, 1726–1755. [Google Scholar] [CrossRef]
  6. Thorpe, M.; Park, S.; Kolouri, S.; Rohde, G.K.; Slepčev, D. A transportation Lp distance for signal analysis. J. Math. Imaging Vis. 2017, 59, 187–210. [Google Scholar] [CrossRef] [PubMed]
  7. Vogt, T.; Lellmann, J. Measure-valued variational models with applications to diffusion-weighted imaging. J. Math. Imaging Vis. 2018, 60, 1482–1502. [Google Scholar] [CrossRef]
  8. Carlier, G.; Oberman, A.; Oudet, E. Numerical methods for matching for teams and Wasserstein barycenters. ESAIM M2AN 2015, 49, 1621–1642. [Google Scholar] [CrossRef]
  9. Galichon, A. Optimal Transport Methods in Economics; Princeton University Press: Princeton, NJ, USA, 2016. [Google Scholar] [CrossRef]
  10. Dolinsky, Y.; Soner, H.M. Martingale optimal transport and robust hedging in continuous time. Probab. Theory Relat. Fields 2014, 160, 391–427. [Google Scholar] [CrossRef]
  11. Dolinsky, Y.; Soner, H.M. Robust hedging with proportional transaction costs. Financ. Stoch. 2014, 18, 327–347. [Google Scholar] [CrossRef] [Green Version]
  12. Frisch, U.; Matarrese, S.; Mohayaee, R.; Sobolevski, A.N. A reconstruction of the initial conditions of the universe by optimal mass transportation. Nature 2002, 417, 260–262. [Google Scholar] [CrossRef]
  13. Haasler, I.; Ringh, A.; Chen, Y.; Karlsson, J. Multimarginal optimal transport with a tree-structured cost and the Schrödinger bridge problem. SIAM J. Control Optim. 2021, 59, 2428–2453. [Google Scholar] [CrossRef]
  14. Kantorovich, L. On the translocation of masses. Manag. Sci. 1958, 5, 1–4. [Google Scholar] [CrossRef]
  15. Lin, T.; Ho, N.; Cuturi, M.; Jordan, M.I. On the complexity of approximating multimarginal optimal transport. J. Mach. Learn. Res. 2022, 23, 1–43. [Google Scholar]
  16. Pass, B. Multi-marginal optimal transport and multi-agent matching problems: Uniqueness and structure of solutions. Discret. Contin. Dyn. Syst. 2014, 34, 1623–1639. [Google Scholar] [CrossRef]
  17. Pass, B. Multi-marginal optimal transport: Theory and applications. ESAIM M2AN 2015, 49, 1771–1790. [Google Scholar] [CrossRef]
  18. Benamou, J.-D.; Carlier, G.; Nenna, L. A numerical method to solve multi-marginal optimal transport problems with Coulomb cost. In Splitting Methods in Communication, Imaging, Science, and Engineering; Springer: Cham, Switzerland, 2016; pp. 577–601. [Google Scholar]
  19. Neumayer, S.; Steidl, G. From optimal transport to discrepancy. In Handbook of Mathematical Models and Algorithms in Computer Vision and Imaging: Mathematical Imaging and Vision; Chen, K., Schönlieb, C.-B., Tai, X.-C., Younes, L., Eds.; Springer: Cham, Switzerland, 2021; pp. 1–36. [Google Scholar] [CrossRef]
  20. Terjék, D.; González-Sánchez, D. Optimal transport with f-divergence regularization and generalized Sinkhorn algorithm. arXiv 2021, arXiv:2105.14337. [Google Scholar]
  21. Blondel, M.; Seguy, V.; Rolet, A. Smooth and sparse optimal transport. In Proceedings of Machine Learning Research, Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, Playa Blanca, Spain, 9–11 April 2018; Volume 84, pp. 880–889. [Google Scholar]
  22. Lorenz, D.A.; Manns, P.; Meyer, C. Quadratically regularized optimal transport. Appl. Math. Optim. 2021, 83, 1919–1949. [Google Scholar] [CrossRef]
  23. Genevay, A.; Cuturi, M.; Peyré, G.; Bach, F. Stochastic optimization for large-scale optimal transport. In Proceedings of the 30th International Conference on Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; Curran Associates Inc.: Red Hook, NY, USA, 2016; pp. 3440–3448. [Google Scholar] [CrossRef]
  24. Ammari, H.; Garnier, J.; Millien, P. Backpropagation imaging in nonlinear harmonic holography in the presence of measurement and medium noises. SIAM J. Imaging Sci. 2014, 7, 239–276. [Google Scholar] [CrossRef]
  25. Altschuler, J.M.; Boix-Adsera, E. Polynomial-time algorithms for multimarginal optimal transport problems with structure. Math. Program. 2022, in press. [Google Scholar] [CrossRef]
  26. Benamou, J.-D.; Carlier, G.; Cuturi, M.; Nenna, L.; Peyré, G. Iterative Bregman projections for regularized transportation problems. SIAM J. Sci. Comput. 2015, 37, A1111–A1138. [Google Scholar] [CrossRef]
  27. Koller, D.; Friedman, N. Probabilistic Graphical Models: Principles and Techniques; Adaptive Computation and Machine Learning; The MIT Press: Cambridge, MA, USA, 2009. [Google Scholar]
  28. Alaya, M.Z.; Bérar, M.; Gasso, G.; Rakotomamonjy, A. Screening Sinkhorn algorithm for regularized optimal transport. In Proceedings of the 33rd International Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019; Curran Associates Inc.: Red Hook, NY, USA, 2019. [Google Scholar] [CrossRef]
  29. Cuturi, M. Sinkhorn distances: Lightspeed computation of optimal transport. In Advances in Neural Information Processing Systems; Burges, C.J.C., Bottou, L., Welling, M., Ghahramani, Z., Weinberger, K.Q., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2013; Volume 26. [Google Scholar]
  30. Knopp, P.; Sinkhorn, R. Concerning nonnegative matrices and doubly stochastic matrices. Pac. J. Math. 1967, 21, 343–348. [Google Scholar] [CrossRef]
  31. Potts, D.; Steidl, G. Fast summation at nonequispaced knots by NFFTs. SIAM J. Sci. Comput. 2003, 24, 2013–2037. [Google Scholar] [CrossRef]
  32. Potts, D.; Steidl, G.; Nieslony, A. Fast convolution with radial kernels at nonequispaced knots. Numer. Math. 2004, 98, 329–351. [Google Scholar] [CrossRef]
  33. Nestler, F.; Pippig, M.; Potts, D. Fast Ewald summation based on NFFT with mixed periodicity. J. Comput. Phys. 2015, 285, 280–315. [Google Scholar] [CrossRef]
  34. Hielscher, R.; Quellmalz, M. Optimal mollifiers for spherical deconvolution. Inverse Probl. 2015, 31, 085001. [Google Scholar] [CrossRef]
  35. Alfke, D.; Potts, D.; Stoll, M.; Volkmer, T. NFFT meets Krylov methods: Fast matrix-vector products for the graph Laplacian of fully connected networks. Front. Appl. Math. Stat. 2018, 4, 61. [Google Scholar] [CrossRef]
  36. Lakshmanan, R.; Pichler, A.; Potts, D. Fast Fourier transform boost for the Sinkhorn algorithm. arXiv 2022, arXiv:2201.07524. [Google Scholar]
  37. Solomon, J.; de Goes, F.; Peyré, G.; Cuturi, M.; Butscher, A.; Nguyen, A.; Du, T.; Guibas, L. Convolutional Wasserstein distances: Efficient optimal transportation on geometric domains. ACM Trans. Graph. 2015, 34, 1–11. [Google Scholar] [CrossRef]
  38. Strössner, C.; Kressner, D. Low-rank tensor approximations for solving multi-marginal optimal transport problems. arXiv 2022, arXiv:2202.07340. [Google Scholar]
  39. Benamou, J.-D.; Carlier, G.; Nenna, L. Generalized incompressible flows, multi-marginal transport and Sinkhorn algorithm. Numer. Math. 2019, 142, 33–54. [Google Scholar] [CrossRef] [Green Version]
  40. Beiglböck, M.; Léonard, C.; Schachermayer, W. A general duality theorem for the Monge-Kantorovich transport problem. Studia Math. 2012, 209, 151–167. [Google Scholar] [CrossRef]
  41. Elvander, F.; Haasler, I.; Jakobsson, A.; Karlsson, J. Multi-marginal optimal transport using partial information with applications in robust localization and sensor fusion. Signal Process. 2020, 171, 107474. [Google Scholar] [CrossRef]
  42. Marino, S.D.; Gerolin, A. An optimal transport approach for the Schrödinger bridge problem and convergence of Sinkhorn algorithm. J. Sci. Comput. 2020, 85, 27. [Google Scholar] [CrossRef]
  43. Plonka, G.; Potts, D.; Steidl, G.; Tasche, M. Numerical Fourier Analysis; Applied and Numerical Harmonic Analysis; Birkhäuser: Basel, Switzerland, 2018. [Google Scholar] [CrossRef]
  44. Dutt, A.; Rokhlin, V. Fast Fourier transforms for nonequispaced data II. Appl. Comput. Harmon. Anal. 1995, 2, 85–100. [Google Scholar] [CrossRef]
  45. Beylkin, G. On the fast Fourier transform of functions with singularities. Appl. Comput. Harmon. Anal. 1995, 2, 363–381. [Google Scholar] [CrossRef]
  46. Keiner, J.; Kunis, S.; Potts, D. Using NFFT3—A software library for various nonequispaced fast Fourier transforms. ACM Trans. Math. Softw. 2009, 36, 19. [Google Scholar] [CrossRef]
  47. Nestler, F. Parameter tuning for the NFFT based fast Ewald summation. Front. Phys. 2016, 4, 28. [Google Scholar] [CrossRef]
  48. Bassetti, F.; Gualandi, S.; Veneroni, M. On the computation of Kantorovich-Wasserstein distances between 2d-histograms by uncapacitated minimum cost flows. arXiv 2018, arXiv:1804.00445. [Google Scholar]
  49. Cuturi, M.; Doucet, A. Fast computation of Wasserstein barycenters. In Proceedings of the 31st International Conference on International Conference on Machine Learning, Beijing, China, 21–26 June 2014; Volume 32, pp. II–685–II–693. [Google Scholar]
  50. Rabin, J.; Peyre, G.; Delon, J.; Bernot, M. Wasserstein barycenter and its application to texture mixing. In Scale Space and Variational Methods in Computer Vision; Bruckstein, A.M., Romeny, B.M.t.H., Bronstein, A.M., Bronstein, M.M., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 435–446. [Google Scholar]
  51. von Lindheim, J. Approximative algorithms for multi-marginal optimal transport and free-support Wasserstein barycenters. arXiv 2022, arXiv:2202.00954. [Google Scholar]
  52. Takezawa, Y.; Sato, R.; Kozareva, Z.; Ravi, S.; Yamada, M. Fixed support tree-sliced Wasserstein barycenter. arXiv 2021, arXiv:2109.03431. [Google Scholar]
  53. Agueh, M.; Carlier, G. Barycenters in the Wasserstein space. SIAM J. Math. Anal. 2011, 43, 904–924. [Google Scholar] [CrossRef]
  54. Flamary, R.; Courty, N.; Gramfort, A.; Alaya, M.Z.; Boisbunon, A.; Chambon, S.; Chapel, L.; Corenflos, A.; Fatras, K.; Fournier, N.; et al. POT: Python Optimal Transport. J. Mach. Learn. Res. 2021, 22, 1–8. [Google Scholar]
  55. Brenier, Y. The least action principle and the related concept of generalized flows for incompressible perfect fluids. J. Amer. Math. Soc. 1989, 2, 225–255. [Google Scholar] [CrossRef]
  56. Brenier, Y. The dual least action problem for an ideal, incompressible fluid. Arch. Ration. Mech. Anal. 1993, 122, 323–351. [Google Scholar] [CrossRef]
  57. Brenier, Y. Minimal geodesics on groups of volume-preserving maps and generalized solutions of the Euler equations. Comm. Pure Appl. Math. 1997, 52, 411–452. [Google Scholar] [CrossRef]
Figure 1. Regularized kernel κ R for η = 1 / 4 , periodicity length τ = 1 , boundary interval ε B = 0.2 , and smoothness p = 1 .
Figure 2. Computation time in seconds of MOT η with tree-structured cost function with regularization parameter η = 0.1 . Left: fixed N = 10 4 . Right: fixed K = 10 .
Figure 3. Computation time in seconds of MOT η with a circle-structured cost function with regularization parameter η = 0.1 . Left: fixed N = 700 . Right: fixed K = 3 .
Figure 4. MOT η with tree-structured cost function, where d = 2, K = 10, N = 10,000, and p = 3. Left: Approximation error of S^(10) between the Sinkhorn and the NFFT-Sinkhorn algorithm, depending on the number of Fourier coefficients M of the NFFT. Right: Computation time in seconds.
Figure 5. MOT η with circle-structured cost function, where d = 2 , K = 3 , N = 1000 , and p = 3 . Left: Approximation error of S ( 10 ) between Sinkhorn and NFFT-Sinkhorn algorithm depending on the number of Fourier coefficients M. Right: Computation time in seconds.
Figure 6. The tree graph of the barycenter problem, with leaves L marked in blue.
Figure 7. Given measures $\mu^k$, $k \in \{1, 4, 6, 7\}$; entropy regularization parameter $\eta = 5 \cdot 10^{-3}$, r = 150, $\tilde w = \frac{1}{4}(1, 1, 1, 1)$. NFFT-Sinkhorn parameters: M = 156, p = 3. The test images are adapted with permission (MIT License) from https://github.com/PythonOT/POT, see [54]. © 2016, Rémi Flamary. (a) Sinkhorn. (b) NFFT-Sinkhorn.
Figure 8. Joint measures Π 1 , k , k = 2 , , 5 representing the movement of the particles from initial position x [ 0 , 1 ] (x-axis) to position x t k [ 0 , 1 ] (y-axis) at time t k , where σ ( x ) = 1 x . First row: Sinkhorn algorithm. Second row: NFFT-Sinkhorn algorithm.
Figure 9. Joint measures as in Figure 8, but with the function σ ( x ) = min ( 2 x , 1 2 x ) .

Ba, F.A.; Quellmalz, M. Accelerating the Sinkhorn Algorithm for Sparse Multi-Marginal Optimal Transport via Fast Fourier Transforms. Algorithms 2022, 15, 311. https://doi.org/10.3390/a15090311
