Article

A Randomized Distributed Kaczmarz Algorithm and Anomaly Detection

Department of Mathematics, Iowa State University, 396 Carver Hall, Ames, IA 50011, USA
* Author to whom correspondence should be addressed.
Axioms 2022, 11(3), 106; https://doi.org/10.3390/axioms11030106
Submission received: 10 December 2021 / Revised: 22 February 2022 / Accepted: 23 February 2022 / Published: 26 February 2022
(This article belongs to the Section Mathematical Analysis)

Abstract

The Kaczmarz algorithm is an iterative method for solving systems of linear equations. We introduce a randomized Kaczmarz algorithm for solving systems of linear equations in a distributed environment, i.e., the equations within the system are distributed over multiple nodes within a network. The modification we introduce is designed for a network with a tree structure that allows for passage of solution estimates between the nodes in the network. We demonstrate that the algorithm converges to the solution, or the solution of minimal norm, when the system is consistent. We also prove convergence rates of the randomized algorithm that depend on the spectral data of the coefficient matrix and the random control probability distribution. In addition, we demonstrate that the randomized algorithm can be used to identify anomalies in the system of equations when the measurements are perturbed by large, sparse noise.

1. Introduction

The Kaczmarz method [1] is an iterative algorithm for solving a system of linear equations $Ax = \mathbf{b}$, where $A$ is an $m \times k$ matrix. Written out, the equations are $a_i^* x = b_i$ for $i = 1, \dots, m$, where $a_i^*$ is the $i$th row of the matrix $A$. Given a solution guess $x^{(n-1)}$ and an equation number $i$, we calculate $r_i = b_i - a_i^* x^{(n-1)}$ (the residual for equation $i$), and define
$$ x^{(n)} = x^{(n-1)} + \frac{r_i}{\|a_i\|^2}\, a_i. \tag{1} $$
This makes the residual of $x^{(n)}$ in equation $i$ equal to 0. Here and elsewhere, $\|\cdot\|$ is the usual Euclidean ($\ell^2$) norm. We iterate repeatedly through all equations (i.e., we consider $\lim_{n \to \infty} x^{(n)}$, where $n \equiv i \bmod m$, so the equations are repeated cyclically). Kaczmarz proved that if the system of equations has a unique solution, then $x^{(n)}$ converges to that solution. Later, it was proved in [2] that if the system is consistent (but the solution is not unique), then the sequence converges to the solution of minimal norm. Likewise, it was proved in [3,4] that if the system is inconsistent, a relaxed version of the algorithm can provide approximations to a weighted least-squares solution.
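As a point of reference, here is a minimal sketch of this cyclic iteration in Python/NumPy. The code is our own illustration (function and variable names are hypothetical), not from the original papers.

```python
import numpy as np

def kaczmarz_sweep(A, b, x, omega=1.0):
    """One cyclic sweep of the (relaxed) Kaczmarz iteration: for each
    equation i, project the current estimate onto the hyperplane
    a_i^* x = b_i (omega = 1 is the classical update of Equation (1))."""
    for i in range(A.shape[0]):
        a_i = A[i]
        r_i = b[i] - a_i @ x                  # residual of equation i
        x = x + omega * r_i / (a_i @ a_i) * a_i
    return x

# Example: a small consistent system, iterated cyclically.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = rng.standard_normal(5)
b = A @ x_true
x = np.zeros(5)
for _ in range(50):
    x = kaczmarz_sweep(A, b, x)
print(np.linalg.norm(x - x_true))             # should be small
```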
A protocol was introduced in [5] to utilize the Kaczmarz algorithm to solve a system of equations that are distributed across a network; each node on the network has one equation, and the equations are indexed by the nodes of the network. We consider the network to be a graph and select from the graph a minimal spanning tree. The iteration begins with a single estimate of the solution at the root of the tree. The root updates this estimate using the Kaczmarz update as in Equation (1) according to its equation, then passes that updated estimate to its neighbors. Each of these nodes in turn updates the estimate it receives using its equation, then passes that updated estimate to its neighbors (except its predecessor). This recursion continues until the estimates reach the leaves of the tree. The multiple estimates are then aggregated by each leaf passing its estimate to its predecessor; each of these nodes then takes a weighted average of all of its inputs. This second recursion continues until reaching the root; the single estimate at the root then becomes the input for the next iteration. To formalize this, we first introduce some notation.

1.1. Notation

A tree is a connected graph with no cycles. We denote arbitrary nodes (vertices) of a tree by v, u. Let V denote the collection of all nodes in the tree. Our tree will be rooted; the root of the tree is denoted by r. Following the notation from MATLAB, when v is on the path from r to u, we will say that v is a predecessor of u and write $u \preceq v$. Conversely, u is a successor of v. By an immediate successor of v we mean a successor u such that there is an edge between v and u (this is referred to as a child in graph theory parlance [6]); similarly, v is an immediate predecessor (i.e., parent) of u if v is a predecessor of u and there is an edge between them. We denote the set of all immediate successors of node v by $C(v)$; we will also use $P(u)$ to denote the parent (i.e., immediate predecessor) of node u. A node without a successor is called a leaf; leaves of the tree are denoted by $\ell$. We will denote the set of all leaves by $\mathcal{L}$. Often we will need to enumerate the leaves as $\ell_1, \dots, \ell_t$; hence t denotes the number of leaves.
A weight w is a nonnegative function on the edges of the tree; we denote this by $w(u,v)$, where u and v are nodes that have an edge between them. We assume $w(u,v) = w(v,u)$, though we will typically write $w(u,v)$ when $u \preceq v$. When $u \preceq v$ but u is not an immediate successor of v, we write
$$ w(u,v) := \prod_{j=1}^{J-1} w(u_{j+1}, u_j), \tag{2} $$
where $v = u_1, \dots, u_J = u$ is the path from v to u.
We let $\Pi$ denote the affine orthogonal projection onto the solution space of the matrix equation $Ax = \mathbf{b}$. For a positive semidefinite matrix C, $\lambda_{max}(C)$ denotes the largest eigenvalue, and $\lambda_{min}^{nz}(C)$ denotes the smallest nonzero eigenvalue.

1.2. The Distributed Kaczmarz Algorithm

The iteration begins with an estimate, say $x^{(n)}$, at the root of the tree (we denote this by $x_r^{(n)}$). Each node u receives from its immediate predecessor v an input estimate $x_v^{(n)}$ and generates a new estimate via the Kaczmarz update:
$$ x_u^{(n)} = x_v^{(n)} + \frac{r_u(x_v^{(n)})}{\|a_u\|^2}\, a_u, \tag{3} $$
where the residual is given by
$$ r_u(x_v^{(n)}) := b_u - a_u^* x_v^{(n)}. \tag{4} $$
Node u then passes this estimate to all of its immediate successors, and the process is repeated recursively. We refer to this as the dispersion stage. Once this process has finished, each leaf $\ell$ of the tree now possesses an estimate $x_\ell^{(n)}$.
The next stage, which we refer to as the pooling stage, proceeds as follows. For each leaf $\ell$, set $y_\ell^{(n)} = x_\ell^{(n)}$. Each node v calculates an updated estimate as
$$ y_v^{(n)} = \sum_{u \in C(v)} w(u,v)\, y_u^{(n)}, \tag{5} $$
subject to the constraints that $w(u,v) > 0$ when $u \in C(v)$ (the set of all immediate successors of v) and $\sum_{u \in C(v)} w(u,v) = 1$. This process continues until reaching the root of the tree, resulting in the estimate $y_r^{(n)}$.
We set $x^{(n+1)} = y_r^{(n)}$ and repeat the iteration. The updates in the dispersion stage (Equation (3)) and pooling stage (Equation (5)) are illustrated in Figure 1.
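To make the dispersion and pooling stages concrete, the following Python sketch runs one iteration of the scheme on a tree stored as child lists. The uniform pooling weights and all names are illustrative assumptions of ours, not the authors' implementation.

```python
import numpy as np

def distributed_kaczmarz_iteration(A, b, children, root, x_root):
    """One dispersion/pooling iteration of the distributed Kaczmarz scheme
    of Section 1.2 on a rooted tree.  `children[v]` lists the immediate
    successors of node v; node v owns the equation A[v] x = b[v].
    Uniform pooling weights w(u, v) = 1/|C(v)| are assumed here."""
    x = {}   # estimates produced during the dispersion stage

    def disperse(v, x_in):
        a_v = A[v]
        x[v] = x_in + (b[v] - a_v @ x_in) / (a_v @ a_v) * a_v   # Eq. (3)
        for u in children.get(v, []):
            disperse(u, x[v])

    def pool(v):
        kids = children.get(v, [])
        if not kids:                      # leaf: y_l = x_l
            return x[v]
        y_kids = [pool(u) for u in kids]
        return sum(y_kids) / len(kids)    # Eq. (5), uniform weights

    disperse(root, x_root)
    return pool(root)                     # becomes x^(n+1) at the root

# Tiny example: 3 equations on a root with two leaves.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)); x_true = rng.standard_normal(3); b = A @ x_true
children = {0: [1, 2]}                    # node 0 is the root, 1 and 2 are leaves
x = np.zeros(3)
for _ in range(200):
    x = distributed_kaczmarz_iteration(A, b, children, root=0, x_root=x)
print(np.linalg.norm(x - x_true))
```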

1.3. Related Work

Recent variations on the Kaczmarz method allowed for relaxation parameters [2], re-ordering equations to speed up convergence [7], or considering block versions of the Kaczmarz method with relaxation matrices $\Omega_i$ [3]. Relatively recently, choosing the next equation randomly has been shown to improve the rate of convergence of the algorithm [8,9,10]. Moreover, this randomized version of the Kaczmarz algorithm has been shown to be comparable to the gradient descent method [11]. The randomized version we present in this article is similar to the Cimmino method [12], which was extended in [13], and is most similar to the greedy method given in [14]. Both of these methods involve averaging estimates, in addition to applying the Kaczmarz update, as we do here. In addition, a special case of our randomized variant is found in [15]. There, the authors analyze a randomized block Kaczmarz algorithm with averaging, as we do; however, they assume that the size of the block is the same for every iteration, which we do not, and they also assume that the weights associated to the averaging and the probabilities for selecting the blocks are proportional to the inverse of the norms of the row vectors of the coefficient matrix, which we also do not do. See Remark 2 for further details.
The situation we consider in the present article can be viewed as a distributed estimation problem. Such problems have a long history in applied mathematics, control theory, and machine learning. At a high level, similar to our approach, they all involve averaging local copies of the unknown parameter vector interleaved with update steps [16,17,18,19,20,21,22,23,24,25]. Recently, a number of protocols for gossip methods, including a variation of the Kaczmarz method, were analyzed in [26].
However, our version of the Kaczmarz method differs from previous work in a few aspects: (i) we assume an a priori fixed tree topology (which is more restrictive than typical gossip algorithms); (ii) there is no master node as in parallel algorithms, and no shared memory architecture; (iii) as we will emphasize in Theorem 2, we make no strong convexity assumptions (which is typically needed for distributed optimization algorithms, but see [27,28] for a relaxation of this requirement); and (iv) we make no assumptions on the matrix A, in particular we do not assume that it is nonnegative.
On the other end of the spectrum are algorithms that distribute a computational task over many processors arranged in a fixed network. These algorithms are usually considered in the context of parallel processing, where the nodes of the graph represent CPUs in a highly parallelized computer. See [29] for an overview.
The algorithm we are considering does not really fit either of those categories: it requires more structure than gossip algorithms, but each node depends on results from other nodes more than in the usual distributed algorithms.
This distinction was pointed out in [29]. For iteratively solving a system of linear equations, a Successive Over-Relaxation (SOR) variant of the Jacobi method is easy to parallelize; standard SOR, which is a variation on Gauss–Seidel, is not. The authors also consider what they call the Reynolds method, which is similar to a Kaczmarz method with all equations being updated simultaneously. Again, this method is easy to parallelize. A sequential version called Reynolds Gauss–Seidel (RGS) can only be parallelized in certain settings, such as the numerical solution of PDEs.
A distributed version of the Kaczmarz algorithm was introduced in [30]. The main ideas presented there are very similar to ours: updated estimates are obtained from prior estimates using the Kaczmarz update with the equations that are available at the node, and distributed estimates are averaged together at a single node (which the authors refer to as a fusion center; for us, it is the root of the tree). In [30], the convergence analysis is limited to the case of consistent systems of equations, and inconsistent systems are handled by Tikhonov regularization [31,32]. Another distributed version was proposed in [33], which has a shared memory architecture. Finally, the Kaczmarz algorithm has been proposed for online processing of data in [34,35]. In these studies, the processing is online and so is neither distributed nor parallel.
The Kaczmarz algorithm, at its most basic, is an alternating projection method, consisting of iterations of (affine) projections. Our distributed Kaczmarz algorithm (whether randomized or not) consists of iterations of averages of these projections. When consistent, the iterations converge to an element of the common fixed point set of these operators. This is a special case of the common fixed point problem, and our algorithm is a special case of the string-averaging methods for finding a common fixed point set. The string-averaging method has been extensively studied, for example [36,37,38,39] in the cyclic control regime, and [40,41] in the random (dynamic) control regime. An application of string averaging methods to medical imaging is found in [42]. The recent study [43] provides an overview of the method and extensive bibliography. While our situation is a special case of these methods, our analysis of the algorithm provides stronger conclusions, since our main results (Theorems 2 and 3) provide explicit estimates on the convergence rates of our algorithm, rather than the qualitative convergence results found in the literature on string-averaging methods. In addition, Theorem 3 provides a convergence analysis even in the inconsistent case, which is typically not available for string-averaging methods, as the standard convergence analysis requires that the common fixed point set is nonempty (though ref. [44] proves that for certain inconsistent cases, convergence is still guaranteed).

1.4. Main Contributions

Our main contributions in this study concern quantitative convergence results of the distributed Kaczmarz algorithm for systems of linear equations that are distributed over a tree. Just as in the case of cyclic control of the classical Kaczmarz algorithm, in our distributed setting, we are able to prove these quantitative results by introducing randomness into the control law of the method. We prove that the random control as described in Algorithm 1 converges at a rate that depends on the parameters of the probability distribution as well as the spectral data of the coefficient matrix. This is in contrast to typical distributed estimation algorithms for which convergence is guaranteed but the convergence rate is unknown.
As a result of this quantitative convergence analysis, we are able to utilize Algorithm 1 to handle the context of (unknown) corrupted equations. Again, this is in contrast to distributed estimation methods or string-averaging methods, which are known not to converge when the system of equations is inconsistent. We note that Algorithm 1 also will not necessarily converge when the system is inconsistent. We suppose that the number of corrupted equations is small in comparison to the total number of equations, and the remaining equations are in fact consistent. If we have an estimate on the number of corrupted equations, by utilizing Algorithms 2 and 3, with high probability we can identify those equations and successfully remove them, thereby finding the solution to the remaining equations. Likewise, in contrast to the string-averaging methods, we are able to prove convergence rates when the solution set (i.e., the fixed point set in the literature on string-averaging methods) is nonempty, as well as handle the case when the fixed point set is empty, provided we have an estimate on the number of outliers (i.e., the number of equations that can be removed so that the solution set of the remaining equations is nonempty). Our Algorithms 2 and 3 are nearly verbatim those found in [45], as are the theorems (i.e., Theorems 4 and 5) supporting the efficacy of those algorithms. We prove a necessary result (Lemma 1), with the remaining analysis as in [45], which also has an extensive analysis of numerical experiments that we do not reproduce here.
Algorithm 1 Randomized Tree Kaczmarz (RTK) algorithm.
1: Input: $A$, $\mathbf{b}$, $x^{(0)}$, $\mathcal{D}$, $K$
2: for $k \leq K$ do
3:   Draw sample $Z \sim \mathcal{D}$
4:   if $r \in Z$ then
5:     $x_r = x^{(k-1)} + \frac{b_r - a_r^* x^{(k-1)}}{\|a_r\|^2}\, a_r$
6:   else
7:     $x_r = x^{(k-1)}$
8:   end if
9:   for $q = 1, \dots, \mathrm{Depth}$ do
10:    for $v$ with $d(v, r) = q$ do
11:      if $v \in Z$ then
12:        $x_v = x_{P(v)} + \frac{b_v - a_v^* x_{P(v)}}{\|a_v\|^2}\, a_v$
13:      else
14:        $x_v = x_{P(v)}$
15:      end if
16:    end for
17:  end for
18:  for $\ell \in \mathcal{L}$ do
19:    $y_\ell = x_\ell$
20:  end for
21:  for $q = \mathrm{Depth} - 1, \dots, 0$ do
22:    for $u$ with $d(u, r) = q$ do
23:      $T(u) = \{ v \in C(u) : y_v \neq x_u \}$
24:      if $T(u) \neq \emptyset$ then
25:        $y_u = \frac{1}{|T(u)|} \sum_{v \in T(u)} y_v$
26:      else
27:        $y_u = x_u$
28:      end if
29:    end for
30:  end for
31:  $x^{(k)} = y_r$
32: end for
33: return $x^{(K)}$
Algorithm 2 Multiple Round Randomized Tree Kaczmarz (MRRTK) algorithm.
1: Input: $A$, $\mathbf{b}$, $x^{(0)}$, $\mathcal{D}$, $K$, $W$, $d$
2: $S = \emptyset$
3: for $i \leq W$ do
4:   $x^{(K,i)} = RTK(A, \mathbf{b}, x^{(0)}, \mathcal{D}, K)$
5:   $D = \operatorname{argmax}_{D \subset [A],\ |D| = d} \sum_{j \in D} | A x^{(K,i)} - \mathbf{b} |_j$
6:   $S = S \cup D$
7: end for
8: return $x$, where $A_{S^C}\, x = \mathbf{b}_{S^C}$
Algorithm 3 Multiple Round Randomized Tree Kaczmarz (MRRTKUS) algorithm with Unique Selection.
1: Input: $A$, $\mathbf{b}$, $x^{(0)}$, $\mathcal{D}$, $K$, $W$, $d$
2: $S = \emptyset$
3: for $i \leq W$ do
4:   $x^{(K,i)} = RTK(A, \mathbf{b}, x^{(0)}, \mathcal{D}, K)$
5:   $D = \operatorname{argmax}_{D \subset [A] \setminus S,\ |D| = d} \sum_{j \in D} | A x^{(K,i)} - \mathbf{b} |_j$
6:   $S = S \cup D$
7: end for
8: return $x$, where $A_{S^C}\, x = \mathbf{b}_{S^C}$
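As a companion to the pseudocode, here is an illustrative Python sketch of a single RTK iteration (Algorithm 1): passive nodes forward the estimate unchanged, and during pooling a node averages only the child estimates that differ from its own. The data structures and names are our own choices, not code from the paper.

```python
import numpy as np

def rtk_iteration(A, b, children, root, x_prev, Z):
    """One iteration of the Randomized Tree Kaczmarz (RTK) update
    (Algorithm 1) for a sampled active set Z of nodes."""
    x = {}

    def disperse(v, x_in):
        if v in Z:                                    # active: Kaczmarz update
            a_v = A[v]
            x[v] = x_in + (b[v] - a_v @ x_in) / (a_v @ a_v) * a_v
        else:                                         # passive: pass through
            x[v] = x_in
        for u in children.get(v, []):
            disperse(u, x[v])

    def pool(v):
        kids = children.get(v, [])
        if not kids:
            return x[v]
        y_kids = [pool(u) for u in kids]
        changed = [y for y in y_kids if not np.array_equal(y, x[v])]
        if changed:                                   # T(u) is nonempty
            return sum(changed) / len(changed)
        return x[v]                                   # nothing changed below v

    disperse(root, x_prev)
    return pool(root)                                 # x^(k) = y_r

# Example with the "generations" sampling of Section 2.2 on a small binary tree.
rng = np.random.default_rng(2)
children = {0: [1, 2], 1: [3, 4], 2: [5, 6]}          # depth-2 binary tree, 7 nodes
A = rng.standard_normal((7, 7)); x_true = rng.standard_normal(7); b = A @ x_true
generations = [[0], [1, 2], [3, 4, 5, 6]]
x = np.zeros(7)
for _ in range(500):
    Z = set(generations[rng.integers(len(generations))])
    x = rtk_iteration(A, b, children, root=0, x_prev=x, Z=Z)
print(np.linalg.norm(x - x_true))   # decreases in expectation at the rate of Theorem 2
```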

2. Randomization of the Distributed Kaczmarz Algorithm

The Distributed Kaczmarz Algorithm (DKA) described in Section 1.2 was introduced in [5]. The main results there concern qualitative convergence guarantees of the DKA: Theorems 2 and 4 of [5] prove that the DKA converges to the solution (respectively, the solution of minimal norm) when the system has a unique solution (respectively, is consistent). Theorem 14 of [5] proves that when the system of equations is inconsistent, the relaxed version of the DKA converges for every appropriate relaxation parameter, and the limits approximate a weighted least-squares solution. No quantitative estimates of the convergence rate are given in [5], and in fact it is observed in [46] that the convergence rate is dependent upon the topology of the tree as well as the distribution of the equations across the nodes.

2.1. Randomized Variants

In this subsection, we consider randomized variants of the protocol introduced in Section 1.2 (see Algorithm 1). This will allow us to provide quantitative estimates on the rate of convergence in terms of the spectral data of A. We will be using the analysis of the randomized Kaczmarz algorithm as presented in [8] and the analysis of the randomized block Kaczmarz algorithm as presented in [14].
We will have two randomized variants, but both can be thought of in a similar manner. During the dispersion stage of the iteration, one or more of the nodes will be active, meaning that the estimate they receive will be updated according to Equation (3) and then passed on to their successor nodes (or held if the node is a leaf). The remaining nodes will be passive, meaning that the estimate they receive will be passed on to their successor nodes without updating. In the first variant, exactly one node will be chosen randomly to be active for the current iteration, and the remaining nodes will be passive. In the second variant, several of the nodes will be chosen randomly to be active, subject to the constraint that no two active nodes are in a predecessor-successor relationship. The pooling stage proceeds with the following variation. When a node receives estimates from its successors, it averages only those estimates that differ from its own estimate during the dispersion stage. If a node receives estimates from all of its successors that are the same as its own estimate during the dispersion stage, it passes this estimate to its predecessor.
For these random choices, we will require that the root node know the full topology of the network. For each iteration, the root node will select the active nodes for that iteration according to some probability distribution; the nodes that are selected for activation will be notified by the root node during the iteration.
In both of our random variants, we make the assumption that the system of equations is consistent. This assumption is required for the results that we use from [8,14]. As such, no relaxation parameter is needed in our randomized variants, though in Section 2.3 we observe that convergence can be accelerated by over-relaxation. See [9,47] for results concerning the randomized Kaczmarz algorithm in the context of inconsistent systems of equations.
See Algorithm 1 for a pseudocode description of the randomized variants; we refer to this algorithm as the Randomized Tree Kaczmarz (RTK) algorithm.

2.1.1. Single Active Nodes

Let Y denote a random variable whose values are in V. In our first randomized variant, the root node selects $v \in V$ according to the probability distribution
$$ P(Y = v) = \frac{\|a_v\|^2}{\|A\|_F^2}; $$
denote this distribution by $\mathcal{D}_0$. Note that this requires the root node to have access to the norms $\{ \|a_v\| \}_{v \in V}$. During the dispersion stage of iteration n, the node that is selected, denoted by $Y_n$, is notified by the root node as the estimate $x^{(n)}$ traverses the tree.
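A small sketch of how the root might draw a node from $\mathcal{D}_0$ (illustrative only; `sample_D0` is a hypothetical helper):

```python
import numpy as np

def sample_D0(A, rng):
    """Sample a single node index with probability ||a_v||^2 / ||A||_F^2."""
    row_norms_sq = np.sum(A**2, axis=1)
    probs = row_norms_sq / row_norms_sq.sum()   # squared Frobenius norm in the denominator
    return rng.choice(len(probs), p=probs)
```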
Proposition 1.
Suppose that the sequence of approximations $x^{(n)}$ is obtained by Algorithm 1 with distribution $\mathcal{D}_0$. Then, the approximations have the form
$$ x^{(n)} = x^{(n-1)} + \frac{b_{Y_n} - a_{Y_n}^* x^{(n-1)}}{\|a_{Y_n}\|^2}\, a_{Y_n}. $$
Proof. 
At the end of the dispersion stage of the iteration, we have that the leaves $\ell \preceq Y_n$ possess an estimate that has been updated; all other leaves have estimates that are not updated. Thus, we have $x_\ell^{(n-1)} = x_{Y_n}^{(n-1)} = x^{(n-1)} + \frac{b_{Y_n} - a_{Y_n}^* x^{(n-1)}}{\|a_{Y_n}\|^2}\, a_{Y_n}$ for those $\ell \preceq Y_n$, and $x_\ell^{(n-1)} = x^{(n-1)}$ otherwise.
Then, during the pooling stage, the only nodes that receive an estimate that is different from their estimate during the dispersion stage are $u \succeq Y_n$, and those estimates are all $x_{Y_n}^{(n-1)}$. Thus, for all such nodes, $y_u = x_{Y_n}^{(n-1)}$. Every other node has $y_u = x^{(n-1)}$. Since the root $r \succeq Y_n$, we obtain that
$$ x^{(n)} = y_r^{(n-1)} = x_{Y_n}^{(n-1)} = x^{(n-1)} + \frac{b_{Y_n} - a_{Y_n}^* x^{(n-1)}}{\|a_{Y_n}\|^2}\, a_{Y_n}. $$
□
Corollary 1.
Suppose the sequence of approximations $x^{(n)}$ is obtained by Algorithm 1 with distribution $\mathcal{D}_0$. Then, the following linear convergence rate in expectation holds:
$$ \mathbb{E}\, \| x^{(n)} - \Pi x^{(n)} \|^2 \leq \left( 1 - \frac{\lambda_{min}^{nz}(A^T A)}{\|A\|_F^2} \right)^{\! n} \| x^{(0)} - \Pi x^{(0)} \|^2. \tag{6} $$
Proof. 
When the blocks are singletons, by Proposition 1, the update is identical to the Randomized Kaczmarz algorithm of [8]. The estimate in Equation (6) is given in [14]. □

2.1.2. Multiple Active Nodes

We now consider blocks of nodes, meaning multiple nodes, that are active during each iteration. For our analysis, we require that the nodes that are active during any iteration are independent of each other in terms of the topology of the tree. Let P ( V ) denote the power set of the set of nodes V. Let Z be a random variable whose values are in P ( V ) with probability distribution D .
Definition 1.
For $I \in \mathcal{P}(V)$, we say that I satisfies the incomparable condition whenever the following holds: for every distinct pair $u, v \in I$, neither $v \preceq u$ nor $u \preceq v$. We say that the probability distribution $\mathcal{D}$ satisfies the incomparable condition whenever the following implication holds: if $I \in \mathcal{P}(V)$ is such that $P(Z = I) > 0$, then I satisfies the incomparable condition.
Following [14], we define the expectation for each node $v \in V$:
$$ p_v = P(v \in Z). $$
We then define the matrix
$$ W = \sum_{v \in V} p_v\, \frac{a_v a_v^*}{\|a_v\|^2}. $$
For $I \in \mathcal{P}(V)$ and $u \in V$, we define
$$ T(u, I) = \{ w \in C(u) \mid \exists\, v \in I \ \text{s.t.}\ v \preceq w \}. $$
We then define, for $v \in I$:
$$ \gamma(v, I) = \prod_{u \succ v} \frac{1}{|T(u, I)|}. \tag{8} $$
These quantities reflect how estimates travel from the leaves of the tree back to the root. As multiple estimates are averaged at a node in the tree, the node needs to know how many of its descendants have estimates that have been updated, which (essentially) corresponds to how many descendants have been chosen to be active during that iteration. (Note that it is possible for a node to be active but its estimate not to be updated, because the estimate that is passed to it may already be a solution to its equation; to simplify the analysis, we suppose that this does not occur.) The weights $\gamma(v, I)$ are the final weights used in the update when the estimates ultimately return to the root. Note that these quantities depend on the choice of $I \in \mathcal{P}(V)$ as well as the topology of the tree itself.
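Because the weights $\gamma(v, I)$ depend on both the active set and the tree topology, a short sketch may help; it computes $T(u, I)$ and $\gamma(v, I)$ from parent and child lists under the conventions above (illustrative code, not from the paper).

```python
def gamma_weights(parent, children, root, I):
    """Compute T(u, I) and gamma(v, I) for an active set I satisfying the
    incomparable condition.  `parent[v]` is the immediate predecessor of v
    (absent for the root); `children[u]` lists the immediate successors."""
    # A child w of u belongs to T(u, I) if some active node lies at or below w.
    def has_active_below(w):
        return w in I or any(has_active_below(c) for c in children.get(w, []))

    T = {u: [w for w in children.get(u, []) if has_active_below(w)]
         for u in children}

    gamma = {}
    for v in I:
        g, u = 1.0, v
        while u != root:                 # walk from v up to the root
            u = parent[u]
            g /= len(T[u])               # each predecessor averages over |T(u, I)| children
        gamma[v] = g
    return T, gamma

# Example: root 0 with children 1, 2; node 2 has children 5, 6; active set {1, 5}.
children = {0: [1, 2], 2: [5, 6]}
parent = {1: 0, 2: 0, 5: 2, 6: 2}
print(gamma_weights(parent, children, root=0, I={1, 5}))
# gamma(1, I) = 1/2 and gamma(5, I) = 1/1 * 1/2 = 1/2; the weights sum to 1.
```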
Proposition 2.
Suppose that the sequence of approximations $x^{(n)}$ is generated by Algorithm 1, where the probability distribution $\mathcal{D}$ satisfies the incomparable condition. Let $Z_n$ be the n-th sample of the random variable Z. Then, the approximations have the form
$$ x^{(n)} = x^{(n-1)} + \sum_{v \in Z_n} \gamma(v, Z_n)\, \frac{b_v - a_v^* x^{(n-1)}}{\|a_v\|^2}\, a_v. \tag{9} $$
Proof. 
For any node w such that there is no $v \in Z_n$ with $w \preceq v$, we have $x_w^{(n-1)} = x_r^{(n-1)} = x^{(n-1)}$. However, if there is a $v \in Z_n$ with $w \preceq v$, then by the incomparable condition,
$$ x_w^{(n-1)} = x_v^{(n-1)} = x^{(n-1)} + \frac{b_v - a_v^* x^{(n-1)}}{\|a_v\|^2}\, a_v. $$
We have in the pooling stage, if $v \in Z_n$ then $y_v = x_v$. Moreover, for $u = P(v)$,
$$ y_u^{(n-1)} = \frac{1}{|T(u, Z_n)|} \sum_{w \in T(u, Z_n)} y_w^{(n-1)} = \frac{1}{|T(u, Z_n)|} \sum_{\substack{w \in Z_n \\ w \preceq u}} y_w^{(n-1)} = \frac{1}{|T(u, Z_n)|} \sum_{\substack{w \in Z_n \\ w \preceq u}} x_w^{(n-1)}, $$
where the second equation follows from the incomparable condition. It now follows by induction that
$$ y_r^{(n-1)} = \sum_{w \in Z_n} \prod_{u \succ w} \frac{1}{|T(u, Z_n)|}\, x_w^{(n-1)}. $$
We now obtain Equation (9) from Equation (8). □
For $I \in \mathcal{P}(V)$ that satisfies the incomparable condition, we define
$$ B_I = \sum_{v \in I} \gamma(v, I)\, \frac{a_v a_v^*}{\|a_v\|^2}, \tag{11} $$
where $\gamma(v, I)$ is as in Equation (8).
Let $\mathcal{D}$ be a probability distribution on $\mathcal{P}(V)$ that satisfies the incomparable condition, and let Z be a $\mathcal{P}(V)$-valued random variable with distribution $\mathcal{D}$. For each $I \in \mathcal{P}(V)$ such that $P(Z = I) > 0$, let
$$ \gamma_{min}(I) = \min\{ \gamma(v, I) \mid v \in I \}; \qquad \gamma_{max}(I) = \max\{ \gamma(v, I) \mid v \in I \}; $$
where $\gamma(v, I)$ is as in Equation (8). Then, let
$$ \Gamma(A, \mathcal{D}) = \min\big\{ 2\gamma_{min}(I) - \gamma_{max}(I)\, \lambda_{max}(B_I) \ \big|\ P(Z = I) > 0 \big\}. $$
For notational brevity, we define the quantity
$$ \Sigma(A, \mathcal{D}) := 1 - \Gamma(A, \mathcal{D})\, \lambda_{min}^{nz}(W). \tag{13} $$
We note that a priori there is no reason that Σ ( A , D ) < 1 . However, we will see in Section 2.2 examples for which Σ is less than 1 as well as conditions which guarantee this inequality.
We will use Theorem 4.1 in [14]. We alter the statement somewhat and include the proof for completeness.
Theorem 1.
Let $Z_1$ be a sample of the random variable Z with distribution $\mathcal{D}$. Let $\Pi$ be the projection onto the space of solutions to the system of equations, and let $x^{(0)}$ be an initial estimate of a solution that is in the range of $A^T$. Let
$$ x^{(1)} = x^{(0)} + \sum_{v \in Z_1} \gamma(v, Z_1)\, \frac{b_v - a_v^* x^{(0)}}{\|a_v\|^2}\, a_v. $$
Then, the following estimate holds in expectation:
$$ \mathbb{E}\, \| x^{(1)} - \Pi x^{(1)} \|^2 \leq \Sigma(A, \mathcal{D})\, \| x^{(0)} - \Pi x^{(0)} \|^2. $$
Proof. 
As derived in Theorem 4.1 in [14], we have
$$ \| x^{(1)} - \Pi x^{(1)} \|^2 \leq \big( x^{(0)} - \Pi x^{(0)} \big)^* \big( I - 2 B_{Z_1} + B_{Z_1}^2 \big) \big( x^{(0)} - \Pi x^{(0)} \big). \tag{15} $$
We make the following estimates:
$$ B_{Z_1} \succeq \gamma_{min}(Z_1) \sum_{v \in Z_1} \frac{a_v a_v^*}{\|a_v\|^2}, $$
and
$$ B_{Z_1}^2 \preceq \lambda_{max}(B_{Z_1})\, B_{Z_1} \preceq \lambda_{max}(B_{Z_1})\, \gamma_{max}(Z_1) \sum_{v \in Z_1} \frac{a_v a_v^*}{\|a_v\|^2}. $$
We thus obtain the estimate
$$ I - 2 B_{Z_1} + B_{Z_1}^2 \preceq I - 2\gamma_{min}(Z_1) \sum_{v \in Z_1} \frac{a_v a_v^*}{\|a_v\|^2} + \gamma_{max}(Z_1)\, \lambda_{max}(B_{Z_1}) \sum_{v \in Z_1} \frac{a_v a_v^*}{\|a_v\|^2} \preceq I - \Gamma(A, \mathcal{D}) \sum_{v \in Z_1} \frac{a_v a_v^*}{\|a_v\|^2}, $$
from which Equation (15) becomes
$$ \| x^{(1)} - \Pi x^{(1)} \|^2 \leq \big( x^{(0)} - \Pi x^{(0)} \big)^* \Big( I - \Gamma(A, \mathcal{D}) \sum_{v \in Z_1} \frac{a_v a_v^*}{\|a_v\|^2} \Big) \big( x^{(0)} - \Pi x^{(0)} \big). $$
Taking the expectation of the left side, we obtain
$$ \mathbb{E}\, \| x^{(1)} - \Pi x^{(1)} \|^2 \leq \big( x^{(0)} - \Pi x^{(0)} \big)^* \big( I - \Gamma(A, \mathcal{D})\, W \big) \big( x^{(0)} - \Pi x^{(0)} \big) \leq \Sigma(A, \mathcal{D})\, \| x^{(0)} - \Pi x^{(0)} \|^2. $$
The last inequality follows from the Courant–Fischer inequality applied to the matrix $D^{1/2} A$: for the matrix $W = A^T D A$, with $D = \mathrm{diag}\big( p_v / \|a_v\|^2 ;\ v \in V \big)$, we obtain
$$ \big( x^{(0)} - \Pi x^{(0)} \big)^* W \big( x^{(0)} - \Pi x^{(0)} \big) = \big\| D^{1/2} A \big( x^{(0)} - \Pi x^{(0)} \big) \big\|^2 \geq \lambda_{min}^{nz}(W)\, \| x^{(0)} - \Pi x^{(0)} \|^2. $$
□
Theorem 2.
Suppose the sequence of approximations $x^{(n)}$ is obtained by Algorithm 1 with the distribution $\mathcal{D}$ satisfying the incomparable condition and initialized with $x^{(0)} \in \mathcal{R}(A^T)$. Then, the following linear convergence rate in expectation holds:
$$ \mathbb{E}\, \| x^{(n)} - \Pi x^{(n)} \|^2 \leq \Sigma(A, \mathcal{D})^n\, \| x^{(0)} - \Pi x^{(0)} \|^2. $$
Proof. 
We take the expected value of $\| x^{(n)} - \Pi x^{(n)} \|^2$ conditioned on the history $Z_1, \dots, Z_{n-1}$. By Theorem 1, we have the estimate
$$ \mathbb{E}\big[ \| x^{(n)} - \Pi x^{(n)} \|^2 : Z_1, \dots, Z_{n-1} \big] \leq \mathbb{E}\big[ \| x^{(n)} - \Pi x^{(n-1)} \|^2 : Z_1, \dots, Z_{n-1} \big] \leq \mathbb{E}\big[ \Sigma(A, \mathcal{D})\, \| x^{(n-1)} - \Pi x^{(n-1)} \|^2 : Z_1, \dots, Z_{n-1} \big] = \Sigma(A, \mathcal{D})\, \| x^{(n-1)} - \Pi x^{(n-1)} \|^2. $$
We now take the expectation over the entire history to obtain
$$ \mathbb{E}\, \| x^{(n)} - \Pi x^{(n)} \|^2 \leq \Sigma(A, \mathcal{D})\, \mathbb{E}\, \| x^{(n-1)} - \Pi x^{(n-1)} \|^2. $$
The result now follows by iterating. □
Remark 1.
We note here that Theorem 2 recovers Corollary 1 as follows: if $\mathcal{D}$ selects only singletons from $\mathcal{P}(V)$, and selects the singleton $\{v\}$ with probability $\frac{\|a_v\|^2}{\|A\|_F^2}$, then we have
$$ \gamma_{min}(\{v\}) = \gamma_{max}(\{v\}) = \lambda_{max}(B_{\{v\}}) = \Gamma(A, \mathcal{D}) = 1 $$
and $W = \frac{A^T A}{\|A\|_F^2}$. Thus, $\Sigma(A, \mathcal{D}) = 1 - \frac{\lambda_{min}^{nz}(A^T A)}{\|A\|_F^2}$.
Remark 2.
A similar result to Theorem 2 is obtained in [15]. There, the authors make additional assumptions that we do not. First, in [15], each block that is selected always contains the same number of rows; our analysis works when the blocks have different sizes, which is necessary since several of our sampling schemes involve different block sizes. Second, in [15], the analysis requires the assumption that the expectations $p_v$ and the weights $\gamma(v, Z)$ satisfy the following constraint: there exists an $\alpha > 0$ such that for every v, $\frac{p_v\, \gamma(v, Z)}{\|a_v\|^2} = \alpha$. We do not make this assumption, and in fact it need not hold, since the weights $\gamma(v, Z)$ are determined both by Z and the tree structure; thus, we do not have control over this quantity.

2.2. Sampling Schemes

We propose here several possible sampling schemes for Algorithm 1 and illustrate their asymptotics in the special case of a binary tree. Recall that m is the number of equations (and hence the number of nodes in the tree), and t is the number of leaves (nodes with no successors) in the tree. To simplify our analysis, we assume that each $\|a_v\| = 1$. We note that for any set $I \in \mathcal{P}(V)$, we have the estimate
$$ \lambda_{max}(B_I) \leq 1, \tag{20} $$
since $\sum_{v \in I} \gamma(v, I) = 1$. In addition, if we assume that the distribution $\mathcal{D}$ satisfies the condition that the set $\mathcal{I} = \{ I \in \mathcal{P}(V) : P(Z = I) > 0 \}$ is a partition of V, then
$$ \lambda_{min}^{nz}(W) \leq 1, \tag{21} $$
since we have that $\sum_{v \in V} p_v = 1$. As a consequence of these estimates, we obtain the following guarantee that $\Sigma(A, \mathcal{D}) < 1$.
Proposition 3.
Suppose $\mathcal{D}$ satisfies the incomparable condition and that Equation (21) is satisfied. In addition, suppose that every $I \in \mathcal{I}$ has the property that $2\gamma_{min}(I) - \gamma_{max}(I) > 0$. Then, $\Sigma(A, \mathcal{D}) < 1$.
Generations. We block the nodes by their distance from the root: $G_k = \{ v : d(r, v) = k \}$. If the depth of the tree is K, then we draw from $\{ G_0, \dots, G_{K-1} \}$ uniformly. Here, the probabilities are $p_v = \frac{1}{K}$, so we have $W = \frac{1}{K} A^T A$, since we are also assuming that $\|a_v\| = 1$. Thus, the spectral data $\lambda_{min}^{nz}(W)$ reduces to $\frac{1}{K} \lambda_{min}^{nz}(A^T A)$. Thus, our convergence rate is
$$ \Sigma(A, \mathcal{D}) = 1 - \frac{\Gamma(A, \mathcal{D})\, \lambda_{min}^{nz}(A^T A)}{K}. $$
For arbitrary trees, the quantities $\gamma_{min}(G_k)$ and $\gamma_{max}(G_k)$ depend on the topology, but for p-regular trees (meaning all nodes that are not leaves have p successors), we have
$$ \gamma_{min}(G_k) = \gamma_{max}(G_k) = \frac{1}{p^k}. $$
Thus, from Equation (20) we obtain the estimate
$$ \Gamma(A, \mathcal{D}) \geq \frac{1}{p^{K-1}}, $$
and our convergence rate is bounded by
$$ 1 - \Gamma(A, \mathcal{D})\, \lambda_{min}^{nz}(W) \leq 1 - \frac{\lambda_{min}^{nz}(A^T A)}{K\, p^{K-1}}. $$
For our regular p-tree, $K = O(\log m)$ and $p^{K-1} = O(m)$, so asymptotically the convergence rate is at worst $1 - O\big( (m \log m)^{-1} \big)\, \lambda_{min}^{nz}(A^T A)$.
Families. Here, blocks consist of all immediate successors (children) of a common node, i.e., $C(u)$ for u not a leaf. The singleton $\{r\}$ is also a block. We select each block uniformly, so $p_v = \frac{1}{m - t}$. In this case, for each family (block) F,
$$ \gamma_{min}(F) = \gamma_{max}(F) = \frac{1}{|F|}. $$
Thus, we obtain the estimate
$$ \Gamma(A, \mathcal{D}) \geq \frac{1}{c}, $$
where c denotes the size of the largest family. We obtain a convergence rate of
$$ \Sigma(A, \mathcal{D}) = 1 - \frac{\lambda_{min}^{nz}(A^T A)}{c\, (m - t)}. $$
In the case of a binary tree, $c = 2$ and $m - t = O(m)$, so asymptotically the convergence rate is at worst $1 - O(m^{-1})\, \lambda_{min}^{nz}(A^T A)$.
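The two sampling schemes are easy to generate explicitly; the sketch below builds the Generations and Families blocks for a small binary tree (illustrative names and data structures). Either list of blocks, sampled uniformly, can serve as the active set Z in an RTK iteration.

```python
def generation_blocks(children, root):
    """Blocks G_k = {v : d(r, v) = k}, drawn uniformly."""
    blocks, level = [], [root]
    while level:
        blocks.append(list(level))
        level = [u for v in level for u in children.get(v, [])]
    return blocks

def family_blocks(children, root):
    """Blocks C(u) for each non-leaf u, plus the singleton {root}."""
    return [[root]] + [list(children[u]) for u in children]

# Depth-2 binary tree with 7 nodes (m = 7, t = 4 leaves).
children = {0: [1, 2], 1: [3, 4], 2: [5, 6]}
print(generation_blocks(children, 0))   # [[0], [1, 2], [3, 4, 5, 6]]
print(family_blocks(children, 0))       # [[0], [1, 2], [3, 4], [5, 6]]
```

Each block in either list satisfies the incomparable condition, and the blocks cover every node, matching the setup above.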

2.3. Accelerating the Convergence via Over-Relaxation

We can accelerate the convergence, i.e., lower the convergence factor, by considering larger stepsizes. In the classical cyclic Kaczmarz algorithm, a relaxation parameter $\omega \in (0, 2)$ is allowed, and the update is given by
$$ x^{(n)} = x^{(n-1)} + \omega\, \frac{b_n - a_n^* x^{(n-1)}}{\|a_n\|^2}\, a_n. $$
Experimentally, convergence is faster with $\omega > 1$ [4,14,46]. We consider here such a relaxation parameter in Algorithm 1. This alters the analysis of Theorem 1 only slightly. Indeed, the update in Proposition 2 becomes:
$$ x^{(n)} = x^{(n-1)} + \omega \sum_{v \in Z_n} \gamma(v, Z_n)\, \frac{b_v - a_v^* x^{(n-1)}}{\|a_v\|^2}\, a_v. $$
Thus, Equation (15) in the proof of Theorem 1 becomes:
$$ \| x^{(1)} - \Pi x^{(1)} \|^2 \leq \big( x^{(0)} - \Pi x^{(0)} \big)^* \big( I - 2\omega B_{Z_1} + \omega^2 B_{Z_1}^2 \big) \big( x^{(0)} - \Pi x^{(0)} \big). $$
The remainder of the calculation follows through, with the final estimate
$$ \mathbb{E}\, \| x^{(1)} - \Pi x^{(1)} \|^2 \leq \big( 1 - \Gamma_\omega(A, \mathcal{D})\, \lambda_{min}^{nz}(W) \big)\, \| x^{(0)} - \Pi x^{(0)} \|^2, $$
where
$$ \Gamma_\omega(A, \mathcal{D}) = \min\big\{ 2\omega\, \gamma_{min}(I) - \omega^2\, \gamma_{max}(I)\, \lambda_{max}(B_I) \ \big|\ P(Z = I) > 0 \big\}. $$
If we assume that
$$ \lambda_{max}^{block} := \max\big\{ \lambda_{max}(B_I) \ \big|\ P(Z = I) > 0 \big\} < 1, $$
then we can maximize a lower bound on $\Gamma_\omega(A, \mathcal{D})$ as a function of $\omega$. Indeed, if we assume that for each I, $\gamma_{min}(I) = \gamma_{max}(I) =: \gamma$, then the maximum occurs at
$$ \omega_0 = \frac{1}{\lambda_{max}^{block}}, $$
and we then obtain the estimate
$$ \Gamma_{\omega_0}(A, \mathcal{D}) \geq \frac{\gamma}{\lambda_{max}^{block}}. \tag{25} $$
This suggests that the rate of convergence can be accelerated by choosing the stepsize $\omega_0$, since the estimate from Equation (25) is better than the estimate
$$ \Gamma(A, \mathcal{D}) \geq \gamma\, \big( 2 - \lambda_{max}^{block} \big). $$
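A small numeric illustration of the stepsize choice, assuming (as above) that $\gamma_{min}(I) = \gamma_{max}(I) = \gamma$ for every admissible I; the values of $\gamma$ and $\lambda_{max}^{block}$ below are made up for illustration.

```python
# Lower bound on Gamma_omega(A, D): 2*omega*gamma - omega**2 * gamma * lam_block,
# valid when gamma_min = gamma_max = gamma and lambda_max(B_I) <= lam_block.
gamma, lam_block = 0.25, 0.5             # illustrative values only
f = lambda omega: 2 * omega * gamma - omega**2 * gamma * lam_block
omega_0 = 1 / lam_block                  # maximizer of the lower bound
print(f(1.0), f(omega_0))                # 0.375 vs 0.5 = gamma / lam_block
```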
Our numerical experiments presented in Section 4 empirically support acceleration through over-relaxation, and in fact suggest a greater improvement than what we prove here.

3. The RTK in the Presence of Noise

We now consider the performance of the Randomized Tree Kaczmarz algorithm in the presence of noise. That is to say, we consider the system of equations A x = b + ϵ , where ϵ represents noise within the observed measurements. We assume that the noiseless matrix equation A x = b is consistent, and its solution is the solution we want to estimate. We suppose that the equations a v * x = b v + ϵ v are distributed across a network that is a tree, as before. We will consider two aspects of the noisy case: first, we establish the convergence rate of the RTK in the presence of noise and estimate the errors in the approximations due to the noise; second, we will consider methods for mitigating the noise, meaning that if the noise vector ϵ satisfies a certain sparsity constraint, then we can estimate which nodes are corrupted by noise (i.e., anomalous) and ignore them in the RTK.

3.1. Convergence Rate in the Presence of Noise

We now consider the convergence rate of Algorithm 1 in the presence of noise. The randomized Kaczmarz method in the presence of noise was investigated in [47]. The main result of that study is that when the measurement vector b is corrupted by noise (likely causing the system to be inconsistent), the randomized Kaczmarz algorithm displays the same convergence rate as in the consistent case, up to an error term that is proportional to the variance of the noise, where the constant of proportionality is given by spectral data of the coefficient matrix.
To formalize, we consider the case that the system of equations $Ax = \mathbf{b}$ is consistent but that the measurement vector $\mathbf{b}$ is corrupted by noise, yielding the observed system of equations $Ax = \mathbf{b} + \boldsymbol{\epsilon}$. The update in Algorithm 1 then becomes:
$$ x^{(n)} = x^{(n-1)} + \sum_{v \in I} \gamma(v, I)\, \frac{b_v + \epsilon_v - a_v^* x^{(n-1)}}{\|a_v\|^2}\, a_v = x^{(n-1)} + \sum_{v \in I} \gamma(v, I)\, \frac{b_v - a_v^* x^{(n-1)}}{\|a_v\|^2}\, a_v + \sum_{v \in I} \gamma(v, I)\, \frac{\epsilon_v}{\|a_v\|^2}\, a_v. $$
We denote the error in the update by
$$ E_I = \sum_{v \in I} \gamma(v, I)\, \frac{\epsilon_v}{\|a_v\|^2}\, a_v. $$
Note that
$$ \| E_I \| \leq \sum_{v \in I} \gamma(v, I)\, \frac{|\epsilon_v|}{\|a_v\|} \leq \max_{v \in V} \frac{|\epsilon_v|}{\|a_v\|}. \tag{26} $$
For $d \times d$ matrices $A_1, \dots, A_N$, we denote the product $A_N A_{N-1} \cdots A_1 = \prod_{n=1}^{N} A_n$, so that the product notation is indexed from right to left. For a product where the beginning index is greater than the ending index, e.g., $\prod_{n=k+1}^{k} A_n$, we define the product to be the identity I. As before, Z is a $\mathcal{P}(V)$-valued random variable with distribution $\mathcal{D}$, and $B_Z$ is as given in Equation (11).
Theorem 3.
Suppose the system of equations $Ax = \mathbf{b}$ is consistent, and let $\Pi$ denote the projection onto the solution space. Let $x^{(n)}$ be the n-th iterate of Algorithm 1 run with the noisy measurements $Ax = \mathbf{b} + \boldsymbol{\epsilon}$, distribution $\mathcal{D}$, and initialization $x^{(0)} \in \mathcal{R}(A^T)$. Then, the following estimate holds in expectation:
$$ \mathbb{E}\, \| x^{(n)} - \Pi x^{(n)} \| \leq \Sigma(A, \mathcal{D})^{n/2}\, \| x^{(0)} - \Pi x^{(0)} \| + \big( 1 - \Sigma(A, \mathcal{D})^{1/2} \big)^{-1} \max_{v \in V} \frac{|\epsilon_v|}{\|a_v\|}, \tag{27} $$
where $\Sigma(A, \mathcal{D})$ is as given in Equation (13).
Proof. 
Let $x_S$ be any solution to the system of equations $Ax = \mathbf{b}$. We have by induction that
$$ x_S - x^{(n)} = ( I - B_{Z_n} )( x_S - x^{(n-1)} ) + E_{Z_n} = ( I - B_{Z_n} )( I - B_{Z_{n-1}} )( x_S - x^{(n-2)} ) + ( I - B_{Z_n} ) E_{Z_{n-1}} + E_{Z_n} = \cdots = \prod_{j=1}^{n} ( I - B_{Z_j} )\, ( x_S - x^{(0)} ) + \sum_{j=1}^{n} \prod_{k=j+1}^{n} ( I - B_{Z_k} )\, E_{Z_j}. $$
Note that the first term is precisely the error of the estimate obtained from Algorithm 1 with $\boldsymbol{\epsilon} = 0$, so we can utilize Theorem 2 to obtain the following estimate:
$$ \mathbb{E}\, \| x_S - x^{(n)} \| \leq \mathbb{E}\, \Big\| \prod_{j=1}^{n} ( I - B_{Z_j} )\, ( x_S - x^{(0)} ) \Big\| + \mathbb{E}\, \Big\| \sum_{j=1}^{n} \prod_{k=j+1}^{n} ( I - B_{Z_k} )\, E_{Z_j} \Big\| \leq \Sigma(A, \mathcal{D})^{n/2}\, \| x_S - x^{(0)} \| + \mathbb{E}\, \Big\| \sum_{j=1}^{n} \prod_{k=j+1}^{n} ( I - B_{Z_k} )\, E_{Z_j} \Big\|. $$
Thus, we need to estimate the terms in the sum.
As in the proof of Theorem 2, we can estimate the expectation conditioned on $Z_1, \dots, Z_{n-1}$ as
$$ \mathbb{E}\Big[ \Big\| \prod_{k=j+1}^{n} ( I - B_{Z_k} )\, E_{Z_j} \Big\| : Z_1, \dots, Z_{n-1} \Big] \leq \Sigma(A, \mathcal{D})^{1/2}\, \Big\| \prod_{k=j+1}^{n-1} ( I - B_{Z_k} )\, E_{Z_j} \Big\|. $$
Iterating this estimate $n - j$ times using the tower property of conditional expectation yields
$$ \mathbb{E}\Big[ \Big\| \prod_{k=j+1}^{n} ( I - B_{Z_k} )\, E_{Z_j} \Big\| : Z_1, \dots, Z_j \Big] \leq \Sigma(A, \mathcal{D})^{(n-j)/2}\, \| E_{Z_j} \|. $$
We have by Equation (26) that
$$ \mathbb{E}\, \| E_{Z_j} \| \leq \max_{v \in V} \frac{|\epsilon_v|}{\|a_v\|}. $$
Thus, taking the full expectation over the entire history yields
$$ \mathbb{E}\, \Big\| \prod_{k=j+1}^{n} ( I - B_{Z_k} )\, E_{Z_j} \Big\| \leq \Sigma(A, \mathcal{D})^{(n-j)/2}\, \max_{v \in V} \frac{|\epsilon_v|}{\|a_v\|}. $$
We thus obtain
$$ \mathbb{E}\, \Big\| \sum_{j=1}^{n} \prod_{k=j+1}^{n} ( I - B_{Z_k} )\, E_{Z_j} \Big\| \leq \sum_{j=1}^{n} \Sigma(A, \mathcal{D})^{(n-j)/2}\, \max_{v \in V} \frac{|\epsilon_v|}{\|a_v\|} \leq \sum_{j=0}^{\infty} \Sigma(A, \mathcal{D})^{j/2}\, \max_{v \in V} \frac{|\epsilon_v|}{\|a_v\|}, $$
from which the estimate in Equation (27) now follows. □
Remark 3.
We note here that Theorem 3 recovers a similar, but coarser, estimate to the one obtained in Theorem 2.1 in [47]. Indeed, as in Remark 1, suppose that the distribution $\mathcal{D}$ only selects singletons from $\mathcal{P}(V)$, and each singleton $\{v\}$ is selected with probability $\frac{\|a_v\|^2}{\|A\|_F^2}$. Then, we have $\Sigma(A, \mathcal{D}) = 1 - \frac{\lambda_{min}^{nz}(A^T A)}{\|A\|_F^2}$, and so Theorem 3 becomes
$$ \mathbb{E}\, \| x^{(n)} - \Pi x^{(n)} \| \leq \Sigma(A, \mathcal{D})^{n/2}\, \| x^{(0)} - \Pi x^{(0)} \| + \big( 1 - \Sigma(A, \mathcal{D})^{1/2} \big)^{-1} \max_{v \in V} \frac{|\epsilon_v|}{\|a_v\|} = \Big( 1 - \frac{1}{R} \Big)^{n/2} \| x^{(0)} - \Pi x^{(0)} \| + \Big( 1 - \Big( 1 - \frac{1}{R} \Big)^{1/2} \Big)^{-1} \max_{v \in V} \frac{|\epsilon_v|}{\|a_v\|}. $$
Here, $R = \frac{\|A\|_F^2}{\lambda_{min}^{nz}(A^T A)}$ in the notation of Theorem 2.1 in [47]. The estimates are similar for $R \approx 1$, but for $R \gg 1$, our estimate is worse. This is because the proof of Theorem 2.1 in [47] utilizes orthogonality at a crucial step, which is not valid in our situation: the error $E_I$ is not orthogonal to the solution space for the affected equations.

3.2. Anomaly Detection in Distributed Systems of Noisy Equations

We again consider the case of noisy measurements, again denoted by A x = b + ϵ , where now the error vector ϵ is assumed to be sparse, and the nonzero entries are large. This situation is considered in [45]. In that study, the authors propose multiple methods of using the Randomized Kaczmarz algorithm to estimate which equations are corrupted by the noise, i.e., which equations correspond to the nonzero entries of ϵ . Once those equations are detected, they are removed from the system. The assumption is that the subsystem of uncorrupted equations is consistent; thus, once the corrupted equations are removed, the Randomized Kaczmarz algorithm can be used to estimate a solution. Moreover, the Randomized Kaczmarz algorithm can be used on the full (corrupted) system of equations to obtain an estimate of the solution with positive probability. We demonstrate here that the methods proposed in [45] can be utilized in our context of distributed systems of equations to identify corrupted equations and estimate a solution to the uncorrupted equations.
Indeed, we utilize without any alteration the methods of [45] to detect corrupted equations; we provide the Algorithms 2 and 3 for completeness. To prove that the algorithms are effective in our distributed context, we follow the proofs in [45] with virtually no change. Once we establish an initial lemma, the proofs of the main results (Theorems 4 and 5) are identical. The lemma we require is an adaptation of Lemma 2 in [45] to our Distributed Randomized Kaczmarz algorithm. Our proof proceeds similarly to that in [45]; we include it here for completeness.
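For orientation, the selection loop of Algorithm 2 can be sketched as follows, written against a generic `rtk_solve` routine standing in for K iterations of the RTK. Everything here (names, and the least-squares solve of the reduced system) is an illustrative assumption of ours rather than the authors' code.

```python
import numpy as np

def detect_corrupted(A, b, rtk_solve, W, d, rng):
    """Run W rounds; in each round flag the d equations with the largest
    residuals |A x - b|_j and collect them in S (Algorithm 2 style)."""
    S = set()
    for _ in range(W):
        x = rtk_solve(A, b, rng)                     # K RTK iterations (user-supplied)
        residuals = np.abs(A @ x - b)
        S |= set(np.argsort(residuals)[-d:])         # indices of the d largest residuals
    keep = [j for j in range(len(b)) if j not in S]
    # Solve the retained subsystem A_{S^C} x = b_{S^C} (least-squares here).
    x_hat, *_ = np.linalg.lstsq(A[keep], b[keep], rcond=None)
    return x_hat, S
```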
We establish some notation first. For an arbitrary $U \subset V$, we use $A_{V \setminus U}$ to denote the submatrix of A obtained by removing the rows indexed by U. Similarly, for the vector $\mathbf{b}$, $\mathbf{b}_{V \setminus U}$ consists of the components whose indices are in $V \setminus U$. For the probability distribution $\mathcal{D}$ and $U \subset V$, we denote by $\widetilde{\mathcal{D}}$ the conditional probability distribution on $\mathcal{P}(V \setminus U)$ obtained by conditioning on $I \cap U = \emptyset$ for $I \subset V \setminus U$.
For the remainder of this subsection, let $U \subset V$ denote the support of the noise $\boldsymbol{\epsilon}$, with $|U| = s$; let $\varepsilon^* = \min\{ |\epsilon_j| : j \in U \}$. We assume that $A_{V \setminus U}\, x = \mathbf{b}_{V \setminus U}$ has a unique solution; let $x_S$ be that solution to this restricted system. Let
$$ P(\mathcal{D}, U) = \sum_{I \cap U = \emptyset} P_{\mathcal{D}}(I). $$
Recall that k is the number of variables in the system of equations.
Lemma 1.
Let $0 < \delta < 1$. Define
$$ n^* = \max\left\{ 0,\ \left\lceil \frac{ \log\big( \delta (\varepsilon^*)^2 / (4 \| x_S \|^2) \big) }{ \log \Sigma( A_{V \setminus U}, \widetilde{\mathcal{D}} ) } \right\rceil \right\}. \tag{30} $$
Then, in round i of Algorithm 2 or Algorithm 3, the iterate $x^{(n^*, i)}$ produced by $n^*$ iterations of the RTK satisfies
$$ P\left( \| x^{(n^*, i)} - x_S \| \leq \frac{\varepsilon^*}{2} \right) \geq (1 - \delta)\, P(\mathcal{D}, U)^{n^*}. $$
Proof. 
Let E be the event that the blocks $Z_1, \dots, Z_{n^*}$ chosen according to distribution $\mathcal{D}$ are all uncorrupted, i.e., $Z_j \cap U = \emptyset$ for $j = 1, 2, \dots, n^*$. This is equivalent to applying the RTK algorithm to the system $A_{V \setminus U}\, x = \mathbf{b}_{V \setminus U}$ with distribution $\widetilde{\mathcal{D}}$. Thus, by Theorem 2, we have the conditional expectation:
$$ \mathbb{E}\big[ \| x^{(n^*, i)} - x_S \|^2 \,\big|\, E \big] \leq \mathbb{E}_{A_{V \setminus U}, \mathbf{b}_{V \setminus U}} \big[ \| x^{(n^*, i)} - x_S \|^2 \,\big|\, E \big] \leq \Sigma( A_{V \setminus U}, \widetilde{\mathcal{D}} )^{n^*}\, \| x_S \|^2. $$
From Equation (30), we obtain
$$ \mathbb{E}\big[ \| x^{(n^*, i)} - x_S \|^2 \,\big|\, E \big] \leq \frac{\delta\, (\varepsilon^*)^2}{4}. $$
Thus, by Markov's inequality,
$$ P\left( \| x^{(n^*, i)} - x_S \|^2 \geq \frac{(\varepsilon^*)^2}{4} \,\Big|\, E \right) \leq \frac{ \mathbb{E}\big[ \| x^{(n^*, i)} - x_S \|^2 \,\big|\, E \big] }{ (\varepsilon^*)^2 / 4 } \leq \delta. $$
Hence,
$$ P\left( \| x^{(n^*, i)} - x_S \|^2 \leq \frac{(\varepsilon^*)^2}{4} \,\Big|\, E \right) \geq 1 - \delta, $$
and
$$ P\left( \| x^{(n^*, i)} - x_S \|^2 \leq \frac{(\varepsilon^*)^2}{4} \right) \geq (1 - \delta)\, P(E) \geq (1 - \delta)\, P(\mathcal{D}, U)^{n^*}. $$
□
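For concreteness, the quantities appearing in Lemma 1 and in the detection guarantees below (Theorems 4 and 5) can be evaluated numerically as follows; the inputs (δ, ε*, ‖x_S‖, Σ, P(𝒟, U)) are placeholder values, since in practice they come from the spectral and sampling data of the particular problem.

```python
import math

def n_star(delta, eps_star, x_s_norm, sigma):
    """Smallest n with sigma**n * x_s_norm**2 <= delta * eps_star**2 / 4 (cf. Equation (30))."""
    target = delta * eps_star**2 / (4 * x_s_norm**2)
    return max(0, math.ceil(math.log(target) / math.log(sigma)))

def mrrtk_success_prob(delta, p_DU, n, W):
    """Lower bound of Theorem 4: 1 - (1 - (1 - delta) * p_DU**n)**W."""
    return 1 - (1 - (1 - delta) * p_DU**n) ** W

n = n_star(delta=0.5, eps_star=1.0, x_s_norm=10.0, sigma=0.9)
print(n, mrrtk_success_prob(delta=0.5, p_DU=0.95, n=n, W=50))
```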
The following are Theorems 2 and 3 in [45], respectively, restated to our situation; the proofs are identical using our Lemma 1 and are omitted.
Theorem 4.
Let $0 < \delta < 1$. Fix $d \geq s$, $W \leq \frac{|V| - k}{d}$, and let $n^*$ be as in Equation (30). Then, the MRRTK Algorithm (Algorithm 2) applied to $A,\ \mathbf{b} + \boldsymbol{\epsilon}$ will detect the corrupted equations with probability at least
$$ 1 - \Big( 1 - (1 - \delta)\, P(\mathcal{D}, U)^{n^*} \Big)^W. $$
Theorem 5.
Let $0 < \delta < 1$. Fix $d \geq 1$, $W \leq \frac{|V| - k}{d}$, and let $n^*$ be as in Equation (30). Then, the MRRTKUS Algorithm (Algorithm 3) applied to $A,\ \mathbf{b} + \boldsymbol{\epsilon}$ will detect the corrupted equations, and the remaining equations will have solution $x_S$, with probability at least
$$ 1 - \sum_{j=0}^{\lceil s/d \rceil - 1} \binom{W}{j}\, p^j\, (1 - p)^{W - j}, $$
where $p = (1 - \delta)\, P(\mathcal{D}, U)^{n^*}$.
See [45] for an extensive analysis of numerical experiments of these algorithms, which we do not reproduce here.

4. Numerical Experiments

4.1. The Test Equations

We randomly generated several types of test equations, full and sparse, of various sizes. The results for different types of matrices were very similar, so we just present some results for full matrices with entries generated from a standard normal distribution.
We generated the matrices once, made sure they had full rank, and stored them. Thus, all algorithms are working on the same matrices. However, the sequence of equations used in the random algorithms is generated at runtime.
There are two types of problems we considered. In the underdetermined case, illustrated here with a 255 × 1023 matrix, all algorithms converge to the solution of minimal norm. In the consistent overdetermined case, illustrated here with a 1023 × 255 matrix, all algorithms converge to the unique solution. The matrix dimensions are of the form $2^d - 1$, for easier experimentation with binary trees.
In the inconsistent overdetermined case, deterministic Kaczmarz algorithms will converge to a weighted least-squares solution, depending on the type of algorithm and on the relaxation parameter ω . However, random Kaczmarz algorithms do not converge in this case but do accumulate around a weighted least-squares solution, e.g., Theorem 1 in [15], and Theorem 3.

4.2. The Algorithms

We included several types of deterministic Kaczmarz algorithms in the numerical experiments:
  • Standard Kaczmarz.
  • Sequential block Kaczmarz, with several different numbers of blocks. The equations are divided into a small number of blocks, and the updates are performed as an orthogonal projection onto the solution space of each block, rather than each individual equation.
  • Distributed Kaczmarz based on a binary tree as in [5].
  • Distributed block Kaczmarz. This is distributed Kaczmarz based on a tree of depth 2 with a small number of leaves, where each leaf contains a block of equations.
In sequential block Kaczmarz we work on each block in sequence. In distributed block Kaczmarz we work on each block in parallel and average the results.
The block Kaczmarz case, whether sequential or distributed, is not actually covered by our theory. However, we believe that our results could be extended to this case fairly easily, as long as the equations in each block are underdetermined and have full rank, following the approach in [4].
These deterministic algorithms are compared to corresponding types of random Kaczmarz algorithms:
  • Random standard Kaczmarz; one equation at a time is randomly chosen.
  • Random block Kaczmarz; one block at a time is randomly chosen. There is no difference between sequential and parallel random block Kaczmarz.
  • Random distributed Kaczmarz based on a binary tree, for several kinds of random choices:
    - Generations, that is, we use all nodes at a randomly chosen distance from the root.
    - Families, that is, using the children of a randomly chosen node; for a binary tree, these are pairs.
    - Subtrees, that is, using the subtree rooted at a randomly chosen node. This is not an incomparable choice, so it is not covered by our theory.
For a matrix of size m × k , in the deterministic cases one iteration consists of applying m updates, using each equation once. A block of size n × k counts as n updates. In the random cases, different random choices may involve different numbers of equations; we apply updates until the number of equations used reaches or slightly exceeds m.
After each iteration, we compute the 2-norm of the error. The convergence factor at each step is the factor by which the error has gone down at the last step. These factors often vary considerably in the first few steps and then settle down. The empirical convergence factors given in the tables below are calculated as the geometric average of the convergence factors over the second half of the iterations.
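As a sketch, the empirical convergence factor can be computed from the recorded error norms like this (illustrative code; the error sequence below is made up):

```python
import numpy as np

def empirical_convergence_factor(errors):
    """Geometric mean of the per-step error reduction factors over the
    second half of the recorded 2-norm errors."""
    errors = np.asarray(errors, dtype=float)
    factors = errors[1:] / errors[:-1]           # factor by which the error dropped each step
    half = len(factors) // 2
    return float(np.exp(np.mean(np.log(factors[half:]))))

print(empirical_convergence_factor([1.0, 0.5, 0.26, 0.13, 0.066, 0.033]))  # about 0.5
```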

4.3. Numerical Results

The empirical convergence factors shown in Table 1 and Table 2 are based on 10 iterations. Each convergence factor is computed as the fifth root of the ratio of errors between iterations 5 and 10. For the random algorithms, each experiment (of 10 iterations) was run 20 times and the resulting convergence factors averaged.
For binary trees, with a random choice of generations or families, we also calculated the estimated convergence factors described in Equation (13). These estimates are for one random step. For a binary tree of K levels, it takes on average K choices of generation to use the entire tree once. For families, it is $2^{K-1}$ choices of families (pairs, in this case). To estimate the convergence factor for one iteration, we took a corresponding power of the estimates.
Table 1 shows the convergence factors for the underdetermined case. An empty entry means that the algorithm did not converge for this value of ω .
Here are some observations about Table 1:
  • Sequential block Kaczmarz and random block Kaczmarz for 255 blocks are identical to their standard Kaczmarz counterparts and are not shown.
  • Sequential methods, including standard Kaczmarz and sequential block Kaczmarz, converge faster than parallel methods, such as binary trees or distributed block Kaczmarz. This is not surprising: in sequential methods, each step uses the results of the preceding step; in parallel methods, each step uses older data.
  • The same reasoning explains why the Family selection is faster than Generations. Consider level 3, as an example. With Generations, we do 8 equations in parallel. With Families, we do four sets of 2 equations each, but each pair uses the result from the previous step.
  • The block algorithm for a single block with ω = 1 converges in a single step, so the convergence factor is 0. At the other end of the spectrum, with 255 blocks of one equation each, the block algorithm becomes standard Kaczmarz. As the number of blocks increases, the convergence factor is observed to increase and approach the standard Kaczmarz value.
  • Standard Kaczmarz, deterministic or random, converges precisely for $\omega \in (0, 2)$. By the results in [4], this is also true for sequential block Kaczmarz.
    Distributed Kaczmarz methods are guaranteed to converge for the same range of ω , but in practice they often converge for larger ω as well, sometimes up to ω near 4. Random distributed methods appear to have similar behavior.
  • The observed convergence factors for random algorithms are comparable to those for their deterministic counterparts but slightly worse. We attribute this to the fact that in the underdetermined case, all equations are important; random algorithms on an m × k matrix do not usually include all equations in a set of m updates, while deterministic algorithms do.
    As pointed out in [8], there are types of equations where random algorithms are significantly faster than deterministic algorithms, but our sample equations are obviously not in that category.
Table 2 shows the convergence factors for the consistent overdetermined case.
Observations about Table 2:
  • All algorithms converge faster in the overdetermined consistent case than in the underdetermined case. That is not surprising: we have four times more equations than we actually need, so one complete run through all equations is comparable to four complete run-throughs in the underdetermined case.
  • For the same reason, the parallel block algorithm with four blocks (or fewer) for ω = 1 converges in a single step.
  • We observe that random algorithms are still slower in the overdetermined case, even though the argument from the underdetermined case does not apply here.

Author Contributions

F.K. designed the numerical experiments and produced the associated code. E.S.W. developed the algorithms and proved the qualitative results. Conceptualization, F.K. and E.S.W.; methodology, F.K.; software, F.K.; validation, F.K.; writing—original draft preparation, E.S.W.; writing—review and editing, F.K. All authors have read and agreed to the published version of the manuscript.

Funding

Fritz Keinert and Eric S. Weber were supported in part by the National Science Foundation and the National Geospatial Intelligence Agency under award #1830254. Eric S. Weber was supported in part by the National Science Foundation under award #1934884.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kaczmarz, S. Angenäherte Auflösung von Systemen linearer Gleichungen. Bull. Int. Acad. Pol. Sci. Lett. Cl. Sci. Math. Nat. Ser. A Sci. Math. 1937, 35, 355–357.
  2. Tanabe, K. Projection Method for Solving a Singular System of Linear Equations and its Application. Numer. Math. 1971, 17, 203–214.
  3. Eggermont, P.P.B.; Herman, G.T.; Lent, A. Iterative Algorithms for Large Partitioned Linear Systems, with Applications to Image Reconstruction. Linear Alg. Appl. 1981, 40, 37–67.
  4. Natterer, F. The Mathematics of Computerized Tomography; Teubner: Stuttgart, Germany, 1986.
  5. Hegde, C.; Keinert, F.; Weber, E.S. A Kaczmarz Algorithm for Solving Tree Based Distributed Systems of Equations. In Excursions in Harmonic Analysis; Balan, R., Benedetto, J.J., Czaja, W., Dellatorre, M., Okoudjou, K.A., Eds.; Applied and Numerical Harmonic Analysis; Birkhäuser/Springer: Cham, Switzerland, 2021; Volume 6, pp. 385–411.
  6. West, D.B. Introduction to Graph Theory; Prentice Hall, Inc.: Upper Saddle River, NJ, USA, 1996; p. xvi+512.
  7. Hamaker, C.; Solmon, D.C. The angles between the null spaces of X rays. J. Math. Anal. Appl. 1978, 62, 1–23.
  8. Strohmer, T.; Vershynin, R. A randomized Kaczmarz algorithm with exponential convergence. J. Fourier Anal. Appl. 2009, 15, 262–278.
  9. Zouzias, A.; Freris, N.M. Randomized extended Kaczmarz for solving least squares. SIAM J. Matrix Anal. Appl. 2013, 34, 773–793.
  10. Needell, D.; Zhao, R.; Zouzias, A. Randomized block Kaczmarz method with projection for solving least squares. Linear Algebra Appl. 2015, 484, 322–343.
  11. Needell, D.; Srebro, N.; Ward, R. Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm. Math. Progr. 2016, 155, 549–573.
  12. Cimmino, G. Calcolo approssimato per soluzioni dei sistemi di equazioni lineari. In La Ricerca Scientifica XVI, Series II, Anno IX 1; Consiglio Nazionale delle Ricerche: Rome, Italy, 1938; pp. 326–333.
  13. Censor, Y.; Gordon, D.; Gordon, R. Component averaging: An efficient iterative parallel algorithm for large and sparse unstructured problems. Parallel Comput. 2001, 27, 777–808.
  14. Necoara, I. Faster randomized block Kaczmarz algorithms. SIAM J. Matrix Anal. Appl. 2019, 40, 1425–1452.
  15. Moorman, J.D.; Tu, T.K.; Molitor, D.; Needell, D. Randomized Kaczmarz with averaging. BIT Numer. Math. 2021, 61, 337–359.
  16. Tsitsiklis, J.; Bertsekas, D.; Athans, M. Distributed asynchronous deterministic and stochastic gradient optimization algorithms. IEEE Trans. Autom. Control 1986, 31, 803–812.
  17. Xiao, L.; Boyd, S.; Kim, S.J. Distributed average consensus with least-mean-square deviation. J. Parallel Distrib. Comput. 2007, 67, 33–46.
  18. Shah, D. Gossip Algorithms. Found. Trends Netw. 2008, 3, 1–125.
  19. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122.
  20. Nedic, A.; Ozdaglar, A. Distributed subgradient methods for multi-agent optimization. IEEE Trans. Autom. Control 2009, 54, 48.
  21. Johansson, B.; Rabi, M.; Johansson, M. A randomized incremental subgradient method for distributed optimization in networked systems. SIAM J. Optim. 2009, 20, 1157–1170.
  22. Yuan, K.; Ling, Q.; Yin, W. On the convergence of decentralized gradient descent. SIAM J. Optim. 2016, 26, 1835–1854.
  23. Sayed, A.H. Adaptation, learning, and optimization over networks. Found. Trends Mach. Learn. 2014, 7, 311–801.
  24. Zhang, X.; Liu, J.; Zhu, Z.; Bentley, E.S. Compressed Distributed Gradient Descent: Communication-Efficient Consensus over Networks. In Proceedings of the IEEE INFOCOM 2019—IEEE Conference on Computer Communications, Paris, France, 29 April–2 May 2019; pp. 2431–2439.
  25. Scaman, K.; Bach, F.; Bubeck, S.; Massoulié, L.; Lee, Y.T. Optimal algorithms for non-smooth distributed optimization in networks. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 3–8 December 2018; pp. 2740–2749.
  26. Loizou, N.; Richtárik, P. Revisiting Randomized Gossip Algorithms: General Framework, Convergence Rates and Novel Block and Accelerated Protocols. arXiv 2019, arXiv:1905.08645.
  27. Necoara, I.; Nesterov, Y.; Glineur, F. Random block coordinate descent methods for linearly constrained optimization over networks. J. Optim. Theory Appl. 2017, 173, 227–254.
  28. Necoara, I.; Nesterov, Y.; Glineur, F. Linear convergence of first order methods for non-strongly convex optimization. Math. Progr. 2019, 175, 69–107.
  29. Bertsekas, D.P.; Tsitsiklis, J.N. Parallel and Distributed Computation: Numerical Methods; Athena Scientific: Nashua, NH, USA, 1997; Available online: http://hdl.handle.net/1721.1/3719 (accessed on 1 December 2021).
  30. Kamath, G.; Ramanan, P.; Song, W.Z. Distributed Randomized Kaczmarz and Applications to Seismic Imaging in Sensor Network. In Proceedings of the 2015 International Conference on Distributed Computing in Sensor Systems, Fortaleza, Brazil, 10–12 June 2015; pp. 169–178.
  31. Herman, G.T.; Hurwitz, H.; Lent, A.; Lung, H.P. On the Bayesian approach to image reconstruction. Inform. Control 1979, 42, 60–71.
  32. Hansen, P.C. Discrete Inverse Problems: Insight and Algorithms; Fundamentals of Algorithms; Society for Industrial and Applied Mathematics (SIAM): Philadelphia, PA, USA, 2010; Volume 7, p. xii+213.
  33. Liu, J.; Wright, S.J.; Sridhar, S. An asynchronous parallel randomized Kaczmarz algorithm. arXiv 2014, arXiv:1401.4780.
  34. Herman, G.T.; Lent, A.; Hurwitz, H. A storage-efficient algorithm for finding the regularized solution of a large, inconsistent system of equations. J. Inst. Math. Appl. 1980, 25, 361–366.
  35. Chi, Y.; Lu, Y.M. Kaczmarz method for solving quadratic equations. IEEE Signal Process. Lett. 2016, 23, 1183–1187.
  36. Crombez, G. Finding common fixed points of strict paracontractions by averaging strings of sequential iterations. J. Nonlinear Convex Anal. 2002, 3, 345–351.
  37. Crombez, G. Parallel algorithms for finding common fixed points of paracontractions. Numer. Funct. Anal. Optim. 2002, 23, 47–59.
  38. Nikazad, T.; Abbasi, M.; Mirzapour, M. Convergence of string-averaging method for a class of operators. Optim. Methods Softw. 2016, 31, 1189–1208.
  39. Reich, S.; Zalas, R. A modular string averaging procedure for solving the common fixed point problem for quasi-nonexpansive mappings in Hilbert space. Numer. Algorithms 2016, 72, 297–323.
  40. Censor, Y.; Zaslavski, A.J. Convergence and perturbation resilience of dynamic string-averaging projection methods. Comput. Optim. Appl. 2013, 54, 65–76.
  41. Zaslavski, A.J. Dynamic string-averaging projection methods for convex feasibility problems in the presence of computational errors. J. Nonlinear Convex Anal. 2014, 15, 623–636.
  42. Witt, M.; Schultze, B.; Schulte, R.; Schubert, K.; Gomez, E. A proton simulator for testing implementations of proton CT reconstruction algorithms on GPGPU clusters. In Proceedings of the 2012 IEEE Nuclear Science Symposium and Medical Imaging Conference Record (NSS/MIC), Anaheim, CA, USA, 27 October–3 November 2012; pp. 4329–4334.
  43. Censor, Y.; Nisenbaum, A. String-averaging methods for best approximation to common fixed point sets of operators: The finite and infinite cases. Fixed Point Theory Algorithms Sci. Eng. 2021, 21, 9.
  44. Censor, Y.; Tom, E. Convergence of string-averaging projection schemes for inconsistent convex feasibility problems. Optim. Methods Softw. 2003, 18, 543–554.
  45. Haddock, J.; Needell, D. Randomized projections for corrupted linear systems. In Proceedings of the AIP Conference Proceedings, Thessaloniki, Greece, 25–30 September 2017; Volume 1978, p. 470071. [Google Scholar]
  46. Borgard, R.; Harding, S.N.; Duba, H.; Makdad, C.; Mayfield, J.; Tuggle, R.; Weber, E.S. Accelerating the distributed Kaczmarz algorithm by strong over-relaxation. Linear Algebra Appl. 2021, 611, 334–355. [Google Scholar] [CrossRef]
  47. Needell, D. Randomized Kaczmarz solver for noisy linear systems. BIT 2010, 50, 395–403. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Illustration of updates in the distributed Kaczmarz algorithm with measurements indexed by nodes of the tree. (a) Updates disperse through nodes, (b) updates pool and pass to next iteration.
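The following minimal Python sketch mirrors the two phases shown in Figure 1: during dispersal each node applies its own (relaxed) Kaczmarz update and passes the estimate to its children, and during pooling each internal node averages the estimates its children return. The toy tree, the 3 × 2 consistent system, the equal pooling weights, and the relaxation parameter omega are illustrative assumptions for this sketch, not the paper's actual configuration.

import numpy as np

def kaczmarz_update(x, a, b, omega=1.0):
    # Relaxed Kaczmarz step toward the hyperplane a^T x = b
    # (omega = 1 is the plain orthogonal projection of Equation (1)).
    return x + omega * (b - a @ x) / (a @ a) * a

def sweep(node, x, children, A, b, omega=1.0):
    # Dispersal (Figure 1a): apply this node's update and send the
    # estimate to each child; pooling (Figure 1b): average what the
    # children send back. A leaf simply returns its updated estimate.
    x = kaczmarz_update(x, A[node], b[node], omega)
    kids = children.get(node, [])
    if not kids:
        return x
    return np.mean([sweep(c, x, children, A, b, omega) for c in kids], axis=0)

# Illustrative 3-node tree (root 0 with leaves 1 and 2) and a consistent system.
children = {0: [1, 2]}
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])

x = np.zeros(2)
for _ in range(50):
    x = sweep(0, x, children, A, b)   # one dispersal/pooling pass per iteration
print(x)  # approaches the solution [1, 2]

In this sketch the single estimate returned to the root becomes the input for the next pass, so repeated calls to sweep play the role of successive iterations.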
Table 1. Convergence factors for various algorithms, for a random underdetermined equation of size 255 × 1023 . Numbers in parentheses represent the estimates from Equation (13).
Relaxation Parameter ω      0.5       1         1.5       2         2.5       3         3.5
deterministic
  standard                  0.8138    0.5428    0.5579
  sequential blocks
    4 blocks                0.7963    0.4926    0.5227
    16 blocks               0.8101    0.5362    0.5531
    64 blocks               0.8135    0.5448    0.5628
  parallel blocks
    4 blocks                0.9346    0.8947    0.8604    0.8276    0.7948    0.7616    0.7273
    16 blocks               0.9800    0.9636    0.9499    0.9379    0.9271    0.9173    0.9080
    64 blocks               0.9947    0.9897    0.9849    0.9804    0.9761    0.9720    0.9681
    255 blocks              0.9986    0.9973    0.9960    0.9947    0.9934    0.9922    0.9909
  binary tree               0.9941    0.9903    0.9870    0.9841
random
  standard                  0.8440    0.7472    0.7039
  blocks
    4 blocks                0.8013    0.6736    0.6724
    16 blocks               0.8146    0.7162    0.7001
    64 blocks               0.8252    0.7393    0.7136
  binary tree
    family                  0.9055    0.8510    0.8133    0.7742    0.7692    0.7528    0.8178
                            (0.9099)  (0.8817)  (0.9099)
    generation              0.9940    0.9903    0.9874    0.9849
                            (0.9985)  (0.9980)  (0.9985)
    subtree                 0.9352    0.9017    0.8617    0.8735    0.9078
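For orientation (under the usual reading that a convergence factor ρ describes the per-sweep contraction of the error norm, which is an interpretive assumption here rather than a statement from the text), the deterministic "standard" entry of 0.5428 at ω = 1 would correspond to ∥x(n) − x∥ ≈ 0.5428^n ∥x(0) − x∥ after n sweeps, i.e., roughly a tenfold error reduction every four sweeps.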
Table 2. Convergence factors for various algorithms, for a random overdetermined consistent equation of size 1023 × 255 . Numbers in parentheses represent the estimates from Equation (13).
Relaxation Parameter ω      0.5       1         1.5       2         2.5       3         3.5
deterministic
  standard                  0.4923    0.2564    0.2424
  sequential blocks
    4 blocks                0.0625    0.0000    0.0625
    16 blocks               0.4311    0.1863    0.1894
    64 blocks               0.4759    0.2131    0.2263
    255 blocks              0.4930    0.2588    0.2409
  parallel blocks
    4 blocks                0.5000    0.0000    0.5000
    16 blocks               0.9125    0.8696    0.8303    0.7919    0.7550    0.7195    0.6795
    64 blocks               0.9707    0.9483    0.9313    0.9178    0.9063    0.8958    0.8857
    256 blocks              0.9920    0.9845    0.9775    0.9709    0.9647    0.9590    0.9536
    1023 blocks             0.9980    0.9960    0.9940    0.9920    0.9901    0.9882    0.9864
  binary tree               0.9889    0.9817    0.9758    0.9704
random
  standard                  0.5481    0.3421    0.2748
  blocks
    4 blocks                0.0625    0.0000    0.0625
    16 blocks               0.4451    0.2464    0.2291
    64 blocks               0.4690    0.2847    0.2579
    256 blocks              0.4812    0.2874    0.2675
  binary tree
    family                  0.7040    0.5401    0.4225    0.3356    0.2615    0.2769    0.4462
                            (0.6879)  (0.6073)  (0.6879)
    generation              0.9889    0.9809    0.9768    0.9709
                            (0.9985)  (0.9980)  (0.9985)
    subtree                 0.8349    0.7474    0.6486    0.6408    0.7103
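The convergence factors reported in Tables 1 and 2 are numerical measurements. One plausible way to extract such a factor from an experiment (an assumption about methodology, not the paper's stated procedure) is the geometric mean of successive error-norm ratios, sketched below in Python.

import numpy as np

def empirical_convergence_factor(error_norms):
    # Geometric mean of successive ratios ||e_{n+1}|| / ||e_n||,
    # i.e., the average per-sweep contraction of the error norm.
    e = np.asarray(error_norms, dtype=float)
    ratios = e[1:] / e[:-1]
    return float(np.exp(np.mean(np.log(ratios))))

# A sequence decaying like 0.9**n is recovered as a factor of about 0.9.
errors = [0.9 ** n for n in range(1, 31)]
print(empirical_convergence_factor(errors))  # ~0.9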