Article

Anderson Acceleration of the Arnoldi-Inout Method for Computing PageRank

Xia Tang, Chun Wen, Xian-Ming Gu and Zhao-Li Shen
1 School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 610054, China
2 School of Economic Mathematics, Southwestern University of Finance and Economics, Chengdu 611130, China
3 College of Science, Sichuan Agricultural University, Ya’an 625000, China
* Authors to whom correspondence should be addressed.
Symmetry 2021, 13(4), 636; https://doi.org/10.3390/sym13040636
Submission received: 15 March 2021 / Revised: 29 March 2021 / Accepted: 1 April 2021 / Published: 10 April 2021

Abstract

Anderson(m_0) extrapolation, an accelerator for a fixed-point iteration, stores m_0 + 1 prior evaluations of the fixed-point iteration and computes a linear combination of those evaluations as a new iterate. The computational cost of Anderson(m_0) acceleration grows as the parameter m_0 increases, so a small m_0 is the common choice in practice. In this paper, with the aim of improving the computation of PageRank problems, a new method is developed by applying Anderson(1) extrapolation at periodic intervals within the Arnoldi-Inout method. The new method is called the AIOA method. Convergence analysis of the AIOA method is discussed in detail. Numerical results on several PageRank problems are presented to illustrate the effectiveness of our proposed method.

1. Introduction

As the core technology of network information retrieval, Google’s PageRank model (called the PageRank problem) uses the original hyperlink structure of the World Wide Web to determine the importance of each page and has received a lot of attention in the last two decades. The core of the PageRank problem is to compute a dominant eigenvector (or PageRank vector) of the Google matrix A by using the classical power method [1]:
$$A x = x, \qquad A = \alpha P + (1 - \alpha)\, v e^{T}, \qquad \|x\|_{1} = 1, \qquad (1)$$
where x is the PageRank vector, e is a column vector with all elements equal to 1, v is a personalized vector whose elements sum to 1, P is a column-stochastic matrix (i.e., the columns corresponding to dangling nodes have been replaced by columns with all entries equal to 1/n), and α ∈ (0, 1) is a damping factor.
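For concreteness, the following is a minimal sketch of the power iteration (1) in Python (not the authors' implementation, which uses MATLAB), assuming P is supplied as a SciPy sparse column-stochastic matrix with dangling columns already replaced; the function name, default parameters, and stopping test are illustrative.

```python
import numpy as np


def pagerank_power(P, alpha=0.99, tol=1e-8, max_iter=10000):
    """Classical power method x_{k+1} = alpha*P*x_k + (1 - alpha)*v for Equation (1)."""
    n = P.shape[0]
    v = np.full(n, 1.0 / n)                 # personalization vector, entries sum to 1
    x = v.copy()                            # initial guess x_0 = v
    for _ in range(max_iter):
        x_new = alpha * (P @ x) + (1.0 - alpha) * v
        x_new /= np.abs(x_new).sum()        # keep ||x||_1 = 1
        if np.abs(x_new - x).sum() < tol:   # 1-norm of the change as a simple stopping test
            break
        x = x_new
    return x_new
```

Here the 1-norm of the successive difference is used as a simple stopping test; the experiments reported later in the paper use the 2-norm of the residual instead.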
As the damping factor α approaches 1, the Google matrix comes closer to the original hyperlink structure. However, for large α, such as α ≥ 0.99, the second eigenvalue of the matrix A (equal to α) is close to the dominant eigenvalue (equal to 1) [2], so the classical power method suffers from slow convergence. In order to accelerate the power method, many new algorithms have been proposed for computing PageRank problems. The quadratic extrapolation method proposed by Kamvar et al. [3] accelerates the convergence by periodically subtracting estimates of non-dominant eigenvectors from the current iterate of the power method. It is worth mentioning that the authors of [4] provide a theoretical justification for such acceleration methods, generalizing quadratic extrapolation and interpreting it as a Krylov subspace method. Gleich et al. [5] proposed an inner-outer iteration, wherein an inner PageRank linear system with a smaller damping factor is solved in each iteration. The inner-outer iteration shows good potential as a framework for accelerating PageRank computations, and a series of methods have been proposed based on it. For example, Gu et al. [6] constructed the power-inner-outer (PIO) method by combining the inner-outer iteration with the power method. Different versions of the Arnoldi algorithm applied to PageRank computations were first introduced in [7]. Gu and Wang [8] proposed the Arnoldi-Inout (AIO) algorithm by knitting the inner-outer iteration with the thick restarted Arnoldi algorithm [9]. Hu et al. [10] proposed a variant of the Power-Arnoldi (PA) algorithm [11] by using an extrapolation process based on the trace of the Google matrix A [12].
Anderson(m_0) acceleration [13,14] has been widely used to accelerate the convergence of fixed-point iterations. Its principle is to store m_0 + 1 prior evaluations of the fixed-point map and to compute a linear combination of those evaluations as the new iterate; Anderson(0) is the underlying fixed-point iteration itself. Note that when the parameter m_0 becomes large, the computational cost of Anderson(m_0) acceleration becomes expensive. Hence, in most applications, m_0 is chosen to be small, and we take m_0 = 1 throughout this paper. In [15], Toth and Kelley proved that Anderson(1) extrapolation is locally q-linearly convergent. Pratapa et al. [16] developed the Alternating Anderson–Jacobi (AAJ) method by periodically employing Anderson extrapolation to accelerate the classical Jacobi iteration for sparse linear systems.
In this paper, with the aim of accelerating the Arnoldi-Inout method for computing PageRank problems, Anderson(1) extrapolation is used as an accelerator: a new method is obtained by periodically combining Anderson(1) extrapolation with the Arnoldi-Inout method. Our proposed method is called the AIOA method. Its construction and convergence behavior are analyzed in detail, and numerical experiments demonstrate the effectiveness of the new algorithm.
The rest of this article is structured as follows: In Section 2, we briefly review Anderson acceleration and the Arnoldi-Inout method for PageRank problems. In Section 3, the AIOA method is constructed and its convergence behavior is discussed. In Section 4, numerical comparisons are reported. Finally, in Section 5, we give some conclusions.

2. Previous Work

2.1. Anderson Acceleration

Anderson acceleration (also known as Anderson mixing) has been widely used in electronic structure computations [17]. Walker et al. [14] developed it for solving fixed-point problems x = g(x), where x ∈ R^n and g : R^n → R^n. They showed that Anderson acceleration without truncation is essentially equivalent, in a certain sense, to the generalized minimal residual method (GMRES) [18] for linear problems. It has also been proved that the Anderson iteration is convergent if the fixed-point map g is a contraction and the coefficients in the linear combination remain bounded [15].
In this paper, we consider Anderson(1) acceleration, which stores two prior evaluations g(x_0) and g(x_1) and then computes x_2 (a linear combination of g(x_0) and g(x_1)) as the new iterate. The main algorithmic steps of Anderson(1) are given in Algorithm 1.
Algorithm 1 The Anderson(1) acceleration
(1) Given an initial vector x_0.
(2) Compute x_1 = g(x_0), where g is a fixed-point iteration.
(3) Compute F = [f_0, f_1], where f_i = g(x_i) − x_i, i = 0, 1.
(4) Compute γ = (γ_0, γ_1)^T that satisfies
$$\min_{\gamma = (\gamma_0, \gamma_1)^{T}} \|F \gamma\|_{2}, \quad \text{s.t.} \ \sum_{i=0}^{1} \gamma_i = 1. \qquad (2)$$
(5) Compute x_2 = γ_0 g(x_0) + (1 − γ_0) g(x_1).
According to [15], the constrained linear least-squares problem (2) in step 4 of Algorithm 1 can be reformulated as an equivalent unconstrained least-squares problem:
$$\min_{\gamma_0} \| f_1 + (f_0 - f_1)\, \gamma_0 \|_{2}. \qquad (3)$$
The unconstrained least-squares problem (3) is easy to solve; for example, Pratapa et al. [16] used the generalized inverse to compute γ_0, while Walker et al. [19] used a QR decomposition [18].
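As an illustration, the following Python sketch performs one Anderson(1) step in the spirit of Algorithm 1, solving the one-dimensional problem (3) in closed form (an equivalent alternative to the generalized-inverse or QR options mentioned above); g is assumed to be any fixed-point map supplied as a callable, and the function name is illustrative.

```python
import numpy as np


def anderson1_step(g, x0):
    """One Anderson(1) step: mix g(x0) and g(x1) using the minimizer of (3)."""
    gx0 = g(x0)                    # g(x_0)
    x1 = gx0                       # x_1 = g(x_0)
    gx1 = g(x1)                    # g(x_1)
    f0, f1 = gx0 - x0, gx1 - x1    # residuals f_i = g(x_i) - x_i
    d = f0 - f1
    dd = np.dot(d, d)
    gamma0 = -np.dot(d, f1) / dd if dd > 0 else 0.0   # minimizer of ||f1 + (f0 - f1)*gamma0||_2
    return gamma0 * gx0 + (1.0 - gamma0) * gx1        # x_2, step (5) of Algorithm 1
```

In the AIOA context of Section 3, the role of g is played by one pass of the inner-outer iteration reviewed next.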

2.2. The Arnoldi-Inout Method for Computing PageRank

Gu and Wang [8] proposed the Arnoldi-Inout method by preconditioning the inner-outer iteration with the thick restarted Arnoldi method. Its algorithmic version can be found in Algorithm 2.
Algorithm 2 The Arnoldi-Inout method [8]
Input: an initial vector x_0, the size of the subspace m, the number of approximate eigenvectors that are retained from one cycle to the next p̂, an inner tolerance η, an outer tolerance τ, three parameters α_1, α_2, and maxit to control the inner-outer iteration. Set restart = 0, r = 1, d = 1, d_0 = d.
Output: PageRank vector x.
(1). Apply the thick restarted Arnoldi algorithm [8,9] a few times (2–3 times). If the residual norm satisfies the prescribed tolerance, then stop; otherwise, continue.
(2). Run the inner-outer iteration with x as the initial guess, where x is the approximate vector obtained from the thick restarted Arnoldi algorithm:
restart = 0;
2.1. While restart < maxit & r > τ
2.2.    x = x/‖x‖_1; z = Px;
2.3.    r = ‖αz + (1 − α)v − x‖_2;
2.4.    r_0 = r; r_1 = r; ratio = 0;
2.5.    While ratio < α_1 & r > τ
2.6.       f = (α − β)z + (1 − α)v;
2.7.       ratio1 = 0;
2.8.       While ratio1 < α_2 & d > η
2.9.          x = f + βz; z = Px;
2.10.         d = ‖f + βz − x‖_2;
2.11.         ratio1 = d/d_0; d_0 = d;
2.12.      End While
2.13.      r = ‖αz + (1 − α)v − x‖_2;
2.14.      ratio = r/r_0; r_0 = r;
2.15.   End While
2.16.   x = αz + (1 − α)v; x = x/‖x‖_1;
2.17.   If r/r_1 > α_1
2.18.      restart = restart + 1;
2.19.   End If
2.20. End While
2.21. If r ≤ τ, stop; else go to step 1.
For Algorithm 2, two remarks are in order:
(1) A detailed description of the thick restarted Arnoldi algorithm used in step 1 can be found in [8,9]; we omit its implementation here for conciseness.
(2) The parameters α_1, α_2, restart, and maxit control the switching between the inner-outer iteration and the thick restarted Arnoldi algorithm. The specific switching mechanism and more details can be found in [8].
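To make the role of the inner-outer kernel in steps 2.5–2.15 more concrete, here is a minimal Python sketch of the basic inner-outer iteration of [5] on its own, outside the Arnoldi framework; P is assumed to be a SciPy sparse column-stochastic matrix, the tolerances η (inner) and τ (outer) follow the paper's notation, and the flip-flop control of Algorithm 2 (α_1, α_2, restart, maxit) is deliberately omitted.

```python
import numpy as np


def inner_outer(P, v, alpha=0.99, beta=0.5, eta=1e-2, tau=1e-8, max_outer=10000):
    """Basic inner-outer PageRank iteration: inner Richardson sweeps for (I - beta*P)x = f."""
    x = v.copy()
    z = P @ x
    for _ in range(max_outer):
        if np.linalg.norm(alpha * z + (1.0 - alpha) * v - x) < tau:   # outer residual
            break
        f = (alpha - beta) * z + (1.0 - alpha) * v    # f = (alpha - beta)z + (1 - alpha)v
        while True:                                   # inner sweeps, stopped with tolerance eta
            x = f + beta * z
            z = P @ x
            if np.linalg.norm(f + beta * z - x) < eta:
                break
    x = alpha * z + (1.0 - alpha) * v
    return x / np.abs(x).sum()
```

Because β < α, the inner systems are much better conditioned than the original PageRank system, which is the source of the framework's efficiency.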

3. The AIOA Method for Computing PageRank

In this section, we combine the Arnoldi-Inout method with the Anderson(1) acceleration. The new method is called the AIOA method, which can be understood as the Arnoldi-Inout method accelerated with the Anderson(1) extrapolation. We first describe the construction of the AIOA method and then analyze its convergence behavior.

3.1. The Construction of the AIOA Method

The mechanism of the AIOA method can be described as follows: We first run the Arnoldi-Inout method with a given initial guess x_0 to obtain an approximation vector x̃_1. If this approximation is unsatisfactory, we treat the inner-outer iteration as a fixed-point problem and run Algorithm 1 with x̃_1 as the starting vector to obtain another approximation x_new. If x_new does not work better than the approximation x̃_3 produced by the fixed-point iteration, we set x_new = x̃_3. If the resulting vector x_new still does not reach the prescribed accuracy, we return to the Arnoldi-Inout method with x_new as the starting vector. This process is repeated until the required accuracy is reached. The specific algorithmic version is given as Algorithm 3.

3.2. Convergence Analysis

The convergence of the Arnoldi-Inout method and that of the Anderson acceleration can be found in [8,14,15]. In this subsection, we analyze the convergence of the AIOA method. Specifically, the convergence analysis of Algorithm 3 focuses on the process when turning from the Anderson(1) acceleration to the Arnoldi-Inout method.
Algorithm 3 The AIOA method
(1). Given a unit initial guess x_0, an inner tolerance η, an outer tolerance τ, the size of the subspace m, the number of approximate eigenvectors that are retained from one cycle to the next p̂, and three parameters α_1, α_2, and maxit to control the inner-outer iteration. Set restart = 0, r = 1, d = 1, d_0 = d, l = 1.
(2). Run Algorithm 2 with the initial vector x_0. If the residual norm satisfies τ, then stop; otherwise, continue.
(3). Run Algorithm 1 with x̃_1 as the starting guess, where x̃_1 is the approximation vector obtained from step 2, treating the inner-outer iteration as the fixed-point map:
3.1.  l = 1; z = P x̃_1;
3.2. While l < 3 & r > τ
3.3.    f = (α − β)z + (1 − α)v;
3.4.    Repeat
3.5.       x = f + βz; z = Px;
3.6.    Until ‖f + βz − x‖_2 < η
3.7.    l = l + 1;
3.8.    x̃_l = αz + (1 − α)v;
3.9.    r = ‖x̃_l − x‖_2;
3.10. End While
3.11. Compute f_0 = x̃_2 − x̃_1, f_1 = x̃_3 − x̃_2.
3.12. Compute γ_0 that satisfies min_{γ_0} ‖f_1 + (f_0 − f_1)γ_0‖_2.
3.13. Compute x_new = γ_0 x̃_2 + (1 − γ_0) x̃_3.
3.14. If ‖x̃_3 − x̃_2‖_2 < ‖x_new − x̃_2‖_2
3.15.    x_new = x̃_3;
3.16. else
3.17.    r = ‖x_new − x̃_2‖_2;
3.18. End If
3.19. If r ≤ τ, stop; else go back to step 2 with the vector x_new as the starting vector.
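For illustration, the Anderson(1) correction and safeguard of steps 3.11–3.18 can be written compactly as follows (a Python sketch rather than the authors' MATLAB code; the scalar problem in step 3.12 is solved in closed form, which is equivalent to the QR option used in the experiments of Section 4).

```python
import numpy as np


def aioa_anderson_step(x1, x2, x3):
    """Steps 3.11-3.18: Anderson(1) combination of the iterates x~_1, x~_2, x~_3 with safeguard."""
    f0, f1 = x2 - x1, x3 - x2                          # step 3.11
    d = f0 - f1
    dd = np.dot(d, d)
    gamma0 = -np.dot(d, f1) / dd if dd > 0 else 0.0    # step 3.12: minimizer of ||f1 + (f0 - f1)*gamma0||_2
    x_new = gamma0 * x2 + (1.0 - gamma0) * x3          # step 3.13
    if np.linalg.norm(x3 - x2) < np.linalg.norm(x_new - x2):
        x_new = x3                                     # steps 3.14-3.15: fall back to x~_3
    return x_new
```

The safeguard guarantees that the accepted vector is never worse, in the sense of the test in step 3.14, than the plain inner-outer iterate x̃_3.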
Let L_{m−1} denote the set of polynomials of degree not exceeding m − 1, and let σ(A) denote the set of eigenvalues of the matrix A. Assume that the eigenvalues of A are ordered so that 1 = λ_1 > |λ_2| ≥ ⋯ ≥ |λ_n|. The following theorem, due to Saad [20], describes the relationship between the eigenvector μ_1 and the Krylov subspace K_m.
Theorem 1 ([20]). Assume that A is diagonalizable and that the initial vector v_0 in Arnoldi's method has the expansion v_0 = Σ_{i=1}^{n} ζ_i μ_i with respect to the eigenbasis {μ_i}_{i=1,2,...,n}, in which ‖μ_i‖_1 = 1, i = 1, 2, ..., n, and ζ_1 ≠ 0. Then the following inequality holds:
$$\|(I - P_m)\mu_1\|_2 \le \xi\, \varepsilon^{(m)}, \qquad (4)$$
where P_m is the orthogonal projector onto the subspace K_m(A, v_0), ξ = Σ_{i=2}^{n} |ζ_i|/|ζ_1|, and
$$\varepsilon^{(m)} = \min_{p \in L_{m-1},\, p(\lambda_1)=1} \ \max_{\lambda \in \sigma(A) \setminus \{\lambda_1\}} |p(\lambda)|.$$
To analyze the convergence speed of our algorithm, we recall two useful theorems about the spectral properties of the Google matrix A.
Theorem 2 ([21]). Assume that the spectrum of the column-stochastic matrix P is {1, π_2, ..., π_n}. Then the spectrum of the matrix A = αP + (1 − α)ve^T is {1, απ_2, ..., απ_n}, where α ∈ (0, 1) and v is a vector with nonnegative elements such that e^T v = 1.
Theorem 3 ([2]). Let P be an n × n column-stochastic matrix. Let α be a real number such that 0 < α < 1. Let E be the n × n rank-one column-stochastic matrix E = ve^T, where e is the n-vector whose elements are all ones and v is an n-vector whose elements are all nonnegative and sum to 1. Let A = αP + (1 − α)E, which is again an n × n column-stochastic matrix; then its dominant eigenvalue is λ_1 = 1, and |λ_2| ≤ α.
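The following small Python check (not part of the paper) illustrates Theorems 2 and 3 on a random column-stochastic matrix: the second-largest eigenvalue modulus of A = αP + (1 − α)ve^T equals α times that of P, and is therefore at most α.

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha = 6, 0.99
P = rng.random((n, n))
P /= P.sum(axis=0)                                        # make P column-stochastic
v = np.full(n, 1.0 / n)
A = alpha * P + (1.0 - alpha) * np.outer(v, np.ones(n))   # A = alpha*P + (1 - alpha)*v*e^T
modP = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
modA = np.sort(np.abs(np.linalg.eigvals(A)))[::-1]
print(modA[0])                    # ~1.0 (dominant eigenvalue)
print(modA[1], alpha * modP[1])   # these two agree, and both are <= alpha
```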
In the Arnoldi-Inout method, let v_0, the vector produced by the previous thick restarted Arnoldi cycle, be the starting vector for the inner-outer iteration. The inner-outer method then produces the vector v_1 = G^k v_0, where k ≤ maxit and G = (I − βP)^{-1}[(α − β)P + (1 − α)ve^T]; the derivation of the iteration matrix G can be found in [5]. In our proposed method, we run Algorithm 1 with v_1 as the initial vector. Note that in the Anderson(1) acceleration we treat the inner-outer iteration as a fixed-point iteration, so the new vector v_new = ω[(1 − γ_0)G^2 v_1 + γ_0 G v_1] is produced, where ω is a normalizing factor. If the vector v_new does not work better than the vector G^2 v_1, then, as specified in Algorithm 3, we set v_new = G^2 v_1, which means that the Anderson(1) acceleration reduces to the inner-outer iteration, and the convergence of Algorithm 3 is then immediate. Hence, we only need to discuss the convergence in the other case, when the vector v_new = ω[(1 − γ_0)G^2 v_1 + γ_0 G v_1] works better than the vector G^2 v_1.
In the next cycle of the AIOA algorithm, an m-step Arnoldi process is run with v_new as the starting vector, and the new Krylov subspace
K_m(A, v_new) = span{v_new, A v_new, ..., A^{m−1} v_new}
is constructed. Next, we present a theorem that describes the convergence of the AIOA method.
Theorem 4. Suppose that the matrix A is diagonalizable, and denote by P̃_m the orthogonal projector onto the subspace K_m(A, v_new). Then, under the notation of Theorem 1,
$$\|(I - \tilde{P}_m)\mu_1\|_2 \le \Lambda \cdot \left(\frac{\alpha-\beta}{1-\beta}\right)^{k+1} \cdot \xi\, \varepsilon^{(m)}, \qquad (5)$$
where Λ = (1 − γ_0)(α − β)/(1 − β) + γ_0, ξ = Σ_{i=2}^{n} |ζ_i|/|ζ_1|, ε^{(m)} = min_{p ∈ L_{m−1}, p(λ_1)=1} max_{λ ∈ σ(A)∖{λ_1}} |p(λ)|, and k ≤ maxit.
Proof of Theorem 4.
For any u ∈ K_m(A, v_new), there exists q(x) ∈ L_{m−1} such that
$$u = q(A)\, v_{new} = \omega\, q(A)\left[(1-\gamma_0) G^{2} + \gamma_0 G\right] v_1 = \omega\, q(A)\left[(1-\gamma_0) G^{k+2} + \gamma_0 G^{k+1}\right] v_0 = \omega\, q(A)\left[(1-\gamma_0) G^{k+2} + \gamma_0 G^{k+1}\right]\left(\zeta_1 \mu_1 + \sum_{i=2}^{n} \zeta_i \mu_i\right), \qquad (6)$$
where v_0 = Σ_{i=1}^{n} ζ_i μ_i is the expansion of v_0 in the eigenbasis {μ_1, μ_2, ..., μ_n}.
As shown in [5,8], we have
$$G = (I - \beta P)^{-1}\left[(\alpha - \beta) P + (1 - \alpha) v e^{T}\right] = (I - \beta P)^{-1} A - (I - \beta P)^{-1} + I,$$
and therefore
$$G \mu_i = (I - \beta P)^{-1} A \mu_i - (I - \beta P)^{-1} \mu_i + \mu_i = (I - \beta P)^{-1} \lambda_i \mu_i - (I - \beta P)^{-1} \mu_i + \mu_i = (\lambda_i - 1)(I - \beta P)^{-1} \mu_i + \mu_i,$$
where we use A μ_i = λ_i μ_i, i = 1, 2, ..., n.
Assume that π_i is an eigenvalue of P; by Theorem 2, π_1 = 1 and π_i = λ_i/α, i = 2, 3, ..., n. Then the matrix (I − βP)^{-1} has eigenvalues
$$\eta_i = \frac{1}{1 - \beta \pi_i}, \quad i = 1, 2, \ldots, n,$$
so that
$$G \mu_i = \frac{\lambda_i - \beta \pi_i}{1 - \beta \pi_i}\, \mu_i, \quad i = 2, 3, \ldots, n. \qquad (7)$$
Using the fact that λ_1 = 1 and π_1 = 1, we have G μ_1 = μ_1 and G^k μ_1 = μ_1. Let
$$\varphi_i = \frac{\lambda_i - \beta \pi_i}{1 - \beta \pi_i}, \quad i = 2, 3, \ldots, n; \qquad (8)$$
then, according to Theorem 3 and the derivation in [8], we have |λ_i| ≤ α, i = 2, 3, ..., n, so that
$$|\varphi_i| = \frac{|\lambda_i - \beta \pi_i|}{|1 - \beta \pi_i|} \le \frac{\alpha - \beta}{1 - \beta}.$$
Substituting (7) and (8) into (6), we obtain
$$u = \omega\, q(1)\, \zeta_1 \mu_1 + \omega\left[(1-\gamma_0) \sum_{i=2}^{n} q(\lambda_i)\, \varphi_i^{\,k+2}\, \zeta_i \mu_i + \gamma_0 \sum_{i=2}^{n} q(\lambda_i)\, \varphi_i^{\,k+1}\, \zeta_i \mu_i\right],$$
and then
$$\left\| \frac{u}{\omega\, q(1)\, \zeta_1} - \mu_1 \right\|_2 = \left\| \sum_{i=2}^{n} \frac{\zeta_i}{\zeta_1} \cdot \frac{q(\lambda_i)}{q(1)} \cdot \varphi_i^{\,k+1}\left[(1-\gamma_0)\varphi_i + \gamma_0\right] \mu_i \right\|_2 \le \left[(1-\gamma_0)\frac{\alpha-\beta}{1-\beta} + \gamma_0\right] \cdot \left(\frac{\alpha-\beta}{1-\beta}\right)^{k+1} \sum_{i=2}^{n} \left|\frac{\zeta_i}{\zeta_1}\right| \cdot \max_{i \ne 1} |p(\lambda_i)| = \Lambda \cdot \left(\frac{\alpha-\beta}{1-\beta}\right)^{k+1} \xi \cdot \max_{i \ne 1} |p(\lambda_i)|,$$
where we let p(λ) = q(λ)/q(1), which satisfies p(λ_1) = p(1) = 1, ξ = Σ_{i=2}^{n} |ζ_i|/|ζ_1|, and Λ = (1 − γ_0)(α − β)/(1 − β) + γ_0.
Therefore, we have proved that
$$\|(I - \tilde{P}_m)\mu_1\|_2 = \min_{u \in K_m(A, v_{new})} \|u - \mu_1\|_2 \le \Lambda \cdot \left(\frac{\alpha-\beta}{1-\beta}\right)^{k+1} \cdot \xi \cdot \min_{p \in L_{m-1},\, p(\lambda_1)=1} \ \max_{\lambda \in \sigma(A) \setminus \{\lambda_1\}} |p(\lambda)|. \qquad \square$$
Remark 1.
Comparing (4) with (5), it is easy to see that the convergence bound is reduced at least by the factor Λ · ((α − β)/(1 − β))^{k+1} when turning from the Anderson(1) acceleration back to the Arnoldi-Inout method, which indicates a potential acceleration of the convergence.
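For a rough sense of scale (an illustrative calculation, not taken from the paper), take the experimental values α = 0.99 and β = 0.5 together with the hypothetical values γ_0 = 0.3 and k = 4. Then

$$\frac{\alpha-\beta}{1-\beta} = \frac{0.49}{0.5} = 0.98, \qquad \Lambda = 0.7 \times 0.98 + 0.3 = 0.986, \qquad \Lambda\left(\frac{\alpha-\beta}{1-\beta}\right)^{k+1} = 0.986 \times 0.98^{5} \approx 0.89,$$

so in this setting the bound in (5) is roughly 11% smaller than the bound in (4), and the factor decreases further as k, the number of inner-outer steps per cycle, grows.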

4. Numerical Experiments

In this section, we first determine an appropriate choice for the parameter maxit and then test the effectiveness of the AIOA method. The thick restarted Arnoldi procedure involves two further parameters, m and p̂, but it plays the same role in the Arnoldi-Inout [8] method as in the AIOA method, and its cost grows quickly as m and p̂ increase, so these two parameters are usually kept small. Consequently, we do not discuss the choice of m and p̂ in detail and simply set m = 4 and p̂ = 3 for all test examples.
All the numerical experiments were performed with MATLAB R2018a on a machine with a 2.10 GHz CPU and 16 GB of RAM.
Table 1 lists the characteristics of the test matrices, where n denotes the matrix size, nnz denotes the number of nonzero elements, and den is the density, defined by den = nnz/(n × n) × 100. All the test matrices are available from https://sparse.tamu.edu/ (accessed on 14 July 2020). For fairness, the same initial guess x_0 = v = e/n with e = (1, 1, ..., 1)^T was used. The damping factors were chosen as α = 0.99, 0.993, 0.995, and 0.998 in all numerical experiments. The stopping criterion was based on the 2-norm of the residual, with the prescribed outer tolerance τ = 10^{-8}. For the inner-outer iterations, the inner tolerance was η = 10^{-2} and the smaller damping factor was β = 0.5. The parameters chosen to control the flip-flop between the two procedures were α_1 = α − 0.1 and α_2 = α − 0.1. We ran the thick restarted Arnoldi procedure twice in each cycle of the Arnoldi-Inout [8] method and the AIOA method. In the AIOA algorithm, we used the QR decomposition to compute γ_0.

4.1. The Selection of the Parameter maxit

In this subsection, we discuss the selection of the parameter maxit by analyzing the numerical results of the Arnoldi-Inout [8] method (denoted by "AIO") and the AIOA method on the web-Stanford matrix, which contains 281,903 pages and 2,312,497 links. Table 2 lists the number of matrix–vector products (MV) of the AIO method and the AIOA method on the web-Stanford matrix for α = 0.99, 0.993, 0.995, 0.998 and maxit = 2, 4, 6, 8, 10. Figure 1 depicts the computing time (CPU) of the two methods versus maxit.
From Table 2, it is observed that the optimal maxit differs across damping factors and methods. From Figure 1, the optimal maxit is 6 and the worst-performing maxit is 8 for the AIO method, whereas for the AIOA method the best value of maxit is not 6. For fairness, we chose maxit = 4 in the following numerical experiments. In addition, Table 2 shows that when α = 0.995 and maxit = 6, the MV count of the AIOA method is slightly larger than that of the AIO method, yet the CPU time of the AIOA method is still smaller than that of the AIO method. This suggests that our method has some potential.

4.2. Comparisons of Numerical Results

In this subsection, we test the effectiveness of the AIOA method through numerical comparisons with the inner-outer method (denoted by "Inout") [5], the power-inner-outer method (denoted by "PIO") [6], and the Arnoldi-Inout method (denoted by "AIO") [8] in terms of the iteration count (IT), the number of matrix–vector products (MV), and the computing time (CPU) in seconds. In all experiments in this subsection, we set m = 4, p̂ = 3, and maxit = 4. Table 3, Table 4, Table 5 and Table 6 report the numerical results of the Inout method, the PIO method, the AIO method, and the AIOA method on the four test matrices for α = 0.99, 0.993, 0.995, 0.998, and Figure 2, Figure 3, Figure 4 and Figure 5 show the residual convergence histories of these methods with different α for all test matrices.
In order to better demonstrate the efficiency of our proposed method, we define
$$\text{speedup} = \frac{\text{CPU}_{\text{AIO}} - \text{CPU}_{\text{AIOA}}}{\text{CPU}_{\text{AIO}}} \times 100\%,$$
which measures the speedup of the AIOA method with respect to the AIO method in terms of CPU time.
From the numerical results in Table 3, Table 4, Table 5 and Table 6, it is easy to see that the AIOA method performs better than the other three methods in terms of IT, MV, and CPU time on all four matrices with different damping factors. As expected, the advantage of the AIOA method is most pronounced for large α. For instance, when α = 0.995, the speedup is 52.65% in Table 3 and 36.66% in Table 5; when α = 0.998, the speedup is 49.48% in Table 4 and 60.32% in Table 6. In addition, from Figure 2, Figure 3, Figure 4 and Figure 5, it is easy to observe that the AIOA method reaches the accuracy requirement faster than the Inout method, the PIO method, and the AIO method for all test examples. Therefore, the above results verify the effectiveness of the AIOA method.

5. Conclusions

In this paper, by employing Anderson(1) extrapolation at periodic intervals within the Arnoldi-Inout method, we have presented a new method, called the AIOA method, to accelerate the computation of PageRank problems. Its implementation and convergence theorem are given in Section 3. The numerical results in Section 4 show that the AIOA method is efficient and converges faster than the inner-outer method, the power-inner-outer method, and the Arnoldi-Inout method. Nevertheless, there is still work to be done; for example, determining the best choices of the parameters m, β, and maxit remains difficult.

Author Contributions

Methodology, C.W.; software, X.T.; writing—original draft preparation, X.T.; writing—review and editing, C.W., X.-M.G. and Z.-L.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Page, L.; Brin, S.; Motwani, R. The PageRank Citation Ranking: Bringing Order to the Web; Stanford InfoLab: Stanford, CA, USA, 1999.
2. Haveliwala, T.; Kamvar, S. The Second Eigenvalue of the Google Matrix; Stanford InfoLab: Stanford, CA, USA, 2003.
3. Kamvar, S.D.; Haveliwala, T.H.; Manning, C.D.; Golub, G.H. Extrapolation methods for accelerating PageRank computations. In Proceedings of the 12th International Conference on World Wide Web, Budapest, Hungary, 20–24 May 2003; pp. 261–270.
4. Brezinski, C.; Redivo-Zaglia, M. The PageRank vector: Properties, computation, approximation, and acceleration. SIAM J. Matrix Anal. Appl. 2006, 28, 551–575.
5. Gleich, D.F.; Gray, A.P.; Greif, C. An inner-outer iteration for computing PageRank. SIAM J. Sci. Comput. 2010, 32, 349–371.
6. Gu, C.Q.; Xie, F.; Zhang, K. A two-step matrix splitting iteration for computing PageRank. J. Comput. Appl. Math. 2015, 278, 19–28.
7. Golub, G.H.; Greif, C. An Arnoldi-type algorithm for computing page rank. BIT 2006, 46, 759–771.
8. Gu, C.Q.; Wang, W. An Arnoldi-Inout algorithm for computing PageRank problems. J. Comput. Appl. Math. 2017, 309, 219–229.
9. Morgan, R.B.; Zeng, M. A harmonic restarted Arnoldi algorithm for calculating eigenvalues and determining multiplicity. Linear Algebra Appl. 2006, 415, 96–113.
10. Hu, Q.Y.; Wen, C.; Huang, T.Z.; Shen, Z.L.; Gu, X.M. A variant of the Power–Arnoldi algorithm for computing PageRank. J. Comput. Appl. Math. 2021, 381, 113034.
11. Wu, G.; Wei, Y. A Power–Arnoldi algorithm for computing PageRank. Numer. Linear Algebra Appl. 2007, 14, 521–546.
12. Tan, X. A new extrapolation method for PageRank computations. J. Comput. Appl. Math. 2017, 313, 383–392.
13. Anderson, D.G. Iterative procedures for nonlinear integral equations. J. ACM 1965, 12, 547–560.
14. Walker, H.F.; Ni, P. Anderson acceleration for fixed-point iterations. SIAM J. Numer. Anal. 2011, 49, 1715–1735.
15. Toth, A.; Kelley, C.T. Convergence analysis for Anderson acceleration. SIAM J. Numer. Anal. 2015, 53, 805–819.
16. Pratapa, P.P.; Suryanarayana, P.; Pask, J.E. Anderson acceleration of the Jacobi iterative method: An efficient alternative to Krylov methods for large, sparse linear systems. J. Comput. Phys. 2016, 306, 43–54.
17. Yang, C.; Meza, J.C.; Wang, L.W. A trust region direct constrained minimization algorithm for the Kohn–Sham equation. SIAM J. Sci. Comput. 2007, 29, 1854–1875.
18. Allaire, G.; Kaber, S.M.; Trabelsi, K. Numerical Linear Algebra; Springer: New York, NY, USA, 2008.
19. Walker, H.F. Anderson Acceleration: Algorithms and Implementations; Report MS-6-15-50; WPI Math. Sciences Dept.: Worcester, MA, USA, 2011.
20. Saad, Y. Numerical Methods for Large Eigenvalue Problems; Manchester University Press: Manchester, UK, 1992.
21. Langville, A.; Meyer, C. Google's PageRank and Beyond: The Science of Search Engine Rankings; Princeton University Press: Princeton, NJ, USA, 2006.
Figure 1. The total computing (CPU) time of the Arnoldi-Inout (AIO) method and the AIOA method versus the number maxit on the web-Stanford matrix.
Figure 2. Convergence behaviors of the four methods on the wb-cs-stanford matrix.
Figure 3. Convergence behaviors of the four methods on the usroads-48 matrix.
Figure 4. Convergence behaviors of the four methods on the web-Stanford matrix.
Figure 5. Convergence behaviors of the four methods on the wiki-Talk matrix.
Table 1. The characteristics of the test matrices.

Name             n           nnz         den
wb-cs-stanford   9914        36,854      0.375 × 10^-1
usroads-48       126,146     323,900     0.204 × 10^-2
web-Stanford     281,903     2,312,497   0.291 × 10^-2
wiki-Talk        2,394,385   5,021,410   0.875 × 10^-4
Table 2. The number of matrix–vector products of the AIO method and the AIOA method on the web-Stanford matrix.

             maxit = 2      maxit = 4      maxit = 6      maxit = 8      maxit = 10
α            AIO    AIOA    AIO    AIOA    AIO    AIOA    AIO    AIOA    AIO    AIOA
α = 0.99     342    266     337    277     386    306     423    308     439    281
α = 0.993    446    309     422    327     433    376     542    417     563    358
α = 0.995    558    383     637    414     524    544     706    440     711    471
α = 0.998    1044   588     975    677     699    661     1503   669     1533   789
Table 3. Numerical results of the four methods on the wb-cs-stanford matrix.

α                      Inout     PIO       AIO       AIOA
α = 0.99    IT         997       333       192       116
            MV         997       666       238       167
            CPU        0.2344    0.2011    0.1741    0.1038
            speedup                                  40.35%
α = 0.993   IT         1427      476       252       133
            MV         1427      952       316       200
            CPU        0.3283    0.2488    0.2297    0.1211
            speedup                                  47.28%
α = 0.995   IT         2000      667       304       143
            MV         2000      1334      378       209
            CPU        0.4590    0.3490    0.2689    0.1273
            speedup                                  52.65%
α = 0.998   IT         5009      1670      396       216
            MV         5009      3340      496       315
            CPU        1.1347    0.8670    0.3817    0.1985
            speedup                                  47.99%
Table 4. Numerical results of the four methods on the usroads-48 matrix.

α                      Inout     PIO       AIO       AIOA
α = 0.99    IT         436       146       96        51
            MV         436       292       109       73
            CPU        1.1275    1.0362    0.7928    0.4347
            speedup                                  45.16%
α = 0.993   IT         537       180       118       59
            MV         537       360       135       84
            CPU        1.6484    1.0888    1.0487    0.6894
            speedup                                  34.26%
α = 0.995   IT         646       216       146       64
            MV         646       432       164       94
            CPU        1.9562    1.4969    1.1519    0.6894
            speedup                                  40.14%
α = 0.998   IT         999       334       242       106
            MV         999       668       272       155
            CPU        2.5375    2.0479    1.8138    0.9163
            speedup                                  49.48%
Table 5. Numerical results of the four methods on the web-Stanford matrix.

α                      Inout      PIO        AIO        AIOA
α = 0.99    IT         768        381        284        191
            MV         769        762        337        277
            CPU        9.5426     11.7488    8.4437     7.2447
            speedup                                     14.20%
α = 0.993   IT         1087       544        360        229
            MV         1088       1088       422        327
            CPU        10.4567    13.4826    11.0564    8.2018
            speedup                                     25.81%
α = 0.995   IT         1516       763        540        279
            MV         1517       1526       637        414
            CPU        16.6344    17.8678    17.0485    10.7983
            speedup                                     36.66%
α = 0.998   IT         3781       1908       828        484
            MV         3782       3816       975        677
            CPU        38.1507    43.2843    26.5169    16.8771
            speedup                                     36.35%
Table 6. Numerical results of the four methods on the wiki-Talk matrix.

α                      Inout       PIO         AIO        AIOA
α = 0.99    IT         687         230         97         86
            MV         687         460         117        109
            CPU        47.5235     34.3597     23.7834    20.1552
            speedup                                       15.25%
α = 0.993   IT         971         324         113        109
            MV         971         648         136        136
            CPU        73.2740     45.5463     27.8776    25.1927
            speedup                                       9.63%
α = 0.995   IT         1339        448         145        118
            MV         1339        896         173        157
            CPU        98.4781     62.3806     35.8122    29.3671
            speedup                                       17.99%
α = 0.998   IT         3127        1044        275        98
            MV         3127        2088        324        141
            CPU        208.5881    155.7358    65.6808    26.0576
            speedup                                       60.32%