Article

ε-Algorithm Accelerated Fixed-Point Iteration for the Three-Way GIPSCAL Problem in Asymmetric MDS

1 School of Mathematics and Computing Science, Guangxi Colleges and Universities Key Laboratory of Data Analysis and Computation, Guilin University of Electronic Technology, Guilin 541004, China
2 Center for Applied Mathematics of Guangxi (GUET), Guilin 541004, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(16), 2680; https://doi.org/10.3390/math13162680
Submission received: 13 July 2025 / Revised: 11 August 2025 / Accepted: 14 August 2025 / Published: 20 August 2025

Abstract

The Generalized Inner Product SCALing (GIPSCAL) model is a specialized tool for analyzing square asymmetric tables within asymmetric multidimensional scaling (MDS), with applications in sociology (e.g., social mobility tables) and marketing (e.g., brand-switching data). This paper presents the development of an efficient numerical algorithm for solving the three-way GIPSCAL problem. We focus on vector ε-algorithm-accelerated fixed-point iterations, detailing the underlying acceleration principles. Extensive numerical experiments show that the proposed method achieves acceleration performance comparable to polynomial extrapolation and Anderson acceleration. Furthermore, compared with continuous-time projected gradient flow methods and first- and second-order Riemannian optimization algorithms from the Manopt toolbox, our approach demonstrates superior computational efficiency and scalability.

1. Introduction

Asymmetric square matrices naturally arise in numerous scientific domains, including sociology (e.g., social mobility matrices), marketing (e.g., brand-switching data), and psychology (e.g., stimulus identification experiments) [1,2]. To facilitate the analysis of such data, Harshman [3] introduced the DEcomposition into DIrectional COMponents (DEDICOM) model, which approximates an asymmetric matrix $X \in \mathbb{R}^{n \times n}$ as
$$X = QRQ^T + E, \qquad (1)$$
where $Q \in \mathbb{R}^{n \times r}$ (with $r \le n$) is typically assumed to be column-orthonormal and encodes object coordinates in a latent space, $R \in \mathbb{R}^{r \times r}$ represents the (possibly asymmetric) inter-dimensional relationships, and $E$ denotes the residual matrix. While DEDICOM offers interpretability in modeling asymmetric relationships, it lacks capabilities for effective visualization. To address this limitation, Chino [4] introduced the GIPSCAL (Generalized Inner Product SCALing) model, which not only handles asymmetric data but also provides a framework for graphical representation. However, the original GIPSCAL formulation may exhibit inadequate fit to empirical data. A generalized variant, introduced by Kiers and Takane [5], improves model flexibility via the decomposition
$$X = A(I_r + K)A^T + E, \qquad (2)$$
where $A \in \mathbb{R}^{n \times r}$ is a weight matrix, $I_r$ is the $r \times r$ identity matrix, $K \in \mathbb{R}^{r \times r}$ is skew-symmetric, and $E$ denotes the error matrix. The symmetric part $AA^T$ may be visualized using classical multidimensional scaling (MDS) techniques [6,7], while the asymmetric component $AKA^T$ is treated using Gower's method [7,8]. The model parameters are typically estimated by minimizing the least-squares objective, as outlined in Kiers and Takane [5]. Subsequent developments by Trendafilov and Gallo [9] and Trendafilov [10] recast the GIPSCAL framework into the following constrained optimization problem:
$$\text{Minimize } \tfrac12\bigl\|X - Q(D + K)Q^T\bigr\|^2, \quad \text{subject to } (Q, D, K) \in O(n,r) \times \mathcal{D}_+(r) \times \mathcal{K}(r). \qquad (3)$$
Here, $\|\cdot\|$ denotes the Frobenius norm, $O(n,r) := \{Q \in \mathbb{R}^{n \times r} : Q^TQ = I_r\}$ is the Stiefel manifold of orthonormal matrices, $\mathcal{D}_+(r)$ is the set of $r \times r$ diagonal matrices with nonnegative entries, and $\mathcal{K}(r)$ denotes the set of skew-symmetric $r \times r$ matrices.
This reformulation clarifies the connection between GIPSCAL and the DEDICOM model in (1), revealing that GIPSCAL constitutes a special case of DEDICOM in which the symmetric part of $R$ is constrained to lie in the nonnegative diagonal cone. Additionally, this formulation can be interpreted as a natural asymmetric extension of the INdividual Differences SCALing (INDSCAL) model [7,11]. The GIPSCAL methodology has further been generalized to handle three-way data arrays comprising $N$ asymmetric slices $X_i \in \mathbb{R}^{n \times n}$, analogous to three-way extensions of INDSCAL. In this setting, each slice is modeled as [9,10,12,13]
$$X_i = Q(D_i + K_i)Q^T + E_i, \qquad (4)$$
where $Q \in \mathbb{R}^{n \times r}$ is a common loading matrix shared across all slices, $D_i \in \mathbb{R}^{r \times r}$ are slice-specific nonnegative diagonal matrices, and $K_i \in \mathbb{R}^{r \times r}$ are slice-specific skew-symmetric matrices. Consequently, three-way GIPSCAL seeks to determine $(Q, D_1, \dots, D_N, K_1, \dots, K_N)$ by fitting the model to the $n \times n$ asymmetric data matrices $X_i$ in a least-squares sense [9,10,12,13,14]:
$$\text{Minimize } \tfrac12\sum_{i=1}^N\bigl\|X_i - Q(D_i + K_i)Q^T\bigr\|^2, \quad \text{subject to } (Q, D_1, \dots, D_N, K_1, \dots, K_N) \in O(n,r) \times \mathcal{D}_+(r)^N \times \mathcal{K}(r)^N, \qquad (5)$$
where $\mathcal{D}_+(r)^N$ and $\mathcal{K}(r)^N$ denote $N$ independent copies of the nonnegative diagonal cone and the skew-symmetric matrix space, respectively.
In this work, we revisit the numerical challenge of fitting the three-way GIPSCAL model (5), a problem characterized by nonlinear coupling among variables and the need for scalable, efficient algorithms. Despite its relevance in multivariate data analysis, the literature on this topic remains relatively sparse. Early contributions by Trendafilov [10,13] reformulated problem (5) as a constrained gradient dynamical system and proposed a continuous-time projected gradient flow algorithm. This method guarantees global convergence and has shown strong empirical performance across various applications in multivariate analysis [9,15,16,17,18]. Nevertheless, its scalability may be limited in large-scale settings due to computational inefficiencies. To accelerate the inherently slow convergence of alternating least squares (ALS) methods, Loisel and Takane [19] proposed a minimal polynomial extrapolation (MPE) scheme, leveraging vector-sequence fixed-point iteration. Empirical results indicate that this approach can substantially speed up convergence. However, in practical implementations, selecting an appropriate backtracking step size often relies on heuristic tuning rather than principled criteria. More recently, Trendafilov and Gallo [9] explored optimization-based reformulations of multivariate models on matrix manifolds and demonstrated the effectiveness of the Manopt toolbox [20] in addressing such problems through Riemannian optimization techniques. Motivated by these developments, this paper builds upon the fixed-point acceleration strategy of Loisel and Takane [19], extending its application to problem (5). We propose a new algorithmic framework that interprets approximate ALS iterations as a matrix-valued fixed-point iteration. Furthermore, we develop acceleration schemes based on the vector ε-algorithm (VEA), the topological ε-algorithm (TEA), and its simplified variant (STEA) [21,22], incorporating recent advances in numerical extrapolation and available algorithmic toolboxes. Extensive numerical experiments show that, compared with the original matrix sequences generated by the fixed-point iteration, matrix-sequence extrapolation significantly accelerates convergence. Furthermore, when compared with existing solvers for problem (5), such as the continuous-time projected gradient flow algorithm and the Riemannian optimization-based solvers of the Manopt toolbox, the ε-algorithm-accelerated fixed-point iterations achieve a notable reduction in computation time.
The remainder of the paper is organized as follows: In Section 2, we present the fixed-point iteration framework for solving the three-way GIPSCAL problem in (5). Section 3 introduces the core acceleration principles and describes the implementation of the VEA, TEA, and STEA within this context. Section 4 reports a comprehensive set of numerical experiments, benchmarking the proposed acceleration schemes against the continuous-time projected gradient flow method and several first- and second-order Riemannian optimization algorithms implemented in Manopt. Finally, Section 5 concludes the paper.

2. Fixed-Point Iteration Framework for Problem (5)

Building on the conditional minimization strategy introduced by Loisel and Takane [19], the three-way GIPSCAL problem in (5) can be reformulated as a fixed-point iteration scheme. Their original framework also incorporates minimal polynomial extrapolation (MPE) to accelerate convergence. For the sake of completeness, we revisit and extend this approach by developing an alternating least squares (ALS) framework, which naturally leads to a fixed-point iteration formulation.
Let $\mathcal{S}(r)$ denote the space of symmetric $r \times r$ matrices. For a point $Z := (Q, D_1, \dots, D_N, K_1, \dots, K_N)$ in the product space $\mathbb{R}^{n \times r} \times \mathcal{S}(r)^N \times \mathcal{K}(r)^N$, we define the residual mapping
$$F_i(Q, D_i, K_i) = X_i - Q(D_i + K_i)Q^T, \quad i = 1, \dots, N,$$
which allows us to express the objective function of problem (5) as $f(Z) = \tfrac12\sum_{i=1}^N \|F_i\|^2$. Since the variables $D_i$ and $K_i$ are independent for each slice $i$, straightforward algebraic derivation shows that the Euclidean gradient of $f(Z)$ decomposes componentwise as
$$\nabla f(Z) = \bigl(\nabla_Q f,\ \nabla_{D_1} f, \dots, \nabla_{D_N} f,\ \nabla_{K_1} f, \dots, \nabla_{K_N} f\bigr).$$
Here,
$$\begin{aligned} \nabla_Q f &= -2\sum_{i=1}^N\bigl[\operatorname{sym}(X_i)\,Q D_i - \operatorname{skew}(X_i)\,Q K_i - Q\bigl(D_i^2 - K_i^2\bigr)\bigr],\\ \nabla_{D_i} f &= D_i - Q^T \operatorname{sym}(X_i)\,Q, \quad i = 1, \dots, N,\\ \nabla_{K_i} f &= K_i - Q^T \operatorname{skew}(X_i)\,Q, \quad i = 1, \dots, N. \end{aligned}$$
Additionally, $\operatorname{sym}(A) = \tfrac12(A + A^T)$ and $\operatorname{skew}(A) = \tfrac12(A - A^T)$ denote the symmetric and skew-symmetric parts of a matrix $A$, respectively.
Given a current iterate $Z^{(j)} = (Q^{(j)}, D_1^{(j)}, \dots, D_N^{(j)}, K_1^{(j)}, \dots, K_N^{(j)}) \in O(n,r) \times \mathcal{D}_+(r)^N \times \mathcal{K}(r)^N$, the ALS-based iterative scheme for solving (5) is defined by the following update rules:
$$\begin{aligned} D_i^{(j+1)} &= \operatorname*{argmin}_{D_i \in \mathcal{D}_+(r)} f_i\bigl(Q^{(j)}, D_i, K_i^{(j)}\bigr), \quad i = 1, \dots, N;\\ K_i^{(j+1)} &= \operatorname*{argmin}_{K_i \in \mathcal{K}(r)} f_i\bigl(Q^{(j)}, D_i^{(j+1)}, K_i\bigr), \quad i = 1, \dots, N;\\ Q^{(j+1)} &= \operatorname*{argmin}_{Q \in O(n,r)} f\bigl(Q, D_1^{(j+1)}, \dots, D_N^{(j+1)}, K_1^{(j+1)}, \dots, K_N^{(j+1)}\bigr). \end{aligned} \qquad (8)$$
The update for $D_i$ involves solving a convex constrained matrix optimization problem:
$$\min_{D_i \in \mathcal{D}_+(r)} f_i\bigl(Q^{(j)}, D_i, K_i^{(j)}\bigr) = \tfrac12\bigl\|X_i - Q^{(j)}\bigl(D_i + K_i^{(j)}\bigr)Q^{(j)T}\bigr\|^2.$$
The first-order optimality condition yields the variational inequality
$$\bigl\langle D_i - D_i^{(j+1)},\ \nabla_{D_i} f_i\bigl(Q^{(j)}, D_i^{(j+1)}, K_i^{(j)}\bigr)\bigr\rangle \ge 0, \quad \forall\, D_i \in \mathcal{D}_+(r).$$
According to Theorem 3.1.1 of Hiriart-Urruty and Lemaréchal [23], this is equivalent to solving the implicit projection equation
$$D_i^{(j+1)} - P_{\mathcal{D}_+(r)}\Bigl[D_i^{(j+1)} - \nabla_{D_i} f_i\bigl(Q^{(j)}, D_i^{(j+1)}, K_i^{(j)}\bigr)\Bigr] = 0, \quad D_i^{(j+1)} \in \mathcal{D}_+(r).$$
Here, the projection $P_{\mathcal{D}_+(r)}: \mathbb{R}^{r \times r} \to \mathcal{D}_+(r)$ is defined elementwise via
$$P_{\mathcal{D}_+(r)}(M) = \max\{0,\ I_r \odot M\}, \quad M \in \mathbb{R}^{r \times r},$$
where $\odot$ denotes the Hadamard (elementwise) product. The closed-form solution is thus
$$D_i^{(j+1)} = \max\bigl\{0,\ I_r \odot \bigl(Q^{(j)T}\operatorname{sym}(X_i)\,Q^{(j)}\bigr)\bigr\}, \quad i = 1, \dots, N. \qquad (13)$$
Since $\mathcal{K}(r)$ is a linear subspace, the optimal $K_i$ satisfies the projected gradient condition
$$P_{\mathcal{K}(r)}\Bigl[\nabla_{K_i} f_i\bigl(Q^{(j)}, D_i^{(j+1)}, K_i^{(j+1)}\bigr)\Bigr] = 0,$$
where $P_{\mathcal{K}(r)}(M) = \operatorname{skew}(M)$. Hence, the solution is explicitly given by
$$K_i^{(j+1)} = Q^{(j)T}\operatorname{skew}(X_i)\,Q^{(j)}, \quad i = 1, \dots, N. \qquad (14)$$
The update for $Q$ entails solving the following orthogonally constrained, nonconvex optimization problem:
$$\min_{Q \in O(n,r)} \tfrac12\sum_{i=1}^N\bigl\|X_i - Q\bigl(D_i^{(j+1)} + K_i^{(j+1)}\bigr)Q^T\bigr\|^2.$$
The associated Lagrangian, incorporating the constraint $Q^TQ = I_r$, is given by
$$L(Q, \Lambda) = \tfrac12\sum_{i=1}^N\bigl\|X_i - Q\bigl(D_i^{(j+1)} + K_i^{(j+1)}\bigr)Q^T\bigr\|^2 + \bigl\langle \Lambda,\ Q^TQ - I_r\bigr\rangle,$$
where $\Lambda$ is a symmetric Lagrange multiplier matrix. The first-order optimality condition yields
$$\bigl(I_n - QQ^T\bigr)\sum_{i=1}^N\bigl[\operatorname{sym}(X_i)\,Q D_i^{(j+1)} - \operatorname{skew}(X_i)\,Q K_i^{(j+1)}\bigr] = 0. \qquad (16)$$
Although solving (16) analytically is challenging, we follow the strategy of Loisel and Takane [19] and approximate the solution as follows:
  • Compute the matrix
$$G^{(j)} := \sum_{i=1}^N\bigl[\operatorname{sym}(X_i)\,Q^{(j)} D_i^{(j+1)} - \operatorname{skew}(X_i)\,Q^{(j)} K_i^{(j+1)}\bigr]; \qquad (17)$$
  • Compute the thin singular value decomposition $G^{(j)} = U\Sigma V^T$, and set
$$Q^{(j+1)} := UV^T. \qquad (18)$$
By integrating the subproblem updates (13), (14), and (18) within the ALS scheme from (8), we obtain the following iterative sequence:
$$Q^{(j)} \;\longrightarrow\; \bigl(D_1^{(j+1)}, \dots, D_N^{(j+1)},\ K_1^{(j+1)}, \dots, K_N^{(j+1)}\bigr) \;\longrightarrow\; G^{(j)} \;\longrightarrow\; Q^{(j+1)}.$$
This iterative process naturally defines a fixed-point iteration:
$$Q^{(j+1)} = H_{\mathrm{GIPSCAL}}\bigl(Q^{(j)}\bigr), \qquad (19)$$
where $H_{\mathrm{GIPSCAL}}: \mathbb{R}^{n \times r} \to \mathbb{R}^{n \times r}$ denotes the nonlinear mapping associated with one complete ALS update.
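For concreteness, one sweep of the mapping $H_{\mathrm{GIPSCAL}}$, i.e., the updates (13), (14), (17), and (18), can be sketched in MATLAB as follows (the function name and calling convention are our own illustration, not code from [19]):

function [Q, D, K] = H_gipscal(X, Q)
% One ALS sweep Q -> H_GIPSCAL(Q) for the three-way GIPSCAL problem (5).
% X is a cell array of N asymmetric n-by-n slices; Q is n-by-r with Q'*Q = I.
    N = numel(X);  r = size(Q, 2);
    D = cell(N, 1);  K = cell(N, 1);
    G = zeros(size(Q));
    for i = 1:N
        symX  = (X{i} + X{i}') / 2;                  % sym(X_i)
        skewX = (X{i} - X{i}') / 2;                  % skew(X_i)
        D{i} = max(0, eye(r) .* (Q' * symX * Q));    % update (13)
        K{i} = Q' * skewX * Q;                       % update (14)
        G = G + symX * Q * D{i} - skewX * Q * K{i};  % accumulate G^{(j)} from (17)
    end
    [U, ~, V] = svd(G, 'econ');                      % thin SVD, update (18)
    Q = U * V';
end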
To further analyze the convergence of problem (5) and iteration (19), we present the following theoretical results:
Theorem 1. 
Problem (5) has a global optimal solution.
Proof. 
The feasible set $\mathcal{M} := O(n,r) \times \mathcal{D}_+(r)^N \times \mathcal{K}(r)^N$ is compact under the Frobenius norm topology. The objective function $f(Z) = \tfrac12\sum_{i=1}^N\|X_i - Q(D_i + K_i)Q^T\|^2$ is continuous. Therefore, by the Weierstrass extreme value theorem, $f$ attains its minimum on $\mathcal{M}$.    □
Theorem 2. 
Problem (5) does not admit a closed-form analytical solution. This conclusion holds even in the special cases $N = 1$ or $r = 1$.
Proof. 
(i) The nonconvex feasible set induced by the orthogonality constraint $Q^TQ = I_r$, together with the quartic dependence of the objective function on $Q$, results in an indefinite Hessian matrix at critical points.
(ii) When the $D_i, K_i$ are fixed, the $Q$-subproblem degenerates into a generalized orthogonal Procrustes problem
$$\min_{Q \in O(n,r)} \sum_{i=1}^N\bigl\|X_i - Q A_i Q^T\bigr\|_F^2, \qquad A_i = D_i + K_i,$$
where the asymmetry of $A_i$ (due to $K_i \neq 0$) leaves this subproblem without an analytical solution (unlike the case in which $A_i$ is symmetric and the subproblem can be solved by eigenvalue decomposition);
(iii) The slices are coupled through the summation $\sum_{i=1}^N$ in the objective function.
Together, (i)–(iii) exclude the possibility of a closed-form solution, so numerical iterative methods must be used to approximate the solution.    □
Theorem 3. 
The iterative sequence $\{Z^{(j)}\}_{j=0}^{\infty}$, where $Z^{(j)} = (Q^{(j)}, D_i^{(j)}, K_i^{(j)})$, generated by alternating least squares applied to problem (5), has an objective value sequence $f(Z^{(j)})$ that converges to a nonnegative limit $L$.
Proof. 
The objective function $f(Z) = \tfrac12\sum_{i=1}^N\|X_i - Q(D_i + K_i)Q^T\|^2$ is a sum of squared Frobenius norms, so its value is always nonnegative. Thus, the sequence $f(Z^{(j)})$ is bounded below by 0. In the alternating update process,
  • Fixing $Q^{(j)}, K_i^{(j)}$ and updating $D_i^{(j+1)}$, the global optimality of the convex subproblem (13) ensures that $f_i(Q^{(j)}, D_i^{(j+1)}, K_i^{(j)}) \le f_i(Q^{(j)}, D_i^{(j)}, K_i^{(j)})$;
  • Fixing $Q^{(j)}, D_i^{(j+1)}$ and updating $K_i^{(j+1)}$, the closed-form solution in (14) guarantees that $f_i(Q^{(j)}, D_i^{(j+1)}, K_i^{(j+1)}) \le f_i(Q^{(j)}, D_i^{(j+1)}, K_i^{(j)})$;
  • Fixing $D_i^{(j+1)}, K_i^{(j+1)}$ and updating $Q^{(j+1)}$, the orthogonal Procrustes projection in (18), as a contraction mapping, ensures that $f(Q^{(j+1)}, \cdot) \le f(Q^{(j)}, \cdot)$.
Thus, $f(Z^{(j+1)}) \le f(Z^{(j)})$. By the monotone convergence theorem for bounded sequences, $f(Z^{(j)})$ converges to some $L \ge 0$.    □

3. ε-Algorithm Acceleration for the Fixed-Point Problem (19)

In numerical analysis and applied mathematics, sequences arise naturally across a broad range of computational problems. When a sequence converges slowly, acceleration techniques are often employed to improve the convergence rate. A common strategy involves transforming the original sequence into another that converges more rapidly to the same limit, assuming appropriate regularity conditions. One of the most influential transformations in this context is the Shanks transformation [24], originally derived by Schmidt [25] for iterative solutions of linear systems. It was later implemented algorithmically through the scalar ε-algorithm introduced by Wynn [26]. In a subsequent extension, Wynn [27] generalized the scalar ε-algorithm to handle vector-valued sequences. However, the algebraic structure underlying the vector version does not follow directly from the scalar case. To address this gap, Brezinski [28] proposed two distinct generalizations of the Shanks transformation and its associated algorithms to sequences in vector spaces. This led to the formulation of the topological Shanks transformation and the development of two corresponding topological ε-algorithms (TEA1 and TEA2). These algorithms operate using elements from both a vector space E and its dual space E*, enabling a rigorous extension to infinite-dimensional settings. Recognizing the computational overhead associated with dual space operations, Brezinski [29] introduced the simplified topological ε-algorithms (STEA1 and STEA2). These variants avoid direct manipulation of dual space elements by substituting scalar ε-algorithm outputs, thereby reducing memory usage and enhancing numerical stability. In parallel, the Shanks transformation has inspired a family of vector extrapolation methods, including minimal polynomial extrapolation (MPE), modified minimal polynomial extrapolation (MMPE), and reduced-rank extrapolation (RRE). These techniques, collectively referred to as vector extrapolation methods, share the advantages of a simple iterative structure and the avoidance of explicit matrix decompositions. Owing to their general applicability and efficiency, ε-type and vector extrapolation algorithms have found widespread use in the numerical solution of linear and nonlinear systems, eigenvalue computations, Padé-type approximations, matrix functions, matrix equations, and Krylov subspace methods such as Lanczos iterations [28,30,31,32,33].
In this section, we introduce the principles and implementation details of three ε-algorithms: VEA, TEA, and STEA. We then explain how each of these methods can be systematically integrated into the fixed-point iteration scheme of the previous section to accelerate convergence when solving the three-way GIPSCAL problem.

3.1. Scalar Shanks Transform and Scalar ε -Algorithm

Let $\{S_n\}$ be a sequence of scalars in the field $\mathbb{K}$, where $\mathbb{K}$ is either $\mathbb{R}$ or $\mathbb{C}$. If $\lim_{n\to\infty} S_n = S$, then, under certain conditions, the sequence $\{S_n\}$ can be transformed into a new sequence $\{T_n\}$ that converges to the same limit more efficiently, in the sense that
$$\lim_{n\to\infty}\frac{T_n - S}{S_n - S} = 0.$$
Shanks [24] introduced a transformation technique in his 1955 study that estimates the limit of a sequence from $k+1$ of its terms. The Shanks transformation assumes that the sequence satisfies the relation
$$\alpha_0(S_n - S) + \cdots + \alpha_k(S_{n+k} - S) = 0, \quad n = 0, 1, \dots. \qquad (21)$$
Here, the coefficients $\alpha_i$ are constants independent of $n$ satisfying $\alpha_0\alpha_k \neq 0$. Assuming in addition that $\alpha_0 + \cdots + \alpha_k \neq 0$, we can subtract consecutive instances of (21) to derive the difference form
$$\alpha_0\,\Delta S_n + \cdots + \alpha_k\,\Delta S_{n+k} = 0, \qquad (22)$$
where the forward difference operator $\Delta$ is defined by $\Delta S_i = S_{i+1} - S_i$. To determine the $k+1$ coefficients $\alpha_0, \dots, \alpha_k$, we further require that $\alpha_0 + \cdots + \alpha_k = 1$, leading to a linear system consisting of this normalization and $k$ scalar equations derived from (22):
$$\begin{aligned} \alpha_0 + \cdots + \alpha_k &= 1,\\ \alpha_0\,\Delta S_{n+i} + \cdots + \alpha_k\,\Delta S_{n+k+i} &= 0, \quad i = 0, \dots, k-1. \end{aligned} \qquad (23)$$
The coefficients $\alpha_i$ can then be combined linearly to recover $S$:
$$S = \alpha_0 S_n + \alpha_1 S_{n+1} + \cdots + \alpha_k S_{n+k}. \qquad (24)$$
Even if the sequence $\{S_n\}$ does not satisfy the relation in Equation (21), the coefficients $\alpha_i$ can still be determined by solving the linear system in Equation (23) in the same way. Consequently, an approximation to the limit $S$ is obtained via the linear combination in Equation (24). It is important to note that these coefficients and the resulting approximation now depend on the starting index $n$ and the width $k$ of the acceleration window; we denote them by $\alpha_i(n,k)$ and $e_k(S_n)$, respectively. Therefore, we have
$$e_k(S_n) = \alpha_0(n,k)\,S_n + \cdots + \alpha_k(n,k)\,S_{n+k}, \quad k, n = 0, 1, \dots,$$
where the $\alpha_i(n,k)$ satisfy the linear system
$$\begin{aligned} \alpha_0(n,k) + \cdots + \alpha_k(n,k) &= 1,\\ \alpha_0(n,k)\,\Delta S_n + \cdots + \alpha_k(n,k)\,\Delta S_{n+k} &= 0,\\ &\;\;\vdots\\ \alpha_0(n,k)\,\Delta S_{n+k-1} + \cdots + \alpha_k(n,k)\,\Delta S_{n+2k-1} &= 0. \end{aligned}$$
The original sequence $\{S_n\}$ is thus transformed into a new sequence $\{e_k(S_n)\}$, and the mapping $\{S_n\} \to \{e_k(S_n)\}$ is known as the Shanks transformation. By Cramer's rule for linear systems, $e_k(S_n)$ can be written as a ratio of determinants:
$$e_k(S_n) = \frac{\begin{vmatrix} S_n & S_{n+1} & \cdots & S_{n+k}\\ \Delta S_n & \Delta S_{n+1} & \cdots & \Delta S_{n+k}\\ \vdots & \vdots & & \vdots\\ \Delta S_{n+k-1} & \Delta S_{n+k} & \cdots & \Delta S_{n+2k-1} \end{vmatrix}}{\begin{vmatrix} 1 & 1 & \cdots & 1\\ \Delta S_n & \Delta S_{n+1} & \cdots & \Delta S_{n+k}\\ \vdots & \vdots & & \vdots\\ \Delta S_{n+k-1} & \Delta S_{n+k} & \cdots & \Delta S_{n+2k-1} \end{vmatrix}}, \quad k, n = 0, 1, \dots. \qquad (26)$$
Since directly computing the determinants in Equation (26) is relatively expensive, Wynn [26] proposed the scalar ε-algorithm (SEA), which implements the Shanks transformation through a straightforward recursive procedure. The computational rules of the SEA are given by
$$\begin{aligned} \varepsilon_{-1}^{(n)} &= 0, & n &= 0, 1, \dots,\\ \varepsilon_0^{(n)} &= S_n, & n &= 0, 1, \dots,\\ \varepsilon_{k+1}^{(n)} &= \varepsilon_{k-1}^{(n+1)} + \bigl(\varepsilon_k^{(n+1)} - \varepsilon_k^{(n)}\bigr)^{-1}, & k, n &= 0, 1, \dots. \end{aligned} \qquad (27)$$
These elements are typically organized into a two-dimensional array, as shown in Figure 1, which is referred to as the ε-table. Rule (27) establishes a connection between the four vertices of a diamond in the ε-table, where the subscript $k$ remains constant down each column and the superscript $n$ remains constant along descending diagonals. Furthermore, Wynn [26] demonstrated the relationship between the Shanks transformation and the ε-algorithm by applying Sylvester's and Schweins' determinantal identities:
$$\varepsilon_{2k}^{(n)} = e_k(S_n), \qquad \varepsilon_{2k+1}^{(n)} = \frac{1}{e_k(\Delta S_n)}. \qquad (28)$$
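As a small self-contained illustration (our own example, not taken from [24,26]), the recursion (27) can be applied in MATLAB to the partial sums of the slowly convergent alternating series $\ln 2 = 1 - \tfrac12 + \tfrac13 - \cdots$:

% Apply the scalar epsilon-algorithm (27) to S_n = sum_{m=1}^{n+1} (-1)^(m+1)/m,
% whose limit is ln(2). The eps-table is built column by column.
k = 3;                                   % window width: uses S_0, ..., S_{2k}
S = cumsum((-1).^(0:2*k) ./ (1:2*k+1));  % partial sums S_0, ..., S_{2k}
E = zeros(2*k+1, 2*k+2);                 % E(n+1, j+2) stores eps_j^{(n)}
E(:, 2) = S(:);                          % column j = 0: eps_0^{(n)} = S_n
for j = 1:2*k                            % fill columns eps_1, eps_2, ...
    for n = 0:(2*k - j)
        E(n+1, j+2) = E(n+2, j) + 1 / (E(n+2, j+1) - E(n+1, j+1));
    end
end
fprintf('S_{2k} = %.8f,  eps_{2k}^{(0)} = %.12f,  ln 2 = %.12f\n', ...
        S(end), E(1, 2*k+2), log(2));

With only seven terms of the series, the even column $\varepsilon_6^{(0)} = e_3(S_0)$ already approximates $\ln 2$ far more accurately than the last partial sum.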

3.2. Topological Shanks Transformation and Topological ε -Algorithm

Assume now that $\{S_n\}$ is a sequence of vectors in a vector space $E$. The Samelson inverse of a nonzero vector $z \in E$ is defined by
$$z^{-1} = \frac{z}{(z, z)}, \quad z \in E, \qquad (29)$$
where $(\cdot,\cdot)$ denotes the standard inner product on the vector space $E$. Wynn [27] proposed the vector ε-algorithm (VEA) by "vectorizing" the scalar version of the ε-algorithm in this way. However, the Shanks transformation itself cannot be directly extended to vector spaces. To address this, Brezinski et al. [34] introduced the algebraic dual space $E^*$ of the vector space $E$, along with an auxiliary functional $y \in E^*$, and used the duality product $\langle\cdot,\cdot\rangle$ to derive two types of topological Shanks transformations. The first topological Shanks transformation is defined by the formula
$$\hat e_k(S_n) = \alpha_0(n,k)\,S_n + \cdots + \alpha_k(n,k)\,S_{n+k}, \quad k, n = 0, 1, \dots, \qquad (30)$$
where the coefficients $\alpha_i(n,k)$ satisfy the following system of linear equations:
$$\begin{aligned} \alpha_0(n,k) + \cdots + \alpha_k(n,k) &= 1,\\ \alpha_0(n,k)\,\langle y, \Delta S_n\rangle + \cdots + \alpha_k(n,k)\,\langle y, \Delta S_{n+k}\rangle &= 0,\\ &\;\;\vdots\\ \alpha_0(n,k)\,\langle y, \Delta S_{n+k-1}\rangle + \cdots + \alpha_k(n,k)\,\langle y, \Delta S_{n+2k-1}\rangle &= 0. \end{aligned}$$
Similar to the scalar Shanks transformation, $\hat e_k(S_n)$ can be expressed as a ratio of determinants as follows:
$$\hat e_k(S_n) = \frac{\begin{vmatrix} S_n & S_{n+1} & \cdots & S_{n+k}\\ \langle y, \Delta S_n\rangle & \langle y, \Delta S_{n+1}\rangle & \cdots & \langle y, \Delta S_{n+k}\rangle\\ \vdots & \vdots & & \vdots\\ \langle y, \Delta S_{n+k-1}\rangle & \langle y, \Delta S_{n+k}\rangle & \cdots & \langle y, \Delta S_{n+2k-1}\rangle \end{vmatrix}}{\begin{vmatrix} 1 & 1 & \cdots & 1\\ \langle y, \Delta S_n\rangle & \langle y, \Delta S_{n+1}\rangle & \cdots & \langle y, \Delta S_{n+k}\rangle\\ \vdots & \vdots & & \vdots\\ \langle y, \Delta S_{n+k-1}\rangle & \langle y, \Delta S_{n+k}\rangle & \cdots & \langle y, \Delta S_{n+2k-1}\rangle \end{vmatrix}}, \quad k, n = 0, 1, \dots.$$
Here, $y \in E^*$. Furthermore, the second topological Shanks transformation can be obtained by replacing $S_n, \dots, S_{n+k}$ with $S_{n+k}, \dots, S_{n+2k}$ in (30).
We then introduce a new ordered vector pair $\{u, y\} \in E \times E^*$ and define the inverses of the ordered vector pair as follows:
$$u^{-1} = \frac{y}{\langle y, u\rangle} \in E^*, \qquad y^{-1} = \frac{u}{\langle y, u\rangle} \in E. \qquad (33)$$
Based on this definition, Brezinski and Redivo-Zaglia [21] derived two distinct forms of topological ε-algorithms, denoted TEA1 and TEA2. The recursive rule of the first topological ε-algorithm (TEA1) for computing $\hat e_k(S_n)$ is given by
$$\begin{aligned} \hat\varepsilon_{-1}^{(n)} &= 0 \in E^*, & n &= 0, 1, \dots,\\ \hat\varepsilon_0^{(n)} &= S_n \in E, & n &= 0, 1, \dots,\\ \hat\varepsilon_{2k+1}^{(n)} &= \hat\varepsilon_{2k-1}^{(n+1)} + \bigl(\hat\varepsilon_{2k}^{(n+1)} - \hat\varepsilon_{2k}^{(n)}\bigr)^{-1} \in E^*, & k, n &= 0, 1, \dots,\\ \hat\varepsilon_{2k+2}^{(n)} &= \hat\varepsilon_{2k}^{(n+1)} + \bigl(\hat\varepsilon_{2k+1}^{(n+1)} - \hat\varepsilon_{2k+1}^{(n)}\bigr)^{-1} \in E, & k, n &= 0, 1, \dots. \end{aligned} \qquad (34)$$
Due to the different inversion rules for elements of $E$ and of its algebraic dual space $E^*$, as stated in Equation (33), and the dependence on the ordered vector pair, the calculation rules for the odd-indexed and even-indexed sequences in TEA1 differ from those in SEA and VEA. Specifically, for the odd-indexed sequence in TEA1, corresponding to the calculation rule of SEA in (27), we have $\hat\varepsilon_{2k+1}^{(n)} = \hat\varepsilon_{2k-1}^{(n+1)} + (\hat\varepsilon_{2k}^{(n+1)} - \hat\varepsilon_{2k}^{(n)})^{-1}$, where $\hat\varepsilon_{2k}^{(n+1)} - \hat\varepsilon_{2k}^{(n)} \in E$ and the inversion is taken with respect to the ordered vector pair $(\hat\varepsilon_{2k}^{(n+1)} - \hat\varepsilon_{2k}^{(n)},\ y) \in E \times E^*$ with $y \in E^*$, i.e.,
$$\bigl(\hat\varepsilon_{2k}^{(n+1)} - \hat\varepsilon_{2k}^{(n)}\bigr)^{-1} = \frac{y}{\bigl\langle y,\ \hat\varepsilon_{2k}^{(n+1)} - \hat\varepsilon_{2k}^{(n)}\bigr\rangle} \in E^*.$$
Similarly, corresponding to (27), for the even-indexed sequence in TEA1 we have $\hat\varepsilon_{2k+2}^{(n)} = \hat\varepsilon_{2k}^{(n+1)} + (\hat\varepsilon_{2k+1}^{(n+1)} - \hat\varepsilon_{2k+1}^{(n)})^{-1}$, where $\hat\varepsilon_{2k+1}^{(n+1)} - \hat\varepsilon_{2k+1}^{(n)} \in E^*$ and the inversion is taken with respect to the ordered vector pair $(\hat\varepsilon_{2k}^{(n+1)} - \hat\varepsilon_{2k}^{(n)},\ \hat\varepsilon_{2k+1}^{(n+1)} - \hat\varepsilon_{2k+1}^{(n)}) \in E \times E^*$, i.e.,
$$\bigl(\hat\varepsilon_{2k+1}^{(n+1)} - \hat\varepsilon_{2k+1}^{(n)}\bigr)^{-1} = \frac{\hat\varepsilon_{2k}^{(n+1)} - \hat\varepsilon_{2k}^{(n)}}{\bigl\langle \hat\varepsilon_{2k+1}^{(n+1)} - \hat\varepsilon_{2k+1}^{(n)},\ \hat\varepsilon_{2k}^{(n+1)} - \hat\varepsilon_{2k}^{(n)}\bigr\rangle} \in E.$$
The recursive rule for computing $\tilde e_k(S_n)$ using the second topological ε-algorithm (TEA2) is given by
$$\begin{aligned} \tilde\varepsilon_{-1}^{(n)} &= 0 \in E^*, & n &= 0, 1, \dots,\\ \tilde\varepsilon_0^{(n)} &= S_n \in E, & n &= 0, 1, \dots,\\ \tilde\varepsilon_{2k+1}^{(n)} &= \tilde\varepsilon_{2k-1}^{(n+1)} + \bigl(\tilde\varepsilon_{2k}^{(n+1)} - \tilde\varepsilon_{2k}^{(n)}\bigr)^{-1} \in E^*, & k, n &= 0, 1, \dots,\\ \tilde\varepsilon_{2k+2}^{(n)} &= \tilde\varepsilon_{2k}^{(n+1)} + \bigl(\tilde\varepsilon_{2k+1}^{(n+1)} - \tilde\varepsilon_{2k+1}^{(n)}\bigr)^{-1} \in E, & k, n &= 0, 1, \dots. \end{aligned}$$
The computation of the odd-indexed sequence in TEA2 follows the same rule as $\hat\varepsilon_{2k+1}^{(n)}$ in TEA1. For the even-indexed sequence, corresponding to (27), we have $\tilde\varepsilon_{2k+2}^{(n)} = \tilde\varepsilon_{2k}^{(n+1)} + (\tilde\varepsilon_{2k+1}^{(n+1)} - \tilde\varepsilon_{2k+1}^{(n)})^{-1}$; unlike TEA1, the inversion of $\tilde\varepsilon_{2k+1}^{(n+1)} - \tilde\varepsilon_{2k+1}^{(n)} \in E^*$ in TEA2 is taken with respect to the ordered vector pair $(\tilde\varepsilon_{2k}^{(n+2)} - \tilde\varepsilon_{2k}^{(n+1)},\ \tilde\varepsilon_{2k+1}^{(n+1)} - \tilde\varepsilon_{2k+1}^{(n)}) \in E \times E^*$, i.e.,
$$\bigl(\tilde\varepsilon_{2k+1}^{(n+1)} - \tilde\varepsilon_{2k+1}^{(n)}\bigr)^{-1} = \frac{\tilde\varepsilon_{2k}^{(n+2)} - \tilde\varepsilon_{2k}^{(n+1)}}{\bigl\langle \tilde\varepsilon_{2k+1}^{(n+1)} - \tilde\varepsilon_{2k+1}^{(n)},\ \tilde\varepsilon_{2k}^{(n+2)} - \tilde\varepsilon_{2k}^{(n+1)}\bigr\rangle} \in E.$$
Note the connection between the topological Shanks transformations and the topological ε-algorithms. For the first topological Shanks transformation $\hat e_k(S_n)$ and $\hat\varepsilon_k^{(n)}$ in TEA1, the following hold:
$$\hat\varepsilon_{2k}^{(n)} = \hat e_k(S_n), \quad \bigl\langle y, \hat\varepsilon_{2k}^{(n)}\bigr\rangle = e_k\bigl(\langle y, S_n\rangle\bigr), \quad \hat\varepsilon_{2k+1}^{(n)} = \frac{y}{\bigl\langle y, \hat e_k(\Delta S_n)\bigr\rangle}, \quad \hat\varepsilon_{2k+1}^{(n)} = \frac{y}{e_k\bigl(\langle y, \Delta S_n\rangle\bigr)}, \quad k, n = 0, 1, \dots. \qquad (35)$$
For the second topological Shanks transformation $\tilde e_k(S_n)$ and $\tilde\varepsilon_k^{(n)}$ in TEA2, the following hold:
$$\tilde\varepsilon_{2k}^{(n)} = \tilde e_k(S_n), \quad \bigl\langle y, \tilde\varepsilon_{2k}^{(n)}\bigr\rangle = e_k\bigl(\langle y, S_n\rangle\bigr), \quad \tilde\varepsilon_{2k+1}^{(n)} = \frac{y}{\bigl\langle y, \tilde e_k(\Delta S_n)\bigr\rangle}, \quad \tilde\varepsilon_{2k+1}^{(n)} = \frac{y}{e_k\bigl(\langle y, \Delta S_n\rangle\bigr)}, \quad k, n = 0, 1, \dots. \qquad (36)$$
The computational rules for the odd-indexed and even-indexed sequences in TEA1 and TEA2 are summarized in Figure 2. For odd indices, the computational rules of the two topological ε-algorithms are identical and consistent with those of the scalar ε-algorithm (SEA): the computation of $\varepsilon_{2k+1}^{(n)}$ requires only the three elements located at the vertices of the diamond structure in the table, namely $\varepsilon_{2k-1}^{(n+1)}$, $\varepsilon_{2k}^{(n)}$, and $\varepsilon_{2k}^{(n+1)}$. However, for even-indexed sequences, both TEA1 and TEA2 require additional elements beyond these three. Moreover, during the recursion, both topological ε-algorithms must simultaneously store the even-indexed elements in $E$ and the odd-indexed elements in the algebraic dual space $E^*$, thereby significantly increasing the storage burden in large-scale computations. Furthermore, both TEA1 and TEA2 require duality product calculations during the recursion, which further increases the computational complexity.

3.3. Simplified Topological ε -Algorithm

To streamline the computational rules of the two topological ε-algorithms, Brezinski and Redivo-Zaglia [21] proposed simplified versions of these algorithms (STEA1 and STEA2) for TEA1 and TEA2. These algorithms combine the odd and even recursive rules of TEA into a single unified rule, thereby requiring the storage of only the even-indexed columns. This streamlined computational process not only reduces storage requirements and algorithmic complexity but also enhances overall efficiency, leading to improved performance in large-scale data processing.
For the scalar sequence $\{\langle y, S_n\rangle\}$, based on the relationship in Equation (28) between the scalar Shanks transformation and the SEA, the following holds:
$$\varepsilon_{2k}^{(n)} = e_k\bigl(\langle y, S_n\rangle\bigr), \qquad \varepsilon_{2k+1}^{(n)} = \frac{1}{e_k\bigl(\langle y, \Delta S_n\rangle\bigr)}. \qquad (37)$$
From Equations (35) and (37), the following holds:
$$\hat\varepsilon_{2k+1}^{(n+1)} - \hat\varepsilon_{2k+1}^{(n)} = \frac{y}{e_k\bigl(\langle y, \Delta S_{n+1}\rangle\bigr)} - \frac{y}{e_k\bigl(\langle y, \Delta S_n\rangle\bigr)} = y\,\bigl(\varepsilon_{2k+1}^{(n+1)} - \varepsilon_{2k+1}^{(n)}\bigr).$$
Therefore, the recursive rule for the even-indexed sequence in TEA1 can be rewritten as
$$\hat\varepsilon_{2k+2}^{(n)} = \hat\varepsilon_{2k}^{(n+1)} + \frac{\hat\varepsilon_{2k}^{(n+1)} - \hat\varepsilon_{2k}^{(n)}}{\bigl\langle y,\ \hat\varepsilon_{2k}^{(n+1)} - \hat\varepsilon_{2k}^{(n)}\bigr\rangle\,\bigl(\varepsilon_{2k+1}^{(n+1)} - \varepsilon_{2k+1}^{(n)}\bigr)}, \quad k, n = 0, 1, \dots.$$
Then, by combining the recursive rules in Equations (27) and (34), the following four equivalent forms for the even-indexed elements of the TEA1 algorithm can be derived:
$$\begin{aligned} \text{STEA1-1:}\quad \hat\varepsilon_{2k+2}^{(n)} &= \hat\varepsilon_{2k}^{(n+1)} + \frac{1}{\bigl(\varepsilon_{2k+1}^{(n+1)} - \varepsilon_{2k+1}^{(n)}\bigr)\bigl(\varepsilon_{2k}^{(n+1)} - \varepsilon_{2k}^{(n)}\bigr)}\,\bigl(\hat\varepsilon_{2k}^{(n+1)} - \hat\varepsilon_{2k}^{(n)}\bigr),\\ \text{STEA1-2:}\quad \hat\varepsilon_{2k+2}^{(n)} &= \hat\varepsilon_{2k}^{(n+1)} + \frac{\varepsilon_{2k+1}^{(n)} - \varepsilon_{2k-1}^{(n+1)}}{\varepsilon_{2k+1}^{(n+1)} - \varepsilon_{2k+1}^{(n)}}\,\bigl(\hat\varepsilon_{2k}^{(n+1)} - \hat\varepsilon_{2k}^{(n)}\bigr),\\ \text{STEA1-3:}\quad \hat\varepsilon_{2k+2}^{(n)} &= \hat\varepsilon_{2k}^{(n+1)} + \frac{\varepsilon_{2k+2}^{(n)} - \varepsilon_{2k}^{(n+1)}}{\varepsilon_{2k}^{(n+1)} - \varepsilon_{2k}^{(n)}}\,\bigl(\hat\varepsilon_{2k}^{(n+1)} - \hat\varepsilon_{2k}^{(n)}\bigr),\\ \text{STEA1-4:}\quad \hat\varepsilon_{2k+2}^{(n)} &= \hat\varepsilon_{2k}^{(n+1)} + \bigl(\varepsilon_{2k+1}^{(n)} - \varepsilon_{2k-1}^{(n+1)}\bigr)\bigl(\varepsilon_{2k+2}^{(n)} - \varepsilon_{2k}^{(n+1)}\bigr)\,\bigl(\hat\varepsilon_{2k}^{(n+1)} - \hat\varepsilon_{2k}^{(n)}\bigr). \end{aligned}$$
Here, $\hat\varepsilon_0^{(n)} = S_n \in E$, $n = 0, 1, \dots$. Similarly, the following four mutually equivalent recursive formulas for the even-indexed elements of TEA2 can be derived:
$$\begin{aligned} \text{STEA2-1:}\quad \tilde\varepsilon_{2k+2}^{(n)} &= \tilde\varepsilon_{2k}^{(n+1)} + \frac{1}{\bigl(\varepsilon_{2k+1}^{(n+1)} - \varepsilon_{2k+1}^{(n)}\bigr)\bigl(\varepsilon_{2k}^{(n+2)} - \varepsilon_{2k}^{(n+1)}\bigr)}\,\bigl(\tilde\varepsilon_{2k}^{(n+2)} - \tilde\varepsilon_{2k}^{(n+1)}\bigr),\\ \text{STEA2-2:}\quad \tilde\varepsilon_{2k+2}^{(n)} &= \tilde\varepsilon_{2k}^{(n+1)} + \frac{\varepsilon_{2k+1}^{(n+1)} - \varepsilon_{2k-1}^{(n+2)}}{\varepsilon_{2k+1}^{(n+1)} - \varepsilon_{2k+1}^{(n)}}\,\bigl(\tilde\varepsilon_{2k}^{(n+2)} - \tilde\varepsilon_{2k}^{(n+1)}\bigr),\\ \text{STEA2-3:}\quad \tilde\varepsilon_{2k+2}^{(n)} &= \tilde\varepsilon_{2k}^{(n+1)} + \frac{\varepsilon_{2k+2}^{(n)} - \varepsilon_{2k}^{(n+1)}}{\varepsilon_{2k}^{(n+2)} - \varepsilon_{2k}^{(n+1)}}\,\bigl(\tilde\varepsilon_{2k}^{(n+2)} - \tilde\varepsilon_{2k}^{(n+1)}\bigr),\\ \text{STEA2-4:}\quad \tilde\varepsilon_{2k+2}^{(n)} &= \tilde\varepsilon_{2k}^{(n+1)} + \bigl(\varepsilon_{2k+1}^{(n+1)} - \varepsilon_{2k-1}^{(n+2)}\bigr)\bigl(\varepsilon_{2k+2}^{(n)} - \varepsilon_{2k}^{(n+1)}\bigr)\,\bigl(\tilde\varepsilon_{2k}^{(n+2)} - \tilde\varepsilon_{2k}^{(n+1)}\bigr). \end{aligned}$$
Here, $\tilde\varepsilon_0^{(n)} = S_n \in E$, $n = 0, 1, \dots$.
From the above derivation, it is evident that the generation of $e_k(S_n)$ in both STEA1 and STEA2 depends solely on the even-indexed sequence, with the odd-indexed sequence serving only as an auxiliary sequence. As a result, during the recursive processes of both simplified topological ε-algorithms, only the even-indexed elements need to be stored, eliminating the need to store the odd-indexed elements. Additionally, STEA1 and STEA2 reduce the number of duality product operations during recursion: the linear functional $y \in E^*$ is involved solely in the duality product with the initial sequence $S_n \in E$, generating a scalar sequence $\{\langle y, S_n\rangle\}$. Furthermore, the recursion for this scalar sequence $\{\langle y, S_n\rangle\}$ can be performed using the SEA, as shown in Equation (27).

3.4. Implementation of the ε -Algorithms

The core principle of the ε-algorithms is founded on the Shanks transformation, which leverages the linear difference structure of the sequence by forming specific linear combinations aimed at eliminating the dominant error terms during the convergence process. The algorithm evaluates these linear combinations following explicit recursive rules, thus accelerating the sequence's convergence.
The implementation of the ε-algorithms (SEA, VEA, TEA, and STEA) and the construction of the associated ε-table are most directly achieved by storing all elements corresponding to each pair of indices $k$ and $n$. This process begins with a prescribed number of terms in the first two columns and proceeds recursively, computing subsequent entries column by column. As each successive column contains one fewer element than the previous, the resulting structure forms the lower triangular part of the ε-table. However, this approach requires retaining all elements within the triangular region, which can lead to considerable memory overhead, especially in the case of vector- or matrix-valued sequences.
To address this limitation, a more memory-efficient strategy computes the terms of the original sequence incrementally and constructs the ε-table along ascending diagonals [21,22]. Specifically, after generating an initial triangular portion of the table and retaining only the last ascending diagonal, a new element of the original sequence $S_n$ is introduced, and the next ascending diagonal is computed iteratively. This approach requires storing only a single ascending diagonal ($e_i$) and three auxiliary temporary variables, thereby significantly reducing the storage demands.
$$\text{Step } i:\quad \begin{array}{l} 0\\ e_i = S_i\\ e_{i-1}\\ e_{i-2}\\ \vdots\\ e_2\\ e_1 \end{array} \qquad\Longrightarrow\qquad \text{Step } i+1:\quad \begin{array}{l} 0\\ e_{i+1} = S_{i+1}\\ e_i\\ e_{i-1}\\ \vdots\\ e_3\\ e_2\\ e_1 \end{array}$$
Implementations of the SEA, VEA, TEA, and STEA algorithms are provided in the MATLAB toolbox EPSfun [22]. In this paper, we present a more intuitive and transparent implementation of these algorithms. Specifically, Algorithms 1 and 2 illustrate the procedures for SEA and VEA, respectively. Although both algorithms share the same fundamental computational structure, they differ in how the inverse operation is treated; in particular, the inverse operation in VEA is defined as in Equation (29).
For a fixed acceleration window of width $k$, both SEA and VEA compute new elements incrementally along the ascending diagonals until the $2k$-th column is completed. In Algorithm 2, the inner product on $\mathbb{R}^m$ is the standard Euclidean inner product, namely $(x, y) = x^Ty$ for all $x, y \in \mathbb{R}^m$.
Algorithm 1 Scalar ε-algorithm (SEA)
Require: 2k + 1 elements of the scalar sequence {S_n}: S_0, S_1, …, S_{2k}, where k is the acceleration window width.
Ensure: Return the values W and e
1: e(1) = S_1
2: for i = 2 : 2k + 1 do
3:   N = 0; e(i) = S_i
4:   for j = i : −1 : 2 do
5:     d = e(j) − e(j − 1); W = N + 1/d
6:     N = e(j − 1); e(j − 1) = W
7:   end for
8: end for
Algorithm 2 Vector ε-algorithm (VEA)
Require: 2k + 1 elements of the vector sequence {S_n}: S_0, S_1, …, S_{2k}, where k is the acceleration window width.
Ensure: Return the values W and e
1: e(:, 1) = S_1
2: for i = 2 : 2k + 1 do
3:   N = zeros(n, 1)   % Initialize a zero vector of the same size as S_i.
4:   e(:, i) = S_i
5:   for j = i : −1 : 2 do
6:     d = e(:, j) − e(:, j − 1); W = N + d/(d^T d)
7:     N = e(:, j − 1); e(:, j − 1) = W
8:   end for
9: end for
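As a usage sketch (a toy example of ours, not from the paper), Algorithm 2 can be exercised on the linearly convergent iteration $x^{(n+1)} = Ax^{(n)} + b$, whose fixed point solves $(I - A)x = b$:

% Generate 2k+1 iterates of a contractive linear fixed-point map, then
% accelerate them with the VEA of Algorithm 2.
m = 50;  k = 4;
rng(1);
B = randn(m);  A = 0.9 * B / norm(B);    % contraction: ||A||_2 = 0.9
b = randn(m, 1);
xstar = (eye(m) - A) \ b;                % exact fixed point, for reference
S = zeros(m, 2*k + 1);                   % columns: x_0, ..., x_{2k}
for n = 2:2*k + 1
    S(:, n) = A * S(:, n - 1) + b;
end
% VEA along ascending diagonals; e(:, 1) ends up holding eps_{2k}^{(0)}.
e = zeros(m, 2*k + 1);
e(:, 1) = S(:, 1);
for i = 2:2*k + 1
    N = zeros(m, 1);  e(:, i) = S(:, i);
    for j = i:-1:2
        d = e(:, j) - e(:, j - 1);
        W = N + d / (d' * d);            % Samelson inverse (29)
        N = e(:, j - 1);  e(:, j - 1) = W;
    end
end
fprintf('||x_{2k} - x*|| = %.2e,  ||VEA - x*|| = %.2e\n', ...
        norm(S(:, end) - xstar), norm(e(:, 1) - xstar));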
Algorithm 3 presents the implementation of the TEA algorithm. It is important to note that TEA applies distinct computational rules to subsequences with odd and even indices. Specifically, the computational rules for odd-indexed terms are identical to those used in SEA and VEA, whereas the computation of even-indexed terms requires the introduction of additional elements. Since the construction of the ε-table proceeds in a top-down manner, it is sufficient to store only the elements along a single ascending diagonal. In TEA2, the additional elements required for computing even-indexed terms correspond to the newly introduced elements of the initial sequence, and therefore no extra storage is required. In contrast, in TEA1, the additional elements required for computing the even-indexed terms are not located on the current ascending diagonal; therefore, TEA1 must additionally store the even-indexed elements of the previous ascending diagonal.
Algorithm 3 Topological ε-algorithm (TEA)
Require: 2k + 1 elements of the sequence {S_n}: S_0, S_1, …, S_{2k}, where k is the acceleration window width.
Ensure: Return the values W and e
1: Select the appropriate duality pairing between the linear functional y and the ordered vector pair {u, y} ∈ E × E*, where E* denotes the algebraic dual space of E.
2: e(:, 1) = S_1
3: for i = 2 : 2k + 1 do
4:   N = zeros(n, 1); e(:, i) = S_i; counter = 1
5:   for j = i : −1 : 2 do
6:     d = e(:, j) − e(:, j − 1)
7:     if mod(counter, 2) == 1 then
8:       W = N + y/⟨y, d⟩
9:     else
10:      W = N + (e(:, j + 1) − N)/⟨e(:, j + 1) − N, d⟩
11:     end if
12:     N = e(:, j − 1); e(:, j − 1) = W; counter = counter + 1
13:   end for
14: end for
Algorithm 4 presents the implementation of the STEA method. Notably, the structure of STEA consists of two components: a scalar part and a vector part. The scalar component retains the diamond-shaped structure, while the computational rules for the vector component are adapted to a triangular form. In the scalar part, each new scalar value is obtained by computing the duality product between the newly introduced element of the sequence $S_n$ and the functional $y$, while the entire ascending diagonal is preserved during computation. In contrast, the vector component retains only the even-indexed elements along the ascending diagonal. As a result, the algorithm requires storing only $k$ vectors. The distinction between STEA1 and STEA2 is analogous to that between TEA1 and TEA2: STEA1 requires storing additional even-indexed elements of the previous ascending diagonal, while STEA2 avoids this, making it generally more memory-efficient. In the implementation provided in the literature [22], the scalar component of STEA is computed first using SEA, and the resulting scalar values are then incorporated into the STEA algorithm. In this paper, we integrate these two procedures into a unified approach. Algorithm 4 provides the detailed implementation of STEA2-3, while the other three equivalent variants can be obtained by modifying the index variable $j$ within the algorithm.
Algorithm 4 The simplified topological ε-algorithm (STEA)
Require: 2k + 1 elements of the sequence {S_n}: S_0, S_1, …, S_{2k}, where k is the acceleration window width.
Ensure: Return the values W and e
1: Select the appropriate duality pairing between the linear functional y and the ordered vector pair {u, y} ∈ E × E*, where E* denotes the algebraic dual space of E.
2: e1{1} = S_1;  e2(1) = ⟨y, e1{1}⟩;  iter = 0
3: for i = 2 : 2k + 1 do
4:   if mod(i, 2) == 0 then
5:     e1(1) = [ ]   % Delete the first element of the table and shift the subsequent elements forward.
6:     iter = iter + 1
7:   end if
8:   N = 0;  e1{i − iter} = S_i;  e2(i) = ⟨y, e1{i − iter}⟩;  counter = 1
9:   for j = i : −1 : 2 do
10:    d = e2(j) − e2(j − 1);  W = N + 1/d
11:    if mod(counter, 2) == 0 then
12:      J = (W − N)/(e2(j + 1) − N)
13:      index = j − iter + 0.5(counter − 2)
14:      e1{index} = e1{index} + J · (e1{index + 1} − e1{index})
15:    end if
16:    N = e2(j − 1);  e2(j − 1) = W;  counter = counter + 1
17:  end for
18: end for
Remark 1. 
The ε-acceleration algorithms (SEA, VEA, TEA, and STEA) do not require matrix decomposition or subproblem solving, which gives them a significant advantage over polynomial extrapolation methods (such as MPE, MMPE, and RRE) and Anderson acceleration.
Remark 2.
In the algebraic dual space $E^*$ of $E$, common choices of the linear functional $y$ and the corresponding duality product are detailed in the literature [21,22]. If the original sequence is a vector sequence $\{S_n\} \subset \mathbb{R}^m$ or a matrix sequence $\{S_n\} \subset \mathbb{R}^{m \times n}$, and considering that the vector space $\mathbb{R}^m$ and the matrix space $\mathbb{R}^{m \times n}$ are algebraically self-dual, the following selections are made (see the sketch after this list):
  • $E = \mathbb{R}^m$: typically $y = \operatorname{ones}(m, 1)$, i.e., the $m$-dimensional vector with all elements equal to 1, and the duality product is defined as $\langle y, S_n\rangle = (y, S_n) = y^TS_n$;
  • $E = \mathbb{R}^{n \times n}$: typically $y = I_n$, and the duality product is defined as $\langle y, S_n\rangle = \operatorname{trace}(S_n)$;
  • $E = \mathbb{R}^{m \times n}$: typically $y = \operatorname{ones}(m, n)$, and the duality product is defined as $\langle y, S_n\rangle = \operatorname{trace}(y^TS_n)$.
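For instance, the three choices above can be realized in MATLAB as follows (a minimal illustration of ours):

% Duality products <y, S> for the three choices in Remark 2.
S_vec = randn(5, 1);   y_vec = ones(5, 1);    % E = R^m
dp_vec = y_vec' * S_vec;                      % <y, S> = y'*S
S_sq   = randn(4);     y_sq   = eye(4);       % E = R^{n x n}, y = I_n
dp_sq  = trace(S_sq);                         % <y, S> = trace(S)
S_rec  = randn(3, 2);  y_rec  = ones(3, 2);   % E = R^{m x n}
dp_rec = trace(y_rec' * S_rec);               % <y, S> = trace(y'*S)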

3.5. Combining ε-Algorithms with Fixed-Point Iterations to Solve Problem (5)

Given that the sequence generated by the fixed-point iteration in Equation (19) for solving the GIPSCAL problem is a matrix sequence, and that the operator $H_{\mathrm{GIPSCAL}}: \mathbb{R}^{n \times r} \to \mathbb{R}^{n \times r}$ is a nonlinear mapping, we adopt a restart-based acceleration strategy for applying the ε-algorithms. This approach follows the polynomial extrapolation acceleration framework for vector sequences proposed in [35].
In practical implementations of the acceleration algorithm, a delayed-start strategy is commonly employed to prevent premature acceleration and improve overall performance. Specifically, the basic fixed-point iteration (19) is first executed for a fixed number of steps, or until the matrix sequence $\{(Q^{(k)}, D_1^{(k)}, \dots, D_N^{(k)}, K_1^{(k)}, \dots, K_N^{(k)})\}$ reaches a specified level of initial accuracy. Only after this preliminary phase is the acceleration algorithm applied. The detailed implementation of the ε-algorithm-based iterative acceleration for solving the three-way GIPSCAL problem (5) is presented in Algorithm 5.
Algorithm 5 VEA, TEA, and STEA accelerated fixed-point iterations for solving the three-way GIPSCAL problem (5)
Require: N asymmetric n × n matrices X_1, …, X_N, an initial iterate Q^{(0)} ∈ O(n, r), and the window width parameter k.
1: Basic iteration: Initialize the iteration matrix S_0^{(0)} to Q^{(0)}, so that Z_0^{(0)} = S_0^{(0)}, and perform κ iterations of the nonlinear map H_GIPSCAL to obtain Z_1, Z_2, …, Z_κ.
2: Extrapolation acceleration: Use Z_κ as the initial value S_0 of the acceleration, carry out 2k further iterations to obtain S_0, S_1, …, S_{2k}, and use these 2k + 1 terms as the history of iterates required for extrapolation. Apply the VEA, TEA, or STEA algorithm to generate S_{2k}^{(0)}, which is denoted Q* after re-orthogonalization by an "economy-size" SVD.
3: Convergence check and iterative update: Compute D_i* and K_i* according to Equations (13) and (14), and check whether [Q*, D_1*, …, D_N*, K_1*, …, K_N*] satisfies the termination condition; if it does not, set Q^{(0)} = Q* and return to Step 1, continuing until convergence.
4: return Q*, D_1*, …, D_N*, K_1*, …, K_N*
The termination criterion of Algorithm 5 is based on the first-order optimality conditions of problem (5) as presented in [10]. Note also that the Stiefel manifold $O(n,r)$ is an embedded submanifold of the Euclidean space $\mathbb{R}^{n \times r}$, as discussed in [36], while $\mathcal{D}_+(r)$ is a convex set and $\mathcal{K}(r)$ is a linear subspace. The termination criterion can be formulated as follows:
$$\text{Error} = \Bigl(\bigl\|P_{T_{Q^{(k)}}O(n,r)}\nabla_Q f\bigl(Z^{(k)}\bigr)\bigr\|^2 + \sum_{i=1}^N\bigl\|P_{\mathcal{K}(r)}\nabla_{K_i} f_i\bigl(Q^{(k)}, D_i^{(k)}, K_i^{(k)}\bigr)\bigr\|^2 + \sum_{i=1}^N\bigl\|P_{\mathcal{D}_+(r)}\bigl[D_i^{(k)} - \nabla_{D_i} f_i\bigl(Q^{(k)}, D_i^{(k)}, K_i^{(k)}\bigr)\bigr] - D_i^{(k)}\bigr\|^2\Bigr)^{1/2} \le \epsilon. \qquad (39)$$
Here, $\epsilon$ is a predefined accuracy threshold, and $P_{T_{Q^{(k)}}O(n,r)}$ denotes the orthogonal projection onto the tangent space $T_{Q^{(k)}}O(n,r)$ at the point $Q^{(k)}$. As shown in [9,36], for any matrix $M \in \mathbb{R}^{n \times r}$ and any $Q \in O(n,r)$, the projection is given by
$$P_{T_QO(n,r)}(M) = M - Q\operatorname{sym}\bigl(Q^TM\bigr), \qquad (40)$$
where $\operatorname{sym}(A) = \tfrac12(A + A^T)$ denotes the symmetric part of the matrix $A$. For the implementation of Algorithm 5, the following remarks are introduced:
Remark 3. 
The selection of the linear functional $y$ and the corresponding duality product in the dual space $E^*$ should follow the approach outlined in Remark 2.
Remark 4. 
The TEA and STEA methods presented in Algorithm 5 can be applied directly to the matrix sequence generated by the fixed-point iteration in Equation (19). However, if Algorithm 5 employs VEA for sequence acceleration, a combination of matrix vectorization (straightening) and inverse vectorization (inverse straightening) operators is needed. Specifically, for the $2k+1$ matrices $S_0, S_1, \dots, S_{2k}$ involved in each single-step acceleration cycle, the matrix straightening operator is applied to obtain the corresponding $2k+1$ vectors $s_0, s_1, \dots, s_{2k}$. VEA is then applied to these vectors to compute the accelerated vector $s_{2k}^{(0)}$, which is subsequently converted back to matrix form via the inverse straightening operator, resulting in the matrix $S_{2k}^{(0)}$. An "economy-size" singular value decomposition (SVD) is performed on $S_{2k}^{(0)}$ to reorthogonalize it, producing the updated matrix $Q^*$. Finally, Step 3 of Algorithm 5 is executed to continue the iterative process.
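A minimal sketch of the straightening and reorthogonalization steps described in Remark 4 (the variable names and the helper function vea, a hypothetical wrapper around Algorithm 2, are our own):

% Vectorize the 2k+1 matrix iterates, accelerate with VEA, then restore
% matrix form and reorthogonalize with an economy-size SVD.
% Smat: n-by-r-by-(2k+1) array holding the iterates S_0, ..., S_{2k}.
[n, r, ~] = size(Smat);
Svec = reshape(Smat, n * r, []);     % straightening: column j is vec(S_{j-1})
s_acc = vea(Svec, k);                % VEA as in Algorithm 2, returning eps_{2k}^{(0)}
S_acc = reshape(s_acc, n, r);        % inverse straightening
[U, ~, V] = svd(S_acc, 'econ');      % "economy-size" SVD reorthogonalization
Qstar = U * V';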

4. Numerical Experiments

In this section, we present a comprehensive numerical evaluation of the proposed ε-algorithm-accelerated fixed-point iterations for solving the three-way GIPSCAL problem in Equation (5). We begin by comparing the original fixed-point iteration with its accelerated variants. Additionally, we benchmark these methods against the continuous-time projected gradient flow algorithm introduced by Trendafilov [10,13] as well as several state-of-the-art first- and second-order Riemannian optimization algorithms from the MATLAB toolbox Manopt [9,20]. All experiments were conducted on a standard desktop computer equipped with an Intel(R) Core(TM) i7-13620H CPU (2.40 GHz) and 16.00 GB of RAM, running MATLAB R202b.
To enable controlled and diverse benchmarking, we generated a collection of $N$ square, asymmetric data matrices $X_i \in \mathbb{R}^{n \times n}$ ($i = 1, \dots, N$) using a factorial design approach inspired by Takane et al. [37], originally developed for orthogonal INDSCAL problems [9,17]. This setup includes three types of datasets: one purely random and two structured variants. In the random setting, each entry of $X_i$ is drawn independently from a standard normal distribution, i.e., $X_i = \operatorname{randn}(n, n)$. Such unstructured randomness often leads to large fit errors in the objective function of problem (5). For the structured datasets, we construct each slice as $X_i = Q(\check D_i + K_i)Q^T + E_i$, where $Q \in \mathbb{R}^{n \times r}$, $\check D_i \in \mathbb{R}^{r \times r}$, and $K_i \in \mathbb{R}^{r \times r}$ are randomly generated, and $E_i$ denotes an additive noise matrix. The matrix $Q$ is populated with uniformly distributed random entries from $[0, 1]$ via rand(n, r) and column-orthogonalized using singular value decomposition (SVD). The diagonal entries of $\check D_i$ are sampled from a standard normal distribution. Each $K_i$ is drawn from the uniform distribution on $[0, 1]$ and skew-symmetrized as $K_i \leftarrow \tfrac12(K_i - K_i^T)$. To evaluate robustness under varying structural assumptions, we consider two structured variants: one allowing potentially negative diagonal elements in $\check D_i$ (indefinite case), and the other enforcing nonnegativity by taking elementwise absolute values (nonnegative definite, or nnd, case). The disturbance terms $E_i$ are sampled from a normal distribution with zero mean and variance $\sigma^2$, where $\sigma$ is set to 10% of the standard deviation of the structural term $Q(\check D_i + K_i)Q^T$. To better reflect practical scenarios, the number of "subjects" $N$ varies between 20 and 50, while the number of "stimuli" $n$ is capped at 200. For effective visualization in multidimensional scaling (MDS), the target dimensionality $r$ is set to three representative levels: $r = 2, 3$, and $5$.
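The structured data generation described above can be reproduced with the following sketch (variable names are ours; take elementwise absolute values of the diagonals for the nnd case, or use X{i} = randn(n) for the purely random case):

% Generate N structured slices X_i = Q*(Dchk_i + K_i)*Q' + E_i (ind case).
n = 100;  r = 3;  N = 20;
[Q, ~, ~] = svd(rand(n, r), 'econ');     % column-orthonormalize Q via SVD
X = cell(N, 1);
for i = 1:N
    Dchk = diag(randn(r, 1));            % indefinite diagonal part
    K = rand(r);  K = (K - K') / 2;      % skew-symmetrization
    T = Q * (Dchk + K) * Q';             % structural term
    sigma = 0.1 * std(T(:));             % noise level: 10% of std of the term
    X{i} = T + sigma * randn(n);
end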
The initial iterate $Q^{(0)} \in O(n,r)$, used in both the original and accelerated fixed-point iterations, is computed using a structured initialization strategy consistent with the recommendations of [10,17]. Specifically, we perform an eigenvalue decomposition (EVD) of the symmetric component of the aggregated data, $\sum_{i=1}^N \tfrac12(X_i + X_i^T) = P\Lambda P^T$, where the eigenvalues in $\Lambda$ are sorted in descending order. The first $r$ eigenvectors, denoted $P_r$, are used to initialize $Q^{(0)} := P_r$. The corresponding initial values of $\tilde D_i^{(0)}$ and $K_i^{(0)}$, required by both the Riemannian optimization framework and the projected gradient flow method used to solve the equivalent product-manifold-constrained optimization problem in Equation (42), are given by
$$\tilde D_i^{(0)} = \Bigl[\max\Bigl\{\operatorname{diag}\Bigl(\tfrac12 Q^{(0)T}\bigl(X_i + X_i^T\bigr)Q^{(0)}\Bigr),\ 0\Bigr\}\Bigr]^{1/2}, \qquad K_i^{(0)} = \tfrac12 Q^{(0)T}\bigl(X_i - X_i^T\bigr)Q^{(0)}, \qquad i = 1, \dots, N. \qquad (41)$$
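The initialization strategy can be expressed compactly as follows (a sketch in our notation):

% Structured initialization: EVD of the aggregated symmetric part.
Xsym = zeros(n);
for i = 1:N, Xsym = Xsym + (X{i} + X{i}') / 2; end
[P, L] = eig(Xsym);                        % eigenvectors P, eigenvalues in L
[~, idx] = sort(diag(L), 'descend');       % sort eigenvalues descending
Q0 = P(:, idx(1:r));                       % Q^{(0)} = leading r eigenvectors
D0 = cell(N, 1);  K0 = cell(N, 1);
for i = 1:N
    B = Q0' * (X{i} + X{i}') * Q0 / 2;
    D0{i} = diag(sqrt(max(diag(B), 0)));   % Dtilde_i^{(0)} as in (41)
    K0{i} = Q0' * (X{i} - X{i}') * Q0 / 2; % K_i^{(0)} as in (41)
end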
Due to the matrix-valued nature of the fixed-point sequence, storage constraints require relatively small window sizes $k$ for ε-algorithm acceleration. To evaluate the impact of window width on acceleration performance, we experiment with $k = 4, 5$, and $6$, selecting the value that produces the best empirical performance. To further enhance the reliability of the acceleration, particularly in cases where the base fixed-point iteration converges slowly, we adopt a delayed-start strategy. Under this scheme, the fixed-point algorithm is executed until an intermediate solution $[Q^{(j)}, D_1^{(j)}, \dots, D_N^{(j)}, K_1^{(j)}, \dots, K_N^{(j)}]$ satisfies a specified error threshold, $\text{Error} \le 10^{-1}$. Only then is the ε-algorithm-based acceleration activated. In our experiments, the core implementations of the various ε-algorithms (including SEA, VEA, TEA, and STEA) are provided by the MATLAB toolbox EPSfun, as described in [22]; the code can be downloaded from http://www.netlib.org/numeralgo/. In this experiment, the topological ε-algorithm uses the TEA2 version, while the simplified topological ε-algorithm uses the STEA2-3 version. For both the original fixed-point iteration and each of the acceleration algorithms, different termination thresholds are applied depending on the data generation method for $X_i$. In the case of RAND-generated data, convergence is relatively slow; therefore, the termination tolerance is set to $10^{-6}$. For the NND and IND cases, which typically converge faster, a tighter tolerance of $10^{-8}$ is used. The computation of Error follows the definition given in Equation (39); in all subsequent numerical results, the reported Error values are consistently calculated using this formula.

4.1. Numerical Comparison of Fixed-Point Acceleration Methods

This section presents a comprehensive numerical comparison between the original fixed-point iteration (denoted FPI) and several accelerated variants. These include ε-algorithm-based methods (FPI-VEA, FPI-TEA, and FPI-STEA), polynomial extrapolation techniques (FPI-MPE, FPI-RRE, and FPI-MMPE), and Anderson acceleration (FPI-Anderson). The experimental results, summarized in Table 1, span a range of configurations, including different data generation strategies for the matrices $X_i$, system dimensions $[n, r]$, number of components $N$, and acceleration window widths $k$. In the table, IT denotes the total number of iterations to reach convergence, CPU refers to the total computation time in seconds, and Error represents the final residual error. It is important to note that for all acceleration schemes—including FPI-VEA, FPI-TEA, FPI-STEA, FPI-MPE, FPI-RRE, FPI-MMPE, and FPI-Anderson—the reported iteration count (IT) includes the initial delayed-start iterations, the base iterations required to form the extrapolation sequence, and the subsequent accelerated iterations.
From Table 1, we observe that to achieve equivalent termination accuracy, the polynomial extrapolation method FPI-MPE generally outperforms the ε-based methods in terms of iteration count and runtime. This performance gap is attributable to the number of sequence elements each method requires: for a given window size $k$, polynomial extrapolation methods use $k+1$ vectors from the underlying sequence, while ε-algorithms require $2k+1$ vectors [35]. Nonetheless, the ε-based methods (FPI-VEA, FPI-TEA, and FPI-STEA) demonstrate consistent acceleration across different system sizes and window widths, exhibiting good performance scalability.
Theorem 6.8 in Section 6 of [21] provides a theoretical analysis of how the window width $k$ affects the convergence rate achieved by the STEA algorithm: as $k$ increases, the order of convergence of the accelerated sequence improves. The result reads as follows:
Theorem 4 
([21]). We consider sequences of the form
$$S_n - S \sim \sum_{i=1}^{\infty} a_i\lambda_i^n u_i \quad (n \to \infty) \qquad \text{or} \qquad S_n - S \sim (-1)^n\sum_{i=1}^{\infty} a_i\lambda_i^n u_i \quad (n \to \infty),$$
where $a_i, \lambda_i \in \mathbb{K}$, $u_i \in E$, and $1 > \lambda_1 > \lambda_2 > \cdots > 0$. Then, when $k$ is fixed and $n$ tends to infinity,
$$\hat\varepsilon_{2k}^{(n)} - S = O\bigl(\lambda_{k+1}^n\bigr), \qquad \frac{\hat\varepsilon_{2k+2}^{(n)} - S}{\hat\varepsilon_{2k}^{(n)} - S} = O\bigl(\bigl(\lambda_{k+2}/\lambda_{k+1}\bigr)^n\bigr).$$
Table 1 demonstrates that the choice of window width k significantly influences convergence behavior. While increasing k generally improves acceleration by leveraging more sequence information, it also raises the per-iteration computational cost. To strike a balance between efficiency and overhead, the acceleration window is fixed at k = 5 for subsequent experiments.
Figure 3 illustrates the evolution of the error norm $\log_{10}\text{Error}$ as a function of iteration time across different system configurations. The results are presented for three data-generation scenarios: nonnegative definite (NND), indefinite (IND), and fully random (RAND), with $k = 5$ held fixed throughout. For data generated via the NND and IND schemes, the underlying fixed-point sequences exhibit relatively fast convergence. In these cases, extrapolation is applied immediately, without delay. As seen in the top four subplots of Figure 3 (the top two corresponding to NND and the next two to IND), applying ε-acceleration after the initial $2k+1$ base iterations leads to a substantial and rapid drop in the residual error. In contrast, for RAND-generated data, where the underlying sequence converges more slowly, a delayed-start strategy is employed. The final six subplots of Figure 3 show the evolution of $\log_{10}\text{Error}$ for RAND matrices across varying system sizes. These results confirm that even under more challenging conditions, the accelerated methods remain effective and yield significant convergence improvements.
The numerical experiments collectively demonstrate that ε-based acceleration methods achieve convergence performance comparable to polynomial extrapolation and Anderson acceleration when applied to the three-way GIPSCAL problem (5). A noteworthy advantage of the ε-algorithms is their ability to operate directly on matrix sequences without the vectorization ("straightening") and inverse reshaping procedures that polynomial and Anderson-based methods typically require. In addition, ε-methods avoid costly matrix factorizations and auxiliary subproblem solves, resulting in simpler implementation and reduced computational overhead.

4.2. Comparison with Riemannian Optimization Methods in Manopt

In 2021, Trendafilov and Gallo [9] proposed a unified framework for classical multivariate data analysis models by leveraging the geometric structure of matrix manifolds. They further introduced a computational approach for the three-way GIPSCAL problem based on the Riemannian optimization toolbox Manopt [20]. In this section, we present a numerical comparison between the proposed fixed-point iteration acceleration methods (VEA, TEA, and STEA) and the existing optimization algorithms implemented in the Manopt toolbox. To enable the application of Riemannian optimization techniques, the original three-way GIPSCAL problem (5) is reformulated as a constrained optimization problem defined on a product manifold, as follows:
Minimize 1 2 i = 1 N X i Q D ˜ i 2 + K i Q T 2 , subject to ( Q , D ˜ 1 , , D ˜ N , K 1 , , K N ) M : = O ( n , r ) × D ( r ) N × K ( r ) N .
Here, D(r) denotes the linear subspace of diagonal r × r matrices. Through straightforward algebraic derivation, the Euclidean gradient of the objective function f̃ in (42) with respect to Z = (Q, D̃_1, …, D̃_N, K_1, …, K_N) ∈ R^{n×r} × S(r)^N × K(r)^N can be expressed componentwise as
∇_Q f̃(Z) = Σ_{i=1}^N [ −2 sym(X_i) Q D̃_i^2 + 2 skew(X_i) Q K_i + 2 Q ( D̃_i^4 − K_i^2 ) ],
∇_{D̃_i} f̃(Z) = −( D̃_i Q^T X_i Q + Q^T X_i Q D̃_i ) + 2 D̃_i^3 + D̃_i K_i + K_i D̃_i,  i = 1, …, N,
∇_{K_i} f̃(Z) = D̃_i^2 + K_i − Q^T X_i Q,  i = 1, …, N.
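As a sanity check on these formulas, the following sketch (our own illustration, for a single slab X; the sum over i = 1, …, N is analogous) evaluates the objective and its componentwise Euclidean gradient. The Q-component uses Q^T Q = I_r, as holds on the manifold, and the result can be verified against finite differences.

function [f, gQ, gD, gK] = gipscal_egrad(X, Q, D, K)
% Objective 0.5*||X - Q(D^2 + K)Q'||_F^2 and its Euclidean gradient for a
% single slab; D is diagonal, K is skew-symmetric, and Q'Q = I is assumed
% when simplifying the Q-gradient.
    symX  = (X + X') / 2;
    skewX = (X - X') / 2;
    R  = X - Q * (D^2 + K) * Q';               % residual
    f  = 0.5 * norm(R, 'fro')^2;
    M  = Q' * X * Q;
    gQ = -2 * symX * Q * D^2 + 2 * skewX * Q * K + 2 * Q * (D^4 - K^2);
    gD = -(D * M + M * D) + 2 * D^3 + D * K + K * D;
    gK = D^2 + K - M;
end

Here gD lives in the ambient symmetric space; restricting it to D(r) amounts to keeping its diagonal, which is how the corresponding component of the Riemannian gradient below arises.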
Since M is an embedded submanifold of the ambient Euclidean space, the Riemannian gradient is obtained by orthogonally projecting the Euclidean gradient onto the tangent space T_Z M [36]. Using the known projection operator (40) onto the tangent space of the Stiefel manifold O(n, r), and recognizing that D(r) and K(r) are linear subspaces of S(r) and R^{r×r}, respectively, the Riemannian gradient of f̃ in (42) at Z ∈ M is given by
grad f̃(Z) = ( P_Q( Σ_{i=1}^N [ −2 sym(X_i) Q D̃_i^2 + 2 skew(X_i) Q K_i + 2 Q ( D̃_i^4 − K_i^2 ) ] ),
2 ( D̃_1^2 − Q^T sym(X_1) Q ) D̃_1, …, 2 ( D̃_N^2 − Q^T sym(X_N) Q ) D̃_N,
K_1 − Q^T skew(X_1) Q, …, K_N − Q^T skew(X_N) Q ).
The generic update step in a Riemannian optimization algorithm over M is given by
Z^{(k+1)} = R_{Z^{(k)}}( α_k ΔZ^{(k)} ),
where α_k is the step size, ΔZ^{(k)} = ( ξ^{(k)}, η_1^{(k)}, …, η_N^{(k)}, θ_1^{(k)}, …, θ_N^{(k)} ) ∈ T_{Z^{(k)}} M is the search direction, and R_Z is a retraction operator that maps tangent vectors back to the manifold. Specifically, the retraction is defined as
R_Z(ΔZ) = ( R_Q(ξ), D̃_1 + η_1, …, D̃_N + η_N, K_1 + θ_1, …, K_N + θ_N ),
where R_Q(ξ) is a retraction on the Stiefel manifold O(n, r). For conjugate gradient-type methods, vector transport operations are required as well. Given two tangent vectors ΔZ = (ξ_1, η_1, …, η_N, θ_1, …, θ_N) and ΔZ′ = (ξ_2, η′_1, …, η′_N, θ′_1, …, θ′_N) in T_Z M, the transport of ΔZ′ along ΔZ is defined by
T_{ΔZ}(ΔZ′) = ( T_{ξ_1}(ξ_2), η′_1, …, η′_N, θ′_1, …, θ′_N ),
where T_{ξ_1}(ξ_2) denotes vector transport on the Stiefel manifold O(n, r); on the linear factors D(r) and K(r), transport reduces to the identity. Our experiments use Manopt's default retraction and transport implementations (a minimal sketch of typical choices is given after the stopping criterion below). For second-order algorithms, the Riemannian Hessian is required as well; for brevity, its derivation is omitted, and detailed formulas can be found in Absil et al. [38]. For further literature on applying Riemannian optimization techniques to matrix optimization models arising in multidimensional scaling, see also [14,39,40]. Optimization terminates when the following first-order optimality condition is satisfied:
Error = ‖ grad f̃( Q^{(k)}, D̃_1^{(k)}, …, D̃_N^{(k)}, K_1^{(k)}, …, K_N^{(k)} ) ‖ ≤ 10^−6.  (44)
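For concreteness, the sketch below shows one standard pair of choices for R_Q and T_{ξ_1}: the QR-based ("qf") retraction on O(n, r) and a projection-based vector transport. This is our own illustration of typical defaults; Manopt's internal implementations may differ in details.

function Qnew = retr_stiefel(Q, xi)
% QR-based retraction on the Stiefel manifold O(n, r).
    [Qnew, Rf] = qr(Q + xi, 0);            % thin QR factorization
    d = sign(diag(Rf));  d(d == 0) = 1;    % resolve the sign ambiguity
    Qnew = Qnew * diag(d);
end

function zeta = transp_stiefel(Q, xi1, xi2)
% Projection-based transport of xi2 to the tangent space at R_Q(xi1),
% using P_Y(v) = v - Y*sym(Y'*v).
    Y = retr_stiefel(Q, xi1);
    W = Y' * xi2;
    zeta = xi2 - Y * (W + W') / 2;
end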
We benchmarked our accelerated fixed-point methods against the following Manopt solvers: Riemannian steepest descent (Manopt-RSD), conjugate gradient (Manopt-RCG), Barzilai-Borwein (Manopt-RBB), trust-region (Manopt-RTR), limited-memory BFGS (Manopt-RLBFGS), and adaptive regularization by cubics (Manopt-ARC). All solvers were executed using default parameter settings with a maximum iteration cap of 10,000. To ensure consistency, all methods were initialized from the same starting point, and both the Riemannian gradient and (when required) Hessian were supplied explicitly. Notably, Manopt defaults to finite-difference approximations when the Hessian is not provided. In contrast, our implementation consistently utilized exact second-order information.
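A minimal driver of the kind used in these comparisons might look as follows. This is our own sketch for a single slab (N = 1); extending to N slabs adds further factors to the product manifold, and the helper egrad_fun and the problem sizes are illustrative. It relies on Manopt's stiefelfactory, euclideanfactory, skewsymmetricfactory (assumed available in the installed Manopt version), productmanifold, and trustregions.

n = 30; r = 3;
X = randn(n);
comps.Q = stiefelfactory(n, r);
comps.d = euclideanfactory(r, 1);           % diagonal entries of Dtilde
comps.K = skewsymmetricfactory(r);
problem.M = productmanifold(comps);
problem.cost  = @(Z) 0.5 * norm(X - Z.Q*(diag(Z.d)^2 + Z.K)*Z.Q', 'fro')^2;
problem.egrad = @(Z) egrad_fun(X, Z);       % Manopt projects this to grad f
options.tolgradnorm = 1e-6;                 % matches criterion (44)
[Zopt, fopt] = trustregions(problem, [], options);   % Manopt-RTR

function g = egrad_fun(X, Z)
% Componentwise Euclidean gradient derived above (single slab).
    D = diag(Z.d);  M = Z.Q' * X * Z.Q;
    symX = (X + X')/2;  skewX = (X - X')/2;
    g.Q = -2*symX*Z.Q*D^2 + 2*skewX*Z.Q*Z.K + 2*Z.Q*(D^4 - Z.K^2);
    g.d = diag(-(D*M + M*D) + 2*D^3 + D*Z.K + Z.K*D);  % diagonal part
    g.K = D^2 + Z.K - M;
end

Note that without a problem.hess field, Manopt's second-order solvers fall back to finite-difference Hessian approximations, which is exactly the default behavior mentioned above; in our experiments the exact Hessian was supplied, and it is omitted here only for brevity.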
Table 2 summarizes performance across multiple problem sizes and input data types: completely random (RAND) and two structured categories (IND and NND). The performance metrics follow those in Table 1, with "CPU" denoting total runtime, "IT" the number of iterations, and "Obj" (the "Fvalue" of Table 1) the final objective value f or f̃. The accelerated fixed-point methods are applied to the original three-way GIPSCAL problem (5), while the Riemannian optimization algorithms solve the equivalent reformulation (42). The "Error" column records the norm associated with the respective stopping criterion: (39) for the accelerated fixed-point methods and (44) for the Riemannian solvers. Convergence trajectories are visualized in Figure 4, where the horizontal axis denotes runtime and the vertical axis plots log_10‖Error‖. The results clearly demonstrate that the proposed accelerated fixed-point methods outperform several Manopt solvers in terms of both convergence rate and computational efficiency. Notably, as shown by the solid pink-dotted line and the dashed black-dotted line in Figure 4, both Manopt-RBB and Manopt-RSD exhibit pronounced slowdowns in convergence near stationary points. Furthermore, Table 2 highlights the elevated computational cost of Manopt-RLBFGS, which is consistent with Manopt's internal implementation: its default memory size of 30 necessitates 30 projection-based vector transport operations per iteration, contributing to significant per-iteration overhead.

4.3. Comparison with the Projected Gradient Flow Method

Trendafilov [10,13] reformulated the equivalent three-way GIPSCAL problem (42) as a constrained dynamical system and proposed a continuous-time projected gradient flow algorithm. This method is globally convergent, conceptually simple, and broadly applicable to matrix optimization problems arising in multidimensional data analysis [9,15,16,17,18]. In this subsection, we conduct a numerical comparison between the proposed accelerated fixed-point algorithms—VEA, TEA, and STEA—and the projected gradient flow algorithm.
For a general constrained optimization problem of the form min X M E ( X ) , the projected gradient method generates a sequence of iterates { X t } via
X_{t+1} = π_M( X_t − h_t ∇E(X)|_{X = X_t} ),
where h t is the step size, and π M ( · ) denotes the orthogonal projection onto the feasible set M . The associated continuous-time version, known as the projected gradient flow, evolves along the negative projected gradient and is governed by the following differential equation:
dX(t)/dt = π_M( −∇E(X(t)) ),  X(0) = X_0 ∈ M.
For the equivalent reformulated three-way GIPSCAL problem (42), this yields the following system of ordinary differential equations:
dQ/dt = P_Q( Σ_{i=1}^N [ 2 sym(X_i) Q D̃_i^2 − 2 skew(X_i) Q K_i − 2 Q ( D̃_i^4 − K_i^2 ) ] ),
dD̃_i/dt = −2 ( D̃_i^2 − Q^T sym(X_i) Q ) D̃_i,  i = 1, …, N,
dK_i/dt = −K_i + Q^T skew(X_i) Q,  i = 1, …, N.
We employ MATLAB’s ode15s solver [41,42] to integrate this system numerically. The ode15s routine is a variable-step, variable-order implicit solver designed for stiff differential equations, from the Klopfenstein–Shampine family based on numerical differentiation formulas. We set both the absolute and relative error tolerances to 10^−12 to ensure highly accurate tracking of the flow dynamics. While such precision may exceed practical requirements in typical data analysis tasks, it facilitates a fair and rigorous comparison of algorithmic performance; looser tolerances would reduce runtime, but only marginally in this context. During integration, solution states are recorded at regular intervals of 10 time units, and integration terminates automatically when the relative decrease in the objective function between successive outputs falls below 10^−4, indicating proximity to a local minimum. This termination criterion, adopted from Loisel and Takane [19], is significantly more lenient than the convergence threshold used in the proposed accelerated fixed-point algorithms (VEA, TEA, and STEA).
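For reference, a condensed sketch of this integration for a single slab (N = 1) is given below. It is our own illustration: the names gipscal_flow, Q0, d0, and K0 are hypothetical, the data X and the sizes n, r are assumed to be in scope, and the objective-decrease stopping test described above is omitted.

opts = odeset('RelTol', 1e-12, 'AbsTol', 1e-12);
z0 = [Q0(:); d0; K0(:)];                   % stacked initial point on M
[tout, zout] = ode15s(@(t, z) gipscal_flow(z, X, n, r), 0:10:1e4, z0, opts);

function dz = gipscal_flow(z, X, n, r)
% Right-hand side of the projected gradient flow (single slab).
    Q = reshape(z(1:n*r), n, r);
    d = z(n*r+1 : n*r+r);
    K = reshape(z(n*r+r+1 : end), r, r);
    D = diag(d);
    symX = (X + X')/2;  skewX = (X - X')/2;
    G  = 2*symX*Q*D^2 - 2*skewX*Q*K - 2*Q*(D^4 - K^2);  % negative Euclidean Q-gradient
    W  = Q' * G;
    dQ = G - Q * (W + W')/2;               % P_Q: project onto T_Q O(n, r)
    dd = -2 * (d.^2 - diag(Q'*symX*Q)) .* d;
    dK = -K + Q'*skewX*Q;
    dz = [dQ(:); dd; dK(:)];
end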
Although the projected gradient flow algorithm is globally convergent and has a simple, intuitive structure that facilitates implementation, its efficiency tends to degrade on large-scale problems. Table 3 reports a detailed numerical comparison between the accelerated fixed-point iteration methods (FPI-VEA, FPI-TEA, and FPI-STEA) and the projected gradient flow approach (denoted PG-ODE), under a fixed acceleration window width of k = 5. The experiments cover varying coefficient dimensions and the three original data-generation schemes: NND, IND, and RAND. The definitions of CPU, IT, Obj (the final objective value, called Fvalue in Table 1), and Error are consistent with those in Table 1. The results in Table 3 demonstrate that, in terms of iteration efficiency, the ε-accelerated fixed-point iteration algorithms significantly outperform the projected gradient flow method.

5. Conclusions

This paper examines the Generalized Inner Product SCALing (GIPSCAL) model in multidimensional scaling from a numerical perspective, with a focus on individual differences among the observed objects. The model can be formulated as a multivariate constrained matrix optimization problem with column-orthogonality and nonnegative-diagonal constraints (problem (5)). Using an alternating least squares approach, the original model is first transformed into a matrix-based fixed-point iteration. By incorporating the ε-acceleration principle from vector sequence acceleration, we then design fixed-point iteration acceleration algorithms based on the vector ε-algorithm, the topological ε-algorithm, and the simplified topological ε-algorithm (denoted FPI-VEA, FPI-TEA, and FPI-STEA). Extensive numerical experiments show that, when solving the GIPSCAL problem (5), these ε-accelerated fixed-point iterations converge markedly faster than the original fixed-point iteration. Moreover, compared to existing algorithms for matrix optimization models in multidimensional data analysis, such as the Riemannian optimization algorithms in the Manopt toolbox and the classical projected gradient flow algorithm, the accelerated fixed-point iterations show a clear advantage in computation time. Improving the convergence speed of scalar, vector, matrix, or tensor sequences generated by iterative methods is of broad significance in scientific and engineering computing and in machine learning. It is worth noting that, unlike traditional polynomial extrapolation methods for vector sequences, the implementation of FPI-VEA, FPI-TEA, and FPI-STEA requires neither matrix flattening and unflattening operators nor matrix decompositions for solving related subproblems.

Author Contributions

Conceptualization, Y.Q. and J.L.; methodology, Y.Q.; validation, Y.Q., C.M., and J.L.; formal analysis, J.L.; writing—original draft preparation, Y.Q. and C.M.; writing—review and editing, Y.Q. and J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (12261026) and the National College Student Innovation and Entrepreneurship Training Program at Guilin University of Electronic Technology (202410595070).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Chino, N. A brief survey of asymmetric MDS and some open problems. Behaviormetrika 2012, 39, 127–165.
  2. Okada, A.; Imaizumi, T. Applied Multidimensional Scaling of Asymmetric Relationships; Springer: Berlin/Heidelberg, Germany, 2024.
  3. Harshman, R.A. Models for analysis of asymmetrical relationships among N objects or stimuli. In Proceedings of the First Joint Meeting of the Psychometric Society and the Society of Mathematical Psychology, Hamilton, ON, Canada, August 1978.
  4. Chino, N. A graphical technique for representing the asymmetric relationships between n objects. Behaviormetrika 1978, 5, 23–40.
  5. Kiers, H.A.; Takane, Y. A generalization of GIPSCAL for the analysis of nonsymmetric data. J. Classif. 1994, 11, 79–99.
  6. Cox, T.F.; Cox, M.A. Multidimensional Scaling; CRC Press: Boca Raton, FL, USA, 2000.
  7. Krzanowski, W. Principles of Multivariate Analysis; OUP Oxford: Oxford, UK, 2000; Volume 23.
  8. Constantine, A.; Gower, J.C. Graphical representation of asymmetric matrices. J. R. Stat. Soc. Ser. C (Appl. Stat.) 1978, 27, 297–304.
  9. Trendafilov, N.; Gallo, M. Multivariate Data Analysis on Matrix Manifolds (with Manopt); Springer: Berlin/Heidelberg, Germany, 2021.
  10. Trendafilov, N.T. GIPSCAL revisited. A projected gradient approach. Stat. Comput. 2002, 12, 135–145.
  11. Carroll, J.D.; Chang, J.-J. Analysis of individual differences in multidimensional scaling via an N-way generalization of “Eckart–Young” decomposition. Psychometrika 1970, 35, 283–319.
  12. Chino, N. GIPSCAL. In Structure and Dynamics of Asymmetric Interactions; Springer: Berlin/Heidelberg, Germany, 2025; pp. 77–100.
  13. Trendafilov, N.T. The dynamical system approach to multivariate data analysis. J. Comput. Graph. Stat. 2006, 15, 628–650.
  14. Zhou, X.-L.; Li, J.-F.; Li, C.-Q. An efficient algorithm for fitting the three-way GIPSCAL problem with missing values from asymmetric multidimensional scaling. Numer. Algorithms 2025, 1–42.
  15. Trendafilov, N.T. DINDSCAL: Direct INDSCAL. Stat. Comput. 2012, 22, 445–454.
  16. Trendafilov, N.T. Dynamical system approach to factor analysis parameter estimation. Br. J. Math. Stat. Psychol. 2003, 56, 27–46.
  17. Trendafilov, N.T. Orthonormality-constrained INDSCAL with nonnegative saliences. In Proceedings of the International Conference on Computational Science and Its Applications, Assisi, Italy, 14–17 May 2004; Springer: Berlin/Heidelberg, Germany, 2004; pp. 952–960.
  18. Trendafilov, N.T.; Jolliffe, I.T. Projected gradient approach to the numerical solution of the SCoTLASS. Comput. Stat. Data Anal. 2006, 50, 242–253.
  19. Loisel, S.; Takane, Y. Generalized GIPSCAL re-revisited: A fast convergent algorithm with acceleration by the minimal polynomial extrapolation. Adv. Data Anal. Classif. 2011, 5, 57–75.
  20. Boumal, N.; Mishra, B.; Absil, P.-A.; Sepulchre, R. Manopt, a Matlab toolbox for optimization on manifolds. J. Mach. Learn. Res. 2014, 15, 1455–1459.
  21. Brezinski, C.; Redivo-Zaglia, M. The simplified topological ε-algorithms for accelerating sequences in a vector space. SIAM J. Sci. Comput. 2014, 36, A2227–A2247.
  22. Brezinski, C.; Redivo-Zaglia, M. The simplified topological ε-algorithms: Software and applications. Numer. Algorithms 2017, 74, 1237–1260.
  23. Hiriart-Urruty, J.-B.; Lemaréchal, C. Convex Analysis and Minimization Algorithms I: Fundamentals; Springer Science and Business Media: Dordrecht, The Netherlands, 1996; Volume 305.
  24. Shanks, D. Non-linear transformations of divergent and slowly convergent sequences. J. Math. Phys. 1955, 34, 1–42.
  25. Schmidt, R.J. XXXII. On the numerical solution of linear simultaneous equations by an iterative method. Lond. Edinb. Dublin Philos. Mag. J. Sci. 1941, 32, 369–383.
  26. Wynn, P. On a device for computing the e_m(S_n) transformation. Math. Tables Other Aids Comput. 1956, 10, 91–96.
  27. Wynn, P. Acceleration techniques for iterated vector and matrix problems. Math. Comput. 1962, 16, 301–322.
  28. Brezinski, C. Padé-Type Approximation and General Orthogonal Polynomials; International Series of Numerical Mathematics; Birkhäuser Verlag: Basel, Switzerland, 1980; Volume 50.
  29. Brezinski, C. Généralisations de la transformation de Shanks, de la table de Padé et de l’ε-algorithme. Calcolo 1975, 12, 317–360.
  30. Jbilou, K.; Reichel, L.; Sadok, H. Vector extrapolation enhanced TSVD for linear discrete ill-posed problems. Numer. Algorithms 2009, 51, 195–208.
  31. Salam, A.; Graves-Morris, P.R. On the vector ε-algorithm for solving linear systems of equations. Numer. Algorithms 2002, 29, 229–247.
  32. Smith, D.A.; Ford, W.F.; Sidi, A. Erratum: Correction to extrapolation methods for vector sequences. SIAM Rev. 1988, 30, 623–624.
  33. Tan, R.C. Implementation of the topological ε-algorithm. SIAM J. Sci. Stat. Comput. 1988, 9, 839–848.
  34. Brezinski, C.; Redivo-Zaglia, M. Extrapolation Methods: Theory and Practice; Elsevier: Amsterdam, The Netherlands, 1991.
  35. Sidi, A. Vector Extrapolation Methods with Applications; SIAM: Philadelphia, PA, USA, 2017.
  36. Absil, P.-A.; Mahony, R.; Sepulchre, R. Optimization Algorithms on Matrix Manifolds; Princeton University Press: Princeton, NJ, USA, 2008.
  37. Takane, Y.; Jung, K.; Hwang, H. An acceleration method for Ten Berge et al.’s algorithm for orthogonal INDSCAL. Comput. Stat. 2010, 25, 409–428.
  38. Absil, P.-A.; Mahony, R.; Trumpf, J. An extrinsic look at the Riemannian Hessian. In Proceedings of the International Conference on Geometric Science of Information, Paris, France, 28–30 August 2013; Springer: Berlin/Heidelberg, Germany, 2013; pp. 361–368.
  39. Li, J.-F.; Zhou, J.; Zhou, X.-L.; Li, C.-Q.; Song, J.-S. A trust-region approach for iteration solution of the direct fitting metric MDS. BIT Numer. Math. 2025, 65, 339–375.
  40. Zhou, X.-L.; Li, C.-Q.; Li, J.-F.; Duan, X.-F. A Riemannian inexact Newton method for solving the orthogonal INDSCAL problem in multidimensional scaling. IMA J. Numer. Anal. 2025, draf047.
  41. Shampine, L.F.; Reichelt, M.W. The MATLAB ODE suite. SIAM J. Sci. Comput. 1997, 18, 1–22.
  42. Shampine, L.F.; Gladwell, I.; Thompson, S. Solving ODEs with MATLAB; Cambridge University Press: Cambridge, UK, 2003.
Figure 1. ε-table [22].
Figure 2. Computation rules for odd-indexed sequences and even-indexed sequences in TEA1 and TEA2 [22].
Figure 3. Comparison of the performance of the accelerated fixed-point iterations with the original method across varying problem sizes. The horizontal axis indicates the computation time (in seconds), while the vertical axis plots the logarithm of the error norm, log_10‖Error‖.
Figure 4. Comparison of the performance of the accelerated fixed-point iterations with the Riemannian optimization algorithms in the Manopt toolbox across varying problem sizes. The horizontal axis indicates the computation time (in seconds), while the vertical axis plots the logarithm of the error norm, log_10‖Error‖.
Table 1. Numerical comparison of the accelerated fixed-point iterations with the original method across various problem settings. For the original FPI, columns report IT, CPU (s), Error, and Fvalue; for each accelerated method, cells report IT/CPU at window widths k = 4, 5, 6.

Partially structured with NND weight matrices (NND)

N, [n, r] | FPI: IT, CPU, Error, Fvalue | FPI-VEA: k = 4, k = 5, k = 6 | FPI-TEA: k = 4, k = 5, k = 6
50, [45, 3] | 50, 0.059, 8.98 × 10^-9, 0.660 | 19/0.024, 17/0.020, 15/0.021 | 25/0.029, 30/0.037, 36/0.040
50, [45, 5] | 75, 0.094, 8.75 × 10^-9, 1.363 | 19/0.026, 23/0.035, 27/0.037 | 36/0.052, 34/0.052, 40/0.053
50, [100, 3] | 44, 0.247, 7.66 × 10^-9, 0.666 | 15/0.091, 13/0.072, 14/0.079 | 19/0.105, 21/0.116, 19/0.106
50, [100, 5] | 77, 0.444, 9.33 × 10^-9, 1.380 | 19/0.111, 23/0.135, 27/0.157 | 30/0.169, 41/0.233, 35/0.199
50, [200, 3] | 47, 0.880, 7.38 × 10^-9, 0.665 | 18/0.338, 16/0.290, 14/0.245 | 19/0.327, 23/0.408, 24/0.439
50, [200, 5] | 76, 1.449, 9.47 × 10^-9, 1.375 | 19/0.364, 23/0.435, 27/0.505 | 36/0.672, 32/0.609, 31/0.577

N, [n, r] | FPI-STEA: k = 4, k = 5, k = 6 | FPI-MPE: k = 4, k = 5, k = 6 | FPI-RRE: k = 4, k = 5, k = 6
50, [45, 3] | 32/0.043, 28/0.031, 37/0.045 | 16/0.019, 13/0.019, 15/0.017 | 15/0.016, 13/0.016, 15/0.019
50, [45, 5] | 34/0.045, 31/0.040, 35/0.043 | 24/0.032, 19/0.023, 22/0.030 | 26/0.032, 19/0.027, 22/0.027
50, [100, 3] | 19/0.106, 21/0.120, 19/0.109 | 12/0.067, 13/0.082, 12/0.076 | 13/0.072, 13/0.072, 14/0.078
50, [100, 5] | 33/0.187, 32/0.183, 38/0.219 | 21/0.121, 19/0.110, 17/0.097 | 21/0.121, 19/0.110, 19/0.108
50, [200, 3] | 19/0.339, 23/0.419, 24/0.428 | 14/0.251, 13/0.234, 15/0.273 | 13/0.233, 13/0.240, 15/0.269
50, [200, 5] | 41/0.759, 32/0.586, 30/0.554 | 22/0.420, 19/0.353, 20/0.376 | 21/0.401, 19/0.353, 19/0.354

N, [n, r] | FPI-MMPE: k = 4, k = 5, k = 6 | FPI-Anderson: k = 4, k = 5, k = 6
50, [45, 3] | 16/0.021, 13/0.016, 15/0.018 | 9/0.016, 9/0.012, 9/0.010
50, [45, 5] | 21/0.027, 19/0.026, 22/0.028 | 22/0.040, 16/0.020, 17/0.025
50, [100, 3] | 16/0.088, 13/0.072, 15/0.083 | 8/0.048, 8/0.045, 8/0.043
50, [100, 5] | 21/0.123, 19/0.110, 21/0.121 | 16/0.091, 15/0.085, 15/0.087
50, [200, 3] | 16/0.293, 13/0.238, 15/0.269 | 8/0.146, 8/0.138, 8/0.143
50, [200, 5] | 19/0.370, 19/0.350, 20/0.369 | 17/0.319, 15/0.271, 15/0.292

Partially structured with indefinite weight matrices (IND)

N, [n, r] | FPI: IT, CPU, Error, Fvalue | FPI-VEA: k = 4, k = 5, k = 6 | FPI-TEA: k = 4, k = 5, k = 6
30, [30, 3] | 23, 0.013, 5.73 × 10^-9, 20.719 | 13/0.010, 12/0.009, 14/0.009 | 18/0.011, 13/0.008, 14/0.007
50, [30, 5] | 29, 0.026, 9.80 × 10^-9, 64.644 | 14/0.014, 13/0.016, 14/0.014 | 17/0.016, 14/0.012, 14/0.014
50, [150, 3] | 22, 0.312, 6.75 × 10^-9, 31.572 | 10/0.148, 12/0.162, 14/0.189 | 10/0.137, 12/0.161, 14/0.200
50, [150, 5] | 25, 0.351, 6.93 × 10^-9, 64.621 | 12/0.161, 12/0.167, 14/0.189 | 14/0.190, 14/0.205, 14/0.197
50, [300, 3] | 22, 0.768, 4.78 × 10^-9, 31.578 | 10/0.358, 12/0.431, 14/0.501 | 10/0.351, 12/0.428, 14/0.490
50, [300, 5] | 27, 0.955, 5.84 × 10^-9, 64.629 | 12/0.436, 12/0.425, 14/0.486 | 14/0.491, 12/0.428, 14/0.494

N, [n, r] | FPI-STEA: k = 4, k = 5, k = 6 | FPI-MPE: k = 4, k = 5, k = 6 | FPI-RRE: k = 4, k = 5, k = 6
30, [30, 3] | 18/0.014, 13/0.010, 14/0.009 | 12/0.008, 13/0.007, 11/0.005 | 12/0.006, 13/0.007, 12/0.006
50, [30, 5] | 17/0.019, 14/0.014, 14/0.014 | 16/0.016, 14/0.016, 15/0.015 | 16/0.014, 14/0.013, 15/0.016
50, [150, 3] | 10/0.147, 12/0.157, 14/0.196 | 10/0.137, 8/0.110, 8/0.107 | 11/0.151, 8/0.109, 8/0.111
50, [150, 5] | 14/0.197, 14/0.193, 14/0.195 | 15/0.201, 13/0.186, 14/0.186 | 15/0.213, 13/0.183, 14/0.197
50, [300, 3] | 10/0.358, 12/0.424, 14/0.498 | 11/0.389, 8/0.288, 8/0.284 | 11/0.387, 8/0.274, 8/0.281
50, [300, 5] | 14/0.521, 12/0.423, 14/0.499 | 15/0.550, 13/0.459, 13/0.473 | 15/0.550, 13/0.468, 13/0.470

N, [n, r] | FPI-MMPE: k = 4, k = 5, k = 6 | FPI-Anderson: k = 4, k = 5, k = 6
30, [30, 3] | 11/0.007, 13/0.009, 11/0.008 | 8/0.008, 8/0.006, 8/0.004
50, [30, 5] | 16/0.017, 14/0.013, 15/0.014 | 12/0.011, 12/0.012, 11/0.010
50, [150, 3] | 11/0.148, 11/0.153, 8/0.106 | 7/0.097, 7/0.090, 7/0.094
50, [150, 5] | 15/0.208, 13/0.183, 15/0.210 | 11/0.149, 11/0.147, 10/0.138
50, [300, 3] | 11/0.390, 8/0.279, 8/0.292 | 7/0.248, 7/0.251, 7/0.254
50, [300, 5] | 15/0.550, 13/0.476, 13/0.481 | 11/0.385, 10/0.360, 10/0.362

Completely random data sets (RAND)

N, [n, r] | FPI: IT, CPU, Error, Fvalue | FPI-VEA: k = 4, k = 5, k = 6 | FPI-TEA: k = 4, k = 5, k = 6
30, [25, 2] | 203, 0.130, 9.62 × 10^-7, 19,059.419 | 144/0.090, 144/0.072, 144/0.071 | 176/0.101, 172/0.083, 169/0.081
30, [25, 3] | 523, 0.246, 9.71 × 10^-7, 18,908.352 | 181/0.088, 180/0.087, 180/0.086 | 208/0.102, 206/0.098, 281/0.137
50, [40, 3] | 358, 0.349, 9.56 × 10^-7, 81,523.263 | 246/0.246, 248/0.244, 252/0.252 | 282/0.284, 270/0.267, 270/0.264
30, [40, 5] | 274, 0.173, 9.95 × 10^-7, 48,131.172 | 129/0.083, 131/0.088, 134/0.088 | 158/0.106, 180/0.117, 158/0.105
50, [60, 3] | 1211, 2.628, 9.87 × 10^-7, 182,304.537 | 678/1.489, 673/1.442, 669/1.430 | 797/1.706, 737/1.570, 689/1.477
50, [60, 5] | 485, 1.079, 9.94 × 10^-7, 181,536.539 | 247/0.546, 246/0.541, 248/0.549 | 317/0.709, 279/0.611, 271/0.574

N, [n, r] | FPI-STEA: k = 4, k = 5, k = 6 | FPI-MPE: k = 4, k = 5, k = 6 | FPI-RRE: k = 4, k = 5, k = 6
30, [25, 2] | 176/0.108, 172/0.079, 169/0.076 | 142/0.074, 142/0.068, 144/0.070 | 143/0.076, 142/0.071, 144/0.070
30, [25, 3] | 216/0.106, 213/0.100, 279/0.138 | 167/0.081, 168/0.080, 169/0.080 | 168/0.080, 168/0.081, 169/0.082
50, [40, 3] | 282/0.282, 270/0.266, 271/0.269 | 244/0.238, 238/0.230, 240/0.235 | 246/0.243, 240/0.233, 240/0.235
30, [40, 5] | 157/0.100, 180/0.118, 158/0.108 | 130/0.088, 131/0.089, 127/0.084 | 129/0.083, 131/0.085, 127/0.085
50, [60, 3] | 806/1.739, 760/1.628, 821/1.755 | 652/1.376, 648/1.362, 649/1.371 | 649/1.382, 645/1.374, 641/1.370
50, [60, 5] | 315/0.676, 283/0.613, 271/0.586 | 245/0.527, 238/0.499, 236/0.505 | 245/0.523, 243/0.509, 240/0.513

N, [n, r] | FPI-MMPE: k = 4, k = 5, k = 6 | FPI-Anderson: k = 4, k = 5, k = 6
30, [25, 2] | 144/0.072, 143/0.071, 144/0.071 | 145/0.083, 144/0.067, 144/0.065
30, [25, 3] | 169/0.081, 169/0.081, 169/0.079 | 200/0.098, 197/0.093, 190/0.094
50, [40, 3] | 251/0.239, 243/0.231, 244/0.234 | 255/0.243, 247/0.241, 248/0.245
30, [40, 5] | 132/0.089, 131/0.086, 127/0.083 | 141/0.093, 140/0.092, 140/0.096
50, [60, 3] | 650/1.382, 647/1.374, 641/1.358 | 690/1.474, 676/1.424, 669/1.417
50, [60, 5] | 248/0.526, 242/0.517, 240/0.504 | 266/0.569, 252/0.535, 255/0.547
Table 2. Numerical comparison of the accelerated fixed-point iterations and existing algorithms in the toolbox Manopt. Each row reports two problem instances side by side; columns are CPU (s), IT, Error, and Obj.

Partially structured with NND weight matrices (NND)

N, [n, r] = 30, [20, 2] (left); N, [n, r] = 20, [20, 3] (right)
Method | CPU | IT | Error | Obj | CPU | IT | Error | Obj
FPI-VEA | 0.006 | 12 | 3.4 × 10^-13 | 0.255 | 0.019 | 23 | 7.3 × 10^-12 | 0.305
FPI-TEA | 0.006 | 12 | 1.0 × 10^-13 | 0.255 | 0.014 | 23 | 2.7 × 10^-9 | 0.305
FPI-STEA | 0.006 | 12 | 8.2 × 10^-13 | 0.255 | 0.024 | 23 | 1.3 × 10^-9 | 0.305
RCG-Manopt | 55.670 | 151 | 8.5 × 10^-7 | 0.255 | 115.256 | 598 | 9.3 × 10^-7 | 0.305
RBB-Manopt | 92.000 | 306 | 9.5 × 10^-7 | 0.255 | 172.084 | 1141 | 9.7 × 10^-7 | 0.305
RSD-Manopt | 333.157 | 2069 | 7.4 × 10^-6 | 0.255 | 810.095 | 10,000 | 1.3 × 10^-5 | 0.305
RTR-Manopt | 112.788 | 7 | 2.2 × 10^-7 | 0.255 | 80.434 | 9 | 8.0 × 10^-7 | 0.305
RLBFGS-Manopt | 662.733 | 115 | 9.8 × 10^-7 | 0.255 | 428.122 | 159 | 7.8 × 10^-7 | 0.305
ARC-Manopt | 416.378 | 6 | 6.2 × 10^-7 | 0.255 | 158.398 | 9 | 4.3 × 10^-7 | 0.305

N, [n, r] = 20, [30, 2] (left); N, [n, r] = 20, [30, 3] (right)
Method | CPU | IT | Error | Obj | CPU | IT | Error | Obj
FPI-VEA | 0.016 | 12 | 5.9 × 10^-14 | 0.184 | 0.014 | 22 | 9.0 × 10^-9 | 0.309
FPI-TEA | 0.018 | 12 | 6.7 × 10^-14 | 0.184 | 0.010 | 23 | 1.5 × 10^-9 | 0.309
FPI-STEA | 0.024 | 12 | 1.8 × 10^-12 | 0.184 | 0.010 | 23 | 1.8 × 10^-9 | 0.309
RCG-Manopt | 26.947 | 129 | 9.0 × 10^-7 | 0.184 | 64.849 | 355 | 9.0 × 10^-7 | 0.309
RBB-Manopt | 23.463 | 148 | 9.9 × 10^-7 | 0.184 | 96.764 | 616 | 9.3 × 10^-7 | 0.309
RSD-Manopt | 81.875 | 979 | 4.7 × 10^-6 | 0.184 | 427.409 | 5153 | 7.3 × 10^-6 | 0.309
RTR-Manopt | 39.143 | 5 | 7.0 × 10^-7 | 0.184 | 82.419 | 9 | 2.2 × 10^-7 | 0.309
RLBFGS-Manopt | 291.052 | 107 | 9.7 × 10^-7 | 0.184 | 375.096 | 144 | 8.2 × 10^-7 | 0.309
ARC-Manopt | 125.778 | 4 | 7.2 × 10^-7 | 0.184 | 201.743 | 8 | 3.1 × 10^-7 | 0.309

N, [n, r] = 30, [60, 2] (left); N, [n, r] = 20, [60, 3] (right)
Method | CPU | IT | Error | Obj | CPU | IT | Error | Obj
FPI-VEA | 0.041 | 12 | 5.8 × 10^-14 | 0.247 | 0.030 | 13 | 5.5 × 10^-10 | 0.308
FPI-TEA | 0.029 | 12 | 3.9 × 10^-14 | 0.247 | 0.026 | 16 | 5.7 × 10^-9 | 0.308
FPI-STEA | 0.038 | 12 | 4.3 × 10^-14 | 0.247 | 0.025 | 16 | 5.7 × 10^-9 | 0.308
RCG-Manopt | 28.099 | 134 | 9.5 × 10^-7 | 0.247 | 90.802 | 264 | 9.6 × 10^-7 | 0.308
RBB-Manopt | 20.644 | 133 | 9.8 × 10^-7 | 0.247 | 64.967 | 232 | 9.9 × 10^-7 | 0.308
RSD-Manopt | 158.428 | 560 | 5.6 × 10^-6 | 0.247 | 155.453 | 1005 | 1.1 × 10^-5 | 0.308
RTR-Manopt | 273.369 | 5 | 3.1 × 10^-7 | 0.247 | 111.849 | 5 | 9.0 × 10^-7 | 0.308
RLBFGS-Manopt | 1518.805 | 98 | 9.5 × 10^-7 | 0.247 | 462.869 | 161 | 8.7 × 10^-7 | 0.308
ARC-Manopt | 867.478 | 4 | 4.5 × 10^-7 | 0.247 | 192.517 | 5 | 3.0 × 10^-7 | 0.308

Partially structured with indefinite weight matrices (IND)

N, [n, r] = 30, [20, 2] (left); N, [n, r] = 20, [20, 3] (right)
Method | CPU | IT | Error | Obj | CPU | IT | Error | Obj
FPI-VEA | 0.004 | 8 | 1.4 × 10^-9 | 14.050 | 0.009 | 12 | 4.3 × 10^-9 | 13.503
FPI-TEA | 0.004 | 8 | 1.4 × 10^-9 | 14.050 | 0.008 | 15 | 6.6 × 10^-9 | 13.503
FPI-STEA | 0.004 | 8 | 1.4 × 10^-9 | 14.050 | 0.012 | 15 | 6.6 × 10^-9 | 13.503
RCG-Manopt | 111.312 | 129 | 9.5 × 10^-7 | 14.050 | 126.823 | 330 | 9.5 × 10^-7 | 13.503
RBB-Manopt | 121.659 | 136 | 6.6 × 10^-7 | 14.050 | 286.979 | 755 | 9.9 × 10^-7 | 13.503
RSD-Manopt | 452.417 | 1065 | 9.8 × 10^-7 | 14.050 | 1884.260 | 10,000 | 4.4 × 10^-6 | 13.503
RTR-Manopt | 281.018 | 7 | 8.8 × 10^-7 | 14.050 | 131.532 | 8 | 9.0 × 10^-7 | 13.503
RLBFGS-Manopt | 1662.039 | 107 | 8.8 × 10^-7 | 14.050 | 972.222 | 157 | 7.0 × 10^-7 | 13.503
ARC-Manopt | 1231.788 | 7 | 4.6 × 10^-7 | 14.050 | 477.481 | 8 | 2.4 × 10^-7 | 13.503

N, [n, r] = 30, [30, 2] (left); N, [n, r] = 20, [30, 3] (right)
Method | CPU | IT | Error | Obj | CPU | IT | Error | Obj
FPI-VEA | 0.005 | 8 | 8.2 × 10^-9 | 13.993 | 0.049 | 12 | 8.0 × 10^-10 | 13.367
FPI-TEA | 0.005 | 8 | 8.2 × 10^-9 | 13.993 | 0.048 | 12 | 2.9 × 10^-9 | 13.367
FPI-STEA | 0.005 | 8 | 8.2 × 10^-9 | 13.993 | 0.051 | 12 | 2.9 × 10^-9 | 13.367
RCG-Manopt | 32.592 | 161 | 8.6 × 10^-7 | 13.993 | 111.185 | 266 | 9.7 × 10^-7 | 13.367
RBB-Manopt | 96.056 | 132 | 9.2 × 10^-7 | 13.993 | 155.082 | 480 | 1.0 × 10^-6 | 13.367
RSD-Manopt | 668.282 | 1520 | 9.3 × 10^-7 | 13.993 | 944.964 | 5986 | 9.8 × 10^-7 | 13.367
RTR-Manopt | 297.790 | 8 | 2.4 × 10^-7 | 13.993 | 105.322 | 8 | 5.7 × 10^-7 | 13.367
RLBFGS-Manopt | 1333.034 | 85 | 8.3 × 10^-7 | 13.993 | 584.491 | 107 | 6.7 × 10^-7 | 13.367
ARC-Manopt | 1212.596 | 7 | 6.1 × 10^-7 | 13.993 | 480.133 | 8 | 2.2 × 10^-7 | 13.367

N, [n, r] = 30, [55, 2] (left); N, [n, r] = 20, [55, 3] (right)
Method | CPU | IT | Error | Obj | CPU | IT | Error | Obj
FPI-VEA | 0.037 | 8 | 7.0 × 10^-9 | 13.969 | 0.038 | 12 | 2.2 × 10^-10 | 13.380
FPI-TEA | 0.037 | 8 | 7.0 × 10^-9 | 13.969 | 0.037 | 12 | 1.8 × 10^-10 | 13.380
FPI-STEA | 0.039 | 8 | 7.0 × 10^-9 | 13.969 | 0.038 | 12 | 1.9 × 10^-10 | 13.380
RCG-Manopt | 112.628 | 121 | 9.8 × 10^-7 | 13.969 | 89.848 | 211 | 9.8 × 10^-7 | 13.380
RBB-Manopt | 154.269 | 219 | 8.9 × 10^-7 | 13.969 | 74.459 | 233 | 9.9 × 10^-7 | 13.380
RSD-Manopt | 396.393 | 1122 | 9.6 × 10^-7 | 13.969 | 506.581 | 3202 | 1.0 × 10^-6 | 13.380
RTR-Manopt | 228.985 | 7 | 4.8 × 10^-7 | 13.969 | 108.314 | 8 | 4.2 × 10^-7 | 13.380
RLBFGS-Manopt | 1225.719 | 100 | 7.0 × 10^-7 | 13.969 | 567.610 | 104 | 6.1 × 10^-7 | 13.380
ARC-Manopt | 954.378 | 7 | 2.4 × 10^-7 | 13.969 | 375.834 | 7 | 6.9 × 10^-7 | 13.380

Completely random data sets (RAND)

N, [n, r] = 20, [25, 2] (left); N, [n, r] = 20, [25, 3] (right)
Method | CPU | IT | Error | Obj | CPU | IT | Error | Obj
FPI-VEA | 0.009 | 13 | 7.4 × 10^-7 | 12,415.989 | 0.012 | 21 | 8.4 × 10^-7 | 12,281.929
FPI-TEA | 0.008 | 12 | 7.5 × 10^-7 | 12,415.989 | 0.023 | 45 | 3.9 × 10^-7 | 12,281.929
FPI-STEA | 0.007 | 13 | 7.9 × 10^-7 | 12,415.989 | 0.028 | 44 | 9.8 × 10^-7 | 12,281.929
RCG-Manopt | 0.217 | 1 | 9.4 × 10^-2 | 12,415.989 | 0.209 | 1 | 8.0 × 10^-2 | 12,281.929
RBB-Manopt | 175.974 | 1573 | 9.7 × 10^-7 | 12,415.989 | 866.715 | 6021 | 1.0 × 10^-6 | 12,281.929
RSD-Manopt | 320.402 | 4174 | 3.7 × 10^-5 | 12,415.989 | 605.810 | 7096 | 5.4 × 10^-5 | 12,281.929
RTR-Manopt | 328.180 | 36 | 8.4 × 10^-7 | 12,415.989 | 215.108 | 15 | 6.0 × 10^-7 | 12,281.929
RLBFGS-Manopt | 351.008 | 137 | 5.8 × 10^-6 | 12,415.989 | 444.609 | 178 | 2.1 × 10^-5 | 12,281.929
ARC-Manopt | 1440.731 | 36 | 8.4 × 10^-7 | 12,415.989 | 904.213 | 15 | 5.7 × 10^-7 | 12,281.929

N, [n, r] = 20, [40, 3] (left); N, [n, r] = 30, [40, 5] (right)
Method | CPU | IT | Error | Obj | CPU | IT | Error | Obj
FPI-VEA | 0.011 | 23 | 8.8 × 10^-7 | 32,116.860 | 0.016 | 13 | 9.5 × 10^-7 | 48,131.172
FPI-TEA | 0.025 | 56 | 5.8 × 10^-7 | 32,116.860 | 0.047 | 62 | 9.3 × 10^-7 | 48,131.172
FPI-STEA | 0.026 | 56 | 4.0 × 10^-7 | 32,116.860 | 0.046 | 62 | 9.9 × 10^-7 | 48,131.172
RCG-Manopt | 0.197 | 1 | 9.4 × 10^-2 | 32,116.860 | 0.454 | 1 | 9.1 × 10^-2 | 48,131.172
RBB-Manopt | 1278.205 | 10,000 | 2.3 × 10^-3 | 32,116.860 | 3126.306 | 10,000 | 1.4 × 10^-4 | 48,131.172
RSD-Manopt | 727.261 | 10,000 | 6.3 × 10^-4 | 32,116.860 | 958.490 | 5508 | 1.8 × 10^-4 | 48,131.172
RTR-Manopt | 737.793 | 32 | 7.8 × 10^-7 | 32,116.860 | 1279.051 | 24 | 8.9 × 10^-7 | 48,131.172
RLBFGS-Manopt | 1414.241 | 525 | 4.2 × 10^-5 | 32,116.860 | 1321.907 | 249 | 2.1 × 10^-5 | 48,131.172
ARC-Manopt | 2697.669 | 31 | 9.9 × 10^-7 | 32,116.860 | 5498.762 | 24 | 8.3 × 10^-7 | 48,131.172

N, [n, r] = 30, [55, 3] (left); N, [n, r] = 30, [55, 5] (right)
Method | CPU | IT | Error | Obj | CPU | IT | Error | Obj
FPI-VEA | 0.020 | 18 | 8.5 × 10^-7 | 91,180.316 | 0.066 | 44 | 9.8 × 10^-7 | 90,577.201
FPI-TEA | 0.046 | 44 | 9.9 × 10^-7 | 91,180.316 | 0.136 | 121 | 9.9 × 10^-7 | 90,577.201
FPI-STEA | 0.040 | 43 | 9.6 × 10^-7 | 91,180.316 | 0.135 | 115 | 1.0 × 10^-6 | 90,577.201
RCG-Manopt | 0.435 | 1 | 9.0 × 10^-2 | 91,180.316 | 0.462 | 1 | 9.6 × 10^-2 | 90,577.201
RBB-Manopt | 2837.122 | 10,000 | 1.1 × 10^-3 | 91,180.316 | 3126.555 | 10,000 | 2.9 × 10^-3 | 90,577.201
RSD-Manopt | 949.231 | 5953 | 3.2 × 10^-4 | 91,180.316 | 1686.069 | 10,000 | 1.3 × 10^-3 | 90,577.201
RTR-Manopt | 1194.252 | 25 | 8.5 × 10^-7 | 91,180.316 | 4798.699 | 53 | 9.9 × 10^-7 | 90,577.201
RLBFGS-Manopt | 978.798 | 174 | 1.2 × 10^-4 | 91,180.316 | 2383.108 | 351 | 7.3 × 10^-5 | 90,577.201
ARC-Manopt | 5176.636 | 25 | 8.1 × 10^-7 | 91,180.316 | 9758.528 | 53 | 9.6 × 10^-7 | 90,577.201
Table 3. Numerical comparison of the accelerated fixed-point iterations and the projected gradient flow algorithm. Each row reports two problem instances side by side; columns are CPU (s), IT, Error, and Obj.

Partially structured with NND weight matrices (NND)

N, [n, r] = 50, [25, 2] (left); N, [n, r] = 50, [25, 3] (right)
Method | CPU | IT | Error | Obj | CPU | IT | Error | Obj
FPI-VEA | 0.040 | 12 | 3.5 × 10^-12 | 0.371 | 0.027 | 14 | 7.1 × 10^-9 | 0.661
FPI-TEA | 0.023 | 12 | 2.1 × 10^-11 | 0.371 | 0.029 | 23 | 3.6 × 10^-10 | 0.661
FPI-STEA | 0.029 | 12 | 2.2 × 10^-11 | 0.371 | 0.030 | 23 | 8.4 × 10^-9 | 0.661
PG-ODE | 7.766 | 1321 | 4.7 × 10^-9 | 0.371 | 13.526 | 1096 | 8.3 × 10^-10 | 0.661

N, [n, r] = 50, [50, 2] (left); N, [n, r] = 50, [50, 5] (right)
Method | CPU | IT | Error | Obj | CPU | IT | Error | Obj
FPI-VEA | 0.028 | 12 | 2.2 × 10^-12 | 0.370 | 0.055 | 23 | 2.9 × 10^-10 | 1.369
FPI-TEA | 0.027 | 12 | 2.4 × 10^-13 | 0.370 | 0.084 | 36 | 6.5 × 10^-9 | 1.369
FPI-STEA | 0.028 | 12 | 1.8 × 10^-12 | 0.370 | 0.081 | 34 | 4.9 × 10^-9 | 1.369
PG-ODE | 7.926 | 853 | 8.5 × 10^-10 | 0.370 | 508.936 | 1411 | 7.3 × 10^-9 | 1.369

N, [n, r] = 30, [100, 2] (left); N, [n, r] = 50, [100, 5] (right)
Method | CPU | IT | Error | Obj | CPU | IT | Error | Obj
FPI-VEA | 0.289 | 12 | 3.6 × 10^-13 | 0.256 | 0.201 | 23 | 2.7 × 10^-10 | 1.378
FPI-TEA | 0.280 | 12 | 1.3 × 10^-12 | 0.256 | 0.390 | 44 | 9.4 × 10^-9 | 1.378
FPI-STEA | 0.279 | 12 | 1.9 × 10^-12 | 0.256 | 0.400 | 45 | 6.1 × 10^-9 | 1.378
PG-ODE | 26.166 | 555 | 5.9 × 10^-9 | 0.256 | 188.403 | 1056 | 7.5 × 10^-9 | 1.378

Partially structured with indefinite weight matrices (IND)

N, [n, r] = 30, [30, 3] (left); N, [n, r] = 30, [30, 5] (right)
Method | CPU | IT | Error | Obj | CPU | IT | Error | Obj
FPI-VEA | 0.057 | 12 | 8.1 × 10^-10 | 20.587 | 0.012 | 13 | 7.6 × 10^-9 | 41.498
FPI-TEA | 0.024 | 14 | 7.0 × 10^-9 | 20.587 | 0.015 | 23 | 3.2 × 10^-10 | 41.498
FPI-STEA | 0.030 | 14 | 7.0 × 10^-9 | 20.587 | 0.017 | 23 | 7.0 × 10^-11 | 41.498
PG-ODE | 7.973 | 1451 | 3.7 × 10^-9 | 20.587 | 26.815 | 1384 | 1.4 × 10^-10 | 41.498

N, [n, r] = 30, [50, 2] (left); N, [n, r] = 30, [50, 3] (right)
Method | CPU | IT | Error | Obj | CPU | IT | Error | Obj
FPI-VEA | 0.019 | 9 | 8.0 × 10^-10 | 13.985 | 0.022 | 12 | 1.5 × 10^-11 | 20.591
FPI-TEA | 0.014 | 9 | 8.0 × 10^-10 | 13.985 | 0.016 | 12 | 1.3 × 10^-10 | 20.591
FPI-STEA | 0.016 | 9 | 8.0 × 10^-10 | 13.985 | 0.016 | 12 | 1.4 × 10^-10 | 20.591
PG-ODE | 5.668 | 984 | 4.0 × 10^-9 | 13.985 | 13.004 | 1280 | 9.5 × 10^-9 | 20.591

N, [n, r] = 50, [100, 3] (left); N, [n, r] = 30, [100, 5] (right)
Method | CPU | IT | Error | Obj | CPU | IT | Error | Obj
FPI-VEA | 0.148 | 12 | 8.8 × 10^-13 | 31.567 | 0.086 | 12 | 1.3 × 10^-9 | 41.516
FPI-TEA | 0.134 | 12 | 2.5 × 10^-13 | 31.567 | 0.108 | 15 | 5.1 × 10^-9 | 41.516
FPI-STEA | 0.143 | 12 | 1.9 × 10^-13 | 31.567 | 0.111 | 15 | 5.1 × 10^-9 | 41.516
PG-ODE | 50.604 | 786 | 6.7 × 10^-9 | 31.567 | 78.532 | 1056 | 3.9 × 10^-9 | 41.516

Completely random data sets (RAND)

N, [n, r] = 30, [25, 2] (left); N, [n, r] = 30, [25, 3] (right)
Method | CPU | IT | Error | Obj | CPU | IT | Error | Obj
FPI-VEA | 0.110 | 144 | 8.9 × 10^-7 | 19,059.419 | 0.085 | 180 | 8.7 × 10^-7 | 18,908.352
FPI-TEA | 0.093 | 172 | 8.6 × 10^-7 | 19,059.419 | 0.096 | 206 | 7.9 × 10^-7 | 18,908.352
FPI-STEA | 0.097 | 172 | 8.6 × 10^-7 | 19,059.419 | 0.103 | 213 | 1.0 × 10^-6 | 18,908.352
PG-ODE | 3.829 | 2778 | 9.2 × 10^-14 | 19,157.130 | 9.497 | 3046 | 2.4 × 10^-10 | 19,008.643

N, [n, r] = 50, [30, 3] (left); N, [n, r] = 50, [30, 5] (right)
Method | CPU | IT | Error | Obj | CPU | IT | Error | Obj
FPI-VEA | 0.347 | 397 | 5.2 × 10^-7 | 45,923.018 | 0.608 | 597 | 9.1 × 10^-7 | 45,485.370
FPI-TEA | 0.379 | 445 | 9.0 × 10^-7 | 45,923.018 | 0.613 | 604 | 9.0 × 10^-7 | 45,485.370
FPI-STEA | 0.368 | 444 | 9.4 × 10^-7 | 45,923.018 | 0.596 | 603 | 9.9 × 10^-7 | 45,485.370
PG-ODE | 25.986 | 3303 | 1.6 × 10^-13 | 46,132.681 | 164.868 | 3775 | 1.5 × 10^-11 | 45,811.876

N, [n, r] = 30, [60, 2] (left); N, [n, r] = 50, [60, 3] (right)
Method | CPU | IT | Error | Obj | CPU | IT | Error | Obj
FPI-VEA | 1.704 | 219 | 7.7 × 10^-7 | 109,708.392 | 8.621 | 673 | 9.8 × 10^-7 | 182,304.537
FPI-TEA | 2.213 | 285 | 9.4 × 10^-7 | 109,708.392 | 9.378 | 737 | 8.8 × 10^-7 | 182,304.537
FPI-STEA | 2.215 | 285 | 9.6 × 10^-7 | 109,708.392 | 9.648 | 760 | 9.4 × 10^-7 | 182,304.537
PG-ODE | 28.139 | 1997 | 2.9 × 10^-13 | 109,927.833 | 171.939 | 4369 | 2.1 × 10^-11 | 182,628.808