Article

Proximal Point Algorithm with Euclidean Distance on the Stiefel Manifold

Facultad de Ingeniería y Ciencias, Universidad Adolfo Ibáñez, Av. Diag. Las Torres 2640, Santiago de Chile 7941169, Chile
Mathematics 2023, 11(11), 2414; https://doi.org/10.3390/math11112414
Submission received: 11 March 2023 / Revised: 9 May 2023 / Accepted: 11 May 2023 / Published: 23 May 2023
(This article belongs to the Section E: Applied Mathematics)

Abstract

In this paper, we consider the problem of minimizing a continuously differentiable function on the Stiefel manifold. To solve this problem, we develop a geodesic-free proximal point algorithm equipped with Euclidean distance that does not require use of the Riemannian metric. The proposed method can be regarded as an iterative fixed-point method that repeatedly applies a proximal operator to an initial point. In addition, we establish the global convergence of the new approach without any restrictive assumption. Numerical experiments on linear eigenvalue problems and the minimization of sums of heterogeneous quadratic functions show that the developed algorithm is competitive with some procedures existing in the literature.

1. Introduction

In this paper, we are interested in designing a proximal point procedure to solve the following optimization problem
$$ \min_{X \in \mathbb{R}^{n \times p}} F(X), \quad \text{s.t.} \quad X^\top X = I_p, \qquad (1) $$
where $F : \mathbb{R}^{n \times p} \to \mathbb{R}$ is a continuously differentiable matrix function and $I_p \in \mathbb{R}^{p \times p}$ denotes the identity matrix. The feasible set of Problem (1), denoted by $St(n,p) = \{ X \in \mathbb{R}^{n \times p} : X^\top X = I_p \}$, is known as the Stiefel manifold [1]. Actually, this set constitutes an embedded Riemannian sub-manifold of the Euclidean space $\mathbb{R}^{n \times p}$ with dimension equal to $np - \frac{1}{2}p(p+1)$ (see [1]). Notice that (1) is a well-defined optimization problem because $St(n,p)$ is a compact set and $F$ is a continuous function; therefore, the Weierstrass theorem ensures the existence of at least one global minimizer (and even a global maximizer) for $F$ on the Stiefel manifold.
The orthogonality-constrained minimization problem (1) is widely applicable in many fields, such as the nearest low-rank correlation matrix problem [2,3], the linear eigenvalue problem [4,5,6], sparse principal component analysis [5,7], Kohn–Sham total energy minimization [4,6,8,9], low-rank matrix completion [10], the orthogonal Procrustes problem [8,11], maximization of sums of heterogeneous quadratic functions from statistics [4,12,13], the joint diagonalization problem [13], dimension reduction techniques in pattern recognition [14], and deep neural networks [15,16], among others.
In the Euclidean setting, given $f : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$, a closed proper convex function, the proximal operator $\mathrm{prox}_f : \mathbb{R}^n \to \mathbb{R}^n$ is defined by
$$ \mathrm{prox}_f(x) = \arg\min_{y \in \mathbb{R}^n} \; f(y) + \frac{1}{2\alpha}\|y - x\|_2^2, \qquad (2) $$
where $\|\cdot\|_2$ is the standard norm of $\mathbb{R}^n$ and $\alpha > 0$ is the proximal parameter [17].
The scalar $\alpha$ plays an important role in controlling the magnitude by which the proximal operator sends the points towards the optimum values of $f$. In particular, larger values of $\alpha$ are related to mapped points near the optimum, while smaller values of this parameter promote a smaller movement towards the minimum. One can design iterative optimization procedures that use the proximal operator to define the recursive update scheme. For example, the proximal minimization algorithm [17] minimizes the cost function $f$ by consecutively applying the proximal operator $\mathrm{prox}_f(\cdot)$, similar to fixed-point methods, to some given initial vector $x_0 \in \mathbb{R}^n$.
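To make this concrete, the following is a minimal MATLAB sketch of the Euclidean proximal minimization algorithm for a strongly convex quadratic, where the proximal subproblem (2) can be solved in closed form; the problem data and the value of $\alpha$ below are illustrative choices, not taken from the paper.

% Proximal minimization for f(x) = 0.5*x'*Q*x - b'*x.
% Here prox_f(x) solves the linear system (Q + (1/alpha)*I) y = b + x/alpha.
n = 50;
C = randn(n);  Q = C'*C + eye(n);      % symmetric positive definite data (illustrative)
b = randn(n, 1);
alpha = 10;  x = zeros(n, 1);          % proximal parameter and starting point
for k = 1:100
    x = (Q + (1/alpha)*eye(n)) \ (b + x/alpha);   % x_{k+1} = prox_f(x_k)
end
fprintf('distance to the minimizer: %e\n', norm(x - Q\b));

Larger values of alpha make each proximal step land closer to the minimizer of f, at the price of a harder subproblem, which mirrors the discussion above.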
Several researchers have proposed generalizations of the proximal minimization algorithm in the Riemannian context. This kind of method was first considered in the Riemannian context by Ferreira and Oliveira [18] in the particular case of Hadamard manifolds. Papa Quiroz and Oliveira [19] adapted the proximal point method for quasiconvex functions and proved full convergence of the sequence $\{x_k\}$ to a minimizer over Hadamard manifolds. In addition, Souza and Oliveira [20] introduced a proximal point algorithm to minimize DC functions on Hadamard manifolds. Wang et al. [21] established linear convergence and finite termination of this type of algorithm on Hadamard manifolds. For this same type of manifold, in [22], the authors proved global convergence of inexact proximal point methods. Recently, Almeida et al. [23] developed a modified version of the proximal point procedure for minimization over Hadamard manifolds. For the specific case of optimization on the Stiefel manifold, in [7], the authors proposed a proximal gradient method to minimize the sum of two functions $f(X) + g(X)$ over $St(n,p)$, where $f$ is smooth and its gradient is Lipschitz continuous, and $g$ is convex and Lipschitz continuous. Another proximal-type algorithm is proposed in [24], where the authors developed a proximal linearized augmented Lagrangian algorithm (PLAM) to solve (1). However, the PLAM algorithm does not build a feasible sequence of iterates, which differs from the rest of the Riemannian proposals.
To minimize a function $F : \mathcal{M} \to \mathbb{R}$ defined on a Riemannian manifold $\mathcal{M}$, all the approaches presented in [18,19,20,21,23,25] consider the following generalization of (2)
$$ \mathrm{prox}_F(X) = \arg\min_{Y \in \mathcal{M}} \; F(Y) + \frac{1}{2\alpha}\,\mathrm{dist}^2(X, Y), \qquad (3) $$
where $\mathrm{dist}(X,Y)$ is the Riemannian distance [1]. The main disadvantage of all these works is that the authors proposed methods and theoretical analyses based on the exponential mapping, which requires the construction of geodesics on $\mathcal{M}$. However, it is not always possible to find closed expressions for geodesics over a given Riemannian manifold, since geodesics are defined through a differential equation [1,26]. Even in the case when we have available a closed formula for the corresponding geodesics on $\mathcal{M}$, the computational cost of calculating the exponential mapping over a matrix space is too high, which is an obstacle to solving large-scale problems.
In this paper, we introduce a very simple proximal point algorithm to tackle Stiefel-manifold-constrained optimization problems. The proposed approach replaces the term $\mathrm{dist}(X,Y)$ in (3) with the usual matrix distance $\|X - Y\|_F$ in order to avoid purely Riemannian concepts and techniques such as the Riemannian distance and geodesics. The proposed iterative method tries to solve the optimization problem (1) by repeatedly applying our modified proximal operator to a given starting point. We prove (without imposing the Lipschitz continuity hypothesis) that our method converges to critical points of the restriction of the cost function to the Stiefel manifold. Our preliminary computational results suggest that our proposal presents competitive numerical performance against several feasible methods existing in the literature.
The rest of this manuscript is organized as follows. Section 2 summarizes a few well-known notations and concepts of linear algebra and Riemannian geometry that will be exploited in this paper. Afterwards, Section 3 introduces a new proximal point algorithm to deal with optimization problems with orthogonality constraints. Section 4 provides a concise convergence analysis for the proposed algorithm. Section 5 presents some illustrative numerical results, where we compare our approach with several state-of-the-art methods for the solution of linear eigenvalue problems and the minimization of sums of heterogeneous quadratic functions. Finally, the paper ends with a conclusion in Section 6.

2. Preliminaries

Throughout this paper, we say that $W \in \mathbb{R}^{n \times n}$ is skew-symmetric if $W^\top = -W$. Given a square matrix $A \in \mathbb{R}^{m \times m}$, $\mathrm{skew}(A)$ denotes the skew-symmetric part of $A$; that is, $\mathrm{skew}(A) = 0.5(A - A^\top)$. The trace of $X$ is defined as the sum of the diagonal elements, which we denote as $\mathrm{Tr}[X]$. The standard inner product between two matrices $A, B \in \mathbb{R}^{m \times n}$ is given by $\langle A, B \rangle := \sum_{i,j} a_{ij} b_{ij} = \mathrm{Tr}[A^\top B]$. The Frobenius norm is defined by $\|A\|_F = \sqrt{\langle A, A \rangle}$. Let $X \in St(n,p)$ be an arbitrary matrix in the Stiefel manifold; the tangent space of the Stiefel manifold at $X$ is given by [1]
$$ T_X St(n,p) = \{ Z \in \mathbb{R}^{n \times p} : Z^\top X + X^\top Z = 0 \}. $$
Let $X \in St(n,p)$; the canonical metric [6] associated with the tangent space of the Stiefel manifold is defined by
$$ \langle \xi_X, \eta_X \rangle_c := \mathrm{Tr}\!\left[ \xi_X^\top \left( I - \tfrac{1}{2} X X^\top \right) \eta_X \right], \quad \forall\, \xi_X, \eta_X \in T_X St(n,p). $$
Let $F : \mathbb{R}^{n \times p} \to \mathbb{R}$ be a differentiable function; we denote as $DF(X) := \left( \frac{\partial F(X)}{\partial X_{ij}} \right)$ the matrix of partial derivatives of $F$ (the Euclidean gradient of $F$). Let $\Phi : St(n,p) \to \mathbb{R}$ be a smooth function defined on the Stiefel manifold; then the Riemannian gradient of $\Phi$ at $X \in St(n,p)$, denoted as $\nabla \Phi(X)$, is the unique vector in $T_X St(n,p)$ satisfying
$$ D\Phi(X)[\xi_X] := \lim_{\tau \to 0} \frac{\Phi(\gamma(\tau)) - \Phi(\gamma(0))}{\tau} = \langle \nabla \Phi(X), \xi_X \rangle_c, \quad \forall\, \xi_X \in T_X St(n,p), $$
where $\gamma : [0, \tau_{\max}] \to St(n,p)$ is any curve that verifies $\gamma(0) = X$ and $\dot{\gamma}(0) = \xi_X$.
The Riemannian gradient of $\Phi : St(n,p) \to \mathbb{R}$ under the canonical metric has the following closed expression [4,6]
$$ \nabla \Phi(X) = D\Phi(X) - X\, D\Phi(X)^\top X, \quad \forall\, X \in St(n,p). \qquad (5) $$
In addition, based on Formula (5), we can define the following projection operator over $T_X St(n,p)$
$$ P_X^c[Z] = Z - X Z^\top X, \quad \text{with } Z \in \mathbb{R}^{n \times p}. \qquad (6) $$
It can be easily shown that the operator (6) effectively projects matrices from $\mathbb{R}^{n \times p}$ to the tangent space $T_X St(n,p)$. This projection operator was also considered in [27].
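As a quick sanity check, the following small MATLAB sketch evaluates the canonical projection (6) and the Riemannian gradient (5) at a random feasible point and verifies that the result lies in the tangent space; the test cost $F(X) = -\tfrac{1}{2}\mathrm{Tr}[X^\top A X]$ and its Euclidean gradient are illustrative choices, not quantities fixed by the paper.

% Canonical projection and Riemannian gradient at a random point of St(n,p).
n = 200; p = 10;
A = randn(n); A = 0.5*(A + A');        % symmetric test data (illustrative)
DF = @(X) -A*X;                        % Euclidean gradient of F(X) = -0.5*Tr(X'*A*X)
projc = @(X, Z) Z - X*(Z'*X);          % canonical projection P_X^c[Z], Eq. (6)
[X, ~] = qr(randn(n, p), 0);           % random feasible point
G = projc(X, DF(X));                   % Riemannian gradient, Eq. (5)
fprintf('tangency residual: %e\n', norm(G'*X + X'*G, 'fro'));

The printed residual should be at the level of round-off, confirming that $G$ belongs to $T_X St(n,p)$.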
Similar to the case of smooth unconstrained optimization, $X \in St(n,p)$ is a critical point of $\Phi : St(n,p) \to \mathbb{R}$ if it satisfies [1,28]
$$ \nabla \Phi(X) = 0. $$
Therefore, the critical points of the restriction of the cost function $F$ to the Stiefel manifold are candidates to be local minimizers of Problem (1). Here, we clarify that the objective function $F$ that appears in (1) has both a Euclidean gradient and a Riemannian gradient. In the rest of this paper, we will denote by $\nabla F(\cdot)$ the Riemannian gradient of the restriction of $F$ to the set $St(n,p)$ under the canonical metric.

3. Proximal Point Algorithm on St(n, p)

In this section, we propose an implicitly defined curve on the Stiefel manifold. We also verify that the proposed curve satisfies properties analogous to those enjoyed by Riemannian gradient methods. Using this curve, we then present in detail the proposed proximal point algorithm.
As we mentioned in the introduction, we consider an adaptation of the exact proximal point method to the framework of minimization over the Stiefel manifold: given a feasible point $X \in St(n,p)$, we compute the new iterate $X(\bar{\alpha})$ as a point on the curve $X(\cdot) : [0, \alpha_{\max}] \to St(n,p)$ defined by
$$ X(\alpha) \in \arg\min_{Y \in \mathbb{R}^{n \times p}} \; \alpha F(Y) + \frac{1}{2}\|Y - X\|_F^2, \quad \text{s.t.} \quad Y^\top Y = I_p. \qquad (7) $$
Observe that the argmin set in (7) is never empty, since $\alpha F(Y) + \frac{1}{2}\|Y - X\|_F^2$ is a continuous function on a compact domain. In addition, note that this proximal optimization problem is obtained from (3) by substituting the Riemannian distance with the standard metric associated with the matrix space $\mathbb{R}^{n \times p}$. Additionally, $X(\alpha)$ satisfies $X(0) = X$. Thus, $X(\alpha)$ is a curve on $St(n,p)$ that connects the consecutive iterates $X$ and $X(\bar{\alpha})$; this property is analogous to the one verified by retraction-based line-search curves (see [1,29]). Here it is important to remark that (7) can have multiple global solutions; in this situation, $X(\alpha)$ would not be a curve, since it would not be a well-defined function.
The lemma below establishes that the curve $X(\alpha)$ defines a descent iterative process.
Lemma 1.
Given $X \in St(n,p)$, consider the curve (7). Then,
$$ F(X(\alpha)) \leq F(X) - \frac{1}{2\alpha}\|X(\alpha) - X\|_F^2, \quad \forall\, \alpha \in (0, \infty). \qquad (8) $$
Proof of Lemma 1.
Let $\alpha$ be a positive real number. In view of the optimality of $X(\alpha)$ in (7), we have
$$ \alpha F(X(\alpha)) + \frac{1}{2}\|X(\alpha) - X\|_F^2 \leq \alpha F(X). \qquad (9) $$
Multiplying both sides of (9) by $\alpha^{-1}$ and rearranging, we obtain
$$ F(X(\alpha)) \leq F(X) - \frac{1}{2\alpha}\|X(\alpha) - X\|_F^2, $$
completing the proof.    □
Lemma 1 guarantees that the proposed approach is a descent iterative process. Therefore, by executing an iterative process based on the proximal curve (7), we can minimize the objective function and at the same time move towards stationarity. Taking into account this fact, we propose our proximal point algorithm on $St(n,p)$, the steps for which are described in Algorithm 1 (PPA-St).
Algorithm 1 PPA-St
Require: $X_0 \in St(n,p)$, $0 < \alpha_{\min} \leq \alpha_{\max} < \infty$, and a sequence $\{\alpha_k\}$ such that $\alpha_k \in [\alpha_{\min}, \alpha_{\max}]$ for all $k \in \mathbb{N}$; set $k \leftarrow 0$.
1: while $\|\nabla F(X_k)\|_F \neq 0$ do
2:    $X_{k+1} = \arg\min_{Y \in St(n,p)} \; \alpha_k F(Y) + \frac{1}{2}\|Y - X_k\|_F^2$.   (10)
3:    If $\|X_{k+1} - X_k\|_F = 0$, then stop the algorithm.
4:    $k \leftarrow k + 1$.
5: end while
Notice that the proposed PPA-St algorithm can be interpreted as a Euclidean proximal point algorithm with respect to the cost function
$$ f(X) = F(X) + \delta_{St(n,p)}(X), $$
where $\delta_{St(n,p)}$ is the indicator function given by
$$ \delta_{St(n,p)}(X) = \begin{cases} 0 & \text{if } X \in St(n,p), \\ +\infty & \text{otherwise}; \end{cases} $$
equivalently, $f : \mathbb{R}^{n \times p} \to (-\infty, +\infty]$ is the extended-valued function defined by
$$ f(X) = \begin{cases} F(X) & \text{if } X \in St(n,p), \\ +\infty & \text{otherwise}. \end{cases} $$
Thus the PPA-St process can be seen as a special case of the proximal alternating linearized minimization (PALM) algorithm developed in [30] by selecting the functions $H(x,y) = 0$ and $g(y) = 0$ (in the notation of [30]). Particularly, PALM has a rich convergence analysis and was already applied to the Stiefel manifold in [31,32]. Nonetheless, the main differences between PALM and PPA-St are that our proposal does not require linearizing the function $F(\cdot)$ to solve the problem, and PPA-St does not involve inertial steps.
On the other hand, there are some cost functions for which the proximal subproblem (7) can be solved analytically. For example, if $F_1(Y) = \mathrm{Tr}[Y^\top M]$, where $M \in \mathbb{R}^{n \times p}$ is a data matrix, then the proximity operator is
$$ X(\alpha) \in \arg\min_{Y \in \mathbb{R}^{n \times p}} \; \alpha\, \mathrm{Tr}[Y^\top M] + \frac{1}{2}\|Y - X\|_F^2, \quad \text{s.t.} \quad Y^\top Y = I_p, \qquad (11) $$
with $X \in St(n,p)$ constant. Since $X, Y \in St(n,p)$, the optimization problem above is equivalent to
$$ X(\alpha) \in \arg\min_{Y \in \mathbb{R}^{n \times p}} \; \frac{1}{2}\|Y - (X - \alpha M)\|_F^2, \quad \text{s.t.} \quad Y^\top Y = I_p, \qquad (12) $$
which has a closed-form solution (see Proposition 2.3 in [8]). Additionally, let $A, B \in \mathbb{R}^{n \times n}$ be two constant matrices and $X \in St(n,n)$. Let us consider the objective function $F_2 : \mathbb{R}^{n \times n} \to \mathbb{R}$ given by $F_2(Y) = \frac{1}{2}\|AY - B\|_F^2$. In this special case, the proximal operator reduces to
$$ X(\alpha) \in \arg\min_{Y \in \mathbb{R}^{n \times n}} \; \frac{\alpha}{2}\|AY - B\|_F^2 + \frac{1}{2}\|Y - X\|_F^2, \quad \text{s.t.} \quad Y^\top Y = I_n. \qquad (13) $$
The above constrained optimization problem can be reformulated as
$$ X(\alpha) = \arg\min_{Y \in \mathbb{R}^{n \times n}} \; \frac{1}{2}\left\| \begin{bmatrix} A \\ (1/\sqrt{\alpha})\, I \end{bmatrix} Y - \begin{bmatrix} B \\ (1/\sqrt{\alpha})\, X \end{bmatrix} \right\|_F^2, \quad \text{s.t.} \quad Y^\top Y = I_n, \qquad (14) $$
which is an orthogonal Procrustes problem with an analytical solution (see [33]). However, in general the proximal operator (7) does not have a closed-form expression for its solutions. Therefore, in the implementation of PPA-St, we will use an efficient Riemannian gradient method based on the QR-retraction mapping (see Example 4.1.3 in [1]) in order to solve the optimization subproblem (10).
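For the linear cost $F_1$ above, the closed-form solution of (12) is the projection of $X - \alpha M$ onto $St(n,p)$, i.e., the orthogonal polar factor of its thin SVD (cf. Proposition 2.3 in [8]). The MATLAB sketch below illustrates this closed-form proximal step; the sizes, the data matrix $M$ and the value of $\alpha$ are illustrative choices.

% Closed-form proximal step for F_1(Y) = Tr(Y'*M): project X - alpha*M onto St(n,p).
n = 500; p = 20; alpha = 2;
M = randn(n, p);                        % illustrative data matrix
[X, ~] = qr(randn(n, p), 0);            % current feasible point
[U, ~, V] = svd(X - alpha*M, 'econ');   % thin SVD of the shifted point
Y = U*V';                               % minimizer of (12) over St(n,p)
fprintf('feasibility of Y: %e\n', norm(Y'*Y - eye(p), 'fro'));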
By introducing the notation $\phi_k(Y) := \alpha_k F(Y) + \frac{1}{2}\|Y - X_k\|_F^2$, we propose to use the following feasible line-search method, starting at $Y_0^k = X_k$ and $\tau_0 = \alpha_k$,
$$ Y_{i+1}^k = \left( Y_i^k - \tau_i \nabla\phi_k(Y_i^k) \right) \mathrm{chol}\!\left( I_p + \tau_i^2\, \nabla\phi_k(Y_i^k)^\top \nabla\phi_k(Y_i^k) \right)^{-1}, \qquad (15) $$
where $\nabla\phi_k(Y)$ is the Riemannian gradient under the canonical metric of $\phi_k$ evaluated at $Y$; that is, $\nabla\phi_k(Y) = P_Y^c[D\phi_k(Y)]$, where $D\phi_k(Y)$ is the Euclidean gradient of $\phi_k$, i.e., $D\phi_k(Y) = \alpha_k DF(Y) + Y - X_k$. In addition, $\mathrm{chol}(A)$ denotes the Cholesky factor obtained from the Cholesky factorization of $A \in \mathbb{R}^{p \times p}$; that is, if $A \in \mathbb{R}^{p \times p}$ is a symmetric positive definite (SPD) matrix and $A = L_A L_A^\top$ is its Cholesky decomposition, then $\mathrm{chol}(A) := L_A$. Observe that this function is well defined due to the uniqueness of the Cholesky factorization of SPD matrices. Additionally, notice that in the recursive scheme (15), we are projecting $Y_i^k - \tau_i \nabla\phi_k(Y_i^k)$ onto the Stiefel manifold using its QR factorization obtained from the Cholesky decomposition (see Equation (1.3) in [34]). We now present the inexact version of our Algorithm 1, based on the Riemannian gradient scheme (15).
It is well known that if we endow the iterative method (15) with a globalization strategy to determine the step-size $\tau_i > 0$, such as Armijo's rule [1,35] or a non-monotone Zhang–Hager-type condition [36,37], then the Riemannian line-search method (15) is globally convergent (please see [1,36]). This means that the while loop between Steps 3 and 6 of Algorithm 2 does not take infinite time, i.e., there must exist an index $I_k \in \mathbb{N}$ such that $\|\nabla\phi_k(Y_i^k)\|_F \leq \varepsilon_k$ for all $i \geq I_k$. In addition, it is always possible to determine a step-size $\tau_i > 0$ such that Armijo's rule (16) holds. For practical purposes, we implement our Algorithm 2 using the well-known backtracking strategy to find such a step-size (see [35]). Furthermore, to reduce the number of line searches performed in each inner iteration (the iterations in terms of the index $i$), we incorporate the Barzilai–Borwein step-size, which typically improves the performance of gradient-type methods (see [38]). A compact MATLAB sketch of this inner loop is given right after the listing of Algorithm 2.
Algorithm 2 Inexact PPA-St
Require: $X_0 \in St(n,p)$, $0 < \alpha_{\min} \leq \alpha_{\max} < \infty$, $c_1 \in (0,1)$, a sequence $\{\alpha_k\}$ such that $\alpha_k \in [\alpha_{\min}, \alpha_{\max}]$ for all $k \in \mathbb{N}$, and a sequence $\{\varepsilon_k\}$ of positive real numbers such that $\varepsilon_k \to 0$; set $k \leftarrow 0$.
1: while $\|\nabla F(X_k)\|_F \neq 0$ do
2:    Set $Y_0^k = X_k$ and $i = 0$.
3:    while $\|\nabla\phi_k(Y_i^k)\|_F > \varepsilon_k$ do
4:       $Y_{i+1}^k = \left( Y_i^k - \tau_i \nabla\phi_k(Y_i^k) \right) \mathrm{chol}\!\left( I_p + \tau_i^2\, \nabla\phi_k(Y_i^k)^\top \nabla\phi_k(Y_i^k) \right)^{-1}$,
         where $\tau_i > 0$ is selected in such a way that the Armijo condition is satisfied, i.e.,
         $$ \phi_k(Y_{i+1}^k) \leq \phi_k(Y_i^k) - c_1 \tau_i \|\nabla\phi_k(Y_i^k)\|_F^2, \qquad (16) $$
5:       $i \leftarrow i + 1$.
6:    end while
7:    $X_{k+1} = Y_i^k$.
8:    $k \leftarrow k + 1$.
9: end while
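The following MATLAB sketch illustrates the inner loop of Algorithm 2 (Steps 3–6): a Riemannian gradient iteration with the Cholesky-QR retraction (15), globalized with Armijo backtracking (16). The test cost $F(Y) = -\tfrac{1}{2}\mathrm{Tr}[Y^\top A Y]$, the data $A$, and the parameter values ($\alpha_k$, $\varepsilon_k$, $c_1$, the iteration cap) are illustrative assumptions; for simplicity, the sketch omits the Barzilai–Borwein step-size mentioned above.

% One outer step of the inexact PPA-St: approximately solve subproblem (10).
n = 300; p = 10; alphak = p; epsk = 1e-4; c1 = 1e-4; maxit = 5000;
A = randn(n); A = 0.5*(A + A');                  % symmetric test data (illustrative)
F  = @(Y) -0.5*trace(Y'*A*Y);                    % illustrative smooth cost
DF = @(Y) -A*Y;                                  % its Euclidean gradient
[Xk, ~] = qr(randn(n, p), 0);                    % current outer iterate
phi  = @(Y) alphak*F(Y) + 0.5*norm(Y - Xk, 'fro')^2;
Dphi = @(Y) alphak*DF(Y) + (Y - Xk);             % Euclidean gradient of phi_k
Y = Xk;
for it = 1:maxit
    G = Dphi(Y); G = G - Y*(G'*Y);               % Riemannian gradient of phi_k, via (6)
    if norm(G, 'fro') <= epsk, break; end        % inner stopping rule of Algorithm 2
    tau = 1;                                     % backtracking, starting from tau = 1
    while true
        R = chol(eye(p) + tau^2*(G'*G));         % upper Cholesky factor: R'*R = I + tau^2*G'*G
        Ynew = (Y - tau*G)/R;                    % Cholesky-QR retraction step, cf. (15)
        if phi(Ynew) <= phi(Y) - c1*tau*norm(G, 'fro')^2, break; end   % Armijo rule (16)
        tau = tau/2;                             % halve the step until (16) holds
    end
    Y = Ynew;
end
Xk1 = Y;                                         % X_{k+1}, approximate minimizer of (10)

Note that MATLAB's chol returns the upper triangular factor R with A = R'*R; dividing by R on the right re-orthonormalizes the trial point, realizing the Cholesky-based QR projection described above.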

4. Convergence Results

In this section, we establish the global convergence of Algorithms 1 and 2. Here, we say that an algorithm is globally convergent if, for any initial point $X_0 \in St(n,p)$, the generated sequence $\{X_k\}$ satisfies $\nabla F(X_k) \to 0$. Thus, global convergence does not refer to convergence towards global optima. Firstly, we analyze the convergence properties of Algorithm 1 by revealing the relationships between the residuals $\|\nabla F(X_k)\|_F$, $F(X_{k+1}) - F(X_k)$, and $\|X_{k+1} - X_k\|_F$.
Since $F$ is continuously differentiable over $\mathbb{R}^{n \times p}$, its derivative $DF(X)$ is continuous. Hence, $DF(X)$ is bounded on $St(n,p)$ due to the compactness of the Stiefel manifold. Then, there exists a constant $\kappa > 0$ such that
$$ \|DF(X)\|_F \leq \kappa, \quad \forall\, X \in St(n,p). \qquad (17) $$
Consequently, the Riemannian gradient of $F$ satisfies
$$ \|\nabla F(X)\|_F = \|DF(X) - X\, DF(X)^\top X\|_F \leq 2\kappa, \qquad (18) $$
for all $X \in St(n,p)$.
The following proposition states that Algorithm 1 stops at Riemannian critical points of F .
Proposition 1.
Let $\{X_k\}$ be a sequence generated by Algorithm 1. Suppose that Algorithm 1 terminates at iteration $k \in \mathbb{N}$; then $\nabla F(X_k) = 0$.
Proof of Proposition 1.
The first-order necessary optimality condition associated with Subproblem (10) leads to
$$ \alpha_k \nabla F(X_{k+1}) + P_{X_{k+1}}^c[X_{k+1} - X_k] = 0, \qquad (19) $$
but since Algorithm 1 terminates at the $k$-th iteration, we have that $X_{k+1} = X_k$, which directly implies that $P_{X_{k+1}}^c[X_{k+1} - X_k] = P_{X_{k+1}}^c[0] = 0$. By substituting this fact in (19), we obtain the desired result. □
The rest of this section is devoted to studying the asymptotic behavior of Algorithm 1 for infinite sequences $\{X_k\}$ generated by our approach, since otherwise, Proposition 1 says that Algorithm 1 returns a stationary point for Problem (1). The lemma below provides two key theoretical results.
Lemma 2.
Let $\{X_k\}$ be an infinite sequence generated by Algorithm 1. Then, we have
1. $\{F(X_k)\}$ is a convergent sequence.
2. The residual sequence $\{\|X_{k+1} - X_k\|_F\}$ converges to zero.
Proof of Lemma 2.
It follows from Lemma 1 that
$$ F(X_k) \leq F(X_{k-1}) - \frac{1}{2\alpha_{k-1}}\|X_k - X_{k-1}\|_F^2. \qquad (20) $$
Therefore, $\{F(X_k)\}$ is a monotonically decreasing sequence. Now, since the Stiefel manifold is a compact set and $F$ is a continuous function, we obtain that $F$ has a maximum and a minimum on $St(n,p)$. Therefore, $\{F(X_k)\}$ is bounded, and then $\{F(X_k)\}$ is a convergent sequence, which proves the first part of the lemma.
On the other hand, by rearranging Inequality (20), we arrive at
$$ \|X_k - X_{k-1}\|_F^2 \leq 2\alpha_{k-1}\left( F(X_{k-1}) - F(X_k) \right) \leq 2\alpha_{\max}\left( F(X_{k-1}) - F(X_k) \right). \qquad (21) $$
Applying limits in (21) and using the first part of this lemma, we obtain
$$ \lim_{k \to \infty} \|X_{k+1} - X_k\|_F = 0. $$
□
Now we are ready to prove the global convergence of Algorithm 1, which is established in the theorem below.
Theorem 1.
Let $\{X_k\}$ be an infinite sequence generated by Algorithm 1. Then
$$ \lim_{k \to \infty} \|\nabla F(X_{k+1})\|_F = 0. $$
Proof of Theorem 1.
Firstly, let us denote by $P_k := X_{k+1}(X_{k+1} - X_k)^\top X_{k+1}$. Now, notice that
$$ \|P_k\|_F = \|X_{k+1}(X_{k+1} - X_k)^\top X_{k+1}\|_F \leq \|X_{k+1} - X_k\|_F. \qquad (22) $$
By applying Lemma 2 in (22), we obtain
$$ \lim_{k \to \infty} \|P_k\|_F = 0. \qquad (23) $$
From Lemma 1, we have
$$ F(X_{k+1}) \leq F(X_k) - \frac{1}{2\alpha_k}\|X_{k+1} - X_k\|_F^2. \qquad (24) $$
Notice that, by (7) and its first-order optimality condition (19), we can write $X_{k+1} - X_k = P_k - \alpha_k \nabla F(X_{k+1})$. It then follows from (24), the Cauchy–Schwarz inequality, and (18) that
$$ \begin{aligned} F(X_{k+1}) &\leq F(X_k) - \frac{1}{2\alpha_k}\|X_{k+1} - X_k\|_F^2 = F(X_k) - \frac{1}{2\alpha_k}\|P_k - \alpha_k \nabla F(X_{k+1})\|_F^2 \\ &= F(X_k) - \frac{1}{2\alpha_k}\|P_k\|_F^2 + \mathrm{Tr}[P_k^\top \nabla F(X_{k+1})] - \frac{\alpha_k}{2}\|\nabla F(X_{k+1})\|_F^2 \\ &\leq F(X_k) + \mathrm{Tr}[P_k^\top \nabla F(X_{k+1})] - \frac{\alpha_k}{2}\|\nabla F(X_{k+1})\|_F^2 \\ &\leq F(X_k) + \mathrm{Tr}[P_k^\top \nabla F(X_{k+1})] - \frac{\alpha_{\min}}{2}\|\nabla F(X_{k+1})\|_F^2 \\ &\leq F(X_k) + 2\kappa\|P_k\|_F - \frac{\alpha_{\min}}{2}\|\nabla F(X_{k+1})\|_F^2, \end{aligned} \qquad (25) $$
which implies that
$$ \|\nabla F(X_{k+1})\|_F^2 \leq \frac{2}{\alpha_{\min}}\left( F(X_k) - F(X_{k+1}) \right) + \frac{4\kappa}{\alpha_{\min}}\|P_k\|_F. \qquad (26) $$
Finally, taking limits in (26) and considering (23) and Lemma 2, we obtain
$$ \lim_{k \to \infty} \|\nabla F(X_{k+1})\|_F = 0, $$
which completes the proof. □
We now turn to proving the convergence to stationary points of the inexact version of the PPA-St approach.
Theorem 2.
Let $\{X_k\}$ be an infinite sequence generated by Algorithm 2. Then
$$ \lim_{k \to \infty} \|\nabla F(X_{k+1})\|_F = 0. $$
Proof of Theorem 2.
From the Armijo condition (16), Step 7, and the definition of $\phi_k(Y)$, we have
$$ \alpha_k F(X_{k+1}) + \frac{1}{2}\|X_{k+1} - X_k\|_F^2 \leq \phi_k(Y_{i-1}^k) - c_1 \tau_{i-1}\|\nabla\phi_k(Y_{i-1}^k)\|_F^2. \qquad (27) $$
Additionally, the Armijo condition (16) clearly implies that, for fixed $k$, $\{\phi_k(Y_i^k)\}$ is a non-increasing sequence. Combining this result with inequality (27) and Step 2, we arrive at
$$ \alpha_k F(X_{k+1}) + \frac{1}{2}\|X_{k+1} - X_k\|_F^2 \leq \phi_k(Y_{i-1}^k) - c_1 \tau_{i-1}\|\nabla\phi_k(Y_{i-1}^k)\|_F^2 \leq \phi_k(Y_{i-1}^k) \leq \phi_k(Y_0^k) = \alpha_k F(X_k), $$
which leads to
$$ F(X_{k+1}) \leq F(X_k) - \frac{1}{2\alpha_k}\|X_{k+1} - X_k\|_F^2. $$
Therefore, the sequence of objective values $\{F(X_k)\}$ is convergent. Moreover, we get that $\lim_{k \to \infty} \|X_{k+1} - X_k\|_F^2 = 0$.
On the other hand, notice that
$$ \nabla\phi_k(X_{k+1}) = \alpha_k \nabla F(X_{k+1}) + P_{X_{k+1}}^c[X_{k+1} - X_k]. \qquad (28) $$
The second term on the right-hand side of (28) verifies that
$$ \begin{aligned} \|P_{X_{k+1}}^c[X_{k+1} - X_k]\|_F &= \|(X_{k+1} - X_k) - X_{k+1}(X_{k+1} - X_k)^\top X_{k+1}\|_F \\ &\leq \|X_{k+1} - X_k\|_F + \|X_{k+1}\|_F^2\, \|X_{k+1} - X_k\|_F \\ &= \left( 1 + \mathrm{Tr}[X_{k+1}^\top X_{k+1}] \right)\|X_{k+1} - X_k\|_F \\ &= \left( 1 + \mathrm{Tr}[I_p] \right)\|X_{k+1} - X_k\|_F = (1 + p)\|X_{k+1} - X_k\|_F. \end{aligned} \qquad (29) $$
By rearranging (28), we get
$$ \nabla F(X_{k+1}) = \frac{1}{\alpha_k}\left( \nabla\phi_k(X_{k+1}) - P_{X_{k+1}}^c[X_{k+1} - X_k] \right). $$
Applying the norm to both sides of the above equality and considering that $\|\nabla\phi_k(X_{k+1})\|_F \leq \varepsilon_k$ and (29), we arrive at
$$ \|\nabla F(X_{k+1})\|_F \leq \frac{1}{\alpha_k}\left\| \nabla\phi_k(X_{k+1}) - P_{X_{k+1}}^c[X_{k+1} - X_k] \right\|_F \leq \frac{1}{\alpha_{\min}}\left( \|\nabla\phi_k(X_{k+1})\|_F + \|P_{X_{k+1}}^c[X_{k+1} - X_k]\|_F \right) \leq \frac{1}{\alpha_{\min}}\left( \varepsilon_k + (1 + p)\|X_{k+1} - X_k\|_F \right). $$
Taking limits in the above relation, we conclude that
$$ \lim_{k \to \infty} \|\nabla F(X_{k+1})\|_F = 0, $$
proving the theorem. □
Corollary 1.
Let $\{X_k\}$ be an infinite sequence of iterates generated by Algorithm 1 or Algorithm 2. Then every accumulation point of $\{X_k\}$ is a critical point of $F$ in the Riemannian sense.
Proof. 
Let $\{X_k\}$ be an infinite sequence generated by Algorithm 1 (or Algorithm 2). Clearly, the set of all accumulation points of the sequence $\{X_k\}$ is non-empty, since $\{X_k\} \subset St(n,p)$ and $St(n,p)$ is bounded. Let $\bar{X}$ be an accumulation point of $\{X_k\}$; that is, there is a subsequence $\{X_k\}_{k \in \mathcal{K}}$ converging to $\bar{X}$. Since $St(n,p)$ is compact and $X_k$ is feasible for all $k \geq 0$, we have $\bar{X} \in St(n,p)$. Applying Theorem 1 (or Theorem 2) and considering that $\nabla F$ is a continuous function, we arrive at
$$ \nabla F(\bar{X}) = \lim_{k \in \mathcal{K}} \nabla F(X_k) = 0, $$
proving the corollary. □

5. Computational Experiments

In this section, we present some numerical results to verify the practical performance of the proposed algorithm. We test our Algorithm 2 on academic problems, considering linear eigenvalue problems and the minimization of sums of heterogeneous quadratic functions. We coded our simulations in MATLAB (version 2017b) with double precision on a machine with an Intel(R) Core(TM) i7-4770 CPU @ 3.40 GHz, a 1 TB HD, and 16 GB RAM. We compare our approach with the Riemannian gradient method based on the Cayley transform [6] (OptStiefel) and with three Riemannian conjugate gradient methods—RCG1a, RCG1b, and RCG1b+ZH—developed in [13]. (The OptStiefel MATLAB code is available at https://github.com/wenstone/OptM, accessed on 10 May 2023, and the Riemannian conjugate gradient methods—RCG1a, RCG1b, and RCG1b+ZH—can be downloaded from http://www.optimization-online.org/DB_HTML/2016/09/5617.html, accessed on 10 May 2023.) In addition, we stop all the methods when the algorithms find a matrix $\hat{X} \in St(n,p)$ such that $\|\nabla F(\hat{X})\|_F < 1 \times 10^{-4}$. In all the tests, we consider Algorithm 2 with $\alpha_k = \alpha = p$ for all $k \geq 0$. The implementation of our algorithm is available at https://www.mathworks.com/matlabcentral/fileexchange/128644-proximal-point-algorithm-on-the-stiefel-manifold, accessed on 10 May 2023.
In the rest of this section, we use the following notation: Time, Nitr, Grad, Feasi, and Fval denote the average total computing time in seconds, average number of iterations, average residual $\|\nabla F(\hat{X})\|_F$, average feasibility error $\|\hat{X}^\top \hat{X} - I_p\|_F$, and average final cost function value, respectively. In all experiments presented below, we solve ten independent instances for each pair $(n,p)$, and then we report all of these mean values. For all of the computational tests, we randomly generate the starting point $X_0$ using the MATLAB command [X0, ~] = qr(randn(n, p), 0).
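For completeness, the following lines sketch how the reported quantities Grad and Feasi can be computed from a candidate solution; the cost function, its Euclidean gradient, and the point Xhat below are placeholders used only for illustration.

% Computing the reported diagnostics for a candidate point Xhat on St(n,p).
n = 1000; p = 50;
A = randn(n); A = 0.5*(A + A');                  % illustrative data
DF = @(X) -A*X;                                  % Euclidean gradient of F(X) = -0.5*Tr(X'*A*X)
[Xhat, ~] = qr(randn(n, p), 0);                  % candidate solution (here: a random point)
G     = DF(Xhat) - Xhat*(DF(Xhat)'*Xhat);        % Riemannian gradient, Eq. (5)
Grad  = norm(G, 'fro');                          % residual reported as "Grad"
Feasi = norm(Xhat'*Xhat - eye(p), 'fro');        % feasibility error reported as "Feasi"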

5.1. The Linear Eigenvalue Problem

In order to illustrate the numerical behavior of our method in computing some eigenvalues of a given symmetric matrix $A \in \mathbb{R}^{n \times n}$, we present a numerical experiment taken from [8]. Let $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$ be the eigenvalues of $A$. The $p$-largest eigenvalue problem can be mathematically formulated as
$$ \sum_{i=1}^{p} \lambda_i = \max_{X \in \mathbb{R}^{n \times p}} \mathrm{Tr}[X^\top A X], \quad \text{s.t.} \quad X^\top X = I_p. \qquad (30) $$
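The identity in (30) is easy to verify numerically with a dense eigendecomposition; the sketch below checks it for a small random symmetric matrix, with sizes chosen only for illustration.

% Check of (30): the maximum of Tr(X'*A*X) over St(n,p) is the sum of the
% p largest eigenvalues of A, attained at the corresponding eigenvectors.
n = 300; p = 5;
B = randn(n); A = 0.5*(B + B');
[V, D] = eig(A);
[lam, idx] = sort(diag(D), 'descend');
Xstar = V(:, idx(1:p));                          % dominant p-dimensional eigenbasis
fprintf('sum of the p largest eigenvalues: %.6f\n', sum(lam(1:p)));
fprintf('Tr(X''*A*X) at the eigenbasis:     %.6f\n', trace(Xstar'*A*Xstar));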
Firstly, we illustrate the numerical behavior of Algorithm 2 by varying the proximal parameter $\alpha_k$. Specifically, we conduct an experiment where we choose $\alpha_k = \alpha$ constant throughout the iteration process and solve an instance of Problem (30) with $(n,p) = (1000, 100)$ for different values of $\alpha$. We randomly generate the matrix $A$ as follows:
randn('seed', 1); B = randn(1000); A = 0.5*(B + B');
using MATLAB commands, and we solve the problem for each $\alpha \in \{1, 5, 10, 50, 100\}$. In Figure 1, we present the convergence history of Algorithm 2 for all considered values of $\alpha$. In addition, Table 1 contains the numerical results associated with each value of $\alpha$. From Figure 1 and Table 1, we clearly see that Algorithm 2 requires a larger number of iterations to achieve the desired accuracy in the gradient norm for small values of $\alpha$.
Now, we consider the following experiment design: given $(n,p)$, we randomly generate dense matrices assembled as $A = 0.5(\bar{A} + \bar{A}^\top)$, where $\bar{A} \in \mathbb{R}^{n \times n}$ is a matrix whose entries are sampled from a standard Gaussian distribution. Table 2 contains the computational results associated with varying $p \in \{1, 50, 100, 300, 500\}$ but fixed $n = 1000$. As shown in Table 2, all of the methods obtained estimates of a solution of Problem (30) with the required precision. Furthermore, we clearly observe that as $p$ approaches $n$, our proposal converges more quickly, even in terms of computational time, than the rest of the methods.

5.2. Heterogeneous Quadratic Minimization

In this subsection, we consider the minimization of sums of heterogeneous quadratic functions over the Stiefel manifold; this problem is formulated as
$$ \min_{X \in \mathbb{R}^{n \times p}} \sum_{i=1}^{p} \mathrm{Tr}[X_{[i]}^\top A_i X_{[i]}], \quad \text{s.t.} \quad X^\top X = I_p, \qquad (31) $$
where the $A_i$'s are $n$-by-$n$ symmetric matrices and $X_{[i]}$ denotes the $i$-th column of $X$. For benchmarking, we consider two structures for the data matrices $A_i$ obtained by using the following MATLAB commands (a small evaluation sketch for this setup is given below):
  • Structure I: $A_i = \mathrm{diag}\!\left( \frac{(i-1)n+1}{p} : \frac{1}{p} : \frac{in}{p} \right)$, for all $i \in \{1, 2, \ldots, p\}$.
  • Structure II: $A_i = \mathrm{diag}\!\left( \frac{(i-1)n+1}{p} : \frac{1}{p} : \frac{in}{p} \right) + B_i + B_i^\top$, for all $i \in \{1, 2, \ldots, p\}$,
where the $B_i$'s are random matrices generated by $B_i = 0.1\,\mathrm{randn}(n)$. This experiment design was taken from [13]. The numerical results concerning Structures I and II are contained in Table 3 and Table 4, respectively. From Table 3, we see that the most efficient method, both in terms of the number of iterations and in total computational time, was our procedure. The second most efficient method was OptStiefel. However, the numerical performance of PPASt and OptStiefel is very similar.
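The sketch below builds an instance of Problem (31) with Structure II and evaluates the objective together with its Euclidean gradient, whose $i$-th column is $2A_iX_{[i]}$; the sizes are illustrative, and the sketch is only meant to make the data structures above concrete.

% Structure II data for Problem (31), objective value and Euclidean gradient.
n = 200; p = 5;
A = cell(p, 1);
for i = 1:p
    Bi = 0.1*randn(n);
    A{i} = diag(((i-1)*n + 1)/p : 1/p : (i*n)/p) + Bi + Bi';   % Structure II
end
[X, ~] = qr(randn(n, p), 0);
Fval = 0; DF = zeros(n, p);
for i = 1:p
    Fval = Fval + X(:,i)'*A{i}*X(:,i);           % i-th term of the objective in (31)
    DF(:,i) = 2*A{i}*X(:,i);                     % i-th column of the Euclidean gradient
end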
On the other hand, the results related to Structure II show that OptStiefel is slightly superior to PPASt in terms of computational time. However, our PPASt approach was much more efficient than the three Riemannian conjugate gradient methods and took fewer iterations to reach the desired tolerance than the rest of the methods. In addition, to illustrate the numerical efficiency of our proposal, we designed two randomized experiments where the initial points were built using the following MATLAB commands,
randn('seed', 1); [X0, ~] = qr(randn(n, p), 0),
and generated the heterogeneous quadratic minimization problems according to Structures I and II described above. In Figure 2 and Figure 3, we show the convergence history, in terms of iterations and time (in seconds), of all the methods for these specific experiments. From these figures, we clearly see that our proximal point method converges faster (in terms of iterations) to a local minimizer than the rest of the methods, while in terms of computational time, the proposal achieves competitive results with respect to the other methods. Particularly, PPASt is the most efficient method for problems with Structure I.

5.3. The Joint Diagonalization Problem

Now, we evaluate the performance of our method on non-quadratic objective functions. Particularly, we consider the joint diagonalization problem [39], mathematically formulated as
$$ \max_{X \in \mathbb{R}^{n \times p}} \sum_{i=1}^{N} \|\mathrm{Diag}(X^\top A_i X)\|_F^2, \quad \text{s.t.} \quad X^\top X = I_p, \qquad (32) $$
where the data $A_i$'s are $n$-by-$n$ symmetric matrices and $\mathrm{Diag}(M)$ is the diagonal matrix obtained from $M$ by replacing the non-diagonal elements of $M$ with zero. In order to test all the algorithms, we randomly generate the matrices $A_i$ as follows
$$ A_i = \mathrm{diag}(n+1, n+2, \ldots, 2n) + B_i + B_i^\top, \qquad (33) $$
where $B_i \in \mathbb{R}^{n \times n}$ is a random matrix whose entries are generated independently from a Gaussian distribution, and, given a vector $v \in \mathbb{R}^n$, $\mathrm{diag}(v)$ denotes the diagonal matrix of size $n$-by-$n$ whose $i$-th diagonal entry is exactly the $i$-th element of the vector $v$. This experiment was taken from [13]. We include in the numerical comparisons the Riemannian proximal gradient method (ManPG) developed in [7]. For this experiment, we generate ten independent instances for different values of $n$ and $p$ and report the mean values of Time, Nitr, Grad, Feasi, and Fval for each method. Additionally, we set the maximum number of iterations allowed for all the algorithms to 10,000, and we use $\epsilon = 1 \times 10^{-5}$ as the tolerance for the termination rule associated with the gradient norm. The numerical results associated with this numerical test are presented in Table 5.
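To make the objective of (32) concrete, the following sketch assembles data matrices according to (33) and evaluates the cost together with its Euclidean gradient, which for this objective is $\sum_i 4 A_i X\, \mathrm{Diag}(X^\top A_i X)$; the sizes below are illustrative choices.

% Data, objective value and Euclidean gradient for the joint diagonalization problem (32).
n = 100; p = 3; N = 3;
A = cell(N, 1);
for i = 1:N
    Bi = randn(n);
    A{i} = diag((n+1):(2*n)) + Bi + Bi';         % data matrices as in (33)
end
[X, ~] = qr(randn(n, p), 0);
Fval = 0; DF = zeros(n, p);
for i = 1:N
    Di   = diag(diag(X'*A{i}*X));                % Diag(X'*A_i*X)
    Fval = Fval + norm(Di, 'fro')^2;             % i-th term of the objective in (32)
    DF   = DF + 4*A{i}*X*Di;                     % Euclidean gradient of the (maximized) objective
end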
From Table 5, we see that the ManPG method obtains very poor performance, which is possibly due to the step-size used to initialize the backtracking process being set to $\alpha_k^{\mathrm{ini}} = 1$ in each iteration, which can lead to the method carrying out many re-orthogonalizations and function evaluations per iteration. Furthermore, we notice that our proposal was the most efficient, both in terms of total computational time and in the number of iterations. In fact, in terms of CPU time, the OptStiefel method is better than PPASt only for instances with $(n, p, N) = (500, 3, 3)$, while for the rest of the instances, our PPASt outperforms the other methods.

6. Conclusions

In this article, we introduced a new feasible method, free of the exponential mapping, for solving smooth optimization problems on the Stiefel manifold. In particular, the proposal is an exact proximal point algorithm constructed with the standard distance of the matrix space $\mathbb{R}^{n \times p}$ that exploits the geometric properties of the Stiefel manifold. The proposed algorithm constructs a sequence $\{X_k\} \subset St(n,p)$ through an iterative process that successively applies a simple proximal operator to the initial point $X_0$. The global convergence of the proposed approach was established. In addition, we demonstrated that every accumulation point of the sequence generated by the algorithms is a stationary point, without imposing any restrictive hypotheses on the objective function. Since the proposed algorithm is a special case of PALM [30], inertial extensions are already discussed in [32,40]. Our computational studies show that the developed method can be a good tool to solve trace maximization problems and heterogeneous quadratic minimization problems on the Stiefel manifold.
Similarly to the proofs presented in [30,40], we could try to establish convergence of the full sequence $\{X_k\}$ to a critical point of $F$ under the Kurdyka–Łojasiewicz property. This idea will remain as future work.

Funding

This research was financially supported by the Faculty of Engineering and Sciences, Universidad Adolfo Ibáñez, through the FES startup package for scientific research.

Data Availability Statement

The data sets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Absil, P.; Mahony, R.; Sepulchre, R. Optimization Algorithms on Matrix Manifolds; Princeton University Press: Princeton, NJ, USA, 2008. [Google Scholar]
  2. Grubišić, I.; Pietersz, R. Efficient rank reduction of correlation matrices. Linear Algebra Its Appl. 2007, 422, 629–653. [Google Scholar] [CrossRef]
  3. Pietersz, R.; Groenen, P. Rank reduction of correlation matrices by majorization. Quant. Financ. 2004, 4, 649–662. [Google Scholar] [CrossRef]
  4. Oviedo, H. Implicit steepest descent algorithm for optimization with orthogonality constraints. Optim. Lett. 2022, 16, 1773–1797. [Google Scholar] [CrossRef]
  5. Oviedo, H.; Dalmau, O. A scaled gradient projection method for minimization over the Stiefel manifold. In Proceedings of the Mexican International Conference on Artificial Intelligence MICAI-2019, Xalapa, Mexico, 28 October 2019. [Google Scholar] [CrossRef]
  6. Wen, Z.; Yin, W. A feasible method for optimization with orthogonality constraints. Math. Program. 2013, 142, 397–434. [Google Scholar] [CrossRef]
  7. Chen, S.; Ma, S.; Man-Cho, S.; Zhang, T. Proximal gradient method for nonsmooth optimization over the Stiefel manifold. SIAM J. Optim. 2020, 30, 210–239. [Google Scholar] [CrossRef]
  8. Oviedo, H.; Lara, H.; Dalmau, O. A non-monotone linear search algorithm with mixed direction on Stiefel manifold. Optim. Methods Softw. 2018, 34, 437–457. [Google Scholar] [CrossRef]
  9. Zhang, X.; Zhu, J.; Wen, Z.; Zhou, A. Gradient type optimization methods for electronic structure calculations. SIAM J. Sci. Comput. 2014, 36, 265–289. [Google Scholar] [CrossRef]
  10. Lara, H.; Oviedo, H.; Yuan, J. Matrix completion via a low rank factorization model and an augmented Lagrangean succesive overrelaxation algorithm. Bull. Comput. Appl. Math. 2014, 2, 21–46. [Google Scholar]
  11. Oviedo, H.; Guerrero, S. Solving Weighted Orthogonal Procrustes Problems via a Projected Gradient Method. Preprint in Optimization Online. Available online: http://www.optimization-online.org/DB_HTML/2021/05/8375.html (accessed on 11 March 2023).
  12. Bolla, M.; Michaletzky, G.; Tusnády, G.; Ziermann, M. Extrema of sums of heterogeneous quadratic forms. Linear Algebra Its Appl. 1998, 269, 331–365. [Google Scholar] [CrossRef]
  13. Zhu, X. A Riemannian conjugate gradient method for optimization on the Stiefel manifold. Comput. Optim. Appl. 2017, 67, 73–110. [Google Scholar] [CrossRef]
  14. Kokiopoulou, E.; Chen, J.; Saad, Y. Trace optimization and eigenproblems in dimension reduction methods. Numer. Linear Algebra Appl. 2011, 18, 565–602. [Google Scholar] [CrossRef]
  15. Hasannasab, M.; Hertrich, J.; Neumayer, S.; Plonka, G.; Setzer, S.; Steidl, G. Parseval proximal neural networks. J. Fourier Anal. Appl. 2020, 26, 1–31. [Google Scholar] [CrossRef]
  16. Huang, L.; Liu, X.; Lang, B.; Yu, A.; Wang, Y.; Li, B. Orthogonal weight normalization: Solution to optimization over multiple dependent Stiefel manifolds in deep neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; Volume 32. [Google Scholar] [CrossRef]
  17. Parikh, N.; Boyd, S. Proximal algorithms. Found. Trends Optim. 2014, 1, 127–239. [Google Scholar] [CrossRef]
  18. Ferreira, O.; Oliveira, P. Proximal point algorithm on Riemannian manifolds. Optimization 2002, 51, 257–270. [Google Scholar] [CrossRef]
  19. Quiroz, E.; Oliveira, P. Proximal point methods for quasiconvex and convex functions with Bregman distances on Hadamard manifolds. J. Convex Anal. 2009, 16, 49–69. [Google Scholar]
  20. Souza, J.; Oliveira, P. A proximal point algorithm for DC functions on Hadamard manifolds. J. Glob. Optim. 2015, 63, 797–810. [Google Scholar] [CrossRef]
  21. Wang, J.; Li, C.; Lopez, G.; Yao, J. Proximal point algorithms on Hadamard manifolds, linear convergence and finite termination. SIAM J. Optim. 2016, 26, 2696–2729. [Google Scholar] [CrossRef]
  22. Wang, J.; Li, C.; Lopez, G.; Yao, J. Convergence analysis of inexact proximal point algorithms on Hadamard manifolds. J. Glob. Optim. 2015, 61, 553–573. [Google Scholar] [CrossRef]
  23. Almeida, Y.; Da-Cruz, N.; Oliveira, P.; Souza, J. A modified proximal point method for DC functions on Hadamard manifolds. Comput. Optim. Appl. 2020, 76, 649–673. [Google Scholar] [CrossRef]
  24. Gao, B.; Liu, X.; Yuan, Y. Parallelizable algorithms for optimization problems with orthogonality constraints. SIAM J. Sci. Comput. 2019, 41, 1949–1983. [Google Scholar] [CrossRef]
  25. Bento, G.; Ferreira, O.; Melo, J. Iteration-complexity of gradient, subgradient and proximal point methods on Riemannian manifolds. J. Optim. Theory Appl. 2017, 173, 548–562. [Google Scholar] [CrossRef]
  26. Dreisigmeyer, D. Equality Constraints, Riemannian Manifolds and Direct Search Methods. Preprint in Optimization Online. Available online: https://optimization-online.org/2007/08/1743/ (accessed on 11 March 2023).
  27. Lara, H.; Oviedo, H. Solving joint diagonalization problems via a Riemannian conjugate gradient method in Stiefel manifold. In Proceedings of the Congresso Nacional de Matemática Aplicada e Computacional CNMAC2018, Campinas, Brazil, 17–21 September 2018. [Google Scholar] [CrossRef]
  28. Upadhyay, B.; Ghosh, A. On constraint qualifications for mathematical programming problems with vanishing constraints on Hadamard manifolds. J. Optim. Theory Appl. 2023. [Google Scholar] [CrossRef]
  29. Hu, J.; Liu, X.; Wen, Z.; Yuan, Y. A brief introduction to manifold optimization. J. Oper. Res. Soc. China 2020, 8, 199–248. [Google Scholar] [CrossRef]
  30. Bolte, J.; Sabach, S.; Teboulle, M. Proximal alternating linearized minimization for nonconvex and nonsmooth problems. Math. Program. 2014, 146, 459–494. [Google Scholar] [CrossRef]
  31. Hertrich, J.; Nguyen, D.; Aujol, J.; Bernard, D.; Berthoumieu, Y.; Saadaldin, A.; Steidl, G. PCA reduced Gaussian mixture models with applications in superresolution. Inverse Probl. Imaging 2020, 2, 341–366. [Google Scholar] [CrossRef]
  32. Hertrich, J.; Steidl, G. Inertial stochastic PALM and applications in machine learning. Sampl. Theory Signal Process. Data Anal. 2022, 20. [Google Scholar] [CrossRef]
  33. Schönemann, P. A generalized solution of the orthogonal procrustes problem. Psychometrika 1966, 31, 1–10. [Google Scholar] [CrossRef]
  34. Fukaya, T.; Kannan, R.; Nakatsukasa, Y.; Yamamoto, Y.; Yanagisawa, Y. Shifted Cholesky QR for computing the QR factorization of ill-conditioned matrices. SIAM J. Sci. Comput. 2020, 42, 477–503. [Google Scholar] [CrossRef]
  35. Nocedal, J.; Wright, S. Numerical Optimization; Springer: New York, NY, USA, 2006. [Google Scholar] [CrossRef]
  36. Oviedo, H. Global convergence of Riemannian line search methods with a Zhang-Hager-type condition. Numer. Algorithms 2022, 91, 1183–1203. [Google Scholar] [CrossRef]
  37. Zhang, H.; Hager, W. A nonmonotone line search technique and its application to unconstrained optimization. SIAM J. Optim. 2004, 14, 1043–1056. [Google Scholar] [CrossRef]
  38. Barzilai, J.; Borwein, J. Two-point step size gradient methods. IMA J. Numer. Anal. 1988, 8, 141–148. [Google Scholar] [CrossRef]
  39. Theis, F.; Cason, T.; Absil, P. Soft dimension reduction for ICA by joint diagonalization on the Stiefel manifold. In Proceedings of the 8th International Conference on Independent Component Analysis and Signal Separation, Berlin, Germany, 15–18 March 2009. [Google Scholar] [CrossRef]
  40. Pock, T.; Sabach, S. Inertial proximal alternating linearized minimization (iPALM) for nonconvex and nonsmooth problems. SIAM J. Imaging Sci. 2016, 9, 1756–1787. [Google Scholar] [CrossRef]
Figure 1. Convergence behavior of Algorithm 2 from the same initial point for different values of α.
Figure 2. Convergence behavior of all the methods from the same initial point for the minimization of heterogeneous quadratic functions: Structure I with (n, p) = (10000, 10). (a) Iterations vs. the logarithm of the gradient norm. (b) Seconds vs. the logarithm of the gradient norm.
Figure 3. Convergence behavior of all the methods from the same initial point for the minimization of heterogeneous quadratic functions: Structure II with (n, p) = (1000, 5). (a) Iterations vs. the logarithm of the gradient norm. (b) Seconds vs. the logarithm of the gradient norm.
Table 1. Numerical performance of the PPASt scheme for different values of α in the solution of a randomly generated linear eigenvalue problem with (n, p) = (1000, 100).

α      Nitr   Time     Grad          Feasi          Fval
1      189    11.145   9.99 × 10^-5  3.10 × 10^-15  3.657 × 10^3
5      58     5.2831   9.70 × 10^-5  3.89 × 10^-15  3.657 × 10^3
10     37     3.3362   9.76 × 10^-5  3.77 × 10^-15  3.657 × 10^3
50     26     2.7402   4.93 × 10^-5  3.64 × 10^-15  3.657 × 10^3
100    22     2.4424   4.39 × 10^-5  3.84 × 10^-15  3.657 × 10^3
Table 2. Eigenvalues on randomly generated dense matrices for fixed n = 1000.

p            1              50             100            300            500
PPASt
  Nitr       9.9            18.3           19.7           21.1           26.1
  Time       0.04           0.89           1.75           9.51           21.64
  Grad       3.79 × 10^-5   5.61 × 10^-5   6.41 × 10^-5   6.31 × 10^-5   7.51 × 10^-5
  Feasi      1.67 × 10^-16  2.13 × 10^-15  3.53 × 10^-15  7.89 × 10^-15  1.19 × 10^-14
  Fval       44.4994        1.97 × 10^3    3.64 × 10^3    8.08 × 10^3    9.49 × 10^3
OptStiefel
  Nitr       141.3          273.4          293.3          295.1          336.7
  Time       0.04           0.93           2.53           14.58          31.12
  Grad       7.98 × 10^-5   7.86 × 10^-5   8.25 × 10^-5   8.25 × 10^-5   8.19 × 10^-5
  Feasi      8.88 × 10^-17  2.59 × 10^-14  3.05 × 10^-14  1.65 × 10^-14  1.59 × 10^-14
  Fval       44.4994        1.97 × 10^3    3.64 × 10^3    8.08 × 10^3    9.49 × 10^3
RCG1a
  Nitr       143.9          298.0          303.5          339.6          358.9
  Time       0.06           1.59           4.39           30.64          78.21
  Grad       8.48 × 10^-5   8.34 × 10^-5   8.02 × 10^-5   8.78 × 10^-5   8.56 × 10^-5
  Feasi      5.00 × 10^-16  5.75 × 10^-15  9.62 × 10^-15  2.55 × 10^-14  1.57 × 10^-14
  Fval       44.4994        1.97 × 10^3    3.64 × 10^3    8.08 × 10^3    9.49 × 10^3
RCG1b
  Nitr       137.2          288.7          296.1          309.9          360.8
  Time       0.05           1.48           4.01           25.34          68.74
  Grad       8.19 × 10^-5   8.61 × 10^-5   8.25 × 10^-5   8.51 × 10^-5   8.37 × 10^-5
  Feasi      3.55 × 10^-16  6.22 × 10^-15  1.01 × 10^-14  2.55 × 10^-14  1.57 × 10^-14
  Fval       44.4994        1.97 × 10^3    3.64 × 10^3    8.08 × 10^3    9.49 × 10^3
RCG1b+ZH
  Nitr       197.0          272.2          319.0          355.7          384.8
  Time       0.06           1.25           3.99           27.24          67.52
  Grad       7.84 × 10^-5   8.10 × 10^-5   8.53 × 10^-5   8.60 × 10^-5   8.29 × 10^-5
  Feasi      3.55 × 10^-16  7.23 × 10^-15  1.04 × 10^-14  2.85 × 10^-14  1.57 × 10^-14
  Fval       44.4994        1.97 × 10^3    3.64 × 10^3    8.08 × 10^3    9.49 × 10^3
Table 3. Numerical results of heterogeneous quadratic minimization considering Structure I.

Structure I: n = 10000, p = 10
Method       Nitr     Time   Grad          Feasi          Fval
PPASt        76.7     3.25   8.14 × 10^-5  8.17 × 10^-16  4.50 × 10^4
OptStiefel   1012.6   4.74   8.39 × 10^-5  3.98 × 10^-15  4.50 × 10^4
RCG1a        1216.4   9.13   8.90 × 10^-5  6.72 × 10^-14  4.50 × 10^4
RCG1b        1140.1   8.47   8.58 × 10^-5  5.77 × 10^-14  4.50 × 10^4
RCG1b+ZH     1191.9   8.45   8.36 × 10^-5  5.72 × 10^-14  4.50 × 10^4
Table 4. Numerical results of heterogeneous quadratic minimization considering Structure II.

Structure II: n = 1000, p = 5
Method       Nitr    Time    Grad          Feasi          Fval
PPASt        57.6    16.09   8.21 × 10^-5  7.66 × 10^-16  1.99 × 10^3
OptStiefel   592.1   14.27   8.18 × 10^-5  2.86 × 10^-15  1.99 × 10^3
RCG1a        672.7   23.23   9.01 × 10^-5  4.44 × 10^-15  1.99 × 10^3
RCG1b        643.6   21.77   8.50 × 10^-5  4.05 × 10^-15  1.99 × 10^3
RCG1b+ZH     638.7   18.55   8.79 × 10^-5  4.98 × 10^-15  1.99 × 10^3
Table 5. Numerical results on the joint diagonalization problem.

(n, p, N) = (500, 3, 3)
Method       Nitr      Time     Grad          Feasi          Fval
OptStiefel   444.7     1.25     8.56 × 10^-6  3.37 × 10^-16  -3.59 × 10^4
ManPG        9904.8    54.86    1.54 × 10^-4  1.24 × 10^-15  -3.59 × 10^4
PPASt        27.6      1.51     6.10 × 10^-6  3.67 × 10^-16  -3.59 × 10^4
RCG1a        547.7     2.41     7.87 × 10^-6  2.49 × 10^-15  -3.59 × 10^4
RCG1b        570.7     2.24     7.97 × 10^-6  2.52 × 10^-15  -3.59 × 10^4
RCG1b+ZH     605.3     1.94     7.39 × 10^-6  2.03 × 10^-15  -3.59 × 10^4

(n, p, N) = (500, 3, 5)
Method       Nitr      Time     Grad          Feasi          Fval
OptStiefel   634.2     4.11     8.59 × 10^-6  9.25 × 10^-16  -4.60 × 10^4
ManPG        11232.0   97.74    2.25 × 10^-4  1.04 × 10^-15  -4.60 × 10^4
PPASt        32.1      3.40     7.48 × 10^-6  2.85 × 10^-16  -4.60 × 10^4
RCG1a        654.6     4.82     8.44 × 10^-6  2.41 × 10^-15  -4.60 × 10^4
RCG1b        653.7     4.76     7.85 × 10^-6  2.61 × 10^-15  -4.60 × 10^4
RCG1b+ZH     770.0     4.44     7.91 × 10^-6  2.92 × 10^-15  -4.60 × 10^4

(n, p, N) = (1000, 3, 3)
Method       Nitr      Time     Grad          Feasi          Fval
OptStiefel   1702.5    25.36    8.90 × 10^-6  3.84 × 10^-16  -7.28 × 10^4
ManPG        19801.0   426.69   3.54 × 10^-2  8.66 × 10^-16  -7.28 × 10^4
PPASt        57.2      13.71    6.91 × 10^-6  3.22 × 10^-16  -7.28 × 10^4
RCG1a        1277.1    22.03    8.28 × 10^-6  2.02 × 10^-15  -7.28 × 10^4
RCG1b        1207.2    20.75    8.26 × 10^-6  2.47 × 10^-15  -7.28 × 10^4
RCG1b+ZH     1216.6    17.01    8.75 × 10^-6  2.82 × 10^-15  -7.28 × 10^4

(n, p, N) = (1000, 3, 5)
Method       Nitr      Time     Grad          Feasi          Fval
OptStiefel   2194.3    50.45    3.40 × 10^-6  3.34 × 10^-16  -9.26 × 10^4
ManPG        15401.0   505.59   3.10 × 10^-3  8.01 × 10^-16  -9.26 × 10^4
PPASt        46.8      18.59    6.68 × 10^-6  3.52 × 10^-16  -9.26 × 10^4
RCG1a        1045.3    30.71    7.71 × 10^-6  2.14 × 10^-15  -9.26 × 10^4
RCG1b        1019.9    30.01    8.58 × 10^-6  1.83 × 10^-15  -9.26 × 10^4
RCG1b+ZH     996.0     22.95    8.22 × 10^-6  1.64 × 10^-15  -9.26 × 10^4