A Modified Spectral Conjugate Gradient Method for Absolute Value Equations Associated with Second-Order Cones

Abstract: In this paper, we propose a modified spectral conjugate gradient (MSCG) method for solving absolute value equations associated with second-order cones (SOCAVEs). Some properties of SOCAVEs are analyzed, and the global convergence of the MSCG method is discussed in depth. Numerical experiments are given to illustrate the effectiveness and competitiveness of our algorithm.


Introduction
In this paper, we focus on investigating the following absolute value equations associated with second-order cones (SOCAVEs):

Ax − |x| = b,  (1)

where A ∈ R^{n×n}, b ∈ R^n, and |x| ∈ K^n. The second-order cone (SOC) is characterized by the following mathematical representation:

K^n := {x = (x_1, x_2) ∈ R × R^{n−1} : x_1 ≥ ‖x_2‖},

where ‖·‖ denotes the Euclidean norm. If n = 1, let K^n be the set of non-negative reals R_+. In addition, the general SOC K can be considered as the Cartesian product of individual SOCs, that is,

K := K^{n_1} × ⋯ × K^{n_r},

where n_1 + ⋯ + n_r = n and n_i (i = 1, ⋯, r) is a non-negative integer. Without loss of generality, we will concentrate on the scenario where r = 1, as all the analysis can be extended to the case of r > 1 based on the characteristics of the Cartesian product. For every x = (x_1, x_2) ∈ R × R^{n−1} and y = (y_1, y_2) ∈ R × R^{n−1}, their Jordan product ∘ is defined as [1–3]

x ∘ y = (⟨x, y⟩, y_1 x_2 + x_1 y_2).

Based on this definition, the absolute value vector |x| in the SOC K^n can be calculated by

|x| = √(x ∘ x).  (2)

In the context of solving SOCAVEs (1), the definition of |x| is specified in (2). SOCAVEs (1) can be seen as a particular form of the generalized absolute value equations associated with SOCs (SOCGAVEs)

Cx + D|x| = c,  (3)

with C, D ∈ R^{m×n}, c ∈ R^m, and |x| ∈ K^n. Actually, SOCGAVEs (3) were first introduced in [4] and then further explored in subsequent research conducted by [5–8], as well as the references therein. Furthermore, SOCAVEs (1) can be viewed as a generalization of the absolute value equations (AVEs)

Ax − |x| = b,  (4)

where A ∈ R^{n×n} and b ∈ R^n. Meanwhile, SOCGAVEs (3) can be seen as an extension of the generalized AVEs (GAVEs)

Cx + D|x| = c,  (5)

where C, D ∈ R^{m×n} and c ∈ R^m. In AVEs (4) and GAVEs (5), |x| denotes the componentwise absolute value of the vector x. It is worth noting that GAVEs (5) were initially introduced in [9] and subsequently explored in [10–12], along with the references included therein. Undoubtedly, AVEs (4) can be regarded as a particular case of GAVEs (5).
In recent years, AVEs (4) and GAVEs (5) have been extensively studied due to their significance in a variety of mathematical programming problems, including the linear complementarity problem (LCP), the bimatrix game, and others; for further details, refer to [10,11,13]. As a result, numerous theoretical findings and numerical algorithms have been developed for both AVEs (4) and GAVEs (5). In terms of theory, we elaborate on two aspects: the equivalent reformulations of AVEs (4) and their solvability. As for the equivalent reformulations, ref. [13] proved that AVEs (4) can be reformulated as an LCP when 1 is not a singular value of A, which subsumes many optimization problems such as linear programs, quadratic programs, and bimatrix games. Ref. [11] further improved the above equivalence, showing that AVEs (4) can be equivalently recast as an LCP without any additional assumptions, and the authors also provided a relationship with mixed integer programming. Ref. [14] gave the equivalence between horizontal linear complementarity problems (HLCPs) and AVEs (4). When considering the existence of a solution, sufficient conditions for a unique solution, multiple solutions, and no solution were presented by [13]. Ref. [15] presented two necessary and sufficient conditions for the unique solvability of AVEs (4); that is, the matrix A − I + 2D or A + I − 2D is nonsingular for any diagonal matrix D = diag(d_i) with 0 ≤ d_i ≤ 1. Based on the equivalent reformulation between AVEs (4) and the HLCP, ref. [14] also gave a necessary and sufficient condition for the unique solvability of AVEs (4), i.e., {A − I, A + I} has the column W-property. Under the condition that AVEs (4) has a unique solution, ref. [16] studied the error bound and condition number of AVEs (4), which play crucial roles in the convergence analysis of algorithms for AVEs (4). Ref. [12] analyzed topological properties of the solution set of AVEs (4), including its convexity, boundedness, and whether it consists of finitely many solutions.
When considering numerical algorithms, ref. [10] proposed a generalized Newton method for solving AVEs (4). Based on the equivalent reformulation of AVEs (4) as a two-by-two block system of nonlinear equations, ref. [17] developed the SOR-like iterative method, and [18] analyzed the selection of the optimal parameter for this algorithm. By reformulating AVEs (4) into a new nonlinear system, ref. [19] put forward an alternative SOR-like method to solve AVEs (4). Subsequently, many researchers, such as [20–22], presented iterative methods by applying the idea of [17]. Furthermore, AVEs (4) can be seen as a class of special nonlinear equations; based on this, ref. [23] proposed a modified multivariate spectral gradient algorithm for AVEs (4), and [24] developed a new three-term spectral subgradient method for solving AVEs (4).
Our interest in SOCAVEs (1) and SOCGAVEs (3) is motivated by their status as extensions of the conventional forms, as well as their equivalence to LCPs associated with SOCs (SOCLCPs). These problems have wide-ranging applications in engineering, control systems, and finance, as evidenced by studies such as [4,6,7]. Notably, recent research efforts have yielded significant developments in both numerical methodologies and theoretical findings for addressing SOCAVEs (1) and SOCGAVEs (3). Ref. [4] has shown that SOCAVEs (1) is equivalent to the second-order cone linear complementarity problem (SOCLCP). Furthermore, ref. [6] has proved that SOCAVEs (1) can be converted into the standard SOCLCP. When considering the existence of a solution, ref. [25] presented some sufficient conditions for the unique solvability of SOCAVEs (1). Furthermore, by using the P-property and the globally uniquely solvable (GUS) property of the SOCLCP, ref. [26] showed some sufficient and necessary conditions for the unique solution of SOCAVEs (1). When considering numerical algorithms for solving SOCAVEs (1), ref. [4] proposed a generalized Newton method and demonstrated that it exhibits global linear convergence and local quadratic convergence under appropriate assumptions. Considering the nonlinear and nonsmooth term |x| ∈ K^n, smoothing-type algorithms based on different smoothing techniques have been proposed in [6–8] for SOCAVEs (1). These smoothing-type algorithms usually apply a monotone line search technique. In order to improve their computing performance, ref. [27] introduced a non-monotone smoothing Newton algorithm for solving SOCAVEs (1). Based on a splitting of the two-by-two block coefficient matrix, ref. [28] proposed a modified SOR-like method to solve SOCAVEs (1).
SOCAVEs (1) can be interpreted as a special case of the following system of nonlinear equations:

F(x) = 0,  (6)

where F : R^n → R^n is continuous and monotone; for SOCAVEs (1), we take F(x) := Ax − |x| − b. Equation (6) is called a monotone equation if F satisfies

⟨F(x) − F(y), x − y⟩ ≥ 0, ∀x, y ∈ R^n.

For general nonlinear equations, spectral conjugate gradient methods have garnered significant attention since they do not require first-order information at each iteration. Moreover, spectral conjugate gradient methods have been successfully applied to find local minimizers of large-scale problems. Motivated by these observations, in this paper, we develop an MSCG method for solving SOCAVEs (1).
In particular, the significant contributions of this paper can be summarized as follows.
(i) A modified spectral conjugate gradient method is proposed for solving SOCAVEs (1).
(ii) The properties of the objective function of SOCAVEs (1) are established under suitable conditions.
(iii) In comparison to the spectral gradient algorithm, the proposed method is well suited to solving SOCAVEs (1) due to its low storage demands and its exclusive reliance on values of the objective function.
(iv) Numerical examples are given to demonstrate the effectiveness of the proposed method.
The remainder of this paper is structured as follows. Section 2 provides an overview of the preliminaries, propositions, and lemmas that are essential for understanding and analyzing the subsequent content. In Section 3, we propose a modified spectral conjugate gradient method for solving SOCAVEs (1). Furthermore, the global convergence of the proposed method is presented in Section 4. In Section 5, numerical examples are given to illustrate the efficiency of the proposed algorithm. Concluding remarks are made in Section 6.

Preliminaries
In this section, we gather fundamental results essential for our subsequent analysis. We begin by reviewing key concepts and background information concerning SOCs, as detailed in [1–3,29,30].
More specifically, any x = (x_1, x_2) ∈ R × R^{n−1} admits the spectral decomposition

x = λ_1 µ_1 + λ_2 µ_2,  (8)

where λ_1, λ_2 and µ_1, µ_2 are the spectral values and the corresponding spectral vectors of x, defined as

λ_i = x_1 + (−1)^i ‖x_2‖, i = 1, 2,

and

µ_i = (1/2)(1, (−1)^i x̄_2), i = 1, 2,

with x̄_2 = x_2/‖x_2‖ if x_2 ≠ 0, and otherwise x̄_2 = ω for any ω ∈ R^{n−1} satisfying ‖ω‖ = 1. Indeed, the decomposition (8) is guaranteed to be unique if x_2 ≠ 0.
To derive a more explicit formula for |x| ∈ K^n, we need to present some results about the functions associated with the SOC.

Definition 1. For any function f : R → R, define the corresponding function associated with the SOC K^n by

f^{soc}(x) = f(λ_1)µ_1 + f(λ_2)µ_2, ∀x = (x_1, x_2) ∈ R × R^{n−1},  (9)

where λ_1 and λ_2 are the spectral values and µ_1 and µ_2 are the associated spectral vectors of x.
If n = 1, according to (8), we have λ_1 = λ_2 = x and µ_1 = µ_2 = 1/2. By the definition of the SOC function (9), it holds that f^{soc}(x) = f(x). Next, we present some special examples of functions associated with the SOC. For any x = (x_1, x_2) ∈ R × R^{n−1}, according to the spectral decomposition (8), we have

x² = x ∘ x = λ_1² µ_1 + λ_2² µ_2  (10)

and, for x ∈ K^n,

√x = √λ_1 µ_1 + √λ_2 µ_2.  (11)

When n = 1, we have √(x²) = |x|. By using (10) and (11), we have

|x| = √(x ∘ x) = |λ_1| µ_1 + |λ_2| µ_2.  (12)

Therefore, for any x ∈ R^n, the absolute value |x| specified in (2) can be evaluated explicitly through (12). According to (8) and (12), we also have |x| ∈ K^n.

Finally, we discuss the Lipschitz continuity and the monotonicity of the mapping F (6). First, we give the following definitions and an auxiliary lemma.

Definition 2. The mapping F (6) is Lipschitz continuous if a constant L > 0 exists such that

‖F(x) − F(y)‖ ≤ L‖x − y‖, ∀x, y ∈ R^n.

Definition 3. The mapping F (6) is strongly monotone if a constant m > 0 exists such that

⟨F(x) − F(y), x − y⟩ ≥ m‖x − y‖², ∀x, y ∈ R^n.

Lemma 1 ([31]). For any vectors x = (x_1, x_2) ∈ R × R^{n−1} and y = (y_1, y_2) ∈ R × R^{n−1}, it holds that ‖|x| − |y|‖ ≤ ‖x − y‖.

Proposition 1. The mapping F (6) is Lipschitz continuous.
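As a concrete illustration of (8), (12), and (2) (the numbers here are our own, not taken from the original), take x = (1, 0, 2) ∈ R × R², so that ‖x_2‖ = 2:

λ_1 = 1 − 2 = −1, λ_2 = 1 + 2 = 3, µ_1 = (1/2)(1, 0, −1), µ_2 = (1/2)(1, 0, 1),

|x| = |λ_1|µ_1 + |λ_2|µ_2 = (1/2)(1, 0, −1) + (3/2)(1, 0, 1) = (2, 0, 1).

As a cross-check against (2), x ∘ x = (‖x‖², 2x_1x_2) = (5, 0, 4), and indeed |x| ∘ |x| = (‖(2, 0, 1)‖², 2·2·(0, 1)) = (5, 0, 4), so |x| = √(x ∘ x); moreover, |x| ∈ K³ since 2 ≥ ‖(0, 1)‖.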
Proof. For all x, y ∈ R^n, we have

‖F(x) − F(y)‖ = ‖A(x − y) − (|x| − |y|)‖ ≤ ‖A‖‖x − y‖ + ‖|x| − |y|‖ ≤ (‖A‖ + 1)‖x − y‖,

where the last inequality follows from Lemma 1. Therefore, the mapping F (6) is Lipschitz continuous with L = ‖A‖ + 1.
Proposition 2. Assume that A − I is a positive semi-definite matrix; then, the mapping F (6) is monotone.
Proof. For all x, y ∈ R^n, we have

⟨F(x) − F(y), x − y⟩ = ⟨A(x − y), x − y⟩ − ⟨|x| − |y|, x − y⟩.

Using the Cauchy–Schwarz inequality, we have

⟨|x| − |y|, x − y⟩ ≤ ‖|x| − |y|‖‖x − y‖ ≤ ‖x − y‖²,

in which Lemma 1 is used. Then,

⟨F(x) − F(y), x − y⟩ ≥ ⟨A(x − y), x − y⟩ − ‖x − y‖² = ⟨(A − I)(x − y), x − y⟩.  (18)

Obviously, when A − I is a positive semi-definite matrix, the mapping F (6) is monotone.

Proposition 3.
If A is a symmetric matrix and A − I is positive definite, then the mapping F (6) is strongly monotone.
Proof. Owing to the last inequality of (18), that is,

⟨F(x) − F(y), x − y⟩ ≥ ⟨(A − I)(x − y), x − y⟩,

we have

⟨F(x) − F(y), x − y⟩ ≥ (λ_min(A) − 1)‖x − y‖²,

in which λ_min(A) denotes the smallest eigenvalue of A; the last inequality holds by the symmetry of the matrix A. Because A − I is positive definite, λ_min(A) − 1 > 0. Therefore, the mapping F (6) is strongly monotone with m = λ_min(A) − 1.

The Algorithm
In this section, we develop a modified spectral conjugate gradient method for solving SOCAVEs (1).
The spectral conjugate gradient method is a class of important optimization methods for solving nonlinear equations. Let us briefly describe the algorithmic framework of the spectral conjugate gradient method for the nonlinear Equation (6). Starting from an initial guess x_0 ∈ R^n, the spectral conjugate gradient method usually generates a sequence {x_k} satisfying

x_{k+1} = x_k + α_k d_k,  (19)

where α_k > 0 is the step size, which is obtained by a suitable line search, and the search direction d_k is defined as

d_0 = −F(x_0), d_k = −θ_k F(x_k) + β_k d_{k−1}, k ≥ 1,  (20)

where θ_k and β_k are the spectral parameter and the conjugate parameter, respectively. Therefore, the performance of spectral conjugate gradient methods depends on the choice of the parameters θ_k and β_k. For the sake of convenience, we use F_k to denote F(x_k). Inspired by [32], we choose the spectral parameter θ_k as

θ_k = 1 + β_k (F_k^T d_{k−1})/‖F_k‖²,  (21)

and take the conjugate parameter β_k to be the one constructed in [32]; we refer to this choice of β_k as (22). In what follows, we will demonstrate that the search direction d_k generated by (20)–(22) is a sufficient descent direction, which satisfies

F_k^T d_k = −‖F_k‖², ∀k ≥ 0.  (23)

In general, the sufficient descent condition takes the form

F_k^T d_k ≤ −γ‖F_k‖², ∀k ≥ 0,  (24)

where γ > 0 is a positive constant. In contrast to (24), an attractive property of the sufficient descent direction (23) is its independence from any line search. Moreover, if the line search is exact, then F_k^T d_{k−1} = 0. In this case, we have θ_k = 1, and the MSCG method reduces to the standard conjugate gradient method.
Lemma 2. Suppose that the search direction d_k is generated by (20)–(22). Then d_k satisfies the sufficient descent condition (23).

Proof. Pre-multiplying (20) by F_k^T, we have

F_k^T d_k = −θ_k‖F_k‖² + β_k F_k^T d_{k−1} = −(θ_k − β_k F_k^T d_{k−1}/‖F_k‖²)‖F_k‖².

Let

δ_k := θ_k − β_k F_k^T d_{k−1}/‖F_k‖².

Applying (21), it is easy to deduce that δ_k = 1. Therefore, we have

F_k^T d_k = −‖F_k‖²,

and the condition (23) is satisfied.
This means that if we choose the search direction d_k so that δ_k ≡ 1, then d_k always satisfies the sufficient descent property (23). Moreover, this remarkable property is independent of any line search.
The above analysis allows us to obtain a reasonable spectral parameter θ_k. To obtain a globally convergent algorithm for solving the nonlinear monotone Equation (6), the search direction generated by the algorithm is also required to be bounded. At this point, we describe the algorithm more specifically as follows.
Algorithm 1: The MSCG method for SOCAVEs (1).
Step 0. Choose an initial point x_0 ∈ R^n, parameters ξ > 0, σ > 0, δ ∈ (0, 1), and a tolerance tol > 0. Set k := 0.
Step 1. If ‖F_k‖ ≤ tol, terminate. Else, go to Step 2.
Step 2. Compute the search direction d_k by (20)–(22).
Step 3. Set the test point µ_k = x_k + α_k d_k, where α_k = σδ^{i_k} and i_k is the smallest non-negative integer i such that

−F(x_k + σδ^i d_k)^T d_k ≥ ξσδ^i ‖d_k‖².  (25)

Step 4. Update the next iterate by the hyperplane projection method,

x_{k+1} = x_k − (⟨F(µ_k), x_k − µ_k⟩/‖F(µ_k)‖²) F(µ_k).  (26)

Step 5. Set k := k + 1 and go to Step 1.

The following lemma shows that Algorithm 1 is well defined.
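To fix ideas, the following is a minimal NumPy sketch of Algorithm 1; it is our own illustration rather than the authors' MATLAB code. The helper soc_abs evaluates |x| via (8) and (12) for a single cone (r = 1), theta implements (21), so the descent identity (23) holds by construction, and beta below is a PRP-type placeholder, since the exact conjugate parameter (22) follows [32] and is not reproduced here.

    import numpy as np

    def soc_abs(x):
        # |x| in K^n via the spectral decomposition (8) and formula (12)
        x1, x2 = x[0], x[1:]
        nx2 = np.linalg.norm(x2)
        lam1, lam2 = x1 - nx2, x1 + nx2                   # spectral values
        w = x2 / nx2 if nx2 > 0 else np.zeros_like(x2)    # any unit vector works if x2 = 0
        u1 = 0.5 * np.concatenate(([1.0], -w))            # spectral vectors
        u2 = 0.5 * np.concatenate(([1.0], w))
        return np.abs(lam1) * u1 + np.abs(lam2) * u2

    def mscg(A, b, x0, xi=1e-3, delta=0.5, sigma=2.0, tol=1e-6, max_iter=1000):
        # Sketch of Algorithm 1: spectral CG directions + hyperplane projection
        F = lambda z: A @ z - soc_abs(z) - b              # the mapping F of (6)
        x = np.asarray(x0, dtype=float)
        Fx = F(x)
        d = -Fx                                           # d_0 = -F_0
        for _ in range(max_iter):
            if np.linalg.norm(Fx) <= tol:                 # Step 1
                return x
            # Step 3: backtracking line search (25), trial steps sigma * delta^i
            alpha = sigma
            while -(F(x + alpha * d) @ d) < xi * alpha * (d @ d):
                alpha *= delta
            mu = x + alpha * d
            Fmu = F(mu)
            if np.linalg.norm(Fmu) <= tol:
                return mu
            # Step 4: project x onto {z : <F(mu), z - mu> = 0}, cf. (26)
            x = x - (Fmu @ (x - mu)) / (Fmu @ Fmu) * Fmu
            Fx_new = F(x)
            if np.linalg.norm(Fx_new) <= tol:
                return x
            # Step 2 for the next iteration: direction (20) with theta from (21);
            # beta is a PRP-type placeholder, the paper's formula (22) is in [32]
            beta = Fx_new @ (Fx_new - Fx) / (Fx @ Fx)
            theta = 1.0 + beta * (Fx_new @ d) / (Fx_new @ Fx_new)
            d = -theta * Fx_new + beta * d
            Fx = Fx_new
        return x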
Lemma 3. Algorithm 1 is well defined; i.e., for all k, a step size α_k = σδ^{i_k} exists such that the line search condition (25) holds.
Proof. We prove the conclusion by contradiction. Assume that there exists an index k_0 ≥ 0 such that (25) is not satisfied for any non-negative integer i, i.e.,

−F(x_{k_0} + σδ^i d_{k_0})^T d_{k_0} < ξσδ^i ‖d_{k_0}‖², ∀i ≥ 0.  (28)

From Proposition 1 (which implies the continuity of F) and letting i → ∞, we have

−F(x_{k_0})^T d_{k_0} ≤ 0.

Using (23), and noting that ‖F(x_{k_0})‖ > 0 since the algorithm has not terminated at Step 1, we can derive that

−F(x_{k_0})^T d_{k_0} = ‖F(x_{k_0})‖² > 0,

which contradicts the limit of (28). Therefore, Algorithm 1 is well defined.
Remark 1. By the monotonicity of F (6), we have ⟨F(µ_k), x̄ − µ_k⟩ ≤ 0 for all x̄ such that F(x̄) = 0. By applying a line search procedure along the descent direction d_k, we obtain a point µ_k = x_k + α_k d_k satisfying ⟨F(µ_k), x_k − µ_k⟩ > 0. Thus, the hyperplane

H_k := {x ∈ R^n : ⟨F(µ_k), x − µ_k⟩ = 0}

strictly separates the current iterate x_k from the zeros of Equation (6). Once the separating hyperplane is obtained, the next iterate x_{k+1} is computed by projecting x_k onto the hyperplane.
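For completeness, the closed form (26) is an instance of the standard formula for projecting a point onto a hyperplane (a routine computation, included here for the reader's convenience): with a := F(µ_k) and β := ⟨F(µ_k), µ_k⟩, the hyperplane is H_k = {x ∈ R^n : a^T x = β}, and

P_{H_k}(x_k) = x_k − ((a^T x_k − β)/‖a‖²) a = x_k − (⟨F(µ_k), x_k − µ_k⟩/‖F(µ_k)‖²) F(µ_k),

which is exactly the update (26) in Step 4 of Algorithm 1.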

Convergence Analysis
In this section, we establish the global convergence of Algorithm 1. To this end, we give the following auxiliary lemmas to complete the proof of convergence.

Lemma 4 ([33]). Suppose that the sequences {µ_k} and {x_{k+1}} are generated by (19) and (26) in Algorithm 1. If the mapping F is monotone and

⟨F(µ_k), x_k − µ_k⟩ > 0,  (29)

then for any x̄ ∈ R^n such that F(x̄) = 0, it holds that

‖x_{k+1} − x̄‖² ≤ ‖x_k − x̄‖² − ‖x_{k+1} − x_k‖².  (30)

Lemma 5 ([34,35]). Let {τ_k} be a sequence of real numbers and {t_k} be a sequence of non-negative real numbers. Assume that the sequence {τ_k} is bounded and that, for any k ≥ 0, it satisfies τ_{k+1} ≤ τ_k − t_k. Then the following statements hold:
(i) The series ∑_{k=0}^∞ t_k converges;
(ii) lim_{k→∞} t_k = 0;
(iii) The sequence {τ_k} is monotonically decreasing and convergent.
Lemma 6. Based on Propositions 1 and 2, if {µ_k} and {x_k} are the sequences generated by (19) and (26) in Algorithm 1, then the following statements hold:
(a) The sequence {x_k} is bounded;
(b) The sequence {F_k} is bounded;
(c) The step size α_k and the search direction d_k satisfy

lim_{k→∞} α_k‖d_k‖ = 0.  (31)

Proof. Assume that x̄ is a solution of SOCAVEs (1).
(a) From the line search condition (25), we can easily deduce that the inequality (29) holds.
Based on Lemmas 4 and 5, applied with τ_k = ‖x_k − x̄‖² and t_k = ‖x_{k+1} − x_k‖², we obtain the following results: the sequence {‖x_k − x̄‖} is monotonically decreasing and convergent, and

lim_{k→∞} ‖x_{k+1} − x_k‖ = 0.  (32)

Based on the aforementioned results, the sequence {x_k} is bounded; i.e., a constant c > 0 exists such that ‖x_k‖ ≤ c for all k ≥ 0. In addition, based on the inequality (30), we have

‖x_k − x̄‖ ≤ ‖x_0 − x̄‖, ∀k ≥ 0.  (33)

(b) By the Lipschitz continuity of F and (33), it follows that

‖F_k‖ = ‖F(x_k) − F(x̄)‖ ≤ L‖x_k − x̄‖ ≤ L‖x_0 − x̄‖;

then, by setting ε = ‖x_0 − x̄‖, we have ‖F_k‖ ≤ Lε for all k ≥ 0. Therefore, the sequence {F_k} is bounded.
(c) Based on Proposition 2 and the sufficient descent condition (23), we obtain that

⟨F(µ_k), x_k − µ_k⟩ ≤ ⟨F(x_k), x_k − µ_k⟩ = −α_k F_k^T d_k = α_k‖F_k‖².

From (19), (25), and the inequality above,

ξα_k²‖d_k‖² ≤ ⟨F(µ_k), x_k − µ_k⟩ ≤ α_k‖F_k‖².

Using α_k ≤ σ and the boundedness of {F_k}, the quantity ‖µ_k − x_k‖ = α_k‖d_k‖ is bounded, and hence {µ_k} is bounded. Based on the Lipschitz continuity of F, it follows that {F(µ_k)} is bounded; i.e., a constant M > 0 exists such that ‖F(µ_k)‖ ≤ M for all k ≥ 0. By (26) and the two inequalities above, we can obtain

‖x_{k+1} − x_k‖ = ⟨F(µ_k), x_k − µ_k⟩/‖F(µ_k)‖ ≥ ξα_k²‖d_k‖²/M.

It then follows from (32) that the equality (31) holds, which completes the proof.

Lemma 7. Suppose that the conditions of Lemma 6 hold. Then the step size α_k in Algorithm 1 satisfies

α_k ≥ min{σ, δ‖F_k‖²/((L + ξ)‖d_k‖²)}.

Proof. If α_k = σ, the bound holds trivially. Otherwise, α'_k := α_k/δ does not satisfy (25), that is,

−F(x_k + α'_k d_k)^T d_k < ξα'_k‖d_k‖².

The inequality above combined with (23) and Proposition 1 yields

‖F_k‖² = −F_k^T d_k = (F(x_k + α'_k d_k) − F_k)^T d_k − F(x_k + α'_k d_k)^T d_k ≤ Lα'_k‖d_k‖² + ξα'_k‖d_k‖².

Therefore,

α_k = δα'_k ≥ δ‖F_k‖²/((L + ξ)‖d_k‖²),

which completes this proof.
Theorem 1. Based on Propositions 1 and 2, if {x_k} is the sequence generated by (19) and (26) in Algorithm 1, then

lim inf_{k→∞} ‖F_k‖ = 0,

and the sequence {x_k} converges to a solution of SOCAVEs (1).

Proof. Suppose, to the contrary, that lim inf_{k→∞} ‖F_k‖ > 0; then there exists ν > 0 such that ‖F_k‖ ≥ ν for all k ≥ 0. Using the sufficient descent condition (23) and the Cauchy–Schwarz inequality, we have

‖F_k‖² = −F_k^T d_k ≤ ‖F_k‖‖d_k‖.

This implies that ‖d_k‖ ≥ ‖F_k‖ ≥ ν. Therefore, it follows from (31) that

lim_{k→∞} α_k = 0.  (43)

However, since {F_k} is bounded and, as noted in Section 3, the parameters (21) and (22) keep the search directions bounded, say ‖d_k‖ ≤ M_d, Lemma 7 shows that

α_k ≥ min{σ, δν²/((L + ξ)M_d²)} > 0.

This contradicts (43), and hence lim inf_{k→∞} ‖F_k‖ = 0. Consequently, since {x_k} is bounded, it has an accumulation point x̂ with F(x̂) = 0 by the continuity of F; by (30), the sequence {‖x_k − x̂‖} is convergent, and therefore the whole sequence {x_k} converges to x̂.

Lemma 9 ([25]). If all singular values of A exceed 1, then SOCAVEs (1) has a unique solution.
Theorem 2. Suppose that the conditions of Proposition 3 and Lemma 9 hold. Then the sequence {x_k} generated by Algorithm 1 converges to the unique solution of SOCAVEs (1).

Proof. Based on Lemma 9 and Proposition 3, it is evident that SOCAVEs (1) has a unique solution. The assertion then follows immediately from Theorem 1, which completes this proof.

Numerical Experiments
In this section, we report some numerical examples to compare the MSCG method with the modified multivariate spectral gradient algorithm proposed by [23] (MMSGA for short), the modified SOR-like method proposed by [28] (MSOR for short), and the generalized Newton method proposed by [4] (GN for short). All codes are written in MATLAB R2020a on a personal computer with an Intel(R) CPU of 2.10 GHz and 16.00 GB of RAM.
The parameters used in the implementation of the MSCG method are chosen as ξ = 0.001, δ = 0.5, and σ = 2. The parameters of the MMSGA are chosen as in [23]. As for the MSOR method, we choose the approximate optimal parameters suggested in [28]. In addition, we choose all parameters of the GN method to be the same as those in [4]. The termination criterion for all algorithms is ‖F_k‖ ≤ 10^{−6} or the number of iterations exceeding 1000. Additionally, we choose x_0 = (0, ⋯, 0)^T as the initial point.
In our experiments, the performance indicators of the algorithms are shown in Table 1.
Table 1. Performance indicators of the algorithms.

Indicators   Meaning of Representation
Iter         the total number of iterations
Time         the elapsed CPU time in seconds
Res          the norm of the absolute residual vectors, Res := ‖Ax_k − |x_k| − b‖
Err          the norm of the absolute error vectors, Err := ‖x_k − x̄‖, where x̄ is the exact solution
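As a small illustration of how the indicators in Table 1 and the stopping rule are evaluated (a sketch with our own variable names; soc_abs denotes any implementation of |·| in K^n, e.g., the helper sketched in Section 3):

    import numpy as np

    def indicators(A, b, xk, xbar, soc_abs):
        # Res: norm of the absolute residual vector Ax_k - |x_k| - b
        res = np.linalg.norm(A @ xk - soc_abs(xk) - b)
        # Err: norm of the absolute error vector x_k - xbar
        err = np.linalg.norm(xk - xbar)
        return res, err

    # termination: res <= 1e-6, or stop after 1000 iterations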
Example 1. Consider SOCAVEs (1), where b := Ax̄ − |x̄| for a prescribed exact solution x̄; the numerical results for different dimensions n are reported in Table 2.

From Table 2, it is evident that these iterative methods provide approximations to the desired solutions for different dimensions n. The elapsed CPU time of the MSCG method is the shortest among the four algorithms considered. Moreover, Table 2 demonstrates that the elapsed CPU time of the MSCG method is not significantly affected by changes in problem size compared to the other three algorithms.
Example 2. Let m be a given positive integer and n = m². We consider SOCAVEs (1), where A ∈ R^{n×n} is expressed as A = Â + 4I and b ∈ R^n is given by b = Ax̄ − |x̄|, where Â = tridiag(−I, B, −I) ∈ R^{n×n} is the block tridiagonal matrix with B on its diagonal, −I on its off-diagonals, and O elsewhere, with B = tridiag(−1, 4, −1) ∈ R^{m×m}, I ∈ R^{m×m} being the identity matrix, and O ∈ R^{m×m} being the zero matrix.
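As a sketch (assuming the block tridiagonal layout Â = tridiag(−I, B, −I) named above, which is one common arrangement of the blocks B, I, and O), the test matrix A of Example 2 can be assembled as follows:

    import numpy as np

    def example2_A(m):
        # B = tridiag(-1, 4, -1) of order m
        B = 4.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
        n = m * m
        Ahat = np.zeros((n, n))
        for i in range(m):
            Ahat[i*m:(i+1)*m, i*m:(i+1)*m] = B                    # diagonal blocks
            if i + 1 < m:
                Ahat[i*m:(i+1)*m, (i+1)*m:(i+2)*m] = -np.eye(m)   # upper blocks
                Ahat[(i+1)*m:(i+2)*m, i*m:(i+1)*m] = -np.eye(m)   # lower blocks
        return Ahat + 4.0 * np.eye(n)                             # A = Ahat + 4I

    # b is then built from a chosen solution xbar as b = A @ xbar - |xbar|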
In Example 2, we consider the second-order cone K^n. Here, A is a sparse block tridiagonal matrix and A − I is a symmetric positive definite matrix; therefore, the convergence conditions of the MSCG method are satisfied. The numerical results are listed in Table 3, demonstrating that all approaches provide approximations to the desired solutions. It can be seen from Table 3 that the MSCG method is the most efficient among the four algorithms: although it has the highest number of iterations, its elapsed CPU time is the shortest. In addition, the GN method is the most accurate.

Example 3. Consider SOCAVEs (1); we choose A ∈ R^{n×n} and b ∈ R^n by A = Â + 4I and b = Ax̄ − |x̄|, respectively, where Â is chosen such that, in contrast to Example 2, A − I is a nonsymmetric positive definite matrix. The numerical results for this scenario are presented in Table 4, which shows that the MSCG method exhibits superior computing performance compared to the other three algorithms.
Example 4. Consider SOCAVEs (1), generated in the same way as Example 1 with n = 1000. Here, however, the SOC is given by K = K^{n_1} × ⋯ × K^{n_r} with n_1 + ⋯ + n_r = n. In Example 4, we choose r = 2, 5, 10, 20, 50, 100. The numerical results are presented in Table 5, demonstrating that all approaches provide approximations to the desired solutions. It is evident that the MSCG, MMSGA, MSOR, and GN methods can effectively and accurately solve the problem, with the GN method exhibiting the highest accuracy among these algorithms. Although the MSCG method has a higher iteration count compared to the MSOR and GN methods, its elapsed CPU time is the shortest.

Example 5. This example is an adaptation of Example 1 from the work of [23]. We select a random matrix A following the structure used there; then we choose a random x̄ ∈ R^n and subsequently compute b = Ax̄ − |x̄|. Finally, we randomly generate x_0 ∈ R^n with entries in the interval [−5, 5] as the initial point for the iterative process.
We analyze the numerical comparison among Algorithm 1, MMSGA, MSOR, and GN. In particular, we utilize the performance profile introduced in [36] as the method of evaluation, with the running CPU time as the performance measurement. Figure 1 illustrates the performance profile of the computing time in the range τ ∈ [1, 1487] for the four solvers on 200 randomly generated test problems from Example 5. From Figure 1, it is evident that the proposed method outperforms MMSGA, MSOR, and GN on nearly 100% of the problems in terms of CPU time, which demonstrates its superior performance and advantages over the other algorithms. Indeed, for large-scale sparse problems, our proposed method is much more efficient than the three existing algorithms. However, its numerical performance deteriorates when dealing with dense coefficient matrices.

Figure 1. Performance profile of computation time of four different algorithms.

Conclusions
A modified spectral conjugate gradient (MSCG) method is developed for solving SOCAVEs (1). The convergence results of the MSCG method are proved under certain assumptions, and numerical results of the MSCG method for solving SOCAVEs (1) demonstrate the effectiveness of the proposed approach. Some issues remain to be further studied, including the following: (i) necessary and sufficient conditions for the solvability of SOCAVEs are worth further research; (ii) dynamic models have demonstrated significant success in addressing AVEs, as evidenced by works such as [37–39]; consequently, how dynamic models can be used to solve SOCAVEs deserves further investigation.


Table 2. Numerical results for Example 1.
Table 3. Numerical results for Example 2.
Table 4. Numerical results for Example 3.
Table 5. Numerical results for Example 4.