Article

A Two-Step Newton Algorithm for the Weighted Complementarity Problem with Local Biquadratic Convergence

Xiangjing Liu, Yihan Liu and Jianke Zhang
1 School of Sciences, Xi’an Technological University, Xi’an 710021, China
2 School of Statistics, Xi’an University of Finance and Economics, Xi’an 710100, China
3 School of Science, Xi’an University of Posts and Telecommunications, Xi’an 710121, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Axioms 2023, 12(9), 897; https://doi.org/10.3390/axioms12090897
Submission received: 11 August 2023 / Revised: 8 September 2023 / Accepted: 18 September 2023 / Published: 20 September 2023
(This article belongs to the Special Issue Computational Mathematics in Engineering and Applied Science)

Abstract

We discuss the weighted complementarity problem (WCP), which extends the nonlinear complementarity problem (NCP) on $\mathbb{R}^n$. In contrast to the NCP, many equilibrium problems in science, engineering, and economics can be transformed into WCPs and then treated by more efficient methods. Smoothing Newton algorithms, known for their at least locally superlinear convergence properties, have been widely applied to solve WCPs. We suggest a two-step Newton approach with a local biquadratic convergence rate to solve the WCP. The new method needs to solve two Newton equations at each iteration. We also insert a new term, which is of crucial importance for the local biquadratic convergence properties, when forming the Newton equations. We demonstrate that the solution to the WCP is the accumulation point of the iterative sequence produced by the approach. We further demonstrate that the algorithm possesses local biquadratic convergence properties. Numerical results indicate that the method is practical and efficient.

1. Introduction

The weighted complementarity problem (WCP) was originally introduced by Potra [1] as an extension of the standard nonlinear complementarity problem (NCP) on $\mathbb{R}^n$. Because of its substantial applicability in fields such as engineering, management, science, and market equilibrium, it has garnered a great deal of interest from researchers. In particular, problems such as linear programming, weighted centering problems [2], and Fisher market equilibrium problems [3] can be expressed using the model of the weighted linear complementarity problem (WLCP) shown below:
$$x \in \mathbb{R}^n_+,\qquad y \in \mathbb{R}^n_+,\qquad x \circ y = w,\qquad Ax + By + Cu = d, \qquad (1)$$
and the WLCP model provides a more efficient approach to these problems than the NCP model does.
Here we consider a more general WCP model: find a triple $(x, y, u) \in \mathbb{R}^{2n} \times \mathbb{R}^m$ satisfying
$$x \in \mathbb{R}^n_+,\qquad y \in \mathbb{R}^n_+,\qquad x \circ y = w,\qquad G(x, y, u) = 0, \qquad (2)$$
in which $G(x, y, u): \mathbb{R}^{2n+m} \to \mathbb{R}^{n+m}$ is a nonlinear mapping, $w \in \mathbb{R}^n_+$ is a given weight vector, and $x \circ y$ denotes the componentwise (Hadamard) product of $x$ and $y$. When $w = 0$, the WCP (2) becomes the NCP [4,5,6]. When the function $G(x, y, u)$ is linear, the WCP (2) reduces to (1) [7,8,9,10,11].
The WCP has been researched extensively, and numerous efficient algorithms have been proposed. Smoothing Newton algorithms have gained popularity among researchers because of their at least locally superlinear convergence properties [12,13,14,15,16]. The core idea of applying Newton algorithms to the WCP is as follows: with the aid of a complementarity function, the WCP is transformed into an equivalent system of equations, which is then solved by Newton-type iterations. For WLCPs, Zhang [17] presents a smoothing Newton algorithm. Tang [18] offers a damped derivative-free Gauss–Newton method for nonmonotone WLCPs that is globally convergent without additional assumptions on the problem.
When solving a nonlinear system of equations $G(z) = 0$, it is well known that two-step Newton approaches [19,20,21,22] typically achieve higher-order convergence, such as third-order or fourth-order convergence, compared with the classical Newton method. The two-step Newton approach for nonlinear equations has recently been used to solve WCPs. Tang et al. [23] present a smoothing Newton approach for WCPs with a local cubic convergence rate. The approach accelerates the convergence by adding an approximate Newton step once the iterative sequence is close to the solution, raising the local convergence rate from second order to third order; it can be viewed as an approximate two-step Newton algorithm. Liu et al. [24] propose a super-quadratic smoothing Newton algorithm for WCPs based on the following two-step Newton iteration for solving $G(z) = 0$:
$$z^{k+1} = s^k - G'(z^k)^{-1} G(s^k), \quad \text{where } s^k = z^k - G'(z^k)^{-1} G(z^k).$$
In particular, with suitable parameter choices in the Newton equations, that algorithm possesses a local cubic convergence property. Argyros [25] and Magrenan Ruiz [26] discuss the two-step Newton algorithm whose iteration sequence $\{z^k\}$ is generated by
$$z^{k+1} = s^k - G'(s^k)^{-1} G(s^k), \quad \text{where } s^k = z^k - G'(z^k)^{-1} G(z^k),$$
and they show that this two-step Newton algorithm has fourth-order convergence. A natural question is whether this scheme can be applied to the WCP to obtain an algorithm that is more efficient than Newton-type methods with a local cubic convergence rate. With these considerations, we develop a two-step Newton iterative technique for solving the WCP (2) with a local biquadratic (fourth-order) convergence property. The major contributions of the new algorithm are as follows.
  • The algorithm solves two Newton equations at each iteration to obtain the next iterate, in contrast to the algorithm in [23]. If the value of the objective function meets a certain descent criterion, the algorithm takes the point produced by the two Newton directions directly as the next iterate; otherwise, the step size is determined via a derivative-free line search. In this way, the computational efficiency of the algorithm is improved without increasing the time cost.
  • Compared with the algorithm in [24], we employ different Jacobian matrices in the two Newton equations and add the new term $\chi_k = \min\{1, \xi_k^4\}$ to their right-hand sides in order to guarantee the local biquadratic convergence property. Owing to this design, the new algorithm exhibits local biquadratic convergence under suitable conditions.
  • Because nonlinear complementarity problems [4,5,6] and systems of inequalities [27,28] can be transformed into equivalent systems of equations, the new algorithm also provides a fresh approach for solving these problems.
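To make the two-step schemes discussed above concrete, the following minimal Python/NumPy sketch implements the generic fourth-order iteration of [25,26] on a small hypothetical nonlinear system; the test function `G` and its Jacobian `JG` are illustrative stand-ins and are not the WCP mapping of (2).

```python
import numpy as np

def two_step_newton(G, JG, z0, tol=1e-12, max_iter=20):
    """Generic two-step Newton scheme of [25,26]:
    s^k = z^k - G'(z^k)^{-1} G(z^k),  z^{k+1} = s^k - G'(s^k)^{-1} G(s^k)."""
    z = np.asarray(z0, dtype=float)
    for _ in range(max_iter):
        if np.linalg.norm(G(z)) <= tol:
            break
        s = z - np.linalg.solve(JG(z), G(z))   # first (classical) Newton step
        z = s - np.linalg.solve(JG(s), G(s))   # second Newton step, Jacobian at s^k
    return z

# Hypothetical 2x2 test system (not the WCP mapping of (2)):
# G(z) = (z1^2 + z2^2 - 4, exp(z1) + z2 - 1)
G  = lambda z: np.array([z[0]**2 + z[1]**2 - 4.0, np.exp(z[0]) + z[1] - 1.0])
JG = lambda z: np.array([[2.0 * z[0], 2.0 * z[1]], [np.exp(z[0]), 1.0]])

print(two_step_newton(G, JG, np.array([1.0, -1.0])))
```

Replacing `JG(s)` by `JG(z)` in the second step yields the scheme of [24] quoted above, which reuses the Jacobian of the first step.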
The paper proceeds as follows. Section 2 introduces a smoothing function and discusses its fundamental properties. Section 3 proposes the new two-step smoothing Newton approach for WCPs and demonstrates that it is well-defined. Section 4 and Section 5 establish the global convergence and the local biquadratic convergence properties, respectively. Section 6 presents numerical experiments. Section 7 contains final remarks.

2. Preliminaries

In this paper, we deal with the WCP (2) by using smoothing Newton methods. To this end, we first introduce the smoothing function
$$\varphi_c(\xi, p, q) = \sqrt{p^2 + q^2 + 2c + 4\xi^2} - (p + q), \qquad (3)$$
where $\xi \in (0, 1)$ and $c \in \mathbb{R}_+$. The following lemma collects some basic properties of $\varphi_c(\xi, p, q)$, whose proof follows from simple calculations.
Lemma 1. 
Let $\varphi_c(\xi, p, q)$ be defined by (3). Then:
1. $p \geq 0$, $q \geq 0$, $pq = c \iff \varphi_c(0, p, q) = 0$;
2. if $\xi > 0$, then $\varphi_c(\xi, p, q)$ is continuously differentiable for any $(p, q) \in \mathbb{R}^2$.
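As a quick numerical illustration of item 1 of Lemma 1 (not part of the proof), the following sketch evaluates $\varphi_c(0, p, q)$ at points satisfying $p \geq 0$, $q \geq 0$, $pq = c$ and at a point violating these conditions; the value $c = 0.3$ and the sample points are arbitrary.

```python
import numpy as np

def phi(c, xi, p, q):
    # Smoothing function (3): phi_c(xi, p, q) = sqrt(p^2 + q^2 + 2c + 4 xi^2) - (p + q)
    return np.sqrt(p**2 + q**2 + 2.0 * c + 4.0 * xi**2) - (p + q)

c = 0.3
for p in [0.5, 1.2, 2.0]:
    q = c / p                                    # p >= 0, q >= 0, p*q = c
    print(f"p={p:.2f}, q={q:.4f}, phi={phi(c, 0.0, p, q):.2e}")  # ~0 up to rounding

# A point with p*q = c but p, q < 0: phi_c(0, p, q) is far from zero.
print(f"violating point: phi={phi(c, 0.0, -0.5, -0.6):.2e}")
```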
With the smoothing function $\varphi_c(\xi, p, q)$ defined by (3), for the WCP (2) we define a function $F(\xi, x, y, u): \mathbb{R} \times \mathbb{R}^{2n+m} \to \mathbb{R} \times \mathbb{R}^{2n+m}$ by
$$F(\xi, x, y, u) = \begin{pmatrix} \xi \\ \varphi_w(\xi, x, y) \\ G(x, y, u) \end{pmatrix},$$
where
$$\varphi_w(\xi, x, y) = \begin{pmatrix} \varphi_{w_1}(\xi, x_1, y_1) \\ \vdots \\ \varphi_{w_n}(\xi, x_n, y_n) \end{pmatrix}.$$
Let $z = (\xi, x, y, u)$ to simplify the notation. Then
$$F(z) = F(\xi, x, y, u) = 0 \iff \xi = 0,\quad G(x, y, u) = 0,\quad \varphi_w(\xi, x, y) = 0,$$
so that, by Lemma 1, $F(z) = 0$ holds if and only if $\xi = 0$ and $(x, y, u)$ solves the WCP (2).
Using Lemma 1, it is simple to show that $F(z)$ is continuously differentiable for any $\xi > 0$. By direct calculation, the Jacobian matrix of $F(z)$ is
$$F'(z) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ J_1 & J_2 & J_3 & 0 \\ 0 & G_x & G_y & G_u \end{pmatrix},$$
where
$$J_1 = \operatorname{vec}\!\left( \frac{4\xi}{\sqrt{x_i^2 + y_i^2 + 2w_i + 4\xi^2}} \right), \quad i = 1, 2, \dots, n,$$
$$J_2 = \operatorname{diag}\!\left( \frac{x_i}{\sqrt{x_i^2 + y_i^2 + 2w_i + 4\xi^2}} - 1 \right), \quad i = 1, 2, \dots, n,$$
$$J_3 = \operatorname{diag}\!\left( \frac{y_i}{\sqrt{x_i^2 + y_i^2 + 2w_i + 4\xi^2}} - 1 \right), \quad i = 1, 2, \dots, n.$$
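The following NumPy sketch assembles the residual $F(z)$ and the Jacobian blocks $J_1$, $J_2$, $J_3$ above for a user-supplied mapping $G$. The interface (callables `G` and `G_jac` returning, respectively, the value of $G$ and the blocks $G_x$, $G_y$, $G_u$) and all names are illustrative assumptions, not notation prescribed by the paper.

```python
import numpy as np

def F_and_jac(xi, x, y, u, w, G, G_jac):
    """Evaluate F(z) = (xi, phi_w(xi, x, y), G(x, y, u)) and its Jacobian F'(z).

    G(x, y, u)     -> array of length n + m
    G_jac(x, y, u) -> (G_x, G_y, G_u) with shapes (n+m, n), (n+m, n), (n+m, m)
    """
    n, m = x.size, u.size
    r = np.sqrt(x**2 + y**2 + 2.0 * w + 4.0 * xi**2)   # componentwise square-root term
    phi_w = r - (x + y)                                # smoothing function (3), applied rowwise

    Fz = np.concatenate(([xi], phi_w, G(x, y, u)))

    # Jacobian blocks: J1 is a column vector, J2 and J3 are diagonal matrices.
    J1 = 4.0 * xi / r
    J2 = np.diag(x / r - 1.0)
    J3 = np.diag(y / r - 1.0)
    Gx, Gy, Gu = G_jac(x, y, u)

    Jac = np.zeros((1 + 2 * n + m, 1 + 2 * n + m))
    Jac[0, 0] = 1.0
    Jac[1:n+1, 0] = J1
    Jac[1:n+1, 1:n+1] = J2
    Jac[1:n+1, n+1:2*n+1] = J3
    Jac[n+1:, 1:n+1] = Gx
    Jac[n+1:, n+1:2*n+1] = Gy
    Jac[n+1:, 2*n+1:] = Gu
    return Fz, Jac
```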
We next discuss the nonsingularity of the Jacobian matrix $F'(z)$. For this purpose, we need an assumption.
Assumption 1. 
Suppose that $\mathrm{Rank}(G_u) = m$ and that, for any $(\Delta x, \Delta y, \Delta u) \in \mathbb{R}^{2n+m}$, if
$$G_x \Delta x + G_y \Delta y + G_u \Delta u = 0, \qquad (10)$$
then $\langle \Delta x, \Delta y \rangle \geq 0$.
If $G(x, y, u)$ is linear, i.e., $G(x, y, u) = Ax + By + Cu - d$, then (10) reduces to
$$A \Delta x + B \Delta y + C \Delta u = 0,$$
which means that the associated WLCP is monotone [9,17,29]. Similarly to Lemma 1 in [17], we can draw the following conclusion.
Theorem 1. 
If Assumption 1 holds, then $F'(z)$ is nonsingular for any $\xi > 0$.

3. Description of the Method

This section presents a new two-step smoothing Newton approach and shows that it is well-defined. The formal statement of the method is given in Algorithm 1 below; we first collect some remarks on its main design features.
Remark 1. 
1. It is worth noting that the direction $d_1^k + d_2^k$ obtained by solving the two Newton equations is not necessarily a descent direction of the objective function. Therefore, to guarantee global convergence, we introduce a derivative-free line search. When the objective function achieves a certain amount of descent, we can use $d_1^k + d_2^k$ directly as the search direction; otherwise, we utilize (15) to generate a step length for obtaining the next iterate.
2. In Step 2, the additional term $\chi_k = \min\{1, \xi_k^4\}$ is added to the Newton Equations (11) and (12), in contrast to existing smoothing Newton methods [17,23,30]. The local biquadratic convergence of Algorithm 1 depends on this particular perturbation term. Although Algorithm 1 computes two Newton directions per iteration, its computational cost is similar to that of the traditional Newton approach.
3. The main distinction between Algorithm 1 and the accelerated algorithm in [23], viewed as two-step Newton algorithms, is that Algorithm 1 employs two Newton directions from the beginning, whereas the accelerated algorithm in [23] begins with one Newton direction and adds a second one only when certain conditions are met. The numerical experiments in Section 6 also show that Algorithm 1 solves the WCP more efficiently than the accelerated method.
4. Besides the difference in how the Newton directions are computed, another difference between Algorithm 1 and the algorithm in [24] lies in the choice of the descent direction used in the line search. As described in Step 4, the descent direction is chosen to be the sum of the two Newton directions under certain conditions. In the subsequent discussion, it will be shown that this choice ensures the global convergence of the algorithm.
Algorithm 1 A Two-Step Newton Method
Input: stopping tolerance $\delta > 0$; $\xi_0 > 0$; $c, l, \rho \in (0, 1)$; $\gamma, \sigma_1, \sigma_2 \in (0, 1)$; $\kappa \geq 0$ such that $\gamma + \gamma\kappa + \kappa c < 1$; $h = (\gamma, 0, 0, 0)^T \in \mathbb{R} \times \mathbb{R}^{2n+m}$; a sequence $\{\eta_k\} \subset \mathbb{R}_+$ satisfying $\sum_{k=0}^{\infty} \eta_k \leq \eta < \infty$ and $\lim_{k\to\infty} \eta_k = 0$; a starting point $(x^0, y^0, u^0) \in \mathbb{R}^{2n+m}$.
Output: an approximate solution $(x^k, y^k, u^k)$ to the WCP (2).
Step 0. Let $z^0 = (\xi_0, x^0, y^0, u^0)$ and $k = 0$.
Step 1. If $\|F(z^k)\| \leq \delta$, stop.
Step 2.
a. Compute $d_1^k$ by solving
$$F'(z^k)\, d_1^k = -F(z^k) + \chi_k h, \qquad (11)$$
where $\chi_k = \min\{1, \xi_k^4\}$. Let $s^k = z^k + d_1^k$.
b. Compute $d_2^k$ by solving
$$F'(s^k)\, d_2^k = -F(s^k) + \chi_k h. \qquad (12)$$
Step 3. If
$$\|F(z^k + d_1^k + d_2^k)\| > l \cdot \|F(z^k)\|, \qquad (13)$$
go to Step 4. Otherwise, set $d^k = d_1^k + d_2^k$ and $\beta_k = 1$, and go to Step 5.
Step 4. Let
$$d^k = \begin{cases} d_1^k + d_2^k, & \text{if } \|F(s^k)\| \leq c\,\|F(z^k)\| \text{ and } \|F'(z^k) F'(s^k)^{-1}\| \leq \kappa, \\ d_1^k + \beta_k d_2^k, & \text{otherwise}, \end{cases} \qquad (14)$$
and let $\beta_k = \rho^{m(k)}$ be the maximum element of $\{\rho^0, \rho^1, \rho^2, \dots\}$ satisfying
$$\|F(z^k + \rho^{m(k)} d^k)\|^2 \leq (1 + \eta_k)\|F(z^k)\|^2 - \sigma_1 (\rho^{m(k)})^2 \|d^k\|^2 - \sigma_2 (\rho^{m(k)})^2 \|F(z^k)\|^2. \qquad (15)$$
Step 5. Set $z^{k+1} = z^k + \beta_k d^k$ and $k := k + 1$. Return to Step 1.
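The following Python sketch mirrors the structure of Algorithm 1. It assumes a routine `F_fun(z)` returning $F(z)$ and $F'(z)$ (for example, built from the sketch in Section 2), and that $z$ is stored as a flat array with `z[0]` holding $\xi$; the parameter values follow Section 6, while `kappa` and `max_iter` are illustrative choices (the chosen `kappa` satisfies $\gamma + \gamma\kappa + \kappa c < 1$).

```python
import numpy as np

def two_step_newton_wcp(F_fun, z0, delta=1e-6, rho=0.6, gamma=0.01, l=0.1,
                        c=0.01, kappa=10.0, sigma1=1e-3, sigma2=1e-3,
                        eta=lambda k: 1.0 / 2 ** (k + 2), max_iter=100):
    """Sketch of Algorithm 1. F_fun(z) must return (F(z), F'(z))."""
    z = np.asarray(z0, dtype=float)
    n_all = z.size
    for k in range(max_iter):
        Fz, Jz = F_fun(z)
        nFz = np.linalg.norm(Fz)
        if nFz <= delta:                                    # Step 1
            break
        chi = min(1.0, z[0] ** 4)                           # chi_k = min{1, xi_k^4}
        h = np.zeros(n_all); h[0] = gamma                   # h = (gamma, 0, ..., 0)^T
        d1 = np.linalg.solve(Jz, -Fz + chi * h)             # Newton equation (11)
        s = z + d1
        Fs, Js = F_fun(s)
        d2 = np.linalg.solve(Js, -Fs + chi * h)             # Newton equation (12)

        if np.linalg.norm(F_fun(z + d1 + d2)[0]) <= l * nFz:  # Step 3: full step accepted
            z = z + d1 + d2
            continue

        # Step 4: pick the direction per (14) and backtrack over {1, rho, rho^2, ...}
        # until the line-search condition (15) holds.
        sum_ok = (np.linalg.norm(Fs) <= c * nFz and
                  np.linalg.norm(Jz @ np.linalg.inv(Js)) <= kappa)
        beta = 1.0
        while True:
            d = d1 + d2 if sum_ok else d1 + beta * d2
            lhs = np.linalg.norm(F_fun(z + beta * d)[0]) ** 2
            rhs = ((1.0 + eta(k)) * nFz ** 2
                   - sigma1 * beta ** 2 * np.dot(d, d)
                   - sigma2 * beta ** 2 * nFz ** 2)
            if lhs <= rhs or beta < 1e-12:                  # accept, or bail out of the sketch
                break
            beta *= rho
        z = z + beta * d                                    # Step 5
    return z
```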
To investigate the convergence of Algorithm 1, we first demonstrate that it is well-defined.
Theorem 2. 
Suppose that Assumption 1 holds. Then Algorithm 1 is well-defined.
Proof of Theorem 2. 
Since $F'(z)$ is nonsingular due to Theorem 1, Step 2 is feasible. By the definition of $\chi_k$, we have
$$\chi_k = \min\{1, \xi_k^4\} = 1 \leq \xi_k \leq \|F(z^k)\|, \quad \text{if } \xi_k \geq 1,$$
or
$$\chi_k = \min\{1, \xi_k^4\} = \xi_k^4 \leq \xi_k \leq \|F(z^k)\|, \quad \text{if } \xi_k < 1.$$
Thus,
$$\chi_k \leq \xi_k \leq \|F(z^k)\|. \qquad (16)$$
We next discuss the following two cases.
Case I: $d^k = d_1^k + d_2^k$.
We obtain from (11), (12), (14), and (16) that
$$\begin{aligned} F(z^k)^T F'(z^k) d^k &= F(z^k)^T F'(z^k)(d_1^k + d_2^k) \\ &= F(z^k)^T \left\{ -F(z^k) + \chi_k h + F'(z^k)\left[ F'(s^k)^{-1} \left(-F(s^k) + \chi_k h\right) \right] \right\} \\ &\leq -(1 - \gamma)\|F(z^k)\|^2 - F(z^k)^T F'(z^k) F'(s^k)^{-1} F(s^k) + \chi_k F(z^k)^T F'(z^k) F'(s^k)^{-1} h \\ &\leq -(1 - \gamma - \kappa c - \kappa\gamma)\|F(z^k)\|^2, \end{aligned} \qquad (17)$$
which together with $\gamma + \gamma\kappa + \kappa c < 1$ yields
$$F(z^k)^T F'(z^k) d^k < 0. \qquad (18)$$
Note that, for any $k \geq 0$, if the line search condition (15) is not satisfied at a trial step size $\beta_k$, then
$$\|F(z^k + \beta_k d^k)\|^2 > (1 + \eta_k)\|F(z^k)\|^2 - \sigma_1 \beta_k^2 \|d^k\|^2 - \sigma_2 \beta_k^2 \|F(z^k)\|^2 \geq \|F(z^k)\|^2 - \sigma_1 \beta_k^2 \|d^k\|^2 - \sigma_2 \beta_k^2 \|F(z^k)\|^2. \qquad (19)$$
Then,
$$\frac{\|F(z^k + \beta_k d^k)\|^2 - \|F(z^k)\|^2}{\beta_k} > -\sigma_1 \beta_k \|d^k\|^2 - \sigma_2 \beta_k \|F(z^k)\|^2. \qquad (20)$$
Letting the trial step size $\beta_k \to 0$ in (20), we obtain
$$F(z^k)^T F'(z^k) d^k \geq 0,$$
which contradicts (18). We can thus find a step size $\beta_k$ that satisfies (15).
Case II: $d^k = d_1^k + \beta_k d_2^k$.
We obtain from (11) and (16) that
$$F(z^k)^T F'(z^k) d_1^k = F(z^k)^T \left(-F(z^k) + \chi_k h\right) \leq -(1 - \gamma)\|F(z^k)\|^2 < 0. \qquad (21)$$
On the other hand, if (15) is not satisfied at a trial step size $\beta_k$, then
$$\|F(z^k + \beta_k(d_1^k + \beta_k d_2^k))\|^2 > (1 + \eta_k)\|F(z^k)\|^2 - \sigma_1 \beta_k^2 \|d_1^k + \beta_k d_2^k\|^2 - \sigma_2 \beta_k^2 \|F(z^k)\|^2 \geq \|F(z^k)\|^2 - \sigma_1 \beta_k^2 \|d_1^k + \beta_k d_2^k\|^2 - \sigma_2 \beta_k^2 \|F(z^k)\|^2. \qquad (22)$$
Then,
$$\frac{\|F(z^k + \beta_k(d_1^k + \beta_k d_2^k))\|^2 - \|F(z^k)\|^2}{\beta_k} > -\sigma_1 \beta_k \|d_1^k + \beta_k d_2^k\|^2 - \sigma_2 \beta_k \|F(z^k)\|^2. \qquad (23)$$
Letting the trial step size $\beta_k \to 0$ in (23), we obtain
$$F(z^k)^T F'(z^k) d_1^k \geq 0,$$
which contradicts (21). We can thus find a step size $\beta_k$ that satisfies (15).
In conclusion, Algorithm 1 is well-defined. □

4. Global Convergence

We start by defining the set
$$\Omega(z^0) = \left\{ z \in \mathbb{R}_+ \times \mathbb{R}^{2n+m} \; : \; \|F(z)\| \leq e^{\eta/2}\|F(z^0)\| \right\}.$$
Theorem 3. 
If Assumption 1 holds, then Algorithm 1 generates a sequence $z^k = (\xi_k, x^k, y^k, u^k)$ satisfying $0 \leq \xi_{k+1} \leq \xi_k \leq \xi_0$ and $z^k \in \Omega(z^0)$.
Proof of Theorem 3. 
We show $\xi_k \geq 0$ by induction. Supposing that $\xi_k \geq 0$, it follows from (11) and (12) that
$$\Delta\xi_k^1 = -\xi_k + \chi_k\gamma$$
and
$$\Delta\xi_k^2 = -(\xi_k + \Delta\xi_k^1) + \chi_k\gamma.$$
Then, by Step 5, we have
$$\xi_{k+1} = \xi_k + \beta_k\left(\Delta\xi_k^1 + \beta_k\Delta\xi_k^2\right) = \xi_k + \beta_k\left\{-\xi_k + \chi_k\gamma + \beta_k\left[-(\xi_k + \Delta\xi_k^1) + \chi_k\gamma\right]\right\} = (1 - \beta_k)\xi_k + \beta_k\chi_k\gamma,$$
indicating that $\xi_{k+1} \geq 0$, since $0 < \beta_k \leq 1$. Moreover, combining this with (16) yields
$$\xi_{k+1} - \xi_k = (1 - \beta_k)\xi_k + \beta_k\chi_k\gamma - \xi_k = \beta_k(\chi_k\gamma - \xi_k) \leq 0,$$
i.e., $\xi_{k+1} \leq \xi_k$ for any $k \geq 0$.
It follows from (15) that
$$\|F(z^{k+1})\|^2 \leq (1 + \eta_k)\|F(z^k)\|^2 \leq (1 + \eta_k)(1 + \eta_{k-1})\cdots(1 + \eta_0)\|F(z^0)\|^2 \leq \left[\frac{1}{k+1}\sum_{j=0}^{k}(1 + \eta_j)\right]^{k+1}\|F(z^0)\|^2 \leq \left(1 + \frac{\eta}{k+1}\right)^{k+1}\|F(z^0)\|^2 \leq e^{\eta}\|F(z^0)\|^2,$$
which indicates that $z^k \in \Omega(z^0)$. □
Theorem 4. 
If Assumption 1 is satisfied and $\{z^k\}$ is bounded, then $\lim_{k\to\infty}\xi_k = 0$.
Proof of Theorem 4. 
From Theorem 3, $\{\xi_k\}$ is monotonically nonincreasing and bounded below, and is therefore convergent. Let $\lim_{k\to\infty}\xi_k = \xi_* \geq 0$. If $\xi_* = 0$, the conclusion clearly holds. Supposing that $\xi_* > 0$, we next derive a contradiction.
Letting $\lim_{k\to\infty} z^k = z^* = (\xi_*, x^*, y^*, u^*)$, we have $\lim_{k\to\infty}\|F(z^k)\| = \|F(z^*)\| \geq \xi_* > 0$. From (15), we have
$$\|F(z^k + \beta_k d^k)\|^2 \leq (1 + \eta_k)\|F(z^k)\|^2 - \sigma_1\beta_k^2\|d^k\|^2 - \sigma_2\beta_k^2\|F(z^k)\|^2.$$
Since $\lim_{k\to\infty}\eta_k = 0$, letting $k \to \infty$ we obtain
$$\|F(z^*)\|^2 \leq \|F(z^*)\|^2 - \sigma_1\beta_*^2\|d^*\|^2 - \sigma_2\beta_*^2\|F(z^*)\|^2,$$
i.e.,
$$\beta_*^2\left(\sigma_1\|d^*\|^2 + \sigma_2\|F(z^*)\|^2\right) \leq 0,$$
which indicates that $\beta_* = 0$, since $\|F(z^*)\| > 0$ and $\sigma_1, \sigma_2 > 0$. We proceed to the discussion of two cases.
Case 1: $\|F(s^k)\| \leq c\|F(z^k)\|$ and $\|F'(z^k)F'(s^k)^{-1}\| \leq \kappa$.
Letting $\hat\beta = \beta_k/\rho$, it holds that
$$\|F(z^k + \hat\beta d^k)\|^2 > (1 + \eta_k)\|F(z^k)\|^2 - \sigma_1\hat\beta^2\|d^k\|^2 - \sigma_2\hat\beta^2\|F(z^k)\|^2 \geq \|F(z^k)\|^2 - \sigma_1\hat\beta^2\|d^k\|^2 - \sigma_2\hat\beta^2\|F(z^k)\|^2 \qquad (28)$$
for sufficiently large $k$. Since
$$\|F(z^k + \hat\beta d^k)\|^2 = \|F(z^k) + \hat\beta F'(z^k)d^k\|^2 + o(\hat\beta) = \|F(z^k)\|^2 + 2\hat\beta F(z^k)^T F'(z^k)d^k + o(\hat\beta), \qquad (29)$$
combining (28) with (29), we obtain
$$2F(z^k)^T F'(z^k)d^k + \frac{o(\hat\beta)}{\hat\beta} > -\hat\beta\left(\sigma_1\|d^k\|^2 + \sigma_2\|F(z^k)\|^2\right). \qquad (30)$$
By (11), (12), and (14), we have
$$\begin{aligned} F(z^k)^T F'(z^k)d^k &= F(z^k)^T F'(z^k)(d_1^k + d_2^k) \\ &= F(z^k)^T\left\{-F(z^k) + \chi_k h + F'(z^k)\left[F'(s^k)^{-1}\left(-F(s^k) + \chi_k h\right)\right]\right\} \\ &\leq -\|F(z^k)\|^2 + \chi_k\gamma\|F(z^k)\| + \kappa c\|F(z^k)\|^2 + \chi_k\kappa\gamma\|F(z^k)\|. \end{aligned} \qquad (31)$$
Then, from (30) and (31), we obtain
$$2\left[(-1 + \kappa c)\|F(z^k)\|^2 + \chi_k(\gamma + \kappa\gamma)\|F(z^k)\|\right] + \frac{o(\hat\beta)}{\hat\beta} > -\hat\beta\left(\sigma_1\|d^k\|^2 + \sigma_2\|F(z^k)\|^2\right). \qquad (32)$$
Dividing (32) by $\|F(z^k)\|$ and letting $k \to \infty$ (so that $\hat\beta \to 0$), it holds that
$$2\left[(-1 + \kappa c)\|F(z^*)\| + (\gamma + \kappa\gamma)\chi_*\right] \geq 0,$$
where $\chi_* = \min\{1, \xi_*^4\}$. Then, we have
$$\chi_* \geq \frac{1 - \kappa c}{\gamma + \gamma\kappa}\|F(z^*)\| > \frac{\gamma + \gamma\kappa}{\gamma + \gamma\kappa}\|F(z^*)\| = \|F(z^*)\|,$$
which contradicts (16). Thus, $\xi_* = 0$.
Case 2: If the condition that $\|F(s^k)\| \leq c\|F(z^k)\|$ and $\|F'(z^k)F'(s^k)^{-1}\| \leq \kappa$ is not satisfied, we obtain from Step 4 that $d^k = d_1^k + \beta_k d_2^k$. Similarly to Case 1, it can be deduced that
$$\gamma\chi_* \geq \|F(z^*)\|,$$
and then
$$\chi_* \geq \frac{\|F(z^*)\|}{\gamma} > \|F(z^*)\|,$$
which again contradicts (16). Therefore, $\xi_* = 0$. □
Theorem 5. 
If Assumption 1 is satisfied, then the sequence of iterations $\{z^k\}$ produced by Algorithm 1 converges to a solution to the WCP (2).
Proof of Theorem 5. 
From (15), we obtain
$$\|F(z^{k+1})\|^2 \leq (1 + \eta_k)\|F(z^k)\|^2.$$
Since $\sum_{k=0}^{\infty}\eta_k \leq \eta < \infty$, the sequence $\{\|F(z^k)\|^2\}$ is convergent according to Lemma 3.3 in [31], and $\{\|F(z^k)\|\}$ is therefore also convergent.
Suppose, without loss of generality, that $\lim_{k\to\infty} z^k = z^* = (\xi_*, x^*, y^*, u^*)$. We only need to verify that $F(z^*) = 0$. If not, then, by arguments similar to those in the proof of Theorem 4, we obtain either
$$1 - \gamma - \kappa c - \gamma\kappa \leq 0,$$
which is a contradiction, or
$$(1 - \gamma)\|F(z^*)\|^2 \leq 0,$$
which is also a contradiction. Hence, $F(z^*) = 0$. □

5. Local Convergence

We deal with the local biquadratic convergence property in this section.
Theorem 6. 
Suppose that Assumption 1 holds and that all elements of the generalized Jacobian $\partial F(z^*)$ are nonsingular. If, in addition, $G'(x, y, u)$ and $F'(x, y, u)$ are Lipschitz continuous near $z^*$, then $d^k = d_1^k + d_2^k$ for all sufficiently large $k$, and $\{z^k\}$ converges locally biquadratically to $z^*$.
Proof of Theorem 6. 
Since $z^*$ is a solution to the WCP (2), the Jacobian $F'(z^k)$ is nonsingular for any $z^k$ sufficiently close to $z^*$ according to Theorem 1. From the condition that all elements of $\partial F(z^*)$ are nonsingular and Proposition 4.1 in [32], we obtain, for all sufficiently large $k$,
$$\|F'(z^k)^{-1}\| = O(1). \qquad (34)$$
Additionally, since $F(z)$ is strongly semismooth and locally Lipschitz continuous,
$$\|F(z^k) - F(z^*) - F'(z^k)(z^k - z^*)\| = O(\|z^k - z^*\|^2) \qquad (35)$$
and
$$\|F(z^k)\| = \|F(z^k) - F(z^*)\| = O(\|z^k - z^*\|). \qquad (36)$$
Since
$$\|\chi_k h\| \leq \gamma\xi_k^4 \leq \gamma\|F(z^k)\|^4, \qquad (37)$$
by (11) and (34)–(37), we have
$$\begin{aligned} \|s^k - z^*\| &= \|z^k + d_1^k - z^*\| = \|z^k - F'(z^k)^{-1}\left(F(z^k) - \chi_k h\right) - z^*\| \\ &= O\!\left(\|\chi_k h\| + \|F(z^k) - F(z^*) - F'(z^k)(z^k - z^*)\|\right) \\ &\leq O(\|F(z^k)\|^4) + O(\|z^k - z^*\|^2) = O(\|z^k - z^*\|^2), \end{aligned} \qquad (38)$$
implying that $s^k$ is sufficiently close to $z^*$ whenever $z^k$ is. By (36) and (38), we obtain
$$\|F(s^k)\| = O(\|s^k - z^*\|) = O(\|F(z^k)\|^2). \qquad (39)$$
Hence, combining (12), (34), (37), and (39) yields
$$\|d_2^k\| = \|F'(s^k)^{-1}\left(-F(s^k) + \chi_k h\right)\| \leq O\!\left(\|F(s^k)\| + \|\chi_k h\|\right) = O(\|F(z^k)\|^2), \qquad (40)$$
and combining this with (36) and (38) yields
$$\|z^k + d_1^k + d_2^k - z^*\| \leq \|s^k - z^*\| + \|d_2^k\| = O(\|z^k - z^*\|^2), \qquad (41)$$
indicating that $z^k + d_1^k + d_2^k$ is sufficiently close to $z^*$ whenever $z^k$ is.
Therefore, combining (36) and (41), we have
$$\|F(z^k + d_1^k + d_2^k)\| = O(\|z^k + d_1^k + d_2^k - z^*\|) = O(\|z^k - z^*\|^2) = o(\|F(z^k)\|) = l_k\|F(z^k)\|, \qquad (42)$$
where $l_k \to 0$. Hence, for all sufficiently large $k$ we have $l_k \leq l$, so condition (13) fails and Step 3 sets $\beta_k = 1$ and $d^k = d_1^k + d_2^k$, i.e.,
$$z^{k+1} = z^k + d_1^k + d_2^k. \qquad (43)$$
Under the condition that $F'(z)$ is Lipschitz continuous, we have from (12), (34), (37), and (43) that
$$\begin{aligned} \|z^{k+1} - z^*\| &= \|s^k + d_2^k - z^*\| = \|s^k - z^* + F'(s^k)^{-1}\left(-F(s^k) + \chi_k h\right)\| \\ &\leq O\!\left(\|F(s^k) - F(z^*) - F'(s^k)(s^k - z^*)\| + \|\chi_k h\|\right) \\ &= O(\|s^k - z^*\|^2) + O(\|F(z^k)\|^4) = O(\|z^k - z^*\|^4). \end{aligned} \qquad (44)$$
Additionally, we have from (36) that
$$\|F(z^{k+1})\| = O(\|z^{k+1} - z^*\|) = O(\|z^k - z^*\|^4) = O(\|F(z^k)\|^4), \qquad (45)$$
implying that $\{z^k\}$ converges locally biquadratically to $z^*$. □

6. Numerical Experiments

This section reports the computational efficiency of Algorithm 1, denoted by SNQ_L, for WLCPs and WCPs. The numerical experiments are performed on a PC with 16 GB of RAM running MATLAB R2018b. In our tests, we set $n = 2m$, $\delta = 10^{-6}$, $\rho = 0.6$, $\gamma = 0.01$, $l = 0.1$, $c = 0.01$, $\sigma_1 = \sigma_2 = 0.001$, $\xi_0 = 0.1$, and $\eta_k = 1/2^{k+2}$.
We compare SNQ_L with the algorithm in [23], denoted by ANM_Tang, and the algorithm in [24], denoted by TSN_Liu. For ANM_Tang and TSN_Liu we use the same parameters as in [23,24], respectively.

6.1. Numerical Tests for a WLCP

Consider the following WLCP,
$$x \in \mathbb{R}^n_+,\qquad y \in \mathbb{R}^n_+,\qquad x \circ y = w,\qquad Ax + By + Cu = d,$$
where
$$A = \begin{pmatrix} M \\ N \end{pmatrix},\quad B = \begin{pmatrix} 0 \\ -I \end{pmatrix},\quad C = \begin{pmatrix} 0 \\ M^{T} \end{pmatrix},\quad d = \begin{pmatrix} Mf \\ g \end{pmatrix}.$$
Here, $M \in \mathbb{R}^{m\times n}$ is generated from the standard normal distribution and $N = P^{T}P/\|P^{T}P\|$, where the entries of $P \in \mathbb{R}^{n\times n}$ are uniformly distributed over the interval $(0, 1)$. The elements of $f \in \mathbb{R}^n$ and $g \in \mathbb{R}^n$ follow uniform distributions over the intervals $(0, 1)$ and $(-1, 0)$, respectively. Then, $w$ is generated by $w = \hat{x} \circ \hat{y}$ with $\hat{y} = N\hat{x} - g$ and $\hat{x} = \mathrm{rand}(n, 1)$. The starting point is $(x^0, y^0, u^0) = (0, 0, \dots, 0)^T$. In the following, Aveero denotes the average value of $\|F(z^k)\|$ at the final iteration, Avek the average number of iterations, and AveCPU the average runtime of the corresponding algorithm in seconds.
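A sketch of the random WLCP data generation described above, using the block reconstruction of A, B, C, and d given here; the linear mapping $G(x, y, u) = Ax + By + Cu - d$ and its constant Jacobian blocks are returned so that the instance can be fed to the sketches of Sections 2 and 3. The Frobenius norm is assumed for the normalization of $P^{T}P$.

```python
import numpy as np

def make_wlcp_instance(m, seed=0):
    """Random WLCP test instance as in Section 6.1 (block signs as reconstructed above)."""
    rng = np.random.default_rng(seed)
    n = 2 * m
    M = rng.standard_normal((m, n))
    P = rng.uniform(0.0, 1.0, (n, n))
    N = P.T @ P / np.linalg.norm(P.T @ P)        # positive semidefinite, normalized
    f = rng.uniform(0.0, 1.0, n)
    g = rng.uniform(-1.0, 0.0, n)

    A = np.vstack([M, N])                        # (n+m) x n
    B = np.vstack([np.zeros((m, n)), -np.eye(n)])
    C = np.vstack([np.zeros((m, m)), M.T])       # (n+m) x m
    d = np.concatenate([M @ f, g])

    x_hat = rng.uniform(0.0, 1.0, n)             # x_hat = rand(n, 1)
    y_hat = N @ x_hat - g                        # positive, since g < 0
    w = x_hat * y_hat                            # w = x_hat o y_hat

    # Affine mapping G(x, y, u) = A x + B y + C u - d and its constant Jacobian blocks.
    G = lambda x, y, u: A @ x + B @ y + C @ u - d
    G_jac = lambda x, y, u: (A, B, C)
    return G, G_jac, w, n, m
```

Together with the earlier sketches, an instance can then be assembled as, for example, `F_fun = lambda z: F_and_jac(z[0], z[1:n+1], z[n+1:2*n+1], z[2*n+1:], w, G, G_jac)` and passed to `two_step_newton_wcp`.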
To verify the local convergence rates of TSN_Liu, ANM_Tang, and SNQ_L, we first test them on a specific instance with $n = 3000$. Table 1 shows the variation in $\|F(z^k)\|$ as the number of iterations increases. From Table 1, it is evident that our algorithm SNQ_L exhibits local fourth-order convergence and, compared with ANM_Tang and TSN_Liu, converges to the solution more rapidly.
Next, we randomly generate 10 instances for each problem size. The test results are displayed in Table 2, showing that SNQ_L consistently requires fewer iterations and typically less CPU time to reach the stopping tolerance than ANM_Tang and TSN_Liu. Furthermore, we can observe that SNQ_L requires even fewer iterations and saves more CPU time compared with ANM_Tang and TSN_Liu as the problem size increases. This is due to the local fourth-order convergence rate exhibited by SNQ_L.

6.2. Numerical Tests for a WCP

Consider the following WCP,
$$x \in \mathbb{R}^n_+,\qquad y \in \mathbb{R}^n_+,\qquad x \circ y = w,\qquad G(x, y, u) = 0,$$
with
$$G(x, y, u) = \begin{pmatrix} Rx - P^{T}u - y + d \\ P(x - f) \end{pmatrix},$$
where $R = D^{T}D/\|D^{T}D\|$, the entries of $D \in \mathbb{R}^{n\times n}$ are uniformly distributed over the interval $(0, 1)$, and $P \in \mathbb{R}^{m\times n}$ is obtained from a uniform distribution on the interval $(0, 1)$. The vectors $d, f, w \in \mathbb{R}^n$ are all uniformly distributed over the interval $(0, 1)$.
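Similarly, the following sketch generates the random WCP instance of this subsection, with the signs in $G$ as reconstructed above; the Jacobian blocks follow directly from the definition of $G$, and the Frobenius norm is again assumed for the normalization.

```python
import numpy as np

def make_wcp_instance(m, seed=0):
    """Random WCP test instance as in Section 6.2 (signs in G as reconstructed above)."""
    rng = np.random.default_rng(seed)
    n = 2 * m
    D = rng.uniform(0.0, 1.0, (n, n))
    R = D.T @ D / np.linalg.norm(D.T @ D)        # positive semidefinite, normalized
    P = rng.uniform(0.0, 1.0, (m, n))
    d = rng.uniform(0.0, 1.0, n)
    f = rng.uniform(0.0, 1.0, n)
    w = rng.uniform(0.0, 1.0, n)

    def G(x, y, u):
        # First block: R x - P^T u - y + d; second block: P (x - f).
        return np.concatenate([R @ x - P.T @ u - y + d, P @ (x - f)])

    def G_jac(x, y, u):
        Gx = np.vstack([R, P])                                   # derivative w.r.t. x
        Gy = np.vstack([-np.eye(n), np.zeros((m, n))])           # derivative w.r.t. y
        Gu = np.vstack([-P.T, np.zeros((m, m))])                 # derivative w.r.t. u
        return Gx, Gy, Gu

    return G, G_jac, w, n, m
```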
We perform 10 random experiments for each dimension. The average test results are displayed in Table 3. These numerical results also demonstrate that SNQ_L is more stable and effective than ANM_Tang and TSN_Liu. Furthermore, as the dimension of the problem increases, SNQ_L requires less time and fewer iterations than the other two algorithms.

7. Conclusions

We suggested a two-step Newton algorithm for the WCP by combining a two-step Newton approach for nonlinear systems of equations with the classical smoothing Newton approach for WCPs. The algorithm reformulates the WCP as an equivalent nonlinear system of equations and computes an additional Newton direction at each iteration to obtain the next iterate. When the objective function value satisfies a specific descent requirement, the algorithm takes the point produced by the two Newton directions directly as the next iterate; otherwise, the step size is determined by a derivative-free line search. Under certain assumptions, the global convergence and local biquadratic convergence properties are verified. In numerical tests, we also compared the algorithm with a two-step Newton algorithm and an accelerated Newton algorithm; the results demonstrate that the computational efficiency of the new algorithm is effectively improved without increasing the time cost and that the algorithm exhibits local biquadratic convergence once the iterative sequence is close to the solution of the WCP.

Author Contributions

X.L.: conceptualization, writing—original draft, methodology, validation, supervision; Y.L.: formal analysis, investigation, writing—review and editing; J.Z.: software, writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

On reasonable request, the corresponding authors will provide the data sets utilized in this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Potra, F. Weighted complementarity problems-a new paradigm for computing equilibria. SIAM J. Optim. 2012, 22, 1634–1654.
2. Anstreicher, K.M. Interior-point algorithms for a generalization of linear programming and weighted centering. Optim. Method Softw. 2012, 27, 605–612.
3. Ye, Y. A path to the Arrow-Debreu competitive market equilibrium. Math. Program. 2008, 111, 315–348.
4. Facchinei, F.; Pang, J. Finite-Dimensional Variational Inequalities and Complementarity Problems; Springer: New York, NY, USA, 2003.
5. Che, H.; Wang, Y.; Li, M. A smoothing inexact Newton method for P0 nonlinear complementarity problem. Front. Math. 2012, 7, 1043–1058.
6. Huang, B.; Ma, C. Accelerated modulus-based matrix splitting iteration method for a class of nonlinear complementarity problems. Comput. Appl. Math. 2018, 37, 3053–3076.
7. Gowda, M.S. Weighted LCPs and interior point systems for copositive linear transformations on Euclidean Jordan algebras. J. Glob. Optim. 2019, 74, 285–295.
8. Chi, X.; Gowda, M.S.; Tao, J. The weighted horizontal linear complementarity problem on a Euclidean Jordan algebra. J. Glob. Optim. 2019, 73, 153–169.
9. Asadi, S.; Darvay, Z.; Lesaja, G.; Mahdavi-Amiri, N.; Potra, F. A full-Newton step interior-point method for monotone weighted linear complementarity problems. J. Optim. Theory Appl. 2020, 186, 864–878.
10. Chi, X.; Wang, G. A full-Newton step infeasible interior-point method for the special weighted linear complementarity problem. J. Optim. Theory Appl. 2021, 190, 108–129.
11. Chi, X.; Wan, Z.; Hao, Z. A full-modified-Newton step O(n) infeasible interior-point method for the special weighted linear complementarity problem. J. Ind. Manag. Optim. 2021, 18, 2579–2598.
12. Narushima, Y.; Sagara, N.; Ogasawara, H. A smoothing Newton method with Fischer-Burmeister function for second-order cone complementarity problems. J. Optim. Theory Appl. 2011, 149, 79–101.
13. Liu, X.; Liu, S. A new nonmonotone smoothing Newton method for the symmetric cone complementarity problem with the Cartesian P0-property. Math. Method Oper. Res. 2020, 92, 229–247.
14. Zhou, S.; Pan, L.; Xiu, N.; Qi, H. Quadratic convergence of smoothing Newton’s method for 0/1 Loss optimization. SIAM J. Optim. 2021, 31, 3184–3211.
15. Chen, P.; Lin, G.; Zhu, X.; Bai, F. Smoothing Newton method for nonsmooth second-order cone complementarity problems with application to electric power markets. J. Glob. Optim. 2021, 80, 635–659.
16. Khouja, R.; Mourrain, B.; Yakoubsohn, J. Newton-type methods for simultaneous matrix diagonalization. Calcolo 2022, 59, 38.
17. Zhang, J. A smoothing Newton algorithm for weighted linear complementarity problem. Optim. Lett. 2016, 10, 499–509.
18. Tang, J.; Zhou, J. A modified damped Gauss–Newton method for non-monotone weighted linear complementarity problems. Optim. Method. Softw. 2022, 37, 1145–1164.
19. Potra, F.A.; Ptak, V. Nondiscrete induction and iterative processes. SIAM Rev. 1987, 29, 505–506.
20. Kou, J.; Li, Y.; Wang, X. A modification of Newton method with third-order convergence. Appl. Math. Comput. 2006, 181, 1106–1111.
21. Argyros, I.K.; Hilout, S. On the local convergence of fast two-step Newton-like methods for solving nonlinear equations. J. Comput. Appl. Math. 2013, 245, 1–9.
22. Dehghan Niri, T.; Shahzadeh Fazeli, S.A.; Heydari, M. A two-step improved Newton method to solve convex unconstrained optimization problems. J. Appl. Math. Comput. 2020, 62, 37–53.
23. Tang, J.; Zhou, J.; Zhang, H. An accelerated smoothing Newton method with cubic convergence for weighted complementarity problems. J. Optim. Theory Appl. 2023, 196, 641–665.
24. Liu, X.; Zhang, J. Strong convergence of a two-step modified Newton method for weighted complementarity problems. Axioms 2023, 12, 742.
25. Argyros, I.K.; George, S. On the influence of center-Lipschitz conditions in the convergence analysis of multi-point iterative methods. Rev. Colomb. Mat. 2008, 42, 15–24.
26. Magrenan Ruiz, A.A.; Argyros, I.K. Two-step Newton methods. J. Complex. 2014, 30, 533–553.
27. Chen, J.S.; Ko, C.H.; Liu, Y.D.; Wang, S.P. New smoothing functions for solving a system of equalities and inequalities. Pac. J. Optim. 2016, 12, 185–206.
28. Fan, X.; Yan, Q. Solving system of inequalities via a smoothing homotopy method. Numer. Algor. 2019, 82, 719–728.
29. Dong, L.; Tang, J.Y.; Song, X.Y. A non-monotone inexact non-interior continuation method based on a parametric smoothing function for LWCP. Int. J. Comput. Math. 2018, 95, 739–751.
30. Zhou, Z.; Peng, Y. The locally Chen–Harker–Kanzow–Smale smoothing functions for mixed complementarity problems. J. Glob. Optim. 2019, 74, 169–193.
31. Dennis, J.; More, J. A characterization of superlinear convergence and its applications to quasi-Newton methods. Math. Comput. 1974, 28, 549–560.
32. Qi, L.; Sun, J. A non-smooth version of Newton’s method. Math. Program. 1993, 58, 353–367.
Table 1. Variation in $\|F(z^k)\|$ as the number of iterations $k$ increases (WLCP, $n = 3000$).

k    SNQ_L         ANM_Tang      TSN_Liu
1    3.5540e-01    1.5520e+01    6.6272e+00
2    3.7347e-02    1.6755e+00    2.6368e-01
3    4.2037e-08    5.1659e-03    3.6580e-04
4    --            3.2440e-07    6.6823e-12
Table 2. Numerical comparison results of the three algorithms for solving the WLCP.

         SNQ_L                             ANM_Tang                          TSN_Liu
n        Avek   AveCPU     Aveero          Avek   AveCPU     Aveero          Avek   AveCPU     Aveero
1000     3.0    1.4839     2.2325e-07      3.5    1.5091     9.4296e-08      4.0    1.8323     6.1994e-11
2000     3.0    9.8266     4.3843e-07      3.6    9.4651     3.9083e-07      4.0    15.3614    1.1060e-10
3000     3.0    36.4712    2.9185e-07      4.0    37.7936    3.2919e-07      4.0    51.4373    1.6420e-10
4000     3.1    89.5122    5.9255e-07      4.0    101.6152   6.0811e-07      4.0    110.9661   4.0618e-10
5000     3.2    165.3057   4.9597e-07      4.0    160.4525   7.7175e-07      4.0    216.3353   1.4777e-10
6000     3.3    289.1295   4.1117e-07      4.0    296.4240   3.8819e-07      4.0    368.0030   7.5609e-09
7000     3.4    478.9888   7.6231e-11      4.0    758.7281   1.1165e-11      4.0    584.5569   5.9611e-11
8000     3.4    731.9228   4.7637e-07      4.0    1174.9030  1.3886e-11      4.0    871.0305   3.6116e-10
Table 3. Numerical comparison results of the three algorithms for solving the WCP.

         SNQ_L                             ANM_Tang                          TSN_Liu
n        Avek   AveCPU     Aveero          Avek   AveCPU     Aveero          Avek   AveCPU     Aveero
500      3.3    0.5099     1.5306e-07      4.0    0.4196     1.4293e-07      4.2    0.3881     8.6678e-08
1000     3.7    2.0400     1.2644e-07      4.0    2.1728     1.2036e-07      4.7    1.9977     3.1280e-08
1500     3.9    5.4378     9.5575e-08      4.0    5.3286     1.6225e-08      4.9    6.5876     8.3102e-08
2000     3.8    12.2531    1.0501e-07      4.3    17.5133    1.2893e-07      4.9    17.1392    2.4996e-07
2500     4.0    25.5304    1.1410e-11      4.3    26.1118    5.1913e-07      4.9    35.1112    3.1818e-08
3000     4.0    44.2121    6.2264e-09      5.0    42.3906    1.0581e-07      5.1    56.0453    5.7393e-08
3500     4.0    74.8664    1.4966e-07      5.0    70.4032    1.4966e-07      5.3    91.4523    2.8774e-09
4000     4.0    109.1118   5.8961e-11      6.0    149.8035   1.6964e-08      5.4    139.3034   4.1105e-08
4500     4.0    144.2821   1.3936e-10      6.0    164.8998   1.4673e-08      5.4    198.5842   2.9454e-07
5000     4.0    210.1606   6.9677e-09      7.0    259.7001   7.6293e-09      5.2    278.2668   4.1158e-08
