Article

Completely Smooth Lower-Order Penalty Approach for Solving Second-Order Cone Mixed Complementarity Problems

School of Mathematics and Information Science, North Minzu University, Yinchuan 750021, China
*
Author to whom correspondence should be addressed.
Mathematics 2025, 13(5), 690; https://doi.org/10.3390/math13050690
Submission received: 17 January 2025 / Revised: 12 February 2025 / Accepted: 18 February 2025 / Published: 20 February 2025
(This article belongs to the Section C: Mathematical Analysis)

Abstract

In this paper, a completely smooth lower-order penalty method for solving the second-order cone mixed complementarity problem (SOCMCP) is studied. Four distinct types of smoothing functions are taken into account. Under this method, SOCMCP is approximated by asymptotically completely smooth lower-order penalty equations (CSLOPEs), which involve a penalty parameter and a smoothing parameter. Under mild assumptions, the main results show that, as the penalty parameter tends to positive infinity and the smoothing parameter decreases monotonically to zero, the solution sequence of the asymptotic CSLOPEs converges exponentially to the solution of SOCMCP. An algorithm based on this approach is developed, and numerical experiments demonstrate its feasibility. The performance profiles of four specific smoothing functions are given. The final results show that the numerical performance of CSLOPEs is better than that of a smooth-like lower-order penalty method.

1. Introduction

This paper focuses on the second-order cone mixed complementarity problem (SOCMCP) [1], which is to find vectors $x \in \mathbb{R}^m$ and $y \in \mathbb{R}^n$ such that

$$y \in K, \quad F(x,y) \in K, \quad y^T F(x,y) = 0, \quad G(x,y) = 0, \tag{1}$$

where $F : \mathbb{R}^{m+n} \to \mathbb{R}^n$ and $G : \mathbb{R}^{m+n} \to \mathbb{R}^m$ are continuously differentiable vector-valued functions, and $K \subseteq \mathbb{R}^n$ is a Cartesian product of second-order cones (SOCs), also known as Lorentz cones [2]. That is,

$$K = K^{n_1} \times \cdots \times K^{n_r}, \tag{2}$$

where $r, n_1, \ldots, n_r \geq 1$, $n_1 + \cdots + n_r = n$, and $K^{n_i} \subseteq \mathbb{R}^{n_i}$ ($i = 1, \ldots, r$) is the $n_i$-dimensional SOC, i.e.,

$$K^{n_i} = \left\{ (x_1; x_2) \mid x_1 \in \mathbb{R},\ x_2 \in \mathbb{R}^{n_i - 1},\ x_1 \geq \|x_2\| \right\},$$

where $\|\cdot\|$ denotes the Euclidean norm and $(x_1; x_2)$ denotes $(x_1, x_2^T)^T$ (the semicolon denotes the concatenation of two column vectors). If $n_i = 1$, then $K^1$ denotes $\mathbb{R}_+$, the set of non-negative real numbers. It is evident that $K$ is a closed, convex, self-dual cone in $\mathbb{R}^n$.
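To make the cone structure concrete, here is a minimal Python sketch (the helper names are ours, not from the paper) of the membership test $x_1 \geq \|x_2\|$ for a single SOC block and, block by block, for the Cartesian product (2):

```python
import numpy as np

def in_soc(x, tol=0.0):
    """Check x = (x1; x2) in K^n, i.e. x1 >= ||x2||."""
    return x[0] >= np.linalg.norm(x[1:]) - tol

def in_product_cone(x, dims):
    """Check membership in K = K^{n_1} x ... x K^{n_r} block by block."""
    i, ok = 0, True
    for n in dims:
        ok = ok and in_soc(x[i:i+n])
        i += n
    return ok

# (5, 3, 4) lies on the boundary of K^3 since 5 = ||(3, 4)||
print(in_soc(np.array([5.0, 3.0, 4.0])))                          # True
print(in_product_cone(np.array([1.0, 0.5, 2.0, 1.0]), [2, 2]))    # True
```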
SOCMCP (1) has a strong connection with convex second-order cone programming (SOCP) and encompasses a diverse array of problems. If $x$ and $G(x,y) = 0$ are removed from (1), then SOCMCP reduces to the second-order cone nonlinear complementarity problem (SOCNCP), which is to find $y \in \mathbb{R}^n$ such that

$$y \in K, \quad F(y) \in K, \quad y^T F(y) = 0. \tag{3}$$

Specifically, when the mapping $F$ is an affine function, SOCNCP (3) reduces to the second-order cone linear complementarity problem (SOCLCP). Meanwhile, if $n_1 = \cdots = n_r = 1$ in (2), then $K$ reduces to the non-negative orthant $\mathbb{R}^n_+$, and SOCMCP, SOCNCP, and SOCLCP reduce to the standard mixed nonlinear complementarity problem (MNCP), nonlinear complementarity problem (NCP), and linear complementarity problem (LCP), respectively.
Over the past two decades, SOCP and the generalized second-order cone complementarity problem (SOCCP) have found extensive applications in engineering design, finance, mechanics, economics, management science, control, and other fields (see [3,4,5,6]). Convex SOCP encompasses quadratically constrained convex quadratic programs, convex quadratic programs, linear programs, and related problems. SOCP and SOCCP are collectively called second-order cone optimization. Various algorithms have been introduced for second-order cone optimization problems, including the interior-point method [3,4,7], the smoothing Newton method [2,8,9], the semismooth Newton method [10,11], the smoothing regularization method [1], the matrix splitting method [12,13], and the merit function method [14,15,16]. While the effectiveness of some of these methods has improved substantially in recent years, many complementarity problems still call for effective and accurate numerical techniques.
It is well known that penalty function algorithms play a significant role in solving constrained optimization problems [17,18,19,20]. The $l_1$ penalty function and the generalized lower-order penalty functions possess numerous advantageous properties, which have garnered significant attention [17,18,19,20]. In [21], Wang and Yang introduced a power penalty function algorithm for solving LCP, which transforms LCP into asymptotically nonlinear equations. They demonstrated that, under certain mild assumptions, the sequence of solutions of the asymptotically nonlinear equations converges exponentially to the solution of LCP as the penalty parameter approaches positive infinity. In [22,23], Huang and Wang employed the power penalty function algorithm to solve NCP and MNCP; by transforming these problems into asymptotically nonlinear equations, they demonstrated that, under certain assumptions, the sequences of solutions of these nonlinear equations converge exponentially to the respective solutions. In [24,25], the power penalty algorithm is applied to SOCLCP and SOCNCP, building upon the foundations established in [21,22]. In [26], Chen and Mangasarian introduced a smoothing method, elaborated on in Section 3, which generates the plus and minus functions via convolution. Building upon [26], the methodologies in [24,25] were further refined and extended in [27,28], where a smoothing method that generates plus and minus functions through convolution was applied to the power penalty algorithm for SOCLCP and SOCNCP. However, the power penalty algorithm is merely a special instance of the broader class of generalized lower-order penalty algorithms. For solving SOCP, SOCMCP (1) offers a more versatile and appropriate framework for tackling the Karush-Kuhn-Tucker (KKT) conditions of general SOCP. Furthermore, SOCMCP fundamentally belongs to the category of asymmetric cone complementarity problems, which distinguishes it from both SOCLCP and SOCNCP.
In [29], Hao et al. indicated that SOCMCP (1) can be seen as a 'customized' complementarity problem tailored to SOCP. They employed the power penalty algorithm to transform SOCMCP into the lower-order penalty equations (LOPEs)

$$\begin{pmatrix} G(x,y) \\ F(x,y) \end{pmatrix} - \alpha \begin{pmatrix} 0 \\ \left(P_K(-y)\right)^{\sigma} \end{pmatrix} = 0, \tag{4}$$

where $\alpha \geq 1$ is a penalty parameter, $\sigma \in (0,1]$ is a power parameter, and $P_K(-y)$ is the projection of $-y$ onto $K$ (which will be introduced in (12)). Under certain assumptions, both the LOPEs (4) and SOCMCP (1) possess unique solutions. The convergence analysis reveals that, as $\alpha \to +\infty$, $(x_\alpha; y_\alpha)$ converges exponentially to the solution of SOCMCP (1), where $(x_\alpha; y_\alpha)$ is the solution of the power penalty Equation (4) associated with the penalty parameter $\alpha$.
In [30], Hao employed the smoothing method of [26], which generates the plus and minus functions through convolution, to construct a smooth approximation $\Phi^-(\mu, y)$ of the projection function $P_K(-y)$, and transformed SOCMCP (1) into the smooth-like lower-order penalty equations (SLOPEs)

$$\begin{pmatrix} G(x,y) \\ F(x,y) \end{pmatrix} - \alpha \begin{pmatrix} 0 \\ \left(\Phi^-(\mu, y)\right)^{\sigma} \end{pmatrix} = 0, \tag{5}$$

where $\alpha \geq 1$ is a penalty parameter, $\sigma \in (0,1]$ is a power parameter, and $\mu$ is a smoothing parameter. According to the convergence analysis, when the penalty parameter $\alpha \to +\infty$ and the smoothing parameter $\mu \to 0$, the solution sequence of the asymptotic SLOPEs (5) converges to the solution of SOCMCP (1) at an exponential rate, and under certain assumptions the solutions of SLOPEs (5) and SOCMCP (1) are unique. In [30], four smooth functions provided by [27] are used to solve SLOPEs (5). In designing the algorithm, a simple criterion is given for estimating the parameter $\mu$: when $y_{\mu,\alpha} \in K$, the parameter $\mu$ is sufficiently small; when $y_{\mu,\alpha} \notin K$, the parameter $\mu$ is not small enough and a smaller $\mu$ should be chosen.
In this paper, the method presented in [30] is further enhanced and extended. First, when the index $\sigma$ of SLOPEs (5) is not equal to 1, SLOPEs (5) fail to be smooth at individual points; by focusing on the case where the index $\sigma$ equals 1, SOCMCP (1) can be transformed into completely smooth lower-order penalty equations. Second, the algorithm in [30] relies on a simple criterion for estimating the parameter $\mu$; in this paper, we solve the completely smooth lower-order penalty equations without adding this criterion to the algorithm. Furthermore, we allow the parameters $\mu$ and $\alpha$ to vary concurrently within the steps of the algorithm. Finally, employing the four specific smoothing functions introduced in [31,32], we construct a completely smooth lower-order penalty algorithm, which is then solved by the smoothing Newton method. Numerical experiments are conducted on the examples presented in [29,33,34], and the results are analyzed; they show that the method is feasible.
The remainder of the article is structured as follows: In Section 2, we present some foundational knowledge essential for the subsequent discussion. In Section 3, we describe a smooth approximation method tailored to a specific class of lower-order penalty equations. In Section 4, we investigate the completely smooth lower-order penalty equations for solving SOCMCP (1), along with an analysis of their convergence properties. In Section 5, we construct the corresponding algorithm, implement it, and present the numerical experimental results. Finally, we conclude with a summary of our findings. In this paper, for any $p > 1$, the notation $\|\cdot\|_p$ denotes the $l_p$ norm; in particular, $\|\cdot\|$ denotes the Euclidean norm ($p = 2$).

2. Preliminary Results

This section presents the fundamental operations pertaining to a single SOC block $K^n$; these results extend seamlessly to the general case $K$ of (2). For any $x = (x_1; x_2) \in \mathbb{R} \times \mathbb{R}^{n-1}$ and $y = (y_1; y_2) \in \mathbb{R} \times \mathbb{R}^{n-1}$, their Jordan product [4,33,34] is defined as

$$x \circ y = \left(x^T y;\ y_1 x_2 + x_1 y_2\right). \tag{6}$$

The Jordan product is commutative: $x \circ y = y \circ x$. Additionally, the vector $e = (1, 0, \ldots, 0)^T \in \mathbb{R}^n$ serves as the identity element, satisfying $x \circ e = x$ for any $x$. Here, $x^2$ denotes $x \circ x$, and $x + y$ represents the standard vector addition. Some basic properties [2,4,32,33,34,35] of the Jordan product are listed below.
Proposition 1.
For any $x = (x_1; x_2) \in \mathbb{R} \times \mathbb{R}^{n-1}$ and $y = (y_1; y_2) \in \mathbb{R} \times \mathbb{R}^{n-1}$ with the Jordan product (6):
(a) The Jordan product is not associative for $n > 2$ in general, but it is power associative, i.e., $x \circ (x \circ x) = (x \circ x) \circ x$.
(b) For any positive integer $k$, the power of an element is defined recursively by $x^k = x \circ x^{k-1}$, and for $x \neq 0$ we define $x^0 = e$.
(c) For any positive integers $m$ and $n$, $x^{m+n} = x^m \circ x^n$ holds.
(d) The inverse of $x$ is denoted by $x^{-1}$ and satisfies $x \circ x^{-1} = e$.
(e) If $x \in \mathrm{int}\, K^n$, then for any positive integer $k$, the inverse of $x^k$ is denoted by $x^{-k} = (x^k)^{-1}$.
(f) When $n > 2$, $K^n$ is not closed under the Jordan product in general; i.e., there exist $x, y \in K^n$ with $x \circ y \notin K^n$.
(g) The trace is $\mathrm{tr}(x) = 2x_1$. The determinant is $\det(x) = x_1^2 - \|x_2\|^2$.
(h) When $x \in K^n$, the square root of $x$ exists and is unique; it is denoted by $x^{1/2}$ and satisfies $x^{1/2} \in K^n$, $(x^{1/2})^2 = x$.
(i) The absolute value vector of $x$ is denoted by $|x|$ and satisfies $|x| = (x^2)^{1/2}$. Clearly $x^2 = |x|^2$.
By the properties of the SOC, for any $x, y \in K^n$, $\langle x, y \rangle = 0$ if and only if $x \circ y = 0$. Therefore, the complementarity conditions can be equivalently expressed through Jordan products.
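The complementarity characterization is easy to verify numerically. The following sketch (illustrative only; the helper name is ours) forms the Jordan product (6) for two boundary points of $K^3$ satisfying $x^T y = 0$:

```python
import numpy as np

def jordan(x, y):
    # Jordan product (6): x ∘ y = (x^T y; y1*x2 + x1*y2)
    return np.concatenate(([x @ y], y[0] * x[1:] + x[0] * y[1:]))

x = np.array([5.0, 3.0, 4.0])        # boundary point of K^3
y = np.array([4.75, -2.85, -3.80])   # boundary point with x^T y = 0
print(jordan(x, y))                  # ≈ (0, 0, 0): complementarity as x ∘ y = 0
e = np.array([1.0, 0.0, 0.0])
print(np.allclose(jordan(x, e), x))  # e is the identity element
```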
Subsequently, we present the spectral factorization of vectors in $\mathbb{R}^n$ with respect to $K^n$ [2,32,34,35]. Any vector $x = (x_1; x_2) \in \mathbb{R} \times \mathbb{R}^{n-1}$ can be decomposed as

$$x = \lambda_1(x) u_x^{(1)} + \lambda_2(x) u_x^{(2)}, \tag{7}$$

where $\lambda_1(x), \lambda_2(x)$ are the spectral values of $x$ and $u_x^{(1)}, u_x^{(2)}$ are the spectral vectors of $x$, given respectively by

$$\lambda_i(x) = x_1 + (-1)^i \|x_2\|, \qquad u_x^{(i)} = \begin{cases} \dfrac{1}{2}\left(1;\ (-1)^i \dfrac{x_2}{\|x_2\|}\right), & \text{if } \|x_2\| \neq 0, \\[2mm] \dfrac{1}{2}\left(1;\ (-1)^i \omega\right), & \text{if } \|x_2\| = 0, \end{cases} \qquad i = 1, 2, \tag{8}$$

where $\omega$ is any unit vector in $\mathbb{R}^{n-1}$. Obviously, $\lambda_1(x) \leq \lambda_2(x)$, $e = u_x^{(1)} + u_x^{(2)}$, and the spectral decomposition (7)-(8) is unique when $x_2 \neq 0$. Some basic properties of the spectral decomposition [2,32,35] are listed below.
Proposition 2.
For any $x = (x_1; x_2) \in \mathbb{R} \times \mathbb{R}^{n-1}$ with spectral decomposition (7)-(8):
(a) $u_x^{(1)} \circ u_x^{(2)} = 0$, $\|u_x^{(1)}\| = \|u_x^{(2)}\| = 1/\sqrt{2}$.
(b) $u_x^{(i)} \circ u_x^{(i)} = u_x^{(i)}$, $i = 1, 2$.
(c) $\lambda_1(x), \lambda_2(x)$ are non-negative (positive) if and only if $x \in K^n$ ($x \in \mathrm{int}\, K^n$).
For brevity, we often write the spectral decomposition of $x$ simply as $x = \lambda_1 u^{(1)} + \lambda_2 u^{(2)}$. The spectral decomposition (7)-(8) and Proposition 2 offer a highly valuable tool for analyzing power functions under the Jordan product. For example, for any $x \in \mathbb{R}^n$ we have $x^2 = (\lambda_1 u^{(1)} + \lambda_2 u^{(2)}) \circ (\lambda_1 u^{(1)} + \lambda_2 u^{(2)}) = \lambda_1^2 u^{(1)} + \lambda_2^2 u^{(2)}$, so $x^2 \in K^n$. Conversely, for any $x \in K^n$, the spectral decomposition (7)-(8) gives $0 \leq \lambda_1 \leq \lambda_2$. Let $\omega = \sqrt{\lambda_1} u^{(1)} + \sqrt{\lambda_2} u^{(2)}$; then $\omega^2 = x$. By the uniqueness of the square root, we have $x^{1/2} = \sqrt{\lambda_1} u^{(1)} + \sqrt{\lambda_2} u^{(2)}$. These results indicate that squaring or taking the square root of a vector amounts to applying the corresponding scalar operation to the spectral values, while the spectral vectors remain unchanged.
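The following sketch (assuming NumPy; the function names are ours) computes the spectral decomposition (7)-(8) and the SOC square root, and checks that squaring the spectral values recovers the original vector:

```python
import numpy as np

def spectral(x):
    # spectral values and vectors of x = (x1; x2) w.r.t. K^n, per (7)-(8)
    x1, x2 = x[0], x[1:]
    w = np.linalg.norm(x2)
    u = x2 / w if w > 0 else np.eye(len(x2))[0]   # any unit vector if x2 = 0
    lam = np.array([x1 - w, x1 + w])
    U = np.stack([0.5 * np.concatenate(([1.0], -u)),
                  0.5 * np.concatenate(([1.0],  u))])
    return lam, U

def soc_sqrt(x):
    # square root in K^n: take sqrt of the spectral values only
    lam, U = spectral(x)
    return np.sqrt(lam) @ U

x = np.array([5.0, 3.0, 0.0])
s = soc_sqrt(x)
lam, U = spectral(s)
print(np.allclose(lam[0]**2 * U[0] + lam[1]**2 * U[1], x))  # (x^{1/2})^2 = x
```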
By utilizing the spectral decomposition (7)-(8), a scalar function $\hat{f} : \mathbb{R} \to \mathbb{R}$ can be extended to a vector-valued function on the SOC space associated with $K^n$ ($n \geq 1$) [2,32,36], given by

$$f(x) = \hat{f}(\lambda_1(x)) u_x^{(1)} + \hat{f}(\lambda_2(x)) u_x^{(2)}, \quad x = (x_1; x_2) \in \mathbb{R} \times \mathbb{R}^{n-1}, \tag{9}$$

where $\lambda_1(x), \lambda_2(x)$ and $u_x^{(1)}, u_x^{(2)}$ are the spectral values and spectral vectors of $x$, respectively, as given in (8).
For any $x \in \mathbb{R}^n$, the nearest point (in the Euclidean norm) of $x$ onto $K^n$ is called the projection of $x$, denoted by $P_{K^n}(x)$; that is, $P_{K^n}(x) \in K^n$ satisfies

$$\left\| x - P_{K^n}(x) \right\| = \min \left\{ \|x - y\| \mid y \in K^n \right\}.$$

Clearly, when $n = 1$, the projection function reduces to $[t]_+ = \max\{0, t\}$.
The subsequent lemma shows that $|x|$ and $P_{K^n}(x)$ have the form of (9) (see [2], Proposition 3.3).
Lemma 1.
For any $x = (x_1; x_2) \in \mathbb{R} \times \mathbb{R}^{n-1}$ with spectral factorization (7)-(8):
(a) $|x| = (x^2)^{1/2} = |\lambda_1| u^{(1)} + |\lambda_2| u^{(2)}$.
(b) The projection of $x$ onto $K^n$ can be expressed as

$$P_{K^n}(x) = (x + |x|)/2 = [\lambda_1]_+ u^{(1)} + [\lambda_2]_+ u^{(2)}, \tag{10}$$

where $[\alpha]_+ = \max\{0, \alpha\}$ for any scalar $\alpha \in \mathbb{R}$.
Analogous to the projection function on $K^n$, for any vector $x$ with spectral factorization (7)-(8), we define [8]

$$\bar{P}_{K^n}(x) = [\lambda_1]_- u^{(1)} + [\lambda_2]_- u^{(2)}, \tag{11}$$

where $[\lambda_i]_- = \max\{0, -\lambda_i\}$, $i = 1, 2$. Clearly, $\bar{P}_{K^n}(x)$ is the projection of $-x$ onto $K^n$, and $x = P_{K^n}(x) - \bar{P}_{K^n}(x)$, $P_{K^n}(x), \bar{P}_{K^n}(x) \in K^n$, $P_{K^n}(x) \circ \bar{P}_{K^n}(x) = 0$.
The analysis of the single SOC block above extends to the general case (2). Specifically, for any $x = (x_1; \ldots; x_r) \in \mathbb{R}^{n_1} \times \cdots \times \mathbb{R}^{n_r}$ and $y = (y_1; \ldots; y_r) \in \mathbb{R}^{n_1} \times \cdots \times \mathbb{R}^{n_r}$, their Jordan product is defined as

$$x \circ y = (x_1 \circ y_1; \ldots; x_r \circ y_r).$$

Let $P_K(x)$ and $\bar{P}_K(x)$ denote the projections of $x$ and $-x$ onto $K$, respectively; then

$$P_K(x) = \left(P_{K^{n_1}}(x_1); \ldots; P_{K^{n_r}}(x_r)\right), \qquad \bar{P}_K(x) = \left(\bar{P}_{K^{n_1}}(x_1); \ldots; \bar{P}_{K^{n_r}}(x_r)\right), \tag{12}$$

where $P_{K^{n_i}}(x_i)$ and $\bar{P}_{K^{n_i}}(x_i)$, $i = 1, \ldots, r$, denote the projections of $x_i$ and $-x_i$ onto the single SOC block $K^{n_i}$.
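A minimal sketch of the projections (10)-(11), again via the spectral decomposition; it checks the decomposition $x = P_{K^n}(x) - \bar{P}_{K^n}(x)$ numerically (names are ours):

```python
import numpy as np

def proj_soc(x):
    # P_{K^n}(x) = [λ1]_+ u1 + [λ2]_+ u2, from (10)
    x1, x2 = x[0], x[1:]
    w = np.linalg.norm(x2)
    u = x2 / w if w > 0 else np.eye(len(x2))[0]
    u1 = 0.5 * np.concatenate(([1.0], -u))
    u2 = 0.5 * np.concatenate(([1.0],  u))
    return max(x1 - w, 0.0) * u1 + max(x1 + w, 0.0) * u2

def proj_bar_soc(x):
    # \bar{P}_{K^n}(x) = P_{K^n}(-x), from (11)
    return proj_soc(-x)

x = np.array([0.5, 3.0, -4.0])
P, Pbar = proj_soc(x), proj_bar_soc(x)
print(np.allclose(x, P - Pbar))        # x = P_K(x) - P̄_K(x)
print(P[0] - np.linalg.norm(P[1:]))    # >= 0: P_K(x) lies in K^n
```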

3. C–M Smooth Function of the Projection Function

In this section, we introduce a method proposed by Chen and Mangasarian [26], which uses convolution to generate smooth approximations of the plus function $[t]_+ = \max\{0, t\}$ and the minus function $[t]_- = \max\{0, -t\}$. To begin, we define a piecewise continuous density (kernel) function $d(t)$ [26], which satisfies

$$d(t) \geq 0, \qquad \int_{-\infty}^{+\infty} d(t)\, dt = 1. \tag{13}$$

Next, we define $\hat{s}(\mu, t) = \frac{1}{\mu} d\left(\frac{t}{\mu}\right)$, where $\mu$ is a positive parameter. If $\int_{-\infty}^{+\infty} |t| d(t)\, dt < +\infty$, then a smoothing approximation of $[t]_+$ is formed, i.e.,

$$\phi^+(\mu, t) = \int_{-\infty}^{+\infty} [t - s]_+\, \hat{s}(\mu, s)\, ds = \int_{-\infty}^{t} (t - s)\, \hat{s}(\mu, s)\, ds \approx [t]_+. \tag{14}$$
The following proposition delineates the properties of $\phi^+(\mu, t)$. For a detailed proof, refer to Proposition 2.2 in [26].
Proposition 3.
Suppose that $d(t)$ is a density function satisfying (13) and $\hat{s}(\mu, t) = \frac{1}{\mu} d\left(\frac{t}{\mu}\right)$ for a positive parameter $\mu$. If $d(t)$ is piecewise continuous and satisfies $\int_{-\infty}^{+\infty} |t| d(t)\, dt < +\infty$, then $\phi^+(\mu, t)$ in (14) has the following properties:
(a) $\phi^+(\mu, t)$ is continuously differentiable.
(b) $-D_2 \mu \leq \phi^+(\mu, t) - [t]_+ \leq D_1 \mu$, where

$$D_1 = \int_{-\infty}^{0} |t| d(t)\, dt, \qquad D_2 = \max\left\{ \int_{-\infty}^{+\infty} t\, d(t)\, dt,\ 0 \right\}.$$

(c) $\frac{\partial}{\partial t}\phi^+(\mu, t)$ is bounded and satisfies $0 \leq \frac{\partial}{\partial t}\phi^+(\mu, t) \leq 1$.
Under the assumptions of this proposition, Proposition 3(b) gives

$$\lim_{\mu \to 0^+} \phi^+(\mu, t) = [t]_+. \tag{15}$$
Applying the same smoothing construction to $[t]_- = \max\{0, -t\}$, we obtain

$$\phi^-(\mu, t) = \int_{-\infty}^{+\infty} [t - s]_-\, \hat{s}(\mu, s)\, ds = \int_{t}^{+\infty} (s - t)\, \hat{s}(\mu, s)\, ds \approx [t]_-. \tag{16}$$
Analogous to Proposition 3, the following properties hold for $\phi^-(\mu, t)$.
Proposition 4.
Let $d(t)$ and $\hat{s}(\mu, t)$ be as stated in Proposition 3; then the function $\phi^-(\mu, t)$ in (16) has the following properties:
(a) $\phi^-(\mu, t)$ is continuously differentiable.
(b) $-D_2 \mu \leq \phi^-(\mu, t) - [t]_- \leq D_1 \mu$, where

$$D_1 = \int_{0}^{+\infty} |t| d(t)\, dt, \qquad D_2 = \max\left\{ -\int_{-\infty}^{+\infty} t\, d(t)\, dt,\ 0 \right\}.$$

(c) $\frac{\partial}{\partial t}\phi^-(\mu, t)$ is bounded and satisfies $-1 \leq \frac{\partial}{\partial t}\phi^-(\mu, t) \leq 0$.
As with Proposition 3, Proposition 4(b) yields

$$\lim_{\mu \to 0^+} \phi^-(\mu, t) = [t]_-. \tag{17}$$
From (15) and (17), it can be seen that the functions $\phi^+(\mu, t)$ and $\phi^-(\mu, t)$ defined by (14) and (16) are smoothing functions of $[t]_+$ and $[t]_-$, respectively.
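To illustrate the convolution construction, the following sketch numerically evaluates the integral in (16) for the kernel $d_4$ listed below and compares it with the corresponding closed form $\phi_4$ of (21); the use of scipy.integrate.quad is our choice for this sketch, not part of the paper's MATLAB implementation:

```python
import numpy as np
from scipy.integrate import quad

def phi_minus_numeric(mu, t, d):
    # φ⁻(μ,t) = ∫_t^∞ (s − t) (1/μ) d(s/μ) ds, from (16)
    val, _ = quad(lambda s: (s - t) * d(s / mu) / mu, t, np.inf)
    return val

d4 = lambda t: 0.5 / (t**2 + 1.0)**1.5                   # kernel of φ4, given below
phi4 = lambda mu, t: 0.5 * (np.sqrt(mu**2 + t**2) - t)   # closed form (21)

for t in [-2.0, -0.3, 0.0, 0.7, 3.0]:
    print(phi_minus_numeric(0.5, t, d4), phi4(0.5, t))   # the two values agree
```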
According to [31,32], four specific smoothing functions of $[t]_-$ are

$$\phi_1(\mu, t) = \begin{cases} 0, & t > \mu, \\ \dfrac{1}{16\mu^3}\left(-t^4 + 6t^2\mu^2 - 8\mu^3 t + 3\mu^4\right), & -\mu \leq t \leq \mu, \\ -t, & t < -\mu, \end{cases} \tag{18}$$

$$\phi_2(\mu, t) = \frac{1}{2}\left(t \cdot \mathrm{erf}\!\left(\frac{t}{\sqrt{2}\,\mu}\right) + \sqrt{\frac{2}{\pi}}\, \mu\, e^{-\frac{t^2}{2\mu^2}}\right) - \frac{t}{2}, \qquad \mathrm{erf}(t) = \frac{2}{\sqrt{\pi}} \int_0^t e^{-u^2}\, du, \tag{19}$$

$$\phi_3(\mu, t) = \begin{cases} 0, & t \geq \mu, \\ \dfrac{\mu}{2}\left[\ln\!\left(1 + \left(\dfrac{t}{\mu}\right)^2\right) + 1 - \ln 2\right] - \dfrac{t}{2}, & -\mu < t < \mu, \\ -t, & t \leq -\mu, \end{cases} \tag{20}$$

$$\phi_4(\mu, t) = \frac{1}{2}\left(\mu^2 + t^2\right)^{\frac{1}{2}} - \frac{t}{2}, \tag{21}$$

where the corresponding kernel functions are

$$d_1(t) = \begin{cases} \dfrac{3}{4}(1 - t^2), & -1 \leq t \leq 1, \\ 0, & \text{otherwise}, \end{cases} \qquad d_2(t) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{t^2}{2}},$$

$$d_3(t) = \begin{cases} \dfrac{1 - t^2}{(1 + t^2)^2}, & -1 < t < 1, \\ 0, & \text{otherwise}, \end{cases} \qquad d_4(t) = \frac{1}{2}\left(t^2 + 1\right)^{-\frac{3}{2}}.$$

In [30], Hao et al. propose four smoothing functions, of which the one closest to $[t]_-$ is

$$\phi_5(\mu, t) = \begin{cases} 0, & t \geq \dfrac{\mu}{2}, \\ \dfrac{1}{2\mu}\left(t - \dfrac{\mu}{2}\right)^2, & -\dfrac{\mu}{2} < t < \dfrac{\mu}{2}, \\ -t, & t \leq -\dfrac{\mu}{2}, \end{cases} \tag{22}$$

where the corresponding kernel function is

$$d_5(t) = \begin{cases} 1, & -\dfrac{1}{2} \leq t \leq \dfrac{1}{2}, \\ 0, & \text{otherwise}. \end{cases}$$

The smoothing functions (18)-(22) all satisfy Proposition 4. The graphs of $[t]_-$ and $\phi_i(\mu, t)$, $i = 1, 2, 3, 4, 5$, at $\mu = 0.5$ are shown in Figure 1.
From Figure 1, it can be seen that, for a fixed parameter $\mu > 0$, $\phi_5(\mu, t)$ is the closest function to $[t]_-$ among all $\phi_i(\mu, t)$, $i = 1, 2, 3, 4, 5$, while $\phi_3(\mu, t)$ is the closest among the four functions (18)-(21). In fact, for a fixed parameter $\mu > 0$ and all $t \in \mathbb{R}$, we have

$$\phi_4(\mu, t) \geq \phi_2(\mu, t) \geq \phi_1(\mu, t) \geq \phi_3(\mu, t) \geq \phi_5(\mu, t) \geq [t]_-.$$
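For reference, a sketch implementing the closed forms (18)-(22) and checking the ordering above on a grid (vectorized NumPy versions; the names are ours):

```python
import numpy as np
from scipy.special import erf

def phi1(mu, t):
    mid = (-t**4 + 6*t**2*mu**2 - 8*mu**3*t + 3*mu**4) / (16*mu**3)
    return np.where(t > mu, 0.0, np.where(t < -mu, -t, mid))

def phi2(mu, t):
    return 0.5*(t*erf(t/(np.sqrt(2)*mu))
                + np.sqrt(2/np.pi)*mu*np.exp(-t**2/(2*mu**2))) - t/2

def phi3(mu, t):
    mid = 0.5*mu*(np.log1p((t/mu)**2) + 1 - np.log(2.0)) - t/2
    return np.where(t >= mu, 0.0, np.where(t <= -mu, -t, mid))

def phi4(mu, t):
    return 0.5*np.sqrt(mu**2 + t**2) - t/2

def phi5(mu, t):
    mid = (t - mu/2)**2 / (2*mu)
    return np.where(t >= mu/2, 0.0, np.where(t <= -mu/2, -t, mid))

t = np.linspace(-3, 3, 2001)
minus = np.maximum(0.0, -t)
# chain: phi4 >= phi2 >= phi1 >= phi3 >= phi5 >= [t]_-
chain = [phi4(0.5, t), phi2(0.5, t), phi1(0.5, t),
         phi3(0.5, t), phi5(0.5, t), minus]
print(all(np.all(a >= b - 1e-12) for a, b in zip(chain, chain[1:])))  # True
```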
In this paper, based on the SOC vector-valued function (9), the smoothing functions (18)-(22) are applied to the projection term $(P_K(-y))^{\sigma}$ in (4) to form the completely smooth lower-order penalty equations introduced in (27), and an algorithm is then constructed to solve SOCMCP (1).
For any $x = (x_1; \ldots; x_r) \in \mathbb{R}^{n_1} \times \cdots \times \mathbb{R}^{n_r}$, we construct smoothing functions for the projections $P_K(x), \bar{P}_K(x)$ associated with $K = K^{n_1} \times \cdots \times K^{n_r}$. First, utilizing the smoothing functions defined by (14) and (16), we construct smoothing functions for the projections $P_{K^{n_i}}(x_i), \bar{P}_{K^{n_i}}(x_i)$ related to a single SOC block $K^{n_i}$ ($i = 1, \ldots, r$), where $P_{K^{n_i}}(x_i)$ and $\bar{P}_{K^{n_i}}(x_i)$ are given by (10) and (11), respectively. Let $\lambda_1(x_i), \lambda_2(x_i)$ and $u_{x_i}^{(1)}, u_{x_i}^{(2)}$ be the spectral values and spectral vectors of $x_i \in \mathbb{R}^{n_i}$ ($i = 1, \ldots, r$) with respect to $K^{n_i}$. We define the vector-valued functions

$$\Phi_i^+(\mu, x_i) = \phi^+(\mu, \lambda_1(x_i))\, u_{x_i}^{(1)} + \phi^+(\mu, \lambda_2(x_i))\, u_{x_i}^{(2)}, \tag{23}$$

$$\Phi_i^-(\mu, x_i) = \phi^-(\mu, \lambda_1(x_i))\, u_{x_i}^{(1)} + \phi^-(\mu, \lambda_2(x_i))\, u_{x_i}^{(2)}, \tag{24}$$

where $\mu \in \mathbb{R}_{++}$ is a smoothing parameter. According to [37], $\Phi_i^+(\mu, x_i)$ and $\Phi_i^-(\mu, x_i)$ are smooth on $\mathbb{R}_{++} \times \mathbb{R}^{n_i}$, and

$$\lim_{\mu \to 0^+} \Phi_i^+(\mu, x_i) = [\lambda_1(x_i)]_+ u_{x_i}^{(1)} + [\lambda_2(x_i)]_+ u_{x_i}^{(2)} = P_{K^{n_i}}(x_i),$$

$$\lim_{\mu \to 0^+} \Phi_i^-(\mu, x_i) = [\lambda_1(x_i)]_- u_{x_i}^{(1)} + [\lambda_2(x_i)]_- u_{x_i}^{(2)} = \bar{P}_{K^{n_i}}(x_i).$$

Finally, we construct smoothing functions for the projections $P_K(x)$ and $\bar{P}_K(x)$ related to the general cone (2). Define the vector-valued functions $\Phi^+, \Phi^- : \mathbb{R}_{++} \times \mathbb{R}^n \to \mathbb{R}^n$ as

$$\Phi^+(\mu, x) = \left(\Phi_1^+(\mu, x_1); \ldots; \Phi_r^+(\mu, x_r)\right), \tag{25}$$

$$\Phi^-(\mu, x) = \left(\Phi_1^-(\mu, x_1); \ldots; \Phi_r^-(\mu, x_r)\right), \tag{26}$$

where $\Phi_i^+(\mu, x_i), \Phi_i^-(\mu, x_i)$ ($i = 1, \ldots, r$) are defined by (23) and (24), respectively. To facilitate the subsequent convergence analysis, we note that, by [27], Lemma 3.2,

$$\Phi^+(\mu, x) \in K, \qquad \Phi^-(\mu, x) \in K.$$
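A minimal sketch of $\Phi^-(\mu, x)$ from (24) and (26), built block by block from the spectral decomposition and using $\phi_3$ of (20); as $\mu \to 0^+$ the output approaches $\bar{P}_K(x)$ (names are ours):

```python
import numpy as np

def phi3(mu, t):
    # smoothing function (20) for [t]_-
    if t >= mu:  return 0.0
    if t <= -mu: return -t
    return 0.5*mu*(np.log1p((t/mu)**2) + 1 - np.log(2.0)) - t/2

def Phi_minus_block(mu, x, phi=phi3):
    # Φ_i^-(μ, x_i) of (24) on a single SOC block, via (7)-(8)
    x1, x2 = x[0], x[1:]
    w = np.linalg.norm(x2)
    u = x2 / w if w > 0 else np.eye(len(x2))[0]
    u1 = 0.5 * np.concatenate(([1.0], -u))
    u2 = 0.5 * np.concatenate(([1.0],  u))
    return phi(mu, x1 - w) * u1 + phi(mu, x1 + w) * u2

def Phi_minus(mu, x, dims, phi=phi3):
    # Φ^-(μ, x) of (26) on K = K^{n_1} x ... x K^{n_r}
    out, i = [], 0
    for n in dims:
        out.append(Phi_minus_block(mu, x[i:i+n], phi))
        i += n
    return np.concatenate(out)

x = np.array([0.5, 3.0, -4.0, -1.0, 0.2])
for mu in (1e-1, 1e-3, 1e-6):
    print(Phi_minus(mu, x, [3, 2]))   # tends to P̄_K(x) as μ → 0⁺
```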
Based on the above smoothing idea, a completely smooth lower-order penalty equation can be established to solve SOCMCP (1). This will be described in the next section.

4. Completely Smooth Lower-Order Penalty Approach and Convergence Analysis

This section introduces a completely smooth lower-order penalty method for solving SOCMCP (1), together with a comprehensive convergence analysis. In what follows, unless otherwise specified, $K$ is always given by (2). We consider the completely smooth lower-order penalty equations (CSLOPEs)

$$\begin{pmatrix} G(x,y) \\ F(x,y) \end{pmatrix} - \alpha \begin{pmatrix} 0 \\ \Phi^-(\mu, y) \end{pmatrix} = 0, \tag{27}$$

where $\alpha \geq 1$ is a penalty parameter, $\mu \in (0,1)$ is a smoothing parameter, and $\Phi^-(\mu, y)$ is defined by (26). If $y \in K$ is violated, then the penalty term $\alpha \Phi^-(\mu, y)$ in (27) penalizes the 'negative' part of $y$; as the penalty parameter $\alpha \to +\infty$ and the smoothing parameter $\mu \to 0^+$, it forces $y$ to approach $K$. In (27), it is straightforward to observe that $F(x,y) \in K$ always holds, because $\alpha \Phi^-(\mu, y) \in K$. We expect the solution sequence of (27) to tend to the solution of (1). To this end, let

$$\mathcal{F}(x,y) = (G(x,y); F(x,y)), \qquad \mathcal{K} = \mathbb{R}^m \times K,$$

where $K$ is defined in (2), and let us make the following assumption.
Assumption 1.
The function $\mathcal{F}(x,y)$ is $\xi$-monotone on $\mathbb{R}^{m+n}$; i.e., there exist constants $\beta > 0$ and $\xi > 1$ such that

$$\left((x; y) - (\bar{x}; \bar{y})\right)^T \left(\mathcal{F}(x,y) - \mathcal{F}(\bar{x},\bar{y})\right) \geq \beta \left\| (x; y) - (\bar{x}; \bar{y}) \right\|^{\xi}, \quad \forall (x; y), (\bar{x}; \bar{y}) \in \mathbb{R}^{m+n}.$$

It is apparent that a strongly monotone function must also be $\xi$-monotone, though the converse does not hold in general. Under Assumption 1, we now undertake the convergence analysis.
First, we consider the uniqueness of the solutions of SOCMCP (1) and CSLOPEs (27). Using Proposition 1.1.3 and Theorem 2.2.3 from [5], Section 3.1 of [29] demonstrates that SOCMCP (1) can be equivalently transformed into a variational inequality. Under continuity and $\xi$-monotonicity, a variational inequality has a unique solution. Therefore, under Assumption 1, SOCMCP (1) is uniquely solvable.
In [30], SLOPEs (5) treat the approximately smooth case $\sigma \in (0,1]$. In this paper, we further explore the completely smooth case $\sigma = 1$ on that basis. The remaining results in this section are therefore similar to the propositions and theorems in [30].
Proposition 5.
For any smoothing parameter $\mu \in (0,1)$, let $\Phi^-(\mu, y)$ be defined by (26); then $-\Phi^-(\mu, y)$ is monotone in $y$ on $\mathbb{R}^n$, i.e.,

$$(y - y')^T \left( \Phi^-(\mu, y) - \Phi^-(\mu, y') \right) \leq 0, \quad \forall y, y' \in \mathbb{R}^n.$$
According to Proposition 5, for any parameters $\alpha \geq 1$ and $\mu \in (0,1)$, the function $-\alpha \Phi^-(\mu, y)$ is monotone on $\mathbb{R}^n$. Suppose that

$$\Psi(x,y) = \mathcal{F}(x,y) - \alpha \left( 0;\ \Phi^-(\mu, y) \right).$$

Since nonlinear equations can be regarded as unconstrained variational inequalities, the uniqueness of the solution of CSLOPEs (27) can be established through variational inequalities.
Proposition 6.
Under Assumption 1, for any parameters $\alpha \geq 1$ and $\mu \in (0,1)$, CSLOPEs (27) have a unique solution.
Proposition 7.
For any $\alpha \geq 1$ and sufficiently small $\mu$, the solution of CSLOPEs (27) is bounded. Specifically, there exists a positive constant $M$, independent of $(x_{\mu,\alpha}; y_{\mu,\alpha})$, $\mu$, and $\alpha$, such that $\|(x_{\mu,\alpha}; y_{\mu,\alpha})\| \leq M$.
Proposition 8.
For any $\alpha \geq 1$ and sufficiently small $\mu$, there exists a positive constant $C$, independent of $(x_{\mu,\alpha}; y_{\mu,\alpha})$, $\alpha$, and $\mu$, such that

$$\left\| \Phi^-(\mu, y_{\mu,\alpha}) \right\| \leq \frac{C}{\alpha}.$$

Theorem 1.
Suppose that $(x^*; y^*)$ is the solution of SOCMCP (1) and, for any $\alpha \geq 1$ and sufficiently small $\mu$, $(x_{\mu,\alpha}; y_{\mu,\alpha})$ is the solution of CSLOPEs (27). Then there exists a positive constant $C$, independent of $(x^*; y^*)$, $(x_{\mu,\alpha}; y_{\mu,\alpha})$, $\mu$, and $\alpha$, such that

$$\left\| (x^*; y^*) - (x_{\mu,\alpha}; y_{\mu,\alpha}) \right\| \leq \frac{C}{\alpha^{1/\xi}}.$$

5. Algorithm and Numerical Experiments

In this section, we construct a completely smooth lower-order penalty algorithm for solving SOCMCP (1) based on Theorem 1. Additionally, we present several numerical experiments to demonstrate its effectiveness.
Algorithm 1 offers a completely smooth lower-order penalty method for solving SOCMCP (1). In the following, we present several numerical examples to demonstrate the effectiveness of Algorithm 1. In the numerical experiments, the smoothing functions (18)-(21) are employed, and all nonlinear equations are solved by the smoothing Newton method. For each numerical example, IP $(x^{(0)}; y^{(0)})$ denotes the initial point, Iter denotes the number of iterations, CPU (s) records the computation time, and Err denotes $\|(\tilde{x}; \tilde{y}) - (x^*; y^*)\|$, where $(\tilde{x}; \tilde{y})$ denotes the approximate optimal solution obtained by Algorithm 1 and $(x^*; y^*)$ denotes the exact solution of the problem. For all numerical examples hereinafter, unless otherwise stated, we adopt the termination criterion $\mathrm{eps} = 1 \times 10^{-6}$. In the numerical experiment tables, '-' indicates that the result could not be obtained (due to matrix singularity or other reasons). All numerical experiments in this paper were run in MATLAB 2012a.
Algorithm 1: Completely Smooth Lower-Order Penalty Algorithm
Step 0. Given a continuously differentiable vector-valued function $(G(x,y); F(x,y))$ satisfying Assumption 1, where $G : \mathbb{R}^{m+n} \to \mathbb{R}^m$, $F : \mathbb{R}^{m+n} \to \mathbb{R}^n$, the block form of $F$ is $F = (F_1; \ldots; F_r)$ with $F_i : \mathbb{R}^{m+n} \to \mathbb{R}^{n_i}$ ($i = 1, \ldots, r$), and the block form of the variable $y$ is $y = (y_1; \ldots; y_r)$ with $y_i \in \mathbb{R}^{n_i}$ ($i = 1, \ldots, r$). Set $(\tilde{x}; \tilde{y}) = 0 \in \mathbb{R}^{m+n}$.
Step 1. If $G(0) = 0$ and $F(0) \in K$, go to Step 5; else, go to Step 2.
Step 2. Choose the penalty parameter $\alpha \geq 1$, the smoothing parameter $\mu \in (0,1)$, the error bound $\mathrm{eps}$, and the multiplier parameters $c_1 > 1$ and $0 < c_2 < 1$. Select an initial point $(x^{(0)}; y^{(0)})$, where $y^{(0)} = (y_1^{(0)}; \ldots; y_r^{(0)}) \in \mathbb{R}^{n_1} \times \cdots \times \mathbb{R}^{n_r}$ with $y_i^{(0)} = (y_{i1}^{(0)}; y_{i2}^{(0)}) \in \mathbb{R} \times \mathbb{R}^{n_i - 1}$ ($i = 1, \ldots, r$), taking $y_{i2}^{(0)} \neq 0$ whenever $y_{i1}^{(0)} < 0$.
Step 3. For the parameters $\alpha, \mu$ and the initial point $(x^{(0)}; y^{(0)})$, solve the nonlinear equations

$$(G(x,y); F(x,y)) - \alpha \left(0;\ \Phi^-(\mu, y)\right) = 0,$$

and let $(x_{\mu,\alpha}; y_{\mu,\alpha})$ denote the solution of these nonlinear equations. Let

$$\mathrm{Tol} = \left| y_{\mu,\alpha}^T F(x_{\mu,\alpha}, y_{\mu,\alpha}) \right| + \left\| G(x_{\mu,\alpha}, y_{\mu,\alpha}) \right\|.$$

Step 4. If $\mathrm{Tol} \leq \mathrm{eps}$, set $(\tilde{x}; \tilde{y}) = (x_{\mu,\alpha}; y_{\mu,\alpha})$ and go to Step 5; else, let $(x^{(0)}; y^{(0)}) = (x_{\mu,\alpha}; y_{\mu,\alpha})$, $\alpha = c_1 \alpha$, $\mu = c_2 \mu$, and go to Step 3.
Step 5. The vector $(\tilde{x}; \tilde{y})$ serves as the approximate optimal solution of SOCMCP (1); stop.
Example 1.
Consider SOCMCP (1) on $K^3$, where $x \in \mathbb{R}$, $y = (y_1, y_2, y_3)^T \in K^3$, and

$$G(x,y) = x + 2y_1 - 2y_2 - y_3 - 2 = 0, \qquad F(x,y) = \begin{pmatrix} 0.07 y_1^3 - 4 \\ 0.04 y_2^3 - 3.93 \\ 0.03 y_3^3 - 5.72 \end{pmatrix} \in K^3.$$

This example is derived from [29]. The function $G(x,y)$ is linear, and $F(x,y)$ is $\xi$-monotone but not strongly monotone, since each component of $F(x,y)$ consists of a cubic term and a constant term. According to [29], the exact solution of Example 1 is $x^* = 2$, $y^* = (5, 3, 4)^T$. The effect of individually varying the parameters $\mu$ and $\alpha$ on the numerical results is considered below. Taking the initial point $(x^{(0)}; y^{(0)}) = (1, 1, 1, 1)^T$, the numerical experiments were conducted in the following two distinct steps:
  • First, we set $\mu = 1 \times 10^{-4}$ and consider the values $\alpha = 70 \times 7^i$, $i = 1, \ldots, 6$. The corresponding numerical results are presented in Table 1.
  • Second, we set $\alpha = 490$ and consider the values $\mu = 0.5, 0.2, 0.1, 0.01, 0.001, 0.0001$. The corresponding numerical results are presented in Table 2.
In Table 1 and Table 2, Val denotes $y_{\mu,\alpha}^T F(x_{\mu,\alpha}, y_{\mu,\alpha})$.
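Before turning to the tables, here is a compact, self-contained Python sketch of Algorithm 1 applied to Example 1. It is illustrative only: scipy.optimize.fsolve stands in for the smoothing Newton method used in the paper, $\phi_4$ from (21) is taken as the smoothing function, and $c_1 = 7$, $c_2 = 0.1$ are sample multiplier choices.

```python
import numpy as np
from scipy.optimize import fsolve

def phi4(mu, t):
    # smoothing function (21) for the minus function [t]_-
    return 0.5 * np.sqrt(mu**2 + t**2) - 0.5 * t

def Phi_minus(mu, y):
    # Φ^-(μ, y) of (24)/(26) on the single block K^3, via (7)-(8)
    w = np.linalg.norm(y[1:])
    u = y[1:] / w if w > 0 else np.eye(len(y) - 1)[0]
    u1 = 0.5 * np.concatenate(([1.0], -u))
    u2 = 0.5 * np.concatenate(([1.0],  u))
    return phi4(mu, y[0] - w) * u1 + phi4(mu, y[0] + w) * u2

def G(x, y):
    return x + 2*y[0] - 2*y[1] - y[2] - 2

def F(y):
    return np.array([0.07*y[0]**3 - 4.0,
                     0.04*y[1]**3 - 3.93,
                     0.03*y[2]**3 - 5.72])

def cslopes(z, mu, alpha):
    # CSLOPEs (27): (G(x,y); F(x,y)) - alpha * (0; Φ^-(μ, y)) = 0
    x, y = z[0], z[1:]
    return np.concatenate(([G(x, y)], F(y) - alpha * Phi_minus(mu, y)))

z, alpha, mu, eps = np.ones(4), 490.0, 1e-4, 1e-6
for _ in range(12):                               # Steps 3-4 of Algorithm 1
    z = fsolve(lambda w: cslopes(w, mu, alpha), z)
    x, y = z[0], z[1:]
    if abs(y @ F(y)) + abs(G(x, y)) <= eps:       # Tol <= eps
        break
    alpha, mu = 7.0 * alpha, 0.1 * mu             # c1 = 7, c2 = 0.1
print(z)  # approaches the exact solution (2, 5, 3, 4)
```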
From these tables, we can draw the following conclusions:
  • In Table 1, as the penalty parameter $\alpha$ increases, Err gradually decreases. When $\phi_i$ ($i = 1, 2, 3, 4$) is employed, the value of Val transitions from negative to positive. For instance, when $\phi_4$ is used and the penalty parameter $\alpha$ increases to 168,070, Val shifts from a negative to a positive value. This indicates that the penalty parameter $\alpha = 168{,}070$ is already large enough, and simply increasing it further will not yield better results; at this juncture, it becomes essential to adopt a smaller smoothing parameter $\mu$.
  • In Table 2, as the smoothing parameter $\mu$ decreases, the overall trend of Err is a gradual decline. However, the subsequent changes are not significant, primarily because only the smoothing parameter $\mu$ is reduced, without taking into account the influence of the penalty parameter on the numerical calculation. For instance, when $\phi_2$ is employed and the smoothing parameter $\mu$ changes from 0.1 to 0.01, Val changes from positive to negative. This indicates that the smoothing parameter $\mu = 0.01$ is sufficiently small, and further reductions will not yield improved outcomes; what is truly required at this point is an increase in the penalty parameter $\alpha$.
In Example 1, a fixed initial point was chosen to investigate the influence of the parameters on the numerical experiments. In fact, when different initial points are employed, the numerical outcomes remain consistent with those presented in Table 1 and Table 2, so we do not repeat them here. The next example illustrates the effect of varying initial points on the numerical results.
Example 2.
Consider the nonlinear SOCP with a convex objective function

$$\min\ x_1^2 + 2x_2^2 + 2x_1 x_2 - 10x_1 - 12x_2, \quad \text{s.t.} \quad \begin{pmatrix} 8 - x_1 + 3x_2 \\ 3 - x_1^2 - 2x_1 - x_2^2 + 2x_2 \end{pmatrix} \in K^2.$$

Its KKT conditions are

$$y \in K^2, \quad F(x,y) \in K^2, \quad y^T F(x,y) = 0, \quad G(x,y) = 0,$$

where $x = (x_1, x_2)^T$, $y = (y_1, y_2)^T$,

$$F(x,y) = \begin{pmatrix} 8 - x_1 + 3x_2 \\ 3 - x_1^2 - 2x_1 - x_2^2 + 2x_2 \end{pmatrix}, \qquad G(x,y) = \begin{pmatrix} 2x_1 + 2x_2 - 10 + y_1 + 2(x_1 + 1)y_2 \\ 2x_1 + 4x_2 - 12 - 3y_1 + 2(x_2 - 1)y_2 \end{pmatrix}.$$

This example is derived from [29]. According to [29], the approximate optimal solution of Example 2 is $(\tilde{x}; \tilde{y}) \approx (2.830835, 1.637521, 0.122758, 0.122758)^T$. Applying different initial points, with $c_1 = 10$, $c_2 = 0.1$, initial $\alpha = 1000$, and initial $\mu = 1 \times 10^{-6}$, the test results are summarized in Table 3. From Table 3, it can be seen that, for all the initial points listed, Example 2 can be solved by Algorithm 1, which indicates that Algorithm 1 is not very sensitive to the initial point.
In the aforementioned numerical experiments, the smoothing functions (18)-(21) were utilized. However, which smoothing function has better numerical performance? That is, it is necessary to conduct a comparative analysis of the performance of the functions $\phi_i(\mu, t)$ ($i = 1, 2, 3, 4, 5$). For this purpose, we employ the performance profile method introduced in [38] to evaluate and compare their performance.
Assume that there are $n_s$ solvers in the solver set $S$ and $n_p$ test problems in the test set $P$. We use computation time and the number of iterations as performance metrics, adopting computation time as the primary measure. For each problem $p$ and solver $s$, define

$$f_{p,s} = \text{computing time required to solve problem } p \text{ by solver } s,$$

and the performance ratio

$$r_{p,s} = \frac{f_{p,s}}{\min\left\{ f_{p,s} \mid s \in S \right\}}.$$

Assume that a parameter $r_M$ is selected such that $r_{p,s} \leq r_M$ for all $p, s$, with $r_{p,s} = r_M$ if and only if solver $s$ fails to solve problem $p$. The choice of $r_M$ does not influence the performance evaluation. To obtain an overall assessment of each solver, we define

$$\rho_s(\tau) = \frac{1}{n_p}\, \mathrm{size}\left\{ p \in P \mid r_{p,s} \leq \tau \right\}.$$

The function $\rho_s(\tau)$ is the cumulative distribution function of the performance ratio, referred to as the performance profile.
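A short sketch (our own, with a hypothetical timing matrix) of how the performance profile $\rho_s(\tau)$ is computed from the ratios $r_{p,s}$, with failed runs assigned the ratio $r_M$:

```python
import numpy as np

def performance_profile(T, taus, r_M=None):
    """T[p, s]: computing time of solver s on problem p (np.inf = failure).
    Returns rho[s, k] = fraction of problems with ratio r_{p,s} <= taus[k]."""
    best = np.min(T, axis=1, keepdims=True)       # best time per problem
    R = T / best                                  # performance ratios r_{p,s}
    if r_M is None:
        r_M = 2.0 * np.max(R[np.isfinite(R)])     # beyond every finite ratio
    R[~np.isfinite(R)] = r_M                      # failed runs get ratio r_M
    return np.array([[np.mean(R[:, s] <= tau) for tau in taus]
                     for s in range(T.shape[1])])

# hypothetical timings: 4 problems x 3 solvers, one failure
T = np.array([[1.0, 1.2, 2.0],
              [0.5, 0.6, np.inf],
              [2.0, 1.0, 1.5],
              [0.8, 0.9, 1.0]])
print(performance_profile(T, taus=[1.0, 1.5, 2.0, 3.0]))
```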
According to [30], $\phi_5(\mu, t)$ is the smoothing function with the best performance there. To compare the effectiveness of the smoothing functions $\phi_i(\mu, t)$ ($i = 1, 2, 3, 4$) proposed in this paper with $\phi_5(\mu, t)$ from [30], we consider two numerical examples: an SOCP with both equality and inequality constraints, and an SOCP with an inequality constraint. For the performance profile, the smoothing functions $\phi_i(\mu, t)$ ($i = 1, 2, 3, 4, 5$) from (18)-(22) are treated as five solvers, and 20 randomly generated initial points are treated as 20 test problems to evaluate the performance of these solvers on each numerical example.
Example 3.
Consider the nonlinear SOCP with equality and inequality constraints and a convex objective function

$$\min\ f(z) = \exp\left( (z_1 - 3)^2 + z_2^2 + (z_3 - 1)^2 + (z_4 - 2)^2 + (z_5 + 2)^2 \right),$$
$$\text{s.t.} \quad h(z) = z_1 + 2z_2 + 3z_3 - 3z_4 - z_5 - 2 = 0, \qquad g(z) = z \in K^5,$$

where $z = (z_1, z_2, z_3, z_4, z_5)^T$. The KKT conditions of this SOCP can be transformed into the following SOCMCP:

$$y = (y_1, y_2, y_3, y_4, y_5)^T \in K^5,$$
$$F(x,y) = 2 f(y)\, (y_1 - 3,\ y_2,\ y_3 - 1,\ y_4 - 2,\ y_5 + 2)^T - x\, (1, 2, 3, -3, -1)^T \in K^5,$$
$$y^T F(x,y) = 0, \quad x \in \mathbb{R}, \quad G(x,y) = y_1 + 2y_2 + 3y_3 - 3y_4 - y_5 - 2 = 0.$$

This example is derived from [29]. According to [29], the exact optimal solution of Example 3 is $x^* = 0$, $y^* = (3, 0, 1, 2, -2)^T$. Randomly selecting 20 initial points, with $c_1 = 10$, $c_2 = 0.1$, initial $\alpha = 1000$, and initial $\mu = 1 \times 10^{-6}$, the performance profile based on computation time is shown in Figure 2.
Example 4.
Consider the following nonlinear SOCP:

$$\min\ \exp(x_1 - x_2) + (x_1 - x_5)^4 + \frac{1}{2}\|x\|^2 - \sum_{i=1}^{5} x_i, \quad \text{s.t.} \quad \begin{pmatrix} x^T M x + c^T x + m \\ A x - b \end{pmatrix} \in K^n,$$

where $x = (x_1, x_2, x_3, x_4, x_5)^T$, $n = 4$, $m = 1$, and

$$M = \begin{pmatrix} 0.0534 & 0.2689 & 0.3987 & 0.5193 & 0.1132 \\ 0.2689 & 0.4283 & 0.1669 & 0.1164 & 0.4574 \\ 0.3987 & 0.1669 & 0.8921 & 0.0996 & 0.5735 \\ 0.5193 & 0.1164 & 0.0996 & 0.0612 & 0.6769 \\ 0.1132 & 0.4574 & 0.5735 & 0.6769 & 0.0571 \end{pmatrix},$$

$$A = \begin{pmatrix} 0.6687 & 0.3082 & 0.0989 & 0.8267 & 0.0767 \\ 0.2040 & 0.3784 & 0.8324 & 0.6952 & 0.9923 \\ 0.4741 & 0.4963 & 0.5420 & 0.6516 & 0.8436 \end{pmatrix},$$

$$c = (0.1146, 0.7867, 0.9238, 0.9907, 0.5498)^T, \qquad b = (0.6346, 0.7374, 0.8311)^T.$$

Let $y = (y_1, y_2, y_3, y_4)^T$; then the KKT conditions for the above SOCP form the SOCMCP

$$y \in K^4, \quad F(x,y) \in K^4, \quad y^T F(x,y) = 0, \quad G(x,y) = 0,$$

where

$$F(x,y) = \begin{pmatrix} x^T M x + c^T x + 1 \\ A x - b \end{pmatrix},$$

$$G(x,y) = \begin{pmatrix} \exp(x_1 - x_2) + 4(x_1 - x_5)^3 + x_1 - 1 \\ -\exp(x_1 - x_2) + x_2 - 1 \\ x_3 - 1 \\ x_4 - 1 \\ -4(x_1 - x_5)^3 + x_5 - 1 \end{pmatrix} - \begin{pmatrix} (2Mx + c)^T \\ A \end{pmatrix}^T y = 0.$$

This example is derived from [33]. According to [33], the approximate optimal solution of Example 4 is $(\tilde{x}; \tilde{y}) \approx (0.69393, 0.87434, 1.00356, 0.40226, 1.06616, 0.49209, 0.38664, 0.24221, 0.18440)^T$. Randomly selecting 20 initial points, with $c_1 = 10$, $c_2 = 0.1$, initial $\alpha = 1000$, and initial $\mu = 1 \times 10^{-8}$, the performance profile based on computation time is shown in Figure 3.
In [33], $M \in S^5$, $A \in \mathbb{R}^{(n-1) \times 5}$, $c \in \mathbb{R}^5$, $b \in \mathbb{R}^{n-1}$, and $m \in \mathbb{R}$. Randomly selecting $M, A, c, b, m$ and 20 initial points, with $c_1 = 10$, $c_2 = 0.1$, initial $\alpha = 800$, and initial $\mu = 1 \times 10^{-8}$, the performance profile based on computation time is shown in Figure 4.
From Figure 2, Figure 3 and Figure 4, it is evident that $\phi_3(\mu, t)$ exhibits the best performance, followed by $\phi_1(\mu, t)$ and then $\phi_5(\mu, t)$. Under the parameter choices of Examples 3 and 4, $\phi_2(\mu, t)$ and $\phi_4(\mu, t)$ do not solve the problems well. Therefore, when using Algorithm 1 to solve SOCMCP, the smoothing function $\phi_3(\mu, t)$ is the best choice.
In [30], the smooth-like lower-order penalty (SLOPEs) algorithm transforms SOCMCP (1) into the smooth-like lower-order penalty equations (5) via smoothing functions, and the resulting nonlinear equations with index $\sigma \in (0,1]$ are solved by the smoothing Newton method. In Algorithm 1 of this paper, the minus function $[t]_-$ is approximated by the smoothing functions $\phi_i(\mu, t)$ ($i = 1, 2, 3, 4, 5$), among which $\phi_3(\mu, t)$ is the best choice. Therefore, the performance of Algorithm 1 is compared with that of SLOPEs below, in particular using the best smoothing function $\phi_3(\mu, t)$.
Examples 2-4 are numerical examples that satisfy Assumption 1. In Algorithm 1 and SLOPEs, we take $c_1 = 10$, $c_2 = 0.1$, initial $\alpha = 100$, and initial $\mu = 1 \times 10^{-7}$; in SLOPEs, we take $\sigma = 1$. The numerical performance comparison results under different termination criteria are shown in Table 4.
It can be concluded from Table 4 that, in the case of $\phi_3(\mu, t)$, Algorithm 1 has better numerical performance than SLOPEs, which is mainly reflected in two aspects: (1) In terms of termination criteria, Algorithm 1 achieves higher accuracy than SLOPEs. When the termination criterion is $1 \times 10^{-7}$, Example 2 can be solved by Algorithm 1 but not by SLOPEs; thus, for more refined termination criteria, Algorithm 1 remains computable. (2) Under the same solvable termination criteria, Algorithm 1 has a slight advantage over SLOPEs in iteration time. Algorithm 1 sets the power parameter to 1 on the basis of SLOPEs, making the penalty equations completely smooth. SLOPEs add a criterion to the algorithm steps to determine whether the iterate lies within the second-order cone, while Algorithm 1 omits this step, making the algorithm more concise. According to the results in Table 4, there is no significant difference between the Val values of the two algorithms. From this point of view, Algorithm 1 does have better numerical performance than SLOPEs.
The numerical comparisons above are based on termination criteria. Since Examples 1 and 3 have exact solutions, the numerical performances of Algorithm 1 and SLOPEs are also compared using the distance between the numerical solution and the exact solution as a new termination criterion, set to $1 \times 10^{-6}$ for both algorithms. In Algorithm 1 and SLOPEs, we take $c_1 = 10$, $c_2 = 0.1$, initial $\alpha = 100$, and initial $\mu = 1 \times 10^{-7}$; in SLOPEs, we take $\sigma = 1$. The comparison results are shown in Table 5. From how closely the numerical solutions approach the exact solutions, it can be seen that Algorithm 1 has better numerical performance than SLOPEs.
The functions in Examples 1-4 all satisfy Assumption 1 or nearly so. The following example attempts to solve an SOCP with a non-convex objective function.
Example 5.
Consider the nonlinear SOCP

$$\min\ f(x) = e^{x_1 x_3} + 3(x_1 + x_2)^2 + \sqrt{1 + (2x_2 - x_3)^2} + \frac{1}{2}x_4^2 + \frac{1}{2}x_5^2,$$
$$\text{s.t.} \quad h(x) = -24.51 x_1 + 58 x_2 - 16.67 x_3 - x_4 - 3 x_5 + 11 = 0,$$
$$g_1(x) = \begin{pmatrix} 3x_1^3 + 2x_2 - x_3 + 5x_3^2 \\ -5x_1^3 + 4x_2 - 2x_3 + 10x_3^3 \\ x_3 \end{pmatrix} \in K^3, \qquad g_2(x) = \begin{pmatrix} x_4 \\ 3x_5 \end{pmatrix} \in K^2.$$

The KKT conditions of this SOCP are the following SOCMCP:

$$y \in K^3 \times K^2, \quad F(x,y) \in K^3 \times K^2, \quad y^T F(x,y) = 0, \quad G(x,y) = 0,$$

where $y = (y_1, y_2, y_3, y_4, y_5)^T$, $x = (x_1, x_2, x_3, x_4, x_5, x_6)^T$,

$$F(x,y) = \begin{pmatrix} 3x_1^3 + 2x_2 - x_3 + 5x_3^2 \\ -5x_1^3 + 4x_2 - 2x_3 + 10x_3^3 \\ x_3 \\ x_4 \\ 3x_5 \end{pmatrix},$$

$$G(x,y) = \begin{pmatrix} x_3 e^{x_1 x_3} + 6(x_1 + x_2) - 9x_1^2 y_1 + 15x_1^2 y_2 + 24.51 x_6 \\ 6(x_1 + x_2) + \dfrac{2(2x_2 - x_3)}{\sqrt{1 + (2x_2 - x_3)^2}} - 2y_1 - 4y_2 - 58 x_6 \\ x_1 e^{x_1 x_3} - \dfrac{2x_2 - x_3}{\sqrt{1 + (2x_2 - x_3)^2}} - (10x_3 - 1)y_1 - (30x_3^2 - 2)y_2 - y_3 + 16.67 x_6 \\ x_4 - y_4 + x_6 \\ x_5 - 3y_5 + 3x_6 \\ -24.51 x_1 + 58 x_2 - 16.67 x_3 - x_4 - 3 x_5 + 11 \end{pmatrix}.$$

This example is derived from [34]. Since the objective function of this nonlinear SOCP is non-convex, $(G(x,y); F(x,y))$ does not satisfy $\xi$-monotonicity. For different initial points, we nevertheless attempt to solve Example 5 with Algorithm 1 using $\phi_3$, taking $c_1 = 10$, $c_2 = 0.1$, initial $\alpha = 100$, and initial $\mu = 1 \times 10^{-7}$. For the initial points $10^4(1,1,1,0,1,1,1,0,1,1,1)^T$ or $10^7(1,1,1,1,1,0,1,0,1,0,1)^T$, the approximate optimal solution is $(0.12916, 0.11255, 0.06336, 0.00957, 0.00957, 0.09046, 0.04491, 0.63661, 0, 0, 0.00957)^T$, which is consistent with the computational results in [34]. However, for the initial points $10^4(1,1,1,1,1,0,1,0,1,0,1)^T$ or $(1,1,1,1,1,1,0,1,0,1,0)^T$, the numerical solution is $(5.32406, 5.17849, 1.23647, 0.38361, 0.38361, 0.51653, 0.09572, 0.12991, 0.86311, 0.28770, 0.47951)^T$. This solution is also a local optimal solution of the original SOCP. This indicates that Algorithm 1 is also applicable to some SOCMCPs that are not $\xi$-monotone.

6. Conclusions

Based on the completely smooth lower-order penalty equations (CSLOPEs) (27), this paper proposes a completely smooth lower-order penalty method for solving SOCMCP (1). As the main result, Theorem 1 proves that, under Assumption 1, the solution sequence of CSLOPEs (27) converges to the solution of SOCMCP (1) at an exponential rate. In the numerical experiments, the five smoothing functions $\phi_i(\mu, t)$, $i = 1, 2, 3, 4, 5$, are considered. The numerical results show that $\phi_3(\mu, t)$ has the best numerical performance. Meanwhile, the accuracy of Algorithm 1 is generally higher than that of SLOPEs, and Algorithm 1 is also applicable to some SOCMCPs that do not satisfy Assumption 1.

Author Contributions

Conceptualization, Z.H.; methodology, Q.W.; software, Q.W.; validation, Q.W. and Z.H.; formal analysis, Z.H.; investigation, Q.W.; resources, Z.H.; data curation, Q.W.; writing—original draft preparation, Q.W.; writing—review and editing, Q.W.; visualization, Z.H.; supervision, Z.H.; project administration, Z.H.; funding acquisition, Z.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Natural Science Fund of Ningxia (2025).

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Hayashi, S.; Yamashita, N.; Fukushima, M. A combined smoothing and regularization method for monotone second-order cone complementarity problems. SIAM J. Optim. 2005, 15, 593–615. [Google Scholar] [CrossRef]
  2. Fukushima, M.; Luo, Z.Q.; Tseng, P. Smoothing functions for second-order-cone complementarity problems. SIAM J. Optim. 2002, 12, 436–460. [Google Scholar] [CrossRef]
  3. Lobo, M.S.; Vandenberghe, L.; Boyd, S.; Lebret, H. Applications of second order cone programming. Linear Algebra Its Appl. 1998, 284, 193–228. [Google Scholar] [CrossRef]
  4. Alizadeh, F.; Goldfarb, D. Second-order cone programming. Math. Program. 2003, 95, 3–51. [Google Scholar] [CrossRef]
  5. Facchinei, F.; Pang, J.S. Finite-Dimensional Variational Inequalities and Complementarity Problems; Springer: New York, NY, USA, 2003. [Google Scholar]
  6. Wilmott, P.; Dewynne, J.; Howison, S. Option Pricing: Mathematical Model and Computation; Oxford Financial Press: Oxford, UK, 1993. [Google Scholar]
  7. Monteiro, R.D.C.; Tsuchiya, T. Polynomial convergence of primal–dual algorithms for the second-order cone programs based on the MZ-family of directions. Math. Program. 2000, 88, 61–83. [Google Scholar] [CrossRef]
  8. Chen, X.D.; Sun, D.; Sun, J. Complementarity functions and numerical experiments for second-order cone complementarity problems. Comput. Optim. Appl. 2003, 25, 39–56. [Google Scholar] [CrossRef]
  9. Huang, Z.H.; Ni, T. Smoothing algorithms for complementarity problems over symmetric cones. Comput. Optim. Appl. 2010, 45, 557–579. [Google Scholar] [CrossRef]
  10. Kanzow, C.; Ferenczi, I.; Fukushima, M. On the local convergence of semismooth Newton methods for linear and nonlinear second-order cone programs without strict complementarity. SIAM J. Optim. 2009, 20, 297–320. [Google Scholar] [CrossRef]
  11. Pan, S.; Chen, J.S. A damped Gauss-Newton method for the second-order cone complementarity problem. Appl. Math. Optim. 2009, 59, 293–318. [Google Scholar] [CrossRef]
  12. Hayashi, S.; Yamaguchi, T.; Yamashita, N.; Fukushima, M. A matrix-splitting method for symmetric affine second-order cone complementarity problems. J. Comput. Appl. Math. 2005, 175, 335–353. [Google Scholar] [CrossRef]
  13. Zhang, L.H.; Yang, W.H. An efficient matrix splitting method for the second-order cone complementarity problem. SIAM J. Optim. 2014, 24, 1178–1205. [Google Scholar] [CrossRef]
  14. Chen, J.S.; Tseng, P. An unconstrained smooth minimization reformulation of the second-order cone complementarity problem. Math. Program. 2005, 104, 293–327. [Google Scholar] [CrossRef]
  15. Chen, J.S. Two classes of merit functions for the second-order cone complementarity problem. Math. Methods Oper. Res. 2006, 64, 495–519. [Google Scholar] [CrossRef]
  16. Chen, J.S.; Pan, S. A descent method for a reformulation of the second-order cone complementarity problem. J. Comput. Appl. Math. 2008, 213, 547–558. [Google Scholar] [CrossRef]
  17. Di Pillo, G.; Grippo, L. An exact penalty function method with global convergence properties for nonlinear programming problems. Math. Program. 1986, 36, 1–18. [Google Scholar] [CrossRef]
  18. Zangwill, W.I. Nonlinear programming via penalty functions. Manag. Sci. 1967, 13, 344–358. [Google Scholar] [CrossRef]
  19. Han, S.P.; Mangasarian, O.L. Exact penalty functions in nonlinear programming. Math. Program. 1979, 17, 251–269. [Google Scholar] [CrossRef]
  20. Bertsekas, D.; Nedić, A.; Ozdaglar, A. Convex Analysis and Optimization; Athena Scientific: Belmont, MA, USA, 2003. [Google Scholar]
  21. Wang, S.; Yang, X. A power penalty method for linear complementarity problems. Oper. Res. Lett. 2008, 36, 211–214. [Google Scholar] [CrossRef]
  22. Huang, C.; Wang, S. A power penalty approach to a nonlinear complementarity problem. Oper. Res. Lett. 2010, 38, 72–76. [Google Scholar] [CrossRef]
  23. Huang, C.; Wang, S. A penalty method for a mixed nonlinear complementarity problem. Nonlinear Anal. Theory Methods Appl. 2012, 75, 588–597. [Google Scholar] [CrossRef]
  24. Hao, Z.; Wan, Z.; Chi, X. A power penalty method for second-order cone linear complementarity problems. Oper. Res. Lett. 2015, 43, 137–142. [Google Scholar] [CrossRef]
  25. Hao, Z.; Wan, Z.; Chi, X.; Chen, J. A power penalty method for second-order cone nonlinear complementarity problems. J. Comput. Appl. Math. 2015, 290, 136–149. [Google Scholar] [CrossRef]
  26. Chen, C.; Mangasarian, O.L. A class of smoothing functions for nonlinear and mixed complementarity problems. Comput. Optim. Appl. 1996, 5, 97–138. [Google Scholar] [CrossRef]
  27. Hao, Z.; Nguyen, C.T.; Chen, J.S. An approximate lower order penalty approach for solving second-order cone linear complementarity problems. J. Glob. Optim. 2022, 83, 671–697. [Google Scholar] [CrossRef]
  28. Chieu, T.N.; Jan, H.A.; Hao, Z.; Chen, J.S. Smoothing penalty approach for solving second-order cone complementarity problems. J. Glob. Optim. 2025, 91, 39–58. [Google Scholar]
  29. Hao, Z.; Wan, Z.; Chi, X.; Jin, Z.F. Generalized lower-order penalty algorithm for solving second-order cone mixed complementarity problems. J. Comput. Appl. Math. 2021, 385, 113168. [Google Scholar] [CrossRef]
  30. Hao, Z.; Wu, Q.; Zhao, C. Smooth-like lower order penalty approach for solving second-order cone mixed complementarity problems. J. Comput. Appl. Math. 2024, 91, 39–58. [Google Scholar]
  31. Nguyen, C.T.; Saheya, B.; Chang, Y.L. Unified smoothing functions for absolute value equation associated with second-order cone. Appl. Numer. Math. 2019, 135, 206–227. [Google Scholar] [CrossRef]
  32. Yong, L.Q. Uniformly smooth approximation function and its properties. J. Shaanxi Univ. Technol. (Natural Sci. Ed.) 2018, 34, 74–79. (In Chinese) [Google Scholar]
  33. Okuno, T.; Yasuda, K.; Hayashi, S. SL1QP Based algorithm with trust region technique for solving nonlinear second-order cone programming problems. Interdiscip. Inf. Sci. 2015, 21, 97–107. [Google Scholar]
  34. Miao, X.; Chen, J.S.; Ko, C.H. A smoothed NR neural network for solving nonlinear convex programs with second-order cone constraints. Inf. Sci. 2014, 268, 255–270. [Google Scholar] [CrossRef]
  35. Faraut, J.; Korányi, A. Analysis on Symmetric Cones; Clarendon Press: Oxford, UK, 1994. [Google Scholar]
  36. Chen, J.S. SOC Functions and their Applications. In Springer Optimization and Its Applications; Springer: Singapore, 2019; Volume 143. [Google Scholar]
  37. Chen, J.S.; Chen, X.; Tseng, P. Analysis of nonsmooth vector-valued functions associated with second-order cones. Math. Program. 2004, 101, 95–117. [Google Scholar] [CrossRef]
  38. Dolan, E.D.; Moré, J.J. Benchmarking optimization software with performance profiles. Math. Program. 2002, 91, 201–213. [Google Scholar] [CrossRef]
Figure 1. Graphs of $[t]_-$ and $\phi_i(\mu, t)$, $i = 1, 2, 3, 4, 5$, with $\mu = 0.5$.
Figure 2. Performance profile of $\phi_i(\mu, t)$, $i = 1, 2, 3, 4, 5$, for Example 3.
Figure 3. Performance profile of $\phi_i(\mu, t)$, $i = 1, 2, 3, 4, 5$, for Example 4.
Figure 4. Performance profile of $\phi_i(\mu, t)$, $i = 1, 2, 3, 4, 5$, for Example 4 with randomly generated $M, A, c, b, m$.
Table 1. The numerical results corresponding to variations in α (μ = 1 × 10⁻⁴).

| α       | 490     | 3430    | 24,010        | 168,070        | 1,176,490     | 8,235,430     |
|---------|---------|---------|---------------|----------------|---------------|---------------|
| ϕ1: Val | −0.0913 | −0.0131 | −0.0019       | −2.5852 × 10⁻⁴ | 1.2631 × 10⁻⁴ | 2.9902 × 10⁻⁴ |
| ϕ1: Err | 0.0429  | 0.0062  | 8.7999 × 10⁻⁴ | 1.2106 × 10⁻⁴  | 5.9144 × 10⁻⁵ | 1.4002 × 10⁻⁴ |
| ϕ2: Val | −0.0913 | −0.0131 | −0.0019       | −1.4133 × 10⁻⁴ | 4.8279 × 10⁻⁴ | 8.9470 × 10⁻⁴ |
| ϕ2: Err | 0.0429  | 0.0062  | 8.7998 × 10⁻⁴ | 6.6178 × 10⁻⁵  | 2.2607 × 10⁻⁴ | 4.1895 × 10⁻⁴ |
| ϕ3: Val | −0.0913 | −0.0131 | −0.0019       | −2.6365 × 10⁻⁴ | 8.3499 × 10⁻⁵ | 2.5553 × 10⁻⁴ |
| ϕ3: Err | 0.0429  | 0.0062  | 8.7999 × 10⁻⁴ | 1.2346 × 10⁻⁴  | 3.9100 × 10⁻⁵ | 1.1965 × 10⁻⁴ |
| ϕ4: Val | −0.0913 | −0.0131 | −0.0018       | 1.5169 × 10⁻⁴  | 0.0029        | 0.0206        |
| ϕ4: Err | 0.0429  | 0.0062  | 8.6625 × 10⁻⁴ | 3.0870 × 10⁻⁵  | 6.5788 × 10⁻⁴ | 0.0047        |
Table 2. The numerical results corresponding to variations in μ (α = 490).

| μ       | 0.5     | 0.2    | 0.1     | 0.01    | 0.001   | 0.0001  |
|---------|---------|--------|---------|---------|---------|---------|
| ϕ1: Val | 1.0572  | 0.2044 | −0.0060 | −0.0913 | −0.0913 | −0.0913 |
| ϕ1: Err | 0.4736  | 0.0948 | 0.0028  | 0.0429  | 0.0429  | 0.0429  |
| ϕ2: Val | 3.5424  | 0.8866 | 0.2429  | −0.0908 | −0.0913 | −0.0913 |
| ϕ2: Err | 1.4652  | 0.3999 | 0.1125  | 0.0427  | 0.0429  | 0.0429  |
| ϕ3: Val | 0.8110  | 0.1225 | −0.0354 | −0.0913 | −0.0913 | −0.0913 |
| ϕ3: Err | 0.3669  | 0.0570 | 0.0166  | 0.0429  | 0.0429  | 0.0429  |
| ϕ4: Val | 30.3779 | 4.7832 | 1.1272  | −0.0791 | −0.0912 | −0.0913 |
| ϕ4: Err | 4.2871  | 0.9511 | 0.2288  | 0.0401  | 0.0429  | 0.0429  |
Table 3. The numerical results with different initial points (μ = 1 × 10⁻⁶, α = 1000).

| IP (x⁽⁰⁾; y⁽⁰⁾)            | Val (ϕ1)      | Val (ϕ2)      | Val (ϕ3)      | Val (ϕ4)      |
|----------------------------|---------------|---------------|---------------|---------------|
| (2,1,0,1)                  | -             | 2.0256 × 10⁻⁷ | -             | 2.0347 × 10⁻⁷ |
| 10²(1,1,0,1)               | 2.0349 × 10⁻⁷ | 2.0384 × 10⁻⁷ | 2.0288 × 10⁻⁷ | 2.0251 × 10⁻⁷ |
| 10¹²(2,2,1,1)              | -             | 2.0268 × 10⁻⁷ | 2.0367 × 10⁻⁷ | 2.0300 × 10⁻⁷ |
| 10³(2,1,0,1)               | 2.0366 × 10⁻⁷ | 2.0256 × 10⁻⁷ | 2.0348 × 10⁻⁷ | 2.0300 × 10⁻⁷ |
| (2×10⁵, 10³, 2×10⁶, 1×10⁴) | 2.0264 × 10⁻⁷ | 2.0347 × 10⁻⁷ | 2.0348 × 10⁻⁷ | 2.0391 × 10⁻⁷ |
Table 4. Comparison results of the value of Val.

| Example   | IP (x⁽⁰⁾; y⁽⁰⁾)     | eps     | Algorithm 1 (ϕ3): Val | Iter | CPU (s) | SLOPEs (ϕ3): Val | Iter | CPU (s) |
|-----------|---------------------|---------|------------------------|------|---------|-------------------|------|---------|
| Example 2 | (2,1,0,1)           | 1×10⁻⁶ | 2.0348 × 10⁻⁷          | 8    | 0.6840  | 2.0349 × 10⁻⁷     | 8    | 0.7050  |
|           |                     | 1×10⁻⁷ | 2.4119 × 10⁻⁸          | 9    | 0.6825  | -                 | -    | -       |
|           |                     | 1×10⁻⁸ | -                      | -    | -       | -                 | -    | -       |
| Example 3 | (1,0,1,0,1,0)       | 1×10⁻⁶ | 8.8526 × 10⁻¹⁵         | 1    | 0.8787  | 8.8526 × 10⁻¹⁵    | 1    | 0.8837  |
|           |                     | 1×10⁻⁷ | 8.8526 × 10⁻¹⁵         | 1    | 0.8604  | 8.8526 × 10⁻¹⁵    | 1    | 0.8966  |
|           |                     | 1×10⁻⁸ | 8.8526 × 10⁻¹⁵         | 1    | 0.8973  | 8.8526 × 10⁻¹⁵    | 1    | 0.9150  |
| Example 4 | (1,1,1,1,1,1,1,1,1) | 1×10⁻⁶ | 8.5625 × 10⁻⁷          | 5    | 1.0551  | 8.5625 × 10⁻⁷     | 5    | 1.1108  |
|           |                     | 1×10⁻⁷ | 8.5450 × 10⁻⁸          | 6    | 1.1771  | 8.5450 × 10⁻⁸     | 6    | 1.2106  |
|           |                     | 1×10⁻⁸ | 8.1203 × 10⁻⁹          | 7    | 1.2264  | 8.1203 × 10⁻⁹     | 7    | 1.2755  |
Table 5. Comparison results of the value of Err.

| Example   | IP (x⁽⁰⁾; y⁽⁰⁾) | Algorithm 1 (ϕ3): Err | Iter | CPU (s) | SLOPEs (ϕ3): Err | Iter | CPU (s) |
|-----------|-----------------|------------------------|------|---------|-------------------|------|---------|
| Example 1 | (1,1,1,1)       | 2.1163 × 10⁻⁷          | 7    | 0.7527  | 2.1220 × 10⁻⁷     | 7    | 0.7827  |
| Example 3 | (1,0,1,0,1,0)   | 3.3346 × 10⁻⁸          | 1    | 0.8787  | 3.3346 × 10⁻⁸     | 1    | 0.8837  |