Abstract
In this paper, by using the smoothing Fischer-Burmeister function, we present a new smoothing conjugate gradient method for solving nonlinear nonsmooth complementarity problems. The line search used guarantees that the search directions are descent directions. Under suitable conditions, the new smoothing conjugate gradient method is proved to be globally convergent. Finally, preliminary numerical experiments show that the new method is efficient.
1. Introduction
We consider the nonlinear nonsmooth complementarity problem, which is to find a vector $x \in \mathbb{R}^n$ satisfying the conditions
$$x \geq 0, \quad F(x) \geq 0, \quad x^T F(x) = 0, \qquad (1)$$
where $F: \mathbb{R}^n \to \mathbb{R}^n$ is a locally Lipschitz continuous function. If F is continuously differentiable, then Problem (1) is called the nonlinear complementarity problem NCP(F). As is well known, Equation (1) is a very useful general mathematical model, which is closely related to mathematical programming, variational inequalities, fixed point problems and mixed strategy problems (see [1,2,3,4,5,6,7,8,9,10,11,12,13]). The methods for solving NCP(F) are classified into three categories: nonsmooth Newton methods, Jacobian smoothing methods and smoothing methods (see [14,15,16,17,18,19]). Conjugate gradient methods are widely and increasingly used for solving unconstrained optimization problems, especially in large-scale cases. However, few scholars have investigated how to use conjugate gradient methods to solve NCP(F) (see [10,20]). Moreover, in these papers, F is required to be a continuously differentiable function. In this paper, we present a new smoothing conjugate gradient method for solving Equation (1), where F is only required to be a locally Lipschitz continuous function.
In this paper, we also use the generalized gradient of F at x, defined as
$$\partial F(x) = \operatorname{conv}\left\{ \lim_{x_k \to x,\; x_k \in D_F} \nabla F(x_k) \right\},$$
where $\operatorname{conv}$ denotes the convex hull of a set and $D_F$ denotes the set of points at which F is differentiable (see [21]). In the following, we introduce the definition of the smoothing function.
Definition 1
(see [22]) Let $F: \mathbb{R}^n \to \mathbb{R}$ be a locally Lipschitz continuous function. We call $\tilde{F}: \mathbb{R}^n \times \mathbb{R}_{+} \to \mathbb{R}$ a smoothing function of F if $\tilde{F}(\cdot, \mu)$ is continuously differentiable in $\mathbb{R}^n$ for any fixed $\mu > 0$, and
$$\lim_{z \to x,\; \mu \downarrow 0} \tilde{F}(z, \mu) = F(x)$$
for any fixed $x \in \mathbb{R}^n$. If
$$\operatorname{conv}\left\{ \lim_{z \to x,\; \mu \downarrow 0} \nabla_z \tilde{F}(z, \mu) \right\} = \partial F(x)$$
for any $x \in \mathbb{R}^n$, we say F satisfies the gradient consistency property.
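As a concrete illustration of Definition 1 (our example, not taken from [22]): for $f(t) = |t|$, the function
$$\tilde{f}(t, \mu) = \sqrt{t^2 + \mu^2}$$
is a smoothing function of f, since $\tilde{f}(\cdot, \mu)$ is continuously differentiable for every fixed $\mu > 0$ and $\lim_{s \to t,\; \mu \downarrow 0} \tilde{f}(s, \mu) = |t|$. Moreover, $\nabla_s \tilde{f}(s, \mu) = s / \sqrt{s^2 + \mu^2}$, and the set of all limits of $\nabla_s \tilde{f}(s, \mu)$ as $s \to 0$ and $\mu \downarrow 0$ is exactly $[-1, 1] = \partial f(0)$, so the gradient consistency property holds in this example.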
In the following sections of this paper, we also use the Fischer-Burmeister function (see [23]) and the smoothing Fischer-Burmeister function. (1) The Fischer-Burmeister function
$$\phi(a, b) = \sqrt{a^2 + b^2} - a - b,$$
where $(a, b) \in \mathbb{R}^2$. From the definition of $\phi$, we know that it is twice continuously differentiable except at $(a, b) = (0, 0)$. Moreover, it is a complementarity function, which satisfies
$$\phi(a, b) = 0 \iff a \geq 0,\; b \geq 0,\; ab = 0.$$
Denote
$$\Phi(x) = \big(\phi(x_1, F_1(x)), \ldots, \phi(x_n, F_n(x))\big)^T, \qquad \Psi(x) = \frac{1}{2}\|\Phi(x)\|^2.$$
It is obvious that $\Psi$ is zero at a point x if and only if x is a solution of Equation (1). Then, Equation (1) can be transformed into the following unconstrained optimization problem:
$$\min_{x \in \mathbb{R}^n} \Psi(x).$$
We know that the optimal value of $\Psi$ is zero, and $\Psi$ is called the value function of Equation (1).
(2) The smoothing Fischer-Burmeister function
$$\tilde{\phi}(\mu, a, b) = \sqrt{a^2 + b^2 + 2\mu^2} - a - b,$$
where $\mu > 0$ and $(a, b) \in \mathbb{R}^2$. Let
$$\tilde{\Phi}(x, \mu) = \big(\tilde{\phi}(\mu, x_1, \tilde{F}_1(x, \mu)), \ldots, \tilde{\phi}(\mu, x_n, \tilde{F}_n(x, \mu))\big)^T, \qquad \tilde{\Psi}(x, \mu) = \frac{1}{2}\|\tilde{\Phi}(x, \mu)\|^2,$$
where $\tilde{F}$ is a smoothing function of F and $\tilde{\Psi}$ is the smoothing function of $\Psi$.
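To make these constructions concrete, the following minimal Python sketch (our illustration, not the authors' implementation) evaluates the Fischer-Burmeister function, its smoothing, and the value functions $\Psi$ and $\tilde{\Psi}$; the forward-difference gradient is an assumption made for the sketch, standing in for the analytic gradient.

```python
import numpy as np

def fb(a, b):
    # Fischer-Burmeister function: phi(a, b) = sqrt(a^2 + b^2) - a - b
    return np.sqrt(a**2 + b**2) - a - b

def fb_smooth(a, b, mu):
    # Smoothing Fischer-Burmeister function (assumed standard form):
    # phi_mu(a, b) = sqrt(a^2 + b^2 + 2*mu^2) - a - b
    return np.sqrt(a**2 + b**2 + 2.0 * mu**2) - a - b

def Psi(F, x):
    # Value function Psi(x) = 0.5 * ||Phi(x)||^2, Phi_i(x) = phi(x_i, F_i(x))
    r = fb(x, F(x))
    return 0.5 * float(np.dot(r, r))

def Psi_smooth(F_smooth, x, mu):
    # Smoothed value function; F_smooth(x, mu) is a smoothing function of F
    r = fb_smooth(x, F_smooth(x, mu), mu)
    return 0.5 * float(np.dot(r, r))

def grad_Psi_smooth(F_smooth, x, mu, h=1e-7):
    # Forward-difference approximation of grad_x Psi_smooth(x, mu); an
    # analytic gradient would use the Jacobians of fb_smooth and F_smooth.
    g = np.zeros_like(x, dtype=float)
    f0 = Psi_smooth(F_smooth, x, mu)
    for i in range(x.size):
        e = np.zeros_like(x, dtype=float)
        e[i] = h
        g[i] = (Psi_smooth(F_smooth, x + e, mu) - f0) / h
    return g
```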
The rest of this work is organized as follows. In Section 2, we describe the new smoothing conjugate gradient method for the solution of Problem (1) and show that this method has global convergence properties under fairly mild assumptions. In Section 3, preliminary numerical results and some discussions of this method are presented.
2. The New Smoothing Conjugate Gradient Method and its Global Convergence
The new smoothing conjugate gradient direction is defined as
$$d_k = \begin{cases} -\nabla_x \tilde{\Psi}(x_k, \mu_k), & k = 0, \\ -\nabla_x \tilde{\Psi}(x_k, \mu_k) + \beta_k d_{k-1}, & k \geq 1, \end{cases} \qquad (2)$$
where $\beta_k$ is a scalar. Here, we use the hybrid $\beta_k$ (see [24]), which is defined as
$$\beta_k = \max\left\{0, \min\left\{\beta_k^{HS}, \beta_k^{DY}\right\}\right\}, \qquad (3)$$
where $\beta_k^{HS} = \frac{g_k^T y_{k-1}}{d_{k-1}^T y_{k-1}}$, $\beta_k^{DY} = \frac{\|g_k\|^2}{d_{k-1}^T y_{k-1}}$, $g_k = \nabla_x \tilde{\Psi}(x_k, \mu_k)$ and $y_{k-1} = g_k - g_{k-1}$. When $d_{k-1}^T y_{k-1} = 0$, we set $\beta_k = 0$. The line search is an Armijo-type line search (see [25]),
$$\tilde{\Psi}(x_k + \rho^m d_k, \mu_k) \leq \tilde{\Psi}(x_k, \mu_k) + \sigma \rho^m \nabla_x \tilde{\Psi}(x_k, \mu_k)^T d_k, \qquad (4)$$
$$\alpha_k = \rho^{m_k}, \qquad (5)$$
where $\rho, \sigma \in (0, 1)$ and $m_k$ is the smallest nonnegative integer $m$ that satisfies Equation (4).
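A sketch of the conjugate gradient parameter and the Armijo-type backtracking corresponding to Equations (2)-(5) is given below; the hybrid formula for $\beta_k$ follows our reading of [24], the default values of `rho` and `sigma` are illustrative assumptions rather than the paper's settings, and `Psi_smooth` is the helper defined in the previous sketch.

```python
import numpy as np

def beta_hybrid(g_new, g_old, d_old):
    # Hybrid CG parameter in the spirit of [24]:
    # beta_k = max{0, min{beta_HS, beta_DY}}; beta_k = 0 when the
    # common denominator d_{k-1}^T y_{k-1} vanishes.
    y = g_new - g_old
    denom = float(np.dot(d_old, y))
    if denom == 0.0:
        return 0.0
    beta_hs = float(np.dot(g_new, y)) / denom
    beta_dy = float(np.dot(g_new, g_new)) / denom
    return max(0.0, min(beta_hs, beta_dy))

def armijo_step(F_smooth, x, d, g, mu, rho=0.5, sigma=1e-4, max_m=60):
    # Armijo-type line search: alpha_k = rho^m for the smallest nonnegative
    # integer m giving sufficient decrease of the smoothed value function.
    f0 = Psi_smooth(F_smooth, x, mu)
    gTd = float(np.dot(g, d))
    alpha = 1.0
    for _ in range(max_m):
        if Psi_smooth(F_smooth, x + alpha * d, mu) <= f0 + sigma * alpha * gTd:
            break
        alpha *= rho
    return alpha
```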
Then, we give the new smoothing conjugate gradient method for solving Equation (1).
| Algorithm 1: Smoothing Conjugate Gradient Method |
| (S.0) Choose $x_0 \in \mathbb{R}^n$, $\mu_0 > 0$, $\epsilon \geq 0$, $\rho, \sigma \in (0, 1)$, $\gamma_1 > 0$ and $\gamma \in (0, 1)$. Set $d_0 = -\nabla_x \tilde{\Psi}(x_0, \mu_0)$ and $k := 0$. |
| (S.1) If $\|\nabla_x \tilde{\Psi}(x_k, \mu_k)\| \leq \epsilon$, then stop; otherwise, go to Step 2. |
| (S.2) Compute $\alpha_k$ by Equations (4) and (5), where $d_k$ is given by Equation (2) and $\beta_k$ is given by Equation (3). Let $x_{k+1} = x_k + \alpha_k d_k$. |
| (S.3) If $\|\nabla_x \tilde{\Psi}(x_{k+1}, \mu_k)\| \geq \gamma_1 \mu_k$, then set $\mu_{k+1} = \mu_k$; otherwise, set $\mu_{k+1} = \gamma \mu_k$. |
| (S.4) Let $k := k + 1$ and go back to Step 1. |
| Algorithm 2: Algorithm Framework of Algorithm 1 |
| PROGRAM ALGORITHM |
| INITIALIZE $x_0 \in \mathbb{R}^n$, $\mu_0 > 0$, $\epsilon \geq 0$, $\gamma_1 > 0$, $\gamma \in (0, 1)$; |
| Set $k := 0$ and $d_0 = -\nabla_x \tilde{\Psi}(x_0, \mu_0)$. |
| WHILE the termination condition |
| $\|\nabla_x \tilde{\Psi}(x_k, \mu_k)\| \leq \epsilon$ is not met |
| Find step size $\alpha_k$ by Equations (4) and (5); |
| Set $x_{k+1} = x_k + \alpha_k d_k$; |
| Evaluate $\nabla_x \tilde{\Psi}(x_{k+1}, \mu_k)$ and $d_{k+1}$; |
| IF $\|\nabla_x \tilde{\Psi}(x_{k+1}, \mu_k)\| \geq \gamma_1 \mu_k$ THEN |
| Set $\mu_{k+1} = \mu_k$; |
| ELSE |
| Set $\mu_{k+1} = \gamma \mu_k$; |
| END IF |
| Set $k := k + 1$; |
| END WHILE |
| RETURN final solution $x_k$; |
| END ALGORITHM |
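Putting the pieces together, a compact driver mirroring Steps (S.0)-(S.4) might look as follows: a sketch under the same assumptions as the snippets above, reusing `grad_Psi_smooth`, `armijo_step` and `beta_hybrid`; the default parameter values are illustrative, not those used in Section 3.

```python
import numpy as np

def smoothing_cg(F_smooth, x0, mu0=1.0, eps=1e-5,
                 gamma1=1.0, gamma=0.5, max_iter=1000):
    # Sketch of Algorithm 1; F_smooth(x, mu) is a smoothing function of F.
    x = np.asarray(x0, dtype=float)
    mu = mu0
    g = grad_Psi_smooth(F_smooth, x, mu)
    d = -g                                            # (S.0): initial direction
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:                  # (S.1): termination test
            break
        alpha = armijo_step(F_smooth, x, d, g, mu)    # (S.2): Equations (4), (5)
        x = x + alpha * d
        g_new = grad_Psi_smooth(F_smooth, x, mu)
        if np.linalg.norm(g_new) < gamma1 * mu:       # (S.3): smoothing update
            mu = gamma * mu
            g_new = grad_Psi_smooth(F_smooth, x, mu)  # refresh gradient at new mu
        d = -g_new + beta_hybrid(g_new, g, d) * d     # Equation (2): next direction
        g = g_new
    return x
```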
In the following, we give the analysis of the global convergence of Algorithm 1. (Algorithm 2 is the algorithmic framework of Algorithm 1.) Before doing this, we need the following basic assumptions.
Assumption 1.
(i) For any $x_0 \in \mathbb{R}^n$ and $\mu > 0$, the level set $L(x_0, \mu) = \{x \in \mathbb{R}^n : \tilde{\Psi}(x, \mu) \leq \tilde{\Psi}(x_0, \mu)\}$ is bounded.
(ii) $\nabla_x \tilde{\Psi}(\cdot, \mu)$ is Lipschitz continuous; that is, there exists a constant $L > 0$ such that
$$\|\nabla_x \tilde{\Psi}(x, \mu) - \nabla_x \tilde{\Psi}(y, \mu)\| \leq L \|x - y\| \quad \text{for all } x, y \in \mathbb{R}^n.$$
Lemma 1.
Suppose that $\{d_k\}$ is an infinite sequence of directions generated by Algorithm 1. Then
$$\nabla_x \tilde{\Psi}(x_k, \mu_k)^T d_k < 0 \quad \text{for all } k. \qquad (6)$$
Proof
If $k = 0$, by Equation (2) and $d_0 = -\nabla_x \tilde{\Psi}(x_0, \mu_0)$, we know that Equation (6) holds. If $k \geq 1$, by Equation (5) and the definition of $\beta_k$ in Equation (3), we can conclude that Equation (6) holds.
Lemma 2.
Suppose that Assumption 1 holds. Then, there exists $\alpha_k > 0$ for every k, and
$$\tilde{\Psi}(x_{k+1}, \mu_k) \leq \tilde{\Psi}(x_k, \mu_k) - \omega \frac{\big(\nabla_x \tilde{\Psi}(x_k, \mu_k)^T d_k\big)^2}{\|d_k\|^2}, \qquad (7)$$
where ω is a positive constant.
Proof
By Step 0 of Algorithm 1, we know that $d_0 = -\nabla_x \tilde{\Psi}(x_0, \mu_0)$, i.e., $d_0$ is a descent direction. Suppose that Equation (6) is satisfied for any $k \geq 1$. We denote $g_k = \nabla_x \tilde{\Psi}(x_k, \mu_k)$. By Equation (2), we know that $\beta_k$ in Equation (3) is equivalent to (see [24])
$$\beta_k = \frac{g_k^T d_k}{g_{k-1}^T d_{k-1}}. \qquad (11)$$
Since Assumption 1 holds, Equations (10) and (11), together with the Mean Value Theorem, give the estimates in Equations (12) and (13). By Equations (12) and (13), we know that Equations (4) and (5) determine a positive stepsize $\alpha_k$, and there must exist a constant $\omega > 0$ such that Equation (7) holds. Equation (5) also implies that Equation (8) holds. Hence, the proof is completed.
Theorem 1.
Suppose that, for any fixed $\mu > 0$, $\tilde{\Psi}(\cdot, \mu)$ satisfies Assumption 1. Then the infinite sequence $\{x_k\}$ generated by Algorithm 1 satisfies
$$\lim_{k \to \infty} \mu_k = 0 \quad \text{and} \quad \liminf_{k \to \infty} \|\nabla_x \tilde{\Psi}(x_k, \mu_k)\| = 0. \qquad (14)$$
Proof
Denote $K = \{k : \mu_{k+1} = \gamma \mu_k\}$; we first show that K is an infinite set.
If K is a finite set, there exists an integer $\bar{k}$ such that
$$\|\nabla_x \tilde{\Psi}(x_{k+1}, \mu_k)\| \geq \gamma_1 \mu_k \qquad (15)$$
for all $k \geq \bar{k}$. We also have $\mu_k = \mu_{\bar{k}} = \bar{\mu}$ for all $k \geq \bar{k}$ and
$$\|\nabla_x \tilde{\Psi}(x_k, \bar{\mu})\| \geq \gamma_1 \bar{\mu} \quad \text{for all } k > \bar{k}.$$
In the following, we will prove that
$$\liminf_{k \to \infty} \|\nabla_x \tilde{\Psi}(x_k, \bar{\mu})\| = 0. \qquad (16)$$
By Lemma 1 and Assumption 1, we know that $\{\tilde{\Psi}(x_k, \bar{\mu})\}$ is a monotonically decreasing sequence whose limit exists. Summing Equation (7) over k, we get
$$\sum_{k \geq \bar{k}} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < +\infty, \qquad (17)$$
where $g_k = \nabla_x \tilde{\Psi}(x_k, \bar{\mu})$. Due to Equation (2), we also have
$$d_k + g_k = \beta_k d_{k-1}. \qquad (18)$$
Squaring both sides of Equation (18), we get
$$\|d_k\|^2 = \beta_k^2 \|d_{k-1}\|^2 - 2 g_k^T d_k - \|g_k\|^2. \qquad (19)$$
Dividing both sides of Equation (19) by $(g_k^T d_k)^2$ and using Equation (11), we have
$$\frac{\|d_k\|^2}{(g_k^T d_k)^2} = \frac{\|d_{k-1}\|^2}{(g_{k-1}^T d_{k-1})^2} - \left( \frac{1}{\|g_k\|} + \frac{\|g_k\|}{g_k^T d_k} \right)^2 + \frac{1}{\|g_k\|^2} \leq \frac{\|d_{k-1}\|^2}{(g_{k-1}^T d_{k-1})^2} + \frac{1}{\|g_k\|^2}. \qquad (20)$$
Denote
$$t_k = \frac{\|d_k\|^2}{(g_k^T d_k)^2}.$$
Then Equation (20) gives
$$t_k \leq t_{\bar{k}} + \sum_{i=\bar{k}+1}^{k} \frac{1}{\|g_i\|^2}.$$
If Equation (16) does not hold, there exists $\epsilon > 0$ such that
$$\|g_k\| \geq \epsilon \quad \text{for all } k \geq \bar{k}. \qquad (21)$$
We obtain from Equations (20) and (21) that
$$t_k \leq t_{\bar{k}} + \frac{k - \bar{k}}{\epsilon^2}.$$
Because of
$$\frac{(g_k^T d_k)^2}{\|d_k\|^2} = \frac{1}{t_k} \geq \frac{1}{t_{\bar{k}} + (k - \bar{k})/\epsilon^2},$$
this provides
$$\sum_{k \geq \bar{k}} \frac{(g_k^T d_k)^2}{\|d_k\|^2} = +\infty,$$
which leads to a contradiction with Equation (17). This shows that Equation (16) holds. However, Equation (16) conflicts with Equation (15). This shows that K must be an infinite set, and hence
$$\lim_{k \to \infty} \mu_k = 0.$$
Then, we can assume that $K = \{k_0, k_1, k_2, \ldots\}$ with $k_0 < k_1 < k_2 < \cdots$. Hence, we get
$$\lim_{i \to \infty} \|\nabla_x \tilde{\Psi}(x_{k_i + 1}, \mu_{k_i})\| \leq \lim_{i \to \infty} \gamma_1 \mu_{k_i} = 0,$$
which completes the proof.
3. Numerical Tests
In this section, we intend to test the efficiency of Algorithm 1 by numerical experiments. We use Algorithm 1 to solve eleven examples, some of which are proposed here for the first time, while others are modified from examples in the references (see [26,27]).
One smoothing function of F is used in solving Examples 1–4; from Example 5 to Example 11, the smoothing function of F is defined as in [26].
Throughout the experiments, a common set of parameters is used, with the remaining parameters chosen separately for Examples 1–3 and 5–8, for Example 4, and for Examples 9–11. We choose $\|\nabla_x \tilde{\Psi}(x_k, \mu_k)\| \leq \epsilon$ as the termination criterion. Our numerical results are summarized in Tables 1–11, where all components of $x_0$ are randomly selected from 0 to 10. We randomly generate 10 initial points and then implement Algorithm 1 to solve the test problem from each initial point. The numerical results of Examples 10 and 11 show that Algorithm 1 is suitable for solving large-scale problems.
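As a usage illustration only (a hypothetical one-dimensional instance, not one of the test examples below): take $F(x) = |x - 1|$, which is nonsmooth and for which both $x = 0$ and $x = 1$ solve Equation (1), and smooth it by the standard approximation $\sqrt{t^2 + \mu^2}$ of $|t|$ (an assumed choice of smoothing function).

```python
import numpy as np

def F_abs_smooth(x, mu):
    # Smoothing of F(x) = |x - 1| via sqrt(t^2 + mu^2) (assumed choice)
    return np.sqrt((x - 1.0)**2 + mu**2)

# smoothing_cg is the driver sketched after Algorithm 2.
x_star = smoothing_cg(F_abs_smooth, np.array([5.0]))
print(x_star)  # expected to be close to a solution of the NCP (here, 1)
```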
Example 1.
We consider Equation (1) with a one-dimensional F. The exact solutions of this problem are 0 and 0.5.
Table 1.
Number of iterations and the final value of $\Psi$.
| $x_0$ | $x^*$ | $\Psi(x^*)$ | k |
|---|---|---|---|
| 0.9713 | 5.053658e−1 | 5.636783e−5 | 1 |
| 1.7119 | 4.977618e−1 | 9.929266e−6 | 11 |
| 2.7850 | −1.295343e−2 | 8.495830e−5 | 8 |
| 3.1710 | 5.422178e−3 | 1.461954e−5 | 8 |
| 4.0014 | 5.562478e−3 | 1.538368e−5 | 8 |
| 5.4688 | −7.521520e−3 | 2.849662e−5 | 7 |
| 6.5574 | 5.926470e−3 | 1.745635e−5 | 10 |
| 7.9221 | 1.276205e−2 | 8.037197e−5 | 7 |
| 8.4913 | −1.994188e−3 | 1.992344e−6 | 7 |
| 9.3399 | 1.723553e−3 | 1.482749e−6 | 7 |
Example 2.
We consider Equation (1). This problem has three exact solutions.
Table 2.
Number of iterations and the final value of $\Psi$.
| $x_0$ | $x^*$ | $\Psi(x^*)$ | k |
|---|---|---|---|
| 2.162427e−5 | 7 | ||
| 2.974655e−5 | 13 | ||
| 7.106107e−6 | 5 | ||
| 6.744680e−5 | 12 | ||
| 3.327241e−5 | 13 | ||
| 3.348105e−5 | 15 | ||
| 9.857281e−5 | 6 | ||
| 6.417282e−5 | 10 | ||
| 1.946526e−5 | 13 | ||
| 4.037476e−5 | 8 |
Example 3.
We consider Equation (1); one exact solution of this problem is known.
Table 3.
Number of iterations and the final value of $\Psi$.
| $x_0$ | $x^*$ | $\Psi(x^*)$ | k |
|---|---|---|---|
| 9.785244e−5 | 21 | ||
| 6.577107e−5 | 25 | ||
| 7.552798e−5 | 19 | ||
| 5.759549e−5 | 26 | ||
| 9.612693e−5 | 36 | ||
| 9.216436e−5 | 31 | ||
| 9.798255e−5 | 26 | ||
| 7.570081e−5 | 28 | ||
| 6.774521e−5 | 24 | ||
| 7.903206e−5 | 25 |
Example 4.
We consider Equation (1); four exact solutions of this problem are known.
Table 4.
Number of iterations and the final value of $\Psi$.
| $x_0$ | $x^*$ | $\Psi(x^*)$ | k |
|---|---|---|---|
| 4.436641e−4 | 21 |
| 5.987181e−4 | 37 | ||
| 3.790585e−4 | 23 | ||
| 9.638295e−4 | 17 | ||
| 3.052065e−4 | 13 | ||
| 7.098848e−4 | 21 | ||
| 4.242339e−4 | 25 | ||
| 8.913504e−4 | 20 | ||
| 5.981987e−4 | 26 | ||
| 9.794091e−4 | 21 |
Example 5.
We consider Equation (1) with a one-dimensional F. There are two exact solutions, 0 and 2.
Table 5.
Number of iterations and the final value of $\Psi$.
| $x_0$ | $x^*$ | $\Psi(x^*)$ | k |
|---|---|---|---|
| 0.2922 | 2.0024 | 2.787626e−6 | 5 |
| 1.7071 | 1.9894 | 5.621467e−5 | 3 |
| 2.2766 | 2.0075 | 2.836408e−5 | 3 |
| 3.1110 | 2.0001 | 2.938429e−9 | 1 |
| 4.3570 | 2.0101 | 5.061011e−5 | 4 |
| 5.7853 | 2.0109 | 5.937701e−5 | 5 |
| 6.2406 | 1.9871 | 8.325445e−5 | 6 |
| 7.1122 | 2.0116 | 6.635145e−5 | 3 |
| 8.8517 | 1.9970 | 4.557770e−6 | 6 |
| 9.7975 | 1.9928 | 2.607803e−5 | 4 |
Example 6.
We consider Equation (1); the exact solution of this problem is known.
Table 6.
Number of iterations and the final value of $\Psi$.
| $x_0$ | $x^*$ | $\Psi(x^*)$ | k |
|---|---|---|---|
| 9.228501e−5 | 29 | ||
| 9.446089e−5 | 22 | ||
| 9.546345e−5 | 36 | ||
| 7.428734e−5 | 34 | ||
| 6.591548e−5 | 39 | ||
| 2.433055e−5 | 25 | ||
| 6.280631e−5 | 32 | ||
| 9.476538e−5 | 21 | ||
| 5.739573e−5 | 33 | ||
| 9.575717e−5 | 22 |
Example 7.
We consider Equation (1); the exact solution of this problem is known.
Table 7.
Number of iterations and the final value of $\Psi$.
| $x_0$ | $\Psi(x^*)$ | k |
|---|---|---|
| 9.719070e−5 | 27 | |
| 9.957464e−5 | 45 | |
| 8.965459e−5 | 39 | |
| 9.644608e−5 | 47 | |
| 9.737485e−5 | 37 | |
| 9.240212e−5 | 47 | |
| 8.801643e−5 | 44 | |
| 8.946151e−5 | 44 | |
| 8.815143e−5 | 45 | |
| 6.697806e−5 | 39 |
Example 8.
We consider Equation (1); the exact solution of this problem is known.
Table 8.
Number of iterations and the final value of $\Psi$.
| $x_0$ | $\Psi(x^*)$ | k |
|---|---|---|
| 3.670149e−5 | 4 | |
| 4.216994e−5 | 4 | |
| 6.167554e−5 | 7 | |
| 3.838925e−5 | 4 | |
| 6.272257e−5 | 6 | |
| 7.097729e−5 | 5 | |
| 2.693701e−5 | 4 | |
| 9.021922e−5 | 7 | |
| 4.687797e−5 | 5 | |
| 8.657057e−5 | 6 |
Example 9.
We consider Equation (1); the exact solution of this problem is known.
Table 9.
Number of iterations and the final value of $\Psi$.
| $x_0$ | $\Psi(x^*)$ | k |
|---|---|---|
| 9.777886e−3 | 8 | |
| 3.912481e−3 | 5 | |
| 9.081688e−3 | 3 | |
| 6.868711e−3 | 7 | |
| 5.318627e−3 | 4 | |
| 7.203761e−3 | 9 | |
| 9.500345e−3 | 4 | |
| 9.421194e−3 | 4 | |
| 6.718722e−3 | 7 | |
| 3.494877e−3 | 4 |
Example 10.
We consider Equation (1), where n represents the problem dimension; the solution is known in closed form (λ is no more than 6). In this problem, we intend to check the efficiency of Algorithm 1 when the dimension of the test problem is 50, 100 and 200, with ten randomly selected initial values for each dimension.
Table 10.
Number of iterations, the final value of $\Psi$, and the dimension of the test problem.
| n = 50 | | n = 100 | | n = 200 | |
|---|---|---|---|---|---|
| $\Psi(x^*)$ | k | $\Psi(x^*)$ | k | $\Psi(x^*)$ | k |
| 1.625691e−3 | 9 | 9.444914e−3 | 11 | 9.897292e−3 | 15 |
| 4.082584e−3 | 7 | 5.358975e−5 | 9 | 3.937758e−4 | 5 |
| 6.082289e−3 | 7 | 4.734809e−3 | 9 | 5.800944e−3 | 16 |
| 2.042082e−3 | 9 | 3.249863e−3 | 6 | 3.289200e−3 | 11 |
| 3.765484e−3 | 9 | 6.587880e−3 | 10 | 4.674659e−3 | 10 |
| 7.553578e−3 | 13 | 2.632872e−3 | 10 | 1.450852e−3 | 13 |
| 4.208302e−4 | 14 | 4.177174e−3 | 3 | 9.461359e−3 | 16 |
| 4.250316e−3 | 9 | 9.744427e−3 | 7 | 3.778464e−3 | 15 |
| 2.634965e−5 | 10 | 5.854241e−6 | 10 | 1.501579e−3 | 8 |
| 3.445498e−3 | 11 | 4.209193e−3 | 6 | 1.984871e−3 | 25 |
Example 11.
We consider Equation (1); the problem has a unique solution. We randomly selected ten initial values for each of the dimensions n = 100, 200 and 500.
Table 11.
Number of iterations, the final value of $\Psi$, and the dimension of the test problem.
| n = 100 | | n = 200 | | n = 500 | |
|---|---|---|---|---|---|
| $\Psi(x^*)$ | k | $\Psi(x^*)$ | k | $\Psi(x^*)$ | k |
| 9.152621e−3 | 17 | 9.040255e−3 | 9 | 7.682471e−3 | 14 |
| 4.383679e−3 | 15 | 6.976857e−3 | 9 | 8.861191e−3 | 15 |
| 5.172738e−3 | 15 | 6.902897e−3 | 10 | 8.892858e−3 | 12 |
| 5.796109e−3 | 12 | 7.686345e−3 | 12 | 9.210427e−3 | 14 |
| 7.613768e−3 | 16 | 8.400876e−3 | 10 | 9.843579e−3 | 10 |
| 5.398565e−3 | 12 | 8.066523e−3 | 10 | 9.717126e−3 | 13 |
| 3.403516e−3 | 15 | 9.097423e−3 | 12 | 8.999900e−3 | 15 |
| 8.701785e−3 | 13 | 7.208014e−3 | 11 | 9.970099e−3 | 12 |
| 8.302172e−3 | 11 | 7.822304e−3 | 13 | 9.391355e−3 | 15 |
| 6.610621e−3 | 13 | 7.278306e−3 | 9 | 9.624919e−3 | 10 |
4. Conclusions
In this paper, we have presented a new smoothing conjugate gradient method for nonlinear nonsmooth complementarity problems. The method is based on a smoothing Fischer-Burmeister function and an Armijo-type line search. With careful analysis, we have shown that our method is globally convergent. Numerical tests illustrate that the method can efficiently solve the given test problems; therefore, the new method is promising. We might consider more effective ways of choosing smoothing functions and line search methods for our method; this remains under investigation.
Acknowledgments
The authors wish to thank the anonymous referees for their helpful comments and suggestions, which led to great improvement of the paper. This work is supported by the National Natural Science Foundation of China (Nos. 11101231, 11401331), the Natural Science Foundation of Shandong (No. ZR2015AQ013) and Key Issues of Statistical Research of Shandong Province (KT15173).
Author Contributions
Ajie Chu prepared the manuscript. Yixiao Su assisted in the work. Shouqiang Du was in charge of the overall research of the paper.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Facchinei, F.; Pang, J.S. Finite-Dimensional Variational Inequalities and Complementarity Problems; Springer-Verlag: New York, NY, USA, 2003. [Google Scholar]
- Luca, T.D.; Facchinei, F.; Kanzow, C. A semismooth equation approach to the solution of nonlinear complementarity problems. Math. Program. 1996, 75, 407–439. [Google Scholar] [CrossRef]
- Ferris, M.C.; Pang, J.S. Engineering and economic applications of complementarity problems. SIAM Rev. 1997, 39, 669–713. [Google Scholar] [CrossRef]
- Zhao, Y.B.; Li, D. A new path-following algorithm for nonlinear complementarity problems. Comp. Optim. Appl. 2005, 34, 183–214. [Google Scholar] [CrossRef]
- Yu, Q.; Huang, C.C.; Wang, X.J. A combined homotopy interior point method for the linear complementarity problem. Appl. Math. Comp. 2006, 179, 696–701. [Google Scholar] [CrossRef]
- Wang, Y.; Zhao, J.X. An algorithm for a class of nonlinear complementarity problems with non-Lipschitzian functions. Appl. Numer. Math. 2014, 82, 68–79. [Google Scholar] [CrossRef]
- Fischer, A.; Jiang, H. Merit functions for complementarity and related problems: A survey. Comp. Optim. Appl. 2000, 17, 159–182. [Google Scholar] [CrossRef]
- Chen, J.S.; Pan, S.H. A family of NCP functions and a descent method for the nonlinear complementarity problem. Comp. Optim. Appl. 2008, 40, 389–404. [Google Scholar] [CrossRef]
- Luca, T.D.; Facchinei, F.; Kanzow, C. A theoretical and numerical comparison of some semismooth algorithms for complementarity problems. Comp. Optim. Appl. 2000, 16, 173–205. [Google Scholar] [CrossRef]
- Wu, C.Y. The Conjugate Gradient Method for Solving Nonlinear Complementarity Problems; Inner Mongolia University: Hohhot, China, 2012. [Google Scholar]
- Qi, L.; Sun, D. Nonsmooth and smoothing methods for nonlinear complementarity problems and variational inequalities. Encycl. Optim. 2009, 1, 2671–2675. [Google Scholar]
- Facchinei, F.; Kanzow, C. A nonsmooth inexact Newton method for the solution of large-scale nonlinear complementarity problems. Math. Program. 1997, 76, 493–512. [Google Scholar] [CrossRef]
- Yang, Y.F.; Qi, L. Smoothing trust region methods for nonlinear complementarity problems with P0-functions. Ann. Oper. Res. 2005, 133, 99–117. [Google Scholar] [CrossRef]
- Chen, B.; Xiu, N. A global linear and local quadratic non-interior continuation method for nonlinear complementarity problems based on Chen-Mangasarian smoothing functions. SIAM J. Optim. 1999, 9, 605–623. [Google Scholar] [CrossRef]
- Chen, B.; Chen, X.; Kanzow, C. A penalized Fischer-Burmeister NCP-function: Theoretical investigation and numerical results. Math. Program. 2000, 88, 211–216. [Google Scholar] [CrossRef]
- Kanzow, C.; Kleinmichel, H. A new class of semismooth Newton-type methods for nonlinear complementarity problems. Comp. Optim. Appl. 1998, 11, 227–251. [Google Scholar] [CrossRef]
- Chen, X.; Qi, L.; Sun, D. Global and superlinear convergence of the smoothing Newton method and its application to general box constrained variational inequalities. Math. Comp. 1998, 67, 519–540. [Google Scholar] [CrossRef]
- Kanzow, C.; Pieper, H. Jacobian smoothing methods for general nonlinear complementarity problems. SIAM J. Optim. 1999, 9, 342–372. [Google Scholar] [CrossRef]
- Chen, B.; Harker, P.T. Smoothing approximations to nonlinear complementarity problems. SIAM J. Optim. 1997, 7, 403–420. [Google Scholar] [CrossRef]
- Wu, C.Y.; Chen, G.Q. A smoothing conjugate gradient algorithm for nonlinear complementarity problems. J. Syst. Sci. Syst. Eng. 2008, 17, 460–472. [Google Scholar] [CrossRef]
- Clarke, F.H. Optimization and Nonsmooth Analysis; John Wiley and Sons, Inc.: New York, NY, USA, 1983. [Google Scholar]
- Chen, X.J. Smoothing methods for nonsmooth, nonconvex minimization. Math. Program. 2012, 134, 71–99. [Google Scholar] [CrossRef]
- Fischer, A. A special Newton-type optimization method. Optimization 1992, 24, 269–284. [Google Scholar] [CrossRef]
- Dai, Y.H.; Yuan, Y. An efficient hybrid conjugate gradient method for unconstrained optimization. Ann. Oper. Res. 2001, 103, 33–47. [Google Scholar] [CrossRef]
- Dai, Y.H. Conjugate gradient methods with Armijo-type line searches. Acta Math. Appl. Sin. 2002, 18, 123–130. [Google Scholar] [CrossRef]
- Xu, S. Smoothing method for minimax problem. Comp. Optim. Appl. 2001, 20, 267–279. [Google Scholar] [CrossRef]
- Haarala, M. Large-Scale Nonsmooth Optimization: Variable Metric Bundle Method with Limited Memory. Ph.D. Thesis, University of Jyväskylä, Jyväskylä, Finland, 13 November 2004. [Google Scholar]
© 2015 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).