A New Smoothing Conjugate Gradient Method for Solving Nonlinear Nonsmooth Complementarity Problems

In this paper, by using the smoothing Fischer-Burmeister function, we present a new smoothing conjugate gradient method for solving nonlinear nonsmooth complementarity problems. The line search we use guarantees that each search direction is a descent direction. Under suitable conditions, the new smoothing conjugate gradient method is proved to be globally convergent. Finally, preliminary numerical experiments show that the new method is efficient.


Introduction
We consider the nonlinear nonsmooth complementarity problem, which is to find a vector x ∈ R^n satisfying the conditions

x ≥ 0, F(x) ≥ 0, x^T F(x) = 0, (1)

where F : R^n → R^n is a locally Lipschitz continuous function. If F is continuously differentiable, then Problem (1) is called the nonlinear complementarity problem NCP(F). As is well known, Problem (1) is a very useful general mathematical model, closely related to mathematical programming, variational inequalities, fixed point problems and mixed strategy problems (see [1-13]). The methods for solving NCP(F) are classified into three categories: nonsmooth Newton methods, Jacobian smoothing methods and smoothing methods (see [14-19]). Conjugate gradient methods are widely and increasingly used for solving unconstrained optimization problems, especially in large-scale cases. However, only a few scholars have investigated how to use conjugate gradient methods to solve NCP(F) (see [10,20]), and in those papers F is required to be a continuously differentiable P_0 + R_0 function. In this paper, we present a new smoothing conjugate gradient method for solving Problem (1), where F is only required to be locally Lipschitz continuous.
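As a concrete illustration of conditions (1), the following sketch measures how far a point is from satisfying x ≥ 0, F(x) ≥ 0, x^T F(x) = 0. The map F(x) = x - 1 is a hypothetical toy example, not one of the paper's test problems.

```python
import numpy as np

def ncp_residual(x, F):
    """Total violation of x >= 0, F(x) >= 0, x^T F(x) = 0.

    Returns exactly 0 at a solution of Problem (1).
    """
    Fx = F(x)
    return (np.linalg.norm(np.minimum(x, 0.0))     # negative parts of x
            + np.linalg.norm(np.minimum(Fx, 0.0))  # negative parts of F(x)
            + abs(x @ Fx))                         # complementarity gap

# Toy map: F(x) = x - 1 componentwise; x* = (1, ..., 1) solves the NCP.
F = lambda x: x - 1.0
print(ncp_residual(np.array([1.0, 1.0]), F))  # 0.0 at the solution
print(ncp_residual(np.array([2.0, 0.5]), F))  # positive away from it
```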
In this paper, the generalized gradient of F at x is defined as

∂F(x) = conv{ lim_{x_i → x, x_i ∈ D_F} ∇F(x_i) },

where conv denotes the convex hull of a set and D_F denotes the set of points at which F is differentiable (see [21]). In the following, we introduce the definition of the smoothing function.
Definition 1 (see [22]). Let F : R^n → R^n be a locally Lipschitz continuous function. We call F̃ : R^n × R_+ → R^n a smoothing function of F if F̃(·, µ) is continuously differentiable on R^n for any fixed µ > 0, and

lim_{z → x, µ ↓ 0} F̃(z, µ) = F(x)

for any fixed x ∈ R^n. If, in addition,

{ lim_{x^k → x, µ_k ↓ 0} ∇_x F̃(x^k, µ_k) } ⊆ ∂F(x)

for any x ∈ R^n, we say that F̃ satisfies the gradient consistency property.
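Definition 1 can be checked numerically on a simple one-dimensional instance. The example below (the classical smoothing √(x² + µ²) of |x|, chosen here for illustration and not taken from the paper) verifies both the pointwise limit and gradient consistency at the kink x = 0:

```python
import numpy as np

def F(x):               # nonsmooth model function: absolute value
    return abs(x)

def F_smooth(x, mu):    # classical smoothing of |x|, C^1 for any mu > 0
    return np.sqrt(x * x + mu * mu)

def grad_F_smooth(x, mu):
    return x / np.sqrt(x * x + mu * mu)

# F_smooth(x, mu) -> F(x) as mu -> 0 for each fixed x
for mu in (1.0, 1e-2, 1e-6):
    print(mu, F_smooth(0.3, mu) - F(0.3))   # gap shrinks to 0

# Gradient consistency at x = 0: along x_k = mu_k the gradients tend to
# 1/sqrt(2), which lies inside the generalized gradient [-1, 1] of |x| at 0.
print(grad_F_smooth(1e-8, 1e-8))
```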
In the following sections of our paper, we also use the Fischer-Burmeister function (see [23]) and the smoothing Fischer-Burmeister function.

(1) The Fischer-Burmeister function

ϕ(a, b) = √(a² + b²) - a - b,

where ϕ : R² → R. From the definition of ϕ, we know that it is twice continuously differentiable everywhere except at (0, 0)^T. Moreover, it is a complementarity function, that is,

ϕ(a, b) = 0 ⟺ a ≥ 0, b ≥ 0, ab = 0.

Define H : R^n → R^n componentwise by H_i(x) = ϕ(x_i, F_i(x)). It is obvious that H(x) = 0 if and only if x is a solution of Problem (1). Then Problem (1) can be transformed into the following unconstrained optimization problem:

min_{x ∈ R^n} Ψ(x) := (1/2)‖H(x)‖².

We know that the optimal value of Ψ is zero, and Ψ is called the value function of Problem (1).
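The construction of the value function Ψ can be sketched directly. In the snippet below, the toy map F(x) = x - 1 is a hypothetical illustration; the Fischer-Burmeister function itself is as defined above.

```python
import numpy as np

def phi_fb(a, b):
    """Fischer-Burmeister function: zero iff a >= 0, b >= 0, a*b = 0."""
    return np.sqrt(a * a + b * b) - a - b

def Psi(x, F):
    """Value function 0.5 * ||H(x)||^2 with H_i(x) = phi(x_i, F_i(x))."""
    H = phi_fb(x, F(x))
    return 0.5 * (H @ H)

# Toy NCP with F(x) = x - 1: the solution x* = (1, 1) gives Psi = 0.
F = lambda x: x - 1.0
print(Psi(np.array([1.0, 1.0]), F))  # 0.0 at the solution
print(Psi(np.array([0.0, 2.0]), F))  # > 0 elsewhere
```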
(2) The smoothing Fischer-Burmeister function, whose i-th component is obtained from ϕ(x_i, F_i(x)) by smoothing both ϕ and F_i, where F̃_i(x, µ) is the smoothing function of F_i(x).

The rest of this work is organized as follows. In Section 2, we describe the new smoothing conjugate gradient method for the solution of Problem (1) and show that it has global convergence properties under fairly mild assumptions. In Section 3, preliminary numerical results and some discussions of this method are presented.
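The exact smoothed form is not reproduced above; a standard choice (assumed here, e.g. Kanzow-type smoothing with a 2µ² term) illustrates the two key properties: differentiability at the kink (0, 0) for µ > 0, and convergence to ϕ as µ ↓ 0.

```python
import numpy as np

def phi_fb(a, b):
    return np.sqrt(a * a + b * b) - a - b      # nonsmooth at (0, 0)

def phi_fb_mu(a, b, mu):
    """An assumed standard smoothing of the FB function (Kanzow-type)."""
    return np.sqrt(a * a + b * b + 2.0 * mu * mu) - a - b

# Smooth at the origin for mu > 0: the square root argument stays positive.
print(phi_fb_mu(0.0, 0.0, 0.1))                # sqrt(2) * 0.1

# Converges to the FB function as mu -> 0 at any fixed point.
for mu in (1.0, 1e-3, 1e-6):
    print(mu, phi_fb_mu(0.4, 0.7, mu) - phi_fb(0.4, 0.7))
```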

The New Smoothing Conjugate Gradient Method and its Global Convergence
The new smoothing conjugate gradient direction is defined as

d_0 = -∇Ψ_{µ_0}(x_0),  d_k = -∇Ψ_{µ_k}(x_k) + β_k d_{k-1}  (k ≥ 1),

where β_k is a scalar; here, we use the β_k proposed in [24]. The line search is an Armijo-type line search (see [25]). Then, we give the new smoothing conjugate gradient method for solving Problem (1), based on Equations (4) and (5). In the following, we will analyze the global convergence of Algorithm 1 (Algorithm 2 is the algorithmic framework of Algorithm 1). Before doing so, we need the following basic assumption.

Assumption 1.
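The overall framework can be sketched as follows. This is a hedged illustration only: the paper's specific β_k (from [24]) and Armijo-type rule (from [25]) are not reproduced, so PRP+ and classical backtracking Armijo stand in for them, and the µ-update rule and toy problem are assumptions.

```python
import numpy as np

def smoothing_cg(psi, grad_psi, x0, mu0=1.0, gamma=0.1,
                 sigma=1e-4, rho=0.5, tol=1e-8, max_iter=2000):
    """Sketch of a smoothing CG method: minimize psi(., mu) by CG with
    backtracking Armijo, shrinking mu whenever the gradient is small."""
    x, mu = np.asarray(x0, dtype=float), mu0
    g = grad_psi(x, mu)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) <= gamma * mu:  # gradient small at this level
            if mu <= tol:
                break                        # mu and gradient both tiny
            mu *= gamma                      # shrink smoothing parameter
            g = grad_psi(x, mu)
            d = -g
            continue
        gTd = g @ d
        if gTd >= 0.0:                       # safeguard: restart if not descent
            d, gTd = -g, -(g @ g)
        # backtracking Armijo: psi(x + a d) <= psi(x) + sigma * a * g^T d
        alpha, f0 = 1.0, psi(x, mu)
        while psi(x + alpha * d, mu) > f0 + sigma * alpha * gTd:
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad_psi(x_new, mu)
        # PRP+ parameter, a stand-in for the beta_k of [24]
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Smoothed merit for the toy NCP with F(x) = x - 1, built componentwise from
# the assumed smoothed FB function sqrt(a^2 + b^2 + 2 mu^2) - a - b.
def psi_mu(x, mu):
    h = np.sqrt(x**2 + (x - 1.0)**2 + 2.0 * mu**2) - x - (x - 1.0)
    return 0.5 * (h @ h)

def grad_psi_mu(x, mu):
    r = np.sqrt(x**2 + (x - 1.0)**2 + 2.0 * mu**2)
    h = r - x - (x - 1.0)
    dh = (2.0 * x - 1.0) / r - 2.0   # d h_i / d x_i (diagonal Jacobian)
    return h * dh

x = smoothing_cg(psi_mu, grad_psi_mu, np.array([5.0, -3.0]))
print(np.round(x, 4))   # iterates approach the NCP solution (1, 1)
```

The design point worth noting is the two-level structure: for fixed µ the inner loop is an ordinary descent CG method on the smooth function Ψ_µ, while the outer update drives µ → 0 so that stationary points of Ψ_µ approach solutions of Problem (1).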
Lemma 2. Suppose that Assumption 1 holds. Then, for every k there exists α_k > 0 such that the corresponding descent inequality holds, where ω is a positive constant.
Theorem 1. Suppose that, for any fixed µ > 0, Ψ_µ satisfies Assumption 1. Then the infinite sequence {x^k} generated by Algorithm 1 satisfies the following.

Proof. If K is a finite set, there exists an integer k̄ such that k ∉ K for all k > k̄. We then have µ_k = µ_{k̄} =: μ̄ for all k > k̄. In the following, we will prove that

lim inf_{k→∞} ‖∇Ψ_μ̄(x^k)‖ = 0. (16)

By Lemma 1 and Assumption 1, we know that {Ψ_μ̄(x^k)} is a monotonically decreasing sequence, so its limit exists. Summing Equation (7) over k ≥ k̄ + 1, we get Equation (17). Due to Equation (2), we also have Equation (18). Squaring both sides of Equation (18), we get Equation (19). Dividing both sides of Equation (19) by ((∇Ψ_μ̄(x^k))^T d_k)², we obtain Equation (20). If Equation (16) does not hold, there exists γ > 0 such that

‖∇Ψ_μ̄(x^k)‖ ≥ γ for all k. (21)

From Equations (20) and (21) we obtain a bound which leads to a contradiction with Equation (17). This shows that Equation (16) holds. However, Equation (16) conflicts with Equation (15). Therefore, K must be an infinite set. Then, we can assume that K = {k_0, k_1, ...} with k_0 < k_1 < .... Hence, we get the desired limit, which completes the proof.

Numerical Tests
In this section, we test the efficiency of Algorithm 1 by numerical experiments. We use Algorithm 1 to solve eleven examples; some of them are proposed here for the first time, and some of them are modified from examples in the references (such as [26,27]).
The smoothing function F̃_i(x, µ) = √(F_i(x)² + µ) is used in solving Examples 1-4. From Example 5 to Example 11, the smoothing function of F is defined as in [26]. We solve Examples 1-11 with initial points x_0 whose components are randomly selected from 0 to 10. For each test problem, we randomly generate 10 initial points and then run Algorithm 1 from each of them. The numerical results of Examples 10-11 show that Algorithm 1 is suitable for solving large-scale problems.
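Assuming the component smoothing for Examples 1-4 has the form √(t² + µ) (the square root is a reconstruction; the original formula is partly garbled), a quick check of its approximation property:

```python
import numpy as np

def smooth_component(t, mu):
    """sqrt(t^2 + mu): a classical smoothing of |t|, C^1 for any mu > 0."""
    return np.sqrt(t * t + mu)

# The gap to the nonsmooth value vanishes as mu -> 0, and the derivative
# t / sqrt(t^2 + mu) stays bounded in [-1, 1].
for mu in (1.0, 1e-4, 1e-8):
    print(mu, smooth_component(0.3, mu) - abs(0.3))
```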
(0, 1/2, 0)^T is one of the exact solutions of this problem.
(λ is no more than 6). In this problem, we check the efficiency of Algorithm 1 when the dimension of the test problem is 50, 100, and 200. We randomly select ten initial values for each of n = 50, n = 100 and n = 200.

Conclusions
In this paper, we have presented a new smoothing conjugate gradient method for nonlinear nonsmooth complementarity problems. The method is based on a smoothing Fischer-Burmeister function and an Armijo-type line search. With careful analysis, we are able to show that our method is globally convergent. Numerical tests illustrate that the method can efficiently solve the given test problems; therefore, the new method is promising. We might consider more effective ways of choosing smoothing functions and line search methods for our method. This remains under investigation.

Table 1 .
Number of iterations and the final value of Ψ(x * ).

Table 2 .
Number of iterations and the final value of Ψ(x * ).

Table 3 .
Number of iterations and the final value of Ψ(x * ).

Table 4 .
Number of iterations and the final value of Ψ(x * ).

Table 5 .
Number of iterations and the final value of Ψ(x * ).

Table 7 .
Number of iterations and the final value of Ψ(x * ).

Table 10 .
Number of iterations, the final value of Ψ(x * ) and dimension of the test problem.