Article

# Nonsmooth Levenberg-Marquardt Type Method for Solving a Class of Stochastic Linear Complementarity Problems with Finitely Many Elements

by Zhimin Liu, Shouqiang Du * and Ruiying Wang
School of Mathematics and Statistics, Qingdao University, 308 Ningxia Road, Qingdao 266071, China
* Author to whom correspondence should be addressed.
Algorithms 2016, 9(4), 83; https://doi.org/10.3390/a9040083
Submission received: 18 March 2016 / Revised: 29 November 2016 / Accepted: 30 November 2016 / Published: 6 December 2016

## Abstract

The purpose of this paper is to solve a class of stochastic linear complementarity problems (SLCP) with finitely many elements. Based on a new stochastic linear complementarity problem function, a new semi-smooth least squares reformulation of the stochastic linear complementarity problem is introduced. To solve this semi-smooth least squares reformulation, we propose a feasible nonsmooth Levenberg–Marquardt-type method and establish its global convergence properties. Finally, numerical results illustrate that the proposed method is efficient for the related refinery production problem and for large-scale stochastic linear complementarity problems.

## 1. Introduction

Suppose $( Ω , F , P )$ is a probability space with $Ω ⊆ ℜ n$ and P is a known probability distribution. A stochastic linear complementarity problem takes the form:
$x ≥ 0 , M ( ω ) x + q ( ω ) ≥ 0 , x T [ M ( ω ) x + q ( ω ) ] = 0 a . e . ω ∈ Ω ,$
where $Ω ⊆ ℜ n$ is the underlying sample space, for given probability distribution P and ∀ $ω ∈ Ω$, $M ( ω ) ∈ ℜ n × n$ and $q ( ω ) ∈ ℜ n$. (1) is denoted as SLCP $( M ( · ) , q ( · ) )$ or SLCP; see [1,2,3,4]. If Ω has only one element, (1) is the standard linear complementarity problem (LCP), which has been studied in [5,6].
Generally, there is no x satisfying (1) for all $ω ∈ Ω$. In order to obtain a reasonable solution of Problem (1), several types of models have been proposed (one can see [1,2,3,4,7,8,9,10,11,12,13,14]). One of them is the expected value (EV) method (see [1]). The EV model is to find a vector $x ∈ ℜ n$, such that:
$0 ≤ x ⊥ M ¯ x + q ¯ ≥ 0 ,$
where $M ¯ = E [ M ( ω ) ]$, $q ¯ = E [ q ( ω ) ]$, and $E [ . ]$ is the mathematical expectation. Another is the expected residual minimization (ERM) method (see [2]). The ERM model is to find a vector $x ∈ ℜ + n$ that minimizes the expected residual function:
$min x ≥ 0 ∑ i = 1 n E [ φ ( x i , M i ( ω ) x + q i ( ω ) ) ] 2 ,$
where $M i ( ω ) ( i = 1 , ⋯ , n )$ is the i-th line of random matrix $M ( ω )$ and φ satisfies:
$φ ( a , b ) = 0 ⟺ a ≥ 0 , b ≥ 0 , a b = 0 .$
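For instance, the generalized Fischer–Burmeister function $ϕ p$ used below satisfies this characterization; here is a quick numerical check (our own sketch, not the paper's code):

```python
import numpy as np

def phi_p(a, b, p=2.0):
    # Generalized Fischer-Burmeister NCP function:
    # phi_p(a, b) = ||(a, b)||_p - (a + b); p = 2 gives the classical FB function.
    return (abs(a) ** p + abs(b) ** p) ** (1.0 / p) - (a + b)

# phi_p(a, b) = 0 exactly when a >= 0, b >= 0 and a * b = 0:
print(phi_p(0.0, 3.0, p=1.5))  # complementary pair, value ~ 0
print(phi_p(2.0, 0.0, p=1.5))  # complementary pair, value ~ 0
print(phi_p(1.0, 1.0, p=1.5))  # a * b != 0, value clearly nonzero
```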
In addition to [1,2], Luo and Lin first considered a quasi-Monte Carlo method in [8,9]; they proved that the ERM method is convergent under mild conditions and studied the properties of the ERM problems. In [10], Chen, Zhang and Fukushima considered the SLCP in which the expectation of the matrix is positive semi-definite; under a new error bound condition, they used the ERM formulation to solve the SLCP. In [11], the ERM formulation was also studied for the case where the involved function is a stochastic $R 0$ function. In [12], Zhou and Caccetta put forward a new model of the stochastic linear complementarity problem with only finitely many elements, together with a feasible semi-smooth Newton method. In [14], Ma considered a feasible semi-smooth Gauss–Newton method for solving this new stochastic linear complementarity problem. The stochastic linear complementarity problem with only finitely many elements is to find a vector $x ∈ ℜ n$, such that:
$x ≥ 0 , M ( ω i ) x + q ( ω i ) ≥ 0 , x T [ M ( ω i ) x + q ( ω i ) ] = 0 i = 1 , ⋯ , m , m > 1 ,$
where $Ω = { ω 1 , ω 2 , ⋯ , ω m } .$ Denote:
$M ¯ = ∑ i = 1 m p i M ( ω i ) , q ¯ = ∑ i = 1 m p i q ( ω i ) ,$
where $p i = P ( ω i ∈ Ω ) > 0 , i = 1 , ⋯ , m$. Then, (3) is equivalent to:
$x ≥ 0 , M ¯ x + q ¯ ≥ 0 , x T ( M ¯ x + q ¯ ) = 0 ,$
$M ( ω i ) x + q ( ω i ) ≥ 0 , i = 1 , ⋯ , m .$
where (4) is the linear complementarity problem $L C P ( M ¯ , q ¯ )$.
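Concretely, for the two-scenario data of Example 1 in Section 4, $M ¯$ and $q ¯$ are the probability-weighted sums (a quick sketch):

```python
import numpy as np

# Scenario data from Example 1 in Section 4: Omega = {0, 1}, p_i = 0.5,
# M(w) = [[1 - 2w, -1], [0, -w]], q(w) = [1, w + 1].
M = lambda w: np.array([[1.0 - 2.0 * w, -1.0], [0.0, -w]])
q = lambda w: np.array([1.0, w + 1.0])
omegas, probs = [0.0, 1.0], [0.5, 0.5]

# M_bar = sum_i p_i M(w_i), q_bar = sum_i p_i q(w_i)
M_bar = sum(p * M(w) for p, w in zip(probs, omegas))
q_bar = sum(p * q(w) for p, w in zip(probs, omegas))
print(M_bar)
print(q_bar)  # [1.  1.5]
```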
Many methods have been given for solving the nonlinear complementarity problem (NCP) and the linear complementarity problem (LCP); see [15,16,17,18,19,20,21,22,23,24,25,26,27,28,29]. For example, Kanzow and Petra presented a nonsmooth least squares reformulation of (4) in [15]. They defined $Φ : ℜ n → ℜ 2 n$ as:
$Φ ( x ) = λ ϕ F B ( x i , M ¯ i x + q ¯ i ) , i = 1 ⋯ n ⋮ ( 1 − λ ) ϕ + ( x i , M ¯ i x + q ¯ i ) , i = 1 ⋯ n ,$
where $λ ∈ ( 0 , 1 )$, $ϕ F B ( a , b ) = ∥ ( a , b ) ∥ 2 − ( a + b )$, $ϕ + ( a , b ) = a + b +$, $z + = m a x { 0 , z }$ for $z ∈ ℜ$. This least squares formulation can give a faster reduction of the complementarity gap $x T ( M ¯ x + q ¯ )$.
On the other hand, the authors of [16,18,19] studied the generalized Fischer–Burmeister function $ϕ p : ℜ 2 → ℜ$ given by $ϕ p ( a , b ) = ∥ ( a , b ) ∥ p − ( a + b )$ $( p ∈ ( 1 , + ∞ ) )$. Their work showed that this class of functions enjoys favorable properties similar to those of other NCP-functions. Numerical results for test problems from the mixed complementarity problem library (MCPLIB) showed that the associated descent method performs better as p decreases in $( 1 , + ∞ )$.
The main motivation of this paper is to use the advantages of [12,15,16,18,19] to solve the stochastic linear complementarity problem (3). Firstly, we put forward a new robust reformulation of the complementarity Problem (3). Then, based on this reformulation, we propose a feasible nonsmooth Levenberg–Marquardt-type method to solve (3). The numerical results in Section 4 show that the given Method 1 is efficient for the related refinery production problem and for large-scale stochastic linear complementarity problems. We also compare Method 1 with the scaled trust region method (STRM) in [20]; preliminary numerical experiments show that the numerical results of Method 1 are as good as those of the STRM method, while Method 1 needs fewer iterations. We also compare Method 1 with the method in [13] for solving the related refinery production problem; the preliminary numerical experiments indicate that Method 1 is quite robust. In other words, Method 1 has the following advantages.
• Faster reduction of the complementarity gap $x T ( M ¯ x + q ¯ )$.
• Flexible NCP functions $ϕ p$.
• Randomly-chosen initial points and less calculation.
We now give a new reformulation of (4):
$ϕ n ( x ) = λ ϕ p ( x i , M ¯ i x + q ¯ i ) , i = 1 ⋯ n ⋮ ( 1 − λ ) ϕ + ( x i , M ¯ i x + q ¯ i ) , i = 1 ⋯ n ,$
where $ϕ n : ℜ n → ℜ 2 n$, $λ ∈ ( 0 , 1 )$. Then, x is a solution of (4) ⟺$ϕ n ( x ) = 0$. Additionally, solving (4) is also equivalent to finding a solution of the unconstrained optimization problem:
$min x ∈ ℜ n Ψ ( x ) ,$
where:
$Ψ ( x ) = 1 2 ∥ ϕ n ( x ) ∥ 2 .$
Then, (4) and (5) can be rewritten as:
$F ( x , y ) = 0 , y ≥ 0 ,$ (9)
where:
$F ( x , y ) = ϕ n ( x ) M ( ω 1 ) x + q ( ω 1 ) − y 1 M ( ω 2 ) x + q ( ω 2 ) − y 2 ⋮ M ( ω m ) x + q ( ω m ) − y m .$
Additionally, $y = [ y 1 T , y 2 T , … , y m T ] T ∈ ℜ m n$ is a slack variable with $y i ∈ ℜ n , i = 1 , 2 , … , m$. Then, we know that $F ( x , y ) = 0$ has $( m + 2 ) n$ equations with $( m + 1 ) n$ variables.
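A direct way to assemble $F ( x , y )$ and confirm the count of $( m + 2 ) n$ equations in $( m + 1 ) n$ variables is the following sketch (our own helper names, using the Example 1 data from Section 4):

```python
import numpy as np

def phi_p(a, b, p=2.0):
    # Generalized Fischer-Burmeister function ||(a, b)||_p - (a + b).
    return (abs(a) ** p + abs(b) ** p) ** (1.0 / p) - (a + b)

def phi_plus(a, b):
    # phi_+(a, b) = a_+ * b_+, with z_+ = max(0, z).
    return max(a, 0.0) * max(b, 0.0)

def big_F(x, ys, Ms, qs, M_bar, q_bar, lam=0.5, p=2.0):
    # phi_n(x): 2n rows, followed by the m*n slack equations
    # M(w_i) x + q(w_i) - y_i, giving (m + 2) n equations in all.
    w = M_bar @ x + q_bar
    phi_n = np.array([lam * phi_p(a, b, p) for a, b in zip(x, w)]
                     + [(1.0 - lam) * phi_plus(a, b) for a, b in zip(x, w)])
    slack = np.concatenate([Mi @ x + qi - yi for Mi, qi, yi in zip(Ms, qs, ys)])
    return np.concatenate([phi_n, slack])

# Example 1 data: n = 2, m = 2, so F has (m + 2) n = 8 components.
Ms = [np.array([[1.0, -1.0], [0.0, 0.0]]), np.array([[-1.0, -1.0], [0.0, -1.0]])]
qs = [np.array([1.0, 1.0]), np.array([1.0, 2.0])]
M_bar = 0.5 * (Ms[0] + Ms[1]); q_bar = 0.5 * (qs[0] + qs[1])
Fz = big_F(np.zeros(2), [np.zeros(2), np.zeros(2)], Ms, qs, M_bar, q_bar)
print(Fz.shape)  # (8,)
```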
The remainder of this paper is organized as follows. In Section 2, we review some background definitions and some useful properties. In Section 3, we define a merit function $θ ( z ) = 1 2 ∥ F ( z ) ∥ 2$ and give a feasible nonsmooth Levenberg–Marquardt-type method. Some discussions about this method are also given. In Section 4, the numerical results indicate that the given method is efficient for solving stochastic linear complementarity problems, such as the related refinery production problem and the large-scale stochastic linear complementarity problems.

## 2. Preliminaries

In this section, we give some related definitions and properties; some can be found in [6,14,15,19,20,21,22,23], while others are given here for the first time.
Let $G : ℜ m ⟶ ℜ n$ be a locally-Lipschitzian function. The B-subdifferential of G at x is:
$∂ B G ( x ) = { V ∈ ℜ n × m | ∃ { x k } ⊆ D G : { x k } → x , G ′ ( x k ) → V } ,$
where $D G$ is the set of points at which G is differentiable and $G ′ ( x )$ is the Jacobian of G at a point $x ∈ ℜ m$.
The Clarke generalized Jacobian of G is defined as:
$∂ G ( x ) = c o n v { V ∈ ℜ n × m | ∃ { x k } ⊆ D G : { x k } → x , G ′ ( x k ) → V } .$
Furthermore,
$∂ C G ( x ) T = ∂ G 1 ( x ) × ⋯ × ∂ G n ( x )$
denotes the C-subdifferential of G at x.
If
$lim V ∈ ∂ G ( x + t h ′ ) , h ′ → h , t → 0 + V h ′$
exists for any $h ∈ ℜ m$, then G is called semi-smooth at x.
Definition 1.
([6]) Matrix $M ∈ ℜ n × n$ is called a:
(a)
$P 0$-matrix, if each of its principal minors is nonnegative.
(b)
P-matrix, if each of its principal minors is positive.
Definition 2.
([23]) Let $G : ℜ n ⟶ ℜ n$; the following statements are given:
(a)
We call G monotone, if $( x − y ) T ( G ( x ) − G ( y ) ) ≥ 0$, for $x , y ∈ ℜ n$.
(b)
G is a $P 0$ function, if:
$max 1 ≤ i ≤ n , x i ≠ y i ( x i − y i ) ( G i ( x ) − G i ( y ) ) ≥ 0$
for all $x , y ∈ ℜ n$ with $x ≠ y$.
Proposition 1.
([6]) $M ∈ ℜ n × n$ is a $P 0$-matrix ⟺ for every $x ≠ 0$, there exists an index i with $x i ≠ 0$ and $x i ( M x ) i ≥ 0$.
Proposition 2.
([21]) Suppose G is a locally-Lipschitzian function that is strongly semi-smooth at x and directionally differentiable in a neighborhood of x. Then:
$lim h → 0 , H ∈ ∂ G ( x + h ) ∥ G ( x + h ) − G ( x ) − H h ∥ ∥ h ∥ 2 < ∞ .$
Definition 3.
We call $x *$ an R-regular solution of the complementarity problem $x ≥ 0 , G ( x ) ≥ 0$, $x T G ( x ) = 0$, if $G ′ ( x * ) α α$ is nonsingular and $G ′ ( x * ) β β − G ′ ( x * ) β α G ′ ( x * ) α α − 1 G ′ ( x * ) α β$ is a P-matrix, where $α = { i | x i * > 0 , G i ( x * ) = 0 }$, $β = { i | x i * = 0 , G i ( x * ) = 0 }$, $γ = { i | x i * = 0 , G i ( x * ) > 0 }$.
Proposition 3.
([15]) The generalized gradient of $ϕ P$ at a point $( a , b )$ is defined as:
$∂ ϕ ( a , b ) = ( g a , g b ) = ( s g n ( a ) | a | p − 1 ∥ ( a , b ) ∥ p p − 1 − 1 , s g n ( b ) | b | p − 1 ∥ ( a , b ) ∥ p p − 1 − 1 ) , ( a , b ) ≠ ( 0 , 0 ) ( ϵ − 1 , ζ − 1 ) , ( a , b ) = ( 0 , 0 ) ,$
where $∥ ( ϵ , ζ ) ∥ P ≤ 1$. The generalized gradient of $ϕ +$ at a point $( a , b )$ is defined as $∂ ϕ + ( a , b ) = { ( b + ∂ a + , a + ∂ b + ) }$, where:
$∂ z + = 1$, if $z > 0$; $∂ z + = 0$, if $z < 0$; $∂ z + = [ 0 , 1 ]$, if $z = 0$.
Definition 4.
If $x * > 0$ and $M ¯ x * + q ¯ > 0$, then (4) is called strictly feasible at $x *$.
Proposition 4.
([15]) All $H ∈ ∂ C ϕ ( x )$ can be defined as:
$λ H 1 ( 1 − λ ) H 2 ,$
where $H 1 ⊆ d i a g { a i ( x ) } + d i a g { b i ( x ) } M ¯$, $H 2 ⊆ d i a g { a i ˜ ( x ) } + d i a g { b i ˜ ( x ) } M ¯$, $( a i ( x ) , b i ( x ) ) ∈ ∂ ϕ P ( x i , ( M ¯ x + q ¯ ) i )$, $( a i ˜ ( x ) , b i ˜ ( x ) ) ∈ ∂ ϕ + ( x i , ( M ¯ x + q ¯ ) i )$, $∂ ϕ P ( x i , ( M ¯ x + q ¯ ) i )$ and $∂ ϕ + ( x i , ( M ¯ x + q ¯ ) i )$ are given in Proposition 3.
Proposition 5.
([15]) Suppose that (4) is R-regular at $x *$, then, all elements of $∂ C ϕ ( x * )$ have full rank.
For any $( x , y ) ∈ ℜ ( m + 1 ) n ,$ we know that:
$∂ C F ( x , y ) = V ϕ n 0 0 ⋯ 0 M ( ω 1 ) − I 0 ⋯ 0 M ( ω 2 ) 0 − I ⋯ 0 ⋮ M ( ω m ) 0 0 ⋯ − I : V ϕ n ∈ ∂ C ϕ n ( x ) ,$
where I is the $n × n$ identity matrix. Hence, by Proposition 5, the following proposition holds.
Proposition 6.
Suppose (4) is R-regular at $x *$ and $( x * , y * )$ is a solution of (9). Then, all $V ∈ ∂ C F ( x * , y * )$ are nonsingular.
Proposition 7.
If (4) is R-regular at a solution $x *$, then there exist $α > 0 , β > 0$, such that $∥ ( H T H ) − 1 ∥ ≤ β$ for all $x ∈ ℜ n$ with $∥ x − x * ∥ ≤ α ,$ where $H ∈ ∂ C ϕ ( x )$.
Proof of Proposition 7.
The proof is similar to that of ([15], Lemma 2.5) and is therefore omitted here. ☐
In the following part of this paper, we rewrite Ψ as:
$Ψ ( x ) = 1 2 ∥ ϕ n ( x ) ∥ 2 = ∑ i = 1 n ψ ( x i , ( M ¯ x + q ¯ ) i ) ,$
where $ψ : ℜ 2 → ℜ$ is defined as:
$ψ ( a , b ) = 1 2 λ 2 ϕ P 2 ( a , b ) + 1 2 ( 1 − λ ) 2 a + 2 b + 2 .$
Proposition 8.
The function $Ψ : ℜ n → ℜ$ defined in (8) satisfies:
(a)
$∇ Ψ ( x ) = V T ϕ n ( x )$, for any $V ∈ ∂ C ϕ n ( x )$.
(b)
If $∇ Ψ ( x * ) = 0$ and $M ¯$ is a $P 0$ matrix, we know that $x *$ is a solution of (4).
(c)
If (4) is strictly feasible and $x ⟼ M ¯ x + q ¯$ is monotone, then the level sets $L ( c ) = { x ∈ ℜ n | Ψ ( x ) ≤ c }$ are compact for all $c ∈ ℜ$.
Proof of Proposition 8.
The proof is similar to the one of ([15], Theorem 2.7), so we skip the details here. ☐

## 3. The Feasible Nonsmooth Levenberg–Marquardt-Type Method and Its Convergence Analysis

In this section, we define a merit function $θ ( z ) = 1 2 ∥ F ( z ) ∥ 2$ and give a feasible nonsmooth Levenberg–Marquardt-type method. At the same time, we also give some discussions about this method.
Let $z = ( x , y ) ∈ ℜ ( m + 1 ) n$; define a merit function of (9) by:
$θ ( z ) = 1 2 ∥ F ( z ) ∥ 2 .$
If (3) has a solution, then solving (9) is equivalent to finding a global solution of the following constrained optimization problem:
$min θ ( z )$
$s . t . z ≥ 0 .$
If z satisfies:
$P Z [ z − ∇ θ ( z ) ] − z = 0 ,$
where $P Z ( · )$ is an orthogonal projection operator onto $Z = { z ∈ ℜ ( m + 1 ) n | z ≥ 0 }$, then z is a stationary point of (10). Obviously, (11) is equivalent to the following problem:
$∇ θ ( z ) ≥ 0 , z ≥ 0 , z T ∇ θ ( z ) = 0 .$
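As a sketch (our helper name, not from the paper), the stationarity condition (11) can be checked numerically by the size of the projection residual:

```python
import numpy as np

def is_stationary(z, grad, tol=1e-10):
    """Check P_Z[z - grad theta(z)] - z = 0, where P_Z is the projection
    onto the nonnegative orthant Z = {z : z >= 0}."""
    residual = np.maximum(z - grad, 0.0) - z
    return np.linalg.norm(residual) <= tol

# For theta(z) = 0.5 * ||z - 1||^2 on z >= 0, z = (1, 1, 1) is stationary:
z = np.ones(3)
print(is_stationary(z, z - np.ones(3)))  # True: gradient vanishes at z
```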
Lemma 1.
([14]) Let $P Z ( · )$ be an orthogonal projection operator onto $Z = { z ∈ ℜ ( m + 1 ) n | z ≥ 0 }$. The following statements hold:
(a)
$∥ P Z ( x ) − P Z ( y ) ∥ ≤ ∥ x − y ∥$ for all $x , y ∈ ℜ ( m + 1 ) n$.
(b)
For any $y ∈ Z$, $( P Z ( x ) − x ) T ( P Z ( x ) − y ) ≤ 0$ for all $x ∈ ℜ ( m + 1 ) n$.
Proposition 9.
([15]) The merit function θ has the following properties.
(a)
$θ ( z )$ is continuously differentiable on $ℜ ( m + 1 ) n$ with $∇ θ ( z ) = H T F ( z )$ for any $H ∈ ∂ C F ( z )$.
(b)
Assume $x ⟼ M ¯ x + q ¯$ is monotone, if $L C P ( M ¯ , q ¯ )$ has a strictly feasible solution, then for all $c > 0 ,$ we know that the level set:
$L ( c ) = { z ∈ ℜ ( m + 1 ) n | θ ( z ) ≤ c }$
is bounded.
For some monotone stochastic linear complementarity problems, the stationary points of (10) may not be solutions. For example, let $n = 1$, $m = 2$, $Ω = { ω 1 , ω 2 } = { 0 , 1 }$, $M ( ω 1 ) = M ( ω 2 ) = 1$, $q ( ω 1 ) = 1$, $q ( ω 2 ) = − 1$, and $p i = P { ω i ∈ Ω } = 0.5 , i = 1 , 2$ (see [12]).
A simple computation shows that the above problem is a monotone SLCP; all points $x ≥ 1$ are feasible, but this example has no solution. By:
$F ( x , y 1 , y 2 ) = ( λ ( ( 2 | x | p ) 1 / p − 2 x ) , ( 1 − λ ) ( x + ) 2 , x + 1 − y 1 , x − 1 − y 2 ) T ,$
one can verify that ( 0 , 1 , 0 ) is a stationary point of the constrained optimization problem:
$min 1 2 ∥ F ( x , y 1 , y 2 ) ∥ 2$
$s . t . x ≥ 0 , y 1 ≥ 0 , y 2 ≥ 0 .$
However, $x = 0$ is not a solution of this example.
Therefore, in the following proposition, we give some conditions for (3).
Proposition 10.
For monotone Problem (3), let $z * = ( x * , y * )$ be a stationary point of (10). If $M ( ω i ) x * + q ( ω i ) − y i * = 0 , i = 1 , 2 , … m ,$ then $x *$ is a solution of (3).
Proof of Proposition 10.
Assuming that $z * = ( x * , y * )$ is a stationary point of (10), if $M ( ω i ) x * + q ( ω i ) − y i * = 0 , i = 1 , 2 , … m ,$ by (12), we know that $x *$ is the stationary point of the following problem:
$min { Ψ ( x ) | x ≥ 0 } .$
Similar to the proof of Theorem 3 in [24], it can be shown that $x *$ is a solution of $Ψ ( x ) = 0 .$ Thus, $x *$ is a solution of (3). ☐
Now, we present the feasible nonsmooth Levenberg–Marquardt-type method for solving (3).
Method 1.
Choose $z 0 ∈ Z , σ ∈ ( 0 , 1 / 2 ) , ε ≥ 0 , β , γ ∈ ( 0 , 1 )$. Set $k = 0$.
Step 1.
If $| θ ( z k ) | ≤ ε$, stop.
Step 2.
Choose $H k ∈ ∂ C F ( z k )$, $ν k = ∥ F ( z k ) ∥ > 0$, and find the solution $d k$ of the equations:
$( H k T H k + ν k I ) d = − ∇ θ ( z k ) .$
Step 3.
If
$∥ F ( P Z ( z k + d k ) ) ∥ ≤ γ ∥ F ( z k ) ∥ ,$
then set $z k + 1 = P Z ( z k + d k )$, $k = k + 1 ,$ and go to Step 1; otherwise, go to Step 4.
Step 4.
Compute $t k = max { β l | l = 0 , 1 , 2 ⋯ }$, such that:
$θ ( z k ( t k ) ) ≤ θ ( z k ) + σ ∇ θ ( z k ) T ( z k ( t k ) − z k ) ,$
where $z k ( t k ) = P Z [ z k − t k ∇ θ ( z k ) ]$. Set $z k + 1 = z k ( t k )$, $k = k + 1$, and go to Step 1.
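The four steps above can be sketched in Python as follows (a minimal illustration, assuming the caller supplies F and a map `H_of` returning one element of $∂ C F ( z )$; this is not the authors' implementation):

```python
import numpy as np

def method1(F, H_of, z0, sigma=0.3, beta=0.5, gamma=0.5, eps=1e-12, kmax=500):
    """Sketch of Method 1. `H_of(z)` returns one element of the
    C-subdifferential of F at z; the parameter values follow Section 4."""
    proj = lambda z: np.maximum(z, 0.0)           # P_Z: projection onto z >= 0
    z = proj(np.asarray(z0, dtype=float))
    for _ in range(kmax):
        Fz = F(z)
        if 0.5 * Fz @ Fz <= eps:                  # Step 1: theta(z^k) <= eps
            break
        H = H_of(z)
        grad = H.T @ Fz                           # grad theta(z) = H^T F(z)
        nu = np.linalg.norm(Fz)                   # nu_k = ||F(z^k)||
        d = np.linalg.solve(H.T @ H + nu * np.eye(z.size), -grad)  # Step 2
        z_lm = proj(z + d)
        if np.linalg.norm(F(z_lm)) <= gamma * nu: # Step 3: accept LM step
            z = z_lm
            continue
        t = 1.0                                   # Step 4: projected Armijo
        while True:
            z_new = proj(z - t * grad)
            Fn = F(z_new)
            if 0.5 * Fn @ Fn <= 0.5 * Fz @ Fz + sigma * grad @ (z_new - z):
                break
            t *= beta
        z = z_new
    return z

# Toy usage: F(z) = z - 1 has the solution z = 1 on z >= 0.
z = method1(lambda z: z - 1.0, lambda z: np.eye(1), [0.0])
print(z)  # approximately [1.]
```

For a nonsmooth F, `H_of` must return a generalized Jacobian; the toy usage sidesteps this by using a smooth F.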
We now investigate the convergence properties of Method 1. In what follows, we assume that Method 1 generates an infinite sequence.
Theorem 1.
Method 1 is well defined for a monotone SLCP (3). If Method 1 does not stop at a stationary point in finitely many steps, it generates an infinite sequence ${ z k } ⊂ Z ,$ and any accumulation point of ${ z k }$ is a stationary point of θ.
Proof of Theorem 1.
Method 1 is well defined because $ν k > 0$ ensures that $H k T H k + ν k I$ is positive definite, so $d k$ is always a descent direction for θ. We now consider the following two situations.
(I)
If the direction $d k$ is accepted by an infinite number of times in Step 3 of Method 1, we get:
$z k + 1 = P Z [ z k + d k ] ∈ Z .$
Since $∇ θ ( z k ) ≠ 0$ implies $d k ≠ 0 ,$ we have:
$∇ θ ( z k ) T d k = − ( ( H k T H k + ν k I ) d k ) T d k < 0 .$
From [17], we know that ${ θ ( z k ) }$ is monotonically decreasing, which implies that the sequence ${ ∥ F ( z k ) ∥ }$ is also monotonically decreasing. Since the test $∥ F ( P Z ( z k + d k ) ) ∥ ≤ γ ∥ F ( z k ) ∥$ is passed infinitely many times and $γ ∈ ( 0 , 1 ) ,$ we get $∥ F ( z k ) ∥ → 0$ as $k → ∞ .$ This means that any accumulation point of ${ z k }$ is a solution of (10) and, therefore, also a stationary point of $θ$.
(II)
This case is the complement of Case (I); without loss of generality, we assume that the Levenberg–Marquardt direction is never accepted, so the direction $P Z [ z k − t k ∇ θ ( z k ) ] − z k$ is used in Step 4 of Method 1 for every k, and we have:
$z k + 1 = P Z [ z k − t k ∇ θ ( z k ) ] ∈ Z .$
By (b) in Lemma 1, taking $x : = z k − t k ∇ θ ( z k )$, $y : = z k$, we get:
$0 ≥ [ P Z ( z k − t k ∇ θ ( z k ) ) − ( z k − t k ∇ θ ( z k ) ) ] T [ P Z ( z k − t k ∇ θ ( z k ) ) − z k ] = [ P Z ( z k − t k ∇ θ ( z k ) ) − z k + t k ∇ θ ( z k ) ] T [ P Z ( z k − t k ∇ θ ( z k ) ) − z k ] = ( P Z ( z k − t k ∇ θ ( z k ) ) − z k ) 2 + t k ∇ θ ( z k ) T [ P Z ( z k − t k ∇ θ ( z k ) ) − z k ]$
that is,
$∇ θ ( z k ) T [ P Z ( z k − t k ∇ θ ( z k ) ) − z k ] ≤ − ( P Z ( z k − t k ∇ θ ( z k ) ) − z k ) 2 t k ≤ 0 ,$
where $t k = max { β l | l = 0 , 1 , 2 , ⋯ }$ with $β ∈ ( 0 , 1 )$. By the properties of the Armijo line search, any accumulation point of ${ z k }$ is a stationary point of θ, and this completes the proof. ☐
Theorem 2.
Let $z * = ( x * , y * )$ be a solution of (9) such that (4) is R-regular at $x *$; then the whole sequence generated by Method 1 converges to $z *$ Q-quadratically.
Proof of Theorem 2.
By Proposition 6, there is a constant $c 1 > 0$, such that, for all $z k ∈ B ( z * , δ 1 ) ,$ where $δ 1$ is a sufficiently small positive constant, the matrices $H k T H k + ν k I$ are nonsingular and $∥ ( H k T H k + ν k I ) − 1 ∥ ≤ c 1$ holds. Furthermore, by Proposition 2, there exists a constant $c 2 > 0$, such that:
$∥ F ( z k ) − F ( z * ) − H k ( z k − z * ) ∥ ≤ c 2 ∥ z k − z * ∥ 2 ,$
for all $z k ∈ B ( z * , δ 2 ) ,$ where $δ 2$ is a sufficiently small positive constant. Moreover, in view of the upper semicontinuity of the C-subdifferential, we have:
$∥ H k T ∥ ≤ ζ ,$
where $H k ∈ ∂ C F ( z k )$, $ζ > 0$, $z k ∈ B ( z * , δ 3 )$, and $δ 3$ is a sufficiently small positive constant. Denote $δ = min ( δ 1 , δ 2 , δ 3 )$ and let $z k ∈ B ( z * , δ )$. From (13) and Lemma 1, we have:
$( H k T H k + ν k I ) ( z k + 1 − z * ) = ( H k T H k + ν k I ) ( P Z ( z k + d k ) − z * ) = ( H k T H k + ν k I ) ( z k + d k − z * + P Z ( z k + d k ) − ( z k + d k ) ) = ( H k T H k + ν k I ) ( z k + d k − z * ) + ( H k T H k + ν k I ) ( P Z ( z k + d k ) − ( z k + d k ) ) = ( H k T H k + ν k I ) ( z k − z * ) − H k T F ( z k ) + ( H k T H k + ν k I ) ( P Z ( z k + d k ) − ( z k + d k ) ) = H k T H k ( z k − z * ) + ν k ( z k − z * ) − H k T F ( z k ) + ( H k T H k + ν k I ) ( P Z ( z k + d k ) − ( z k + d k ) ) = H k T ( F ( z * ) − F ( z k ) + H k ( z k − z * ) ) + ν k ( z k − z * ) + ( H k T H k + ν k I ) ( P Z ( z k + d k ) − ( z k + d k ) )$
Since F is a locally-Lipschitzian function and $ν k = ∥ F ( z k ) ∥$, premultiplying this equation by $( H k T H k + ν k I ) − 1$ and taking norms on both sides, we get:
$∥ z k + 1 − z * ∥ ≤ c 1 ( ∥ H k T ∥ ∥ F ( z * ) − F ( z k ) + H k ( z k − z * ) ∥ + ∥ F ( z k ) − F ( z * ) ∥ ∥ z k − z * ∥ ) + ∥ P Z ( z k + d k ) − ( z k + d k ) ∥ ≤ c 1 ( ζ c 2 ∥ z k − z * ∥ 2 + L ∥ z k − z * ∥ 2 ) + ∥ z k + d k − z * ∥ ≤ c 1 ( ζ c 2 + L ) ∥ z k − z * ∥ 2 + ∥ z k − z * − ( H k T H k + ν k I ) − 1 H k T F k ∥ = c 1 ( ζ c 2 + L ) ∥ z k − z * ∥ 2 + ∥ ( H k T H k + ν k I ) − 1 ( ( H k T H k + ν k I ) ( z k − z * ) − H k T F k ) ∥ = c 1 ( ζ c 2 + L ) ∥ z k − z * ∥ 2 + ∥ ( H k T H k + ν k I ) − 1 ( H k T ( F ( z * ) − F ( z k ) + H k ( z k − z * ) ) + ν k ( z k − z * ) ) ∥ ≤ c 1 ( ζ c 2 + L ) ∥ z k − z * ∥ 2 + c 1 ( ∥ H k T ∥ ∥ F ( z * ) − F ( z k ) + H k ( z k − z * ) ∥ + ∥ F ( z k ) − F ( z * ) ∥ ∥ z k − z * ∥ ) ≤ c 1 ( ζ c 2 + L ) ∥ z k − z * ∥ 2 + c 1 ( ζ c 2 + L ) ∥ z k − z * ∥ 2 = 2 c 1 ( ζ c 2 + L ) ∥ z k − z * ∥ 2 = τ ∥ z k − z * ∥ 2 ,$
where $τ = 2 c 1 ( ζ c 2 + L ) .$ Therefore, similar to the proof of ([20], Theorem 2.3), we know that the rate of convergence is Q-quadratic. This completes the proof. ☐

## 4. Numerical Results

In this section, we first make a numerical comparison between Method 1 and the scaled trust region method (STRM) in [20], applying both methods to Examples 1 and 2. Secondly, we use Method 1 to solve the related refinery production problem, which has also been studied in [4,13]. Finally, numerical results for large-scale stochastic linear complementarity problems are presented. We implemented Method 1 in MATLAB and tested it on the given problems using the reformulation from the previous section. All experiments were run on an Acer PC with an i5-3210M CPU and 2 GB of RAM. Throughout the computational experiments, the parameters in Method 1 are taken as:
$σ = 0.3 , β = 0.5 , γ = 0.5 .$
The stopping criteria for Method 1 are $θ ( z k ) ≤ 10 − 15$ or $k = k m a x = 5000$.
The parameters in the STRM method (see [20]) are taken as:
$Δ 0 = 10 , Δ m i n = 10 − 6 , ρ 1 = 10 − 4 , ρ 2 = 0.75 , σ 1 = 0.5 , σ 2 = 2 , η = 0.5 .$
The stopping criteria for the STRM method are $∥ D k g k ∥ ≤ 10 − 15$ or $k m a x = 5000$.
In the tables of numerical results, DIM denotes the dimension of the problem (the dimension of the variable x), and $x *$ denotes the solution of $θ ( x , y ) = 0$. In the following part of this section, we give a detailed description of the test problems.
Example 1.
Consider $S L C P ( M ( ω ) , q ( ω ) )$ with:
$M ( ω ) = 1 − 2 ω − 1 0 − ω , q ( ω ) = 1 ω + 1 ,$
where $Ω = { ω 1 , ω 2 } = { 0 , 1 } ,$ and $p i = P ( ω i ∈ Ω ) = 0.5 , i = 1 , 2$.
Numerical results of Example 1 are given in Table 1, Figure 1 and Figure 2, respectively. $x 0$ is chosen randomly in $ℜ 2$; $y 0$ is chosen randomly in $ℜ 4$; and $λ = 0.1$.
From Table 1, we can see that the merit functions associated with $p ∈ ( 1 , 2 )$, for example $p = 1.5$, are more effective than the Fischer–Burmeister merit function, which corresponds exactly to $p = 2$.
In Table 2, we give a numerical comparison of Method 1 with fmincon, the MATLAB toolbox for constrained optimization. We use the sequential quadratic programming (SQP) method in fmincon to solve Example 1 with $p = 1.1$ and the same initial points.
From Table 2, we can see that Method 1 is more effective than fmincon.
Example 2.
Consider $S L C P ( M ( ω ) , q ( ω ) )$ with:
$M ( ω ) = 1 − ω 0 − ω 2 ω 0 ω 3 , q ( ω ) = 3 − 2 ω − 2 − ω − 3 − ω ,$
where $Ω = { ω 1 , ω 2 } = { 0 , 1 } ,$ and $p i = P ( ω i ∈ Ω ) = 0.5 , i = 1 , 2$.
Numerical results are given in Table 3, Figure 2 and Figure 3; $x 0$ is chosen randomly in $ℜ 3$; $y 0$ is chosen randomly in $ℜ 6$; and $λ = 0.00000001$.
From Table 3, Figure 3 and Figure 4, we can see that Method 1 needs fewer iterations than the STRM method. In Method 1, when $p = 5$, the function value falls faster; in the STRM method, a larger p requires a greater number of iterations.
In Table 4, we also give a comparison of Method 1 with fmincon. For the purpose of comparison, we fixed $p = 10$ and used the same initial points.
From Table 4, we can see that Method 1 is also more effective than fmincon.
Example 3.
This example is a refinery production problem, which is also considered in [2,13].
The problem is defined as:
$M ( ω ) = 0 0 1 − 2 − ω 1 − 3 0 0 1 − 6 ω 2 − 3.4 − 1 − 1 0 0 0 2 + ω 1 6 0 − ω 3 − ω 3 3 3.4 − ω 2 0 − ω 4 ω 4 ,$
$q ( ω ) = c − b − 180 − ω 3 − 162 − ω 4 ,$
where $ω 1$, $ω 2$, $ω 3$ and $ω 4$ satisfy the following distribution:
$ω 1 ∼ U [ − 0.8 , 0.8 ] ,$
$ω 2 ∼ e x p ( λ = 2.5 ) ,$
$ω 3 ∼ N ( 0 , 12 ) ,$
$ω 4 ∼ N ( 0 , 9 ) .$
• Generate samples $ω j k , j = 1 , 2 , 3 , 4 , k = 1 , 2 , … , K ,$ respectively, from their 99% confidence intervals (except uniform distributions):
$ω 1 ∈ I 1 = [ − 0.8 , 0.8 ] ,$
$ω 2 ∈ I 2 = [ 0.0 , 1.84 ] ,$
$ω 3 ∈ I 3 = [ − 30.91 , 30.91 ] ,$
$ω 4 ∈ I 4 = [ − 23.18 , 23.18 ] ,$
• For each $j ,$ divide the $I j$ into $m j$ cells $I j , i$, $i = 1 , 2 , … , m j$.
• For each $( j , i ) ,$ calculate the average $υ j , i$ of the samples $ω j k$ that belong to $I j , i$.
• For each $( j , i ) ,$ the estimated probability of $υ j , i$ is $p j , i = k j , i / K ,$ where $k j , i$ is the number of $ω j k ∈ I j , i$.
• Let $N = m 1 × m 2 × m 3 × m 4$, and set the joint distribution of ${ ( ω ℓ , p ℓ ) , ℓ = 1 , 2 , … , N }$,
$ω ℓ = υ 1 , i 1 υ 2 , i 2 υ 3 , i 3 υ 4 , i 4 , p ℓ = p 1 , i 1 p 2 , i 2 p 3 , i 3 p 4 , i 4$
for $i 1 = 1 , … , m 1$, $i 2 = 1 , … , m 2$, $i 3 = 1 , … , m 3$, $i 4 = 1 , … , m 4$.
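The sampling and cell-averaging steps above can be sketched as follows (our helper names; we use a standard deviation of 12 for $ω 3$, consistent with its 99% interval $[ − 30.91 , 30.91 ]$):

```python
import numpy as np

rng = np.random.default_rng(0)

def discretize(samples, interval, m):
    """Average the samples in each of m equal cells of `interval` and
    estimate each cell's probability by its relative frequency."""
    lo, hi = interval
    edges = np.linspace(lo, hi, m + 1)
    inside = samples[(samples >= lo) & (samples <= hi)]
    cells = np.clip(np.searchsorted(edges, inside, side="right") - 1, 0, m - 1)
    points, probs = [], []
    for i in range(m):
        hits = inside[cells == i]
        if hits.size:                      # skip empty cells
            points.append(hits.mean())
            probs.append(hits.size / samples.size)
    return np.array(points), np.array(probs)

# e.g. w3 ~ N(0, 12) restricted to its 99% interval, with m3 = 7 cells:
pts, pr = discretize(rng.normal(0.0, 12.0, 10_000), (-30.91, 30.91), 7)
print(pts.size)  # 7
```

The joint distribution is then the product of the per-component cell probabilities, as in the last bullet above.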
In the following part of this section, we use Method 1 to solve the constrained optimization problem:
$min z ≥ 0 θ ( z ) = 1 2 ∥ F ( z ) ∥ 2 ,$
where $z = ( x , y ) ,$
$F ( x , y ) = ϕ n ( x ) M ( ω ℓ ) x + q ( ω ℓ ) − y ℓ , ℓ = 1 ⋯ N .$
and:
$ϕ n ( x ) = λ ϕ p ( x i , M ¯ i x + q ¯ i ) , i = 1 ⋯ 5 ⋮ ( 1 − λ ) ϕ + ( x i , M ¯ i x + q ¯ i ) , i = 1 ⋯ 5 .$
Now, we examine the following two conditions:
$C o n d i t i o n 1 : ω 1 ≡ 0 , ω 2 ≡ 0 , m 3 = 15 , m 4 = 15 .$
$C o n d i t i o n 2 : m 1 = 5 , m 2 = 9 , m 3 = 7 , m 4 = 11 .$
The numerical results of Example 3 are given in Table 5 and Table 6, where $Ψ ( x ) = 1 2 ∥ ϕ n ( x ) ∥ 2$ is the merit function; $θ ( x , y ) = 1 2 ∥ F ( x , y ) ∥ 2$; the sample size K takes the values $10 3 , 10 4$ and $10 5$; $2 x 1 + 3 x 2$ is the initial production cost; and $λ = 0.5$.
In [13], in the case of $ω 1 ≡ 0 , ω 2 ≡ 0.4 , m 3 = 15 , m 4 = 15$, Kall and Wallace obtained the optimal solution $( x 1 , x 2 ) = ( 38.539 , 20.539 )$ with initial production cost $2 x 1 + 3 x 2 = 138.695$. Here, by Method 1, we obtain the optimal solution $( x 1 , x 2 ) = ( 41.6939 , 16.1036 ) ,$ with production cost $2 x 1 + 3 x 2 = 131.6985$.
Remark 1.
In this paper, we use:
$ω i = ω j$, if $i = j$; $ω i = E ( ω j )$, if $i ≠ j$.
In this way, the computation cost of our method is greatly reduced. In fact, consider the general case in which $ω 1 , ω 2 , ω 3$ and $ω 4$ all vary, with their distributions approximated by discrete 5-, 9-, 7- and 11-point distributions, respectively. This yields a joint discrete distribution with $5 × 9 × 7 × 11 = 3465$ realizations. Then, $F ( z )$ has 17,335 $( 3465 × 5 + 10 =$ 17,335) components, a far more complex optimization problem.
In the following part of this subsection, we give a large-scale stochastic linear complementarity problem named the stochastic Murty problem. When $Ω = { 1 / 2 } ,$ the large-scale stochastic linear complementarity problem reduces to the Murty problem, which has been intensively studied in [25,26,27,28,29].
Example 4.
(Stochastic Murty problem) Consider $S L C P ( M ( ω ) , q ( ω ) )$ with:
$M ( ω ) = [ 1 / 2 + ω , 2 , ⋯ , 2 ; 0 , 1 / 2 + ω , ⋯ , 2 ; ⋮ , ⋱ , ⋮ ; 0 , ⋯ , 0 , 1 / 2 + ω ] , q ( ω ) = ( − 3 / 2 + ω , ⋯ , − 3 / 2 + ω ) T ,$
where $M ( ω ) ∈ ℜ n × n$, $q ( ω ) ∈ ℜ n$. $Ω = { ω 1 , ω 2 } = { 0 , 1 } ,$ and $p i = P ( ω i ∈ Ω ) = 0.5 , i = 1 , 2$.
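A small generator for this scenario data (a sketch under our reading of the matrix: diagonal entries $1 / 2 + ω$, constant 2 above the diagonal, zeros below, and $q ( ω ) = ( ω − 3 / 2 ) e$, which reduces to the classical Murty problem at $ω = 1 / 2$):

```python
import numpy as np

def murty_scenario(n, w):
    """Stochastic Murty data, assuming diagonal entries 1/2 + w,
    entries 2 above the diagonal, and q(w) = (w - 3/2) * ones(n)."""
    M = 2.0 * np.triu(np.ones((n, n)), k=1) + (0.5 + w) * np.eye(n)
    q = (w - 1.5) * np.ones(n)
    return M, q

# At w = 1/2 this is the classical Murty problem: M = I + 2 * triu, q = -e.
M, q = murty_scenario(4, 0.5)
print(M[0])  # [1. 2. 2. 2.]
print(q)     # [-1. -1. -1. -1.]
```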
In Table 7, we give a comparison of Method 1 with the SQP method in the fmincon toolbox for Example 4 with dimensions 10, 100, 200, 300 and 400, where $θ ( x , y ) = 1 2 ∥ F ( x , y ) ∥ 2$; $x 0$ is chosen randomly in $ℜ n$; $y 0$ is chosen randomly in $ℜ 2 n$; and $λ = 0.0001$.
Remark 2.
From the numerical results of Example 4, we can see that Method 1 is well suited to solving large-scale SLCPs. Moreover, Method 1 can be used flexibly by adjusting the value of p.

## 5. Conclusions

In this paper, we introduced a feasible nonsmooth Levenberg–Marquardt-type method to solve stochastic linear complementarity problems with finitely many elements. The method is based on a semi-smooth least squares reformulation of the stochastic linear complementarity problem, to which a feasible nonsmooth Levenberg–Marquardt-type iteration is applied. The numerical results show that the method efficiently solves large-scale stochastic linear complementarity problems and the related refinery production problem. Moreover, the method allows the initial points to be chosen from a large range, with low computational cost and high precision.

## Acknowledgments

The authors would like to thank the editor and the reviewers for their very careful reading and insightful and constructive comments on the early version of this paper. This work is supported by the National Natural Science Foundation of China (No. 11671220, 11401331), the Natural Science Foundation of Shandong Province (No. ZR2015AQ013, ZR2016AM29) and the Key Issues of Statistical Research of Shandong Province (KT16276).

## Author Contributions

Zhimin Liu prepared the manuscript. Ruiying Wang assisted in the work. Shouqiang Du was in charge of the overall research of the paper.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

1. Gürkan, G.; Özge, A.Y.; Robinson, S.M. Sample-path solution of stochastic variational inequalities. Math. Progr. 1999, 84, 313–333.
2. Chen, X.; Fukushima, M. Expected residual minimization method for stochastic linear complementarity problems. Math. Oper. Res. 2005, 30, 1022–1038.
3. Ling, C.; Qi, L.; Zhou, G.; Caccetta, L. The SC1 property of an expected residual function arising from stochastic complementarity problems. Oper. Res. Lett. 2008, 36, 456–460.
4. Gabriel, S.A.; Zhuang, J.; Egging, R. Solving stochastic complementarity problems in energy market modeling using scenario reduction. Eur. J. Oper. Res. 2009, 197, 1028–1040.
5. Facchinei, F.; Pang, J.S. Finite-Dimensional Variational Inequalities and Complementarity Problems. I and II; Springer: New York, NY, USA, 2003.
6. Cottle, R.W.; Pang, J.S.; Stone, R.E. The Linear Complementarity Problem; Academic Press: New York, NY, USA, 1992.
7. Wang, M.; Ali, M.M. Stochastic Nonlinear Complementarity Problems: Stochastic Programming Reformulation and Penalty Based Approximation Method. J. Optim. Theory Appl. 2010, 144, 597–614.
8. Luo, M.J.; Lin, G.H. Convergence Results of the ERM Method for Nonlinear Stochastic Variational Inequality Problems. J. Optim. Theory Appl. 2009, 142, 569–581.
9. Luo, M.J.; Lin, G.H. Expected Residual Minimization Method for Stochastic Variational Inequality Problems. J. Optim. Theory Appl. 2009, 140, 103–116.
10. Chen, X.; Zhang, C.; Fukushima, M. Robust solution of monotone stochastic linear complementarity problems. Math. Progr. 2009, 117, 51–80.
11. Zhang, C.; Chen, X. Stochastic nonlinear complementarity problem and applications to traffic equilibrium under uncertainty. J. Optim. Theory Appl. 2008, 137, 277–295.
12. Zhou, G.; Caccetta, L. Feasible Semismooth Newton Method for a Class of Stochastic Linear Complementarity Problems. J. Optim. Theory Appl. 2008, 139, 379–392.
13. Kall, P.; Wallace, S.W. Stochastic Programming; John Wiley & Sons: Chichester, UK, 1994.
14. Ma, C. A feasible semi-smooth Gauss-Newton method for solving a class of SLCPs. J. Comput. Math. 2012, 30, 197–222.
15. Kanzow, C.; Petra, S. On a semi-smooth least squares formulation of complementarity problems with gap reduction. Optim. Methods Softw. 2004, 19, 507–525.
16. Chen, J.; Pan, S. A family of NCP-functions and a descent method for the nonlinear complementarity problem. Comput. Optim. Appl. 2008, 40, 389–404.
17. Ma, C.; Jiang, L.; Wang, D. The convergence of a smoothing damped Gauss-Newton method for nonlinear complementarity problem. Nonlinear Anal. 2009, 10, 2072–2087.
18. Chen, J. The semi-smooth-related properties of a merit function and a descent method for the nonlinear complementarity problem. J. Glob. Optim. 2006, 36, 565–580.
19. Chen, J. On some NCP-functions based on the generalized Fischer-Burmeister function. Asia-Pac. J. Oper. Res. 2007, 24, 401–420.
20. Kanzow, C.; Petra, S. Projected filter trust region method for a semi-smooth least squares formulation of mixed complementarity problems. Optim. Methods Softw. 2007, 22, 713–735.
21. Facchinei, F.; Kanzow, C. A nonsmooth inexact Newton method for the solution of large-scale nonlinear complementarity problems. Math. Progr. 1997, 76, 493–512. [Google Scholar] [CrossRef]
22. Qi, L.; Sun, J. A nonsmooth version of Newton's method. Math. Progr. 1993, 58, 353–367. [Google Scholar] [CrossRef]
23. Chen, J.; Huang, Z.; She, C. A new class of penalized NCP-function and its properties. Comput. Optim. Appl. 2011, 50, 49–73. [Google Scholar] [CrossRef]
24. Ferris, M.C.; Kanzow, C.; Munson, T.S. Feasible descent algorithms for mixed complementarity problems. Math. Progr. 1999, 86, 475–497. [Google Scholar] [CrossRef]
25. Kanzow, C. Global convergence properties of some iterative methods for linear complementarity problems. Optimization 1996, 6, 326–334. [Google Scholar] [CrossRef]
26. Kanzow, C. Some noninterior continuation methods for linear complementarity problems. SIAM J. Matrix Anal. Appl. 1996, 17, 851–868. [Google Scholar] [CrossRef]
27. Xu, S. The global linear convergence of an infeasible noninterior path-following algorithm for complementarity problems with uniform P-functions. Math. Progr. 2000, 87, 501–517. [Google Scholar] [CrossRef]
28. Burke, J.; Xu, S. The global linear convergence of a non-interior path-following algorithm for linear complementarity problems. Math. Oper. Res. 1998, 23, 719–734. [Google Scholar] [CrossRef]
29. Ma, C. A new smoothing and regularization Newton method for P0-NCP. J. Glob. Optim. 2010, 48, 241–261. [Google Scholar] [CrossRef]
Figure 1. Numerical results for Example 1 by Method 1. The x-axis represents the iteration step; the y-axis represents $\theta(x,y) = \frac{1}{2}\|F(x,y)\|^2$.
Figure 2. Numerical results for Example 1 by the STRM method. The x-axis represents the iteration step; the y-axis represents $\theta(x,y) = \frac{1}{2}\|F(x,y)\|^2$.
Figure 3. Numerical results for Example 2 by Method 1. The x-axis represents the iteration step; the y-axis represents $\theta(x,y) = \frac{1}{2}\|F(x,y)\|^2$.
Figure 4. Numerical results for Example 2 by the STRM method. The x-axis represents the iteration step; the y-axis represents $\theta(x,y) = \frac{1}{2}\|F(x,y)\|^2$.
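The quantity plotted in the figures, $\theta(x,y) = \frac{1}{2}\|F(x,y)\|^2$, is the standard least-squares merit function of a residual map. A minimal sketch of how such a merit value is computed is below; the min-based residual used here is a generic LCP residual chosen for illustration only, and the matrix `M` and vector `q` are made-up data, not the paper's SLCP reformulation.

```python
import numpy as np

def theta(F, z):
    """Least-squares merit function: theta(z) = 0.5 * ||F(z)||^2."""
    r = F(z)
    return 0.5 * float(np.dot(r, r))

# Illustrative residual for an LCP(M, q): the componentwise minimum
# min(x, M x + q), which vanishes exactly at a solution of the LCP.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
F = lambda x: np.minimum(x, M @ x + q)

print(theta(F, np.zeros(2)))  # → 1.0, since F(0) = min(0, q) = (-1, -1)
```

A solver such as Method 1 drives this merit value toward zero, which is why the tables below report "Final Value" on the order of $10^{-16}$ and smaller.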
Table 1. Numerical results for Example 1.

| p | x* (Method 1) | Final Value | Iter. | x* (STRM) | Final Value | Iter. |
|---|---|---|---|---|---|---|
| 1.1 | 1.0 × 10^−8 × (0.6159, 0.3060) | 1.7595 × 10^−16 | 9 | (0, 0) | 4.9304 × 10^−32 | 10 |
| 1.3 | 1.0 × 10^−8 × (0.8603, 0.3628) | 2.1217 × 10^−16 | 9 | (0, 0) | 2.9582 × 10^−31 | 10 |
| 1.5 | 1.0 × 10^−7 × (0.1079, 0.0394) | 2.5396 × 10^−16 | 9 | (0, 0) | 2.4652 × 10^−32 | 9 |
| 2.0 | 1.0 × 10^−7 × (0.1122, 0.0394) | 2.5960 × 10^−16 | 9 | 1.0 × 10^−16 × (0.7218, 0) | 2.9399 × 10^−32 | 9 |
| 2.5 | 1.0 × 10^−7 × (0.1113, 0.0391) | 2.5505 × 10^−16 | 9 | 1.0 × 10^−16 × (0.1165, 0) | 1.2360 × 10^−34 | 9 |
| 3.0 | 1.0 × 10^−7 × (0.1106, 0.0389) | 2.5224 × 10^−16 | 9 | (0, 0) | 2.4652 × 10^−32 | 9 |
| 3.5 | 1.0 × 10^−7 × (0.1102, 0.0387) | 2.5040 × 10^−16 | 9 | 1.0 × 10^−16 × (0.1162, 0) | 1.2310 × 10^−34 | 9 |
| 4.0 | 1.0 × 10^−7 × (0.1099, 0.0387) | 2.4909 × 10^−16 | 9 | 1.0 × 10^−16 × (0.7209, 0) | 2.9388 × 10^−32 | 9 |
| 4.5 | 1.0 × 10^−7 × (0.1097, 0.0386) | 2.4807 × 10^−16 | 9 | 1.0 × 10^−15 × (0.0725, 0.1748) | 7.3648 × 10^−32 | 9 |
| 5.0 | 1.0 × 10^−7 × (0.1095, 0.0385) | 2.4724 × 10^−16 | 9 | 1.0 × 10^−16 × (0.1185, 0) | 1.2804 × 10^−34 | 9 |
| 5.5 | 1.0 × 10^−7 × (0.1094, 0.0385) | 2.4656 × 10^−16 | 9 | 1.0 × 10^−16 × (0.7262, 0.1903) | 5.5480 × 10^−33 | 9 |
| 6.0 | 1.0 × 10^−7 × (0.1092, 0.0384) | 2.4598 × 10^−16 | 9 | 1.0 × 10^−16 × (0.1199, 0) | 1.4804 × 10^−31 | 9 |
| 6.5 | 1.0 × 10^−7 × (0.1091, 0.2213) | 2.4549 × 10^−16 | 9 | 1.0 × 10^−16 × (0.1232, 0) | 1.3827 × 10^−34 | 9 |
| 7.0 | 1.0 × 10^−7 × (0.1090, 0.0384) | 2.4508 × 10^−16 | 9 | 1.0 × 10^−16 × (0.1259, 0) | 1.4449 × 10^−34 | 9 |
| 7.5 | 1.0 × 10^−7 × (0.1090, 0.0383) | 2.4473 × 10^−16 | 9 | 1.0 × 10^−16 × (0.1181, 0) | 1.2715 × 10^−34 | 9 |
| 8.0 | 1.0 × 10^−7 × (0.1089, 0.0383) | 2.4444 × 10^−16 | 9 | 1.0 × 10^−16 × (0, 0.3123) | 8.1621 × 10^−33 | 9 |
| 9.0 | 1.0 × 10^−7 × (0.1088, 0.0383) | 2.4399 × 10^−16 | 9 | 1.0 × 10^−16 × (0.1255, 0.0082) | 6.3079 × 10^−33 | 9 |
| 10 | 1.0 × 10^−7 × (0.1087, 0.0382) | 2.4368 × 10^−16 | 9 | 1.0 × 10^−15 × (0.7305, 0.4090) | 5.7596 × 10^−32 | 9 |
Table 2. Numerical results for Example 1 by Method 1 and fmincon.

| Method | x* | Final Value |
|---|---|---|
| Method 1 | 1.0 × 10^−8 × (0.6159, 0.3060) | 1.7595 × 10^−16 |
| fmincon | (0.0002, 0) | 1.2188 × 10^−14 |
Table 3. Numerical results for Example 2.

| p | x* (Method 1) | Final Value | Iter. | Final Value (STRM) | Iter. |
|---|---|---|---|---|---|
| 2.0 | (0, 1, 1) | 5.6439 × 10^−17 | 17 | 5.000 × 10^−17 | 80 |
| 3.0 | (0, 1, 1) | 5.0000 × 10^−17 | 16 | 5.000 × 10^−17 | 80 |
| 4.0 | (0, 1, 1) | 5.0002 × 10^−17 | 19 | 5.000 × 10^−17 | 72 |
| 5.0 | (0, 1, 1) | 5.0000 × 10^−17 | 15 | 5.000 × 10^−17 | 234 |
| 6.0 | (0, 1, 1) | 5.0000 × 10^−17 | 13 | 5.000 × 10^−17 | 234 |
| 7.0 | (0, 1, 1) | 5.0000 × 10^−17 | 14 | 5.000 × 10^−17 | 234 |
| 8.0 | (0, 1, 1) | 5.0000 × 10^−17 | 16 | 5.000 × 10^−17 | 234 |
| 9.0 | (0, 1, 1) | 5.0000 × 10^−17 | 11 | 5.000 × 10^−17 | 234 |
| 10 | (0, 1, 1) | 5.0000 × 10^−17 | 11 | 5.000 × 10^−17 | 234 |
Table 4. Numerical results for Example 2 by Method 1 and fmincon.

| Method | x* | Final Value |
|---|---|---|
| Method 1 | (0, 1, 1) | 5.0000 × 10^−17 |
| fmincon | (0, 1, 1) | 4.9780 × 10^−14 |
Table 5. Numerical results for Example 3 based on Condition 1.

| p | k | x^k | θ(z^k) | Ψ(x^k) | $2x_1^k + 3x_2^k$ |
|---|---|---|---|---|---|
| 2 | 10^3 | (42.6730, 15.8000, 0, 0.2848, 0.4688) | 18.1932 | 5.1078 | 132.7462 |
| 2 | 10^4 | (42.6057, 15.8216, 0, 0.2844, 0.4694) | 5.2138 | 5.0089 | 132.6760 |
| 2 | 10^5 | (42.0369, 16.0037, 0, 0.2791, 0.4740) | 4.3894 | 4.2424 | 132.0850 |
| 4 | 10^3 | (42.7120, 15.7720, 0, 0.2872, 0.4829) | 1999.1 | 5.3083 | 132.7399 |
| 4 | 10^4 | (42.6301, 15.7986, 0, 0.2844, 0.4694) | 5.2164 | 5.0103 | 132.6559 |
| 4 | 10^5 | (42.0487, 15.9888, 0, 0.2791, 0.4740) | 4.3826 | 4.2359 | 132.0628 |
| 6 | 10^3 | (42.7277, 15.7628, 0, 0.2853, 0.4687) | 5.3534 | 5.1372 | 132.7438 |
| 6 | 10^4 | (42.6539, 15.7870, 0, 0.2846, 0.4692) | 5.2433 | 5.0360 | 132.6688 |
| 6 | 10^5 | (42.0594, 15.9826, 0, 0.2791, 0.4740) | 4.3925 | 4.2458 | 132.0667 |
Table 6. Numerical results for Example 3 based on Condition 2.

| p | k | x^k | θ(z^k) | Ψ(x^k) | $2x_1^k + 3x_2^k$ |
|---|---|---|---|---|---|
| 2 | 10^3 | (42.6799, 15.7988, 0, 0.2833, 0.4704) | 5.3426 | 5.1369 | 132.7562 |
| 2 | 10^4 | (42.5951, 15.8259, 0, 0.2826, 0.4706) | 5.2120 | 5.0197 | 132.6679 |
| 2 | 10^5 | (41.9961, 16.0177, 0, 0.2773, 0.4752) | 4.3428 | 4.2083 | 132.0453 |
| 4 | 10^3 | (42.7005, 15.7755, 0, 0.2846, 0.4717) | 23.7543 | 5.1012 | 132.7276 |
| 4 | 10^4 | (42.6135, 15.8036, 0, 0.2826, 0.4707) | 5.2027 | 5.0096 | 132.6377 |
| 4 | 10^5 | (41.9980, 16.0049, 0, 0.2772, 0.4753) | 4.3207 | 4.1872 | 132.0108 |
| 6 | 10^3 | (42.7599, 15.7531, 0, 0.2838, 0.4686) | 55.2438 | 5.2217 | 132.7789 |
| 6 | 10^4 | (42.6568, 15.7867, 0, 0.2829, 0.4703) | 5.2612 | 5.0652 | 132.6738 |
| 6 | 10^5 | (42.0153, 15.9977, 0, 0.2773, 0.4753) | 4.3411 | 4.2060 | 132.0235 |
Table 7. Numerical results for Example 4.

| DIM | p | Final Value (Method 1) | Final Value (fmincon) |
|---|---|---|---|
| 10 | 2 | 1.6000 × 10^−3 | 0.4315 |
| 10 | 4 | 1.6000 × 10^−3 | 0.4315 |
| 10 | 6 | 1.6000 × 10^−3 | 0.4315 |
| 100 | 2 | 3.8000 × 10^−3 | 0.4426 |
| 100 | 4 | 4.5000 × 10^−3 | 0.4461 |
| 100 | 6 | 4.0000 × 10^−3 | 0.4502 |
| 200 | 2 | 4.6000 × 10^−3 | 0.5101 |
| 200 | 4 | 4.8000 × 10^−3 | 0.4123 |
| 200 | 6 | 4.6000 × 10^−3 | 0.5108 |
| 300 | 2 | 1.3263 × 10^−4 | 0.5394 |
| 300 | 4 | 0.4373 × 10^−3 | 0.5665 |
| 300 | 6 | 6.3331 × 10^−4 | 0.5395 |
| 400 | 2 | 4.6550 × 10^−4 | 0.4575 |
| 400 | 4 | 8.0495 × 10^−4 | 0.5365 |
| 400 | 6 | 3.2255 × 10^−4 | 0.5514 |
