Extending the Applicability of Newton’s Algorithm with Projections for Solving Generalized Equations

Abstract: A new technique is developed to extend the convergence ball of Newton's algorithm with projections for solving generalized equations with constraints on a multidimensional Euclidean space. This is achieved by locating a more precise region, containing the solution, than in earlier studies; on this region the Lipschitz constants are smaller than the ones used previously. These advances are obtained without additional conditions. The technique can also be used to extend the applicability of other iterative algorithms. Numerical experiments demonstrate the superiority of the new results.


Introduction
Let F : D −→ R^i be a continuously differentiable operator, D ⊆ R^i an open set, K ⊂ D a closed convex set, and H : D ⇒ R^i a set-valued operator with a nonempty closed graph. We shall study the generalized equation

0 ∈ F(x) + H(x), x ∈ K. (1)

We are interested in finding a solution x* of (1), since many problems in nonlinear programming and other disciplines can be reduced to this equation through mathematical modeling. We utilize the Newton Inexact Projection Algorithm (NIPA), formally defined in [1] as follows: given x_k, compute y_k solving the linearized inclusion 0 ∈ F(x_k) + F'(x_k)(y_k − x_k) + H(y_k), and set x_{k+1} ∈ P_K(y_k, x_k, τ_k). Here P_K(·, x, τ) : R^i ⇒ K is the operator given by

P := P_K(y1, x, τ) := {y2 ∈ K : ⟨y1 − y2, y3 − y2⟩ ≤ τ‖y1 − x‖^2 for all y3 ∈ K}.
Operator P is called a feasible inexact projection. The study of generalized equations was inaugurated by S. M. Robinson in the 1970s [1,2]. A plethora of results has been produced since then using iterative algorithms, since solutions in closed form are rarely attainable [1–8]. It is known that if H ≡ {0}, then (1) reduces to a constrained nonlinear equation [2–4,7–9]. In these studies, superlinear and/or quadratic local convergence has been established under the condition of metric regularity or strong metric regularity of the partial linearization of the operator defining the generalized equation. There has been a surge of studies of this type, since they provide an abstract model for several applications such as equilibrium problems, linear and nonlinear complementarity problems, and variational inequality problems [1,2,5,9]. A common feature of these studies is the realization that the region of accessibility, more commonly called the convergence ball, is not large in general. Hence, the choice of initial points that guarantee convergence of the iterative algorithm is limited. Moreover, the upper error estimates on the distances involved are pessimistic in most cases, so more iterations than necessary are carried out to achieve a given error tolerance. Enlarging the uniqueness ball of the solution is also an important issue. Motivated by the elegant work of Oliveira et al. [6], which generalized the earlier ones [2–4,7], as well as by the aforementioned problems, we develop a technique leading to smaller Lipschitz constants than before [6–8], and consequently to a finer ball convergence analysis. It is important to notice that these extensions are obtained without additional conditions. Our technique is presented for NIPA, but it can be used with the same advantages on other iterative algorithms.
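For the special case H ≡ {0} with a box constraint K, the NIPA iteration above can be sketched in a few lines. The following is a minimal illustration only: the test function F, the box K, and the use of the exact projection (componentwise clipping, which satisfies the feasible inexact projection inequality with τ = 0) are hypothetical choices, not the setting of [6].

```python
import numpy as np

def newton_inexact_projection(F, dF, x0, lo, hi, tol=1e-10, max_iter=50):
    """Sketch of NIPA for the special case H == {0} and K = [lo, hi]^i (a box).

    Each step computes the Newton point y_k solving the linearization
        F(x_k) + F'(x_k)(y_k - x_k) = 0,
    then projects y_k back onto K.  Exact projection onto a box is
    componentwise clipping, a feasible inexact projection with tau = 0.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = x + np.linalg.solve(dF(x), -F(x))   # Newton point of the linearization
        x_new = np.clip(y, lo, hi)              # exact projection onto K
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Hypothetical constrained system: F(x) = (exp(x1) - 1, x2^2 + x2) = 0
# on K = [-0.5, 1]^2, with solution x* = (0, 0).
F = lambda x: np.array([np.exp(x[0]) - 1.0, x[1]**2 + x[1]])
dF = lambda x: np.array([[np.exp(x[0]), 0.0], [0.0, 2.0 * x[1] + 1.0]])
x_star = newton_inexact_projection(F, dF, [0.8, 0.4], -0.5, 1.0)
```

Quadratic local convergence is visible here: starting from (0.8, 0.4), a handful of iterations reach the solution to machine precision.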
In Section 2, the ball convergence of NIPA is given, whereas the examples appear in Section 3.

Ball Convergence
The concept of metric regularity that follows plays a role in the ball convergence of NIPA.

Definition 1 ([1]).
We say that a set-valued operator Q : D ⇒ R^i is metrically regular at y2 ∈ D for y1 ∈ R^i if y1 ∈ Q(y2), the graph of Q is (locally) closed at (y1, y2), and there exist constants λ, α, β > 0 such that

dist(x, Q^{−1}(y)) ≤ λ dist(y, Q(x)) for all x ∈ U(y2, α) and y ∈ U(y1, β).

To avoid repetitions, more details and properties of the standard concepts used can be found in [1,2,5,6] and the references therein. Let x* ∈ K be such that 0 ∈ F(x*) + H(x*). The following Lipschitz-type conditions are useful.

Definition 2. Suppose: there exists ℓ0 > 0 such that

‖F'(x) − F'(x*)‖ ≤ ℓ0 ‖x − x*‖ for all x ∈ D. (2)

Then, we say operator F is center-Lipschitz on D.
Definition 3. Set D0 := D ∩ U(x*, 1/ℓ0). Suppose: there exists ℓ > 0 such that

‖F'(x) − F'(y)‖ ≤ ℓ ‖x − y‖ for all x, y ∈ D0. (3)

Then, we say operator F is restricted-Lipschitz on D0.

Definition 4.
Suppose: there exists ℓ1 > 0 such that

‖F'(x) − F'(y)‖ ≤ ℓ1 ‖x − y‖ for all x, y ∈ D. (4)

Then, we say operator F is Lipschitz on D.
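To make the three Lipschitz constants concrete, here is a small numerical check under an assumed one-dimensional model F(x) = e^x − 1 on D = U(0, 1) with x* = 0. This model is a hypothetical illustration, not part of the analysis above.

```python
import numpy as np

# Hypothetical 1-D model: F(x) = exp(x) - 1 on D = U(0, 1), solution x* = 0,
# so F'(x) = exp(x).
xs = np.linspace(-1.0, 1.0, 200001)
xs = xs[xs != 0.0]                      # avoid 0/0 at the solution itself

# center-Lipschitz constant (2):  |F'(x) - F'(x*)| <= l0 |x - x*|  on D
l0 = np.max(np.abs(np.exp(xs) - 1.0) / np.abs(xs))

# full Lipschitz constant (4): since F' = exp is convex and increasing,
# the best constant on D is sup_D F'' = exp(1)
l1 = np.exp(1.0)

# restricted-Lipschitz constant (3) on D0 = D ∩ U(x*, 1/l0):
l = np.exp(min(1.0 / l0, 1.0))          # sup of F'' on D0

print(l0, l, l1)   # approximately 1.7183, 1.7896, 2.7183
```

For this model the constants are ℓ0 = e − 1, ℓ = e^{1/(e−1)}, ℓ1 = e, so ℓ0 < ℓ < ℓ1 strictly: restricting attention to D0 genuinely shrinks the Lipschitz constant.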
Consider the partial linearization of F + H at some x ∈ D [6], T_{F+H}(x, ·) : D ⇒ R^i, given by

T_{F+H}(x, y) := F(x) + F'(x)(y − x) + H(y).

Let ρ := sup{s ≥ 0 : U(x*, s) ⊆ D}. The analysis in [6] relies on condition (4). But it turns out that (2) and (3) can be used instead. We have

ℓ0 ≤ ℓ1 and ℓ ≤ ℓ1,

where the ratio ℓ0/ℓ1 can be arbitrarily small [9–11]. The iterates {x_n} belong to D0, which is a more accurate region than the set D used in [6] (see also the numerical section). That is why we obtain smaller Lipschitz constants. We also suppose ℓ0 ≤ ℓ; otherwise, our results hold with ℓ0 replacing ℓ. If one uses (2) where it is needed and (3) everywhere else instead of ℓ1 in the proof of Theorem 2 in [6], we can prove our main ball convergence result for NIPA:

Theorem 1. Suppose: there exist x* ∈ K, ℓ0 > 0, ℓ > 0 such that 0 ∈ F(x*) + H(x*).

Remark 2.
(a) If we further specialize K or lim sup_{n→∞} τ_n, then we obtain an improved version of the results in [4].
(b) If ℓ0 = ℓ = ℓ1, our results reduce to the ones in [6]. Otherwise (i.e., if ℓ0 < ℓ1 or ℓ < ℓ1), our results give a larger convergence ball (so more initial points x0 become available); the error bounds are tighter (i.e., fewer iterates need to be computed to achieve a desired error tolerance), and the uniqueness ball of the solution is enlarged. More precisely, we have ρ̄* ≤ ρ* and e_n ≤ ē_n, where ρ̄*, ē_n stand for the corresponding radius and error bounds found in [6] (using only ℓ1), i.e., with ℓ0, ℓ replaced by ℓ1 (see also (5)–(8)). It is noticeable that these extensions are not obtained using additional conditions, because ℓ0 and ℓ are specializations of the parameter ℓ1. Notice that the results in [6] specialize to the results in [2–4,7,8]. Hence, by extending the results in [6], we have also extended the results in these references too. Examples where (6)–(8), (12) and (13) are strict follow in Section 3. Other examples can be found in [9–11].

Example 1.
Let us consider a system of differential equations governing the motion of an object, given by

g1'(w1) = e^{w1}, g2'(w2) = (e − 1)w2 + 1, g3'(w3) = 1, with g1(0) = g2(0) = g3(0) = 0.

Integrating, this leads to the operator

F(w) = (e^{w1} − 1, ((e − 1)/2) w2^2 + w2, w3)^T for w = (w1, w2, w3)^T ∈ D := U(x*, 1), x* = (0, 0, 0)^T.

Then, the derivative is given as

F'(w) = diag(e^{w1}, (e − 1)w2 + 1, 1).

Notice that conditions (2)–(4) hold if we set ℓ0 = e − 1 < ℓ = e^{1/(e−1)} < ℓ1 = e. Then, the results in [6] are extended, since ρ̄* < ρ*. Moreover, for the error bounds, we have e_n < ē_n.
Finally, in view of ρ̄* < ρ*, the uniqueness ball of the solution x* is extended.
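The gain from the smaller constants can be quantified. As a stand-in for the NIPA radii ρ̄*, ρ* (whose exact formulas are not reproduced here), the sketch below compares, for the unconstrained Newton method, the classical Rheinboldt/Traub convergence radius 2/(3ℓ1), which uses only ℓ1, with the Argyros-type radius 2/(2ℓ0 + ℓ), which uses ℓ0 and ℓ. The constants are those of Example 1.

```python
import math

# Constants from Example 1
l0 = math.e - 1.0                      # center-Lipschitz constant
l = math.exp(1.0 / (math.e - 1.0))     # restricted-Lipschitz constant on D0
l1 = math.e                            # classical Lipschitz constant on D
assert l0 < l < l1

# Radii for the unconstrained Newton method (illustrative stand-in for the
# NIPA radii): classical radius using only l1 vs radius using l0 and l.
rho_bar = 2.0 / (3.0 * l1)             # ~ 0.2453
rho = 2.0 / (2.0 * l0 + l)             # ~ 0.3827
print(rho_bar, rho)
```

The ratio ρ/ρ̄* exceeds 1.5 here, i.e., the set of admissible initial points is enlarged by more than half without any extra hypotheses.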
The same advantages as in Example 1 hold for the error bounds and uniqueness results.

Conclusions
We have extended the applicability of NIPA without additional conditions. The novelty of our study lies in the observation that our new technique generates a subset D0 of D, also containing the iterates x_n. But on D0, the Lipschitz constants ℓ0, ℓ are tighter (see (5)–(7)) than the constant ℓ1 used in [4–8]. Hence, the aforementioned advantages are obtained. It is also worth noticing that no additional conditions are used and that ℓ0, ℓ are specializations of ℓ1, so no additional computation is needed.
Our technique can be used to obtain the same extensions for other algorithms [1,3,4,9].