Article

Extending the Applicability of Newton’s Algorithm with Projections for Solving Generalized Equations

1 Department of Computing and Technology, Cameron University, Lawton, OK 73505, USA
2 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
3 Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, Karnataka 575 025, India
* Author to whom correspondence should be addressed.
Appl. Syst. Innov. 2020, 3(3), 30; https://doi.org/10.3390/asi3030030
Submission received: 10 July 2020 / Revised: 19 July 2020 / Accepted: 21 July 2020 / Published: 22 July 2020
(This article belongs to the Collection Feature Paper Collection in Applied System Innovation)

Abstract

A new technique is developed to extend the convergence ball of Newton's algorithm with projections for solving generalized equations with constraints on the multidimensional Euclidean space. This goal is achieved by locating a more precise region containing the solution than the one used in earlier studies; on this region, the Lipschitz constants are smaller than the ones used before. These advances are obtained without additional conditions. The technique can also be used to extend the applicability of other iterative algorithms. Numerical experiments demonstrate the superiority of the new results.

1. Introduction

Let $F : D \subseteq \mathbb{R}^i \to \mathbb{R}^i$ be a continuously differentiable operator, where $D$ is an open set, $K \subseteq D$ is a closed convex set, and $H : D \rightrightarrows \mathbb{R}^i$ is a set-valued operator with nonempty closed graph. We shall study the generalized equation: find $x \in K$ such that
$$0 \in F(x) + H(x). \quad (1)$$
We are interested in finding a solution $x_*$ of (1), since many problems in nonlinear programming and other disciplines can be reduced to this equation through mathematical modeling.
We utilize the Newton Inexact Projection Algorithm (NIPA), formally defined in [6] by:
Step a. 
Choose $x_0 \in K$, let $\{\tau_n\} \subset [0, \infty)$ be given, and set $n = 0$.
Step b. 
If $0 \in F(x_n) + H(x_n)$, then terminate; otherwise, compute $v_n \in \mathbb{R}^i$ so that
$$0 \in F(x_n) + F'(x_n)(v_n - x_n) + H(v_n).$$
Step c. 
If $v_n \in K$, let $x_{n+1} = v_n$; otherwise choose any $x_{n+1} \in K$ so that
$$x_{n+1} \in P_K(v_n, x_n; \tau_n).$$
Step d. 
Let $n \leftarrow n + 1$, and return to Step b.
Here $P_K(\cdot, x; \tau) : \mathbb{R}^i \rightrightarrows K$ is the operator given as
$$P := P_K(y_1, x; \tau) := \{\, y_2 \in K : \langle y_1 - y_2,\, y_3 - y_2 \rangle \le \tau \|y_1 - x\|^2 \text{ for all } y_3 \in K \,\}.$$
The operator $P$ is called a feasible inexact projection.
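For intuition, the steps above can be sketched in code for the special case $H \equiv \{0\}$ (a constrained nonlinear system) with $K$ a box, where the exact Euclidean projection onto $K$ is used in Step c; an exact projection is a feasible inexact projection for every $\tau_n \ge 0$. The test problem $F$, its derivative, and the box bounds below are illustrative choices, not taken from the paper.

```python
import numpy as np

def nipa(F, dF, project, x0, tol=1e-12, max_iter=50):
    """Sketch of NIPA for H ≡ {0}, using an exact projection in Step c."""
    x = np.asarray(x0, dtype=float)                # Step a: x_0 ∈ K
    for n in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:              # Step b: 0 ∈ F(x_n) + H(x_n)
            return x, n
        v = x + np.linalg.solve(dF(x), -Fx)        # solve F(x_n) + F'(x_n)(v_n - x_n) = 0
        x = project(v)                             # Step c: x_{n+1} ∈ P_K(v_n, x_n; τ_n)
    return x, max_iter                             # Step d loops back to Step b

# Illustrative problem: F(x) = (e^{x1} - 1, x2 + x2^2, x3)^T, K = [0, 1]^3,
# with solution x_* = (0, 0, 0)^T lying in K.
F  = lambda x: np.array([np.exp(x[0]) - 1.0, x[1] + x[1]**2, x[2]])
dF = lambda x: np.diag([np.exp(x[0]), 1.0 + 2.0 * x[1], 1.0])
project = lambda v: np.clip(v, 0.0, 1.0)           # exact projection onto the box K

x_star, iters = nipa(F, dF, project, x0=[0.5, 0.5, 0.5])
print(x_star, iters)
```

The Newton subproblem in Step b is solved exactly here; the inexactness allowed by $P_K$ enters only through the projection tolerance $\tau_n$, which the exact projection satisfies trivially.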
The study of generalized equations was inaugurated by S. M. Robinson in the 1970s [1,2]. A plethora of results has been produced since then using iterative algorithms, since closed-form solutions are rarely attainable [1,2,3,4,5,6,7,8]. It is known that if $H \equiv \{0\}$, then (1) reduces to a constrained nonlinear equation [2,3,4,7,8,9]. In these studies, superlinear and/or quadratic local convergence has been established under the metric regularity or strong metric regularity of the partial linearization of the function that defines the generalized equation. There is a surge in studies of this type, since they provide an abstract model for several applications such as equilibrium problems, linear and nonlinear complementarity problems, and variational inequality problems [1,2,5,9]. A common feature in these studies is the realization that the region of accessibility, more commonly called the convergence ball, is not large in general. Hence, the choice of initial points that guarantee convergence of the iterative algorithm is limited. Moreover, the upper error estimates on the distances are pessimistic in most cases, so more iterations than necessary are carried out to achieve a given error tolerance. Extending the uniqueness ball of the solution is also an important issue. Motivated by the elegant work of Oliveira et al. [6], which generalized the earlier ones [2,3,4,7], as well as by the aforementioned problems, we develop a technique leading to smaller Lipschitz constants than before [6,7,8], and consequently to a finer ball convergence analysis. It is important to note that these extensions are obtained without additional conditions. Our technique is presented for NIPA, but it can be used with the same advantages on other iterative algorithms.
In Section 2 the ball convergence of NIPA is given, whereas the examples appear in Section 3.

2. Ball Convergence

The concept of metric regularity that follows plays a role in the ball convergence of NIPA.
Definition 1
([1]). We say that a set-valued operator $Q : D \rightrightarrows \mathbb{R}^i$ is metrically regular at $y_2 \in D$ for $y_1 \in \mathbb{R}^i$ if $y_1 \in Q(y_2)$, the graph of $Q$ is (locally) closed at $(y_2, y_1)$, and there exist constants $\lambda, \alpha, \beta > 0$ such that
$$U(y_2, \alpha) \subseteq D \quad \text{and} \quad d(w, Q^{-1}(z)) \le \lambda\, d(z, Q(w)) \ \text{ for all } (w, z) \in U(y_2, \alpha) \times U(y_1, \beta).$$
To avoid repetition, more details and properties of these standard concepts can be found in [1,2,5,6] and the references therein. Let $x_* \in K$ be such that $0 \in F(x_*) + H(x_*)$. The following Lipschitz-type conditions are useful.
Definition 2.
Suppose there exists $\ell_0 > 0$ such that
$$\|F'(x) - F'(x_*)\| \le \ell_0 \|x - x_*\| \quad \text{for each } x \in D. \quad (2)$$
Then, we say that $F'$ is center-Lipschitz on $D$.
Consider $D_0 := D \cap U(x_*, \tfrac{1}{\ell_0})$.
Definition 3.
Suppose there exists $\ell > 0$ such that
$$\|F'(x) - F'(y)\| \le \ell \|x - y\| \quad \text{for each } x, y \in D_0. \quad (3)$$
Then, we say that $F'$ is restricted-Lipschitz on $D_0$.
Definition 4.
Suppose there exists $\ell_1 > 0$ such that
$$\|F'(x) - F'(y)\| \le \ell_1 \|x - y\| \quad \text{for each } x, y \in D. \quad (4)$$
Then, we say that $F'$ is Lipschitz on $D$.
Consider the partial linearization of $F + H$ at some $x \in D$ [6], $T_{F+H}(x, \cdot) : D \rightrightarrows \mathbb{R}^i$, given as
$$T_{F+H}(x, z) := F(x) + F'(x)(z - x) + H(z).$$
Let $\rho := \sup\{\, s \ge 0 : U(x_*, s) \subseteq D \,\}$.
Remark 1.
Condition (4) was used in the proof of the main result ([6], Theorem 2), but it turns out that (2) and (3) can be used instead. We have
$$D_0 \subseteq D, \quad (5)$$
resulting in
$$\ell_0 \le \ell_1 \quad (6)$$
and
$$\ell \le \ell_1, \quad (7)$$
where $\frac{\ell_0}{\ell_1}$ can be arbitrarily small [9,10,11]. The iterates $\{x_n\}$ belong to $D_0$, which is a more accurate region than the set $D$ used in [6] (see also the numerical section). That is why we obtain smaller Lipschitz constants. Then, we also suppose
$$\ell_0 \le \ell; \quad (8)$$
otherwise our results hold with $\ell_0$ replacing $\ell$.
If one uses (2) where it is needed, and (3) everywhere else instead of (4), in the proof of Theorem 2 in [6], then we can show our main ball convergence result for NIPA:
Theorem 1.
Suppose: there exist $x_* \in K$, $\ell_0 > 0$, $\ell > 0$ such that $0 \in F(x_*) + H(x_*)$;
conditions (2) and (3) hold;
the mapping
$$D \ni z \mapsto T_{F+H}(x_*, z)$$
is metrically regular at $x_*$ for $0$, with parameters $\lambda > 0$, $\alpha > 0$ and $\beta > 0$; and set
$$\rho_* := \min\left\{ \rho,\ \alpha,\ \frac{2\beta}{2\ell_0 + \ell},\ \frac{2(1 - 2\bar{\tau})}{\lambda\left(\ell + 2\ell_0 + (\ell - 2\ell_0)\,2\bar{\tau}\right)} \right\},$$
where
$$\bar{\tau} := \sup_n \tau_n < \frac{1}{2}.$$
Then, the sequence $\{x_n\}$ starting at $x_0 \in K \cap U(x_*, \rho_*) \setminus \{x_*\}$ and generated by NIPA relative to $\{\tau_n\}$ belongs to $U(x_*, \rho_*) \cap K$, and converges to the solution $x_*$ of (1) so that for each $n = 0, 1, 2, \ldots$
$$e_{n+1} := \|x_{n+1} - x_*\| \le r_n e_n,$$
where
$$r_n = \frac{(1 + 2\tau_n)\lambda \ell\, e_n}{2(1 - \lambda \ell_0 e_n)} + 2\tau_n.$$
Moreover, if $\lim_{n \to \infty} \tau_n = 0$, then the convergence is superlinear, and if $\tau_n = 0$ for each $n$, then
$$e_{n+1} \le \frac{(2\ell_0 + \ell)\lambda}{2}\, e_n^2.$$
Furthermore, $x_*$ is the only solution of (1) in $U(x_*, \rho_*)$, provided that the operator $T_{F+H}$ is strongly metrically regular at $x_*$ for $0$.
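As a quick sanity check, the radius $\rho_*$ can be evaluated numerically. The grouping of the last term below follows our reconstruction of the displayed formula (an assumption); note that when $\ell_0 = \ell = \ell_1$, it reduces to $\frac{2(1-2\bar{\tau})}{\lambda \ell_1 (3 - 2\bar{\tau})}$, i.e., to the corresponding radius of [6].

```python
# Sanity check of the radius rho_* of Theorem 1.  The grouping of the
# last term follows our reading of the displayed formula; with
# l0 = l = l1 it reduces to 2*(1 - 2*tau)/(lam*l1*(3 - 2*tau)),
# i.e. to the radius of [6].
def radius(l0, l, lam, rho, alpha, beta, tau_bar):
    assert 0.0 < l0 <= l and 0.0 <= tau_bar < 0.5      # conditions (8) and on tau_bar
    return min(
        rho,
        alpha,
        2.0 * beta / (2.0 * l0 + l),
        2.0 * (1.0 - 2.0 * tau_bar)
        / (lam * (l + 2.0 * l0 + (l - 2.0 * l0) * 2.0 * tau_bar)),
    )

# With l0 = l = l1 = 2 (and lam = 1, tau_bar = 0) the last term gives
# 2/(3*l1) = 1/3, the radius of [6]; lowering l0 to 1 enlarges the
# radius to 2/(2*l0 + l) = 0.5.
print(radius(2.0, 2.0, 1.0, 10.0, 10.0, 10.0, 0.0))   # 0.333...
print(radius(1.0, 2.0, 1.0, 10.0, 10.0, 10.0, 0.0))   # 0.5
```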
Remark 2.
(a) 
If we further specialize $K$ or $\limsup_{n} \tau_n$, then we obtain improved versions of the results in [4].
(b) 
If $\ell_0 = \ell = \ell_1$, our results reduce to the ones in [6]. Otherwise (i.e., if $\ell_0 < \ell_1$ or $\ell < \ell_1$), our results give a larger ball of convergence (so more initial points $x_0$ become available); the error bounds are tighter (i.e., fewer iterates must be computed to achieve a desired error tolerance); and the uniqueness ball of the solution is enlarged. More precisely, we have
$$\bar{\rho} \le \rho_* \quad (12)$$
and
$$e_n \le \bar{e}_n, \quad (13)$$
where $\bar{\rho}$, $\bar{e}_n$ stand for the corresponding radius and error bounds found in [6] (using only $\ell_1$), obtained if $\ell_0$, $\ell$ are replaced by $\ell_1$ (see also (5)–(8)). It is noticeable that these extensions are not obtained using additional conditions, because $\ell_0$ and $\ell$ are specializations of the parameter $\ell_1$. Notice that the results in [6] specialize to the results in [2,3,4,7,8]. Hence, by extending the results in [6], we have also extended the results in these references. Examples where (6)–(8), (12) and (13) are strict follow in Section 3. Other examples can be found in [9,10,11].

3. Numerical Examples

We present simple examples to show that estimates (5)–(8), (12) and (13) can be strict, so the advantages we obtained apply. For simplicity, we set $H \equiv \{0\}$, $\lambda = 1$, $\tau_n = 0$, $i = 3$ and $D = \bar{U}(0, 1)$.
Example 1.
Let us consider a system of differential equations governing the motion of an object, given by
$$G_1'(v_1) = e^{v_1}, \quad G_2'(v_2) = (e - 1)v_2 + 1, \quad G_3'(v_3) = 1,$$
with initial conditions $G_1(0) = G_2(0) = G_3(0) = 0$. Let $x_* = (0, 0, 0)^T$, and define the function $G = (G_1, G_2, G_3)^T$ on $D$ for $v = (v_1, v_2, v_3)^T$ by
$$G(v) = \left( e^{v_1} - 1,\ \frac{e - 1}{2} v_2^2 + v_2,\ v_3 \right)^T.$$
Then, the derivative is given as
$$G'(v) = \begin{bmatrix} e^{v_1} & 0 & 0 \\ 0 & (e - 1)v_2 + 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$
Notice that conditions (2)–(4) hold if we set $\ell_0 = e - 1 < \ell = e^{\frac{1}{e - 1}} < \ell_1 = e$. Then, the results in [6] are extended, since
$$\bar{\rho} = \frac{2}{3\ell_1} = 0.2453 < \rho_* = \frac{2}{2\ell_0 + \ell} = 0.3827.$$
Moreover, for the error bounds, we have
$$\frac{\ell}{2(1 - \ell_0 e_n)} < \frac{\ell_1}{2(1 - \ell_1 e_n)},$$
so
$$e_n < \bar{e}_n.$$
Finally, in view of $\bar{\rho} < \rho_*$, the uniqueness ball of the solution $x_*$ is extended.
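The constants and radii of Example 1 are easy to verify numerically; a minimal check:

```python
import math

# Numerical check of the constants and radii in Example 1 (lambda = 1, tau_n = 0).
l0 = math.e - 1.0          # center-Lipschitz constant on D
l  = math.exp(1.0 / l0)    # restricted-Lipschitz constant on D0 = D ∩ U(x*, 1/l0)
l1 = math.e                # Lipschitz constant on all of D, as used in [6]

rho_bar  = 2.0 / (3.0 * l1)       # radius of [6]
rho_star = 2.0 / (2.0 * l0 + l)   # enlarged radius of Theorem 1

print(round(rho_bar, 4), round(rho_star, 4))   # 0.2453 0.3827
```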
Example 2.
Let $C[0, 1]$, the space of continuous functions defined on $[0, 1]$, be equipped with the max norm, and let $D = \bar{U}(0, 1)$. Define the function $F$ on $D$ by
$$F(\varphi)(x) = \varphi(x) - 5 \int_0^1 x\theta\, \varphi(\theta)^3\, d\theta.$$
We have that
$$F'(\varphi(\xi))(x) = \xi(x) - 15 \int_0^1 x\theta\, \varphi(\theta)^2 \xi(\theta)\, d\theta, \quad \text{for each } \xi \in D.$$
We have $x_* = 0$ and $\lambda = 1$, so $\ell_0 = 7.5 < \ell = \ell_1 = 15$. Then, the results in [6] are extended, since
$$\bar{\rho} = \frac{2}{3\ell_1} = 0.0444 < \rho_* = \frac{2}{2\ell_0 + \ell} = 0.0667.$$
The same advantages as in Example 1 hold for the error bounds and uniqueness results.
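A discretized Newton iteration illustrates the convergence claim of Example 2; the grid size, trapezoid quadrature, and constant starting function are illustrative choices standing in for the exact integral operator.

```python
import numpy as np

# Discretized Newton iteration for Example 2:
#   F(phi)(x) = phi(x) - 5 * Int_0^1 x*theta*phi(theta)^3 d(theta),
# whose solution is phi* = 0.  The trapezoid rule replaces the integral.
m = 101
t = np.linspace(0.0, 1.0, m)              # quadrature nodes in [0, 1]
w = np.full(m, 1.0 / (m - 1))             # trapezoid weights
w[0] = w[-1] = 0.5 / (m - 1)

def F(phi):
    return phi - 5.0 * t * np.sum(w * t * phi**3)

def J(phi):                               # Jacobian of the discretized F
    return np.eye(m) - 15.0 * np.outer(t, w * t * phi**2)

phi = 0.06 * np.ones(m)                   # ||phi_0 - phi*|| = 0.06 < rho_* ≈ 0.0667
for _ in range(10):
    phi = phi - np.linalg.solve(J(phi), F(phi))

print(np.max(np.abs(phi)))                # essentially zero: the iterates reach phi* = 0
```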

4. Conclusions

We have extended the applicability of NIPA without additional conditions. The novelty of our study lies in the observation that our new technique generates a subset $D_0$ of $D$ that also contains the iterates $x_n$. On $D_0$, the Lipschitz constants $\ell_0$, $\ell$ are tighter (see (5)–(7)) than the constant $\ell_1$ used in [4,5,6,7,8]. Hence, the aforementioned advantages are obtained. It is also worth noticing that no additional conditions are used and that $\ell_0$, $\ell$ are specializations of $\ell_1$, so no additional computation is needed.
Our technique can be used to do the same on other algorithms [1,3,4,9].

Author Contributions

All authors contributed equally to this work. All authors read and approved the final manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Robinson, S.M. Generalized equations and their solutions, Part I: Basic theory. Math. Program. Stud. 1979, 10, 128–141.
  2. Robinson, S.M. Strongly regular generalized equations. Math. Oper. Res. 1980, 5, 43–62.
  3. Aragón Artacho, F.J.; Belyakov, A.; Dontchev, A.L.; López, M. Local convergence of quasi-Newton methods under metric regularity. Comput. Optim. Appl. 2014, 58, 225–247.
  4. Dontchev, A.L.; Rockafellar, R.T. Convergence of inexact Newton methods for generalized equations. Math. Program. 2013, 139, 115–137.
  5. Dontchev, A.L.; Rockafellar, R.T. Implicit Functions and Solution Mappings: A View from Variational Analysis, 2nd ed.; Springer Series in Operations Research and Financial Engineering; Springer: New York, NY, USA, 2014.
  6. De Oliveira, F.R.; Ferreira, O.P.; Silva, G.N. Newton's method with feasible inexact projections for solving constrained generalized equations. Comput. Optim. Appl. 2019, 72, 159–177.
  7. Ferreira, O.P.; Silva, G.N. Local convergence analysis of Newton's method for solving strongly regular generalized equations. J. Math. Anal. Appl. 2018, 458, 481–496.
  8. Goncalves, M.L.N.; Melo, J.G. A Newton conditional gradient method for constrained nonlinear systems. J. Comput. Appl. Math. 2017, 311, 473–483.
  9. Argyros, I.K.; Magreñán, Á.A. Iterative Methods and Their Dynamics with Applications; CRC Press: New York, NY, USA, 2017.
  10. Argyros, I.K.; Hilout, S. Weaker conditions for the convergence of Newton's method. J. Complex. 2012, 28, 364–387.
  11. Argyros, I.K.; George, S. On the complexity of extending the convergence region for Traub's method. J. Complex. 2020, 56, 101423.

Share and Cite

MDPI and ACS Style

Argyros, M.I.; Argyros, G.I.; Argyros, I.K.; Regmi, S.; George, S. Extending the Applicability of Newton’s Algorithm with Projections for Solving Generalized Equations. Appl. Syst. Innov. 2020, 3, 30. https://doi.org/10.3390/asi3030030

AMA Style

Argyros MI, Argyros GI, Argyros IK, Regmi S, George S. Extending the Applicability of Newton’s Algorithm with Projections for Solving Generalized Equations. Applied System Innovation. 2020; 3(3):30. https://doi.org/10.3390/asi3030030

Chicago/Turabian Style

Argyros, Michael I., Gus I. Argyros, Ioannis K. Argyros, Samundra Regmi, and Santhosh George. 2020. "Extending the Applicability of Newton’s Algorithm with Projections for Solving Generalized Equations" Applied System Innovation 3, no. 3: 30. https://doi.org/10.3390/asi3030030
