Abstract
We adopt an alternating direction pattern search method to solve equality- and inequality-constrained nonlinear optimization problems. First, a new augmented Lagrangian function with a nonlinear complementarity function is proposed to transform the original constrained problem into a new unconstrained problem. Under appropriate conditions, we prove a one-to-one correspondence between the local and global optimal solutions of the new unconstrained problem and those of the original constrained problem. In this way, the optimal solution of the original problem can be obtained by solving the new unconstrained optimization problem. Furthermore, based on the characteristics of the new problem, an alternating direction pattern search method is designed and its convergence is demonstrated. Numerical experiments illustrate the effectiveness of the new augmented Lagrangian function and the algorithm.
Keywords: nonlinear programming; nonlinear complementarity function; alternating direction pattern search method
MSC: 90C30; 90C33
1. Introduction
With the coming of the big data era, the demand for optimization in resource allocation, transportation, production management, marketing planning, engineering design, and other areas is constantly increasing. As one of the most basic models, constrained nonlinear optimization problems are widely used in many fields, such as finance [1], power grids [2], dynamics [3,4], logistics management [5], portfolio selection [6], etc. There are many ways to solve constrained nonlinear optimization problems, including the filter method [7], the penalty function method [8,9], and so on. In particular, the alternating direction method has been continuously developed in recent years [10,11]; Ref. [10] proposes a new accelerated proximal-linearized ADMM algorithm.
The augmented Lagrangian approach adds the constraints as penalty terms to the objective function to form a new optimization problem; it was presented by Di Pillo and Grippo around the 1980s, see [12,13,14]. Furthermore, Pu [15,16] proposed a new class of augmented Lagrangian functions and methods with piecewise nonlinear complementarity (NCP) functions for constrained nonlinear programming. A class of new augmented Lagrangian functions and methods with Fischer–Burmeister NCP functions was presented in [17]. A globally convergent generalized Newton method for solving nonsmooth equations with Lipschitz continuous but nondifferentiable functions was presented in [18]. A class of new Lagrangian multiplier methods with NCP functions was presented for solving constrained nonlinear programming with equality and inequality constraints, see [19]. Reference [20] developed and analyzed several new schemes for constructing NCP functions. Reference [21] provided a new class of complementarity functions for regularization problems, which use smooth complementarity functions to transform the complementarity problem into a nonlinear system of equations. In [22,23], a class of new augmented Lagrangian functions with NCP functions is presented for constrained nonlinear programming.
We can use the Lagrange multiplier method to solve constrained nonlinear optimization problems. The following constrained nonlinear optimization problem (P) is studied in this paper:
let , , which are twice continuously differentiable functions.
The Lagrangian function of problem (P) is defined as follows:
where , , and are the multiplier vectors. For convenience, is denoted as the column vector .
A Karush–Kuhn–Tucker (KKT) point satisfies the following conditions:
These conditions are called the first-order necessary optimality conditions for problem (P).
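Since the KKT conditions are only stated abstractly here, the following sketch checks them numerically for a toy problem (minimize x² subject to 1 − x ≤ 0, whose solution is x* = 1 with multiplier μ* = 2). The problem, the function names, and the sign convention g(x) ≤ 0 are illustrative assumptions, not taken from the paper.

```python
# Hypothetical toy problem (not from the paper):
#   minimize f(x) = x^2   subject to  g(x) = 1 - x <= 0.

def kkt_residual(x, mu):
    """Maximum violation of the four first-order KKT conditions:
    stationarity, primal feasibility, dual feasibility, complementarity."""
    grad_f = 2.0 * x                      # f'(x)
    grad_g = -1.0                         # g'(x)
    stationarity = abs(grad_f + mu * grad_g)
    primal = max(0.0, 1.0 - x)            # amount by which g(x) <= 0 fails
    dual = max(0.0, -mu)                  # amount by which mu >= 0 fails
    complementarity = abs(mu * (1.0 - x))
    return max(stationarity, primal, dual, complementarity)
```

A point is (numerically) a KKT point exactly when this residual vanishes; at (x, μ) = (1, 2) all four terms are zero.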
We also define a class of new augmented Lagrangian functions with NCP functions for problem (P). The difference between the new function and previous ones lies in the parameters: previous studies used the same parameter in the penalty terms for the equality and inequality constraints, whereas we add two penalty terms with different parameters, which improves the efficiency of the algorithm. We then prove a one-to-one correspondence between the local and global optimal solutions of the new unconstrained problem and those of the original constrained problem. In particular, considering the characteristics of the new problem, an alternating direction pattern search method is designed and its convergence is demonstrated; simulation experiments are then performed with the designed new augmented Lagrangian function and algorithm.
2. Constructing the New Augmented Lagrangian Function
In order to construct the new augmented Lagrangian function, some necessary preparations and assumptions are given in this section.
An NCP function is a function φ: R² → R that satisfies the following condition: φ(a, b) = 0 if and only if a ≥ 0, b ≥ 0, and ab = 0.
Suppose the point satisfies the following conditions:
- The first-order KKT condition is satisfied;
- For any with . Here, and .
Then, satisfies the strong second-order sufficiency condition [20].
The NCP function used in this paper is ; it is continuously differentiable except at the origin and strongly semismooth at the origin; see [19,20,21].
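The concrete NCP function did not survive extraction above. A common choice with exactly the smoothness properties described (continuously differentiable away from the origin, strongly semismooth at it) is the Fischer–Burmeister function mentioned in [17]; the sketch below assumes that choice.

```python
import math

def fischer_burmeister(a, b):
    """Fischer-Burmeister NCP function:
        phi(a, b) = sqrt(a^2 + b^2) - a - b,
    which vanishes exactly when a >= 0, b >= 0, and a*b = 0."""
    return math.sqrt(a * a + b * b) - a - b
```

For example, fischer_burmeister(0.0, 3.0) and fischer_burmeister(2.0, 0.0) are both zero (complementary pairs), while fischer_burmeister(1.0, 1.0) is not.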
Denote
where is a parameter.
iff , and for any . Denote .
For problem (P), we define the new augmented Lagrangian function F as follows:
where is another parameter.
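The exact form of F was lost in this extraction. As a stand-in, the sketch below shows the classical (Hestenes–Powell–Rockafellar) augmented Lagrangian with the split this paper advocates, namely separate penalty parameters for the equality and inequality terms; every formula here is the textbook construction, not the paper's NCP-based F.

```python
def augmented_lagrangian(f, h, g, x, lam, mu, gamma_eq, gamma_in):
    """Classical augmented Lagrangian for
        min f(x)  s.t.  h_j(x) = 0,  g_i(x) <= 0,
    with distinct penalty parameters gamma_eq and gamma_in."""
    val = f(x)
    for lam_j, h_j in zip(lam, h):          # equality penalty terms
        val += lam_j * h_j(x) + 0.5 * gamma_eq * h_j(x) ** 2
    for mu_i, g_i in zip(mu, g):            # inequality penalty terms
        t = max(0.0, mu_i + gamma_in * g_i(x))
        val += (t * t - mu_i ** 2) / (2.0 * gamma_in)
    return val
```

At a feasible point with inactive inequalities and zero multipliers, the function reduces to f(x), which is the behavior an exact augmented Lagrangian must have.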
Obviously, the KKT conditions are transformed into the following conditions:
Some assumptions are as follows:
A1. , , and are twice Lipschitz continuously differentiable. For example, for , if there is a positive constant L such that all points x in a neighborhood of the point satisfy
then is said to satisfy the second-order Lipschitz condition at the point . The same holds for and .
A2. At the point with for any and , are linearly independent.
A3. The strict complementarity condition holds at any KKT point of problem (P).
Suppose is a KKT point. The condition is called the complementarity condition. If, in addition, , then the strict complementarity condition holds.
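As a concrete reading of A3, the following illustrative helper (not part of the paper) tests strict complementarity at a candidate KKT point from the multipliers mu and the constraint values g_i(x), under the convention that a constraint is active when g_i(x) = 0:

```python
def strict_complementarity_holds(mu, g_vals, tol=1e-10):
    """True iff, for every inequality constraint, mu_i * g_i(x) = 0
    (complementarity) and mu_i, g_i(x) are not both zero (strictness),
    up to the tolerance tol."""
    for m, g in zip(mu, g_vals):
        if abs(m * g) > tol:                      # complementarity fails
            return False
        if abs(m) <= tol and abs(g) <= tol:       # both vanish: not strict
            return False
    return True
```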
A4. The strict second-order sufficiency condition holds at any KKT point of the problem (P).
3. The Equivalence of Local Optimal Solutions
To prove the equivalence of locally optimal solutions, we give the following two lemmas.
Lemma 1.
For a sufficiently large γ, holds if and only if is a KKT point.
Proof.
The index sets are denoted as follows:
By the definition, for any , , , and .
For any , the derivative of F is:
where denotes the following:
Suppose is a KKT point, then for any
Combining this with (5), we obtain
Substituting (4) into the above equation yields
therefore, .
It can be obtained according to (6),
By (7),
As a result, .
Conversely, suppose for some .
Then, we have
By (6), it is easy to see that
as , we have .
Thus, makes the KKT conditions of problem (P) hold.
Furthermore, we assume for a sufficiently large .
By A1, is bounded and we have
where .
By (7), we obtain
By A2, are linearly independent,
and there is
Otherwise, for some or , which is a contradiction to
By (6) and (9), we have
Substituting (9)–(11) into (5), for we obtain
By A2, are linearly independent, and then we obtain
If , then .
If , then .
For a sufficiently large ,
then, .
So, it follows from (13) that ; that means ; in other words, is a KKT point of problem (P). □
Lemma 2.
For and a sufficiently large γ, is a local minimum of if and only if is a local minimum of problem (P).
The proof of this lemma can be found in [17].
4. The Equivalence of Global Optimal Solutions
Lemma 3.
For a sufficiently large γ, suppose is a KKT point of problem (P); then is strongly convex at when A1–A3 hold.
Proof.
Suppose is a KKT point of problem (P), thus
In other words,
Assumption A1 implies . For any , the Hessian matrix of at a KKT point is:
let represent the column vector,
Therefore,
Moreover,
So,
The lemma is proved. □
Lemma 4.
Let be a KKT point of problem (P) in the compact set X. Suppose is the unique global minimum of problem (P) on X; then is the unique global minimum of when A2–A4 hold.
Lemma 4 follows easily from the lemmas above; its proof is omitted in the remainder of this paper.
5. Solution Algorithms
Based on the above four lemmas, we can obtain the solution of problem (P) by solving the unconstrained nonlinear optimization problem. Considering the nature of problem (P), we design an algorithm for solving the nonlinear constrained problem using the alternating direction pattern search method. The specific algorithm is shown below.
| Algorithm 1: Alternating direction pattern search method |
Input: Parameter initialization: and . Give a starting point , and . Let .
Output: Stop when a certain stopping criterion is met.
Step 1: Solve the subproblem. Use Algorithm 2 to solve the problem to obtain . If and , then stop.
Step 2: If for every i, then ; otherwise, . If for every j, then ; otherwise, .
Step 3: Compute and :
Step 4: Set and go to Step 1.
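The multiplier-update formulas in Steps 2–3 were lost in extraction. As an illustrative stand-in, the classical first-order (Hestenes–Powell) updates look as follows; these are the textbook rules under the convention g_i(x) ≤ 0, not necessarily the paper's:

```python
def update_multipliers(lam, mu, h_vals, g_vals, gamma_eq, gamma_in):
    """Classical first-order multiplier updates:
        lambda_j <- lambda_j + gamma_eq * h_j(x_k)
        mu_i     <- max(0, mu_i + gamma_in * g_i(x_k))
    where h_vals and g_vals are the constraint values at the
    current iterate x_k."""
    new_lam = [l + gamma_eq * hv for l, hv in zip(lam, h_vals)]
    new_mu = [max(0.0, m + gamma_in * gv) for m, gv in zip(mu, g_vals)]
    return new_lam, new_mu
```

Note how the projection onto max(0, ·) keeps the inequality multipliers dual feasible, matching the KKT requirement.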
Algorithm 2 is a sub-algorithm of Algorithm 1.
| Algorithm 2: The pattern search method |
Input: Select the initial value .
Output: Stop when a certain stopping criterion is met.
Step 1: Let ; for , compute .
Step 2: Set . If , stop; otherwise, let . Let and go to Step 1.
Note: is the direction of the i-th coordinate axis. Algorithm 2 is also called the pattern search method; its main idea is to search along the n coordinate directions in turn to find a new point, thereby obtaining a pattern direction along which a further search is conducted.
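The note above can be made concrete with a Hooke–Jeeves-style sketch: exploratory moves along the n coordinate axes, followed by a pattern move along the combined successful direction. The step-update and stopping rules here are generic assumptions, since the paper's exact rules did not survive extraction.

```python
def pattern_search(F, x0, step=1.0, shrink=0.5, tol=1e-8, max_iter=1000):
    """Minimize F by coordinate-wise exploratory moves plus a pattern move
    (Hooke-Jeeves-style sketch of the idea behind Algorithm 2)."""
    n = len(x0)
    base = list(x0)
    for _ in range(max_iter):
        if step < tol:
            break
        trial = list(base)
        for i in range(n):                      # explore each axis e_i
            for d in (step, -step):
                cand = list(trial)
                cand[i] += d
                if F(cand) < F(trial):
                    trial = cand
                    break
        if F(trial) < F(base):
            # pattern move: extrapolate along the successful direction
            pattern = [2 * t - b for t, b in zip(trial, base)]
            base = pattern if F(pattern) < F(trial) else trial
        else:
            step *= shrink                      # no progress: refine the mesh
    return base
```

On the hypothetical test function (x1 − 1)² + (x2 + 2)², started from (0, 0), the iterates converge to the minimizer (1, −2) without using any gradient information, which is the appeal of pattern search for the nonsmooth F arising here.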
6. Convergence
In this section, let
Lemma 5.
Let the non-empty set X be the feasible set of problem (P). For , suppose that has a lower bound; then .
Proof.
For any k, proving is equivalent to proving
Because
Therefore,
For any , in other words
so,
Since has a lower bound, there is a such that
The lemma is proved. □
Theorem 1.
Let the non-empty set X be the feasible set of problem (P). Then Algorithm 1 either satisfies the stopping criterion or achieves after a finite number of iterations.
Proof.
Suppose the contrary: after any finite number of iterations, the algorithm neither stops nor achieves . Let
By the assumption of this theorem, is not empty. For any ,
Because and , therefore
where is a higher-order infinitesimal with respect to k.
Since Algorithm 1 does not stop, for any , there exists such that either, for some and , with and ,
or, for some , ,
The above formula shows,
This is a contradiction. That is the end of the proof. □
Theorem 2.
Let the non-empty set X be the feasible set of problem (P), and let and be bounded. Let in Algorithm 1. If Algorithm 1 satisfies the stopping criterion at the k-th iteration, then is a solution to problem (P); otherwise, any accumulation point of is a solution to problem (P).
Proof.
Suppose Algorithm 1 stops at the k-th iteration; then
It is clear that for any ,
Since , , by Equations (16) and (21),
so is a solution of .
Otherwise, Algorithm 1 does not satisfy the stopping criterion at the k-th iteration; for any minimum of the sequence , where , or for any ,
As
It is clear that is a local minimum of the primal constrained problem (P).
That is the end of the proof. □
7. Numerical Experiment
To verify the reliability of the algorithm, some numerical experiments were conducted; the results are shown in Table 1. The test problems in Table 1 are constrained nonlinear programming problems from [24], and each problem number matches the corresponding number in [24].
Table 1.
Computational results of Algorithm 1.
In the experiments, the parameters are set to , , and , with as the termination condition. Different initial points are selected, and “NIT, NF, and NG” are used as the evaluation criteria: NIT denotes the number of iterations; NF is the number of function evaluations, which increases by one only when all functions have been evaluated once; NG is the number of gradient evaluations.
It is clear from Table 1 that NIT, NF, and NG are small for different problems with different initial points, which means that the new augmented Lagrangian method is effective, as shown by these numerical results.
In conclusion, a new augmented Lagrangian function with nonlinear complementarity functions is proposed to solve constrained nonlinear optimization problems. Under the given conditions, the solutions of the new unconstrained problem are equivalent to the solutions of the original constrained problem. Furthermore, an alternating direction pattern search method was designed, and its convergence was demonstrated. Experiments were also conducted, and the results demonstrate the effectiveness of the presented new augmented Lagrangian function and algorithms.
Author Contributions
Conceptualization, A.F.; Methodology, A.F. and X.C.; Software, X.C.; Validation, A.F. and Y.S.; Resources, J.F.; Data curation, A.F.; Writing—original draft, A.F. and J.F.; Writing—review & editing, X.C.; Supervision, Y.S.; Project administration, Y.S. All authors have read and agreed to the published version of the manuscript.
Funding
Supported by NSFC (No. 12071112, 12101195).
Data Availability Statement
Not applicable.
Acknowledgments
The authors would very much like to thank the reviewers for their helpful suggestions.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Gilli, M.; Maringer, D.; Schumann, E. Numerical Methods and Optimization in Finance; Academic Press: Cambridge, MA, USA, 2019. [Google Scholar]
- Yan, M.; Shahidehpour, M.; Paaso, A. Distribution network-constrained optimization of peer-to-peer transactive energy trading among multi-microgrids. IEEE Trans. Smart Grid 2020, 12, 1033–1047. [Google Scholar] [CrossRef]
- Guo, N.; Lenzo, B.; Zhang, X. A real-time nonlinear model predictive controller for yaw motion optimization of distributed drive electric vehicles. IEEE Trans. Veh. Technol. 2020, 69, 4935–4946. [Google Scholar] [CrossRef]
- Shi, D.; Wang, S.; Cai, Y. Model Predictive Control for Nonlinear Energy Management of a Power Split Hybrid Electric Vehicle. Intell. Autom. Soft Comput. 2020, 26, 27–39. [Google Scholar] [CrossRef]
- Zhang, Y.; Kou, X.; Song, Z. Research on logistics management layout optimization and real-time application based on nonlinear programming. Nonlinear Eng. 2022, 10, 526–534. [Google Scholar] [CrossRef]
- Janabi, A.; Mazin, A. Optimization algorithms and investment portfolio analytics with machine learning techniques under time-varying liquidity constraints. J. Model. Manag. 2022, 17, 864–895. [Google Scholar] [CrossRef]
- Liu, Z.; Reynolds, A. Robust multiobjective nonlinear constrained optimization with ensemble stochastic gradient sequential quadratic programming-filter algorithm. SPE J. 2021, 26, 1964–1979. [Google Scholar] [CrossRef]
- Nguyen, B.T.; Bai, Y.; Yan, X. Perturbed smoothing approach to the lower order exact penalty functions for nonlinear inequality constrained optimization. Tamkang J. Math. 2019, 50, 37–60. [Google Scholar] [CrossRef]
- Tsipianitis, A.; Tsompanakis, Y. Improved Cuckoo Search algorithmic variants for constrained nonlinear optimization. Adv. Eng. Softw. 2020, 149, 102865. [Google Scholar] [CrossRef]
- Liu, J.; Chen, J.; Zheng, J. A new accelerated positive-indefinite proximal ADMM for constrained separable convex optimization problems. J. Nonlinear Var. Anal. 2022, 6, 707–723. [Google Scholar]
- Zhang, X.L.; Zhang, Y.Q.; Wang, Y.Q. Viscosity approximation of a relaxed alternating CQ algorithm for the split equality problem. J. Nonlinear Funct. Anal. 2022, 43, 335. [Google Scholar]
- Di Pillo, G.; Grippo, L. A new class of augmented Lagrangians in nonlinear programming. SIAM J. Control Optim. 1979, 17, 618–628. [Google Scholar]
- Di Pillo, G.; Grippo, L. A new augmented Lagrangian function for inequality constraints in nonlinear programming problems. J. Optim. Theory Appl. 1982, 36, 495–519. [Google Scholar]
- Di Pillo, G.; Lucidi, S. On exact augmented Lagrangian functions in nonlinear programming. Nonlinear Optim. Appl. 1996, 25, 85–100. [Google Scholar]
- Pu, D.G. A class of augmented Lagrangian multiplier function. J. Inst. Railw. Technol. 1984, 5, 45. [Google Scholar]
- Pu, D.; Yang, P. A class of new Lagrangian multiplier methods. In Proceedings of the 2013 Sixth International Conference on Business Intelligence and Financial Engineering, Hangzhou, China, 14–16 November 2013; pp. 647–651. [Google Scholar]
- Pu, D.G.; Zhu, J. New Lagrangian Multiplier Methods. J. Tongji Univ. (Nat. Sci.) 2010, 38, 1387–1391. [Google Scholar]
- Pu, D.G.; Tian, W.W. Globally inexact generalized Newton methods for nonsmooth equation. J. Comput. Appl. Math. 2002, 138, 37–49. [Google Scholar] [CrossRef]
- Shao, Y.F.; Pu, D.G. A Class of New Lagrangian Multiplier Methods with NCP function. J. Tongji Univ. (Nat. Sci.) 2008, 36, 695–698. [Google Scholar]
- Galántai, A. Properties and construction of NCP functions. Comput. Optim. Appl. 2012, 52, 805–824. [Google Scholar] [CrossRef]
- Yu, H.D.; Xu, C.X.; Pu, D.G. Smooth Complementarily Function and 2-Regular Solution of Complementarity Problem. J. Henan Univ. Sci. 2011, 32, 1. [Google Scholar]
- Feng, A.F.; Xu, C.X.; Pu, D.G. New Form of Lagrangian Multiplier Methods. In Proceedings of the 2012 Fifth International Joint Conference on Computational Sciences and Optimization, Harbin, China, 23–26 June 2012; Volume 74, pp. 302–306. [Google Scholar]
- Feng, A.F.; Zhang, L.M.; Xue, Z.X. Alternating Direction Method Of Solving Nonlinear Programming With Inequality Constrained. In Applied Mechanics and Materials; Trans Tech Publications Ltd.: Kanton Schwyz, Switzerland, 2014; Volume 651, pp. 2107–2111. [Google Scholar]
- Schittkowski, K. More Test Examples for Nonlinear Programming Codes; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).