Efficiency and Convergence Insights in Large-Scale Optimization Using the Improved Inexact–Newton–Smart Algorithm and Interior-Point Framework
Abstract
1. Introduction
2. Interior-Point Method (IPM) Background
- Complexity justification for the long-step IPM. In long-step IPMs, iterates are allowed to deviate farther from the central path by working in the wide neighborhood $\mathcal{N}_{-\infty}(\gamma)$. To maintain feasibility, the step length is chosen by a fraction-to-the-boundary rule (Equation (9)), and the inexact Newton analysis yields the duality-gap recursion of Equation (8).
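In the standard long-step analysis (cf. Wright, 1997), these objects take the following conventional form; this is a sketch in generic notation ($\gamma$, $\delta$ are unspecified constants) and may differ in detail from the paper's exact Equations (8) and (9):

```latex
% Wide neighborhood of the central path used by long-step IPMs
\mathcal{N}_{-\infty}(\gamma) \;=\;
  \bigl\{(x,y,s) \in \mathcal{F}^{\circ} \;:\; x_i s_i \ge \gamma\,\mu,\ i = 1,\dots,n \bigr\},
\qquad \mu = \frac{x^{\top} s}{n},
% and the per-iteration duality-gap recursion, for some \delta > 0
% depending on \gamma and the inexactness level,
\mu_{k+1} \;\le\; \Bigl(1 - \frac{\delta}{n}\Bigr)\mu_k .
```

Iterating the recursion gives the familiar $O(n \log(1/\varepsilon))$ iteration bound for reducing the duality gap below a tolerance $\varepsilon$.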
3. Analysis of the Short-Step Method in IPM
3.1. Local Model of Complementarity
3.2. Theoretical Implications
- Role of the inexactness level and its impact on convergence.
- Practical choice. Choose the inexactness level so that the contraction condition of the analysis holds (e.g., a small fixed value), which guarantees the required contraction. In implementations, an adaptive rule decreases the inexactness level as the duality gap shrinks (analogous to the forcing-term strategy in (12)), preserving robustness far from the solution while improving the local rate as the iterates approach optimality.
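The adaptive rule just described can be sketched as follows. The function name, the cap `eta_max`, and the linear dependence on the duality gap `mu` are illustrative assumptions, not the paper's exact rule from (12):

```python
def forcing_term(mu, eta_max=0.1, kappa=0.5):
    """Inexactness level eta_k for the k-th inexact Newton solve.

    Far from the solution (large duality gap mu), eta is capped at
    eta_max for robustness; near the solution, eta shrinks in
    proportion to mu, improving the local convergence rate.
    All constants here are illustrative placeholders.
    """
    return min(eta_max, kappa * mu)

# The inner linear solver would then be stopped once
#   ||residual|| <= forcing_term(mu) * ||right-hand side||.
```

The cap keeps early iterations cheap, while the proportional decrease recovers a fast local rate as `mu -> 0`.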
4. Analysis of the Long-Step Method in IPM
- Rationale for the fraction-to-the-boundary parameter $\tau \in (0,1]$ and its effect. If the minimizer in (9) is attained at an index $j$ with $\Delta x_j < 0$, then for $\tau = 1$ we obtain $\alpha = -x_j/\Delta x_j$, leading to $x_j + \alpha\,\Delta x_j = 0$; analogously, if it is attained at an index with $\Delta s_j < 0$ then $s_j + \alpha\,\Delta s_j = 0$. Thus, $\tau = 1$ may step exactly to the boundary. For any $\tau \in (0,1)$, the step length satisfies $\alpha \le \tau\,(-x_j/\Delta x_j)$ on the active index, ensuring $x_j + \alpha\,\Delta x_j \ge (1-\tau)\,x_j > 0$, and similarly $s_j + \alpha\,\Delta s_j \ge (1-\tau)\,s_j > 0$. Hence, $\tau < 1$ guarantees strict interior feasibility ($x > 0$, $s > 0$) and preserves the neighborhood conditions assumed in the analysis. Moreover, $\tau$ affects stability and convergence: smaller values produce shorter, more conservative steps that enhance stability, whereas values closer to one yield faster progress but approach the boundary more aggressively.
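The rule above can be sketched in a few lines; the function and variable names are our own, and Equation (9) is assumed to be the standard componentwise-minimum form:

```python
import numpy as np

def fraction_to_boundary(x, dx, s, ds, tau):
    """Step length alpha from the fraction-to-the-boundary rule.

    Returns the largest alpha in (0, 1] such that
    x + alpha*dx >= (1 - tau)*x and s + alpha*ds >= (1 - tau)*s
    componentwise, so tau < 1 keeps the iterate strictly interior.
    """
    def max_step(v, dv):
        neg = dv < 0
        if not np.any(neg):
            return 1.0  # no component moves toward the boundary
        return min(1.0, tau * np.min(-v[neg] / dv[neg]))

    return min(max_step(x, dx), max_step(s, ds))

# With tau < 1 the new iterate stays strictly positive:
x = np.array([1.0, 2.0]); dx = np.array([-1.0, 0.5])
s = np.array([0.5, 1.0]); ds = np.array([0.25, -2.0])
alpha = fraction_to_boundary(x, dx, s, ds, tau=0.995)
assert np.all(x + alpha * dx > 0) and np.all(s + alpha * ds > 0)
```

Here the binding index is in the $s$-block ($-s_2/\Delta s_2 = 0.5$), so `alpha = 0.995 * 0.5 = 0.4975`, strictly short of the boundary.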
5. Inexact–Newton–Smart Test (INS) Method
5.1. Newton System with Regularization
5.2. Step Length and Stopping
5.3. Inexactness and Contraction
6. Equality-Constrained Newton Phase (ECNP)
6.1. Inexact Linear Solves and Forcing Condition
6.2. Merit Function and Backtracking
6.3. Convergence Statement
7. Improvement of the INS Algorithm
7.1. Hessian Regularization
7.2. Quasi–Newton Update
7.3. Preconditioned Iterative Solver
7.4. Sensitivity Analysis of Step Strategies
- The step-length scaling factor controlling the damping of the search direction.
- The tolerance threshold was used as the stopping criterion for residual norms.
- The regularization parameter that governs the Hessian modification in the INS framework.
- Table 1 summarizes the key parameters used to generate synthetic data and run the improved INS and IPM algorithms. These parameters determine the scale of the problem and influence the stability and convergence of both methods.
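The problem dimensions of Table 1 translate into a small configuration block; only the three dimension values below are taken from the table, the key names are our own, and the remaining Table 1 parameters (tolerance, fraction-to-the-boundary, centering) are omitted because their numeric values are not reproduced here:

```python
# Problem dimensions from Table 1; key names are illustrative.
config = {
    "num_samples": 100,    # sample instances in the synthetic data set
    "num_variables": 2,    # decision variables per instance
    "num_constraints": 1,  # constraints per instance
}
```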
8. Practical Implications
9. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References


| Parameter | Value | Description |
|---|---|---|
| $N$ | 100 | Number of sample instances |
| $n$ | 2 | Number of decision variables |
| $m$ | 1 | Number of constraints |
| $\epsilon$ |  | Tolerance (inexactness) |
| $\tau$ |  | Fraction-to-the-boundary parameter |
| $\sigma$ |  | Centering parameter |
|  |  | Algorithmic constants |
| Sample | $x$ | $\lambda$ |  | Iterations | Accuracy |
|---|---|---|---|---|---|
| 1 | [0.0017, 0.9983] | [0.0079] | 0.9967 | 100 | 0.749147 |
| 2 | [0.0034, 0.9966] | [0.0158] | 0.9933 | 100 | 0.749147 |
| … | … | … | … | … | … |
| 100 | [0.0056, 0.9944] | [0.0177] | 0.9888 | 100 | 0.749147 |
| Sample | $x$ | $\lambda$ |  | Iterations | Accuracy |
|---|---|---|---|---|---|
| 1 | [0.0017, 0.9983] | [0.0079] | 0.9967 | 64 | 0.751450 |
| 2 | [0.0034, 0.9966] | [0.0158] | 0.9933 | 70 | 0.751450 |
| … | … | … | … | … | … |
| 100 | [0.0056, 0.9944] | [0.0177] | 0.9888 | 72 | 0.751450 |
| Sample | Improved INS Time (s) | Interior Point Time (s) |
|---|---|---|
| 1 | 0.23 | 0.13 |
| 2 | 0.24 | 0.14 |
| … | … | … |
| 100 | 0.22 | 0.12 |
| Algorithm | Percentage Close to Termination Test I |
|---|---|
| Improved INS | 32.0 |
| Interior Point | 100.0 |
| Metric | Improved INS | Interior Point |
|---|---|---|
| Average | 0.696548 | 0.678785 |
| Average Iterations | 100.0 | 68.11 |
| Average Accuracy | 0.749147 | 0.751450 |
| Average Execution Time (s) | 0.23 | 0.13 |
| Average Inner Iterations | 147.11 | 68.11 |
Share and Cite
Bagheri Renani, N.; Jaefarzadeh, M.; Ševčovič, D. Efficiency and Convergence Insights in Large-Scale Optimization Using the Improved Inexact–Newton–Smart Algorithm and Interior-Point Framework. Mathematics 2025, 13, 3657. https://doi.org/10.3390/math13223657

