Abstract
In this paper, we consider a general quadratic problem (P) with linear constraints that are not necessarily linearly independent. To solve this problem, we use a new algorithm based on the inertia-controlling method, replacing the condition on the Lagrange multiplier vector μ by the resolution of a linear system obtained from the Karush–Kuhn–Tucker matrix (KKT matrix), in order to determine the minimizing direction of (P) and thus compute the step length in the general cases: indefinite, concave, and convex.
1. Introduction
Currently, the domain of optimization attracts considerable interest from the academic and industrial communities; see, for instance, [,,]. The variety of existing approaches for solving a given problem in this domain, together with efficient algorithmic implementations, opens up many perspectives and diverse applications. Numerical methods [,,,,] are typically studied in two directions: the first concerns the essential principle of a general method, which consists of enumerating, often implicitly, the set of solutions of the optimization problem, together with techniques to detect possible failures; the second concerns problems of large size.
The quadratic programming problem and the theory characterizing its global solutions are appropriate for questions where the objective function has a general form: convex, concave, or indefinite. The quadratic problem can be written in several equivalent forms, which helps to simplify it and to find good optimality conditions for this type of problem in the general case; at present, however, such conditions are only available for general quadratic programming with equality constraints [].
Recent theoretical advances in non-convex optimization have further expanded the understanding of these challenges, offering new insights into the convergence and global optimality of iterative methods for non-convex problems [].
Despite extensive research, there remains a gap in identifying effective methods that guarantee the computation of a global minimum in reasonable time, especially while avoiding the numerical difficulties associated with ill-conditioned Hessian matrices. For instance, the search for global and local feasible solutions in constrained multi-objective optimization has been explored, highlighting the complexity of ensuring global optimality in non-convex settings [,]. Additionally, the development of equivalent sufficient conditions for global optimality in quadratically constrained quadratic programs has been a focus of recent studies, providing a theoretical foundation for addressing these challenges [].
In our extensive review of the literature on the problem stated below, we have not found an effective method that computes the global minimum in reasonable time while avoiding the numerical difficulties linked to ill-conditioning of the Hessian.
With the new global sufficient conditions that we propose, we can apply any iterative method converging to a local solution of problem (P) in order to compute its global solutions; in the indefinite case, problem (P) may have more than one global solution. We avoid writing forms equivalent to problem (P), which allows us to work directly with the original problem. Evaluating the analytical solution is costly, and numerical problems (computational operations, rounding errors) can arise, especially when the domain is bounded. However, this is not the case with an indefinite reduced Hessian matrix, for which questions also remain about the global solution.
In this study, we are interested in the global sufficient conditions for solving the general quadratic optimization problem whose constraints are linear, as follows:
Here, A is an (m × n) matrix, not necessarily of full rank; H is a Hessian matrix of order n (not necessarily positive definite); b and c are two given vectors; and e is a scalar.
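The displayed statement of (P) did not survive extraction. A form consistent with the symbols just introduced can be written as follows; the direction of the constraint relation is an assumption, since only "linear constraints" is stated:

```latex
(P):\qquad \min_{x \in \mathbb{R}^{n}} \; \phi(x) = \tfrac{1}{2}\, x^{T} H x + c^{T} x + e
\quad \text{subject to} \quad A x \le b .
```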
At present, conditions that would make it possible to solve problem (P) in this generality are not available. With such conditions, we can apply any iterative method converging to a local solution of problem (P) in order to compute the global solution of (P).
The analytic solution is computationally intensive, and numerical issues may occur even when this solution is known to exist, especially when the constraint domain is bounded or unbounded with a reduced Hessian that is not indefinite. A question also arises about the global solution in other cases, for example, when the reduced Hessian matrix is indefinite.
2. Notation and Glossary
We present, in this section, the notation for the different symbols and variables used in our work (Table 1).
Table 1.
Notation and glossary.
3. Existence of Global Solution
We consider the general quadratic problem (P) under the linear constraints Δ.
To show the existence of the global solution, we state and prove two lemmas. The first concerns the determination of the minimizing direction q (a direction of negative curvature, or a descent direction). The second concerns the step length, which makes our algorithm finite.
3.1. Lemma 1
Let be a stationary point obtained at iteration k. Then, the following linear system (1), obtained from the KKT matrix, has a unique solution in ,
Note that this solution exists and is unique if, and only if, is a positive definite (D.P.) matrix, with Zk being a kernel matrix of Ak such that .
This unique solution is a minimizing direction of in the following cases:
Case 1 with .
Case 2 with .
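The formula of the linear system (1) itself is elided in the text above. In inertia-controlling methods, it is customarily the KKT system built from the active-constraint matrix, so the following reconstruction is an assumption consistent with the surrounding symbols (q the direction, μ the multiplier vector, Ak the active-constraint matrix at iteration k):

```latex
\begin{pmatrix} H & A_k^{T} \\ A_k & 0 \end{pmatrix}
\begin{pmatrix} q \\ \mu \end{pmatrix}
=
\begin{pmatrix} -g_k \\ 0 \end{pmatrix},
\qquad g_k = H x_k + c . \tag{1}
```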
Proof.
Given that is a positive definite (D.P.) matrix, it is nonsingular, and, according to the expression referred to in [], the K-matrix is also nonsingular.
Consequently, the linear system (1) has a unique solution .
Now, let us distinguish the cases where the vector q represents the minimization direction of problem (P).
Let us recall that, among the important properties of the general quadratic programming problem and its iterates at the point , we have, from [],
In case 1: if with , we directly have .
This corresponds to a direction of negative curvature, which is a minimizing direction.
In case 2: if with , q is a descent direction at the point and has positive curvature. Here, it also results in .
□
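A minimal numerical sketch of Lemma 1 can be given, assuming that system (1) is the standard KKT system [H Aᵀ; A 0][q; μ]ᵀ = [−g; 0]ᵀ of inertia-controlling methods; the function names and the sample data below are illustrative, not from the paper:

```python
import numpy as np

def kkt_direction(H, A, g):
    """Solve the KKT system [H A^T; A 0][q; mu] = [-g; 0] for the
    direction q and the multiplier estimate mu (assumed form of system (1))."""
    m, n = A.shape
    K = np.block([[H, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([-g, np.zeros(m)]))
    return sol[:n], sol[n:]

def reduced_hessian_is_pd(H, A, tol=1e-10):
    """Check that Z^T H Z is positive definite, where the columns of Z
    span the kernel of A (computed here via the SVD)."""
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol * max(s.max(), 1.0)))
    Z = Vt[rank:].T
    return bool(np.all(np.linalg.eigvalsh(Z.T @ H @ Z) > tol))

# Illustrative data: a positive definite H and one active constraint.
H = np.diag([2.0, 1.0, 3.0])
A = np.array([[1.0, 1.0, 1.0]])
g = np.array([1.0, -2.0, 0.0])   # gradient H x + c at the current point
q, mu = kkt_direction(H, A, g)
```

The computed q then satisfies A q = 0 and H q + Aᵀ μ = −g, and uniqueness holds because the reduced Hessian is positive definite, as Lemma 1 requires.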
3.2. Lemma 2
We take the same data as in Lemma 1 and consider the two following cases:
Case 1 with .
Case 2 with , such that .
Where
and
Then is a global solution of our problem (P).
Proof.
Take the expression used in the proof of Lemma 1.
In case 1: according to the expression considered in the proof of Lemma 1, we obtain the following:
In case 2: As where
Multiplying Expression (4) by , we obtain
Substituting , we obtain
This expression is always positive.
That is, , according to the expression considered in the proof of Lemma 1.
() leads to
We can say that is a global solution of problem (P). □
4. New Sufficient Conditions for Global Solution of (P)
The new sufficient conditions for global solutions to the problem (P) that we propose are shown in the following steps:
1.
2. is a positive definite matrix
3. In this third condition, we propose the following new step: we replace the condition on the Lagrange multiplier by the following linear system obtained from the KKT matrix.
This system has the unique minimizing direction q with the step length in all cases, i.e., in the convex case such that
In the indefinite case such that
where
Proof.
Before showing this result, we recall the sufficient conditions for local solutions of problems of type (P), given in the following theorem, and use them to find the global solution. □
Theorem 1.
([]). We consider the quadratic problem (P) and let be the point obtained by minimizing the objective function ϕ after performing k iterations starting from the initial point . We are given two matrices, and , such that the columns of the latter constitute a basis for the kernel of the former. Suppose there is a positive vector , such that .
If the expression is positive for all
Then, is a local solution of the problem (P).
Now, we return to show the proof of our original result by considering
(because the case of is evidently )
We have .
Multiplying by qᵀ, we obtain the following:
(which can be positive for a global solution)
We distinguish the two following cases:
Case 1. If , then the linear system (1) yields a decreasing direction q, so the point is not a global solution of problem (P). Therefore, we continue the computation until a better solution is found.
Case 2. If we have and
The following two situations appear:
The first: if , then the point is a global solution of our problem (P), where and
The second: if , then we continue to perform the same minimization technique at the point until finding the global solution .
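The case distinction above can be written as a small decision routine. The precise inequalities did not survive extraction, so the thresholds below are assumptions following the usual curvature/slope reading of inertia-controlling methods: negative curvature along q rules out global optimality at the current point, while positive curvature with a non-negative slope certifies it.

```python
import numpy as np

def classify_point(q, g, H, tol=1e-10):
    """Hypothetical sketch of the Section 4 case distinction.
    curv = q^T H q (curvature along q); slope = g^T q (directional slope)."""
    curv = q @ H @ q
    slope = g @ q
    if curv < -tol:
        return "case 1: not global, continue"
    if slope >= -tol:
        return "case 2: global solution"
    return "case 2: descent possible, continue"

# Illustrative check on a 2x2 indefinite Hessian (data not from the paper).
H = np.diag([1.0, -2.0])
verdict = classify_point(np.array([0.0, 1.0]), np.array([1.0, 0.0]), H)
```

Here the second coordinate direction has curvature −2, so the routine reports Case 1 and the iteration continues.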
5. Inertia-Controlling Method (I.C.M)
We note that researchers have abandoned this method, which we believe can be improved and applied to help solve our problem by introducing some changes (see Section 4 above) to obtain better results, especially since it relies on second-order conditions, whether related to the nature of the reduced Hessian matrix or to the nature of the minimizing direction.
Definition 1.
(Stationary Point) [].
We say the point = is a stationary point of problem (P) if the two following conditions hold:
1.
2. (positive definite matrix)
Definition 2.
(Minimization Direction) [].
We say the vector q is a minimizing direction of problem (P) at the point if one of the following conditions holds:
1.
2. (Direction of negative curvature)
Now, we give some important assumptions of the I.C.M [] technique used in our work.
A1. The objective function is bounded from below on the feasible region.
A2. All constraints active at the point x are in the working set.
A3. The working-set matrix A has full row rank.
A4. The point x satisfies the first-order necessary conditions for optimality.
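Assumptions A3 and A4 can be verified numerically. The sketch below is illustrative (function and variable names are not from the paper): A3 asks that the working-set matrix have full row rank, and A4 that the gradient lie in its row space, i.e. g = A_workᵀ μ for some multiplier vector μ.

```python
import numpy as np

def check_a3_a4(A_work, g, tol=1e-8):
    """Return (A3 holds, A4 holds) for a working-set matrix and gradient.
    A3: full row rank.  A4: first-order conditions g = A_work^T mu
    are solvable, tested via a least-squares fit."""
    a3 = np.linalg.matrix_rank(A_work) == A_work.shape[0]
    mu, *_ = np.linalg.lstsq(A_work.T, g, rcond=None)
    a4 = np.linalg.norm(A_work.T @ mu - g) < tol
    return bool(a3), bool(a4)

A_work = np.array([[1.0, 0.0]])          # one active constraint (illustrative)
ok = check_a3_a4(A_work, np.array([2.0, 0.0]))   # both assumptions hold
bad = check_a3_a4(A_work, np.array([0.0, 1.0]))  # A4 fails: g not in row space
```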
Theorem 2.
Let be an iterate defined by ; then the estimate is always decreasing in all cases, where q is the direction resulting from the following linear system.
is the step length of the function at the point .
Proof.
We distinguish three cases to show that the objective function decreases at the point , as follows:
1. Convex case
When is a positive value, we can replace the direction q by (−q), which gives a negative value ; therefore, we compute the value of by using the following formula:
Thus, we have . Multiplying both terms by , we obtain the result
Now, when is a negative value, we repeat the same process, changing only the direction (−q) into the direction (q), and we obtain the same result.
2. Indefinite case
When is a positive value with (if it is zero, we can add a constraint associated with this ), we replace the direction q by the direction (−q), which gives a negative value . Therefore, we compute the value of by using the following formula:
Thus, we have
Multiplying both terms by , we obtain the result
Now, when is a negative value, we repeat the same process, changing only the direction (−q) into the direction (q), and we obtain the same result.
3. Singular case
When is a positive value with (if it is zero, we add a constraint associated with this ), we replace the direction q by the direction (−q), which gives a negative value . Therefore, we compute the value of by using the following formula:
Thus, we have
Multiplying by , this directly implies the following result
Now, when is a negative value, we repeat the same process, changing only the direction (−q) into the direction (q), and we obtain the same result. □
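The step-length formula used in the cases above is elided in the text; for the convex case, the standard choice for a quadratic objective is the exact line-search step α = −(gᵀq)/(qᵀHq), which is what the following sketch assumes (names and data are illustrative):

```python
import numpy as np

def phi(x, H, c, e=0.0):
    """Quadratic objective phi(x) = 1/2 x^T H x + c^T x + e."""
    return 0.5 * x @ H @ x + c @ x + e

def exact_step(g, q, H):
    """Exact minimizing step along q when q^T H q > 0 (convex case).
    With alpha = -(g^T q)/(q^T H q), the decrease is
    phi(x + alpha q) - phi(x) = -(g^T q)^2 / (2 q^T H q) <= 0."""
    return -(g @ q) / (q @ H @ q)

# Illustrative data (not from the paper).
H = np.diag([2.0, 1.0])
c = np.array([-1.0, 0.0])
x = np.zeros(2)
g = H @ x + c
q = -g                        # descent direction, positive curvature here
alpha = exact_step(g, q, H)
x_new = x + alpha * q
```

With these data, α = 0.5 and φ decreases from 0 to −0.25, matching the monotone decrease Theorem 2 asserts for the convex case.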
6. Algorithm
Algorithm for Finding Global Solution
Step 1: Choose an arbitrary initial solution in .
Step 2: Use the active-point algorithm [], which returns several results: the matrix associated with the active point at iteration k, its kernel matrix , and that satisfies the following linear system:
and the gradient .
Step 3: Call the subroutine below to find a stationary point that satisfies the following:
- is a positive definite matrix,
- ∈ Fr(Δ) (see Definition 5.1).
Step 4: Find a minimizing direction q and the step length corresponding to .
Step 5: Stopping conditions:
- If :
  - If : return to Step 3.
  - Else ( with ): proceed to Step 6.
- Else if > 0:
  - If ≥ 0: proceed to Step 6.
  - Else if < 0 with : proceed to Step 7.
  - Else if < 0: take Min() and return to Step 3.
- Else if = 0:
  - If < 0: return to Step 3.
  - Else ( ≥ 0): proceed to Step 6.
Step 6: is the global solution of (P). Terminate.
Step 7: is the global solution of (P). Terminate.
Subroutine to calculate the stationary point.
- To determine the stationary point , we solve the following linear system after verifying that is a positive definite (D.P.) matrix.
Such that
is the matrix associated with the active point obtained at iteration k.
is the gradient obtained at iteration k from the active-point algorithm.
- 2. If q = 0, then the stationary point is
Otherwise, we change the active point, set k = k + 1, and repeat.
End
* is the step length of Case 2 in Step 5 of the algorithm. That is to say
We make the two following remarks:
- Case 3 is of singular type, so it accepts linear constraints whose domain has a boundary.
- Each iteration, based on the resolution of linear systems, is characterized by the simplicity of its programming, as well as by the nature of the reduced Hessian matrix and the step length .
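The skeleton of the algorithm above can be sketched for the equality-constrained convex case; the active-set management of Steps 2 and 5 and the sign tests are elided here, and all names and data are illustrative:

```python
import numpy as np

def icm_sketch(H, A, b, c, x0, tol=1e-10, max_iter=50):
    """Minimal iteration skeleton: repeatedly solve the KKT system for a
    direction q and take the full step until q vanishes.  This covers only
    the equality-constrained convex case; the full algorithm also updates
    the working set and applies the sign tests of Step 5."""
    x = np.asarray(x0, dtype=float)
    m, n = A.shape
    K = np.block([[H, A.T], [A, np.zeros((m, m))]])
    mu = np.zeros(m)
    for _ in range(max_iter):
        g = H @ x + c
        sol = np.linalg.solve(K, np.concatenate([-g, b - A @ x]))
        q, mu = sol[:n], sol[n:]
        if np.linalg.norm(q) < tol:
            break
        x = x + q      # the full KKT step is exact for a quadratic model
    return x, mu

# Illustrative: min x1^2 + x2^2 subject to x1 + x2 = 1, minimizer (0.5, 0.5).
H = np.diag([2.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.zeros(2)
x_star, mu = icm_sketch(H, A, b, c, np.zeros(2))
```

Starting from the infeasible point (0, 0), the first step restores feasibility and optimality simultaneously, and the loop then terminates with q = 0, illustrating the finite termination the step-length lemma aims at.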
7. Numerical Results and Comparative Analysis
Our work is justified by comparison with more recent methods on benchmarks used in the literature [,,,,]. We note that researchers abandoned this track, the inertia-controlling method, for a long time, although with a small modification of the method we managed to obtain good results.
In addition to the simplicity of verifying the global optimality conditions, the results have also been improved in the convex cases (Table 2).
Table 2.
Data for convex case examples.
The results have also been improved in the convex case (Table 2: data for convex case examples; Table 3: results for convex case examples), but not in the concave case (Table 4: data for concave case examples).
Table 3.
Results for convex case examples.
Table 4.
Data for concave case examples.
However, in the concave case (Table 5) and in the indefinite case (Tables 6 and 7), our results are obtained with a minimal number of iterations, and the determined solution is much better, which confirms the originality of our solving technique.
Table 5 presents the results for the concave case examples, Table 6 provides the data for the indefinite case examples, and Table 7 shows the corresponding results. Our method achieves these results with a minimal number of iterations, and the obtained solutions are significantly better, confirming the originality and efficiency of the proposed technique. The convex case results are obtained using the Path Method with Weight (P.M.W.) with a starting point ; the new results are obtained with the same starting point .
Table 5.
Results for concave case examples.
| Examples [] | φ(xG) (Reference) | xa (New) | xG and φ(xG) (New) |
|---|---|---|---|
| 1. | −39.58 (ERROR) | | −23.435 after 4 iterations |
| 2. | −42.09 | | −42.0973 after 5 iterations |
| 3. | −42.09 | | −42.0973 after 3 iterations |
Table 6.
Data for indefinite case examples.
| Example | H | Constraints | c |
|---|---|---|---|
| 1 of [] | | | |
| 2 of [] | | | |
| 3 of [] | 48 35 | | |
| 4 of [] | | | |
| 5 of [] | | | |
Table 7.
Results for indefinite case examples.
| Example | xG or φ(xG) (Reference) | xa | xG and φ(xG) (New) |
|---|---|---|---|
| 1 of [] | This solution is not global | We start with this point and we proved it is not global | −2.75 After 5 iterations |
| 2 of [] | This solution is global | We start with this point and we find effectively it is global | .75 After 5 iterations |
| 3 of [] | This solution is global | We start with this point | 0.329 After 5 iterations |
| 4 of [] | = −86.411 | We start with this point | After 4 iterations |
| 5 of [] | = 207.071 | We start with this point | 0.485 After 6 iterations |
8. Conclusions
In this article, we have shown that it is possible to check the global optimality conditions to decide whether the point computed by the Inertia-Controlling Method (I.C.M.) is a global solution or not.
When an optimization problem is solved via the necessary and sufficient conditions for its solutions, and these conditions are satisfied at the point denoted in our problem (P) by , this point becomes a solution of the considered problem.
However, if the relevant conditions are not satisfied, then we cannot obtain the exact solution.
Through these new sufficient conditions for global optimality, we can, on the one hand, apply any descent method converging to a local solution of problem (P) and decide whether any point x of the domain is a global solution of our mathematical problem.
On the other hand, the condition that we changed in this work depends on the solution of a linear system stated earlier at the local solution of the objective function . The resulting direction q gives the best decrease of the objective function at in terms of minimization, because all negative eigenvalues of the matrix are taken into account, and we have also shown the decrease of the objective function at each active point in the convex, indefinite, and singular cases.
The results obtained provide high efficiency and reliability compared with the methods used to solve quadratic programming problems (for example, interior-point methods) in terms of accuracy and response time (a large number of iterations may otherwise be needed to find the solution).
Consequently, we recommend that the I.C.M. be revisited and used in further academic studies, encouraging university researchers to apply this mathematical method to enrich their research in applied mathematics and economics.
Author Contributions
Conceptualization: S.C. and L.D.; methodology, software, formal analysis, investigation, resources, data curation, writing (original draft preparation, review and editing): S.C.; visualization, supervision, project administration: L.D. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.
Acknowledgments
We thank all the teachers and researchers who helped us, directly or indirectly, to realize this scientific work.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Gill, P.E.; Murray, W.; Saunders, M.A.; Wright, M.H. Inertia-controlling methods for general quadratic programming. SIAM Rev. 1991, 33, 1–36.
- Altman, A.; Gondzio, J. Regularized symmetric indefinite systems in interior point methods for linear and quadratic optimization. Optim. Methods Softw. 1999, 11, 275–302.
- Grippo, L.; Sciandrone, M. Introduction to Interior Point Methods. In Introduction to Methods for Nonlinear Optimization; Springer International Publishing: Cham, Switzerland, 2023; pp. 497–527.
- Fu, Y.; Liu, D.; Chen, J.; He, L. Secretary bird optimization algorithm: A new metaheuristic for solving global optimization problems. Artif. Intell. Rev. 2024, 57, 123.
- Kim, S.; Kojima, M. Equivalent sufficient conditions for global optimality of quadratically constrained quadratic programs. Math. Methods Oper. Res. 2025, 101, 73–94.
- Kebbiche, Z. Etude et Extensions d'Algorithmes de Points Intérieurs pour la Programmation Non Linéaire. Ph.D. Thesis, Université de Sétif, Sétif, Algeria, 2008.
- Azevedo, A.T.; Oliveira, A.R.L.; Soares, S. Interior point method for long-term generation scheduling of large-scale hydrothermal systems. Ann. Oper. Res. 2009, 169, 55–80.
- Morales, J.L.; Nocedal, J.; Wu, Y. A sequential quadratic programming algorithm with an additional equality constrained phase. IMA J. Numer. Anal. 2012, 32, 553–579.
- Dussault, J.P. Programmation Non Linéaire; Département d'informatique, Université de Sherbrooke: Sherbrooke, QC, Canada, 2011.
- Gondzio, J.; Yildirim, E.A. Global solutions of nonconvex standard quadratic programs via mixed integer linear programming reformulations. J. Glob. Optim. 2021, 81, 293–321.
- Khouni, S.E.; Menacer, T. Nizar optimization algorithm: A novel metaheuristic algorithm for global optimization and engineering applications. J. Supercomput. 2024, 80, 3229–3281.
- Messine, F.; Jourdan, N. L'Optimisation Globale par Intervalles: De l'Étude Théorique aux Applications; Habilitation à Diriger des Recherches, Institut National Polytechnique de Toulouse: Toulouse, France, 2006.
- Ouaoua, M.L.; Khelladi, S. Efficient Descent Direction of a Conjugate Gradient Algorithm for Nonlinear Optimization. Nonlinear Dyn. Syst. Theory 2025, 25, 91–100.
- Choufi, S. Development of a procedure for finding active points of linear constraints. J. Appl. Comput. Math. 2017, 6, 1000352.
- Wu, Z.Y.; Bai, F.S. Global optimality conditions for mixed nonconvex quadratic programs. Optimization 2009, 58, 39–47.
- Sun, W.; Yuan, Y.X. Optimization Theory and Methods: Nonlinear Programming; Springer Science+Business Media: New York, NY, USA, 2006; Volume 1.
- Sun, X.L.; Li, D.; McKinnon, K.I.M. On saddle points of augmented Lagrangians for constrained nonconvex optimization. SIAM J. Optim. 2005, 12, 1128–1146.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).