AppliedMath
  • Article
  • Open Access

3 November 2025

An Algorithm Based on the Modified Sufficient Conditions of the Inertia-Controlling Method for the Global Solution of a General Quadratic Problem

Department of Mathematics, Faculty of Mathematics and Informatics, University Batna2, Batna 05000, Algeria
* Author to whom correspondence should be addressed.

Abstract

In this paper, we consider a general quadratic problem (P) with linear constraints that are not necessarily linearly independent. To solve this problem, we use a new algorithm based on the Inertia-Controlling method, replacing the positivity condition on the Lagrange multiplier vector μ by the resolution of a linear system obtained from the Karush–Kuhn–Tucker matrix (KKT matrix), in order to determine the minimizing direction of (P) and hence compute the step length in the general cases: indefinite, concave, and convex.

1. Introduction

Currently, the domain of optimization attracts considerable interest from the academic and industrial communities; see, for instance, [,,]. The variety of existing approaches for solving a given problem in this domain, together with efficient algorithmic implementations, opens up many perspectives and diverse applications. Many numerical methods [,,,,] are used in two ways: the first concerns the essential principle of such a general method, which consists of enumerating, often implicitly, the set of solutions of the optimization problem, together with techniques to detect possible failures; the second concerns large-scale problems.
The continuous-time quadratic programming problem and the theory characterizing its global solutions are appropriate when the objective function is of general form: convex, concave, or indefinite. The quadratic problem can be written in several equivalent forms, which helps to simplify it and to find good optimality conditions for this type of problem in the general case; at present, however, such conditions are only known for general quadratic programming with equality constraints [].
Recent theoretical advances in non-convex optimization have further expanded the understanding of these challenges, offering new insights into the convergence and global optimality of iterative methods for non-convex problems [].
Despite extensive research, there remains a gap in identifying effective methods that guarantee the computation of a global minimum in reasonable time, especially while avoiding numerical difficulties associated with ill-conditioned Hessian matrices. For instance, the search for global and local feasible solutions in constrained multi-objective optimization has been explored, highlighting the complexity of ensuring global optimality in non-convex settings [,]. Additionally, the development of equivalent sufficient conditions for global optimality in quadratically constrained quadratic programs has been a focus of recent studies, providing a theoretical foundation for addressing these challenges [].
In our extensive review of the literature on this problem, stated below, we have not found an effective method that finds the global minimum in a reasonable time while avoiding the numerical difficulties linked to ill-conditioning of the Hessian.
With the new global sufficient conditions that we propose, any iterative method converging to a local solution of problem (P) can be applied with the goal of computing its global solutions; in the indefinite case, problem (P) may have more than one global solution. We avoid rewriting the optimization problem (P) in equivalent forms, which lets us work directly with the original problem. The cost of the analytical approach is high, and numerical problems (computational operations, rounding errors) can arise, especially when the domain is bounded. This is not the case, however, with an indefinite reduced Hessian matrix, where questions also arise about the global solution, denoted by x_G.
In this study, we are interested in global sufficient conditions for solving the general quadratic optimization problem whose constraints are linear, as follows:

(P)   Min φ(x) = (1/2) xᵗ H x + cᵗ x + e,   for x ∈ Δ,
where Δ = { x ∈ ℝⁿ : A x ≤ b }.

Here, A is an (m × n) matrix, not necessarily of full rank, and H is a Hessian matrix of order n (not necessarily positive definite); b and c are two given vectors, and e is a scalar.
At present, conditions that would solve problem (P) in full generality are not available. With such conditions, any iterative method converging to a local solution of problem (P) could be applied with the goal of computing the global solution of (P).
The analytic solution is computationally intensive and numerical issues may occur, even knowing that this solution exists, especially when the constraint domain is bounded or unbounded with a reduced Hessian that is not indefinite. A question also arises about the global solution in the other cases, for example, when the reduced Hessian matrix is indefinite.
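As a concrete illustration of the problem data, the objective φ and the feasible set Δ of (P) can be sketched in Python with NumPy. The instance below is hypothetical (an indefinite Hessian on a box), and none of its numbers come from the paper's benchmark tables.

```python
import numpy as np

# Toy instance of problem (P): minimize phi(x) = 1/2 x^t H x + c^t x + e
# over Delta = { x in R^n : A x <= b }.  All data here are hypothetical.

def phi(x, H, c, e):
    """Objective value of the general quadratic problem (P)."""
    return 0.5 * x @ H @ x + c @ x + e

def is_feasible(x, A, b, tol=1e-10):
    """Membership test for Delta = { x : A x <= b }."""
    return bool(np.all(A @ x <= b + tol))

H = np.array([[2.0, 0.0], [0.0, -1.0]])     # indefinite Hessian (eigenvalues 2, -1)
c = np.array([-1.0, 0.0])
e = 0.0
A = np.array([[ 1.0,  0.0], [ 0.0,  1.0],
              [-1.0,  0.0], [ 0.0, -1.0]])  # box constraints |x_i| <= 1
b = np.ones(4)

x = np.array([0.5, 0.5])
value = phi(x, H, c, e)                      # -0.375 at this feasible point
```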

2. Notation and Glossary

We present, in this section, the designation of the different symbols and variables used in our work (Table 1).
Table 1. Notation and glossary.

3. Existence of Global Solution x_G

We consider the general quadratic problem (P) under the linear constraints Δ.
(P)   Min φ(x) = (1/2) xᵗ H x + cᵗ x + e,   for x ∈ Δ,
where Δ = { x ∈ ℝⁿ : A x ≤ b }.

To show the existence of the global solution, we state and prove two lemmas. The first concerns the determination of the minimization direction q (a negative-curvature direction or a descent direction). The second concerns the step length α_k, which makes our algorithm finite.

3.1. Lemma 1

Let x_s be a stationary point obtained at iteration k. Then the following linear system (1), obtained from the KKT matrix, has a single solution at x_s:

[ H    A_kᵗ ] [ q ]   [ 0 ]
[ A_k   0  ] [ u ] = [ 1 ]        (1)

Note that this solution exists and is unique if, and only if, Z_kᵗ H Z_k is a positive definite (P.D.) matrix, where Z_k is a kernel matrix of A_k such that A_k Z_k = 0_{m_k × n_k}.
This single solution is a minimizing direction of φ in the following cases:
Case 1: qᵗ H q < 0 with g_kᵗ q ≤ 0.
Case 2: qᵗ H q > 0 with g_kᵗ q < 0.
Proof. 
Given that Z_kᵗ H Z_k is a P.D. matrix, it is nonsingular, and according to the expression referred to in [], the KKT matrix [[H, A_kᵗ], [A_k, 0]] is also nonsingular.
Consequently, the linear system (1) has a single solution (q, u)ᵗ.
Now, let us distinguish the cases where the vector q is a minimization direction of problem (P).
Recall that, among the important properties of the general quadratic programming problem iterated at the point x_k, we have from []

φ_{k+1} = φ_k + (α_k²/2) qᵗ H q + α_k g_kᵗ q

In case 1: if qᵗ H q < 0 with g_kᵗ q ≤ 0, we directly have φ_{k+1} < φ_k.
This corresponds to a negative-curvature direction, which is a minimization direction.
In case 2: if qᵗ H q > 0 with g_kᵗ q < 0, q is a descent direction at the point x_k with positive curvature; here too, φ_{k+1} < φ_k, with the step length

ᾱ_{q,c} = − g_kᵗ q / (qᵗ H q)   (convex case)
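The resolution of system (1) can be sketched numerically as follows. This is a minimal NumPy sketch with hypothetical data, assuming the right-hand side (0, 1) of system (1); the helper names `kernel_matrix` and `solve_kkt` are ours. The data are chosen so that H is indefinite overall while the reduced Hessian Z_kᵗ H Z_k is positive definite, which makes the KKT matrix nonsingular as in the proof.

```python
import numpy as np

def kernel_matrix(A_k, tol=1e-10):
    """Z_k whose columns span the kernel of A_k (A_k Z_k = 0), via SVD."""
    _, s, vt = np.linalg.svd(A_k)
    rank = int(np.sum(s > tol))
    return vt[rank:].T

def solve_kkt(H, A_k, rhs):
    """Solve [[H, A_k^t], [A_k, 0]] (q, u) = rhs for the direction q and u."""
    m, n = A_k.shape
    K = np.block([[H, A_k.T], [A_k, np.zeros((m, m))]])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]

# Hypothetical data: H indefinite, but Z_k^t H Z_k positive definite.
H = np.diag([1.0, 1.0, -1.0])
A_k = np.array([[0.0, 0.0, 1.0]])            # one active constraint, full row rank
Z_k = kernel_matrix(A_k)
assert np.all(np.linalg.eigvalsh(Z_k.T @ H @ Z_k) > 0)

rhs = np.concatenate([np.zeros(3), [1.0]])   # right-hand side (0, 1) of system (1)
q, u = solve_kkt(H, A_k, rhs)                # unique solution: q = (0, 0, 1), u = 1
```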

3.2. Lemma 2

We take the same data as Lemma 1, with one of the two following cases satisfied:
Case 1: qᵗ H q ≥ 0 with g_kᵗ q ≥ 0.
Case 2: qᵗ H q < 0 with g_kᵗ q > 0, such that α_k < ᾱ_{q,i},
where

ᾱ_{q,i} = − 2 g_kᵗ q / (qᵗ H q)

and

α_k = Min { (b_i − a_iᵗ x_k) / (a_iᵗ q) : a_iᵗ q > 0, for i = 1, …, m }.

Then x_k is a global solution of our problem (P).
Proof. 
Take the expression used in the proof of Lemma 1.
In case 1: according to that expression, since qᵗ H q ≥ 0 and g_kᵗ q ≥ 0, we obtain

φ_{k+1} ≥ φ_k

In case 2: since α_k < ᾱ_{q,i}, where ᾱ_{q,i} = − 2 g_kᵗ q / (qᵗ H q), multiplying both sides of this inequality by qᵗ H q < 0 reverses it:

α_k qᵗ H q > − 2 g_kᵗ q

Multiplying by α_k / 2 > 0, we obtain

(α_k²/2) qᵗ H q > − α_k g_kᵗ q

that is, (α_k²/2) qᵗ H q + α_k g_kᵗ q > 0. According to the expression considered in the proof of Lemma 1,
(φ_{k+1} = φ_k + (α_k²/2) qᵗ H q + α_k g_kᵗ q), this leads to

φ_{k+1} ≥ φ_k
We can say that x k is a global solution of problem (P). □
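The step length α_k and the global test of Lemma 2 admit a direct numerical sketch. The data below are hypothetical, and the helper names `max_feasible_step` and `lemma2_global` are introduced here for illustration only.

```python
import numpy as np

def max_feasible_step(A, b, x_k, q):
    """alpha_k = min{ (b_i - a_i^t x_k) / (a_i^t q) : a_i^t q > 0 }."""
    Aq = A @ q
    mask = Aq > 0
    if not np.any(mask):
        return np.inf
    return float(np.min((b[mask] - A[mask] @ x_k) / Aq[mask]))

def lemma2_global(H, g_k, q, alpha_k):
    """Sufficient global test of Lemma 2 at a stationary point x_k."""
    qHq, gq = q @ H @ q, g_k @ q
    if qHq >= 0 and gq >= 0:              # case 1: no decrease possible
        return True
    if qHq < 0 and gq > 0:                # case 2: step capped by alpha_{q,i}
        return alpha_k < -2.0 * gq / qHq
    return False

# Hypothetical data matching case 2 of Lemma 2
H = np.array([[-1.0, 0.0], [0.0, 1.0]])
g_k = np.array([1.0, 0.0])
q = np.array([1.0, 0.0])                  # q^t H q = -1 < 0, g_k^t q = 1 > 0
A = np.array([[1.0, 0.0]])
b = np.array([2.0])
x_k = np.array([0.5, 0.0])

alpha_k = max_feasible_step(A, b, x_k, q)  # (2 - 0.5)/1 = 1.5 < alpha_{q,i} = 2
```

Here the test succeeds because the feasible step 1.5 stays below the threshold ᾱ_{q,i} = 2.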

4. New Sufficient Conditions for Global Solution of (P)

The new sufficient conditions for a global solution of problem (P) that we propose are as follows:
1. g_k = A_kᵗ μ;
2. Z_kᵗ H Z_k is a positive definite matrix;
3. In this third condition, we propose the following new step. We replace the condition μ > 0 on the Lagrange multiplier by the following linear system obtained from the KKT matrix:

[ H    A_kᵗ ] [ q ]   [ 0 ]
[ A_k   0  ] [ u ] = [ 1 ]

whose unique direction q satisfies the step-length condition α_k < ᾱ_q in all cases; that is, in the convex case, α_k < ᾱ_{q,c} such that

ᾱ_{q,c} = − g_kᵗ q / (qᵗ H q)

and in the indefinite case, α_k < ᾱ_{q,i} such that

ᾱ_{q,i} = − 2 g_kᵗ q / (qᵗ H q)

where

α_k = Min { (b_i − a_iᵗ x_k) / (a_iᵗ q) : a_iᵗ q > 0, i = 1, …, m }.
Proof. 
Before proving this result, we recall the sufficient conditions for the local solutions of problems of type (P), given in the following theorem, and use these results to find the global solution. □
Theorem 1. 
([]). We consider the quadratic problem (P) and let x* ∈ Δ be the point obtained by minimizing the objective function φ after performing k iterations starting from the initial point x_0. We are given two matrices, A_k and Z_k, such that the columns of the latter constitute a basis for the kernel of the former. Suppose there is a positive vector γ₁* > 0 such that ∇φ(x*) − A_kᵗ γ₁* = 0_n.
If zᵗ H z is positive for all z ≠ 0 in the kernel of A_k (A_k Z_k = 0_{m_k × n_k}),
then x* is a local solution of the problem (P).
Now, we return to the proof of our original result by considering
q ≠ 0_n (since for q = 0_n, evidently qᵗ g_k = qᵗ H q = 0).
We have g_k = A_kᵗ μ.
Multiplying by qᵗ, and using A_k q = 𝟙 from system (1), where 𝟙 denotes the all-ones vector, we obtain:

qᵗ g_k = (A_k q)ᵗ μ = 𝟙ᵗ μ   (i)

(which can be positive for a global solution).
We distinguish the two following cases:
Case 1. If qᵗ g_k < 0, then the direction q obtained from the linear system (1) is a decreasing direction, so the point x_k is not a global solution of problem (P). Therefore, we continue the computation until a better solution is found.
Case 2. If qᵗ g_k ≥ 0, then we have φ_{k+1} = φ_k + α_k g_kᵗ q + (α_k²/2) qᵗ H q with
qᵗ H q < 0.
The following two situations appear:
The first: if α_k < ᾱ_q, then the point x_k is a global solution of our problem (P), where α_k = Min{ (b_i − a_iᵗ x_k)/(a_iᵗ q) : a_iᵗ q > 0, i = 1, …, m }, ᾱ_{q,c} = − g_kᵗ q / (qᵗ H q), and ᾱ_{q,i} = − 2 g_kᵗ q / (qᵗ H q).
The second: if α_k ≥ ᾱ_q, then we continue to perform the same minimization technique at the point x_k until the global solution x_G is found.
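The first two sufficient conditions above can be checked numerically as sketched below. This is a minimal sketch with hypothetical data; the helper names are ours, and the multiplier μ is recovered by least squares for illustration only (no sign condition is imposed on μ, since Section 4 replaces μ > 0 by the resolution of system (1)).

```python
import numpy as np

def check_stationarity(g_k, A_k, tol=1e-8):
    """Condition 1: g_k = A_k^t mu for some mu, recovered by least squares."""
    mu, *_ = np.linalg.lstsq(A_k.T, g_k, rcond=None)
    return bool(np.allclose(A_k.T @ mu, g_k, atol=tol)), mu

def check_reduced_hessian(H, Z_k):
    """Condition 2: Z_k^t H Z_k positive definite (all eigenvalues > 0)."""
    return bool(np.all(np.linalg.eigvalsh(Z_k.T @ H @ Z_k) > 0))

# Hypothetical data (not from the paper)
H = np.array([[2.0, 0.0], [0.0, -3.0]])
A_k = np.array([[0.0, 1.0]])     # active constraint on x_2
Z_k = np.array([[1.0], [0.0]])   # kernel of A_k
g_k = np.array([0.0, 4.0])       # g_k = A_k^t mu with mu = 4

ok1, mu = check_stationarity(g_k, A_k)
ok2 = check_reduced_hessian(H, Z_k)      # Z_k^t H Z_k = [[2]] is P.D.
```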

5. Inertia-Controlling Method (I.C.M)

We note that researchers have abandoned this method, which we believe can be improved and applied to solve our problem by introducing some changes (see Section 4 above) to obtain better results, especially since it relies on second-order conditions, whether related to the nature of the reduced Hessian matrix or to the nature of the minimization direction.
Definition 1. 
(Stationary Point) [].
We say the point x_k = x_s is a stationary point of problem (P) if the two following conditions hold:
1. g_k = A_kᵗ μ;
2. Z_kᵗ H Z_k is positive definite (P.D.).
Definition 2. 
(Minimization Direction) [].
We say a vector q ∈ ℝⁿ is a minimization direction of problem (P) at the point x_k if one of the following conditions holds:
1. g_kᵗ q < 0 (descent direction);
2. qᵗ H q < 0 (direction of negative curvature).
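Definition 2 translates directly into a small test; the sketch below uses hypothetical data and a function name of our own choosing.

```python
import numpy as np

def is_minimization_direction(H, g_k, q):
    """Definition 2: q is a minimization direction at x_k if it is a
    descent direction (g_k^t q < 0) or a direction of negative
    curvature (q^t H q < 0)."""
    return bool(g_k @ q < 0 or q @ H @ q < 0)

# Hypothetical data: H indefinite, gradient orthogonal to both test directions
H = np.array([[1.0, 0.0], [0.0, -2.0]])
g_k = np.zeros(2)

flag_neg_curv = is_minimization_direction(H, g_k, np.array([0.0, 1.0]))  # negative curvature
flag_pos_curv = is_minimization_direction(H, g_k, np.array([1.0, 0.0]))  # neither condition
```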
Now, we give some important assumptions of the I.C.M [] technique used in our work.
A1. The objective function φ is bounded from below in the feasible region.
A2. All active constraints in x point are in the working set.
A3. The working-set matrix A has full row rank.
A4. The point x satisfies the first-order necessary conditions for optimality.
Theorem 2. 
Let x_k be the point of the k-th iteration, defined by x_{k+1} = x_k + α_k q. Then the quantity (α_k²/2) qᵗ H q + α_k g_kᵗ q is always negative (the objective decreases) in all cases, where q is the direction resulting from the following linear system:

[ H    A_kᵗ ] [ q ]   [ −g_k ]
[ A_k   0  ] [ u ] = [  0   ]

and α_k is the step length of the function φ at the point x_k.
Proof. 
We distinguish three cases to show the decrease of the objective function at the point x_{k+1}, as follows:
1. Convex case
When g_kᵗ q is positive, we can replace the direction q by (−q), which gives a negative value g_kᵗ (−q); we then compute the value of α_k by the following formula:

α_k ← Min { Min{ (b_i − a_iᵗ x_k) / (a_iᵗ q) : a_iᵗ q > 0 },  − g_kᵗ q / (qᵗ H q),  1 }

Thus, we have (α_k/2) qᵗ H q + g_kᵗ q < 0. Multiplying both terms by α_k, we obtain the result
(α_k²/2) qᵗ H q + α_k g_kᵗ q < 0.
Now, when g_kᵗ q is negative, we repeat the same process, replacing only the direction (−q) by (q), and we obtain the same result.
2. Indefinite case
When g_kᵗ q is positive with α_k ≠ 0 (if it is zero, we can add a constraint associated with this α_k), we replace the direction q by (−q), which gives a negative value g_kᵗ (−q). We then compute the value of α_k by the following formula:

α_k ← Min { Min{ (b_i − a_iᵗ x_k) / (a_iᵗ q) : a_iᵗ q > 0 },  2 g_kᵗ q / (qᵗ H q) }

Thus, we have
(α_k/2) qᵗ H q + g_kᵗ q < 0.
Multiplying both terms by α_k, we obtain the result
(α_k²/2) qᵗ H q + α_k g_kᵗ q < 0.
Now, when g_kᵗ q is negative, we repeat the same process, replacing only the direction (−q) by (q), and we obtain the same result.
3. Singular case
When g_kᵗ q is positive with α_k ≠ 0 (if it is zero, we add a constraint associated with this α_k), we replace the direction q by (−q), which gives a negative value g_kᵗ (−q). We then compute the value of α_k by the following formula:

α_k ← Min { (b_i − a_iᵗ x_k) / (a_iᵗ q) : a_iᵗ q > 0 }

Thus, we have
g_kᵗ q < 0.
Since qᵗ H q = 0 in this case, multiplying by α_k directly implies the following result
(α_k²/2) qᵗ H q + α_k g_kᵗ q < 0.
Now, when g_kᵗ q is negative, we repeat the same process, replacing only the direction (−q) by (q), and we obtain the same result. □
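The three step-length formulas of Theorem 2 can be gathered into one sketch. It assumes, as in the proof, that q has already been flipped so that g_kᵗ q < 0; the data are hypothetical and the function name is ours.

```python
import numpy as np

def step_length(H, g_k, q, A, b, x_k, tol=1e-12):
    """Step-length choice of Theorem 2, by curvature case.  As in the
    proof, q is assumed already flipped so that g_k^t q < 0."""
    qHq, gq = q @ H @ q, g_k @ q
    Aq = A @ q
    mask = Aq > tol
    ratio = float(np.min((b[mask] - A[mask] @ x_k) / Aq[mask])) if np.any(mask) else np.inf
    if qHq > tol:                              # 1. convex case
        return min(ratio, -gq / qHq, 1.0)
    if qHq < -tol:                             # 2. indefinite case
        return min(ratio, 2.0 * gq / qHq)      # positive, since gq < 0 and qHq < 0
    return ratio                               # 3. singular case (q^t H q = 0)

# Hypothetical indefinite instance: g_k^t q = -1 < 0 and q^t H q = -1 < 0
H = np.array([[-1.0, 0.0], [0.0, 1.0]])
g_k = np.array([-1.0, 0.0])
q = np.array([1.0, 0.0])
A = np.array([[1.0, 0.0]])
b = np.array([10.0])
x_k = np.zeros(2)

alpha = step_length(H, g_k, q, A, b, x_k)                      # min(10, 2) = 2
decrease = 0.5 * alpha**2 * (q @ H @ q) + alpha * (g_k @ q)    # -4 < 0
```

In the indefinite case both terms of the model decrease are negative, so any positive step reduces φ; the formula merely caps the step at the feasible and curvature bounds.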

6. Algorithm

Algorithm for Finding Global Solution

Step 1: Choose an arbitrary initial solution x 0 in I R n .
Step 2: Use the active-point algorithm [], which returns several results: the matrix A_0 associated with the active point x_a at iteration k, with its kernel matrix Z_0 satisfying the linear system

A_0 Z_0 = 0_{m_0 × n_0},

and the gradient g_0.
Step 3: Call the “subroutine in below” to find a stationary point x s that satisfies the following:
  • A s Z s = 0 m s × n s
  • Z s t H Z s is a positive definite matrix,
  • x s ∈ Fr(Δ) (see Definition 5.1).
Step 4: Find a minimization direction q and the step length α_s which corresponds to x_s.
Step 5: Stopping conditions:
  • If qᵗ H q < 0:
    o If (g_kᵗ q ≤ 0) or (ᾱ_{q,i} < α_s with g_kᵗ q > 0): → Return to Step 3.
    o Else (g_kᵗ q > 0 with α_s ≤ ᾱ_{q,i}): → Proceed to Step 6.
  • Else if qᵗ H q > 0:
    o If (g_kᵗ q ≥ 0): → Proceed to Step 6.
    o Else if (g_kᵗ q < 0 with ᾱ_{q,c} ≤ α_s): → Proceed to Step 7.
    o Else (g_kᵗ q < 0): → Take Min(α_k, ᾱ_{q,c}) and return to Step 3.
  • Else if qᵗ H q = 0:
    o If (g_kᵗ q < 0): → Return to Step 3.
    o Else (g_kᵗ q ≥ 0): → Proceed to Step 6.
Step 6: x k is the global solution of (P). Terminate.
Step 7: x k + 1 is the global solution of (P). Terminate.
Subroutine to calculate the stationary point.
1. To determine the stationary point x_s, we solve the following linear system after verifying that Z_kᵗ H Z_k is a positive definite (P.D.) matrix:

[ H    A_kᵗ ] [ q ]   [ −g_k ]
[ A_k   0  ] [ u ] = [  0   ]

where A_k is the matrix associated with the active point x_k obtained at iteration k, and g_k is the gradient obtained at iteration k from the active-point algorithm.
2. If q = 0, then the stationary point is x_s = x_k. Otherwise, we update the active point, set k = k + 1 and

x_{k+1} = x_k + α_k* q,

and repeat. End.
* α_k* is the step length of case 2, Step 5 of the algorithm; that is to say,

α_k* = − g_kᵗ q / (qᵗ H q).
We note the following two remarks.
  • Case 3 is of singular type, so it accepts linear constraints whose domain has a boundary.
  • Each iteration, based on the resolution of linear systems, is characterized by the simplicity of its programming, as well as by the nature of the reduced Hessian matrix Z_kᵗ H Z_k and the step length α_k.
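The stationary-point subroutine above can be sketched as follows, assuming a fixed working set A_k (the full algorithm also updates the active set via the active-point procedure [], which is not reproduced here); the instance is a hypothetical convex example.

```python
import numpy as np

def stationary_point(H, A_k, x0, grad, max_iter=50, tol=1e-10):
    """Sketch of the stationary-point subroutine: repeatedly solve the
    KKT system with right-hand side (-g_k, 0) and step until q = 0.
    The working set A_k is held fixed in this simplified sketch."""
    m, n = A_k.shape
    K = np.block([[H, A_k.T], [A_k, np.zeros((m, m))]])
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        sol = np.linalg.solve(K, np.concatenate([-g, np.zeros(m)]))
        q = sol[:n]
        if np.linalg.norm(q) < tol:
            return x                       # q = 0: x_s = x_k is stationary
        alpha = -(g @ q) / (q @ H @ q)     # step length alpha_k* of Step 5, case 2
        x = x + alpha * q
    return x

# Hypothetical convex instance: phi(x) = x_1^2 + x_2^2 - 2 x_1 on { x : x_2 = 0 }
H = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, 0.0])
A_k = np.array([[0.0, 1.0]])
x_s = stationary_point(H, A_k, np.zeros(2), lambda x: H @ x + c)   # converges to (1, 0)
```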

7. Numerical Results and Comparative Analysis

Our work is justified by comparison with some more recent methods on benchmarks used in the literature [,,,,]. We note that researchers left this track, the Inertia-Controlling method, for a long time, although with a small modification of this method we managed to obtain good results.
In addition to the simplicity of verifying the global optimality conditions, the results have also been improved in the convex case (Table 2: data for convex case examples; Table 3: results for convex case examples), but not in the concave case (Table 4: data for concave case examples; Table 5: results for concave case examples).
Table 2. Data for convex case examples.
Table 3. Results for convex case examples.
Table 4. Data for concave case examples.
In the indefinite case (Table 6: data for indefinite case examples; Table 7: the corresponding results), our results are obtained with a minimal number of iterations and the determined solution is much better, which confirms the originality and efficiency of our solving technique. The convex case results are obtained by using the Path Method with Weight (P.M.W.) with a starting point x_0; the new results are obtained with the same starting point x_0.
Table 5. Results for concave case examples.
Example [] | φ(x_G) (reference) | x_a | x_G and φ(x_G) (new)
1 | φ(x_G) = −39.58 (error) | (1, 0, 1, 2.3) | x_G = (2.7, 2.7, 2.7, 2.3); φ(x_G) = −23.435, after 4 iterations
2 | φ(x_G) = −42.09 | (0, 1) | x_G = (6.481, 0.296); φ(x_G) = −42.0973, after 5 iterations
3 | φ(x_G) = −42.09 | (5, 0) | x_G = (6.481, 0.296); φ(x_G) = −42.0973, after 3 iterations
Table 6. Data for indefinite case examples.
Example | H | Constraints | c
1 of [] | (1, 1, 1/2; 1, 2, 2; 1/2, 2, 2) | −1 ≤ x_i ≤ 1, i = 1, 2, 3 | (0, 2, 0.5)
2 of [] | (1, 0.5; 0.5, 2) | −1 ≤ x_i ≤ 1, i = 1, 2 | (4, 4)
3 of [] | (1/5, 0; 0, 1/30) | −2 ≤ x_i ≤ 6, i = 1, 2; 2 x_1 + 9 x_2 ≤ 48; 5 x_1 + 3 x_2 ≤ 35 | 1 5 4
4 of [] | (9, 5, 6, 5; 1, 8, 1, 8; 8, 8, 0, 5; 9, 6, 6, 0) | 0.1 ≤ x_i ≤ 1.7, i = 1, …, 4 | (1, 4, 8, 8)
5 of [] | (1, 0, 0, 0; 0, 1, 0, 0; 0, 0, 0, 0; 2, 1, 1, 1) | 0.1 ≤ x_i ≤ 1.7, i = 1, …, 4 | (0, 0, 0, 0)
Table 7. Results for indefinite case examples.
Example | x_G or φ(x_G) (reference) | x_a | x_G and φ(x_G) (new)
1 of [] | x_G ≈ (1, 1/2, 1); this solution is not global | (1, 1/2, 1); we start with this point and prove it is not global | x_G = (1, 1, 1/2); φ(x_G) = −2.75, after 5 iterations
2 of [] | x_G = (1, 1); this solution is global | (0, 0); we start with this point and find that it is indeed global | x_G = (1, 1); φ(x_G) = 2.75, after 5 iterations
3 of [] | x_G = (4.3846, 4.3590); this solution is global | (4.3846, 4.3590); we start with this point | x_G = (4.3077, 4.4872); φ(x_G) = 20.329, after 5 iterations
4 of [] | φ(x_G) = −86.411 | (0, 3, 0, 0); we start with this point | x_G = (0.1, 1.7, 0.8056, 0.1); φ(x_G) = 12.915, after 4 iterations
5 of [] | φ(x_G) = 207.071 | (0, 3, 0, 0); we start with this point | x_G = (1.7, 1.7, 0.1, 1.7); φ(x_G) = 70.485, after 6 iterations

8. Conclusions

In this article, we have shown that it is possible to check the global optimization conditions to decide whether the final point of the Inertia-Controlling Method (I.C.M.) is a global solution or not.
When an optimization problem is solved via the necessary and sufficient conditions for its solutions, and these conditions are satisfied at the point denoted x_k in our problem (P), this point becomes a solution of the considered problem.
However, if the relevant conditions are not met, then we cannot obtain the exact solution.
Through these new sufficient conditions for global optimality, we can, on the one hand, apply any descent method converging to a local solution of problem (P), and we can also decide whether any point x of the feasible domain is a global solution of our mathematical problem.
On the other hand, the condition that we changed in this work depends on the solution of a linear system appearing earlier in the computation of a local solution of the objective function φ and in our theorems. The resulting direction q gives the best decrease of the objective function φ at x_k in terms of minimization, because all negative eigenvalues of the matrix Z_kᵗ H Z_k are taken into account; we have also shown the decrease of the objective function at each active point in the convex, indefinite, and singular cases.
The results obtained show high efficiency and reliability compared with methods used to solve quadratic programming problems (for example, interior-point methods) in terms of accuracy and response time (a large number of iterations may otherwise be required to find the solution).
Consequently, we recommend that the I.C.M. be revisited and used in further academic studies, which will encourage university researchers to resort to this mathematical method to enrich their research in the scientific fields of applied mathematics and economics.

Author Contributions

Conceptualization: S.C. and L.D.; methodology, software, formal analysis, investigation, resources, data curation, writing (original draft preparation, review and editing): S.C.; visualization, supervision, project administration: L.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

We thank all the teachers and researchers who helped us, directly or indirectly, to realize this scientific work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gill, P.E.; Murray, W.; Saunders, M.A.; Wright, M.H. Inertia-controlling methods for general quadratic programming. SIAM Rev. 1991, 33, 1–36. [Google Scholar] [CrossRef]
  2. Altman, A.; Gondzio, J. Regularized symmetric indefinite systems in interior point methods for linear and quadratic optimization. Optim. Methods Softw. 1999, 11, 275–302. [Google Scholar] [CrossRef]
  3. Grippo, L.; Sciandrone, M. Introduction to Interior Point Methods. In Introduction to Methods for Nonlinear Optimization; Springer International Publishing: Cham, Switzerland, 2023; pp. 497–527. [Google Scholar]
  4. Fu, Y.; Liu, D.; Chen, J.; He, L. Secretary bird optimization algorithm: A new metaheuristic for solving global optimization problems. Artif. Intell. Rev. 2024, 57, 123. [Google Scholar] [CrossRef]
  5. Kim, S.; Kojima, M. Equivalent sufficient conditions for global optimality of quadratically constrained quadratic programs. Math. Methods Oper. Res. 2025, 101, 73–94. [Google Scholar] [CrossRef]
  6. Kebbiche, Z. Etude et Extensions D’algorithmes de Points Intérieurs Pour la Programmation Non Linéaire. Ph.D. Thesis, Université de Sétif, Sétif, Algeria, 2008. [Google Scholar]
  7. Azevedo, A.T.; Oliveira, A.R.L.; Soares, S. Interior point method for long-term generation scheduling of large-scale hydrothermal systems. Ann. Oper. Res. 2009, 169, 55–80. [Google Scholar] [CrossRef]
  8. Morales, J.L.; Nocedal, J.; Wu, Y. A sequential quadratic programming algorithm with an additional equality constrained phase. IMA J. Numer. Anal. 2012, 32, 553–579. [Google Scholar] [CrossRef]
  9. Dussault, J.P. Programmation Non Linéaire; Département d’informatique, Université de Sherbrooke: Sherbrook, QC, Canada, 2011. [Google Scholar]
  10. Gondzio, J.; Yildrim, E.A. Global solutions of nonconvex standard quadratic programs via mixed integer linear programming reformulations. J. Glob. Optim. 2021, 81, 293–321. [Google Scholar] [CrossRef]
  11. Khouni, S.E.; Menacer, T. Nizar optimization algorithm: A novel metaheuristic algorithm for global optimization and engineering applications. J. Supercomput. 2024, 80, 3229–3281. [Google Scholar] [CrossRef]
  12. Messine, F.; Jourdan, N. L’optimisation Globale par Intervalles: De l’Etude Théorique aux Applications; Habilitation à Diriger des Recherches, Institut National Polytechnique de Toulouse; Toulouse National Polytechnic Institute: Toulouse, France, 2006. [Google Scholar]
  13. Ouaoua, M.L.; Khelladi, S. Efficient Descent Direction of a Conjugate Gradient Algorithm for Nonlinear Optimization. Nonlinear Dyn. Syst. Theory 2025, 25, 91–100. [Google Scholar]
  14. Choufi, S. Development of a procedure for finding active points of linear constraints. J. Appl. Comput. Math. 2017, 6, 1000352. [Google Scholar] [CrossRef]
  15. Wu, Z.Y.; Bai, F.S. Global optimality conditions for mixed nonconvex quadratic programs. Optimization 2009, 58, 39–47. [Google Scholar] [CrossRef]
  16. Sun, W.; Yuan, Y.X. Optimization Theory and Methods: Nonlinear Programming; Springer Science Business Media: New York, NY, USA, 2006; Volume 1. [Google Scholar]
  17. Sun, X.L.; Li, D.; McKinnon, K.I.M. On saddle points of augmented Lagrangians for constrained nonconvex optimization. SIAM J. Optim. 2005, 12, 1128–1146. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
