On the Finite Complexity of Solutions in a Degenerate System of Quadratic Equations: Exact Formula

The paper describes an application of the p-regularity theory to Quadratic Programming (QP) and nonlinear equations with quadratic mappings. In the first part of the paper, a special structure of the nonlinear equation and a construction of the 2-factor operator are used to obtain an exact formula for a solution to the nonlinear equation. In the second part of the paper, the QP problem is reduced to a system of linear equations using the 2-factor operator. The solution to this system represents a local minimizer of the QP problem along with its corresponding Lagrange multiplier. An explicit formula for the solution of the linear system is provided. Additionally, the paper outlines a procedure for identifying active constraints, which plays a crucial role in constructing the linear system.


Introduction
Consider the nonlinear equation F(x) = 0 with the mapping F defined by:
F(x) = B[x]^2 + Mx + N, (1)
where M ∈ R^{n×n} is a matrix, N ∈ R^n is a vector, and B : R^n → R^n is the map defined for x ∈ R^n by:
B[x]^2 = (x^T B_1 x, . . ., x^T B_n x)^T, (2)
where B_i is an n × n symmetric matrix for i = 1, . . ., n.
We also consider the quadratic programming (QP) problem with inequality constraints:
minimize (1/2) x^T Q x + c^T x subject to Ax ≤ b, (3)
where Q is an n × n symmetric matrix, A is an m × n matrix, c, x ∈ R^n, and b ∈ R^m.

The paper describes an application of the p-regularity theory to nonlinear equations with the mapping F introduced in (1) and to the quadratic programming problem (3).
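For concreteness, the mapping (1) can be evaluated componentwise as f_i(x) = x^T B_i x + (Mx)_i + N_i, with Jacobian rows 2B_i x + M_i. The following minimal Python sketch is not from the paper; the instance B_i, M, N is hypothetical, chosen so that F'(x*) is singular at the solution x* = (1, 0)^T:

```python
import numpy as np

def F(x, Bs, M, N):
    """Quadratic mapping F(x) = B[x]^2 + Mx + N, where the ith component of
    B[x]^2 is x^T B_i x (each B_i symmetric)."""
    quad = np.array([x @ Bi @ x for Bi in Bs])
    return quad + M @ x + N

def F_jac(x, Bs, M):
    """Jacobian of F: row i is 2 B_i x + M_i (B_i symmetric)."""
    return np.array([2 * Bi @ x for Bi in Bs]) + M

# Hypothetical 2x2 instance: f1 = x1^2 + x2^2 - 1, f2 = x1 - 1,
# with the degenerate solution x* = (1, 0)^T.
Bs = [np.eye(2), np.zeros((2, 2))]
M = np.array([[0.0, 0.0], [1.0, 0.0]])
N = np.array([-1.0, -1.0])
x_star = np.array([1.0, 0.0])

print(F(x_star, Bs, M, N))                          # residual at x*: [0. 0.]
print(np.linalg.matrix_rank(F_jac(x_star, Bs, M)))  # rank F'(x*) = 1 < 2, singular
```

Since rank F'(x*) < n here, Newton's method degenerates at x*, which is exactly the situation the p-regularity constructions below address.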
In recent years, there has been growing interest in nonlinear problems, including quadratic and polynomial equations, as well as nonlinear optimization problems, attracting specialists from various disciplines (see, for example, refs. [1-4] and references therein). Furthermore, it was observed that nonlinear problems are closely related to singular problems, as demonstrated in [5]. In fact, it has been discovered that essentially nonlinear problems and singular problems are locally equivalent. In this work, we aim to provide a theoretical foundation for this claim by introducing several auxiliary concepts as proposed in [5].

Definition 1. Let V be a neighborhood of x* in R^n, and let U ⊂ R^n be a neighborhood of 0. A mapping F : V → R^n, where F ∈ C^2(V), is considered essentially nonlinear at x* if there exists a perturbation of the form:
F̃(x* + x) = F(x* + x) + ω(x), where ω(x) = o(||x||),
such that no nondegenerate transformation of coordinates ϕ(x) : U → V, where ϕ ∈ C^1(U), satisfies ϕ(0) = x*, ϕ'(0) = I_n, where I_n is the identity map in R^n, and:
F̃(ϕ(x)) = F(x*) + F'(x*)x for all x ∈ U.

Definition 2. We say that the mapping F is singular (or degenerate) at x* if it fails to be regular, meaning its derivative is not onto: Im F'(x*) ≠ R^n.

The relationship between the notions of essential nonlinearity and singularity is established in Theorem 1, which was derived in [5].
Theorem 1. Let V be a neighborhood of x* in R^n. Suppose F : V → R^n is C^2 and that x* is a solution of F(x) = 0. Then F is essentially nonlinear at the point x* if and only if F is singular at the point x*.
The work presented in [5] primarily focuses on the construction of p-regularity and its applications in various areas of mathematics. However, it does not specifically cover quadratic nonlinear equations and quadratic programming problems. The current paper builds upon the foundation of the p-regularity theory established in [5] but introduces novel results. The main objective of this paper is to explore the key aspects of nonlinear problems, with a particular emphasis on systems of quadratic equations and quadratic programming problems that may involve singular solutions.
Specifically, we begin by considering the nonlinear equation F(x) = 0. One of the main goals of the paper is to derive the exact formula for a solution x* of the nonlinear equation F(x) = 0 using the special form of the quadratic mapping F defined in (1). We demonstrate how to use a construction of a special 2-factor-operator to transform the original problem into a system of linear equations. The construction of the 2-factor-operator combines the mapping F with its first derivative F'(x).
In the second part of the paper, we apply a similar approach to the quadratic programming problem (3) in order to derive explicit formulas for the solution (x*, λ*), where x* represents a local minimizer of the QP problem and λ* is the corresponding Lagrange multiplier. Namely, using the special form of the QP problem and the 2-factor-operator, we reduce the system of optimality conditions for the QP problem to a system of linear equations, with the point (x*, λ*) as its solution. The paper also describes a procedure for identifying the active constraints, which plays a vital role in constructing the linear system.
Although there is literature on solutions of degenerate systems of quadratic equations, the approach presented in this paper is novel and distinct from the methods proposed by other authors. This approach can be applied to various problems and areas of mathematics where the problem involves solving a degenerate equation F(x) = 0 with a quadratic mapping F. Such nonlinear problems can arise in the numerical solution and analysis of ordinary differential equations, partial differential equations, optimal control problems, algebraic geometry, and other fields. In the second part of the paper, we specifically focus on using the methods developed in the first sections to solve the QP problem (3). Quadratic programming problems have attracted the attention of many researchers and scientists, so there is an extensive body of literature on the topic. Some publications in this area include [6-12].
The outline of the paper. The main contribution and novelty of the paper are the exact formulas for a solution of a nonlinear equation and of the quadratic programming problem, presented in Section 3 and Section 5, respectively.
In Section 2, we recall the main definitions of the p-regularity theory, as presented in [5], including the special case of p = 2. Additionally, we introduce the p-factor method for solving singular nonlinear equations of the form F(x) = 0 and describe various versions of the 2-factor method.
Section 3 presents some of the key results of the paper, focusing on the application of a modified 2-factor method to solve the nonlinear equation F(x) = 0 with the mapping F defined as F(x) = B[x]^2 + Mx + N, where M ∈ R^{n×n} is a matrix, N ∈ R^n is a vector, and B : R^n → R^n is defined by (2). In this section, we introduce multiple approaches to obtain exact formulas for a solution to the nonlinear equation F(x) = 0, demonstrating that the proposed methods converge to a solution x* of the nonlinear equation in just one iteration.
Section 4 focuses on an auxiliary result used in other parts of the paper. We present a theorem that describes the properties of a special mapping µ(x), which enables us to propose a procedure for determining r linearly independent vectors f_{i_k}'(x*), k = 1, . . ., r, at the solution x* of F(x) = 0, without needing to know the exact value of x*. This procedure relies on information about the system of vectors {f_1'(x), . . ., f_m'(x)} at some point x within a small neighborhood of x*.
Section 5 presents other novel results, focusing on deriving exact formulas for a solution of quadratic programming problems. The section is divided into three parts. In Section 5.1, we consider regular quadratic programming problems and propose three approaches to solving the QP problem and obtaining a formula for its solution. These approaches are based on the construction of the 2-factor-operator. Section 5.2 addresses the issue of identifying the active constraints and proposes strategies for numerically determining the set of active constraints I(x*). These techniques are then applied in the final part, Section 5.3, to address degenerate QPs. The paper concludes with some closing remarks in Section 6.
Notation. Let a_i denote the rows of the m × n matrix A in problem (3), and let b = (b_1, . . ., b_m)^T, so that a_i^T ∈ R^n and b_i ∈ R for i = 1, . . ., m. The active set I(x*) at any feasible point x* of problem (3) is the set of indices of the active constraints at x*, i.e., I(x*) = {i ∈ {1, . . ., m} | a_i x* = b_i}. Furthermore, Ker S = {x ∈ R^n | Sx = 0} denotes the null-space (kernel) of a given linear operator S : R^n → R^m, and Im S = {y ∈ R^m | y = Sx for some x ∈ R^n} is its image space.
Let B : R^n × R^n → R^n be a bilinear symmetric mapping. The 2-form associated with B is the map x ↦ B[x]^2 = B(x, x). We denote by N_ε(x*) an open ball of radius ε centered at x*.
The notation for the scalar (dot) product of vectors x and y in R^n, used in the paper, is x • y = x^T y.
We denote by span(a_1, . . ., a_m) the linear span of the given vectors a_1, . . ., a_m. We also denote by d(x, S) the distance between a point x and a set S.
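The kernel and image spaces in this notation can be inspected numerically through matrix rank; a small sketch (the operator S below is a hypothetical example, not from the paper):

```python
import numpy as np

def kernel_dim(S):
    """dim Ker S = n - rank S for a linear operator S : R^n -> R^m."""
    return S.shape[1] - np.linalg.matrix_rank(S)

def image_dim(S):
    """dim Im S = rank S."""
    return np.linalg.matrix_rank(S)

S = np.array([[1.0, 0.0], [0.0, 0.0]])  # maps (x1, x2) to (x1, 0)
print(kernel_dim(S), image_dim(S))      # 1 1: Ker S = span(e2), Im S = span(e1)
```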

Elements of the p-Regularity Theory
We begin this section with the main definitions of the p-regularity theory, which are given in [5]. The primary focus is on the sufficiently smooth mapping F from R^n to R^n, defined as:
F(x) = (f_1(x), . . ., f_n(x))^T, (4)
where f_i(x) : R^n → R for i = 1, . . ., n. After presenting the general case, we focus on the specific case of p = 2. We introduce the definitions of the 2-regular mapping and the 2-factor-operator, which play a central role in the subsequent sections.

The Main Definitions and Constructions of the p-Regularity Theory
Throughout this section, we consider the nonlinear equation:
F(x) = 0, (5)
where F is defined in Equation (4). Let x* ∈ R^n represent a solution to the nonlinear Equation (5). The mapping F is called regular at x* if:
Im F'(x*) = R^n, (6)
or, in other words, if rank F'(x*) = n, where F'(x*) is the Jacobian matrix of the mapping F at x*. Conversely, the mapping F is called nonregular (irregular, degenerate) if the regularity condition (6) is not satisfied. Let the space R^n be decomposed into the direct sum:
R^n = Y_1 ⊕ . . . ⊕ Y_p, (7)
where Y_1 = Im F'(x*) is defined as the closure of the image of the first derivative of F evaluated at x*, and p is chosen as the minimum number for which Equation (7) holds.
The remaining spaces are defined as follows. Let Z_1 = R^n, and let Z_2 be a closed complementary subspace to Y_1. Let P_{Z_2} : R^n → Z_2 be the projection operator onto Z_2 along Y_1. Define Y_2 as the closed linear span of the image of the quadratic map P_{Z_2}F''(x*)[·]^2. More generally, we define Y_i inductively for i = 2, . . ., p − 1 as the closed linear span of the image of the map P_{Z_i}F^{(i)}(x*)[·]^i, (8)
where Z_i is a choice of a complementary subspace for (Y_1 ⊕ . . . ⊕ Y_{i−1}) in R^n, and P_{Z_i} : R^n → Z_i is the projection operator onto Z_i along (Y_1 ⊕ . . . ⊕ Y_{i−1}), i = 2, . . ., p. Finally, we let Y_p = Z_p.

Definition 3. The linear operator Ψ_p(h) : R^n → R^n, h ∈ R^n, defined by
Ψ_p(h) = F'(x*) + P_2 F''(x*)[h] + . . . + P_p F^{(p)}(x*)[h]^{p−1},
where P_i = P_{Y_i} denotes the projection onto Y_i, i = 2, . . ., p, is called the p-factor operator.
Consider the nonlinear operator Ψ_p[·]^p defined by:
Ψ_p[h]^p = F'(x*)h + P_2 F''(x*)[h]^2 + . . . + P_p F^{(p)}(x*)[h]^p.

Definition 4. The p-kernel of the operator Ψ_p at the point x* is the set H_p(x*) defined by:
H_p(x*) = Ker^p Ψ_p = {h ∈ R^n | Ψ_p[h]^p = 0}.

Now, we will focus on the special case of p = 2, which we are using in the paper. We denote the image of the Jacobian matrix F'(x*) by R_1: R_1 = Im F'(x*), and the orthogonal complementary subspace of R_1 in R^n by R_2. Then:
R^n = R_1 ⊕ R_2.
We also denote by P_{R_i} the n × n matrix of the orthogonal projection onto R_i in R^n, i = 1, 2.
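The orthogonal decomposition R^n = R_1 ⊕ R_2 and the projectors P_{R_1}, P_{R_2} can be computed numerically from the SVD of the Jacobian. A minimal sketch (the Jacobian J below is a hypothetical singular example):

```python
import numpy as np

def orth_projectors(J, tol=1e-12):
    """Orthogonal projectors onto R1 = Im J and R2 = R1^perp, for J = F'(x*)."""
    U, s, _ = np.linalg.svd(J)
    r = int(np.sum(s > tol))            # rank of J
    U1 = U[:, :r]                        # orthonormal basis of Im J
    P1 = U1 @ U1.T                       # projector onto R1
    P2 = np.eye(J.shape[0]) - P1         # projector onto R2 = R1^perp
    return P1, P2

J = np.array([[1.0, 0.0], [0.0, 0.0]])   # singular Jacobian, Im J = span(e1)
P1, P2 = orth_projectors(J)
print(P2)  # diag(0, 1): orthoprojector onto R2 = span(e2)
```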
Similarly to Equation (8), we introduce the corresponding mappings for the case p = 2. The p-factor operator plays the central role in the p-regularity theory. We give the following definition of the p-factor-operator for p = 2.

Definition 7. We define a 2-factor-operator of the mapping F at x* with respect to some vector h ∈ R^n, h ≠ 0, as a linear operator from R^n to R^n, defined by one of the following equations:
Ψ2(h) = P_{R_1}F'(x*) + P_{R_2}F''(x*)[h], (9)
Ψ2(h) = F'(x*) + P_{R_2}F''(x*)[h], (10)
Ψ2(h) = F'(x*) + F''(x*)[h]. (11)

Now we are ready to introduce another very important definition of the 2-regularity theory.
Definition 8. The mapping F is called 2-regular at the point x* with respect to the element h if the image of a 2-factor-operator, defined by one of the Equations (9)-(11), is equal to R^n.
Definition 9. The mapping F is called 2-regular at x* if it is 2-regular at x* with respect to all the elements h from a set defined as: H̃_2(x*) = Ker F'(x*) ∩ Ker^2 P_{R_2}F''(x*) for Ψ2(h) defined by (10);

The p-Factor-Method for Solving Singular Nonlinear Equations
In this section, we introduce the p-factor-method for solving the singular nonlinear equation F(x) = 0. Then we consider the special case of p = 2 and describe several versions of the 2-factor-method.
Consider Equation (5) in the case when the mapping F is singular at x*. In this case, the p-factor method is an iterative procedure defined by:
x_{k+1} = x_k − (F'(x_k) + P_2F''(x_k)h + . . . + P_pF^{(p)}(x_k)[h]^{p−1})^{−1} (F(x_k) + P_2F'(x_k)h + . . . + P_pF^{(p−1)}(x_k)[h]^{p−1}), (12)
where k = 0, 1, . . ., P_i = P_{Y_i} for i = 1, 2, . . ., p, and the vector h, ||h|| = 1, is chosen in such a way that the p-factor operator Ψ_p is nonsingular, which implies that the mapping F is p-regular at x* along h. The following theorem is valid for the p-factor-method (12).
Theorem 2. Assume that the mapping F ∈ C^{p+1}(R^n) and that there exists a vector h, ||h|| = 1, such that the p-factor operator Ψ_p is nonsingular. Given a point x_0 ∈ N_ε(x*), where ε > 0 is sufficiently small and N_ε(x*) is a neighborhood of x*, the sequence {x_k} defined by Equation (12) converges quadratically to the solution x* of (5):
||x_{k+1} − x*|| ≤ C ||x_k − x*||^2, k = 0, 1, . . ., (13)
where C > 0 is an independent constant.

Now, we are ready to describe several versions of the 2-factor-method.
For solving the singular nonlinear Equation (5), the following iterative method, called the 2-factor-method, was proposed in [13]:
x_{k+1} = x_k − (F'(x_k) + P_{R_2}F''(x_k)[h])^{−1} (F(x_k) + P_{R_2}F'(x_k)h), k = 0, 1, . . ., (14)
where the vector h, ||h|| = 1, is chosen in such a way that the matrix F'(x*) + P_{R_2}F''(x*)[h] is invertible.
The following theorem states the convergence properties of the 2-factor-method (14).
Theorem 3. Given a mapping F ∈ C^3(R^n), let x* be a solution of Equation (5). Assume that there exists a vector h ∈ R^n such that ||h|| = 1 and F is 2-regular at the point x* with respect to the vector h, with the 2-factor-operator Ψ2(h) defined by (10).
Then there is a neighborhood N(x*) of x* in R^n such that for any x_0 ∈ N(x*), the sequence {x_k} generated by the 2-factor-method (14) converges to x* and:
||x_{k+1} − x*|| ≤ C ||x_k − x*||^2, k = 0, 1, . . ., (15)
where C > 0 is some constant.
Proof. Since P_{R_2} is the orthoprojector onto the subspace R_2 = R_1^⊥, then for the mapping:
Φ(x) = F(x) + P_{R_2}F'(x)h,
we have Φ(x*) = 0. Moreover, because Φ'(x*) = Ψ2(h) and the mapping F is 2-regular with respect to the vector h, by Definition 8 with Ψ2(h) defined by (10), we obtain that Im Ψ2(h) = R^n. Hence, the matrix Φ'(x*) is invertible. Therefore, the 2-factor-method given in (14) is an application of Newton's method to the system Φ(x) = 0 in a sufficiently small neighborhood of x*. The statement of the theorem then follows from the properties of Newton's method [14] (Proposition 1.4.1).

Now, we will introduce a modified version of the 2-factor-method (14). Assume that there exists a vector h ≠ 0 such that F'(x*)h = 0 and the matrix (F'(x*) + F''(x*)[h]) is invertible. Then for solving Equation (5), we can use the following modified 2-factor-method:
x_{k+1} = x_k − (F'(x_k) + F''(x_k)[h])^{−1} (F(x_k) + F'(x_k)h), k = 0, 1, . . .. (16)
The following theorem states the convergence properties of method (16).
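The reduction of the 2-factor-method to Newton's method for Φ(x) = 0 can be illustrated numerically. The sketch below is not code from the paper; it uses a hypothetical singular system F(x) = (x_1 + x_2^2, x_2^2) with solution x* = 0, for which R_2 = span(e_2) and h = (0, 1)^T makes Φ'(x*) invertible:

```python
import numpy as np

# Hypothetical singular system: F(x) = (x1 + x2^2, x2^2), solution x* = (0, 0).
def F(x):
    return np.array([x[0] + x[1]**2, x[1]**2])

def dF(x):
    return np.array([[1.0, 2*x[1]], [0.0, 2*x[1]]])

# Orthoprojector onto R2 = (Im F'(x*))^perp = span(e2).
P2 = np.diag([0.0, 1.0])
h = np.array([0.0, 1.0])   # makes Psi2(h) = F'(x*) + P2 F''(x*)[h] invertible

def Phi(x):
    """Phi(x) = F(x) + P2 F'(x) h, as in the proof of Theorem 3."""
    return F(x) + P2 @ dF(x) @ h

def dPhi(x):
    # F''(x)[h] is constant for this quadratic F: rows (0, 2*h2).
    d2 = np.array([[0.0, 2*h[1]], [0.0, 2*h[1]]])
    return dF(x) + P2 @ d2

x = np.array([0.5, 0.5])
for _ in range(25):   # Newton's method for Phi(x) = 0, i.e., the 2-factor-method
    x = x - np.linalg.solve(dPhi(x), Phi(x))
print(x)  # converges to x* = (0, 0)
```

Plain Newton's method applied directly to F would break down here because F'(x*) is singular; Newton applied to Φ does not.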
Theorem 4. Given a mapping F ∈ C^3(R^n), let x* be a solution of Equation (5). Assume that there exists a vector h ∈ R^n such that ||h|| = 1, F'(x*)h = 0, and F is 2-regular at the point x* with respect to the vector h, where the 2-factor-operator Ψ2(h) is defined by (11).
Then there is a neighborhood N(x*) of x* in R^n such that for any x_0 ∈ N(x*), the sequence {x_k} generated by the 2-factor-method (16) converges to x*, and relation (15) holds with some C > 0.
The proof is similar to that of Theorem 3.

Now we introduce another version of the 2-factor-method. Assume that there exists a vector h ≠ 0 satisfying conditions (17). If conditions (17) are satisfied, then to solve Equation (5), we can use the modified 2-factor-method (18). For the numerical realization of the 2-factor-method in the form (18), we only have to construct a vector h satisfying conditions (17). The specifics of some problems allow us to choose the vector h without any knowledge of the solution x*. We discuss the choice of the vector h in the following sections of the paper.
The following theorem states the convergence properties of method (18).
Theorem 5. Given a mapping F ∈ C^3(R^n), let x* be a solution of Equation (5). Assume that there exists a vector h ∈ R^n such that ||h|| = 1 and conditions (17) are satisfied.
Then, there is a neighborhood N(x * ) of x * in R n such that for any x 0 ∈ N(x * ), the sequence {x k } generated by the 2-factor-method (18) converges to x * and relation (15) holds with some C > 0.
The proof is similar to that of Theorem 3.

Nonlinear Equations with Quadratic Mappings: The Exact Solution Formula
In this section, we consider the mapping F defined by Equation (1) as follows:
F(x) = B[x]^2 + Mx + N,
where M ∈ R^{n×n} is a matrix, N ∈ R^n is a vector, and B : R^n → R^n is the map defined by (2). The mapping B is twice continuously differentiable [15], and its derivatives are given by B'(x)h = 2B(x, h) and B''(x)[h_1, h_2] = 2B(h_1, h_2).

We will now illustrate the application of the 2-factor method (18) for solving the nonlinear equation F(x) = 0 with the mapping F defined by (1). We will present multiple approaches to obtain an exact formula for x*, with the first approach being a specific case of the second approach. Additionally, we will show that for the mapping F, the method (18) converges to x* in just one iteration.
First approach to obtain an exact formula for the solution x * .
For the mapping F defined by (1), the assumptions (17) of Theorem 5 can be simplified to the existence of a vector h that satisfies conditions (19). Under these assumptions, for the mapping F defined by (1) and a given point x_0, the first iteration of the 2-factor-method (18) can be written in an equivalent form that, using the property 2B(h, x_0) = 2B(x_0, h), yields a one-step formula (20) for calculating x_1 and, consequently, for finding the solution x*, where the vector h satisfies conditions (19).
The numerical determination of the vector h depends on the specific characteristics of the problem.Alternatively, it can be obtained using the same method as described in the third approach below, which involves transforming the initial system into a system that is completely degenerate at the point x * .
Second approach to obtain an exact formula for the solution x*. Now we present an alternative approach for obtaining a formula for the solution x* of the equation F(x) = 0 using the same mapping F defined by (1). This second approach is applicable to a broader variety of problems compared to the first approach.
Let P_1 denote the projector onto Y_1 = span(Im_2 B), and let P_2 denote the projector onto Y_1^⊥, the orthogonal complementary subspace of Y_1 in R^n. Assume that there exists a vector h ∈ R^n satisfying conditions (21). Given the definition of P_2, it follows that P_2 B[x*]^2 = 0. Substituting this into (1), we obtain P_2(Mx* + N) = 0. Hence, the point x* satisfies the following identities:
P_1(B[x*]^2 + Mx* + N) = 0, P_2(Mx* + N) = 0.
By adding these equations and assuming (21), we obtain the exact formula (22) for the solution x*.

Remark 1. In the case when P_1 = I_n and, hence, P_2 = O_{n×n}, assumptions (21) become (19), and Equation (22) reduces to (20).
Example 1. Consider the mapping F : R^2 → R^2 given by (23), which can be represented in the form (1). The equation F(x) = 0 has a locally unique solution x* = (1, 0)^T. In this example, P_1 = I_2 and P_2 = O_{2×2}. Hence, by Remark 1, we apply Equation (20) with h = (1, 0)^T to obtain the solution x* = (1, 0)^T.

In a numerical implementation, an additional procedure is required to construct the vector h. Since the exact point x* is not known in advance, we only assume that a sufficiently small neighborhood of x* is provided to apply the procedure.
Third approach to obtain an exact formula for the solution x * .
While the first two approaches rely on knowledge of the element h, which is determined by x*, the third approach does not require such knowledge. Instead, all we need is for the starting point to belong to a sufficiently small neighborhood N_ε(x*) of x*. Specifically, we have x_0 ∈ N_ε(x*), where ε > 0 is sufficiently small.

Suppose that at the point x*, the first r vectors {f_1'(x*), . . ., f_r'(x*)} are linearly independent, where f_i is defined in (4) for i = 1, . . ., r. Assume also that the other vectors {f_{r+1}'(x*), . . ., f_n'(x*)} are linear combinations of the first r vectors; that is, there exist coefficients α_i^j such that:
f_j'(x*) = Σ_{i=1}^{r} α_i^j f_i'(x*), j = r + 1, . . ., n.
Let us introduce the subspace L(x) defined by:
L(x) = span{f_1'(x), . . ., f_r'(x)}.
We denote the orthogonal projection onto the subspace L(x) by P_{L(x)}. Then, there exist coefficients α_i^j(x), defined at x analogously to the coefficients α_i^j at x*. In addition, we introduce the transformation (24), which replaces each component f_j(x), j = r + 1, . . ., n, by the difference between f_j(x) and the corresponding linear combination of f_1(x), . . ., f_r(x). Notice that x* is also a solution of the equation F̃(x) = 0, where F̃(x) denotes the transformed mapping. The definition of F̃(x) implies that F̃ is 2-regular at the point x*. In the case that some of the vectors f_k'(x*), k ∈ {r + 1, . . ., n}, are not zero vectors, transformation (24) can be used to reduce those vectors to zero vectors. This ensures that f_k'(x*) = 0 for all k ∈ {r + 1, . . ., n}. Therefore, without loss of generality, we can assume that the mapping F(x) satisfies f_j'(x*) = 0 for j = r + 1, . . ., n.

Suppose there exist vectors ξ_1 ≠ 0, . . ., ξ_r ≠ 0, and h ≠ 0, and indices k_i ∈ {r + 1, . . ., n}, i = 1, . . ., r, such that the corresponding system of conditions is satisfied. Then the mapping F̄(x) defined by (25) has x* as its zero, that is, F̄(x*) = 0. At the same time, in contrast to the Jacobian matrix of F(x), the matrix F̄'(x*) is nonsingular. We can, therefore, consider the method (26).

Theorem 6. Given a mapping F ∈ C^2(R^n), let x* be a solution of Equation (5). Assume that there exist vectors ξ_1 ≠ 0, . . ., ξ_r ≠ 0, and h ≠ 0, such that the matrix F̄'(x*) is nonsingular, and let x_0 ∈ N_ε(x*), where N_ε(x*) is a neighborhood of x* and ε > 0 is sufficiently small. Then the sequence {x_k}, k = 1, 2, . . ., defined by (26) converges to the point x* with a quadratic rate of convergence, that is:
||x_{k+1} − x*|| ≤ C ||x_k − x*||^2, k = 0, 1, . . .,
where C > 0 is an independent constant.
Using the definition of the mapping F given by Equation (1), the mappings f_i introduced in (4) take the form:
f_i(x) = x^T B_i x + M_i • x + N_i, i = 1, . . ., n,
where B_i is an n × n symmetric matrix, M_i ∈ R^n is the ith row of M written as a vector, and N_i ∈ R. Given an initial point x_0, we use the iterative method (26) to obtain the next iterate. Because the matrix B_i is symmetric for any index i, we have, for any index j, f_j'(x) = 2B_j x + M_j. Therefore, the iteration yields the formula (27).

Example 1 (Continuation). Consider the mapping F : R^2 → R^2 defined in (23). In this example, x* = (1, 0)^T is a solution of F(x) = 0, and the mapping F̄ defined in (25) takes a form in which h is chosen in such a way that the matrix F̄'(x*) is nonsingular, and the vectors ξ_i are not used. For example, we can take h = (1, 0)^T. Then Equation (27) takes the form (28), which yields the solution x* = (1, 0)^T of F(x) = 0 in this example.
The approaches described above can be modified to derive other methods for solving the equation F(x) = 0. For example, using the equation F'(x)^T h = 0, where h ∈ Ker F'(x*)^T, we obtain a corresponding iterative method. The sequence {x_k} converges to x* under the assumption that the associated matrix is nonsingular. In this modification, unlike the second approach, we can construct an element h without knowledge of the point x*, based on the information at an initial point x_0.
Applying the modified method to Example 1, we obtain the same formulas and results as shown in Equation (28) above. To implement this approach, it is necessary to determine the vectors f_i'(x), i = 1, 2, . . ., n, which correspond to linearly independent vectors f_i'(x*), i ∈ {1, 2, . . ., n}. This can be achieved using information at a point x_0 ∈ N_ε(x*), where ε > 0 is sufficiently small. If the assumption of p-regularity is satisfied, the identification of linearly independent vectors is performed using the method described in the next section.

Procedure for Identifying Zero Elements
The procedure for identifying zero elements can be used to implement the methods described in the previous sections numerically. Let F(x) : R^n → R^m be defined as:
F(x) = (f_1(x), . . ., f_m(x))^T. (29)
In this section, we present a theorem that describes the properties of a special mapping µ(x), which allows us to propose a method for determining r linearly independent vectors f_{i_k}'(x*), k = 1, . . ., r, at the solution x* of F(x) = 0. This procedure is based on the information about the system of vectors {f_1'(x), . . ., f_m'(x)} at some point x in a small neighborhood of x*. As a result, we can define the mapping F̃(x) with the first r components f_{i_k}(x), k = 1, . . ., r, corresponding to the linearly independent vectors f_{i_k}'(x*), k = 1, . . ., r.
Let F ∈ C^3(R^n) be 2-regular at the point x*. For some x ∈ N_ε(x*), where ε is sufficiently small, we define the mapping µ(x) by Equation (30), where d(x, S) denotes the distance between an element x and the set S. The mapping µ(x) is used to determine the maximum number r of linearly independent vectors in the system {f_1'(x*), . . ., f_m'(x*)} by a special procedure that relies only on the information about the mapping F(x) at the point x ∈ N_ε(x*). The properties of the mapping µ(x) are stated in the following theorem, and the proof can be found in [16].
In addition to the properties of the mapping µ(x) given in Theorem 7, we also need the following lemma (for the proof, see [16]).

Lemma 1.
For the non-negative mappings g(x) and µ(x), let the following inequalities hold, where L, C', C'', and σ are positive constants, with C' ≥ C'' and p ≥ 2.
Then, there exists a sufficiently small ε > 0 such that exactly one of the two alternatives described in Remark 2 holds.
Remark 2. Based on the assumptions of Lemma 1, there exists a sufficiently small ε > 0 such that if the inequality g(x̄) ≤ µ(x̄) is satisfied for some x̄ ∈ N_ε(x*), then the inequality g(x) ≤ µ(x) is satisfied for all x ∈ N_ε(x*), and hence g(x*) = 0.

Similarly, if the inequality g(x̄) > µ(x̄) is satisfied for some x̄ ∈ N_ε(x*), then the inequality g(x) > µ(x) is satisfied for all x ∈ N_ε(x*), and hence g(x*) ≠ 0.

Now we are ready to introduce an iterative method that determines the indices 1, . . ., r corresponding to the linearly independent vectors f_i'(x*), i = 1, . . ., r.

Let F(x) : R^n → R^m be defined by (29), and let x* be a solution of F(x) = 0. Let x be in N_ε(x*), where ε is sufficiently small. Define the function µ(x) using Equation (30).
Step 2. Use Step 1 to identify whether the set S_2 = {1, . . ., m}\{i_1} contains at least one index j such that f_j'(x*) ≠ 0. If it does not, the method is finished. Otherwise, identify the next smallest index i_2 in the set S_2 for which the corresponding condition is satisfied. According to Case 2 above, this means that the vectors f_{i_1}'(x*) and f_{i_2}'(x*) are linearly independent.
Repeat Step k until the method is finished. Without loss of generality, assume that the first r vectors {f_1'(x*), . . ., f_r'(x*)} are linearly independent, and define the mapping F̃(x) as:
F̃(x) = (f_1(x), . . ., f_r(x), f̃_{r+1}(x), . . ., f̃_m(x))^T, (31)
where the vectors f̃_k(x) are defined in such a way that f̃_k'(x*) = 0 for k = r + 1, . . ., m. Namely, let each f_k'(x), k = r + 1, . . ., m, be represented as a linear combination of the vectors f_1'(x*), . . ., f_r'(x*), with the coefficients α_k(x) determined by solving the corresponding system of linear equations. In addition, define B(x) to be a nonsingular matrix of the corresponding form. These constructions allow us to transform the mapping F(x) to F̃(x) = B(x) • F(x), where f̃_1'(x*) ≠ 0, . . ., f̃_r'(x*) ≠ 0, and f̃_{r+1}'(x*) = 0, . . ., f̃_m'(x*) = 0. The purpose of this transformation is to simplify the structure of the projection operators. We present a simple example to illustrate an application of the proposed method.
It is easy to see that the vectors f_1'(x*) and f_2'(x*) are linearly dependent. We can check this by applying the method introduced above. By using Equation (30), we define the function µ(x) = max{||F(x)||, . . .}, where α is the angle between the vectors f_1'(x) and f_2'(x). Evaluating these quantities at a point x̄ near x*, we obtain the value µ(x̄).
We are ready to apply the method described above.
Then in Step 1 of the method, with the vector f_2'(x), we also verify whether the corresponding inequality holds. Evaluating it, we conclude that f_2'(x*) = 0. Thus, in this example, the mapping F̃(x) defined in (31) has the form F̃(x) = (f_1, f̃_2)^T, where f_1(x) = x_1 + x_2 and f̃_2(x) = x_1 x_2.
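The linear-combination transformation described above can be sketched numerically. The pair f_1, f_2 below is a hypothetical instance (f_2 = f_1 + x_1 x_2, not the paper's example) whose gradients are linearly dependent at x* = (0, 0)^T; the coefficient α(x) is estimated by least squares at a point near x*, and the transformed gradient vanishes at x* up to O(||x − x*||):

```python
import numpy as np

# Hypothetical pair with gradients linearly dependent at x* = (0, 0):
# f1(x) = x1 + x2,  f2(x) = f1(x) + x1*x2.
def grad_f1(x):
    return np.array([1.0, 1.0])

def grad_f2(x):
    return np.array([1.0 + x[1], 1.0 + x[0]])

def alpha(x):
    """Least-squares coefficient making grad_f2 - alpha*grad_f1 small near x*."""
    g1, g2 = grad_f1(x), grad_f2(x)
    return (g1 @ g2) / (g1 @ g1)

x_near = np.array([1e-4, -2e-4])   # a point in a small neighborhood of x*
a = alpha(x_near)

# Transformed component f2~(x) = f2(x) - alpha(x)*f1(x); its gradient
# (approximately) vanishes at x*.
g_tilde_at_star = grad_f2(np.zeros(2)) - a * grad_f1(np.zeros(2))
print(np.linalg.norm(g_tilde_at_star))  # small, of order ||x_near - x*||
```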

Quadratic Programming Problems
In this section, we consider the quadratic programming (QP) problem (3):
minimize (1/2) x^T Q x + c^T x subject to Ax ≤ b,
where Q is an n × n symmetric matrix, A is an m × n matrix, c, x ∈ R^n, and b ∈ R^m. The Lagrangian for problem (3) is defined by:
L(x, λ) = (1/2) x^T Q x + c^T x + Σ_{i=1}^{m} λ_i (a_i x − b_i),
where λ = (λ_1, . . ., λ_m) is the vector of Lagrange multipliers and a_i is the ith row of the matrix A. The Karush-Kuhn-Tucker (KKT) conditions [17] are satisfied at x* with some λ* ∈ R^m if:
Qx* + c + A^T λ* = 0, Ax* ≤ b, λ*_i ≥ 0, λ*_i (a_i x* − b_i) = 0, i = 1, . . ., m. (33)
The point x* at which relations (33) are satisfied is called a stationary point or a KKT point.
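The KKT conditions can be checked numerically for a candidate pair (x*, λ*). A minimal sketch (the instance data Q, c, A, b below are hypothetical):

```python
import numpy as np

def kkt_holds(Q, c, A, b, x, lam, tol=1e-10):
    """Check the KKT conditions for the QP: min (1/2)x^T Q x + c^T x, Ax <= b."""
    stationarity = Q @ x + c + A.T @ lam    # should be 0
    slack = A @ x - b                       # should be <= 0
    complementarity = lam * slack           # should be 0 componentwise
    return (np.allclose(stationarity, 0, atol=tol)
            and np.all(slack <= tol)
            and np.all(lam >= -tol)
            and np.allclose(complementarity, 0, atol=tol))

# Hypothetical instance: min (1/2)||x||^2 - x1 - x2 subject to x1 <= 0,
# with KKT point x* = (0, 1)^T and multiplier lambda* = 1.
Q = np.eye(2); c = np.array([-1.0, -1.0])
A = np.array([[1.0, 0.0]]); b = np.array([0.0])
x_star = np.array([0.0, 1.0]); lam_star = np.array([1.0])
print(kkt_holds(Q, c, A, b, x_star, lam_star))  # True
```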
Observe that (x*, λ*) is a solution of the following system:
Φ(x, λ) = (Qx + c + A^T λ, λ_1(a_1 x − b_1), . . ., λ_m(a_m x − b_m))^T = 0. (34)
We denote by I(x*) the set of indices of the active constraints at x*:
I(x*) = {i ∈ {1, . . ., m} | a_i x* = b_i}.
The following constraint qualification is used in the paper.
Definition 10 (Linear independence constraint qualification). The linear independence constraint qualification (LICQ) holds at a feasible point x* if the row-vectors a_j, j ∈ I(x*), corresponding to the constraints active at x*, are linearly independent.
The modified second-order sufficient conditions (MSOSC) state that there exist a Lagrange multiplier vector λ* and a scalar ν > 0 such that:
(Qω, ω) ≥ ν ||ω||^2 (35)
for all ω satisfying a_i ω = 0, i ∈ I(x*).

We divide the presentation in this section into three parts. We start by considering regular QP problems in Section 5.1. Then, in Section 5.2, we discuss the issue of identifying the active constraints and propose numerical strategies for determining the set I(x*). We apply these techniques to degenerate QP problems in Section 5.3.

Regular Quadratic Programming
In this section, we consider the regular quadratic programming (QP) problem (3). In other words, we assume that the linear independence constraint qualification (LICQ) and the modified second-order sufficient conditions (MSOSC) (35) hold. Recall that A is an m × n matrix of coefficients representing the constraints Ax ≤ b in problem (3). Without loss of generality, assume that the first p constraints are active at x*, so that I(x*) = {1, . . ., p}. Then we can rewrite the matrix A in the following form:
A = [ A_A
      A_N ], (36)
where A_A is a p × n matrix of coefficients corresponding to the active constraints at x*, and A_N is an (m − p) × n matrix of coefficients corresponding to the nonactive constraints at x*. It is important to note that we do not have prior knowledge of the set I(x*). We will discuss possible numerical realizations to approximate the set of active constraints in Section 5.2. Additionally, we introduce the following notation associated with the active constraints at the point x*: b_A ∈ R^p and b_N ∈ R^{m−p} denote the subvectors of b corresponding to the active and nonactive constraints, respectively, and λ_A ∈ R^p and λ_N ∈ R^{m−p} denote the corresponding subvectors of λ. In the following subsections, we will introduce three approaches to solving the QP problem (3) and provide formulas for the solution.

First Approach to Solving the QP Problem
In this subsection, we present an approach to solving the QP problem and obtaining a formula for its solution. This approach is based on the construction of the 2-factor-operator. For our consideration below, we need the following lemma.

Lemma 2. Let V be an n × n matrix, let G be a p × n matrix such that the columns of G^T are linearly independent, let L be an n × l matrix, let G_N = diag(g_i)_{i=1}^{l} be a diagonal full-rank matrix, and assume that:
(Vx, x) > 0 for all x ∈ Ker G \ {0}. (37)
Then the matrix Γ defined by:
Γ = [ V    G^T  L
      G    0    0
      0    0    G_N ] (38)
is nonsingular.
Proof. To prove the lemma, we must show that the matrix Γ defined by (38) has a zero nullspace. Consider the following system, which defines the nullspace of Γ in terms of a vector v = (x, y, z), where x ∈ R^n, y ∈ R^p, and z ∈ R^l:
Vx + G^T y + Lz = 0, Gx = 0, G_N z = 0. (39)
Since G_N is a full-rank diagonal matrix, the third equation in the system (39) implies that z = 0. Then, multiplying the first equation by x and using the second equation, we obtain (Vx) • x = 0. Consequently, x = 0; otherwise, the equality (Vx) • x = 0 would contradict the assumption (37) of the lemma. Therefore, the first equation in (39) reduces to G^T y = 0, and since the columns of G^T are linearly independent, we obtain y = 0. Thus, the matrix Γ in (38) has a zero nullspace, (x, y, z) = (0, 0, 0), and therefore Γ is nonsingular. This concludes the proof of the lemma.

Let x ∈ R^n, λ ∈ R^m, and let the mapping Φ be defined in (34). Introduce the mappings P_1 and P_2 as the corresponding projections, recall that the matrix A_A is defined in (36), and introduce a vector h = (h_1, h_2) ∈ R^{n+m}, with h_1 ∈ R^n and h_2 ∈ R^m.
Define the mapping Ψ as: Recall that a_i is the ith row of the matrix A and b = (b_1, ..., b_m)^T. Then: and the mapping Ψ defined in (40) can be rewritten as: Introduce the matrix Λ_N = diag(λ_i), i = p+1, ..., m. Then, taking into account the definition of h_1 and h_2, we obtain: Observe that if (x*, λ*) is a solution of (34), it is also a solution of Ψ(x, λ) = 0 or, equivalently, To obtain the formula for the solution (x*, λ*), we rewrite system (41) as: Assuming that LICQ and MSOSC hold and applying Lemma 2, we conclude that the matrix: is invertible, and we obtain the formula for (x*, λ*):

5.1.2. Second Approach to Solving the QP Problem

Assume that we can estimate the set I(x*), which in our notation is I(x*) = {1, 2, ..., p}. Taking into account that λ*_{p+1} = 0, ..., λ*_m = 0 and that A_A x* = b_A, system (34) can be reduced to the following one: which can be written as: Under the assumptions of LICQ and MSOSC, the following matrix is invertible, and system (44) yields the formula for the solution (x*, λ*):

Remark 3. System (41) reduces to system (43) by removing the equations Λ_N A_N h_1 corresponding to the nonactive constraints. Similarly, Equation (42) reduces to (45).
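The second approach can be sketched numerically. Assuming the objective of (3) is written in the standard form (1/2)x^T Q x + c^T x (the paper's exact sign conventions in (44) may differ), the reduced system couples the stationarity condition Qx + c + A_A^T λ_A = 0 with the active constraints A_A x = b_A. The data below are hypothetical.

```python
import numpy as np

# Hypothetical regular QP: minimize 0.5*x^T Q x + c^T x  s.t.  x1 + x2 <= 1,
# with the single constraint assumed active at the solution.
Q = np.eye(2)                      # positive definite, so MSOSC holds
c = np.array([-1.0, -1.0])
A_A = np.array([[1.0, 1.0]])       # active-constraint rows (LICQ holds)
b_A = np.array([1.0])

# Reduced KKT system in the spirit of (44):
#   Q x + A_A^T lam_A = -c,   A_A x = b_A.
n, p = Q.shape[0], A_A.shape[0]
K = np.block([[Q,   A_A.T],
              [A_A, np.zeros((p, p))]])
rhs = np.concatenate([-c, b_A])

sol = np.linalg.solve(K, rhs)      # K is invertible under LICQ + MSOSC (Lemma 2)
x_star, lam_A = sol[:n], sol[n:]
print(x_star, lam_A)               # x* = [0.5, 0.5], lambda_A = [0.5]
```

Since lam_A ≥ 0 and the constraint holds with equality, the computed pair is indeed a KKT point of the sketched problem.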
Remark 4. We note that solutions of QP problems have the following specific property: if x* is a solution of the QP problem and h^T L_xx(x, λ) h = 0 for a vector h ∈ Ker A, then the points x = x* + th are also solutions of the QP problem.

Examples
In this section, we illustrate the two described approaches with examples. Namely, we consider the construction of system (41), required for the first approach. Then we illustrate the use of the exact formula (45) derived in the second approach.
Example 2. Consider the problem: The matrix A in this example is A = I_2, the 2 × 2 identity matrix, and b = (0, 1)^T. The solution to this problem is the point: By choosing h_1 = (0, 1)^T and h_2 = (1, 0)^T, system (41) reduces to the linear system: Now, let us illustrate the second approach. Specifically, using formula (45) for the solution of problem (46) with λ_A = λ_1, we obtain:

Example 3. Consider the problem: The solution to this problem is the point: By choosing h_1 = (1, 1)^T and h_2 = (0, 0)^T, system (41) reduces to the following linear system for problem (47): whose solution is (x*_1, x*_2, λ*_1, λ*_2) = (0, 0, 0, 0), as claimed.
To illustrate the second approach, we rewrite the exact formula (45) for the solution of problem (47) in the form: as claimed.

Third Approach to Solving the QP Problem
In this subsection, we present another approach to solving the QP problem. The formula that we obtain for the solution of the QP problem is also based on the construction of the 2-factor operator.
First, we replace the inequality constraints in the QP problem with equality constraints of the form: Ax − b + y² = 0, where y² = (y²_1, ..., y²_m)^T. We then define the Lagrangian as follows: Introduce the notation: Then the point (x*, y*, λ*) is a solution of the following system: The Jacobian matrix of system (50) is given by: Assuming that LICQ and MSOSC hold, the matrix Φ′(x*, y*, λ*) is singular if and only if the strict complementarity condition does not hold, in other words, if and only if the set of indices of the weakly active constraints is not empty. Let P_1 be the matrix of the orthoprojector onto Im Φ′(x*, y*, λ*), and P_2 be the matrix of the orthoprojector onto (Im Φ′(x*, y*, λ*))^⊥. Note that P_1 is a projector onto the linear part of the mapping Φ, while P_2 is a projector onto the quadratic part of Φ.
Define H as a diagonal matrix whose diagonal entries are the components of the vector ĥ_2, and K as a diagonal matrix whose diagonal entries are the components of the vector ĥ_3, so that: The 2-factor operator for the mapping Φ is defined as: Ψ(x, y, λ) = P_1 Φ(x, y, λ) + P_2 Φ′(x, y, λ) ĥ, or: We choose a vector ĥ_1 according to (51) so that the matrix: is nonsingular. Then (x*, y*, λ*) can be determined using the following formula:
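For orientation, the slack-variable system (50) can be solved by a plain Newton iteration whenever strict complementarity holds, since the Jacobian Φ′ is then nonsingular at the solution; the degenerate case, where Φ′ is singular, is exactly where the 2-factor operator Ψ above is needed instead. The sketch below uses a hypothetical one-constraint QP with strict complementarity, not the paper's 2-factor construction.

```python
import numpy as np

# Hypothetical QP: minimize 0.5*||x||^2 - x1 - x2  s.t.  x1 + x2 <= 1.
Q, c = np.eye(2), np.array([-1.0, -1.0])
A, b = np.array([[1.0, 1.0]]), np.array([1.0])
n, m = 2, 1

def Phi(z):
    # System (50): gradient of L(x, y, lam) = f(x) + lam^T (Ax - b + y^2).
    x, y, lam = z[:n], z[n:n+m], z[n+m:]
    return np.concatenate([
        Q @ x + c + A.T @ lam,        # L_x = 0
        2.0 * lam * y,                # L_y = 0
        A @ x - b + y**2,             # equality-converted constraints
    ])

def Phi_prime(z):
    # Jacobian of the system above.
    x, y, lam = z[:n], z[n:n+m], z[n+m:]
    J = np.zeros((n + 2*m, n + 2*m))
    J[:n, :n] = Q
    J[:n, n+m:] = A.T
    J[n:n+m, n:n+m] = np.diag(2.0 * lam)
    J[n:n+m, n+m:] = np.diag(2.0 * y)
    J[n+m:, :n] = A
    J[n+m:, n:n+m] = np.diag(2.0 * y)
    return J

z = np.array([0.0, 0.0, 0.1, 1.0])   # start near the solution
for _ in range(25):                   # plain Newton iteration
    z = z - np.linalg.solve(Phi_prime(z), Phi(z))

x_star, y_star, lam_star = z[:n], z[n:n+m], z[n+m:]
print(x_star, y_star, lam_star)       # approx [0.5, 0.5], [0], [0.5]
```

Here lam* = 0.5 > 0 while y* = 0, so the active constraint is strongly active and Φ′ stays nonsingular; setting the data so that lam* = 0 on an active constraint would make Newton's method break down, motivating the 2-factor operator.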

Identification of the Active Constraints
In this section, we address the issue of identifying the active constraints and propose strategies for numerically approximating the set of active constraints I(x*).
We begin by considering a mapping h(z) : R^n → R^n, where h ∈ C²(R^n). We can also represent h as an n-vector of functions h_1, ..., h_n, so that h(z) = (h_1(z), ..., h_n(z))^T.

Theorem 8. Let h ∈ C²(R^n, R^n) be 2-regular at the point z*, and let N_ε(z*) be a sufficiently small neighborhood of z* in R^n. Assume that there exists a function η(z) : N_ε(z*) → R such that η(z*) = 0 and, for all z ∈ N_ε(z*), we have: where c_1, c_2 > 0 are independent constants. Then there exists a sufficiently small δ, 0 < δ < ε, such that for any 1 ≤ i ≤ n and any point z ∈ N_δ(z*), the following holds:

Proof. The proof is similar to the one in [5].

Let: ℓ(x) = min_{i=1,...,m} d(g_i(x), span(g_1(x), ..., g_{i−1}(x), g_{i+1}(x), ..., g_m(x))), where d(a, S) denotes the distance between a vector a and a set S. It turns out that if we take η(x) = max{ ||g(x)||^{1/2}, ℓ(x) } and g is 2-regular at x*, then inequality (52) holds with z = x. Theorem 8 can be used for the numerical determination of the set of active constraints I(x*) in the QP problem. To apply Theorem 8, we need to define a function η(·) that satisfies the conditions of the theorem. Recall that for QP problem (3), we denote the Lagrange function defined in (32) by L(x, λ).
Under the assumptions of LICQ and MSOSC, the following holds for x ∈ N_ε(x*) and λ ∈ N_ε(λ*): where ε > 0 is sufficiently small (see, for example, [18]). Hence, the required function η(x, λ) can be defined by: Then, according to Theorem 8, for every i = 1, ..., m, if: Moreover, if we introduce the function: then η(x, λ) satisfies the estimate: where ε > 0 is a sufficiently small number. Then, for any i ∈ I(x*), if: Here, I_0(x*) denotes the set of weakly active constraints, i.e., those for which the associated Lagrange multipliers are equal to zero, while I_+(x*) denotes the set of constraints that are strongly active at the point x*, i.e., those whose associated Lagrange multipliers are positive.
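The identification test can be sketched numerically. The exact formula for η is the one defined above via (52) and is not reproduced here; the sketch instead assumes the illustrative choice η(x, λ) = ||KKT residual||^{1/2} and declares constraint i active when |a_i x − b_i| ≤ η(x, λ), in the spirit of Theorem 8. All data are hypothetical.

```python
import numpy as np

# Hypothetical QP data: minimize 0.5*||x||^2 + c^T x  s.t.  x1 <= 0, x2 <= 1.
Q, c = np.eye(2), np.array([-1.0, -0.5])
A, b = np.array([[1.0, 0.0], [0.0, 1.0]]), np.array([0.0, 1.0])
# Exact solution: x* = (0, 0.5), lambda* = (1, 0); only constraint 1 is active.

def eta(x, lam):
    # Illustrative eta: square root of the norm of the KKT residual
    # (stationarity + complementarity); an assumption, not the paper's formula.
    r = np.concatenate([Q @ x + c + A.T @ lam, lam * (A @ x - b)])
    return np.linalg.norm(r) ** 0.5

def estimate_active_set(x, lam):
    # Declare constraint i active when its slack is below the eta threshold.
    return [i for i in range(len(b)) if abs(A[i] @ x - b[i]) <= eta(x, lam)]

# A point close to (x*, lambda*), as required by Theorem 8.
x, lam = np.array([0.01, 0.49]), np.array([0.98, 0.0])
print(estimate_active_set(x, lam))   # [0]
```

At this nearby point the residual-based threshold is about 0.13, so the first constraint (slack 0.01) is classified as active while the second (slack 0.51) is not, matching I(x*) = {1}.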

General Case
Consider the Lagrange function in the form: In this case, if x* is a solution of problem (3), then there exist multipliers λ*_0 and λ*, not all zero, such that λ*_i ≥ 0, Ax* ≤ b, and the point (x*, λ*_0, λ*) is a solution of the following system: Introduce the notation: We make the following assumption for the rest of the section.
As follows from Assumption 1 and Theorem 8, for those indices i = 1, ..., m that satisfy the inequality: we conclude that i ∈ I(x*).
We can illustrate Assumption 1 with the following examples, in which Assumption 1 holds.

Example 4. This example illustrates a choice of the function ξ in a more general setting.
Under Assumption 1, we use the function ξ to determine the set I(x*). We also take into account the fact that the constraints in the problem are linear and that the rank of the matrix [[2, 0], [1, 0]] is 1. This implies that the constraints are linearly dependent. Consequently, we can eliminate, for instance, the second constraint from problem (55) and simplify system (56) to the following one: Now, by introducing: we construct the modified 2-factor system: This system implies that the solution is x* = 0. Now we demonstrate the application of the approach described in Section 5.1.2 to problem (55). By removing the first constraint, we obtain a regular QP problem with A_A = (1, 0). Additionally, in this example, Then applying Equation (45) derived in Section 5.1.2 yields: as claimed.
There are various directions in which the approach proposed in this paper can be extended. The next example illustrates a degenerate QP problem in which MSOSC does not hold at the solution. However, the approach proposed in this paper can still be applied to find a solution to this problem. It is worth noting that the solution set in this case is locally not unique.

Example 6. Consider the problem: The solution to this problem is the set of points X* = {(0, x*_2) | x*_2 ∈ R}. We observe that the constraint in this example is satisfied as an equality for any x* ∈ X*. Additionally, system (53) for this example consists of one linear equation and three quadratic equations: Denote the projection of the point x onto the set X* by P_{X*}(x). Also, introduce the notation ξ(x, λ_0, λ) = ||Φ_0(x, λ_0, λ)||. For any point (x, λ_0, λ) ∈ N_ε(x*, λ*_0, λ*), we have the inequality: ξ(x, λ_0, λ) ≥ α ||x − P_{X*}(x)||, where α > 0 and ε > 0 is sufficiently small. Consider, for example, the point x* = (0, 1)^T. In problem (57), we replace the inequality x_1 ≤ 0 with the equation x_1 + y² = 0, where y ∈ R. We then introduce the Lagrange function in the form of (49) as follows: L(x, y, λ) = (x_1 x_2 − x_1) + λ(x_1 + y²).

Conclusions
The paper focused on applying the p-regularity theory to nonlinear equations with quadratic mappings and to quadratic programming (QP) problems. The first part of the paper used the special structure of the nonlinear equation and the construction of a 2-factor operator to derive a formula for the solution of the equation. In the second part, the QP problem was reduced to a system of linear equations using the 2-factor operator. The solution of this system is a local minimizer of the QP problem together with its corresponding Lagrange multiplier. A formula for the solution of the linear system was given. The paper also described a procedure for identifying the active constraints, which was used in constructing the linear system.
The paper primarily focuses on the case where the matrix F′(x*) is degenerate at a solution of the nonlinear equation F(x) = 0. However, the matrix F′(x*) does not need to be degenerate. While we do not explicitly address the identification of degeneracy at a solution point, it is possible to determine whether the matrix F′(x*) is degenerate by examining the behavior of the mapping F in a small neighborhood of the solution x*. Specifically, a function ν_p(x) can be defined such that: for some natural number p and constants c_1 and c_2. Based on the conclusion about the degeneracy of the matrix F′(x*), an appropriate method can be chosen to solve the system of equations F(x) = 0, as stated in the following theorem.

Theorem 9. Let F ∈ C^p(R^n, R^n) be such that F(x*) = 0, and let there exist x ∈ N_ε(x*), where ε > 0 is sufficiently small. Then we have the following two cases:
In this case, det F′(x*) ≠ 0, indicating that F is not degenerate at x*.
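The degeneracy check described above can be sketched with a simple numerical stand-in: estimate F′ near the candidate solution by finite differences and inspect its smallest singular value (the function ν_p itself is not reproduced here; the singular-value test is an illustrative substitute, and both mappings below are hypothetical examples).

```python
import numpy as np

def numerical_jacobian(F, x, eps=1e-6):
    # Central finite-difference approximation of F'(x).
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        J[:, j] = (F(x + e) - F(x - e)) / (2 * eps)
    return J

def looks_degenerate(F, x_star, tol=1e-4):
    # F'(x*) is numerically singular iff its smallest singular value is ~0.
    s = np.linalg.svd(numerical_jacobian(F, x_star), compute_uv=False)
    return s[-1] < tol

# Degenerate at 0: F(x) = (x1^2, x2) has F'(0) = [[0, 0], [0, 1]], det = 0.
F_deg = lambda x: np.array([x[0]**2, x[1]])
# Regular at 0: F(x) = (x1 + x2^2, x2) has F'(0) = I, det = 1.
F_reg = lambda x: np.array([x[0] + x[1]**2, x[1]])

print(looks_degenerate(F_deg, np.zeros(2)))   # True
print(looks_degenerate(F_reg, np.zeros(2)))   # False
```

In the regular case an ordinary Newton-type method applies, while the degenerate case calls for the 2-factor constructions developed in this paper.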