Location, Separation and Approximation of Solutions for Quadratic Matrix Equations

In this work, we analyze the location and separation of the solutions of the simplest quadratic matrix equation. For this, we use the qualitative properties that can be deduced from the study of the convergence of iterative processes. This study allows us to determine domains of existence and uniqueness of solutions, and therefore to locate and separate the solutions. Another goal is to approximate a solution of the quadratic matrix equation. For this, we consider iterative processes of fixed-point type. Thus, by analyzing the convergence of these fixed-point-type iterative processes, we locate, separate and approximate solutions of quadratic matrix equations.


Introduction
Solving nonlinear equations is a fundamental issue of numerical analysis, because a great variety of applied problems in engineering, physics, chemistry, biology, and statistics involve such equations as part of their solving process [1].
In this paper, we focus on the simplest quadratic matrix equation:

U² − MU − N = 0, (1)

with M, N ∈ R^{m×m}. We can extend our study to C^{m×m}. The application of iterative schemes is a frequently used technique to approximate a solution of (1). Taking into account the study of the convergence of an iterative scheme, we can obtain qualitative results about the solution of Equation (1). For instance, from a semilocal convergence result [2], a solution existence result is obtained. In this case, we obtain an existence ball for the solution [3], which enables us to locate a solution of Equation (1). If we obtain a uniqueness result for the solution, from this result, we can separate solutions [4]. Thus, the study of the convergence of an iterative process enables us to locate and separate the solutions of Equation (1).
Regarding the third aspect that we address in our study, to approximate a solution of (1), we propose the construction of a hybrid iterative process consisting of two stages: a prediction stage, through an iterative process with good accessibility and low operational cost, and a correction stage, which allows us to accelerate the convergence of the hybrid process. Thus, we call the hybrid iterative process considered predictor-corrector [5][6][7]. Our goal is to obtain an efficient iterative scheme that is competitive with Newton's method, since it is the one most applied to approximate a solution of nonlinear equations. Thus, we consider the Picard and Newton methods as the predictor and corrector methods, respectively. Therefore, these are the three main goals that we set ourselves in this paper: to locate, separate and approximate a solution of (1) efficiently.
There are many areas of research in which a quadratic matrix equation emerges as a possible solution to a given problem. For instance, in control theory, the well-known Riccati equation [8], UMU + UA + A*U + N = 0, appears, where A, M, and N are given coefficient matrices. Another important example is motivated by noisy Wiener-Hopf problems for Markov chains. In this case, for a given diagonal matrix D, with positive and negative diagonal elements, and a positive number µ, which represents the level of noise from a Brownian motion independent of the Markov chains, it is required to find Q-matrices ∆⁺ and ∆⁻ satisfying a pair of quadratic matrix equations; ∆⁺ and ∆⁻ will then be the generators of two new Markov chains, see for instance [9]. Finally, another important example appears in the analysis of structural systems and vibration problems [10]. In this case arises the well-known quadratic eigenvalue problem (λ²A + λM + N)x = 0. It seems natural to ask when the study of the roots of a scalar quadratic equation can be generalized to the study of Equation (1). In general, the answer is no. Only in the special case that A = I, M and N commute, and M² − 4N has a square root, can we think of such a generalization. In general, the location of a solution, its separation from other solutions and, finally, the approximation of a solution of (1) are not simple problems to solve. Furthermore, we know that the quadratic matrix Equation (1) can have no solutions, a finite positive number of them, or infinitely many, as follows immediately from the theory of matrix square roots.
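The last remark can be illustrated directly with matrix square roots. The following small numpy sketch (our own illustration, with a hypothetical helper `reflection`; it is not part of the paper) shows that U² = I already has infinitely many solutions, since every Householder reflection squares to the identity, while a nilpotent matrix such as [[0, 1], [0, 0]] has no square root at all.

```python
import numpy as np

def reflection(v):
    """Householder reflection H = I - 2 v v^T / (v^T v); every such H obeys H^2 = I."""
    v = np.asarray(v, dtype=float)
    v = v / np.linalg.norm(v)
    return np.eye(len(v)) - 2.0 * np.outer(v, v)

# A different square root of the identity for every direction v:
for v in ([1.0, 0.0], [1.0, 1.0], [3.0, -2.0]):
    H = reflection(v)
    assert np.allclose(H @ H, np.eye(2))

# By contrast, [[0, 1], [0, 0]] has no square root: any 2 x 2 matrix U with
# U^2 = [[0, 1], [0, 0]] would be nilpotent, forcing U^2 = 0, a contradiction.
```
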
This article is organized as follows: Section 2 presents results on the location and separation of solutions of Equation (1) through a study of the convergence of fixed-point methods, namely the Successive Approximations and Picard methods. Section 3 formalizes the construction of our predictor-corrector method to approximate a solution of Equation (1), starting from the considered fixed-point and Newton methods. Finally, Section 4 presents some numerical experiments, which show how competitive the predictor-corrector method is with respect to Newton's method for solving quadratic matrix equations.

Location and Separation
In this section, we are interested in the global convergence of the Successive Approximations and Picard methods. As we know, both methods produce the same approximations, but their different algorithmic expressions allow us to carry out different studies of their convergence. By studying the global convergence, we obtain a ball of global convergence for the iterative process. This ball allows us to locate a solution of (1). In addition, we obtain a domain of uniqueness of solutions that allows us to separate solutions of Equation (1).
Due to the aforementioned, to obtain global convergence results, it is natural to think of fixed-point results. To do this, we first transform the quadratic matrix Equation (1) into a fixed-point matrix equation. It is clear that there are different ways to carry out this process. So, if U = T(U) with T : R^{m×m} → R^{m×m}, then a fixed matrix of the operator T must be a solution of Equation (1). The Successive Approximations and Picard methods are iterative schemes that we can use. The Successive Approximations Method for the operator T is given by

U_{n+1} = T(U_n), n ≥ 0, with U₀ given, (4)

and the well-known Picard method [11] by

U_{n+1} = U_n − F(U_n), n ≥ 0, with U₀ given, (5)

with F : R^{m×m} → R^{m×m} given by F(U) = (I − T)(U). As we have already indicated, it is easy to verify that both the Successive Approximations and the Picard methods provide the same iterations, since U − F(U) = (I − (I − T))(U) = T(U). A fixed matrix of P = I − F is a fixed matrix of T. However, the different algorithmic expressions of both methods allow us to obtain different convergence results for them. Now, to define the operator T, it is important to do so in such a way that the Successive Approximations Method is stable [12]. Thus, as can be seen in [13], if we consider T(U) = (U − M)⁻¹N, then the Successive Approximations Method given in (4) is stable. So, we consider the following algorithm:

U_{n+1} = (U_n − M)⁻¹N, n ≥ 0, with U₀ given. (6)

Obviously, in this situation, a fixed matrix of T is a solution of (1). Due to the possible solutions that Equation (1) may have, we use a restricted fixed-point result. Banach's Fixed Point Theorem relative to the whole space is well known [14], but we use the following modification [15].

Theorem 1. Let Ω ⊂ R^{m×m} be a compact and convex set, and let T : Ω → Ω be a contraction. Then, T has a unique fixed matrix in Ω, and it can be approximated by the iterative process U_{n+1} = T(U_n), n ≥ 0, for any U₀ given in Ω.
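As a concrete illustration, each step of the stable iteration with T(U) = (U − M)⁻¹N only requires one linear solve. The following numpy sketch is our own illustration, not the authors' code; it assumes that the fixed matrices of T solve U² − MU − N = 0, which is what U = (U − M)⁻¹N amounts to, and uses hypothetical 2 × 2 data.

```python
import numpy as np

def successive_approximations(M, N, U0, tol=1e-12, max_iter=500):
    """Successive Approximations sketch for T(U) = (U - M)^{-1} N.

    Each step solves the linear system (U_n - M) U_{n+1} = N,
    so no explicit matrix inverse is ever formed.
    """
    U = U0.copy()
    for _ in range(max_iter):
        U_next = np.linalg.solve(U - M, N)  # U_{n+1} = T(U_n)
        if np.linalg.norm(U_next - U) < tol:
            return U_next
        U = U_next
    return U

# Hypothetical data: any fixed matrix U of T satisfies U^2 - MU - N = 0.
M = np.diag([1.0, 1.0])
N = np.diag([2.0, 6.0])
U = successive_approximations(M, N, np.zeros((2, 2)))
print(np.linalg.norm(U @ U - M @ U - N))  # residual of the computed solvent
```

Note that, consistently with the linear convergence of fixed-point iterations, the method needs a few dozen steps here, but each one is cheap.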

The Successive Approximations Method
To start our study, we consider the local convergence of the Successive Approximations Method given in (6). In this case, we need to require the existence of a fixed matrix U* of T and work in the ball B(U*, R). Then, through conditions on R, applying Theorem 1, we obtain conditions for method (6) to be convergent for any starting matrix U₀ in B(U*, R).
Next, to apply Theorem 1 to T with Ω = B(U*, R), T must be a contraction mapping Ω into itself. To prove that T maps Ω into itself, we consider U ∈ Ω and apply Lemma 1, so that T is well defined on Ω. Bounding ‖T(U) − T(V)‖ for U, V ∈ Ω, it follows that the operator T is a contraction of Ω into itself whenever the resulting contraction factor, which depends on R, β and ‖N‖ = c, is less than one. Then, we obtain the following result.

Theorem 2.
Suppose that U* is a fixed matrix for the operator T and that there exists (U* − M)⁻¹.

In order to obtain a new local result, under the conditions of Theorem 2 for U*, we have that ‖U*‖ = ‖T(U*)‖ = ‖(U* − M)⁻¹N‖ ≤ βc. Then, U* ∈ B(0, βc), where 0 is the null matrix in R^{m×m}. Let us see if we can ensure the convergence of the Successive Approximations Method with R ≥ βc. To do this, we give the following result.
Theorem 3. Suppose that U* is a fixed matrix for T and that there exists (U* − M)⁻¹ with ‖(U* − M)⁻¹‖ ≤ β. Then, the Successive Approximations Method is convergent to U* from any starting matrix U₀ ∈ B(0, R). Moreover, U* is the unique fixed matrix of T in B(0, R).

This condition is satisfied for a suitable range of values of R. Observe that, in this case, we also have that R < 1/(2β). Finally, notice that the size of the existence domains can be related to the quantity β²c. Note that it is always verified that βc ≤ (1 − √(1 − 8β²c))/(4β) and, therefore, βc ≤ R.
From the previous result, it is natural to seek another result of restricted global convergence in B(0, R). So, we now assume that there exists M⁻¹. Then, we can obtain the following result of semilocal convergence. Note that, in this result, we do not require the existence of a fixed matrix for T.
Proof. We apply Theorem 1 restricted to B(0, R). Firstly, under the hypotheses, T is a contraction mapping B(0, R) into itself, and the result follows from Theorem 1.

To illustrate the results obtained previously, we consider a simple academic example: the quadratic matrix Equation (1) with the coefficient matrices given in (7), which depend on a real parameter and for which a solution U* of (7) is known explicitly. The possible application of the results obtained depends on the values that the parameter takes. Thus, for the value 0.04, it follows that β²c = 0.16299 < 1 and then, from Theorem 2, we obtain that there exists a unique solution in B(U*, R), where R ∈ (0, 0.564271). Moreover, the Successive Approximations Method is globally convergent in said ball. Now, considering the value 0.025, we can apply the results of Theorems 3 and 4. In this case, β²c = 0.105551 < 1/8 and, by Theorem 3, there is a unique solution in B(0, R), with R ∈ [0.140379, 0.313011]. Furthermore, the Successive Approximations Method is globally convergent on that ball. In addition, taking into account Theorem 4 and α²c = 0.113448 < 1/4, there is a unique solution in B(0, R), with R ∈ [0.116697, 0.296583). Moreover, we obtain global convergence for the Successive Approximations Method on the same ball.
In view of the results obtained, we observe that the best separation of solutions is provided by Theorem 2. On the other hand, the best location of the solution U * is provided by Theorem 4 with B(0, 0.116696).

Picard Method
As is known, the fixed-point conditions are quite restrictive. Next, we relax the above results. For this, we consider the Picard method (5) and study its convergence by using an auxiliary point. This technique allows us to relax the previously obtained conditions. In addition, we obtain both local and semilocal convergence results.
On the other hand, by a mathematical inductive procedure, we have U_n ∈ B(Ũ, R), n ≥ 0. Moreover, notice that {‖U_{n+1} − U_n‖} is a strictly decreasing sequence of positive real numbers. Consequently, there exists lim_n U_n = U*. Now, applying the continuity of the operator F, we obtain F(U*) = 0 and, therefore, U* ∈ B(Ũ, R) is a solution of Equation (1).

Proof. We suppose that V* is another solution of Equation (1). Consider J_F = ∫₀¹ F′(V* + t(U* − V*)) dt; since J_F(U* − V*) = F(U*) − F(V*) = 0, if there exists J_F⁻¹, then it follows that V* = U*. To prove this, we first bound ‖I − F′(W)‖ for all W ∈ R^{m×m} on the segment considered. Thus, ‖I − J_F‖ < 1 and, then, there exists J_F⁻¹. Obviously, V* = U*, and the result is proved.
Notice that, from Theorems 5 and 6, we obtain domains of existence and uniqueness of solutions. Now, we obtain from Theorem 5 both local and semilocal convergence results for the Picard method given in (5).

Corollary 1. Let U* be a solution of Equation (1).
Now, a semilocal convergence result for method (5) is obtained. For that, we take Ũ = U₀ in Theorem 5.
Corollary 2. Suppose that there exists M⁻¹, with c = ‖N‖, and that β₀²c < 1. Then, the Picard method (5) converges to a solution U* of Equation (1), and U*, U_n ∈ B(Ũ, R), for n ≥ 0.

Now, by using recurrence relations [16] relative to the Picard iterations, we obtain another semilocal convergence result for the Picard method.
Theorem 7. We suppose that there exists R, the smallest positive real root of the auxiliary scalar Equation (11). If R < 1/β₀ − √c and β₀²c < 1, then method (5) converges to U*, a solution of Equation (1), starting at U₀. Moreover, U_n, U* ∈ B(U₀, R), for all n ≥ 0, and U* is unique in B(U₀, 1/β₀ − √c).

Proof. Obviously, from (11), we have ‖U₁ − U₀‖ = η₀ < R, and then U₁ ∈ B(U₀, R). Proceeding in a similar way, and by a mathematical inductive procedure, we obtain, for n ≥ 2, that the iterates remain in B(U₀, R) and that {U_n} is a Cauchy sequence, since R < 1/β₀ − √c. Hence, method (5) converges to a limit U* and, by the continuity of F, it follows that U* is a solution of Equation (1).
To finish, proceeding as in Theorem 6, the uniqueness follows.

Now, we apply the Picard method to the example given in (7), taking the value 0.025 for the parameter and Ũ = U₀. Then, R = 0.103032 is the smallest positive root of Equation (11). Thus, starting at U₀, the Picard method converges to U*, a solution of (7). Moreover, U_n, U* ∈ B(U₀, 0.10303). Thus, Corollary 2 and Theorem 7 improve the results of Theorems 3 and 4. Therefore, the location and the separation of solutions are improved by applying the Picard method.

Approximation of Solutions
As we have already indicated in the introduction, to approximate a solution of Equation (1), we present a hybrid iterative scheme. Firstly, we apply an iterative scheme with good accessibility and low operational cost and, after that, to accelerate the convergence, we apply another, faster method. Thus, given any two one-point iterative schemes, we denote by Φ the predictor and by Ψ the corrector iterative scheme, and the hybrid scheme (14) consists of applying some steps of Φ followed by steps of Ψ. So, we approximate a solution of Equation (1) more efficiently [17].
Since the chosen predictor methods, the Successive Approximations and Picard methods, have low operational cost and a good accessibility domain, their application is useful despite their linear convergence. It is well known that high-order iterative schemes have a reduced accessibility domain and, therefore, locating starting points for them is a difficult problem to solve. For this reason, we propose that the hybrid scheme (14) be convergent under the same conditions as those of the predictor iterative scheme.
To approximate a solution of Equation (1), we propose the hybrid iterative scheme (15): N₀ steps of the Picard method (5) followed by steps of the Newton method, where F : R^{m×m} → R^{m×m}, with F(U) = U − (U − M)⁻¹N. Notice that the Picard and Newton methods are chosen as the predictor and corrector iterative schemes, respectively. Thus, the Newton method accelerates the convergence of the Picard method. Firstly, notice that, for each U ∈ R^{m×m} such that (U − M)⁻¹ exists, there exists F′(U), given by

F′(U)W = W + (U − M)⁻¹W(U − M)⁻¹N, for W ∈ R^{m×m}.

Henceforth, we refer to the hybrid method (15) as predictor-corrector. Now, to ensure the convergence of scheme (15) while keeping the accessibility of the predictor scheme, we have to find N₀.
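To fix ideas, here is a compact sketch of scheme (15). It is our own illustrative code, not the authors' implementation; it assumes F(U) = U − (U − M)⁻¹N, so that each Newton step reduces to the Sylvester equation (U_k − M)U_{k+1} + U_{k+1}P_k = U_kP_k + N, where P_k solves (U_k − M)P_k = N.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def predictor_corrector(M, N, U0, n_picard=3, tol=1e-12, max_newton=50):
    """Hybrid sketch: n_picard Picard steps, then Newton corrector steps."""
    U = U0.copy()
    # Predictor: cheap Picard iterations, one linear solve each.
    for _ in range(n_picard):
        U = np.linalg.solve(U - M, N)
    # Corrector: each Newton step solves a Sylvester equation
    #   (U_k - M) U_{k+1} + U_{k+1} P_k = U_k P_k + N,  with (U_k - M) P_k = N.
    for _ in range(max_newton):
        P = np.linalg.solve(U - M, N)
        U_next = solve_sylvester(U - M, P, U @ P + N)
        if np.linalg.norm(U_next - U) < tol:
            return U_next
        U = U_next
    return U
```

For well-conditioned data, the corrector typically reaches machine-precision residuals in a few steps once the predictor has entered its accessibility domain.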
Let us see how the first step of the corrector method is carried out, that is, the step from V₀ = U_{N₀} to V₁. Notice that, from Theorem 7, for n = 0, 1, 2, …, N₀ − 1, we have U_n ∈ B(U₀, R), where R is the smallest positive root of the scalar Equation (11). Now, taking into account that U_{N₀−1}, U_{N₀} ∈ B(U₀, R), and proceeding as in (12) for t ∈ [0, 1], we obtain that there exists F′(V₀)⁻¹. Proceeding as in (13), we obtain that there exists V₁, together with a bound for ‖V₁ − V₀‖. Next, applying the Mean Value Theorem for t ∈ [0, 1], we bound ‖F(V₁)‖. Now, we study the following step of the corrector method. Since {V_n} is a Newton sequence, we can bound ‖V₂ − V₁‖. Therefore, if we suppose that the scalar Equation (18) has at least one positive solution, and we let R be the smallest of them, then ‖V₂ − U₀‖ < δ_R + R and, therefore, V₂ ∈ B(U₀, R + δ_R). Now, we present the semilocal convergence result for the iterative scheme (15).

Theorem 8.
Under the conditions of Theorem 7, we suppose that Equation (18) has at least one positive solution, and we let R be the smallest positive root. Then, starting at U₀ ∈ R^{m×m} and for N₀ given in (19), scheme (15) converges to W*, a solution of Equation (1), where [x] is the integer part of the real number x. Moreover, W_n, W* ∈ B(U₀, R + δ_R), for all n ≥ 0.
Proof. Under these conditions, we have that {W_n} is a Cauchy sequence, with m ≥ 1 and n ≥ N₀. Thus, if f(γ₀)g(γ₀) < 1, then {W_n} is a Cauchy sequence, and there exists W* ∈ B(U₀, R + δ_R) with W* = lim_n W_n. Now, notice that {‖F′(W_n)‖} is a bounded sequence. Taking into account that lim_n ‖W_{n+1} − W_n‖ = 0, that F(W_n) = F′(W_n)(W_n − W_{n+1}) for the Newton steps, and that {‖F′(W_n)‖} is bounded, we conclude that lim_{n→∞} F(W_n) = 0 and, by the continuity of F, it follows that F(W*) = 0.
Following the numerical example given in (7), we illustrate the result obtained in Theorem 8 for the hybrid iterative scheme (15).
To finish the section, we take the value 0.04 for the parameter and U₀ the 2 × 2 null matrix in example (7). Thus, applying Theorem 7, it follows that R = 0.412888 and, as N₀ = 1, to ensure fast convergence of the Newton method to a solution of (7), we only have to iterate once with the Picard method. Moreover, W_n, W* ∈ B(U₀, 0.160193), for all n ≥ 0.

Numerical Experiments
In this section, we show experimentally the benefits of algorithm (15). To approximate a solution of Equation (1), we apply the Picard method (P), the Newton method (NM) and the predictor-corrector method (PC), in different situations, to a quadratic matrix equation of the form (1) of dimension m = 100. We consider from 1 to 5 iterations of the Picard method to predict, and then we iterate with the Newton method to improve the accuracy of the solution. We consider the stopping criterion RES < 10⁻¹⁰, where RES denotes the residual of the current approximation; the results are shown in Tables 1 and 2. We choose a starting matrix whose norm is approximately of the same order of magnitude as that of a solution of the quadratic matrix equation; see [18]. In Table 2, we take U₀ = max_{1≤i≤100}(b_{ii} + √(b_{ii}² + 4c_{ii})) I₁₀₀, in a similar way as in [19]. Notice that the Picard method has a low operational cost, since each step involves solving only the linear system (U_k − M)P_k = N with respect to P_k. However, in the Newton method, each step involves solving the Sylvester equation

(U_k − M)U_{k+1} + U_{k+1}P_k = U_kP_k + N,

which in turn entails obtaining P_k such that (U_k − M)P_k = N.
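The two per-step kernels just described can be sketched as follows (illustrative helper names of our own; the Sylvester form of the Newton step is the one stated above).

```python
import numpy as np
from scipy.linalg import solve_sylvester

def picard_step(U, M, N):
    # One predictor step: a single m x m linear solve, (U_k - M) P_k = N.
    return np.linalg.solve(U - M, N)

def newton_step(U, M, N):
    # One corrector step: the same linear solve for P_k, plus the
    # Sylvester solve (U_k - M) X + X P_k = U_k P_k + N for X = U_{k+1}.
    P = np.linalg.solve(U - M, N)
    return solve_sylvester(U - M, P, U @ P + N)
```

SciPy's `solve_sylvester` uses the Bartels-Stewart algorithm, so a Newton step adds Schur decompositions on top of the single factorization that a Picard step needs, which is the cost gap the predictor stage exploits.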
In the numerical tests that we show, we take values of the index N₀, given in (19), up to 5. This means that we consider up to five iterations of the predictor method, which is common when taking a good starting point U₀. Thus, we save computational cost in these first iterations, since the computational cost of applying the Picard method is very low compared to that of applying the Newton method. Therefore, the number of iterations carried out with the Picard method determines the most efficient situation in terms of computational cost. So, in Tables 1 and 2, the optimal situation is attained in both cases with four and three iterations of the predictor and corrector methods, respectively, and a residual of the order of 10⁻¹⁴.
Thus, as shown in Tables 1 and 2, to achieve the same accuracy as that attained with Newton's method, the computational cost is lower when applying the hybrid method. Even a single Picard iteration halves the number of Newton iterations required to reach the stopping criterion. Therefore, we can say that the hybrid iterative scheme (15) is successful in approximating a solution of Equation (1).

Conclusions
In conclusion, we have presented a stable iterative scheme of Successive Approximations. We have carried out a qualitative analysis of the quadratic matrix equation from the Successive Approximations and Picard methods, presenting domains of existence and uniqueness of solutions that allow us to locate and separate them. Furthermore, we have constructed a hybrid method by using a predictor-corrector iterative scheme, which is competitive with Newton's method.