Article

Location, Separation and Approximation of Solutions for Quadratic Matrix Equations

by Miguel Á. Hernández-Verón and Natalia Romero *
Departamento de Matemáticas y Computación, Facultad de Ciencia y Tecnología, Universidad de La Rioja, 26006 Logroño, Spain
* Author to whom correspondence should be addressed.
Foundations 2022, 2(2), 457-474; https://doi.org/10.3390/foundations2020030
Submission received: 28 March 2022 / Revised: 26 April 2022 / Accepted: 5 May 2022 / Published: 12 May 2022
(This article belongs to the Special Issue Iterative Methods with Applications in Mathematical Sciences)

Abstract: In this work, we focus on analyzing the location and separation of the solutions of the simplest quadratic matrix equation. For this, we use the qualitative properties that can be deduced from the study of the convergence of iterative processes. This study allows us to determine domains of existence and uniqueness of solutions and, therefore, to locate and separate the solutions. Another goal is to approximate a solution of the quadratic matrix equation. For this, we consider iterative processes of fixed-point type. Thus, by analyzing the convergence of these fixed-point-type iterative processes, we locate, separate and approximate solutions of quadratic matrix equations.

1. Introduction

Solving nonlinear equations is a fundamental issue in numerical analysis, since a great variety of applied problems in engineering, physics, chemistry, biology, and statistics involve such equations as part of their solution process [1].
In this paper, we focus on the simplest quadratic matrix equation:
$$Q(U) = U^2 - MU - N = 0, \qquad (1)$$
with $M, N \in \mathbb{R}^{m \times m}$. We can extend our study to $\mathbb{C}^{m \times m}$.
The application of iterative schemes is a frequently used technique to approximate a solution of (1). Taking into account the study of the convergence of an iterative scheme, we can obtain qualitative results about the solutions of Equation (1). For instance, from a semilocal convergence result [2], a result on the existence of a solution is obtained. In this case, we obtain an existence ball for the solution [3], which enables us to locate a solution of Equation (1). If we obtain a uniqueness result for the solution, we can then separate solutions [4]. Thus, the study of the convergence of an iterative process enables us to locate and separate the solutions of Equation (1).
Regarding the third aspect that we address in our study, to approximate a solution of (1), we propose the construction of a hybrid iterative process consisting of two stages: one of prediction, through an iterative process with good accessibility and low operational cost, and another of correction, which allows us to accelerate the convergence of the resulting hybrid process. Thus, we call the hybrid iterative process that we consider a predictor–corrector process [5,6,7]. Our goal is to obtain an efficient iterative scheme that is competitive with Newton's method, since it is the most widely applied method to approximate a solution of a nonlinear equation. Thus, we consider Picard's and Newton's methods as the predictor and corrector methods, respectively.
Therefore, these are the three main goals that we set ourselves in this paper: to locate, separate and approximate a solution of (1) efficiently.
There are many areas of research in which a quadratic matrix equation emerges as a possible solution to a given problem. For instance, in control theory, the well-known Riccati equation [8]
$$UMU + UA + A^*U + N = 0$$
appears, where $A$, $M$, and $N$ are given coefficient matrices. Another important example is motivated by noisy Wiener–Hopf problems for Markov chains. In this case, for a given diagonal matrix $D$, with positive and negative diagonal elements, and a positive number $\mu$, which represents the level of noise from a Brownian motion independent of the Markov chain, one seeks $Q$-matrices $\Delta_+$ and $\Delta_-$ that satisfy the quadratic matrix equations:
$$\frac{1}{2}\mu^2\Delta_+^2 - D\Delta_+ + Q = 0$$
and
$$\frac{1}{2}\mu^2\Delta_-^2 + D\Delta_- + Q = 0.$$
Thus, $\Delta_+$ and $\Delta_-$ are the generators of two new Markov chains; see, for instance, [9]. Finally, another important example appears in the analysis of structural systems and vibration problems [10]. In this case, the well-known quadratic eigenvalue problem arises:
$$E(\theta)x = \theta^2 Ax + \theta Mx + Nx = 0, \qquad \text{with}\quad A, M, N \in \mathbb{C}^{m\times m}.$$
The idea of considering when the study of the roots of a scalar quadratic equation can be generalized to the study of Equation (1) seems clear. In general, the answer is no. Only in the special case that $A = I$, $M$ and $N$ commute, and $M^2 + 4N$ has a square root can we think of such a generalization. In general, the location of a solution, its separation from other solutions and, finally, the approximation of a solution of (1) are not simple problems to solve. Furthermore, we know that the quadratic matrix Equation (1) can have no solutions, a finite positive number of them, or infinitely many, as follows immediately from the theory of matrix square roots.
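In fact, in that special case, writing $S$ for a square root of $M^2 + 4N$ and assuming, as we make explicit here, that $S$ commutes with $M$, the scalar quadratic formula carries over:
$$U = \frac{M \pm S}{2}, \qquad U^2 - MU - N = \frac{M^2 \pm 2MS + S^2}{4} - \frac{M^2 \pm MS}{2} - N = \frac{S^2 - M^2 - 4N}{4} = 0.$$
Outside this commuting case, no such closed formula is available, which motivates the iterative approach of this paper.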
This article is organized as follows: Section 2 presents results on the location and separation of solutions of Equation (1) through a study of the convergence of fixed-point methods, namely the Successive Approximations method and the Picard method. Section 3 formalizes the construction of our predictor–corrector method to approximate a solution of Equation (1), starting from the considered fixed-point and Newton methods. Finally, Section 4 presents some numerical experiments, where it is shown how competitive the predictor–corrector method is with respect to Newton's method for solving quadratic matrix equations.

2. Location and Separation

In this section, we are interested in the global convergence of the Successive Approximations and Picard iterative processes. As we know, both methods produce the same approximations, but their different algorithmic expressions allow us to carry out different studies of their convergence. By studying the global convergence, we obtain a ball of global convergence for the iterative process. This ball allows us to locate a solution of (1). In addition, we obtain a domain of uniqueness of solutions that allows us to separate solutions of Equation (1).
Due to the aforementioned, to obtain global convergence results, it is natural to think of fixed-point results. To do this, we first transform the quadratic matrix Equation (1) into a fixed-point matrix equation. It is clear that there are different ways to carry out this process. So, if $U = T(U)$ with $T: \mathbb{R}^{m\times m} \rightarrow \mathbb{R}^{m\times m}$, then it is necessary that a fixed matrix of the operator $T$ be a solution of Equation (1). The Successive Approximations and Picard methods are iterative schemes that we can use. The Successive Approximations method for the operator $T$ is given by
$$U_0 \ \text{given in} \ \mathbb{R}^{m\times m}, \qquad U_{n+1} = T(U_n), \qquad n \geq 0, \qquad (4)$$
and the well-known Picard method [11] by:
$$U_0 \ \text{given in} \ \mathbb{R}^{m\times m}, \qquad U_{n+1} = P(U_n) = U_n - F(U_n), \qquad n \geq 0, \qquad (5)$$
with $F: \mathbb{R}^{m\times m} \rightarrow \mathbb{R}^{m\times m}$ given by $F(U) = (I - T)(U)$. As we have already indicated, it is easy to verify that both the Successive Approximations and the Picard methods provide the same iterations, since $P(U) = U - F(U) = U - (I - T)(U) = T(U)$. A fixed matrix of $P$ is a fixed matrix of $T$. However, the different algorithmic expressions of both methods allow us to obtain different convergence results for them.
Now, to define the operator $T$, it is important to do so in such a way that the Successive Approximations method is stable [12]. Thus, as can be seen in [13], if we consider $T(U) = (U - M)^{-1}N$, then the Successive Approximations method given in (4) is stable. So, we consider the following algorithm:
$$U_0 \ \text{given in} \ \mathbb{R}^{m\times m}, \qquad U_{n+1} = (U_n - M)^{-1}N, \qquad n \geq 0. \qquad (6)$$
Obviously, in this situation, a fixed matrix of T is a solution of (1).
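In computational terms, each step of (6) amounts to solving one linear matrix system. The following is a minimal sketch of ours (in Python with NumPy; the implementations reported in Section 4 are in Mathematica):

```python
import numpy as np

def successive_approximations(M, N, U0, tol=1e-10, max_iter=100):
    """Iterate U_{n+1} = (U_n - M)^{-1} N, i.e., solve (U_n - M) U_{n+1} = N."""
    U = U0
    for _ in range(max_iter):
        U_next = np.linalg.solve(U - M, N)   # one linear system per step
        if np.linalg.norm(U_next - U, "fro") < tol:
            return U_next
        U = U_next
    return U
```

Each step costs a single factorization of $U_n - M$, which is the low operational cost referred to above.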
In view of the above, and due to the several solutions that Equation (1) may have, we use a restricted fixed-point result. Banach's Fixed Point Theorem relative to the whole space is well known [14], but we use the following modification [15]:
Theorem 1.
Let $\Omega \subseteq \mathbb{R}^{m\times m}$ be a compact and convex set, and let $T: \Omega \rightarrow \Omega$ be a contraction. Then, $T$ has a unique fixed matrix in $\Omega$, and it can be approximated by the iterative process $U_{n+1} = T(U_n)$, $n \geq 0$, for any $U_0$ given in $\Omega$.

2.1. The Successive Approximations Method

To start our study, we consider the local convergence of the Successive Approximations method given in (6). In this case, we need to require that there exists $U^*$, a fixed matrix of $T$, in the domain $\overline{B(U^*, R)}$. Then, through conditions on $R$ and applying Theorem 1, we obtain conditions for method (6) to be convergent from any starting matrix $U_0$ in $\overline{B(U^*, R)}$.
Taking into account that $T'(U)V = -(U - M)^{-1}V(U - M)^{-1}N$, we can deduce the following result.
Lemma 1.
Suppose that $U^*$ is a fixed matrix of the operator $T$ and there exists $(U^* - M)^{-1}$ such that $\|(U^* - M)^{-1}\| \leq \beta$. Then, for each $U \in \overline{B(U^*, R)}$, with $R < 1/\beta$, the following items are satisfied:
(i) for each $t \in [0,1]$, there exists $(U^* + t(U - U^*) - M)^{-1}$ and $\|(U^* + t(U - U^*) - M)^{-1}\| \leq f_R(t)$, with $f_R(t) = \dfrac{\beta}{1 - t\beta R}$;
(ii) $\|T'(U^* + t(U - U^*))\| \leq f_R(t)^2\,\|N\|$;
(iii) $\big\|\big(T'(U^* + t(U - U^*)) - T'(U^*)\big)(U - U^*)\big\| \leq \big(f_R(t)^2 + f_R(0)^2\big)\|U - U^*\|\,\|N\|$.
Proof. 
Firstly, we consider
$$\|I - (U^* - M)^{-1}(U^* + t(U - U^*) - M)\| = \|(U^* - M)^{-1}\,t(U - U^*)\| \leq t\beta R \leq \beta R,$$
for $t \in [0,1]$. Therefore, by the Banach lemma, there exists $(U^* + t(U - U^*) - M)^{-1}$, and
$$\|(U^* + t(U - U^*) - M)^{-1}\| \leq f_R(t),$$
since $R < 1/\beta$. Taking into account item (i), we obtain:
$$\|T'(U^* + t(U - U^*))\| \leq \|(U^* + t(U - U^*) - M)^{-1}\|^2\|N\| \leq f_R(t)^2\|N\|, \qquad t \in [0,1].$$
Finally, to prove item (iii), notice that
$$\big(T'(U^* + t(U - U^*)) - T'(U^*)\big)(U - U^*) = -(U^* + t(U - U^*) - M)^{-1}(U - U^*)(U^* + t(U - U^*) - M)^{-1}N + (U^* - M)^{-1}(U - U^*)(U^* - M)^{-1}N.$$
Thus,
$$\big\|\big(T'(U^* + t(U - U^*)) - T'(U^*)\big)(U - U^*)\big\| \leq \big(f_R(t)^2 + f_R(0)^2\big)\|U - U^*\|\,\|N\|,$$
and item (iii) is proved. □
Next, to apply Theorem 1 to $T$ with $\Omega = \overline{B(U^*, R)}$, $T$ must be a contraction mapping $\Omega$ into itself. To prove that $T$ maps $\Omega$ into itself, we consider $U \in \Omega$ and, from Lemma 1, there exists $(U - M)^{-1}$ if $R < 1/\beta$. In this situation, $T$ is well defined, and it follows that
$$\|T(U) - U^*\| = \|T(U) - T(U^*)\| \leq \|T'(\theta)\|\,\|U - U^*\|,$$
where $\theta = U^* + t(U - U^*)$ for some $t \in (0,1)$. Therefore,
$$\|T(U) - U^*\| \leq f_R(t)^2\|N\|\,R.$$
So, if $f_R(t)^2\|N\| \leq 1$, then $T$ maps $\Omega$ into itself.
On the other hand, if $\|T'(U)\| < 1$ for $U \in \Omega$, then $T$ is a contraction. By Lemma 1, it follows that
$$\|T'(U)\| \leq f_R(1)^2\|N\| < 1.$$
Thus, the operator $T$ is a contraction mapping $\Omega$ into itself if $f_R(1)^2\|N\| < 1$.
Both conditions are therefore verified if $f_R(1)^2\|N\| < 1$, that is, if $R < \frac{1}{\beta} - \sqrt{c} < \frac{1}{\beta}$, where $\|N\| = c$. Then, we obtain the following result.
Theorem 2.
Suppose that $U^*$ is a fixed matrix of the operator $T$ and there exists $(U^* - M)^{-1}$ such that $\|(U^* - M)^{-1}\| \leq \beta$. If $\beta^2 c < 1$, where $\|N\| = c$, then, from any starting matrix $U_0 \in \overline{B(U^*, R)}$, with $R \in \left(0, \frac{1}{\beta} - \sqrt{c}\right)$, the Successive Approximations method converges to $U^*$. Moreover, $U^*$ is the unique fixed matrix of $T$ in $\overline{B(U^*, R)}$.
In order to obtain a new local result, under the conditions of Theorem 2 for $U^*$, we have that
$$\|U^*\| = \|T(U^*)\| \leq \|(U^* - M)^{-1}\|\,\|N\| \leq \beta c.$$
Then, $U^* \in \overline{B(0, \beta c)}$, where $0$ is the null matrix in $\mathbb{R}^{m\times m}$. Let us see if we can ensure the convergence of the Successive Approximations method with $R \geq \beta c$. To do this, we give the following result.
Theorem 3.
Suppose that $U^*$ is a fixed matrix of $T$ and there exists $(U^* - M)^{-1}$ such that $\|(U^* - M)^{-1}\| \leq \beta$. If $\beta^2 c < \frac{1}{8}$, and
$$R \in \left[\frac{1 - \sqrt{1 - 8\beta^2 c}}{4\beta},\ \frac{1}{2}\left(\frac{1}{\beta} - \sqrt{c}\right)\right) \ \ \text{if}\ \beta^2 c \in \left(0, \frac{1}{9}\right], \qquad R \in \left[\frac{1 - \sqrt{1 - 8\beta^2 c}}{4\beta},\ \frac{1 + \sqrt{1 - 8\beta^2 c}}{4\beta}\right] \ \ \text{if}\ \beta^2 c \in \left(\frac{1}{9}, \frac{1}{8}\right),$$
then the Successive Approximations method converges to $U^*$ from any starting matrix $U_0 \in \overline{B(0, R)}$. Moreover, $U^*$ is the unique fixed matrix of $T$ in $\overline{B(0, R)}$.
Proof. 
Since there exists $(U^* - M)^{-1}$ and $U^* \in \overline{B(0, R)}$, for any matrix $U \in \overline{B(0, R)}$, it follows that
$$\|I - (U^* - M)^{-1}(U - M)\| = \|(U^* - M)^{-1}(U^* - U)\| \leq 2\beta R.$$
Now, as $R < \frac{1}{2\beta}$, there exists $(U - M)^{-1}$, and
$$\|(U - M)^{-1}\| \leq \frac{\beta}{1 - 2\beta R}.$$
Therefore,
$$\|T(U)\| \leq \|(U - M)^{-1}\|\,\|N\| \leq \frac{\beta c}{1 - 2\beta R} \leq R.$$
This condition is satisfied for $R \in \left[\frac{1 - \sqrt{1 - 8\beta^2 c}}{4\beta}, \frac{1 + \sqrt{1 - 8\beta^2 c}}{4\beta}\right]$ if $\beta^2 c \leq \frac{1}{8}$. Observe that, in any situation, $R < \frac{1}{2\beta}$. Thus, $T(U) \in \overline{B(0, R)}$.
In addition, if
$$\|T'(U)\| \leq \frac{\beta^2 c}{(1 - 2\beta R)^2} < 1,$$
for any $U \in \overline{B(0, R)}$, then $T$ is a contraction in $\overline{B(0, R)}$. So, since $1 - \beta\sqrt{c} > 0$, if $R < \frac{1}{2}\left(\frac{1}{\beta} - \sqrt{c}\right)$, then $T$ is a contraction in $\overline{B(0, R)}$. Observe that, in this case, we also have $R < \frac{1}{2\beta}$.
Finally, notice that the size of the existence domains can be related to the quantity $\beta^2 c$. So, we always have $\frac{1 - \sqrt{1 - 8\beta^2 c}}{4\beta} \leq \frac{1}{2}\left(\frac{1}{\beta} - \sqrt{c}\right)$. However, $\frac{1 + \sqrt{1 - 8\beta^2 c}}{4\beta} \leq \frac{1}{2}\left(\frac{1}{\beta} - \sqrt{c}\right)$ only if $\beta^2 c > \frac{1}{9}$. □
Note that it is always verified that $\beta c \leq \frac{1 - \sqrt{1 - 8\beta^2 c}}{4\beta}$ and, therefore, $\beta c \leq R$.
From the previous result, it is natural to think of obtaining another result of restricted global convergence in $\overline{B(0, R)}$. So, if there exists $M^{-1}$, then
$$\|I - (-M)^{-1}(U - M)\| = \|M^{-1}U\| \leq \alpha R$$
for $U \in \overline{B(0, R)}$ and $\|M^{-1}\| \leq \alpha$. Therefore, taking $R < 1/\alpha$, by the Perturbation Lemma of matrix analysis, there exists $(U - M)^{-1}$ and $\|(U - M)^{-1}\| \leq \frac{\alpha}{1 - \alpha R}$. Now, we can obtain the following semilocal convergence result. Note that, in the following result, we do not require the existence of a fixed matrix of $T$.
Theorem 4.
Let the matrix $M$ be such that there exists $M^{-1}$, with $\|M^{-1}\| \leq \alpha$ and $\alpha^2 c \leq 1/4$. Then, the Successive Approximations method converges to a fixed matrix $U^*$ of $T$ from any starting matrix $U_0 \in \overline{B(0, R)}$, with $R \in \left[\frac{1 - \sqrt{1 - 4\alpha^2 c}}{2\alpha},\ \frac{1}{\alpha} - \sqrt{c}\right)$. Moreover, $U^*$ is the unique fixed matrix of $T$ in $\overline{B(0, R)}$.
Proof. 
We apply Theorem 1 restricted to $\overline{B(0, R)}$. Firstly, under the hypotheses, we have
$$\|T(U)\| \leq \|(U - M)^{-1}\|\,\|N\| \leq \frac{\alpha c}{1 - \alpha R},$$
for $U \in \overline{B(0, R)}$. Thus, if $R < \frac{1}{\alpha}$ and $\alpha c - R + \alpha R^2 \leq 0$, then $T(U) \in \overline{B(0, R)}$. Therefore, if $\alpha^2 c \leq 1/4$ and $R \in \left[\frac{1 - \sqrt{1 - 4\alpha^2 c}}{2\alpha}, \frac{1 + \sqrt{1 - 4\alpha^2 c}}{2\alpha}\right]$, then $T(U) \in \overline{B(0, R)}$. Notice that $R < 1/\alpha$.
Moreover, if
$$\|T'(U)\| \leq \|(U - M)^{-1}\|^2\|N\| \leq \frac{\alpha^2 c}{(1 - \alpha R)^2} < 1,$$
for any $U \in \overline{B(0, R)}$, then $T$ is a contraction in $\overline{B(0, R)}$; equivalently, this holds if $R < \frac{1}{\alpha} - \sqrt{c}$. Moreover, as $\frac{1 - \sqrt{1 - 4\alpha^2 c}}{2\alpha} \leq \frac{1}{\alpha} - \sqrt{c} \leq \frac{1 + \sqrt{1 - 4\alpha^2 c}}{2\alpha}$, taking $R \in \left[\frac{1 - \sqrt{1 - 4\alpha^2 c}}{2\alpha}, \frac{1}{\alpha} - \sqrt{c}\right)$ and applying Theorem 1 restricted to $\overline{B(0, R)}$, the result follows. □
To illustrate the results obtained previously, we consider a simple academic example. So, we consider the quadratic matrix Equation (1) with
$$M = \begin{pmatrix} 2 & 0 \\ 0 & -1 \end{pmatrix}, \qquad N = \begin{pmatrix} 2\epsilon(\epsilon - 1) & -\epsilon(2 - 3\epsilon) \\ \epsilon(1 + 3\epsilon) & \epsilon(2 + 5\epsilon) \end{pmatrix}, \qquad \epsilon \neq 0. \qquad (7)$$
It is easy to check that
$$U^* = \begin{pmatrix} \epsilon & \epsilon \\ \epsilon & 2\epsilon \end{pmatrix}$$
is a solution of (1).
The possible application of the results obtained depends on the values taken by the parameter $\epsilon$. Thus, for $\epsilon = 0.04$, it follows that $\beta^2 c = 0.16299 < 1$ and then, from Theorem 2, we obtain that there exists a unique solution in $\overline{B(U^*, R)}$, with $R \in (0, 0.564271)$. Moreover, the Successive Approximations method is globally convergent in this ball.
Now, considering $\epsilon = 0.025$, we can apply the results of Theorems 3 and 4. In this case, $\beta^2 c = 0.105551 < 1/8$ and, by Theorem 3, there is a unique solution in $\overline{B(0, R)}$, with $R \in [0.140379, 0.313011]$. Furthermore, the Successive Approximations method is globally convergent on that ball. In addition, taking into account Theorem 4 and $\alpha^2 c = 0.113448 < 1/4$, there is a unique solution in $\overline{B(0, R)}$, with $R \in [0.116697, 0.296583)$. Moreover, we obtain global convergence of the Successive Approximations method on the same ball.
In view of the results obtained, we observe that the best separation of solutions is provided by Theorem 2. On the other hand, the best location of the solution $U^*$ is provided by Theorem 4 with $\overline{B(0, 0.116696)}$.
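These quantities are easy to reproduce numerically. The following sketch (our own check, assuming the Frobenius norm, which matches the figures quoted above) verifies that $U^*$ solves (1) and recomputes $\beta^2 c$ and $\alpha^2 c$ for the example (7):

```python
import numpy as np

fro = lambda A: np.linalg.norm(A, "fro")

def example_matrices(eps):
    # Data of the academic example (7)
    M = np.array([[2.0, 0.0], [0.0, -1.0]])
    N = np.array([[2 * eps * (eps - 1), -eps * (2 - 3 * eps)],
                  [eps * (1 + 3 * eps), eps * (2 + 5 * eps)]])
    U_star = np.array([[eps, eps], [eps, 2 * eps]])
    return M, N, U_star

for eps in (0.04, 0.025):
    M, N, U_star = example_matrices(eps)
    assert fro(U_star @ U_star - M @ U_star - N) < 1e-12   # U* solves (1)
    beta = fro(np.linalg.inv(U_star - M))   # beta:  ||(U* - M)^{-1}||
    alpha = fro(np.linalg.inv(M))           # alpha: ||M^{-1}|| = 1.11803
    c = fro(N)
    # eps = 0.04:  beta^2 c = 0.16299 and 1/beta - sqrt(c) = 0.564271 (Theorem 2)
    # eps = 0.025: beta^2 c = 0.105551 and alpha^2 c = 0.113448 (Theorems 3 and 4)
    print(eps, beta**2 * c, alpha**2 * c, 1 / beta - np.sqrt(c))
```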

2.2. Picard Method

As is known, fixed-point conditions are quite restrictive. Next, we smooth the above results. For this, we consider the Picard method (5), and we study its convergence by using an auxiliary point. This technique allows us to smooth out the previously obtained results. In addition, we obtain results of both local and semilocal convergence.
Theorem 5.
Let $\tilde U \in \mathbb{R}^{m\times m}$ be such that there exists $(\tilde U - M)^{-1}$ with $\|(\tilde U - M)^{-1}\| \leq \tilde\beta$. We suppose that $\|F(\tilde U)\| \leq \frac{1 + \tilde\beta^2 c - 2\tilde\beta\sqrt{c}}{\tilde\beta}$, with $c = \|N\|$, and $\tilde\beta^2 c < 1$. Then, from any starting matrix $U_0 \in B(\tilde U, R)$, the Picard method (5) converges to a solution $U^*$ of Equation (1) and $U^*, U_n \in \overline{B(\tilde U, R)}$, for $n \geq 0$, with
$$R \in \left[\frac{1 - \tilde\beta^2 c + \tilde\beta\|F(\tilde U)\| - \sqrt{\Delta}}{2\tilde\beta},\ \min\left\{\frac{1}{\tilde\beta} - \sqrt{c},\ \frac{1 - \tilde\beta^2 c + \tilde\beta\|F(\tilde U)\| + \sqrt{\Delta}}{2\tilde\beta}\right\}\right), \qquad (8)$$
where $\Delta = \big(1 - \tilde\beta^2 c + \tilde\beta\|F(\tilde U)\|\big)^2 - 4\tilde\beta\|F(\tilde U)\|$.
Proof. 
Firstly, we have that
$$\|I - (\tilde U - M)^{-1}(\tilde U + t(U_0 - \tilde U) - M)\| = \|(\tilde U - M)^{-1}\,t(U_0 - \tilde U)\| \leq t\tilde\beta R \leq \tilde\beta R,$$
for $t \in [0,1]$. Therefore, if $R < 1/\tilde\beta$, then there exists $(\tilde U + t(U_0 - \tilde U) - M)^{-1}$, and
$$\|(\tilde U + t(U_0 - \tilde U) - M)^{-1}\| \leq \frac{\tilde\beta}{1 - t\tilde\beta R}.$$
Moreover, if we take $U_0 \in B(\tilde U, R)$, then
$$\begin{aligned}\|U_1 - \tilde U\| &= \|(U_0 - \tilde U) - (F(U_0) - F(\tilde U)) - F(\tilde U)\| \leq \|F(\tilde U)\| + \left\|\int_0^1 \big[I - F'(\tilde U + t(U_0 - \tilde U))\big](U_0 - \tilde U)\,dt\right\| \\ &\leq \|F(\tilde U)\| + \int_0^1 \big\|(\tilde U + t(U_0 - \tilde U) - M)^{-1}(U_0 - \tilde U)(\tilde U + t(U_0 - \tilde U) - M)^{-1}N\big\|\,dt \\ &\leq \|F(\tilde U)\| + \int_0^1 \left(\frac{\tilde\beta}{1 - t\tilde\beta R}\right)^2\|U_0 - \tilde U\|\,\|N\|\,dt < \|F(\tilde U)\| + \tilde\beta c\int_0^1 \frac{\tilde\beta R}{(1 - t\tilde\beta R)^2}\,dt = \|F(\tilde U)\| + \frac{\tilde\beta c}{1 - \tilde\beta R} - \tilde\beta c.\end{aligned}$$
Therefore, $U_1 \in B(\tilde U, R)$ if
$$\|F(\tilde U)\| - \big(1 - \tilde\beta^2 c + \tilde\beta\|F(\tilde U)\|\big)R + \tilde\beta R^2 < 0. \qquad (9)$$
Since $\|F(\tilde U)\| \leq \frac{1 + \tilde\beta^2 c - 2\tilde\beta\sqrt{c}}{\tilde\beta}$, then
$$\Delta = \big(1 - \tilde\beta^2 c + \tilde\beta\|F(\tilde U)\|\big)^2 - 4\tilde\beta\|F(\tilde U)\| \geq 0 \qquad\text{and}\qquad \frac{1}{\tilde\beta} - \sqrt{c} \geq \frac{1 - \tilde\beta^2 c + \tilde\beta\|F(\tilde U)\| - \sqrt{\Delta}}{2\tilde\beta}.$$
If condition (8) is satisfied, then (9) is true. Therefore, we obtain that $U_1 \in B(\tilde U, R)$.
On the other hand, we observe that
$$\|I - (\tilde U - M)^{-1}(U_0 + t(U_1 - U_0) - M)\| = \|(\tilde U - M)^{-1}\big(t(\tilde U - U_1) + (1 - t)(\tilde U - U_0)\big)\| \leq \tilde\beta R,$$
for $t \in [0,1]$. Therefore, as $R < 1/\tilde\beta$, there exists $(U_0 + t(U_1 - U_0) - M)^{-1}$, and
$$\|(U_0 + t(U_1 - U_0) - M)^{-1}\| \leq \frac{\tilde\beta}{1 - \tilde\beta R}.$$
Thus, we have
$$\|U_2 - U_1\| = \left\|\int_0^1 \big[I - F'(U_0 + t(U_1 - U_0))\big](U_1 - U_0)\,dt\right\| \leq \frac{\tilde\beta^2 c}{(1 - \tilde\beta R)^2}\|U_1 - U_0\|.$$
Therefore, $\|U_2 - U_1\| < \|U_1 - U_0\|$ if $1 - \tilde\beta^2 c + \tilde\beta^2 R^2 - 2\tilde\beta R > 0$. The last condition is satisfied since $R < \frac{1}{\tilde\beta} - \sqrt{c}$, with $\tilde\beta^2 c < 1$.
Now, by a mathematical inductive procedure, we obtain $U_n \in B(\tilde U, R)$, $n \geq 0$. On the other hand, notice that $\|U_{n+1} - U_n\| \leq \frac{\tilde\beta^2 c}{(1 - \tilde\beta R)^2}\|U_n - U_{n-1}\|$, with $\frac{\tilde\beta^2 c}{(1 - \tilde\beta R)^2} < 1$, so that $\{U_n\}$ is a Cauchy sequence. Consequently, there exists $\lim_n U_n = U^*$. Now, applying the continuity of the operator $F$, we obtain
$$U^* = \lim_n U_{n+1} = \lim_n \big(U_n - F(U_n)\big) = U^* - F(U^*)$$
and, therefore, $U^* \in \overline{B(\tilde U, R)}$ is a solution of Equation (1). □
Now, we obtain a uniqueness result for the solution of the quadratic matrix equation given in (1).
Theorem 6.
Under the conditions of Theorem 5, $U^*$ is the unique solution of Equation (1) in $B\left(\tilde U, \frac{1}{\tilde\beta} - \sqrt{c}\right)$.
Proof. 
We suppose that $V^*$ is another solution of Equation (1) in $B\left(\tilde U, \frac{1}{\tilde\beta} - \sqrt{c}\right)$. Then, we can write
$$0 = F(V^*) - F(U^*) = \int_0^1 F'(U^* + t(V^* - U^*))\,dt\,(V^* - U^*) = J_F(V^* - U^*),$$
where $J_F: \mathbb{R}^{m\times m} \rightarrow \mathbb{R}^{m\times m}$ is given by $J_F(W) = \int_0^1 F'(U^* + t(V^* - U^*))\,dt\;W$. So, if there exists $J_F^{-1}$, then it follows that $V^* = U^*$. To prove this, we first establish that
$$\|I - (\tilde U - M)^{-1}(U^* + t(V^* - U^*) - M)\| = \|(\tilde U - M)^{-1}\big(t(\tilde U - V^*) + (1 - t)(\tilde U - U^*)\big)\| \leq \tilde\beta R < 1,$$
for $t \in [0,1]$. Therefore, there exists $(U^* + t(V^* - U^*) - M)^{-1}$ and
$$\|(U^* + t(V^* - U^*) - M)^{-1}\| \leq \frac{\tilde\beta}{1 - \tilde\beta R}.$$
Therefore, for all $W \in \mathbb{R}^{m\times m}$, we have
$$\|(I - J_F)(W)\| = \left\|\int_0^1 \big(I - F'(U^* + t(V^* - U^*))\big)W\,dt\right\| \leq \|(U^* + t(V^* - U^*) - M)^{-1}\|^2\,\|W\|\,\|N\|.$$
Thus,
$$\|I - J_F\| \leq \frac{\tilde\beta^2 c}{(1 - \tilde\beta R)^2} < \frac{\tilde\beta^2 c}{\left(1 - \tilde\beta\left(\frac{1}{\tilde\beta} - \sqrt{c}\right)\right)^2} = 1,$$
and then there exists $J_F^{-1}$. Obviously, $V^* = U^*$, and the result is proved. □
Notice that, from Theorems 5 and 6, we obtain domains of existence and uniqueness of the solution. Now, we obtain from Theorem 5 both local and semilocal convergence results for the Picard method given in (5).
To obtain a local convergence result, we consider the existence of $\tilde U = U^*$, a solution of (1). Thus, $F(\tilde U) = 0$ and $\tilde\beta = \beta$. In addition, notice that $\Delta = (1 - \beta^2 c)^2$.
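In fact (a short check that we add here), setting $\|F(\tilde U)\| = 0$ in the quadratic inequality (9) gives $\beta R^2 - (1 - \beta^2 c)R < 0$, so that
$$0 < R < \frac{1 - \beta^2 c}{\beta}, \qquad \min\left\{\frac{1}{\beta} - \sqrt{c},\ \frac{1 - \beta^2 c}{\beta}\right\} = \frac{1}{\beta} - \sqrt{c},$$
since $\frac{1 - \beta^2 c}{\beta} = \left(\frac{1}{\beta} - \sqrt{c}\right)(1 + \beta\sqrt{c})$. This yields the interval in the following corollary.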
Corollary 1.
Let $U^*$ be a solution of Equation (1) such that there exists $(U^* - M)^{-1}$ with $\|(U^* - M)^{-1}\| \leq \beta$ and $\beta^2 c < 1$. Then, method (5) converges to $U^*$ starting from any $U_0 \in B(U^*, R)$, where $R \in \left(0, \frac{1}{\beta} - \sqrt{c}\right)$. Moreover, $U^*$ is unique in $B\left(U^*, \frac{1}{\beta} - \sqrt{c}\right)$.
Now, a semilocal convergence result for method (5) is obtained. For that, we take $\tilde U = U_0$ in Theorem 5.
Corollary 2.
Let $U_0 \in \mathbb{R}^{m\times m}$ be such that there exists $(U_0 - M)^{-1}$ with $\|(U_0 - M)^{-1}\| \leq \beta_0$. Suppose that $\|F(U_0)\| \leq \frac{1 + \beta_0^2 c - 2\beta_0\sqrt{c}}{\beta_0}$, with $c = \|N\|$, and $\beta_0^2 c < 1$. Then, the Picard method (5) converges to a solution $U^*$ of Equation (1) and $U^*, U_n \in \overline{B(U_0, R)}$, for $n \geq 0$, with
$$R \in \left[\frac{1 - \beta_0^2 c + \beta_0\|F(U_0)\| - \sqrt{\Delta}}{2\beta_0},\ \min\left\{\frac{1}{\beta_0} - \sqrt{c},\ \frac{1 - \beta_0^2 c + \beta_0\|F(U_0)\| + \sqrt{\Delta}}{2\beta_0}\right\}\right),$$
where $\Delta = \big(1 - \beta_0^2 c + \beta_0\|F(U_0)\|\big)^2 - 4\beta_0\|F(U_0)\|$. Moreover, the solution $U^*$ is the unique solution of the equation $F(U) = 0$ in $B\left(U_0, \frac{1}{\beta_0} - \sqrt{c}\right)$.
Now, by using recurrence relations [16] relative to Picard iterations, we obtain another semilocal convergence result for the Picard method.
Theorem 7.
Let $U_0 \in \mathbb{R}^{m\times m}$ be such that there exists $(U_0 - M)^{-1}$ with $\|(U_0 - M)^{-1}\| \leq \beta_0$ and $\|F(U_0)\| \leq \eta_0$. We suppose that there exists $R$, the smallest positive real root of the auxiliary scalar equation
$$\left(1 + \frac{\beta_0^2 c\,(1 - \beta_0 t)}{1 - \beta_0^2 c - 2\beta_0 t + \beta_0^2 t^2}\right)\eta_0 = t. \qquad (11)$$
If $R < \frac{1}{\beta_0} - \sqrt{c}$ and $\beta_0^2 c < 1$, then method (5) converges to a solution $U^*$ of Equation (1) starting at $U_0$. Moreover, $U_n, U^* \in \overline{B(U_0, R)}$, for all $n \geq 0$, and $U^*$ is unique in $B\left(U_0, \frac{1}{\beta_0} - \sqrt{c}\right)$.
Proof. 
Obviously, from (11), we have $\|U_1 - U_0\| = \|F(U_0)\| \leq \eta_0 < R$, and then $U_1 \in B(U_0, R)$.
On the one hand, it follows that
$$\|I - (U_0 - M)^{-1}(U_0 + t(U_1 - U_0) - M)\| = \|(U_0 - M)^{-1}\,t(U_1 - U_0)\| \leq t\beta_0 R \leq \beta_0 R, \qquad (12)$$
for $t \in [0,1]$. Therefore, if $R < 1/\beta_0$, then there exists $(U_0 + t(U_1 - U_0) - M)^{-1}$, and
$$\|(U_0 + t(U_1 - U_0) - M)^{-1}\| \leq \frac{\beta_0}{1 - t\beta_0 R}.$$
On the other hand, we have
$$\begin{aligned}\|F(U_1)\| &= \left\|-(U_1 - U_0) + \int_{U_0}^{U_1} F'(W)\,dW\right\| = \left\|\int_0^1 \big(F'(U_0 + t(U_1 - U_0)) - I\big)(U_1 - U_0)\,dt\right\| \\ &\leq \int_0^1 \big\|(U_0 + t(U_1 - U_0) - M)^{-1}(U_1 - U_0)(U_0 + t(U_1 - U_0) - M)^{-1}N\big\|\,dt \leq \int_0^1 \frac{\beta_0^2 c}{(1 - t\beta_0 R)^2}\,dt\,\|U_1 - U_0\| \leq \frac{\beta_0^2 c}{1 - \beta_0 R}\|U_1 - U_0\|.\end{aligned} \qquad (13)$$
Thus, from $R < \frac{1 - \beta_0^2 c}{\beta_0}$, it follows that $\frac{\beta_0^2 c}{1 - \beta_0 R} < 1$ and
$$\|U_2 - U_1\| \leq \frac{\beta_0^2 c}{1 - \beta_0 R}\|U_1 - U_0\| < \|U_1 - U_0\|.$$
Therefore, $U_2 \in B(U_0, R)$.
Proceeding in a similar way, we obtain
$$\|I - (U_0 - M)^{-1}(U_1 + t(U_2 - U_1) - M)\| = \|(U_0 - M)^{-1}\big(t(U_0 - U_2) + (1 - t)(U_0 - U_1)\big)\| \leq \beta_0 R < 1,$$
for $t \in [0,1]$. Then, there exists $(U_1 + t(U_2 - U_1) - M)^{-1}$ with
$$\|(U_1 + t(U_2 - U_1) - M)^{-1}\| \leq \frac{\beta_0}{1 - \beta_0 R}.$$
Moreover, we have
$$\begin{aligned}\|F(U_2)\| &= \left\|-(U_2 - U_1) + \int_{U_1}^{U_2} F'(W)\,dW\right\| = \left\|\int_0^1 \big(F'(U_1 + t(U_2 - U_1)) - I\big)(U_2 - U_1)\,dt\right\| \\ &\leq \int_0^1 \big\|(U_1 + t(U_2 - U_1) - M)^{-1}(U_2 - U_1)(U_1 + t(U_2 - U_1) - M)^{-1}N\big\|\,dt \leq \frac{\beta_0^2 c}{(1 - \beta_0 R)^2}\|U_2 - U_1\|.\end{aligned}$$
Since $R < \frac{1}{\beta_0} - \sqrt{c}$, we obtain
$$\|U_3 - U_2\| \leq \frac{\beta_0^2 c}{(1 - \beta_0 R)^2}\|U_2 - U_1\| < \|U_2 - U_1\|.$$
On the other hand,
$$\begin{aligned}\|U_3 - U_0\| &\leq \|U_3 - U_2\| + \|U_2 - U_1\| + \|U_1 - U_0\| \leq \left[\left(\frac{\beta_0^2 c}{(1 - \beta_0 R)^2} + 1\right)\frac{\beta_0^2 c}{1 - \beta_0 R} + 1\right]\|U_1 - U_0\| \\ &< \left(1 + \frac{\beta_0^2 c}{1 - \beta_0 R}\cdot\frac{1}{1 - \frac{\beta_0^2 c}{(1 - \beta_0 R)^2}}\right)\eta_0 = \left(1 + \frac{\beta_0^2 c\,(1 - \beta_0 R)}{1 - \beta_0^2 c - 2\beta_0 R + \beta_0^2 R^2}\right)\eta_0 = R,\end{aligned}$$
so $U_3 \in B(U_0, R)$.
By a mathematical inductive procedure, we obtain, for $n \geq 2$:
$$\|U_{n+1} - U_n\| \leq \frac{\beta_0^2 c}{(1 - \beta_0 R)^2}\|U_n - U_{n-1}\|,$$
$$\|U_{n+1} - U_0\| \leq \left[\left(\sum_{k=1}^{n-1}\left(\frac{\beta_0^2 c}{(1 - \beta_0 R)^2}\right)^k + 1\right)\frac{\beta_0^2 c}{1 - \beta_0 R} + 1\right]\|U_1 - U_0\| < R.$$
Since $\frac{\beta_0^2 c}{(1 - \beta_0 R)^2} < 1$, method (5) converges to a limit $U^*$ and, by the continuity of $F$, it follows that $U^*$ is a solution of Equation (1).
To finish, proceeding as in Theorem 6, the uniqueness follows. □
Now, we apply the Picard method to the example given in (7). Thus, for $\epsilon = 0.025$ and
$$\tilde U = \begin{pmatrix} \epsilon & 0 \\ 0 & \epsilon \end{pmatrix},$$
it follows that there exists $(\tilde U - M)^{-1}$ with $\|(\tilde U - M)^{-1}\| \leq 1.09917$, and the conditions $\|F(\tilde U)\| \leq \frac{1 + \tilde\beta^2 c - 2\tilde\beta\sqrt{c}}{\tilde\beta}$ and $\tilde\beta^2 c < 1$ are satisfied. Thus, method (5) converges to a solution $U^*$ of Equation (1) with data (7) from any starting matrix $U_0 \in \overline{B(\tilde U, R)}$, with $R \in [0.0699064, 0.608512]$. Furthermore, $U^*$ is the unique solution of Equation (1) in $B(\tilde U, 0.608512)$.
Now, we compare the results obtained in Corollary 2 and in Theorem 7. For that, we choose the null matrix as the starting matrix,
$$U_0 = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}.$$
The hypotheses of Corollary 2 are satisfied, with $\|(U_0 - M)^{-1}\| \leq 1.11803$, $\|F(U_0)\| = 0.0895976 \leq \frac{1 + \beta_0^2 c - 2\beta_0\sqrt{c}}{\beta_0} = 0.393375$, and $\beta_0^2 c = 0.113448 < 1$. Thus, the Picard method converges to a solution $U^*$, which is unique in $B(U_0, 0.593166)$, and $U^*, U_n \in \overline{B(U_0, R)}$, with $R \in [0.104128, 0.593166)$.
Furthermore, $R = 0.103032$ is the smallest positive root of Equation (11) and $R < \frac{1}{\beta_0} - \sqrt{c} = 0.593166$. Thus, starting at $U_0$, the Picard method converges to a solution $U^*$ of Equation (1) with data (7). Moreover, $U_n, U^* \in \overline{B(U_0, 0.103032)}$.
Thus, Corollary 2 and Theorem 7 improve the results of Theorems 3 and 4. Therefore, the location and the separation of solutions are improved by applying the Picard method.
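The smallest positive root of the auxiliary Equation (11) quoted above can be recovered numerically. The following short sketch is our own (it takes the values $\beta_0 = 1.11803$, $\beta_0^2 c = 0.113448$ and $\eta_0 = 0.0895976$ quoted above as given) and uses SciPy's brentq root finder:

```python
from scipy.optimize import brentq

beta0 = 1.11803
c = 0.113448 / beta0**2          # so that beta0^2 * c = 0.113448
eta0 = 0.0895976                 # ||F(U_0)|| for the example (7), eps = 0.025

def g(t):
    # Left-hand side minus right-hand side of the auxiliary Equation (11)
    denom = 1 - beta0**2 * c - 2 * beta0 * t + (beta0 * t)**2
    return (1 + beta0**2 * c * (1 - beta0 * t) / denom) * eta0 - t

R = brentq(g, 0.0, 0.3)          # g changes sign once on (0, 0.3)
print(R)                         # approximately 0.103032
```

This reproduces the radius $R = 0.103032$ used in Theorem 7 above.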

3. Approximation of Solutions

As we have already indicated in the introduction, to approximate a solution of Equation (1), we present a hybrid iterative scheme. Firstly, we apply an iterative scheme with good accessibility and low operational cost and, after that, to accelerate the convergence, we apply another, faster method as follows:
$$U_0 \in \mathbb{R}^{m\times m}, \quad U_{n+1} = \Phi(U_n), \ n = 1, 2, \ldots, N_0, \qquad V_0 = U_{N_0+1}, \quad V_{n+1} = \Psi(V_n), \ n \geq 0, \qquad (14)$$
from any two one-point iterative schemes:
$$U_0 \in \mathbb{R}^{m\times m}, \quad U_{n+1} = \Phi(U_n), \ n \geq 0, \qquad\text{and}\qquad V_0 \in \mathbb{R}^{m\times m}, \quad V_{n+1} = \Psi(V_n), \ n \geq 0.$$
We denote by $\Phi$ and $\Psi$ the predictor and the corrector iterative schemes, respectively. In this way, we approximate a solution of Equation (1) more efficiently [17].
Since the chosen predictor methods, the Successive Approximations and Picard methods, have a low operational cost and a good accessibility domain, their application is useful despite their linear convergence. It is well known that high-order iterative schemes have a reduced accessibility domain, so that locating starting points for them is a difficult problem. For this reason, we propose that the hybrid scheme (14) be convergent under the same conditions as those of the predictor iterative scheme.
To approximate a solution of Equation (1), we propose the hybrid iterative scheme:
$$U_0 \in \mathbb{R}^{m\times m}, \quad U_{n+1} = U_n - F(U_n), \ n = 0, 1, 2, \ldots, N_0 - 1, \qquad V_0 = U_{N_0}, \quad V_{n+1} = V_n - [F'(V_n)]^{-1}F(V_n), \ n \geq 0, \qquad (15)$$
where $F: \mathbb{R}^{m\times m} \rightarrow \mathbb{R}^{m\times m}$, with $F(U) = U - (U - M)^{-1}N$. Notice that the Picard and Newton methods are chosen as the predictor and corrector iterative schemes, respectively. Thus, the Newton method accelerates the convergence of the Picard method.
Firstly, notice that, for each $U \in \mathbb{R}^{m\times m}$, there exists $F'(U): \mathbb{R}^{m\times m} \rightarrow \mathbb{R}^{m\times m}$, with $[F'(U)]W = W + (U - M)^{-1}W(U - M)^{-1}N$, for all $W \in \mathbb{R}^{m\times m}$. Moreover, there exists $F''(U): \mathbb{R}^{m\times m} \times \mathbb{R}^{m\times m} \rightarrow \mathbb{R}^{m\times m}$, with $[F''(U)]VW = -(U - M)^{-1}V(U - M)^{-1}W(U - M)^{-1}N - (U - M)^{-1}W(U - M)^{-1}V(U - M)^{-1}N$, for all $V, W \in \mathbb{R}^{m\times m}$.
Henceforth, we refer to the hybrid method (15) as:
$$W_n = \begin{cases} U_n & \text{for } n = 0, 1, \ldots, N_0 - 1, \\ V_n & \text{for } n \geq N_0. \end{cases} \qquad (16)$$
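In matrix terms, each Picard step of (15) solves one linear system and each Newton step solves one Sylvester equation (this Sylvester form is detailed in Section 4). The sketch below is our own illustration of scheme (15), in Python rather than the Mathematica used later, assuming SciPy's solve_sylvester (which solves $AX + XB = Q$):

```python
import numpy as np
from scipy.linalg import solve_sylvester

def predictor_corrector(M, N, U0, n_picard, tol=1e-10, max_newton=50):
    """Hybrid scheme (15): n_picard Picard steps, then Newton steps."""
    U = U0
    for _ in range(n_picard):                    # predictor: Picard
        U = np.linalg.solve(U - M, N)            # U_{n+1} = (U_n - M)^{-1} N
    V = U                                        # V_0 = U_{N_0}
    for _ in range(max_newton):                  # corrector: Newton
        P = np.linalg.solve(V - M, N)            # P_k = (V_k - M)^{-1} N
        # Newton step written as the Sylvester equation (V - M) V' + V' P = V P + N
        V = solve_sylvester(V - M, P, V @ P + N)
        if np.linalg.norm(V @ V - M @ V - N, "fro") < tol:
            break
    return V
```

A call such as predictor_corrector(M, N, U0, n_picard=4) corresponds, in spirit, to the PC 4 + ... rows of the tables in Section 4.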
Now, to ensure convergence while keeping the accessibility of the predictor scheme for scheme (15), we have to find $N_0$.
Let us see how the first step of the corrector method is carried out, that is, the step from $V_0 = U_{N_0}$ to $V_1$. Notice that, from Theorem 7, for $n = 0, 1, 2, \ldots, N_0 - 1$, we have
$$\|U_{n+1} - U_n\| \leq \frac{\beta_0^2 c}{(1 - \beta_0 R)^2}\|U_n - U_{n-1}\|, \qquad \|U_{n+1} - U_0\| < R,$$
where $R$ is the smallest positive root of the scalar Equation (11).
Now, taking into account that $U_{N_0-1}, U_{N_0} \in B(U_0, R)$ and proceeding as in (12), for $t \in [0,1]$, we obtain that there exists $(U_{N_0-1} + t(U_{N_0} - U_{N_0-1}) - M)^{-1}$ with
$$\|(U_{N_0-1} + t(U_{N_0} - U_{N_0-1}) - M)^{-1}\| \leq \frac{\beta_0}{1 - \beta_0 R}.$$
Therefore, there exists $(V_0 - M)^{-1}$ and $F'(V_0)$ is well defined. Moreover, $\|(V_0 - M)^{-1}\| \leq \frac{\beta_0}{1 - \beta_0 R}$.
Proceeding as in (13), we obtain
$$\|F(V_0)\| = \|F(U_{N_0})\| \leq K\|U_{N_0} - U_{N_0-1}\| \leq K^{N_0}\|U_1 - U_0\|,$$
where $K = \frac{\beta_0^2 c}{(1 - \beta_0 R)^2}$.
Moreover, as $\|I - F'(V_0)\| = \|I - F'(U_{N_0})\| \leq K < 1$, there exists $[F'(V_0)]^{-1}$ with
$$\|[F'(V_0)]^{-1}\| \leq \frac{1}{1 - K}.$$
Thus, there exists $V_1$ and it follows that
$$\|V_1 - V_0\| \leq \frac{K}{1 - K}\|U_{N_0} - U_{N_0-1}\| \leq \frac{\eta_0}{1 - K}K^{N_0} < \frac{\eta_0}{1 - K}.$$
Next, we consider
$$\|[F'(V_0)]^{-1}\|\,\|V_1 - V_0\| \leq a_0, \qquad (17)$$
with $\delta = \frac{\eta_0}{1 - K}K^{N_0}$ and $a_0 = \frac{\delta}{1 - K} < \frac{\eta_0}{(1 - K)^2}$.
Now, we suppose that there exists $\tilde R > 0$ such that, if $V_1 \in B(V_0, \delta\tilde R)$ and $\beta_0(R + \delta\tilde R) < 1$, then $(V_0 + t(V_1 - V_0) - M)^{-1}$ exists, with $\|(V_0 + t(V_1 - V_0) - M)^{-1}\| \leq \frac{\beta_0}{1 - \beta_0(R + \delta\tilde R)}$, for $t \in [0,1]$.
Now, from the Mean Value Theorem, it follows that
$$\|F'(U) - F'(V)\| \leq \Phi(\tilde R)\|U - V\|,$$
with $U, V \in B(V_0, \tilde R)$ and $\Phi(\tilde R) = \frac{2\beta_0^3 c}{(1 - \beta_0(R + \tilde R))^3}$.
Now, we study the following step of the corrector method. On the one hand,
$$\|I - [F'(V_0)]^{-1}F'(V_1)\| \leq \Phi(\tilde R)\,\|[F'(V_0)]^{-1}\|\,\|V_1 - V_0\| \leq \gamma_0,$$
where $\gamma_0 = \Phi(\tilde R)a_0$. Now, if $\gamma_0 < 1$, there exists $[F'(V_1)]^{-1}$ and
$$\|[F'(V_1)]^{-1}\| \leq f(\gamma_0)\|[F'(V_0)]^{-1}\|,$$
where $f(t) = \frac{1}{1 - t}$.
Since $\{V_n\}$ is a Newton sequence, we have
$$F(V_1) = F(V_0) + F'(V_0)(V_1 - V_0) + \int_{V_0}^{V_1}\big(F'(W) - F'(V_0)\big)\,dW = \int_0^1\big(F'(V_0 + t(V_1 - V_0)) - F'(V_0)\big)(V_1 - V_0)\,dt,$$
so that
$$\|F(V_1)\| \leq \frac{\Phi(\tilde R)}{2}\|V_1 - V_0\|^2.$$
Hence,
$$\|V_2 - V_1\| \leq \|[F'(V_1)]^{-1}\|\,\|F(V_1)\| \leq f(\gamma_0)g(\gamma_0)\|V_1 - V_0\|,$$
where $g(t) = \frac{t}{2}$.
Moreover, from $a_0 < \frac{\eta_0}{(1 - K)^2}$, if $f(\gamma_0)g(\gamma_0) < 1$, then it follows that
$$\|V_2 - V_0\| \leq \|V_2 - V_1\| + \|V_1 - V_0\| \leq \big(1 + f(\gamma_0)g(\gamma_0)\big)\|V_1 - V_0\| < \frac{1}{1 - f(\gamma_0)g(\gamma_0)}\|V_1 - V_0\| = \frac{2\big(1 - \Phi(\tilde R)a_0\big)}{2 - 3\Phi(\tilde R)a_0}\,\delta = \delta\tilde R.$$
Therefore, if we suppose that the scalar equation
$$\frac{2\big((1 - K)^2 - \mathcal{M}(t)\,\eta_0\big)}{2(1 - K)^2 - 3\mathcal{M}(t)\,\eta_0} = t, \qquad \text{where we write } \mathcal{M}(t) = \Phi(t)K^{N_0}, \qquad (18)$$
has at least one positive solution, and we let $\tilde R$ be the smallest of them, then $\|V_2 - U_0\| < \delta\tilde R + R$ and, therefore, $V_2 \in B(U_0, R + \delta\tilde R)$.
Now, we present the semilocal convergence result for iterative scheme (15).
Theorem 8.
Under the conditions of Theorem 7, suppose that Equation (18) has at least one positive solution, and let $\tilde R$ be its smallest positive root. Then, starting at $U_0 \in \mathbb{R}^{m\times m}$ and for
$$N_0 \geq 1 + \max\left\{\left[\frac{\ln\frac{(1 - K)^2}{2\eta_0\Phi(\tilde R)}}{\ln K}\right],\ \left[\frac{\ln\frac{(1 - K)(1/\beta_0 - R)}{\tilde R\,\eta_0}}{\ln K}\right]\right\}, \qquad (19)$$
scheme (15) converges to $W^*$, a solution of Equation (1), where $[x]$ denotes the integer part of the real number $x$. Moreover, $W_n, W^* \in \overline{B(U_0, R + \delta\tilde R)}$, for all $n \geq 0$.
Proof. 
Observe that, from (17), we have $\|V_1 - V_0\| < \delta\tilde R$. So, $\|V_1 - U_0\| < R + \delta\tilde R$ and then $V_1 \in B(U_0, R + \delta\tilde R)$.
Moreover, from the hypotheses, $(V_0 + t(V_1 - V_0) - M)^{-1}$ exists, with $\|(V_0 + t(V_1 - V_0) - M)^{-1}\| \leq \frac{\beta_0}{1 - \beta_0(R + \delta\tilde R)}$, for $t \in [0,1]$.
Next, from $\gamma_0 < 1/2$ for $N_0$ satisfying (19), it follows that
(I₁) $\|[F'(V_1)]^{-1}\| \leq f(\gamma_0)\|[F'(V_0)]^{-1}\|$,
(II₁) $\|F(V_1)\| \leq \frac{\Phi(\tilde R)}{2}\|V_1 - V_0\|^2$,
(III₁) $\|V_2 - V_1\| \leq f(\gamma_0)g(\gamma_0)\|V_1 - V_0\| < \|V_1 - V_0\|$,
(IV₁) $\|V_2 - V_0\| < \delta\tilde R$,
and then $V_2 \in B(U_0, R + \delta\tilde R)$.
To continue, from $\gamma_0$, we define the auxiliary scalar sequence $\{\gamma_n\}$ by $\gamma_{n+1} := \gamma_n f(\gamma_n)^2 g(\gamma_n)$, $n \geq 0$. Moreover, from $\gamma_0 < 1/2$, for $N_0$ satisfying (19), it follows that $f(\gamma_0)^2 g(\gamma_0) < 1$ and, then, $\{\gamma_n\}$ is a strictly decreasing sequence. Next, by an inductive procedure, items (Iₙ)–(IVₙ) follow, for $n \geq 2$.
Under these conditions, we have that $\{W_n\}$ is a Cauchy sequence:
$$\begin{aligned}\|W_{n+m} - W_n\| = \|V_{n+m} - V_n\| &\leq \sum_{i=n}^{n+m-1}\|V_{i+1} - V_i\| \leq \left(1 + \sum_{i=n}^{n+m-2}\prod_{j=n}^{i} f(\gamma_j)g(\gamma_j)\right)\|V_{n+1} - V_n\| \\ &< \sum_{i=0}^{m-1}\big(f(\gamma_0)g(\gamma_0)\big)^{n+i}\|V_1 - V_0\| = \frac{1 - \big(f(\gamma_0)g(\gamma_0)\big)^m}{1 - f(\gamma_0)g(\gamma_0)}\big(f(\gamma_0)g(\gamma_0)\big)^n\|V_1 - V_0\|,\end{aligned}$$
with $m \geq 1$ and $n \geq N_0$. Thus, if $f(\gamma_0)g(\gamma_0) < 1$, then $\{W_n\}$ is a Cauchy sequence, and there exists $W^* \in \overline{B(U_0, R + \delta\tilde R)}$ with $W^* = \lim_n W_n$.
Now, notice that $\{F'(W_n)\}$ is a bounded sequence, since
$$\|F'(W_n)\| \leq \|F'(U_0)\| + \mathcal{M}\,\|W_n - U_0\| < \|F'(U_0)\| + \mathcal{M}\,(R + \delta\tilde R),$$
where $\mathcal{M}$ denotes a bound of $\|F''\|$ on $\overline{B(U_0, R + \delta\tilde R)}$.
Taking into account that $\lim_n\|W_{n+1} - W_n\| = \lim_n\|[F'(W_n)]^{-1}F(W_n)\| = 0$ and that $\{F'(W_n)\}$ is bounded, we conclude that $\lim_n F(W_n) = 0$ and, by the continuity of $F$, it follows that $F(W^*) = 0$. □
Following the numerical example given in (7), we illustrate the result obtained in Theorem 8 for the hybrid iterative scheme (15).
To finish the section, we take $\epsilon = 0.04$ and
$$U_0 = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$$
in example (7). Thus, applying Theorem 8, it follows that $\tilde R = 0.412888$ and, as $N_0 = 1$, to ensure fast convergence of the Newton method to a solution of (7), we only have to iterate once with the Picard method. Moreover, $W_n, W^* \in \overline{B(U_0, 0.160193)}$, for all $n \geq 0$.

4. Numerical Experiments

In this section, we show experimentally the benefits of algorithm (15). To approximate a solution of Equation (1), we apply the Picard method (P), Newton's method (NM) and the predictor–corrector method (PC), in different situations, to the following quadratic matrix equation:
$$M = \begin{pmatrix} 1 & & & \\ & 2 & & \\ & & \ddots & \\ & & & 100 \end{pmatrix}, \qquad N = \begin{pmatrix} 1 & \cdots & 1 \\ \vdots & & \vdots \\ 1 & \cdots & 1 \end{pmatrix} \in \mathbb{R}^{100\times 100}. \qquad (20)$$
We consider from 1 to 5 iterations of the Picard predictor and then iterate with the Newton method to improve the accuracy of the solution. We consider the stopping criterion $RES < 10^{-10}$, with $RES := \|U_k^2 - MU_k - N\|_F$, where the Frobenius norm of a matrix $A$ is given by $\|A\|_F^2 := \mathrm{trace}(A^TA)$. The Picard, Newton and predictor–corrector methods were implemented in Mathematica, Version 10.0.
The number of iterations, denoted by $k$, and the residuals are reported in Table 1 and Table 2. We choose the starting matrix
$$U_0 = \frac{\|M\|_F + \sqrt{\|M\|_F^2 + 4\|N\|_F}}{2}\,I_{100},$$
which has a norm of approximately the same order of magnitude as a solution of the quadratic matrix equation; see [18]. In Table 2, we take $U_0 = \max_{1\leq i\leq 100}\left(b_{ii} + \sqrt{b_{ii}^2 + 4c_{ii}}\right)I_{100}$, where $b_{ii}$ and $c_{ii}$ denote the diagonal entries of $M$ and $N$, in a similar way as in [19].
Notice that the Picard method has a low operational cost, since each step involves solving only the linear system
$$(U_k - M)P_k = N$$
with respect to $P_k$. However, in the Newton method, each step involves solving the Sylvester equation
$$(U_k - M)U_{k+1} + U_{k+1}P_k = U_kP_k + N,$$
which in turn entails obtaining $P_k$ such that $(U_k - M)P_k = N$.
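For reference, the experiment (20) is straightforward to reproduce. The following standalone sketch is ours (in Python with NumPy/SciPy, rather than the Mathematica implementation used in the paper) and follows the per-step systems just described, with the starting matrix of Table 1:

```python
import numpy as np
from scipy.linalg import solve_sylvester

m = 100
M = np.diag(np.arange(1.0, m + 1))             # M = diag(1, 2, ..., 100)
N = np.ones((m, m))                            # matrix of ones, as in (20)
fro = lambda A: np.linalg.norm(A, "fro")
res = lambda U: fro(U @ U - M @ U - N)         # RES of this section

# Starting matrix of Table 1
U = (fro(M) + np.sqrt(fro(M)**2 + 4 * fro(N))) / 2 * np.eye(m)

for _ in range(4):                             # predictor: 4 Picard steps
    U = np.linalg.solve(U - M, N)              # solve (U_k - M) P_k = N
for k in range(1, 50):                         # corrector: Newton via Sylvester
    P = np.linalg.solve(U - M, N)
    U = solve_sylvester(U - M, P, U @ P + N)   # (U_k - M) U_{k+1} + U_{k+1} P_k = U_k P_k + N
    if res(U) < 1e-10:
        break
print(k, res(U))
```

Figures of the order reported in the PC 4 + 3 row of Table 1 should be expected from a run of this kind.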
In the numerical tests shown, we take values of the index $N_0$ given in (19) up to 5. This means that we consider up to five iterations of the predictor method, which is common when taking a good starting point $U_0$. In this way, we save computational cost in these first iterations, since the computational cost of applying the Picard method is very low compared with that of applying Newton's method. Therefore, the number of iterations that must be carried out with the Picard method determines the most efficient situation in terms of computational cost. So, in Table 1 and Table 2, the optimal situation is presented, in both cases, with four and three iterations of the predictor and corrector methods, respectively, and a residual of the order of $10^{-14}$.
Thus, as shown in Table 1 and Table 2, to achieve the same accuracy as that attained with Newton's method, the computational cost is lower when applying the hybrid method. Even just one iteration of the Picard method more than halves the number of Newton iterations required to achieve the stopping criterion. Therefore, we can say that the hybrid iterative scheme (15) is successful in approximating a solution of Equation (1).

5. Conclusions

In conclusion, we have presented a stable Successive Approximations iterative scheme. We have carried out a qualitative analysis of the quadratic matrix Equation (1) by means of the Successive Approximations and Picard methods. We have presented domains of existence and uniqueness of solutions that allow us to locate and separate them. Furthermore, we have constructed a hybrid method by using a predictor–corrector iterative scheme. Finally, some examples have confirmed the benefits of applying the hybrid iterative scheme to Equation (1).

Author Contributions

Conceptualization, M.Á.H.-V. and N.R.; methodology, M.Á.H.-V. and N.R.; software, M.Á.H.-V. and N.R.; validation, M.Á.H.-V. and N.R.; formal analysis, M.Á.H.-V. and N.R.; investigation, M.Á.H.-V. and N.R.; resources, M.Á.H.-V. and N.R.; writing—original draft preparation, M.Á.H.-V. and N.R.; writing—review and editing, M.Á.H.-V. and N.R.; visualization, M.Á.H.-V. and N.R.; supervision, M.Á.H.-V. and N.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Spanish Ministerio de Ciencia, Innovación y Universidades, grant PGC2018-095896-B-C21.

Institutional Review Board Statement

Not Applicable.

Informed Consent Statement

Not Applicable.

Data Availability Statement

Not Applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Regmi, S. Optimized Iterative Methods with Applications in Diverse Disciplines; Nova Science Publishers: New York, NY, USA, 2021.
  2. Singh, S.; Gupta, D.K.; Martínez, E.; Hueso, J.L. Semilocal Convergence Analysis of an Iteration of Order Five Using Recurrence Relations in Banach Spaces. Mediterr. J. Math. 2016, 13, 4219–4235.
  3. Argyros, I.K.; George, S.; Erappa, S.M. Ball convergence for an eighth order efficient method under weak conditions in Banach spaces. SeMA J. 2017, 74, 513–521.
  4. Argyros, I.K.; Hilout, S. Numerical Methods in Nonlinear Analysis; World Scientific Publishing Company: Hackensack, NJ, USA, 2013.
  5. Cordero, A.; Torregrosa, J.R.; Vassileva, M.P. Pseudocomposition: A technique to design predictor–corrector methods for systems of nonlinear equations. Appl. Math. Comput. 2012, 218, 11496–11504.
  6. Cordero, A.; Torregrosa, J.R.; Vassileva, M.P. New Predictor–Corrector Methods with High Efficiency for Solving Nonlinear Systems. J. Appl. Math. 2012, 2012, 709843.
  7. Ezquerro, J.A.; Hernández, M.A.; Romero, N.; Velasco, A.I. On Steffensen's method on Banach spaces. J. Comput. Appl. Math. 2013, 249, 9–23.
  8. Hernández-Verón, M.A.; Romero, N. Solving Symmetric Algebraic Riccati Equations with High Order Iterative Schemes. Mediterr. J. Math. 2018, 15, 15–51.
  9. Rogers, L.C.G. Fluid models in queueing theory and Wiener–Hopf factorization of Markov chains. Ann. Appl. Probab. 1994, 4, 390–413.
  10. Zheng, Z.C.; Ren, G.U.; Wang, W.J. A reduction method for large scale unsymmetric eigenvalue problems in structural dynamics. J. Sound Vib. 1997, 199, 253–268.
  11. Ezquerro, J.A.; Hernández, M.A. A modification of the convergence conditions for Picard's iteration. Comput. Appl. Math. 2004, 23, 55–65.
  12. Amat, S.; Ezquerro, J.A.; Hernández, M.A. On a new family of high-order iterative methods for the matrix pth root. Numer. Linear Algebra Appl. 2015, 22, 585–595.
  13. Hernández-Verón, M.A.; Romero, N. Numerical analysis for the quadratic matrix equations from a modification of fixed-point type. Math. Methods Appl. Sci. 2019, 42, 5856–5866.
  14. Kirk, W.A.; Sims, B. Handbook of Metric Fixed Point Theory; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013.
  15. Berinde, V. Iterative Approximation of Fixed Points; Springer: New York, NY, USA, 2007.
  16. Ezquerro, J.A.; Hernández-Verón, M.A. Mild Differentiability Conditions for Newton's Method in Banach Spaces; Frontiers in Mathematics; Birkhäuser: Basel, Switzerland; Springer: Cham, Switzerland, 2020.
  17. Amat, S.; Busquier, S.; Grau, A.; Grau-Sánchez, M. Maximum efficiency for a family of Newton-like methods with frozen derivatives and some applications. Appl. Math. Comput. 2013, 219, 7954–7963.
  18. Davis, G.J. Numerical solution of a quadratic matrix equation. SIAM J. Sci. Stat. Comput. 1981, 2, 164–175.
  19. Hernández-Verón, M.A.; Romero, N. An efficient predictor–corrector iterative scheme for solving Wiener–Hopf problems. J. Comput. Appl. Math. 2022, 404, 113554.
Table 1. Number of iterations and residuals for (20), from $U_0 = \frac{\|M\|_F + \sqrt{\|M\|_F^2 + 4\|N\|_F}}{2}\,I_{100}$ and stopping criterion $RES < 10^{-10}$.

Method | k     | RES
P      | 25    | $0.729749 \times 10^{-11}$
NM     | 11    | $0.366868 \times 10^{-14}$
PC     | 5 + 3 | $0.267918 \times 10^{-17}$
PC     | 4 + 3 | $0.527843 \times 10^{-14}$
PC     | 3 + 4 | $0.945013 \times 10^{-20}$
PC     | 2 + 4 | $0.408944 \times 10^{-14}$
PC     | 1 + 5 | $0.436365 \times 10^{-14}$
Table 2. Number of iterations and residuals for (20), from $U_0 = (100 + \sqrt{10004})\,I_{100}$ and stopping criterion $RES < 10^{-10}$.

Method | k     | RES
P      | 25    | $0.735139 \times 10^{-11}$
NM     | 11    | $0.243716 \times 10^{-11}$
PC     | 5 + 3 | $0.114883 \times 10^{-14}$
PC     | 4 + 3 | $0.584884 \times 10^{-14}$
PC     | 3 + 4 | $0.273 \times 10^{-14}$
PC     | 2 + 4 | $0.483241 \times 10^{-14}$
PC     | 1 + 5 | $0.580521 \times 10^{-14}$