Article

Conditioning Theory for ML-Weighted Pseudoinverse and ML-Weighted Least Squares Problem

1 School of Mathematical Sciences, Zhejiang Normal University, Jinhua 321004, China
2 School of Computer Science and Technology, Zhejiang Normal University, Jinhua 321004, China
* Author to whom correspondence should be addressed.
Axioms 2024, 13(6), 345; https://doi.org/10.3390/axioms13060345
Submission received: 18 April 2024 / Revised: 16 May 2024 / Accepted: 19 May 2024 / Published: 22 May 2024

Abstract:
The conditioning theory of the ML-weighted least squares and ML-weighted pseudoinverse problems is explored in this article. We begin by introducing three types of condition numbers for the ML-weighted pseudoinverse: normwise, mixed, and componentwise, along with their explicit expressions. Utilizing the derivative of the ML-weighted pseudoinverse, we then provide explicit condition number expressions for the solution of the ML-weighted least squares problem. To ensure reliable estimation of these condition numbers, we employ the small-sample statistical condition estimation method in all three algorithms. The article concludes with numerical examples that highlight the results obtained.

1. Introduction

The study of generalized inverses of matrices has been an important research field since the middle of the last century and remains one of the most active research branches today [1,2,3]. Generalized inverses, including the weighted pseudoinverse, have numerous applications in various fields, such as control, networks, statistics, and econometrics [4,5,6,7]. The ML-weighted pseudoinverse of an $m \times n$ matrix $K$ with respect to two weight matrices $M$ and $L$ (of orders $s \times m$ and $l \times n$, respectively) is defined as
$K_{ML}^{\dagger} = (I_n - (LP)^{\dagger}L)(MK)^{\dagger}M,$
where $(MK)^{\dagger}$ denotes the Moore–Penrose inverse of $MK$ and $P = I_n - (MK)^{\dagger}MK$. The ML-weighted pseudoinverse [3] originated from the ML-weighted least squares problem (ML-WLS), which is stated as follows:
$\min_{x \in S} \|x\|_L \quad \text{with} \quad S = \{\, x : \|Kx - h\|_M \text{ is minimum} \,\},$
where $\|\cdot\|_L$ and $\|\cdot\|_M$ are the ellipsoidal seminorms
$\|x\|_L^2 = x^H L^H L x, \qquad \|h\|_M^2 = h^H M^H M h,$
with $h \in \mathbb{R}^m$. The ML-WLS problem has the unique solution
$x = K_{ML}^{\dagger}h + P(I_n - (LP)^{\dagger}LP)z \quad \text{for some vector } z,$
if and only if $\operatorname{rank}(B) = n$, with $B = [(MK)^T, L^T]^T$. In this case it can be shown [3] that
$P(I_n - (LP)^{\dagger}LP) = 0.$
The ML-weighted pseudoinverse $K_{ML}^{\dagger}$ [8] is helpful in solving the ML-WLS problem [2,9], which is a generalization of the equality-constrained least squares problem and has been widely explored in the literature (see, e.g., [9,10,11,12]). Eldén [9] studied perturbation theory for this problem, whereas Cox et al. [12] derived an upper perturbation bound and provided the normwise condition number. Li and Wang [13] presented structured and unstructured partial normwise condition numbers, whereas Diao [14] provided partial mixed and componentwise condition numbers for this problem. To date, however, the condition numbers for the ML-WLS problem have not been explored. Motivated by this, and considering their significance in equality-constrained least squares research, we present explicit expressions for the normwise, mixed, and componentwise condition numbers of the ML-WLS problem, as well as their statistical estimation.
A large number of articles and monographs dealing with the ML-weighted pseudoinverse $K_{ML}^{\dagger}$ have appeared in the literature during the last two decades [1,3,8]. The ML-weighted pseudoinverse $K_{ML}^{\dagger}$ reduces to the K-weighted pseudoinverse $K_L^{\dagger}$ [3] when $M = I$, to the generalized inverse $C_L^{\dagger}$ [15] when $L$ has full row rank and $M = I$, and to the Moore–Penrose inverse $K^{\dagger}$ [16] when both $M = I$ and $L = I$. Wei and Zhang [8] discussed the structure and uniqueness of $K_{ML}^{\dagger}$. Eldén [3] devised an algorithm for computing $K_{ML}^{\dagger}$. Wei [17] derived an expression for $K_L^{\dagger}$ using the GSVD. Gulliksson et al. [18] proposed a perturbation equation for $K_L^{\dagger}$. Galba et al. [4] proposed iterative methods for calculating $K_{ML}^{\dagger}$, but these may not be appropriate for time-varying applications. Recurrent neural networks (RNNs) [2,6,7] are commonly used to compute time-varying $K_{ML}^{\dagger}$ solutions. Recently, Mahvish et al. [19,20] presented condition numbers and statistical estimates for the K-weighted pseudoinverse $K_L^{\dagger}$ and the generalized inverse $C_L^{\dagger}$.
A fundamental idea in numerical analysis is the condition number, which expresses how sensitive a function's output is to small variations in its input; it predicts the worst-case effect of errors in the input data on the results of a computation. Various condition numbers exist that account for different aspects of the input and output data. The normwise condition number [21] disregards the scaling structure of both the input and output data. The mixed and componentwise condition numbers [22], on the other hand, take this scaling structure into account: mixed condition numbers measure the errors in the input data componentwise and the errors in the output data normwise, whereas componentwise condition numbers measure the errors in both the input and output data componentwise. The condition numbers of $K_{ML}^{\dagger}$, viewed as a function of $K$ and the two weight matrices $M$ and $L$, together with their estimation, have not been investigated until now. It is therefore worthwhile to derive generalized results that encapsulate pre-existing results in the scientific literature.
The article is organized as follows: Section 2 summarizes preliminaries and basic properties that help in understanding the results presented in the paper. The normwise, mixed, and componentwise condition numbers for $K_{ML}^{\dagger}$ are discussed in Section 3, and the condition number expressions for the ML-WLS solution are obtained in Section 4. A highly reliable statistical estimate of the condition numbers is obtained using the small-sample statistical condition estimation (SSCE) method [23] in Section 5, together with numerical examples that illustrate the results obtained. These examples demonstrate the efficiency of the estimators and the distinction between the normwise condition numbers and the mixed and componentwise condition numbers.
Throughout this article, $\mathbb{R}^{m \times n}$ denotes the set of real $m \times n$ matrices. For a matrix $X \in \mathbb{R}^{m \times n}$, $X^T$ is the transpose of $X$, $\operatorname{rank}(X)$ denotes the rank of $X$, $\|X\|_2$ is the spectral norm of $X$, and $\|X\|_F$ is the Frobenius norm of $X$. For a vector $x$, $\|x\|_\infty$ is its $\infty$-norm and $\|x\|_2$ its 2-norm. The notation $|X|$ denotes the matrix whose components are the absolute values of the corresponding components of $X$.

2. Preliminaries

In this part, we present several definitions and key findings that will be utilized in the following sections. The entry-wise division [24] between the vectors $u, v \in \mathbb{R}^m$ is defined as
$u \oslash v = \operatorname{diag}(v)^{\ddagger}u,$
where $\operatorname{diag}(v)$ is diagonal with diagonal elements $v_1, \dots, v_m$. Here, for a number $s \in \mathbb{R}$, $s^{\ddagger}$ is defined by
$s^{\ddagger} = \begin{cases} s^{-1}, & \text{if } s \neq 0, \\ 1, & \text{if } s = 0. \end{cases}$
It is obvious that $u \oslash v$ has components $(u \oslash v)_i = v_i^{\ddagger}u_i$. Similarly, for $U = (u_{ij}) \in \mathbb{R}^{m \times n}$ and $V = (v_{ij}) \in \mathbb{R}^{m \times n}$, $U \oslash V$ is defined by
$(U \oslash V)_{ij} = v_{ij}^{\ddagger}u_{ij}.$
We describe the relative distance between $u$ and $v$ using the entry-wise division as
$d(u, v) = \|(u - v) \oslash v\|_\infty = \max_{i = 1, \dots, m} |v_i^{\ddagger}|\,|u_i - v_i|.$
In other words, we take into account the absolute distance at zero components and the relative distance at nonzero components.
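In code, the entry-wise division and the distance $d$ look as follows (a small sketch with hypothetical values; the function names are ours):

```python
import numpy as np

def ent_div(u, v):
    """Entry-wise division u ./ v where zero entries of v act as 1 (the double-dagger rule)."""
    return u / np.where(v != 0, v, 1.0)

def dist(u, v):
    """d(u, v) = ||(u - v) ./ v||_inf: relative error on nonzero v_i, absolute where v_i = 0."""
    return np.max(np.abs(ent_div(u - v, v)))

u = np.array([1.01, 3e-4])
v = np.array([1.00, 0.0])
d_uv = dist(u, v)   # the relative error 0.01 in the first entry dominates
```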
To establish the definitions of the normwise, mixed, and componentwise condition numbers, it is necessary to also determine the sets $B^{o}(u, \varepsilon) = \{\, w \in \mathbb{R}^m : |w_i - u_i| \le \varepsilon|u_i|,\ i = 1, \dots, m \,\}$ and $B(u, \varepsilon) = \{\, w \in \mathbb{R}^m : \|w - u\|_2 \le \varepsilon\|u\|_2 \,\}$.
With these sets, we recall the following definition.
Definition 1
([24]). Assume that $\chi : \mathbb{R}^p \to \mathbb{R}^q$ is a continuous mapping defined on an open set $\operatorname{Dom}(\chi) \subseteq \mathbb{R}^p$, and let $u \in \operatorname{Dom}(\chi)$, $u \neq 0$, be such that $\chi(u) \neq 0$.
(i) The normwise condition number of χ at u is stated as
$n(\chi, u) = \lim_{\varepsilon \to 0}\ \sup_{w \in B(u, \varepsilon),\, w \neq u}\ \left.\frac{\|\chi(w) - \chi(u)\|_2}{\|\chi(u)\|_2}\right/\frac{\|w - u\|_2}{\|u\|_2}.$
(ii) The mixed condition number of χ at u is stated as
$m(\chi, u) = \lim_{\varepsilon \to 0}\ \sup_{w \in B^{o}(u, \varepsilon),\, w \neq u}\ \frac{\|\chi(w) - \chi(u)\|_\infty}{\|\chi(u)\|_\infty}\cdot\frac{1}{d(w, u)}.$
(iii) The componentwise condition number of χ at u is stated as
$c(\chi, u) = \lim_{\varepsilon \to 0}\ \sup_{w \in B^{o}(u, \varepsilon),\, w \neq u}\ \frac{d(\chi(w), \chi(u))}{d(w, u)}.$
Using the Fréchet derivative, the next lemma provides explicit expressions for these three condition numbers.
Lemma 1
([24]). Under the same assumptions as in Definition 1, if $\chi$ is Fréchet differentiable at $u$, then we obtain
$n(\chi, u) = \frac{\|\delta\chi(u)\|_2\,\|u\|_2}{\|\chi(u)\|_2}, \qquad m(\chi, u) = \frac{\big\|\,|\delta\chi(u)|\,|u|\,\big\|_\infty}{\|\chi(u)\|_\infty}, \qquad c(\chi, u) = \left\|\frac{|\delta\chi(u)|\,|u|}{|\chi(u)|}\right\|_\infty.$
Here, δ χ ( u ) represents the Fréchet derivative of χ at point u.
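As a quick illustration of why the three notions differ, take the linear map $\chi(u) = Au$, whose Fréchet derivative is $A$ everywhere, so Lemma 1 applies exactly. With badly scaled data the normwise number explodes while the mixed and componentwise numbers stay moderate (hypothetical data; a sketch):

```python
import numpy as np

# chi(u) = A u, so delta chi(u) = A for every u.
A = np.array([[1.0, 0.0],
              [0.0, 1e-8]])
u = np.array([1.0, 1e8])
chi = A @ u                                   # ~ [1, 1]: a well-scaled output

n_cond = np.linalg.norm(A, 2) * np.linalg.norm(u) / np.linalg.norm(chi)
m_cond = np.linalg.norm(np.abs(A) @ np.abs(u), np.inf) / np.linalg.norm(chi, np.inf)
c_cond = np.max((np.abs(A) @ np.abs(u)) / np.abs(chi))

# n_cond is of order 1e7 (it sees only norms), while m_cond = c_cond = 1:
# the map is perfectly conditioned once the scaling of the data is respected.
```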
In order to derive explicit formulas for the previously mentioned condition numbers, we require certain properties of the Kronecker product $\otimes$ [25] between matrices $A$ and $B$. Here, the operator 'vec' is defined as
$\operatorname{vec}(A) = (a_1^T, \dots, a_n^T)^T \in \mathbb{R}^{mn}$
for $A = [a_1, \dots, a_n] \in \mathbb{R}^{m \times n}$ with $a_i \in \mathbb{R}^m$, and the Kronecker product between $A = (a_{ij}) \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{p \times q}$ is defined as $A \otimes B = (a_{ij}B) \in \mathbb{R}^{mp \times nq}$. The following properties hold:
$\operatorname{vec}(AXB) = (B^T \otimes A)\operatorname{vec}(X),$
$\operatorname{vec}(A^T) = \Pi_{mn}\operatorname{vec}(A),$
$\|A \otimes B\|_2 = \|A\|_2\,\|B\|_2,$
$(A \otimes B)^T = A^T \otimes B^T.$
Here, the matrix $X$ has appropriate dimensions. Moreover, $\Pi_{mn} \in \mathbb{R}^{mn \times mn}$ is the vec-permutation matrix, whose definition depends only on the dimensions $m$ and $n$.
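These vec and Kronecker identities are easy to sanity-check numerically. In the sketch below, `vec_perm` builds $\Pi_{mn}$ explicitly, which is only feasible for small dimensions (names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 2))
B = rng.standard_normal((2, 5))

def vec(M):
    return M.reshape(-1, order="F")          # column-stacking vec

def vec_perm(m, n):
    """Vec-permutation matrix Pi_{mn}: vec(A^T) = Pi_{mn} vec(A) for A in R^{m x n}."""
    P = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            P[i * n + j, j * m + i] = 1.0
    return P

assert np.allclose(vec(A @ X @ B), np.kron(B.T, A) @ vec(X))       # vec(AXB) = (B^T kron A) vec(X)
assert np.allclose(vec(A.T), vec_perm(3, 4) @ vec(A))              # vec(A^T) = Pi_{mn} vec(A)
assert np.isclose(np.linalg.norm(np.kron(A, B), 2),
                  np.linalg.norm(A, 2) * np.linalg.norm(B, 2))     # ||A kron B||_2 = ||A||_2 ||B||_2
assert np.allclose(np.kron(A, B).T, np.kron(A.T, B.T))             # (A kron B)^T = A^T kron B^T
```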
Moving forward, we provide two useful lemmas. These will help in the calculation of condition numbers as well as in determining their upper bounds.
Lemma 2
([26], p. 174, Theorem 5). Let $S$ be an open subset of $\mathbb{R}^{n \times q}$, and let $\chi : S \to \mathbb{R}^{m \times p}$ be a matrix function defined and $k \ge 1$ times (continuously) differentiable on $S$. If $\operatorname{rank}(\chi(X))$ is constant on $S$, then $\chi^{\dagger} : S \to \mathbb{R}^{p \times m}$ is $k$ times (continuously) differentiable on $S$, and
$\delta\chi^{\dagger} = -\chi^{\dagger}\,\delta\chi\,\chi^{\dagger} + \chi^{\dagger}\chi^{\dagger T}\,\delta\chi^{T}(I_m - \chi\chi^{\dagger}) + (I_p - \chi^{\dagger}\chi)\,\delta\chi^{T}\chi^{\dagger T}\chi^{\dagger}.$
Lemma 3
([16]). For any matrices $W, B, C, G, Z$ and $S$ that have dimensions such that
$[W \otimes B + (C \otimes G)\Pi]\operatorname{vec}(Z),$
$\big([W \otimes B + (C \otimes G)\Pi]\operatorname{vec}(Z)\big) \oslash S,$
$BZW^T \quad \text{and} \quad GZ^TC^T$
are well defined, we have
$\big|W \otimes B + (C \otimes G)\Pi\big|\operatorname{vec}(|Z|) \le \operatorname{vec}\big(|B||Z||W|^T + |G||Z|^T|C|^T\big)$
and
$\big(\big|W \otimes B + (C \otimes G)\Pi\big|\operatorname{vec}(|Z|)\big) \oslash |S| \le \operatorname{vec}\big(|B||Z||W|^T + |G||Z|^T|C|^T\big) \oslash |S|.$
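Lemma 3 can likewise be checked on random data. The sketch below verifies both the identity behind the left-hand side and the first componentwise inequality; the dimensions are chosen (hypothetically) so that all products are defined:

```python
import numpy as np

rng = np.random.default_rng(1)
p, q, s, t = 2, 5, 3, 4
B = rng.standard_normal((p, s)); W = rng.standard_normal((q, t))
G = rng.standard_normal((p, t)); C = rng.standard_normal((q, s))
Z = rng.standard_normal((s, t))

def vec(A):
    return A.reshape(-1, order="F")

def vec_perm(m, n):
    """Pi with vec(A^T) = Pi vec(A) for A in R^{m x n}."""
    P = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            P[i * n + j, j * m + i] = 1.0
    return P

T = np.kron(W, B) + np.kron(C, G) @ vec_perm(s, t)
# The operator satisfies T vec(Z) = vec(B Z W^T + G Z^T C^T):
assert np.allclose(T @ vec(Z), vec(B @ Z @ W.T + G @ Z.T @ C.T))
# Lemma 3, first inequality (holds componentwise, by the triangle inequality):
lhs = np.abs(T) @ vec(np.abs(Z))
rhs = vec(np.abs(B) @ np.abs(Z) @ np.abs(W).T + np.abs(G) @ np.abs(Z).T @ np.abs(C).T)
assert np.all(lhs <= rhs + 1e-12)
```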

3. Condition Numbers for the ML-Weighted Pseudoinverse

To derive the explicit expressions of the condition numbers of $K_{ML}^{\dagger}$, we define the mapping $\phi : \mathbb{R}^{mn + sm + ln} \to \mathbb{R}^{nm}$ by
$\phi(w) = \operatorname{vec}(K_{ML}^{\dagger}).$
Here, $w = (\operatorname{vec}(M)^T, \operatorname{vec}(L)^T, \operatorname{vec}(K)^T)^T$, $\delta w = (\operatorname{vec}(\delta M)^T, \operatorname{vec}(\delta L)^T, \operatorname{vec}(\delta K)^T)^T$, and for a matrix $A = (a_{ij})$, $\|A\|_{\max} = \|\operatorname{vec}(A)\|_\infty = \max_{i,j}|a_{ij}|$.
Following [27] and using Definition 1, the normwise, mixed, and componentwise condition numbers for $K_{ML}^{\dagger}$ are defined as
$n(M, L, K) := \lim_{\varepsilon \to 0}\ \sup_{\|[\delta M, \delta L, \delta K]\|_F \le \varepsilon\|[M, L, K]\|_F}\ \frac{\|(K + \delta K)^{\dagger}_{(M + \delta M)(L + \delta L)} - K_{ML}^{\dagger}\|_F / \|K_{ML}^{\dagger}\|_F}{\|[\delta M, \delta L, \delta K]\|_F / \|[M, L, K]\|_F},$
$m(M, L, K) := \lim_{\varepsilon \to 0}\ \sup_{\substack{\|\delta M \oslash M\|_{\max} \le \varepsilon,\ \|\delta L \oslash L\|_{\max} \le \varepsilon \\ \|\delta K \oslash K\|_{\max} \le \varepsilon}}\ \frac{\|(K + \delta K)^{\dagger}_{(M + \delta M)(L + \delta L)} - K_{ML}^{\dagger}\|_{\max}}{\|K_{ML}^{\dagger}\|_{\max}}\cdot\frac{1}{d(w + \delta w, w)},$
$c(M, L, K) := \lim_{\varepsilon \to 0}\ \sup_{\substack{\|\delta M \oslash M\|_{\max} \le \varepsilon,\ \|\delta L \oslash L\|_{\max} \le \varepsilon \\ \|\delta K \oslash K\|_{\max} \le \varepsilon}}\ \frac{1}{d(w + \delta w, w)}\Big\|\big((K + \delta K)^{\dagger}_{(M + \delta M)(L + \delta L)} - K_{ML}^{\dagger}\big) \oslash K_{ML}^{\dagger}\Big\|_{\max}.$
By applying the operator vec together with the spectral, Frobenius, and max norms, the above definitions can be rewritten as
$n(M, L, K) := \lim_{\varepsilon \to 0}\ \sup_{\|\delta w\|_2 \le \varepsilon\|w\|_2}\ \left.\frac{\|\operatorname{vec}\big((K + \delta K)^{\dagger}_{(M + \delta M)(L + \delta L)} - K_{ML}^{\dagger}\big)\|_2}{\|\operatorname{vec}(K_{ML}^{\dagger})\|_2}\right/\frac{\|\delta w\|_2}{\|w\|_2},$
$m(M, L, K) := \lim_{\varepsilon \to 0}\ \sup_{\substack{\|\operatorname{vec}(\delta M) \oslash \operatorname{vec}(M)\|_\infty \le \varepsilon \\ \|\operatorname{vec}(\delta L) \oslash \operatorname{vec}(L)\|_\infty \le \varepsilon \\ \|\operatorname{vec}(\delta K) \oslash \operatorname{vec}(K)\|_\infty \le \varepsilon}}\ \frac{\|\operatorname{vec}\big((K + \delta K)^{\dagger}_{(M + \delta M)(L + \delta L)} - K_{ML}^{\dagger}\big)\|_\infty}{\|\operatorname{vec}(K_{ML}^{\dagger})\|_\infty}\cdot\frac{1}{d(w + \delta w, w)},$
$c(M, L, K) := \lim_{\varepsilon \to 0}\ \sup_{\substack{\|\operatorname{vec}(\delta M) \oslash \operatorname{vec}(M)\|_\infty \le \varepsilon \\ \|\operatorname{vec}(\delta L) \oslash \operatorname{vec}(L)\|_\infty \le \varepsilon \\ \|\operatorname{vec}(\delta K) \oslash \operatorname{vec}(K)\|_\infty \le \varepsilon}}\ \frac{1}{d(w + \delta w, w)}\Big\|\operatorname{vec}\big((K + \delta K)^{\dagger}_{(M + \delta M)(L + \delta L)} - K_{ML}^{\dagger}\big) \oslash \operatorname{vec}(K_{ML}^{\dagger})\Big\|_\infty.$
The expression for the Fréchet derivative of ϕ at w is given below.
Lemma 4.
Suppose that $\phi$ is the continuous mapping defined above. Then $\phi$ is Fréchet differentiable at $w$, and its Fréchet derivative is
$\delta\phi(w) = [Z(M),\, Z(L),\, Z(K)],$
where
$Z(M) = \big((I - K(MK)^{\dagger}M)^{T} \otimes R\big) + \big(((LK_{ML}^{\dagger})^{T}L(MK)^{\dagger}) \otimes QK^{T}\big)\Pi_{sm},$
$Z(L) = -\big((K_{ML}^{\dagger})^{T} \otimes (LP)^{\dagger}\big) - \big((LK_{ML}^{\dagger})^{T} \otimes Q\big)\Pi_{ln},$
$Z(K) = -\big((K_{ML}^{\dagger})^{T} \otimes K_{ML}^{\dagger}\big) + \big(((LK_{ML}^{\dagger})^{T}L(MK)^{\dagger}M) \otimes Q\big)\Pi_{mn},$
with
$Q = (LP)^{\dagger}\big((LP)^{\dagger}\big)^{T} = \big((LP)^{T}(LP)\big)^{\dagger}, \qquad R = (I - P(LP)^{\dagger}L)(MK)^{\dagger}.$
Proof. 
By differentiating both sides of (1), we obtain
$\delta(K_{ML}^{\dagger}) = \delta\big[(I - (LP)^{\dagger}L)(MK)^{\dagger}M\big].$
We will use the facts
$(LP)^{\dagger} = P(LP)^{\dagger}$
and
$P(I - (LP)^{\dagger}LP) = 0,$
which are from ([9], Theorem 2.1, Lemma 2), as well as
$KP(LP)^{\dagger} = K(LP)^{\dagger} = 0.$
From (19), and using (9) and (20), we have
$\delta(K_{ML}^{\dagger}) = \delta\big[(I - P(LP)^{\dagger}L)(MK)^{\dagger}M\big]$
$= \delta((MK)^{\dagger})M + (MK)^{\dagger}\delta M - \delta\big(P(LP)^{\dagger}L(MK)^{\dagger}M\big) \quad \text{(by (20))}$
$= (I - P(LP)^{\dagger}L)\,\delta((MK)^{\dagger})M + (I - P(LP)^{\dagger}L)(MK)^{\dagger}\delta M - \delta P\,(LP)^{\dagger}L(MK)^{\dagger}M - P\,\delta((LP)^{\dagger})\,L(MK)^{\dagger}M - P(LP)^{\dagger}\delta L\,(MK)^{\dagger}M$
$= (I - P(LP)^{\dagger}L)\big[-(MK)^{\dagger}\delta(MK)(MK)^{\dagger} + (MK)^{\dagger}(MK)^{\dagger T}\delta(MK)^{T}(I - (MK)(MK)^{\dagger}) + (I - (MK)^{\dagger}(MK))\,\delta(MK)^{T}(MK)^{\dagger T}(MK)^{\dagger}\big]M$
$\quad + (I - P(LP)^{\dagger}L)(MK)^{\dagger}\delta M - \delta P\,(LP)^{\dagger}L(MK)^{\dagger}M$
$\quad + P\big[(LP)^{\dagger}\delta(LP)(LP)^{\dagger} - (LP)^{\dagger}(LP)^{\dagger T}\delta(LP)^{T}(I - (LP)(LP)^{\dagger}) - (I - (LP)^{\dagger}(LP))\,\delta(LP)^{T}(LP)^{\dagger T}(LP)^{\dagger}\big]L(MK)^{\dagger}M - P(LP)^{\dagger}\delta L\,(MK)^{\dagger}M. \quad \text{(by (9))}$
Using (20) and the identity $(I - P(LP)^{\dagger}L)(I - (MK)^{\dagger}(MK)) = P(I - (LP)^{\dagger}LP)$, the above equation can be rewritten as
$\delta(K_{ML}^{\dagger}) = -(I - P(LP)^{\dagger}L)(MK)^{\dagger}\delta(MK)(MK)^{\dagger}M + P(I - (LP)^{\dagger}LP)\,\delta(MK)^{T}(MK)^{\dagger T}(MK)^{\dagger}M$
$\quad + (I - P(LP)^{\dagger}L)(MK)^{\dagger}(MK)^{\dagger T}\delta(MK)^{T}(I - (MK)(MK)^{\dagger})M - (LP)^{\dagger}\delta L\,(MK)^{\dagger}M$
$\quad - (I - (LP)^{\dagger}L)\,\delta(I - (MK)^{\dagger}MK)\,(LP)^{\dagger}L(MK)^{\dagger}M + (I - P(LP)^{\dagger}L)(MK)^{\dagger}\delta M + (LP)^{\dagger}\delta L\,(LP)^{\dagger}L(MK)^{\dagger}M$
$\quad - (LP)^{\dagger}(LP)^{\dagger T}\delta(LP)^{T}(I - (LP)(LP)^{\dagger})L(MK)^{\dagger}M - P(I - (LP)^{\dagger}LP)\,\delta(LP)^{T}(LP)^{\dagger T}(LP)^{\dagger}L(MK)^{\dagger}M.$
Furthermore, given that $(MK)(MK)^{\dagger} = I$, (20), and
$KP(LP)^{\dagger} = K(LP)^{\dagger} = 0,$
and considering (21) and (22), we can simplify the above equation to
$\delta(K_{ML}^{\dagger}) = -(I - P(LP)^{\dagger}L)(MK)^{\dagger}\delta M\,K(MK)^{\dagger}M - (I - P(LP)^{\dagger}L)(MK)^{\dagger}M\,\delta K\,(MK)^{\dagger}M + (I - P(LP)^{\dagger}L)(MK)^{\dagger}\delta M$
$\quad - (LP)^{\dagger}\delta L\,(MK)^{\dagger}M + (LP)^{\dagger}\delta L\,P(LP)^{\dagger}L(MK)^{\dagger}M + (I - (LP)^{\dagger}L)(MK)^{\dagger}M\,\delta K\,(LP)^{\dagger}L(MK)^{\dagger}M$
$\quad - (LP)^{\dagger}(LP)^{\dagger T}P^{T}\delta L^{T}(I - (LP)(LP)^{\dagger})L(MK)^{\dagger}M - (LP)^{\dagger}(LP)^{\dagger T}\delta P^{T}L^{T}(I - (LP)(LP)^{\dagger})L(MK)^{\dagger}M.$
Using (1) and (18) in (23), we obtain
$\delta(K_{ML}^{\dagger}) = R\,\delta M\,(I - K(MK)^{\dagger}M) - RM\,\delta K\,(I - (LP)^{\dagger}L)(MK)^{\dagger}M - (LP)^{\dagger}\delta L\,RM$
$\quad - (LP)^{\dagger}(P(LP)^{\dagger})^{T}\delta L^{T}L(I - (LP)^{\dagger}L)(MK)^{\dagger}M + (LP)^{\dagger}(LP)^{\dagger T}\delta K^{T}M^{T}(MK)^{\dagger T}L^{T}L(I - (LP)^{\dagger}L)(MK)^{\dagger}M$
$\quad + (LP)^{\dagger}(LP)^{\dagger T}K^{T}\delta M^{T}(MK)^{\dagger T}L^{T}L(I - (LP)^{\dagger}L)(MK)^{\dagger}M$
$= R\,\delta M\,(I - K(MK)^{\dagger}M) - K_{ML}^{\dagger}\,\delta K\,K_{ML}^{\dagger} - (LP)^{\dagger}\,\delta L\,K_{ML}^{\dagger} - Q\,\delta L^{T}LK_{ML}^{\dagger} + Q\,\delta K^{T}M^{T}(L(MK)^{\dagger})^{T}LK_{ML}^{\dagger} + QK^{T}\delta M^{T}(L(MK)^{\dagger})^{T}LK_{ML}^{\dagger}.$
Employing the 'vec' operation on both sides of (24) and taking (5) and (6) into account, we obtain
$\operatorname{vec}(\delta K_{ML}^{\dagger}) = \big((I - K(MK)^{\dagger}M)^{T} \otimes R\big)\operatorname{vec}(\delta M) + \big(((LK_{ML}^{\dagger})^{T}L(MK)^{\dagger}) \otimes QK^{T}\big)\operatorname{vec}(\delta M^{T})$
$\quad - \big((K_{ML}^{\dagger})^{T} \otimes (LP)^{\dagger}\big)\operatorname{vec}(\delta L) - \big((LK_{ML}^{\dagger})^{T} \otimes Q\big)\operatorname{vec}(\delta L^{T})$
$\quad - \big((K_{ML}^{\dagger})^{T} \otimes K_{ML}^{\dagger}\big)\operatorname{vec}(\delta K) + \big(((LK_{ML}^{\dagger})^{T}L(MK)^{\dagger}M) \otimes Q\big)\operatorname{vec}(\delta K^{T}) \quad \text{(by (5))}$
$= \big[\big((I - K(MK)^{\dagger}M)^{T} \otimes R\big) + \big(((LK_{ML}^{\dagger})^{T}L(MK)^{\dagger}) \otimes QK^{T}\big)\Pi_{sm}\big]\operatorname{vec}(\delta M)$
$\quad - \big[\big((K_{ML}^{\dagger})^{T} \otimes (LP)^{\dagger}\big) + \big((LK_{ML}^{\dagger})^{T} \otimes Q\big)\Pi_{ln}\big]\operatorname{vec}(\delta L)$
$\quad - \big[\big((K_{ML}^{\dagger})^{T} \otimes K_{ML}^{\dagger}\big) - \big(((LK_{ML}^{\dagger})^{T}L(MK)^{\dagger}M) \otimes Q\big)\Pi_{mn}\big]\operatorname{vec}(\delta K) \quad \text{(by (6))}$
$= [Z(M),\, Z(L),\, Z(K)]\,\big(\operatorname{vec}(\delta M)^{T}, \operatorname{vec}(\delta L)^{T}, \operatorname{vec}(\delta K)^{T}\big)^{T}.$
That is,
$\delta\big(\operatorname{vec}(K_{ML}^{\dagger})\big) = [Z(M),\, Z(L),\, Z(K)]\,\delta w.$
The definition of Fréchet derivative yields the expected results.  □
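The directional-derivative formula (24) obtained in the proof can be validated against finite differences. The sketch below is ours and assumes $MK$ has full row rank and $L$ is nonsingular (so all ranks stay constant under small perturbations and $KP = 0$, consistent with the identities used above); `rcond` truncates the numerically zero singular values of $LP$:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 3, 5                        # s = m, l = n; K has full row rank
K = rng.standard_normal((m, n))
M = rng.standard_normal((m, m))    # nonsingular a.s., so rank(MK) = rank(K)
L = rng.standard_normal((n, n))    # nonsingular a.s.

def ml_pinv(K, M, L):
    n = K.shape[1]
    MKp = np.linalg.pinv(M @ K)
    P = np.eye(n) - MKp @ (M @ K)
    return (np.eye(n) - np.linalg.pinv(L @ P, rcond=1e-10) @ L) @ MKp @ M

MKp = np.linalg.pinv(M @ K)
P = np.eye(n) - MKp @ (M @ K)
LPp = np.linalg.pinv(L @ P, rcond=1e-10)
Q = LPp @ LPp.T
R = (np.eye(n) - P @ LPp @ L) @ MKp
Kml = ml_pinv(K, M, L)

t = 1e-7
dM = t * rng.standard_normal(M.shape)
dL = t * rng.standard_normal(L.shape)
dK = t * rng.standard_normal(K.shape)

dKml = (R @ dM @ (np.eye(m) - K @ MKp @ M)        # delta M terms of (24)
        + Q @ K.T @ dM.T @ (L @ MKp).T @ L @ Kml
        - LPp @ dL @ Kml                           # delta L terms
        - Q @ dL.T @ L @ Kml
        - Kml @ dK @ Kml                           # delta K terms
        + Q @ dK.T @ (L @ MKp @ M).T @ L @ Kml)

fd = ml_pinv(K + dK, M + dM, L + dL) - Kml         # finite difference
rel_err = np.linalg.norm(fd - dKml) / np.linalg.norm(dKml)
# rel_err is O(t): the closed form and the finite difference agree
```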
Remark 1.
Assuming $M = I$, the Fréchet derivative of $\phi$ at $w$ can be written as
$\delta\phi(w) = [\tilde{Z}(L),\, \tilde{Z}(K)],$
where
$\tilde{Z}(M) = 0,$
$\tilde{Z}(L) = -\big((K_{L}^{\dagger})^{T} \otimes (LP)^{\dagger}\big) - \big((LK_{L}^{\dagger})^{T} \otimes Q\big)\Pi_{ln},$
$\tilde{Z}(K) = -\big((K_{L}^{\dagger})^{T} \otimes K_{L}^{\dagger}\big) + \big(((LK_{L}^{\dagger})^{T}LK^{\dagger}) \otimes Q\big)\Pi_{mn}.$
This is precisely the result of ([19], Lemma 4), which allows us to retrieve the condition numbers of the K-weighted pseudoinverse of [19]. Note that [19] uses a notation different from that of this paper ($K$ and $L$ are interchanged).
Remark 2.
Considering $M$ and $L$ as identity matrices, we obtain
$Z(M) = 0, \qquad Z(L) = 0, \qquad Z(K) = -\big((K^{\dagger})^{T} \otimes K^{\dagger}\big) + \big((KK^{T})^{\dagger} \otimes (I - K^{\dagger}K)\big)\Pi_{mn},$
which yields the outcome in ([16], Lemma 10), from which the condition numbers for the Moore–Penrose inverse [16] can be obtained.
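This Moore–Penrose special case is easy to validate against finite differences. The sketch below (ours) uses a full-row-rank $K$, so that both terms of $Z(K)$ are active and the rank is stable under perturbation:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 3, 5                     # full row rank: K K^dagger = I_m, K^dagger K != I_n
K = rng.standard_normal((m, n))
Kp = np.linalg.pinv(K)

def vec(A):
    return A.reshape(-1, order="F")          # column-stacking vec

def vec_perm(m, n):
    """Pi_{mn} with vec(A^T) = Pi_{mn} vec(A) for A in R^{m x n}."""
    P = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            P[i * n + j, j * m + i] = 1.0
    return P

Z = (-np.kron(Kp.T, Kp)
     + np.kron(np.linalg.pinv(K @ K.T), np.eye(n) - Kp @ K) @ vec_perm(m, n))

dK = 1e-7 * rng.standard_normal((m, n))
fd = vec(np.linalg.pinv(K + dK) - Kp)        # finite difference of K -> K^dagger
rel_err = np.linalg.norm(fd - Z @ vec(dK)) / np.linalg.norm(Z @ vec(dK))
# rel_err is O(1e-7): Z is indeed the Frechet derivative of the pseudoinverse
```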
Next, we provide the normwise, mixed, and componentwise condition numbers for $K_{ML}^{\dagger}$, which are direct consequences of Lemmas 1 and 4.
Theorem 1.
The normwise, mixed and componentwise condition numbers for $K_{ML}^{\dagger}$ defined in (11)–(13) are
$n(M, L, K) = \frac{\|[Z(M),\, Z(L),\, Z(K)]\|_2\,\|w\|_2}{\|\operatorname{vec}(K_{ML}^{\dagger})\|_2},$
$m(M, L, K) = \frac{\big\|\,|Z(M)|\operatorname{vec}(|M|) + |Z(L)|\operatorname{vec}(|L|) + |Z(K)|\operatorname{vec}(|K|)\,\big\|_\infty}{\|\operatorname{vec}(K_{ML}^{\dagger})\|_\infty},$
$c(M, L, K) = \Big\|\big(|Z(M)|\operatorname{vec}(|M|) + |Z(L)|\operatorname{vec}(|L|) + |Z(K)|\operatorname{vec}(|K|)\big) \oslash \operatorname{vec}(K_{ML}^{\dagger})\Big\|_\infty.$
In the following corollary, we propose easily computable upper bounds that reduce the cost of evaluating these condition numbers. Numerical experiments in Section 5 illustrate the reliability of these bounds.
Corollary 1.
The upper bounds for the normwise, mixed and componentwise condition numbers for $K_{ML}^{\dagger}$ are
$n(M, L, K) \le n_u(M, L, K) = \big[\|I - K(MK)^{\dagger}M\|_2\|R\|_2 + \|(LK_{ML}^{\dagger})^{T}L(MK)^{\dagger}\|_2\|QK^{T}\|_2 + \|K_{ML}^{\dagger}\|_2\|(LP)^{\dagger}\|_2 + \|LK_{ML}^{\dagger}\|_2\|Q\|_2 + \|K_{ML}^{\dagger}\|_2^2 + \|(LK_{ML}^{\dagger})^{T}L(MK)^{\dagger}M\|_2\|Q\|_2\big]\,\frac{\|[M, L, K]\|_F}{\|K_{ML}^{\dagger}\|_F},$
$m(M, L, K) \le m_u(M, L, K) = \big\|\,|R||M||I - K(MK)^{\dagger}M| + |QK^{T}||M^{T}||(L(MK)^{\dagger})^{T}(LK_{ML}^{\dagger})| + |(LP)^{\dagger}||L||K_{ML}^{\dagger}| + |Q||L^{T}||LK_{ML}^{\dagger}| + |K_{ML}^{\dagger}||K||K_{ML}^{\dagger}| + |Q||K^{T}||(L(MK)^{\dagger}M)^{T}(LK_{ML}^{\dagger})|\,\big\|_{\max}\,\big/\,\|K_{ML}^{\dagger}\|_{\max},$
$c(M, L, K) \le c_u(M, L, K) = \Big\|\big(|R||M||I - K(MK)^{\dagger}M| + |QK^{T}||M^{T}||(L(MK)^{\dagger})^{T}(LK_{ML}^{\dagger})| + |(LP)^{\dagger}||L||K_{ML}^{\dagger}| + |Q||L^{T}||LK_{ML}^{\dagger}| + |K_{ML}^{\dagger}||K||K_{ML}^{\dagger}| + |Q||K^{T}||(L(MK)^{\dagger}M)^{T}(LK_{ML}^{\dagger})|\big) \oslash |K_{ML}^{\dagger}|\Big\|_{\max}.$
Proof. 
Considering the known property $\|[U, V]\|_2 \le \|U\|_2 + \|V\|_2$ for a pair of matrices $U$ and $V$, together with Theorem 1 and (7), we obtain
$n(M, L, K) \le \big[\|\big((I - K(MK)^{\dagger}M)^{T} \otimes R\big) + \big(((LK_{ML}^{\dagger})^{T}L(MK)^{\dagger}) \otimes QK^{T}\big)\Pi_{sm}\|_2 + \|\big((K_{ML}^{\dagger})^{T} \otimes (LP)^{\dagger}\big) + \big((LK_{ML}^{\dagger})^{T} \otimes Q\big)\Pi_{ln}\|_2$
$\quad + \|\big((K_{ML}^{\dagger})^{T} \otimes K_{ML}^{\dagger}\big) - \big(((LK_{ML}^{\dagger})^{T}L(MK)^{\dagger}M) \otimes Q\big)\Pi_{mn}\|_2\big]\,\frac{\|[M, L, K]\|_F}{\|K_{ML}^{\dagger}\|_F}$
$\le \big[\|(I - K(MK)^{\dagger}M)^{T} \otimes R\|_2 + \|((LK_{ML}^{\dagger})^{T}L(MK)^{\dagger}) \otimes QK^{T}\|_2 + \|(K_{ML}^{\dagger})^{T} \otimes (LP)^{\dagger}\|_2 + \|(LK_{ML}^{\dagger})^{T} \otimes Q\|_2$
$\quad + \|(K_{ML}^{\dagger})^{T} \otimes K_{ML}^{\dagger}\|_2 + \|((LK_{ML}^{\dagger})^{T}L(MK)^{\dagger}M) \otimes Q\|_2\big]\,\frac{\|[M, L, K]\|_F}{\|K_{ML}^{\dagger}\|_F}$
$= \big[\|I - K(MK)^{\dagger}M\|_2\|R\|_2 + \|(LK_{ML}^{\dagger})^{T}L(MK)^{\dagger}\|_2\|QK^{T}\|_2 + \|K_{ML}^{\dagger}\|_2\|(LP)^{\dagger}\|_2 + \|LK_{ML}^{\dagger}\|_2\|Q\|_2 + \|K_{ML}^{\dagger}\|_2^2 + \|(LK_{ML}^{\dagger})^{T}L(MK)^{\dagger}M\|_2\|Q\|_2\big]\,\frac{\|[M, L, K]\|_F}{\|K_{ML}^{\dagger}\|_F}.$
Again using Theorem 1 and Lemma 3, we have
$m(M, L, K) \le \big\|\,|R||M||I - K(MK)^{\dagger}M| + |QK^{T}||M^{T}||(L(MK)^{\dagger})^{T}(LK_{ML}^{\dagger})| + |(LP)^{\dagger}||L||K_{ML}^{\dagger}| + |Q||L^{T}||LK_{ML}^{\dagger}| + |K_{ML}^{\dagger}||K||K_{ML}^{\dagger}| + |Q||K^{T}||(L(MK)^{\dagger}M)^{T}(LK_{ML}^{\dagger})|\,\big\|_{\max}\,\big/\,\|K_{ML}^{\dagger}\|_{\max},$
and, in the same way,
$c(M, L, K) \le \Big\|\big(|R||M||I - K(MK)^{\dagger}M| + |QK^{T}||M^{T}||(L(MK)^{\dagger})^{T}(LK_{ML}^{\dagger})| + |(LP)^{\dagger}||L||K_{ML}^{\dagger}| + |Q||L^{T}||LK_{ML}^{\dagger}| + |K_{ML}^{\dagger}||K||K_{ML}^{\dagger}| + |Q||K^{T}||(L(MK)^{\dagger}M)^{T}(LK_{ML}^{\dagger})|\big) \oslash |K_{ML}^{\dagger}|\Big\|_{\max}.$
 □

4. Condition Numbers for the ML-Weighted Least Squares Problem

First, we define the ML-WLS solution mapping $\psi : \mathbb{R}^{mn + sm + ln + m} \to \mathbb{R}^n$ by
$\psi([M, L, K, h]) = x = (I - (LP)^{\dagger}L)(MK)^{\dagger}Mh.$
Then, using Definition 1, the normwise, mixed and componentwise condition numbers for the ML-WLS problem are
$n^{\text{ML-WLS}}(M, L, K, h) = \lim_{\varepsilon \to 0}\ \sup_{\|[\delta M, \delta L, \delta K, \delta h]\|_F \le \varepsilon\|[M, L, K, h]\|_F}\ \frac{\|\delta x\|_2}{\varepsilon\|x\|_2},$
$m^{\text{ML-WLS}}(M, L, K, h) = \lim_{\varepsilon \to 0}\ \sup_{\substack{|\delta M| \le \varepsilon|M|,\ |\delta L| \le \varepsilon|L| \\ |\delta K| \le \varepsilon|K|,\ |\delta h| \le \varepsilon|h|}}\ \frac{\|\delta x\|_\infty}{\varepsilon\|x\|_\infty},$
$c^{\text{ML-WLS}}(M, L, K, h) = \lim_{\varepsilon \to 0}\ \sup_{\substack{|\delta M| \le \varepsilon|M|,\ |\delta L| \le \varepsilon|L| \\ |\delta K| \le \varepsilon|K|,\ |\delta h| \le \varepsilon|h|}}\ \frac{1}{\varepsilon}\,\big\|\delta x \oslash x\big\|_\infty,$
where $\delta x = \psi([M + \delta M, L + \delta L, K + \delta K, h + \delta h]) - x$.
Lemma 5.
The mapping $\psi$ is continuous and Fréchet differentiable at $[M, L, K, h]$, and
$\delta\psi([M, L, K, h]) = [S(M),\, S(L),\, S(K),\, K_{ML}^{\dagger}],$
where
$S(M) = (r^{T} \otimes R) + \big((x^{T}L^{T}L(MK)^{\dagger}) \otimes QK^{T}\big)\Pi_{sm},$
$S(L) = -\big(x^{T} \otimes (LP)^{\dagger}\big) - \big((x^{T}L^{T}) \otimes Q\big)\Pi_{ln},$
$S(K) = -\big(x^{T} \otimes K_{ML}^{\dagger}\big) + \big((x^{T}L^{T}L(MK)^{\dagger}M) \otimes Q\big)\Pi_{mn},$
with $r = (I - K(MK)^{\dagger}M)h$ and $Q = (LP)^{\dagger}\big((LP)^{\dagger}\big)^{T} = \big((LP)^{T}(LP)\big)^{\dagger}$.
Proof. 
Differentiating both sides of (28), we obtain
$\delta x = \delta\psi([M, L, K, h]) = \delta\big[(I - (LP)^{\dagger}L)(MK)^{\dagger}Mh\big].$
Proceeding exactly as in the proof of Lemma 4 (expanding $\delta((MK)^{\dagger})$ and $\delta((LP)^{\dagger})$ by (9), then simplifying with (20)–(22), $(MK)(MK)^{\dagger} = I$ and $(LP)^{\dagger}\big((LP)^{\dagger}\big)^{T} = \big((LP)^{T}(LP)\big)^{\dagger}$), while keeping the additional term $(I - P(LP)^{\dagger}L)(MK)^{\dagger}M\,\delta h = K_{ML}^{\dagger}\,\delta h$, we arrive at
$\delta x = R\,\delta M\,r - K_{ML}^{\dagger}\,\delta K\,x - (LP)^{\dagger}\,\delta L\,x - Q\,\delta L^{T}Lx + Q\,\delta K^{T}M^{T}(L(MK)^{\dagger})^{T}Lx + QK^{T}\delta M^{T}(L(MK)^{\dagger})^{T}Lx + K_{ML}^{\dagger}\,\delta h.$
After applying the 'vec' operation on both sides of (31) and using (5) and (6), we obtain
$\operatorname{vec}(\delta x) = \big[(r^{T} \otimes R) + \big((x^{T}L^{T}L(MK)^{\dagger}) \otimes QK^{T}\big)\Pi_{sm}\big]\operatorname{vec}(\delta M) - \big[\big(x^{T} \otimes (LP)^{\dagger}\big) + \big((x^{T}L^{T}) \otimes Q\big)\Pi_{ln}\big]\operatorname{vec}(\delta L)$
$\quad - \big[\big(x^{T} \otimes K_{ML}^{\dagger}\big) - \big((x^{T}L^{T}L(MK)^{\dagger}M) \otimes Q\big)\Pi_{mn}\big]\operatorname{vec}(\delta K) + K_{ML}^{\dagger}\,\delta h.$
That is,
$\delta x = [S(M),\, S(L),\, S(K),\, K_{ML}^{\dagger}]\,\delta w, \quad \text{where now } \delta w = \big(\operatorname{vec}(\delta M)^{T}, \operatorname{vec}(\delta L)^{T}, \operatorname{vec}(\delta K)^{T}, \delta h^{T}\big)^{T}.$
Hence, the required results can be obtained by using the definition of Fréchet derivative.  □
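The directional derivative of the ML-WLS solution can be validated the same way as in Lemma 4: by finite differences under the constant-rank assumptions ($MK$ of full row rank, $L$ nonsingular). An illustrative sketch (ours):

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 3, 5
K = rng.standard_normal((m, n))
M = rng.standard_normal((m, m))
L = rng.standard_normal((n, n))
h = rng.standard_normal(m)

def ml_wls_solution(K, M, L, h):
    n = K.shape[1]
    MKp = np.linalg.pinv(M @ K)
    P = np.eye(n) - MKp @ (M @ K)
    return (np.eye(n) - np.linalg.pinv(L @ P, rcond=1e-10) @ L) @ MKp @ (M @ h)

MKp = np.linalg.pinv(M @ K)
P = np.eye(n) - MKp @ (M @ K)
LPp = np.linalg.pinv(L @ P, rcond=1e-10)
Q = LPp @ LPp.T
R = (np.eye(n) - P @ LPp @ L) @ MKp
Kml = (np.eye(n) - LPp @ L) @ MKp @ M
x = Kml @ h
r = (np.eye(m) - K @ MKp @ M) @ h

t = 1e-7
dM = t * rng.standard_normal(M.shape); dL = t * rng.standard_normal(L.shape)
dK = t * rng.standard_normal(K.shape); dh = t * rng.standard_normal(m)

# delta x from the closed form (31)
dx = (R @ dM @ r - Kml @ dK @ x - LPp @ dL @ x - Q @ dL.T @ (L @ x)
      + Q @ dK.T @ (L @ MKp @ M).T @ (L @ x)
      + Q @ K.T @ dM.T @ (L @ MKp).T @ (L @ x)
      + Kml @ dh)

fd = ml_wls_solution(K + dK, M + dM, L + dL, h + dh) - x
rel_err = np.linalg.norm(fd - dx) / np.linalg.norm(dx)
```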
Remark 3.
Assuming $M$ and $L$ to be identity matrices and using (32), we obtain
$\delta x = \big[-(x^{T} \otimes K^{\dagger}) + \big((x^{T}K^{\dagger}) \otimes (I - K^{\dagger}K)\big)\Pi_{mn},\ K^{\dagger}\big]\begin{pmatrix}\operatorname{vec}(\delta K) \\ \delta h\end{pmatrix}.$
This recovers the result stated in ([16], Lemma 11), from which the condition numbers of the linear least squares solution [16] can be obtained.
Now, we give the normwise, mixed, and componentwise condition numbers for the ML-WLS solution, which are immediate consequences of Lemmas 1 and 5.
Theorem 2.
The normwise, mixed and componentwise condition numbers for the ML-WLS problem defined above are
$n^{\text{ML-WLS}}(M, L, K, h) = \frac{\|[S(M),\, S(L),\, S(K),\, K_{ML}^{\dagger}]\|_2\,\big\|\big(\operatorname{vec}(M)^{T}, \operatorname{vec}(L)^{T}, \operatorname{vec}(K)^{T}, h^{T}\big)^{T}\big\|_2}{\|x\|_2},$
$m^{\text{ML-WLS}}(M, L, K, h) = \frac{\big\|\,|S(M)|\operatorname{vec}(|M|) + |S(L)|\operatorname{vec}(|L|) + |S(K)|\operatorname{vec}(|K|) + |K_{ML}^{\dagger}||h|\,\big\|_\infty}{\|x\|_\infty},$
$c^{\text{ML-WLS}}(M, L, K, h) = \Big\|\big(|S(M)|\operatorname{vec}(|M|) + |S(L)|\operatorname{vec}(|L|) + |S(K)|\operatorname{vec}(|K|) + |K_{ML}^{\dagger}||h|\big) \oslash x\Big\|_\infty.$
The next corollary yields easily computable upper bounds for the ML-WLS solution. Numerical experiments in Section 5 confirm the reliability of these bounds.
Corollary 2.
The upper bounds for the normwise, mixed and componentwise condition numbers for the ML-WLS solution are
$n^{\text{ML-WLS}}(M, L, K, h) \le n_{upper}(M, L, K, h) = \big[\|r\|_2\|R\|_2 + \|(L(MK)^{\dagger})^{T}Lx\|_2\|QK^{T}\|_2 + \|x\|_2\|(LP)^{\dagger}\|_2 + \|Lx\|_2\|Q\|_2 + \|x\|_2\|K_{ML}^{\dagger}\|_2 + \|M^{T}(L(MK)^{\dagger})^{T}Lx\|_2\|Q\|_2 + \|K_{ML}^{\dagger}\|_2\big]\,\frac{\|[M, L, K, h]\|_F}{\|x\|_2},$
$m^{\text{ML-WLS}}(M, L, K, h) \le m_{upper}(M, L, K, h) = \big\|\,|R||M||r| + |QK^{T}||M^{T}||(L(MK)^{\dagger})^{T}Lx| + |(LP)^{\dagger}||L||x| + |Q||L^{T}||Lx| + |K_{ML}^{\dagger}||K||x| + |Q||K^{T}||M^{T}(L(MK)^{\dagger})^{T}Lx| + |K_{ML}^{\dagger}||h|\,\big\|_\infty\,\big/\,\|x\|_\infty,$
$c^{\text{ML-WLS}}(M, L, K, h) \le c_{upper}(M, L, K, h) = \Big\|\big(|R||M||r| + |QK^{T}||M^{T}||(L(MK)^{\dagger})^{T}Lx| + |(LP)^{\dagger}||L||x| + |Q||L^{T}||Lx| + |K_{ML}^{\dagger}||K||x| + |Q||K^{T}||M^{T}(L(MK)^{\dagger})^{T}Lx| + |K_{ML}^{\dagger}||h|\big) \oslash |x|\Big\|_\infty.$
We now present a version of the normwise condition number that avoids forming Kronecker products.
Theorem 3.
The normwise condition number $n^{\text{ML-WLS}}(M, L, K, h)$ of the ML-WLS solution is given by
$n^{\text{ML-WLS}}(M, L, K, h) = \|W\|_2^{1/2}\,\frac{\|[M, L, K, h]\|_F}{\|x\|_2},$
where
$W = (1 + \|x\|_2^2)\,K_{ML}^{\dagger}(K_{ML}^{\dagger})^{T} + \big(\|Lx\|_2^2 + \|x^{T}L^{T}L(MK)^{\dagger}M\|_2^2\big)Q^2 + \|r\|_2^2\,RR^{T} + \|x\|_2^2\,Q + \|x^{T}L^{T}L(MK)^{\dagger}\|_2^2\,QK^{T}KQ$
$\quad + 2R(MK)^{\dagger T}L^{T}Lx\,r^{T}KQ^{T} + 2Qx\,x^{T}L^{T}(LP)^{\dagger T} - 2Qx\,x^{T}L^{T}L(MK)^{\dagger}M(K_{ML}^{\dagger})^{T}.$
Proof. 
It is difficult to simplify $\|\delta\psi\|_2$ directly. Therefore, from (29) we set $\overline{W} = [S(M),\, S(L),\, S(K),\, K_{ML}^{\dagger}]$ and use $\|\overline{W}\|_2^2 = \|\overline{W}^{T}\|_2^2 = \max_{\|y\|_2 = 1}\|\overline{W}^{T}y\|_2^2$. For a unit vector $y \in \mathbb{R}^n$, applying (5), (6) and (8) blockwise, together with $\Pi^{-1} = \Pi^{T}$, gives
$\overline{W}^{T}y = \begin{pmatrix}\operatorname{vec}\big(R^{T}y\,r^{T} + (MK)^{\dagger T}L^{T}Lx\,y^{T}QK^{T}\big) \\ -\operatorname{vec}\big((LP)^{\dagger T}y\,x^{T} + Lx\,y^{T}Q\big) \\ \operatorname{vec}\big(M^{T}(MK)^{\dagger T}L^{T}Lx\,y^{T}Q - (K_{ML}^{\dagger})^{T}y\,x^{T}\big) \\ (K_{ML}^{\dagger})^{T}y\end{pmatrix}.$
Expanding $\|\overline{W}^{T}y\|_2^2$ block by block, using $\|uv^{T}\|_F^2 = \|u\|_2^2\|v\|_2^2$ and $\operatorname{tr}(uv^{T}ab^{T}) = (v^{T}a)(b^{T}u)$, collects all terms into the quadratic form $\|\overline{W}^{T}y\|_2^2 = y^{T}Wy$ with $W$ as in the statement of the theorem.
Therefore, we have the desired result (36).  □

5. Numerical Experiments

In this section, we first present reliable condition estimation algorithms for the normwise, mixed, and componentwise condition numbers using the small-sample statistical condition estimation (SSCE) method; we then demonstrate the accuracy of the proposed algorithms through numerical experiments. Kenney and Laub [23] introduced SSCE as a reliable method for estimating condition numbers, and it has been applied to linear least squares problems [13,28,29], indefinite least squares problems [20,30], and total least squares problems [31,32,33]. We propose Algorithms A, B and C, based on the SSCE method [23], to estimate the normwise, mixed, and componentwise condition numbers of $K_{ML}^{\dagger}$ and of the ML-WLS solution.
Algorithm A (Small-sample statistical condition estimation method for the normwise condition number of ML -weighted pseudoinverse)
  • Generate matrices δ M 1 , δ L 1 , δ K 1 , δ M 2 , δ L 2 , δ K 2 , , δ M q , δ L q , δ K q with each entry in N ( 0 , 1 ) and orthonormalize the following matrix
    vec δ M 1 vec δ M 2 vec δ M q vec δ L 1 vec δ L 2 vec δ L q vec δ K 1 vec δ K 2 vec δ K q
    to obtain $\tau_{1},\tau_{2},\ldots,\tau_{q}$ by the modified Gram–Schmidt orthogonalization process. Each $\tau_{i}$ can be converted into the corresponding matrices $\delta M_{i},\delta L_{i},\delta K_{i}$ by applying the unvec operation.
  • Let $p=mn+sm+ln$. Approximate the Wallis factors $\omega_{p}$ and $\omega_{q}$ by
    $$\omega_{p}\approx\sqrt{\frac{2}{\pi\left(p-\frac{1}{2}\right)}},\qquad \omega_{q}\approx\sqrt{\frac{2}{\pi\left(q-\frac{1}{2}\right)}}.\tag{38}$$
  • For i = 1 , 2 , , q , compute λ i from (24)
    $$\lambda_{i}=-R\,\delta M_{i}\left(I-K(MK)^{\dagger}M\right)-K_{ML}\,\delta K_{i}\,K_{ML}-(LP)^{\dagger}\,\delta L_{i}\,K_{ML}-Q\,\delta L_{i}^{T}L\,K_{ML}+Q\,\delta K_{i}^{T}M^{T}\left(L(MK)^{\dagger}\right)^{T}L\,K_{ML}+Q\,K^{T}\delta M_{i}^{T}\left(L(MK)^{\dagger}\right)^{T}L\,K_{ML},$$
    where Q and R are given in (18). Estimate the absolute condition vector by
    $$n_{\mathrm{abs}}:=\frac{\omega_{q}}{\omega_{p}}\sqrt{|\lambda_{1}|^{2}+|\lambda_{2}|^{2}+\cdots+|\lambda_{q}|^{2}},$$
    Here, for any vector $\lambda=(\lambda_{1},\ldots,\lambda_{n})^{T}\in\mathbb{R}^{n}$, $|\lambda|=(|\lambda_{1}|,\ldots,|\lambda_{n}|)^{T}$ and $|\lambda|^{2}=(\lambda_{1}^{2},\ldots,\lambda_{n}^{2})^{T}$; similarly, $|A|=(|a_{ij}|)$ for $A=(a_{ij})$. The squaring of each $\lambda_{i}$, $i=1,2,\ldots,q$, and the square root above are both applied componentwise.
  • Estimate the normwise condition number (33) by
    $$n_{\mathrm{SCE}}(M,L,K)=\frac{N_{\mathrm{SCE}}\left\|[M,L,K]\right\|_{F}}{\left\|K_{ML}\right\|_{F}},$$
    where $N_{\mathrm{SCE}}:=\frac{\omega_{q}}{\omega_{p}}\sqrt{\|\lambda_{1}\|_{F}^{2}+\|\lambda_{2}\|_{F}^{2}+\cdots+\|\lambda_{q}\|_{F}^{2}}=\|n_{\mathrm{abs}}\|_{F}$.
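To make the structure of Algorithm A concrete, the following sketch specializes it to the unweighted case, in which $K_{ML}$ reduces to the Moore–Penrose inverse $K^{\dagger}$. This is an illustration under stated assumptions, not the paper's implementation: only $K$ is perturbed, the closed-form directional derivative (24) is replaced by a finite-difference approximation, and the names (`wallis`, `dpinv_fd`, `sce_normwise_pinv`) are ours.

```python
import numpy as np

def wallis(p):
    # Kenney-Laub approximation to the Wallis factor, cf. (38)
    return np.sqrt(2.0 / (np.pi * (p - 0.5)))

def dpinv_fd(K, dK, h=1e-7):
    # finite-difference directional derivative of pinv(K) in direction dK
    return (np.linalg.pinv(K + h * dK) - np.linalg.pinv(K)) / h

def sce_normwise_pinv(K, q=3, seed=0):
    # SCE estimate of the Frobenius norm of the Jacobian of vec(pinv(K))
    rng = np.random.default_rng(seed)
    m, n = K.shape
    p = m * n
    # q Gaussian directions, orthonormalized (QR plays the role of Gram-Schmidt)
    Z, _ = np.linalg.qr(rng.standard_normal((p, q)))
    lams = [dpinv_fd(K, Z[:, i].reshape(m, n)) for i in range(q)]
    # componentwise absolute condition matrix n_abs, then Frobenius aggregation
    n_abs = (wallis(q) / wallis(p)) * np.sqrt(sum(lam**2 for lam in lams))
    return np.linalg.norm(n_abs, "fro")
```

With only $q=3$ random directions the estimate is typically within a small factor of the true Frobenius norm of the derivative map, which is the behavior Table 1 reports for the full algorithm.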
The following algorithm, likewise based on the SCE method of [23], which has been used for numerous problems (see, for example, [27,32,33,34]), estimates the mixed and componentwise condition numbers of $K_{ML}$.
Algorithm B (Small-sample statistical condition estimation method for the mixed and componentwise condition numbers of ML -weighted pseudoinverse)
  • Generate matrices $\delta M_{1},\delta L_{1},\delta K_{1},\delta M_{2},\delta L_{2},\delta K_{2},\ldots,\delta M_{q},\delta L_{q},\delta K_{q}$ with each entry drawn from $N(0,1)$ and orthonormalize the following matrix
    vec δ M 1 vec δ M 2 vec δ M q vec δ L 1 vec δ L 2 vec δ L q vec δ K 1 vec δ K 2 vec δ K q
    to obtain $\tau_{1},\tau_{2},\ldots,\tau_{q}$ by the modified Gram–Schmidt orthogonalization process. Each $\tau_{i}$ can be converted into the corresponding matrices $\widetilde{\delta M_{i}},\widetilde{\delta L_{i}},\widetilde{\delta K_{i}}$ by applying the unvec operation. Then set $\delta M_{i}=\widetilde{\delta M_{i}}\odot M$, $\delta L_{i}=\widetilde{\delta L_{i}}\odot L$, and $\delta K_{i}=\widetilde{\delta K_{i}}\odot K$, where $\odot$ denotes the componentwise product with the data $M$, $L$, $K$.
  • Let p = m n + m s + l n . Approximate ω p and ω q by (38).
  • For $\alpha=1,2,\ldots,q$, calculate $\lambda_{\alpha}$ by (39). Compute the absolute condition vector
    $$n_{\mathrm{SCE}}=\frac{\omega_{q}}{\omega_{p}}\sqrt{|\lambda_{1}|^{2}+|\lambda_{2}|^{2}+\cdots+|\lambda_{q}|^{2}}.$$
  • Estimate the mixed and componentwise condition estimations m s c e ( M , L , K ) and c s c e ( M , L , K ) as follows:
    $$m_{\mathrm{SCE}}(M,L,K)=\frac{\left\|n_{\mathrm{SCE}}\right\|_{\infty}}{\left\|\operatorname{vec}(K_{ML})\right\|_{\infty}},\qquad c_{\mathrm{SCE}}(M,L,K)=\left\|\frac{n_{\mathrm{SCE}}}{\operatorname{vec}(K_{ML})}\right\|_{\infty},$$
    where the division is componentwise.
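The only ingredient that distinguishes Algorithm B from Algorithm A is the componentwise scaling of the perturbation directions by the data. A hypothetical sketch of this step, again specialized to the Moore–Penrose case with only $K$ perturbed and with the closed-form derivative (39) replaced by finite differences (all names are ours):

```python
import numpy as np

def wallis(p):
    # Kenney-Laub approximation to the Wallis factor, cf. (38)
    return np.sqrt(2.0 / (np.pi * (p - 0.5)))

def sce_mixed_comp_pinv(K, q=3, h=1e-7, seed=0):
    # SCE estimates of the mixed and componentwise condition numbers of pinv(K)
    rng = np.random.default_rng(seed)
    m, n = K.shape
    p = m * n
    Z, _ = np.linalg.qr(rng.standard_normal((p, q)))
    pinvK = np.linalg.pinv(K)
    acc = np.zeros_like(pinvK)
    for i in range(q):
        dK = Z[:, i].reshape(m, n) * K          # componentwise scaling by the data K
        lam = (np.linalg.pinv(K + h * dK) - pinvK) / h
        acc += lam**2
    n_abs = (wallis(q) / wallis(p)) * np.sqrt(acc)  # absolute condition vector
    mixed = n_abs.max() / np.abs(pinvK).max()       # ||n_abs||_inf / ||vec(pinv K)||_inf
    compw = (n_abs / np.abs(pinvK)).max()           # componentwise division
    return mixed, compw
```

The componentwise scaling is what makes the resulting estimates invariant under row and column scalings of the data, the property that motivates the mixed and componentwise condition numbers.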
To estimate the normwise, mixed, and componentwise condition numbers of the ML-WLS problem, we provide Algorithm C, also based on the SCE approach [23].
Algorithm C (Small-sample statistical condition estimation method for the condition numbers of ML -weighted least squares problem)
  • Generate matrices $\delta M_{1},\delta L_{1},\delta K_{1},\delta h_{1},\delta M_{2},\delta L_{2},\delta K_{2},\delta h_{2},\ldots,\delta M_{t},\delta L_{t},\delta K_{t},\delta h_{t}$ with entries drawn from $N(0,1)$, where $\delta h_{i}\in\mathbb{R}^{m}$. Orthonormalize the following matrix
    vec δ M 1 vec δ M 2 vec δ M t vec δ L 1 vec δ L 2 vec δ L t vec δ K 1 vec δ K 2 vec δ K t δ h 1 δ h 2 δ h t
    to obtain an orthonormal matrix $[\xi_{1},\xi_{2},\ldots,\xi_{t}]$ by the modified Gram–Schmidt orthogonalization process. Each $\xi_{i}$ can be converted into the corresponding matrices $\delta M_{i},\delta L_{i},\delta K_{i},\delta h_{i}$ by applying the unvec operation.
  • Let $p=mn+sm+ln+m$. Approximate $\omega_{p}$ and $\omega_{t}$ by using (38).
  • For j = 1 , 2 , , t , compute y j from (31)
    $$y_{j}=-R\,\delta M_{j}\,r-K_{ML}\,\delta K_{j}\,x-(LP)^{\dagger}\,\delta L_{j}\,x-Q\,\delta L_{j}^{T}Lx+Q\,\delta K_{j}^{T}M^{T}\left(L(MK)^{\dagger}\right)^{T}Lx+Q\,K^{T}\delta M_{j}^{T}\left(L(MK)^{\dagger}\right)^{T}Lx+K_{ML}\,\delta h_{j}.$$
    Using the approximations for ω p and ω t , estimate the absolute condition vector
    $$\kappa_{\mathrm{abs}}^{ML\text{-}WLS}=\frac{\omega_{t}}{\omega_{p}}\sqrt{|y_{1}|^{2}+|y_{2}|^{2}+\cdots+|y_{t}|^{2}}.$$
  • Estimate the normwise condition estimation as follows:
    $$n_{\mathrm{SCE}}^{ML\text{-}WLS}=\frac{\left\|\kappa_{\mathrm{abs}}^{ML\text{-}WLS}\right\|_{2}\left\|\left[M^{T},L^{T},K^{T},h^{T}\right]^{T}\right\|_{2}}{\|x\|_{2}}.$$
  • Compute the mixed condition estimation m SCE and componentwise condition estimation c SCE as follows:
    $$m_{\mathrm{SCE}}:=\frac{\left\|\kappa_{\mathrm{abs}}^{ML\text{-}WLS}\right\|_{\infty}}{\|x\|_{\infty}},\qquad c_{\mathrm{SCE}}:=\left\|\frac{\kappa_{\mathrm{abs}}^{ML\text{-}WLS}}{x}\right\|_{\infty},$$
    where the division by $x$ is componentwise.
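Algorithm C admits the same kind of sketch for the ordinary least squares special case $x=\arg\min_{z}\|Az-b\|_{2}$ (i.e., with trivial weights). The code below is our illustration under that assumption: the directional derivative (31) is replaced by finite differences of `numpy.linalg.lstsq`, the data norm is taken as $(\|A\|_{F}^{2}+\|b\|_{2}^{2})^{1/2}$, and all names are ours.

```python
import numpy as np

def wallis(p):
    # Kenney-Laub approximation to the Wallis factor, cf. (38)
    return np.sqrt(2.0 / (np.pi * (p - 0.5)))

def lsq(A, b):
    return np.linalg.lstsq(A, b, rcond=None)[0]

def sce_ls_condition(A, b, t=3, h=1e-7, seed=0):
    # SCE estimates (normwise, mixed, componentwise) for x = argmin ||Az - b||_2
    rng = np.random.default_rng(seed)
    m, n = A.shape
    p = m * n + m                     # perturb A and b jointly
    Z, _ = np.linalg.qr(rng.standard_normal((p, t)))
    x = lsq(A, b)
    acc = np.zeros(n)
    for j in range(t):
        dA = Z[:m * n, j].reshape(m, n)
        db = Z[m * n:, j]
        y = (lsq(A + h * dA, b + h * db) - x) / h   # directional derivative of x
        acc += y**2
    k_abs = (wallis(t) / wallis(p)) * np.sqrt(acc)  # absolute condition vector
    data = np.sqrt(np.linalg.norm(A, "fro")**2 + np.linalg.norm(b)**2)
    normwise = np.linalg.norm(k_abs) * data / np.linalg.norm(x)
    mixed = k_abs.max() / np.abs(x).max()
    compw = (k_abs / np.abs(x)).max()
    return normwise, mixed, compw
```

As with Algorithms A and B, a handful of directions ($t=2$ or $3$, as in the experiments below) already gives estimates of the right order of magnitude.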
Next, we present three examples. The first compares our SCE-based estimates with the exact condition numbers of $K_{ML}$ and shows how well Algorithms A and B perform, even when the estimates become very large. The second demonstrates the accuracy of the statistical estimators of the normwise, mixed, and componentwise condition numbers of the ML-WLS solution. The third verifies the effectiveness of the over-estimation ratios produced by Algorithm C relative to the condition numbers of the ML-WLS solution.
Example 1.
We constructed 200 matrices by repeatedly applying the data matrices K, M, and L below and varying the value of θ.
$$K=\begin{bmatrix}\sin\theta&0&0\\0&\theta&0\\0&0&\theta^{3}\\0&0&\cos\theta\end{bmatrix}\in\mathbb{R}^{m\times n},\qquad M=\begin{bmatrix}2&0&0&0\\0&\cos\theta&0&0\\0&0&1&0\\0&0&0&\theta\end{bmatrix}\in\mathbb{R}^{s\times m},\qquad L=\begin{bmatrix}1&0&0\\0&0&0\\0&0&\theta^{2}\cdot10^{-4}\end{bmatrix}\in\mathbb{R}^{l\times n}.$$
The results in Table 1 show that Algorithms A and B can reliably estimate the condition numbers in the majority of instances. As stated in ([35], Chapter 15), an estimate of the condition number that is correct to within a factor of 10 is usually suitable because it is the magnitude of an error bound that is of interest rather than its precise value.
Figure 1 demonstrates that Algorithms A and B are very efficient in estimating the condition numbers of $K_{ML}$. To evaluate their efficiency, we generated 500 matrix triples with $m=300$, $n=150$, $s=100$, $l=200$, and $q=3$, with fixed $\theta=\frac{\pi}{6}$, and consider the following ratios:
$$r_{n}=\frac{n_{\mathrm{SCE}}}{n},\qquad r_{m}=\frac{m_{\mathrm{SCE}}}{m},\qquad r_{c}=\frac{c_{\mathrm{SCE}}}{c}.$$
Example 2.
The nonsymmetric Gaussian random Toeplitz matrix $L_{1}$ is constructed by the Matlab function toeplitz(c, r) with $c=\mathrm{randn}(s,1)$ and $r=\mathrm{randn}(s,1)$, and we set $L=[L_{1},0]$. Assume that $Y\in\mathbb{R}^{m\times n}$ and $U\in\mathbb{R}^{m\times n}$ are random orthogonal matrices and $\Gamma$ is a diagonal matrix with positive diagonal entries and a prescribed condition number. The matrices $K$ and $M$ are then given by
$$K=M^{1/2}Z\begin{bmatrix}\Upsilon\\0\end{bmatrix}Y^{T},\qquad Z=U\Gamma^{-1/2},\qquad M=U\Gamma U^{T},$$
where $\Upsilon=n^{-l}\,\operatorname{diag}(n^{l},(n-1)^{l},\ldots,1)$. The solution is $x=(1,2^{2},\ldots,n^{2})^{T}$, and the residual vector is $r=h-Kx=M^{1/2}Z\begin{bmatrix}0\\\upsilon\end{bmatrix}$, where $\upsilon\in\mathbb{R}^{m-n}$ is a random vector with prescribed norm and $h=M^{1/2}Z\begin{bmatrix}0\\\upsilon\end{bmatrix}+M^{1/2}Z\begin{bmatrix}\Upsilon\\0\end{bmatrix}Y^{T}x\neq0$. Here, we construct 200 random ML-WLS problems for each specified $\operatorname{cond}(K)=n^{l}$ to check the performance of Algorithm C.
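The role of $\Upsilon$ in this construction is to prescribe $\operatorname{cond}(K)=n^{l}$ exactly. A simplified, self-contained analogue (our naming; it omits the weight $M$ and the Toeplitz factor) generates a test matrix with that prescribed 2-norm condition number:

```python
import numpy as np

def random_orthogonal(k, rng):
    # QR of a Gaussian matrix; the sign fix makes the orthogonal factor Haar-distributed
    Q, R = np.linalg.qr(rng.standard_normal((k, k)))
    return Q * np.sign(np.diag(R))

def matrix_with_cond(m, n, l, seed=0):
    # m x n matrix with singular values n^{-l}(n^l, (n-1)^l, ..., 1),
    # hence cond_2(K) = n^l, mimicking Upsilon = n^{-l} diag(n^l, ..., 1)
    rng = np.random.default_rng(seed)
    U = random_orthogonal(m, rng)
    Y = random_orthogonal(n, rng)
    sing = np.arange(n, 0, -1, dtype=float)**l / float(n)**l
    S = np.zeros((m, n))
    np.fill_diagonal(S, sing)
    return U @ S @ Y.T
```

For instance, `matrix_with_cond(20, 10, 2)` has 2-norm condition number $10^{2}$ up to rounding, matching $\operatorname{cond}(K)=n^{l}$.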
Considering the results reported in Table 2, the mixed and componentwise condition numbers, rather than the normwise condition number, are more appropriate for describing the underlying conditioning of this ML-WLS problem. Furthermore, we observe that the SCE-based condition estimates computed by Algorithm C are accurate.
The ratios of the estimated to the exact condition numbers are
$$r_{n}=\frac{n_{\mathrm{SCE}}^{ML\text{-}WLS}}{n^{ML\text{-}WLS}},\qquad r_{m}=\frac{m_{\mathrm{SCE}}^{ML\text{-}WLS}}{m^{ML\text{-}WLS}},\qquad r_{c}=\frac{c_{\mathrm{SCE}}^{ML\text{-}WLS}}{c^{ML\text{-}WLS}}.$$
To determine the effectiveness of Algorithm C, we generated 500 random ML-WLS problems with $m=200$, $n=100$, $l=150$, and $s=50$, and used the parameter $t=2$. As seen in Figure 2, the normwise condition estimation $r_{n}$ is not as effective as the mixed condition estimation $r_{m}$ and the componentwise condition estimation $r_{c}$.
Example 3.
Consider random orthogonal matrices $U\in\mathbb{R}^{s\times s}$, $V\in\mathbb{R}^{l\times l}$, $Y\in\mathbb{R}^{m\times m}$, and $Z\in\mathbb{R}^{n\times n}$. The matrices $K$, $M$, and $L$ are then given by
$$K=Y\Lambda_{K}Z^{T},\qquad L=V\Lambda_{L}^{1/2}Z^{T},\qquad M=U\Lambda_{M}Y^{T},$$
with appropriate sizes, where $\Lambda_{K}\in\mathbb{R}^{m\times n}$, $\Lambda_{M}\in\mathbb{R}^{s\times m}$, and $\Lambda_{L}\in\mathbb{R}^{l\times n}$ are diagonal matrices whose diagonal elements are distributed exponentially from $\kappa_{K}^{-1}$ to 1. Furthermore, we take $x=(1,2^{2},\ldots,n^{2})^{T}$ as the solution and set $h=r+Kx$, where $r$ is a random vector with prescribed 2-norm.
For the perturbations, we generate
$$\Delta K=\varepsilon_{1}\,(E\odot K),\qquad \Delta M=\varepsilon_{1}\,(F\odot M),\qquad \Delta L=\varepsilon_{1}\,(G\odot L),\qquad \Delta h=\varepsilon_{1}\,(g\odot h).$$
In this example, the componentwise product of two matrices is indicated by ⊙, the random matrices $E$, $F$, $G$ and the vector $g$ have components uniformly distributed in the open interval $(-1,1)$, and $\varepsilon_{1}=10^{-7}$.
To evaluate the accuracy of the estimators, we define the over-estimation ratios
$$r_{n}^{\mathrm{over}}:=\frac{n_{\mathrm{SCE}}^{ML\text{-}WLS}\,\varepsilon_{1}}{\|\delta x\|_{2}/\|x\|_{2}},\qquad r_{m}^{\mathrm{over}}:=\frac{m_{\mathrm{SCE}}^{ML\text{-}WLS}\,\varepsilon_{1}}{\|\delta x\|_{\infty}/\|x\|_{\infty}},\qquad r_{c}^{\mathrm{over}}:=\frac{c_{\mathrm{SCE}}^{ML\text{-}WLS}\,\varepsilon_{1}}{\left\|\delta x/x\right\|_{\infty}}.$$
To check the performance of Algorithm C, we constructed 500 random ML-WLS problems with $m=350$, $s=50$, $l=250$, and $n=150$, used the parameter $t=3$, and recorded the outputs $n_{\mathrm{SCE}}^{ML\text{-}WLS}$, $m_{\mathrm{SCE}}^{ML\text{-}WLS}$, and $c_{\mathrm{SCE}}^{ML\text{-}WLS}$ of Algorithm C. Figure 3 illustrates that the mixed condition estimation $r_{m}^{\mathrm{over}}$ and the componentwise condition estimation $r_{c}^{\mathrm{over}}$ are more efficient than the normwise condition estimation $r_{n}^{\mathrm{over}}$, which tends to significantly overestimate the true relative normwise error.
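The over-estimation ratios compare the first-order prediction (condition number times $\varepsilon_{1}$) with the error actually observed. The following self-contained toy version (our construction, for a square linear system rather than the ML-WLS problem) illustrates the computation; since the condition number bounds the first-order error, the ratio should not fall far below 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps1 = 8, 1e-7
A = 10.0 * np.eye(n) + rng.standard_normal((n, n))  # well-conditioned by construction
x = np.arange(1.0, n + 1.0)**2                      # x = (1, 2^2, ..., n^2)^T, as in the text
b = A @ x

# componentwise perturbations Delta_A = eps1*(E ⊙ A), Delta_b = eps1*(g ⊙ b)
E = rng.uniform(-1.0, 1.0, (n, n))
g = rng.uniform(-1.0, 1.0, n)
x_pert = np.linalg.solve(A + eps1 * (E * A), b + eps1 * (g * b))

rel_err = np.linalg.norm(x_pert - x) / np.linalg.norm(x)
kappa = np.linalg.cond(A)            # exact normwise condition number of the system
r_over = kappa * eps1 / rel_err      # over-estimation ratio, analogous to r_n^over
```

When the exact normwise condition number is replaced by an SCE estimate, as in Algorithm C, the same ratio measures how safely the estimate bounds the observed error.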

6. Conclusions

This article presents explicit expressions and upper bounds for the normwise, mixed, and componentwise condition numbers of the ML-weighted pseudoinverse $K_{ML}$. As special cases, the corresponding results for the K-weighted pseudoinverse and the Moore-Penrose inverse are recovered. Additionally, we show how the condition numbers of the ML-weighted least squares solution can be derived from those of $K_{ML}$. We proposed three algorithms, based on the small-sample statistical condition estimation method, to efficiently estimate the normwise, mixed, and componentwise condition numbers of $K_{ML}$ and of the ML-weighted least squares solution. Finally, numerical results confirmed the efficacy and accuracy of the algorithms. In the future, we will continue our study of the ML-weighted pseudoinverse.

Author Contributions

Conceptualization, M.S. and X.Z.; Methodology, M.S.; Software, M.S.; Validation, H.X.; Formal analysis, X.Z. and H.X.; Investigation, H.X.; Resources, X.Z.; Data curation, X.Z.; Writing—original draft, M.S.; Supervision, X.Z.; Project administration, X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Zhejiang Normal University Postdoctoral Research Fund (Grant No. ZC304022938), the Natural Science Foundation of China (Project No. 61976196), and the Zhejiang Provincial Natural Science Foundation of China (Grant No. LZ22F030003).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Nashed, M.Z. (Ed.) Generalized Inverses and Applications: Proceedings of an Advanced Seminar Sponsored by the Mathematics Research Center, The University of Wisconsin-Madison, 8–10 October 1973; Academic Press: Cambridge, MA, USA, 1976.
  2. Qiao, S.; Wei, Y.; Zhang, X. Computing Time-Varying ML-Weighted Pseudoinverse by the Zhang Neural Networks. Numer. Funct. Anal. Optim. 2020, 41, 1672–1693.
  3. Eldén, L. A weighted pseudoinverse, generalized singular values, and constrained least squares problems. BIT 1982, 22, 487–502.
  4. Galba, E.F.; Neka, V.S.; Sergienko, I.V. Weighted pseudoinverses and weighted normal pseudosolutions with singular weights. Comput. Math. Math. Phys. 2009, 49, 1281–1363.
  5. Rubini, L.; Cancelliere, R.; Gallinari, P.; Grosso, A.; Raiti, A. Computational experience with pseudoinversion-based training of neural networks using random projection matrices. In Artificial Intelligence: Methodology, Systems, and Applications; Agre, G., Hitzler, P., Krisnadhi, A., Kuznetsov, S.O., Eds.; Springer International Publishing: New York, NY, USA, 2014; pp. 236–245.
  6. Wang, X.; Wei, Y.; Stanimirovic, P.S. Complex neural network models for time-varying Drazin inverse. Neural Comput. 2016, 28, 2790–2824.
  7. Zivkovic, I.S.; Stanimirovic, P.S.; Wei, Y. Recurrent neural network for computing outer inverse. Neural Comput. 2016, 28, 970–998.
  8. Wei, M.; Zhang, B. Structures and uniqueness conditions of MK-weighted pseudoinverses. BIT 1994, 34, 437–450.
  9. Eldén, L. Perturbation theory for the least squares problem with equality constraints. SIAM J. Numer. Anal. 1980, 17, 338–350.
  10. Björck, Å. Numerical Methods for Least Squares Problems; SIAM: Philadelphia, PA, USA, 1996.
  11. Wei, M. Perturbation theory for rank-deficient equality constrained least squares problem. SIAM J. Numer. Anal. 1992, 29, 1462–1481.
  12. Cox, A.J.; Higham, N.J. Accuracy and stability of the null space method for solving the equality constrained least squares problem. BIT 1999, 39, 34–50.
  13. Li, H.; Wang, S. Partial condition number for the equality constrained linear least squares problem. Calcolo 2017, 54, 1121–1146.
  14. Diao, H. Condition numbers for a linear function of the solution of the linear least squares problem with equality constraints. J. Comput. Appl. Math. 2018, 344, 640–656.
  15. Bojanczyk, A.; Higham, N.J.; Patel, H. The equality constrained indefinite least squares problem: Theory and algorithms. BIT Numer. Math. 2003, 43, 505–517.
  16. Cucker, F.; Diao, H.; Wei, Y. On mixed and componentwise condition numbers for Moore-Penrose inverse and linear least squares problems. Math. Comp. 2007, 76, 947–963.
  17. Wei, M. Algebraic properties of the rank-deficient equality-constrained and weighted least squares problem. Linear Algebra Appl. 1992, 161, 27–43.
  18. Gulliksson, M.E.; Wedin, P.A.; Wei, Y. Perturbation identities for regularized Tikhonov inverses and weighted pseudoinverses. BIT 2000, 40, 513–523.
  19. Samar, M.; Li, H.; Wei, Y. Condition numbers for the K-weighted pseudoinverse L_K and their statistical estimation. Linear Multilinear Algebra 2021, 69, 752–770.
  20. Samar, M.; Zhu, X.; Shakoor, A. Conditioning theory for generalized inverse C_A and their estimations. Mathematics 2023, 11, 2111.
  21. Rice, J. A theory of condition. SIAM J. Numer. Anal. 1966, 3, 287–310.
  22. Gohberg, I.; Koltracht, I. Mixed, componentwise, and structured condition numbers. SIAM J. Matrix Anal. Appl. 1993, 14, 688–704.
  23. Kenney, C.S.; Laub, A.J. Small-sample statistical condition estimates for general matrix functions. SIAM J. Sci. Comput. 1994, 15, 36–61.
  24. Xie, Z.; Li, W.; Jin, X. On condition numbers for the canonical generalized polar decomposition of real matrices. Electron. J. Linear Algebra 2013, 26, 842–857.
  25. Horn, R.A.; Johnson, C.R. Topics in Matrix Analysis; Cambridge University Press: New York, NY, USA, 1991.
  26. Magnus, J.R.; Neudecker, H. Matrix Differential Calculus with Applications in Statistics and Econometrics, 3rd ed.; John Wiley and Sons: Chichester, UK, 2007.
  27. Diao, H.; Xiang, H.; Wei, Y. Mixed, componentwise condition numbers and small sample statistical condition estimation of Sylvester equations. Numer. Linear Algebra Appl. 2012, 19, 639–654.
  28. Baboulin, M.; Gratton, S.; Lacroix, R.; Laub, A.J. Statistical estimates for the conditioning of linear least squares problems. Lect. Notes Comput. Sci. 2014, 8384, 124–133.
  29. Samar, M. A condition analysis of the constrained and weighted least squares problem using dual norms and their statistical estimation. Taiwan. J. Math. 2021, 25, 717–741.
  30. Li, H.; Wang, S. On the partial condition numbers for the indefinite least squares problem. Appl. Numer. Math. 2018, 123, 200–220.
  31. Samar, M.; Zhu, X. Structured conditioning theory for the total least squares problem with linear equality constraint and their estimation. AIMS Math. 2023, 8, 11350–11372.
  32. Diao, H.; Wei, Y.; Xie, P. Small sample statistical condition estimation for the total least squares problem. Numer. Algor. 2017, 75, 435–455.
  33. Samar, M.; Lin, F. Perturbation and condition numbers for the Tikhonov regularization of total least squares problem and their statistical estimation. J. Comput. Appl. Math. 2022, 411, 114230.
  34. Diao, H.; Wei, Y.; Qiao, S. Structured condition numbers of structured Tikhonov regularization problem and their estimations. J. Comput. Appl. Math. 2016, 308, 276–300.
  35. Higham, N.J. Accuracy and Stability of Numerical Algorithms, 2nd ed.; SIAM: Philadelphia, PA, USA, 2002.
Figure 1. Efficiency of condition estimators of Algorithms A and B.
Figure 2. Efficiency of condition estimators of Algorithm C.
Figure 3. Efficiency of over-estimation ratios of Algorithm C.
Table 1. The efficiency of statistical condition estimates by Algorithms A and B.
m, n, s, l | θ | n_SCE | n | n_u | m_SCE | m | m_u | c_SCE | c | c_u
20, 10, 5, 15 | π/3 | 1.1022e+01 | 2.7522e+01 | 5.7654e+02 | 3.1027e+00 | 5.3832e+00 | 5.2416e+01 | 4.2054e+00 | 5.4372e+00 | 6.2721e+01
20, 10, 5, 15 | π/4 | 2.2053e+01 | 3.2618e+01 | 7.1865e+02 | 2.3128e+00 | 3.4752e+00 | 2.3096e+01 | 2.5062e+00 | 3.6543e+00 | 3.6092e+01
20, 10, 5, 15 | π/5 | 3.1054e+01 | 4.2065e+01 | 2.6211e+03 | 2.5201e+00 | 3.7032e+00 | 3.1965e+02 | 2.7084e+00 | 3.8033e+00 | 4.0544e+02
20, 10, 5, 15 | π/6 | 3.1054e+01 | 4.2065e+01 | 2.6211e+03 | 2.5201e+00 | 3.7032e+00 | 3.1965e+02 | 2.7084e+00 | 3.8033e+00 | 4.0544e+02
60, 40, 30, 50 | π/3 | 4.4034e+01 | 5.6652e+01 | 5.1977e+03 | 3.8402e+00 | 6.0328e+00 | 8.6047e+01 | 5.1054e+00 | 7.3590e+00 | 8.9076e+01
60, 40, 30, 50 | π/4 | 4.7901e+01 | 5.9084e+01 | 7.4710e+03 | 2.8033e+00 | 3.2228e+00 | 6.3722e+01 | 3.1560e+00 | 4.3805e+00 | 7.6943e+01
60, 40, 30, 50 | π/5 | 2.1642e+02 | 3.7611e+02 | 1.3179e+04 | 2.8764e+00 | 4.3502e+00 | 4.0644e+02 | 4.1232e+00 | 5.4653e+00 | 5.3772e+02
60, 40, 30, 50 | π/6 | 2.1642e+02 | 3.7611e+02 | 1.3179e+04 | 2.8764e+00 | 4.3502e+00 | 4.0644e+02 | 4.1232e+00 | 5.4653e+00 | 5.3772e+02
100, 60, 40, 80 | π/3 | 1.6543e+02 | 2.4638e+02 | 3.0643e+04 | 4.3222e+00 | 6.1108e+00 | 4.0644e+02 | 5.7642e+00 | 7.4544e+00 | 5.3632e+02
100, 60, 40, 80 | π/4 | 1.2324e+02 | 2.3207e+02 | 4.2501e+04 | 3.6233e+00 | 5.2326e+00 | 2.5489e+02 | 5.3562e+00 | 6.6533e+00 | 3.6471e+02
100, 60, 40, 80 | π/5 | 2.5434e+02 | 3.2455e+02 | 6.5731e+04 | 3.2064e+00 | 4.5211e+00 | 5.7654e+02 | 4.8659e+00 | 5.7532e+00 | 6.5703e+02
100, 60, 40, 80 | π/6 | 2.5434e+02 | 3.2455e+02 | 6.5731e+04 | 3.2064e+00 | 4.5211e+00 | 5.7654e+02 | 4.8659e+00 | 5.7532e+00 | 6.5703e+02
200, 100, 50, 150 | π/3 | 2.0331e+02 | 4.2224e+02 | 7.1023e+04 | 4.7532e+00 | 7.0665e+00 | 3.2052e+03 | 6.7051e+00 | 7.8066e+00 | 4.6281e+03
200, 100, 50, 150 | π/4 | 2.2053e+02 | 3.2618e+02 | 7.4533e+04 | 4.1054e+00 | 6.2350e+00 | 1.2411e+03 | 6.0462e+00 | 7.1102e+00 | 2.7403e+03
200, 100, 50, 150 | π/5 | 4.1326e+02 | 6.7651e+02 | 9.2016e+04 | 3.6325e+00 | 5.3824e+00 | 4.5341e+03 | 5.1632e+00 | 6.5032e+00 | 7.2305e+03
200, 100, 50, 150 | π/6 | 4.1326e+02 | 6.7651e+02 | 9.2016e+04 | 3.6325e+00 | 5.3824e+00 | 4.5341e+03 | 5.1632e+00 | 6.5032e+00 | 7.2305e+03
Table 2. The efficiency of statistical condition estimates by Algorithm C.
m, n, s, l | n^l | n_SCE^{ML-WLS} | n^{ML-WLS} | n_upper | m_SCE^{ML-WLS} | m^{ML-WLS} | m_upper | c_SCE^{ML-WLS} | c^{ML-WLS} | c_upper
30, 20, 10, 15 | 10^0 | 1.5301e+01 | 3.3711e+01 | 4.3502e+03 | 3.1081e+00 | 4.5121e+00 | 1.7609e+02 | 4.1428e+00 | 5.1213e+00 | 2.3461e+02
30, 20, 10, 15 | 10^1 | 3.7103e+03 | 4.1046e+03 | 1.2353e+05 | 4.1311e+00 | 5.4115e+00 | 3.0554e+02 | 5.4401e+00 | 6.5041e+00 | 4.5530e+02
30, 20, 10, 15 | 10^2 | 4.0511e+03 | 5.6105e+03 | 1.8619e+05 | 4.1781e+00 | 5.5733e+00 | 3.6102e+02 | 5.6505e+00 | 6.7504e+00 | 4.7082e+02
30, 20, 10, 15 | 10^4 | 5.0171e+03 | 6.4115e+03 | 5.4632e+05 | 5.3161e+00 | 6.4132e+00 | 4.3011e+02 | 6.1865e+00 | 7.1805e+00 | 5.0122e+02
30, 20, 10, 15 | 10^7 | 6.3304e+04 | 7.8651e+04 | 5.7011e+05 | 6.6701e+00 | 7.3101e+00 | 4.6750e+02 | 7.5311e+00 | 7.6541e+00 | 5.3443e+02
90, 60, 30, 45 | 10^0 | 4.0314e+01 | 5.1011e+01 | 5.0754e+03 | 3.1011e+00 | 4.1041e+00 | 6.3560e+02 | 4.1566e+00 | 5.1108e+00 | 9.8642e+02
90, 60, 30, 45 | 10^1 | 1.1135e+03 | 3.1103e+03 | 3.1398e+05 | 3.3671e+00 | 4.5781e+00 | 1.4567e+03 | 4.1401e+00 | 5.0713e+00 | 3.5567e+03
90, 60, 30, 45 | 10^2 | 4.5311e+03 | 5.7611e+03 | 3.5743e+05 | 3.4122e+00 | 4.7551e+00 | 1.6091e+03 | 4.4311e+00 | 5.7141e+00 | 3.8110e+03
90, 60, 30, 45 | 10^4 | 6.1351e+03 | 7.3450e+03 | 6.5865e+05 | 4.0167e+00 | 5.3502e+00 | 2.4113e+03 | 5.1104e+00 | 6.0511e+00 | 4.5225e+03
90, 60, 30, 45 | 10^7 | 3.6111e+04 | 4.7661e+04 | 6.8952e+05 | 4.6311e+00 | 5.6215e+00 | 2.7840e+03 | 5.3054e+00 | 6.4403e+00 | 4.8203e+03
120, 80, 40, 60 | 10^0 | 3.3401e+01 | 4.5611e+01 | 7.1209e+03 | 1.7101e+00 | 3.8115e+00 | 8.3411e+02 | 3.6411e+00 | 4.7110e+00 | 9.7438e+03
120, 80, 40, 60 | 10^1 | 1.7611e+03 | 1.1403e+03 | 6.3689e+05 | 1.4471e+00 | 1.5171e+00 | 3.4229e+03 | 1.3544e+00 | 3.4811e+00 | 6.0431e+03
120, 80, 40, 60 | 10^2 | 3.4511e+03 | 5.0411e+03 | 6.7754e+05 | 1.6331e+00 | 1.7813e+00 | 3.9810e+03 | 3.5886e+00 | 4.7805e+00 | 6.4332e+03
120, 80, 40, 60 | 10^4 | 5.1014e+03 | 6.1331e+03 | 8.2306e+05 | 1.8041e+00 | 1.8866e+00 | 5.4240e+03 | 4.0113e+00 | 5.1531e+00 | 7.3552e+03
120, 80, 40, 60 | 10^7 | 1.7411e+04 | 3.3811e+04 | 8.6435e+05 | 3.7316e+00 | 4.8031e+00 | 5.6708e+03 | 4.1108e+00 | 5.8611e+00 | 7.6622e+03
150, 100, 50, 75 | 10^0 | 3.1517e+01 | 4.3401e+01 | 9.0654e+03 | 1.5113e+00 | 3.6305e+00 | 4.7622e+03 | 3.1077e+00 | 4.1765e+00 | 5.1108e+03
150, 100, 50, 75 | 10^1 | 1.6311e+03 | 1.1451e+03 | 8.6422e+05 | 1.1411e+00 | 1.4134e+00 | 6.3005e+03 | 1.1711e+00 | 3.3086e+00 | 8.5994e+03
150, 100, 50, 75 | 10^2 | 3.0558e+03 | 4.7550e+03 | 8.8043e+05 | 1.4770e+00 | 1.6511e+00 | 6.9021e+03 | 3.1341e+00 | 4.5311e+00 | 8.7043e+03
150, 100, 50, 75 | 10^4 | 5.0141e+03 | 5.8301e+03 | 9.4660e+05 | 1.0111e+00 | 1.7001e+00 | 8.2765e+03 | 3.7661e+00 | 5.0111e+00 | 9.0492e+03
150, 100, 50, 75 | 10^7 | 1.5431e+04 | 3.0801e+04 | 9.7034e+05 | 3.1314e+00 | 4.6441e+00 | 8.8211e+03 | 4.1055e+00 | 5.7101e+00 | 9.8955e+03

Samar, M.; Zhu, X.; Xu, H. Conditioning Theory for ML-Weighted Pseudoinverse and ML-Weighted Least Squares Problem. Axioms 2024, 13, 345. https://doi.org/10.3390/axioms13060345
