Article

On Matrices with Only One Non-SDD Row

by Ksenija Doroslovački *,† and Dragana Cvetković †
Faculty of Technical Sciences, University of Novi Sad, Trg Dositeja Obradovića 6, 21000 Novi Sad, Serbia
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2023, 11(10), 2382; https://doi.org/10.3390/math11102382
Submission received: 19 April 2023 / Revised: 15 May 2023 / Accepted: 16 May 2023 / Published: 20 May 2023
(This article belongs to the Section Computational and Applied Mathematics)

Abstract

The class of H-matrices, also known as generalized diagonally dominant (GDD) matrices, plays an important role in many areas of applied linear algebra, as well as in a wide range of applications, such as the dynamical analysis of complex networks arising in ecology, epidemiology, infectology, neurology, engineering, economy, opinion dynamics, and many other fields. To conclude that a particular dynamical system is (locally) stable, it is sufficient to prove that the corresponding (Jacobian) matrix is an H-matrix with negative diagonal entries. In practice, however, it is very difficult to determine whether a matrix is a non-singular H-matrix or not, so it is valuable to investigate subclasses of H-matrices defined by relatively simple and practical criteria. Many subclasses of H-matrices have recently been discussed in detail, demonstrating the many benefits they can provide, though one particular subclass has not been fully exploited until now. The aim of this paper is to attract attention to this class and discuss its relation to other, more investigated classes, while showing its main advantage, based on its simplicity and elegance. The new approach presented in this paper will be compared with the existing ones in three possible areas of application: spectrum localization; maximum norm estimation of the inverse matrix, in the point case as well as the block case; and error estimation for linear complementarity problems (LCPs). The main conclusion is that the importance of our approach grows with the matrix dimension.

1. Introduction

The class of generalized diagonally dominant (GDD) matrices, also known as the H-matrix class, as well as its various subclasses, attracts significant scientific attention due to its central subclass, the class of strictly diagonally dominant (SDD) matrices, which has proved very important and productive in many fields of applied linear algebra, for example:
  • The famous Geršgorin theorem [1] is, in fact, equivalent to the non-singularity result for SDD matrices [2];
  • For a given SDD matrix, an infinity norm estimate for its inverse is easily obtained from Varah's theorem [3];
  • For an arbitrary matrix, its ε-pseudospectrum can be easily localized by the ε-pseudo-Geršgorin set [4];
  • If the (Jacobian) matrix of a particular (non-linear) dynamical system is an SDD matrix with negative diagonal entries, then this dynamical system is (locally) stable;
  • The error bound for linear complementarity problems can be easily calculated for the class of SDD matrices [5];
  • The Schur complement of an SDD matrix is SDD itself [6].
All of these applications are simple and elegant, as they only require simple operations with matrix entries. In comparison, even if we manage to find an analogous application for GDD matrices (such as the minimal Geršgorin set [2] for the first item), it is neither simple nor elegant. For this reason, classes between the SDD and GDD classes, i.e., various H-matrix subclasses, became relevant. Many such subclasses have been discovered over the years, such as doubly strictly diagonally dominant (DSDD) matrices, also known as Ostrowski matrices [7,8,9], Dashnic–Zusmanovich matrices [10], Dashnic–Zusmanovich type matrices [11], S-SDD or CKV matrices [12,13], CKV-type matrices [14], Nekrasov matrices [15,16,17], $SDD_1$ matrices [18,19], etc. Various problems related to these classes have been studied, including eigenvalue localizations [2,7], pseudospectra localizations [4], infinity norm estimations of the inverse [18,20,21,22,23,24,25,26,27], Schur complement problems [28,29,30,31], error bounds for linear complementarity problems [32,33,34,35,36], etc. Furthermore, introducing new H-matrix subclasses can provide more benefits for the SDD class itself. For example, the infinity norm estimate for the inverse of Nekrasov matrices, see [17,21], can be used for SDD matrices as well (since the SDD class is a subset of the Nekrasov class), and produces a better estimate than Varah's. This idea has been particularly essential for estimating the infinity norm of the iteration matrix of a parallel-in-time iterative algorithm for Volterra partial integro-differential problems [37], and for analyzing the preconditioning technique for an all-at-once system from Volterra subdiffusion equations [38]. Of course, the wider the matrix class under consideration, the more computational effort is needed to obtain the corresponding result.
On the other hand, in many applications, such as in ecology, the matrix under consideration is almost an SDD matrix, meaning that only one of its rows fails to be strictly diagonally dominant. Such matrices might belong to the Ostrowski or Dashnic–Zusmanovich classes, which have been widely investigated in recent years [8,23,29,31,39,40]. However, if we focus on matrices with only one non-SDD row which are not Ostrowski matrices, we can find a simpler condition than the Dashnic–Zusmanovich one that provides the H-matrix property. This condition can be derived from a non-singular class of matrices from [41] which, to the authors' knowledge, has not been fully exploited up to now. In this paper, we will present a new approach to this class, show how many benefits it brings in the case of matrices with only one non-SDD row, and compare our results with the existing ones in three possible areas of application: eigenvalue localization; maximum norm estimation of the inverse matrix, in the point case as well as the block case; and error estimation for LCP problems.
Throughout this paper, the usual notations will be used:
$$N := \{1, 2, \dots, n\},$$
$$r_i(A) := \sum_{j \neq i} |a_{ij}|, \qquad s_i(A) := \max_{j \in N \setminus \{i\}} |a_{ji}|, \qquad i \in N.$$
In [41], the following non-singularity result was proved.
Theorem 1.
If a matrix $A = [a_{ij}] \in \mathbb{C}^{n,n}$, $n \geq 2$, satisfies the conditions
$$|a_{ii}| > \min\big\{r_i(A), s_i(A)\big\}, \quad i \in N, \tag{1}$$
and
$$|a_{ii}| + |a_{jj}| > r_i(A) + r_j(A), \quad i, j \in N, \ i \neq j, \tag{2}$$
then A is non-singular.
The matrix class described by conditions (1) and (2) will be called the Nearly-SDD class. To the authors' knowledge, its relationship with other known non-singular classes has not been established yet. Hence, first of all, we will discuss these relationships, and then present some advantages of using the Nearly-SDD class in applications.
A closer look at conditions (1) and (2) leads to the conclusion that all matrices from the Nearly-SDD class have at most one non-SDD row, i.e., such a matrix is either an SDD (strictly diagonally dominant) matrix, meaning that
$$|a_{ii}| > r_i(A), \quad i \in N,$$
or there is exactly one index $k \in N$ such that
$$s_k(A) < |a_{kk}| \leq r_k(A),$$
and
$$|a_{jj}| > r_j(A) + r_k(A) - |a_{kk}|, \quad j \in N \setminus \{k\}.$$
Hence, Theorem 1 has the following corollary.
Corollary 1.
If for a matrix $A = [a_{ij}] \in \mathbb{C}^{n,n}$, $n \geq 2$, there exists an index k such that
$$s_k(A) < |a_{kk}| \leq r_k(A),$$
and for all $j \neq k$
$$|a_{jj}| > r_j(A) + r_k(A) - |a_{kk}|,$$
then A is non-singular.
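For concreteness, conditions (1) and (2) are straightforward to test numerically. The following is a minimal sketch (ours, not from [41]) in Python with NumPy; the function names are our own. Condition (2) is checked by noting that it is equivalent to the sum of the two smallest values of $|a_{ii}| - r_i(A)$ being positive.

```python
import numpy as np

def r_vec(A):
    """r_i(A): sum of moduli of the off-diagonal entries of row i."""
    B = np.abs(np.asarray(A, dtype=complex))   # real-valued array of moduli
    return B.sum(axis=1) - np.diag(B)

def s_vec(A):
    """s_i(A): largest modulus of the off-diagonal entries of column i."""
    B = np.abs(np.asarray(A, dtype=complex))
    np.fill_diagonal(B, -np.inf)               # exclude the diagonal entry
    return B.max(axis=0)

def is_nearly_sdd(A):
    """Test conditions (1) and (2) of Theorem 1."""
    d = np.abs(np.diag(np.asarray(A, dtype=complex)))
    r, s = r_vec(A), s_vec(A)
    cond1 = np.all(d > np.minimum(r, s))       # condition (1)
    cond2 = np.sort(d - r)[:2].sum() > 0       # condition (2) for the worst pair
    return bool(cond1 and cond2)
```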
This particular situation, when a given matrix has at most one non-SDD row, also occurs for two well-known subclasses of H-matrices: Ostrowski (or doubly diagonally dominant) matrices [9], defined by the condition
$$|a_{ii}|\,|a_{jj}| > r_i(A)\,r_j(A), \quad i, j \in N, \ i \neq j, \tag{3}$$
and Dashnic–Zusmanovich (DZ) matrices [10], defined by the condition that there exists an index $i \in N$ such that
$$|a_{ii}|\big(|a_{jj}| - r_j(A) + |a_{ji}|\big) > r_i(A)\,|a_{ji}|, \quad j \in N \setminus \{i\}. \tag{4}$$
So, a natural question immediately arises: what is the relation between these classes?
Recently, in [18,19], another H-matrix subclass, which treats SDD and non-SDD rows differently, has been proposed. Let $N_1(A)$ be the subset of indices that correspond to non-SDD rows, i.e.,
$$N_1(A) := \big\{ i \in N : |a_{ii}| \leq r_i(A) \big\},$$
and
$$p_i(A) := \sum_{j \in N_1(A) \setminus \{i\}} |a_{ij}| + \sum_{j \notin N_1(A) \cup \{i\}} \frac{r_j(A)}{|a_{jj}|}\,|a_{ij}|.$$
We say that a matrix A is an $SDD_1$ matrix if
$$|a_{ii}| > p_i(A), \quad i \in N_1(A).$$
Among $SDD_1$ matrices one can obviously find matrices with only one non-SDD row (for such matrices $N_1(A)$ is a singleton), so we will compare the Nearly-SDD class with the $SDD_1$ class, too. By convention, we will assume that SDD matrices (the case $N_1(A) = \emptyset$) do belong to the $SDD_1$ class.
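The $SDD_1$ condition is equally easy to test; a minimal sketch (ours), following the definition of $p_i(A)$ above:

```python
import numpy as np

def is_sdd1(A):
    """Test |a_ii| > p_i(A) for all i in N_1(A); SDD matrices (N_1 empty) pass."""
    B = np.abs(np.asarray(A, dtype=complex))
    d = np.diag(B)
    r = B.sum(axis=1) - d
    in_n1 = d <= r                                   # indicator of N_1(A)
    # weight of column j: 1 if j is a non-SDD row, r_j/|a_jj| if row j is SDD
    w = np.where(in_n1, 1.0, r / np.where(d > 0, d, 1.0))
    for i in np.where(in_n1)[0]:
        p_i = sum(w[j] * B[i, j] for j in range(len(d)) if j != i)
        if d[i] <= p_i:
            return False
    return True
```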

2. The Relation between Classes

Figure 1 illustrates the relationship between Nearly-SDD, Ostrowski, Dashnic–Zusmanovich, and SDD 1 classes.
In order to confirm this, we will present the following:
1* An example (matrix $A_1$) belonging to Nearly-SDD, but not to Ostrowski;
2* An example (matrix $A_2$) belonging to Ostrowski, but not to Nearly-SDD;
3* An example (matrix $A_3$) belonging to DZ, but neither to Ostrowski nor to Nearly-SDD;
4* An example (matrix $A_4$) belonging to Nearly-SDD, but not to $SDD_1$;
5* A proof that every Nearly-SDD matrix is a DZ matrix, too.
The other relations presented in the figure are obvious.
The following examples arise in ecological modelling, more precisely in the generalized Lotka–Volterra equations representing energy flow in complex food webs; the matrices $A_1$, $A_2$, $A_3$, and $A_4$ are community matrices of such models. (All of the class memberships claimed below can also be re-checked numerically; see the sketch after item 5*.)
1* The matrix
$$A_1 = \begin{pmatrix}
78 & 7 & 10 & 5 & 10 & 8 & 3 & 5 & 1 & 7 \\
4 & 95 & 9 & 3 & 3 & 5 & 3 & 5 & 5 & 2 \\
4 & 4 & 58 & 6 & 8 & 1 & 3 & 4 & 4 & 6 \\
9 & 7 & 8 & 87 & 2 & 5 & 6 & 8 & 8 & 3 \\
0 & 2 & 4 & 2 & 90 & 1 & 7 & 2 & 6 & 7 \\
10 & 9 & 3 & 6 & 3 & 80 & 8 & 2 & 6 & 9 \\
9 & 6 & 5 & 4 & 8 & 7 & 86 & 7 & 1 & 36 \\
7 & 9 & 1 & 7 & 7 & 3 & 4 & 93 & 11 & 3 \\
5 & 8 & 4 & 4 & 3 & 5 & 6 & 4 & 47 & 5 \\
4 & 2 & 2 & 10 & 9 & 9 & 2 & 10 & 3 & 49
\end{pmatrix}$$
is a Nearly-SDD matrix (for k = 10), but it is not an Ostrowski matrix, since
$$|a_{7,7}|\,|a_{10,10}| = 4214 < 4233 = r_7(A)\,r_{10}(A).$$
2* The matrix
$$A_2 = \begin{pmatrix}
78 & 7 & 10 & 5 & 10 & 8 & 3 & 5 & 1 & 7 \\
4 & 95 & 9 & 3 & 3 & 5 & 3 & 5 & 5 & 2 \\
4 & 4 & 58 & 6 & 8 & 1 & 3 & 4 & 4 & 6 \\
9 & 7 & 8 & 87 & 2 & 5 & 6 & 8 & 8 & 3 \\
0 & 2 & 4 & 2 & 90 & 1 & 7 & 2 & 6 & 7 \\
10 & 9 & 3 & 6 & 3 & 80 & 8 & 2 & 6 & 9 \\
9 & 6 & 5 & 4 & 8 & 7 & 86 & 7 & 1 & 35 \\
7 & 9 & 1 & 7 & 7 & 3 & 4 & 93 & 11 & 3 \\
5 & 8 & 4 & 4 & 3 & 5 & 6 & 4 & 47 & 6 \\
4 & 2 & 2 & 10 & 9 & 9 & 2 & 10 & 3 & 49
\end{pmatrix}$$
is an Ostrowski matrix, but it is not a Nearly-SDD matrix, since the only non-SDD row is k = 10, and
$$|a_{9,9}| + |a_{10,10}| = 96 = r_9(A) + r_{10}(A).$$
3* The matrix
$$A_3 = \begin{pmatrix}
78 & 7 & 10 & 5 & 10 & 8 & 3 & 5 & 1 & 7 \\
4 & 95 & 9 & 3 & 3 & 5 & 3 & 5 & 5 & 2 \\
4 & 4 & 58 & 6 & 8 & 1 & 3 & 4 & 4 & 6 \\
9 & 7 & 8 & 87 & 2 & 5 & 6 & 8 & 8 & 3 \\
0 & 2 & 4 & 2 & 90 & 1 & 7 & 2 & 6 & 7 \\
10 & 9 & 3 & 6 & 3 & 80 & 8 & 2 & 6 & 9 \\
9 & 6 & 5 & 4 & 8 & 7 & 86 & 7 & 1 & 37 \\
7 & 9 & 1 & 7 & 7 & 3 & 4 & 93 & 11 & 3 \\
5 & 8 & 4 & 4 & 3 & 5 & 6 & 4 & 47 & 6 \\
4 & 2 & 2 & 10 & 9 & 9 & 2 & 10 & 3 & 49
\end{pmatrix}$$
is a DZ matrix, but it is neither an Ostrowski matrix, because
$$|a_{7,7}|\,|a_{10,10}| = 4214 < 4284 = r_7(A)\,r_{10}(A),$$
nor a Nearly-SDD matrix, since the only non-SDD row is k = 10, and
$$|a_{7,7}| + |a_{10,10}| = 135 = r_7(A) + r_{10}(A).$$
4* The matrix
$$A_4 = \begin{pmatrix}
59 & 7 & 10 & 5 & 10 & 8 & 3 & 5 & 1 & 7 \\
4 & 42 & 9 & 3 & 3 & 5 & 3 & 5 & 5 & 2 \\
4 & 4 & 43 & 6 & 8 & 1 & 3 & 4 & 4 & 6 \\
9 & 7 & 8 & 59 & 2 & 5 & 6 & 8 & 8 & 3 \\
0 & 2 & 4 & 2 & 34 & 1 & 7 & 2 & 6 & 7 \\
10 & 9 & 3 & 6 & 3 & 59 & 8 & 2 & 6 & 9 \\
9 & 6 & 5 & 4 & 8 & 7 & 56 & 7 & 1 & 6 \\
7 & 9 & 1 & 7 & 7 & 3 & 4 & 55 & 11 & 3 \\
5 & 8 & 4 & 4 & 3 & 5 & 6 & 4 & 47 & 5 \\
2 & 2 & 2 & 2 & 1 & 1 & 1 & 1 & 1 & 11
\end{pmatrix}$$
is a Nearly-SDD matrix (for k = 10), but it is not an $SDD_1$ matrix, since
$$|a_{10,10}| = 11 < 12.2032 \approx p_{10}(A).$$
5* Proof. 
Assume that A belongs to the Nearly-SDD class. If A is an SDD matrix, the conclusion follows immediately. Hence, suppose that there exists an index k such that
$$s_k(A) < |a_{kk}| \leq r_k(A),$$
and for all other indices $j \neq k$ it holds that
$$|a_{jj}| > r_j(A) + r_k(A) - |a_{kk}|.$$
Then,
$$|a_{kk}|\,|a_{jj}| > |a_{kk}|\big(r_j(A) + r_k(A) - |a_{kk}|\big) \geq |a_{kk}|\,r_j(A) + s_k(A)\big(r_k(A) - |a_{kk}|\big) \geq |a_{kk}|\,r_j(A) + |a_{jk}|\big(r_k(A) - |a_{kk}|\big),$$
or, equivalently,
$$|a_{kk}|\big(|a_{jj}| - r_j(A) + |a_{jk}|\big) > |a_{jk}|\,r_k(A),$$
holds for all $j \neq k$, which means that A belongs to the DZ class. □
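The membership claims in items 1*–4* can be re-checked numerically with `is_nearly_sdd` and `is_sdd1` from above, together with direct tests of conditions (3) and (4); a minimal sketch (ours):

```python
import numpy as np

def is_ostrowski(A):
    """Condition (3): |a_ii||a_jj| > r_i(A) r_j(A) for all i != j."""
    B = np.abs(np.asarray(A, dtype=float))
    d, n = np.diag(B), len(B)
    r = B.sum(axis=1) - d
    return all(d[i] * d[j] > r[i] * r[j]
               for i in range(n) for j in range(n) if i != j)

def is_dz(A):
    """Condition (4): for some i, |a_ii|(|a_jj| - r_j + |a_ji|) > r_i |a_ji| for all j != i."""
    B = np.abs(np.asarray(A, dtype=float))
    d, n = np.diag(B), len(B)
    r = B.sum(axis=1) - d
    return any(all(d[i] * (d[j] - r[j] + B[j, i]) > r[i] * B[j, i]
                   for j in range(n) if j != i)
               for i in range(n))

# e.g., with A1 entered as above: is_nearly_sdd(A1) -> True, is_ostrowski(A1) -> False
```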

3. Scaling Technique for Nearly-SDD Class

It is well known that every H-matrix can be diagonally scaled, from the right-hand side, to an SDD matrix, i.e., there exists a positive diagonal matrix W such that AW is an SDD matrix. The class we are dealing with in this paper, the Nearly-SDD class, is a subclass of the H-matrices, which we now confirm by constructing such a scaling matrix W.
First of all, if A is an SDD matrix itself, then there is nothing to prove, since the role of the scaling matrix is played by the identity matrix.
So, assume that A belongs to the Nearly-SDD class, and that k is the index of the only non-SDD row:
$$s_k(A) < |a_{kk}| \leq r_k(A). \tag{6}$$
Note that for all other indices $j \neq k$ it holds that
$$|a_{jj}| > r_j(A) + r_k(A) - |a_{kk}|. \tag{7}$$
We choose the scaling matrix W to have the following form:
$$W = \mathrm{diag}(w_1, w_2, \dots, w_n), \quad \text{where } w_i = \begin{cases} \gamma, & i = k, \\ 1, & \text{otherwise}. \end{cases}$$
In order to ensure that AW is an SDD matrix, we have to choose γ such that
$$|a_{kk}|\,\gamma > r_k(A),$$
and
$$|a_{jj}| > r_j(A) + (\gamma - 1)\,|a_{jk}|, \quad j \in N \setminus \{k\}. \tag{9}$$
Since (9) is automatically fulfilled for all j for which $a_{jk} = 0$, and since $|a_{kk}| > 0$, γ has to be chosen from the following interval:
$$G_1 := \frac{r_k(A)}{|a_{kk}|} < \gamma < 1 + \min_{j \in N \setminus \{k\},\ a_{jk} \neq 0} \frac{|a_{jj}| - r_j(A)}{|a_{jk}|} =: G_2. \tag{10}$$
This interval is not empty because, from (6) and (7), we have
$$\big(r_k(A) - |a_{kk}|\big)\,|a_{jk}| < |a_{kk}|\,\big(|a_{jj}| - r_j(A)\big),$$
or, equivalently,
$$r_k(A)\,|a_{jk}| < |a_{kk}|\,\big(|a_{jj}| - r_j(A) + |a_{jk}|\big).$$
Note that we can always choose γ from the interval
$$\frac{r_k(A)}{|a_{kk}|} < \gamma < 1 + \frac{\Delta(A)}{s_k(A)}, \tag{11}$$
where
$$\Delta(A) := \min_{j \in N \setminus \{k\}} \big(|a_{jj}| - r_j(A)\big),$$
which is a bit smaller than (10), but very easy to calculate. It is also non-empty, because of (6) and (7), rewritten as
$$s_k(A) < |a_{kk}| \leq r_k(A), \qquad r_k(A) - |a_{kk}| < \Delta(A),$$
so that
$$s_k(A)\big(r_k(A) - |a_{kk}|\big) < |a_{kk}|\,\Delta(A), \quad \text{i.e.,} \quad r_k(A)\,s_k(A) < |a_{kk}|\,s_k(A) + |a_{kk}|\,\Delta(A). \tag{12}$$
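The scaling construction is explicit enough to implement directly; a minimal sketch (ours), which picks the midpoint of the interval (11) and assumes $s_k(A) > 0$ (if $s_k(A) = 0$, any $\gamma > r_k(A)/|a_{kk}|$ works):

```python
import numpy as np

def nearly_sdd_scaling(A, k):
    """W = diag(1,...,gamma,...,1) with gamma from interval (11); k is the
    0-based index of the only non-SDD row, so that A @ W becomes SDD."""
    B = np.abs(np.asarray(A, dtype=complex))
    d = np.diag(B)
    r = B.sum(axis=1) - d
    s_k = np.delete(B[:, k], k).max()
    delta = np.delete(d - r, k).min()            # Delta(A)
    lo, hi = r[k] / d[k], 1.0 + delta / s_k      # interval (11)
    gamma = 0.5 * (lo + hi)                      # any point of (lo, hi) will do
    w = np.ones(len(d)); w[k] = gamma
    return np.diag(w)

# sanity check: after scaling, every row of A @ W is strictly dominant
```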
Remark 1.
The form of the scaling matrix also proves that the Nearly-SDD class is contained in the DZ class (which is fully characterized by scaling matrices of this form, see [42]).
Remark 2.
Although the Nearly-SDD class is a subset of the DZ class, it is worth discussing its applications, since its definition, in the form of (6) and (7) (which covers all Nearly-SDD matrices except the SDD ones), suggests that we can expect computationally more efficient eigenvalue localizations, upper bounds for the inverse, etc., compared to those obtained for wider classes of matrices.
Remark 3.
Obviously, the Nearly-SDD class is a subclass of the non-singular H-matrices.

4. Possible Applications

4.1. A New Eigenvalue Localization Set

It is well known that every subclass of H-matrices can lead to an eigenvalue localization set, see [2,7]. For example, the condition defining SDD matrices leads to the well-known Geršgorin set [1], and the condition defining DZ matrices leads to the DZ eigenvalue localization set, a special case of the eigenvalue localization set from [12]. Here, we reformulate Theorem 6 from [12] for the case $S = \{k\}$, which corresponds to the DZ class. As usual, by σ(A) we denote the spectrum of A.
Theorem 2
([12], corollary of Theorem 6). Let $k \in N$ be an arbitrary index, and let $n \geq 2$. For a given matrix $A = [a_{ij}] \in \mathbb{C}^{n,n}$ and all $j \in N \setminus \{k\}$, define the sets
$$V_{kj}(A) := \Big\{ z \in \mathbb{C} : \ |z - a_{kk}|\,\big(|z - a_{jj}| - r_j(A) + |a_{jk}|\big) \leq r_k(A)\,|a_{jk}| \Big\}.$$
Then
$$\sigma(A) \subseteq \mathcal{C}^{\{k\}}(A) := \bigcup_{j \in N \setminus \{k\}} V_{kj}(A).$$
Furthermore,
$$\sigma(A) \subseteq \mathcal{DZ}(A) := \bigcap_{k \in N} \mathcal{C}^{\{k\}}(A).$$
Analogously, here, by the use of Theorem 1, we obtain the following eigenvalue localization result.
Theorem 3.
For a given matrix $A = [a_{ij}] \in \mathbb{C}^{n,n}$, $n \geq 2$, define, for $i \in N$, the Geršgorin-type disks
$$\Gamma_i(A) := \big\{ z \in \mathbb{C} : \ |z - a_{ii}| \leq \min\{r_i(A), s_i(A)\} \big\},$$
and, for all $i, j \in N$, $j \neq i$, the ellipses
$$P_{ij}(A) := \big\{ z \in \mathbb{C} : \ |z - a_{ii}| + |z - a_{jj}| \leq r_i(A) + r_j(A) \big\}.$$
Then
$$\sigma(A) \subseteq \mathcal{P}(A) := \Big( \bigcup_{i \in N} \Gamma_i(A) \Big) \cup \Big( \bigcup_{i, j \in N,\ j \neq i} P_{ij}(A) \Big).$$
Proof. 
Suppose, on the contrary, that there exists an eigenvalue λ of A such that $\lambda \notin \mathcal{P}(A)$. Then, for all $i \in N$, $\lambda \notin \Gamma_i(A)$, and for all $i, j \in N$, $i \neq j$, $\lambda \notin P_{ij}(A)$, i.e.,
$$|\lambda - a_{ii}| > \min\{r_i(A), s_i(A)\} \quad \text{for all } i \in N,$$
and
$$|\lambda - a_{ii}| + |\lambda - a_{jj}| > r_i(A) + r_j(A) \quad \text{for all } i, j \in N, \ i \neq j,$$
which means that $\lambda E - A$ is a Nearly-SDD matrix, hence non-singular. This is an obvious contradiction, so the proof is concluded. □
Obviously, every $P_{ij}(A)$ is contained in $\Gamma_i^{\mathrm{G}}(A) \cup \Gamma_j^{\mathrm{G}}(A)$, where $\Gamma_i^{\mathrm{G}}(A)$ denotes the i-th classical Geršgorin disk:
$$\Gamma_i^{\mathrm{G}}(A) := \big\{ z \in \mathbb{C} : \ |z - a_{ii}| \leq r_i(A) \big\}.$$
More precisely:
  • If $\Gamma_i^{\mathrm{G}}(A) \cap \Gamma_j^{\mathrm{G}}(A) = \emptyset$, then $P_{ij}(A) = \emptyset$;
  • If $\Gamma_i^{\mathrm{G}}(A) \cap \Gamma_j^{\mathrm{G}}(A) \neq \emptyset$, then $P_{ij}(A)$ is an ellipse with foci $a_{ii}$ and $a_{jj}$.
As an immediate consequence, we have the following statement.
Corollary 2.
If for a given matrix $A = [a_{ij}] \in \mathbb{C}^{n,n}$, $n \geq 2$,
$$|a_{ii} - a_{jj}| > r_i(A) + r_j(A) \quad \text{for all } i, j \in N, \ j \neq i,$$
then
$$\sigma(A) \subseteq \Gamma(A) := \bigcup_{i \in N} \Gamma_i(A).$$
This corollary says that if we have a matrix for which all Geršgorin circles are isolated, then it is possible to reduce their radii. Here, we give an illustrative example.
Example 1.
In the stability theory of continuous time-invariant dynamical systems, the position of the spectrum of the considered matrix is crucial. In order to ensure that the whole spectrum lies in the open left half-plane, it is sufficient to find a good spectrum localization which itself lies in the open left half-plane. This is the main reason for finding as many elegant, meaning computationally cheap, eigenvalue localizations as possible. This particular example shows how Corollary 2 can help to achieve this goal.
Consider the matrix
$$A_5 = \begin{pmatrix}
-1.2 + 4\mathrm{i} & 2 & 0.1 & 0.9 & 1 \\
0.2 & -6.2 - \mathrm{i} & 2 & 0.5 & 0.1 \\
1 & 0.5 & -26.2 & 0.1 & 1 \\
0.1 & 0.2 & 1 & -11.7 & 0.2 \\
0.1 & 0.5 & 1 & 0.1 & -21.2
\end{pmatrix}.$$
In Figure 2, the Geršgorin circles are marked by a full line, the exact eigenvalues by red crosses, and the localization area obtained by Corollary 2 (smaller circles) is shaded. Obviously, the original Geršgorin set reaches into the right half-plane, so it cannot provide a conclusion about stability, while the localization set $\Gamma(A_5)$ lies completely in the left half-plane; hence, we now have an answer: the corresponding dynamical system is stable.
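The computation behind this example takes a few lines; a minimal sketch (ours), with $A_5$ as above:

```python
import numpy as np

A5 = np.array([
    [-1.2 + 4j,  2.0,       0.1,   0.9,  1.0],
    [ 0.2,      -6.2 - 1j,  2.0,   0.5,  0.1],
    [ 1.0,       0.5,     -26.2,   0.1,  1.0],
    [ 0.1,       0.2,       1.0, -11.7,  0.2],
    [ 0.1,       0.5,       1.0,   0.1, -21.2]])

B = np.abs(A5)
d = np.diag(B)
r = B.sum(axis=1) - d
s = np.array([np.delete(B[:, i], i).max() for i in range(5)])
c = np.diag(A5)                                  # disk centers a_ii

# Corollary 2 applies: all Geršgorin disks are pairwise disjoint
assert all(abs(c[i] - c[j]) > r[i] + r[j]
           for i in range(5) for j in range(5) if i != j)

print(np.max(c.real + np.minimum(r, s)))         # rightmost point of Gamma(A5): < 0
print(np.max(np.linalg.eigvals(A5).real))        # exact spectral abscissa, also < 0
```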
Remark 4.
Similar observations can be found in [41], but the proof given there is based on the structure of the corresponding eigenvector; hence, it requires more technicalities compared to the approach presented here, which is based on the equivalence principle between subclasses of H-matrices and the corresponding Geršgorin-type localizations.

4.2. Upper Bounds for the Norm of the Inverse of Matrices from the Nearly-SDD Class

In order to give an upper bound for the maximum norm of the inverse of a Nearly-SDD matrix A, we will use the facts from the previous section.
Theorem 4.
Let $A = [a_{ij}] \in \mathbb{C}^{n,n}$ be a Nearly-SDD matrix for which there exists an index k such that
$$s_k(A) < |a_{kk}| \leq r_k(A), \quad \text{and, for all } j \neq k: \ |a_{jj}| > r_j(A) + r_k(A) - |a_{kk}|.$$
Then,
$$\|A^{-1}\|_\infty \leq \frac{r_k(A) + s_k(A) + \Delta(A)}{|a_{kk}|\,\Delta(A) - s_k(A)\,\big(r_k(A) - |a_{kk}|\big)}, \tag{13}$$
where
$$\Delta(A) := \min_{j \in N \setminus \{k\}} \big(|a_{jj}| - r_j(A)\big).$$
Proof. 
Let W be a scaling matrix such that AW is an SDD matrix. Then
$$\|A^{-1}\|_\infty \leq \|W\|_\infty\,\big\|(AW)^{-1}\big\|_\infty.$$
Choose γ from the admissible interval (11), i.e.,
$$F_1 := \frac{r_k(A)}{|a_{kk}|} < \gamma < 1 + \frac{\Delta(A)}{s_k(A)} =: F_2. \tag{14}$$
Since $r_k(A)/|a_{kk}| \geq 1$, we immediately have
$$\|W\|_\infty = \max\{1, \gamma\} = \gamma.$$
On the other hand, according to Varah's well-known result, for the inverse of the SDD matrix AW we have
$$\big\|(AW)^{-1}\big\|_\infty \leq \max_{i \in N} \frac{1}{|(AW)_{ii}| - r_i(AW)} = \max\left\{ \frac{1}{|a_{kk}|\,\gamma - r_k(A)},\ \max_{j \in N \setminus \{k\}} \frac{1}{|a_{jj}| - r_j(A) - (\gamma - 1)\,|a_{jk}|} \right\} \leq \max\left\{ \frac{1}{|a_{kk}|\,\gamma - r_k(A)},\ \frac{1}{\Delta(A) - (\gamma - 1)\,s_k(A)} \right\}.$$
Denoting
$$\phi_1(\gamma) = \frac{\gamma}{|a_{kk}|\,\gamma - r_k(A)} \quad \text{and} \quad \phi_2(\gamma) = \frac{\gamma}{\Delta(A) - (\gamma - 1)\,s_k(A)},$$
for an arbitrary γ from the interval $(F_1, F_2)$ given in (14), we have
$$\|A^{-1}\|_\infty \leq \max\big\{\phi_1(\gamma), \phi_2(\gamma)\big\}.$$
Obviously, $\phi_1(\gamma) = \dfrac{1}{|a_{kk}| - r_k(A)/\gamma}$ is a decreasing function of γ, while $\phi_2(\gamma) = \dfrac{1}{\big(\Delta(A) + s_k(A)\big)/\gamma - s_k(A)}$ is an increasing function of γ. Since
$$\|A^{-1}\|_\infty \leq \min_{F_1 < \gamma < F_2} \max\big\{\phi_1(\gamma), \phi_2(\gamma)\big\},$$
the minimum over γ is attained for $\hat{\gamma}$ satisfying
$$|a_{kk}|\,\hat{\gamma} - r_k(A) = \Delta(A) - (\hat{\gamma} - 1)\,s_k(A), \quad \text{i.e.,} \quad \hat{\gamma} = \frac{r_k(A) + s_k(A) + \Delta(A)}{|a_{kk}| + s_k(A)} = 1 + \frac{r_k(A) - |a_{kk}| + \Delta(A)}{|a_{kk}| + s_k(A)},$$
provided that such a $\hat{\gamma}$ belongs to the admissible interval (14). This is the case because, due to (12),
$$s_k(A)\big(r_k(A) - |a_{kk}|\big) < |a_{kk}|\,\Delta(A) \ \Longrightarrow\ r_k(A)\big(|a_{kk}| + s_k(A)\big) < |a_{kk}|\big(r_k(A) + s_k(A) + \Delta(A)\big) \ \Longrightarrow\ F_1 = \frac{r_k(A)}{|a_{kk}|} < \frac{r_k(A) + s_k(A) + \Delta(A)}{|a_{kk}| + s_k(A)} = \hat{\gamma},$$
and
$$s_k(A)\big(r_k(A) - |a_{kk}|\big) < |a_{kk}|\,\Delta(A) \ \Longrightarrow\ s_k(A)\big(r_k(A) - |a_{kk}| + \Delta(A)\big) < \big(|a_{kk}| + s_k(A)\big)\,\Delta(A) \ \Longrightarrow\ \hat{\gamma} = 1 + \frac{r_k(A) - |a_{kk}| + \Delta(A)}{|a_{kk}| + s_k(A)} < 1 + \frac{\Delta(A)}{s_k(A)} = F_2.$$
Finally,
$$\min_{F_1 < \gamma < F_2} \max\big\{\phi_1(\gamma), \phi_2(\gamma)\big\} = \phi_1(\hat{\gamma}) = \phi_2(\hat{\gamma}) = \frac{\hat{\gamma}}{|a_{kk}|\,\hat{\gamma} - r_k(A)} = \frac{r_k(A) + s_k(A) + \Delta(A)}{|a_{kk}|\,\Delta(A) - s_k(A)\,\big(r_k(A) - |a_{kk}|\big)},$$
and (13) is proved. □
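Bound (13) amounts to a handful of arithmetic operations once k is known; a minimal sketch (ours):

```python
import numpy as np

def inv_norm_bound_13(A, k):
    """Upper bound (13) on ||A^{-1}||_inf for a Nearly-SDD matrix whose
    only non-SDD row has 0-based index k."""
    B = np.abs(np.asarray(A, dtype=complex))
    d = np.diag(B)
    r = B.sum(axis=1) - d
    s_k = np.delete(B[:, k], k).max()
    delta = np.delete(d - r, k).min()            # Delta(A)
    return (r[k] + s_k + delta) / (d[k] * delta - s_k * (r[k] - d[k]))
```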
Remark 5.
Since the Nearly-SDD class is a subset of the DZ class, which is a special case of the $S$-SDD class for S being a singleton, it is worthwhile to compare our inverse norm bound with the known ones from [22,27]. However, in [39], all these known bounds were discussed in more detail, and the best one for DZ matrices is given in the following form (see Theorem 4.2 in [39]):
Theorem 5.
Let $A = [a_{ij}] \in \mathbb{C}^{n,n}$, $n \geq 2$, be a DZ matrix such that
$$\big(|a_{jj}| - r_j^k(A)\big)\,|a_{kk}| > |a_{jk}|\,r_k(A) \quad \text{for a certain } k \in N \text{ and all } j \neq k.$$
Then
$$\|A^{-1}\|_\infty \leq \max\big\{\xi_1(k),\ \xi_2(k)\big\},$$
where
$$\xi_1(k) := \max_{\substack{j \neq k \\ p_j(A) \geq p_k(A)}} \frac{\big(|a_{jj}| - r_j^k(A)\big) + r_k(A)}{\big(|a_{jj}| - r_j^k(A)\big)\,|a_{kk}| - |a_{jk}|\,r_k(A)}, \qquad \xi_2(k) := \max_{\substack{j \neq k \\ p_j(A) < p_k(A)}} \frac{|a_{jk}| + |a_{kk}|}{\big(|a_{jj}| - r_j^k(A)\big)\,|a_{kk}| - |a_{jk}|\,r_k(A)},$$
$$r_i^k(A) := r_i(A) - |a_{ik}|, \quad \text{and} \quad p_i(A) := |a_{ii}| - r_i(A), \ i \in N.$$
In our case, when the matrix A has only one non-SDD row and the index of this row is k, we have
$$p_k(A) \leq 0, \quad \text{and} \quad p_j(A) > 0 \ \text{ for all } j \neq k,$$
so there are no indices satisfying the condition in the definition of $\xi_2(k)$. Hence,
$$\|A^{-1}\|_\infty \leq \xi_1(k) = \max_{j \neq k} \frac{r_k(A) + |a_{jk}| + |a_{jj}| - r_j(A)}{|a_{kk}|\,\big(|a_{jj}| - r_j(A)\big) - |a_{jk}|\,\big(r_k(A) - |a_{kk}|\big)}. \tag{15}$$
Obviously, the bound (15) is an increasing function of $|a_{jk}|$, while the equivalent expression
$$\xi_1(k) = \max_{j \neq k} \frac{1}{|a_{kk}|} \left( 1 + \frac{|a_{kk}|\,r_k(A) + r_k(A)\,|a_{jk}|}{|a_{kk}|\,\big(|a_{jj}| - r_j(A)\big) - |a_{jk}|\,\big(r_k(A) - |a_{kk}|\big)} \right)$$
suggests that it is a decreasing function of $|a_{jj}| - r_j(A)$. Hence, our bound (13) cannot be better than (15), but it requires fewer computations. Namely, for a given matrix with only one non-SDD row, we know the index k of that row, so for the bound given by (13) we only have to find $\Delta(A) := \min_{j \in N \setminus \{k\}} \big(|a_{jj}| - r_j(A)\big)$ and then calculate the expression $\frac{r_k(A) + s_k(A) + \Delta(A)}{|a_{kk}|\,\Delta(A) - s_k(A)\,(r_k(A) - |a_{kk}|)}$ only once, while for the bound given by (15) we have to calculate $n - 1$ expressions $\frac{r_k(A) + |a_{jk}| + |a_{jj}| - r_j(A)}{|a_{kk}|(|a_{jj}| - r_j(A)) - |a_{jk}|(r_k(A) - |a_{kk}|)}$ and then find their maximum.
More importantly, for almost all community matrices in ecology which we have analyzed, our bound is not worse, but exactly the same as (15). For example, for $A_1$, estimation (15) from [39] gives $\|A_1^{-1}\|_\infty \leq 1.2$, and estimation (13) gives the same value, $\|A_1^{-1}\|_\infty \leq 1.2$.
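For comparison, bound (15) needs $n - 1$ fractions and a maximum; a minimal sketch (ours):

```python
import numpy as np

def inv_norm_bound_15(A, k):
    """Bound (15) for a matrix whose only non-SDD row has 0-based index k."""
    B = np.abs(np.asarray(A, dtype=float))
    d = np.diag(B)
    r = B.sum(axis=1) - d
    return max((r[k] + B[j, k] + d[j] - r[j]) /
               (d[k] * (d[j] - r[j]) - B[j, k] * (r[k] - d[k]))
               for j in range(len(d)) if j != k)

# For A1 (k = 9 in 0-based indexing), both inv_norm_bound_13(A1, 9) and
# inv_norm_bound_15(A1, 9) return 1.2, as reported above.
```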

4.3. Linear Complementarity Problem

Given $A = [a_{ij}] \in \mathbb{R}^{n,n}$ and $q \in \mathbb{R}^n$, the linear complementarity problem LCP(A, q) is to find a vector $x \in \mathbb{R}^n$ such that
$$x \geq 0, \quad Ax + q \geq 0, \quad (Ax + q)^T x = 0,$$
or to show that such a vector does not exist. It is well known that LCP(A, q) has a unique solution for any $q \in \mathbb{R}^n$ if and only if the matrix A is a P-matrix, a real square matrix with all of its principal minors positive, see [43]. Here, we will consider Nearly-SDD matrices with positive diagonal entries. Such matrices are $H_+$-matrices (i.e., H-matrices with positive diagonal entries), so they are P-matrices.
In deriving an upper error bound for the LCP, the following fact can serve as a starting point, see [44]:
$$\|x - x^*\|_\infty \leq \max_{d \in [0,1]^n} \big\|(A_d)^{-1}\big\|_\infty\,\|r(x)\|_\infty,$$
where $x^*$ is the solution of LCP(A, q), $r(x) := \min(x, Ax + q)$ is the componentwise natural residual, $A_d = I - D + DA$, $D = \mathrm{diag}(d_1, d_2, \dots, d_n)$, and $d = [d_1, d_2, \dots, d_n]^T$.
In the same paper, for A being an $H_+$-matrix, it has been shown that
$$\big\|(A_d)^{-1}\big\|_\infty \leq \big\|\langle A \rangle^{-1}\,\max\{\Lambda, I\}\big\|_\infty,$$
where $\langle A \rangle = [m_{ij}] \in \mathbb{R}^{n,n}$ is the comparison matrix of A, defined by
$$m_{ij} := \begin{cases} |a_{ii}|, & i = j, \\ -|a_{ij}|, & i \neq j, \end{cases}$$
and Λ is the diagonal part of A. Here, the max operator is used in the following sense:
$$\max\{\Lambda, I\} = \mathrm{diag}\big(\max(a_{11}, 1), \max(a_{22}, 1), \dots, \max(a_{nn}, 1)\big).$$
Obviously, if A is a Nearly-SDD matrix, then its comparison matrix $\langle A \rangle$ is a non-singular M-matrix, and (13) can be applied as an upper bound for its inverse as well. Hence,
$$\big\|\langle A \rangle^{-1}\big\|_\infty \leq \frac{r_k(A) + s_k(A) + \Delta(A)}{a_{kk}\,\Delta(A) - s_k(A)\,\big(r_k(A) - a_{kk}\big)}.$$
On the other hand,
$$\big\|\max\{\Lambda, I\}\big\|_\infty = \max\big\{1, \max_i a_{ii}\big\},$$
so that
$$\big\|(A_d)^{-1}\big\|_\infty \leq \frac{r_k(A) + s_k(A) + \Delta(A)}{a_{kk}\,\Delta(A) - s_k(A)\,\big(r_k(A) - a_{kk}\big)}\,\max\big\{1, \max_i a_{ii}\big\}. \tag{16}$$
However, when we consider a subclass of $H_+$-matrices for which we know the form of the scaling matrix, we can use the approach presented in [33]. In fact, we can apply Proposition 3.1 from that paper, since our class belongs to the $\Sigma$-SDD class (the class for which there exists a non-empty subset S such that the matrix A is an S-SDD matrix). From this proposition, since our scaling parameter γ is greater than 1, we conclude that
$$\max_{d \in [0,1]^n} \big\|(A_d)^{-1}\big\|_\infty \leq \max\Big\{ \frac{\gamma}{\bar{\beta}},\ \gamma \Big\},$$
where
$$\bar{\beta} := \min_{i \in N} \bar{\beta}_i, \quad \text{and} \quad \bar{\beta}_i := a_{ii}\,w_i - \sum_{j \neq i} |a_{ij}|\,w_j, \quad i \in N,$$
while W denotes the scaling matrix
$$W = \mathrm{diag}(w_1, w_2, \dots, w_n), \quad \text{where } w_i = \begin{cases} \gamma, & i = k, \\ 1, & \text{otherwise}. \end{cases}$$
Obviously,
$$\bar{\beta}_k = a_{kk}\,\gamma - r_k(A) \quad \text{and} \quad \bar{\beta}_i = a_{ii} - r_i(A) + |a_{ik}| - |a_{ik}|\,\gamma, \quad i \neq k,$$
so that
$$\bar{\beta} = \min\Big\{ a_{kk}\,\gamma - r_k(A),\ \min_{i \neq k} \big( a_{ii} - r_i(A) - |a_{ik}|\,(\gamma - 1) \big) \Big\}.$$
Hence, we have proved the following proposition.
Proposition 1.
Assume that $A = [a_{ij}] \in \mathbb{R}^{n,n}$ is a Nearly-SDD matrix with positive diagonal entries which is not SDD, i.e., there exists an index k such that
$$s_k(A) < a_{kk} \leq r_k(A), \quad \text{and, for all } j \neq k: \ a_{jj} > r_j(A) + r_k(A) - a_{kk}.$$
Let $\gamma \in (F_1, F_2)$, with $F_1$, $F_2$ defined by (14). Then
$$\max_{d \in [0,1]^n} \big\|(A_d)^{-1}\big\|_\infty \leq \max\left\{ \frac{\gamma}{\min\big\{ a_{kk}\,\gamma - r_k(A),\ \min_{i \neq k}\big( a_{ii} - r_i(A) - |a_{ik}|\,(\gamma - 1)\big) \big\}},\ \gamma \right\}. \tag{17}$$
Let us consider the same example as in [33], and compare the corresponding bounds.
Example 2
([33]). Let
$$A_6 = \begin{pmatrix} t+1 & t & t \\ t & 3t & t \\ t & t & 3t \end{pmatrix}, \quad t \geq 1.$$
It is a Nearly-SDD matrix which is not SDD, and it satisfies the conditions of Proposition 1 for k = 1, so we can conclude that for all $\gamma \in (F_1, F_2)$ it holds that
$$\max_{d \in [0,1]^n} \big\|(A_d)^{-1}\big\|_\infty \leq \max\left\{ \frac{\gamma}{\min\big\{ (t+1)\gamma - 2t,\ 2t - t\gamma \big\}},\ \gamma \right\}.$$
We will choose γ such that
$$(t+1)\gamma - 2t = 2t - t\gamma, \quad \text{i.e.,} \quad \gamma = \frac{4t}{2t+1}.$$
Obviously,
$$F_1 = \frac{r_k(A)}{a_{kk}} = \frac{2t}{t+1} < \gamma = \frac{4t}{2t+1} < 1 + \frac{t}{t} = 1 + \frac{\Delta(A)}{s_k(A)} = F_2.$$
Finally,
$$\max_{d \in [0,1]^n} \big\|(A_d)^{-1}\big\|_\infty \leq \max\left\{ \frac{\gamma}{(t+1)\gamma - 2t},\ \gamma \right\} = \max\left\{ \frac{\frac{4t}{2t+1}}{(t+1)\frac{4t}{2t+1} - 2t},\ \frac{4t}{2t+1} \right\} = \max\left\{ 2,\ \frac{4t}{2t+1} \right\} = 2.$$
In [33], the obtained bound for this matrix is $\frac{2k+1}{k}$ (in the notation of [33], where k plays the role of our parameter t), which is greater than 2. To summarize: the estimation from [33] gives $\max_{d \in [0,1]^n} \|(A_d)^{-1}\|_\infty \leq \frac{2k+1}{k}$, while estimation (17) gives $\max_{d \in [0,1]^n} \|(A_d)^{-1}\|_\infty \leq 2$.
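A minimal sketch (ours) of the bound (17), evaluated on Example 2 with the choice $\gamma = 4t/(2t+1)$ and, for concreteness, t = 3:

```python
import numpy as np

def lcp_bound_17(A, k, gamma):
    """Proposition 1 bound on max_{d in [0,1]^n} ||(A_d)^{-1}||_inf;
    k is the 0-based index of the only non-SDD row."""
    B = np.abs(np.asarray(A, dtype=float))
    d = np.diag(B)
    r = B.sum(axis=1) - d
    beta_k = d[k] * gamma - r[k]
    beta_rest = min(d[i] - r[i] - B[i, k] * (gamma - 1.0)
                    for i in range(len(d)) if i != k)
    return max(gamma / min(beta_k, beta_rest), gamma)

t = 3.0
A6 = np.array([[t + 1, t, t],
               [t, 3 * t, t],
               [t, t, 3 * t]])
print(lcp_bound_17(A6, k=0, gamma=4 * t / (2 * t + 1)))   # 2.0, as derived above
```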
Remark 6.
If we compare this upper bound with the one from [40], on the same examples given there, we see that the bounds are the same, but our bound (16) is computationally cheaper, for the same reason explained when comparing the bounds (13) and (15).

4.4. Block Case

It is well known that in real applications, particularly in ecology, matrices might have zeros in diagonal positions, and this restriction cannot be avoided, due to physical reasons. However, this fact immediately places such a matrix outside of the class of non-singular H-matrices. This is one of the reasons why block generalizations become important. The other important reason is that partitioning a given matrix into blocks reduces the dimension of the problem.
Block generalizations of the class of H-matrices were considered in [45].
For a matrix $A = [a_{ij}] \in \mathbb{C}^{n,n}$ and a partition $\pi = \{p_j\}_{j=0}^{\ell}$ of the index set N, with $0 = p_0 < p_1 < \dots < p_\ell = n$, one can present A in the block form $[A_{ij}]_{\ell \times \ell}$, where $A_{ij}$ denotes the submatrix of A with rows indexed by $\{p_{i-1}+1, \dots, p_i\}$ and columns indexed by $\{p_{j-1}+1, \dots, p_j\}$. Here, we will consider only one possible way of introducing the $\ell \times \ell$ comparison matrix for a given A and a partition π of the index set. The comparison matrix will be denoted by $A^\pi = [\mu_{ij}]$, where
$$\mu_{ij} = \begin{cases} \big(\|A_{ii}^{-1}\|_\infty\big)^{-1}, & i = j \text{ and } A_{ii} \text{ non-singular}, \\ 0, & i = j \text{ and } A_{ii} \text{ singular}, \\ -\|A_{ij}\|_\infty, & i \neq j. \end{cases}$$
For a given $A = [a_{ij}] \in \mathbb{C}^{n,n}$ and a given partition π of the index set, we say that:
  • A is a block π H-matrix if $A^\pi$ is an H-matrix;
  • A is a block π SDD matrix if $A^\pi$ is an SDD matrix;
  • A is a block π Nearly-SDD matrix if $A^\pi$ is a Nearly-SDD matrix;
  • etc.
In [46], the following result has been proved.
Theorem 6
([46]). If $A = [A_{ij}]_{\ell \times \ell}$ is a block π H-matrix, then
$$\|A^{-1}\|_\infty \leq \big\|(A^\pi)^{-1}\big\|_\infty.$$
Due to this theorem, we are able to estimate the norm of the inverse of matrices which are not H-matrices themselves. The following example illustrates how this works. Consider
$$A_7 = \begin{pmatrix}
4 & 0.2 & 0.2 & 0 & 0 & 0 \\
1 & 4 & 0.2 & 0 & 0 & 0 \\
0 & 3 & 0 & 4 & 1 & 0.1 \\
0 & 3 & 0.5 & 0 & 0 & 0 \\
0 & 3 & 2 & 0.1 & 5 & 0.4 \\
0 & 3 & 0.5 & 2 & 1 & 4
\end{pmatrix}.$$
This is a matrix with zeros on its diagonal, so it cannot be a non-singular H-matrix, meaning that we cannot apply any of the known point-wise estimations for subclasses of H-matrices. However, if we consider this matrix in its block form, with respect to the partition $\pi = \{0, 1, 2, 6\}$, we obtain the comparison matrix
$$A_7^\pi = \begin{pmatrix} 4 & -0.2 & -0.2 \\ -1 & 4 & -0.2 \\ 0 & -3 & 0.5 \end{pmatrix}.$$
This matrix obviously belongs to the Nearly-SDD class for k = 3; hence, (13) gives us an upper bound for the inverse of the comparison matrix, as well as for the original matrix $A_7$, due to Theorem 6:
$$\|A_7^{-1}\|_\infty \leq \big\|(A_7^\pi)^{-1}\big\|_\infty \leq \frac{r_3(A_7^\pi) + s_3(A_7^\pi) + \Delta(A_7^\pi)}{0.5 \cdot \Delta(A_7^\pi) - s_3(A_7^\pi)\,\big(r_3(A_7^\pi) - 0.5\big)} \approx 6.667. \tag{18}$$
This is a good estimation, since the exact values are
$$\|A_7^{-1}\|_\infty = \big\|(A_7^\pi)^{-1}\big\|_\infty = 6.2857.$$
Note that the comparison matrix $A_7^\pi$ is not an Ostrowski matrix, while, as we have already pointed out, it is a DZ matrix. However, this information does not provide a better bound: it is the same as (18), while, at the same time, the required computational work is more demanding. Of course, the above example is just an illustrative one; the importance of this approach grows with the matrix dimension.
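A minimal sketch (ours) of the block construction: build $A^\pi$ from a partition and feed it to the point-wise bound (13):

```python
import numpy as np

def block_comparison(A, pi):
    """Comparison matrix A^pi for a partition pi = [p_0, p_1, ..., p_l]."""
    l = len(pi) - 1
    M = np.zeros((l, l))
    for i in range(l):
        for j in range(l):
            blk = np.asarray(A, dtype=float)[pi[i]:pi[i+1], pi[j]:pi[j+1]]
            if i == j:
                try:                              # mu_ii = 1/||A_ii^{-1}||_inf
                    M[i, i] = 1.0 / np.linalg.norm(np.linalg.inv(blk), np.inf)
                except np.linalg.LinAlgError:     # singular diagonal block
                    M[i, i] = 0.0
            else:
                M[i, j] = -np.linalg.norm(blk, np.inf)
    return M

# For A7, block_comparison(A7, [0, 1, 2, 6]) reproduces the 3x3 matrix A7^pi
# above; it is Nearly-SDD for k = 3, so (13) applies to it, and Theorem 6
# carries the bound over to A7 itself.
```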

5. Conclusions

While the H-matrix class itself is very important from the application point of view, checking if a matrix is an H-matrix is a computationally very demanding job. Instead, it is much more efficient to check if a matrix belongs to an H-matrix subclass. In the case of a positive answer, there are lots of benefits from this fact. Let us recall just a few:
  • If all diagonal entries are negative, then we immediately conclude that the corresponding continuous linear (non-linear) dynamical system is asymptotically (locally) stable, without calculating exact eigenvalues.
  • Moreover, if a particular feature of a mathematical model depends only on the position of the eigenvalues, we have a new localization area for the eigenvalues, easily constructed from a given H-matrix subclass by the equivalence principle, which might help.
  • We are able to estimate the max-norm of the inverse matrix, which is important in the perturbation theory of ill-conditioned matrices in engineering, etc.
  • We can apply this new max-norm estimation for the inverse to SDD matrices, which is potentially beneficial in the analysis of a parallel-in-time iterative algorithm for Volterra partial integro-differential problems and of the preconditioning technique for an all-at-once system from Volterra subdiffusion equations.
A subclass of H-matrices, called in this paper the Nearly-SDD class, has not been exploited in the above sense yet. In order to reveal the place and significance of this subclass of H-matrices, let us conclude the following:
  • If all rows of the matrix are SDD, the technique is already known and developed.
  • If the matrix has only one non-SDD row, the known technique suggests checking whether the matrix is doubly SDD (satisfies condition (3)). If not, the known technique suggests checking condition (4), i.e., whether the matrix is DZ or not. However, for that check, it is necessary to go through all the indices and, for each of them, check n − 1 conditions.
  • Now, for a matrix with only one non-SDD row, we offer to check, in total, n conditions, (6) and (7). We have also shown that if these conditions are satisfied, then (4) is satisfied as well (precisely for the index corresponding to the non-SDD row).
Obviously, as the matrix dimension grows, savings become more important.

Author Contributions

Both authors have equally contributed to preparation of all parts of the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research paper has been supported by the Ministry of Science, Technological Development and Innovation through project no. 451-03-47/2023-01/200156 “Innovative scientific and artistic research from the FTS (activity) domain”.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Geršgorin, S. Über die Abgrenzung der Eigenwerte einer Matrix (German) (About the delimitation of the eigenvalues of a matrix). Izv. Akad. Nauk SSSR, Ser. Mat. 1931, 7, 749–754.
  2. Varga, R.S. Geršgorin and His Circles; Springer: New York, NY, USA, 2004.
  3. Varah, J.M. A lower bound for the smallest singular value of a matrix. Linear Algebra Appl. 1975, 11, 3–5.
  4. Kostić, V.R.; Cvetković, L.; Cvetković, D.L. Pseudospectra localizations and their applications. Numer. Linear Algebra Appl. 2016, 23, 55–75.
  5. García-Esnaola, M.; Peña, J.M. A comparison of error bounds for linear complementarity problems of H-matrices. Linear Algebra Appl. 2010, 433, 956–964.
  6. Carlson, D.; Markham, T. Schur complements of diagonally dominant matrices. Czech. Math. J. 1979, 29, 246–251.
  7. Cvetković, L. H-matrix theory vs. eigenvalue localization. Numer. Algor. 2006, 42, 229–245.
  8. Li, B.S.; Tsatsomeros, M.J. Doubly diagonally dominant matrices. Linear Algebra Appl. 1997, 261, 221–235.
  9. Ostrowski, A.M. Über die Determinanten mit überwiegender Hauptdiagonale (German) (About the determinants with predominant main diagonal). Comment. Math. Helv. 1937, 10, 69–96.
  10. Dashnic, L.S.; Zusmanovich, M.S. O nekotoryh kriteriyah regulyarnosti matric i lokalizacii ih spectra (In Russian) (On some criteria for the nonsingularity of matrices and the localization of their spectrum). Zh. Vychisl. Matem. Matem. Fiz. 1970, 5, 1092–1097.
  11. Zhao, J.X.; Liu, Q.L.; Li, C.Q.; Li, Y.T. Dashnic–Zusmanovich type matrices: A new subclass of nonsingular H-matrices. Linear Algebra Appl. 2018, 552, 277–287.
  12. Cvetković, L.; Kostić, V.; Varga, R.S. A new Geršgorin-type eigenvalue inclusion set. Electron. Trans. Numer. Anal. 2004, 18, 73–80.
  13. Gao, Y.M.; Wang, X.H. Criteria for generalized diagonally dominant matrices and M-matrices. Linear Algebra Appl. 1992, 169, 257–268.
  14. Cvetković, D.L.; Cvetković, L.; Li, C.Q. CKV-type matrices with applications. Linear Algebra Appl. 2021, 608, 158–184.
  15. Gudkov, V.V. On a certain test for nonsingularity of matrices. Latvian Math. Yearbook 1966, 1965, 385–390.
  16. Li, W. On Nekrasov matrices. Linear Algebra Appl. 1998, 281, 87–96.
  17. Szulc, T. Some remarks on a theorem of Gudkov. Linear Algebra Appl. 1995, 225, 221–235.
  18. Chen, X.; Li, Y.; Liu, L.; Wang, Y. Infinity norm upper bounds for the inverse of SDD1 matrices. AIMS Math. 2022, 7, 8847–8860.
  19. Peña, J.M. Diagonal dominance, Schur complements and some classes of H-matrices and P-matrices. Adv. Comput. Math. 2011, 35, 357–373.
  20. Cvetković, L.; Kostić, V.; Doroslovački, K. Max-norm bounds for the inverse of S-Nekrasov matrices. Appl. Math. Comput. 2012, 218, 9498–9503.
  21. Cvetković, L.; Dai, P.F.; Doroslovački, K.; Li, Y.T. Infinity norm bounds for the inverse of Nekrasov matrices. Appl. Math. Comput. 2013, 219, 5020–5024.
  22. Kolotilina, L.Y. New subclasses of the class of H-matrices and related bounds for the inverses. J. Math. Sci. 2017, 224, 911–925.
  23. Li, C.Q.; Cvetković, L.; Wei, Y.; Zhao, J.X. An infinity norm bound for the inverse of Dashnic–Zusmanovich type matrices with applications. Linear Algebra Appl. 2019, 565, 99–122.
  24. Kolotilina, L.Y. Bounds for the infinity norm of the inverse for certain M- and H-matrices. Linear Algebra Appl. 2009, 430, 692–702.
  25. Kolotilina, L.Y. On bounding inverse to Nekrasov matrices in the infinity norm. J. Math. Sci. 2014, 199, 432–437.
  26. Li, W. The infinity norm bound for the inverse of nonsingular diagonal dominant matrices. Appl. Math. Lett. 2008, 21, 258–263.
  27. Morača, N. Upper bounds for the infinity norm of the inverse of SDD and S-SDD matrices. J. Comput. Appl. Math. 2007, 206, 666–678.
  28. Cvetković, L.; Nedović, M. Eigenvalue localization refinements for the Schur complement. Appl. Math. Comput. 2012, 218, 8341–8346.
  29. Li, C.Q.; Huang, Z.Y.; Zhao, J.X. On Schur complements of Dashnic–Zusmanovich type matrices. Linear Multilinear Algebra 2022, 70, 4071–4096.
  30. Liu, J.Z.; Zhang, F.Z. Disc separation of the Schur complement of diagonally dominant matrices and determinantal bounds. SIAM J. Matrix Anal. Appl. 2005, 27, 665–674.
  31. Liu, J.Z.; Zhang, J.; Liu, Y. The Schur complement of strictly doubly diagonally dominant matrices and its application. Linear Algebra Appl. 2012, 437, 168–183.
  32. Gao, L.; Wang, Y.; Li, C.Q.; Li, Y. Error bounds for linear complementarity problems of S-Nekrasov matrices and B-S-Nekrasov matrices. J. Comput. Appl. Math. 2019, 336, 147–159.
  33. García-Esnaola, M.; Peña, J.M. Error bounds for the linear complementarity problem with a Σ-SDD matrix. Linear Algebra Appl. 2013, 438, 1339–1346.
  34. García-Esnaola, M.; Peña, J.M. Error bounds for linear complementarity problems of Nekrasov matrices. Numer. Algor. 2014, 67, 655–667.
  35. Li, C.Q.; Dai, P.F.; Li, Y.T. New error bounds for linear complementarity problems of Nekrasov matrices and B-Nekrasov matrices. Numer. Algor. 2017, 74, 997–1009.
  36. Li, C.Q.; Yang, S.; Huang, H.; Li, Y.; Wei, Y. Note on error bounds for linear complementarity problems of Nekrasov matrices. Numer. Algor. 2020, 83, 355–372.
  37. Gu, X.M.; Wu, S.L. A parallel-in-time iterative algorithm for Volterra partial integro-differential problems with weakly singular kernel. J. Comput. Phys. 2020, 417, 109576.
  38. Zhao, J.L.; Gu, X.M.; Ostermann, A. A preconditioning technique for an all-at-once system from Volterra subdiffusion equations with graded time steps. J. Sci. Comput. 2021, 88, 11.
  39. Kolotilina, L.Y. On Dashnic–Zusmanovich (DZ) and Dashnic–Zusmanovich type (DZT) matrices and their inverses. J. Math. Sci. 2019, 240, 799–812.
  40. Min, Y.U.; Hongmin, M.O. Error bound of linear complementarity problem for Dashnic–Zusmanovich matrix. J. Jishou Univ. (Nat. Sci. Ed.) 2019, 40, 4–9.
  41. Pupkov, V.A. Ob izolirovannom sobstvennom znachenii matricy i strukture ego sobstvennogo vektora (in Russian) (On the isolated eigenvalue of a matrix and the structure of its eigenvector). Zh. Vychisl. Matem. Matem. Fiz. 1983, 23, 1304–1313.
  42. Cvetković, L.; Kostić, V. Between Geršgorin and minimal Geršgorin sets. J. Comput. Appl. Math. 2006, 196, 452–458.
  43. Cottle, R.W.; Pang, J.S.; Stone, R.E. The Linear Complementarity Problem; Academic Press: San Diego, CA, USA, 1992.
  44. Chen, X.J.; Xiang, S.H. Computation of error bounds for P-matrix linear complementarity problems. Math. Program. Ser. A 2006, 106, 513–525.
  45. Robert, F. Blocs-H-matrices et convergence des méthodes itératives classiques par blocs (French) (Block H-matrices and convergence of classical block iterative methods). Linear Algebra Appl. 1969, 2, 223–265.
  46. Cvetković, L.; Doroslovački, K. Max norm estimation for the inverse of block matrices. Appl. Math. Comput. 2014, 242, 694–706.
Figure 1. The relationship between classes.
Figure 2. Comparison between the Geršgorin set (black line) and Γ(A_5) from Corollary 2 (shaded). Exact eigenvalues are marked by red crosses.

