
Eventually DSDD Matrices and Eigenvalue Localization

Caili Sang and Jianxing Zhao *
1 College of Data Science and Information Engineering, Guizhou Minzu University, Guiyang 550025, Guizhou, China
2 School of Mathematical Sciences, Guizhou Normal University, Guiyang 550025, Guizhou, China
* Author to whom correspondence should be addressed.
Symmetry 2018, 10(10), 448; https://doi.org/10.3390/sym10100448
Submission received: 6 September 2018 / Revised: 22 September 2018 / Accepted: 26 September 2018 / Published: 1 October 2018
(This article belongs to the Special Issue Symmetry in Numerical Linear and Multilinear Algebra)

Abstract

Firstly, the relationships among strictly diagonally dominant (SDD) matrices, doubly strictly diagonally dominant (DSDD) matrices, eventually SDD matrices and eventually DSDD matrices are considered. Secondly, by excluding from an existing eigenvalue inclusion set some proper subsets that do not contain any eigenvalues, a tighter eigenvalue inclusion set for matrices is derived. As an application, a sufficient condition for the non-singularity of matrices is obtained. Finally, an estimate of the infinity norm of the inverse of eventually DSDD matrices is derived.

1. Introduction

Let $n$ be a positive integer, $n \ge 2$, $J = \{1, 2, \ldots, n\}$, $\mathbb{N}$ be the set of all positive integers, $\mathbb{C}$ be the set of all complex numbers, $\mathbb{C}^{n \times n}$ be the set of all $n \times n$ complex matrices and $I$ be the identity matrix. Let $A = [a_{ij}] \in \mathbb{C}^{n \times n}$ and $\sigma(A)$ be the set of all eigenvalues of $A$. For $i, j \in J$, $j \ne i$, denote $r_i(A) := \sum_{t \in J, t \ne i} |a_{it}|$ and $r_i^j(A) := r_i(A) - |a_{ij}|$. A matrix $A$ is called a strictly diagonally dominant (SDD) matrix if, for each $i \in J$,
$$|a_{ii}| > r_i(A).$$
In addition, $A$ is doubly strictly diagonally dominant (DSDD) if, for any $i, j \in J$, $i \ne j$,
$$|a_{ii}| \, |a_{jj}| > r_i(A) \, r_j(A).$$
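Both dominance conditions can be checked directly from the entries of a matrix. The following is a minimal sketch (our own illustration, not part of the original development; the helper names are ours) that tests them with NumPy:

```python
import numpy as np

def row_sums(A):
    """r_i(A): sum of the moduli of the off-diagonal entries of row i."""
    return np.abs(A).sum(axis=1) - np.abs(np.diag(A))

def is_sdd(A):
    """Strict diagonal dominance: |a_ii| > r_i(A) for every i."""
    return bool(np.all(np.abs(np.diag(A)) > row_sums(A)))

def is_dsdd(A):
    """Double strict diagonal dominance: |a_ii||a_jj| > r_i(A) r_j(A) for all i != j."""
    d, r = np.abs(np.diag(A)), row_sums(A)
    n = len(d)
    return all(d[i] * d[j] > r[i] * r[j]
               for i in range(n) for j in range(n) if i != j)
```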
Locating the eigenvalues of a matrix and bounding the infinity norm of its inverse are two major problems in applied linear algebra [1,2,3,4,5,6,7]. The first problem is to find a set in the complex plane that includes all eigenvalues of a matrix; if, for instance, the obtained set lies in the right half of the complex plane, then one can conclude the positive definiteness of the corresponding matrix. Moreover, eigenvalue localization is very important for the convergence speed of the algorithms on which web search engines are based; for the case of PageRank, see [4] and the references therein. The other problem is to bound the infinity norm of the inverse of a nonsingular matrix, as it can be used to estimate the condition number of linear systems of equations [5], as well as error bounds for linear complementarity problems [6]. In general, it is not easy to treat these two problems for an arbitrary given nonsingular matrix. One traditional way of attacking them is to locate all eigenvalues, or to bound the infinity norm of the inverse, for particular subclasses of nonsingular matrices.
In 2015, Cvetković et al. [7] extended the class of SDD matrices to the class of eventually SDD matrices, and proved that eventually SDD matrices are nonsingular.
Definition 1
([7], Definition 1). Let $A = sI - B$, where $s \in \mathbb{C}$. $A$ is called an eventually SDD matrix if $s^k I - B^k$ is SDD for some positive integer $k$.
To bound the infinity norm of the inverse of eventually SDD matrices, Cvetković et al. [7] gave the following result.
Theorem 1
([7], Theorem 2). Let $A$ be eventually SDD. Then, for two certain numbers $s$ and $k$,
$$\|A^{-1}\|_\infty \le \Psi_k^s(A) = \frac{\|s^{k-1} I + s^{k-2} B + \cdots + s B^{k-2} + B^{k-1}\|_\infty}{\min_{i \in J} \left\{ |s^k - (B^k)_{ii}| - r_i(B^k) \right\}}.$$
It is worth noting in Theorem 1 that, from Examples 3 and 4 in [7] and the structure of $\Psi_k^s(A)$, which involves the two parameters $s$ and $k$, if there exist two different pairs of numbers $s$ and $k$ for which $A$ is an eventually SDD matrix, then the different selections of $s$ and $k$ will yield different infinity norm bounds for the inverse of $A$.
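For concreteness, the sketch below evaluates $\Psi_k^s(A)$ directly from its definition (the function name and the convention of returning infinity when $s^k I - B^k$ fails to be SDD, i.e., when Theorem 1 is not applicable for that pair $(s, k)$, are our assumptions):

```python
import numpy as np

def psi_bound(B, s, k):
    """Psi_k^s(A) of Theorem 1 for A = s*I - B; returns inf when
    s^k I - B^k is not SDD, i.e. when the bound is not applicable."""
    Bk = np.linalg.matrix_power(B, k)
    # numerator: || s^{k-1} I + s^{k-2} B + ... + s B^{k-2} + B^{k-1} ||_inf
    P = sum(s ** (k - 1 - j) * np.linalg.matrix_power(B, j) for j in range(k))
    # denominator: min_i ( |s^k - (B^k)_ii| - r_i(B^k) )
    r = np.abs(Bk).sum(axis=1) - np.abs(np.diag(Bk))
    den = np.min(np.abs(s ** k - np.diag(Bk)) - r)
    return np.inf if den <= 0 else np.linalg.norm(P, ord=np.inf) / den
```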
Based on the non-singularity of eventually SDD matrices, Liu et al. [8] obtained the following eigenvalue inclusion set for matrices, which corrects Theorem 1 of [7].
Theorem 2
([8], Theorem 2). Let $A = sI - B \in \mathbb{C}^{n \times n}$. For any given positive integer $k$,
$$\sigma(A) \subseteq \Gamma_k^s(A) = \bigcup_{i \in J} \left\{ z \in \mathbb{C} : |(s - z)^k - (B^k)_{ii}| \le r_i(B^k) \right\}.$$
In 2016, Liu [9] introduced the class of eventually DSDD matrices, and located all of their eigenvalues.
Definition 2
([9], Definition 3.2.1). Let $A = sI - B$, where $s \in \mathbb{C}$. $A$ is called an eventually DSDD matrix if $s^k I - B^k$ is DSDD for some positive integer $k$.
Remark 1.
From Definition 2, a meaningful question arises:

Is $s^K I - B^K$ DSDD if $s^k I - B^k$ with $k < K$ has this property?

Unfortunately, the answer is negative. Let
$$B = \begin{pmatrix} 1 & 0 & 1 \\ 1 & 4 & 1 \\ 1 & 1 & 4 \end{pmatrix}, \qquad s = 0, \qquad K = 2.$$
It is easy to verify that, if $k = 1$, then $s^k I - B^k = -B$ is DSDD, whereas $s^K I - B^K = -B^2$ is not DSDD.
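This counterexample is easily checked numerically; a short sketch follows (ours, repeating the is_dsdd helper from above for self-containedness):

```python
import numpy as np

def is_dsdd(M):
    d = np.abs(np.diag(M))
    r = np.abs(M).sum(axis=1) - d
    n = len(d)
    return all(d[i] * d[j] > r[i] * r[j]
               for i in range(n) for j in range(n) if i != j)

B = np.array([[1., 0., 1.],
              [1., 4., 1.],
              [1., 1., 4.]])
s, K = 0, 2
print(is_dsdd(s * np.eye(3) - B))                                  # True:  -B is DSDD
print(is_dsdd(s ** K * np.eye(3) - np.linalg.matrix_power(B, K)))  # False: -B^2 is not
```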
Theorem 3
([9], Theorem 3.3.1). Let $A = sI - B \in \mathbb{C}^{n \times n}$. For any given positive integer $k$,
$$\sigma(A) \subseteq \Omega_k^s(A) = \bigcup_{i, j \in J, \, i \ne j} \Omega_{i,j}^s(B^k),$$
where
$$\Omega_{i,j}^s(B^k) = \left\{ z \in \mathbb{C} : |(s - z)^k - (B^k)_{ii}| \, |(s - z)^k - (B^k)_{jj}| \le r_i(B^k) \, r_j(B^k) \right\}.$$
Note here that, when $k = 1$, the sets $\Omega_{i,j}^s(B^k)$ in Theorem 3 are ovals of Cassini, while the sets $\Omega_{i,j}^s(B^k)$ with $k \ge 2$ are lemniscates; see [10] (pp. 35–52) for details.
As is well known, SDD matrices are DSDD matrices, and DSDD matrices are nonsingular; hence,
$$\{\mathrm{SDD}\} \subseteq \{\mathrm{DSDD}\} \subseteq \{\text{Nonsingular}\}. \qquad (1)$$
By (1) and Definitions 1 and 2, the following two relations clearly hold:
$$\{\mathrm{SDD}\} \subseteq \{\text{eventually SDD}\} \subseteq \{\text{eventually DSDD}\} \subseteq \{\text{Nonsingular}\}$$
and
$$\{\mathrm{SDD}\} \subseteq \{\mathrm{DSDD}\} \subseteq \{\text{eventually DSDD}\} \subseteq \{\text{Nonsingular}\}.$$
Besides SDD matrices, DSDD matrices, eventually SDD matrices and eventually DSDD matrices, there are many other subclasses of nonsingular matrices; see [11] for details.
The remainder of this paper is organized as follows. In Section 2, based on some existing criteria for the non-singularity of matrices, a new eigenvalue localization set is derived. That is, a tighter eigenvalue localization set is obtained by excluding from the sets $\Omega_{i,j}^s(B^k)$ in Theorem 3 some proper subsets that are proved to contain no eigenvalues. In Section 3, a bound on the infinity norm of the inverse of eventually DSDD matrices is given. Finally, concluding remarks in Section 4 summarize the paper.

2. Eigenvalue Localization of Matrices

Firstly, a lemma from [12] is listed, which is very useful for deriving the new eigenvalue inclusion set.
Lemma 1
([12], Corollary 1). Let $A = [a_{ij}] \in \mathbb{C}^{n \times n}$. If, for each $i, j \in J$, $j \ne i$, either
$$|a_{ii}| \, |a_{jj}| > r_i(A) \, r_j(A)$$
or
$$\left( |a_{ii}| + r_i^t(A) \right) |a_{tt}| < |a_{it}| \left( |a_{ti}| - r_t^i(A) \right)$$
for some $t \in J$, $t \ne i$, then $A$ is nonsingular.
Lemma 2.
Let $A = sI - B \in \mathbb{C}^{n \times n}$. For any given positive integer $k$,
$$\sigma(A) \subseteq \Delta_k^s(A) = \bigcup_{i, j \in J, \, j \ne i} \left( \Omega_{i,j}^s(B^k) \setminus \Delta_i^s(B^k) \right),$$
where
$$\Delta_i^s(B^k) = \bigcup_{t \in J, \, t \ne i} \Delta_{i,t}^s(B^k),$$
and
$$\Delta_{i,t}^s(B^k) = \left\{ z \in \mathbb{C} : \left( |(s - z)^k - (B^k)_{ii}| + r_i^t(B^k) \right) |(s - z)^k - (B^k)_{tt}| < |(B^k)_{it}| \left( |(B^k)_{ti}| - r_t^i(B^k) \right) \right\}.$$
Proof. 
Let $\lambda \in \sigma(A)$. For the given $k$, suppose that $\lambda \notin \Delta_k^s(A)$, i.e., for any $i, j \in J$, $i \ne j$, $\lambda \notin \Omega_{i,j}^s(B^k) \setminus \Delta_i^s(B^k)$, which is equivalent to saying that $\lambda \notin \Omega_{i,j}^s(B^k)$ or $\lambda \in \Delta_i^s(B^k) = \bigcup_{t \in J, t \ne i} \Delta_{i,t}^s(B^k)$. Then
$$|(s - \lambda)^k - (B^k)_{ii}| \, |(s - \lambda)^k - (B^k)_{jj}| > r_i(B^k) \, r_j(B^k),$$
or, for some $t \in J$, $t \ne i$,
$$\left( |(s - \lambda)^k - (B^k)_{ii}| + r_i^t(B^k) \right) |(s - \lambda)^k - (B^k)_{tt}| < |(B^k)_{it}| \left( |(B^k)_{ti}| - r_t^i(B^k) \right).$$
Then, by Lemma 1, $(s - \lambda)^k I - B^k$ is nonsingular, i.e., $(s - \lambda)^k$ is not an eigenvalue of $B^k$. On the other hand, because $\lambda \in \sigma(A)$, there is a vector $x \ne 0$ such that $Bx = (s - \lambda)x$. Furthermore, $B^k x = (s - \lambda)^k x$, which implies that $(s - \lambda)^k$ is an eigenvalue of $B^k$. This is a contradiction. Hence, $\lambda \in \Delta_k^s(A)$, and the conclusion holds. ☐
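To make the construction concrete, the following sketch (our illustration; all names are ours) decides whether a given point $z$ of the complex plane belongs to $\Delta_k^s(A)$; sweeping such a test over a grid is one way to draw pictures like Figure 1 below:

```python
import numpy as np

def in_delta(z, B, s, k):
    """True iff z lies in Delta_k^s(A) of Lemma 2, where A = s*I - B."""
    Bk = np.linalg.matrix_power(B, k)
    d = np.abs((s - z) ** k - np.diag(Bk))            # |(s - z)^k - (B^k)_ii|
    r = np.abs(Bk).sum(axis=1) - np.abs(np.diag(Bk))  # r_i(B^k)
    n = len(d)

    def in_omega(i, j):        # z in Omega_{i,j}^s(B^k)
        return d[i] * d[j] <= r[i] * r[j]

    def in_exclusion(i, t):    # z in Delta_{i,t}^s(B^k)
        r_it = r[i] - np.abs(Bk[i, t])   # r_i^t(B^k)
        r_ti = r[t] - np.abs(Bk[t, i])   # r_t^i(B^k)
        return (d[i] + r_it) * d[t] < np.abs(Bk[i, t]) * (np.abs(Bk[t, i]) - r_ti)

    return any(in_omega(i, j) and
               not any(in_exclusion(i, t) for t in range(n) if t != i)
               for i in range(n) for j in range(n) if j != i)
```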
By the arbitrariness of $s \in \mathbb{C}$ and $k \in \mathbb{N}$ in Lemma 2, the following eigenvalue localization theorem is obtained easily.
Theorem 4.
Let $A = sI - B \in \mathbb{C}^{n \times n}$. Then,
$$\sigma(A) \subseteq \Delta(A) = \bigcap_{s \in \mathbb{C}} \bigcap_{k \in \mathbb{N}} \Delta_k^s(A).$$
For all $i, j \in J$, $i \ne j$, the relationship
$$\Omega_{i,j}^s(B^k) \setminus \Delta_i^s(B^k) = \Omega_{i,j}^s(B^k) \cap \overline{\Delta_i^s(B^k)} \subseteq \Omega_{i,j}^s(B^k)$$
holds, where the overline denotes the complement; hence, the following comparison theorem for Theorem 3, Lemma 2 and Theorem 4 is given easily.
Theorem 5.
Let $A = sI - B \in \mathbb{C}^{n \times n}$. Given an arbitrary positive integer $k$, then
$$\Delta(A) \subseteq \Delta_k^s(A) \subseteq \Omega_k^s(A) \subseteq \Gamma_k^s(A).$$
Note here that, from the proof of Lemma 2, it can be seen that $\Delta_{i,t}^s(B^k)$ does not contain any eigenvalues of $A$, and that $\Delta_k^s(A)$ is derived by excluding some proper subsets $\Delta_{i,t}^s(B^k)$ from $\Omega_{i,j}^s(B^k)$. Hence, $\Delta_{i,t}^s(B^k)$ is called an exclusion set of $\Delta_k^s(A)$. By Theorem 5, one can exclude regions that contain no eigenvalues of a matrix and thereby locate the eigenvalues more precisely.
Taking $s = 0$ and $k = 1$ in Theorem 4, the following result, which is exactly Theorem 4 in [12], is deduced immediately.
Corollary 1.
Let $A = [a_{ij}] \in \mathbb{C}^{n \times n}$. Then,
$$\sigma(A) \subseteq \bar{\Delta}(A) = \bigcup_{i, j \in J, \, j \ne i} \left( \Omega_{i,j}(A) \setminus \Delta_i(A) \right),$$
where
$$\Delta_i(A) = \bigcup_{t \in J, \, t \ne i} \Delta_{i,t}(A),$$
and
$$\Omega_{i,j}(A) = \left\{ z \in \mathbb{C} : |z - a_{ii}| \, |z - a_{jj}| \le r_i(A) \, r_j(A) \right\},$$
$$\Delta_{i,t}(A) = \left\{ z \in \mathbb{C} : \left( |z - a_{ii}| + r_i^t(A) \right) |z - a_{tt}| < |a_{it}| \left( |a_{ti}| - r_t^i(A) \right) \right\}.$$
Next, based on Lemma 2 and the fact that $\det(A) = 0$ if and only if $0 \in \sigma(A)$ for a matrix $A$, the following sufficient condition for $\det(A) \ne 0$ is given easily.
Corollary 2.
Let $A = sI - B \in \mathbb{C}^{n \times n}$. If there is a positive integer $k$ such that, for any $i, j \in J$, $i \ne j$, either
$$|s^k - (B^k)_{ii}| \, |s^k - (B^k)_{jj}| > r_i(B^k) \, r_j(B^k),$$
or, for some $t \in J$, $t \ne i$,
$$\left( |s^k - (B^k)_{ii}| + r_i^t(B^k) \right) |s^k - (B^k)_{tt}| < |(B^k)_{it}| \left( |(B^k)_{ti}| - r_t^i(B^k) \right),$$
then $A$ is nonsingular.
Note here that, taking $s = 0$ and $k = 1$, Corollary 2 reduces to Lemma 1. That is to say, Corollary 2 is a generalization of Lemma 1.
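A direct implementation of the test in Corollary 2 might look as follows (a sketch under our naming; a return value of False only means the test is inconclusive, not that $A$ is singular):

```python
import numpy as np

def nonsingular_by_corollary2(B, s, k):
    """Sufficient (not necessary) non-singularity test of Corollary 2
    for A = s*I - B and one fixed positive integer k."""
    Bk = np.linalg.matrix_power(B, k)
    d = np.abs(s ** k - np.diag(Bk))                  # |s^k - (B^k)_ii|
    r = np.abs(Bk).sum(axis=1) - np.abs(np.diag(Bk))  # r_i(B^k)
    n = len(d)

    def exclusion(i, t):
        r_it = r[i] - np.abs(Bk[i, t])
        r_ti = r[t] - np.abs(Bk[t, i])
        return (d[i] + r_it) * d[t] < np.abs(Bk[i, t]) * (np.abs(Bk[t, i]) - r_ti)

    # For every pair i != j: dominance product holds, or some t != i excludes 0.
    return all(d[i] * d[j] > r[i] * r[j] or
               any(exclusion(i, t) for t in range(n) if t != i)
               for i in range(n) for j in range(n) if j != i)
```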
Finally, an example is given to validate Theorem 5 and to show that different selections of $s$ and $k$ affect the eigenvalue localization and the determination of the non-singularity of $A = sI - B$.
Example 1.
Consider the matrix provided in [12]:
$$A = \begin{pmatrix} 14 & 0.01i & 0 & 18 - 2i \\ 0 & 9 & 4 + i & 0 \\ 0.01 + i & 2 + i & 11 & 0 \\ 10 + i & 0 & 0.1 + i & 10 \end{pmatrix}.$$
By computation, the spectrum of A is
$$\sigma(A) = \{ 25.5919 - 0.0647i, \; -1.6930 + 0.0821i, \; 13.0618 + 0.9825i, \; 7.0393 - 0.9999i \}.$$
The eigenvalue inclusion sets $\Gamma_k^s(A)$, $\Omega_k^s(A)$ and $\Delta_k^s(A)$ with different $s$ and $k$, together with all eigenvalues, are drawn in Figure 1, where $\Gamma_k^s(A)$, $\Omega_k^s(A)$ and $\Delta_k^s(A)$ are shown, respectively, by the black boundary, the yellow zone together with its interior, and the yellow zone. All eigenvalues are plotted as '+'.
From Figure 1, it can be seen that:
(1)
Whether $s = 10$, $k = 1, 2, 3$ or $s = 7$, $k = 2$, it can be seen that
$$\sigma(A) \subseteq \Delta_k^s(A) \subseteq \Omega_k^s(A) \subseteq \Gamma_k^s(A)$$
for the same $k$ and $s$, which is consistent with Theorem 5.
(2)
When $s = 10$, taking $k = 1$, $k = 2$ and $k = 3$, the differences among the three eigenvalue inclusion sets $\Delta_1^{10}(A)$, $\Delta_2^{10}(A)$ and $\Delta_3^{10}(A)$ are clear. This implies that, for the same $s$ but different $k$, the eigenvalue inclusion sets $\Delta_k^s(A)$ are different in general.
(3)
When $k = 2$, taking $s = 7$ and $s = 10$, the two eigenvalue inclusion sets $\Delta_2^7(A)$ and $\Delta_2^{10}(A)$ are also different, which implies that, for the same $k$ but different $s$, the eigenvalue inclusion sets $\Delta_k^s(A)$ are also different in general.
(4)
When $s = 7$ and $k = 2$, we can see that $0 \in \Delta_2^7(A)$. When $s = 10$ and $k = 1, 2, 3$, we can see that $0 \notin \Delta_1^{10}(A)$, $0 \notin \Delta_2^{10}(A)$ and $0 \notin \Delta_3^{10}(A)$. That is to say, we cannot determine the non-singularity of $A$ when $s = 7$ and $k = 2$, but we can do so when $s = 10$ and $k = 1, 2, 3$, respectively.
A natural and interesting problem then arises: how should $s$ and $k$ be chosen to minimize the eigenvalue inclusion set $\Delta_k^s(A)$ and to determine the non-singularity of $A = sI - B$? This question deserves further study in the future.

3. Infinity Norm Bounds for the Inverse of Eventually DSDD Matrices

Firstly, a lemma is listed, which is used to obtain the infinity norm bound for the inverse of eventually DSDD matrices.
Lemma 3
([13]). If $A$ is DSDD, then
$$\|A^{-1}\|_\infty \le \max_{i, j \in J, \, i \ne j} \frac{|a_{ii}| + r_j(A)}{|a_{ii}| \, |a_{jj}| - r_i(A) \, r_j(A)}.$$
Theorem 6.
Let $A$ be eventually DSDD. Then, for two certain numbers $s$ and $k$,
$$\|A^{-1}\|_\infty \le \Theta_k^s(A) = \max_{i, j \in J, \, i \ne j} \frac{\left( |s^k - (B^k)_{ii}| + r_j(B^k) \right) \|s^{k-1} I + s^{k-2} B + \cdots + s B^{k-2} + B^{k-1}\|_\infty}{|s^k - (B^k)_{ii}| \, |s^k - (B^k)_{jj}| - r_i(B^k) \, r_j(B^k)}. \qquad (2)$$
Proof. 
Since $A$ is eventually DSDD, there exists $k \in \mathbb{N}$ such that $s^k I - B^k$ is DSDD. Obviously, $A$ and $s^k I - B^k$ are both nonsingular. By
$$s^k I - B^k = (sI - B)(s^{k-1} I + s^{k-2} B + \cdots + s B^{k-2} + B^{k-1}),$$
we have
$$(sI - B)^{-1} = (s^k I - B^k)^{-1} (s^{k-1} I + s^{k-2} B + \cdots + s B^{k-2} + B^{k-1}).$$
Then,
$$\|A^{-1}\|_\infty \le \|(s^k I - B^k)^{-1}\|_\infty \, \|s^{k-1} I + s^{k-2} B + \cdots + s B^{k-2} + B^{k-1}\|_\infty.$$
Since $s^k I - B^k$ is DSDD, by Lemma 3, we have
$$\|(s^k I - B^k)^{-1}\|_\infty \le \max_{i, j \in J, \, i \ne j} \frac{|s^k - (B^k)_{ii}| + r_j(B^k)}{|s^k - (B^k)_{ii}| \, |s^k - (B^k)_{jj}| - r_i(B^k) \, r_j(B^k)}.$$
This implies that (2) holds. The proof is completed. ☐
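A sketch of evaluating $\Theta_k^s(A)$ from (2) is given below (the name and the convention of returning infinity when $s^k I - B^k$ fails to be DSDD are our assumptions):

```python
import numpy as np

def theta_bound(B, s, k):
    """Theta_k^s(A) of Theorem 6 for A = s*I - B; returns inf when
    s^k I - B^k is not DSDD, i.e. when the bound is not applicable."""
    Bk = np.linalg.matrix_power(B, k)
    d = np.abs(s ** k - np.diag(Bk))
    r = np.abs(Bk).sum(axis=1) - np.abs(np.diag(Bk))
    P = sum(s ** (k - 1 - j) * np.linalg.matrix_power(B, j) for j in range(k))
    p_norm = np.linalg.norm(P, ord=np.inf)
    n = len(d)
    best = 0.0
    for i in range(n):
        for j in range(n):
            if i != j:
                den = d[i] * d[j] - r[i] * r[j]
                if den <= 0:
                    return np.inf   # the pair (i, j) violates double dominance
                best = max(best, (d[i] + r[j]) * p_norm / den)
    return best
```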
Lemma 4.
If $A$ is SDD, then
$$\max_{i, j \in J, \, i \ne j} \frac{|a_{ii}| + r_j(A)}{|a_{ii}| \, |a_{jj}| - r_i(A) \, r_j(A)} \le \frac{1}{\min_{i \in J} \{ |a_{ii}| - r_i(A) \}}. \qquad (3)$$
Proof. 
Since $A$ is SDD, i.e., $|a_{ii}| > r_i(A)$ for all $i \in J$, we have $|a_{ii}| \, |a_{jj}| > r_i(A) \, r_j(A)$ for all $i, j \in J$, $i \ne j$. In order to obtain (3), it is sufficient to show that, for all $i, j \in J$, $i \ne j$, the following inequality holds:
$$\frac{|a_{ii}| + r_j(A)}{|a_{ii}| \, |a_{jj}| - r_i(A) \, r_j(A)} \le \frac{1}{\min \{ |a_{ii}| - r_i(A), \; |a_{jj}| - r_j(A) \}}.$$
Indeed, if
$$|a_{jj}| - r_j(A) \le |a_{ii}| - r_i(A),$$
then multiplying this inequality by $r_j(A)$ gives
$$|a_{jj}| \, r_j(A) - (r_j(A))^2 \le |a_{ii}| \, r_j(A) - r_i(A) \, r_j(A),$$
i.e.,
$$|a_{ii}| \, |a_{jj}| - |a_{ii}| \, r_j(A) + |a_{jj}| \, r_j(A) - (r_j(A))^2 \le |a_{ii}| \, |a_{jj}| - r_i(A) \, r_j(A).$$
This can be rewritten as
$$\left( |a_{ii}| + r_j(A) \right) \left( |a_{jj}| - r_j(A) \right) \le |a_{ii}| \, |a_{jj}| - r_i(A) \, r_j(A),$$
i.e.,
$$\frac{|a_{ii}| + r_j(A)}{|a_{ii}| \, |a_{jj}| - r_i(A) \, r_j(A)} \le \frac{1}{|a_{jj}| - r_j(A)} = \frac{1}{\min \{ |a_{ii}| - r_i(A), \; |a_{jj}| - r_j(A) \}}.$$
On the contrary, if
$$|a_{ii}| - r_i(A) \le |a_{jj}| - r_j(A),$$
then multiplying this inequality by $|a_{ii}|$, we get
$$|a_{ii}|^2 - r_i(A) \, |a_{ii}| \le |a_{ii}| \, |a_{jj}| - |a_{ii}| \, r_j(A),$$
i.e.,
$$|a_{ii}|^2 - |a_{ii}| \, r_i(A) + |a_{ii}| \, r_j(A) - r_i(A) \, r_j(A) \le |a_{ii}| \, |a_{jj}| - r_i(A) \, r_j(A),$$
which can be rewritten as
$$\left( |a_{ii}| + r_j(A) \right) \left( |a_{ii}| - r_i(A) \right) \le |a_{ii}| \, |a_{jj}| - r_i(A) \, r_j(A),$$
i.e.,
$$\frac{|a_{ii}| + r_j(A)}{|a_{ii}| \, |a_{jj}| - r_i(A) \, r_j(A)} \le \frac{1}{|a_{ii}| - r_i(A)} = \frac{1}{\min \{ |a_{ii}| - r_i(A), \; |a_{jj}| - r_j(A) \}}.$$
The proof is completed. ☐
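As a quick numerical sanity check of (3) (our own illustration, not part of the proof), one can sample random SDD matrices and compare the two sides:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    A = rng.standard_normal((5, 5))
    # Force strict diagonal dominance by inflating the diagonal.
    np.fill_diagonal(A, np.abs(A).sum(axis=1) + rng.random(5) + 0.1)
    d = np.abs(np.diag(A))
    r = np.abs(A).sum(axis=1) - d
    lhs = max((d[i] + r[j]) / (d[i] * d[j] - r[i] * r[j])
              for i in range(5) for j in range(5) if i != j)
    assert lhs <= 1.0 / np.min(d - r) + 1e-12
```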
Next, a comparison of the bounds in Theorems 1 and 6 is given.
Theorem 7.
If $A$ is eventually SDD, then, for two certain numbers $s$ and $k$,
$$\Theta_k^s(A) \le \Psi_k^s(A). \qquad (4)$$
Proof. 
Since $A$ is eventually SDD, i.e., $s^k I - B^k$ is SDD for some $s \in \mathbb{C}$ and some $k \in \mathbb{N}$, by Lemma 4 we have
$$\max_{i, j \in J, \, i \ne j} \frac{|s^k - (B^k)_{ii}| + r_j(B^k)}{|s^k - (B^k)_{ii}| \, |s^k - (B^k)_{jj}| - r_i(B^k) \, r_j(B^k)} \le \frac{1}{\min_{i \in J} \{ |s^k - (B^k)_{ii}| - r_i(B^k) \}}.$$
Furthermore, since $\|s^{k-1} I + s^{k-2} B + \cdots + s B^{k-2} + B^{k-1}\|_\infty \ge 0$, (4) holds clearly. ☐
Finally, an example is given to validate Theorem 7 and to show that different selections of $s$ and $k$ affect the infinity norm bounds for the inverse of $A = sI - B$.
Example 2.
Consider the matrix provided in [7]:
$$A = \begin{pmatrix} 3.9 & 1 & 1 & 1 & 1 \\ 1 & 5.9 & 1 & 1 & 1 \\ 1 & 1 & 3.9 & 1 & 1 \\ 1 & 1 & 1 & 5.9 & 1 \\ 1 & 1 & 1 & 1 & 3.9 \end{pmatrix}.$$
By computation, $\|A^{-1}\|_\infty = 0.4657$. Taking $s = 4, 5$ and $10$ and $k = 2, \ldots, 10$, respectively, we can validate that the matrices $s^k I - B^k$ are all SDD. Obviously, they are all DSDD too. Thus, Theorems 1 and 6 can both be used to estimate $\|A^{-1}\|_\infty$. The numerical results are listed in Table 1.
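Assuming the entrywise reading of $A$ given above, and reusing the psi_bound and theta_bound sketches from earlier sections, the entries of Table 1 can be tabulated as follows (a sketch only; the printed values should be checked against the table):

```python
import numpy as np

# Entrywise reading of the matrix A of Example 2.
A = np.ones((5, 5))
np.fill_diagonal(A, [3.9, 5.9, 3.9, 5.9, 3.9])

print(np.linalg.norm(np.linalg.inv(A), ord=np.inf))   # compare with 0.4657

for s in (4, 5, 10):
    B = s * np.eye(5) - A            # A = s*I - B  <=>  B = s*I - A
    for k in range(2, 11):
        print(s, k, psi_bound(B, s, k), theta_bound(B, s, k))
```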
Numerical results in Table 1 show that:
(1)
The upper bound $\Theta_k^s(A)$ obtained by Theorem 6 is less than or equal to the upper bound $\Psi_k^s(A)$ obtained by Theorem 1 for the same $k$ and $s$.
(2)
For the same $s$, the upper bounds $\Theta_k^s(A)$ and $\Psi_k^s(A)$ do not increase as $k$ increases. For the same $k$, the upper bounds $\Theta_k^5(A)$ and $\Psi_k^5(A)$ are less than or equal to the upper bounds $\Theta_k^4(A)$ and $\Theta_k^{10}(A)$, and $\Psi_k^4(A)$ and $\Psi_k^{10}(A)$, respectively. In addition, taking $s = 5$ and $k = 8$, the upper bound $\Theta_8^5(A) = \Psi_8^5(A) = 0.4657$, which reaches the true value of $\|A^{-1}\|_\infty$.
Another interesting problem then arises naturally: how should $s$ and $k$ be chosen to minimize the upper bound $\Theta_k^s(A)$? This question also deserves further study in the future.

4. Conclusions

In this paper, by excluding from $\Omega_k^s(A)$ some proper subsets that do not contain any eigenvalues of $A$, we obtain an eigenvalue localization set $\Delta_k^s(A)$ that is tighter than those in Theorems 2 and 3. Then, the bound $\Theta_k^s(A)$ for the infinity norm of the inverse of eventually DSDD matrices is given. Numerical examples show the effectiveness of the obtained results. However, one problem remains unsolved: how can $s$ and $k$ be chosen to minimize the eigenvalue localization set $\Delta_k^s(A)$ and the infinity norm bound $\Theta_k^s(A)$? This question is worthy of further study.
Finally, the relationship between eventually SDD matrices and DSDD matrices is discussed. Consider again the matrix $A$ in Example 2. It is not difficult to validate that $A$ is an eventually SDD matrix, but not a DSDD matrix, which implies that an eventually SDD matrix is not necessarily a DSDD matrix; that is,
$$\{\text{eventually SDD}\} \nsubseteq \{\mathrm{DSDD}\}.$$
Then, another meaningful question arises: whether DSDD matrices are eventually SDD matrices or not. That is, whether the relationship
$$\{\mathrm{DSDD}\} \subseteq \{\text{eventually SDD}\}$$
holds or not. This question is also interesting and worthy of further study.

Author Contributions

Conceptualization, Methodology, Software, Validation, Formal Analysis, Investigation, Resources, Data Curation, Writing-Original Draft Preparation, Writing-Review and Editing and Visualization were all performed by C.S.; Supervision, Project Administration and Funding Acquisition were all performed by J.Z.

Funding

This work was funded by the National Natural Science Foundation of China (Grant No. 11501141) and the Science and Technology Top-notch Talents Support Project of Education Department of Guizhou Province (Grant No. QJHKYZ [2016]066).

Acknowledgments

This work is supported by the National Natural Science Foundation of China (Grant No. 11501141), and the Science and Technology Top-Notch Talents Support Project of the Education Department of Guizhou Province (Grant No. QJHKYZ [2016]066).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Li, C.Q.; Li, Y.T. An eigenvalue localization set for tensors with applications to determine the positive (semi-)definiteness of tensors. Linear Multilinear Algebra 2016, 64, 587–601.
2. Sang, C.L.; Li, C.Q. Exclusion sets in eigenvalue localization sets for tensors. Linear Multilinear Algebra 2018.
3. Li, C.Q.; Pei, H.; Gao, A.N.; Li, Y.T. Improvements on the infinity norm bound for the inverse of Nekrasov matrices. Numer. Algorithms 2016, 71, 613–630.
4. Cicone, A.; Serra-Capizzano, S. Google page ranking problem: The model and the analysis. J. Comput. Appl. Math. 2010, 234, 3140–3169.
5. Varga, R.S. Matrix Iterative Analysis, 2nd revised and expanded ed.; Springer Series in Computational Mathematics; Springer: Berlin, Germany, 2012; ISBN 978-3-540-66321-8.
6. Gao, L.; Wang, Y.Q.; Li, C.Q.; Li, Y.T. Error bounds for linear complementarity problems of S-Nekrasov matrices and B-S-Nekrasov matrices. J. Comput. Appl. Math. 2018, 336, 147–159.
7. Cvetković, L.; Erić, M.; Peña, J.M. Eventually SDD matrices and eigenvalue localization. Appl. Math. Comput. 2015, 252, 535–540.
8. Liu, Q.; Li, Z.B.; Li, C.Q. A note on eventually SDD matrices and eigenvalue localization. Appl. Math. Comput. 2017, 311, 19–21.
9. Liu, Q. Eventually D-SDD, Eventually S-SDD Matrices and the Eigenvalue Inclusion Sets of Matrices; Yunnan University: Kunming, China, 2016.
10. Varga, R.S. Geršgorin and His Circles; Springer Series in Computational Mathematics; Springer: Berlin, Germany, 2004; ISBN 3-540-21100-4.
11. Cvetković, L. H-matrix theory vs. eigenvalue localization. Numer. Algorithms 2006, 42, 229–245.
12. Li, S.H.; Li, C.Q.; Li, Y.T. Exclusion sets for eigenvalues of matrices. arXiv 2017, arXiv:1705.01758v2.
13. Kostić, V.R.; Cvetković, L.; Cvetković, D.L. Pseudospectra localizations and their applications. Numer. Linear Algebra Appl. 2016, 23, 356–372.
Figure 1. Comparisons of $\Gamma_k^s(A)$, $\Omega_k^s(A)$ and $\Delta_k^s(A)$ with different $s$ and $k$.
Table 1. Comparisons of $\Psi_k^s(A)$ and $\Theta_k^s(A)$ with different $s$ and $k$.

         $\Psi_k^4(A)$   $\Theta_k^4(A)$   $\Psi_k^5(A)$   $\Theta_k^5(A)$   $\Psi_k^{10}(A)$   $\Theta_k^{10}(A)$
k = 2       2.5392          2.5392            0.5319          0.5245            1.1173             0.9454
k = 3       0.6152          0.6152            0.4775          0.4774            0.7152             0.6472
k = 4       0.5118          0.5118            0.4682          0.4682            0.5939             0.5608
k = 5       0.4764          0.4764            0.4663          0.4663            0.5388             0.5224
k = 6       0.4690          0.4690            0.4658          0.4658            0.5094             0.5014
k = 7       0.4665          0.4665            0.4658          0.4658            0.4925             0.4887
k = 8       0.4659          0.4659            0.4657          0.4657            0.4823             0.4806
k = 9       0.4658          0.4658            0.4657          0.4657            0.4760             0.4754
k = 10      0.4657          0.4657            0.4657          0.4657            0.4721             0.4719
