Symmetry
  • Article
  • Open Access

6 December 2017

Determinant Formulae of Matrices with Certain Symmetry and Its Applications

Department of Mathematics, Kyungpook National University, Daegu 702-701, Korea
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.

Abstract

In this paper, we introduce formulae for the determinants of matrices with certain symmetry. As applications, we will study the Alexander polynomial and the determinant of a periodic link which is presented as the closure of an oriented 4-tangle.

1. Introduction

A block matrix is a matrix that is partitioned into submatrices, called blocks, such that the subscripts of the blocks are defined in the same fashion as those for the entries of a matrix [1]. For example,
$$\begin{pmatrix} a_{11} & a_{12} & a_{13} & a_{14} & a_{15}\\ a_{21} & a_{22} & a_{23} & a_{24} & a_{25}\\ a_{31} & a_{32} & a_{33} & a_{34} & a_{35}\\ a_{41} & a_{42} & a_{43} & a_{44} & a_{45}\\ a_{51} & a_{52} & a_{53} & a_{54} & a_{55} \end{pmatrix} = \begin{pmatrix} A_{11} & A_{12}\\ A_{21} & A_{22} \end{pmatrix}.$$
Let us consider a periodic object that consists of a finite number of units with the same properties; see Figure 1, in which five such units are arranged periodically. The brown colour depicts their periodic placement, while the green colour in the second and third pictures represents an extra relationship acting between neighbouring units.
Figure 1. Types of periodic objects.
The matrix encoding the properties of the whole object can be presented by a block matrix. In the first picture in Figure 1, a finite number of units are placed periodically, but there is no interaction between them. The matrix for such a periodic object has the form
$$\begin{pmatrix} A & O & O\\ O & A & O\\ O & O & A \end{pmatrix}.\tag{1}$$
Indeed, the determinant of this matrix is $(\det A)^{n}$, where $n$ is the number of diagonal blocks.
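This identity is easy to check on a concrete instance. The following Python snippet is not part of the original paper; it is a small sanity check using exact rational arithmetic, with an arbitrarily chosen 2 × 2 block $A$.

```python
from fractions import Fraction

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, sign, result = len(M), 1, Fraction(1)
    for i in range(n):
        pivot = next((r for r in range(i, n) if M[r][i] != 0), None)
        if pivot is None:
            return Fraction(0)          # zero column: singular matrix
        if pivot != i:
            M[i], M[pivot] = M[pivot], M[i]
            sign = -sign                # a row swap flips the sign
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [a - f * b for a, b in zip(M[r], M[i])]
        result *= M[i][i]
    return sign * result

def block_diag(A, n):
    """n copies of A on the diagonal, zero blocks elsewhere."""
    m = len(A)
    M = [[0] * (m * n) for _ in range(m * n)]
    for k in range(n):
        for i in range(m):
            for j in range(m):
                M[k * m + i][k * m + j] = A[i][j]
    return M

A = [[2, 1], [3, 5]]                            # arbitrary block, det A = 7
assert det(block_diag(A, 3)) == det(A) ** 3     # (det A)^n with n = 3
```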
In the second picture in Figure 1, a finite number of units are placed periodically and each unit is affected by its neighbouring units. The matrix for such a periodic object has the form
$$\begin{pmatrix} A & O & O & B\\ O & A & O & B\\ O & O & A & B\\ C & C & C & nD \end{pmatrix}.\tag{2}$$
In [2], the authors showed that the determinant of such a matrix is given by $n^{r}(\det(A))^{n-1}\det\begin{pmatrix} A & B\\ C & D \end{pmatrix}$, where $D$ is an $r\times r$ matrix.
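This result of [2] can also be verified on a concrete instance. The sketch below is not from the original paper; the sizes $m = 2$, $r = 1$, $n = 3$ and all entries are arbitrary choices.

```python
from fractions import Fraction

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, sign, result = len(M), 1, Fraction(1)
    for i in range(n):
        pivot = next((r for r in range(i, n) if M[r][i] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != i:
            M[i], M[pivot] = M[pivot], M[i]
            sign = -sign
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [a - f * b for a, b in zip(M[r], M[i])]
        result *= M[i][i]
    return sign * result

def form2(A, B, C, D, n):
    """Bordered block matrix: n copies of A on the diagonal, B in the
    last block column, C in the last block row, and nD in the corner."""
    m, r = len(A), len(D)
    N = n * m + r
    M = [[Fraction(0)] * N for _ in range(N)]
    for k in range(n):
        for i in range(m):
            for j in range(m):
                M[k * m + i][k * m + j] = A[i][j]      # A on the diagonal
            for j in range(r):
                M[k * m + i][n * m + j] = B[i][j]      # B column
        for i in range(r):
            for j in range(m):
                M[n * m + i][k * m + j] = C[i][j]      # C row
    for i in range(r):
        for j in range(r):
            M[n * m + i][n * m + j] = n * D[i][j]      # nD corner
    return M

# illustrative values, not from the paper
A, B, C, D, n, r = [[2, 1], [3, 5]], [[1], [2]], [[4, 1]], [[3]], 3, 1
ABCD = [[2, 1, 1], [3, 5, 2], [4, 1, 3]]   # the bordered 3x3 matrix (A B; C D)
assert det(form2(A, B, C, D, n)) == n**r * det(A)**(n - 1) * det(ABCD)
```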
On the other hand, in the last picture in Figure 1, a finite number of units are placed periodically and each unit is affected by the periodic structure itself (rather than by its neighbouring units). The matrix for such a periodic object can be presented by a matrix of the form
$$\begin{pmatrix} A & O & O & O & B & B & B\\ O & A & O & O & B & O & O\\ O & O & A & O & O & B & O\\ O & O & O & A & O & O & B\\ C & C & O & O & 2D & D & D\\ C & O & C & O & D & 2D & D\\ C & O & O & C & D & D & 2D \end{pmatrix}\tag{3}$$
or by a matrix of the form
$$\begin{pmatrix} A & O & O & O & B & B & B\\ O & A & O & O & B & O & O\\ O & O & A & O & O & B & O\\ O & O & O & A & O & O & B\\ C & C & O & O & D+E & D & D\\ C & O & C & O & E & D+E & D\\ C & O & O & C & E & E & D+E \end{pmatrix}.\tag{4}$$
The applications in the last section will be helpful for understanding the difference between (3) and (4). In this paper, we will show that the determinant of the matrix (3) is
$$n^{r}\det(A)\left(\det\begin{pmatrix} A & B\\ C & D \end{pmatrix}\right)^{n-1},\tag{5}$$
while the determinant of the matrix (4) is
$$\det(A)\sum_{k=1}^{n}\left(\det\begin{pmatrix} A & B\\ C & D \end{pmatrix}\right)^{n-k}\left(\det\begin{pmatrix} A & B\\ C & E \end{pmatrix}\right)^{k-1}.\tag{6}$$
As an application, we will find the Alexander polynomial and the determinant of a periodic link (Theorems 4–7). Notice that, if a matrix M is singular, then we define ( det ( M ) ) 0 = 1 .
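Before turning to the proofs, both determinant formulae can be sanity-checked numerically. The following sketch is not part of the original paper; it assembles matrices of forms (3) and (4) for arbitrarily chosen $m = 2$, $r = 1$, $n = 3$ and compares both sides exactly.

```python
from fractions import Fraction

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, sign, result = len(M), 1, Fraction(1)
    for i in range(n):
        pivot = next((r for r in range(i, n) if M[r][i] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != i:
            M[i], M[pivot] = M[pivot], M[i]
            sign = -sign
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [a - f * b for a, b in zip(M[r], M[i])]
        result *= M[i][i]
    return sign * result

def periodic(A, B, C, diag, up, low, n):
    """Matrix of form (3)/(4): n copies of A; the first A-row carries B in
    every B-column, the (j+2)th A-row carries B in the jth B-column; each
    C-row carries C in the first and one further A-column; the lower-right
    part has `diag` on its diagonal, `up` above it and `low` below it."""
    m, r = len(A), len(diag)
    N = n * m + (n - 1) * r
    M = [[Fraction(0)] * N for _ in range(N)]
    def place(blk, ri, ci):
        for i, row in enumerate(blk):
            for j, x in enumerate(row):
                M[ri + i][ci + j] = Fraction(x)
    for k in range(n):
        place(A, k * m, k * m)
    for j in range(n - 1):
        place(B, 0, n * m + j * r)               # first A-row
        place(B, (j + 1) * m, n * m + j * r)     # (j+2)th A-row
        place(C, n * m + j * r, 0)               # first A-column
        place(C, n * m + j * r, (j + 1) * m)     # (j+2)th A-column
        for i in range(n - 1):
            place(diag if i == j else (up if j > i else low),
                  n * m + i * r, n * m + j * r)
    return M

# illustrative values, not from the paper
A, B, C, n, r = [[2, 1], [3, 5]], [[1], [2]], [[4, 1]], 3, 1
D, E = 3, 2
dD = det([[2, 1, 1], [3, 5, 2], [4, 1, 3]])       # det(A B; C D)
dE = det([[2, 1, 1], [3, 5, 2], [4, 1, 2]])       # det(A B; C E)

M3 = periodic(A, B, C, [[2 * D]], [[D]], [[D]], n)   # form (3)
M4 = periodic(A, B, C, [[D + E]], [[D]], [[E]], n)   # form (4)
assert det(M3) == n**r * det(A) * dD**(n - 1)                     # formula (5)
assert det(M4) == det(A) * sum(dD**(n - k) * dE**(k - 1)
                               for k in range(1, n + 1))          # formula (6)
```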

2. Determinants

In this section, we introduce formulae for the determinants of block matrices (3) and (4).
Theorem 1.
Let $A$, $B$, $C$, and $D$ be matrices of size $m\times m$, $m\times r$, $r\times m$ and $r\times r$, respectively, and let $O$ denote the zero matrix. Then
$$\det\begin{pmatrix} A & O & O & O & B & B & B\\ O & A & O & O & B & O & O\\ O & O & A & O & O & B & O\\ O & O & O & A & O & O & B\\ C & C & O & O & 2D & D & D\\ C & O & C & O & D & 2D & D\\ C & O & O & C & D & D & 2D \end{pmatrix} = n^{r}\det(A)\left(\det\begin{pmatrix} A & B\\ C & D \end{pmatrix}\right)^{n-1},$$
where the number of $A$'s on the diagonal is $n$ and the number of $(2D)$'s on the diagonal is $n-1$ $(n\geq 2)$.
Proof. 
The identity can be obtained by elementary determinant calculation. We put the detailed proof at Appendix A for the convenience of readers. ☐
Theorem 2.
Let $A$, $B$, $C$, $D$ and $E$ be matrices of size $m\times m$, $m\times 1$, $1\times m$, $1\times 1$ and $1\times 1$, respectively, and let $O$ denote the zero matrix. Then
$$\det\begin{pmatrix} A & O & O & O & B & B & B\\ O & A & O & O & B & O & O\\ O & O & A & O & O & B & O\\ O & O & O & A & O & O & B\\ C & C & O & O & D+E & D & D\\ C & O & C & O & E & D+E & D\\ C & O & O & C & E & E & D+E \end{pmatrix} = \det(A)\sum_{k=1}^{n}\left(\det\begin{pmatrix} A & B\\ C & D \end{pmatrix}\right)^{n-k}\left(\det\begin{pmatrix} A & B\\ C & E \end{pmatrix}\right)^{k-1},$$
where the number of $A$'s on the diagonal is $n$ and the number of $(D+E)$'s on the diagonal is $n-1$ $(n\geq 2)$.
Proof. 
$$\mathrm{LHS} \overset{(1)}{=} \det\begin{pmatrix} A & O & O & O & B & B & B\\ O & A & O & O & 2B & B & B\\ O & O & A & O & B & 2B & B\\ O & O & O & A & B & B & 2B\\ O & C & O & O & D+E & D & D\\ O & O & C & O & E & D+E & D\\ O & O & O & C & E & E & D+E \end{pmatrix} \overset{(2)}{=} \det(A)\sum_{k=1}^{n}\left(\det\begin{pmatrix} A & B\\ C & D \end{pmatrix}\right)^{n-k}\left(\det\begin{pmatrix} A & B\\ C & E \end{pmatrix}\right)^{k-1}.$$
The identities can be obtained by elementary determinant calculations. Identity (1) follows by adding $(-1)$ times the $k$th column to the first column and then adding the first row to the $k$th row, for each $k = 2, 3, \ldots, n$, while identity (2) is proved in Appendix A. ☐
Remark 1.
If $A$ is invertible, then Theorems 1 and 2 can be proved by using the Schur complement. We give those proofs in Appendix B. The authors are grateful to the reviewer for these valuable comments.

Acknowledgments

This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2016R1D1A3B01007669).

Author Contributions

These authors contributed equally to this work.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Proof of Theorem 1.
$$\begin{aligned} \mathrm{LHS} &\overset{(1)}{=} \det\begin{pmatrix} A & O & O & O & B & B & B\\ O & A & O & O & 2B & B & B\\ O & O & A & O & B & 2B & B\\ O & O & O & A & B & B & 2B\\ O & C & O & O & 2D & D & D\\ O & O & C & O & D & 2D & D\\ O & O & O & C & D & D & 2D \end{pmatrix} \overset{(2)}{=} \det(A)\det\begin{pmatrix} A & O & O & 2B & B & nB\\ O & A & O & B & 2B & nB\\ O & O & A & B & B & nB\\ C & O & O & 2D & D & nD\\ O & C & O & D & 2D & nD\\ O & O & C & D & D & nD \end{pmatrix}\\ &\overset{(3)}{=} n^{r}\det(A)\det\begin{pmatrix} A & O & O & 2B & B & B\\ O & A & O & B & 2B & B\\ O & O & A & B & B & B\\ C & O & O & 2D & D & D\\ O & C & O & D & 2D & D\\ O & O & C & D & D & D \end{pmatrix} \overset{(4)}{=} n^{r}\det(A)\det\begin{pmatrix} A & O & O & B & O & B\\ O & A & O & O & B & B\\ O & O & A & O & O & B\\ C & O & O & D & O & D\\ O & C & O & O & D & D\\ O & O & C & O & O & D \end{pmatrix}\\ &\overset{(5)}{=} n^{r}\det(A)\det\begin{pmatrix} A & O & O & B & O & O\\ O & A & O & O & B & O\\ O & O & A & O & O & B\\ C & O & O & D & O & O\\ O & C & O & O & D & O\\ O & O & C & O & O & D \end{pmatrix} = n^{r}\det(A)\left(\det\begin{pmatrix} A & B\\ C & D \end{pmatrix}\right)^{n-1}. \end{aligned}$$
The reasons for the identities (1)–(5) are the following:
(1)
Add $(-1)$ times the $k$th column to the first column, and then add the first row to the $k$th row, for each $k = 2, 3, \ldots, n$.
(2)
Add the $k$th column to the last column for each $k = n, n+1, \ldots, 2n-3$; the factor $\det(A)$ comes from expanding along the first column.
(3)
$B$ and $D$ are an $m\times r$ matrix and an $r\times r$ matrix, respectively, so the factor $n$ can be taken out of each of the $r$ scalar columns of the last block column.
(4)
Add $(-1)$ times the last column to the $k$th column for each $k = n, n+1, \ldots, 2n-3$.
(5)
Add $(-1)$ times the $k$th column to the last column for each $k = n, n+1, \ldots, 2n-3$.
This completes the proof of Theorem 1. ☐
Proof of Theorem 2.
To prove
$$\det\begin{pmatrix} A & O & O & 2B & B & B\\ O & A & O & B & 2B & B\\ O & O & A & B & B & 2B\\ C & O & O & D+E & D & D\\ O & C & O & E & D+E & D\\ O & O & C & E & E & D+E \end{pmatrix} = \sum_{k=1}^{n}\left(\det\begin{pmatrix} A & B\\ C & D \end{pmatrix}\right)^{n-k}\left(\det\begin{pmatrix} A & B\\ C & E \end{pmatrix}\right)^{k-1},$$
where the number of $A$'s and the number of $(D+E)$'s on the diagonal are both $n-1$ $(n\geq 2)$, we proceed by mathematical induction on $n$, the number of blocks $A$. Since $D$ and $E$ are $1\times 1$ and the determinant is linear in its last column, it is clear that $\det\begin{pmatrix} A & 2B\\ C & D+E \end{pmatrix} = \det\begin{pmatrix} A & B\\ C & D \end{pmatrix} + \det\begin{pmatrix} A & B\\ C & E \end{pmatrix}$, which settles the case $n = 2$.
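The base case can be confirmed on a concrete instance. The check below is not from the original paper; $A$ is an arbitrary $2\times 2$ matrix and $B$, $C$, $D$, $E$ are arbitrary, with $D$ and $E$ scalars.

```python
from fractions import Fraction

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, sign, result = len(M), 1, Fraction(1)
    for i in range(n):
        pivot = next((r for r in range(i, n) if M[r][i] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != i:
            M[i], M[pivot] = M[pivot], M[i]
            sign = -sign
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [a - f * b for a, b in zip(M[r], M[i])]
        result *= M[i][i]
    return sign * result

# illustrative values, not from the paper: A = [[2,1],[3,5]], B = (1,2)^T,
# C = (4,1), D = 3, E = 2
A2B_DE = [[2, 1, 2], [3, 5, 4], [4, 1, 5]]   # (A 2B; C D+E)
AB_D   = [[2, 1, 1], [3, 5, 2], [4, 1, 3]]   # (A B; C D)
AB_E   = [[2, 1, 1], [3, 5, 2], [4, 1, 2]]   # (A B; C E)
assert det(A2B_DE) == det(AB_D) + det(AB_E)
```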
Assume that the formula is true for $n-2$ $(n\geq 3)$. Since the determinant is linear in the last column, it can be obtained by adding the determinants of the following two matrices.
$$\begin{pmatrix} A & O & O & O & 2B & B & B & B\\ O & A & O & O & B & 2B & B & B\\ O & O & A & O & B & B & 2B & B\\ O & O & O & A & B & B & B & B\\ C & O & O & O & D+E & D & D & D\\ O & C & O & O & E & D+E & D & D\\ O & O & C & O & E & E & D+E & D\\ O & O & O & C & E & E & E & D \end{pmatrix}$$
and
$$\begin{pmatrix} A & O & O & O & 2B & B & B & O\\ O & A & O & O & B & 2B & B & O\\ O & O & A & O & B & B & 2B & O\\ O & O & O & A & B & B & B & B\\ C & O & O & O & D+E & D & D & O\\ O & C & O & O & E & D+E & D & O\\ O & O & C & O & E & E & D+E & O\\ O & O & O & C & E & E & E & E \end{pmatrix}$$
Their determinants can be obtained by simple calculations. Indeed, the determinant of the first matrix (I) is
$$\begin{aligned} (\mathrm{I}) &\overset{(1)}{=} \det\begin{pmatrix} A & O & O & -A & B & O & O & O\\ O & A & O & -A & O & B & O & O\\ O & O & A & -A & O & O & B & O\\ O & O & O & A & B & B & B & B\\ C & O & O & -C & D & D-E & D-E & O\\ O & C & O & -C & O & D & D-E & O\\ O & O & C & -C & O & O & D & O\\ O & O & O & C & E & E & E & D \end{pmatrix} \overset{(2)}{=} \det\begin{pmatrix} A & O & O & O & B & O & O & O\\ O & A & O & O & O & B & O & O\\ O & O & A & O & O & O & B & O\\ O & O & O & A & O & O & O & B\\ C & O & O & O & D & D-E & D-E & O\\ O & C & O & O & O & D & D-E & O\\ O & O & C & O & O & O & D & O\\ O & O & O & C & E-D & E-D & E-D & D \end{pmatrix}\\ &= \det\begin{pmatrix} A & B\\ C & D \end{pmatrix}\det\begin{pmatrix} A & O & O & B & O & O\\ O & A & O & O & B & O\\ O & O & A & O & O & B\\ C & O & O & D & D-E & D-E\\ O & C & O & O & D & D-E\\ O & O & C & O & O & D \end{pmatrix} \overset{(3)}{=} \left(\det\begin{pmatrix} A & B\\ C & D \end{pmatrix}\right)^{n-1}, \end{aligned}$$
while the determinant of the second matrix (II) is
$$\begin{aligned} (\mathrm{II}) &\overset{(4)}{=} \det\begin{pmatrix} A & O & O & O & 2B & B & B & O\\ O & A & O & O & B & 2B & B & O\\ O & O & A & O & B & B & 2B & O\\ O & O & O & A & O & O & O & B\\ C & O & O & O & D+E & D & D & O\\ O & C & O & O & E & D+E & D & O\\ O & O & C & O & E & E & D+E & O\\ O & O & O & C & O & O & O & E \end{pmatrix} = \det\begin{pmatrix} A & B\\ C & E \end{pmatrix}\det\begin{pmatrix} A & O & O & 2B & B & B\\ O & A & O & B & 2B & B\\ O & O & A & B & B & 2B\\ C & O & O & D+E & D & D\\ O & C & O & E & D+E & D\\ O & O & C & E & E & D+E \end{pmatrix}\\ &= \det\begin{pmatrix} A & B\\ C & E \end{pmatrix}\sum_{k=1}^{n-1}\left(\det\begin{pmatrix} A & B\\ C & D \end{pmatrix}\right)^{n-1-k}\left(\det\begin{pmatrix} A & B\\ C & E \end{pmatrix}\right)^{k-1}. \end{aligned}$$
The reasons for the identities (1)–(4) are the following:
(1)
Add $(-1)$ times the $(n-1)$th row to the $k$th row for each $k = 1, 2, \ldots, n-2$, and then add $(-1)$ times the $(2n-2)$th row to the $k$th row for each $k = n, n+1, \ldots, 2n-3$.
(2)
Add the $k$th column to the $(n-1)$th column for each $k = 1, 2, \ldots, n-2$, and then add $(-1)$ times the $(2n-2)$th column to the $k$th column for each $k = n, n+1, \ldots, 2n-3$.
(3)
Apply the following identity repeatedly:
$$\det\begin{pmatrix} A & O & O & B & O & O\\ O & A & O & O & B & O\\ O & O & A & O & O & B\\ C & O & O & D & D-E & D-E\\ O & C & O & O & D & D-E\\ O & O & C & O & O & D \end{pmatrix} = \det\begin{pmatrix} A & B\\ C & D \end{pmatrix}\det\begin{pmatrix} A & O & B & O\\ O & A & O & B\\ C & O & D & D-E\\ O & C & O & D \end{pmatrix}.$$
(4)
Add $(-1)$ times the $(2n-2)$th column to the $k$th column for each $k = n, n+1, \ldots, 2n-3$.
Therefore the determinant of our matrix is given by
$$\sum_{k=1}^{n}\left(\det\begin{pmatrix} A & B\\ C & D \end{pmatrix}\right)^{n-k}\left(\det\begin{pmatrix} A & B\\ C & E \end{pmatrix}\right)^{k-1}.$$
By using this result, we can prove Theorem 2. ☐

Appendix B

The following proofs use more sophisticated matrix-theoretic tools and hence are more compact. We assume that $A$ is invertible so that the Schur complement can be defined. However, this is only a technical assumption: the final formula in Theorem 1 depends continuously on the entries of $A$ (and in particular on $\det(A)$), and the set of invertible matrices is dense in the set of all matrices (see [19]); therefore, if the statement holds whenever $\det(A)\neq 0$, it must also hold when $\det(A) = 0$.
Two preliminary tools are needed. The first is the Schur complement (see [20] and references therein) of a block matrix $X = \begin{pmatrix} A_{1,1} & A_{1,2}\\ A_{2,1} & A_{2,2} \end{pmatrix}$, which corresponds to a single block step of Gaussian elimination, so that the determinant of $X$ is equal to
$$\det(A_{1,1})\det(S),\qquad S = A_{2,2} - A_{2,1}A_{1,1}^{-1}A_{1,2}.\tag{A1}$$
The second is the tensor product [19] of square matrices $Y$, $Z$, for which
$$\det(Y\otimes Z) = \det(Y)^{v}\det(Z)^{u},\tag{A2}$$
where $Y$ is square of size $u$ and $Z$ is square of size $v$.
Now we are ready to prove Theorem 1. First we observe that $A_{1,1} = I_{n}\otimes A$, so that by (A2) we have $\det(A_{1,1}) = \det(A)^{n}$. Second, we compute the Schur complement $S$ according to (A1) and find
$$S = (I_{n-1} + ee^{T})\otimes(D - CA^{-1}B),$$
with $e^{T} = (1, \ldots, 1)$ the vector of all ones of size $n-1$. Now $D - CA^{-1}B$ is the Schur complement of $\begin{pmatrix} A & B\\ C & D \end{pmatrix}$, and hence by the first part of (A1) we have $\det\begin{pmatrix} A & B\\ C & D \end{pmatrix} = \det(A)\det(D - CA^{-1}B)$. The matrix $ee^{T}$ has rank 1, and hence it has $n-2$ eigenvalues equal to zero and one eigenvalue coinciding with its trace, that is, $n-1$. Hence the matrix $I_{n-1} + ee^{T}$ has eigenvalue 1 with multiplicity $n-2$ and eigenvalue $n$ with multiplicity 1, so that its determinant is $n$ and
$$\det(S) = \det(I_{n-1} + ee^{T})^{r}\det(D - CA^{-1}B)^{n-1} = n^{r}\left(\det\begin{pmatrix} A & B\\ C & D \end{pmatrix}\right)^{n-1}\det(A)^{1-n},$$
so that, by putting together the latter relation and $\det(A_{1,1}) = \det(A)^{n}$, we obtain
$$\det(X) = \det\begin{pmatrix} A_{1,1} & A_{1,2}\\ A_{2,1} & A_{2,2} \end{pmatrix} = n^{r}\det(A)\left(\det\begin{pmatrix} A & B\\ C & D \end{pmatrix}\right)^{n-1}$$
and Theorem 1 is proved.
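Both tools used above can be illustrated on small examples. The snippet below is not part of the original paper; the sizes $n = 4$, $u = v = 2$ and all entries are arbitrary choices. It verifies that $\det(I_{n-1} + ee^{T}) = n$ and that $\det(Y\otimes Z) = \det(Y)^{v}\det(Z)^{u}$.

```python
from fractions import Fraction

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, sign, result = len(M), 1, Fraction(1)
    for i in range(n):
        pivot = next((r for r in range(i, n) if M[r][i] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != i:
            M[i], M[pivot] = M[pivot], M[i]
            sign = -sign
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [a - f * b for a, b in zip(M[r], M[i])]
        result *= M[i][i]
    return sign * result

def kron(Y, Z):
    """Tensor (Kronecker) product of two matrices."""
    p, q = len(Z), len(Z[0])
    return [[Y[i // p][j // q] * Z[i % p][j % q]
             for j in range(len(Y[0]) * q)] for i in range(len(Y) * p)]

n = 4
# I_{n-1} + e e^T: all entries 1, plus an extra 1 on the diagonal
IplusJ = [[2 if i == j else 1 for j in range(n - 1)] for i in range(n - 1)]
assert det(IplusJ) == n        # eigenvalue 1 (n-2 times) and n (once)

Y = [[2, 1], [0, 3]]           # size u = 2, arbitrary entries
Z = [[1, 2], [3, 4]]           # size v = 2, arbitrary entries
assert det(kron(Y, Z)) == det(Y) ** 2 * det(Z) ** 2   # det(Y)^v det(Z)^u
```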
With the very same tools and by following the same steps, Theorem 2 can be proven as well.

References

  1. Anton, H. Elementary Linear Algebra; John Wiley & Sons: New York, NY, USA, 1994. [Google Scholar]
  2. Bae, Y.; Lee, I.S. On Alexander polynomial of periodic links. J. Knot Theory Ramif. 2011, 20, 749–761. [Google Scholar] [CrossRef]
  3. Banks, J.E. Homogeneous links, Seifert surfaces, digraphs and the reduced Alexander polynomial. Geom. Dedicata 2013, 166, 67–98. [Google Scholar] [CrossRef]
  4. Cromwell, P. Knots and Links; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
  5. Kawauchi, A. A Survey of Knot Theory; Birkhäuser Verlag: Basel, Switzerland; Boston, MA, USA, 1996. [Google Scholar]
  6. Bae, Y.; Lee, I.S. On Seifert matrices of symmetric links. Kyungpook Math. J. 2011, 51, 261–281. [Google Scholar] [CrossRef]
  7. Alexander, J. Topological invariants for knots and links. Trans. Am. Math. Soc. 1928, 30, 275–306. [Google Scholar] [CrossRef]
  8. Kauffman, L.H. Formal Knot Theory; Mathematical Notes, 30; Princeton University Press: Princeton, NJ, USA, 1983. [Google Scholar]
  9. Conway, J. An enumeration of knots and links and some of their algebraic properties. In Proceedings of the Conference on Computational Problems in Abstract Algebra, Oxford, UK, 29 August–2 September 1967. [Google Scholar]
  10. Hillman, J.A.; Livingston, C.; Naik, S. Twisted Alexander polynomials of periodic knots. Algebr. Geom. Topol. 2006, 6, 145–169. [Google Scholar] [CrossRef]
  11. Kitayama, T. Twisted Alexander polynomials and ideal points giving Seifert surfaces. Acta Math. Vietnam. 2014, 39, 567–574. [Google Scholar] [CrossRef]
  12. Ozsváth, P.; Szabó, Z. Holomorphic disks and knot invariants. Adv. Math. 2004, 186, 58–116. [Google Scholar]
  13. Rasmussen, J.A. Floer Homology and Knot Complements. Ph.D. Thesis, Harvard University, Cambridge, MA, USA, March 2003. [Google Scholar]
  14. Kauffman, L.H.; Silvero, M. Alexander-Conway polynomial state model and link homology. J. Knot Theory Ramif. 2015, 25. [Google Scholar] [CrossRef]
  15. Ozsváth, P.; Szabó, Z. An introduction to Heegaard Floer homology. Available online: http://math.mit.edu/~petero/Introduction.pdf (accessed on 12 August 2017).
  16. Sartori, A. The Alexander polynomial as quantum invariant of links. Ark. Mat. 2015, 53, 1–26. [Google Scholar] [CrossRef]
  17. Murasugi, K. On periodic knots. Comment. Math. Helv. 1971, 46, 162–177. [Google Scholar] [CrossRef]
  18. Sakuma, M.A. On the polynomials of periodic links. Math. Ann. 1981, 257, 487–494. [Google Scholar] [CrossRef]
  19. Bhatia, R. Matrix Analysis; Springer Verlag: New York, NY, USA, 1997. [Google Scholar]
  20. Dorostkar, A.; Neytcheva, M.; Serra-Capizzano, S. Spectral analysis of coupled PDEs and of their Schur complements via the notion of Generalized Locally Toeplitz sequences. Comput. Methods Appl. Mech. Eng. 2016, 309, 74–105. [Google Scholar] [CrossRef]
