Article

Recursive Matrix Calculation Paradigm by the Example of Structured Matrix

by
Jerzy S. Respondek
Institute of Computer Science, Faculty of Automatic Control, Electronics and Computer Science, Silesian University of Technology, ul. Akademicka 16, 44-100 Gliwice, Poland
Information 2020, 11(1), 42; https://doi.org/10.3390/info11010042
Submission received: 1 December 2019 / Revised: 31 December 2019 / Accepted: 6 January 2020 / Published: 13 January 2020
(This article belongs to the Special Issue Selected Papers from ESM 2019)

Abstract
In this paper, we derive recursive algorithms for calculating the determinant and inverse of the generalized Vandermonde matrix. The main advantage of the recursive algorithms is that their computational complexity is better than that of calculating the determinant and the inverse by classical methods developed for general matrices. The results of this article do not require any symbolic calculations and, therefore, can be performed by a numerical algorithm implemented in a specialized environment (like Matlab or Mathematica) or a general-purpose programming language (C, C++, Java, Pascal, Fortran, etc.).

1. Introduction

In previous studies [1,2], we proposed a classical numerical method for inverting the generalized Vandermonde matrix (GVM). The new contributions in this article are as follows:
  • We derive recursive algorithms for calculating the determinant and inverse of the generalized Vandermonde matrix.
  • The importance of the recursive algorithms becomes clear in practical applications of the GVM; they are useful each time we add a new interpolation node or a new root of the differential equation in question.
  • The recursive algorithms proposed in this work allow us to avoid recalculating the determinant and/or inverse from scratch.
  • The main advantage of the recursive algorithms is that the computational complexity of the determinant update is of the O(n) class.
  • The results of this article do not require any symbolic calculations and, therefore, can be performed by a numerical algorithm implemented in a specialized environment (like Matlab or Mathematica) or a general-purpose programming language (C, C++, Java, Pascal, Fortran, etc.).
In this article, we neatly combine the results from previous studies [3,4] and extend the computational examples.
The main results of this article are shown in Algorithms 1 and 2. The paper is organized as follows: Section 2 justifies the importance of the generalized Vandermonde matrices, Section 3 gives the recursive algorithms for the generalized Vandermonde matrix determinant, Section 4 gives two recursive algorithms for calculating the desired inverse, Section 5 presents, with an example, the application of the proposed algorithms, Section 6 outlines future research directions, and Section 7 summarizes the article.

2. Practical Importance of the Generalized Vandermonde Matrix

In this article, we consider the generalized Vandermonde matrix (GVM) of the form proposed by El-Mikkawy [5]. The classical form is considered in References [6,7]. For $n \in \mathbb{Z}^+$ pairwise distinct real roots $c_1, \ldots, c_n$ and a real constant exponent $k$, we define the GVM as follows:
$$V_G^{(k)}(c_1, \ldots, c_n) = \begin{bmatrix} c_1^k & c_1^{k+1} & \cdots & c_1^{k+n-1} \\ c_2^k & c_2^{k+1} & \cdots & c_2^{k+n-1} \\ \vdots & \vdots & \ddots & \vdots \\ c_n^k & c_n^{k+1} & \cdots & c_n^{k+n-1} \end{bmatrix}. \tag{1}$$
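For readers who prefer a computational view, the following short NumPy sketch (my own illustration, not part of the paper's algorithms; the helper name gvm is assumed) builds the matrix of Equation (1) directly from its definition:

```python
import numpy as np

def gvm(roots, k):
    """Generalized Vandermonde matrix V_G^(k): entry (i, j) equals c_i^(k + j), j = 0, ..., n-1."""
    c = np.asarray(roots, dtype=float)
    n = c.size
    return c[:, None] ** (k + np.arange(n))

# Example: the 3x3 GVM with roots 1, 2, 3 and exponent k = 0.5
print(gvm([1.0, 2.0, 3.0], 0.5))
```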
These matrices arise in a broad range of both theoretical and practical issues. Below, we survey the issues which require the use of the generalized Vandermonde matrices.
Linear, ordinary differential equations (ODE): the Jordan canonical form matrix of the ODE in the Frobenius form is a generalized Vandermonde matrix ([8] pp. 86–95).
Control issues: investigating the so-called controllability [9] of the higher-order systems leads to the issue of inverting the classic Vandermonde matrix [10] (in the case of distinct zeros of the system characteristic polynomial) and the generalized Vandermonde matrix [11] (for systems with multiple characteristic polynomial zeros). As the examples of the higher-order models of the physical objects, we can mention Timoshenko’s elastic beam equation [12] (fourth order) and Korteweg-de Vries’s equation of waves on shallow water surfaces [13,14] (third, fifth, and seventh order).
Interpolation: apart from the ordinary polynomial interpolation with single nodes, we consider the Hermite interpolation, which allows multiple interpolation nodes. This issue leads to a system of linear equations with a generalized Vandermonde matrix ([15] pp. 363–373).
Information coding: the generalized Vandermonde matrix is used in coding and decoding information in the Hermitian code [16].
Optimization of the non-homogeneous differential equation [17].

3. Algorithms for the Generalized Vandermonde Matrix Determinant

In this section, we propose a library of recursive algorithms for the calculation of the generalized Vandermonde matrix determinant. These algorithms solve the following set of practically important, incremental problems:
(A) Suppose we have the value of the Vandermonde determinant for a given series of roots $c_1, \ldots, c_{n-1}$. How can we calculate the determinant after inserting another root into an arbitrary position in the root series, without recalculating the whole determinant? This problem corresponds to a situation which frequently emerges in practice, i.e., adding a new node (polynomial interpolation) or increasing the order of the characteristic equation (linear differential equation solving, optimization, and control problems).
(B) Contrary to the previous scenario, we have the Vandermonde determinant value for a given root series $c_1, \ldots, c_n$, and we remove an arbitrary root $c_q$ from the series. How can we recursively calculate the determinant in this case? The real applications from the previous point also apply here. The proper solution is given in Section 3.1.
(C) We are searching for the determinant value when, in the given root series $c_1, \ldots, c_n$, we change the value of an arbitrarily chosen root (Section 3.1).
(D) We are searching for the determinant value for the given root series $c_1, \ldots, c_n$, calculated recursively.
The theorem below is the main tool for constructing the above recursive algorithms.

3.1. The Recursive Determinant Formula

Theorem 1.
The following recursive formula is fulfilled for the generalized Vandermonde matrix:
$$\det V_G^{(k)}(c_1, \ldots, c_n) = (-1)^{q+1}\, c_q^k \, \det V_G^{(k)}(c_1, \ldots, c_{q-1}, c_{q+1}, \ldots, c_n) \prod_{\substack{i=1 \\ i \neq q}}^{n} (c_i - c_q), \quad q = 1, \ldots, n. \tag{2}$$
Proof. 
Applying the standard determinant linear properties, we can obtain
$$\det V_G^{(k)}(c_1, \ldots, c_n) = \det \begin{bmatrix} c_1^k & c_1^{k+1} & \cdots & c_1^{k+n-1} \\ \vdots & \vdots & & \vdots \\ c_q^k & c_q^{k+1} & \cdots & c_q^{k+n-1} \\ \vdots & \vdots & & \vdots \\ c_n^k & c_n^{k+1} & \cdots & c_n^{k+n-1} \end{bmatrix} = \det \begin{bmatrix} c_1^k & c_1^k(c_1 - c_q) & \cdots & c_1^k(c_1^{n-1} - c_1^{n-2} c_q) \\ \vdots & \vdots & & \vdots \\ c_q^k & 0 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ c_n^k & c_n^k(c_n - c_q) & \cdots & c_n^k(c_n^{n-1} - c_n^{n-2} c_q) \end{bmatrix}.$$
Next, in compliance with Laplace's expansion formula applied to the q-th row, we directly have
$$\det V_G^{(k)}(c_1, \ldots, c_n) = (-1)^{q+1} c_q^k \det \begin{bmatrix} c_1^k(c_1 - c_q) & c_1^{k+1}(c_1 - c_q) & \cdots & c_1^{k+n-2}(c_1 - c_q) \\ \vdots & \vdots & & \vdots \\ c_{q-1}^k(c_{q-1} - c_q) & c_{q-1}^{k+1}(c_{q-1} - c_q) & \cdots & c_{q-1}^{k+n-2}(c_{q-1} - c_q) \\ c_{q+1}^k(c_{q+1} - c_q) & c_{q+1}^{k+1}(c_{q+1} - c_q) & \cdots & c_{q+1}^{k+n-2}(c_{q+1} - c_q) \\ \vdots & \vdots & & \vdots \\ c_n^k(c_n - c_q) & c_n^{k+1}(c_n - c_q) & \cdots & c_n^{k+n-2}(c_n - c_q) \end{bmatrix} = (-1)^{q+1} c_q^k \det \begin{bmatrix} c_1^k & c_1^{k+1} & \cdots & c_1^{k+n-2} \\ \vdots & \vdots & & \vdots \\ c_{q-1}^k & c_{q-1}^{k+1} & \cdots & c_{q-1}^{k+n-2} \\ c_{q+1}^k & c_{q+1}^{k+1} & \cdots & c_{q+1}^{k+n-2} \\ \vdots & \vdots & & \vdots \\ c_n^k & c_n^{k+1} & \cdots & c_n^{k+n-2} \end{bmatrix} \prod_{\substack{i=1 \\ i \neq q}}^{n} (c_i - c_q).$$
This concludes the proof of Equation (2). ☐
Directly from Theorem 1, we can obtain the algorithms below for the incremental problems A–D. The detailed implementation of these formulas is straightforward and omitted.
Cases A, B: All we need to do is apply Equation (2).
Case C: Let us assume that, for the given root series $c_1, \ldots, c_n$, the corresponding determinant value is equal to $\det V_G^{(k)}(c_1, \ldots, c_q, \ldots, c_n)$. Our objective is to find the value of the determinant $\det V_G^{(k)}(c_1, \ldots, c_q + \Delta c_q, \ldots, c_n)$. Applying Equation (2) twice, we obtain the following expression for the searched determinant:
$$\det V_G^{(k)}(c_1, \ldots, c_q + \Delta c_q, \ldots, c_n) = \left( \frac{c_q + \Delta c_q}{c_q} \right)^{k} \frac{\prod_{i=1,\, i \neq q}^{n} (c_i - c_q - \Delta c_q)}{\prod_{i=1,\, i \neq q}^{n} (c_i - c_q)} \,\det V_G^{(k)}(c_1, \ldots, c_q, \ldots, c_n), \quad q = 1, \ldots, n.$$
Case D: The proper recursive function expressing the determinant value for the given root series $c_1, \ldots, c_n$ has the following form:
$$\det V_G^{(k)}(c_1, \ldots, c_q) = \begin{cases} (-1)^{q+1}\, c_q^k\, \det V_G^{(k)}(c_1, \ldots, c_{q-1}) \displaystyle\prod_{i=1}^{q-1} (c_i - c_q), & \text{for } q > 1, \\[1ex] c_1^k, & \text{for } q = 1. \end{cases}$$
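To illustrate how the incremental problems A–D translate into code, the following Python sketch implements the determinant updates of this section in floating-point arithmetic (the function names are mine and purely illustrative; Case B is simply the inversion of Case A):

```python
import numpy as np

def det_insert_root(det_old, roots_old, c_new, k, q):
    """Case A: determinant after inserting c_new at (0-based) position q into roots_old,
    given det_old = det V_G^(k)(roots_old); this is Equation (2) read left to right."""
    prod = np.prod([c - c_new for c in roots_old])
    return (-1) ** q * c_new ** k * det_old * prod   # (-1)^(q+1) for the 1-based position q+1

def det_change_root(det_old, roots, k, q, delta):
    """Case C: determinant after replacing roots[q] by roots[q] + delta (Equation (2) applied twice)."""
    c_q = roots[q]
    others = [c for i, c in enumerate(roots) if i != q]
    ratio = np.prod([c - c_q - delta for c in others]) / np.prod([c - c_q for c in others])
    return ((c_q + delta) / c_q) ** k * ratio * det_old

def det_recursive(roots, k):
    """Case D: build det V_G^(k)(c_1, ..., c_n) by appending one root at a time."""
    det = roots[0] ** k
    for q in range(1, len(roots)):
        det = det_insert_root(det, roots[:q], roots[q], k, q)
    return det

# Usage: determinant for the roots 1..4 and k = 0.5
print(det_recursive([1.0, 2.0, 3.0, 4.0], 0.5))
```

Each call of det_insert_root and det_change_root performs O(n) floating-point operations, in line with the complexity discussion in Section 3.2, while det_recursive costs O(n²).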

3.2. Computational Complexity of the Proposed Algorithms

The following facts are worth noting:
The computational complexity of the presented Algorithms A–C is of the O(n) class with respect to the number of floating-point operations to be performed. This enables us to efficiently solve the incremental Vandermonde problems, avoiding the quadratic complexity typical in the Vandermonde field (e.g., References [14,18]).
Algorithm D is of the O(n²) class, i.e., it is more efficient than the ordinary Gauss elimination method by a linear factor.

3.3. Special Cases

In this section, we give special forms of Algorithms A–D tuned for two special cases of the generalized Vandermonde matrix, i.e., for equidistant roots and for roots equal to consecutive positive integers.

3.3.1. Generalized Vandermonde Matrix with Equidistant Roots

Let us take into account the GVM with the equidistant roots of the form $c_i = c_1 + (i-1)h$, $h \in \mathbb{R}$. In this special case, Formula (2) becomes
$$\det V_G^{(k)}(c_1, \ldots, c_n) = (q-1)!\,(n-q)!\, h^{n-1}\, c_q^k \,\det V_G^{(k)}(c_1, \ldots, c_{q-1}, c_{q+1}, \ldots, c_n), \quad q = 1, \ldots, n,$$
and Algorithms A–D change to the recursive Equation (3).

3.3.2. Generalized Vandermonde Matrix with Positive Integer Roots

Reference [5] considers a special case of the GVM, obtained from Equation (1) by setting $c_i = i$, $i = 1, \ldots, n$; this matrix is denoted by $V_S^{(k)}(n)$:
$$V_S^{(k)}(1, \ldots, n) = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ 2^k & 2^{k+1} & \cdots & 2^{k+n-1} \\ \vdots & \vdots & \ddots & \vdots \\ n^k & n^{k+1} & \cdots & n^{k+n-1} \end{bmatrix}.$$
For the special matrix $V_S^{(k)}$, Equation (3) becomes
$$\det V_S^{(k)}(1, \ldots, n) = (q-1)!\,(n-q)!\, c_q^k\, \det V_S^{(k)}(1, \ldots, q-1, q+1, \ldots, n), \quad q = 1, \ldots, n.$$
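As a quick illustration of this special case, the sketch below (my own helper, under the assumption that the root being removed is always the largest one, i.e., q = n) accumulates the determinant of $V_S^{(k)}(1, \ldots, n)$ by appending the roots 2, 3, ..., n one at a time; each step multiplies the previous determinant by $(q-1)!\, q^k$:

```python
from math import factorial

def det_integer_roots(n, k):
    """det V_S^(k)(1, ..., n): grow the root series 1, 2, ..., n one root at a time,
    applying the integer-root special case with q equal to the newly added (largest) root."""
    det = 1.0                               # 1x1 case: det = c_1^k = 1^k = 1
    for q in range(2, n + 1):
        det *= factorial(q - 1) * q ** k    # (q-1)! * (n-q)! * c_q^k with q = n, so (n-q)! = 1
    return det

# For n = 7 and k = 0.5 this gives 298598400 * sqrt(35) ~ 1.7665e9,
# the value used in the example of Section 5.
print(det_integer_roots(7, 0.5))
```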

4. Algorithms for the Generalized Vandermonde Matrix Inverse

In this section, we give a recursive algorithm to invert the generalized Vandermonde matrix of Equation (1). First, let us recall the known, non-recursive results on this topic [5]. Reference [5] features an explicit form of the GVM inverse, which makes use of the so-called elementary symmetric functions, defined below.

4.1. Definition of the Elementary Symmetric Functions

If the $n$ parameters $c_1, c_2, \ldots, c_n$ are distinct, then the elementary symmetric functions $\sigma_{i,j}^{(n)}$ in $c_1, c_2, \ldots, c_{j-1}, c_{j+1}, \ldots, c_n$ are defined for $i, j = 1, \ldots, n$ in El-Mikkawy [5] (p. 644) by
$$\begin{cases} \sigma_{1,j}^{(n)} = 1, \\[1ex] \sigma_{i,j}^{(n)} = \displaystyle\sum_{\substack{r_1 = 1 \\ r_1 \neq j}}^{n} \ \sum_{\substack{r_2 = r_1 + 1 \\ r_2 \neq j}}^{n} \cdots \sum_{\substack{r_{i-1} = r_{i-2} + 1 \\ r_{i-1} \neq j}}^{n} \ \prod_{m=1}^{i-1} c_{r_m}, & \text{for } i = 2, \ldots, n. \end{cases} \tag{6}$$
An efficient algorithm of the O(n²) computational complexity class for calculating the elementary symmetric functions in Equation (6) is given in Reference [5]. Now, it is possible to present the explicit form of the GVM inverse given in Reference [5] (p. 647):
$$[V_G^{(k)}(c_1, \ldots, c_n)]^{-1} = \begin{bmatrix} \dfrac{(-1)^{n+1}\, \sigma_{n,1}^{(n)}}{c_1^k \prod_{i=2}^{n}(c_1 - c_i)} & \dfrac{(-1)^{n+1}\, \sigma_{n,2}^{(n)}}{c_2^k \prod_{i=1, i \neq 2}^{n}(c_2 - c_i)} & \cdots & \dfrac{(-1)^{n+1}\, \sigma_{n,n}^{(n)}}{c_n^k \prod_{i=1}^{n-1}(c_n - c_i)} \\[2ex] \dfrac{(-1)^{n+2}\, \sigma_{n-1,1}^{(n)}}{c_1^k \prod_{i=2}^{n}(c_1 - c_i)} & \dfrac{(-1)^{n+2}\, \sigma_{n-1,2}^{(n)}}{c_2^k \prod_{i=1, i \neq 2}^{n}(c_2 - c_i)} & \cdots & \dfrac{(-1)^{n+2}\, \sigma_{n-1,n}^{(n)}}{c_n^k \prod_{i=1}^{n-1}(c_n - c_i)} \\ \vdots & \vdots & \ddots & \vdots \\ \dfrac{(-1)^{n+n}\, \sigma_{1,1}^{(n)}}{c_1^k \prod_{i=2}^{n}(c_1 - c_i)} & \dfrac{(-1)^{n+n}\, \sigma_{1,2}^{(n)}}{c_2^k \prod_{i=1, i \neq 2}^{n}(c_2 - c_i)} & \cdots & \dfrac{(-1)^{n+n}\, \sigma_{1,n}^{(n)}}{c_n^k \prod_{i=1}^{n-1}(c_n - c_i)} \end{bmatrix}. \tag{7}$$
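The sketch below (my own illustration; the names sigma and gvm_inverse_explicit are assumed) evaluates Equation (7) numerically. It does not reproduce El-Mikkawy's O(n²) scheme for the symmetric functions; instead, sigma simply expands the polynomial with the j-th root removed for each column, so the whole sketch costs O(n³):

```python
import numpy as np

def sigma(c, j):
    """Elementary symmetric functions of the roots with c[j] removed:
    returns s with s[i-1] = sigma_{i, j+1} = e_{i-1}(c without c[j])."""
    n = len(c)
    e = np.zeros(n)
    e[0] = 1.0                                   # e_0 = 1, i.e. sigma_{1,j}
    m = 0
    for r, c_r in enumerate(c):
        if r == j:
            continue
        m += 1
        e[1:m + 1] = e[1:m + 1] + c_r * e[0:m]   # multiply by one more factor (x + c_r)
    return e

def gvm_inverse_explicit(c, k):
    """Explicit GVM inverse of Equation (7): entry (r, j) is
    (-1)^(n+r) * sigma_{n+1-r, j} / (c_j^k * prod_{i != j}(c_j - c_i)), with 1-based r and j."""
    c = np.asarray(c, dtype=float)
    n = c.size
    inv = np.zeros((n, n))
    for j in range(n):
        s = sigma(c, j)
        denom = c[j] ** k * np.prod(c[j] - np.delete(c, j))
        for r in range(n):
            inv[r, j] = (-1) ** (n + r + 1) * s[n - r - 1] / denom
    return inv

# Quick check: the product with the corresponding GVM should be close to the identity.
c = np.array([1.0, 2.0, 3.0, 4.0])
V = c[:, None] ** (0.5 + np.arange(4))
print(np.allclose(gvm_inverse_explicit(c, 0.5) @ V, np.eye(4)))
```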
Let us return to the objective of this section, i.e., the construction of an efficient, recursive algorithm for inverting the generalized Vandermonde matrix. The issue can be formalized as follows: we know the GVM inverse for the root series $c_1, \ldots, c_n$, and we want to efficiently calculate the inverse for the root series $c_1, \ldots, c_n, c_{n+1}$, making use of the known inverse. Let $V_G^{(k)}(n+1) = V_G^{(k)}(c_1, c_2, \ldots, c_{n+1})$; then, the theorem below enables recursively calculating the desired inverse.

4.2. Theorem of the Recursive Inverse

Theorem 2.
The inverse generalized Vandermonde matrix $[V_G^{(k)}(n+1)]^{-1}$, corresponding to the root series $c_1, \ldots, c_{n+1}$, can be expressed by the following block matrix:
$$[V_G^{(k)}(n+1)]^{-1} = \begin{bmatrix} [V_G^{(k)}(n)]^{-1} + \dfrac{[V_G^{(k)}(n)]^{-1} \begin{bmatrix} c_1^{k+n} \\ \vdots \\ c_n^{k+n} \end{bmatrix} \begin{bmatrix} c_{n+1}^{k} & \cdots & c_{n+1}^{k+n-1} \end{bmatrix} [V_G^{(k)}(n)]^{-1}}{d} & -\dfrac{[V_G^{(k)}(n)]^{-1} \begin{bmatrix} c_1^{k+n} \\ \vdots \\ c_n^{k+n} \end{bmatrix}}{d} \\[3ex] -\dfrac{\begin{bmatrix} c_{n+1}^{k} & \cdots & c_{n+1}^{k+n-1} \end{bmatrix} [V_G^{(k)}(n)]^{-1}}{d} & \dfrac{1}{d} \end{bmatrix}, \tag{8}$$
$$d = c_{n+1}^{k+n} - \begin{bmatrix} c_{n+1}^{k} & \cdots & c_{n+1}^{k+n-1} \end{bmatrix} [V_G^{(k)}(n)]^{-1} \begin{bmatrix} c_1^{k+n} \\ \vdots \\ c_n^{k+n} \end{bmatrix}, \tag{9}$$
where $[V_G^{(k)}(n)]^{-1}$ denotes the known GVM inverse for the roots $c_1, \ldots, c_n$.
Proof. 
To prove the matrix recursive identity in Equation (8), we make use of the block matrix algebra rules. A useful formula, expressing the block matrix inverse by the inverses of the respective sub-matrices, is
$$A^{-1} = \begin{bmatrix} A_1 & A_2 \\ A_3 & A_4 \end{bmatrix}^{-1} = \begin{bmatrix} A_1^{-1} + A_1^{-1} A_2 B^{-1} A_3 A_1^{-1} & -A_1^{-1} A_2 B^{-1} \\ -B^{-1} A_3 A_1^{-1} & B^{-1} \end{bmatrix}, \qquad B = A_4 - A_3 A_1^{-1} A_2. \tag{10}$$
Thus, if we know the inverse of the sub-matrix $A_1$ and the inverse of $B$, we can directly obtain the inverse of the block matrix $A$ by performing a few matrix multiplications. For pairwise distinct $c_1, c_2, \ldots, c_{n+1}$, the GVM is invertible and Equation (10) holds true; thus, the coefficient $d$ given by Equation (9) is non-zero. Now, let us take into account the generalized Vandermonde matrix $V_G^{(k)}(n+1)$ for the $n+1$ roots $c_1, \ldots, c_{n+1}$. It can be treated as a block matrix of the following form:
$$V_G^{(k)}(n+1) = \begin{bmatrix} c_1^k & \cdots & c_1^{k+n-1} & c_1^{k+n} \\ \vdots & & \vdots & \vdots \\ c_n^k & \cdots & c_n^{k+n-1} & c_n^{k+n} \\ c_{n+1}^k & \cdots & c_{n+1}^{k+n-1} & c_{n+1}^{k+n} \end{bmatrix} = \begin{bmatrix} V_G^{(k)}(n) & \begin{bmatrix} c_1^{k+n} \\ \vdots \\ c_n^{k+n} \end{bmatrix} \\ \begin{bmatrix} c_{n+1}^k & \cdots & c_{n+1}^{k+n-1} \end{bmatrix} & c_{n+1}^{k+n} \end{bmatrix}. \tag{11}$$
Now, applying the block matrix identity in Equation (10) to the block matrix in Equation (11), we directly obtain the thesis, i.e., Equations (8) and (9). ☐
Despite the explicit form of the block matrix inverse in Equation (8), its efficient algorithmic implementation is not obvious. The order in which we calculate the matrix term $[V_G^{(k)}(n)]^{-1} [c_1^{k+n} \cdots c_n^{k+n}]^T [c_{n+1}^{k} \cdots c_{n+1}^{k+n-1}] [V_G^{(k)}(n)]^{-1}$ has a crucial influence on the final computational complexity of the algorithm. Therefore, let us analyze all three possible orders.
(A) Left-to-right order of multiplications.
One can notice that $[V_G^{(k)}(n)]^{-1} [c_1^{k+n} \cdots c_n^{k+n}]^T [c_{n+1}^{k} \cdots c_{n+1}^{k+n-1}]$ has dimensions $n \times n$. Hence, the multiplication of this intermediate matrix by the matrix $[V_G^{(k)}(n)]^{-1}$ is an O(n³) class operation. Thus, the computational complexity in the left-to-right order is of the O(n³) class.
(B) Right-to-left order.
A detailed analysis also leads to the O(n³) class.
(C) The order of the following form:
$$\left\{ [V_G^{(k)}(n)]^{-1} \begin{bmatrix} c_1^{k+n} & \cdots & c_n^{k+n} \end{bmatrix}^T \right\} \left\{ \begin{bmatrix} c_{n+1}^{k} & \cdots & c_{n+1}^{k+n-1} \end{bmatrix} [V_G^{(k)}(n)]^{-1} \right\}. \tag{12}$$
In this case, at first, we perform the following two multiplications:
  • The multiplication $[V_G^{(k)}(n)]^{-1} [c_1^{k+n} \cdots c_n^{k+n}]^T$ requires O(n²) operations; as the result, we get an n-element column vector.
  • The multiplication $[c_{n+1}^{k} \cdots c_{n+1}^{k+n-1}] [V_G^{(k)}(n)]^{-1}$ requires O(n²) operations; as the result, we get an n-element row vector.
Finally, all we have to do is multiply these two vectors (a column by a row), which is obviously an operation of the O(n²) class.
Summarizing, the most efficient is the multiplication order (C), giving quadratic computational complexity. All other orders lead to worse, O(n³) class algorithms. Combining the above results, we can give the algorithm which solves the incremental inverse problem, i.e., calculating the GVM inverse for the root series $c_1, \ldots, c_n, c_{n+1}$ on the basis of the known inverse for the root series $c_1, \ldots, c_n$.

4.3. Algorithm 1

Using the incremental Algorithm 1, we can build the final, recursive algorithm for inverting the generalized Vandermonde matrix of the form in Equation (1).
Algorithm 1: Incremental Inverting of the Generalized Vandermonde Matrix
1. Function Incremental_Inverse(n; k; $c_1, \ldots, c_{n+1}$; $[V_G^{(k)}(n)]^{-1}$): $[V_G^{(k)}(n+1)]^{-1}$
2. Input:
   - $n$: integer, the number of roots minus 1
   - $k$: real, the Vandermonde matrix general exponent
   - $c_1, \ldots, c_{n+1}$: real, the roots
   - $[V_G^{(k)}(n)]^{-1}$: real$^{n \times n}$, the GVM inverse for the roots $c_1, \ldots, c_n$
3. Locals:
   - $\bar{v}_1, \bar{v}_2$: real$^{n}$
   - $d$: real
4. Calculate the auxiliary vectors $\bar{v}_1, \bar{v}_2$:
   $\bar{v}_1 := [V_G^{(k)}(n)]^{-1} [c_1^{k+n} \cdots c_n^{k+n}]^T$,
   $\bar{v}_2 := [c_{n+1}^{k} \cdots c_{n+1}^{k+n-1}] [V_G^{(k)}(n)]^{-1}$.
5. Calculate the coefficient $d$ using Equation (9):
   $d := c_{n+1}^{k+n} - \bar{v}_2 [c_1^{k+n} \cdots c_n^{k+n}]^T$.
6. Build the desired matrix inverse $[V_G^{(k)}(n+1)]^{-1}$ as a block matrix:
   $[V_G^{(k)}(n+1)]^{-1} := \begin{bmatrix} [V_G^{(k)}(n)]^{-1} + \dfrac{\bar{v}_1 \bar{v}_2}{d} & -\dfrac{\bar{v}_1}{d} \\[1ex] -\dfrac{\bar{v}_2}{d} & \dfrac{1}{d} \end{bmatrix}$.
7. Output:
   - $[V_G^{(k)}(n+1)]^{-1}$: real$^{(n+1) \times (n+1)}$, the inverse for the roots $c_1, \ldots, c_n, c_{n+1}$.
8. End.
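A possible NumPy rendering of Algorithm 1 is sketched below (my own translation of the pseudocode, under the assumption of 0-based indexing; only the multiplication order of Equation (12) is essential):

```python
import numpy as np

def incremental_inverse(k, c, V_inv):
    """Algorithm 1: given V_inv = [V_G^(k)(n)]^{-1} for the roots c[0..n-1],
    return the (n+1) x (n+1) inverse for the roots c[0..n]."""
    c = np.asarray(c, dtype=float)
    n = c.size - 1                                # number of already inverted roots
    col = c[:n] ** (k + n)                        # column [c_1^{k+n}, ..., c_n^{k+n}]^T
    row = c[n] ** (k + np.arange(n))              # row [c_{n+1}^k, ..., c_{n+1}^{k+n-1}]
    v1 = V_inv @ col                              # step 4, first auxiliary vector
    v2 = row @ V_inv                              # step 4, second auxiliary vector
    d = c[n] ** (k + n) - v2 @ col                # step 5, the scalar d of Equation (9)
    out = np.empty((n + 1, n + 1))
    out[:n, :n] = V_inv + np.outer(v1, v2) / d    # step 6, upper-left block
    out[:n, n] = -v1 / d                          # last column
    out[n, :n] = -v2 / d                          # last row
    out[n, n] = 1.0 / d
    return out
```

Only vector-by-matrix products and one outer product appear here, so a single call costs O(n²) floating-point operations, as discussed in Section 4.2.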

4.4. Computational Complexity

It is possible to note the following advantages of the computational complexity of inverting the GVM by the recursive algorithms in comparison with the classical Equation (7), whose complexity is O(n³) (the classical Equation (7) requires calculating the elementary symmetric functions $\sigma_{i,j}^{(n)}$ for $i, j = 1, \ldots, n$; to this aim, the quadratic-complexity Algorithm 2.1 of Reference [5] (p. 644) has to be executed n times (Formula 2.5, p. 645)):
As we analyzed in point (C) of Section 4.2, the computational complexity of the incremental Algorithm 1, which is constructed on the basis of Equation (12), is of the O(n²) class with respect to the number of floating-point operations which have to be performed. This is possible thanks to the proper multiplication order in Equation (12). This way, we avoid the O(n³) complexity while adding a new root.
The computational complexity of the recursive Algorithm 2 is of the O(n³) class.
Algorithm 2: Recursive Inverting of the Generalized Vandermonde Matrix
1. Function Inverse(n; k; $c_1, \ldots, c_n$): $[V_G^{(k)}(n)]^{-1}$
2. Input:
   - $n$: integer, the number of roots
   - $k$: real, the Vandermonde matrix general exponent
   - $c_1, \ldots, c_n$: real, the roots
3. Locals:
   - $V$: array [ ][ ] of real
   - $i$: integer
4. Initialize the variable $V$ with the 1 × 1 inverse:
   $V := \left[ \dfrac{1}{c_1^{k}} \right]$.
5. For $i = 1$ To $n - 1$
      $V :=$ Incremental_Inverse($i$, $k$, $c_1, \ldots, c_{i+1}$, $V$)
   Next $i$
6. Output:
   - $V$: real$^{n \times n}$, the inverse for the roots $c_1, \ldots, c_n$.
7. End.
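In the same sketch notation as above, Algorithm 2 is just a loop around the incremental_inverse function introduced after Algorithm 1 (this reuse is an assumption of my rendering, not additional machinery from the paper):

```python
import numpy as np

def inverse(k, c):
    """Algorithm 2: [V_G^(k)(n)]^{-1} for the roots c[0..n-1], built one root at a time."""
    c = np.asarray(c, dtype=float)
    V = np.array([[1.0 / c[0] ** k]])             # step 4: inverse of the 1x1 matrix [c_1^k]
    for i in range(1, c.size):
        V = incremental_inverse(k, c[:i + 1], V)  # step 5: add the root c[i]
    return V
```

Each pass of the loop costs O(i²), so the whole construction is O(n³), in agreement with the complexity stated above; its practical benefit, as noted below, is that the already calculated inverse is reused every time a new root is added.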
Last but not least, Equation (7) requires recalculating the desired inverse each time we add a new root. The main idea of the proposed recursive algorithms is to make use of the already calculated inverse; this is how the high efficiency is obtained. In the graphs below, we compare the practical efficiency of the recursive algorithms with the standard algorithms embedded in Matlab® (which are non-recursive and designed for matrices with arbitrary entries, contrary to the algorithms presented in this article, which are developed specially for the GVM). On the left, we can see the execution time of the standard and recursive algorithms, and, on the right, we can see the relative performance gain (recursive algorithms vs. the standard ones for inversion and determinant calculation).
Figure 1 and Figure 2 show practical performance tests of Algorithms 1 and 2.

5. Example

We show a practical application of the algorithms from this article using the same numerical example as in Reference [5] (p. 649), to enable an easy comparison of the two approaches: classical and recursive. Let us consider the generalized Vandermonde matrix $V_G^{(k)}(n)$ and its inverse $[V_G^{(k)}(n)]^{-1}$ with the following parameters:
- general exponent: $k = 0.5$;
- size: $n = 7$;
- roots: $c_i = i$, $i = 1, \ldots, 7$.
The generalized Vandermonde matrix of such parameters has the following form:
$$V_G^{(0.5)}(7) = \begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ \sqrt{2} & 2\sqrt{2} & 4\sqrt{2} & 8\sqrt{2} & 16\sqrt{2} & 32\sqrt{2} & 64\sqrt{2} \\ \sqrt{3} & 3\sqrt{3} & 9\sqrt{3} & 27\sqrt{3} & 81\sqrt{3} & 243\sqrt{3} & 729\sqrt{3} \\ 2 & 8 & 32 & 128 & 512 & 2048 & 8192 \\ \sqrt{5} & 5\sqrt{5} & 25\sqrt{5} & 125\sqrt{5} & 625\sqrt{5} & 3125\sqrt{5} & 15625\sqrt{5} \\ \sqrt{6} & 6\sqrt{6} & 36\sqrt{6} & 216\sqrt{6} & 1296\sqrt{6} & 7776\sqrt{6} & 46656\sqrt{6} \\ \sqrt{7} & 7\sqrt{7} & 49\sqrt{7} & 343\sqrt{7} & 2401\sqrt{7} & 16807\sqrt{7} & 117649\sqrt{7} \end{bmatrix}.$$
The determinant of the matrix $V_G^{(0.5)}(7)$ and its inverse have the following forms, respectively:
$$\det V_G^{(0.5)}(7) = 298598400\sqrt{35},$$
$$[V_G^{(0.5)}(7)]^{-1} = \begin{bmatrix} 7 & -\frac{21\sqrt{2}}{2} & \frac{35\sqrt{3}}{3} & -\frac{35}{2} & \frac{21\sqrt{5}}{5} & -\frac{7\sqrt{6}}{6} & \frac{\sqrt{7}}{7} \\[1ex] -\frac{223}{20} & \frac{879\sqrt{2}}{40} & -\frac{949\sqrt{3}}{36} & 41 & -\frac{201\sqrt{5}}{20} & \frac{1019\sqrt{6}}{360} & -\frac{7\sqrt{7}}{20} \\[1ex] \frac{319}{45} & -\frac{3929\sqrt{2}}{240} & \frac{389\sqrt{3}}{18} & -\frac{2545}{72} & \frac{134\sqrt{5}}{15} & -\frac{1849\sqrt{6}}{720} & \frac{29\sqrt{7}}{90} \\[1ex] -\frac{37}{16} & \frac{71\sqrt{2}}{12} & -\frac{1219\sqrt{3}}{144} & \frac{44}{3} & -\frac{185\sqrt{5}}{48} & \frac{41\sqrt{6}}{36} & -\frac{7\sqrt{7}}{48} \\[1ex] \frac{59}{144} & -\frac{9\sqrt{2}}{8} & \frac{247\sqrt{3}}{144} & -\frac{113}{36} & \frac{69\sqrt{5}}{80} & -\frac{19\sqrt{6}}{72} & \frac{5\sqrt{7}}{144} \\[1ex] -\frac{3}{80} & \frac{13\sqrt{2}}{120} & -\frac{25\sqrt{3}}{144} & \frac{1}{3} & -\frac{23\sqrt{5}}{240} & \frac{11\sqrt{6}}{360} & -\frac{\sqrt{7}}{240} \\[1ex] \frac{1}{720} & -\frac{\sqrt{2}}{240} & \frac{\sqrt{3}}{144} & -\frac{1}{72} & \frac{\sqrt{5}}{240} & -\frac{\sqrt{6}}{720} & \frac{\sqrt{7}}{5040} \end{bmatrix}.$$

5.1. Objective

Our objective is to find the determinant and inverse of the generalized Vandermonde matrix $V_G^{(0.5)}(8)$, with the roots $c_i = i$, $i = 1, \ldots, 8$, in a recursive way.

5.2. Recursive Determinant Calculation

We calculate the determinant value of the matrix $V_G^{(0.5)}(8)$ using Equation (5), because the GVM in question has consecutive integer roots. In this case, the equality in Equation (5) leads to the following determinant value:
$$\det V_G^{(0.5)}(8) = (8-1)!\,(8-8)!\, 8^{0.5}\, \det V_G^{(0.5)}(7) = 7! \cdot \sqrt{8} \cdot 298598400\sqrt{35} = 3009871872000\sqrt{70}.$$

5.3. Recursive Inverse Finding

The task of calculating the inverse of the matrix $V_G^{(0.5)}(8)$ is performed using Algorithm 1, with the use of the known inverse $[V_G^{(0.5)}(7)]^{-1}$ in Equation (19). The auxiliary vectors $\bar{v}_1, \bar{v}_2$ have the following forms, in compliance with Equations (13) and (14):
$$\bar{v}_1 = [V_G^{(k)}(n)]^{-1} \begin{bmatrix} c_1^{k+n} & \cdots & c_n^{k+n} \end{bmatrix}^T = [V_G^{(0.5)}(7)]^{-1} \begin{bmatrix} 1 & 128\sqrt{2} & 2187\sqrt{3} & 32768 & 78125\sqrt{5} & 279936\sqrt{6} & 823543\sqrt{7} \end{bmatrix}^T = \begin{bmatrix} 5040 & -13068 & 13132 & -6769 & 1960 & -322 & 28 \end{bmatrix}^T,$$
$$\bar{v}_2 = \begin{bmatrix} c_{n+1}^{k} & \cdots & c_{n+1}^{k+n-1} \end{bmatrix} [V_G^{(k)}(n)]^{-1} = \sqrt{2}\begin{bmatrix} 2 & 16 & 128 & 1024 & 8192 & 65536 & 524288 \end{bmatrix} [V_G^{(0.5)}(7)]^{-1} = \begin{bmatrix} 2\sqrt{2} & -14 & 14\sqrt{6} & -35\sqrt{2} & 14\sqrt{10} & -14\sqrt{3} & 2\sqrt{14} \end{bmatrix}.$$
Next, we calculate the coefficient d as follows:
$$d = c_{n+1}^{k+n} - \bar{v}_2 \begin{bmatrix} c_1^{k+n} & \cdots & c_n^{k+n} \end{bmatrix}^T = 8^{7+0.5} - \bar{v}_2 \begin{bmatrix} 1 & 128\sqrt{2} & 2187\sqrt{3} & 32768 & 78125\sqrt{5} & 279936\sqrt{6} & 823543\sqrt{7} \end{bmatrix}^T = 10080\sqrt{2}.$$
The last step of Algorithm 1 is building a block matrix in compliance with Equation (16). Combining the vectors $\bar{v}_1$ in Equation (20) and $\bar{v}_2$ in Equation (21) and the coefficient $d$ in Equation (22) with the known Vandermonde inverse $[V_G^{(0.5)}(7)]^{-1}$ in Equation (19), we finally obtain
$$[V_G^{(0.5)}(8)]^{-1} = \begin{bmatrix} [V_G^{(0.5)}(7)]^{-1} + \dfrac{\bar{v}_1 \bar{v}_2}{d} & -\dfrac{\bar{v}_1}{d} \\[1ex] -\dfrac{\bar{v}_2}{d} & \dfrac{1}{d} \end{bmatrix} = \begin{bmatrix} 8 & -14\sqrt{2} & \frac{56\sqrt{3}}{3} & -35 & \frac{56\sqrt{5}}{5} & -\frac{14\sqrt{6}}{3} & \frac{8\sqrt{7}}{7} & -\frac{\sqrt{2}}{4} \\[1ex] -\frac{481}{35} & \frac{621\sqrt{2}}{20} & -\frac{2003\sqrt{3}}{45} & \frac{691}{8} & -\frac{141\sqrt{5}}{5} & \frac{2143\sqrt{6}}{180} & -\frac{103\sqrt{7}}{35} & \frac{363\sqrt{2}}{560} \\[1ex] \frac{349}{36} & -\frac{18353\sqrt{2}}{720} & \frac{797\sqrt{3}}{20} & -\frac{1457}{18} & \frac{4891\sqrt{5}}{180} & -\frac{187\sqrt{6}}{16} & \frac{527\sqrt{7}}{180} & -\frac{469\sqrt{2}}{720} \\[1ex] -\frac{329}{90} & \frac{15289\sqrt{2}}{1440} & -\frac{268\sqrt{3}}{15} & \frac{10993}{288} & -\frac{1193\sqrt{5}}{90} & \frac{2803\sqrt{6}}{480} & -\frac{67\sqrt{7}}{45} & \frac{967\sqrt{2}}{2880} \\[1ex] \frac{115}{144} & -\frac{179\sqrt{2}}{72} & \frac{71\sqrt{3}}{16} & -\frac{179}{18} & \frac{2581\sqrt{5}}{720} & -\frac{13\sqrt{6}}{8} & \frac{61\sqrt{7}}{144} & -\frac{7\sqrt{2}}{72} \\[1ex] -\frac{73}{720} & \frac{239\sqrt{2}}{720} & -\frac{149\sqrt{3}}{240} & \frac{209}{144} & -\frac{391\sqrt{5}}{720} & \frac{61\sqrt{6}}{240} & -\frac{49\sqrt{7}}{720} & \frac{23\sqrt{2}}{1440} \\[1ex] \frac{1}{144} & -\frac{17\sqrt{2}}{720} & \frac{11\sqrt{3}}{240} & -\frac{1}{9} & \frac{31\sqrt{5}}{720} & -\frac{\sqrt{6}}{48} & \frac{29\sqrt{7}}{5040} & -\frac{\sqrt{2}}{720} \\[1ex] -\frac{1}{5040} & \frac{\sqrt{2}}{1440} & -\frac{\sqrt{3}}{720} & \frac{1}{288} & -\frac{\sqrt{5}}{720} & \frac{\sqrt{6}}{1440} & -\frac{\sqrt{7}}{5040} & \frac{\sqrt{2}}{20160} \end{bmatrix}.$$
One can see that the incrementally obtained inverse $[V_G^{(0.5)}(8)]^{-1}$ is equivalent to the inverse obtained by the classical algorithms in Reference [5] (p. 649).
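The whole example can be re-checked numerically with the sketches introduced earlier (gvm from Section 2 and incremental_inverse from Section 4.3); the check itself is mine and is not part of the paper:

```python
import numpy as np

k = 0.5
roots = np.arange(1.0, 9.0)                      # the roots 1, 2, ..., 8
V7 = gvm(roots[:7], k)                           # V_G^(0.5)(7)
V8 = gvm(roots, k)                               # V_G^(0.5)(8)
V8_inv = incremental_inverse(k, roots, np.linalg.inv(V7))
# Residual of V8_inv as an inverse of V8; a small (round-off level) value confirms the example.
print(np.max(np.abs(V8_inv @ V8 - np.eye(8))))
```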

5.4. Summary of the Example

In this example, we recursively calculated the determinant and inverse of the $V_G^{(0.5)}(8)$ matrix, making use of the known determinant and inverse of the $V_G^{(0.5)}(7)$ matrix, respectively. It is worth noting that, to perform this, merely eight scalar multiplications were necessary for the determinant, and $3 \cdot 7 + 2 \cdot 7^2$ scalar multiplications were necessary for the inverse. This confirms the high efficiency of the recursive approach.

6. Research and Extensions

The following can be seen as the desired future research directions:
Construction of the parallel algorithm for the generalized Vandermonde matrices.
Adaptation of the algorithms to vector-oriented hardware units.
Combination of both.
Application on Graphics Hardware Unit architecture.
Application of the results in new branches, like deep learning and artificial intelligence.
The proposed results could also be applied to other related applications which use Vandermonde or matrices of similar type, such as the following [19,20,21]:
Total variation problems and optimization methods;
Power systems networks;
The numerical problem preconditioning;
Fractional order differential equations.

7. Summary

In this paper, we derived recursive numerical recipes for calculating the determinant and inverse of the generalized Vandermonde matrix. The results presented in this article can be performed automatically using a numerical algorithm in any programming language. The computational complexity of the presented algorithms is better than the ordinary GVM determinant/inverse methods.
The presented results neatly combine the theory of algorithms, in particular the recursion programming paradigm and computational complexity analysis, with numerical recipes, which we consider the right approach to constructing computational algorithms.
From the software-production point of view, recursion is not merely an academic paradigm; it has been used successfully by programmers for decades.

Funding

This work was supported by Statutory Research funds of Institute of Informatics, Silesian University of Technology, Gliwice, Poland (BK/204/RAU2/2019).

Acknowledgments

I would like to thank my university colleagues for stimulating discussions and reviewers for apt remarks which significantly improved the paper.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Respondek, J. On the confluent Vandermonde matrix calculation algorithm. Appl. Math. Lett. 2011, 24, 103–106.
  2. Respondek, J. Numerical recipes for the high efficient inverse of the confluent Vandermonde matrices. Appl. Math. Comput. 2011, 218, 2044–2054.
  3. Respondek, J. Highly Efficient Recursive Algorithms for the Generalized Vandermonde Matrix. In Proceedings of the 30th European Simulation and Modelling Conference (ESM 2016), Las Palmas de Gran Canaria, Spain, 26–28 October 2016; pp. 15–19.
  4. Respondek, J. Recursive Algorithms for the Generalized Vandermonde Matrix Determinants. In Proceedings of the 33rd Annual European Simulation and Modelling Conference (ESM 2019), Palma de Mallorca, Spain, 28–30 October 2019; pp. 53–57.
  5. El-Mikkawy, M.E.A. Explicit inverse of a generalized Vandermonde matrix. Appl. Math. Comput. 2003, 146, 643–651.
  6. Hou, S.; Hou, E. Recursive computation of inverses of confluent Vandermonde matrices. Electron. J. Math. Technol. 2007, 1, 12–26.
  7. Hou, S.; Pang, W. Inversion of confluent Vandermonde matrices. Comput. Math. Appl. 2002, 43, 1539–1547.
  8. Gorecki, H. Optimization of the Dynamical Systems; PWN: Warsaw, Poland, 1993.
  9. Klamka, J. Controllability of Dynamical Systems; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1991.
  10. Respondek, J. Approximate controllability of infinite dimensional systems of the n-th order. Int. J. Appl. Math. Comput. Sci. 2008, 18, 199–212.
  11. Respondek, J. Approximate controllability of the n-th order infinite dimensional systems with controls delayed by the control devices. Int. J. Syst. Sci. 2008, 39, 765–782.
  12. Timoshenko, S. Vibration Problems in Engineering, 3rd ed.; D. Van Nostrand Company: London, UK, 1955.
  13. Bellman, R. Introduction to Matrix Analysis; McGraw-Hill Book Company: New York, NY, USA, 1960.
  14. Eisinberg, A.; Fedele, G. On the inversion of the Vandermonde matrix. Appl. Math. Comput. 2006, 174, 1384–1397.
  15. Kincaid, D.R.; Cheney, E.W. Numerical Analysis: Mathematics of Scientific Computing, 3rd ed.; Brooks Cole: Florence, KY, USA, 2001.
  16. Lee, K.; O’Sullivan, M.E. Algebraic soft-decision decoding of Hermitian codes. IEEE Trans. Inf. Theory 2010, 56, 2587–2600.
  17. Gorecki, H. On switching instants in minimum-time control problem. One-dimensional case n-tuple eigenvalue. Bull. Acad. Pol. Sci. 1968, 16, 23–30.
  18. Yan, S.; Yang, A. Explicit Algorithm to the Inverse of Vandermonde Matrix. In Proceedings of the 2009 International Conference on Test and Measurement, Hong Kong, China, 5–6 December 2009; pp. 176–179.
  19. Dassios, I.; Fountoulakis, K.; Gondzio, J. A preconditioner for a primal-dual Newton conjugate gradients method for compressed sensing problems. SIAM J. Sci. Comput. 2015, 37, A2783–A2812.
  20. Dassios, I.; Baleanu, D. Optimal solutions for singular linear systems of Caputo fractional differential equations. Math. Methods Appl. Sci. 2018.
  21. Dassios, I. Analytic loss minimization: Theoretical framework of a second order optimization method. Symmetry 2019, 11, 136.
Figure 1. The execution time of the standard and recursive algorithms.
Figure 2. The relative performance gain of the algorithms.
