Article

Matrix-Theoretic Investigation of Generalized Geometric Frank Matrices: Spread Estimates and Arithmetic Insights

Department of Engineering Basic Sciences, Sivas Science and Technology University, Sivas 58000, Türkiye
Axioms 2026, 15(1), 52; https://doi.org/10.3390/axioms15010052
Submission received: 22 November 2025 / Revised: 17 December 2025 / Accepted: 19 December 2025 / Published: 9 January 2026
(This article belongs to the Section Algebra and Number Theory)

Abstract

This study introduces a novel generalization, the generalized geometric Frank matrix, which extends the classical Frank matrix and its known variants. We systematically examine its algebraic structure, providing detailed analyses of its factorizations, determinant, inverse, and various norm computations. Furthermore, we investigate the reciprocal generalized geometric Frank matrix and reveal a variety of its intriguing algebraic properties. To illustrate the applicability of our theoretical results, we present a compelling example using Fibonacci number entries within the Frank matrix framework. Additionally, we analyze how the upper bounds on the spread are influenced by variations in the parameter $r$ and the matrix dimension. To formally assess the computational implications of these structural choices, we use Big O notation to describe how the computational cost scales with the matrix size $n$ and the iteration count $k(r)$. Our findings demonstrate that selecting $r<1$ and utilizing lower-dimensional generalized geometric Frank matrices can yield tighter bounds and significantly reduce computational complexity. These results highlight the potential of the proposed matrix class for optimization problems where efficiency is critical.
MSC:
11B83; 15A15; 15B05; 15A09; 15A60

1. Introduction

Frank matrices are a special class of matrices in linear algebra, often used as test matrices in numerical analysis, particularly for eigenvalue computations. First introduced by Frank in 1958, these matrices exhibit a lower Hessenberg structure, meaning that all elements above the first superdiagonal are zero [1].
One notable property of Frank matrices is that their determinants are always equal to 1. Additionally, their inverses exhibit an upper Hessenberg structure, where all elements below the first subdiagonal are zero. These characteristics make Frank matrices an essential tool in matrix computations and numerical analysis.
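Both properties are easy to check numerically. The sketch below (our own helper `classical_frank`, not code from this article) builds the lower Hessenberg Frank matrix with entries $a_{ij}=n+1-\max(i,j)$ for $j\le i+1$ and zero above the first superdiagonal, then verifies the unit determinant and the upper Hessenberg structure of the inverse:

```python
import numpy as np

def classical_frank(n):
    # Lower Hessenberg Frank matrix: a_ij = n + 1 - max(i, j) for j <= i + 1,
    # zero above the first superdiagonal.
    A = np.zeros((n, n))
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if j <= i + 1:
                A[i - 1, j - 1] = n + 1 - max(i, j)
    return A

F = classical_frank(6)
det = np.linalg.det(F)            # equals 1 for every n
inv = np.linalg.inv(F)
# inverse is upper Hessenberg: entries below the first subdiagonal vanish
below = np.tril(inv, k=-2)
```

For modest sizes the numerical determinant is 1 up to rounding; for large $n$ the Frank matrix is famously ill-conditioned, which is exactly why it serves as a test matrix.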
In recent years, generalized versions of Frank matrices have been explored. For instance, the “generalized max r-Frank matrix,” a subclass of lower Hessenberg matrices, has been studied for its characteristic polynomial, inverse, determinant, and norms. These extensions highlight the adaptability and mathematical richness of Frank matrices in various applications [2].
Two notable types of matrices frequently studied in [3] are Min and Max matrices. A Min matrix is a relatively simple structure whose $(i,j)$ entry is $\min(i,j)$ for $i,j=1,2,\ldots,n$; a Max matrix is defined analogously, with $(i,j)$ entry $\max(i,j)$. In [4], the smallest eigenvalue of a min matrix was analyzed to establish bounds for trigonometric function values. Mattila and Haukkanen [5] investigated various properties of specific types of min and max matrices. More generally, given a sequence $k_1,k_2,\ldots,k_n$, the matrices $K_{\min}$ and $K_{\max}$ have $(i,j)$ entries $k_{\min(i,j)}$ and $k_{\max(i,j)}$:
$$K_{\min}=\begin{pmatrix} k_1 & k_1 & k_1 & \cdots & k_1 & k_1\\ k_1 & k_2 & k_2 & \cdots & k_2 & k_2\\ k_1 & k_2 & k_3 & \cdots & k_3 & k_3\\ \vdots & & & \ddots & & \vdots\\ k_1 & k_2 & k_3 & \cdots & k_{n-1} & k_{n-1}\\ k_1 & k_2 & k_3 & \cdots & k_{n-1} & k_n \end{pmatrix},\qquad K_{\max}=\begin{pmatrix} k_1 & k_2 & k_3 & \cdots & k_{n-1} & k_n\\ k_2 & k_2 & k_3 & \cdots & k_{n-1} & k_n\\ k_3 & k_3 & k_3 & \cdots & k_{n-1} & k_n\\ \vdots & & & \ddots & & \vdots\\ k_{n-1} & k_{n-1} & k_{n-1} & \cdots & k_{n-1} & k_n\\ k_n & k_n & k_n & \cdots & k_n & k_n \end{pmatrix}.$$
In recent years, generalizations of these matrices (which reduce to the Min and Max matrices for $r=1$) have also been considered by several authors. Kızılateş and Terzioğlu [6] introduced the r-min and r-max matrices and established their important features.
In fact, the Frank matrix can be viewed as a special case of the Max matrix; the Frank matrix is widely recognized as a popular test matrix for eigenvalue algorithms due to its combination of well-conditioned and poorly conditioned eigenvalues [7]. In 1986, Varah [8] presented a generalization of the Frank matrix, detailing methods for computing its eigenvalues and eigenvectors as well as estimating the condition numbers associated with its eigenvalues. Later, in 2020, Mersin and Bahşi [9] proposed another generalization of the Frank matrix, defined in terms of the real $n$-tuple $(k_1,k_2,\ldots,k_n)$ as follows:
$$F_{\max}=\begin{pmatrix} k_n & k_{n-1} & 0 & \cdots & 0 & 0\\ k_{n-1} & k_{n-1} & k_{n-2} & \cdots & 0 & 0\\ k_{n-2} & k_{n-2} & k_{n-2} & \cdots & 0 & 0\\ \vdots & & & \ddots & & \vdots\\ k_2 & k_2 & k_2 & \cdots & k_2 & k_1\\ k_1 & k_1 & k_1 & \cdots & k_1 & k_1 \end{pmatrix}.$$
Mersin and Bahşi [10,11] subsequently carried the study of Frank matrices considerably further. In addition to work on the general properties, determinants, and bounds of Frank matrices, they combined these investigations with the Fibonacci and Lucas number sequences, offering readers new perspectives. In later work, the same authors applied Sturm's theorems to generalized Frank matrices and obtained their eigenvalues; deriving the characteristic polynomials of these matrices is another important contribution.
A prominent group within special matrices is the class of r-circulant matrices, which appear in numerous scientific and mathematical contexts. Their structural properties and diverse applications have attracted significant attention in previous research, as evidenced by the studies of Bertaccini and Ng (2001), Lyness and Sørevik (2004), and Zhao (2009) [12,13,14]. An r-circulant matrix is a square matrix determined by its first row $(k_0,k_1,\ldots,k_{n-1})$ and takes the structured form:
$$C=\begin{pmatrix} k_0 & k_1 & k_2 & \cdots & k_{n-2} & k_{n-1}\\ r k_{n-1} & k_0 & k_1 & \cdots & k_{n-3} & k_{n-2}\\ r k_{n-2} & r k_{n-1} & k_0 & \cdots & k_{n-4} & k_{n-3}\\ \vdots & & & \ddots & & \vdots\\ r k_1 & r k_2 & r k_3 & \cdots & r k_{n-1} & k_0 \end{pmatrix}.$$
Another generalization of circulant matrices is the geometric circulant matrix. These matrices have attracted considerable research interest, with numerous studies exploring their characteristics. In particular, several works have examined aspects such as their norms and the corresponding upper and lower bounds [15,16,17,18]. These findings highlight the mathematical richness and wide-ranging utility of circulants and their generalized forms in theoretical and applied contexts.
Matrix theory provides a fundamental framework for analyzing the linear transformations, numerical stability, and spectral behavior of structured matrices. In particular, matrix norms, condition numbers, and generalized inverses are key tools for assessing the sensitivity of linear systems and eigenvalue problems, as extensively discussed in classical matrix analysis literature [19,20]. Matrix-theoretic transformations and structured generalizations can significantly influence conditioning and numerical robustness, especially in the presence of ill-conditioned or nearly singular systems. Recent studies on weighted pseudoinverses further demonstrate that appropriate matrix structures lead to improved condition number estimates and enhanced numerical performance [21]. Motivated by this perspective, the generalized geometric Frank matrix introduced in this study is examined not only as an algebraic extension of the classical Frank matrix but also as a structured matrix model with controlled spectral and conditioning properties.
Another important notion is the spread of a matrix, a measure that characterizes its spectral properties [22]. It is defined in [19] as the difference between the maximum and minimum eigenvalues of a matrix. The generalized geometric Frank matrices considered in this study are diagonally dominant with real entries along the main diagonal. Therefore, all eigenvalues are real, and the spectral spread, defined as the difference between the maximum and minimum eigenvalues, is well-defined. This ensures that the comparisons of eigenvalue sizes used in our spread analysis are meaningful and consistent.
Mathematically, for a matrix $A$, the spread is expressed as
$$\operatorname{spread}(A)=\lambda_{\max}(A)-\lambda_{\min}(A),$$
where $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ are the largest and smallest eigenvalues of the matrix, respectively. The spread helps analyze the spectral distribution of a matrix and provides insight into its behavior in applications such as control theory and signal processing.
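The spread is computed directly from the eigenvalues; a minimal NumPy sketch with an illustrative symmetric matrix (chosen so that the eigenvalues are guaranteed real):

```python
import numpy as np

# Spread = difference between the largest and smallest eigenvalues.
A = np.array([[4.0, 1.0],
              [1.0, 2.0]])      # symmetric, eigenvalues 3 +/- sqrt(2)

eigs = np.linalg.eigvalsh(A)    # returned in ascending order for Hermitian input
spread = eigs[-1] - eigs[0]     # here 2*sqrt(2)
```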
Definition 1 
([19,23]). Let $X=(x_{ij})$ and $Y=(y_{ij})$, $i,j=1,2,3,\ldots,n$, be $n\times n$ matrices. Then
$$X\circ Y=(x_{ij}y_{ij}),\qquad i,j=1,2,3,\ldots,n,$$
where $\circ$ denotes the Hadamard (entrywise) product of the matrices.
Definition 2 
([19,23]). For a matrix $X$, the following norms are defined:
$$\|X\|_2=\sqrt{\max_{1\le k\le n}\lambda_k\big(X^HX\big)}\qquad(\text{spectral norm}),$$
where $\lambda_k(X^HX)$ is an eigenvalue of $X^HX$ and $X^H$ is the conjugate transpose of $X$;
$$\|X\|_p=\Big(\sum_{i,j=1}^{n}|x_{ij}|^p\Big)^{1/p},\ p\ge 2\qquad(\ell_p\ \text{norm});$$
$$\|X\|_E=\sqrt{\sum_{i,j=1}^{n}|x_{ij}|^2}\qquad(\text{Euclidean norm}).$$
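These norms can be evaluated directly from their definitions; a small NumPy sketch with an illustrative matrix:

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [0.0, 3.0]])

# Spectral norm: square root of the largest eigenvalue of X^H X
spectral = np.sqrt(np.linalg.eigvalsh(X.conj().T @ X).max())

# Entrywise l_p norm, here with p = 3
p = 3
lp = (np.abs(X) ** p).sum() ** (1.0 / p)

# Euclidean (Frobenius) norm: the l_p norm with p = 2
euclid = np.sqrt((np.abs(X) ** 2).sum())
```

The spectral norm coincides with the largest singular value and never exceeds the Euclidean norm, which gives a quick consistency check.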
Definition 3. 
The spread of a matrix $X$ is
$$S(X)=\max_{i,j}|\lambda_i-\lambda_j|,$$
and it satisfies
$$S(X)^2\le 2\|X\|_E^2-\frac{2}{n}\,|\operatorname{tr}(X)|^2.$$
Lemma 1 
([19,23]). Let $X=(x_{ij})$ and $Y=(y_{ij})$ be $m\times n$ matrices. Then the following inequality holds:
$$\|X\circ Y\|_2\le r_1(X)\,c_1(Y),$$
where $r_1(X)=\max_{1\le i\le m}\sqrt{\sum_{j=1}^{n}|x_{ij}|^2}$ and $c_1(Y)=\max_{1\le j\le n}\sqrt{\sum_{i=1}^{m}|y_{ij}|^2}$.
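The inequality of Lemma 1 is easy to check numerically; a sketch with random matrices, where `r1` and `c1` mirror $r_1(X)$ (largest row Euclidean length) and $c_1(Y)$ (largest column Euclidean length):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
X = rng.standard_normal((4, 6))   # m x n
Y = rng.standard_normal((4, 6))

hadamard = X * Y                                    # entrywise product
spec = np.linalg.svd(hadamard, compute_uv=False)[0] # spectral norm

r1 = np.sqrt((np.abs(X) ** 2).sum(axis=1)).max()    # max row length of X
c1 = np.sqrt((np.abs(Y) ** 2).sum(axis=0)).max()    # max column length of Y
```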

2. Generalized Geometric Frank Matrix

We introduce the generalized geometric Frank matrix $F_{r,n}=(a_{ij})_{i,j=1}^{n}$ as follows:
$$F_{r,n}=\begin{pmatrix} g_n & g_{n-1} & 0 & 0 & \cdots & 0 & 0\\ r g_{n-1} & g_{n-1} & g_{n-2} & 0 & \cdots & 0 & 0\\ r^2 g_{n-2} & r^2 g_{n-2} & r g_{n-2} & r g_{n-3} & \cdots & 0 & 0\\ r^3 g_{n-3} & r^3 g_{n-3} & r^3 g_{n-3} & r^2 g_{n-3} & \cdots & 0 & 0\\ \vdots & & & & \ddots & & \vdots\\ r^{n-2} g_2 & r^{n-2} g_2 & r^{n-2} g_2 & r^{n-2} g_2 & \cdots & r^{n-3} g_2 & r^{n-3} g_1\\ r^{n-1} g_1 & r^{n-1} g_1 & r^{n-1} g_1 & r^{n-1} g_1 & \cdots & r^{n-1} g_1 & r^{n-2} g_1 \end{pmatrix},\tag{10}$$
where $r\in\mathbb{C}\setminus\{0\}$, $g_0=0$ and $n\in\mathbb{Z}^+$. Entrywise,
$$a_{ij}=\begin{cases} r^{i-1}\,g_{n+1-i}, & i>j,\\ r^{i-2}\,g_{n+1-j}, & j-2<i\le j,\\ 0, & \text{otherwise}.\end{cases}$$
Remark 1. 
If we take $r=1$ and $g_n=f_n$ (where $f_n$ is the $n$th Fibonacci number) in (10), we obtain the generalization of the Fibonacci Frank matrix defined in [24] (p. 423).
Theorem 1. 
Let $F_{r,n}$ be defined as in (10). Then $F_{r,n}$ admits the factorization
$$F_{r,n}=LUS.$$
Proof. 
By performing some basic column operations on $F_{r,n}$ (the operations encoded by the matrix $S$ below), we reduce it to the tridiagonal matrix
$$M=\begin{pmatrix} g_n-g_{n-1} & g_{n-1} & & & \\ (r-1)g_{n-1} & g_{n-1}-g_{n-2} & g_{n-2} & & \\ & (r^2-r)g_{n-2} & r(g_{n-2}-g_{n-3}) & r g_{n-3} & \\ & & \ddots & \ddots & \ddots\\ & & & r^{n-3}(g_2-g_1) & r^{n-3}g_1\\ & & & (r^{n-1}-r^{n-2})g_1 & r^{n-2}g_1 \end{pmatrix}.$$
For the matrix $M$, Crout's decomposition is written as
$$M=LU,$$
where
$$L_{ij}=M_{ij}-\sum_{k=1}^{j-1}L_{ik}U_{kj},\qquad i\ge j,$$
and
$$U_{ij}=\frac{M_{ij}-\sum_{k=1}^{i-1}L_{ik}U_{kj}}{L_{ii}},\qquad i<j.$$
Here $L$ carries the diagonal and all subdiagonal entries, with its strictly upper triangular part equal to zero, while $U$ is unit upper triangular: its diagonal entries are all $1$ and its strictly lower triangular part is zero. Since $M$ is tridiagonal, both factors are bidiagonal. We now form the $L$ and $U$ matrices:
$$L=\begin{pmatrix} l_1 & & & & \\ k_2 & l_2 & & & \\ & k_3 & l_3 & & \\ & & \ddots & \ddots & \\ & & & k_n & l_n \end{pmatrix},\qquad U=\begin{pmatrix} 1 & u_1 & & & \\ & 1 & u_2 & & \\ & & 1 & \ddots & \\ & & & \ddots & u_{n-1}\\ & & & & 1 \end{pmatrix},$$
and thus, we have
$$LU=\begin{pmatrix} x_1 & y_1 & & & \\ z_2 & x_2 & y_2 & & \\ & z_3 & x_3 & \ddots & \\ & & \ddots & \ddots & y_{n-1}\\ & & & z_n & x_n \end{pmatrix},$$
where
$$x_k=g_{n+1-k}-g_{n-k}\ (k=1,2),\qquad x_k=(g_{n+1-k}-g_{n-k})\,r^{k-2}\ (k\ge 3,\ k\in\mathbb{Z}^+),$$
$$z_k=r^{k-2}(r-1)\,g_{n+1-k},\qquad k\ge 2,\ k\in\mathbb{Z}^+,$$
$$y_k=g_{n-k}\ (k=1,2),\qquad y_k=g_{n-k}\,r^{k-2}\ (k\ge 3,\ k\in\mathbb{Z}^+).$$
Thus, we obtain
F r , n = L U S , where
$$S=\begin{pmatrix} 1 & 0 & 0 & \cdots & 0\\ 1 & 1 & 0 & \cdots & 0\\ 1 & 1 & 1 & \cdots & 0\\ \vdots & & & \ddots & \vdots\\ 1 & 1 & 1 & \cdots & 1 \end{pmatrix}.$$
The main computational tool is matrix operations, particularly matrix multiplication. To efficiently perform the necessary computations, we used a structured Crout decomposition that utilizes the narrow-band-like coupling of the subdiagonal elements in the generalized geometric Frank matrix. This approach reduces computational cost compared to full dense factorization while preserving accuracy.
It should also be noted that various fast matrix multiplication algorithms exist that can further accelerate computations for larger matrices or more complex operations. Many matrix-based computations can be implemented using only matrix multiplication, which provides additional flexibility and efficiency; a detailed and comprehensive survey of these techniques is given in [25].
Thus, combining Crout’s decomposition with knowledge of modern fast multiplication techniques enables both efficient and accurate matrix computations in the context of spectral analysis of generalized geometric Frank matrices.
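The Crout recurrences above translate directly into code. The following is a minimal dense sketch (the function name `crout` is our own); for the tridiagonal matrix $M$ the inner loops collapse to the $l_i$, $u_i$ recurrences used in the proof:

```python
import numpy as np

def crout(M):
    """Crout LU: M = L @ U with L lower triangular and
    U unit upper triangular (ones on the diagonal)."""
    n = M.shape[0]
    L = np.zeros((n, n))
    U = np.eye(n)
    for j in range(n):
        for i in range(j, n):            # column j of L
            L[i, j] = M[i, j] - L[i, :j] @ U[:j, j]
        for k in range(j + 1, n):        # row j of U
            U[j, k] = (M[j, k] - L[j, :j] @ U[:j, k]) / L[j, j]
    return L, U

M = np.array([[4.0, 3.0],
              [6.0, 3.0]])
L, U = crout(M)
```

Note that the sketch assumes the leading pivots $L_{jj}$ are nonzero, which holds for the nonsingular matrices considered here.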
Theorem 2. 
Let $F_{r,n}$ be defined as in (10). Then the determinant of the generalized geometric Frank matrix is given by
$$\det(F_{r,n})=(g_n-g_{n-1})\prod_{i=2}^{n}\big(x_i-k_iu_{i-1}\big),$$
where $x_i$, $k_i$ and $u_i$ are as in the proof below.
Proof. 
From the multiplicative property of the determinant, we write
$$\det(F_{r,n})=\det L\cdot\det U\cdot\det S.$$
We know that
$$\det U=\det S=1.$$
We find that
$$\det(F_{r,n})=l_1l_2l_3\cdots l_n=\prod_{i=1}^{n}l_i=x_1\prod_{i=2}^{n}l_i=(g_n-g_{n-1})\prod_{i=2}^{n}\big(x_i-k_iu_{i-1}\big),$$
where $k_i=z_i$ and $u_i=\dfrac{y_i}{l_i}=\dfrac{y_i}{x_i-k_iu_{i-1}}$ for $i\ge 2$, $i\in\mathbb{Z}^+$, with $u_1=\dfrac{y_1}{l_1}=\dfrac{g_{n-1}}{g_n-g_{n-1}}$. $\square$
Theorem 3. 
Consider the matrix defined in Equation (10), partitioned as
$$F_{r,n}=\begin{pmatrix} g_n & X\\ Y & K \end{pmatrix},$$
where $X=(g_{n-1},0,0,\ldots,0)$, $Y=\big(rg_{n-1},\,r^2g_{n-2},\,r^3g_{n-3},\ldots,r^{n-1}g_1\big)^T$, and $K$ is the trailing $(n-1)\times(n-1)$ block of $F_{r,n}$:
$$K=\begin{pmatrix} g_{n-1} & g_{n-2} & 0 & \cdots & 0 & 0\\ r^2g_{n-2} & rg_{n-2} & rg_{n-3} & \cdots & 0 & 0\\ r^3g_{n-3} & r^3g_{n-3} & r^2g_{n-3} & \cdots & 0 & 0\\ \vdots & & & \ddots & & \vdots\\ r^{n-2}g_2 & r^{n-2}g_2 & r^{n-2}g_2 & \cdots & r^{n-3}g_2 & r^{n-3}g_1\\ r^{n-1}g_1 & r^{n-1}g_1 & r^{n-1}g_1 & \cdots & r^{n-1}g_1 & r^{n-2}g_1 \end{pmatrix}.$$
If $F_{r,n}$ is nonsingular, then its inverse is
$$F_{r,n}^{-1}=\begin{pmatrix} g_n^{-1}+g_n^{-1}Xk^{-1}Yg_n^{-1} & -g_n^{-1}Xk^{-1}\\ -k^{-1}Yg_n^{-1} & k^{-1} \end{pmatrix},$$
where $k=K-Yg_n^{-1}X$ is the Schur complement of $g_n$.
Proof. 
Since the statement is given in closed form for general $n$, it suffices to verify it by direct computation rather than by induction. Multiplying $F_{r,n}^{-1}$ and $F_{r,n}$ blockwise and simplifying with $k=K-Yg_n^{-1}X$, every off-diagonal block vanishes and the diagonal blocks reduce to $1$ and the $(n-1)\times(n-1)$ identity $I_{n-1}$, so the product is the $n\times n$ identity matrix. $\square$
Theorem 4. 
Define the generalized geometric Frank matrix as in Equation (10), with $g_1<g_2<g_3<\cdots<g_n$. Then the spectral norm admits the upper bounds
$$\|F_{r,n}\|_2\le\begin{cases}\sqrt{\big(|r|^{2(n-1)}(n-1)+|r|^{2(n-2)}\big)\sum_{i=1}^{n}g_i^2}, & |r|\ge 1,\\[6pt]\sqrt{\big(|r|^2+2\big)\sum_{i=1}^{n}g_i^2}, & |r|<1.\end{cases}$$
Moreover, for the matrix in (10) the following norms hold:
$$\|F_{r,n}\|_p^p=g_n^p+g_{n-1}^p+\sum_{k=1}^{n-1}\big(|r|^{n-k-1}g_k\big)^p+\sum_{k=1}^{n-1}\big(|r|^{k}g_{n-k}\big)^p\,k+\sum_{k=1}^{n-2}\big(|r|^{n-k-2}g_k\big)^p\tag{12}$$
and
$$\|F_{r,n}\|_E^2=g_n^2+g_{n-1}^2+\sum_{k=1}^{n-1}\big(|r|^{n-k-1}g_k\big)^2+\sum_{k=1}^{n-1}\big(|r|^{k}g_{n-k}\big)^2\,k+\sum_{k=1}^{n-2}\big(|r|^{n-k-2}g_k\big)^2.\tag{13}$$
Proof. 
Let $M$ and $N$ be the matrices
$$M=\begin{pmatrix} 1 & 1 & 0 & \cdots & 0 & 0\\ r & 1 & 1 & \cdots & 0 & 0\\ r^2 & r^2 & r & \cdots & 0 & 0\\ \vdots & & & \ddots & & \vdots\\ r^{n-2} & r^{n-2} & r^{n-2} & \cdots & r^{n-3} & r^{n-3}\\ r^{n-1} & r^{n-1} & r^{n-1} & \cdots & r^{n-1} & r^{n-2}\end{pmatrix},\qquad N=\begin{pmatrix} g_n & g_{n-1} & 0 & \cdots & 0 & 0\\ g_{n-1} & g_{n-1} & g_{n-2} & \cdots & 0 & 0\\ g_{n-2} & g_{n-2} & g_{n-2} & \cdots & 0 & 0\\ \vdots & & & \ddots & & \vdots\\ g_2 & g_2 & g_2 & \cdots & g_2 & g_2\\ g_1 & g_1 & g_1 & \cdots & g_1 & g_1\end{pmatrix}.$$
Since $F_{r,n}=M\circ N$, Lemma 1 gives
$$\|F_{r,n}\|_2\le r_1(M)\,c_1(N),$$
where
$$r_1(M)=\begin{cases}\sqrt{|r|^{2(n-1)}(n-1)+|r|^{2(n-2)}}, & |r|\ge 1,\\[4pt]\sqrt{|r|^2+2}, & |r|<1,\end{cases}\qquad c_1(N)=\sqrt{\sum_{i=1}^{n}g_i^2},$$
which yields the stated bounds on $\|F_{r,n}\|_2$. Finally, computing the entrywise sums of $F_{r,n}$ directly from Definition 2 gives (12), and taking $p=2$ in (12) yields the Euclidean norm (13). $\square$
Theorem 5. 
The characteristic polynomial $P_n(\lambda)$ of the generalized geometric Frank matrix $F_{r,n}$ satisfies a specific recurrence relation, which governs its structure across increasing matrix orders:
$$P_n(\lambda)=(\lambda-g_n+g_{n-1})\,\bar P_n(\lambda)+g_{n-1}\,\tilde P_n(\lambda)=(\lambda-g_n+g_{n-1})\,\bar P_n(\lambda)+g_{n-1}\big((1-r)g_{n-1}-\lambda\big)\,\hat P_n(\lambda),$$
where $\bar P_n$, $\tilde P_n$ and $\hat P_n$ denote the minors described in the proof, with
$$\tilde P_n(\lambda)=\big((1-r)g_{n-1}-\lambda\big)\,\hat P_n(\lambda),\qquad \hat P_n(\lambda)=g_{n-k}\,r^{k-2}\,\hat P_{n-1}(\lambda),\ k\ge 2,$$
and initial conditions $P_1(\lambda)=\lambda-g_1$, $P_2(\lambda)=\lambda^2-(g_1+g_2)\lambda+g_1g_2-rg_1^2$.
Proof. 
Let $P_n(\lambda)$ denote the characteristic polynomial of the generalized geometric Frank matrix, so that
$$P_n(\lambda)=\det\big(\lambda I_n-F_{r,n}\big).$$
Its spectral properties can therefore be derived recursively, facilitating analytical and computational analysis. Performing elementary column operations on this determinant (as in the proof of Theorem 1) and expanding the result along the first row produces two $(n-1)\times(n-1)$ minors, giving
$$P_n(\lambda)=(\lambda-g_n+g_{n-1})\,\bar P_n(\lambda)+g_{n-1}\,\tilde P_n(\lambda),$$
where $\bar P_n(\lambda)$ is the minor obtained by deleting the first row and first column of the transformed determinant, and $\tilde P_n(\lambda)$ is the minor attached to the entry $g_{n-1}$.
The minor $\bar P_n(\lambda)$ satisfies the three-term recurrence
$$\bar P_n(\lambda)=\big((g_{n-k}-g_{n-k+1})r^{k-2}-\lambda\big)\,\bar P_{n-1}(\lambda)+g_{n-k}\,r^{k-2}\,g_{n-k-1}(1-r)\,r^{k-1}\,\bar P_{n-2}(\lambda),\qquad k\ge 2,$$
with initial conditions $\bar P_2(\lambda)=\lambda-g_1$ and $\bar P_3(\lambda)=\lambda^2-(g_2+rg_1)\lambda+rg_1g_2-r^2g_1^2$.
Similarly, expanding the minor $\tilde P_n(\lambda)$ along its first row gives
$$\tilde P_n(\lambda)=\big((1-r)g_{n-1}-\lambda\big)\,\hat P_n(\lambda),$$
and
$$\hat P_n(\lambda)=g_{n-k}\,r^{k-2}\,\hat P_{n-1}(\lambda),\qquad k\ge 2.$$
The desired recurrence and relations are obtained from all the determinant and recurrence relations given above. □
Lemma 2. 
For the generalized geometric Frank matrix $F_{r,n}$, we have
$$\operatorname{tr}(F_{r,n})=g_n+g_{n-1}+rg_{n-2}+r^2g_{n-3}+\cdots+r^{n-2}g_1=g_n+\sum_{i=1}^{n-1}r^{i-1}g_{n-i}.$$
Proof. 
The result follows directly from the definition of the trace as the sum of the diagonal entries of the matrix. $\square$
Theorem 6. 
The spread of the generalized geometric Frank matrix $F_{r,n}$ admits the upper bound
$$S(F_{r,n})\le\sqrt{2\Big(g_n^2+g_{n-1}^2+\sum_{k=1}^{n-1}\big(|r|^{n-k-1}g_k\big)^2+\sum_{k=1}^{n-1}\big(|r|^{k}g_{n-k}\big)^2\,k+\sum_{k=1}^{n-2}\big(|r|^{n-k-2}g_k\big)^2\Big)-\frac{2}{n}\Big(g_n+\sum_{i=1}^{n-1}r^{i-1}g_{n-i}\Big)^2}.$$
Proof. 
By Definition 3, the spread of a square matrix $X$ satisfies
$$S(X)^2\le 2\|X\|_E^2-\frac{2}{n}\,|\operatorname{tr}(X)|^2.$$
Replacing $X$ by $F_{r,n}$ and combining the Frobenius norm (13) of Theorem 4 with the trace formula established in Lemma 2, we obtain
$$S(F_{r,n})^2\le 2\|F_{r,n}\|_E^2-\frac{2}{n}\big(\operatorname{tr}F_{r,n}\big)^2,$$
which is the stated bound. $\square$

3. Reciprocal Generalized Geometric Frank Matrix

The reciprocal generalized geometric Frank matrix of $F_{r,n}$ is presented as follows:
$$F_{r,n}^{\circ(-1)}=\begin{pmatrix}\frac{1}{g_n} & \frac{1}{g_{n-1}} & 0 & \cdots & 0 & 0\\[2pt] \frac{1}{rg_{n-1}} & \frac{1}{g_{n-1}} & \frac{1}{g_{n-2}} & \cdots & 0 & 0\\[2pt] \frac{1}{r^2g_{n-2}} & \frac{1}{r^2g_{n-2}} & \frac{1}{rg_{n-2}} & \cdots & 0 & 0\\[2pt] \vdots & & & \ddots & & \vdots\\[2pt] \frac{1}{r^{n-2}g_2} & \frac{1}{r^{n-2}g_2} & \frac{1}{r^{n-2}g_2} & \cdots & \frac{1}{r^{n-3}g_2} & \frac{1}{r^{n-3}g_1}\\[2pt] \frac{1}{r^{n-1}g_1} & \frac{1}{r^{n-1}g_1} & \frac{1}{r^{n-1}g_1} & \cdots & \frac{1}{r^{n-1}g_1} & \frac{1}{r^{n-2}g_1}\end{pmatrix}.\tag{14}$$
We now give the factorization, determinant, inverse, and norms of this matrix.
Theorem 7. 
The factorization of $F_{r,n}^{\circ(-1)}$ is given by
$$F_{r,n}^{\circ(-1)}=MUN.$$
Proof. 
By performing certain elementary column operations on $F_{r,n}^{\circ(-1)}$, exactly as in Theorem 1, we reduce it to the tridiagonal matrix
$$T=\begin{pmatrix}\frac{1}{g_n}-\frac{1}{g_{n-1}} & \frac{1}{g_{n-1}} & & & \\[2pt] \frac{1-r}{r\,g_{n-1}} & \frac{1}{g_{n-1}}-\frac{1}{g_{n-2}} & \frac{1}{g_{n-2}} & & \\[2pt] & \frac{1-r}{r^2g_{n-2}} & \frac{1}{r}\Big(\frac{1}{g_{n-2}}-\frac{1}{g_{n-3}}\Big) & \ddots & \\[2pt] & & \ddots & \frac{1}{r^{n-3}}\Big(\frac{1}{g_2}-\frac{1}{g_1}\Big) & \frac{1}{r^{n-3}g_1}\\[2pt] & & & \frac{1-r}{r^{n-1}g_1} & \frac{1}{r^{n-2}g_1}\end{pmatrix}.$$
For the matrix $T$, we examine Crout's decomposition $T=MU$ with
$$M=\begin{pmatrix}m_1 & & & \\ n_2 & m_2 & & \\ & \ddots & \ddots & \\ & & n_n & m_n\end{pmatrix},\qquad U=\begin{pmatrix}1 & s_1 & & \\ & 1 & \ddots & \\ & & \ddots & s_{n-1}\\ & & & 1\end{pmatrix},$$
and thus we have
$$MU=\begin{pmatrix}a_1 & b_1 & & \\ c_2 & a_2 & b_2 & \\ & \ddots & \ddots & \ddots\\ & & c_n & a_n\end{pmatrix}.$$
So, we achieve
$$a_k=\frac{1}{g_{n+1-k}}-\frac{1}{g_{n-k}},\qquad k=1,2,\ldots,n,$$
$$c_k=\frac{1-r}{r\,g_{n+1-k}},\qquad k=2,3,\ldots,n,$$
$$b_k=\frac{1}{g_{n-k}},\qquad k=1,2,\ldots,n-1.$$
Thus, we get $F_{r,n}^{\circ(-1)}=MUN=TN$, where
$$N=\begin{pmatrix}1 & 0 & \cdots & 0\\ 1 & 1 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 1 & 1 & \cdots & 1\end{pmatrix}.\ \square$$
Theorem 8. 
Let $F_{r,n}^{\circ(-1)}$ be the matrix in (14), with $\frac{1}{g_1}<\frac{1}{g_2}<\frac{1}{g_3}<\cdots<\frac{1}{g_n}$. Then the determinant of the reciprocal generalized geometric Frank matrix is given by
$$\det\big(F_{r,n}^{\circ(-1)}\big)=\Big(\frac{1}{g_n}-\frac{1}{g_{n-1}}\Big)\prod_{i=2}^{n}\big(a_i-n_is_{i-1}\big),$$
where $n_i=c_i$ for $i\ge 2$ and $s_i=\dfrac{b_i}{m_i}=\dfrac{b_i}{a_i-n_is_{i-1}}$ for $i\ge 2$, $i\in\mathbb{Z}^+$.
Proof. 
From the multiplicative property of the determinant, we can write
$$\det\big(F_{r,n}^{\circ(-1)}\big)=\det(MUN)=\det M\cdot\det U\cdot\det N,\qquad \det U=\det N=1.$$
We find that
$$\det\big(F_{r,n}^{\circ(-1)}\big)=m_1m_2m_3\cdots m_n=\prod_{i=1}^{n}m_i=a_1\prod_{i=2}^{n}m_i=\Big(\frac{1}{g_n}-\frac{1}{g_{n-1}}\Big)\prod_{i=2}^{n}\big(a_i-n_is_{i-1}\big),$$
where $n_i=c_i$ and $s_i=\dfrac{b_i}{m_i}=\dfrac{b_i}{a_i-n_is_{i-1}}$ for $i\ge 2$, $i\in\mathbb{Z}^+$. $\square$
Theorem 9. 
Let the reciprocal matrix defined in (14) be partitioned as
$$F_{r,n}^{\circ(-1)}=\begin{pmatrix}\frac{1}{g_n} & P\\ Q & R\end{pmatrix},$$
where $P=\big(\frac{1}{g_{n-1}},0,0,\ldots,0\big)$, $Q=\big(\frac{1}{rg_{n-1}},\frac{1}{r^2g_{n-2}},\frac{1}{r^3g_{n-3}},\ldots,\frac{1}{r^{n-1}g_1}\big)^T$, and $R$ is the trailing $(n-1)\times(n-1)$ block of $F_{r,n}^{\circ(-1)}$. If $F_{r,n}^{\circ(-1)}$ is nonsingular, then its inverse is
$$\big(F_{r,n}^{\circ(-1)}\big)^{-1}=\begin{pmatrix} g_n+g_nPx^{-1}Qg_n & -g_nPx^{-1}\\ -x^{-1}Qg_n & x^{-1}\end{pmatrix},$$
where $x=R-Qg_nP$ is the Schur complement of $\frac{1}{g_n}$.
Proof. 
As in Theorem 3, the statement is given in closed form for general $n$, so it suffices to verify it by direct computation rather than by induction. Multiplying $\big(F_{r,n}^{\circ(-1)}\big)^{-1}$ and $F_{r,n}^{\circ(-1)}$ blockwise and simplifying with $x=R-Qg_nP$, the off-diagonal blocks vanish and the diagonal blocks reduce to $1$ and $I_{n-1}$, giving the identity matrix. $\square$
Theorem 10. 
Let the generalized reciprocal geometric Frank matrix $F_{r,n}^{\circ(-1)}$ be as in (14), with $\frac{1}{g_1}<\frac{1}{g_2}<\cdots<\frac{1}{g_n}$. Then the spectral norm admits the upper bounds
$$\big\|F_{r,n}^{\circ(-1)}\big\|_2\le\begin{cases}\sqrt{\Big(\dfrac{n-1}{|r|^{2(n-1)}}+\dfrac{1}{|r|^{2(n-2)}}\Big)\displaystyle\sum_{i=1}^{n}\frac{1}{g_i^2}}, & |r|<1,\\[10pt]\sqrt{\Big(\dfrac{1}{|r|^{2}}+2\Big)\displaystyle\sum_{i=1}^{n}\frac{1}{g_i^2}}, & |r|\ge 1,\end{cases}\qquad\text{for }n\ge 3.$$
Moreover, for the matrix $F_{r,n}^{\circ(-1)}$ in (14), the following norms hold:
$$\big\|F_{r,n}^{\circ(-1)}\big\|_p^p=\frac{1}{g_n^p}+\frac{1}{g_{n-1}^p}+\sum_{k=1}^{n-1}\Big(\frac{1}{|r|^{n-k-1}g_k}\Big)^p+\sum_{k=1}^{n-1}\Big(\frac{1}{|r|^{k}g_{n-k}}\Big)^p\,k+\sum_{k=1}^{n-2}\Big(\frac{1}{|r|^{n-k-2}g_k}\Big)^p\tag{16}$$
and
$$\big\|F_{r,n}^{\circ(-1)}\big\|_E^2=\frac{1}{g_n^2}+\frac{1}{g_{n-1}^2}+\sum_{k=1}^{n-1}\Big(\frac{1}{|r|^{n-k-1}g_k}\Big)^2+\sum_{k=1}^{n-1}\Big(\frac{1}{|r|^{k}g_{n-k}}\Big)^2\,k+\sum_{k=1}^{n-2}\Big(\frac{1}{|r|^{n-k-2}g_k}\Big)^2.\tag{17}$$
Proof. 
Let $Y$ and $Z$ be the matrices
$$Y=\begin{pmatrix}1 & 1 & 0 & \cdots & 0 & 0\\[2pt] \frac{1}{r} & 1 & 1 & \cdots & 0 & 0\\[2pt] \frac{1}{r^2} & \frac{1}{r^2} & \frac{1}{r} & \cdots & 0 & 0\\[2pt] \vdots & & & \ddots & & \vdots\\[2pt] \frac{1}{r^{n-2}} & \frac{1}{r^{n-2}} & \frac{1}{r^{n-2}} & \cdots & \frac{1}{r^{n-3}} & \frac{1}{r^{n-3}}\\[2pt] \frac{1}{r^{n-1}} & \frac{1}{r^{n-1}} & \frac{1}{r^{n-1}} & \cdots & \frac{1}{r^{n-1}} & \frac{1}{r^{n-2}}\end{pmatrix},\qquad Z=\begin{pmatrix}\frac{1}{g_n} & \frac{1}{g_{n-1}} & 0 & \cdots & 0\\[2pt] \frac{1}{g_{n-1}} & \frac{1}{g_{n-1}} & \frac{1}{g_{n-2}} & \cdots & 0\\[2pt] \vdots & & & \ddots & \vdots\\[2pt] \frac{1}{g_2} & \frac{1}{g_2} & \frac{1}{g_2} & \cdots & \frac{1}{g_2}\\[2pt] \frac{1}{g_1} & \frac{1}{g_1} & \frac{1}{g_1} & \cdots & \frac{1}{g_1}\end{pmatrix}.$$
Since $F_{r,n}^{\circ(-1)}=Y\circ Z$, Lemma 1 gives
$$\big\|F_{r,n}^{\circ(-1)}\big\|_2\le r_1(Y)\,c_1(Z),$$
where
$$r_1(Y)=\begin{cases}\sqrt{\dfrac{n-1}{|r|^{2(n-1)}}+\dfrac{1}{|r|^{2(n-2)}}}, & |r|<1,\\[8pt]\sqrt{\dfrac{1}{|r|^2}+2}, & |r|\ge 1,\end{cases}\qquad c_1(Z)=\sqrt{\sum_{i=1}^{n}\frac{1}{g_i^2}},$$
which yields the stated bounds. Computing the entrywise sums from Definition 2 gives (16), and taking $p=2$ in (16) yields the Euclidean norm (17). $\square$
Theorem 11. 
It has been established that the characteristic polynomial $P_n^{\circ(-1)}(\lambda)$ of the generalized reciprocal geometric Frank matrix $F_{r,n}^{\circ(-1)}$ satisfies the recurrence relation
$$P_n^{\circ(-1)}(\lambda)=\Big(\lambda-\frac{1}{g_n}+\frac{1}{g_{n-1}}\Big)\bar P_n^{\circ(-1)}(\lambda)+\frac{1}{g_{n-1}}\tilde P_n^{\circ(-1)}(\lambda)=\Big(\lambda-\frac{1}{g_n}+\frac{1}{g_{n-1}}\Big)\bar P_n^{\circ(-1)}(\lambda)+\frac{1}{g_{n-1}}\Big(\frac{1}{g_{n-1}}\Big(1-\frac{1}{r}\Big)-\lambda\Big)\hat P_n^{\circ(-1)}(\lambda),$$
where
$$\tilde P_n^{\circ(-1)}(\lambda)=\Big(\frac{1}{g_{n-1}}\Big(1-\frac{1}{r}\Big)-\lambda\Big)\hat P_n^{\circ(-1)}(\lambda),\qquad \hat P_n^{\circ(-1)}(\lambda)=\frac{1}{g_{n-k}\,r^{k-2}}\,\hat P_{n-1}^{\circ(-1)}(\lambda),\ k\ge 2,$$
with initial conditions
$$P_1^{\circ(-1)}(\lambda)=\lambda-\frac{1}{g_1},\qquad P_2^{\circ(-1)}(\lambda)=\lambda^2-\Big(\frac{1}{g_1}+\frac{1}{g_2}\Big)\lambda+\frac{1}{g_1g_2}-\frac{1}{rg_1^2}.$$
Proof. 
The proof is carried out in a manner similar to that of Theorem 5. □
Lemma 3. 
For the generalized reciprocal geometric Frank matrix $F_{r,n}^{\circ(-1)}$, we have
$$\operatorname{tr}\big(F_{r,n}^{\circ(-1)}\big)=\frac{1}{g_n}+\frac{1}{g_{n-1}}+\frac{1}{rg_{n-2}}+\frac{1}{r^2g_{n-3}}+\cdots+\frac{1}{r^{n-2}g_1}=\frac{1}{g_n}+\sum_{i=1}^{n-1}\frac{1}{r^{i-1}g_{n-i}}.$$
Proof. 
This result directly stems from the definition of the matrix trace, which is computed as the sum of the entries on the principal diagonal. □
Theorem 12. 
The spread of the generalized reciprocal geometric Frank matrix $F_{r,n}^{\circ(-1)}$ admits the upper bound
$$S\big(F_{r,n}^{\circ(-1)}\big)\le\sqrt{2\Big(\frac{1}{g_n^2}+\frac{1}{g_{n-1}^2}+\sum_{k=1}^{n-1}\Big(\frac{1}{|r|^{n-k-1}g_k}\Big)^2+\sum_{k=1}^{n-1}\Big(\frac{1}{|r|^{k}g_{n-k}}\Big)^2\,k+\sum_{k=1}^{n-2}\Big(\frac{1}{|r|^{n-k-2}g_k}\Big)^2\Big)-\frac{2}{n}\Big(\frac{1}{g_n}+\sum_{i=1}^{n-1}\frac{1}{r^{i-1}g_{n-i}}\Big)^2}.$$
Proof. 
Replacing $X$ by $F_{r,n}^{\circ(-1)}$ in the spread bound of Definition 3 and combining the Frobenius norm (17) of Theorem 10 with the trace formula of Lemma 3, we obtain
$$S\big(F_{r,n}^{\circ(-1)}\big)^2\le 2\big\|F_{r,n}^{\circ(-1)}\big\|_E^2-\frac{2}{n}\Big(\operatorname{tr}F_{r,n}^{\circ(-1)}\Big)^2,$$
which gives the stated bound. $\square$

4. Examples for Generalized Geometric Frank Matrix

This section includes a numerical example, obtained via Wolfram Alpha, to support and illustrate the theoretical findings. The example involves F r , n matrices, with entries derived from Fibonacci numbers.
The recurrence relation for Fibonacci sequences is defined as follows:
F n + 2 = F n + 1 + F n .
Here, based on the definition of the generalized geometric Frank matrix and the requirement for the sequences to be increasing, we will set the initial values as F 1 = 1 ,   F 2 = 2 . The subsequent terms of the sequence continue as F 3 = 3, F 4 = 5, F 5 = 8, F 6 = 13, F 7 = 21, and so on.
Fibonacci numbers constitute one of the most fundamental integer sequences in mathematics and have been widely investigated due to their rich algebraic and asymptotic properties. In addition to their classical role in number theory, Fibonacci numbers frequently appear in matrix theory and structured matrix constructions, where they are used to analyze spectral behavior, stability, and norm estimates [26,27,28,29]. Their well-understood growth characteristics and regular structure make them particularly suitable for illustrating theoretical results related to determinants, norms, and spectral spread in matrix-based models.
For this reason, Fibonacci numbers are employed in this study to construct a concrete example of the generalized geometric Frank matrix, enabling a clear and meaningful demonstration of the proposed theoretical results.
Let
$$a_{ij}=\begin{cases} r^{i-1}F_{n+1-i}, & i>j,\\ r^{i-2}F_{n+1-j}, & j-2<i\le j,\\ 0, & \text{otherwise},\end{cases}$$
where $F_n$ is the $n$th Fibonacci number, and take $n=5$, $r=\frac12$. The generalized geometric Fibonacci Frank matrix is then
$$FF_{\frac12,5}=\begin{pmatrix} F_5 & F_4 & 0 & 0 & 0\\[2pt] \frac12F_4 & F_4 & F_3 & 0 & 0\\[2pt] \frac14F_3 & \frac14F_3 & \frac12F_3 & \frac12F_2 & 0\\[2pt] \frac18F_2 & \frac18F_2 & \frac18F_2 & \frac14F_2 & \frac14F_1\\[2pt] \frac1{16}F_1 & \frac1{16}F_1 & \frac1{16}F_1 & \frac1{16}F_1 & \frac18F_1\end{pmatrix}=\begin{pmatrix} 8 & 5 & 0 & 0 & 0\\[2pt] \frac52 & 5 & 3 & 0 & 0\\[2pt] \frac34 & \frac34 & \frac32 & 1 & 0\\[2pt] \frac14 & \frac14 & \frac14 & \frac12 & \frac14\\[2pt] \frac1{16} & \frac1{16} & \frac1{16} & \frac1{16} & \frac18\end{pmatrix}.$$
Using Theorem 2, the determinant of $FF_{\frac12,5}$ is computed as
$$\det\big(FF_{\frac12,5}\big)=(F_5-F_4)\prod_{i=2}^{5}\big(x_i-k_iu_{i-1}\big)=\frac{85}{64}\approx 1.3281.$$
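The example is easy to reproduce numerically; a sketch that enters $FF_{\frac12,5}$ exactly as displayed above and checks the determinant:

```python
import numpy as np

# Fibonacci values used in the example (with F1 = 1, F2 = 2, as in the text)
F1, F2, F3, F4, F5 = 1, 2, 3, 5, 8
r = 0.5

FF = np.array([
    [F5,          F4,          0,           0,           0          ],
    [r * F4,      F4,          F3,          0,           0          ],
    [r**2 * F3,   r**2 * F3,   r * F3,      r * F2,      0          ],
    [r**3 * F2,   r**3 * F2,   r**3 * F2,   r**2 * F2,   r**2 * F1  ],
    [r**4 * F1,   r**4 * F1,   r**4 * F1,   r**4 * F1,   r**3 * F1  ],
])

det = np.linalg.det(FF)   # 85/64 = 1.328125
```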
By virtue of Theorem 3, the matrix $FF_{\frac12,5}$ is partitioned as
$$FF_{\frac12,5}=\begin{pmatrix}F_5 & X\\ Y & K\end{pmatrix},$$
where $X=(F_4,0,0,0)$, $Y=\big(\frac12F_4,\frac14F_3,\frac18F_2,\frac1{16}F_1\big)^T$ and
$$K=\begin{pmatrix}F_4 & F_3 & 0 & 0\\[2pt] \frac14F_3 & \frac12F_3 & \frac12F_2 & 0\\[2pt] \frac18F_2 & \frac18F_2 & \frac14F_2 & \frac14F_1\\[2pt] \frac1{16}F_1 & \frac1{16}F_1 & \frac1{16}F_1 & \frac18F_1\end{pmatrix}.$$
The inverse of $FF_{\frac12,5}$ then follows from the block formula of Theorem 3, and a direct computation gives
$$FF_{\frac12,5}^{-1}=\begin{pmatrix}\frac{11}{68} & -\frac{7}{34} & \frac{9}{17} & -\frac{24}{17} & \frac{48}{17}\\[4pt] -\frac{1}{17} & \frac{28}{85} & -\frac{72}{85} & \frac{192}{85} & -\frac{384}{85}\\[4pt] -\frac{5}{136} & -\frac{3}{68} & \frac{33}{34} & -\frac{44}{17} & \frac{88}{17}\\[4pt] -\frac{3}{136} & -\frac{9}{340} & -\frac{37}{170} & \frac{276}{85} & -\frac{552}{85}\\[4pt] -\frac{3}{136} & -\frac{9}{340} & -\frac{37}{170} & -\frac{64}{85} & \frac{808}{85}\end{pmatrix}.$$
The Euclidean norm of the matrix $FF_{\frac12,5}$ is calculated from (13) as
$$\big\|FF_{\frac12,5}\big\|_E^2=F_5^2+F_4^2+\sum_{k=1}^{4}\Big(\big(\tfrac12\big)^{4-k}F_k\Big)^2+\sum_{k=1}^{4}\Big(\big(\tfrac12\big)^{k}F_{5-k}\Big)^2\,k+\sum_{k=1}^{3}\Big(\big(\tfrac12\big)^{3-k}F_k\Big)^2=\frac{4293}{32}=134.15625,$$
so that $\big\|FF_{\frac12,5}\big\|_E\approx 11.5826$.
Also, from the definition of the spectral norm, we first find the eigenvalues of the matrix obtained by multiplying $FF_{\frac12,5}$ with its transpose:
$$\big\|FF_{\frac12,5}\big\|_2=\sqrt{\max_i\lambda_i\Big(\big(FF_{\frac12,5}\big)^T FF_{\frac12,5}\Big)},$$
$$\big(FF_{\frac12,5}\big)^T FF_{\frac12,5}=\begin{pmatrix}70.879 & 53.129 & 8.691 & 0.879 & 0.070\\ 53.129 & 50.629 & 16.191 & 0.879 & 0.070\\ 8.691 & 16.191 & 11.316 & 1.629 & 0.070\\ 0.879 & 0.879 & 1.629 & 1.254 & 0.133\\ 0.070 & 0.070 & 0.070 & 0.133 & 0.078\end{pmatrix}.$$
The eigenvalues of this matrix are
$$\lambda_1=117.5947,\quad\lambda_2=14.6349,\quad\lambda_3=1.8050,\quad\lambda_4=0.1168,\quad\lambda_5=0.0049,$$
so that
$$\big\|FF_{\frac12,5}\big\|_2=\sqrt{\lambda_1}=\sqrt{117.5947}\approx 10.844.$$
On the other hand, from Theorem 4 for $|r|<1$ we obtain
$$\big\|FF_{\frac12,5}\big\|_2\le\sqrt{\Big(\big(\tfrac12\big)^2+2\Big)\sum_{i=1}^{5}F_i^2}=\sqrt{\tfrac94\,(1+4+9+25+64)}\approx 15.223,$$
so the upper bound is confirmed:
$$\big\|FF_{\frac12,5}\big\|_2\approx 10.844\le 15.223.$$
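The spectral norm and the Theorem 4 bound cross-check numerically (a sketch; the matrix is entered as displayed above):

```python
import numpy as np

F1, F2, F3, F4, F5 = 1, 2, 3, 5, 8
r = 0.5
FF = np.array([
    [F5,        F4,        0,         0,         0        ],
    [r*F4,      F4,        F3,        0,         0        ],
    [r**2*F3,   r**2*F3,   r*F3,      r*F2,      0        ],
    [r**3*F2,   r**3*F2,   r**3*F2,   r**2*F2,   r**2*F1  ],
    [r**4*F1,   r**4*F1,   r**4*F1,   r**4*F1,   r**3*F1  ],
])

# Spectral norm = largest singular value (~10.844)
spectral = np.linalg.svd(FF, compute_uv=False)[0]
# Theorem 4 bound for |r| < 1 (~15.223)
bound = np.sqrt((r**2 + 2) * (F1**2 + F2**2 + F3**2 + F4**2 + F5**2))
```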
To conclude, we establish an upper bound for the spread of the generalized geometric Frank matrix $FF_{\frac12,5}$. By Theorem 6,
$$S\big(FF_{\frac12,5}\big)^2\le 2\big\|FF_{\frac12,5}\big\|_E^2-\frac{2}{5}\Big(\operatorname{tr}FF_{\frac12,5}\Big)^2,$$
and with $\big\|FF_{\frac12,5}\big\|_E^2=134.15625$ and $\operatorname{tr}FF_{\frac12,5}=F_5+\sum_{i=1}^{4}\big(\tfrac12\big)^{i-1}F_{5-i}=15.125$ we get
$$S\big(FF_{\frac12,5}\big)\le\sqrt{2\,(134.15625)-\tfrac25\,(15.125)^2}=\sqrt{176.80625}\approx 13.297.$$
Let us now analyze the Fibonacci values under the condition $r=\frac14$, keeping the matrix dimension unchanged. Here $\big\|FF_{\frac14,5}\big\|_E^2=125.4681$ and $\operatorname{tr}FF_{\frac14,5}=F_5+\sum_{i=1}^{4}\big(\tfrac14\big)^{i-1}F_{5-i}=13.890625$, so
$$S\big(FF_{\frac14,5}\big)\le\sqrt{2\,(125.4681)-\tfrac25\,(13.890625)^2}=\sqrt{250.9362-77.1797}=\sqrt{173.7564}\approx 13.182.$$
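The two spread bounds can be compared numerically. In the sketch below, the helpers `frank` and `spread_bound` are our own names, built from the displayed matrix pattern and the bound of Definition 3; the exact constants depend on the Frobenius and trace arithmetic, but the comparison between $r=\frac12$ and $r=\frac14$ is direct:

```python
import numpy as np

def frank(F, r):
    # Generalized geometric Frank matrix following the displayed pattern:
    # strictly lower part r^(i-1) g_{n+1-i}; diagonal and superdiagonal carry
    # r^(i-2) g_{n+1-j} (no r factor in the first two rows, as displayed).
    n = len(F)
    A = np.zeros((n, n))
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if i > j:
                A[i - 1, j - 1] = r**(i - 1) * F[n - i]
            elif j - 1 <= i <= j:
                A[i - 1, j - 1] = (r**(i - 2) if i >= 3 else 1.0) * F[n - j]
    return A

def spread_bound(A):
    # Bound of Definition 3: S(A)^2 <= 2 ||A||_E^2 - (2/n) (tr A)^2
    n = A.shape[0]
    return np.sqrt(2.0 * np.sum(A**2) - (2.0 / n) * np.trace(A)**2)

F = [1, 2, 3, 5, 8]                        # F1..F5 with F1 = 1, F2 = 2
b_half = spread_bound(frank(F, 0.5))       # r = 1/2
b_quarter = spread_bound(frank(F, 0.25))   # r = 1/4, the tighter of the two
```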
The subsequent step is to ascertain upper bounds for the spread by performing the operations given above on the generalized reciprocal geometric Frank matrix, for r = 1 2   , on the same dimensional matrix:
$$S\left(F_{\frac{1}{2},5}^{\circ}\right)\le\sqrt{2\left[\frac{1}{F_{5}^{2}}+\frac{1}{F_{4}^{2}}+\sum_{k=1}^{4}2^{4-k}\frac{1}{F_{k}^{2}}+\sum_{k=1}^{4}\left(\frac{2^{k}}{F_{5-k}}\right)^{2}2^{k}+\sum_{k=1}^{3}\frac{2^{3-k}}{F_{k}^{2}}\right]-\frac{2}{5}\left(\frac{1}{F_{5}}+\sum_{i=1}^{4}\frac{2^{i-1}}{F_{5-i}}\right)^{2}}=\sqrt{2\,(1161.366)-\frac{2}{5}\,(120.819)}=\sqrt{2322.7334-48.3266}\approx 47.690.$$
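The trace term of the reciprocal matrix, and hence the resulting bound, can be checked in the same way (the squared-norm value $1161.366$ is taken from the text as given):

```python
from math import sqrt
from fractions import Fraction

# Trace term of the reciprocal matrix for r = 1/2, n = 5:
# tr = 1/F_5 + sum_{i=1}^{4} 2^{i-1} / F_{5-i}, with F_1..F_5 = 1,2,3,5,8,
# i.e. tr = 1/8 + 1/5 + 2/3 + 4/2 + 8/1.
F = [1, 2, 3, 5, 8]
tr = Fraction(1, 8) + sum(Fraction(2**i, F[3 - i]) for i in range(4))
print(round(float(tr), 4))   # 10.9917, so tr^2 is about 120.82

# Spread bound using the squared norm 1161.366 reported in the text.
bound = sqrt(2 * 1161.366 - (2 / 5) * float(tr)**2)
print(round(bound, 2))       # 47.69
```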
As can be seen, when we consider the generalized reciprocal geometric Frank matrix, the upper bound value becomes even larger. In spread calculations, proximity to the ideal value (close to 1) is considered more meaningful. In this context, the generalized geometric Frank matrix produced results closer to the desired value, indicating a more stable structure. In contrast, the generalized reciprocal geometric Frank matrix yielded larger upper bounds for the spread, suggesting a more variable behavior.
Then, considering the generalized geometric Frank matrix, we can make the following comments:
The Fibonacci numbers positioned along the main diagonal introduce a structured spectral pattern that is intrinsically linked to the golden ratio. As the parameter r increases, this organized growth becomes progressively disrupted, emphasizing the delicate balance between the stable, sequence-driven nature of the diagonal and the scaling influence of the subdiagonal elements.
Smaller values of r help preserve the spectral coherence induced by the Fibonacci sequence, thereby maintaining matrix stability. In contrast, larger values of r tend to stretch the eigenvalues, unveiling a pronounced sensitivity to structural perturbations. This dynamic interaction is particularly relevant in applications where tunable spectral stability is required.
To validate our theoretical observations, a matrix was constructed using consecutive Fibonacci numbers on the main diagonal, while the parameter r was varied over the set $\left\{\frac{1}{2},\frac{1}{4}\right\}$. The results show that the tightest upper bound for the spectral spread is achieved when $r=\frac{1}{4}$. Moreover, as the size of the matrix grows, the upper bound on the spread also increases, even while keeping r fixed at its optimal value.
Detailed analysis for selected r values:
For r = 1 4 (Upper Bound = 11.133 ): At this reduced parameter value, the influence of the subdiagonal elements becomes relatively small compared to the prevailing Fibonacci values along the main diagonal. This yields a strongly diagonally dominant matrix, with closely packed eigenvalues. Such spectral characteristics are particularly advantageous in fields like signal and coding theory, where tightly clustered eigenvalues enhance filtering efficiency and encoding precision.
For r = 1 2 (Upper Bound = 15.2407 ): Doubling the value of r significantly increases the influence of the subdiagonal terms, resulting in a moderate dispersion of eigenvalues. The matrix becomes less diagonally dominant, and the spectral spread widens accordingly. This configuration may be suitable for engineering systems that benefit from controlled eigenvalue separation.
Key findings from this analysis reveal the following:
First, choosing values of r smaller than 1 generally reduces the upper bound of the spread, thus promoting greater spectral stability. Second, increasing the matrix size increases the upper bound of the spread, indicating that larger matrices may not always be ideal when the goal is to minimize spectral dispersion. From both a theoretical and a practical standpoint, the findings suggest that employing matrices of smaller dimensions and selecting r values, especially those less than 1, leads to enhanced spectral characteristics while also reducing the computational workload. Therefore, it is essential to strike the right balance between the matrix size and the parameter r to achieve optimal efficiency and effectiveness in matrix-based computations.
Figure 1 illustrates the relationship between the spread's upper bound and the parameter r, emphasizing how changes in r, both increases and decreases, affect matrices of identical dimensions.
When an asymptotic analysis of the spread upper bound is carried out, particularly as the parameter $r\to 0$, a clear limiting behavior can be identified: as $r\to 0$, the spread upper bound converges to 1, suggesting an increasingly compact spectral distribution.
As shown in our analysis, the spectral spread of the generalized geometric Frank matrix is strongly influenced by the parameter r. Smaller r values yield tighter eigenvalue clustering, enhancing matrix stability and predictability.
To formally quantify the computational implications of these structural choices, we employ Big-O notation, which characterizes how computational cost scales with the matrix size $n$ and the iteration count $k(r)$. This allows us to assess, in a precise and general manner, how parameter selection affects efficiency. As an illustrative example, a $5\times 5$ matrix was constructed and its Frobenius norm, factorization, and iterative eigenvalue computations were measured. A smaller r reduced both the spread upper bound and the required iteration count $k(r)$, lowering the total operation count from approximately 644 to 464. Therefore, employing smaller r values not only improves spectral stability but also reduces the effective computational workload, as formalized by $T(n,r)=O(k(r)\,n^{2})$. This insight integrates the theoretical spread analysis with practical computational efficiency considerations in matrix-based applications.
The reduction in the spectral spread for smaller r values not only improves matrix stability but also reduces computational effort in a formal, quantifiable manner. Formally, the spread is bounded by
$$S\left(F_{r,n}\right)\le\sqrt{2\left\|F_{r,n}\right\|_{E}^{2}-\frac{2}{n}\left(\operatorname{tr}F_{r,n}\right)^{2}},$$
where $\left\|F_{r,n}\right\|_{E}^{2}$ is the sum of squares of all matrix elements and $\operatorname{tr}F_{r,n}$ is the sum of the diagonal elements. The influence of the subdiagonal elements is proportional to r, so reducing r decreases $\left\|F_{r,n}\right\|_{E}^{2}$ and thus the spread upper bound for any matrix size $n$. Tighter eigenvalue clustering allows iterative eigenvalue algorithms to converge faster, giving an effective computational complexity of $O(k(r)\,n^{2})$, where the iteration count $k(r)$ depends on the spread.
Example ($n=5$, step-by-step). To illustrate this concretely:
  • Matrix construction: $5\times 5$ matrix → 25 elements, 25 multiplications.
  • Frobenius norm computation: 25 squarings + 24 additions ≈ 49 operations.
  • LU/Crout factorization: the narrow band structure reduces the cost well below the dense $O(n^{3})$, to ≈ 30 operations.
  • Iterative eigenvalue computation: each matrix-vector multiplication costs $5\times 5=25$ multiplications + 20 additions ≈ 45 operations. The iteration count $k(r)$ depends on the spectral spread: $r=1/2 \Rightarrow k\approx 12$; $r=1/4 \Rightarrow k\approx 8$.
  • Total operations:
    $r=1/2$: $25+49+30+12\times 45\approx 644$;
    $r=1/4$: $25+49+30+8\times 45\approx 464$.
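The operation counts above can be collected into a small cost model (the per-step costs are the ones itemized in the bullets; the factorization cost of 30 and the iteration counts k(r) are the illustrative values quoted in the text):

```python
# Cost model for the n = 5 example: construction + Frobenius norm
# + banded factorization + k matrix-vector iterations.
def total_ops(k, n=5):
    construction = n * n                   # 25 multiplications
    frobenius = n * n + (n * n - 1)        # 25 squarings + 24 additions = 49
    factorization = 30                     # banded LU/Crout estimate from the text
    per_iteration = n * n + n * (n - 1)    # 25 multiplications + 20 additions = 45
    return construction + frobenius + factorization + k * per_iteration

print(total_ops(k=12))   # r = 1/2 -> 644
print(total_ops(k=8))    # r = 1/4 -> 464
```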
This shows that a smaller r reduces the spread, decreases the iteration count $k(r)$, and therefore lowers the effective computational cost $T(n,r)=O(k(r)\,n^{2})$.
From both a theoretical and a practical standpoint, these findings suggest that choosing smaller r values ($r<1$) and moderate matrix dimensions leads to enhanced spectral characteristics while reducing the computational workload.
Remark 2. 
The results obtained in this study show that the generalized geometric Frank matrix provides both theoretical flexibility and computational advantages. In particular, the introduction of the geometric parameter r allows effective control over the spread upper bounds and matrix norms. Choosing r < 1 leads to tighter spectral bounds and improved numerical stability, while lower-dimensional constructions significantly reduce computational cost, as confirmed by the Big O analysis. Moreover, the Fibonacci-based example demonstrates that the proposed matrix class naturally accommodates structured sequences frequently used in applied mathematics, signal processing, and optimization problems. These features make the generalized geometric Frank matrix a practical and efficient alternative to classical Frank-type matrices in large-scale computations.
Most existing studies on classical Frank matrices and their generalizations primarily focus on algebraic and spectral properties without explicitly addressing computational complexity. In contrast, the present study incorporates Big O notation to formally evaluate the computational cost associated with the generalized geometric Frank matrix. By expressing the complexity in terms of the matrix dimension n and the iteration count k ( r ) , we demonstrate that choosing r < 1 and lower-dimensional matrix structures leads to reduced computational complexity. This explicit complexity-aware framework distinguishes the proposed approach from earlier Frank-type matrix studies and highlights its practical relevance for large-scale and optimization-oriented applications.

5. Conclusions

We introduced the generalized geometric Frank matrix as an innovative extension of the classical Frank matrix and its variations. By investigating its algebraic structure, we derived key properties, including its factorizations, determinant, inverse, and various norms. Furthermore, we explored the reciprocal generalized geometric Frank matrix, uncovering a diverse range of linear algebraic properties that enrich its theoretical significance. To validate our findings, we applied these results to Fibonacci numbers, demonstrating how the generalized geometric Frank matrix interacts with this special sequence. This practical application not only reinforced the theoretical results but also highlighted the broader potential of these matrices in mathematical and computational contexts. The findings of this study suggest that selecting values of r less than 1 results in a decrease in the upper limit of the spread. This indicates that utilizing matrices of reduced dimensions, in conjunction with judiciously selected r values, can enhance matrix performance by minimizing spread and reducing computational demands.
Also, to formally quantify the computational implications of these structural choices, we employed Big-O notation, which characterizes how computational cost scales with the matrix size $n$ and the iteration count $k(r)$.
Consequently, focusing on smaller-sized matrices with optimally selected r values offers a practical and efficient approach to achieving superior results. By building on the foundational work presented in this study, future research on generalized geometric Frank matrices could deepen our understanding of structured matrices and uncover novel applications in both theoretical and applied mathematics.
Future Research Directions
Future research may extend the generalized geometric Frank matrix framework to other classes of structured matrices, including block, banded, and circulant-type constructions, to further investigate their spectral and conditioning behavior. The interaction between the geometric parameter r and matrix stability can also be explored in greater depth, particularly in relation to condition numbers, generalized inverses, and iterative solvers. In addition, employing different number sequences or stochastic components within the matrix structure may lead to new insights and applications in large-scale numerical algorithms, signal processing, and optimization problems.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The author certifies that she has no affiliations with or involvement in any organization or entity with any financial interest or non-financial interest in the subject matter or materials discussed in this manuscript.

Figure 1. Variation of the Spread’s Upper Bound with Respect to Changes in r .

Kuloğlu, B. Matrix-Theoretic Investigation of Generalized Geometric Frank Matrices: Spread Estimates and Arithmetic Insights. Axioms 2026, 15, 52. https://doi.org/10.3390/axioms15010052