1. Introduction
Frank matrices are a special class of matrices in linear algebra, often used as test matrices in numerical analysis, particularly for eigenvalue computations. First introduced by Frank in 1958, these matrices exhibit a lower Hessenberg structure, meaning that all elements above the first superdiagonal are zero [1].
One notable property of Frank matrices is that their determinants are always equal to 1. Additionally, their inverses exhibit an upper Hessenberg structure, where all elements below the first subdiagonal are zero. These characteristics make Frank matrices an essential tool in matrix computations and numerical analysis.
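These two properties can be checked numerically. The sketch below, a minimal illustration, assumes the common lower Hessenberg construction $f_{ij} = n + 1 - \max(i, j)$ for $j \le i + 1$ and $0$ otherwise (this entry convention is taken from the test-matrix literature, not from this text), and verifies the unit determinant with exact rational arithmetic:

```python
from fractions import Fraction

def frank(n):
    # lower Hessenberg Frank matrix: zeros above the first superdiagonal;
    # entry convention f[i][j] = n + 1 - max(i, j) (1-based indices),
    # assumed from the test-matrix literature
    return [[n + 1 - max(i, j) if j <= i + 1 else 0
             for j in range(1, n + 1)] for i in range(1, n + 1)]

def det(A):
    # exact determinant via Gaussian elimination over the rationals
    n = len(A)
    M = [[Fraction(x) for x in row] for row in A]
    sign = 1
    for k in range(n):
        pivot = next((r for r in range(k, n) if M[r][k] != 0), None)
        if pivot is None:
            return 0
        if pivot != k:
            M[k], M[pivot] = M[pivot], M[k]
            sign = -sign
        for r in range(k + 1, n):
            factor = M[r][k] / M[k][k]
            for c in range(k, n):
                M[r][c] -= factor * M[k][c]
    result = Fraction(sign)
    for k in range(n):
        result *= M[k][k]
    return result
```

Running `det(frank(n))` for several values of n returns 1, consistent with the determinant property stated above.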
In recent years, generalized versions of Frank matrices have been explored. For instance, the “generalized max r-Frank matrix,” a subclass of lower Hessenberg matrices, has been studied for its characteristic polynomial, inverse, determinant, and norms. These extensions highlight the adaptability and mathematical richness of Frank matrices in various applications [2].
Two notable types of matrices frequently studied in [3] are min and max matrices. A min matrix is a relatively simple structure in which each entry is defined as $m_{ij} = \min(i, j)$, for $i, j = 1, \ldots, n$; a max matrix is an equally simple matrix whose entries are $M_{ij} = \max(i, j)$. In [4], the smallest eigenvalue of a min matrix was analyzed to establish bounds for trigonometric function values. Mattila and Haukkanen [5] investigated various properties of specific types of min and max matrices.
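Under the entry conventions just described, both constructions can be sketched in a few lines (indices are 1-based, matching the definitions above):

```python
def min_matrix(n):
    # entries m[i][j] = min(i, j), 1-based indices
    return [[min(i, j) for j in range(1, n + 1)] for i in range(1, n + 1)]

def max_matrix(n):
    # entries M[i][j] = max(i, j), 1-based indices
    return [[max(i, j) for j in range(1, n + 1)] for i in range(1, n + 1)]
```

For example, `min_matrix(3)` yields `[[1, 1, 1], [1, 2, 2], [1, 2, 3]]`.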
In recent years, generalizations of these matrices have also been considered by some authors; for a parameter r, one obtains the r-min and r-max matrices. Kızılateş and Terzioğlu [6] introduced the definitions of r-min and r-max matrices and established their important features.
In fact, the Frank matrix can be regarded as a special case of the max matrix; it is widely recognized as a popular test matrix for eigenvalue algorithms due to its combination of well-conditioned and ill-conditioned eigenvalues [7]. In 1986, Varah [8] presented a generalization of the Frank matrix, detailing methods for computing its eigenvalues and eigenvectors as well as estimating the condition numbers associated with its eigenvalues. Later, in 2020, Mersin and Bahşi [9] proposed another generalization of the Frank matrix, defined in terms of a real n-tuple as follows:
Mersin and Bahşi [10,11] subsequently carried the study of Frank matrices into different and more advanced directions. In addition to work on the general properties, determinants, and bounds of Frank matrices, they combined these studies with the Fibonacci and Lucas number sequences, providing readers with different perspectives. In later years, the same authors applied Sturm theorems to generalized Frank matrices and obtained their eigenvalues; the derivation of the characteristic polynomials of these matrices is another important contribution.
A prominent group within special matrices is the class of r-circulant matrices, which appear in numerous scientific and mathematical contexts. Their structural properties and diverse applications have attracted significant attention in previous research, as evidenced by the studies of Bertaccini and Ng (2001), Lyness and Sørevik (2004), and Zhao (2009) [12,13,14]. An r-circulant matrix is a square matrix determined by its first row and takes the following structured form:
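Since the explicit structured form is not reproduced here, the sketch below assumes the standard entry convention for an r-circulant matrix with first row $(c_0, \ldots, c_{n-1})$: entries that wrap around below the diagonal are multiplied by r.

```python
def r_circulant(c, r):
    # r-circulant matrix built from first row c = (c_0, ..., c_{n-1});
    # entries that wrap around are scaled by r (standard convention, assumed)
    n = len(c)
    return [[c[j - i] if j >= i else r * c[n + j - i]
             for j in range(n)] for i in range(n)]
```

Setting r = 1 recovers the ordinary circulant matrix.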
Another generalization of circulant matrices is the geometric circulant matrix. These matrices have attracted considerable research interest, with numerous studies exploring their characteristics. In particular, several works have examined aspects such as their norms and the corresponding upper and lower bounds [15,16,17,18]. These findings highlight the mathematical richness and wide-ranging utility of circulants and their generalized forms in theoretical and applied contexts.
Matrix theory provides a fundamental framework for analyzing the linear transformations, numerical stability, and spectral behavior of structured matrices. In particular, matrix norms, condition numbers, and generalized inverses are key tools for assessing the sensitivity of linear systems and eigenvalue problems, as extensively discussed in the classical matrix analysis literature [19,20]. Matrix-theoretic transformations and structured generalizations can significantly influence conditioning and numerical robustness, especially in the presence of ill-conditioned or nearly singular systems. Recent studies on weighted pseudoinverses further demonstrate that appropriate matrix structures lead to improved condition number estimates and enhanced numerical performance [21]. Motivated by this perspective, the generalized geometric Frank matrix introduced in this study is examined not only as an algebraic extension of the classical Frank matrix but also as a structured matrix model with controlled spectral and conditioning properties.
Another important notion is the spread, a measure that characterizes the spectral properties of a matrix [22]. It is defined in [19] as the difference between the maximum and minimum eigenvalues of a matrix. The generalized geometric Frank matrices considered in this study are diagonally dominant with real entries along the main diagonal. Therefore, all eigenvalues are real, and the spectral spread, defined as the difference between the maximum and minimum eigenvalues, is well-defined. This ensures that the comparisons of eigenvalue sizes used in our spread analysis are meaningful and consistent.
Mathematically, for a matrix $A$ with real eigenvalues, the spread is expressed as
$$s(A) = \lambda_{\max}(A) - \lambda_{\min}(A),$$
where $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ are the largest and smallest eigenvalues of the matrix, respectively. The spread helps analyze the spectral distribution of a matrix and provides insight into its behavior in various applications, such as control theory and signal processing.
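As a minimal illustration of this definition, the spread of a symmetric 2 × 2 matrix can be computed from the closed-form eigenvalues (pure Python, for illustration only):

```python
import math

def spread_sym2(a, b, d):
    # eigenvalues of [[a, b], [b, d]] are ((a + d) +/- sqrt((a - d)^2 + 4*b^2)) / 2,
    # so the spread is the difference of the two roots
    gap = math.sqrt((a - d) ** 2 + 4 * b * b)
    lam_max = (a + d + gap) / 2
    lam_min = (a + d - gap) / 2
    return lam_max - lam_min
```

For the diagonal matrix diag(3, 1), the eigenvalues are 3 and 1, so the spread is 2.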
Definition 1 ([19,23]). Let $A = (a_{ij})$ and $B = (b_{ij})$ be $m \times n$ matrices. Their Hadamard product is the entrywise product
$$A \circ B = (a_{ij} b_{ij}).$$
Definition 2 ([19,23]). For an $m \times n$ matrix $A = (a_{ij})$, the following norms are defined: the Euclidean (Frobenius) norm
$$\|A\|_E = \left( \sum_{i=1}^{m} \sum_{j=1}^{n} |a_{ij}|^2 \right)^{1/2}$$
and the spectral norm
$$\|A\|_2 = \sqrt{\max_{1 \le i \le n} \lambda_i (A^H A)},$$
where $\lambda_i(A^H A)$ is an eigenvalue of $A^H A$; also, $A^H$ is the conjugate transpose of the matrix $A$.
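A minimal sketch of these two norms for real matrices; the spectral norm is approximated here by power iteration on $A^T A$, a standard numerical device that is an assumption of this sketch rather than a method from this paper:

```python
import math

def frobenius_norm(A):
    # Euclidean (Frobenius) norm: square root of the sum of squared entries
    return math.sqrt(sum(x * x for row in A for x in row))

def spectral_norm(A, iters=500):
    # ||A||_2 = sqrt(largest eigenvalue of A^T A), approximated by
    # power iteration on A^T A (real matrices, for illustration only)
    m, n = len(A), len(A[0])
    AtA = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
           for i in range(n)]
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(AtA[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    # Rayleigh quotient at the converged vector gives the dominant eigenvalue
    lam = sum(v[i] * sum(AtA[i][j] * v[j] for j in range(n)) for i in range(n))
    return math.sqrt(lam)
```

For diag(3, 4), the Frobenius norm is 5 and the spectral norm is 4, matching the definitions.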
Definition 3. The spread of a matrix $A$ is
$$s(A) = \lambda_{\max}(A) - \lambda_{\min}(A).$$
Lemma 1 ([19,23]). Let $A$, $B$, and $C$ be $m \times n$ matrices with $A = B \circ C$. Then the following inequality holds:
$$\|A\|_2 \le r_1(B) \, c_1(C),$$
where
$$r_1(B) = \max_{1 \le i \le m} \left( \sum_{j=1}^{n} |b_{ij}|^2 \right)^{1/2} \quad \text{and} \quad c_1(C) = \max_{1 \le j \le n} \left( \sum_{i=1}^{m} |c_{ij}|^2 \right)^{1/2}.$$
4. Examples for Generalized Geometric Frank Matrix
This section includes a numerical example, obtained via Wolfram Alpha, to support and illustrate the theoretical findings. The example involves matrices with entries derived from Fibonacci numbers.
The recurrence relation for the Fibonacci sequence is defined as follows:
$$f_n = f_{n-1} + f_{n-2}, \quad n \ge 3.$$
Here, based on the definition of the generalized geometric Frank matrix and the requirement that the sequence be increasing, we set the initial values as $f_1 = 1$ and $f_2 = 2$. The subsequent terms of the sequence continue as 3, 5, 8, 13, 21, and so on.
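With these initial values, the sequence can be generated as follows (a minimal sketch; the indexing $f_1 = 1$, $f_2 = 2$ follows the increasing-sequence convention stated above):

```python
def fib(n):
    # f_1 = 1, f_2 = 2 (initial values chosen so the sequence is increasing),
    # f_n = f_{n-1} + f_{n-2} for n >= 3
    a, b = 1, 2
    for _ in range(n - 1):
        a, b = b, a + b
    return a
```

The first seven terms are 1, 2, 3, 5, 8, 13, 21, matching the sequence above.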
Fibonacci numbers constitute one of the most fundamental integer sequences in mathematics and have been widely investigated due to their rich algebraic and asymptotic properties. In addition to their classical role in number theory, Fibonacci numbers frequently appear in matrix theory and structured matrix constructions, where they are used to analyze spectral behavior, stability, and norm estimates [26,27,28,29]. Their well-understood growth characteristics and regular structure make them particularly suitable for illustrating theoretical results related to determinants, norms, and spectral spread in matrix-based models.
For this reason, Fibonacci numbers are employed in this study to construct a concrete example of the generalized geometric Frank matrix, enabling a clear and meaningful demonstration of the proposed theoretical results.
Let
where $f_n$ denotes the nth Fibonacci number. Also,
Then generalized geometric Fibonacci Frank matrix
Using Theorem 2, the determinant of
is computed:
By virtue of Theorem 3, the matrix
is written as follows:
where
and
Then the inverse of
is obtained as follows:
The Euclidean norm of the matrix
is calculated as:
Also, from the definition of the spectral norm, let us first find the eigenvalues of the matrix obtained by multiplying this matrix with its transpose for
:
The eigenvalues, obtained as the roots of the characteristic polynomial of this matrix, are as follows:
On the other hand, from Theorem 4, for
we obtain
The upper bound is then as follows:
To conclude, we establish an upper bound for the spread associated with the generalized geometric Frank matrix
:
Let us now analyze the Fibonacci values under the condition
, keeping the matrix dimension unchanged:
The subsequent step is to ascertain upper bounds for the spread by performing the operations given above on the generalized reciprocal geometric Frank matrix, on a matrix of the same dimension:
As can be seen, when we consider the generalized reciprocal geometric Frank matrix, the upper bound value becomes even larger. In spread calculations, proximity to the ideal value (close to 1) is considered more meaningful. In this context, the generalized geometric Frank matrix produced results closer to the desired value, indicating a more stable structure. In contrast, the generalized reciprocal geometric Frank matrix yielded larger upper bounds for the spread, suggesting a more variable behavior.
Then, considering the generalized geometric Frank matrix, we can make the following comments:
The Fibonacci numbers positioned along the main diagonal introduce a structured spectral pattern that is intrinsically linked to the golden ratio. As the parameter r increases, this organized growth becomes progressively disrupted, emphasizing the delicate balance between the stable, sequence-driven nature of the diagonal and the scaling influence of the subdiagonal elements.
Smaller values of r help preserve the spectral coherence induced by the Fibonacci sequence, thereby maintaining matrix stability. In contrast, larger values of r tend to stretch the eigenvalues, revealing a pronounced sensitivity to structural perturbations. This dynamic interaction is particularly relevant in applications where tunable spectral stability is required.
To validate our theoretical observations, a matrix was constructed using consecutive Fibonacci numbers on the main diagonal, while the parameter r was varied over a set of test values. The results show that the tightest upper bound for the spectral spread is achieved at the smallest tested value of r. Moreover, as the size of the matrix grows, the upper bound on the spread also increases, even while keeping r fixed at its optimal value.
Detailed analysis for selected values:
For the smaller tested value of r: at this reduced parameter value, the influence of the subdiagonal elements becomes relatively small compared to the prevailing Fibonacci values along the main diagonal. This yields a strongly diagonally dominant matrix with closely packed eigenvalues. Such spectral characteristics are particularly advantageous in fields like signal and coding theory, where tightly clustered eigenvalues enhance filtering efficiency and encoding precision.
For the doubled value of r: doubling r significantly increases the influence of the subdiagonal terms, resulting in a moderate dispersion of eigenvalues. The matrix becomes less diagonally dominant, and the spectral spread widens accordingly. This configuration may be suitable for engineering systems that benefit from controlled eigenvalue separation.
Key findings from this analysis reveal the following:
First, choosing values of r smaller than 1 generally results in a reduction of the upper bound of the spread, thus promoting greater spectral stability. Second, increasing the matrix size causes an increase in the upper bound of the spread, indicating that larger matrices may not always be ideal when the goal is to minimize spectral dispersion. From both a theoretical and practical standpoint, the findings suggest that employing matrices with smaller dimensions and selecting values of r, especially those less than 1, leads to enhanced spectral characteristics while also reducing the computational workload. Therefore, it is essential to strike the right balance between the matrix size and the parameter r to achieve optimal efficiency and effectiveness in matrix-based computations.
Figure 1 illustrates the relationship between the spread's upper bound and the parameter r, emphasizing how changes in r, both increases and decreases, impact matrices of identical dimensions.
If an asymptotic analysis of the spread upper bound is feasible, particularly when a clear limiting behavior with respect to the parameter r can be identified, it is highly valuable to state this result explicitly: in this limit, the spread upper bound converges to 1, suggesting an increasingly compact spectral distribution.
As shown in our analysis, the spectral spread of the generalized geometric Frank matrix is strongly influenced by the parameter r. Smaller r values yield tighter eigenvalue clustering, enhancing matrix stability and predictability.
To formally quantify the computational implications of these structural choices, we employ Big-O notation, which characterizes how computational cost scales with the matrix size and the iteration count. This allows us to assess, in a precise and general manner, how parameter selection affects efficiency. As an illustrative example, a matrix was constructed and its Frobenius norm, factorization, and iterative eigenvalue computations were measured. A smaller r reduced both the spread upper bound and the required number of iterations, lowering the total operation count from approximately 644 to 464. Therefore, employing smaller values of r not only improves spectral stability but also reduces the effective computational workload, as formalized by the Big-O analysis. This insight integrates the theoretical spread analysis with practical computational efficiency considerations in matrix-based applications.
The reduction in the spectral spread for smaller values of r not only improves matrix stability but also reduces computational effort in a formal, quantifiable manner. Formally, the spread is bounded by
$$s(A) \le \sqrt{2\|A\|_F^2 - \frac{2}{n}(\operatorname{tr} A)^2},$$
where $\|A\|_F^2$ is the sum of squares of all matrix elements and $\operatorname{tr} A$ is the sum of the diagonal elements. The influence of the subdiagonal elements is proportional to r, so reducing r decreases $\|A\|_F^2$ and thus the spread upper bound for any matrix size n. Tighter eigenvalue clustering allows iterative eigenvalue algorithms to converge faster, giving an effective computational complexity of $O(kn^2)$, where the iteration count k depends on the spread.
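The bound above (a Mirsky-type spread inequality, matching the Frobenius-norm and trace quantities just described) can be evaluated directly:

```python
import math

def spread_upper_bound(A):
    # Mirsky-type bound: s(A) <= sqrt(2*||A||_F^2 - (2/n) * (tr A)^2)
    n = len(A)
    fro2 = sum(x * x for row in A for x in row)   # sum of squares of all entries
    tr = sum(A[i][i] for i in range(n))           # sum of diagonal entries
    return math.sqrt(max(0.0, 2 * fro2 - 2 * tr * tr / n))
```

For a diagonal matrix such as diag(3, 1) the bound is attained exactly, since the spread of its eigenvalues is 2.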
Example (step-by-step). To illustrate this concretely for an $n \times n$ matrix:
Matrix construction: an $n \times n$ matrix has $n^2$ elements, so filling it requires $O(n^2)$ operations.
Frobenius norm computation: $n^2$ squarings and $n^2 - 1$ additions, i.e., $O(n^2)$ operations.
LU/Crout's factorization: the narrow band structure reduces the cost well below the dense $O(n^3)$ operation count.
Iterative eigenvalue computation: each matrix-vector multiplication costs $n^2$ multiplications and $n(n-1)$ additions, i.e., $O(n^2)$ operations per iteration; the iteration count k depends on the spectral spread.
This shows that a smaller r reduces the spread, decreases the iteration count k, and therefore lowers the effective computational cost $O(kn^2)$.
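The per-iteration and total operation counts in the list above can be modeled directly. Note that the totals of approximately 644 and 464 quoted earlier also include construction and factorization costs not itemized in this sketch, which models only the iterative part:

```python
def matvec_ops(n):
    # one matrix-vector product: n*n multiplications + n*(n-1) additions
    return n * n + n * (n - 1)

def iterative_cost(n, k):
    # k power-type iterations at O(n^2) operations each -> O(k * n^2) total
    return k * matvec_ops(n)
```

For n = 4, each iteration costs 28 operations, so reducing the iteration count k from 12 to 8 (as a tighter spread permits) cuts the iterative cost from 336 to 224 operations.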
Both from a theoretical and practical standpoint, these findings suggest that choosing smaller values and moderate matrix dimensions leads to enhanced spectral characteristics while reducing computational workload.
Remark 2.
The results obtained in this study show that the generalized geometric Frank matrix provides both theoretical flexibility and computational advantages. In particular, the introduction of the geometric parameter r allows effective control over the spread upper bounds and matrix norms. Choosing r < 1 leads to tighter spectral bounds and improved numerical stability, while lower-dimensional constructions significantly reduce computational cost, as confirmed by the Big-O analysis. Moreover, the Fibonacci-based example demonstrates that the proposed matrix class naturally accommodates structured sequences frequently used in applied mathematics, signal processing, and optimization problems. These features make the generalized geometric Frank matrix a practical and efficient alternative to classical Frank-type matrices in large-scale computations.
Most existing studies on classical Frank matrices and their generalizations primarily focus on algebraic and spectral properties without explicitly addressing computational complexity. In contrast, the present study incorporates Big-O notation to formally evaluate the computational cost associated with the generalized geometric Frank matrix. By expressing the complexity in terms of the matrix dimension n and the iteration count k, we demonstrate that choosing r < 1 and lower-dimensional matrix structures leads to reduced computational complexity. This explicit complexity-aware framework distinguishes the proposed approach from earlier Frank-type matrix studies and highlights its practical relevance for large-scale and optimization-oriented applications.
5. Conclusions
We introduced the generalized geometric Frank matrix as an innovative extension of the classical Frank matrix and its variations. By investigating its algebraic structure, we derived key properties, including its factorizations, determinant, inverse, and various norms. Furthermore, we explored the reciprocal generalized geometric Frank matrix, uncovering a diverse range of linear algebraic properties that enrich its theoretical significance. To validate our findings, we applied these results to Fibonacci numbers, demonstrating how the generalized geometric Frank matrix interacts with this special sequence. This practical application not only reinforced the theoretical results but also highlighted the broader potential of these matrices in mathematical and computational contexts. The findings of this study suggest that selecting values of r less than 1 results in a decrease in the upper limit of the spread. This indicates that utilizing matrices of reduced dimensions, in conjunction with judiciously selected values of r, can enhance matrix performance by minimizing the spread and reducing computational demands.
Also, to formally quantify the computational implications of these structural choices, we employed Big-O notation, which characterizes how computational cost scales with the matrix size n and the iteration count k.
Consequently, focusing on smaller-sized matrices with optimally selected values of r offers a practical and efficient approach to achieving superior results. By building on the foundational work presented in this study, future research on generalized geometric Frank matrices could deepen our understanding of structured matrices and uncover novel applications in both theoretical and applied mathematics.
Future Research Directions
Future research may extend the generalized geometric Frank matrix framework to other classes of structured matrices, including block, banded, and circulant-type constructions, to further investigate their spectral and conditioning behavior. The interaction between the geometric parameter r and matrix stability can also be explored in greater depth, particularly in relation to condition numbers, generalized inverses, and iterative solvers. In addition, employing different number sequences or stochastic components within the matrix structure may lead to new insights and applications in large-scale numerical algorithms, signal processing, and optimization problems.