# Some Comments about Zero and Non-Zero Eigenvalues from Connected Undirected Planar Graph Adjacency Matrices

## Abstract


## 1. Introduction

This paper treats two problems involving a binary 0-1 adjacency matrix, say **C**, representing connected undirected planar graphs with no self-loops, relevant to spatial statistics/econometrics and generalizable to other, especially applied, matrix and linear algebra situations. Both pertain to the inertia of a matrix (i.e., its number of negative, zero, and positive—respectively, n_{−}, n_{0}, and n_{+}—eigenvalues, say λ_{i}(**C**), i = 1, 2, … , n; e.g., see [1,2]; for a “matrix inertia theory and its applications” exposition, see Chapter 13 in Lancaster and Tismenetsky [3]). Its novelty is twofold within the context of planar graph theory: it contributes knowledge to help fill an existing gap in graph adjacency matrix nullity theory, nullity being the multiplicity of the zero eigenvalue in a graph adjacency matrix spectrum (e.g., [4,5,6,7,8]); and, it adds to the scant literature published to date about matrix inertia (e.g., [9]). The first problem, which goes beyond the well-known solutions for simply calculating zero eigenvalues, concerns enumeration of the linearly dependent adjacency matrix column/row subsets affiliated with these zero eigenvalues (e.g., see [10]), a single challenge here as a result of symmetry. The second problem concerns approximation of the real eigenvalues (e.g., see [11]) of these adjacency matrices, particularly their row-standardized version, say **W**, a linearly transformed version of a Laplacian matrix that enjoys extremely popular usage in spatial statistics/econometrics applications and conceptual nomenclature.

## 2. Background to the Pair of Problems

Let G_{n} = (V, E) be a simple undirected connected graph with n vertices V = {v_{1}, … , v_{n}}, and m ≤ n(n − 1)/2 edges E = {e_{ij} linking vertices v_{i} and v_{j}: i = 1, 2, … , n and j = 1, 2, … , n; i ≠ j due to the absence of self-loops}. Often in spatial statistics/econometrics, G_{n} is planar, and hence m ≤ 3(n − 2); sometimes G_{n} is near-planar, with m ≤ 8n. Several well-known properties of the sparse adjacency matrices for these graphs pertaining to both their zero and non-zero eigenvalues include:

- (1) the number of zero eigenvalues counts the linearly dependent column/row subsets (e.g., [15]);
- (2) the interval [$\mathrm{MAX}\{\sqrt{\sum _{\mathrm{i}=1}^{\mathrm{n}}{\mathrm{n}}_{\mathrm{i}}^{2}/\mathrm{n}},\ \sum _{\mathrm{i}=1}^{\mathrm{n}}{\mathrm{n}}_{\mathrm{i}}/\mathrm{n}\}$, $\underset{1\le \mathrm{i}\le \mathrm{n}}{\mathrm{MAX}}\{\left(1+\sqrt{4{\mathrm{n}}_{\mathrm{j}|\mathrm{i}}-4{\mathrm{n}}_{\mathrm{i}}+1}\right)/2\}$] (e.g., [16,17]) contains the principal eigenvalue (i.e., spectral radius) of the graph G_{n} adjacency matrix **C**, where n_{i} is the ith row sum, and n_{j|i} is the sum of vertex i’s linked row sums n_{j};
- (3) …
- (4) the principal eigenvalue of matrix **W** is one (via the Perron-Frobenius theorem (e.g., [20]));
- (5) the extreme maximum eigenvalue of matrix **C**, and the extreme negative eigenvalue of matrix **W**, for graph G_{n} compute very quickly [21];
- (6) the variance of the eigenvalues of matrix **C** is **1**^{T}**C1**/n, and of matrix **W** is **1**^{T}**D**^{−1}**C D**^{−1}**1**/n, where **1** is an n-by-1 vector of ones, superscript T denotes the matrix transpose operator, and **D** is a diagonal matrix whose (i, i) cell entry is n_{i} [21];
- (7) the sum of the positive eigenvalues equals minus the sum of the negative eigenvalues [all diagonal entries of matrix **C** are zero in the absence of self-loops, implying that $\sum _{\mathrm{i}=1}^{\mathrm{n}}{\lambda}_{\mathrm{i}}(\mathbf{C})=0$];
- (8) the sum of the k largest eigenvalues of matrix **C** is at most (√k + 1)n/2 [22];
- (9) the line graph, L(G_{n}), furnishes a lower bound, whereas the maximally connected graph [23], L(G_{2}) + L(G_{n−2}), furnishes an upper bound configuration for numerical planar graph eigenfunction analysis;
- (10) for irregular graphs G_{n} (i.e., n_{i} has a relatively small modal frequency coupled with a relatively large range), the theoretical maximum number of negative eigenvalues is 3n/4 [24], whereas empirically this maximum number almost always is less than 2n/3, and usually is between n/2 and 3n/5 [25], even in the presence of numerous completely connected K_{4} (in graph theory parlance) subgraphs (i.e., the maximum fully connected subgraph Kuratowski’s theorem allows to exist in a planar graph);
- (11) a bipartite graph is always 2-colorable (i.e., one only needs to assign at most two different colors to all graph vertices such that no two adjacent vertices have the same color), and vice-versa (see [26]), implying that knowledge about the number and positioning of K_{4} subgraphs potentially is informative; and,
- (12) if the maximum degree (i.e., MAX{n_{i}}; see property #2) Δ > 3, η denotes nullity, and G_{n} is not complete bipartite, then η < (Δ − 2)n/(Δ − 1) [5]; this is not a very useful spatial statistics/econometrics upper bound, since most irregular surface partitionings have at least one polygon areal unit with Δ sufficiently large (e.g., in the range 10–28) that (Δ − 2)/(Δ − 1) is close to one.
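Properties #6 and #7 are straightforward to verify numerically. The sketch below is a minimal check, using an illustrative triangle-plus-pendant graph of our own choosing (not one of the paper's specimens), confirming the zero eigenvalue sum and both variance formulas:

```python
import numpy as np

# A small planar test graph: a triangle (nodes 0, 1, 2) with a pendant node 3.
C = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 0), (2, 3)]:
    C[i, j] = C[j, i] = 1.0

ones = np.ones(4)
D_inv = np.diag(1.0 / C.sum(axis=1))   # D holds the row sums n_i on its diagonal
W = D_inv @ C                          # row-standardized adjacency matrix

lam_C = np.linalg.eigvalsh(C)          # C is symmetric: real eigenvalues
lam_W = np.linalg.eigvals(W).real      # W is similar to a symmetric matrix: real eigenvalues

# Property #7: trace(C) = 0, so the eigenvalues sum to zero.
print(np.isclose(lam_C.sum(), 0.0))                                   # True
# Property #6: the eigenvalue variances equal 1'C1/n and 1'D^{-1}CD^{-1}1/n.
print(np.isclose(lam_C.var(), ones @ C @ ones / 4))                   # True
print(np.isclose(lam_W.var(), ones @ D_inv @ C @ D_inv @ ones / 4))   # True
```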

Many of these properties transfer from matrix **C** to matrix **W**. Although the line graph, L(G_{n}) (i.e., property #9), gives a lower bound configuration for practical spatial statistical problems, an undirected star graph (i.e., K_{1,n−1} in graph theory parlance) gives its absolute lower bound for the sum of positive eigenvalues, which is one for matrix **W**; this particular graph also highlights the importance of accounting for zero adjacency matrix eigenvalues. In other words, given that the largest eigenvalue, λ_{1}(**W**), equals 1, the upper bound for the sum of k positive eigenvalues is k, with the number of positive eigenvalues denoted here by matrix inertia notation n_{+}; realization of this value is infeasible for a connected L(G_{n}) since all positive eigenvalues other than λ_{1}(**W**) are guaranteed to be < 1. Thus, for other symmetric eigenvalue distributions (e.g., those characterizing a regular square tessellation lattice forming a complete rectangular region), this loose upper bound also is at most n/2, and actually is closer to n_{+}/2; such a tessellation forming a complete √n-by-√n square region, for a perfect square number n, with links in its dual planar graph defined by pixels sharing a non-zero length boundary, also has √n zero eigenvalues (e.g., see Comellas et al., 2008), slightly reducing this upper bound for it by a factor of (1 − 1/√n). Meanwhile, the sum of the positive L(G_{n}) eigenvalues is approximately/exactly (depending upon whether n is even or odd) equal to $\left\{1+\mathrm{SIN}\left[\frac{2\mathrm{k}-1}{2(\mathrm{n}-1)}\pi\right]\Big/\mathrm{SIN}\left[\frac{\pi}{2(\mathrm{n}-1)}\right]\right\}/2$, implying a sharper upper bound than k for the sum of its first k positive eigenvalues, and n/3—which can be < k—for the sum of all of its positive eigenvalues. This outcome suggests the following more general proposition:

**Conjecture 1.**

_Let G_{n} = (V, E) be a simple connected undirected planar graph with n vertices V = {v_{1}, … , v_{n}}, and m ≤ n(n − 1)/2 edges E = {e_{ij} linking vertices v_{i} and v_{j}: i = 1, 2, … , n and j = 1, 2, … , n; i ≠ j}. For n ≥ 100 nodes and no self-loops, k is an upper bound for the sum of the first k positive eigenvalues of its row-standardized adjacency matrix **W**._

**Rationale.**

λ_{1}(**W**) = 1, and all other positive eigenvalues in this case are such that 0 < λ_{j}(**W**) < 1, j = 2, 3, … , k. Taliceo and Griffith [25] synthesize simulation experiment calculations that suggest a minimum value of around 100 for n, the part of this conjecture requiring proof. Tait and Tobin [23] underline the potential for G_{n} to display anomalous qualities when n is small, noting that the maximum connectivity case requires n to exceed 8 to be true. Furthermore, in their extensive investigation of planar graphs housing embedded K_{4} subgraphs, Taliceo and Griffith [25] reveal that 3n/4 negative eigenvalues occur, at least asymptotically with increasing n, for certain supra-structure wheel and line graphs organizing a substructure of only K_{4} subgraphs, which is a peculiarity vis-à-vis empirical surface partitionings. Their empirical inventory, together with the observational roster appearing in Table 1 (also see Appendix A), implies that most irregular surface partitionings have a relatively small percentage of their edges arranged in the precise formation of these subgraphs.

This K_{4} situation reflects a need to adequately differentiate theoretical and practical consequences.

The λ_{1}(**C**) upper bound result proved and reported by Wu and Liu [17] can extend this conjecture to binary 0-1 graph adjacency matrices, with an improved bound of k $\times \underset{1\le \mathrm{i}\le \mathrm{n}}{\mathrm{MAX}}\{\left(1+\sqrt{4{\mathrm{n}}_{\mathrm{j}|\mathrm{i}}-4{\mathrm{n}}_{\mathrm{i}}+1}\right)/2\}$ < (√k + 1)n/2. This value outperforms even the extreme negative eigenvalue case of k = n/4 for all positive eigenvalues: for example, using the true n_{+} value, (7.59)(839)/2 (≈3184.05) is closer than (√393 + 1)(839)/2 (≈8735.76) to 656.86, the ensuing empirical England illustration numbers.

Whereas the extreme λ_{n}(**W**) value of −1 implies the foregoing loose upper bound of n/2, as λ_{n}(**W**) goes toward its smallest absolute value of roughly −1/2 (e.g., [27]), property #10 implies that this loose upper bound goes to n/4. One implication here is that λ_{n}(**W**) can either directly or indirectly contribute to establishing sharper upper bounds. Taliceo and Griffith [25] reveal that values greater than the extreme case of −1 move the loose upper bound toward 2n/5. This hypothesis merits future research attention. Therefore, as the discussion in this section reflects, a prominent theme in the existing literature concerns various aspects of positive eigenvalues, relegating negative eigenvalues to a secondary topic, while nearly ignoring zero eigenvalues altogether.

## 3. Specimen Empirical Surface Partitionings and Their Dual Graphs

For these specimen surface partitionings, λ_{1}(**C**) exhibits a marked decrease—by an average of over 50%—from the maximum n_{i} values delivered by the Perron-Frobenius theorem.

**Lemma 1.**

_If a graph G_{n} contains two vertices with identical sets of neighbors (i.e., a twosome of duplicate adjacency matrix columns), then its binary 0-1 adjacency matrix **C** has a zero eigenvalue._

**Proof.**

Subtracting the second column from the first column of det(**C** − λ**I**), where det denotes the matrix determinant, and then adding the first column to the new second column, yields a determinant whose first column has the common factor λ; hence, λ = 0 satisfies the characteristic equation. □

**Corollary 1.**

_If a graph G_{n} contains K disjoint twosomes of the Lemma 1 type, then its binary 0-1 adjacency matrix **C** has at least K zero eigenvalues._

**Proof.**

Partition matrix **C** such that the K twosomes border the core submatrix **C**_{n−2K,n−2K}. Recursively applying the Lemma 1 proof technique to the K twosomes yields K zero eigenvalues. □

**Lemma 2.**

_If K nodes of a graph G_{n} form a star subgraph, each linking only to a common hub node, then its binary 0-1 adjacency matrix **C** has at least K − 1 zero eigenvalues._

**Proof.**

Partition matrix **C** such that the K star nodes border the core submatrix **C**_{n−K,n−K}. Then, recursively applying the Lemma 1 proof technique to these K duplicate columns, say C_{1} through C_{K−1} each paired with C_{K}, is equivalent to applying it to a star graph for which one of its nodes is contained in the core adjacency matrix, **C**_{n−K,n−K}, rendering at least K − 1 zero eigenvalues (see [30], p. 266). □

**Corollary 2.**

_If a K-node star subgraph of the Lemma 2 type attaches to a single node of an otherwise isolated P-by-P regular square tessellation graph with rook adjacency, then its binary 0-1 adjacency matrix **C** has at least P + K − 2 zero eigenvalues._

**Proof.**

The core adjacency matrix **C**_{n−K,n−K} = **C**_{P²,P²} for a P-by-P regular square tessellation, in isolation, has P zero eigenvalues for its rook adjacency definition; its eigenvalues are given by 2{COS[πj/(P + 1)] + COS[πk/(P + 1)]}, j = 1, 2, … , P and k = 1, 2, … , P [31], with the P cases of k = (P + 1) − j yielding zero values. One of these matches occurs for each integer j, yielding a total of P zero eigenvalues. But the single arbitrary node to which the star graph attaches has its column/row modified by its additional column and row entries, which render the matrix determinant factor for one of these P cases non-zero; together with the K − 1 zero eigenvalues from Lemma 2, at least (P − 1) + (K − 1) = P + K − 2 zero eigenvalues remain. □

**Remark.**

**Corollary 3.**

_{h}nodes forms a separate, distinct star subgraph, then this graph’s binary 0-1 adjacency matrix

**C**has at least $\sum _{\mathrm{h}=1}^{\mathrm{H}}{\mathrm{K}}_{\mathrm{h}}$ – H zero eigenvalues.

**Proof.**

_{h}nodes yields $\sum _{\mathrm{h}=1}^{\mathrm{H}}{\mathrm{K}}_{\mathrm{h}}$ – H zero eigenvalues. □
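These star-subgraph results lend themselves to quick numerical spot checks; a minimal sketch (the broom-shaped test graph and the P = 5 lattice are illustrative choices of our own, not the paper's examples):

```python
import numpy as np

def adjacency(n, edges):
    """Dense binary 0-1 adjacency matrix from an undirected edge list."""
    C = np.zeros((n, n))
    for i, j in edges:
        C[i, j] = C[j, i] = 1.0
    return C

def nullity(C, tol=1e-8):
    """Number of (numerically) zero eigenvalues of symmetric C."""
    return int(np.sum(np.abs(np.linalg.eigvalsh(C)) < tol))

# Lemma 2 check: a path 0-1-2 whose end node 2 is the hub of a K = 4 star (leaves 3-6).
K = 4
broom = adjacency(3 + K, [(0, 1), (1, 2)] + [(2, leaf) for leaf in range(3, 3 + K)])
print(nullity(broom) >= K - 1)                   # True: at least K - 1 zero eigenvalues

# Corollary 2 ingredient: a P-by-P rook-adjacency lattice alone has exactly P zeros.
P = 5
idx = lambda r, c: r * P + c
lattice = [(idx(r, c), idx(r, c + 1)) for r in range(P) for c in range(P - 1)] + \
          [(idx(r, c), idx(r + 1, c)) for r in range(P - 1) for c in range(P)]
print(nullity(adjacency(P * P, lattice)) == P)   # True
```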

**Remark.**

- Table 2 entries also raise questions about the claim by Feng et al. [18] that adding links strictly increases the magnitude of the matrix **C** principal eigenvalue (e.g., the US HSA specimen graph), which is the reason this table’s entries show so many decimal places; rather, this spectral radius appears to be merely monotonically non-decreasing. This assertion may well apply to the extreme negative eigenvalues, too: they also appear to be monotonically non-decreasing in absolute value.

All of these specimen graphs are connected [i.e., λ_{2}(**W**) ≠ 1], as well as contain the smallest possible n_{i}s (most often just one).

## 4. Identifying Linearly Dependent Column/Row Subsets in Adjacency Matrices

#### 4.1. Competing Algorithms

The sweep operator sweeps a column of matrix **X**^{T}**X** in or out, where sweeping is similar to executing a Gauss-Jordan elimination. This sweep procedure involves the following sequence of matrix row adjustments (the basic row operations, pivoting exclusively on diagonal matrix elements, are the multiplication of a row by a constant, and the addition of a multiple of one row to another) based upon diagonal element h in matrix **X**^{T}**X** = <a_{jk}>, whose nonzero value is a_{hh}:

- j = k = h: matrix entry **X**^{T}**X**[h, h] = −1/a_{hh},
- j = h & k ≠ h: matrix entry **X**^{T}**X**[h, k] = a_{hk}/a_{hh}, and
- j ≠ h & k ≠ h: matrix entry **X**^{T}**X**[j, k] = a_{jk} − a_{jh}a_{kh}/a_{hh}.

**X**^{T}**X** symmetry requires calculations for only its upper triangle, reaping execution time savings. It also achieves in-place mapping with minimal storage. Meanwhile, sweeping in and out refers to the reversibility of this operation: the preceding adjustments constitute “sweeping in,” and they can be undone by sweeping a second time, constituting “sweeping out.” This sweep operator is quite simple, making it an extremely useful tool for computational statisticians.
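As a concrete sketch, the symmetric sweep can be written in a few lines (assuming the sign convention that replaces the pivot by −1/a_{hh}; conventions vary across texts) and applied to this paper's motivating task of flagging linearly dependent adjacency matrix columns via near-zero pivots:

```python
import numpy as np

def sweep(A, h):
    """Sweep symmetric matrix A in on pivot (h, h); assumes a nonzero pivot.
    Convention: a_hh -> -1/a_hh, a_hk -> a_hk/a_hh, a_jk -> a_jk - a_jh*a_hk/a_hh."""
    A = A.copy()
    d = A[h, h]
    idx = np.arange(A.shape[0]) != h
    A[np.ix_(idx, idx)] -= np.outer(A[idx, h], A[h, idx]) / d   # rank-one update first
    A[h, idx] /= d
    A[idx, h] /= d
    A[h, h] = -1.0 / d
    return A

def dependent_columns(C, tol=1e-9):
    """Sequentially sweep C'C; a near-zero pivot flags a column that is a
    linear combination of the previously swept columns."""
    A = (C.T @ C).astype(float)
    flagged = []
    for h in range(A.shape[0]):
        if abs(A[h, h]) < tol:
            flagged.append(h)          # redundant column: skip its sweep
        else:
            A = sweep(A, h)
    return flagged

# Star graph on 4 nodes (hub 0): the three identical leaf columns are redundant.
C = np.zeros((4, 4)); C[0, 1:] = 1; C[1:, 0] = 1
print(dependent_columns(C))            # [2, 3]
```

Sweeping the same pivot twice undoes the operation up to signs under this convention, which is the in/out reversibility the text describes.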

An alternative is a QR decomposition of **X**^{T}**X** = **QR**, where **Q** is an orthonormal matrix, and **R** is an upper triangular matrix. The appeal of such an orthogonal matrix is that its transpose and inverse are equal. Thus, the linear regression system of equations **X**^{T}**Xb** = **X**^{T}**Y** reduces to a triangular system **Rb** = **Q**^{T}(**X**^{T}**Y**), which is much easier to solve. Implementation of this QR algorithm can be carried out with column pivoting, which bolsters its ability to solve rank-deficient systems of equations while tending to provide better numerical accuracy. However, by doing so, it solves the different system of equations **X**^{T}**XP** = **QR**, or **X**^{T}**X** = **QRP**^{T}, where **P** is a permutation matrix linked to the largest remaining column (i.e., column pivoting) at the beginning of each new step. The selection of matrix **P** usually is such that the diagonal elements of matrix **R** are non-increasing. This system-of-equations switch potentially adds further complexity to handling this algorithm. Nevertheless, column pivoting is useful when eigenvalues of a matrix actually approach, or are expected to approach, zero, the topic of this paper. Consequently, the relative simplicity of the sweep algorithm frequently makes it preferable to a QR algorithm.
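For comparison, a from-scratch Householder QR with column pivoting (a didactic sketch; production code would call a library routine such as LAPACK's pivoted QR) shows how trailing near-zero diagonal entries of **R** expose rank deficiency:

```python
import numpy as np

def qr_column_pivot(A):
    """Householder QR with column pivoting, A P = Q R; returns |diag(R)| and the
    column permutation. Tiny trailing diagonal entries reveal rank deficiency."""
    A = A.astype(float).copy()
    m, n = A.shape
    perm = np.arange(n)
    for k in range(min(m, n)):
        # pivot: move the remaining column with the largest norm into position k
        p = k + int(np.argmax(np.sum(A[k:, k:] ** 2, axis=0)))
        A[:, [k, p]] = A[:, [p, k]]
        perm[[k, p]] = perm[[p, k]]
        # Householder reflector zeroing A[k+1:, k]
        x = A[k:, k].copy()
        alpha = -np.linalg.norm(x) if x[0] >= 0 else np.linalg.norm(x)
        x[0] -= alpha
        vnorm = np.linalg.norm(x)
        if vnorm > 1e-14:
            v = x / vnorm
            A[k:, k:] -= 2.0 * np.outer(v, v @ A[k:, k:])
    return np.abs(np.diag(A)), perm

# The 4-node star graph's adjacency matrix has rank 2 (eigenvalues ±sqrt(3), 0, 0).
C = np.zeros((4, 4)); C[0, 1:] = 1; C[1:, 0] = 1
d, perm = qr_column_pivot(C)
print(int(np.sum(d > 1e-8)))   # 2: the numerical rank
```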

#### 4.2. SAS Procedures: REG and TRANSREG

PROC REG [35] sequentially sweeps the columns of matrix **X**^{T}**X**, according to the order of the covariates appearing in its SAS input statement. It always begins by sweeping the first column of matrix **X**, followed by the next column in this matrix if that column’s pivot is not less than a near-zero value whose default SAS threshold magnitude is 1.0 × 10^{−9} for most machines (i.e., if that column is not a linear function of the preceding column), then continuing sequentially to each of the next columns whose respective pivots are not less than this threshold amount, until it passes through all of the columns of matrix **X**. This method is accurate for most undirected connected planar graph adjacency matrices since they are reasonably scaled and not too collinear. Given this setting, this SAS procedure can uncover linearly dependent subsets of adjacency matrix columns by specifying its first column, C_{1}, as a dependent variable, and its remaining (n − 1) columns, C_{2}–C_{n}, as covariates. SAS output from this artificial specification includes an enumeration of the existing linearly dependent subsets of columns. A second regression that is stepwise in its nature and execution can check whether or not C_{1} itself is part of a linear combination subset. Simplicity dissipates here when n becomes too large, with prevailing numerical precision resulting in some linear combinations embracing numerous superfluous columns with computed near-zero regression coefficients (e.g., 1.0 × 10^{−11}). This rounding error corruption emerges in PROC REG [35] between n = 2100 (e.g., the Chicago empirical example) and n = 3408 (e.g., the US HSA empirical example).
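The same artifice (one column as the dependent variable, a perfect fit as the signal) works with any least-squares routine; a hypothetical numpy sketch:

```python
import numpy as np

def is_linear_combination(C, j, tol=1e-9):
    """Regress column j of C on all remaining columns; a zero residual norm
    (RMSE = 0, RSQ = 1 in SAS terms) means column j belongs to a linearly
    dependent subset."""
    y = C[:, j]
    X = np.delete(C, j, axis=1)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return bool(np.linalg.norm(y - X @ coef) < tol)

# Star graph: the duplicated leaf columns are mutually dependent,
# while the hub column is not a combination of the leaf columns.
C = np.zeros((4, 4)); C[0, 1:] = 1; C[1:, 0] = 1
print(is_linear_combination(C, 0))   # False: the hub column is independent
print(is_linear_combination(C, 1))   # True: leaf columns duplicate one another
```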

#### 4.3. Disclosing Subset Linear Combinations for the Specimen Empirical Surface Partitionings

Identifying 2-tuples (i.e., duplicate columns) is straightforward for a planar graph G_{n}. Since its sparse binary 0-1 adjacency matrix may be stored as a list of no more than 6(n − 2) pairs of edges/links, rather than either a full n^{2} or no-self-loops n(n − 1) set of pairings, a fast comparison of the position of ones in two columns is possible. These revelations, which primarily are the ones Griffith and Luhanga [12] report, can serve as a check for generated regression disclosures. Sylvester’s [43] matrix algebra inertia theorem transfers these linear combinations from matrix **C** to matrix **W**.
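One way to realize this fast comparison is to hash each vertex's neighbor set built from the sparse edge list (an illustrative sketch; Griffith and Luhanga [12] describe the implementation this stands in for):

```python
from collections import defaultdict

def two_tuples(n, edges):
    """Group vertices by identical neighbor sets using a sparse edge list;
    each group of g duplicate columns contributes g - 1 zero eigenvalues."""
    nbrs = defaultdict(set)
    for i, j in edges:
        nbrs[i].add(j)
        nbrs[j].add(i)
    groups = defaultdict(list)
    for v in range(n):
        groups[frozenset(nbrs[v])].append(v)
    dup = [g for g in groups.values() if len(g) > 1]
    zeros = sum(len(g) - 1 for g in dup)
    return dup, zeros

# Star graph on 4 nodes: the three leaves share the neighbor set {0}.
dup, zeros = two_tuples(4, [(0, 1), (0, 2), (0, 3)])
print(dup, zeros)   # [[1, 2, 3]] 2
```

Hashing makes the duplicate search effectively linear in the edge count, versus the n(n − 1) pairwise column comparisons.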

This screening uncovers, for example, the 3-tuple (i.e., {C_{1594}, C_{1668}, C_{2451}}) and the two 4-tuple (i.e., {C_{3263}, C_{3264}, C_{3265}, C_{3270}} and {C_{4842}, C_{4847}, C_{4848}, C_{4849}}) subsets.

Another output excerpt lists the linearly dependent columns C_{55}, C_{97}, and C_{141}. The term “Identity” appears in this computer hardcopy since PROC TRANSREG [35] enables and implements variable transformations, with this particular formulation option instructing this procedure to retain the initial untransformed variables themselves.

The regression approach allocates column C_{1460} in this example to only one subset linear combination, whereas PCA introduces some confusion by allocating it to the various subsets to which it also could belong. In other words, the linear regression approach employing a sweep operator furnishes a more parsimonious solution here. One important aspect of this exemplification is that PCA muddling still materializes in the presence of a relatively large n (i.e., 3408) coupled with a quite small number of redundant columns (i.e., eight).

#### 4.4. Discussion: Selected Sweep Operator Properties

## 5. Approximating the Eigenvalues of Matrix W

Spatial statistical/econometric analyses require a Jacobian term (i.e., a normalizing constant) based upon the matrix **W** eigenvalues for a configuration of polygons forming a two-dimensional surface partitioning. Such eigenvalue calculations are possible for quite large n, in the lower 10,000s, but become impossible beyond the frontiers of computing power, a constantly expanding resource. This normalizing constant is the existing-work adaptation target for the matrix inertia refinement this paper disseminates.

The star graph, K_{1,n−1}, perceptively illustrates the problem here in its extreme: its eigenvalues are 1, n − 2 zeros, and −1 for matrix **W**, whereas they are $\pm \sqrt{{\mathbf{1}}^{\mathrm{T}}\mathbf{C}\mathbf{1}/2}$ and n − 2 zeros for matrix **C**. In this kind of situation, Bindel and Dong [15], for example, show how eigenvalue frequency distributions can become dominated by a spike at zero (also see Figure 5b,d). A null eigenvalue fails to add any amount to the sum of the n eigenvalues, regardless of the value of n for the star graph G_{n}. For all n > 1, the variance of these eigenvalues is **1**^{T}**D**^{−1}**C D**^{−1}**1**/n = 2/n. More realistic graphs show that although λ_{1}(**W**) = 1 remains unchanged, adding redundant links that introduce zero eigenvalues can alter λ_{n}(**C**) and λ_{n}(**W**) as well as some or all of the intermediate eigenvalues (see Table 2). Furthermore, the matrix **W** eigenvalues have a more limited range (Table 2) that causes their density to increase faster than it does for their binary 0-1 parent matrix **C** eigenvalues.
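These star graph spectra are easy to confirm numerically (a small check, with n = 6 chosen purely for illustration):

```python
import numpy as np

n = 6                                   # star graph: hub 0 linked to nodes 1..5
C = np.zeros((n, n)); C[0, 1:] = 1; C[1:, 0] = 1
W = C / C.sum(axis=1, keepdims=True)    # row-standardized version

lam_C = np.sort(np.linalg.eigvalsh(C))
lam_W = np.sort(np.linalg.eigvals(W).real)

ones = np.ones(n)
print(np.isclose(lam_C[-1], np.sqrt(ones @ C @ ones / 2)))   # lambda_1(C) = +sqrt(1'C1/2): True
print(np.isclose(lam_C[0], -np.sqrt(ones @ C @ ones / 2)))   # lambda_n(C) is its negative: True
print(np.isclose(lam_W[-1], 1) and np.isclose(lam_W[0], -1)
      and int(np.sum(np.abs(lam_W) < 1e-8)) == n - 2)        # W spectrum {1, 0 x (n-2), -1}: True
```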

#### 5.1. Preliminary Eigenvalue Approximation Steps

A first step computes the extreme eigenvalues of matrices **C** and **W** [16,50]. A formulated algorithm employing solely the sparse version of an adjacency matrix (i.e., a sequential listing of only the row-column cells containing a one) quickly produces these values, with extreme accuracy for both λ_{1}(**C**) and λ_{n}(**W**) (Table 3); the more relevant of these two values for spatial statistics/econometrics is λ_{n}(**W**). This algorithm builds its λ_{n}(**C**) approximation with the normalized principal eigenvector, **E**_{1}, estimated during the iterative calculation of λ_{1}(**C**). Next, this algorithm builds its λ_{n}(**W**) estimate with the normalized eigenvector **E**_{n} approximated during the iterative calculation of λ_{n}(**C**). Given that λ_{1}(**W**) ≡ 1 is known theoretically, the appealing outcome is that λ_{n}(**W**) is knowable for massively large matrices.
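A dense toy version conveys the logic (a sketch only: the cited algorithm runs on the sparse edge list and reuses estimated eigenvectors, whereas this version uses simple spectral shifts, and the test graph is an illustrative choice):

```python
import numpy as np

def power_iteration(M, iters=5000, seed=0):
    """Dominant-eigenpair estimate for a matrix with real eigenvalues."""
    x = np.random.default_rng(seed).standard_normal(M.shape[0])
    for _ in range(iters):
        y = M @ x
        x = y / np.linalg.norm(y)
    return x @ M @ x, x           # Rayleigh quotient at the converged vector

def extreme_eigenvalues(C):
    """lambda_1(C) by power iteration; lambda_n(C) via the shift lambda_1*I - C
    (all shifted eigenvalues are non-negative, so the largest is lambda_1 - lambda_n);
    lambda_n(W) via I - W, exploiting the known lambda_1(W) = 1."""
    n = C.shape[0]
    lam1, _ = power_iteration(C)
    mu, _ = power_iteration(lam1 * np.eye(n) - C)
    W = C / C.sum(axis=1, keepdims=True)       # row-standardize
    nu, _ = power_iteration(np.eye(n) - W)
    return lam1, lam1 - mu, 1.0 - nu

# Triangle 0-1-2 with pendant node 3 (non-bipartite, so lambda_1 strictly dominates).
C = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 0), (2, 3)]:
    C[i, j] = C[j, i] = 1.0
lam1, lamnC, lamnW = extreme_eigenvalues(C)
```

Only matrix-vector products over the non-zero cells are needed, which is why the scheme scales to the massively large matrices discussed above.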

A caveat is that n_{0} is equated with the number of 2-tuples for massively large square matrices. In addition, this approach fails to be effective when extended to more general real matrices since their non-sparseness requires a replacement of each of the n(n − 1) simple comparisons with a bivariate regression; columns/rows can be proportional rather than just equal. This daunting computational demand argues for keeping the original PROC TRANSREG [35] operationalization with its n(n − 1) sweep operations.

Additional useful quantities (e.g., the eigenvalue variances **1**^{T}**C1**/n and **1**^{T}**D**^{−1}**C D**^{−1}**1**/n) are either known or easily computable, and hence available. A favored specification for this task exploits eigenvalue ordering by calibrating expressions such as

[(n + 1 − r_{i})/n]^{γ} and −[(n + 1 − r_{i})/n]^{θ}, (4)

for the positive and negative eigenvalues, respectively, where r_{i} denotes the descending ordering rank of the i^{th} eigenvalue after setting aside the set of zero eigenvalues [which subsequently are merged with the approximations generated by Equation (4)], and the exponents γ and θ match the approximated eigenvalue mean and the square root of its sum of squares as closely as possible (guided by the method of moments estimation strategy, and using a mean squared error criterion); all manufactured values of expression (4) belong to the interval [0, 1] in absolute value, as do the absolute values of their corresponding actual eigenvalues, with γ > 1 and θ > 1 indicative of variance inflation needing removal, and γ < 1 and θ < 1 indicative of deflated variance needing to be restored. The differences between γ and θ appearing in Table 4 confirm an asymmetric distribution of eigenvalues, whereas the positive-eigenvalue bivariate regressions corroborate a well-aligned linear trend with their adjusted rankings. This linear trend tendency is capable of facilitating the drafting of an improved upper bound for the sum of k positive eigenvalues.

#### 5.2. An Instructive Eigenvalue Approximation Assessment

This assessment assumes that the 2-tuple tally equals n_{0} (the calculable count for massively large G_{n}), that the nearest integer to 45% of the non-zero eigenvalue count accurately estimates n_{+} (roughly the mid-point of large-landscape simulation findings reported in [25]), and that Equation (4) approximates a total set of n eigenvalues. Figure 6 portrays the Jacobian plots across the auto-normal model spatial autocorrelation parameter values contained in its feasible parameter space interval, namely, 1/λ_{n}(**W**) < ρ < 1. These graphical findings imply that the enhanced (i.e., refined) but relatively simple eigenvalue approximations proposed in this paper are sufficiently accurate to support sound spatial autoregressive analyses. More comprehensive appraisals of this proposition merit future research attention.
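The Jacobian term these approximations feed is just a sum of logarithms over the matrix **W** spectrum; a minimal sketch:

```python
import numpy as np

def log_jacobian(eigenvalues, rho):
    """Auto-normal Jacobian term ln det(I - rho*W) = sum_i ln(1 - rho*lambda_i);
    zero eigenvalues contribute ln(1) = 0, so only the n+ + n- non-zero
    eigenvalues matter."""
    lam = np.asarray(eigenvalues, dtype=float)
    return float(np.sum(np.log(1.0 - rho * lam)))

# Check against a direct determinant on a small illustrative graph.
C = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 0), (2, 3)]:
    C[i, j] = C[j, i] = 1.0
W = C / C.sum(axis=1, keepdims=True)
lam = np.linalg.eigvals(W).real
print(np.isclose(log_jacobian(lam, 0.5),
                 np.linalg.slogdet(np.eye(4) - 0.5 * W)[1]))   # True
```

Because the zero eigenvalues drop out, matrix inertia (n_{+}, n_{0}, n_{−}) directly determines how many terms the approximation actually must supply.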

## 6. Discussion

One extension treats a non-square matrix, here containing a column c_{6} that is a trivial zero eigenvalue source, and hence ignored in terms of constructing linear combinations with it. The non-square matrix requires analysis of both its original columns and its original rows in their matrix-transposed styles. PROC TRANSREG [35] identifies the linearly dependent subsets of columns and rows appearing in Figure 7, with a count of five. Column c_{1} and transposed column (i.e., row) r_{1} serve as the dependent variables in their respective linear regression analyses. Accordingly, since PROC TRANSREG [35] only uncovers linearly dependent subsets in a collection of covariates, both c_{1} and r_{1} also have to be analyzed in a stepwise PROC REG [35], which exposes the {r_{1}, r_{2}, r_{5}} linearly dependent subset (e.g., the root mean squared error, _RMSE_, is zero, and the linear regression multiple correlation, _RSQ_, is 1).

Consider the coterminous US census tracts graph G_{n}, with n = 72,538, **1**^{T}**C1** = 402,830, MAX{n_{i}} = 48, **1**^{T}**D**^{−1}**C D**^{−1}**1** = 12,985.54, λ_{1}(**C**) = 8.63582, and λ_{n}(**W**) = −0.89732. On a desktop computer—Intel(R) Xeon(R) CPU E5640 @ 2.67 GHz and 2.66 GHz (2 processors), 24.0 GB RAM, 64-bit OS, x64-based processor, Windows 10 Enterprise—execution time for calculating these last two quantities was less than 40 s. Crafting this geographic polygon configuration graph for the coterminous US census tracts irregular surface partitioning involved modifying an ESRI ArcMAP .shp file with such manual adjustments as linking islands to the mainland through ferry routes. The simple 2-tuple matching analysis retrieves 32 zero eigenvalue sources. Consequently, the basic input for approximating the matrix **W** eigenvalues, and hence the needed Jacobian term for spatial statistical/econometric autoregressive analysis, is not only feasible, but also available here. This exemplification once again illustrates how the matrix inertia refinement promulgated in this paper is adaptable to existing work.

## 7. Conclusions and Implications

For many contemporary applications, the G_{n} adjacency matrix size often is in the 10,000,000s [14]. This magnitude echoes that for the aforementioned KONECT project social network entries in excess of one million nodes (reflecting a practical upper limit size), which number 18 and have a median of 30 million nodes, and whose Jacobian term requires the irregular tessellation category of eigenvalue approximations outlined in this paper. Another imperative application arena is the present-day expansion of data mining and machine learning analyses to georeferenced, social media, and other substantially large datasets. The solution advocated for in this paper streamlines earlier results by exploiting extremely large adjacency matrix inertia.

**Conjecture 2.**

_Let G_{n} = (V, E) be a simple connected undirected planar graph with n vertices V = {v_{1}, … , v_{n}}, and m ≤ n(n − 1)/2 edges E = {e_{ij} linking vertices v_{i} and v_{j}: i = 1, 2, … , n and j = 1, 2, … , n; i ≠ j}. If G_{n} has no K_{4} subgraphs, then the maximum value of n_{−} is 2n/3._

**Conjecture 3.**

_Let G_{n} = (V, E) be a simple connected undirected planar graph with n vertices V = {v_{1}, … , v_{n}}, and m ≤ n(n − 1)/2 edges E = {e_{ij} linking vertices v_{i} and v_{j}: i = 1, 2, … , n and j = 1, 2, … , n; i ≠ j}. The least upper bound for the sum of the k largest matrix **W** positive eigenvalues for a self-loopless G_{n}’s adjacency matrix is $\sum _{\mathrm{i}=1}^{\mathrm{k}}{[(\mathrm{n}+1-{\mathrm{r}}_{\mathrm{i}})/\mathrm{n}]}^{\gamma}$ + ε for some suitably small ε._

- A more general implication is that geographical analysis matrices continue to be a fertile research subject. Concerted quantitative geography efforts in this area began in the 1960s, and have become progressively more sophisticated with the passing of time.

Worthwhile future research topics include endeavors to: improve the approximation of λ_{n}(**C**), enhance the accurate determination of n_{+} (and hence n_{−}), expand the quick calculation of p-tuple linearly dependent subsets of adjacency matrix columns/rows to p greater than 2, accurately quantify the negative eigenvalue part of Equation (4) with a nonlinear regression equation specification, and refine and then convert the three stated conjectures into theorems with proofs. Future research also should explore furthering the effectiveness of the proposed sweep-algorithm-based solution utilized here, especially for much larger n, by reformulating it to be optimal for the nullity problem addressed here, rather than for its current generic multiple linear regression implementation, allowing a meaningful reduction in its customized computational complexity. One way to achieve this particular goal might be to address more accurately the underlying structure of empirical planar graphs, perhaps utilizing special properties of the Laplacian version of their adjacency matrices to attain faster execution times.

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Conflicts of Interest

## Appendix A. Selected Supplemental Specimen Geographic Landscapes

Geographic Landscape | n | Nullity | K_{4} Count | λ_{1}(**C**) | λ_{n}(**C**) | **1**^{T}**C1** | λ_{2}(**W**) | λ_{n}(**W**)
---|---|---|---|---|---|---|---|---
Columbus, OH 1980 ^{‡} | 49 | 1 | 0 | 5.908 | −3.097 | 232 | 0.963 | −0.651
Winnipeg 1971 ^{§} | 101 | 1 | 1 | 5.738 | −3.279 | 498 | 0.968 | −0.727
Stockholm, Sweden | 119 | 1 | 1 | 6.016 | −3.420 | 582 | 0.987 | −0.893
Henan, China | 130 | 1 | 5 | 6.111 | −3.029 | 668 | 0.980 | −0.894
Brazilian Amazon ^{†} | 323 | 1 | 12 | 6.276 | −3.430 | 1608 | 0.996 | −0.911
Minnesota tree stands | 513 | 5 | 29 | 6.118 | −4.127 | 2462 | 0.994 | −0.763
Germany kreise 1930 ^{⁕} | 743 | 28 | 7 | 6.271 | −3.412 | 3706 | 1.000 ^{⁑} | −0.793
Sheffield, UK | 930 | 1 | 33 | 6.322 | −3.940 | 4708 | 0.999 | −0.833
Chicago 1990 ^{‡} | 1754 | 1 | 23 | 6.310 | −3.605 | 8820 | 0.998 | −0.861
US counties ^{‡} | 3111 | 3 | 18 | 6.264 | −3.426 | 17,508 | 0.999 | −0.814

n denotes the number of G_{n} nodes; K_{4} denotes the well-known completely connected G_{4}; λ denotes eigenvalue; nullity denotes the number of zero eigenvalues; ^{§} Statistics Canada 1971 census tracts; ^{⁑} this surface partitioning comprises two separated sub-graphs of sizes 35 and 708; ^{†} Amazon basin municipalities; ^{‡} US Census Bureau 1980 or 1990 census tracts; ^{⁕} O’Loughlin, J., C. Flint and L. Anselin. 1994. The Geography of the Nazi Vote: Context, Confession and Class in the Reichstag Election of 1930, Annals, American Association of Geographers, 84 (3): 351–380; map on p. 361.

**Figure A1.** Partial SAS PROC TRANSREG output for Table A1 examples; matrix column (designated by C#, where # denotes column position) linear combinations for each of the matrix’s zero eigenvalues. The top panel contains illustrative SAS output. In the bottom panel, the left-hand table column contains particular redundant matrix columns, and the right-hand expression is each redundant column’s linear combination.

## Appendix B. A Controlled Experiment: A P-By-P Regular Square Tessellation Forming a Complete Rectangular Region, Rook Adjacency Definition, with a (P, P) Node Corner Landscape Central Node K-Star Subgraph

**Figure A2.** Partial SAS PROC REG output for the artificial example; matrix column (designated by COL#, where # denotes column position) linear combinations for each of the matrix’s zero eigenvalues. The left-hand table column contains particular redundant matrix columns, and the right-hand expression is each redundant column’s linear combination.

Random Selection: $\mathbf{P}\in$ [6, 50] | Random Selection: $\mathbf{K}\in$ [2, 500] | Number of Nodes | P + K − 2 | Number of Separate Linear Regression Sweep Algorithm Collinearities
---|---|---|---|---
16 | 390 | 646 | 404 | 404
15 | 408 | 633 | 421 | 421
34 | 495 | 1651 | 527 | 527
12 | 312 | 656 | 322 | 322
39 | 289 | 1810 | 326 | 326
23 | 375 | 904 | 396 | 396
46 | 84 | 2200 | 128 | 128
37 | 409 | 1778 | 444 | 444
39 | 165 | 1686 | 202 | 202
19 | 11 | 372 | 28 | 28

## Appendix C. Expanding the Graph Validation Dataset

**Figure A3.**Partial SAS PROC REG output for the simulated examples; matrix column (designated by COL#, where # denotes column position) linear combinations for each of the matrix’s zero eigenvalues. The left-hand table column contains particular redundant matrix columns, and the right-hand expression is each redundant column’s linear combination.

**Figure A4.** Partial SAS PROC TRANSREG output for five simulated examples; matrix column (designated by C#, where # denotes column position) linear combinations for each of the matrix’s zero eigenvalues. From left to right, the column contents are: n (i.e., number of vertices), size of linear combination yielding a zero eigenvalue, a particular redundant matrix column causing collinearity, and the expression for the redundant column’s linear combination.

## References

1. Fan, Y.; Wang, L. Bounds for the positive and negative inertia index of a graph. Linear Algebra Its Appl. 2017, 522, 15–27.
2. Li, S.; Sun, W. On the relation between the positive inertia index and negative inertia index of weighted graphs. Linear Algebra Its Appl. 2019, 563, 411–425.
3. Lancaster, P.; Tismenetsky, M. The Theory of Matrices; Academic Press: New York, NY, USA, 1985.
4. Wang, L. Nullity of a graph in terms of path cover number. Linear Multilinear Algebra 2021, 69, 1902–1908.
5. Cheng, B.; Liu, M.; Tam, B. On the nullity of a connected graph in terms of order and maximum degree. Linear Algebra Its Appl. 2022, 632, 193–232.
6. Hicks, I.; Brimkov, B.; Deaett, L.; Haas, R.; Mikesell, D.; Roberson, D.; Smith, L. Computational and theoretical challenges for computing the minimum rank of a graph. INFORMS J. Comput. 2022, 34, 2868–2872.
7. Alaeiyan, M.; Obayes, K.; Alaeiyan, M. Prediction nullity of graph using data mining. Results Nonlinear Anal. 2023, 6, 1–8.
8. Arumugam, S.; Bhat, K.A.; Gutman, I.; Karantha, M.; Poojary, R. Nullity of graphs—A survey and some new results. In Applied Linear Algebra, Probability and Statistics: A Volume in Honour of C.R. Rao and Arbind K. Lal; Bapat, R., Karantha, M., Kirkland, S., Neogy, S., Pati, S., Puntanen, S., Eds.; Springer Nature: Singapore, 2023; pp. 155–175.
9. Druinsky, A.; Carlebach, E.; Toledo, S. Wilkinson’s inertia-revealing factorization and its application to sparse matrices. Numer. Linear Algebra Appl. 2018, 25, e2130.
10. Fan, Y.; Wang, Y.; Bao, Y.; Wan, J.; Li, M.; Zhu, Z. Eigenvectors of Laplacian or signless Laplacian of hypergraphs associated with zero eigenvalue. Linear Algebra Its Appl. 2019, 579, 244–261.
11. Nakatsukasa, Y.; Noferini, V. Inertia laws and localization of real eigenvalues for generalized indefinite eigenvalue problems. Linear Algebra Its Appl. 2019, 578, 272–296.
12. Griffith, D.; Luhanga, U. Approximating the inertia of the adjacency matrix of a connected planar graph that is the dual of a geographic surface partitioning. Geogr. Anal. 2011, 43, 383–402.
13. Comellas, F.; Dalfó, C.; Fiol, M.; Mitjana, M. The spectra of Manhattan street networks. Linear Algebra Its Appl. 2008, 429, 1823–1839.
14. Griffith, D. Approximation of Gaussian spatial autoregressive models for massive regular square tessellation data. Int. J. Geogr. Inf. Sci. 2015, 29, 2143–2173.
15. Bindel, D.; Dong, K. Modified kernel polynomial method for estimating graph spectra. In Proceedings of the SIAM Workshop on Network Science, Snowbird, UT, USA, 15–16 May 2015. Available online: https://www.cs.cornell.edu/~bindel/papers/2015-siam-ns.pdf (accessed on 24 October 2023).
16. Griffith, D. Extreme eigenfunctions of adjacency matrices for planar graphs employed in spatial analyses. Linear Algebra Its Appl. 2004, 388, 201–219.
17. Wu, X.-Z.; Liu, J.-P. Sharp upper bounds for the adjacency and the signless Laplacian spectral radius of graphs. Appl. Math.—A J. Chin. Univ. 2019, 34, 100–112.
18. Feng, L.; Yu, G.; Zhang, X.-D. Spectral radius of graphs with given matching number. Linear Algebra Its Appl. 2007, 422, 133–138.
19. Milanese, A.; Sun, J.; Nishikawa, T. Approximating spectral impact of structural perturbations in large networks. Phys. Rev. E 2010, 81, 046112.
20. Meyer, C. Chapter 8: Perron–Frobenius theory of nonnegative matrices. In Matrix Analysis and Applied Linear Algebra; SIAM: Philadelphia, PA, USA, 2000; pp. 661–704.
21. Griffith, D. Eigenfunction properties and approximations of selected incidence matrices employed in spatial analyses. Linear Algebra Its Appl. 2000, 321, 95–112.
22. Mohar, B. On the sum of k largest eigenvalues of graphs and symmetric matrices. J. Comb. Theory Ser. B 2009, 99, 306–313.
23. Tait, M.; Tobin, J. Three conjectures in extremal spectral graph theory. J. Comb. Theory Ser. B 2017, 126, 137–163.
24. Elphick, C.; Wocjan, P. An inertial lower bound for the chromatic number of a graph. Electron. J. Comb. 2016, 24, 1–58.
25. Taliceo, N.; Griffith, D. The K_{4} graph and the inertia of the adjacency matrix for a connected planar graph. STUDIA KPZK PAN (Publ. Pol. Acad. Sci.) 2018, 183, 185–209.
26. Keller, M.; Trotter, W. Chapter 5: Graph theory. In Applied Combinatorics; LibreTexts: Davis, CA, USA, 2023. Available online: https://math.libretexts.org/Bookshelves/Combinatorics_and_Discrete_Mathematics/Applied_Combinatorics_(Keller_and_Trotter) (accessed on 24 October 2023).
27. Griffith, D.; Sone, A. Trade-offs associated with normalizing constant computational simplifications for estimating spatial statistical models. J. Stat. Comput. Simul. 1995, 51, 165–183.
28. Venkateshan, S.; Swaminathan, P. Chapter 2: Solution of linear equations. In Computational Methods in Engineering; Elsevier: Amsterdam, The Netherlands, 2013; pp. 19–103.
29. Venkateshan, S.; Swaminathan, P. Chapter 4: Solution of algebraic equations. In Computational Methods in Engineering; Elsevier: Amsterdam, The Netherlands, 2013; pp. 155–201.
30. Bollobás, B. Modern Graph Theory; Graduate Texts in Mathematics, Volume 184; Springer: Berlin/Heidelberg, Germany, 1998.
31. Ord, J. Estimation methods for models of spatial interaction. J. Am. Stat. Assoc. 1975, 70, 120–126.
32. Hawkins, D. On the investigation of alternative regressions by principal component analysis. J. R. Stat. Soc. Ser. C (Appl. Stat.) 1973, 22, 275–286.
33. Gupta, R. Chapter 6: Eigenvalues and eigenvectors. In Numerical Methods: Fundamentals and Applications; Cambridge University Press: Cambridge, UK, 2019; pp. 268–298.
34. Taboga, M. Algebraic and Geometric Multiplicity of Eigenvalues. Lectures on Matrix Algebra, 2021. Available online: https://www.statlect.com/matrix-algebra/algebraic-and-geometric-multiplicity-of-eigenvalues (accessed on 18 June 2022).
35. SAS Institute Inc. SAS/STAT® 15.1 User’s Guide; SAS Institute Inc.: Cary, NC, USA, 2018. Available online: https://documentation.sas.com/doc/en/pgmsascdc/9.4_3.4/statug/statug_reg_details36.htm (accessed on 24 October 2023).
36. Goodnight, J. A tutorial on the sweep operator. Am. Stat. 1979, 33, 149–158.
37. Lange, K. Numerical Analysis for Statisticians, 2nd ed.; Springer: New York, NY, USA, 2010.
38. Ahamed, M.; Biswa, A.; Phukon, M. A study on multicollinearity diagnostics and a few linear estimators. Adv. Appl. Stat. 2023, 89, 29–54.
39. Tsatsomeros, M. Principal pivot transforms: Properties and applications. Linear Algebra Its Appl. 2000, 307, 151–165.
40. Duersch, J.; Gu, M. Randomized QR with column pivoting. SIAM J. Sci. Comput. 2017, 39, C263–C291.
41. Martinsson, P.; Quintana-Ortí, G.; Heavner, N.; van de Geijn, R. Householder QR factorization with randomization for column pivoting (HQRRP). SIAM J. Sci. Comput. 2017, 39, C96–C115.
42. Li, H. Numerical Methods Using Java: For Data Science, Analysis, and Engineering; Apress: New York, NY, USA, 2022.
43. Sylvester, J. A demonstration of the theorem that every homogeneous quadratic polynomial is reducible by real orthogonal substitutions to the form of a sum of positive and negative squares. Philos. Mag. 4th Ser. 1852, 4, 138–142.
44. Dempster, A. Elements of Continuous Multivariate Analysis; Addison-Wesley: Reading, MA, USA, 1969.
45. Andrews, D.F. A robust method for multiple linear regression. Technometrics 1974, 16, 523–531.
46. Tucker, A. Principal pivot transforms of square matrices. SIAM Rev. 1963, 5, 305.
47. Duffin, R.; Hazony, D.; Morrison, N. Network synthesis through hybrid matrices. SIAM J. Appl. Math. 1966, 14, 390–413.
48. Stewart, M.; Stewart, G. On hyperbolic triangularization: Stability and pivoting. SIAM J. Matrix Anal. Appl. 1998, 19, 847–860.
49. Bivand, R.; Hauke, J.; Kossowski, T. Computing the Jacobian in Gaussian spatial autoregressive models: An illustrated comparison of available methods. Geogr. Anal. 2013, 45, 150–179.
50. Griffith, D.; Bivand, R.; Chun, Y. Implementing approximations to extreme eigenvalues and eigenvalues of irregular surface partitionings for use in SAR and CAR models. Procedia Environ. Sci. 2015, 26, 119–122.
51. Griffith, D. Generating random connected planar graphs. GeoInformatica 2018, 22, 767–782.

**Figure 1.** Matrix **C** twosomes: eigenvalue gamma and positive eigenvalue Weibull quantile plots (black) with superimposed 95% confidence intervals and trend lines (red). Top left (**a**,**b**): England. Top right (**c**,**d**): Chicago. Top middle left (**e**,**f**): Edmonton. Top middle right (**g**,**h**): North Carolina. Bottom middle left (**i**,**j**): US HSA. Bottom middle right (**k**,**l**): Texas. Bottom (**m**,**n**): Syracuse.

**Figure 2.** Partial SAS PROC TRANSREG output for the Table 2 Texas empirical example; matrix column (designated by COL#, where # denotes column position) linear combinations for each of the matrix’s zero eigenvalues. The left-hand table column contains particular redundant matrix columns, and the right-hand expression is each redundant column’s linear combination.

**Figure 3.** Partial SAS PROC TRANSREG output for the Table 2 Syracuse empirical example; matrix column (designated by COL#, where # denotes column position) linear combinations for each of the matrix’s zero eigenvalues. The left-hand table column contains particular redundant matrix columns, and the right-hand expression is each redundant column’s linear combination.

**Figure 4.** Partial SAS PROC TRANSREG (top) and PROC PRINCOMP (bottom) output for the Table 2 US HSA empirical example; matrix column (designated by COL#, where # denotes column position) linear combinations for each of the matrix’s zero eigenvalues. The left-hand table column contains particular redundant matrix columns, and the right-hand expression is each redundant column’s linear combination, based upon the last eight extracted components (a la [32]). Bold font denotes compound loadings; underlining denotes incorrect loadings.

**Figure 5.** Specimen empirical adjacency matrix-based **W** eigenvalues with a more conspicuous zero value presence. Top left (**a**): England descending rank ordering. Top right (**b**): England histogram with a superimposed gamma distribution (red curve; see Figure 1) and a highlighted zero value frequency (red bar). Bottom left (**c**): Syracuse descending rank ordering. Bottom right (**d**): Syracuse histogram with a superimposed gamma distribution (red curve; see Figure 1) and a highlighted zero value frequency (red bar).

**Figure 6.** The approximated eigenvalues Jacobian plot (denoted by smaller solid black circles) superimposed upon the true Jacobian plot (denoted by larger solid gray circles) for an auto-normal model specification. Left (**a**): the North Carolina case (γ = 1.1526 and θ = 1.6473). Right (**b**): the Syracuse case (γ = 1.8090 and θ = 1.6227).
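The “true Jacobian” in Figure 6 is the auto-normal normalizing-constant term ln|det(**I** − ρ**W**)| = Σ_i ln(1 − ρλ_i(**W**)), which the paper approximates by substituting gamma-quantile surrogates for the exact λ_i(**W**). A minimal numpy sketch of the exact identity, on a hypothetical 4-vertex path graph rather than one of the paper’s landscapes:

```python
import numpy as np

# Hypothetical 4-vertex path graph: binary adjacency C, row-standardized
# to W = D^{-1} C, where D is the diagonal matrix of row sums (degrees).
C = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W = C / C.sum(axis=1, keepdims=True)

rho = 0.5                                  # spatial autocorrelation parameter
lam = np.linalg.eigvals(W).real            # real: W is similar to a symmetric matrix
jac_from_eigs = np.log(1.0 - rho * lam).sum()
sign, jac_direct = np.linalg.slogdet(np.eye(4) - rho * W)

# The eigenvalue sum and the direct log-determinant coincide.
assert sign > 0 and np.isclose(jac_from_eigs, jac_direct)
```

The eigenvalue form is what makes approximation attractive in practice: once the λ_i(**W**) (or surrogates for them) are in hand, the Jacobian can be re-evaluated cheaply for every trial ρ during estimation, instead of recomputing an n-by-n determinant.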

**Table 1.** Selected specimen geographic landscape surface partitioning descriptors (also see Appendix A).

Geographic Landscape | n | K_{4} Count | UB | λ_{1}(C) | LB | λ_{n}(C) | 1^{T}C1 | λ_{2}(W) | λ_{n}(W)
---|---|---|---|---|---|---|---|---|---
England ^{†} | 839 | 9 | 7.59 | 6.07 | 4.97 | −4.19 | 3564 | 0.9976 | −0.93
Chicago ^{‡} | 2067 | 35 | 8.52 | 6.22 | 5.30 | −3.73 | 10,562 | 0.9987 | −0.86
Edmonton ^{§} | 2098 | 121 | 10.95 | 6.95 | 5.62 | −4.32 | 10,640 | 0.9985 | −0.77
North Carolina ^{‡} | 2195 | 52 | 8.52 | 6.28 | 5.77 | −3.76 | 12,206 | 0.9994 | −0.70
US HSA ^{⁑} | 3408 | 204 | 12.13 | 7.82 | 5.94 | −4.46 | 18,308 | 0.9996 | −0.96
Texas ^{‡} | 5265 | 185 | 12.43 | 7.43 | 5.66 | −4.52 | 28,176 | 0.9995 | −0.84
Syracuse ^{†} | 7249 | 322 | 12.30 | 7.38 | 5.40 | −5.02 | 35,100 | 0.9994 | −0.94

Note: n denotes the number of G_{n} nodes; K_{4} denotes the well-known completely connected G_{4}; λ denotes eigenvalue; UB/LB denote upper/lower bound; the UB, λ_{1}(C), LB, λ_{n}(C), and 1^{T}C1 columns pertain to matrix **C**, and the λ_{2}(W) and λ_{n}(W) columns to matrix **W**. ^{†} Appears in Griffith and Luhanga (2011). ^{‡} US Census Bureau 2000, 2010, or 2020 census tracts. ^{§} Statistics Canada 2011 census tracts. ^{⁑} US HSA denotes 2017 US hospital service areas.

**Table 2.** Selected descriptors of the chosen specimen geographic landscape surface partitionings containing zero λs.

Feature | Removed λ = 0 | England | Edmonton | US HSA | Texas | Syracuse
---|---|---|---|---|---|---
λ_{1}(C) | none | 6.071787 | 6.950707 | 7.820136 | 7.431868 | 7.375069
 | 2-tuples | 6.027092 | 6.905243 | 7.820136 | 7.413686 | 7.352621
 | p-tuples | 6.026312 | 6.886547 | 7.820136 | 7.406109 | 7.352365
λ_{n}(C) | none | −4.185268 | −4.324066 | −4.458946 | −4.523357 | −5.024567
 | 2-tuples | −3.729453 | −4.236484 | −4.353676 | −4.512665 | −5.018984
 | p-tuples | −3.729440 | −4.236477 | −4.353676 | −4.506244 | −5.017506
λ_{n}(W) | none | −0.925413 | −0.773818 | −0.956681 | −0.843875 | −0.935861
 | 2-tuples | −0.900969 | −0.731063 | −0.956681 | −0.843875 | −0.935861
 | p-tuples | −0.900969 | −0.728697 | −0.956681 | −0.843875 | −0.935861
1^{T}C1 | none | 3564 | 10,640 | 18,308 | 28,176 | 35,100
 | 2-tuples | 3400 | 10,584 | 18,294 | 28,142 | 34,932
 | p-tuples | 3372 | 10,536 | 18,290 | 28,126 | 34,878
1^{T}D^{−1}C D^{−1}1 | none | 177.14 | 397.38 | 620.01 | 977.65 | 1440.31
 | 2-tuples | 165.02 | 394.03 | 619.06 | 976.71 | 1434.92
 | p-tuples | 161.61 | 391.09 | 618.58 | 976.35 | 1431.56
p-tuple frequency | 2 | 79 | 23 | 6 | 7 | 48
 | 3 | 5 | 0 | 1 | 1 | 7 ^{†}
 | 4 | 3 | 4 | 1 | 2 | 9
 | 5 | 4 | 4 | 0 | 0 | 3
 | 6 | 1 | 1 | 0 | 0 | 1
 | 7 | 0 | 1 | 0 | 0 | 0
 | 8 | 0 | 0 | 0 | 0 | 2
 | 10 | 2 | 0 | 0 | 0 | 0
 | 11 | 1 | 0 | 0 | 0 | 0

^{†}one 2-tuple overlaps with, and hence may mask, two 3-tuples.

**Table 3.** Matrix **C** and **W** extreme eigenvalue approximations for the chosen specimen geographic landscape surface partitionings.

Extreme λs | England | Chicago | Edmonton | North Carolina | US HSA | Texas | Syracuse
---|---|---|---|---|---|---|---
λ_{1}(C) | 6.07179 | 6.21901 | 6.95071 | 6.27921 | 7.82014 | 7.43187 | 7.37507
λ_{n}(C) | −4.17990 (−4.18527) | −3.73161 (−3.34868) | −3.88935 (−4.32407) | −3.75641 (−3.75393) | −4.45320 (−4.45895) | −4.52253 (−4.52336) | −3.76275 (−5.02457)
λ_{n}(W) | −0.92541 | −0.85971 | −0.77382 | −0.70273 | −0.95668 | −0.84387 | −0.93586

Note: true λ_{n}(**C**) values in parentheses, with deviating decimal places in bold font; all λ_{1}(**C**) and λ_{n}(**W**) reported decimal place values agree.
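The λ_{1}(**C**) values in Table 3 agree with the true principal eigenvalues, which — by the Perron–Frobenius theorem for a connected graph’s nonnegative adjacency matrix — are also recoverable by generic power iteration. A sketch on K_{4} (the complete graph on four vertices, whose λ_{1} = 3 exactly), offered as a numerical cross-check rather than the paper’s own approximation method:

```python
import numpy as np

def dominant_eigenvalue(A, iters=1000, seed=0):
    """Power iteration with a Rayleigh-quotient estimate. For a connected
    graph's nonnegative adjacency matrix, Perron-Frobenius theory makes the
    dominant eigenvalue the positive principal eigenvalue lambda_1."""
    rng = np.random.default_rng(seed)
    x = rng.random(A.shape[0])            # random positive starting vector
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)
    return float(x @ A @ x)               # Rayleigh quotient of the iterate

# K_4: eigenvalues are 3, -1, -1, -1, so the iteration converges to 3.
K4 = np.ones((4, 4)) - np.eye(4)
print(dominant_eigenvalue(K4))            # ~3.0
```

Two caveats: for a bipartite graph λ_{n} = −λ_{1} and the unshifted iteration can oscillate; and λ_{n}(**C**) itself can be recovered by applying the same iteration to the shifted matrix **C** − λ_{1}**I** and adding λ_{1} back.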

Quantity | Category | England | Chicago | Edmonton | North Carolina | US HSA | Texas | Syracuse
---|---|---|---|---|---|---|---|---
tuple small p | 2 | 79 | 0 | 23 | 0 | 6 | 7 | 48
 | 3 | 6 ^{†} | 0 | 9 ^{†} | 0 | 1 | 2 ^{†} | 7
inertia | n_{+} | 349 | 891 | 895 | 909 | 1442 | 2205 | 3242
 | n_{0} | 93 | 0 | 41 | 0 | 8 | 10 | 70
 | n_{−} | 397 | 1176 | 1162 | 1286 | 1958 | 3050 | 3937
exponent | γ | 1.4178 | 1.5656 | 1.2890 | 1.2279 | 2.2820 | 1.6077 | 1.8094
 | θ | 1.3732 | 1.4636 | 1.4546 | 1.4297 | 1.6744 | 1.4771 | 1.6084
λ > 0 bivariate regression | intercept | 0.0504 | 0.0361 | 0.0348 | 0.0283 | 0.0643 | 0.0419 | 0.0518
 | slope | 0.9429 | 0.9700 | 0.9643 | 0.9745 | 0.9632 | 0.9684 | 0.9520
 | R^{2} | 0.9973 | 0.9993 | 0.9992 | 0.9995 | 0.9966 | 0.9988 | 0.9979

^{†}includes identified overlap matchings with a 2-tuple.

**Table 5.** The matrix appearing in both https://stackoverflow.com/questions/11966515/find-all-lineary-dependent-subsets-of-vectors (accessed on 24 October 2023) and https://math.stackexchange.com/questions/182753/find-all-linearly-dependent-subsets-of-this-set-of-vectors (accessed on 24 October 2023), with its rows labeled to indicate its transpose.

 | c_{1} | c_{2} | c_{3} | c_{4} | c_{5} | c_{6}
---|---|---|---|---|---|---
r_{1} | 1 | 1 | 1 | 0 | 1 | 0
r_{2} | 0 | 0 | 1 | 0 | 0 | 0
r_{3} | 1 | 0 | 0 | 0 | 0 | 0
r_{4} | 0 | 0 | 0 | 1 | 0 | 0
r_{5} | 1 | 1 | 0 | 0 | 1 | 0
r_{6} | 0 | 0 | 1 | 1 | 0 | 0
r_{7} | 1 | 0 | 1 | 1 | 0 | 0

Note: c_{j} denotes column j, and r_{j} denotes row j; transposing the rows and columns produces the binary rows matrix.
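For this small Stack Exchange matrix, the dependent-subset enumeration reduces to inspecting the rank and null space; a numpy sketch (not the Stack Exchange posters’ own solutions) confirms the two minimal dependent column subsets directly:

```python
import numpy as np

# The Table 5 matrix, rows r1..r7 by columns c1..c6.
M = np.array([[1, 1, 1, 0, 1, 0],
              [0, 0, 1, 0, 0, 0],
              [1, 0, 0, 0, 0, 0],
              [0, 0, 0, 1, 0, 0],
              [1, 1, 0, 0, 1, 0],
              [0, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 0]])

rank = np.linalg.matrix_rank(M)
print(rank, M.shape[1] - rank)        # rank 4, so 2 dependent directions

# The two minimal linearly dependent column subsets, checked directly:
assert not M[:, 5].any()              # {c6}: the all-zero column
assert (M[:, 1] == M[:, 4]).all()     # {c2, c5}: duplicate columns
```

In general, finding *all* minimal dependent subsets means enumerating the supports of null-space vectors, which grows combinatorially; here the nullity of 2 keeps the search trivial.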

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

© 2023 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Griffith, D.A.
Some Comments about Zero and Non-Zero Eigenvalues from Connected Undirected Planar Graph Adjacency Matrices. *AppliedMath* **2023**, *3*, 771-798.
https://doi.org/10.3390/appliedmath3040042
