Article

Geary’s c and Spectral Graph Theory

Hiroshi Yamada
School of Informatics and Data Science, Hiroshima University, 1-2-1 Kagamiyama, Higashi-Hiroshima 739-8525, Japan
Mathematics 2021, 9(19), 2465; https://doi.org/10.3390/math9192465
Submission received: 6 September 2021 / Revised: 22 September 2021 / Accepted: 22 September 2021 / Published: 3 October 2021
(This article belongs to the Section Probability and Statistics)

Abstract

Spatial autocorrelation, of which Geary's c has traditionally been a popular measure, is fundamental to spatial science. This paper provides a new perspective on Geary's c. We discuss this using concepts from spectral graph theory/linear algebraic graph theory. More precisely, we provide three types of representations for it: (a) graph Laplacian representation, (b) graph Fourier transform representation, and (c) Pearson's correlation coefficient representation. Subsequently, we illustrate that the spatial autocorrelation measured by Geary's c is positive (resp. negative) if spatially smoother (resp. less smooth) graph Laplacian eigenvectors are dominant. Finally, based on our analysis, we provide a recommendation for applied studies.

1. Introduction

Spatial autocorrelation is fundamental to spatial science (Getis [1]). This is because it describes the similarity between signals on adjacent vertices. Strong positive (resp. negative) spatial autocorrelation occurs when most signals on adjacent vertices take similar (resp. dissimilar) values. If no such clear tendency exists, spatial autocorrelation is weak. Geary's [2] c, which is a spatial generalization of the von Neumann [3] ratio, has traditionally been a popular measure of spatial autocorrelation. There exists positive spatial autocorrelation if $c < 1$, no spatial autocorrelation if $c = 1$, and negative spatial autocorrelation if $c > 1$.
See, e.g., de Jong et al. [4] (Equation (6)). Herein, we provide a new perspective on Geary’s c. We discuss this using concepts from spectral graph theory/linear algebraic graph theory and, based on our analysis, provide a recommendation for applied studies.
To present our contributions more precisely, let us clarify the graph considered herein. Let $G=(V,E)$, where $V=\{v_1,\dots,v_n\}$, denote an undirected graph without loops or multiple edges. We assume that $n$ (the number of vertices) and $m$ (the number of edges) satisfy $2 \le n < \infty$ and $m \ge 1$, respectively. Then, $m$ satisfies $1 \le m \le \frac{n(n-1)}{2} = \binom{n}{2} < \infty$. For $i,j=1,\dots,n$, let
$$w_{ij} > 0 \quad \text{if } \{v_i,v_j\} \in E, \qquad w_{ij} = 0 \quad \text{if } \{v_i,v_j\} \notin E,$$
and $\mathbf{W} = [w_{ij}] \in \mathbb{R}^{n \times n}$. Then, for $i,j=1,\dots,n$, $w_{ij} \ge 0$, $w_{ij} = w_{ji}$, $w_{ij} = 0$ if $i=j$, and $\sum_{i=1}^{n}\sum_{j=1}^{n} w_{ij} > 0$. Accordingly, $\mathbf{W}$ is a non-negative, symmetric, hollow, and nonzero matrix.
Example 1.
As an example of $G=(V,E)$, consider the graph $G^\circ=(V^\circ,E^\circ)$ such that $V^\circ=\{1,2,3,4\}$ and $E^\circ=\{\{1,2\},\{1,3\},\{2,3\},\{3,4\}\}$. See Figure 1, which depicts $G^\circ=(V^\circ,E^\circ)$. In this case, given that $\{1,4\}$ and $\{2,4\}$ do not belong to $E^\circ$, the corresponding $\mathbf{W}$, denoted by $\mathbf{W}^\circ$, is
$$\mathbf{W}^\circ = \begin{bmatrix} 0 & w_{12} & w_{13} & 0 \\ w_{12} & 0 & w_{23} & 0 \\ w_{13} & w_{23} & 0 & w_{34} \\ 0 & 0 & w_{34} & 0 \end{bmatrix} \in \mathbb{R}^{4 \times 4},$$
which is a non-negative, symmetric, hollow, and nonzero matrix.
Let $y_i$ denote the observation (signal) on vertex $v_i$ for $i=1,\dots,n$ such that $\sum_{i=1}^{n}(y_i-\bar{y})^2 > 0$, where $\bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i$. Following Cliff and Ord [5,6,7,8], Geary's c is defined as
$$c = \frac{n-1}{2\Omega} \cdot \frac{\sum_{i=1}^{n}\sum_{j=1}^{n} w_{ij}(y_i - y_j)^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2},$$
where $\Omega$ denotes the sum of all $w_{ij}$ for $i,j=1,\dots,n$, i.e., $\Omega = \sum_{i=1}^{n}\sum_{j=1}^{n} w_{ij}$, which is positive by assumption. Note that, as stated, Geary's c is a spatial generalization of the von Neumann [3] ratio:
$$\eta = \frac{n}{n-1} \cdot \frac{\sum_{i=2}^{n}(y_i - y_{i-1})^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2}.$$
We briefly describe this relationship in Appendix A.14.
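To make these definitions concrete, here is a minimal NumPy sketch that evaluates Geary's c in (3) and the von Neumann ratio in (4). The weights $w_{12}=1$, $w_{13}=0.5$, $w_{23}=1$, $w_{34}=1$ are the example values used in Section 4, while the signal y is an arbitrary illustrative choice; neither is data from the paper.

```python
import numpy as np

# Example weights from Section 4 (an assumption for illustration) and an
# arbitrary illustrative signal y on the four vertices.
W = np.array([[0.0, 1.0, 0.5, 0.0],
              [1.0, 0.0, 1.0, 0.0],
              [0.5, 1.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
y = np.array([1.0, 2.0, 4.0, 3.0])

n = len(y)
Omega = W.sum()                                    # sum of all w_ij
num = sum(W[i, j] * (y[i] - y[j]) ** 2 for i in range(n) for j in range(n))
den = ((y - y.mean()) ** 2).sum()
c = (n - 1) / (2 * Omega) * num / den              # Geary's c, Equation (3)
eta = n / (n - 1) * (np.diff(y) ** 2).sum() / den  # von Neumann ratio, Equation (4)
print(round(c, 4), round(eta, 4))
```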
Let $\mathbf{D} = \operatorname{diag}(d_1,\dots,d_n)$, where $d_i = \sum_{j=1}^{n} w_{ij}$ for $i=1,\dots,n$, and
$$\mathbf{L} = \mathbf{D} - \mathbf{W},$$
which is referred to as the graph Laplacian. By assumption, $\mathbf{L} \in \mathbb{R}^{n \times n}$ is a symmetric matrix.
Example 2.
As an example of $\mathbf{L}$, we show $\mathbf{L}^\circ$, which denotes the graph Laplacian of $G^\circ=(V^\circ,E^\circ)$. That is,
$$\mathbf{L}^\circ = \begin{bmatrix} w_{12}+w_{13} & -w_{12} & -w_{13} & 0 \\ -w_{12} & w_{12}+w_{23} & -w_{23} & 0 \\ -w_{13} & -w_{23} & w_{13}+w_{23}+w_{34} & -w_{34} \\ 0 & 0 & -w_{34} & w_{34} \end{bmatrix} \in \mathbb{R}^{4 \times 4},$$
which is a symmetric matrix.
As $\mathbf{L}$ is a real symmetric matrix, it can be decomposed as
$$\mathbf{L} = \mathbf{U}\boldsymbol{\Lambda}\mathbf{U}^\top,$$
where $\mathbf{U} = [\mathbf{u}_1,\dots,\mathbf{u}_n] \in \mathbb{R}^{n \times n}$ is an orthogonal matrix and $\boldsymbol{\Lambda} = \operatorname{diag}(\lambda_1,\dots,\lambda_n)$ with $\lambda_1 \le \dots \le \lambda_n$. Concerning the eigenvalues of $\mathbf{L}$, we assume $\lambda_2 < \lambda_n$.
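As a numerical counterpart of (5) and (6), the following sketch builds the graph Laplacian of the example graph and diagonalizes it with a symmetric eigensolver, which returns the eigenvalues in ascending order; the weights are again the Section 4 values and are an assumption here.

```python
import numpy as np

W = np.array([[0.0, 1.0, 0.5, 0.0],
              [1.0, 0.0, 1.0, 0.0],
              [0.5, 1.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
D = np.diag(W.sum(axis=1))           # degrees d_i on the diagonal
L = D - W                            # graph Laplacian, Equation (5)

lam, U = np.linalg.eigh(L)           # L = U diag(lam) U^T, lam[0] <= ... <= lam[-1]
print(np.round(lam, 4))              # the smallest eigenvalue is 0
print(np.allclose(U @ np.diag(lam) @ U.T, L))
```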
Remark 1.
We note that by assuming $\lambda_2 < \lambda_n$ we exclude the case in which $\lambda_2 = \dots = \lambda_n$. It then follows that
$$(n-1)\lambda_2 < \sum_{i=2}^{n}\lambda_i < (n-1)\lambda_n.$$
We exclude this case because in it Geary's c equals one for any $\mathbf{y}$. For more details, see Appendix A.1.
In spectral graph theory/linear algebraic graph theory, the linear transformation
$$\mathbf{U}^\top\mathbf{y} = [\mathbf{u}_1^\top\mathbf{y},\dots,\mathbf{u}_n^\top\mathbf{y}]^\top,$$
where $\mathbf{y} = [y_1,\dots,y_n]^\top$, is referred to as the graph Fourier transform of $\mathbf{y}$. In addition, $\lambda_i$ and $\mathbf{u}_i$ for $i=1,\dots,n$ are referred to as graph Laplacian eigenvalues and graph Laplacian eigenvectors, respectively (see Hammond et al. [9] and Shuman et al. [10]). Let
$$\boldsymbol{\alpha} = [\alpha_1,\dots,\alpha_n]^\top = [\mathbf{u}_1^\top\mathbf{y},\dots,\mathbf{u}_n^\top\mathbf{y}]^\top = \mathbf{U}^\top\mathbf{y}.$$
Then, given $\mathbf{y} = \mathbf{U}\mathbf{U}^\top\mathbf{y} = \mathbf{U}\boldsymbol{\alpha}$, $\mathbf{y}$ can be represented as a linear combination of the graph Laplacian eigenvectors $\mathbf{u}_1,\dots,\mathbf{u}_n$:
$$\mathbf{y} = \alpha_1\mathbf{u}_1 + \dots + \alpha_n\mathbf{u}_n.$$
Moreover, we have
$$\sum_{i=1}^{n}(y_i - \bar{y})^2 = \alpha_2^2 + \dots + \alpha_n^2,$$
which is Parseval's identity for the graph Fourier transform; see, e.g., Shuman et al. [11]. (A proof of (11) is provided in Appendix A.2.)
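The sketch below, under the same illustrative assumptions, computes the graph Fourier transform of y and verifies both the reconstruction $\mathbf{y} = \mathbf{U}\boldsymbol{\alpha}$ in (10) and Parseval's identity (11).

```python
import numpy as np

W = np.array([[0.0, 1.0, 0.5, 0.0],
              [1.0, 0.0, 1.0, 0.0],
              [0.5, 1.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
L = np.diag(W.sum(axis=1)) - W
lam, U = np.linalg.eigh(L)

y = np.array([1.0, 2.0, 4.0, 3.0])                # illustrative signal (assumption)
alpha = U.T @ y                                   # graph Fourier transform of y
print(np.allclose(U @ alpha, y))                  # y = alpha_1 u_1 + ... + alpha_n u_n
print(np.isclose(((y - y.mean()) ** 2).sum(),
                 (alpha[1:] ** 2).sum()))         # Parseval's identity, Equation (11)
```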
Remark 2.
(i) Harvey [12] (Equation (2.13)) presents Parseval's identity for the Fourier representation of a time series, which can be regarded as a special case of (11). See also Anderson [13] (Section 4.2.2). $\mathbf{M}$ in Anderson [13] (Equations (14) and (21)) corresponds to $\mathbf{U}$ in (6) for the case in which $\mathbf{L}$ equals $\mathbf{A}_0$ in Strang [14] (p. 136). (ii) The discrete cosine transform developed by Ahmed et al. [15] is an example of the graph Fourier transform. For more details, see Appendix A.14.
Let $\mathbf{z} \in \mathbb{R}^n$ denote the vector of the standard scores of $\mathbf{y}$:
$$\mathbf{z} = [z_1,\dots,z_n]^\top = \left[\frac{y_1-\bar{y}}{s},\dots,\frac{y_n-\bar{y}}{s}\right]^\top,$$
where $s$ denotes the sample standard deviation of $\mathbf{y}$, i.e., the positive square root of $s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(y_i-\bar{y})^2$, and denote the graph Fourier transform of $\mathbf{z}$ by $\boldsymbol{\beta}$:
$$\boldsymbol{\beta} = [\beta_1,\dots,\beta_n]^\top = [\mathbf{u}_1^\top\mathbf{z},\dots,\mathbf{u}_n^\top\mathbf{z}]^\top = \mathbf{U}^\top\mathbf{z}.$$
Now, we are ready to state our contributions. As stated, this paper reconsiders Geary's c using concepts from spectral graph theory. Our contributions are twofold.
  • We present three types of representations for Geary's c. First, we give two matrix-form representations that use the graph Laplacian $\mathbf{L}$. Second, we express it using $\alpha_2^2,\dots,\alpha_n^2$ and $\beta_2^2,\dots,\beta_n^2$, which are obtained from the graph Fourier transforms of $\mathbf{y}$ and $\mathbf{z}$, respectively. Third, we express it using the squared Pearson's correlation coefficients between $\mathbf{u}_i$ and $\mathbf{y}$ for $i=2,\dots,n$.
  • We illustrate that the spatial autocorrelation measured by Geary's c is positive (resp. negative) if spatially smoother (resp. less smooth) graph Laplacian eigenvectors are dominant. Here, we note that $\mathbf{u}_i$ is spatially smoother than $\mathbf{u}_{i+1}$ for $i=1,\dots,n-1$.
The organization of the paper is as follows. Section 2 fixes some notation and presents key preliminary results for $\mathbf{L}$ in (5). Section 3 and Section 4 present the contributions stated above in order. Section 5 concludes the paper and provides a recommendation for applied studies.

2. Preliminaries

2.1. Some Notations

Let $\boldsymbol{\iota} = [1,\dots,1]^\top \in \mathbb{R}^n$, let $\mathbf{I}_n$ be the identity matrix of order $n$, let $\mathbf{e}_i$ denote the $i$th column of $\mathbf{I}_n$ for $i=1,\dots,n$, i.e., $\mathbf{I}_n = [\mathbf{e}_1,\dots,\mathbf{e}_n]$, and let $\mathbf{Q}_\iota = \mathbf{I}_n - \boldsymbol{\iota}(\boldsymbol{\iota}^\top\boldsymbol{\iota})^{-1}\boldsymbol{\iota}^\top$. Note that $\mathbf{Q}_\iota$ is the orthogonal projection matrix onto the orthogonal complement of the space spanned by $\boldsymbol{\iota}$. For a vector $\boldsymbol{\gamma} = [\gamma_1,\dots,\gamma_n]^\top$, $\|\boldsymbol{\gamma}\|^2 = \boldsymbol{\gamma}^\top\boldsymbol{\gamma} = \gamma_1^2 + \dots + \gamma_n^2$. Let $\boldsymbol{\alpha}_2 = [\alpha_2,\dots,\alpha_n]^\top$, $\boldsymbol{\Lambda}_2 = \operatorname{diag}(\lambda_2,\dots,\lambda_n)$, and $\mathbf{U}_2 = [\mathbf{u}_2,\dots,\mathbf{u}_n]$. For a vector $\boldsymbol{\upsilon} \in \mathbb{R}^n$, denote Pearson's correlation coefficient between $\mathbf{u}_i$ and $\boldsymbol{\upsilon}$ by $\rho(\mathbf{u}_i,\boldsymbol{\upsilon})$:
$$\rho(\mathbf{u}_i,\boldsymbol{\upsilon}) = \frac{\mathbf{u}_i^\top\mathbf{Q}_\iota\boldsymbol{\upsilon}}{\sqrt{\mathbf{u}_i^\top\mathbf{Q}_\iota\mathbf{u}_i}\sqrt{\boldsymbol{\upsilon}^\top\mathbf{Q}_\iota\boldsymbol{\upsilon}}}, \quad i=2,\dots,n.$$

2.2. Key Preliminary Results for L

In this section, we provide key preliminary results for $\mathbf{L}$ in (5).
First, given that $\mathbf{W}$ is a symmetric matrix, $\mathbf{L}$ satisfies
$$\mathbf{y}^\top\mathbf{L}\mathbf{y} = \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} w_{ij}(y_i - y_j)^2.$$
(A proof of (14) is provided in Appendix A.3.) Given that $w_{ij} \ge 0$ for $i,j=1,\dots,n$, (14) leads to $\mathbf{y}^\top\mathbf{L}\mathbf{y} \ge 0$, from which we have $\lambda_i \ge 0$ for $i=1,\dots,n$. (See also (18) and (19).)
In addition, it follows that
$$\mathbf{L}\boldsymbol{\iota} = \mathbf{D}\boldsymbol{\iota} - \mathbf{W}\boldsymbol{\iota} = [d_1,\dots,d_n]^\top - [d_1,\dots,d_n]^\top = \mathbf{0},$$
which implies that $\left(0,\frac{1}{\sqrt{n}}\boldsymbol{\iota}\right)$ is an eigenpair of $\mathbf{L}$. Combining these results, we obtain $\lambda_1 = 0$, $\mathbf{u}_1 = \frac{1}{\sqrt{n}}\boldsymbol{\iota}$, and
$$\mathbf{Q}_\iota\mathbf{u}_i = \mathbf{u}_i - \boldsymbol{\iota}(\boldsymbol{\iota}^\top\boldsymbol{\iota})^{-1}\boldsymbol{\iota}^\top\mathbf{u}_i = \mathbf{u}_i, \quad i=2,\dots,n.$$
We note that (16) is natural because $\mathbf{u}_2,\dots,\mathbf{u}_n$ belong to the orthogonal complement of the space spanned by $\boldsymbol{\iota}$.
Moreover, given that $\mathbf{L}$ is a symmetric matrix such that $\mathbf{L}\boldsymbol{\iota} = \mathbf{0}$, it follows that $(\mathbf{L}\boldsymbol{\iota})^\top = \boldsymbol{\iota}^\top\mathbf{L}^\top = \boldsymbol{\iota}^\top\mathbf{L} = \mathbf{0}^\top$, which leads to
$$\mathbf{Q}_\iota\mathbf{L}\mathbf{Q}_\iota = \mathbf{L}.$$
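The identities above are easy to confirm numerically; the following sketch checks (14), (15), and (17) for the example weights (an assumption) and an arbitrary signal.

```python
import numpy as np

W = np.array([[0.0, 1.0, 0.5, 0.0],
              [1.0, 0.0, 1.0, 0.0],
              [0.5, 1.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
n = W.shape[0]
L = np.diag(W.sum(axis=1)) - W
iota = np.ones(n)
Q = np.eye(n) - np.outer(iota, iota) / n          # Q_iota, the centering projector

y = np.array([1.0, 2.0, 4.0, 3.0])
quad = sum(W[i, j] * (y[i] - y[j]) ** 2 for i in range(n) for j in range(n))
print(np.isclose(y @ L @ y, quad / 2))            # Equation (14)
print(np.allclose(L @ iota, 0.0))                 # Equation (15)
print(np.allclose(Q @ L @ Q, L))                  # Equation (17)
```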
Finally, we have the following result.
Lemma 1.
Let $\mathbf{b}_{ij} = \sqrt{w_{ij}}(\mathbf{e}_i - \mathbf{e}_j) \in \mathbb{R}^n$ for $i,j=1,\dots,n$ and let $\mathbf{B}$ be an $n \times \frac{n(n-1)}{2}$ matrix such that $\mathbf{B} = [\mathbf{B}_1,\dots,\mathbf{B}_{n-1}]$, where $\mathbf{B}_k = [\mathbf{b}_{k(k+1)},\dots,\mathbf{b}_{kn}] \in \mathbb{R}^{n \times (n-k)}$ for $k=1,\dots,n-1$. (Observe that $\sum_{k=1}^{n-1}(n-k) = n(n-1) - \frac{n(n-1)}{2} = \frac{n(n-1)}{2} = \binom{n}{2}$.) Then, it follows that
$$\mathbf{f}^\top\mathbf{L}\mathbf{f} = \|\mathbf{B}^\top\mathbf{f}\|^2,$$
$$\|\mathbf{B}^\top\mathbf{u}_i\|^2 = \lambda_i, \quad i=1,\dots,n,$$
where $\mathbf{f}$ denotes an $n$-dimensional column vector.
Proof. 
See the Appendix A.4. □
Example 3
(Example of $\mathbf{B}$). Denote the $\mathbf{B}$ corresponding to $G^\circ=(V^\circ,E^\circ)$ by $\mathbf{B}^\circ$. Given $n=4$, $\mathbf{B}^\circ$ is a $4 \times \frac{4\cdot 3}{2}$ matrix such that
$$\mathbf{B}^\circ = [\mathbf{B}_1^\circ,\mathbf{B}_2^\circ,\mathbf{B}_3^\circ] = [\underbrace{\mathbf{b}_{12},\mathbf{b}_{13},\mathbf{b}_{14}}_{\mathbf{B}_1^\circ},\underbrace{\mathbf{b}_{23},\mathbf{b}_{24}}_{\mathbf{B}_2^\circ},\underbrace{\mathbf{b}_{34}}_{\mathbf{B}_3^\circ}] = [\sqrt{w_{12}}(\mathbf{e}_1-\mathbf{e}_2),\sqrt{w_{13}}(\mathbf{e}_1-\mathbf{e}_3),\mathbf{0},\sqrt{w_{23}}(\mathbf{e}_2-\mathbf{e}_3),\mathbf{0},\sqrt{w_{34}}(\mathbf{e}_3-\mathbf{e}_4)]$$
$$= \begin{bmatrix} \sqrt{w_{12}} & \sqrt{w_{13}} & 0 & 0 & 0 & 0 \\ -\sqrt{w_{12}} & 0 & 0 & \sqrt{w_{23}} & 0 & 0 \\ 0 & -\sqrt{w_{13}} & 0 & -\sqrt{w_{23}} & 0 & \sqrt{w_{34}} \\ 0 & 0 & 0 & 0 & 0 & -\sqrt{w_{34}} \end{bmatrix},$$
and thus we have
$$\mathbf{B}^{\circ\top}\mathbf{u}_i = \begin{bmatrix} \sqrt{w_{12}}(\mathbf{e}_1-\mathbf{e}_2)^\top\mathbf{u}_i \\ \sqrt{w_{13}}(\mathbf{e}_1-\mathbf{e}_3)^\top\mathbf{u}_i \\ 0 \\ \sqrt{w_{23}}(\mathbf{e}_2-\mathbf{e}_3)^\top\mathbf{u}_i \\ 0 \\ \sqrt{w_{34}}(\mathbf{e}_3-\mathbf{e}_4)^\top\mathbf{u}_i \end{bmatrix} = \begin{bmatrix} \sqrt{w_{12}}(u_{i,1}-u_{i,2}) \\ \sqrt{w_{13}}(u_{i,1}-u_{i,3}) \\ 0 \\ \sqrt{w_{23}}(u_{i,2}-u_{i,3}) \\ 0 \\ \sqrt{w_{34}}(u_{i,3}-u_{i,4}) \end{bmatrix},$$
where $\mathbf{u}_i = [u_{i,1},\dots,u_{i,n}]^\top$ for $i=1,\dots,n$, and
$$\|\mathbf{B}^{\circ\top}\mathbf{u}_i\|^2 = w_{12}(u_{i,1}-u_{i,2})^2 + w_{13}(u_{i,1}-u_{i,3})^2 + w_{23}(u_{i,2}-u_{i,3})^2 + w_{34}(u_{i,3}-u_{i,4})^2.$$
Given $0 = \lambda_1 \le \dots \le \lambda_n$ and $\lambda_2 < \lambda_n$, from Lemma 1 we immediately have the following inequalities:
$$0 = \|\mathbf{B}^\top\mathbf{u}_1\|^2 \le \dots \le \|\mathbf{B}^\top\mathbf{u}_n\|^2,$$
$$\|\mathbf{B}^\top\mathbf{u}_2\|^2 < \|\mathbf{B}^\top\mathbf{u}_n\|^2.$$
As illustrated in (21), $\mathbf{B}$ defined in Lemma 1 may be regarded as a spatial differencing matrix. Thus, (22) implies that the $i$th graph Laplacian eigenvector, $\mathbf{u}_i$, is spatially smoother than the $(i+1)$th graph Laplacian eigenvector, $\mathbf{u}_{i+1}$, for $i=1,\dots,n-1$.
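A minimal sketch of Lemma 1, under the same example weights (an assumption): build B column by column, one column per unordered pair of vertices, and check that $\|\mathbf{B}^\top\mathbf{u}_i\|^2$ equals $\lambda_i$ and is therefore non-decreasing in $i$.

```python
import numpy as np

W = np.array([[0.0, 1.0, 0.5, 0.0],
              [1.0, 0.0, 1.0, 0.0],
              [0.5, 1.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
n = W.shape[0]
L = np.diag(W.sum(axis=1)) - W
lam, U = np.linalg.eigh(L)

cols = []
for i in range(n - 1):
    for j in range(i + 1, n):
        e = np.zeros(n)
        e[i], e[j] = 1.0, -1.0
        cols.append(np.sqrt(W[i, j]) * e)          # b_ij = sqrt(w_ij) (e_i - e_j)
B = np.column_stack(cols)                          # n x n(n-1)/2 differencing matrix

smoothness = ((B.T @ U) ** 2).sum(axis=0)          # ||B^T u_i||^2 for each eigenvector
print(np.allclose(smoothness, lam))                # Equation (19)
```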
Remark 3.
(i) $\mathbf{f}^\top\mathbf{L}\mathbf{f}$ in (18) is referred to as the graph Laplacian quadratic form of $\mathbf{f}$. In machine learning and statistics, the use of graph-Laplacian-regularized filtering is becoming popular; see, e.g., Shuman et al. [10], Dong et al. [16], and Ricaud et al. [17]. (ii) As illustrated in (20), if $\{v_i,v_j\} \notin E$, then $w_{ij} = 0$, from which the corresponding column of $\mathbf{B}$ is $\mathbf{0}$.
We note that $\frac{n(n-1)}{2} - m$ columns of $\mathbf{B}$ are equal to $\mathbf{0}$. By removing such columns from $\mathbf{B}$, we can obtain an incidence matrix of $G=(V,E)$, which has $n$ rows and $m$ columns; see Bapat [18] and Gallier [19]. Accordingly, the following matrix is an incidence matrix of $G^\circ=(V^\circ,E^\circ)$:
$$\begin{bmatrix} \sqrt{w_{12}} & \sqrt{w_{13}} & 0 & 0 \\ -\sqrt{w_{12}} & 0 & \sqrt{w_{23}} & 0 \\ 0 & -\sqrt{w_{13}} & -\sqrt{w_{23}} & \sqrt{w_{34}} \\ 0 & 0 & 0 & -\sqrt{w_{34}} \end{bmatrix} \in \mathbb{R}^{|V^\circ| \times |E^\circ|}.$$

3. Three Types of Representations for Geary’s c

In this section, we present three types of representations for Geary's c. In Section 3.1, we show that it can be represented using $\mathbf{L}$. In Section 3.2, we show that it can be expressed using $\alpha_2^2,\dots,\alpha_n^2$ and $\beta_2^2,\dots,\beta_n^2$. In Section 3.3, we show that it can be expressed using $\rho^2(\mathbf{u}_i,\mathbf{y})$ for $i=2,\dots,n$. Subsequently, in Section 3.4, we clarify the relationship between $\alpha_i^2$, $\beta_i^2$, and $\rho^2(\mathbf{u}_i,\mathbf{y})$ for $i=2,\dots,n$.

3.1. Graph Laplacian Representation

Geary’s c in (3) can be expressed in matrix notation as follows.
Proposition 1.
(i) Geary's c can be represented using $\mathbf{L}$ as
$$c = \frac{n-1}{\Omega} \cdot \frac{\mathbf{y}^\top\mathbf{L}\mathbf{y}}{\mathbf{y}^\top\mathbf{Q}_\iota\mathbf{y}}.$$
(ii) Geary's c can also be represented using $\mathbf{L}$ very succinctly as
$$c = \frac{1}{\Omega}\mathbf{z}^\top\mathbf{L}\mathbf{z}.$$
Proof. 
See the Appendix A.5. □
Remark 4.
(i) (24) corresponds to de Jong et al. [4] (Equation (22)). The matrix $\mathbf{B}$ in de Jong et al. [4] is $2\mathbf{L}$ in our notation. It is notable that de Jong et al. [4] derived this. (ii) (24) also corresponds to Lebichot and Saerens [20] (Equation (7)). Here, we point out that the factor 2 in that equation should be removed.
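Both matrix forms are easy to check against the definitional formula; the sketch below does so under the same illustrative assumptions as before.

```python
import numpy as np

W = np.array([[0.0, 1.0, 0.5, 0.0],
              [1.0, 0.0, 1.0, 0.0],
              [0.5, 1.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
y = np.array([1.0, 2.0, 4.0, 3.0])
n, Omega = len(y), W.sum()
L = np.diag(W.sum(axis=1)) - W
Q = np.eye(n) - np.ones((n, n)) / n                # Q_iota

c_def = ((n - 1) / (2 * Omega)
         * sum(W[i, j] * (y[i] - y[j]) ** 2 for i in range(n) for j in range(n))
         / ((y - y.mean()) ** 2).sum())            # Equation (3)
c_i = (n - 1) / Omega * (y @ L @ y) / (y @ Q @ y)  # Proposition 1(i), Equation (24)
z = (y - y.mean()) / y.std(ddof=1)                 # standard scores (sample s.d.)
c_ii = (z @ L @ z) / Omega                         # Proposition 1(ii), Equation (25)
print(np.allclose([c_i, c_ii], c_def))
```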

3.2. Graph Fourier Transform Representation

From (11), we have
$$\mathbf{y}^\top\mathbf{Q}_\iota\mathbf{y} = \sum_{i=1}^{n}(y_i - \bar{y})^2 = \sum_{i=2}^{n}\alpha_i^2.$$
In addition, given $\lambda_1 = 0$, we have
$$\mathbf{y}^\top\mathbf{L}\mathbf{y} = \mathbf{y}^\top\mathbf{U}\boldsymbol{\Lambda}\mathbf{U}^\top\mathbf{y} = \boldsymbol{\alpha}^\top\boldsymbol{\Lambda}\boldsymbol{\alpha} = \sum_{i=2}^{n}\lambda_i\alpha_i^2.$$
Substituting (26) and (27) into (24), it follows that
$$c = \frac{n-1}{\Omega} \cdot \frac{\mathbf{y}^\top\mathbf{L}\mathbf{y}}{\mathbf{y}^\top\mathbf{Q}_\iota\mathbf{y}} = \frac{n-1}{\Omega} \cdot \frac{\sum_{i=2}^{n}\lambda_i\alpha_i^2}{\sum_{i=2}^{n}\alpha_i^2}.$$
Likewise, Geary's c can also be represented as
$$c = \frac{1}{\Omega}\mathbf{z}^\top\mathbf{L}\mathbf{z} = \frac{1}{\Omega}\mathbf{z}^\top\mathbf{U}\boldsymbol{\Lambda}\mathbf{U}^\top\mathbf{z} = \frac{1}{\Omega}\sum_{i=1}^{n}\lambda_i(\mathbf{u}_i^\top\mathbf{z})^2 = \frac{1}{\Omega}\sum_{i=2}^{n}\lambda_i\beta_i^2.$$
Recall that $\alpha_2,\dots,\alpha_n$ in (28) (resp. $\beta_2,\dots,\beta_n$ in (29)) are elements of the graph Fourier transform of $\mathbf{y}$ (resp. $\mathbf{z}$).
We summarize the properties obtained above in the following proposition.
Proposition 2.
(i) Let $\theta_i = \frac{\lambda_i}{\Omega}$ and $\nu_i = \frac{1}{n-1}$ for $i=2,\dots,n$. Then, Geary's c can be expressed using $\alpha_2^2,\dots,\alpha_n^2$ as
$$c = \frac{n-1}{\Omega} \cdot \frac{\sum_{i=2}^{n}\lambda_i\alpha_i^2}{\sum_{i=2}^{n}\alpha_i^2} = \frac{\sum_{i=2}^{n}\theta_i\alpha_i^2}{\sum_{i=2}^{n}\nu_i\alpha_i^2},$$
where both $\sum_{i=2}^{n}\theta_i\alpha_i^2$ and $\sum_{i=2}^{n}\nu_i\alpha_i^2$ are weighted averages of $\alpha_2^2,\dots,\alpha_n^2$. In addition, $\alpha_2^2,\dots,\alpha_n^2$ satisfy
$$\sum_{i=2}^{n}\alpha_i^2 = \sum_{i=1}^{n}(y_i - \bar{y})^2.$$
(ii) Let $\theta_i = \frac{\lambda_i}{\Omega}$ for $i=2,\dots,n$. Then, Geary's c can be expressed using $\beta_2^2,\dots,\beta_n^2$ as
$$c = \frac{1}{\Omega}\sum_{i=2}^{n}\lambda_i\beta_i^2 = \sum_{i=2}^{n}\theta_i\beta_i^2,$$
which is a weighted average of $\beta_2^2,\dots,\beta_n^2$, all of which are non-negative. In addition, $\beta_2^2,\dots,\beta_n^2$ satisfy
$$\sum_{i=2}^{n}\beta_i^2 = n-1.$$
Proof. 
See the Appendix A.6. □
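A short numerical check of Proposition 2, under the same illustrative assumptions: Geary's c computed as a weighted average of $\beta_2^2,\dots,\beta_n^2$ with weights $\theta_i = \lambda_i/\Omega$ coincides with the $\alpha$-based form, and $\beta_2^2 + \dots + \beta_n^2 = n-1$.

```python
import numpy as np

W = np.array([[0.0, 1.0, 0.5, 0.0],
              [1.0, 0.0, 1.0, 0.0],
              [0.5, 1.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
y = np.array([1.0, 2.0, 4.0, 3.0])
n, Omega = len(y), W.sum()
L = np.diag(W.sum(axis=1)) - W
lam, U = np.linalg.eigh(L)

alpha = U.T @ y                                    # graph Fourier transform of y
z = (y - y.mean()) / y.std(ddof=1)
beta = U.T @ z                                     # graph Fourier transform of z

theta = lam / Omega                                # theta_1 = 0 and theta sums to 1
c_alpha = (n - 1) / Omega * (lam[1:] * alpha[1:] ** 2).sum() / (alpha[1:] ** 2).sum()
c_beta = (theta * beta ** 2).sum()                 # Equation (32)
print(np.isclose(c_alpha, c_beta))
print(np.isclose((beta[1:] ** 2).sum(), n - 1))    # Equation (33)
```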
Remark 5.
(i) Given $0 \le \lambda_2 \le \dots \le \lambda_n$, we have
$$0 \le \lambda_2 = \frac{\sum_{i=2}^{n}\lambda_2\alpha_i^2}{\sum_{i=2}^{n}\alpha_i^2} \le \frac{\sum_{i=2}^{n}\lambda_i\alpha_i^2}{\sum_{i=2}^{n}\alpha_i^2} \le \frac{\sum_{i=2}^{n}\lambda_n\alpha_i^2}{\sum_{i=2}^{n}\alpha_i^2} = \lambda_n.$$
Given (30), multiplying (34) by $\frac{n-1}{\Omega} > 0$ yields
$$0 \le \frac{(n-1)\lambda_2}{\Omega} \le c \le \frac{(n-1)\lambda_n}{\Omega}.$$
We note that the bounds of Geary's c given by (35) were derived by de Jong et al. [4]. In addition, given that $\lambda_2 < \lambda_n$ by assumption, it follows that
$$\frac{(n-1)\lambda_2}{\Omega} < \frac{(n-1)\lambda_n}{\Omega}.$$
(ii) Let $\theta_1 = \frac{\lambda_1}{\Omega}$. Then, given $\theta_1 = 0$, (32) can be rewritten as
$$c = \frac{1}{\Omega}\sum_{i=1}^{n}\lambda_i\beta_i^2 = \sum_{i=1}^{n}\theta_i\beta_i^2.$$
Thus, c is also a weighted average of $\beta_1^2,\dots,\beta_n^2$.
From Proposition 2, we have the following results.
Corollary 1.
It follows that
$$c = \begin{cases} \dfrac{(n-1)\lambda_2}{\Omega} < 1 & \text{if } \alpha_2^2 = \sum_{i=1}^{n}(y_i-\bar{y})^2 \text{ and } \alpha_3^2 = \dots = \alpha_n^2 = 0, \\[4pt] \dfrac{\sum_{i=2}^{n}\lambda_i}{\Omega} = 1 & \text{if } \alpha_2^2 = \dots = \alpha_n^2 = s^2, \\[4pt] \dfrac{(n-1)\lambda_n}{\Omega} > 1 & \text{if } \alpha_2^2 = \dots = \alpha_{n-1}^2 = 0 \text{ and } \alpha_n^2 = \sum_{i=1}^{n}(y_i-\bar{y})^2, \end{cases}$$
and
$$c = \begin{cases} \dfrac{(n-1)\lambda_2}{\Omega} < 1 & \text{if } \beta_2^2 = n-1 \text{ and } \beta_3^2 = \dots = \beta_n^2 = 0, \\[4pt] \dfrac{\sum_{i=2}^{n}\lambda_i}{\Omega} = 1 & \text{if } \beta_2^2 = \dots = \beta_n^2 = 1, \\[4pt] \dfrac{(n-1)\lambda_n}{\Omega} > 1 & \text{if } \beta_2^2 = \dots = \beta_{n-1}^2 = 0 \text{ and } \beta_n^2 = n-1. \end{cases}$$
Proof. 
See the Appendix A.7. □
Corollary 1 implies that plotting $(\alpha_2^2,\dots,\alpha_n^2)$ or $(\beta_2^2,\dots,\beta_n^2)$ is valuable for detecting spatial autocorrelation in $\mathbf{y}$.

3.3. Pearson’s Correlation Coefficient Representation

In this section, we express Geary’s c using Pearson’s correlation coefficient.
Denote Geary's c for the case in which $\mathbf{y}$ equals $\mathbf{u}_i$ by $\kappa_i$:
$$\kappa_i = \frac{n-1}{\Omega} \cdot \frac{\mathbf{u}_i^\top\mathbf{L}\mathbf{u}_i}{\mathbf{u}_i^\top\mathbf{Q}_\iota\mathbf{u}_i}.$$
Then, we have the following results.
Lemma 2.
$\kappa_i$ can be expressed using $\lambda_i$ as follows:
$$\kappa_i = \frac{(n-1)\lambda_i}{\Omega}, \quad i=2,\dots,n,$$
which thus satisfies $\kappa_2 \le \dots \le \kappa_n$ and $\kappa_2 < \kappa_n$.
Proof. 
See the Appendix A.8. □
Remark 6.
Given (35) and (39), $\kappa_2$ (resp. $\kappa_n$) is the lower (resp. upper) bound of Geary's c.
In addition, we have the following result.
Lemma 3.
$\rho(\mathbf{u}_i,\mathbf{y})$ equals $\rho(\mathbf{u}_i,\mathbf{z})$ for $i=2,\dots,n$.
Proof. 
See the Appendix A.9. □
Then, given Lemmata 2 and 3, we have the following representation of Geary’s c.
Proposition 3.
For $\boldsymbol{\upsilon} = \mathbf{y},\mathbf{z}$, Geary's c can be expressed using Pearson's correlation coefficients as
$$c = \sum_{i=2}^{n}\kappa_i \cdot \rho^2(\mathbf{u}_i,\boldsymbol{\upsilon}).$$
In addition, these correlation coefficients satisfy $\sum_{i=2}^{n}\rho^2(\mathbf{u}_i,\boldsymbol{\upsilon}) = 1$.
Proof. 
See the Appendix A.10. □
Remark 7.
Dray [21] represented Moran's I using Pearson's correlation coefficients. The expression $c = \sum_{i=2}^{n}\kappa_i \cdot \rho^2(\mathbf{u}_i,\mathbf{z})$ in Proposition 3 corresponds to Dray [21] (Equation (6)).
From Proposition 3, we have the following results.
Corollary 2.
For $\boldsymbol{\upsilon} = \mathbf{y},\mathbf{z}$, it follows that
$$c = \begin{cases} \dfrac{(n-1)\lambda_2}{\Omega} < 1 & \text{if } \rho^2(\mathbf{u}_2,\boldsymbol{\upsilon}) = 1 \text{ and } \rho^2(\mathbf{u}_3,\boldsymbol{\upsilon}) = \dots = \rho^2(\mathbf{u}_n,\boldsymbol{\upsilon}) = 0, \\[4pt] \dfrac{\sum_{i=2}^{n}\lambda_i}{\Omega} = 1 & \text{if } \rho^2(\mathbf{u}_2,\boldsymbol{\upsilon}) = \dots = \rho^2(\mathbf{u}_n,\boldsymbol{\upsilon}) = \dfrac{1}{n-1}, \\[4pt] \dfrac{(n-1)\lambda_n}{\Omega} > 1 & \text{if } \rho^2(\mathbf{u}_2,\boldsymbol{\upsilon}) = \dots = \rho^2(\mathbf{u}_{n-1},\boldsymbol{\upsilon}) = 0 \text{ and } \rho^2(\mathbf{u}_n,\boldsymbol{\upsilon}) = 1. \end{cases}$$
Corollary 2 implies that plotting $(\rho^2(\mathbf{u}_2,\mathbf{y}),\dots,\rho^2(\mathbf{u}_n,\mathbf{y}))$ is also valuable for detecting spatial autocorrelation in $\mathbf{y}$.
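The spectrum recommended here is simple to compute. The sketch below obtains $\rho^2(\mathbf{u}_2,\mathbf{y}),\dots,\rho^2(\mathbf{u}_n,\mathbf{y})$ and recovers Geary's c as $\sum_{i=2}^{n}\kappa_i\rho^2(\mathbf{u}_i,\mathbf{y})$; the weights and y are the same illustrative assumptions as before.

```python
import numpy as np

W = np.array([[0.0, 1.0, 0.5, 0.0],
              [1.0, 0.0, 1.0, 0.0],
              [0.5, 1.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
y = np.array([1.0, 2.0, 4.0, 3.0])
n, Omega = len(y), W.sum()
L = np.diag(W.sum(axis=1)) - W
lam, U = np.linalg.eigh(L)

alpha = U.T @ y
rho2 = alpha[1:] ** 2 / (alpha[1:] ** 2).sum()     # rho^2(u_i, y), i = 2, ..., n
kappa = (n - 1) * lam[1:] / Omega                  # kappa_i from Lemma 2
c = (kappa * rho2).sum()                           # Proposition 3
print(np.isclose(rho2.sum(), 1.0))
print(np.round(rho2, 3), round(c, 4))
```

Plotting rho2 against i = 2, ..., n gives the spectrum-like display suggested in Section 5.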

3.4. Some Remarks

Here, we clarify the relationship between $\alpha_i^2$, $\beta_i^2$, and $\rho^2(\mathbf{u}_i,\mathbf{y})$ for $i=2,\dots,n$. They are related as
$$\alpha_i^2 = \sum_{l=1}^{n}(y_l-\bar{y})^2 \cdot \rho^2(\mathbf{u}_i,\mathbf{y}) = s^2(n-1) \cdot \rho^2(\mathbf{u}_i,\mathbf{y}), \qquad \beta_i^2 = (n-1) \cdot \rho^2(\mathbf{u}_i,\mathbf{y}), \qquad \alpha_i^2 = s^2\beta_i^2,$$
for $i=2,\dots,n$. Therefore, $\alpha_i^2$, $\beta_i^2$, and $\rho^2(\mathbf{u}_i,\mathbf{y})$ provide essentially the same information. Nevertheless, among them, we prefer $\rho^2(\mathbf{u}_i,\mathbf{y})$ to $\alpha_i^2$ and $\beta_i^2$. This is because it satisfies
$$0 \le \rho^2(\mathbf{u}_i,\mathbf{y}) \le 1, \quad i=2,\dots,n,$$
$$\sum_{i=2}^{n}\rho^2(\mathbf{u}_i,\mathbf{y}) = 1,$$
and thus its distribution resembles a probability distribution.
We note that (16) is obtainable by dividing (11) by $\sum_{i=1}^{n}(y_i-\bar{y})^2$, and it is therefore equivalent to Parseval's identity for the graph Fourier transform in (11). Likewise, (33) is also equivalent to it, because (33) is obtainable by dividing (11) by $s^2$.

4. An Illustration of When Geary’s c Becomes Greater or Lesser Than One

In this section, we illustrate that Geary's c becomes lesser (resp. greater) than one if spatially smoother (resp. less smooth) graph Laplacian eigenvectors are dominant. In other words, we show that the spatial autocorrelation measured by Geary's c is positive (resp. negative) if spatially smoother (resp. less smooth) graph Laplacian eigenvectors are dominant. Recall that, from (22), $\mathbf{u}_i$ is spatially smoother than $\mathbf{u}_{i+1}$ for $i=1,\dots,n-1$.
For this purpose, let
$$\zeta_i = \frac{\exp(-ai)}{\sqrt{\sum_{l=2}^{n}\exp(-2al)}}, \quad i=2,\dots,n,$$
where $a$ in (44) is a real number. Then, it immediately follows that
$$\zeta_2 > \dots > \zeta_n > 0 \ \text{ if } a > 0, \qquad \zeta_2 = \dots = \zeta_n = \frac{1}{\sqrt{n-1}} > 0 \ \text{ if } a = 0, \qquad 0 < \zeta_2 < \dots < \zeta_n \ \text{ if } a < 0,$$
and
$$\sum_{i=2}^{n}\zeta_i^2 = 1.$$
In addition, concerning $\zeta_2,\dots,\zeta_n$, we have the following results.
Lemma 4.
It follows that
$$\zeta_2 \to 1 \text{ and } \zeta_i \to 0 \text{ for } i=3,\dots,n \text{ as } a \to \infty, \qquad \zeta_i \to 0 \text{ for } i=2,\dots,n-1 \text{ and } \zeta_n \to 1 \text{ as } a \to -\infty.$$
Proof. 
See the Appendix A.11. □
Figure 2 illustrates the results in Lemma 4. It plots $\zeta_2,\dots,\zeta_n$ for $n=20$ and $a = -3, -1.5, -0.5, 0, 0.5, 1.5, 3$.
Let
$$\mathbf{y}_* = [y_{*,1},\dots,y_{*,n}]^\top = \mathbf{U}\boldsymbol{\zeta} = \zeta_1\mathbf{u}_1 + \dots + \zeta_n\mathbf{u}_n,$$
where $\boldsymbol{\zeta} = [\zeta_1,\dots,\zeta_n]^\top$. Here, $\zeta_1$ is a real number, and $\zeta_i$ for $i=2,\dots,n$ are defined in (44). Recall that $\{\mathbf{u}_1,\dots,\mathbf{u}_n\}$ is a basis of $n$-dimensional Euclidean space, and any vector in the space can be represented uniquely as a linear combination of $\mathbf{u}_1,\dots,\mathbf{u}_n$. In addition, we note that if $\zeta_i = \mathbf{u}_i^\top\mathbf{y}\,(=\alpha_i)$ for $i=1,\dots,n$, then $\mathbf{y}_* = \mathbf{y}$; see (10).
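A minimal sketch of this construction, assuming the example weights from Section 4, an illustrative value a = 1.5, and the (arbitrary) choice $\zeta_1 = 0$: it builds $\zeta_2,\dots,\zeta_n$ as in (44), checks (46), and forms $\mathbf{y}_*$ as in (47).

```python
import numpy as np

W = np.array([[0.0, 1.0, 0.5, 0.0],
              [1.0, 0.0, 1.0, 0.0],
              [0.5, 1.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
n = W.shape[0]
L = np.diag(W.sum(axis=1)) - W
lam, U = np.linalg.eigh(L)

a = 1.5                                            # illustrative value (assumption)
idx = np.arange(2, n + 1)
zeta_tail = np.exp(-a * idx) / np.sqrt(np.exp(-2 * a * idx).sum())   # Equation (44)
zeta = np.concatenate(([0.0], zeta_tail))          # zeta_1 is arbitrary; 0 is used here
y_star = U @ zeta                                  # Equation (47)
print(np.isclose((zeta_tail ** 2).sum(), 1.0))     # Equation (46)
print(np.round(y_star, 4))
```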
Given the results presented above on $\zeta_2,\dots,\zeta_n$ and the definition of $\mathbf{y}_*$, we have the following results.
Proposition 4.
(i) $\|\mathbf{B}^\top\mathbf{y}_*\|^2\left(=\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}w_{ij}(y_{*,i}-y_{*,j})^2\right)$ can be represented as
$$\|\mathbf{B}^\top\mathbf{y}_*\|^2 = \zeta_2^2\lambda_2 + \dots + \zeta_n^2\lambda_n$$
$$= \zeta_2^2\|\mathbf{B}^\top\mathbf{u}_2\|^2 + \dots + \zeta_n^2\|\mathbf{B}^\top\mathbf{u}_n\|^2.$$
(ii) $\|\mathbf{B}^\top\mathbf{y}_*\|^2$ satisfies the following inequalities:
$$0 \le \lambda_2 < \|\mathbf{B}^\top\mathbf{y}_*\|^2 < \lambda_n.$$
(iii) $\|\mathbf{B}^\top\mathbf{y}_*\|^2$ depends on $a$ in (44) as
$$\|\mathbf{B}^\top\mathbf{y}_*\|^2 \begin{cases} \to \lambda_2 = \|\mathbf{B}^\top\mathbf{u}_2\|^2 & \text{as } a \to \infty, \\ = \frac{1}{n-1}\sum_{i=2}^{n}\lambda_i = \frac{1}{n-1}\sum_{i=2}^{n}\|\mathbf{B}^\top\mathbf{u}_i\|^2 & \text{if } a = 0, \\ \to \lambda_n = \|\mathbf{B}^\top\mathbf{u}_n\|^2 & \text{as } a \to -\infty. \end{cases}$$
Proof. 
See the Appendix A.12. □
(i) Recall that $\lambda_2$ (resp. $\lambda_n$) denotes the minimum (resp. maximum) of the graph Laplacian eigenvalues $\lambda_2,\dots,\lambda_n$, and $\frac{1}{n-1}\sum_{i=2}^{n}\lambda_i$ is their average. In addition, from (7), it follows that
$$0 \le \lambda_2 < \frac{1}{n-1}\sum_{i=2}^{n}\lambda_i < \lambda_n.$$
(ii) Lemma 4 and Proposition 4 imply that
(a) as $a$ increases from 0 to $\infty$, spatially smoother graph Laplacian eigenvectors become more dominant, and accordingly $\mathbf{y}_*$ becomes spatially smoother;
(b) as $a$ decreases from 0 to $-\infty$, spatially less smooth graph Laplacian eigenvectors become more dominant, and accordingly $\mathbf{y}_*$ becomes spatially less smooth; and
(c) if $a$ equals 0, then all graph Laplacian eigenvectors contribute to $\mathbf{y}_*$ equally.
Let
$$c_* = \frac{n-1}{2\Omega} \cdot \frac{\sum_{i=1}^{n}\sum_{j=1}^{n}w_{ij}(y_{*,i}-y_{*,j})^2}{\sum_{i=1}^{n}(y_{*,i}-\bar{y}_*)^2} = \frac{n-1}{\Omega} \cdot \frac{\mathbf{y}_*^\top\mathbf{L}\mathbf{y}_*}{\mathbf{y}_*^\top\mathbf{Q}_\iota\mathbf{y}_*},$$
where $\bar{y}_* = \frac{1}{n}\sum_{i=1}^{n}y_{*,i}$. Note that $c_*$ represents Geary's c for $\mathbf{y}_*$ defined in (47).
Then, we have the following results.
Proposition 5.
(i) $c_*$ in (52) can be represented as follows:
$$c_* = \frac{n-1}{2\Omega}\sum_{i=1}^{n}\sum_{j=1}^{n}w_{ij}(y_{*,i}-y_{*,j})^2$$
$$= \frac{n-1}{\Omega}\left(\zeta_2^2\lambda_2 + \dots + \zeta_n^2\lambda_n\right)$$
$$= \frac{n-1}{\Omega}\left(\zeta_2^2\|\mathbf{B}^\top\mathbf{u}_2\|^2 + \dots + \zeta_n^2\|\mathbf{B}^\top\mathbf{u}_n\|^2\right).$$
(ii) $c_*$ satisfies the following inequalities:
$$0 \le \frac{(n-1)\lambda_2}{\Omega} < c_* < \frac{(n-1)\lambda_n}{\Omega}.$$
(iii) $c_*$ depends on $a$ in (44) as
$$c_* \begin{cases} \to \dfrac{(n-1)\lambda_2}{\Omega} = \dfrac{n-1}{\Omega}\|\mathbf{B}^\top\mathbf{u}_2\|^2 < 1 & \text{as } a \to \infty, \\[4pt] = \dfrac{\sum_{i=2}^{n}\lambda_i}{\Omega} = \dfrac{n-1}{\Omega}\cdot\dfrac{1}{n-1}\sum_{i=2}^{n}\|\mathbf{B}^\top\mathbf{u}_i\|^2 = 1 & \text{if } a = 0, \\[4pt] \to \dfrac{(n-1)\lambda_n}{\Omega} = \dfrac{n-1}{\Omega}\|\mathbf{B}^\top\mathbf{u}_n\|^2 > 1 & \text{as } a \to -\infty. \end{cases}$$
Proof. 
See the Appendix A.13. □
Remark 8.
(i) From (54), it follows that $\zeta_1$ does not affect $c_*$. (ii) Given that $\zeta_i^2 > 0$ for $i=2,\dots,n$ and $\sum_{i=2}^{n}\zeta_i^2 = 1$, $\zeta_2^2\lambda_2 + \dots + \zeta_n^2\lambda_n$ in (48) is a weighted average of the graph Laplacian eigenvalues $\lambda_2,\dots,\lambda_n$.
By combining Lemma 4 and Proposition 5, it follows that (i) as $a$ increases from 0 to $\infty$, spatially smoother graph Laplacian eigenvectors become more dominant, and accordingly the corresponding Geary's c tends to $\frac{(n-1)\lambda_2}{\Omega}\,(=\kappa_2)$, which is its lower bound; (ii) as $a$ decreases from 0 to $-\infty$, spatially less smooth graph Laplacian eigenvectors become more dominant, and accordingly the corresponding Geary's c tends to $\frac{(n-1)\lambda_n}{\Omega}\,(=\kappa_n)$, which is its upper bound; and (iii) if $a$ equals 0, then all graph Laplacian eigenvectors contribute to $\mathbf{y}_*$ equally, and the corresponding Geary's c equals 1.
Let us illustrate Propositions 4 and 5 by specifying $\mathbf{W}$. We consider three $\mathbf{W}$'s. The first one is $\mathbf{W}^\circ$ in (2) with $w_{12}=1$, $w_{13}=0.5$, $w_{23}=1$, and $w_{34}=1$. As stated, $\mathbf{W}^\circ$ is a non-negative, symmetric, hollow, and nonzero matrix. The second one is
$$\mathbf{W}_p = \begin{bmatrix} 0 & 1 & & & \\ 1 & 0 & 1 & & \\ & 1 & 0 & \ddots & \\ & & \ddots & \ddots & 1 \\ & & & 1 & 0 \end{bmatrix} \in \mathbb{R}^{n \times n},$$
where $n=20$. This matrix is the adjacency matrix of a path graph $G_p=(V_p,E_p)$ such that $V_p=\{1,\dots,n\}$ and $E_p=\{\{1,2\},\dots,\{n-1,n\}\}$. See Figure A1, which depicts $G_p=(V_p,E_p)$ with $n=6$. Observe that it is a non-negative, symmetric, hollow, and nonzero matrix. Some remarks on $\mathbf{W}_p$ are provided in Appendix A.14. The third one is $\mathbf{W}_q$, shown in heatmap style in Figure A2. As shown in the figure, $\mathbf{W}_q$ is also a non-negative, symmetric, hollow, and nonzero matrix.
Table 1 tabulates the results. LB and UB, respectively, represent the lower and upper bounds of $\|\mathbf{B}^\top\mathbf{y}_*\|^2$ (resp. $c_*$) given by (50) (resp. (56)). From the table, we can confirm that these results clearly illustrate Propositions 4 and 5. That is, for all three $\mathbf{W}$'s, we observe that as $a$ increases from 0 to 3 (resp. decreases from 0 to $-3$), $\mathbf{y}_*$ becomes spatially smoother (resp. less smooth), and Geary's c tends to $\frac{(n-1)\lambda_2}{\Omega}$ (resp. $\frac{(n-1)\lambda_n}{\Omega}$).
Remark 9.
(i) If $\mathbf{W}=\mathbf{W}^\circ$, then $\frac{n-1}{\Omega}$ equals $\frac{3}{7}$. Thus, if $\mathbf{W}=\mathbf{W}^\circ$, it follows that $c_* = \frac{3}{7}\|\mathbf{B}^\top\mathbf{y}_*\|^2$. (ii) If $\mathbf{W}=\mathbf{W}_p$, then $\frac{n-1}{\Omega}$ equals $\frac{1}{2}$. Thus, if $\mathbf{W}=\mathbf{W}_p$, it follows that $c_* = \frac{1}{2}\|\mathbf{B}^\top\mathbf{y}_*\|^2$. (iii) If $\mathbf{W}=\mathbf{W}_q$, then $\frac{n-1}{\Omega}$ equals $\frac{19}{139.8}$. Thus, if $\mathbf{W}=\mathbf{W}_q$, it follows that $c_* = \frac{19}{139.8}\|\mathbf{B}^\top\mathbf{y}_*\|^2$.
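The Table 1 entries for $\mathbf{W}^\circ$ can be reproduced along these lines. The sketch below, using the stated weights, evaluates $\|\mathbf{B}^\top\mathbf{y}_*\|^2 = \zeta_2^2\lambda_2 + \dots + \zeta_n^2\lambda_n$ (Proposition 4(i)) and $c_* = \frac{n-1}{\Omega}\|\mathbf{B}^\top\mathbf{y}_*\|^2$ (Remark 9(i)) for a few values of a; the output matches the corresponding rows of the $\mathbf{W}^\circ$ columns up to rounding.

```python
import numpy as np

W = np.array([[0.0, 1.0, 0.5, 0.0],
              [1.0, 0.0, 1.0, 0.0],
              [0.5, 1.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
n, Omega = W.shape[0], W.sum()
L = np.diag(W.sum(axis=1)) - W
lam = np.linalg.eigvalsh(L)                        # ascending eigenvalues, lam[0] = 0

idx = np.arange(2, n + 1)
for a in (3.0, 0.0, -3.0):
    zeta2 = np.exp(-2 * a * idx) / np.exp(-2 * a * idx).sum()  # zeta_i^2, i = 2, ..., n
    By2 = (zeta2 * lam[1:]).sum()                  # ||B^T y_*||^2, Proposition 4(i)
    c_star = (n - 1) / Omega * By2                 # c_* = (n-1)/Omega * ||B^T y_*||^2
    print(a, round(By2, 4), round(c_star, 4))
```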

5. Concluding Remarks

Herein, we have provided a new perspective on Geary’s c. We reconsidered it using concepts from spectral graph theory/linear algebraic graph theory.
First, we demonstrated three types of representations for it. The first is expressions based on graph Laplacian (Proposition 1), the second is expressions based on graph Fourier transform (Proposition 2), and the third is an expression based on Pearson’s correlation coefficient (Proposition 3).
Second, we illustrated that the spatial autocorrelation measured by Geary’s c is positive (resp. negative) if spatially smoother (resp. less smooth) graph Laplacian eigenvectors are dominant (Propositions 4 and 5 and Table 1).
In closing, based on our analysis, we provide a recommendation for applied studies: for detecting spatial autocorrelation, in addition to calculating Geary's c, it is valuable to plot $\rho^2(\mathbf{u}_i,\mathbf{y})$, the squared Pearson's correlation coefficient between $\mathbf{u}_i$ and $\mathbf{y}$, for $i=2,\dots,n$ (see Section 3.4 for a related discussion). This is because it provides more detailed information on the spatial autocorrelation measured by Geary's c. Its usefulness is similar to that of frequency-domain analysis of a univariate time series.

Funding

This research was funded by Japan Society for the Promotion of Science, grant number 20K20759.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

We are grateful to Qu Feng, Kazuhiko Hayakawa, Peter C. B. Phillips, Takashi Yamagata, and two anonymous referees for their valuable suggestions and comments. We also thank the participants of the online seminars/conferences hosted by Hiroshima University, Nanyang Technological University, Singapore Management University, and University of Tokyo. The usual caveat applies.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A

Appendix A.1. The Case in Which λ2 = ⋯ = λn

Lemma A1.
If $\lambda_2 = \dots = \lambda_n$, then Geary's c equals one for any $\mathbf{y}$.
Proof. 
Given $\lambda_2 = \dots = \lambda_n = \lambda$, we have $\Omega = \operatorname{tr}(\mathbf{L}) = (n-1)\lambda$. Then, from (30), it follows that
$$c = \frac{n-1}{\Omega} \cdot \frac{\sum_{i=2}^{n}\lambda_i\alpha_i^2}{\sum_{i=2}^{n}\alpha_i^2} = \frac{n-1}{(n-1)\lambda} \cdot \frac{\lambda\sum_{i=2}^{n}\alpha_i^2}{\sum_{i=2}^{n}\alpha_i^2} = 1. \qquad \square$$
Example A1.
Consider the complete graph with $n$ vertices, whose $\mathbf{W}$ equals $\boldsymbol{\iota}\boldsymbol{\iota}^\top - \mathbf{I}_n$. Then, in this case, $\mathbf{L}$ equals
$$(n-1)\mathbf{I}_n - (\boldsymbol{\iota}\boldsymbol{\iota}^\top - \mathbf{I}_n) = n\mathbf{I}_n - \boldsymbol{\iota}\boldsymbol{\iota}^\top = n\left(\mathbf{I}_n - \frac{1}{n}\boldsymbol{\iota}\boldsymbol{\iota}^\top\right) = n\mathbf{Q}_\iota.$$
As the eigenvalues of $\mathbf{Q}_\iota$ are 0 with multiplicity 1 and 1 with multiplicity $n-1$, the eigenvalues of $n\mathbf{Q}_\iota$ are 0 with multiplicity 1 and $n$ with multiplicity $n-1$. Given that $\Omega$ equals $n(n-1)$, Geary's c equals
$$\frac{n-1}{\Omega} \cdot \frac{\mathbf{y}^\top\mathbf{L}\mathbf{y}}{\mathbf{y}^\top\mathbf{Q}_\iota\mathbf{y}} = \frac{n-1}{n(n-1)} \cdot \frac{n\,\mathbf{y}^\top\mathbf{Q}_\iota\mathbf{y}}{\mathbf{y}^\top\mathbf{Q}_\iota\mathbf{y}} = 1.$$

Appendix A.2. Proof of (11)

Given $\mathbf{Q}_\iota\mathbf{u}_1 = \mathbf{0}$ and $\mathbf{Q}_\iota\mathbf{U}_2 = \mathbf{U}_2$, we have
$$\mathbf{y}^\top\mathbf{Q}_\iota\mathbf{U} = \mathbf{y}^\top\mathbf{Q}_\iota[\mathbf{u}_1,\mathbf{U}_2] = [0,\mathbf{y}^\top\mathbf{U}_2] = [0,\boldsymbol{\alpha}_2^\top],$$
which leads to
$$\sum_{i=1}^{n}(y_i-\bar{y})^2 = \mathbf{y}^\top\mathbf{Q}_\iota\mathbf{y} = \mathbf{y}^\top\mathbf{Q}_\iota\mathbf{U}\mathbf{U}^\top\mathbf{Q}_\iota\mathbf{y} = \boldsymbol{\alpha}_2^\top\boldsymbol{\alpha}_2 = \alpha_2^2 + \dots + \alpha_n^2.$$
Here, we note that we may prove (11) alternatively by combining
$$\alpha_1^2 + \sum_{i=2}^{n}\alpha_i^2 = \|\boldsymbol{\alpha}\|^2 = \|\mathbf{U}^\top\mathbf{y}\|^2 = \|\mathbf{y}\|^2 = \sum_{i=1}^{n}y_i^2$$
and $\alpha_1^2 = (\mathbf{u}_1^\top\mathbf{y})^2 = \frac{1}{n}\left(\sum_{i=1}^{n}y_i\right)^2 = n(\bar{y})^2$.

Appendix A.3. Proof of (14)

$\sum_{i=1}^{n}\sum_{j=1}^{n}w_{ij}(y_i-y_j)^2$ can be decomposed as follows:
$$\sum_{i=1}^{n}\sum_{j=1}^{n}w_{ij}(y_i-y_j)^2 = \sum_{i=1}^{n}\sum_{j=1}^{n}w_{ij}(y_i^2 - 2y_iy_j + y_j^2) = \sum_{i=1}^{n}\sum_{j=1}^{n}w_{ij}y_i^2 - 2\sum_{i=1}^{n}\sum_{j=1}^{n}w_{ij}y_iy_j + \sum_{i=1}^{n}\sum_{j=1}^{n}w_{ij}y_j^2.$$
Given (a) $y_i = \mathbf{y}^\top\mathbf{e}_i$, (b) $y_j = \mathbf{e}_j^\top\mathbf{y}$, and (c) $\sum_{i=1}^{n}\sum_{j=1}^{n}w_{ij}\mathbf{e}_i\mathbf{e}_j^\top = \mathbf{W}$, we have
$$\sum_{i=1}^{n}\sum_{j=1}^{n}w_{ij}y_iy_j = \sum_{i=1}^{n}\sum_{j=1}^{n}w_{ij}\mathbf{y}^\top\mathbf{e}_i\mathbf{e}_j^\top\mathbf{y} = \mathbf{y}^\top\left(\sum_{i=1}^{n}\sum_{j=1}^{n}w_{ij}\mathbf{e}_i\mathbf{e}_j^\top\right)\mathbf{y} = \mathbf{y}^\top\mathbf{W}\mathbf{y}.$$
In addition, we have
$$\sum_{i=1}^{n}\sum_{j=1}^{n}w_{ij}y_i^2 = \sum_{i=1}^{n}y_i^2\sum_{j=1}^{n}w_{ij} = \sum_{i=1}^{n}y_i^2 d_i = \mathbf{y}^\top\mathbf{D}\mathbf{y}$$
and, given that $w_{ij} = w_{ji}$ for $i,j=1,\dots,n$, it follows that
$$\sum_{i=1}^{n}\sum_{j=1}^{n}w_{ij}y_j^2 = \sum_{j=1}^{n}\sum_{i=1}^{n}w_{ji}y_j^2 = \sum_{j=1}^{n}y_j^2\sum_{i=1}^{n}w_{ji} = \sum_{j=1}^{n}y_j^2 d_j = \mathbf{y}^\top\mathbf{D}\mathbf{y}.$$
Finally, substituting (A3)–(A5) into (A2) yields
$$\sum_{i=1}^{n}\sum_{j=1}^{n}w_{ij}(y_i-y_j)^2 = \mathbf{y}^\top\mathbf{D}\mathbf{y} - 2\mathbf{y}^\top\mathbf{W}\mathbf{y} + \mathbf{y}^\top\mathbf{D}\mathbf{y} = 2\mathbf{y}^\top(\mathbf{D}-\mathbf{W})\mathbf{y} = 2\mathbf{y}^\top\mathbf{L}\mathbf{y}.$$

Appendix A.4. Proof of Lemma 1

Let $\mathbf{f} = [f_1,\dots,f_n]^\top$. From (14), it follows that
$$\mathbf{f}^\top\mathbf{L}\mathbf{f} = \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}w_{ij}(f_i-f_j)^2 = \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}w_{ij}(\mathbf{f}^\top\mathbf{e}_i - \mathbf{f}^\top\mathbf{e}_j)(\mathbf{e}_i^\top\mathbf{f} - \mathbf{e}_j^\top\mathbf{f}) = \mathbf{f}^\top\left(\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}w_{ij}(\mathbf{e}_i-\mathbf{e}_j)(\mathbf{e}_i-\mathbf{e}_j)^\top\right)\mathbf{f} = \mathbf{f}^\top\left(\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\mathbf{b}_{ij}\mathbf{b}_{ij}^\top\right)\mathbf{f}.$$
Given (a) $\mathbf{b}_{ij}\mathbf{b}_{ij}^\top = w_{ij}(\mathbf{e}_i-\mathbf{e}_j)(\mathbf{e}_i-\mathbf{e}_j)^\top = \mathbf{0} \in \mathbb{R}^{n \times n}$ if $i=j$, (b) $\mathbf{b}_{ij}\mathbf{b}_{ij}^\top = w_{ij}(\mathbf{e}_i-\mathbf{e}_j)(\mathbf{e}_i-\mathbf{e}_j)^\top = w_{ji}(\mathbf{e}_j-\mathbf{e}_i)(\mathbf{e}_j-\mathbf{e}_i)^\top = \mathbf{b}_{ji}\mathbf{b}_{ji}^\top \in \mathbb{R}^{n \times n}$ for $i,j=1,\dots,n$, and (c) for $k=1,\dots,n-1$,
$$\mathbf{B}_k\mathbf{B}_k^\top = [\mathbf{b}_{k(k+1)},\dots,\mathbf{b}_{kn}]\begin{bmatrix}\mathbf{b}_{k(k+1)}^\top \\ \vdots \\ \mathbf{b}_{kn}^\top\end{bmatrix} = \sum_{j=k+1}^{n}\mathbf{b}_{kj}\mathbf{b}_{kj}^\top,$$
it follows that
$$\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\mathbf{b}_{ij}\mathbf{b}_{ij}^\top = \frac{1}{2}\cdot 2\left(\sum_{j=2}^{n}\mathbf{b}_{1j}\mathbf{b}_{1j}^\top + \sum_{j=3}^{n}\mathbf{b}_{2j}\mathbf{b}_{2j}^\top + \dots + \sum_{j=n}^{n}\mathbf{b}_{(n-1)j}\mathbf{b}_{(n-1)j}^\top\right) = \mathbf{B}_1\mathbf{B}_1^\top + \mathbf{B}_2\mathbf{B}_2^\top + \dots + \mathbf{B}_{n-1}\mathbf{B}_{n-1}^\top = \mathbf{B}\mathbf{B}^\top.$$
By substituting (A7) into (A6), we obtain $\mathbf{f}^\top\mathbf{L}\mathbf{f} = \mathbf{f}^\top\mathbf{B}\mathbf{B}^\top\mathbf{f} = \|\mathbf{B}^\top\mathbf{f}\|^2$. Next, given that $\mathbf{u}_i^\top\mathbf{U} = \mathbf{u}_i^\top[\mathbf{u}_1,\dots,\mathbf{u}_n] = \mathbf{e}_i^\top$, we have
$$\mathbf{u}_i^\top\mathbf{L}\mathbf{u}_i = \mathbf{u}_i^\top\mathbf{U}\boldsymbol{\Lambda}\mathbf{U}^\top\mathbf{u}_i = \mathbf{e}_i^\top\boldsymbol{\Lambda}\mathbf{e}_i = \lambda_i, \quad i=1,\dots,n.$$
Given these results, we have
$$\|\mathbf{B}^\top\mathbf{u}_i\|^2 = \mathbf{u}_i^\top\mathbf{L}\mathbf{u}_i = \lambda_i, \quad i=1,\dots,n.$$

Appendix A.5. Proof of Proposition 1

As stated, (24) immediately follows from $\mathbf{y}^\top\mathbf{L}\mathbf{y} = \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}w_{ij}(y_i-y_j)^2$ and $\mathbf{y}^\top\mathbf{Q}_\iota\mathbf{y} = \sum_{i=1}^{n}(y_i-\bar{y})^2$. Next, given (a) $\mathbf{Q}_\iota\mathbf{L}\mathbf{Q}_\iota = \mathbf{L}$, (b) $s = \sqrt{\frac{1}{n-1}\mathbf{y}^\top\mathbf{Q}_\iota\mathbf{y}}$, and (c) $\mathbf{z} = \frac{1}{s}\mathbf{Q}_\iota\mathbf{y}$, Geary's c in (24) can be expressed as
$$c = \frac{n-1}{\Omega} \cdot \frac{\mathbf{y}^\top\mathbf{L}\mathbf{y}}{\mathbf{y}^\top\mathbf{Q}_\iota\mathbf{y}} = \frac{n-1}{\Omega} \cdot \frac{\mathbf{y}^\top\mathbf{Q}_\iota\mathbf{L}\mathbf{Q}_\iota\mathbf{y}}{(n-1)s^2} = \frac{1}{\Omega}\left(\frac{\mathbf{Q}_\iota\mathbf{y}}{s}\right)^\top\mathbf{L}\left(\frac{\mathbf{Q}_\iota\mathbf{y}}{s}\right) = \frac{1}{\Omega}\mathbf{z}^\top\mathbf{L}\mathbf{z}.$$

Appendix A.6. Proof of Proposition 2

Most results have already been proved; we only have to prove that $\theta_i \ge 0$ for $i=2,\dots,n$, that $\sum_{i=2}^{n}\theta_i = 1$, and (33). (i) Given $\Omega > 0$ and $\lambda_i \ge 0$ for $i=2,\dots,n$, we have $\theta_i \ge 0$ for $i=2,\dots,n$. (ii) Given $\lambda_1 = 0$ and $\operatorname{tr}(\mathbf{L}) = \Omega$, we have $\sum_{i=2}^{n}\lambda_i = \sum_{i=1}^{n}\lambda_i = \operatorname{tr}(\mathbf{L}) = \Omega$, which leads to
$$\sum_{i=2}^{n}\theta_i = \sum_{i=2}^{n}\frac{\lambda_i}{\Omega} = \frac{\sum_{i=2}^{n}\lambda_i}{\Omega} = 1.$$
(iii) Finally, we prove (33). Given (a) $\mathbf{Q}_\iota\mathbf{z} = \mathbf{z}$, (b) $\mathbf{z} = \sum_{i=1}^{n}\beta_i\mathbf{u}_i$, (c) $\mathbf{Q}_\iota\mathbf{u}_1 = \mathbf{0}$, and (d) $\mathbf{Q}_\iota\mathbf{u}_i = \mathbf{u}_i$ for $i=2,\dots,n$, we have
$$\mathbf{z} = \mathbf{Q}_\iota\mathbf{z} = \mathbf{Q}_\iota\sum_{i=1}^{n}\beta_i\mathbf{u}_i = \sum_{i=2}^{n}\beta_i\mathbf{u}_i.$$
Thus, given that $\mathbf{U} = [\mathbf{u}_1,\dots,\mathbf{u}_n]$ is an orthogonal matrix and $\frac{1}{n-1}\mathbf{z}^\top\mathbf{Q}_\iota\mathbf{z} = \frac{1}{n-1}\mathbf{z}^\top\mathbf{z} = 1$, it follows that
$$\sum_{i=2}^{n}\beta_i^2 = \|\beta_2\mathbf{u}_2 + \dots + \beta_n\mathbf{u}_n\|^2 = \|\mathbf{z}\|^2 = \mathbf{z}^\top\mathbf{z} = n-1.$$

Appendix A.7. Proof of Corollary 1

Given $(n-1)\lambda_2 < \sum_{i=2}^{n}\lambda_i < (n-1)\lambda_n$ from (7) and $\Omega = \operatorname{tr}(\mathbf{L}) = \sum_{i=2}^{n}\lambda_i > 0$, we have the following inequalities:
$$\frac{(n-1)\lambda_2}{\Omega} < \frac{\sum_{i=2}^{n}\lambda_i}{\Omega} = 1 < \frac{(n-1)\lambda_n}{\Omega}.$$

Appendix A.8. Proof of Lemma 2

Given that, for $i=2,\dots,n$, $\mathbf{Q}_\iota\mathbf{u}_i = \mathbf{u}_i$ and $\mathbf{u}_i^\top\mathbf{u}_i = 1$, it follows that
$$\mathbf{u}_i^\top\mathbf{Q}_\iota\mathbf{u}_i = \mathbf{u}_i^\top\mathbf{u}_i = 1.$$
In addition, from (A8), we have $\mathbf{u}_i^\top\mathbf{L}\mathbf{u}_i = \lambda_i$ for $i=1,\dots,n$. Combining these results yields
$$\kappa_i = \frac{n-1}{\Omega} \cdot \frac{\mathbf{u}_i^\top\mathbf{L}\mathbf{u}_i}{\mathbf{u}_i^\top\mathbf{Q}_\iota\mathbf{u}_i} = \frac{(n-1)\lambda_i}{\Omega}, \quad i=2,\dots,n.$$
Next, the inequalities $\kappa_2 \le \dots \le \kappa_n$ and $\kappa_2 < \kappa_n$ immediately follow from $\lambda_2 \le \dots \le \lambda_n$ and $\lambda_2 < \lambda_n$.

Appendix A.9. Proof of Lemma 3

Given $\mathbf{Q}_\iota\mathbf{z} = \mathbf{z}$, $\frac{1}{n-1}\mathbf{z}^\top\mathbf{Q}_\iota\mathbf{z} = 1$, and $s = \sqrt{\frac{1}{n-1}\mathbf{y}^\top\mathbf{Q}_\iota\mathbf{y}}$, we have
$$\rho(\mathbf{u}_i,\mathbf{z}) = \frac{\mathbf{u}_i^\top\mathbf{Q}_\iota\mathbf{z}}{\sqrt{\mathbf{u}_i^\top\mathbf{Q}_\iota\mathbf{u}_i}\sqrt{\mathbf{z}^\top\mathbf{Q}_\iota\mathbf{z}}} = \frac{\mathbf{u}_i^\top\mathbf{z}}{\sqrt{\mathbf{u}_i^\top\mathbf{Q}_\iota\mathbf{u}_i}\sqrt{n-1}} = \frac{\mathbf{u}_i^\top\frac{\mathbf{Q}_\iota\mathbf{y}}{s}}{\sqrt{\mathbf{u}_i^\top\mathbf{Q}_\iota\mathbf{u}_i}\sqrt{n-1}} = \frac{\mathbf{u}_i^\top\mathbf{Q}_\iota\mathbf{y}}{\sqrt{\mathbf{u}_i^\top\mathbf{Q}_\iota\mathbf{u}_i}\sqrt{\mathbf{y}^\top\mathbf{Q}_\iota\mathbf{y}}} = \rho(\mathbf{u}_i,\mathbf{y}), \quad i=2,\dots,n.$$

Appendix A.10. Proof of Proposition 3

Given (a) $\mathbf{Q}_\iota\mathbf{u}_i = \mathbf{u}_i$ and $\mathbf{u}_i^\top\mathbf{u}_i = 1$ for $i=2,\dots,n$ and (b) $\mathbf{y}^\top\mathbf{Q}_\iota\mathbf{y} = \sum_{i=2}^{n}\alpha_i^2$, we have
$$\rho(\mathbf{u}_i,\mathbf{y}) = \frac{\mathbf{u}_i^\top\mathbf{Q}_\iota\mathbf{y}}{\sqrt{\mathbf{u}_i^\top\mathbf{Q}_\iota\mathbf{u}_i}\sqrt{\mathbf{y}^\top\mathbf{Q}_\iota\mathbf{y}}} = \frac{\mathbf{u}_i^\top\mathbf{y}}{\sqrt{\mathbf{y}^\top\mathbf{Q}_\iota\mathbf{y}}} = \frac{\alpha_i}{\sqrt{\sum_{l=2}^{n}\alpha_l^2}}, \quad i=2,\dots,n,$$
which leads to
$$\rho^2(\mathbf{u}_i,\mathbf{y}) = \frac{\alpha_i^2}{\sum_{l=2}^{n}\alpha_l^2}, \quad i=2,\dots,n.$$
Given Lemmata 2 and 3, by substituting (A11) into (28), it follows that
$$c = \frac{n-1}{\Omega}\sum_{i=2}^{n}\lambda_i \cdot \rho^2(\mathbf{u}_i,\mathbf{y}) = \sum_{i=2}^{n}\kappa_i \cdot \rho^2(\mathbf{u}_i,\mathbf{y}) = \sum_{i=2}^{n}\kappa_i \cdot \rho^2(\mathbf{u}_i,\mathbf{z}).$$
Finally, from Lemma 3 and (A11), it follows that
$$\sum_{i=2}^{n}\rho^2(\mathbf{u}_i,\mathbf{z}) = \sum_{i=2}^{n}\rho^2(\mathbf{u}_i,\mathbf{y}) = \frac{\sum_{i=2}^{n}\alpha_i^2}{\sum_{l=2}^{n}\alpha_l^2} = 1.$$

Appendix A.11. Proof of Lemma 4

For $i=3,\dots,n$, it follows that
$$\frac{\zeta_i^2}{\zeta_2^2} = \frac{\exp(-2ai)}{\exp(-2a\cdot 2)} = \exp\{2a(2-i)\} \to 0 \ \text{ as } a \to \infty.$$
Accordingly, we obtain
$$\zeta_2^2 = \frac{\exp(-2a\cdot 2)}{\sum_{l=2}^{n}\exp(-2al)} = \frac{1}{1 + \sum_{l=3}^{n}\frac{\exp(-2al)}{\exp(-2a\cdot 2)}} \to 1 \ \text{ as } a \to \infty,$$
and, for $i=3,\dots,n$,
$$\zeta_i^2 = \frac{\exp(-2ai)}{\sum_{l=2}^{n}\exp(-2al)} = \frac{\frac{\exp(-2ai)}{\exp(-2a\cdot 2)}}{1 + \sum_{l=3}^{n}\frac{\exp(-2al)}{\exp(-2a\cdot 2)}} \to 0 \ \text{ as } a \to \infty.$$
Therefore, given $\zeta_2 > 0$, it follows that $\zeta_2 \to 1$ and $\zeta_i \to 0$ for $i=3,\dots,n$ as $a \to \infty$.
Likewise, for $i=2,\dots,n-1$, it follows that
$$\frac{\zeta_i^2}{\zeta_n^2} = \frac{\exp(-2ai)}{\exp(-2an)} = \exp\{2a(n-i)\} \to 0 \ \text{ as } a \to -\infty.$$
Accordingly, we obtain
$$\zeta_n^2 = \frac{\exp(-2an)}{\sum_{l=2}^{n}\exp(-2al)} = \frac{1}{\sum_{l=2}^{n-1}\frac{\exp(-2al)}{\exp(-2an)} + 1} \to 1 \ \text{ as } a \to -\infty,$$
and, for $i=2,\dots,n-1$,
$$\zeta_i^2 = \frac{\exp(-2ai)}{\sum_{l=2}^{n}\exp(-2al)} = \frac{\frac{\exp(-2ai)}{\exp(-2an)}}{\sum_{l=2}^{n-1}\frac{\exp(-2al)}{\exp(-2an)} + 1} \to 0 \ \text{ as } a \to -\infty.$$
Therefore, given $\zeta_n > 0$, it follows that $\zeta_i \to 0$ for $i=2,\dots,n-1$ and $\zeta_n \to 1$ as $a \to -\infty$.

Appendix A.12. Proof of Proposition 4

(i) Given $\|\mathbf{B}^\top\mathbf{y}_*\|^2 = \mathbf{y}_*^\top\mathbf{L}\mathbf{y}_*$ from (18), $\mathbf{y}_* = \mathbf{U}\boldsymbol{\zeta}$, and $\lambda_1 = 0$, we have
$$\|\mathbf{B}^\top\mathbf{y}_*\|^2 = \mathbf{y}_*^\top\mathbf{L}\mathbf{y}_* = (\mathbf{U}\boldsymbol{\zeta})^\top\mathbf{L}(\mathbf{U}\boldsymbol{\zeta}) = \boldsymbol{\zeta}^\top\mathbf{U}^\top\mathbf{U}\boldsymbol{\Lambda}\mathbf{U}^\top\mathbf{U}\boldsymbol{\zeta} = \boldsymbol{\zeta}^\top\boldsymbol{\Lambda}\boldsymbol{\zeta} = \zeta_2^2\lambda_2 + \dots + \zeta_n^2\lambda_n = \zeta_2^2\|\mathbf{B}^\top\mathbf{u}_2\|^2 + \dots + \zeta_n^2\|\mathbf{B}^\top\mathbf{u}_n\|^2.$$
Note that the last equality in (A12) follows from (19). (ii) Given (A12), $0 \le \lambda_2 \le \dots \le \lambda_n$, $\lambda_2 < \lambda_n$, $\sum_{i=2}^{n}\zeta_i^2 = 1$, and $\zeta_i^2 > 0$ for $i=2,\dots,n$, we have
$$0 \le \lambda_2 = \lambda_2(\zeta_2^2 + \dots + \zeta_n^2) < \|\mathbf{B}^\top\mathbf{y}_*\|^2 = \zeta_2^2\lambda_2 + \dots + \zeta_n^2\lambda_n < \lambda_n(\zeta_2^2 + \dots + \zeta_n^2) = \lambda_n.$$
(iii) As shown in (45), if $a=0$, then $\zeta_i = \frac{1}{\sqrt{n-1}}$ for $i=2,\dots,n$, from which we have
$$\|\mathbf{B}^\top\mathbf{y}_*\|^2 = \zeta_2^2\lambda_2 + \dots + \zeta_n^2\lambda_n = \frac{1}{n-1}\sum_{i=2}^{n}\lambda_i = \frac{1}{n-1}\sum_{i=2}^{n}\|\mathbf{B}^\top\mathbf{u}_i\|^2.$$
Next, given Lemma 4, we have
$$\|\mathbf{B}^\top\mathbf{y}_*\|^2 = \zeta_2^2\lambda_2 + \dots + \zeta_n^2\lambda_n \to \lambda_2 = \|\mathbf{B}^\top\mathbf{u}_2\|^2 \ \text{ as } a \to \infty, \qquad \|\mathbf{B}^\top\mathbf{y}_*\|^2 = \zeta_2^2\lambda_2 + \dots + \zeta_n^2\lambda_n \to \lambda_n = \|\mathbf{B}^\top\mathbf{u}_n\|^2 \ \text{ as } a \to -\infty.$$

Appendix A.13. Proof of Proposition 5

(i) Given that $\mathbf{Q}_\iota\mathbf{U} = [\mathbf{0},\mathbf{u}_2,\dots,\mathbf{u}_n]$, we have $\mathbf{Q}_\iota\mathbf{U}\boldsymbol{\zeta} = \sum_{i=2}^{n}\zeta_i\mathbf{u}_i$. Accordingly, given $\mathbf{y}_* = \mathbf{U}\boldsymbol{\zeta}$ and $\sum_{i=2}^{n}\zeta_i^2 = 1$, we obtain
$$\mathbf{y}_*^\top\mathbf{Q}_\iota\mathbf{y}_* = (\mathbf{Q}_\iota\mathbf{U}\boldsymbol{\zeta})^\top(\mathbf{Q}_\iota\mathbf{U}\boldsymbol{\zeta}) = \left(\sum_{i=2}^{n}\zeta_i\mathbf{u}_i\right)^\top\left(\sum_{i=2}^{n}\zeta_i\mathbf{u}_i\right) = \sum_{i=2}^{n}\zeta_i^2 = 1.$$
Therefore, we have
$$c_* = \frac{n-1}{\Omega} \cdot \frac{\mathbf{y}_*^\top\mathbf{L}\mathbf{y}_*}{\mathbf{y}_*^\top\mathbf{Q}_\iota\mathbf{y}_*} = \frac{n-1}{\Omega}\mathbf{y}_*^\top\mathbf{L}\mathbf{y}_*.$$
Next, given (48) and (49), we have (54) and (55). (ii) Given $c_* = \frac{n-1}{\Omega}\|\mathbf{B}^\top\mathbf{y}_*\|^2$, multiplying (50) by $\frac{n-1}{\Omega} > 0$ yields
$$0 \le \frac{(n-1)\lambda_2}{\Omega} < \frac{n-1}{\Omega}\|\mathbf{B}^\top\mathbf{y}_*\|^2 = c_* < \frac{(n-1)\lambda_n}{\Omega}.$$
(iii) As shown in (45), if $a=0$, then $\zeta_i = \frac{1}{\sqrt{n-1}}$ for $i=2,\dots,n$. Accordingly, if $a=0$, then from (54) and (55), we have
$$c_* = \frac{\sum_{i=2}^{n}\lambda_i}{\Omega} = \frac{n-1}{\Omega}\cdot\frac{1}{n-1}\sum_{i=2}^{n}\|\mathbf{B}^\top\mathbf{u}_i\|^2 = 1.$$
Note that the last equality follows from $\sum_{i=2}^{n}\lambda_i = \Omega$. Next, given Lemma 4, we have
$$c_* = \frac{n-1}{\Omega}\left(\zeta_2^2\lambda_2 + \dots + \zeta_n^2\lambda_n\right) \to \frac{(n-1)\lambda_2}{\Omega} < 1 \ \text{ as } a \to \infty, \qquad c_* = \frac{n-1}{\Omega}\left(\zeta_2^2\lambda_2 + \dots + \zeta_n^2\lambda_n\right) \to \frac{(n-1)\lambda_n}{\Omega} > 1 \ \text{ as } a \to -\infty.$$
Note that the final inequalities in (A17) follow from
$$(n-1)\lambda_2 < \sum_{i=2}^{n}\lambda_i = \Omega < (n-1)\lambda_n.$$

Appendix A.14. Some Remarks on Wp in (58)

(i) $\mathbf{L}_p$, which denotes the graph Laplacian corresponding to $\mathbf{W}_p$, is explicitly expressed as
$$\mathbf{L}_p = \begin{bmatrix} 1 & -1 & & & \\ -1 & 2 & -1 & & \\ & \ddots & \ddots & \ddots & \\ & & -1 & 2 & -1 \\ & & & -1 & 1 \end{bmatrix} \in \mathbb{R}^{n \times n}.$$
(ii) Given $\boldsymbol{\iota}^\top\mathbf{W}_p\boldsymbol{\iota} = 2(n-1)$ and $\mathbf{y}^\top\mathbf{L}_p\mathbf{y} = \sum_{i=2}^{n}(y_i - y_{i-1})^2$, if $\mathbf{W} = \mathbf{W}_p$, then Geary's c for this case, denoted by $c_p$, is expressed in terms of $\eta$ in (4) as follows:
$$c_p = \frac{n-1}{\boldsymbol{\iota}^\top\mathbf{W}_p\boldsymbol{\iota}} \cdot \frac{\mathbf{y}^\top\mathbf{L}_p\mathbf{y}}{\mathbf{y}^\top\mathbf{Q}_\iota\mathbf{y}} = \frac{1}{2} \cdot \frac{\sum_{i=2}^{n}(y_i - y_{i-1})^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2} = \frac{n-1}{2n}\eta.$$
Recall that $\eta$ denotes the von Neumann ratio. (iii) The graph Fourier transform corresponding to $\mathbf{W}_p$ is equivalent to the discrete cosine transform developed by Ahmed et al. [15]. Denote the spectral decomposition of $\mathbf{L}_p$ by
$$\mathbf{U}_p\boldsymbol{\Lambda}_p\mathbf{U}_p^\top$$
and let $\mathbf{U}_p = [\mathbf{u}_{p,1},\dots,\mathbf{u}_{p,n}]$. Figure A3 depicts $\mathbf{u}_{p,1},\dots,\mathbf{u}_{p,6}$ for $n=200$. We can observe that $\mathbf{u}_{p,i}$ has a longer wavelength than $\mathbf{u}_{p,i+1}$ for $i=2,\dots,5$. More precisely, the period of $\mathbf{u}_{p,i}$ is $\frac{2n}{i-1}$ for $i=2,\dots,n$. For example, the periods of $\mathbf{u}_{p,2}$ and $\mathbf{u}_{p,5}$ are $2n$ and $\frac{n}{2}$, respectively. Note that, in this case, $0 < \frac{1}{\sqrt{n}} < \sqrt{\frac{2}{n}} = \sqrt{\frac{2}{200}} = 0.1$. For more information about $\boldsymbol{\Lambda}_p$ and $\mathbf{U}_p$, see, e.g., von Neumann [3], Jain [22], O'Sullivan [23], Strang [14], Garcia [24], Nakatsukasa et al. [25], Strang and MacNamara [26], and Yamada [27,28].
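A small numerical sketch of the relation in (ii) above, with an illustrative n = 6 (the size of the path graph in Figure A1) and an arbitrary signal y (both assumptions): it builds $\mathbf{W}_p$ and $\mathbf{L}_p$ and confirms $c_p = \frac{n-1}{2n}\eta$.

```python
import numpy as np

n = 6                                              # illustrative size (assumption)
Wp = np.zeros((n, n))
for i in range(n - 1):
    Wp[i, i + 1] = Wp[i + 1, i] = 1.0              # adjacency matrix of the path graph
Lp = np.diag(Wp.sum(axis=1)) - Wp                  # graph Laplacian of the path graph

y = np.array([1.0, 3.0, 2.0, 5.0, 4.0, 6.0])       # illustrative signal (assumption)
Q = np.eye(n) - np.ones((n, n)) / n
c_p = (n - 1) / Wp.sum() * (y @ Lp @ y) / (y @ Q @ y)
eta = n / (n - 1) * (np.diff(y) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(np.isclose(c_p, (n - 1) / (2 * n) * eta))    # the identity relating c_p and eta
```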
Figure A1. A path graph with six vertices.
Figure A2. $\mathbf{W}_q \in \mathbb{R}^{20 \times 20}$ shown in heatmap style.
Figure A3. The first six columns of $\mathbf{U}_p = [\mathbf{u}_{p,1},\dots,\mathbf{u}_{p,n}]$ in (A20) for $n = 200$.

References

  1. Getis, A. A history of the concept of spatial autocorrelation: A geographer's perspective. Geogr. Anal. 2008, 40, 297–309.
  2. Geary, R.C. The contiguity ratio and statistical mapping. Inc. Stat. 1954, 5, 115–145.
  3. von Neumann, J. Distribution of the ratio of the mean square successive difference to the variance. Ann. Math. Stat. 1941, 12, 367–395.
  4. de Jong, P.; Sprenger, C.; van Veen, F. On extreme values of Moran's I and Geary's c. Geogr. Anal. 1984, 16, 17–24.
  5. Cliff, A.D.; Ord, J.K. The problem of spatial autocorrelation. In Studies in Regional Science; Scott, A.J., Ed.; Pion: London, UK, 1969; pp. 25–55.
  6. Cliff, A.D.; Ord, J.K. Spatial autocorrelation: A review of existing and new measures with applications. Econ. Geogr. 1970, 46, 269–292.
  7. Cliff, A.D.; Ord, J.K. Spatial Autocorrelation; Pion: London, UK, 1973.
  8. Cliff, A.D.; Ord, J.K. Spatial Processes: Models and Applications; Pion: London, UK, 1981.
  9. Hammond, D.K.; Vandergheynst, P.; Gribonval, R. Wavelets on graphs via spectral graph theory. Appl. Comput. Harmon. Anal. 2011, 30, 129–150.
  10. Shuman, D.I.; Narang, S.K.; Frossard, P.; Ortega, A.; Vandergheynst, P. The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Process. Mag. 2013, 30, 83–98.
  11. Shuman, D.I.; Ricaud, B.; Vandergheynst, P. Vertex-frequency analysis on graphs. Appl. Comput. Harmon. Anal. 2016, 40, 260–291.
  12. Harvey, A.C. Time Series Models, 2nd ed.; Harvester Wheatsheaf: New York, NY, USA, 1993.
  13. Anderson, T.W. The Statistical Analysis of Time Series; Wiley: New York, NY, USA, 1971.
  14. Strang, G. The discrete cosine transform. SIAM Rev. 1999, 41, 135–147.
  15. Ahmed, N.; Natarajan, T.; Rao, K.R. Discrete cosine transform. IEEE Trans. Comput. 1974, C-23, 90–93.
  16. Dong, X.; Thanou, D.; Rabbat, M.; Frossard, P. Learning graphs from data: A signal representation perspective. IEEE Signal Process. Mag. 2019, 36, 44–63.
  17. Ricaud, B.; Borgnat, P.; Tremblay, N.; Gonçalves, P.; Vandergheynst, P. Fourier could be a data scientist: From graph Fourier transform to signal processing on graphs. C. R. Phys. 2019, 20, 474–488.
  18. Bapat, R.B. Graphs and Matrices, 2nd ed.; Springer: London, UK, 2014.
  19. Gallier, J. Spectral Theory of Unsigned and Signed Graphs. Applications to Graph Clustering: A Survey. 2016. Available online: https://arxiv.org/abs/1601.04692 (accessed on 28 September 2021).
  20. Lebichot, B.; Saerens, M. An experimental study of graph-based semi-supervised classification with additional node information. Knowl. Inf. Syst. 2020, 62, 4337–4371.
  21. Dray, S. A new perspective about Moran's coefficient: Spatial autocorrelation as a linear regression problem. Geogr. Anal. 2011, 43, 127–141.
  22. Jain, A.K. A sinusoidal family of unitary transforms. IEEE Trans. Pattern Anal. Mach. Intell. 1979, 4, 356–365.
  23. O'Sullivan, F. Discretized Laplacian smoothing by Fourier methods. J. Am. Stat. Assoc. 1991, 86, 634–642.
  24. Garcia, D. Robust smoothing of gridded data in one and higher dimensions with missing values. Comput. Stat. Data Anal. 2010, 54, 1167–1178.
  25. Nakatsukasa, Y.; Saito, N.; Woei, E. Mysteries around the graph Laplacian eigenvalue 4. Linear Algebra Appl. 2013, 438, 3231–3246.
  26. Strang, G.; MacNamara, S. Functions of difference matrices are Toeplitz plus Hankel. SIAM Rev. 2014, 56, 525–546.
  27. Yamada, H. A smoothing method that looks like the Hodrick–Prescott filter. Econom. Theory 2020, 36, 961–981.
  28. Yamada, H. A pioneering study on discrete cosine transform. Commun. Stat. Theory Methods 2020, 1838547.
Figure 1. An undirected graph with four vertices.
Figure 2. $\zeta_2,\dots,\zeta_n$ in (44) for $n = 20$ and $a = -3, -1.5, -0.5, 0, 0.5, 1.5, 3$.
Table 1. Spatial smoothness and Geary's c.

          W°                        W_p                       W_q
 a        ‖B^⊤y_*‖²   c_*           ‖B^⊤y_*‖²   c_*           ‖B^⊤y_*‖²   c_*
 LB       0.8754      0.3752        0.0246      0.0123        1.7174      0.2334
 3.0      0.8796      0.3770        0.0248      0.0124        1.7237      0.2343
 2.5      0.8869      0.3801        0.0251      0.0126        1.7346      0.2357
 2.0      0.9068      0.3886        0.0260      0.0130        1.7642      0.2398
 1.5      0.9621      0.4123        0.0286      0.0143        1.8463      0.2509
 1.0      1.1171      0.4787        0.0372      0.0186        2.0809      0.2828
 0.5      1.5318      0.6565        0.0824      0.0412        2.8242      0.3838
 0.0      2.3333      1.0000        2.0000      1.0000        7.3579      1.0000
 -0.5     3.0711      1.3162        3.9176      1.9588        9.8861      1.3436
 -1.0     3.3941      1.4546        3.9628      1.9814        9.9926      1.3581
 -1.5     3.4988      1.4995        3.9714      1.9857        10.0142     1.3610
 -2.0     3.5329      1.5141        3.9740      1.9870        10.0203     1.3618
 -2.5     3.5447      1.5192        3.9749      1.9874        10.0222     1.3621
 -3.0     3.5490      1.5210        3.9752      1.9876        10.0229     1.3622
 UB       3.5514      1.5220        3.9754      1.9877        10.0233     1.3623

Note: LB and UB, respectively, represent the lower and upper bounds of ‖B^⊤y_*‖² (resp. c_*) given by (50) (resp. (56)).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
