Abstract
We consider the limit of the empirical spectral distribution of Laplace matrices of generalized random graphs. Applying the Stieltjes transform method, we prove under general conditions that the empirical spectral distribution of the Laplace matrices converges to the free convolution of the semicircular law and the normal law.
MSC:
60B20; 60C05
1. Introduction and Summary
The spectral theory of random graphs is a branch of mathematics that has been studied intensively in recent decades. It investigates the asymptotic behavior of eigenvalues and eigenvectors of matrices associated with graphs, in particular adjacency matrices and Laplace matrices (see the definitions below), as the number of vertices of the graph tends to infinity. See, for instance, [1,2,3,4,5,6,7,8]. The adjacency matrix of the generalized Erdős–Rényi random graph is a special case of the generalized Wigner matrix (a matrix whose elements are independent up to symmetry, with zero means and possibly different variances). Many deep results have been obtained recently for such matrices. The methods for studying the spectrum asymptotics of adjacency matrices are the same as for Wigner matrices: the method of moments and the Stieltjes transform method. It should be noted that the most profound results for the spectrum of Wigner random matrices were obtained by methods related to the Stieltjes transform; see [3,9,10].
Laplace matrices have one significant difference: the diagonal elements depend on the remaining elements of the matrix. This significantly complicates the study. For instance, the limit distribution of the empirical spectral function of the Laplace matrix of a complete (non-random) graph was first found in 2006; see [11]. In most of the works devoted to the spectrum asymptotics of Laplace matrices of random graphs, the method of moments is used; see [2,4,12]. In this paper, we consider the empirical spectral distribution function of the Laplace matrices of both weighted and unweighted generalized Erdős–Rényi random graphs. We obtain simple sufficient conditions for the convergence of the empirical spectral distribution function of the Laplace matrices of random graphs to a distribution function that is the free convolution of the semicircular law and the standard normal law. The conditions are expressed in terms of the properties of the graph edge probability matrix and of the weight variance matrix (for weighted graphs). To prove the convergence, we use exclusively the Stieltjes transform method.
We consider a non-oriented simple graph (without loops or multiple edges) with vertices and edge set E such that the edges are independent and occur with probability . Consider the adjacency matrix
where
Define the degree of vertex as
We shall assume that for are independent and Note that We have that matrix is symmetric, i.e., , and that r.v.’s for are independent. We introduce the quantity
We introduce the diagonal matrix
and the normalized and centered Laplace matrix of the unweighted graph G, defined as
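Since the displayed formulas are not reproduced in this extraction, the following numerical sketch fixes one standard choice of model and normalization as an assumption (not necessarily the paper's exact one): a homogeneous Erdős–Rényi graph with edge probability p, whose Laplace matrix is centered at its expectation and scaled by sqrt(np(1-p)). The limit law studied below, the free convolution of the semicircular and standard normal laws, has mean 0 and second moment 2, which the empirical spectral moments should approach.

```python
# Sketch under an assumed normalization (the paper's exact formulas are not
# reproduced here): sample a homogeneous Erdos-Renyi graph, form the
# centered and normalized Laplace matrix, and inspect its spectrum.
import numpy as np

def centered_laplacian_spectrum(n, p, seed=0):
    rng = np.random.default_rng(seed)
    # Symmetric 0/1 adjacency matrix without loops.
    upper = np.triu(rng.random((n, n)) < p, k=1)
    a = (upper | upper.T).astype(float)
    d = np.diag(a.sum(axis=1))                     # degree matrix D
    lap = d - a                                    # Laplace matrix D - A
    # Expectation of the Laplace matrix: (n-1)p on the diagonal, -p off it.
    e_lap = (n - 1) * p * np.eye(n) - p * (np.ones((n, n)) - np.eye(n))
    m = (lap - e_lap) / np.sqrt(n * p * (1 - p))   # center and normalize
    return np.linalg.eigvalsh(m)

eigs = centered_laplacian_spectrum(1000, 0.5, seed=1)
# The limit law (semicircle free-convolved with N(0,1)) has mean 0 and
# second moment 1 + 1 = 2; the empirical moments should be close.
```

For moderate n the first two empirical spectral moments already sit near their limiting values, which is a quick sanity check on the chosen normalization.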
We shall also consider weighted graphs with weight function , where, for , are independent random variables such that
The distribution of may depend on n, but for brevity, we shall omit the index n in the notations. We introduce the quantity
The quantity may be interpreted as the expected mean degree of graph . With graph , we consider the adjacency matrix
and normalized Laplace or Markov matrix
where
We shall denote by ordered eigenvalues of a symmetric matrix . We shall consider the spectrum of matrices , and . For brevity of notation, we shall write , and . We introduce the corresponding empirical spectral distributions (ESDs)
In the paper [11], in 2006, it was shown, under the conditions and , for any , that the ESD weakly converges in probability to the non-random distribution function , which is defined as the free convolution of the Gaussian distribution function and the semicircular distribution function (for the definition of free convolution, see, for instance, [13]).
In [4], in 2010, the authors considered the limit of for weighted Erdős–Rényi graphs () with equal-variance weights (). Assuming that is bounded away from zero and one, and that the random variables have finite fourth moments, they proved that weakly converges to the same function .
In [14], in 2020, Yizhe Zhu considered the so-called graphon approach to the limiting spectral distribution of Wigner-type matrices. The author described the moments of the limiting spectral measure in terms of the graphon of the variance profile matrix and the number of trees with a fixed number of vertices. Recently, Chatterjee and Hazra published the paper [12], in which the approach of Zhu was developed.
In [15], in 2021, the author stated simple conditions on the probabilities for the convergence of the ESD of adjacency matrices to the semicircular law. In the present paper, we consider the convergence of the ESDs and , under similar conditions, to the function .
First, we formulate some conditions which we shall use in the present paper.
- Condition :
- Condition : There exists a constant s.t.
- Condition :
- Condition : For any
Remark 1.
Condition is equivalent to the following two conditions together
- Condition :
- Condition :
The main result of the present paper is the following theorem.
Theorem 1.
Let conditions , , , hold. Then, ESDs converge in probability to the distribution function , which is the additive free convolution of the standard normal distribution function and the semicircular distribution function:
Corollary 1.
Assume that and for any and any . Assume that as and that condition holds. Then, ESDs converge in probability to the distribution function , which is the additive free convolution of the standard normal distribution function and the semicircular distribution function:
Proof of Corollary.
Note that in the case and , we have
Condition is fulfilled. Moreover, it is easy to see that all conditions of Theorem 1 are fulfilled. □
Theorem 2.
Let conditions
and
hold. Then, ESDs converge in probability to the distribution function , which is the additive free convolution of the standard normal distribution function and the semicircular distribution function,
In what follows, we shall omit the superscript in the notations of , writing instead.
2. Toy Example
Consider a graph with clique number , where . The clique number of a graph G is the size of a largest (maximum) clique of the graph. Let denote the clique of the graph. We define the weights of the vertices as follows
We introduce edge probabilities as follows
We assume that , for . In this case, we have
and
Proposition 1.
Under condition
conditions , and hold.
Proof.
We have
It is straightforward to check that for satisfying the condition (13), we have , as and
That means that the conditions and hold. Furthermore,
It is straightforward to check as well that
Thus, Proposition 1 is proved. □
3. Proof of Theorem 1
We shall use the Stieltjes transform method to prove Theorem 1. Introduce the resolvent matrix of the matrix ,
where denotes the identity matrix. Let denote the Stieltjes transform of the empirical spectral distribution function of the matrix ,
For the proof of Theorem 1, it is enough to prove the convergence of the Stieltjes transforms for any fixed with ; moreover, it is enough to prove that converges to some function, say , in some set with a non-empty interior. According to Lemma A2, it is enough to prove the convergence of the expected Stieltjes transform only. Using Lemma A1, the result of Theorem 1 follows from the relation
where denotes the Stieltjes transform of the standard Gaussian distribution,
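The Stieltjes transform of an ESD can be computed equivalently from the eigenvalues or as the normalized trace of the resolvent. The short check below uses assumed names (s_n, W, R) since the displayed formulas are omitted in this extraction; it is only a numerical reminder of the identity, not the paper's construction.

```python
# Numerical check (hypothetical names): the Stieltjes transform of an ESD,
# s_n(z) = (1/n) sum_k 1/(lambda_k - z), equals (1/n) Tr (W - zI)^{-1}.
import numpy as np

def stieltjes_from_eigs(eigs, z):
    return np.mean(1.0 / (eigs - z))

def stieltjes_from_resolvent(w, z):
    n = w.shape[0]
    r = np.linalg.inv(w - z * np.eye(n))   # resolvent R(z)
    return np.trace(r) / n

rng = np.random.default_rng(2)
x = rng.standard_normal((200, 200))
w = (x + x.T) / np.sqrt(2 * 200)           # Wigner-type test matrix
z = 0.5 + 1.0j
s1 = stieltjes_from_eigs(np.linalg.eigvalsh(w), z)
s2 = stieltjes_from_resolvent(w, z)
# For Im z > 0, the transform has positive imaginary part,
# and the two computations agree to machine precision.
```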
First, we need some additional notations. By , we denote the matrix obtained from by replacing diagonal entries , with . Note that the diagonal entries of matrix (except ) do not depend on the r.v. values for . We denote by the diagonal matrix with diagonal entries . Denote by the resolvent matrix corresponding to the matrix ,
We have
Using this formula, we may write
According to Lemma A5, we obtain
Furthermore, let us denote by the matrix obtained from by deleting both the j-th column and j-th row. denotes the resolvent matrix corresponding to the matrix . Using the Schur complement formula, we may write
Introduce the following notations
Put . Let
In these notations, we may write
We continue as follows
Summing the last equality in , we obtain
where denotes a random variable which is uniformly distributed on the set and independent of all other random variables. Denote by the distribution function of and let
where denotes the distribution function of the standard normal law. Denote the Stieltjes transform of the standard normal law by ,
Note that
Integrating by parts, we obtain
According to Lemma A3,
Note that
It remains to prove that and . The last claim follows from Lemmas A6–A11, Lemma A2 and equality (20).
Thus, Theorem 1 is proved.
4. The Proof of Theorem 2
Similarly to the previous section, we may write the diagonal entries of the matrix
Let denote the resolvent matrix of the matrix . Let be fixed. We denote by the matrix obtained from by replacing diagonal entries , with . Let . By definition, is a diagonal matrix with , for . Note that diagonal entries of matrix (except ) do not depend on the r.v. values for . By , we denote the matrix obtained from by deleting both the j-th column and j-th row. denotes the resolvent matrix corresponding to the matrix . Analogously to (21), we represent the diagonal entries of resolvent matrix in the form
Introduce the following notations
Put . Let
In these notations, we may write
where . We continue as follows
Summing the last equality in , we obtain
where denotes a random variable which is uniformly distributed on the set and independent of all other random variables. Similar to inequality (25), we have
According to Lemma A12
Furthermore, since and , we have
By Lemmas A13–A17,
Furthermore, we note that
This relation implies that
It is straightforward to check that
Combining relations (33), (35), (38), we obtain
The last relation and Lemma A1 complete the proof. Thus, Theorem 2 is proved.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Conflicts of Interest
The author declares no conflict of interest.
Appendix A
Definition of Additive Free Convolution
We give the definition of the additive free convolution of distribution functions following the paper [16] (Section 5).
Definition A1.
A pair consisting of a unital algebra and a linear functional with is called a free probability space. Elements of are called random variables, the numbers for such random variables are called moments, and the collection of all moments is called the joint distribution of . Equivalently, we may say that the joint distribution of is given by the linear functional with , where denotes the algebra of all polynomials in k non-commutative indeterminates .
If for a given element there exists a unique probability measure on such that for all , we identify the distribution of a with the probability measure .
Definition A2.
Let be a non-commutative probability space.
- (1)
- Let be a family of unital sub-algebras of . The sub-algebras are called freely independent if, for any positive integer k, whenever the following set of conditions holds: (with ) for , for all , and neighboring elements are taken from different sub-algebras, i.e., .
- (2)
- Let be a family of subsets of . The subsets are called free or freely independent if the unital sub-algebras they generate are free, i.e., if are free, where for each , is the smallest unital sub-algebra of which contains .
- (3)
- Let be a family of elements from . The elements are called freely independent if the subsets are free.
Consider two random variables a and b which are free. Then, the distribution of (in the sense of linear functionals) depends only on the distributions of a and b.
Definition A3.
For free random variables a and b, the distribution of is called the free additive convolution of and and is denoted by
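To make the definition concrete, here is a numerical illustration that is not taken from the paper: independent randomly rotated Wigner-type matrices are asymptotically free, and the free convolution of two standard semicircular laws is again semicircular, with the variances adding. The empirical moments of the spectrum of the sum should therefore approach those of a semicircle law of variance 2 (second moment 2, fourth moment 8).

```python
# Asymptotic freeness illustration: for two independent Wigner-type matrices,
# one conjugated by a random orthogonal matrix, the spectrum of the sum
# approximates the free convolution of two semicircle laws (variance 1 + 1).
import numpy as np

def wigner(n, rng):
    x = rng.standard_normal((n, n))
    return (x + x.T) / np.sqrt(2 * n)   # ESD close to the semicircle law, variance 1

rng = np.random.default_rng(3)
n = 600
a = wigner(n, rng)
q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthogonal matrix
b = q @ wigner(n, rng) @ q.T                      # asymptotically free of a
eigs = np.linalg.eigvalsh(a + b)
m2 = (eigs**2).mean()   # semicircle of variance 2: second moment 2
m4 = (eigs**4).mean()   # semicircle fourth moment: 2 * (variance)^2 = 8
```

The fourth-moment check is the interesting one: for classically independent summands the fourth moment of the sum would differ, so its value distinguishes free from classical convolution.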
To compute the free convolution of concrete distributions, we may use the so-called R-transform introduced by Voiculescu [17]. Let be the Stieltjes transform of some distribution function . Denote by the inverse function of in the sense of composition. Define the R-transform as follows
Let be the semicircle distribution function. Its Stieltjes transform satisfies the equation
Denote by the R-transform of the semicircular law. Simple calculations show that
We denote by the R-transform of the free convolution of the semicircular law and the Gaussian law. Let denote the R-transform of the standard normal law. Then
See for instance, refs. [18,19]. Using the definition of the R-transform via the Stieltjes transform, we obtain
It is straightforward to show that this equality implies
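The resulting equation can be solved numerically. The sketch below uses an assumed sign convention, s(z) = ∫ dF(x)/(x − z), under which the free convolution with the semicircular law satisfies the subordination relation s(z) = s_Φ(z + s(z)), where s_Φ is the Stieltjes transform of the standard normal law; the quadrature and the convention are assumptions, not reproduced from the paper.

```python
# Fixed-point iteration for the Stieltjes transform of the free convolution
# of the semicircular and standard normal laws (sign convention and
# quadrature are assumptions; the map contracts for Im z large enough).
import numpy as np

xs = np.linspace(-12.0, 12.0, 4001)
dx = xs[1] - xs[0]
phi = np.exp(-xs**2 / 2) / np.sqrt(2 * np.pi)   # N(0,1) density

def s_phi(w):
    # Stieltjes transform of N(0,1) by simple quadrature (the density
    # is negligible outside [-12, 12], so truncation error is tiny).
    return np.sum(phi / (xs - w)) * dx

z = 2.0j
s = -1.0 / z                 # large-|z| first approximation
for _ in range(200):
    s = s_phi(z + s)         # subordination fixed point
# s now satisfies s = s_phi(z + s) up to machine precision,
# with Im s > 0 as required of a Stieltjes transform.
```

The contraction argument mirrors the proof of Lemma A1 below: for Im z bounded away from zero the iteration map is a strict contraction, so the fixed point is unique.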
We prove the following simple but important lemma.
Lemma A1.
Let a sequence of Stieltjes transforms of the distribution functions satisfy the equations
where
Then, the distribution functions weakly converge to the distribution function , which is free convolution of the semicircular law and the standard normal law.
Proof.
It is enough to prove that the Stieltjes transform converges in some region with non-empty interior to the Stieltjes transform , which satisfies equation (A1). We shall consider the region of with . Since the derivative of does not exceed the level , we may write
or
The sequence of the Stieltjes transforms is Cauchy; consequently, there exists a limit say of this sequence,
Taking the limit in the equation (A2), we obtain
The last equality implies that is the Stieltjes transform of the free convolution of the semicircular law and the standard Gaussian law. Thus, the lemma is proved. □
Appendix B. Weighted Graphs
Appendix B.1. Variance of Stieltjes Transform of Empirical Measure
In this section, we estimate the variance of , where . We prove the following Lemma.
Lemma A2.
For any with , the following inequality holds
Proof.
The proof of this lemma uses the martingale representation of . In random matrix theory, this method was first used by Girko; see, for instance, [20]. We introduce the sequence of -algebras generated by random variables for . It is easy to see that . Denote by the conditional expectation with respect to -algebra . For , . Introduce random variables
The sequence , for , is a martingale difference and
Introduce the sub-matrices obtained from by deleting both the k-th row and k-th column. Denote by the corresponding resolvent matrix, . Note that the matrix depends on the random variables , via diagonal entries. To overcome this difficulty, we introduce the matrix obtained from by replacing diagonal entries with . The corresponding resolvent matrix is denoted via . We have now
This allows us to write
By the interlacing theorem, for ,
From here, we immediately obtain
and
To complete the proof, it remains to show that
Note that
Introduce the diagonal matrix with diagonal entries
In these notations, we have
This implies that
We continue this inequality as follows
Applying Cauchy’s inequality to the second term in the right-hand side of the last inequality, we obtain
It is straightforward to check that
Using this bound, we obtain
In what follows, we shall assume that is fixed.
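The concentration expressed by Lemma A2 can be probed numerically. The rough illustration below uses an assumed Wigner-type model rather than the paper's Laplace matrices, and only checks the qualitative statement: the variance of the Stieltjes transform at a fixed point z shrinks rapidly as n grows.

```python
# Monte Carlo illustration of the concentration of the Stieltjes transform:
# Var s_n(z) decays as n grows (model, sizes, and seed are assumptions).
import numpy as np

def s_n(n, z, rng):
    x = rng.standard_normal((n, n))
    w = (x + x.T) / np.sqrt(2 * n)
    return np.mean(1.0 / (np.linalg.eigvalsh(w) - z))

rng = np.random.default_rng(5)
z = 1.0j
v_small = np.var([s_n(50, z, rng).imag for _ in range(40)])
v_large = np.var([s_n(200, z, rng).imag for _ in range(40)])
# v_large should be markedly smaller than v_small, consistent with a
# variance bound that decays in n at fixed Im z.
```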
Appendix B.2. Convergence of Diagonal Entries Distribution Functions of Laplace Matrices to the Normal Law
Lemma A3.
Under conditions and , we have
Proof.
We fix arbitrary . We may write
By condition , we obtain
Because is arbitrary, we obtain the claim. □
Lemma A4.
Under conditions , and , , we have
Proof.
Let be a random variable, independent of and , uniformly distributed on the set . We consider the characteristic function of , . Introduce the following set of indices
where
We denote by the complement of the set , and by the cardinality of . Note that by condition
Analogously, by ,
Finally, by Lemma A3
Combining the last three relations, we obtain
Note that by the independence of and ,
Furthermore,
and by condition
Without loss of generality, we may assume that
and applying Taylor’s formula, we write that
where denotes some function such that . Furthermore, by Taylor’s formula
where , denotes some functions such that . Using this equality, we may write
Summing this equality by , we obtain
For , we have
This implies that for
From this inequality, it follows that
By conditions and , relation (A32) and Lemma A3, we obtain
Thus, the lemma is proved. □
Lemma A5.
Under the conditions of Theorem 1, we have
Proof.
By , we shall denote the operator norm of matrix . Matrices and are defined in the beginning of Section 3 before the relation (18). Note that
It is easy to check that
Using that
we obtain
Furthermore, for any , we have
Summing this inequality in , we obtain
Since is arbitrary, this inequality and condition together imply (A44). Thus, Lemma A5 is proved. □
Appendix B.3. The Bounds of , for ν = 1, …, 7
Lemma A6.
Under the conditions of Theorem 1, we have
Proof.
By definition of , we may write
Applying the Cauchy inequality, we obtain
Simple calculations show that
We introduce the following notations
In these notations, we write
Using that
we obtain that the spectral norm of the matrix satisfies the inequality
and
Using the last bound, we obtain
Furthermore, we apply the bound
We obtain
We continue as follows
Thus, Lemma is proved. □
Lemma A7.
Under the conditions of Theorem 1, we have
Proof.
We recall the definition of ,
Using triangle inequality and Cauchy’s inequality, we may write
Since and random variables are independent for and independent on , we obtain
Thus, the lemma is proved. □
Lemma A8.
Under the conditions of Theorem 1, we have
Proof.
By definition of , we have
We may write
Furthermore,
Using inequality (A60), we obtain
We estimate now the second term in the right-hand side of (A68). Applying triangle inequality, we obtain
Simple calculations show that
Finally, we note that
Combining inequalities (A68), (A70), (A71), we obtain the result of the lemma. Thus, the lemma is proved. □
Lemma A9.
Under the conditions of Theorem 1, we have
Proof.
By definition of , we have
Using that , we obtain
□
Lemma A10.
Under the conditions of Theorem 1, we have
Proof.
Recall that
Using that , we obtain
Thus, the lemma is proved. □
Lemma A11.
Under the conditions of Theorem 1, we have
Proof.
By definition of , we have
By the triangle inequality, we obtain
By the interlacing theorem, we have
It remains to estimate the second term in the r.h.s. of (A81). Note that
This equality implies that
Summing this equality in j, we obtain
Using that
we obtain
Thus, the lemma is proved. □
Appendix C. Unweighted Graphs
Appendix C.1. Convergence of Diagonal Entries Distribution Functions of Laplace Matrices to the Normal Law
We denote by the distribution function of random variable and
Lemma A12.
Under the conditions of Theorem 2, we have
Proof.
We consider the characteristic function of , . Introduce the following set of indices
We denote by the complement of the set , and by the cardinality of . Note that, by condition ,
Note that, by independence of ,
Applying the Taylor formula, we may write
where denotes some function such that .
Using this equality, we may write
Summing this equality by , we obtain
Note that for ,
and
Similar to (A42), we may write
This inequality implies that
Thus, Lemma A12 is proved. □
In what follows, we shall assume that is fixed.
Appendix C.2. The Bounds of , for ν = 1, …, 5
Lemma A13.
Under the conditions of Theorem 2, we have
Proof.
By definition of we may write
Applying the Cauchy inequality, we obtain
Simple calculations show that
Thus, Lemma A13 is proved. □
Lemma A14.
Under the conditions of Theorem 2, we have
Proof.
We recall the definition of ,
Using the triangle inequality and the Cauchy inequality, we may write
Thus, Lemma A14 is proved. □
Lemma A15.
Under the conditions of Theorem 2, we have
Proof.
By definition of , we have
We may write
Thus, Lemma A15 is proved. □
Lemma A16.
Under the conditions of Theorem 2, we have
Proof.
Recall that
Note that
Furthermore,
Recall that denotes the operator norm of matrix . The last equality and inequality implies that
Note that
Combining the last two inequalities, we obtain the claim. Thus, Lemma A16 is proved. □
Appendix C.3. Variance of
In this section, we estimate the variance of , where . We prove the following lemma.
Lemma A17.
For any and , the following inequality holds
Proof.
The proof of this lemma is similar to the proof of Lemma A2. We introduce the sequence of -algebras generated by random variables for . It is easy to see that . Denote by the conditional expectation with respect to -algebra . For , . Introduce random variables
The sequence of , for is a martingale difference and
Furthermore, introduce the sub-matrices obtained from by replacing the diagonal entries with . Denote by the corresponding resolvent matrix, . We introduce the matrix obtained from by deleting both the k-th row and k-th column. The corresponding resolvent matrix we denote via . We have now
This allows us to write
By the interlacing theorem
From here, we immediately obtain
and
To complete the proof, it remains to show that
Note that
Introduce the diagonal matrix with diagonal entries
In these notations, we have
This implies that
We continue this inequality as follows
Inequalities (A118) and (A123) complete the proof. Thus, Lemma A17 is proved. □
References
- Bordenave, C.; Caputo, P.; Chafaï, D. Spectrum of Markov Generators on Sparse Random Graphs. Commun. Pure Appl. Math. 2015, 67, 621–669. [Google Scholar] [CrossRef]
- Bordenave, C.; Lelarge, M.; Massoulié, L. Non-backtracking spectrum of random graphs. Ann. Probab. 2018, 46, 1–71. [Google Scholar] [CrossRef]
- Dumitriu, I.; Pal, S. Sparse regular random graphs: Spectral density and eigenvectors. Ann. Probab. 2012, 40, 2197–2235. [Google Scholar] [CrossRef]
- Ding, X.; Jiang, T. Spectral Distributions of Adjacency and Laplacian Matrices of Random Graphs. Ann. Appl. Probab. 2010, 20, 2086–2117. [Google Scholar] [CrossRef]
- Tran, L.V.; Vu, V.H.; Wang, K. Sparse random graphs: Eigenvalues and eigenvectors. Random Struct. Algorithms 2013, 42, 110–134. [Google Scholar] [CrossRef]
- Brito, G.; Dumitriu, I.; Harris, K.D. Spectral gap in random bipartite biregular graphs and applications. Comb. Probab. Comput. 2022, 31, 229–267. [Google Scholar] [CrossRef]
- Metz, F.L.; Silva, J.D. Spectral density of dense networks and the breakdown of the Wigner semicircle law. Phys. Rev. Res. 2020, 2, 043116. [Google Scholar]
- Liang, S.; Obata, N.; Takahashi, S. Asymptotic spectral analysis of general Erdős–Rényi random graphs. Noncommutative harmonic analysis with applications to probability. Banach Cent. Publ. Inst. Math. Acad. Sci. 2007, 78, 211–229. [Google Scholar]
- Erdős, L.; Knowles, A.; Yau, H.-T.; Yin, J. Spectral statistics of Erdős–Rényi graphs I: Local semicircle law. Ann. Probab. 2013, 41, 2279–2375. [Google Scholar] [CrossRef]
- Erdős, L.; Knowles, A.; Yau, H.-T.; Yin, J. Spectral statistics of Erdős–Rényi Graphs II: Eigenvalue spacing and the extreme eigenvalues. Commun. Math. Phys. 2012, 314, 587–640. [Google Scholar] [CrossRef]
- Bryc, W.; Dembo, A.; Jiang, T. Spectral Measure of Large Random Hankel, Markov and Toeplitz Matrices. Ann. Probab. 2006, 34, 1–38. [Google Scholar]
- Chatterjee, A.; Hazra, R.S. Spectral Properties for the Laplacian of a Generalized Wigner Matrix. arXiv 2021, arXiv:2011.07912v2. [Google Scholar]
- Biane, P. On the Free Convolution with a Semi-circular Distribution. Indiana Univ. Math. J. 1997, 46, 705–718. [Google Scholar]
- Zhu, Y. A Graphon Approach to Limiting Spectral Distribution of Wigner-Type Matrices. Random Struct. Algorithms 2020, 56, 251–279. [Google Scholar] [CrossRef]
- Tikhomirov, A.N. On the Wigner law for Generalized Erdős–Rényi Random Graphs. Sib. Adv. Math. 2021, 31, 229–236. [Google Scholar] [CrossRef]
- Götze, F.; Koesters, H.; Tikhomirov, A. Asymptotic Spectra of Matrix–Valued Functions of Independent Random Matrices and Free Probability. Random Matrices Theory Appl. 2014, 4, 1–85. [Google Scholar] [CrossRef]
- Voiculescu, D. Symmetries of some reduced free product C*-algebras. In Operator Algebras and Their Connections with Topology and Ergodic Theory. Lecture Notes in Mathematics; Springer: Berlin, Germany, 1985; Volume 1132, pp. 556–588. [Google Scholar]
- Bercovici, H.; Voiculescu, D. Free convolution of measures with unbounded support. Indiana Univ. Math. J. 1993, 42, 733–773. [Google Scholar] [CrossRef]
- Voiculescu, D. Lectures on free probability theory. In Lectures on Probability Theory and Statistics. Lecture Notes in Mathematics; Springer: Berlin, Germany, 2000; Volume 1738, pp. 279–349. [Google Scholar]
- Girko, V.L. Spectral theory of random matrices. Russ. Math. Surv. 1985, 40, 77–120. [Google Scholar] [CrossRef]
© 2023 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).