Information Entropy of Tight-Binding Random Networks with Losses and Gain: Scaling and Universality

We study the localization properties of the eigenvectors, characterized by their information entropy, of tight-binding random networks with balanced losses and gain. The random network model, which is based on Erdős–Rényi (ER) graphs, is defined by three parameters: the network size N, the network connectivity α, and the losses-and-gain strength γ. Here, N and α are the standard parameters of ER graphs, while we introduce losses and gain by including complex self-loops on all vertices with imaginary amplitude ±iγ and random but balanced signs, thus breaking the Hermiticity of the corresponding adjacency matrices and inducing complex spectra. By means of extensive numerical simulations, we define a scaling parameter ξ ≡ ξ(N, α, γ) that fixes the localization properties of the eigenvectors of our random network model: when ξ < 0.1 (ξ > 10), the eigenvectors are localized (extended), while the localization-to-delocalization transition occurs for 0.1 < ξ < 10. Moreover, to extend the applicability of our findings, we demonstrate that for fixed ξ, the spectral properties (characterized by the position of the eigenvalues on the complex plane) of our network model are also universal; i.e., they do not depend on the specific values of the network parameters.


Introduction
Independently of the field, classification, or application, a commonly accepted mathematical representation of a network or graph is the adjacency matrix. The adjacency matrix A of a simple non-directed network (a simple network is a network not having multiple edges or self-edges) is the matrix with elements A_ij defined as [1]:

A_ij = 1 if there is an edge between vertices i and j, and A_ij = 0 otherwise. (1)
This prescription produces N × N symmetric sparse matrices with zero diagonal elements, where N is the number of vertices of the corresponding network. The sparsity of A is quantified by the parameter α, which is the fraction of non-vanishing off-diagonal adjacency matrix elements. Vertices are isolated when α = 0, whereas the network is fully connected for α = 1. Once the adjacency matrix of a network is constructed, it is quite natural to ask about the properties of its eigenvalues and eigenvectors, which is the main subject of this paper. As commonly used, we refer to the properties of the eigenvalues and eigenvectors of the adjacency matrix as the properties of the eigenvalues and eigenvectors of the respective network.
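For concreteness, the construction above can be sketched numerically. The following is an illustrative helper (the function name and the use of NumPy are our own choices, not part of the paper):

```python
import numpy as np

def er_adjacency(N, alpha, rng=None):
    """0/1 adjacency matrix of a simple undirected ER-type graph:
    every pair (i, j) with i < j is connected with probability alpha."""
    rng = np.random.default_rng() if rng is None else rng
    upper = np.triu(rng.random((N, N)) < alpha, k=1)  # candidate edges, i < j
    return (upper | upper.T).astype(int)              # symmetric, zero diagonal

A = er_adjacency(100, 0.3, rng=np.random.default_rng(0))
# empirical sparsity: fraction of non-vanishing off-diagonal elements
alpha_emp = A.sum() / (100 * 99)
```

Since edges are drawn probabilistically, the empirical sparsity fluctuates around the nominal α for finite N.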
The adjacency matrices of our tight-binding random network model are defined as:

A_ij = ε_ii ± iγ if i = j, A_ij = ε_ij if there is an edge between vertices i and j, and A_ij = 0 otherwise. (2)
Here, ε_ij = ε_ji are statistically-independent random variables drawn from a normal distribution with zero mean and variance one. Note that the term ±iγ, with γ ≠ 0, makes the adjacency matrix of the tight-binding random network model non-Hermitian, which, in turn, has complex eigenvalues and eigenvectors. According to this definition, a diagonal random matrix is obtained for α = 0 and γ = 0 (Poisson case), whereas the Gaussian Orthogonal Ensemble (GOE) is recovered when α = 1 and γ = 0. The GOE is a random matrix ensemble formed by real symmetric random matrices A whose entries are statistically-independent random variables drawn from a normal distribution with zero mean and variance ⟨|A_ij|²⟩ = (1 + δ_ij)/2; see, e.g., [18]. The GOE is commonly used to statistically represent Hamiltonian matrices corresponding to complex, chaotic, or disordered systems having time-reversal invariance.
The random network model with the adjacency matrix of Equation (2) is inspired by non-Hermitian Hamiltonians describing open or scattering systems, systems interacting with an environment, or active materials. Within the effective non-Hermitian Hamiltonian approach, such opening or interaction is modeled by adding complex terms to the main diagonal of the Hamiltonian of the system of interest [19][20][21][22][23]. Indeed, in tight-binding systems, the on-site term ±iγ represents losses (iγ) and gain (−iγ). Moreover, the term ±iγ allows adding losses and gain to tight-binding systems locally by adding this term to selected sites (in regular arrays, the addition of the term iγ to border sites is commonly used to study scattering and transport properties; see, e.g., [24][25][26]), globally by adding this term to all sites in the system (in linear chains, the addition of the term iγ to all sites has been used to represent a system coupled to a common decay channel; see, e.g., [27,28]), and in a balanced way by adding iγ to all sites with the same proportion of plus and minus signs (the addition of alternating iγ and −iγ terms to the sites of one-dimensional non-disordered arrays produces PT-symmetric wires; see, e.g., [29,30]). In our model, we choose the latter setup, where balanced implies that the network is formed by an even number of vertices. Our main motivation to choose a balanced loss-and-gain setup is to limit the number of parameters of the model, since a non-balanced setup would require including the loss-to-gain ratio as a parameter. Furthermore, since the vertices of our network are not ordered, the balanced loss and gain is effectively introduced randomly to the network. This is in contrast to PT-symmetric systems [29], where loss and gain alternate periodically. Thus, in our model, γ is the loss-and-gain strength.
Therefore, our random network model corresponds to tight-binding random networks with random on-site potentials, random hopping integrals, and random on-site loss and gain. Our random network model depends on three parameters: the network size N, the network connectivity α, and the losses-and-gain strength γ.
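A minimal sketch of how such an adjacency matrix can be generated, assuming Gaussian weights of unit variance on all existing edges and self-loops and a random but balanced assignment of the ±iγ signs (function and variable names are ours, not the paper's):

```python
import numpy as np

def tb_random_network(N, alpha, gamma, rng=None):
    """Sketch of an adjacency matrix in the spirit of Equation (2):
    Gaussian weights on edges and self-loops, plus balanced +/- i*gamma
    terms on the diagonal. N must be even so losses and gain balance."""
    assert N % 2 == 0
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.standard_normal((N, N))
    eps = np.triu(eps) + np.triu(eps, k=1).T         # eps_ij = eps_ji
    mask = np.triu(rng.random((N, N)) < alpha, k=1)  # edges with prob. alpha
    mask = mask | mask.T
    np.fill_diagonal(mask, True)                     # self-loops on all vertices
    A = np.where(mask, eps, 0.0).astype(complex)
    # same number of +1 and -1 signs, placed at random positions
    signs = rng.permutation(np.r_[np.ones(N // 2), -np.ones(N // 2)])
    A += 1j * gamma * np.diag(signs)                 # balanced losses and gain
    return A

A = tb_random_network(200, 0.5, 0.1, rng=np.random.default_rng(1))
```

Because the ±iγ signs are shuffled over the (unordered) vertices, the balanced loss and gain is effectively random, as described above.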

Previous Work
As precedents, we can mention that we have already studied some spectral [16,17], eigenvector [16], and transport [5] properties of ER-type random networks with a special focus on universality, from a random matrix theory (RMT) point of view. Moreover, we have also performed scaling studies on other random network models, such as multilayer and multiplex networks [15,31] and random-geometric and random-rectangular graphs [32]. In particular, for ER fully-random networks, we have shown [16] that the average information entropy ⟨S⟩ (to be defined below) is a function of the average degree ξ = αN. Moreover, ⟨S⟩ describes the delocalization transition of the network model well: (i) for ξ ≲ 2, where ⟨S⟩ ≈ 0, the eigenvectors are practically localized; hence, the delocalization transition takes place around ξ ≈ 2 (for which ⟨S⟩ becomes larger than zero, meaning that the corresponding eigenvectors have more than one principal component), which is close to previous theoretical and numerical estimations [33][34][35][36]; and (ii) for ξ > 200, where ⟨S⟩ ≈ S_GOE ≈ ln(N/2.07), the eigenvectors are practically random and fully extended. Here, S_GOE is the entropy of the eigenvectors of the GOE, i.e., random eigenvectors with Gaussian-distributed amplitudes. Thus, the study of [16] provides a tool to predict the localization properties of the eigenvectors of ER-type random networks once the parameter ξ is known.
Thus, in Section 2, we study some eigenvector and eigenvalue properties of the ER tight-binding random networks with balanced losses and gain, corresponding to the non-Hermitian adjacency matrices of Equation (2), focusing on scaling and universality from an RMT point of view.
It is fair to say that there are several works in the literature that apply RMT approaches to the study of spectral and eigenvector properties of non-Hermitian sparse matrices, in some cases already applied to graphs or network models; see for example [37][38][39][40][41][42][43][44][45][46].

Scaling of Information Entropy
In order to characterize quantitatively the complexity, and in specific cases the fractality, of the normalized eigenvectors Ψ of random matrices (and of Hamiltonians corresponding to disordered and quantized chaotic systems), the Rényi entropies are widely used:

S_q = (1/(1 − q)) ln ∑_{n=1}^{N} (ρ_n)^q . (3)

Here, the subindex n refers to the nth eigenvector component, and the ρ_n ≡ |Ψ_n|² form the discrete probability distribution P = (ρ_1, ..., ρ_N) associated with the eigenvector Ψ (where |·| stands for the modulus of a complex number), with ρ_n ≥ 0 and ∑_{n=1}^{N} ρ_n = 1. In our study, we use the information entropy (given by Equation (3) in the limit q → 1):

S = −∑_{n=1}^{N} ρ_n ln ρ_n . (4)

Note that the minimal value of S, S = 0, is obtained when only one component of the eigenvector Ψ concentrates all the probability, while the maximal value, S = ln N, is approached when the probability is evenly distributed over the eigenvector: ρ_n = 1/N for all n. Any other configuration of probabilities ρ_n, including the eigenvectors of the GOE, gives 0 < S < ln N. Therefore, the exponential of S is known to be a good measure of eigenvector localization [47], since it provides the number of principal components of an eigenvector in a given basis. That is, when S = 0, the eigenvector has only one principal component, exp(S) = 1, so it is localized; while it is fully extended, exp(S) = N, when S = ln N. Here, we refer to the principal components of an eigenvector as the eigenvector components having the largest amplitudes. In fact, S has already been used to characterize the eigenvectors of adjacency matrices of several random network models (see some examples in [15,16,48–51]).
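A minimal sketch of the information entropy of Equation (4) and of exp(S) as a localization measure, assuming a NumPy-based implementation (the function name is ours):

```python
import numpy as np

def information_entropy(psi):
    """Shannon entropy S = -sum_n rho_n ln rho_n of a normalized eigenvector;
    exp(S) estimates the number of principal components."""
    rho = np.abs(psi) ** 2
    rho = rho / rho.sum()            # enforce sum_n rho_n = 1
    rho = rho[rho > 0]               # 0 * ln 0 -> 0 by convention
    return -np.sum(rho * np.log(rho))

N = 64
localized = np.zeros(N); localized[0] = 1.0   # one principal component
uniform = np.full(N, 1 / np.sqrt(N))          # evenly distributed probability
```

The two test vectors realize the extreme cases discussed above: S = 0 (hence exp(S) = 1) for the localized vector, and S = ln N (exp(S) = N) for the fully extended one.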
With Definition (4), when α = 0 for any γ ≥ 0, since the eigenvectors of the (diagonal) adjacency matrices of our random network model have only one non-vanishing component with magnitude equal to one, then S = 0. On the other hand, for α = 1 and γ = 0, the GOE is reproduced, and S = S_GOE; i.e., the random eigenvectors extend over the N available vertices in the network. We note that for α = 1 and γ > 0, our random network model does not reproduce the GOE and S ≠ S_GOE; however, since we observe that ⟨S⟩ ≈ S_GOE, we use S_GOE as the reference information entropy.
Below, we use exact numerical diagonalization to obtain the eigenvectors Ψ^m and eigenvalues λ_m (m = 1, ..., N) of the adjacency matrices of large ensembles of tight-binding random networks characterized by N, α, and γ. Then, we average over all eigenvectors of an ensemble of adjacency matrices of size N to compute ⟨S⟩. We have verified that our conclusions are not modified when we restrict the averages to a fraction of the eigenvectors around the band center, which is a prescription commonly used in RMT studies. In Figures 1 and 2, we show the average information entropy ⟨S⟩, normalized to S_GOE, as a function of the connectivity α for the adjacency matrices of ER tight-binding random networks with balanced losses and gain. We observe that the curves of ⟨S⟩/S_GOE, for any combination of N and γ, have a very similar functional form as a function of α: the curves ⟨S⟩/S_GOE show a smooth transition from approximately zero (localized regime) to approximately one (delocalized regime) when α increases from α ∼ 0 (mostly isolated vertices) to one (fully-connected graphs). From Figure 1, for fixed γ, we observe that the larger the network size N, the smaller the value of α needed to approach the delocalized regime. Furthermore, note that the curves of ⟨S⟩/S_GOE vs. α are shifted to the left on the α-axis for increasing N. This behavior is in accordance with the case γ = 0 reported in [16]. In contrast, for fixed N, the curves of ⟨S⟩/S_GOE vs. α are displaced to the right on the α-axis for increasing γ, as clearly seen in the insets of Figure 2. As a reference, we include the case γ = 0 as black full lines in all panels of Figure 2. Moreover, the fact that these curves, plotted in semi-log scale, are just shifted on the α-axis when tuning N or γ leads us to anticipate the existence of a scaling parameter that depends on both N and γ.
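The averaging procedure described above can be sketched as follows. This is an illustrative reimplementation under our own assumptions (Gaussian weights of unit variance on edges and self-loops, balanced ±iγ diagonal), not the authors' code, and it uses a small ensemble for speed:

```python
import numpy as np

def avg_entropy(N, alpha, gamma, realizations=10, rng=None):
    """Ensemble average <S> over all eigenvectors of matrices built
    in the spirit of Equation (2)."""
    rng = np.random.default_rng() if rng is None else rng
    S = []
    for _ in range(realizations):
        eps = rng.standard_normal((N, N))
        eps = np.triu(eps) + np.triu(eps, k=1).T
        mask = np.triu(rng.random((N, N)) < alpha, k=1)
        A = np.where(mask | mask.T | np.eye(N, dtype=bool), eps, 0).astype(complex)
        signs = rng.permutation(np.r_[np.ones(N // 2), -np.ones(N // 2)])
        A += 1j * gamma * np.diag(signs)
        _, vecs = np.linalg.eig(A)           # complex spectrum for gamma > 0
        rho = np.abs(vecs) ** 2
        rho /= rho.sum(axis=0)               # normalize each eigenvector
        with np.errstate(divide="ignore", invalid="ignore"):
            S.extend(np.nansum(-rho * np.log(rho), axis=0))
    return np.mean(S)

S_GOE = np.log(100 / 2.07)                   # reference entropy for N = 100
ratio_sparse = avg_entropy(100, 0.01, 0.1, rng=np.random.default_rng(2)) / S_GOE
ratio_dense = avg_entropy(100, 1.0, 0.1, rng=np.random.default_rng(3)) / S_GOE
```

For fixed N and γ, the ratio should grow from near zero toward one as α increases, reproducing the qualitative trend of Figures 1 and 2.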
In order to look for the scaling parameter, we first define a quantity to characterize the position of the curves ⟨S⟩/S_GOE on the α-axis: indeed, we choose the value of α, which we label α*, for which ⟨S⟩/S_GOE ≈ 0.5. Notice that α* characterizes the localization-to-delocalization transition of the eigenvectors of our network model. Then, in Figure 3a,b, we present the localization-to-delocalization transition point α* as a function of N and γ, respectively. On the one hand, the linear trend of the data (in log-log scale) in Figure 3a implies a power-law relation of the form:

α* = C N^δ . (5)

In fact, Equation (5) provides very good fittings to the data (the values of the fitting parameters are reported in Table 1). Note that δ ≈ −0.98 for all γ > 0, a slight difference from the case γ = 0, where δ ≈ −1; see also [16]. On the other hand, in Figure 3b, we plot the ratio α*/N^δ, with δ = −0.98, as a function of γ. With this, we already take into account the scaling stated in Equation (5), which, at the same time, allows us to examine the dependence of α* on γ more easily. Indeed, for γ > 0.4, we conclude that:

α*/N^δ ≈ (8 + γ)/4 . (6)

Table 1. Values of C and δ obtained from the fittings of the curves α* vs. N of Figure 3a with Equation (5).

Therefore, by plotting again the curves of ⟨S⟩/S_GOE, now as a function of the connectivity divided by the localization-to-delocalization transition point,

ξ ≡ α/α* ≈ 4α/[(8 + γ)N^δ] , (7)

we observe that the curves for different parameter combinations (N, γ) collapse on top of a universal curve; i.e., a curve that depends on the parameter ξ only; see Figure 4. This means that once the ratio ξ is fixed, no matter the graph size and the loss-and-gain strength, the information entropy of the eigenvectors is also fixed.
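The fitting and scaling steps can be illustrated with synthetic data. The transition points below are invented for the demonstration; only the log-log fitting procedure and the functional form of ξ follow the text:

```python
import numpy as np

# Synthetic transition points obeying alpha* = C * N**delta (Equation (5));
# the prefactor 2.0 and the exponent -0.98 are illustrative stand-ins.
Ns = np.array([250, 500, 1000, 2000])
alpha_star = 2.0 * Ns**-0.98

# A power law is a straight line in log-log scale, so a linear fit recovers
# the exponent delta (slope) and ln C (intercept).
delta, lnC = np.polyfit(np.log(Ns), np.log(alpha_star), 1)
C = np.exp(lnC)

def xi(alpha, N, gamma, delta=-0.98):
    """Scaling parameter xi = alpha / alpha*(N, gamma), using
    alpha*/N**delta ~ (8 + gamma)/4 for gamma > 0.4."""
    return 4 * alpha / ((8 + gamma) * N**delta)
```

By construction, ξ equals one exactly at the transition point α = α*.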

Eigenvalue Properties
Once we have found that ξ (see Equations (6) and (7)) is the parameter that scales the eigenvector properties (characterized by their information entropy) of our model of random networks with losses and gain, we surmise that other properties (e.g., spectral properties) of the network model may also be scaled by the same parameter. Thus, in the following, we validate our surmise by analyzing the corresponding eigenvalues.
Recall that for γ = 0, the adjacency matrices of our random network model are Hermitian and the corresponding spectra are real. For any γ > 0, the adjacency matrices become non-Hermitian and their eigenvalues λ are complex numbers. Now, in Figure 5, we show density plots (in the complex plane) of the eigenvalues λ of Erdős–Rényi tight-binding random networks with losses and gain for several parameter combinations. In this figure, we can clearly see the competition of the two main parameters of the model: the sparsity α and the loss-and-gain strength γ (for fixed N). On the one hand, for small α (i.e., mostly isolated vertices), the main diagonal of the adjacency matrices dominates, and the imaginary part of the corresponding eigenvalues is approximately equal to ±γ; see Figure 5 (left panels). That is, the eigenvalues form two thin clouds around ±iγ. On the other hand, for large α (i.e., highly-connected graphs), the density of off-diagonal elements of the adjacency matrices is also large, and the corresponding eigenvalues form a cloud centered at the origin of the complex plane that gets wider for increasing γ; see Figure 5 (right panels) with 0.001 < γ < 0.5. Moreover, for γ ≈ 1, this cloud splits into two clouds that separate further from the real axis for even larger values of γ; see Figure 5 (right panels) with γ > 0.5. It is remarkable that the cloud splits at γ ≈ 1, since this value corresponds to the super-radiance transition value reported for full random matrices [52], one-dimensional disordered tight-binding wires [26,27,53–55], and random many-body systems [56]. The super-radiance transition is a phase transition that occurs, as a function of the coupling strength, in quantum systems coupled to common decay channels; see, e.g., [20–23,57]. It was originally predicted by the Dicke model of super-radiance [58].
In very general terms, this transition occurs at a given coupling strength above which a number of internal states (eigenvalues) acquire decay widths (imaginary part of the eigenvalues) proportional to the coupling strength. Thus, even though a more detailed analysis is necessary, we can assume that the splitting of the density plots of eigenvalues in the complex plane at γ ≈ 1 (as observed in Figure 5) is a signature of the super-radiance transition in our tight-binding random network model.
Finally, for moderate values of α, as reported in Figure 5 (central panels), the combination of the two situations described above occurs: for small γ, the eigenvalues form three clouds in the complex plane, two thin ones close to ±iγ, and a third one with the center at the origin of the complex plane; for increasing γ, the middle cloud gets wider and splits into two clouds that, for large enough γ, merge with the thin clouds at ±iγ.
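The sparse-limit picture described above is easy to check numerically. The following illustrative snippet (our own construction of a matrix in the spirit of Equation (2), not the paper's code) verifies that for very small α, most eigenvalues have imaginary parts close to ±γ:

```python
import numpy as np

rng = np.random.default_rng(4)
N, alpha, gamma = 500, 1e-4, 1.0              # nearly diagonal matrix
eps = rng.standard_normal((N, N))
eps = np.triu(eps) + np.triu(eps, k=1).T      # symmetric Gaussian weights
mask = np.triu(rng.random((N, N)) < alpha, k=1)
A = np.where(mask | mask.T | np.eye(N, dtype=bool), eps, 0).astype(complex)
signs = rng.permutation(np.r_[np.ones(N // 2), -np.ones(N // 2)])
A += 1j * gamma * np.diag(signs)              # balanced losses and gain

lam = np.linalg.eigvals(A)
# two thin clouds around +/- i*gamma: |Im(lambda)| stays close to gamma
median_dev = np.median(np.abs(np.abs(lam.imag) - gamma))
```

With these parameters most vertices are isolated, so the spectrum is dominated by the self-loop terms and the median deviation is essentially zero.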
Notice that the panorama shown in Figure 5 (where networks of size N = 1000 were used), even though it is qualitatively valid for any N, will be shifted for different network sizes, as can be inferred from the information entropy of the eigenvectors reported in Figure 1. Moreover, the scaling analysis made in the previous subsection allowed us to define the scaling parameter ξ that fixes the eigenvector properties of our random network model, as shown in Figure 4. Therefore, in Figure 6 we present density plots of eigenvalues for three network sizes and increasing values of ξ (from top to bottom). It is clear from Figure 6 that once ξ is fixed, the density of eigenvalues in the complex plane is (statistically) the same for different parameter combinations. Thus, we validate that the eigenvalue properties of our model are also scaled with the parameter ξ.

Summary
In this paper, we have numerically studied the eigenvector and eigenvalue properties of the adjacency matrices of tight-binding random networks with balanced losses and gain. In particular, we focused on scaling and universality from a random matrix theory point of view. We would like to stress that even though we already have some previous experience with scaling studies of random network models (see, e.g., [5,[15][16][17]32]), this is the first time we apply this technique to non-Hermitian adjacency matrices.
Specifically, we have considered Erdős–Rényi tight-binding random networks with self-loops (where all non-vanishing adjacency matrix elements are Gaussian random variables) and added the imaginary term ±iγ to the self-loop weights of all vertices to emulate losses (iγ) and gain (−iγ). We assume balanced losses and gain, so we include the same number of positive and negative terms iγ. This requires the number of vertices in the network to be even. Thus, our random network model depends on three parameters: the network size N, the network connectivity α, and the losses-and-gain strength γ.
First, by the proper scaling analysis of the information entropy of the eigenvectors of the adjacency matrices of our random network model, we obtain ξ ≈ 4α/[(8 + γ)N^δ], with δ = −0.98; see Equations (6) and (7). Here, ξ ≡ ξ(N, α, γ) is the scaling parameter of the model; that is, for fixed ξ, the information entropy of the eigenvectors is also fixed; see Figure 4. Our analysis provides a way to predict the localization properties of the random networks with losses and gain: for ξ < 0.1, the eigenvectors are localized; the localization-to-delocalization transition occurs for 0.1 < ξ < 10; while for ξ > 10, the eigenvectors are extended. Moreover, recalling that in tight-binding systems a localization-to-delocalization transition implies an insulator-to-metal transition in the corresponding scattering setup, our results might be used to design the conduction properties of the tight-binding network, since tuning N, α, and γ could drive the network from a regime of localized eigenvectors (insulating regime), ξ < 0.1, to a regime of delocalized eigenvectors (metallic regime), ξ > 10.
Then, to extend the applicability of our findings, we demonstrated that for fixed ξ, the spectral properties (characterized by the position of the eigenvalues on the complex plane) of our network model are also universal; i.e., they do not depend on the specific values of the network parameters; see Figure 6.
We expect that our results may motivate further numerical as well as analytical efforts toward the understanding of networks with non-Hermitian adjacency matrices.