Article

Mean Hitting Time for Random Walks on a Class of Sparse Networks

1
School of Electronics Engineering and Computer Science, Peking University, NO. 5 Yiheyuan Road, Haidian District, Beijing 100871, China
2
Key Laboratory of High Confidence Software Technologies, Peking University, Beijing 100871, China
3
College of Mathematics and Statistics, Northwest Normal University, Lanzhou 730070, China
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Entropy 2022, 24(1), 34; https://doi.org/10.3390/e24010034
Submission received: 2 November 2021 / Revised: 9 December 2021 / Accepted: 21 December 2021 / Published: 24 December 2021
(This article belongs to the Special Issue New Trends in Random Walks)

Abstract

For random walks on a complex network, identifying the network configurations that provide optimal or suboptimal navigation efficiency is a meaningful research problem. It has been proven that the complete graph attains the exact minimal mean hitting time, which grows linearly with the network order. In this paper, we present a class of sparse networks $G(t)$ built by a graphic operation, whose dynamic behavior resembles that of the complete graph even though their topological properties differ. We show that $G(t)$ has the remarkable scale-free nature found in most real networks and give recursive relations for several related matrices of the studied network. Using the connections between random walks and electrical networks, three types of graph invariants are calculated: the regular Kirchhoff index, the M-Kirchhoff index and the A-Kirchhoff index. We derive the closed-form solution for the mean hitting time of $G(t)$, and our results show that its dominant scaling exhibits the same behavior as that of the complete graph. The result could be considered when designing networks with high navigation efficiency.

1. Introduction

A complex network is recognized as a powerful tool for revealing the mysteries of complex systems [1]. It is widely used in metabolic networks [2], software engineering [3], ecosystems [4] and so on. In addition to topological parameters of complex networks, such as the power-law degree distribution, the average path length and the clustering coefficient, random walks have also received widespread attention, because random walk theory can disclose dynamic processes on complex networks. As a key quantity of random walks, the hitting time is related to the mixing rate of an irreducible Markov chain, and it is also considered when calculating the expected mixing time of the Markov chain [5]. The hitting time can be used to measure the navigation efficiency of a network [6,7], and it occupies a core position in different disciplines, including mathematics, computer science, biology, physics, and control science and engineering [8,9,10,11,12].
Most of the previous research on random walks on complex networks focuses on two aspects: one studies networks whose nodes all follow identical walking rules [13,14], and the other studies random walks on heterogeneous, scale-free networks with a trap placed on the node of largest degree [15,16,17]. Since many real networks have a scale-free nature in which every node can act as a trap, we construct a deterministic network satisfying these restrictions to approximate real networks, which makes it easier to evaluate the dynamic processes on the network.
It has been proven that the complete graph has the minimum mean hitting time among all undirected networks, which shows that its propagation is quite efficient [18]. For the purpose of constructing highly efficient networks and controlling their trapping processes, it is necessary to explore and design networks with a small mean hitting time. Since most real networks are sparse, their average degree is much smaller than that of a complete graph. In this paper, we design and analyze a class of sparse networks with scale-free properties whose topological characteristics differ from those of the complete graph. We prove that the dominant scaling of their mean hitting time exhibits the same behavior as that of the complete graph, so these networks also achieve high navigation efficiency.
The remaining sections of this paper are organized as follows. In Section 2, we propose a graphic operation, design a class of sparse networks, and show their differences in several topological parameters, including average degree, degree distribution, clustering coefficient and diameter. In Section 3, we present some lemmas about electrical networks and random walks. In Section 4, we analytically obtain the closed-form solution of the mean hitting time according to the connection between the Kirchhoff index and the mean hitting time. In Section 5, we conclude our work with a concise narrative.

2. Topological Characteristics of the Network

Before proceeding, we propose a graphic operation called the rhombus operation and construct a network $G(t)$ by iterating it. Then, we compare the topological features of $G(t)$ and the complete graph $K_{N_t}$ of the same network order.
Rhombus operation: For a given edge $ij$ with two endnodes i and j, add two new nodes u and v on either side of this edge, and then connect the edges $ui$, $uj$, $vi$, $vj$, respectively. Figure 1a shows the process of a rhombus operation.
With this graphic operation in hand, the construction rule of the networks $G(t)$ is as follows. In the initial state, $t=0$, $G(0)$ is a single edge. For $t\ge1$, $G(t)$ is obtained from $G(t-1)$ by performing a rhombus operation on every edge of $G(t-1)$. Figure 1b,c illustrate the topological structures of $G(2)$ and $G(3)$.
The iterative construction allows us to analyze the relevant topological properties of the network precisely. Let $V(t)$ and $E(t)$ be the node set and edge set of $G(t)$; in more detail, the new node set and new edge set at time step t are denoted by $\bar{V}(t)$ and $\bar{E}(t)$, which means $V(t) = V(t-1)\cup\bar{V}(t)$, and a node belonging to $V(t-1)$ is called an old node. The numbers of nodes and edges of the network are denoted by $N_t$ and $E_t$, respectively. The recursion $E_t = 5E_{t-1}$ follows directly from the rhombus operation, and since $E_0 = 1$ we obtain $E_t = 5^t$. Additionally, we have $N_t = N_{t-1} + 2E_{t-1}$, so it is easy to verify that $N_t = N_0 + 2\sum_{i=0}^{t-1}E_i = (5^t+3)/2$. Let $k_i(t)$ be the degree at time t of a node i generated at iteration $t_i$; the degree of node i at time step t is 3 times its degree at the previous time step, that is, $k_i(t) = 3k_i(t-1)$.
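As a sanity check, the construction rule above can be simulated directly. The following sketch (our illustration, not code from the paper; all names are ours) iterates the rhombus operation and verifies the counts $E_t = 5^t$ and $N_t = (5^t+3)/2$, as well as the degree tripling of the initial nodes:

```python
# Sketch: iterate the rhombus operation starting from a single edge and
# check E_t = 5^t, N_t = (5^t + 3)/2 and the degree recursion k_i(t) = 3 k_i(t-1).
def rhombus_iteration(t):
    """Return (edge list, number of nodes) of G(t)."""
    edges, n = [(0, 1)], 2          # G(0) is a single edge on nodes 0, 1
    for _ in range(t):
        new_edges = []
        for (i, j) in edges:
            u, v = n, n + 1         # two new nodes per old edge
            n += 2
            # keep the old edge and add the four rhombus edges
            new_edges += [(i, j), (u, i), (u, j), (v, i), (v, j)]
        edges = new_edges
    return edges, n

for t in range(5):
    edges, n = rhombus_iteration(t)
    assert len(edges) == 5 ** t             # E_t = 5^t
    assert n == (5 ** t + 3) // 2           # N_t = (5^t + 3)/2
    deg0 = sum(1 for e in edges if 0 in e)
    assert deg0 == 3 ** t                   # initial node: degree 3^t
```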

2.1. Average Degree

Theorem 1.
For the network $G(t)$ with $N_t$ nodes and $E_t$ edges, the average degree of the network $G(t)$ is
$\langle k\rangle = \frac{2E_t}{N_t} = \frac{4\times5^t}{5^t+3} \to 4, \quad t\to\infty.$
When $t\to\infty$, the condition $E_t \ll N_t(N_t-1)/2$ clearly holds, so our network model is a sparse network according to the literature [19]. However, the complete graph is not sparse: for a complete graph $K_{N_t}$ with the same number of nodes, the average degree of $K_{N_t}$ is $N_t-1$, since the degree of each node is $N_t-1$.

2.2. Cumulative Degree Distribution

In real life, there are few fully connected networks like the complete graph. Most real networks exhibit a scale-free nature: nodes with a large degree are few, while nodes with a small degree constitute the majority of the network. A network is said to be scale-free when its cumulative degree distribution obeys $P_{cum}(k)\sim k^{1-\gamma}$ with $2<\gamma<3$, where the cumulative degree distribution $P_{cum}(k) = \sum_{k'\ge k}P(k')$ represents the probability that the degree of a node is equal to or greater than k, and $P(k)$ is the probability that a randomly selected node of the network $G(t)$ has k neighbors.
Theorem 2.
The cumulative degree distribution of the sparse network $G(t)$ obeys the power-law distribution
$P_{cum}(k) = k^{-\ln5/\ln3}, \qquad \gamma = 1+\ln5/\ln3.$
Proof. 
The degree of node i increases by a factor of 3 at every step, that is, $k_i(t+1) = 3k_i(t)$, which shows that the degree spectrum of $G(t)$ is discrete. In Table 1, we enumerate the degrees k and the number $n(k)$ of nodes with degree k; the cumulative degree distribution of $G(t)$ is then calculated by
$P_{cum}(k) = \sum_{k'\ge k}\frac{n(k')}{N_t} = \frac{5^{t_i}+3}{5^t+3} \sim k^{-\ln5/\ln3},$
where $t_i = t-\frac{\ln k-\ln2}{\ln3}$ has been substituted into the above formula; for large t, the cumulative degree distribution follows a power law $k^{1-\gamma}$ with exponent $\gamma = 1+\frac{\ln5}{\ln3}$. Therefore, we have proven that the network $G(t)$ is a scale-free network. On the other hand, all nodes of the complete graph $K_{N_t}$ have the same degree, so $K_{N_t}$ is not a scale-free network. It can be seen that our network is more suitable for simulating scale-free real networks. □
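The degree spectrum used in this proof can be checked numerically. The sketch below (our own helper, re-implementing the construction rule of Section 2) tabulates $n(k)$ and confirms that nodes born at step $t_i\ge1$ have degree $2\times3^{t-t_i}$ with multiplicity $2\times5^{t_i-1}$, while the two initial nodes have degree $3^t$:

```python
# Sketch: build G(t) by the rhombus operation and tabulate the degree spectrum.
from collections import Counter

def degree_spectrum(t):
    edges, n = [(0, 1)], 2
    for _ in range(t):
        new = []
        for (i, j) in edges:
            u, v = n, n + 1
            n += 2
            new += [(i, j), (u, i), (u, j), (v, i), (v, j)]
        edges = new
    deg = Counter()
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    return Counter(deg.values())     # map: degree k -> number of nodes n(k)

t = 4
nk = degree_spectrum(t)
assert nk[3 ** t] == 2                             # the two initial nodes
for ti in range(1, t + 1):                         # nodes born at step ti
    assert nk[2 * 3 ** (t - ti)] == 2 * 5 ** (ti - 1)
assert sum(nk.values()) == (5 ** t + 3) // 2       # all nodes accounted for
```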

2.3. Clustering Coefficient

The clustering coefficient describes the tightness of clumps between nodes in a graph; specifically, it measures how densely or sparsely the neighbors of each node are connected. The clustering coefficient $c_i$ of a node i is the ratio between the number $e_i$ of edges that actually exist among its $k_i$ nearest neighbors and the number $k_i(k_i-1)/2$ of all possible edges between them, expressed as $c_i = 2e_i/(k_i(k_i-1))$. The clustering coefficient $\bar{C}$ of the whole network is the average of $c_i$ over all nodes in the network, which can be written as $\bar{C} = \sum_{i\in V(t)}c_i/N_t$.
Theorem 3.
For the network $G(t)$ with $N_t$ nodes, the clustering coefficient of the whole network $G(t)$ is
$\bar{C} = \frac{2\times5^t-10\times3^{t-2}}{3^{2t-2}(5^t+3)} \to 0, \quad t\to\infty.$
Proof. 
Since nodes with the same degree in $G(t)$ also have the same clustering coefficient, let $n(k)$ be the number of nodes with degree k, and let $c(k)$ denote the clustering coefficient of each node with degree k. For each degree k, the clustering coefficient $c(k)$ and the corresponding number $n(k)$ of nodes are listed in Table 1, so the clustering coefficient $\bar{C}$ of the whole network $G(t)$ can be calculated as follows:
$\bar{C} = \sum_k c(k)\times\frac{n(k)}{N_t} = \frac{2}{3^t}\times\frac{4}{5^t+3}+\frac{4}{5^t+3}\sum_{t_i=1}^{t}\frac{5^{t_i-1}}{3^{t-t_i}} = \frac{2\times5^t-10\times3^{t-2}}{3^{2t-2}(5^t+3)}.$
Hence $\bar{C}\to0$ when $t\to\infty$, and $G(t)$ is not a highly clustered network. However, the clustering coefficient of the complete graph $K_{N_t}$ is equal to 1, so there is a significant difference between $G(t)$ and $K_{N_t}$ in this topological parameter. □

2.4. Diameter

The diameter of $G(t)$, denoted $D(G(t))$, is defined as the maximum of the shortest-path distances over all pairs of nodes in the network, and it is often used to characterize the longest communication delay in a complex network.
Theorem 4.
For $t\ge0$, the diameter of the sparse network $G(t)$ is $D(G(t)) = t+1$.
Proof. 
When $G(t)$ is small, we can enumerate its diameter directly: for $t=0$, $D(G(0))=1$. At time step $t=1$, the distance between the two new nodes is the longest, and a path between them must pass through an old node, so $D(G(1))=2$. For $t>1$, the diameter is realized by a pair of new nodes. For simplicity of description, we label a node generated at time $t_i$ simply by $t_i$; therefore, we only need to consider the maximum shortest distance between two nodes labeled t. According to the structure of $G(t)$, when $t=2$, the shortest path between two new nodes has the form $P_2 = 2-1-0-2$, which contains nodes generated at time steps 0, 1, 2. For $t\ge3$, the shortest path between two new nodes has the form $P_t = t-(t-1)-(t-2)-\cdots-3-1-0-2-t$. Hence, $D(G(t)) = t+1$ holds for all $t\ge0$, which shows that the diameter scales logarithmically with the network order. For the complete graph $K_{N_t}$ with the same number of nodes, it is well known that the diameter equals 1, which means that the diameter of $G(t)$ is larger than that of $K_{N_t}$. □
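Theorem 4 can be checked by brute force on small instances. The following sketch (ours; it rebuilds $G(t)$ from the construction rule of Section 2 and runs BFS from every node) confirms $D(G(t)) = t+1$ for the first few t:

```python
# Sketch: brute-force diameter of G(t) via breadth-first search.
from collections import deque

def diameter(t):
    edges, n = [(0, 1)], 2
    for _ in range(t):
        new = []
        for (i, j) in edges:
            u, v = n, n + 1
            n += 2
            new += [(i, j), (u, i), (u, j), (v, i), (v, j)]
        edges = new
    adj = [[] for _ in range(n)]
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    best = 0
    for s in range(n):                 # BFS from every node
        dist = [-1] * n
        dist[s] = 0
        q = deque([s])
        while q:
            x = q.popleft()
            for y in adj[x]:
                if dist[y] < 0:
                    dist[y] = dist[x] + 1
                    q.append(y)
        best = max(best, max(dist))
    return best

for t in range(4):
    assert diameter(t) == t + 1        # Theorem 4: D(G(t)) = t + 1
```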

3. Random Walks and Electrical Networks

In this section, we prepare to derive the closed-form solution for the mean hitting time of our networks. First, we introduce several notions and lemmas about electrical networks and random walks, and then we provide the relationship between the mean hitting time and the Kirchhoff index. The electrical network corresponding to a graph $G(t)$ is constructed by replacing each edge of $G(t)$ with a unit resistor; we still denote the resulting electrical network by $G(t)$. The effective resistance $\Omega_{ij}$ between any two distinct nodes $i,j\in V(t)$ is defined as the potential difference between them when a unit current is maintained from i to j; for $i=j$, we set $\Omega_{ij}=0$.
Lemma 1
([20]). For an electrical network $G(t)$ with $N_t$ nodes, the sum of the effective resistances over all pairs of adjacent nodes satisfies
$\sum_{i<j,\ (i,j)\in E(t)}\Omega_{ij} = N_t-1.$
Lemma 2
([21]). For any pair of distinct nodes i and j in an electrical network $G(t)$, let $k_i$ and $N(i)$ denote the degree of node i and its neighbor set; then the degrees and effective resistances satisfy the relationship
$k_i\Omega_{ij}+\sum_{s\in N(i)}\left(\Omega_{is}-\Omega_{js}\right) = 2.$
According to [22], the regular Kirchhoff index $K(G(t))$ of a network $G(t)$ is defined as the sum of the effective resistances over all (ordered) pairs of nodes in $G(t)$:
$K(G(t)) = \sum_{i,j\in V(t)}\Omega_{ij}.$
Taking into account the influence of the degrees on the Kirchhoff index, the M-Kirchhoff index $K^*(G(t))$ and the A-Kirchhoff index $K^+(G(t))$ have been proposed in [23,24], respectively, and they are given by
$K^*(G(t)) = \sum_{i,j\in V(t)}(d_i d_j)\,\Omega_{ij},$
and
$K^+(G(t)) = \sum_{i,j\in V(t)}(d_i+d_j)\,\Omega_{ij}.$
An unbiased discrete-time random walk means that a particle at its current location jumps to each of its neighboring nodes with equal probability at every time step [25]. The hitting time $T_{ij}$ of network $G(t)$ is a key quantity of random walks, defined as the expected time taken by a particle starting from node i to reach node j for the first time. The mean hitting time $\bar{T}(G(t))$ is the average of the hitting times over all node pairs [26], and it can be obtained from $K(G(t))$ together with the order and size of $G(t)$.
Lemma 3.
For a network $G(t)$ with $N_t$ nodes and $E_t$ edges, let $K(G(t))$ denote its regular Kirchhoff index; then the mean hitting time $\bar{T}(G(t))$ can be expressed as
$\bar{T}(G(t)) = \frac{E_t\cdot K(G(t))}{N_t(N_t-1)}.$
Proof. 
The Kirchhoff index $K(G(t))$ can be represented in terms of the $N_t-1$ non-zero eigenvalues $\lambda_2,\lambda_3,\dots,\lambda_{N_t}$ of the Laplacian matrix $L_t$ of network $G(t)$ as $K(G(t)) = 2N_t\sum_{i=2}^{N_t}\frac{1}{\lambda_i}$ [24]. The mean hitting time $\bar{T}(G(t))$ of network $G(t)$ is the average of the hitting times over all $N_t(N_t-1)$ ordered node pairs, expressed as $\bar{T}(G(t)) = \frac{1}{N_t(N_t-1)}\sum_{i=1}^{N_t}\sum_{j=1,\,j\ne i}^{N_t}H_{ij}$, where $H_{ij}$ is the hitting time from node i to another node j. In addition, $\bar{T}(G(t))$ can be expressed in terms of the non-zero eigenvalues of the Laplacian matrix $L_t$ [18] as $\bar{T}(G(t)) = \frac{2E_t}{N_t-1}\sum_{i=2}^{N_t}\frac{1}{\lambda_i}$. Combining the above equations, we obtain $\bar{T}(G(t)) = \frac{E_t\cdot K(G(t))}{N_t(N_t-1)}$. □
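Lemmas 1 and 3 can be verified numerically on the small network $G(1)$ (four nodes a, b, u, v with edges ab, au, bu, av, bv). The sketch below is our illustration, not code from the paper: it computes effective resistances from the grounded Laplacian, solves the standard linear equations for hitting times, and checks Foster's identity (Lemma 1) and the relation of Lemma 3, with $K(G(t))$ taken over ordered node pairs:

```python
# Sketch: verify Lemma 1 and Lemma 3 on G(1) with pure-Python linear solves.
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

edges = [(0, 1), (2, 0), (2, 1), (3, 0), (3, 1)]   # G(1): a=0, b=1, u=2, v=3
n, m = 4, len(edges)
L = [[0.0] * n for _ in range(n)]
for i, j in edges:
    L[i][i] += 1; L[j][j] += 1
    L[i][j] -= 1; L[j][i] -= 1

def resistance(i, j):
    # ground node 0 and solve the reduced Laplacian for a unit current i -> j
    A = [[L[r][c] for c in range(1, n)] for r in range(1, n)]
    b = [0.0] * (n - 1)
    if i: b[i - 1] += 1.0
    if j: b[j - 1] -= 1.0
    v = [0.0] + solve(A, b)
    return v[i] - v[j]

# Lemma 1 (Foster): resistances over the edges sum to N_t - 1
assert abs(sum(resistance(i, j) for i, j in edges) - (n - 1)) < 1e-9

# mean hitting time directly from the walk's linear equations
adj = [[] for _ in range(n)]
for i, j in edges:
    adj[i].append(j); adj[j].append(i)
total = 0.0
for tgt in range(n):
    idx = [i for i in range(n) if i != tgt]
    A = [[(len(adj[i]) if i == k else 0) - adj[i].count(k) for k in idx]
         for i in idx]
    h = solve(A, [float(len(adj[i])) for i in idx])
    total += sum(h)
T_bar = total / (n * (n - 1))
K = sum(resistance(i, j) for i in range(n) for j in range(n) if i != j)
assert abs(T_bar - m * K / (n * (n - 1))) < 1e-9   # Lemma 3
```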

3.1. Related Matrices

All nodes of a given network $G(t)$ are labeled $1,2,3,\dots,N_t$, and the adjacency relations between nodes and edges are encoded in the adjacency matrix $A_t = (a_{ij})_{N_t\times N_t}$, where $a_{ij}=1$ if nodes i and j are connected by an edge in the network and $a_{ij}=0$ otherwise. Let $D_t$ be the diagonal degree matrix of $G(t)$: its i-th diagonal entry is the degree $k_i(t)$ of node i, and the remaining entries are zero. The Laplacian matrix of $G(t)$ is defined as $L_t = D_t - A_t$.
We use $\alpha = V(t)$ and $\beta = \bar{V}(t+1)$ to abbreviate the old node set and the new node set of the network $G(t+1)$; the number of new nodes is $|\bar{V}(t+1)| = 2\times5^t$. Since the network $G(t+1)$ is generated iteratively from $G(t)$, we now give the recursive relationships between these matrices at two consecutive time steps. The adjacency matrix $A_{t+1}$ can be written in block form as
$A_{t+1} = \begin{pmatrix} A^{\alpha,\alpha}_{t+1} & A^{\alpha,\beta}_{t+1} \\ A^{\beta,\alpha}_{t+1} & A^{\beta,\beta}_{t+1} \end{pmatrix} = \begin{pmatrix} A_t & A^{\alpha,\beta}_{t+1} \\ A^{\beta,\alpha}_{t+1} & 0 \end{pmatrix},$
where $A^{\beta,\alpha}_{t+1} = (A^{\alpha,\beta}_{t+1})^T$ by the definition of the adjacency matrix, and $A^{\beta,\beta}_{t+1}$ is the zero matrix of order $|\bar{V}(t+1)|\times|\bar{V}(t+1)|$. On the other hand, the diagonal matrix $D_{t+1}$ satisfies
$D_{t+1} = \begin{pmatrix} D^{\alpha,\alpha}_{t+1} & 0 \\ 0 & D^{\beta,\beta}_{t+1} \end{pmatrix} = \begin{pmatrix} 3D_t & 0 \\ 0 & 2I \end{pmatrix},$
where the symbol $I$ denotes the identity matrix of order $|\bar{V}(t+1)|\times|\bar{V}(t+1)|$. Equation (13) is based on the fact that every node in the set $\beta$ has degree 2, while the degree of every node in the set $\alpha$ increases by a factor of 3. Thus, the Laplacian matrix $L_{t+1}$ of network $G(t+1)$ can be expressed as
$L_{t+1} = \begin{pmatrix} 3D_t-A_t & -A^{\alpha,\beta}_{t+1} \\ -A^{\beta,\alpha}_{t+1} & 2I \end{pmatrix}.$
Theorem 5.
For the sparse network $G(t+1)$ after $t+1$ time steps, we have $A^{\alpha,\beta}_{t+1}A^{\beta,\alpha}_{t+1} = 2D_t+2A_t$.
Proof. 
Denote the left side and right side of the equation $A^{\alpha,\beta}_{t+1}A^{\beta,\alpha}_{t+1} = 2D_t+2A_t$ by $\bar{M}_t$ and $M_t$, respectively; the entries of $M_t$ are
$M_t(i,j) = \begin{cases} 2k_i(t), & i=j; \\ 2A_t(i,j), & i\ne j. \end{cases}$
Our main task is to verify that the entries $\bar{M}_t(i,j)$ of $\bar{M}_t$ are equal to those of $M_t$. The matrix $A^{\beta,\alpha}_{t+1}$ can be partitioned into $N_t$ column vectors $x_i = (x_{i,N_t+1}, x_{i,N_t+2}, \dots, x_{i,N_{t+1}})^T$ for $i=1,2,\dots,N_t$, that is, $A^{\beta,\alpha}_{t+1} = (x_1,x_2,\dots,x_{N_t})$; since $A^{\alpha,\beta}_{t+1} = (A^{\beta,\alpha}_{t+1})^T$, the product of the two matrices is $A^{\alpha,\beta}_{t+1}A^{\beta,\alpha}_{t+1} = (x_i^T x_j)_{N_t\times N_t}$.
The entries $\bar{M}_t(i,j)$ of $\bar{M}_t$ can be determined by distinguishing two cases. (a) When $i=j$, the diagonal entry is $\bar{M}_t(i,i) = x_i^T x_i$, which counts the new nodes adjacent to node i; since node i gains $2k_i(t)$ new neighbors, we obtain $\bar{M}_t(i,i) = 2k_i(t) = M_t(i,i)$. (b) When $i\ne j$, the non-diagonal entry of the matrix $\bar{M}_t$ equals
$\bar{M}_t(i,j) = x_i^T x_j = \sum_{s\in\beta}x_{i,s}x_{j,s} = \sum_{s\in\beta:\,A_{t+1}(i,s)=1,\,A_{t+1}(j,s)=1}1 = 2A_t(i,j) = M_t(i,j),$
because each edge $(i,j)\in E(t)$ generates exactly two new nodes adjacent to both i and j. □
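Theorem 5 can be confirmed numerically. The following sketch (our code, with the $G(t)$ generator re-implemented from the construction rule of Section 2) computes the block product $A^{\alpha,\beta}_{t+1}A^{\beta,\alpha}_{t+1}$ entrywise and compares it with $2D_t+2A_t$:

```python
# Sketch: entrywise check of Theorem 5 for a small t.
def edges_of(t):
    edges, n = [(0, 1)], 2
    for _ in range(t):
        new = []
        for (i, j) in edges:
            u, v = n, n + 1
            n += 2
            new += [(i, j), (u, i), (u, j), (v, i), (v, j)]
        edges = new
    return edges, n

t = 2
old_edges, n_old = edges_of(t)
new_edges, n_new = edges_of(t + 1)

A_old = [[0] * n_old for _ in range(n_old)]
for i, j in old_edges:
    A_old[i][j] = A_old[j][i] = 1
deg_old = [sum(row) for row in A_old]

# cross block A^{alpha,beta}: rows = old nodes, columns = new nodes
B = [[0] * (n_new - n_old) for _ in range(n_old)]
for i, j in new_edges:
    i, j = min(i, j), max(i, j)
    if i < n_old <= j:                 # edge between an old and a new node
        B[i][j - n_old] = 1

for i in range(n_old):
    for j in range(n_old):
        prod = sum(B[i][s] * B[j][s] for s in range(n_new - n_old))
        expect = 2 * deg_old[i] if i == j else 2 * A_old[i][j]
        assert prod == expect          # Theorem 5: 2 D_t + 2 A_t
```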
Before going on, we introduce the concept of a $\{1\}$-inverse of a matrix [27]. A matrix M is called a $\{1\}$-inverse of X if $XMX = X$ holds; let $X^{(1)}$ denote one of the $\{1\}$-inverses of X. A lemma about the $\{1\}$-inverse of a block matrix is shown below.
Lemma 4
([18]). Let $P = \begin{pmatrix} X & Y \\ Y^T & Z \end{pmatrix}$ be a block matrix with Z nonsingular, and suppose there exists a $\{1\}$-inverse $D^{(1)}$ of $D = X-YZ^{-1}Y^T$; then a $\{1\}$-inverse of the matrix P is
$P^{(1)} = \begin{pmatrix} D^{(1)} & -D^{(1)}YZ^{-1} \\ -Z^{-1}Y^T D^{(1)} & Z^{-1}Y^T D^{(1)} Y Z^{-1}+Z^{-1} \end{pmatrix}.$

3.2. Effective Resistances

For a connected network $G(t)$, the effective resistance $\Omega_{ij}(t)$ between any pair of nodes can be obtained from the elements of any $\{1\}$-inverse of its Laplacian matrix, as stated in the following lemma.
Lemma 5
([28]). For a given $G(t)$, let $L^{(1)}_{ij}$ be the $(i,j)$-th element of any $\{1\}$-inverse $L^{(1)}_t$ of its Laplacian matrix $L_t$. For any two nodes $i,j\in V(t)$, the effective resistance $\Omega_{ij}(t)$ can be expressed in terms of the elements of $L^{(1)}_t$ as $\Omega_{ij}(t) = L^{(1)}_{ii}+L^{(1)}_{jj}-L^{(1)}_{ij}-L^{(1)}_{ji}$.
Next, we show that the effective resistance between any two nodes in $G(t+1)$ can be represented in terms of the effective resistances of node pairs in $G(t)$. In the following calculation, we distinguish old nodes from new nodes when investigating the effective resistance between them. To this end, we introduce some notation: define $\Omega_{X,Y}(t) = \sum_{i\in X,\,j\in Y}\Omega_{ij}(t)$ for any two subsets X and Y of the set $V(t)$ on $G(t)$. Then, for a node $i\in\bar{V}(t+1)$ in $G(t+1)$, we define $\Omega_{\Delta_i}(t+1) = \Omega_{ab}(t+1)$, where $\Delta_i = \{a,b\}$ is the neighbor set of node i and $a,b\in V(t)$.
Lemma 6.
For the effective resistances between node pairs in the network $G(t+1)$, the following propositions hold for $t\ge0$.
(1) Let $i,j\in V(t)$ be a pair of old nodes in $G(t+1)$; then $\Omega_{ij}(t+1)$ obeys the relation
$\Omega_{ij}(t+1) = \frac{1}{2}\Omega_{ij}(t).$
(2) Let $i\in\bar{V}(t+1)$ be a new node in network $G(t+1)$; then
$\Omega_{i,\Delta_i}(t+1) = 1+\frac{1}{2}\Omega_{\Delta_i}(t+1).$
(3) Let $i\in\bar{V}(t+1)$ and $j\in V(t)$ be a new node and an old node in network $G(t+1)$, respectively; then the following equation holds:
$\Omega_{ij}(t+1) = \frac{1}{2}\left(1-\frac{1}{2}\Omega_{\Delta_i}(t+1)+\Omega_{j,\Delta_i}(t+1)\right).$
(4) Let $i,j\in\bar{V}(t+1)$ be a pair of distinct new nodes in network $G(t+1)$; then $\Omega_{ij}(t+1)$ obeys
$\Omega_{ij}(t+1) = 1-\frac{1}{4}\left(\Omega_{\Delta_i}(t+1)+\Omega_{\Delta_j}(t+1)\right)+\frac{1}{4}\Omega_{\Delta_i,\Delta_j}(t+1).$
The detailed proof of Lemma 6 is given in Appendix A.

4. Mean Hitting Time

Based on the above preparations, we determine the mean hitting time of the network $G(t)$ using the connection between the mean hitting time and the Kirchhoff index. First, we calculate the exact solutions of three auxiliary quantities, namely $K_{X,Y}(t) = \sum_{i\in X,\,j\in Y}\Omega_{ij}(t)$, $K^*_{X,Y}(t) = \sum_{i\in X,\,j\in Y}k_i(t)k_j(t)\,\Omega_{ij}(t)$, and $K^+_{X,Y}(t) = \sum_{i\in X,\,j\in Y}\left(k_i(t)+k_j(t)\right)\Omega_{ij}(t)$ for two subsets X and Y of the set $V(t)$ in network $G(t)$, and give the relationships between them. The following lemmas support our main result.
Lemma 7.
For the network $G(t+1)$, let $i\in\bar{V}(t+1)$ be a new node and $j\in V(t)$ an old node, let $Y\subseteq V(t)$, and let $\Delta_i$ be the set of all neighbors of node i; then the following two summation formulas hold:
(a) $\sum_{i\in\bar{V}(t+1)}\Omega_{\Delta_i}(t+1) = N_t-1.$
(b) $\sum_{i\in\bar{V}(t+1)}\Omega_{\Delta_i,Y}(t+1) = \sum_{j\in V(t)}2k_j(t)\,\Omega_{j,Y}(t+1).$
Proof. 
(a) Since each edge of the network $G(t)$ generates two new nodes of the network $G(t+1)$, we have
$\sum_{i\in\bar{V}(t+1)}\Omega_{\Delta_i}(t+1) = 2\sum_{(s,r)\in E(t)}\Omega_{sr}(t+1) = 2\sum_{(s,r)\in E(t)}\frac{1}{2}\Omega_{sr}(t) = N_t-1,$
where the last equality follows from Lemma 1.
(b) For any old node $j\in V(t)$, there are $k_j(t+1)-k_j(t) = 2k_j(t)$ new nodes in $\bar{V}(t+1)$ adjacent to j, so $\Omega_{j,Y}(t+1)$ is summed $2k_j(t)$ times. □
Lemma 8.
The M-Kirchhoff index and the A-Kirchhoff index of our network $G(t)$ are equal to
$K^*_{V_t,V_t}(t) = -\frac{38}{15}\left(\frac{25}{2}\right)^t+\frac{14}{5}\times5^{2t}+\frac{26}{15}\times5^t,$
and
$K^+_{V_t,V_t}(t) = \frac{19}{9}\left(\frac{5}{2}\right)^t-\frac{19}{15}\left(\frac{25}{2}\right)^t+\frac{161}{90}\times5^{2t}+\frac{13}{15}\times5^t+\frac{1}{2}.$
The proof of Lemma 8 is given in Appendix B.
Theorem 6.
The regular Kirchhoff index of our network $G(t)$ is
$K_{V_t,V_t}(t) = \frac{95}{72}\left(\frac{1}{2}\right)^t+\frac{19}{36}\left(\frac{5}{2}\right)^t-\frac{19}{120}\left(\frac{25}{2}\right)^t+\frac{49}{180}\times5^{2t}+\frac{13}{45}\times5^t-\frac{1}{4}.$
Proof. 
Through the two Kirchhoff indexes obtained in Lemma 8, we deduce their relationship with the regular Kirchhoff index. Dividing all the nodes of the network $G(t+1)$ into new nodes and old nodes, $K_{V_{t+1},V_{t+1}}(t+1)$ is equal to
$K_{\alpha,\alpha}(t+1)+2K_{\alpha,\beta}(t+1)+K_{\beta,\beta}(t+1) = \frac{1}{2}K_{V_t,V_t}(t)+2K_{\alpha,\beta}(t+1)+\frac{1}{4}K^*_{\beta,\beta}(t+1) = \frac{1}{2}K_{V_t,V_t}(t)+\frac{1}{2}K^+_{V_t,V_t}(t)+\frac{1}{2}K^*_{V_t,V_t}(t)+\frac{35}{8}\times5^{2t}-\frac{3}{8} = \frac{1}{2}K_{V_t,V_t}(t)+\frac{19}{18}\left(\frac{5}{2}\right)^t-\frac{19}{10}\left(\frac{25}{2}\right)^t+\frac{2401}{360}\times5^{2t}+\frac{13}{10}\times5^t-\frac{1}{8}.$
Considering $K_{V_0,V_0}(0) = 2$ and plugging Equations (21) and (22) into Equation (24) yields the solution of the Kirchhoff index of network $G(t)$, as shown in Equation (23). □
Figure 2 shows a schematic diagram of the three types of Kirchhoff indexes of network $G(t)$. We are now ready to present the result for the mean hitting time $\bar{T}(G(t))$ of $G(t)$.
Theorem 7.
For $t\ge0$, the closed-form solution for the mean hitting time of network $G(t)$ is
$\bar{T}(G(t)) = \frac{1}{5^{2t}+4\times5^t+3}\left[\frac{95}{18}\left(\frac{5}{2}\right)^t+\frac{19}{9}\left(\frac{25}{2}\right)^t-\frac{19}{30}\left(\frac{125}{2}\right)^t+\frac{49}{45}\times5^{3t}+\frac{52}{45}\times5^{2t}-5^t\right],$
and thus $\bar{T}(G(t))\sim N_t$ for $t\to\infty$.
Proof. 
By Lemma 3 and the fact that the total number of edges of the whole network is $E_t = 5^t = 2N_t-3$, we have
$\bar{T}(G(t)) = \frac{2N_t-3}{N_t(N_t-1)}K_{V_t,V_t}(t).$
The result in Equation (25) follows by substituting Equation (23) into Equation (26). To express $\bar{T}(G(t))$ as a function of the network order $N_t$, observe that $t = \ln(2N_t-3)/\ln5$ from the exact value of $N_t$; hence, the mean hitting time $\bar{T}(G(t))$ of $G(t)$ can be written in terms of the network order as
$\bar{T}(G(t)) = \frac{2N_t-3}{N_t(N_t-1)}\left[\frac{95}{72}(2N_t-3)^{-\frac{\ln2}{\ln5}}+\frac{19}{36}(2N_t-3)^{\frac{\ln5-\ln2}{\ln5}}-\frac{19}{120}(2N_t-3)^{\frac{\ln25-\ln2}{\ln5}}+\frac{49}{180}(2N_t-3)^2+\frac{13}{45}(2N_t-3)-\frac{1}{4}\right].$
Therefore, for a large network ($t\to\infty$), we have
$\bar{T}(G(t))\approx\frac{98}{45}N_t,$
which shows that $\bar{T}(G(t))$ increases linearly with the total number of nodes in our network; the mean hitting time of random walks on $G(t)$ is thus similar to that of the complete graph, and both have high transmission efficiency. □

5. Conclusions

In this paper, we have presented a class of sparse networks $G(t)$ and pointed out the differences between $G(t)$ and the complete graph $K_{N_t}$ of the same order in several topological characteristics. The main differences are that $G(t)$ has a scale-free property while $K_{N_t}$ does not, the scale-free feature being a striking discovery in real complex systems, and that $K_{N_t}$ is dense while $G(t)$ is sparse; connectivity as tight as that of a complete graph is rarely achieved in real networks. It has been proven that the mean hitting time of the complete graph is minimal and increases linearly with the network order. Based on the relationship between the mean hitting time and the Kirchhoff index, we have calculated a closed-form solution for the mean hitting time of our network, and the result shows that its dominant scaling exhibits the same behavior as that of the complete graph. We hope that our work will be instructive for the design and construction of complex networks with efficient navigation.

Author Contributions

Conceptualization, B.Y. and J.S.; writing—original draft preparation, J.S. and X.W.; writing—review and editing, J.S. and B.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Plan under Grant No.2019YFA0706401; the National Natural Science Foundation of China under Grants No. 61632002, No.61872166, and No.61662066; and the National Natural Science Foundation of China Youth Project under Grants No.61902005, No.62002002.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. The Proof of Lemma 6

In Appendix A, we give the proofs of the four propositions in Lemma 6.
(1) The nodes are divided into two parts: the new node set is abbreviated as $\beta$, and the old node set is denoted by $\alpha$; then any $\{1\}$-inverse $L^{(1)}_{t+1}$ of the Laplacian matrix $L_{t+1}$ can be written in the block form
$L^{(1)}_{t+1} = \begin{pmatrix} L^{(1)}_{\alpha,\alpha} & L^{(1)}_{\alpha,\beta} \\ L^{(1)}_{\beta,\alpha} & L^{(1)}_{\beta,\beta} \end{pmatrix}.$
By Equation (14) and Lemma 4, applying the conditions in Theorem 5, we obtain the relationship between the block $L^{(1)}_{\alpha,\alpha}$ of the matrix $L^{(1)}_{t+1}$ and the matrix $L_t$:
$L^{(1)}_{\alpha,\alpha} = \left(3D_t-A_t-\left(-A^{\alpha,\beta}_{t+1}\right)(2I)^{-1}\left(-A^{\beta,\alpha}_{t+1}\right)\right)^{(1)} = \left(3D_t-A_t-\frac{1}{2}\left(2D_t+2A_t\right)\right)^{(1)} = (2L_t)^{(1)} = \frac{1}{2}L^{(1)}_t.$
According to Lemma 5 and Equation (A2), for $i,j\in V(t)$, we obtain the relationship between the effective resistances at two consecutive iterations for any pair of old nodes in network $G(t+1)$:
$\Omega_{ij}(t+1) = L^{(1)}_{\alpha,\alpha}(i,i)+L^{(1)}_{\alpha,\alpha}(j,j)-L^{(1)}_{\alpha,\alpha}(i,j)-L^{(1)}_{\alpha,\alpha}(j,i) = \frac{1}{2}\left(L^{(1)}_t(i,i)+L^{(1)}_t(j,j)-L^{(1)}_t(i,j)-L^{(1)}_t(j,i)\right) = \frac{1}{2}\Omega_{ij}(t).$
(2) For $i\in\bar{V}(t+1)$ with neighbor set $\Delta_i = \{a,b\}$, applying Lemma 2 to the two old neighboring nodes of node i gives the two equations $2\Omega_{ia}(t+1)+\Omega_{i,\Delta_i}(t+1)-\Omega_{a,\Delta_i}(t+1) = 2$ and $2\Omega_{ib}(t+1)+\Omega_{i,\Delta_i}(t+1)-\Omega_{b,\Delta_i}(t+1) = 2$. Adding these two equations together gives
$2\Omega_{i,\Delta_i}(t+1)+2\Omega_{i,\Delta_i}(t+1)-\Omega_{\Delta_i,\Delta_i}(t+1) = 4,$
which simplifies to
$\Omega_{i,\Delta_i}(t+1) = 1+\frac{1}{4}\Omega_{\Delta_i,\Delta_i}(t+1) = 1+\frac{1}{2}\Omega_{\Delta_i}(t+1).$
(3) For a given new node $i\in\bar{V}(t+1)$ and an old node $j\in V(t)$, Lemma 2 gives $k_i(t+1)\Omega_{ij}(t+1)+\Omega_{i,\Delta_i}(t+1)-\Omega_{j,\Delta_i}(t+1) = 2$; considering $k_i(t+1) = 2$ and proposition (2), we have
$\Omega_{ij}(t+1) = \frac{1}{2}\left(1-\frac{1}{2}\Omega_{\Delta_i}(t+1)+\Omega_{j,\Delta_i}(t+1)\right).$
(4) For two distinct new nodes $i,j\in\bar{V}(t+1)$, $i\ne j$, Lemma 2 again gives $k_i(t+1)\Omega_{ij}(t+1)+\Omega_{i,\Delta_i}(t+1)-\Omega_{j,\Delta_i}(t+1) = 2$; combining the condition $k_i(t+1) = 2$ with propositions (2) and (3), we obtain the following derivation:
$\Omega_{ij}(t+1) = \frac{1}{2}\left(2-\Omega_{i,\Delta_i}(t+1)+\Omega_{j,\Delta_i}(t+1)\right) = \frac{1}{2}\left(2-\Omega_{i,\Delta_i}(t+1)+\sum_{s\in\Delta_i}\Omega_{js}(t+1)\right) = \frac{1}{2}\left[1-\frac{1}{2}\Omega_{\Delta_i}(t+1)+\sum_{s\in\Delta_i}\frac{1}{2}\left(1-\frac{1}{2}\Omega_{\Delta_j}(t+1)+\Omega_{s,\Delta_j}(t+1)\right)\right] = 1-\frac{1}{4}\left(\Omega_{\Delta_i}(t+1)+\Omega_{\Delta_j}(t+1)\right)+\frac{1}{4}\Omega_{\Delta_i,\Delta_j}(t+1).$
As shown above, the four propositions in Lemma 6 have been proved.

Appendix B. The Proof of Lemma 8

As defined above, $\alpha$ and $\beta$ are the old node set and the new node set in network $G(t+1)$, and the M-Kirchhoff index can be decomposed as $K^*_{V_{t+1},V_{t+1}}(t+1) = K^*_{\alpha,\alpha}(t+1)+K^*_{\alpha,\beta}(t+1)+K^*_{\beta,\alpha}(t+1)+K^*_{\beta,\beta}(t+1)$. Because $K^*_{\alpha,\beta}(t+1)$ and $K^*_{\beta,\alpha}(t+1)$ are equal to each other, we only evaluate the three terms other than $K^*_{\beta,\alpha}(t+1)$ on the right side of the above equation. According to Lemma 6 (1), the first term can be written as
$K^*_{\alpha,\alpha}(t+1) = \sum_{i,j\in V(t)}k_i(t+1)k_j(t+1)\,\Omega_{ij}(t+1) = 3^2\times\frac{1}{2}\sum_{i,j\in V(t)}k_i(t)k_j(t)\,\Omega_{ij}(t) = \frac{9}{2}K^*_{V_t,V_t}(t).$
Referring to the expression of the effective resistance between a new node and an old node given in Lemma 6 (3), the second term can be investigated by a similar derivation:
$K^*_{\alpha,\beta}(t+1) = \sum_{i\in\bar{V}(t+1),\,j\in V(t)}k_i(t+1)k_j(t+1)\,\Omega_{ij}(t+1) = \sum_{i\in\bar{V}(t+1),\,j\in V(t)}2\left(3k_j(t)\right)\frac{1}{2}\left(1-\frac{1}{2}\Omega_{\Delta_i}(t+1)+\Omega_{\Delta_i,j}(t+1)\right) = \sum_{i\in\bar{V}(t+1)}\left(1-\frac{1}{2}\Omega_{\Delta_i}(t+1)\right)\left(\sum_{j\in V(t)}3k_j(t)\right)+3\sum_{i\in\bar{V}(t+1),\,j\in V(t)}k_j(t)\,\Omega_{\Delta_i,j}(t+1);
for a node $i\in\bar{V}(t+1)$, we note that $k_i(t+1) = 2$, and using the transformation given by Lemma 7, Equation (A9) is reformulated as
$K^*_{\alpha,\beta}(t+1) = 3\times2E_t\left(|\bar{V}(t+1)|-\frac{1}{2}(N_t-1)\right)+3\cdot2\sum_{i,j\in V(t)}k_i(t)k_j(t)\,\Omega_{ij}(t+1) = 3K^*_{V_t,V_t}(t)+\frac{21}{2}\times5^{2t}-\frac{3}{2}\times5^t.$
Considering the effective resistances between new nodes, we calculate the last term $K^*_{\beta,\beta}(t+1)$ by Lemma 6 (4); then $K^*_{\beta,\beta}(t+1)$ is equal to
$\sum_{i,j\in\bar{V}(t+1),\,i\ne j}k_i(t+1)k_j(t+1)\,\Omega_{ij}(t+1) = 4\sum_{i,j\in\bar{V}(t+1),\,i\ne j}\left[1-\frac{1}{4}\left(\Omega_{\Delta_i}(t+1)+\Omega_{\Delta_j}(t+1)\right)+\frac{1}{4}\Omega_{\Delta_i,\Delta_j}(t+1)\right] = 4|\bar{V}(t+1)|\left(|\bar{V}(t+1)|-1\right)-2\left(|\bar{V}(t+1)|-1\right)\sum_{i\in\bar{V}(t+1)}\Omega_{\Delta_i}(t+1)+\left(\sum_{i,j\in\bar{V}(t+1)}\Omega_{\Delta_i,\Delta_j}(t+1)-\sum_{i\in\bar{V}(t+1)}\Omega_{\Delta_i,\Delta_i}(t+1)\right).$
At the same time, according to the definition of equation Ω X , Y ( t ) = i X , j Y Ω i j ( t ) , the equation i V ¯ ( t + 1 ) Ω Δ i , Δ i ( t + 1 ) = 2 i V ¯ ( t + 1 ) Ω Δ i ( t + 1 ) can be easily obtained. Refer to the Lemma 7, we obtain
$$K^{*}_{\beta,\beta}(t+1)=4|\bar V(t+1)|\bigl(|\bar V(t+1)|-1\bigr)-2(N_t-1)|\bar V(t+1)|+4\sum_{x,y\in V_t}d_x(t)d_y(t)\Omega_{x,y}(t+1)=2K^{*}_{V_t,V_t}(t)+4|\bar V(t+1)|\bigl(|\bar V(t+1)|-1\bigr)-2|\bar V(t+1)|(N_t-1).$$
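As a sanity check of the new-new resistance formula used in this step, one can verify it exactly on the smallest case. The sketch below assumes, based on Figure 1a, that the rhombus operation attaches two new nodes to both endpoints of every edge while keeping the edge itself, and that $G(0)$ is a single edge (consistent with $K^{*}_{V_0,V_0}(0)=2$); this reading of the construction is our assumption, not quoted from the text.

```python
from fractions import Fraction

def effective_resistance(n, edges, i, j):
    """Exact effective resistance: ground node j, inject a unit current
    at node i, and solve the reduced Laplacian system over the rationals."""
    L = [[Fraction(0)] * n for _ in range(n)]
    for x, y in edges:
        L[x][x] += 1; L[y][y] += 1
        L[x][y] -= 1; L[y][x] -= 1
    idx = [v for v in range(n) if v != j]            # drop the grounded node
    A = [[L[r][c] for c in idx] for r in idx]
    b = [Fraction(int(r == i)) for r in idx]
    m = len(idx)
    for col in range(m):                             # Gauss-Jordan elimination
        piv = next(r for r in range(col, m) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]; b[col], b[piv] = b[piv], b[col]
        for r in range(m):
            if r != col and A[r][col] != 0:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * c for a, c in zip(A[r], A[col])]
                b[r] -= f * b[col]
    pot = {v: b[k] / A[k][k] for k, v in enumerate(idx)}
    return pot[i]                                    # potential at i = Omega_ij

# G(1) under the assumed rhombus operation on G(0): old nodes u=0, v=1
# (edge kept), new nodes a=2, b=3, each adjacent to both u and v.
G1 = (4, [(0, 1), (0, 2), (1, 2), (0, 3), (1, 3)])
omega_uv = effective_resistance(*G1, 0, 1)           # 1/2
omega_ab = effective_resistance(*G1, 2, 3)

# Lemma 6 (4) with Delta_a = Delta_b = {u, v} and Omega_{Da,Db} = 2*Omega_uv:
assert omega_ab == 1 - Fraction(1, 4) * (omega_uv + omega_uv) \
                     + Fraction(1, 4) * (2 * omega_uv)

# K*_{beta,beta}(1) over ordered new-node pairs matches the value predicted
# above: 2*K*(0) + 4|Vbar|(|Vbar|-1) - 2|Vbar|(N_0 - 1) = 4 + 8 - 4 = 8.
print(2 * (2 * 2) * omega_ab)  # 8
```

The direct computation on $G(1)$ agrees with the recursive expression, which gives some confidence in the assumed reading of the operation.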
Taking Equation (A12) into consideration and inserting the results of Equations (A8), (A10) and (A12) into $K^{*}_{V_{t+1},V_{t+1}}(t+1)=K^{*}_{\alpha,\alpha}(t+1)+K^{*}_{\alpha,\beta}(t+1)+K^{*}_{\beta,\alpha}(t+1)+K^{*}_{\beta,\beta}(t+1)$ yields
$$K^{*}_{V_{t+1},V_{t+1}}(t+1)=\frac{25}{2}K^{*}_{V_t,V_t}(t)+35\times 5^{2t}-13\times 5^{t},$$
which is a recursive formula for $K^{*}_{V_{t+1},V_{t+1}}(t+1)$. Together with the simple initial condition $K^{*}_{V_0,V_0}(0)=2$, this yields the desired expression for $K^{*}_{V_t,V_t}(t)$, as shown in Equation (21).
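This recursion can also be checked by brute force: build $G(t)$ explicitly and compute $K^{*}_{V_t,V_t}(t)$ from exact effective resistances. The sketch below again assumes the rhombus operation keeps every edge and attaches two new nodes to both of its endpoints, starting from a two-node $G(0)$; that construction is inferred from Figure 1a, not quoted from the text.

```python
from fractions import Fraction
from itertools import combinations

def mat_inv(A):
    """Exact Gauss-Jordan inverse of a square matrix of Fractions."""
    m = len(A)
    M = [row[:] + [Fraction(int(i == j)) for j in range(m)]
         for i, row in enumerate(A)]
    for col in range(m):
        piv = next(r for r in range(col, m) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        d = M[col][col]
        M[col] = [v / d for v in M[col]]
        for r in range(m):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [row[m:] for row in M]

def resistances(n, edges):
    """All-pairs effective resistances via the node-0-grounded Laplacian."""
    L = [[Fraction(0)] * n for _ in range(n)]
    for x, y in edges:
        L[x][x] += 1; L[y][y] += 1
        L[x][y] -= 1; L[y][x] -= 1
    G = mat_inv([row[1:] for row in L[1:]])
    def omega(i, j):   # assumes i < j
        if i == 0:
            return G[j - 1][j - 1]
        return G[i - 1][i - 1] + G[j - 1][j - 1] - 2 * G[i - 1][j - 1]
    return omega

def rhombus_step(n, edges):
    """Assumed rhombus operation: keep every edge (x, y) and attach two
    new nodes to both of its endpoints (cf. Figure 1a)."""
    new_edges = list(edges)
    for x, y in edges:
        for _ in range(2):
            new_edges += [(x, n), (y, n)]
            n += 1
    return n, new_edges

def m_kirchhoff(n, edges):
    """K*_{V_t,V_t}(t): sum of k_i k_j Omega_{i,j} over ordered pairs."""
    deg = [0] * n
    for x, y in edges:
        deg[x] += 1; deg[y] += 1
    omega = resistances(n, edges)
    return 2 * sum(deg[i] * deg[j] * omega(i, j)
                   for i, j in combinations(range(n), 2))

n, edges = 2, [(0, 1)]          # G(0): two nodes, one edge, K*(0) = 2
ks = []
for t in range(3):
    ks.append(m_kirchhoff(n, edges))
    n, edges = rhombus_step(n, edges)
print([float(k) for k in ks])   # [2.0, 47.0, 1397.5]

# The brute-force values satisfy the recursion derived above.
for t in range(2):
    assert ks[t + 1] == Fraction(25, 2) * ks[t] + 35 * 5**(2 * t) - 13 * 5**t
```

The agreement for $t\le 2$ supports both the recursion and the assumed construction.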
On the other hand, we consider the A-Kirchhoff index of the network, for which an analogous recursive relation links $K^{+}_{V_{t+1},V_{t+1}}(t+1)$ and $K^{+}_{V_t,V_t}(t)$. Obviously, $K^{+}_{V_{t+1},V_{t+1}}(t+1)=K^{+}_{\alpha,\alpha}(t+1)+2K^{+}_{\alpha,\beta}(t+1)+K^{+}_{\beta,\beta}(t+1)$, and the three terms of this sum satisfy the following relations:
$$K^{+}_{\alpha,\alpha}(t+1)=\frac{3}{2}K^{+}_{\alpha,\alpha}(t),\qquad K^{+}_{\beta,\beta}(t+1)=K^{*}_{\beta,\beta}(t+1),$$
$$2K^{+}_{\alpha,\beta}(t+1)=2\sum_{i\in \bar V(t+1),\,j\in V(t)}\bigl(2+k_j(t+1)\bigr)\Omega_{i,j}(t+1)=4K_{\alpha,\beta}(t+1)+K^{*}_{\alpha,\beta}(t+1).$$
The precise values of $K^{*}_{\beta,\beta}(t+1)$ and $K^{*}_{\alpha,\beta}(t+1)$ have been calculated above, so it remains to compute $K_{\alpha,\beta}(t+1)$. By Lemma 6 (3), we have
$$\begin{aligned}K_{\alpha,\beta}(t+1)&=\sum_{i\in \bar V(t+1),\,j\in V(t)}\frac{1}{2}\Bigl(1-\frac{1}{2}\Omega_{\Delta_i}(t+1)+\Omega_{j,\Delta_i}(t+1)\Bigr)\\&=\sum_{i\in \bar V(t+1),\,j\in V(t)}\frac{1}{2}\Bigl(1-\frac{1}{2}\Omega_{\Delta_i}(t+1)\Bigr)+\frac{1}{2}\sum_{i,j\in V(t)}k_i(t)\Omega_{i,j}(t)\\&=\frac{1}{4}K^{+}_{V_t,V_t}(t)+\frac{1}{2}N_t\Bigl(|\bar V(t+1)|-\frac{1}{2}(N_t-1)\Bigr),\end{aligned}$$
where the identity $\sum_{i,j\in V(t)}k_i(t)\Omega_{i,j}(t)=\frac{1}{2}\sum_{i,j\in V(t)}\bigl(k_i(t)+k_j(t)\bigr)\Omega_{i,j}(t)=\frac{1}{2}K^{+}_{V_t,V_t}(t)$ has been used. Combining the three term expressions obtained above leads to
$$K^{+}_{V_{t+1},V_{t+1}}(t+1)=\frac{5}{2}K^{+}_{V_t,V_t}(t)+5K^{*}_{V_t,V_t}(t)+\frac{105}{4}\times 5^{2t}-\frac{13}{2}\times 5^{t}-\frac{3}{4},
$$
where Equation (21) gives the exact value of $K^{*}_{V_t,V_t}(t)$. With the initial value $K^{+}_{V_0,V_0}(0)=4$, we obtain the solution for $K^{+}_{V_t,V_t}(t)$ shown in Equation (22).
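The two recursions, together with the initial values $K^{*}_{V_0,V_0}(0)=2$ and $K^{+}_{V_0,V_0}(0)=4$, determine both index sequences completely; iterating them in exact rational arithmetic gives a quick numerical cross-check of the closed forms in Equations (21) and (22). A minimal sketch:

```python
from fractions import Fraction

def kirchhoff_indices(T):
    """Iterate the coupled recursions derived above:
    K*(t+1) = (25/2) K*(t) + 35*5^(2t) - 13*5^t,          K*(0) = 2,
    K+(t+1) = (5/2) K+(t) + 5 K*(t)
              + (105/4)*5^(2t) - (13/2)*5^t - 3/4,        K+(0) = 4.
    Returns the list of pairs (K*(t), K+(t)) for t = 0, ..., T."""
    ks, kp = Fraction(2), Fraction(4)
    out = [(ks, kp)]
    for t in range(T):
        # Tuple assignment: the right-hand side uses the old K*(t) for the
        # K+ update, exactly as the recursion requires.
        ks, kp = (Fraction(25, 2) * ks + 35 * 5**(2 * t) - 13 * 5**t,
                  Fraction(5, 2) * kp + 5 * ks + Fraction(105, 4) * 5**(2 * t)
                  - Fraction(13, 2) * 5**t - Fraction(3, 4))
        out.append((ks, kp))
    return out

vals = kirchhoff_indices(3)
print([float(a) for a, _ in vals])  # K*: [2.0, 47.0, 1397.5, 39018.75]
print([float(b) for _, b in vals])  # K+: [4.0, 39.0, 955.5, 25619.25]
```

These values grow as $\Theta(25^t)=\Theta(N_t^2)$, in line with the dominant scaling discussed in the main text.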
Figure 1. (a) Illustration of the rhombus operation; (b) the network G ( t ) at time step t = 2; (c) the network G ( t ) at time step t = 3.
Figure 2. The schematic diagram of the three types of Kirchhoff indexes.
Table 1. The degree and clustering coefficient of the nodes in G ( t ).
t_i     k               n(k)            c(k)
0       3^t             2               2/3^t
1       2 × 3^(t−1)     2 × 5^0         1/3^(t−1)
2       2 × 3^(t−2)     2 × 5^1         1/3^(t−2)
t_i     2 × 3^(t−t_i)   2 × 5^(t_i−1)   1/3^(t−t_i)
t−1     2 × 3^1         2 × 5^(t−2)     1/3^1
t       2 × 3^0         2 × 5^(t−1)     1/3^0
Su, J.; Wang, X.; Yao, B. Mean Hitting Time for Random Walks on a Class of Sparse Networks. Entropy 2022, 24, 34. https://doi.org/10.3390/e24010034