Article

Some Properties of Stochastic Matrices and Non-Homogeneous Markov Chains Generated by Nonlinearities in the Resource Network Model

by
Liudmila Zhilyakova
*,†,
Vasily Koreshkov
and
Nadezhda Chaplinskaia
V. A. Trapeznikov Institute of Control Sciences, Russian Academy of Sciences, 65, Profsoyuznaya Street, 117997 Moscow, Russia
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2022, 10(21), 4095; https://doi.org/10.3390/math10214095
Submission received: 17 October 2022 / Revised: 28 October 2022 / Accepted: 30 October 2022 / Published: 3 November 2022
(This article belongs to the Special Issue Graph Theory and Applications)

Abstract:
The resource network is a non-linear threshold model where vertices exchange resource in infinite discrete time. The model is represented by a directed weighted graph. At each time step, all vertices send their resources along all output edges following one of two rules. For each vertex, the threshold value for changing the operation rule is equal to the total weight of its outgoing edges. If all vertices have resources less than their thresholds, the network is completely described by a homogeneous Markov chain. If at least one of the vertices has a resource above the threshold, the network is described by a non-homogeneous Markov chain. The purpose of this article is to describe and investigate non-homogeneous Markov chains generated by the resource network model. It is proven that they are strongly ergodic. In addition, stochastic matrices of a special form were studied. A number of new properties were revealed for them. The results obtained were generalized to arbitrary stochastic matrices.

1. Introduction

Many problems from different subject areas are formulated and solved using network models. The study of various dynamic processes in networks, both stochastic and deterministic, often requires the tools of homogeneous and non-homogeneous Markov chains, as well as operations with stochastic matrices. This article is devoted to the investigation of the resource network model with a special kind of graph topology. The problem of studying resource networks belongs to the intersection of several classes of diffusion models. The first class contains various random walk models [1]. This is a large field comprising random walks on lattices [2], on finite connected undirected graphs [3], random walks with local biases [4], etc. In [5], the problems of optimizing flows and finding the shortest paths were solved using an analysis of the large-deviation functions of random walks. Non-linear diffusion was considered in [6]. The non-linearity is of particular interest to us because the model presented in this article is also, in general, non-linear. This feature arises from threshold switching between the operation rules of vertices. Under certain conditions, a vertex switches to a non-linear rule and starts functioning similarly to the vertices in the chip-firing game model. This model was proposed quite a long time ago [7]; however, it has not lost its relevance at the present time [8,9,10]. The chip-firing game is often used to describe the sand pile or avalanche model [11] and similar processes of self-organized criticality [12,13,14,15,16].
In addition to describing physical diffusion processes in networks, similar models appear when simulating information processes, in particular the dynamics of opinions and reaching a consensus in multi-agent systems. At the present time, the classic DeGroot model [17] has many different modifications [18,19,20,21]. Homogeneous and non-homogeneous Markov chains are one of the main tools for describing multi-state systems, processes, and devices [22,23].
Studying such models, researchers are faced with the need to investigate some non-trivial properties of stochastic matrices [24,25,26]. In this paper, we study a resource network model of a special topology based on a non-ergodic graph. Some of its properties are also proven using stochastic matrices of a special form.
The resource network model is a non-linear diffusion model in which the vertices of a directed weighted graph distribute a dimensionless, infinitely divisible resource according to one of two rules, depending on the resource amount at every time step. Time in the model is discrete; all vertices operate in parallel. The weights of the edges denote their throughput capacities. The vertices can store an unlimited amount of resource. If, at time t, the resource amount in a vertex exceeds the total throughput of its outgoing arcs, this vertex sends the full throughput into each arc; otherwise, it gives out all of its resource, dividing it in proportion to the throughputs of the arcs. Non-linearity occurs in the model when the total resource exceeds the threshold value, which is why different vertices of the network begin to function according to different rules.
The resource network was first proposed in [27]. Since then, a theory of resource networks has developed; a brief overview can be found in [28]. That article also describes two two-threshold modifications of the standard model. Some other models based on the standard resource network have been developed by other research teams [29,30].

2. Preliminaries

2.1. Resource Network: Basic Definitions

The resource network is a non-linear flow model operating in discrete time. Vertices of a network synchronously redistribute some infinitely divisible resource. At each time step, every vertex sends the resource to all its neighbors according to one of two rules with threshold switching. The choice of the rule depends on the amount of the resource in the vertex. If the resource at the vertex is greater than the total throughput of its outgoing edges, it sends the full throughput to each edge; otherwise, the vertex gives away the entire resource, distributing it in proportion to the throughput of outgoing edges. A formal description of the operation rules is presented in Section 2.2. Vertices have unlimited capacities.
The structure of the network is given by a directed weighted graph $G=(V,E)$. The edges $e_{ij}=(v_i,v_j)\in E$ have time-constant non-negative weights $r_{ij}$ that determine the throughputs of the corresponding edges.
Matrix $R=(r_{ij})_{n\times n}$ is the throughput matrix, $r_{ij}\in\mathbb{R}_{+}$. If edge $e_{ij}$ exists, then $r_{ij}>0$; otherwise, $r_{ij}=0$.
The dynamic properties of the network are determined by the rules of resource redistribution, as well as the amount of the total resource and its distribution over the vertices.
Definition 1.
Resources $q_i(t)$ are nonnegative numbers assigned to vertices $v_i$, $i=\overline{1,n}$, changing in discrete time $t$. The state $Q(t)$ of a network at time step $t$ is a vector of resource values at each vertex:
$$Q(t)=(q_1(t),\ldots,q_n(t)).$$
Let $W$ be the total resource in a network. The network operation fulfills the conservation law: the resource neither flows in nor flows out:
$$\forall t\quad \sum_{i=1}^{n} q_i(t)=W.$$
Definition 2.
The flow $f_{ij}(t)$ is the resource leaving vertex $v_i$ along the edge $e_{ij}$ at time $t$.
$F(t)=(f_{ij}(t))_{n\times n}$ is the flow matrix at time $t$ (between $t$ and $t+1$). Since the throughputs of the edges are limited, $\forall t\ f_{ij}(t)\le r_{ij}$. Unlike the total resource, the total flow in the network is always limited by the throughputs of the edges.

2.2. Rules of Resource Distribution

First, we introduce two characteristics of the network vertices.
Definition 3.
The values:
$$r_i^{in}=\sum_{j=1}^{n} r_{ji} \quad \text{and} \quad r_i^{out}=\sum_{j=1}^{n} r_{ij}$$
are the total in- and out-throughputs of vertex $v_i$, respectively. A loop's throughput, if it exists, is included in both sums.
At time step $t$, vertex $v_i$ sends to adjacent vertex $v_j$ through edge $e_{ij}$ a resource amount $f_{ij}(t)$ equal to:
$$f_{ij}(t)=\begin{cases} r_{ij}, & \text{if } q_i(t)>r_i^{out} \text{ (Rule 1)};\\[2pt] \dfrac{r_{ij}}{r_i^{out}}\,q_i(t), & \text{if } q_i(t)\le r_i^{out} \text{ (Rule 2)}. \end{cases}\qquad(1)$$
In other words, if the resource amount of a vertex exceeds the total throughput of its outgoing edges, then it operates according to Rule 1. In this case, the flow in each edge is equal to its throughput: $f_{ij}(t)=r_{ij}$; in total, the vertex sends out its total output throughput:
$$\sum_{j=1}^{n} f_{ij}(t)=\sum_{j=1}^{n} r_{ij}=r_i^{out}.$$
If a vertex has an insufficient amount of resource, then, in accordance with Rule 2, it gives out its entire resource, distributing it among all outgoing edges in proportion to their throughputs. Then,
$$\sum_{j=1}^{n} f_{ij}(t)=q_i(t).$$
In accordance with Definition 3, we introduce similar concepts for the total input and output flows:
$$f_i^{out}(t)=\sum_{j=1}^{n} f_{ij}(t); \qquad f_i^{in}(t+1)=\sum_{j=1}^{n} f_{ji}(t), \quad f_i^{in}(0)=0.$$
It follows from Formula (1) that the output flow of vertex $v_i$ at time $t$ is
$$f_i^{out}(t)=\min\{q_i(t),\,r_i^{out}\}.\qquad(2)$$
Remark 1.
Note that, if $q_i(t)=r_i^{out}$, then applying either rule leads to the same result: $f_i^{out}(t)=r_i^{out}$. This property is used in Section 3.
The input and output flows of all vertices form the corresponding vectors:
$$F^{out}(t)=(f_1^{out}(t),\ldots,f_n^{out}(t)), \qquad F^{in}(t)=(f_1^{in}(t),\ldots,f_n^{in}(t)).$$
The operation of an arbitrary network with $n$ vertices is specified only by its initial state $Q(0)=(q_1(0),\ldots,q_n(0))$ and is given by the formula:
$$Q(t+1)=Q(t)-F^{out}(t)+F^{in}(t+1).\qquad(3)$$
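The one-step dynamics defined by Formulas (1) and (3) can be sketched in a few lines of code (a minimal illustration; the function name and the list-based matrix representation are ours, not the paper's):

```python
def step(Q, R):
    """One synchronous redistribution step: Q(t+1) = Q(t) - F_out(t) + F_in(t+1)."""
    n = len(Q)
    r_out = [sum(R[i]) for i in range(n)]
    F = [[0.0] * n for _ in range(n)]
    for i in range(n):
        if Q[i] > r_out[i]:                       # Rule 1: send the full throughput
            for j in range(n):
                F[i][j] = R[i][j]
        elif r_out[i] > 0:                        # Rule 2: send everything, proportionally
            for j in range(n):
                F[i][j] = R[i][j] / r_out[i] * Q[i]
    f_out = [sum(F[i]) for i in range(n)]
    f_in = [sum(F[i][j] for i in range(n)) for j in range(n)]
    return [Q[j] - f_out[j] + f_in[j] for j in range(n)]
```

Running this on the two-vertex network of Example 1 below with $Q(0)=(10,0)$ reproduces the stable state $(9,1)$.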
Definition 4.
Vector $Q^{*}=(q_1^{*},\ldots,q_n^{*})$, where
$$\lim_{t\to\infty} q_i(t)=q_i^{*},\quad i=\overline{1,n},$$
(if it exists) is called a limit state of the network.
If the limit state exists, it can be reached in finite time or asymptotically; this depends on the network topology and the initial resource distribution. The limit state is always steady. This means that, if $Q(t)=Q^{*}$, then $Q(t)=Q(t+1)=\cdots$
If the state $Q^{*}$ of a network is steady, then the flow is also steady.
$F^{*}=(f_{ij}^{*})_{n\times n}$ is the limit flow matrix.
It was proven in [28] that the limit state exists for all topologies of resource networks, excluding cyclic networks (networks in which the GCD of the lengths of all cycles is greater than 1).
Let us consider some properties of resource networks in more detail.

2.3. Basic Properties of Resource Networks

Here, we briefly summarize the results from [28] used in the current research.
We consider resource networks represented by strongly connected graphs in which the GCD of the lengths of all cycles is equal to 1.
A random walk on such a graph is described by a regular Markov chain. A regular Markov chain is one that has no transient sets and has a single ergodic set with only one cyclic class [31].
Resource networks based on such graphs are also called regular.
Definition 5.
A resource network is called non-symmetric if it contains at least one vertex $v_i$ for which $r_i^{out}\ne r_i^{in}$.
In fact, in a connected non-symmetric network, there is at least one pair of vertices $v_i$, $v_j$ that meet this condition; moreover, they satisfy inequalities of opposite signs: $r_i^{out}>r_i^{in}$ and $r_j^{out}<r_j^{in}$, or vice versa.
In this section, we consider regular non-symmetric networks.
Let the total network resource $W$ be so small that all vertices at each time step operate according to Rule 2. In this case, at every time step, each vertex gives out its entire resource (see Equation (1)). Then, the vertex resource consists only of the incoming flow, and Equation (3) reduces to the formula:
$$Q(t+1)=F^{in}(t+1).\qquad(4)$$
Combining Formulas (1) (for Rule 2) and (4), we obtain
$$q_j(t+1)=\sum_{i=1}^{n}\frac{r_{ij}}{r_i^{out}}\,q_i(t).$$
Thus, in matrix form, the network operation is described by the formula:
$$Q(t+1)=Q(t)R',\qquad(5)$$
where $R'$ is the stochastic matrix corresponding to the throughput matrix $R$:
$$R'=\begin{pmatrix}\frac{r_{11}}{r_1^{out}} & \cdots & \frac{r_{1n}}{r_1^{out}}\\ \vdots & \ddots & \vdots \\ \frac{r_{n1}}{r_n^{out}} & \cdots & \frac{r_{nn}}{r_n^{out}}\end{pmatrix},\qquad(6)$$
or $R'=D^{-1}R$, where $D=\operatorname{diag}(r_1^{out},\ldots,r_n^{out})$.
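The normalization $R'=D^{-1}R$ is a one-liner in code; a sketch (the helper name is ours):

```python
def stochastic_from_throughput(R):
    """Row-normalize the throughput matrix: R' = D^{-1} R with D = diag(r_i^out)."""
    return [[r_ij / sum(row) for r_ij in row] for row in R]
```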
Let matrix $R$ be such that, for any allocation of the total resource $W=1$, the whole network operates according to Rule 2 (a sufficient condition for this is the fulfillment of the inequality $r_{ij}\ge 1$ for all nonzero elements of matrix $R$).
Denote the vectors of the state at time step $t$ and of the limit state for $W=1$ by $Q_1(t)$ and $Q_1^{*}$, respectively.
In this case, the vector $Q_1(0)$ can be considered as a vector of the initial probability distribution. The network operation is described by Formula (5):
$$Q_1(t+1)=Q_1(t)R'.$$
Hence, the resource distribution process is described by a regular homogeneous Markov chain. Therefore, all the results valid for regular homogeneous Markov chains [31] and stochastic matrices [31,32] can be transferred to the resource network with $W=1$:
Proposition 1.
Let a regular network be given by matrix $R$ and let $R'$ be the corresponding stochastic matrix obtained by Formula (6); then:
1. 
The limit of the powers of matrix $R'$ exists:
$$\lim_{t\to\infty}\left(R'\right)^{t}=R^{*};$$
2. 
The limit state $Q_1^{*}$ exists and is unique for any initial state.
For an arbitrary state $Q_1(t)$, the equality holds:
$$Q_1(t)\,R^{*}=Q_1^{*};$$
3. 
The limit matrix $R^{*}$ consists of $n$ identical rows equal to vector $Q_1^{*}$:
$$R^{*}=\mathbf{1}\cdot Q_1^{*},$$
where $\mathbf{1}=(1,\ldots,1)^{T}$;
4. 
Vector $Q_1^{*}$ is the unique left eigenvector of matrices $R'$ and $R^{*}$ corresponding to eigenvalue $\lambda=1$:
$$Q_1^{*}R'=Q_1^{*};\qquad Q_1^{*}R^{*}=Q_1^{*}.$$
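Items 1, 2, and 4 of Proposition 1 can be checked numerically by iterating $Q_1(t+1)=Q_1(t)R'$ until convergence (a sketch; the function name, tolerance, and iteration cap are illustrative choices):

```python
def limit_state(R_st, q0, tol=1e-13, max_iter=10**6):
    """Iterate q <- q R'; for a regular chain this converges to Q_1^*."""
    q = list(q0)
    for _ in range(max_iter):
        nxt = [sum(q[i] * R_st[i][j] for i in range(len(q)))
               for j in range(len(q))]
        if max(abs(a - b) for a, b in zip(q, nxt)) < tol:
            return nxt
        q = nxt
    return q
```

For the stochastic matrix with rows $(1/2,1/2)$ and $(1,0)$, the iteration converges to $Q_1^{*}=(2/3,1/3)$, which indeed satisfies $Q_1^{*}R'=Q_1^{*}$.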
Corollary 1.
The limit flow exists and is unique for any initial state: $F_1^{in*}=(F_1^{out*})^{T}=Q_1^{*}$; $f_{sum}^{*}=1$.
Proposition 2.
Regular resource networks have a global characteristic: the threshold value of the total resource $W=T$:
  • If $W<T$, there exists a finite time $t'$ such that, for $t>t'$, all vertices operate according to Rule 2;
  • If $W=T$, all vertices will operate according to Rule 2, in finite time or asymptotically, depending on the initial resource allocation;
  • If $W>T$, there exists a finite time $t'$ such that, for $t>t'$, at least one vertex operates according to Rule 1.
The resource $W\le T$ is called small; the resource $W>T$ is called large.
Proposition 3.
In a regular resource network, the threshold value $T$ is unique and is expressed by the formula:
$$T=\min_{i=\overline{1,n}}\frac{r_i^{out}}{q_i^{1*}},\qquad(7)$$
where $q_i^{1*}$ are the coordinates of the limit state $Q_1^{*}$.
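Given $Q_1^{*}$, the threshold of Proposition 3 follows directly (a sketch; names are ours):

```python
def threshold(R, q1_star):
    """T = min_i r_i^out / q_i^{1*} (Proposition 3)."""
    return min(sum(R[i]) / q1_star[i] for i in range(len(R)))
```

For the two-vertex network used later in Example 1, $Q_1^{*}=(2/3,1/3)$ gives $T=3$.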
Corollary 2.
If $W<T$, then, from some time step $t'$ on, all the vertices give out their resource according to Rule 2, and all of the above results obtained for $W=1$ remain valid.
The limit state and flow vectors are unique for any initial state and can be found by the formulae:
$$Q^{*}=W\cdot Q_1^{*}; \qquad F^{in*}=(F^{out*})^{T}=W\cdot Q_1^{*}; \qquad f_{sum}^{*}=W.$$
Let $W=T$. Denote the limit state and flow vectors by $\tilde{Q}$, $\tilde{F}^{in}$, and $\tilde{F}^{out}$, respectively. Vector $\tilde{Q}$ exists and is unique for any regular network. The flow vectors $\tilde{F}^{in}$ and $\tilde{F}^{out}$ also exist.
$$\tilde{Q}=\tilde{F}^{in}=\left(\tilde{F}^{out}\right)^{T}=T\cdot Q_1^{*}.$$
If $W>T$, then, starting from some moment $t'$, some vertices will switch to Rule 1.
It was proven that the ability of a vertex to function stably according to Rule 1 depends on the topology and weights of the edges of the graph and does not depend on the initial resource allocation.
Definition 6.
Vertices capable of accumulating resource surpluses (resources in excess of T) are called attractors.
A criterion for the attractiveness of vertices was formulated and proven [28]:
Proposition 4.
The vertex $v_k$ of a regular resource network is an attractor iff
$$k=\arg\min_{i=\overline{1,n}}\frac{r_i^{out}}{q_i^{1*}}.$$
Let a non-symmetric network have $l$ attractors, $1\le l<n$, and let these attractors have numbers from 1 to $l$. Then, the limit vectors for $W=T$ are:
$$\tilde{Q}=\tilde{F}^{in}=\left(\tilde{F}^{out}\right)^{T}=(r_1^{out},\ldots,r_l^{out},\tilde{q}_{l+1},\ldots,\tilde{q}_n); \quad \tilde{q}_k<r_k^{out},\ k=\overline{l+1,n}.\qquad(8)$$
For any $W>T$, the $n-l$ last coordinates $\tilde{q}_k$ of a limit state vector do not change. The surplus $\Delta W=W-T$ is distributed among the attractors:
$$Q^{*}=(r_1^{out}+\Delta w_1,\ldots,r_l^{out}+\Delta w_l,\tilde{q}_{l+1},\ldots,\tilde{q}_n); \quad \Delta w_1+\cdots+\Delta w_l=\Delta W.\qquad(9)$$
The vectors of the limit flow remain the same as for $W=T$ (8):
$$\tilde{F}^{in}=\left(\tilde{F}^{out}\right)^{T}=(r_1^{out},\ldots,r_l^{out},\tilde{q}_{l+1},\ldots,\tilde{q}_n).$$

3. Resource Networks with Large Resource and Non-Homogeneous Markov Chains

When $W>T$, due to the operation of some vertices according to Rule 1, the evolution of the state vector ceases to be described by a homogeneous Markov chain. However, the evolution of the flows still remains within the framework of a homogeneous Markov model. Therefore, for the sake of simplicity, previous works investigated not states but flows in the case of a large resource.
Here, we propose a method for studying the functioning of resource networks with a large resource using non-homogeneous Markov chains, i.e., Markov chains with dynamically changing stochastic matrices, and obtain new results.
Let us consider an example of the functioning of a network with a large resource.
Example 1.
Let the resource network be determined by the weighted graph with two vertices shown in Figure 1, with the throughput matrix $R$:
$$R=\begin{pmatrix}1 & 1\\ 1 & 0\end{pmatrix}.$$
The initial state vector is $Q(0)=(10,0)$.
The functioning of this network is the following:
$t=0$: $Q(0)=(10,0)$;
$t=1$: $Q(1)=(9,1)$;
$t=2$: $Q(2)=(9,1)$.
Thus, the stable state of the network is $Q^{*}=(9,1)$.
Let us analyze the law of state change at the first steps and create the sequence of stochastic matrices defining a non-homogeneous Markov chain that describes it. Consider the first vertex at the initial state: $q_1(0)=10>2=r_1^{out}$. This means that the vertex $v_1$ at time step $t=0$ sends a resource equal to 1 into each of its two outgoing edges (one of them is a loop), according to Rule 1, and keeps the rest of the resource, equal to 8, in itself. The vertex $v_2$ does not transfer anything, since $q_2(0)=0$. As a result, we obtain the state $Q(1)$; however, the same state $Q(1)$ would be obtained if the loop of the vertex $v_1$ had an arbitrary throughput in the interval $[0,9]$. If $r_{11}\in[0,9]$, then $Q(1)=(9,1)$ remains the same. If $r_{11}>9$, then $Q(1)$ will change because the vertex $v_1$ will switch from Rule 1 to Rule 2 and will distribute its resource in a different way. Therefore, the value $r_{11}(0)=9$ is a threshold value of the first vertex for the resource amount $q_1(0)=10$ with a network topology determined by $R$.
Consider the new throughput matrix $R(0)$:
$$R(0)=\begin{pmatrix}9 & 1\\ 1 & 0\end{pmatrix}.$$
As $r_{11}(0)=9$, we have $r_1^{out}(0)=10$ and $q_1(0)=r_1^{out}(0)$, so the vertex $v_1$ uses Rule 2 and sends out its entire resource. The vertex $v_2$ also uses Rule 2, which means that the next state $Q(1)$ is obtained using Formula (5) with the stochastic throughput matrix $R'(0)$:
$$Q(1)=Q(0)R'(0)=(10,0)\begin{pmatrix}\frac{9}{10} & \frac{1}{10}\\ 1 & 0\end{pmatrix}=(9,1).$$
At the next time step, to preserve operation according to Rule 2 (i.e., the equality $q_1(1)=r_1^{out}(1)$), the loop throughput of vertex $v_1$ needs to be decreased by 1:
$$R(1)=\begin{pmatrix}8 & 1\\ 1 & 0\end{pmatrix}.$$
Then, the next state $Q(2)$ can be calculated using the same formula with the new stochastic throughput matrix $R'(1)$:
$$Q(2)=Q(1)R'(1)=(9,1)\begin{pmatrix}\frac{8}{9} & \frac{1}{9}\\ 1 & 0\end{pmatrix}=(9,1).$$
As previously, the stable state of the resource network is $Q^{*}=(9,1)$.
We have that
$$Q^{*}=Q(0)R^{*},$$
where
$$R^{*}=R'(0)R'(1)=\begin{pmatrix}\frac{9}{10} & \frac{1}{10}\\ 1 & 0\end{pmatrix}\begin{pmatrix}\frac{8}{9} & \frac{1}{9}\\ 1 & 0\end{pmatrix}=\begin{pmatrix}\frac{9}{10} & \frac{1}{10}\\ \frac{8}{9} & \frac{1}{9}\end{pmatrix}.$$
$Q^{*}$ is an equilibrium vector. However, the matrix $R^{*}$ itself is not a limit matrix of the (corresponding homogeneous) Markov chain. It is easy to verify this by considering its powers.
At the same time, $\forall t>1$, $R'(t)=R'(1)$; hence, $Q(t+1)=Q(t)R'(1)$ and $Q^{*}=Q^{*}R'(1)$.
On the other hand,
$$Q^{*}=Q(0)R'(0)R'(1)R'(1)R'(1)\cdots=Q(0)R'(0)\left(R'(1)\right)^{\infty}.$$
Matrix $\left(R'(1)\right)^{\infty}$ is the limit of the powers of a regular stochastic matrix. By definition, it consists of identical rows equal to the limit vector of the probability distribution. This vector, in turn, is the normalized equilibrium vector $Q^{*}$, i.e., it is equal to $\left(\frac{9}{10},\frac{1}{10}\right)$, and
$$\left(R'(1)\right)^{\infty}=\begin{pmatrix}\frac{9}{10} & \frac{1}{10}\\ \frac{9}{10} & \frac{1}{10}\end{pmatrix}.$$
By the property of the limit matrix of a homogeneous Markov chain:
$$R'(0)\left(R'(1)\right)^{\infty}=\left(R'(1)\right)^{\infty}$$
and
$$Q^{*}=Q(0)\left(R'(1)\right)^{\infty}.$$
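The chain of computations in Example 1 is easy to replay numerically (a sketch; `vecmat` is our helper, not part of the model):

```python
def vecmat(q, M):
    """Row vector times matrix."""
    return [sum(q[i] * M[i][j] for i in range(len(q))) for j in range(len(M[0]))]

R0 = [[9/10, 1/10], [1.0, 0.0]]   # stochastic matrix R'(0)
R1 = [[8/9, 1/9], [1.0, 0.0]]     # stochastic matrix R'(1)

Q1 = vecmat([10.0, 0.0], R0)      # Q(1)
Q2 = vecmat(Q1, R1)               # Q(2); both equal (9, 1) up to rounding
```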
Remark 2.
The network operation in the example is very simple. The limit state is reached in one step. However, the behavior of resource networks with a large resource, in the general case, is rather complicated. The corresponding statements can be read in [28]; examples of the operation of networks with different topologies can be found in [33].
This example shows that the functioning of a resource network with vertices holding a large resource amount can be considered as a non-homogeneous Markov chain [34].
In the general case, this transformation for the resource network with the initial state vector $Q(0)$ and the throughput matrix $R$ can be written as follows:
$$R\to R(t)=\{r_{ij}(t)\}_{i,j=\overline{1,n}},$$
$$r_{ij}(t)=\begin{cases} r_{ii}+\left(q_i(t)-r_i^{out}\right), & \text{if } q_i(t)>r_i^{out} \text{ and } i=j;\\ r_{ij}, & \text{otherwise}. \end{cases}$$
Here, $\forall t\ge 0$, $Q(t+1)=Q(t)R'(t)$, or
$$Q(t+1)=Q(0)\prod_{i=0}^{t}R'(i).$$
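The diagonal transformation above (absorbing a Rule-1 vertex's surplus into its loop) can be sketched as follows (the function name is ours):

```python
def nonhomogeneous_matrix(R, Q):
    """Build R(t): add the surplus q_i(t) - r_i^out to the loop of every Rule-1 vertex."""
    n = len(R)
    Rt = [row[:] for row in R]
    for i in range(n):
        r_out = sum(R[i])
        if Q[i] > r_out:
            Rt[i][i] = R[i][i] + (Q[i] - r_out)
    return Rt
```

For the network of Example 1, `nonhomogeneous_matrix([[1, 1], [1, 0]], [10, 0])` returns the matrix $R(0)$ with the loop throughput raised to 9.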
In the example above, the non-homogeneous Markov chain was strongly ergodic (see [34,35]):
Definition 7.
A non-homogeneous Markov chain with stochastic matrices $P(i)$ is called strongly ergodic if
$$\lim_{t\to\infty}\prod_{i=1}^{t}P(i)=\mathbf{1}\cdot\pi^{T},$$
where $\mathbf{1}$ is a column vector of $n$ ones and $\pi$ is some probability vector.
In general, if the limit state and the limit matrix exist, they can be found as
$$Q^{*}=Q(0)\lim_{t\to\infty}\prod_{i=0}^{t}R'(i)=Q(0)R^{\infty},$$
where $R^{\infty}=\mathbf{1}\cdot Q_1^{**}$ ($Q_1^{**}$ is some vector of the distribution of the resource $W=1$).
This equality holds for all strongly ergodic non-homogeneous Markov chains.
Let us turn to the study of strong ergodicity and the description of the limit characteristics of resource networks with the transient and regular components.

4. Study of Processes in Transient Components of Two-Component Resource Networks

Consider a resource network with two strongly connected components, such that there are only edges from the first one (called the transient component, with $n_1$ vertices) to the second one (called the final component, with $n_2$ vertices). Let $n$ be the total number of vertices in the network (Figure 2).
Theorem 1.
There exists a finite time moment $t'$ such that, for $t>t'$, all vertices of the transient component operate according to Rule 2.
Proof. 
There must always exist at least one vertex (say, $v_s$) in the transient component that has an edge to the final component; otherwise, the network graph is not even weakly connected, and its weakly connected components can be analyzed independently.
Suppose there is an excessive amount of resource in an arbitrary vertex $v_k$ of the transient component at time step $t$. Since the transient component is strongly connected, there exists a simple path $v_k=v_{i_0}\to v_{i_1}\to\cdots\to v_{i_p}=v_s$. In that case, a resource of at least $c_1^k=r_{i_0 i_1}$ will be transferred from $v_{i_0}$ to $v_{i_1}$ at the time moment $t+1$; this means vertex $v_{i_1}$ will contain at least $c_1^k$ resource. Next, at the moment $t+2$, vertex $v_{i_1}$ will transfer at least $c_2^k=\min\{r_{i_1 i_2},\frac{r_{i_1 i_2}}{r_{i_1}^{out}}c_1^k\}$ resource to the vertex $v_{i_2}$; thus, at the moment $t+2$, vertex $v_{i_2}$ will contain at least $c_2^k$ resource. Continuing this process, we obtain that vertex $v_{i_p}$ will contain at least $c_p^k$ resource at the moment $t+p$. Finally, at least $c_{fin}^k=\min\{r_{s\,fin},\frac{r_{s\,fin}}{r_s^{out}}c_p^k\}$ resource will go to the final component at the moment $t+p+1$ (where $r_{s\,fin}$ is the throughput of some edge from $v_s$ to a vertex in the final component). Therefore, if there was excessive resource in vertex $v_k$ at moment $t$, then, during the time period $[t,t+n_1]$ (more precisely, $[t,t+p+1]$), at least $c_{fin}^k$ resource will be transferred to the final component. It is important to note that $c_{fin}^k$ depends on nothing but the fact that there was an excessive amount of resource in vertex $v_k$.
Similar reasoning can be applied to any other vertex $v_l$ with excessive resource. Let us define $\varepsilon=\min_{l\in\overline{1,n_1}}c_{fin}^{l}>0$. Then, the following holds: if, at some time moment $t$, an arbitrary vertex contained an excessive amount of resource, then, by the time moment $t+n_1$, at least $\varepsilon$ resource will have been transferred to the final component.
The proof now goes by contradiction. Suppose there is no moment $t'$ from which the transient component operates according to Rule 2 all the time. In other words, suppose there exists an infinite sequence of time moments $\{t_1,t_2,\ldots\}$ such that at least one vertex in the transient component contains excessive resource at each of these moments. Without loss of generality, assume that $\forall i>0$, $t_{i+1}-t_i>n_1$. Hence, at least $\varepsilon$ resource will be transferred to the final component during each of the time intervals $\{[t_1,t_1+n_1],[t_2,t_2+n_1],\ldots\}$. The total amount of transferred resource is then at least $\sum_{i=1}^{\infty}\varepsilon=\infty$. The contradiction is now obvious, since the resource network contains only a finite amount of resource by definition. □
Theorem 1 states that every possible resource distribution in the transient component sooner or later reduces to the linear case. The following theorem shows how the network behaves in the linear case.
Theorem 2.
If the entire two-component network operates according to Rule 2, then the total amount of resource in the transient component tends to zero as time goes to infinity.
Proof. 
Let the network be defined by the matrix:
$$R=\begin{pmatrix}R_1 & R_2\\ 0 & R_3\end{pmatrix},$$
where $R_1\in\mathbb{R}^{n_1\times n_1}$ and $R_3\in\mathbb{R}^{n_2\times n_2}$ are nonzero square blocks corresponding to the edges of the transient and the final components, respectively, and $R_2\in\mathbb{R}^{n_1\times n_2}$ is a rectangular block corresponding to the edges from the transient to the final component. The corresponding stochastic matrix has the following form:
$$R'=\begin{pmatrix}R_1' & R_2'\\ 0 & R_3'\end{pmatrix}.$$
Whenever the network operates according to Rule 2, its next state is obtained according to Formula (5).
If the network operates according to Rule 2 for every $t\ge 0$, then the following holds:
$$Q(t+1)=Q(0)\left(R'\right)^{t+1}.$$
Passing to the limit as $t\to\infty$ (this limit always exists, since the final component is regular):
$$Q^{*}=Q(0)\left(R'\right)^{\infty}.\qquad(12)$$
Consider the powers of matrix $R'$:
$$\left(R'\right)^{2}=\begin{pmatrix}R_1' & R_2'\\ 0 & R_3'\end{pmatrix}\begin{pmatrix}R_1' & R_2'\\ 0 & R_3'\end{pmatrix}=\begin{pmatrix}(R_1')^{2} & R_1'R_2'+R_2'R_3'\\ 0 & (R_3')^{2}\end{pmatrix},\qquad(13)$$
$$\left(R'\right)^{3}=\begin{pmatrix}R_1' & R_2'\\ 0 & R_3'\end{pmatrix}\begin{pmatrix}(R_1')^{2} & R_1'R_2'+R_2'R_3'\\ 0 & (R_3')^{2}\end{pmatrix}=\begin{pmatrix}(R_1')^{3} & (R_1')^{2}R_2'+R_1'R_2'R_3'+R_2'(R_3')^{2}\\ 0 & (R_3')^{3}\end{pmatrix},\qquad(14)$$
and so on.
It follows from (12) that the first $n_1$ components of $Q^{*}$ are defined entirely by the first $n_1$ columns of $\left(R'\right)^{\infty}$. Therefore, only the left-side blocks of the powers of $R'$ are of interest to us. Since we can interpret $R_1'$ as the part of the stochastic matrix corresponding to the transient component of a Markov chain, it is easy to prove (using Theorem 3.1.1 of [31]) that
$$\lim_{t\to\infty}\left(R_1'\right)^{t}=0.$$
It follows that, for each possible initial state $Q(0)$ such that the network operates according to Rule 2 forever, the limit state will contain only zeros in the first $n_1$ components (see (12)). The statement of the theorem thus follows. □
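The key fact $\lim_{t\to\infty}(R_1')^{t}=0$ is easy to observe numerically for any substochastic block (the values below are illustrative; the row sums are less than 1 because part of each row's mass leaves for the final component):

```python
def matmul(A, B):
    """Plain matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Illustrative substochastic transient block (each row sums to 0.8):
R1 = [[0.5, 0.3],
      [0.4, 0.4]]

P = [row[:] for row in R1]
for _ in range(200):
    P = matmul(P, R1)
# every entry of P = (R1)^201 is now negligibly small
```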
The following theorem generalizes the two previous ones to the case of an arbitrary total resource amount in the network.
Theorem 3.
In the two-component resource network, for any value and initial distribution of the network total resource, the total amount of resource in the transient component tends to zero as time goes to infinity.
Proof. 
Let the network be defined by the matrix:
$$R=\begin{pmatrix}R_1 & R_2\\ 0 & R_3\end{pmatrix},$$
and let the resource be greater than the threshold value. Then, the resource distribution is described by a non-homogeneous Markov chain with stochastic matrices:
$$R'(t)=\begin{pmatrix}R_1'(t) & R_2'(t)\\ 0 & R_3'(t)\end{pmatrix}.$$
The network operates according to the formula $Q(t+1)=Q(t)R'(t)$, or
$$Q(t+1)=Q(0)\prod_{i=0}^{t}R'(i).\qquad(17)$$
By Theorem 1, there is a moment $t'$, starting from which the transient component functions according to Rule 2. Then, for all $t>t'$, the matrix $R'(t)$ can be presented as the matrix $R''(t)$, where
$$R''(t)=\begin{pmatrix}R_1' & R_2'\\ 0 & R_3'(t)\end{pmatrix}.$$
Thus, Formula (17) can be written as follows:
$$Q(t+1)=Q(0)\prod_{i=0}^{t'}R'(i)\prod_{j=t'+1}^{t}R''(j).$$
In the second product, the block $R_1'$ is constant; therefore, its first $n_1$ columns tend to zero. Then, in the entire product, the first $n_1$ columns tend to zero as time goes to infinity. □
Thus, for any total resource and any of its initial distribution in a two-component resource network, the entire resource will pass into the final component.

5. Some Properties of Stochastic Matrices

The above results can be generalized to arbitrary stochastic matrices, and some new properties can be derived. Theorem 2 implies the validity of the following theorem.
Theorem 4.
For any rectangular stochastic matrix $R=[R_1,R_2]\in\mathbb{R}^{m\times n}$, $m<n$, where $R_1\in\mathbb{R}^{m\times m}$ is a square block and $R_2\in\mathbb{R}^{m\times(n-m)}$ is a rectangular block, if $R_2\ne 0$, then the rectangular matrix $(I-R_1)^{-1}R_2\in\mathbb{R}^{m\times(n-m)}$ is stochastic.
Proof. 
Let us relate the rectangular matrix to a square one as follows:
$$\overline{R}=\begin{pmatrix}R_1 & R_2\\ 0 & I\end{pmatrix}.$$
Consider the powers of the matrix $\overline{R}$:
$$\overline{R}^{2}=\begin{pmatrix}R_1 & R_2\\ 0 & I\end{pmatrix}\begin{pmatrix}R_1 & R_2\\ 0 & I\end{pmatrix}=\begin{pmatrix}R_1^{2} & R_1R_2+R_2\\ 0 & I\end{pmatrix},$$
$$\overline{R}^{3}=\begin{pmatrix}R_1^{2} & R_1R_2+R_2\\ 0 & I\end{pmatrix}\begin{pmatrix}R_1 & R_2\\ 0 & I\end{pmatrix}=\begin{pmatrix}R_1^{3} & R_1^{2}R_2+R_1R_2+R_2\\ 0 & I\end{pmatrix},\quad\ldots,$$
$$\overline{R}^{t}=\begin{pmatrix}R_1^{t} & R_1^{t-1}R_2+R_1^{t-2}R_2+\cdots+R_1R_2+R_2\\ 0 & I\end{pmatrix}.$$
For $t\to\infty$, we have $R_1^{\infty}=0$ in the upper left block, as $R_1$ is a block of the stochastic matrix $\overline{R}$. Furthermore, the upper right block contains the sum of an infinite decreasing geometric progression with common ratio $R_1$:
$$\overline{R}^{\infty}=\begin{pmatrix}0 & (I-R_1)^{-1}R_2\\ 0 & I\end{pmatrix}.$$
Here, given that $R_2\ne 0$ and $R_1^{\infty}=0$, the inverse matrix $(I-R_1)^{-1}$ always exists according to Theorem 1.11.1 in [31]. The matrix $\overline{R}$ is stochastic; therefore, the matrix $\overline{R}^{\infty}$ is stochastic. Thus, if $R_2\ne 0$, the matrix $(I-R_1)^{-1}R_2$ is stochastic. □
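Theorem 4 can be checked on a concrete rectangular stochastic matrix (the numbers are arbitrary; the $2\times 2$ inverse is written in closed form):

```python
# R = [R1 | R2] is 2x4 and row-stochastic: each full row sums to 1.
R1 = [[0.2, 0.3],
      [0.1, 0.4]]
R2 = [[0.25, 0.25],
      [0.30, 0.20]]

# Closed-form inverse of the 2x2 matrix I - R1.
a, b = 1 - R1[0][0], -R1[0][1]
c, d = -R1[1][0], 1 - R1[1][1]
det = a * d - b * c
inv = [[d / det, -b / det],
       [-c / det, a / det]]

# S = (I - R1)^{-1} R2; by Theorem 4, each of its rows sums to 1.
S = [[sum(inv[i][k] * R2[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]
```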
A simple example demonstrates how Theorem 4 works.
Example 2.
Consider the stochastic vector:
$$R=(r_1,r_2,r_3,r_4),\qquad r_1+r_2+r_3+r_4=1.$$
Let us split it into two blocks so that the first block $R_1=(r_1)$ is square of dimension $1\times 1$ and $R_2=(r_2,r_3,r_4)$.
Then, according to Theorem 4, the vector $(1-R_1)^{-1}R_2$ is also stochastic. Indeed:
$$(1-r_1)^{-1}(r_2,r_3,r_4)=\frac{1}{r_2+r_3+r_4}(r_2,r_3,r_4).$$
An important generalization follows directly from Theorem 4.
Theorem 5.
For any square stochastic matrix $R$ and any partition into blocks:
$$R=\begin{pmatrix}R_1 & R_2\\ R_3 & R_4\end{pmatrix},$$
where $R_1$ and $R_4$ are square blocks, if $R_2\ne 0$, then the rectangular matrix $(I-R_1)^{-1}R_2$ is stochastic.

6. Limit States in Two-Component Resource Networks and the Strong Ergodicity of Non-Homogeneous Markov Chains

In Section 4, it was established that, whatever the initial resource distribution is, all of the resource will be transferred to the final component as time goes to infinity.
According to Proposition 2 and Corollary 2, if a resource network is regular, then in the case of a small resource, there exists a unique limit state that does not depend on the initial distribution.
In the case of a large resource, there also always exists a limit state, but if there are several attractor vertices (Definition 6), then the limit state might not be unique for every initial state. However, the limit state is always of the form (9). What interests us is whether an analogy can be drawn with our case of a two-component network. The following theorems answer this question.
Suppose that the final component of the resource network is regular. Let T (Formula (7)) be the threshold value of the total resource for the final component.
Theorem 6.
If $W \leq T$, then there exists a unique limit state that depends neither on the initial distribution of the total resource $Q(0)$ nor on the blocks $R_1$ and $R_2$ of the capacity matrix $R$.
Proof. 
The statement of the theorem means that, when the resource amount is small, neither the topology of the transient component nor the set of edges leading from the transient component to the final one affects the limit state.
If $W < T$, there is a point in time from which the entire network operates according to Rule 2. Without loss of generality, suppose that the entire network operates according to Rule 2 from the first time step.
One can deduce from Equations (13) and (14) that an arbitrary power of R has the form:
$$R^N = \begin{pmatrix} (R_1)^N & R_2(N) \\ 0 & (R_3)^N \end{pmatrix},$$
where
$$R_2(N) = \sum_{t=1}^{N} (R_1)^{N-t} R_2 (R_3)^{t-1}.$$
For instance,
$$R_2(1) = (R_1)^0 R_2 (R_3)^0 = R_2, \qquad R_2(2) = R_1 R_2 + R_2 R_3, \qquad R_2(3) = (R_1)^2 R_2 + R_1 R_2 R_3 + R_2 (R_3)^2,$$
and so on.
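The block form of $R^N$ is easy to verify numerically. The sketch below (with randomly generated blocks; `numpy` assumed) compares the upper-right block of $R^N$ against the sum $R_2(N)$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical blocks: a 2x2 transient block R1, a 2x3 coupling block R2
# and a 3x3 row-stochastic final block R3.
R1, R2, R3 = rng.random((2, 2)), rng.random((2, 3)), rng.random((3, 3))
R3 /= R3.sum(axis=1, keepdims=True)
s = R1.sum(axis=1) + R2.sum(axis=1)        # normalize so rows of [R1 R2] sum to 1
R1, R2 = R1 / s[:, None], R2 / s[:, None]
R = np.block([[R1, R2], [np.zeros((3, 2)), R3]])

# R2(N) = sum_{t=1..N} R1^(N-t) R2 R3^(t-1), compared with the power R^N.
N = 5
R2N = sum(np.linalg.matrix_power(R1, N - t) @ R2 @ np.linalg.matrix_power(R3, t - 1)
          for t in range(1, N + 1))
print(np.allclose(np.linalg.matrix_power(R, N)[:2, 2:], R2N))
```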
According to Theorem 11.1 of [32], there exists a limit $\lim_{N \to \infty} R^N = R^*$. Combining this fact with the statement of Theorem 2, we obtain that the limit must have the form
$$R^* = \begin{pmatrix} 0 & R_2^* \\ 0 & R_3^* \end{pmatrix},$$
where
$$R_2^* = \lim_{N \to \infty} R_2(N); \qquad R_3^* = \lim_{N \to \infty} (R_3)^N.$$
The limit stochastic matrix is idempotent (i.e., $R^* R^* = R^*$). This means the following:
$$R^* R^* = \begin{pmatrix} 0 & R_2^* \\ 0 & R_3^* \end{pmatrix} \begin{pmatrix} 0 & R_2^* \\ 0 & R_3^* \end{pmatrix} = \begin{pmatrix} 0 & R_2^* R_3^* \\ 0 & (R_3^*)^2 \end{pmatrix} = \begin{pmatrix} 0 & R_2^* \\ 0 & R_3^* \end{pmatrix} = R^*.$$
The latter equation implies
$$R_2^* R_3^* = R_2^*.$$
Being a limit of powers of a regular stochastic matrix, $R_3^*$ consists of equal rows; its rank is 1. Moreover, whenever any stochastic matrix is right-multiplied by $R_3^*$, the result also consists of these same rows. Therefore, the matrix $R_2^*$ consists of the same rows as $R_3^*$.
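This absorption property of equal-row limit matrices can be illustrated as follows (a sketch with illustrative values; `numpy` assumed):

```python
import numpy as np

rng = np.random.default_rng(2)

# An "equal rows" limit matrix R3* built from a stochastic row r3*
# (the values are illustrative only).
r3 = np.array([0.5, 0.3, 0.2])
R3_star = np.tile(r3, (3, 1))

# Right-multiplying any stochastic matrix A by R3* yields a matrix
# whose rows are all equal to r3*.
A = rng.random((4, 3))
A /= A.sum(axis=1, keepdims=True)
print(A @ R3_star)
```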
Suppose that R 3 * consists of rows ( r 3 * ) . Then, R * has the form
$$R^* = \begin{pmatrix} 0 & r_3^* \\ \vdots & \vdots \\ 0 & r_3^* \end{pmatrix}.$$
It is obvious that R * is of rank 1. It follows that the image of this linear operator (see Equation (12)) is determined solely by the initial amount of resource W.
The case $W = T$ is the limiting case of a small resource. By Proposition 2, for some initial states, two limit transitions are required to prove the theorem: the first is the transition of all vertices to Rule 2; then, the reasoning above is applied. □
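The limit structure derived above can be observed directly by raising a block upper-triangular stochastic matrix with a regular final block to a high power (a numerical sketch with randomly generated blocks; `numpy` assumed):

```python
import numpy as np

rng = np.random.default_rng(3)

# Random blocks: rows of [R1 R2] normalized to 1, R3 regular (all-positive)
# and row-stochastic.
R1, R2, R3 = rng.random((2, 2)), rng.random((2, 3)), rng.random((3, 3))
R3 /= R3.sum(axis=1, keepdims=True)
s = R1.sum(axis=1) + R2.sum(axis=1)
R1, R2 = R1 / s[:, None], R2 / s[:, None]
R = np.block([[R1, R2], [np.zeros((3, 2)), R3]])

# High powers: the transient columns vanish and all rows become identical,
# i.e., the limit matrix has rank 1, as in the proof of Theorem 6.
R_lim = np.linalg.matrix_power(R, 500)
print(R_lim[0])
```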
Theorem 7.
If $W > T$, then for every initial state $Q(0)$, there exists a limit state $Q^*$. If, additionally, the final component contains only one attractor vertex, then this limit state depends only on $W$, i.e., it depends neither on the initial distribution of the total resource nor on the blocks $R_1$ and $R_2$ of the capacity matrix $R$.
Proof. 
In this case, it is assumed that there is a point of time from which at least one vertex of the network contains an excessive amount of resource.
It follows from Theorem 1 that there is a moment from which the entire transient component operates according to Rule 2. This means that all of the excessive resource is located in the final component (at least for all sufficiently large moments of time). Once this fact is established, the proof in the case $W > T$ proceeds by analogy with the case $W \leq T$. The main difference is that $R_3^*$ is not a limit of the powers of a single matrix, but a limit of products of different matrices. However, it is known from previous work [28] that this limit of products of matrices exists for every initial condition. In the case where there is only one attractor vertex in the final component, this limit of products depends only on $W$. Thus, the statement of the theorem follows. □
Example 3.
Let us consider the two-component resource network shown in Figure 2. The edge weights are given by the matrix:
$$R = \begin{pmatrix}
1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 \\
0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 2 & 1 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 \\
0 & 0 & 0 & 0 & 3 & 0 & 0 & 0 & 0
\end{pmatrix}. \tag{21}$$
Let W = 14 and the initial state be given by vector Q ( 0 ) = ( 5 , 4 , 3 , 2 , 0 , 0 , 0 , 0 , 0 ) . In this case, the entire resource is concentrated in the transient component.
The limit state is Q * = ( 0 , 0 , 0 , 0 , 2.4 , 1.2 , 6.4 , 1.6 , 2.4 ) .
The functioning of the network is shown in Figure 3.
On the graph, it can be seen how the resource flows from the transient component to the vertices of the final component.
If $W > T$, then the matrix sequences $R(t)$, $t = 0, 1, \ldots$, differ for different initial resource distributions $Q(0)$. The infinite products of these matrices tend to limit matrices $R^\infty(Q(0))$ (below, we omit the argument $Q(0)$).
Example 1 shows that, for at least some regular resource networks with W > T , there exist two stochastic matrices transforming the initial vector into the limit one:
$$Q^* = Q(0) R^*; \qquad Q^* = Q(0) R^\infty. \tag{22}$$
It is easy to see that such matrices exist for any regular, as well as two-component networks and for any initial state Q ( 0 ) ( W > T ) .
The first matrix, $R^*$, is obtained from the matrix defined by Formula (10) by setting $q_i(t) = q_i^*$, where $q_i^*$ are the components of the limit vector $Q^*$. This matrix has different rows and is not a limit matrix. However, the limit matrix $R^\infty$ is the limit of the powers of the stochastic matrix $R^*$:
$$R^\infty = \lim_{t \to \infty} (R^*)^t. \tag{23}$$
The matrix $R^\infty$ is unique for every $Q(0)$ and consists of $n$ identical rows, each equal to the normalized vector $Q^*$.
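The relationship (between a regular stochastic matrix and its power limit) can be sketched numerically: the powers of any regular stochastic matrix converge to a matrix of identical rows (illustrative entries; `numpy` assumed):

```python
import numpy as np

# A hypothetical regular stochastic matrix R* (illustrative entries only).
R_star = np.array([[0.5, 0.3, 0.2],
                   [0.1, 0.6, 0.3],
                   [0.4, 0.4, 0.2]])

# Its power limit consists of identical rows: the stationary distribution of R*.
R_inf = np.linalg.matrix_power(R_star, 200)
print(R_inf[0])
```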
Therefore, the following theorem is true.
Theorem 8.
For a two-component resource network, if W > T , then, for every initial state, the non-homogeneous Markov chain with stochastic matrices R ( t ) is strongly ergodic.

7. Discussion

The results presented in the article can be divided into three areas: two-component resource networks, stochastic matrices, and non-homogeneous Markov chains.
The main results concerning two-component resource networks were stated in Theorems 6 and 7. It was proven that a two-component resource network with a regular final component is globally asymptotically stable if $W \leq T$, or if $W > T$ and there is a single attractor vertex. If there are several attractor vertices in the final component, the resource surplus is distributed among the attractors (Formula (9)) in a unique way described by Formula (22).
The study of the behavior of two-component networks made it possible to prove a number of non-trivial statements about stochastic matrices, as well as about non-homogeneous Markov chains.
In our opinion, Theorems 4–6 deserve special attention, since they formulate the interesting properties of stochastic matrices. These properties are universal and applicable to any stochastic matrices, without reference to the resource network model.
Theorem 8 states that the non-homogeneous Markov chain describing the operation of a two-component resource network with a regular final component for $W > T$ is strongly ergodic. This implies that the non-homogeneous Markov chain describing a regular resource network with $W > T$ is strongly ergodic as well.
The non-homogeneous Markov chain corresponding to the given network and given initial distribution of resource W > T is defined by Formula (10). Two matrices that transform the initial distribution into the limit distribution are presented in Formula (22), and their relationship is in Formula (23).
In the future, we plan to obtain an analytical formula for resource allocation among attractors in the limit state for a network with several attractors and W > T . This problem still remains open.

8. Conclusions

The resource network model is described by very simple operation rules; however, it is capable of generating complex behavior. Moreover, the network turned out to be a good tool for studying the properties of stochastic matrices and Markov chains.
In our team, two extensions of resource networks were proposed and studied: a network with the capacity limitations of attractor vertices and a network with greedy vertices [28].
Based on the resource network defined by a regular grid, a model of the distribution of pollutants on the surface of a reservoir was developed [36].
In our future research, we plan to study new modifications of resource networks, such as networks with dynamic throughputs. Using such a model, it will be possible to search for bottlenecks in the distribution of traffic in networks of various kinds—from transport to telecommunications.

Author Contributions

Conceptualization, L.Z.; methodology, L.Z. and V.K.; validation, L.Z., V.K. and N.C.; writing—original draft preparation, L.Z., V.K. and N.C.; writing—review and editing, L.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the Russian Foundation for Basic Research (Project No. 20-07-00190-a).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Blanchard, P.; Volchenkov, D. Random Walks and Diffusions on Graphs and Databases: An Introduction; Springer Series in Synergetics; Springer: Berlin/Heidelberg, Germany, 2011.
2. Erusalimskii, Y.M. 2–3 Paths in a Lattice Graph: Random Walks. Math. Notes 2018, 104, 395–403.
3. Volchenkov, D. Infinite Ergodic Walks in Finite Connected Undirected Graphs. Entropy 2021, 23, 205.
4. Calva, C.S.H.; Riascos, A.P. Optimal exploration of random walks with local bias on networks. Phys. Rev. E 2022, 105, 044318.
5. Gutiérrez, R.; Pérez-Espigares, C. Generalized optimal paths and weight distributions revealed through the large deviations of random walks on networks. Phys. Rev. E 2021, 103, 022319.
6. Falcó, C. From random walks on networks to nonlinear diffusion. arXiv 2022, arXiv:2206.08859.
7. Björner, A.; Lovász, L. Chip-firing games on directed graphs. J. Algebr. Comb. 1992, 1, 305–328.
8. Liscio, P. Lattices in Chip-Firing. arXiv 2020, arXiv:2010.15650.
9. Glass, D.; Kaplan, N. Chip-Firing Games and Critical Groups. In A Project-Based Guide to Undergraduate Research in Mathematics; Foundations for Undergraduate Research in Mathematics; Harris, P., Insko, E., Wootton, A., Eds.; Birkhäuser: Cham, Switzerland, 2020; pp. 32–58.
10. Dochtermann, A.; Meyers, E.; Samavedan, R.; Yi, A. Cycle and circuit chip-firing on graphs. arXiv 2020, arXiv:2006.13397.
11. Merino, C. The chip-firing game and the sand pile model. In Handbook of the Tutte Polynomial and Related Topics; Chapman and Hall: London, UK, 2022; pp. 345–351.
12. Bak, P.; Tang, C.; Wiesenfeld, K. Self-organized criticality. Phys. Rev. A 1988, 38, 364–374.
13. Dhar, D. The abelian sand pile and related models. Phys. A Stat. Mech. Its Appl. 1999, 263, 4–25.
14. Pegden, W.; Smart, C.K. Stability of Patterns in the Abelian Sandpile. Ann. Henri Poincaré 2020, 21, 1383–1399.
15. Kim, S.; Wang, Y. A Stochastic Variant of the Abelian Sandpile Model. J. Stat. Phys. 2020, 178, 711–724.
16. Duffy, C.; Lidbetter, T.F.; Messinger, M.E.; Nowakowski, R.J. A Variation on Chip-Firing: The diffusion game. Discret. Math. Theor. Comput. Sci. 2018, 20, 1.
17. DeGroot, M.H. Reaching a consensus. J. Am. Stat. Assoc. 1974, 69, 118–121.
18. Ong, C.G.; Canyakmaz, I. Consensus of Network of Homogeneous Agents with General Linear Dynamics. arXiv 2022, arXiv:2201.07532.
19. Fedyanin, D.N. Reaching a Consensus in Polarized Social Networks. In Proceedings of the 3rd International Conference on Control Systems, Mathematical Modeling, Automation and Energy Efficiency (SUMMA), Lipetsk, Russia, 9–11 November 2021; pp. 366–371.
20. Agaev, R.; Khomutov, D. Graph Interpretation of the Method of Orthogonal Projection for Regularization in Multiagent Systems. In Proceedings of the 14th International Conference Management of Large-Scale System Development (MLSD), Moscow, Russia, 27–29 September 2021; pp. 1–4.
21. Geng, H.; Wu, H.; Miao, J.; Hou, S.; Chen, Z. Consensus of Heterogeneous Multi-Agent Systems Under Directed Topology. IEEE Access 2022, 10, 5936–5943.
22. Zhao, X.; Chai, X.; Sun, J.; Qiu, Q. Joint optimization of mission abort and protective device selection policies for multistate systems. Risk Anal. 2022, 1–12.
23. Wang, X.; Ning, R.; Zhao, X.; Wu, C. Reliability assessments for two types of balanced systems with multi-state protective devices. Reliab. Eng. Syst. Saf. 2023, 229.
24. Fiedler, M. Doubly stochastic matrices and optimization. Adv. Math. Optim. 2022, 44–51.
25. Xi, C.; Mai, V.S.; Xin, R.; Abed, E.H.; Khan, U.A. Linear Convergence in Optimization Over Directed Graphs With Row-Stochastic Matrices. IEEE Trans. Autom. Control 2018, 63, 3558–3565.
26. Liu, B.; Lu, W.; Jiao, L.; Chen, T. Products of Generalized Stochastic Matrices With Applications to Consensus Analysis in Networks of Multiagents with Delays. IEEE Trans. Cybern. 2020, 50, 386–399.
27. Kuznetsov, O.P. Uniform Resource Networks. I. Complete Graphs. Autom. Remote Control 2009, 70, 1767–1775.
28. Zhilyakova, L. Single-Threshold Model Resource Network and Its Double-Threshold Modifications. Mathematics 2021, 9, 1444.
29. Skorokhodov, V.A.; Sviridkin, D.O. Regular periodic dynamic resource networks. Itogi Nauk. Tekhniki. Seriya Sovrem. Mat. Prilozheniya. Temat. Obz. 2021, 192, 117–124.
30. Antonova, V.M.; Zakhir, B.M.; Kuznetsov, N.A. Modeling of Graphs with Different Types of Reachability in Python. J. Commun. Technol. Electron. 2019, 64, 1464–1472.
31. Kemeny, J.G.; Snell, J.L. Finite Markov Chains; Springer: New York, NY, USA; Berlin/Heidelberg, Germany; Tokyo, Japan, 1976.
32. Gantmacher, F.R. The Theory of Matrices; Chelsea Publishing Company: New York, NY, USA, 1959.
33. Supplementary Materials. Available online: https://www.researchgate.net/publication/352184919_Supplementary_Materials_for_article_%27Single-Threshold_Model_Resource_Network_and_its_Double-Threshold_Modifications%27 (accessed on 10 October 2022).
34. Hajnal, J. Weak ergodicity in non-homogeneous Markov chains. Proc. Cambridge Philos. Soc. 1958, 54, 233–246.
35. Scott, M.; Arnold, B.C.; Isaacson, D.L. Strong Ergodicity for Continuous-Time, Non-Homogeneous Markov Chains. J. Appl. Probab. 1982, 19, 692–694.
36. Zhilyakova, L.Y. Using resource networks to model substance distribution in aqueous medium. Autom. Remote Control 2012, 73, 1581–1589.
Figure 1. The weighted graph determining the considered resource network.
Figure 2. The two-component resource network. Vertices ( v 1 v 4 ) belong to the transient component; vertices ( v 5 v 9 ) form the final component; n 1 = 4 , n 2 = 5 , n = 9 .
Figure 3. The resource distribution over time for the network represented by matrix (21). W = 14 , and the vector of the initial state is Q ( 0 ) = ( 5 , 4 , 3 , 2 , 0 , 0 , 0 , 0 , 0 ) . The resource values in vertices ( v 1 v 4 ) tend to 0; all the resource is concentrated in vertices ( v 5 v 9 ).