Article

A Novel Exploration of Diffusion Process Based on Multi-Type Galton–Watson Forests

by Yanjiao Zhu, Qilin Li, Wanquan Liu, Chuancun Yin and Zhenlong Gao
1 School of Statistics and Data Science, Qufu Normal University, Qufu 373165, China
2 Department of Computing, Curtin University, Perth 6102, Australia
3 School of Intelligent Systems Engineering, Sun Yat-sen University, Shenzhen 518107, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(22), 3462; https://doi.org/10.3390/math12223462
Submission received: 26 September 2024 / Revised: 30 October 2024 / Accepted: 4 November 2024 / Published: 6 November 2024

Abstract

Diffusion is a commonly used technique for spreading information from point to point on a graph, yet the rationale behind the diffusion process is not clear. The multi-type Galton–Watson forest is a random model of population growth without space or any other resource constraints. In this paper, we use the degenerated multi-type Galton–Watson forest (MGWF) to interpret the diffusion process, mapping vertices to types and establishing an equivalence relationship between them. With the two-phase setting of the MGWF, one can interpret the diffusion process and the Google PageRank system explicitly. The proposed setting also improves the convergence behavior of the iterative diffusion process and the Google PageRank system. We validate the proposal by experiments while providing new research directions.

1. Introduction

The diffusion process is a phenomenon in which particles spontaneously diffuse from a region of high concentration to a region of low concentration so that the system tends to a stable state. Because of its ability to capture the intrinsic manifold structure of data, the diffusion process is widely used in many areas of machine learning, such as information retrieval [1,2,3,4,5], clustering [6,7], saliency detection [8,9], image segmentation [4,10], and semi-supervised learning [5,11].
Technically, an undirected graph $G = (V, E)$ consists of $N$ nodes $v_i \in V$ and edges $e_{ij} \in E$. Each edge $e_{ij}$ can be weighted by its affinity value $A_{ij}$. The diffusion process can be interpreted as a Markov random walk on the graph $G$, where the initial distribution $F_0$ contains the initial information; for example, in affinity learning, the initial distribution is the initially constructed affinity matrix. Information is transferred according to the transition matrix $P$, and the iteration criterion is
$$F_n = F_{n-1} P,$$
where $P = D^{-1} A$, $A$ is the affinity matrix of $G$, and $D$ is the diagonal matrix with $D_{ii} = \sum_{k=1}^{N} A_{ik}$.
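To make the iteration concrete, the following is a minimal NumPy sketch of the update above; the 4-node affinity matrix A is a hypothetical example rather than data from the paper, and $F_0$ is taken here as the affinity matrix itself.

```python
import numpy as np

# Hypothetical symmetric, non-negative affinity matrix of a 4-node graph.
A = np.array([[0.0, 0.8, 0.2, 0.0],
              [0.8, 0.0, 0.5, 0.1],
              [0.2, 0.5, 0.0, 0.9],
              [0.0, 0.1, 0.9, 0.0]])

D_inv = np.diag(1.0 / A.sum(axis=1))   # D_ii = sum_k A_ik
P = D_inv @ A                          # row-stochastic transition matrix P = D^{-1} A

F = A.copy()                           # F_0: the initial affinity information
for _ in range(50):
    F = F @ P                          # F_n = F_{n-1} P
```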
To address the interpretation dilemmas of the diffusion process, this paper employs the MGWF with a degenerate branching law to map graph vertices to distinct particle types, providing a multi-faceted explanation of the diffusion process. Specifically, we innovatively introduce the concept of particle mutation and divide each step of the diffusion process into two phases. This allows the diffusion process to be understood more clearly and inspires us to study it in a more general setting. Furthermore, the MGWF with mutation and immigration can be used to explain the well-known Google PageRank system, which also inspires us to interpret the Google PageRank system from a new perspective.
In summary, the main contributions of this article are as follows:
It introduces the novel concept of mutation into the MGWF with the degenerated branching law, where each step of the diffusion process is divided into two phases: the branching phase and the mutation phase. This explanation provides novel insight into the rationale of the diffusion process from a longitudinal perspective and motivates a more general diffusion process.
It proves the convergence of the affinity matrix sequence generated by the diffusion process under the given assumptions. To the best of our knowledge, this is the first time the convergence of the diffusion process has been proved theoretically.
It interprets the Google PageRank system by extending the MGWF with immigration.
It proposes a variant of the diffusion process in which the mutation probability can change within the framework of branching and mutation. This is a more general diffusion process that includes the original as a special case.
The remainder of this paper is organized as follows. Section 2 introduces the MGWF model in detail. In Sections 3 and 4, we explain the diffusion process and the Google PageRank system under the new model, respectively. Section 5 presents the experimental results when the mutation probability matrix changes. Section 6 concludes the article.

2. Preliminaries

The branching process, also known as the Galton–Watson process (GW process), is a Markov chain. It provides a mathematical representation of population changes in which members reproduce and die subject to the laws of chance.
Definition 1
(Branching process). Let $\xi$ be a non-negative integer-valued random variable with distribution $p_k = P(\xi = k)$, $k \geq 0$, $p_0 < 1$. Suppose that the number of offspring produced by an individual follows the distribution of $\xi$, and that the reproduction of each individual within the species is independent. The number of individuals in the first generation is $Z_1$, and naturally
$$Z_{n+1} = \sum_{i=1}^{Z_n} \xi_{n+1,i},$$
where $\xi_{n+1,i}$ can be thought of as the number of members in the $(n+1)$-th generation that are offspring of the $i$-th member of the $n$-th generation.
The sequence $\{Z_n\}$ of non-negative integer-valued random variables satisfies this recursive relation, and the process $\{Z_n\}$ is a Markov chain with state space $\mathbb{Z}_{\geq 0}$ and transition probability
$$P_{ij} = P\left( \sum_{k=1}^{i} \xi_k = j \right).$$
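As a small illustration of Definition 1, the sketch below simulates one trajectory of a single-type GW process; the Poisson offspring law with mean 0.9 is an arbitrary choice for the example, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def simulate_gw(z1=1, offspring_mean=0.9, generations=20):
    """Simulate Z_1, ..., Z_n for a GW process with Poisson offspring."""
    z, sizes = z1, [z1]
    for _ in range(generations - 1):
        # Z_{n+1} is the sum of i.i.d. offspring counts of the Z_n current members.
        z = int(rng.poisson(offspring_mean, size=z).sum()) if z > 0 else 0
        sizes.append(z)
    return sizes

print(simulate_gw())   # e.g., a trajectory that eventually dies out
```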
Particles may be of different types depending on their age, energy, position, or other factors, whereas all particles are of the same type in the GW process. In the following, the multi-type GW process is introduced.
Definition 2
(N-type GW process). Given the initial state
$$Z_1 = (Z_{1,1}, \ldots, Z_{1,N}),$$
where $Z_{1,k} \in \mathbb{Z}_+$, $k = 1, \ldots, N$, is the number of individuals of type $k$, the $n$-th generation is
$$Z_n = (Z_{n,1}, Z_{n,2}, \ldots, Z_{n,N}),$$
where $Z_{n,j}$ represents the number of particles of type $j$ in the $n$-th generation. Then, $\{Z_n\}_{n \geq 0}$ is an N-type GW process.
Assume that a parent of type $i$ lives for one unit of time and, upon death, produces children of all types according to the offspring distribution $P_i(j)$, independently of all other individuals. This case is vividly called a multi-type Galton–Watson forest (MGWF) [12,13,14,15,16].
Next, a special N-type GWF is introduced. The initial state is
$$Z_0 = e_i,$$
where $e_i$, $i = 1, 2, \ldots, N$, denotes the vector whose $i$-th component is 1 and whose remaining components are zero. The ancestor lives for one unit of time and, upon death, produces one child according to the offspring distribution $\{P_{e_i}(e_j)\} = \{P_i(j)\}$, where
$$P_i\left\{ Z \in \mathbb{N}^N : \sum_{j=1}^{N} Z_j \neq 1 \right\} = 0,$$
i.e., each parent produces exactly one offspring. In this way, the number of multi-type branching trees formed in the forest is $N$. Following [12,13], this model is named the MGWF with the degenerate branching law; it is simply called the MGWF in this paper. The difference between the general MGWF and the MGWF with the degenerated branching law is graphically illustrated in Figure 1.
Example 1.
There are three types of particles in the MGWF, i.e., $N = 3$. The 3-type GW forest with the degenerated branching law $P$ is shown in Figure 1b. The branching law matrix is
$$P_{3 \times 3} = \begin{pmatrix} 0 & \frac{2}{3} & \frac{1}{3} \\ \frac{2}{5} & 0 & \frac{3}{5} \\ \frac{1}{5} & \frac{4}{5} & 0 \end{pmatrix}.$$

3. Mutations in the MGWF

3.1. The Mutations

Let us consider mutation in the framework of the MGWF with the degenerated branching law [17,18]. In this paper, we propose a novel variant of mutation that better conforms to the actual situation and, more importantly, provides new insight into the rationale of the diffusion process by separating it into two phases, branching and mutation.
Definition 3
(Mutation). When the mutation time is ignored, a unit of time is divided into two phases, the branching phase and the mutation phase.
(i) 
In the branching phase, an $i$-type particle produces a $j$-type particle according to the branching law $P_i(j)$.
(ii) 
In the mutation phase, there are two cases.
Mutation occurs: the $j$-type particle may mutate into a $k$-type particle according to the mutation probability $T_j(k)$, $j \neq k$, $i, j, k = 1, 2, \ldots, N$.
No mutation occurs: the $j$-type particle may also remain unchanged in type with probability $T_j(j)$.
The two cases in the mutation phase are shown in Figure 2.
Naturally, when mutations are considered, the branching law matrix changes, and the iterative process is as follows.
Let $P_1^i(j) = P_i(j)$, and let $T$ be the probability matrix of mutation. At time 0, there is an $i$-type ancestor; after one unit of time, it has a $j$-type offspring. Then, in the second generation,
$$P_2^i(j) = P_1^i(j) T_j(j) + \sum_{k \neq j} P_1^i(k) T_k(j).$$
Its matrix form is
$$P_2 = P_1 T.$$
Iteratively, for the $n$-th generation,
$$P_n = P_{n-1} T = P T^{n-1},$$
where $P_n = P_{n-1} T$ focuses on one generation, while $P_n = P T^{n-1}$ is the expansion of the entire iterative process. Next, we use one example to illustrate the iteration.
Example 2
(Continuation of Example 1). Let the probability matrix of mutation be $T = P$. The 3-type GW forest with mutation and the mutation details are shown in Figure 3.
The branching law matrices of the second and third generations are
$$P_2 = P T = \begin{pmatrix} \frac{1}{3} & \frac{4}{15} & \frac{2}{5} \\ \frac{3}{25} & \frac{56}{75} & \frac{2}{15} \\ \frac{8}{25} & \frac{2}{15} & \frac{41}{75} \end{pmatrix}$$
and
$$P_3 = P_2 T = P T^2 = \begin{pmatrix} \frac{14}{75} & \frac{122}{225} & \frac{61}{225} \\ \frac{122}{375} & \frac{14}{75} & \frac{61}{125} \\ \frac{61}{375} & \frac{244}{375} & \frac{14}{75} \end{pmatrix}.$$
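The matrices of Example 2 can be reproduced numerically with a few lines of NumPy; this is only a floating-point check of the fractions above, assuming $T = P$ as in the example.

```python
import numpy as np

P = np.array([[0, 2/3, 1/3],
              [2/5, 0, 3/5],
              [1/5, 4/5, 0]])

P2 = P @ P     # branching law of the second generation, P_2 = P T
P3 = P2 @ P    # branching law of the third generation,  P_3 = P T^2

# Compare the first rows against the fractions reported in Example 2.
print(np.allclose(P2[0], [1/3, 4/15, 2/5]))           # True
print(np.allclose(P3[0], [14/75, 122/225, 61/225]))   # True
```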
Next, we discuss the convergence of the above iterative process.

3.2. The Stable State

This subsection introduces a few concepts and lemmas from the literature [19,20].
Definition 4
(communicate [19]). Let $i, j \in S$ be two states of a Markov chain with transition matrix $T$. Then, state $i$ communicates with state $j$ if there exist non-negative integers $m$ and $n$ such that $T_{j,i}^{m} > 0$ and $T_{i,j}^{n} > 0$.
Definition 5
(irreducible [20]). A Markov chain with finite state space $S$ and transition matrix $T$ is said to be irreducible if all states $i, j \in S$ communicate.
Definition 6
(aperiodic [19]). The period $d(i)$ of a state $i$ of a Markov chain is the greatest common divisor of all time steps $n$ for which $T_{ii}^{n} > 0$. If the period is 1, the state is aperiodic.
If a Markov chain is irreducible, the period of the chain is the period of its single communication class. If the period of every communication class is d = 1 , then the Markov chain is aperiodic.
Definition 7
(regular [19]). A stochastic matrix $T$ is regular if some power $T^{k}$ contains only strictly positive entries.
Lemma 1.
Let T be the transition matrix for an irreducible, aperiodic Markov chain. Then, T is a regular matrix.
Proof. 
For a detailed proof, see page C-39 of [19]. □
Lemma 2
(Ref. [19]). If $T$ is a regular $m \times m$ transition matrix of a Markov chain with $m \geq 2$, then the following statements are all true.
(1) 
There is a stochastic matrix $\Pi$ such that $\lim_{n \to \infty} T^n = \Pi$.
(2) 
Each row of $\Pi$ is the same probability vector $\pi$.
(3) 
For any initial probability vector $x_0$, $\lim_{n \to \infty} x_0 T^n = \pi$.
(4) 
The vector $\pi$ is the unique probability vector that is an eigenvector of $T$ associated with the eigenvalue 1, i.e., $\pi = \pi T$.
Proof. 
For a detailed proof, see the Appendix (page C-65) of [19]. □
As described above, to prove the convergence of the iterative process, we make the following assumptions.
Assumption 1.
The MGWF with mutation is a finite-state Markov chain.
Assumption 2.
The MGWF with mutation is an irreducible chain.
Assumption 3.
The MGWF with mutation is an aperiodic chain.
Theorem 1.
Suppose Assumptions 1–3 hold. Then, the MGWF with mutation has a unique stationary distribution $\Pi$ such that
$$\lim_{n \to \infty} T^n = \Pi,$$
where each row of $\Pi$ is the same probability vector $\pi$, which is the unique probability vector that is an eigenvector of $T$ associated with the eigenvalue 1.
Proof. 
On the one hand, we prove that the limit exists. Since Assumptions 1–3 are satisfied, the probability matrix of mutation $T$ is regular by Lemma 1. From Lemma 2 (1), $\lim_{n \to \infty} T^n = \Pi$.
On the other hand, we analyze the composition of the limit matrix $\Pi$. From Lemma 2 (2) and (4), every row of the limit matrix $\Pi$ equals $\pi$, where $\pi$ is the eigenvector of $T$ associated with the eigenvalue 1.
In summary, Theorem 1 is proved. □
When the MGWF with mutations reaches a stable state Π , the branching law is no longer affected by the mutation.
Example 3
(Continuation of Example 2). Let the probability matrix of mutation be $T = P$, where the transition matrix $P$ is given in Example 1. One can verify that for $k = 2$, $T_{ij}^{2} > 0$ for all $i, j \in \{1, 2, \ldots, N\}$, so $T$ is regular and satisfies the condition of Lemma 2. Setting the threshold to $1 \times 10^{-8}$, we obtain
$$P^{88} = \begin{pmatrix} \frac{39}{164} & \frac{35}{82} & \frac{55}{164} \\ \frac{39}{164} & \frac{35}{82} & \frac{55}{164} \\ \frac{39}{164} & \frac{35}{82} & \frac{55}{164} \end{pmatrix},$$
and iterating further to $P^{89}, P^{90}, \ldots$, the matrix no longer changes; i.e., the particle mutation no longer affects the branching law of the MGWF.
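The stable state in Example 3 can be checked by repeatedly multiplying by $T$ until successive powers differ by less than the threshold; a minimal sketch, assuming the matrix $P$ of Example 1 and a 2-norm stopping rule, so the exact iteration count may differ slightly from the reported 88.

```python
import numpy as np

T = np.array([[0, 2/3, 1/3],
              [2/5, 0, 3/5],
              [1/5, 4/5, 0]])

Tn, n = T.copy(), 1
while True:
    T_next = Tn @ T                               # T^{n+1}
    n += 1
    if np.linalg.norm(T_next - Tn, 2) < 1e-8:     # convergence threshold
        break
    Tn = T_next

print(n)          # number of powers needed to reach the threshold
print(T_next[0])  # every row is approximately (39/164, 35/82, 55/164)
```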

3.3. Interpretation of Diffusion Process

In general, the diffusion process starts from an $N \times N$ affinity matrix $A$. The matrix $A$ is interpreted as a graph $G = (V, E)$ consisting of $N$ nodes $v_i \in V$ and edges $e_{ij} \in E$ that link nodes to each other, with the edge weights fixed to the affinity values $A_{ij}$. The diffusion process spreads the affinity values through the entire graph based on the transition matrix. When using a random walk on a graph to explain the diffusion process, there are three steps: (1) initialization; (2) definition of the transition matrix; and (3) definition of the diffusion process.
In this paper, we use the MGWF with mutation to explain the diffusion process from a new perspective. The $N$ vertices of the graph correspond to the $N$ different particle types, and the transition of the random walk between nodes corresponds to the iteration in which a parent produces offspring in the MGWF with mutation. The unit of time for each iteration is divided into two phases, the branching phase and the mutation phase.
Assume that the initial branching law and the probability matrix of mutation are given. To ensure that the initial branching law is a row-stochastic matrix, we take $F_0 = P = D^{-1} A$, where $D = \mathrm{diag}(D_{ii})$ and $D_{ii} = \sum_{j=1}^{N} A_{ij}$, and in particular the probability matrix of mutation is $T = P = D^{-1} A$. After the first unit of time, the branching law of the first generation is
$$F_1 = F_0 T.$$
Eventually, the branching law of the $n$-th generation is
$$F_n = F_{n-1} T = F_0 T^n.$$
Assuming Assumptions 1–3 hold, Theorem 1 implies that the branching law reaches a stable state; that is, the branching law will no longer be affected by the mutation.
The correspondence between the diffusion process and the MGWF with the mutation is shown in Table 1.
In this section, we have established the relationship between the diffusion process and the MGWF with mutation and proved the convergence of the diffusion process sequence under the given assumptions. To the best of our knowledge, this is the first time the convergence of the diffusion process has been proved theoretically. Next, we propose the immigration concept for the MGWF and discuss its relationship with the Google PageRank system.

4. The MGWF with Immigration and the Google PageRank System

4.1. The MGWF with Mutation and Immigration

The classic MGWF has a wide range of extensions, one of the most important being the GW process with immigration [21,22,23,24]. In this study, state-independent immigration is considered. Classically, immigration is introduced to ensure that the entire forest does not become extinct: immigrant particles enter the forest and produce offspring together with the original particles, as shown in Figure 4a.
In an N-type GWF with mutation, for every tree and every generation, an immigrant particle of type $j$ may enter with probability $b_j$, $j = 1, 2, \ldots, N$, where $\sum_{j=1}^{N} b_j = 1$. When an $i$-type particle in the first generation is observed to have a $j$-type child in the second generation, there are two possible mechanisms:
(1)
Generation by the branching–mutation mechanism with probability $\alpha$, $\alpha \in (0, 1)$; i.e., the $i$-type particle produces the $j$-type particle through the branching–mutation mechanism.
(2)
Action of the immigration mechanism with probability $(1 - \alpha)$; i.e., in the second generation, an immigrant particle of type $j$ enters directly, and the branching–mutation mechanism no longer operates.
Therefore, the branching law in the second generation is
$$P_2^i(j) = \alpha \left[ P_1^i(j) T_j(j) + \sum_{k \neq j} P_1^i(k) T_k(j) \right] + (1 - \alpha) b_j, \quad i, j, k = 1, 2, \ldots, N.$$
The matrix form is
$$P_2 = \alpha P_1 T + (1 - \alpha) Y,$$
where
$$Y = \begin{pmatrix} b_1 & b_2 & \cdots & b_N \\ b_1 & b_2 & \cdots & b_N \\ \vdots & \vdots & & \vdots \\ b_1 & b_2 & \cdots & b_N \end{pmatrix}.$$
Figure 4b shows the relationship between the immigration mechanism and the branching–mutation mechanism.
Set
$$F_0 = P, \quad T = P,$$
and the convergence of this process is given below.
Theorem 2.
Suppose Assumptions 1–3 hold. Then, the sequence $\{F_n\}$ generated by the MGWF with mutation and immigration converges; that is, the limit $\lim_{n \to \infty} F_n$ exists.
Proof. 
$$F_n = \alpha F_{n-1} T + (1 - \alpha) Y = \alpha^n P^{n+1} + (1 - \alpha) Y \sum_{k=0}^{n-1} \alpha^k P^k.$$
Since $T = P$ is regular (which is explained in Section 4.2), the condition of Lemma 2 is met and the limit $\lim_{n \to \infty} P^{n+1}$ exists. For $\sum_{k=0}^{n-1} \alpha^k P^k$, since the spectral radius $\rho(\alpha P) < 1$, the limit of $\sum_{k=0}^{n-1} (\alpha P)^k$ exists as $n \to \infty$. Consequently, $\lim_{n \to \infty} F_n$ exists. □
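The proof can also be checked numerically by comparing the recursive update with its closed-form expansion; in the sketch below the immigration distribution b is a hypothetical choice, while P is the matrix of Example 1.

```python
import numpy as np

P = np.array([[0, 2/3, 1/3],
              [2/5, 0, 3/5],
              [1/5, 4/5, 0]])
b = np.array([0.2, 0.3, 0.5])    # hypothetical immigration distribution (b_1, b_2, b_3)
Y = np.tile(b, (3, 1))           # every row of Y equals b
alpha, n = 0.9, 20

# Recursive form: F_k = alpha * F_{k-1} T + (1 - alpha) Y, with F_0 = T = P.
F = P.copy()
for _ in range(n):
    F = alpha * F @ P + (1 - alpha) * Y

# Closed form used in the proof: alpha^n P^{n+1} + (1 - alpha) Y sum_{k=0}^{n-1} (alpha P)^k.
geometric_sum = sum(np.linalg.matrix_power(alpha * P, k) for k in range(n))
closed = alpha ** n * np.linalg.matrix_power(P, n + 1) + (1 - alpha) * Y @ geometric_sum
print(np.allclose(F, closed))    # True
```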

4.2. The Google PageRank System

One of the most successful diffusion processes is the Google PageRank system. It was originally designed to rank webpages objectively by measuring people's interest in the relevant webpages. In the Google PageRank system, in addition to walking on neighboring nodes with probability $\alpha \in (0, 1)$, the particle is also allowed to jump to an arbitrary node with a small probability $(1 - \alpha)$, following the update mechanism
$$F_{n+1} = \alpha F_n P + (1 - \alpha) Y.$$
The $N \times N$ matrix $Y$ is a stack of identical row vectors $y$, where $y$ defines the probabilities of randomly jumping to the corresponding nodes.
We use the MGWF with mutation and immigration to explain this extension of the diffusion process. For convenience, we make the following transformation:
$$F_{n+1} = \alpha F_n P + (1 - \alpha) Y = \begin{pmatrix} \alpha F_n & (1 - \alpha) I \end{pmatrix} \begin{pmatrix} P \\ Y \end{pmatrix}.$$
In the above model, the branching law of the $n$-th generation is $\begin{pmatrix} \alpha F_n & (1 - \alpha) I \end{pmatrix}$: the branching–mutation mechanism works with probability $\alpha$ and has branching law matrix $F_n$, while the immigration mechanism works with probability $(1 - \alpha)$ and has branching law matrix $I$. The probability matrix of mutation in the mutation phase is $\begin{pmatrix} P \\ Y \end{pmatrix}$, where $P$ and $Y$ are, respectively, the mutation probability matrices of the branching–mutation mechanism and the immigration mechanism.
We can explain the diffusion process and the Google PageRank system within the same framework through the proposed two-phase setting. The initial branching law matrix is $F_0 = \begin{pmatrix} \alpha P & (1 - \alpha) I \end{pmatrix}$ in the branching phase, and the probability matrix of mutation is $T = \begin{pmatrix} P \\ Y \end{pmatrix}$ in the mutation phase. The iteration rule is $F_{n+1} = F_n T$. When $\alpha = 1$, this is the diffusion process $F_{n+1} = F_n P$; when $\alpha \in (0, 1)$, it is the Google PageRank system $F_{n+1} = \alpha F_n P + (1 - \alpha) Y$, as shown in Table 2.
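The block-matrix identity behind this framework is easy to verify numerically; the sketch below uses the matrix $P$ of Example 1 and the jump matrix $Y$ that appears later in Example 5 as illustrative inputs.

```python
import numpy as np

P = np.array([[0, 2/3, 1/3],
              [2/5, 0, 3/5],
              [1/5, 4/5, 0]])
Y = np.array([[0, 2/5, 3/5],
              [1/4, 0, 3/4],
              [2/5, 3/5, 0]])
alpha = 0.9
F_n = P.copy()                                 # any current branching law matrix

# Branching phase: the N x 2N block (alpha F_n | (1 - alpha) I).
branch = np.hstack([alpha * F_n, (1 - alpha) * np.eye(3)])
# Mutation phase: the 2N x N block stacking P on top of Y.
mutate = np.vstack([P, Y])

two_phase = branch @ mutate                    # two-phase update
pagerank = alpha * F_n @ P + (1 - alpha) * Y   # classic PageRank update
print(np.allclose(two_phase, pagerank))        # True
```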
In summary, we propose the two-phase MGWF setting and use it to explain both the diffusion process and the Google PageRank system. This offers a new perspective for understanding the diffusion process. Furthermore, it extends the diffusion process and speeds up the iteration, as shown in the examples below.

5. Simulations

Example 4
(Continuation of Example 3). Let the initial value be $F_0 = P$, with the transition matrix $P$ given in Example 1 and the iteration rule $F_{n+1} = F_n P$. We explain the diffusion process by a random walk and by the MGWF, respectively.
(i) 
Random walk on the graph
For the initial value $F_0 = P$, the distributions of a particle starting from $v_1$, $v_2$, and $v_3$ are $(0, \frac{2}{3}, \frac{1}{3})$, $(\frac{2}{5}, 0, \frac{3}{5})$, and $(\frac{1}{5}, \frac{4}{5}, 0)$, respectively. With the transition matrix $P$ and the iteration rule $F_{n+1} = F_n P$, we use the 2-norm $\|F_n - F_{n-1}\|_2$ to measure the convergence of $F_n$ with an accuracy of $1 \times 10^{-8}$. We find that the diffusion process reaches this approximate stable state after 88 iterations. The random walk on the graph with three nodes is shown in Figure 5.
(ii)
MGWF
In the MGWF with mutation, the initial branching law is $F_0 = P$; for example, the probabilities of a 1-type parent producing children of types 1, 2, and 3 are $0$, $\frac{2}{3}$, and $\frac{1}{3}$, respectively. As in Figure 3a, the unit of time is divided into two phases in the $n$-th generation: the branching law in the branching phase is $F_n$, and the probability matrix of mutation in the mutation phase is $T = P$. The iteration rule is $F_{n+1} = F_n T$. Simulation also shows that the approximate stable state with the threshold of $1 \times 10^{-8}$ is reached after 88 iterations.
From the above description, we find that the MGWF with mutation can explain the diffusion process more intuitively. Moreover, we can make the probability matrix of mutation differ between the odd-iteration and even-iteration mutation phases. For $\lambda \in (0, 1)$, the specific iteration rule is
$$F_{n+1} = \begin{cases} F_n T, & \text{if } n \text{ is odd}; \\ F_n \left( \lambda T + (1 - \lambda) I \right), & \text{if } n \text{ is even}. \end{cases}$$
To compare the convergence speed of this modified version with that of the original diffusion process, we take $\lambda = \frac{1}{3}$ and use $\|F_n - F_{n-1}\|_2$ to measure the convergence of $F_n$ with an accuracy of $1 \times 10^{-8}$. As shown in Figure 6a, for the same convergence accuracy, the modified version reaches an approximate stable state after 17 iterations, and its efficiency is significantly higher than that of the original diffusion process.
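A minimal sketch of this modified diffusion rule, assuming $\lambda = 1/3$ and $T = P$ from Example 1; because the alternating rule and the stopping criterion can be implemented in slightly different ways, the iteration count printed here may differ from the reported 17.

```python
import numpy as np

P = np.array([[0, 2/3, 1/3],
              [2/5, 0, 3/5],
              [1/5, 4/5, 0]])
T, lam = P.copy(), 1/3
T_even = lam * T + (1 - lam) * np.eye(3)   # mutation matrix used on even steps

F = P.copy()                               # F_0 = P
for n in range(1, 500):
    F_next = F @ (T if n % 2 == 1 else T_even)
    if np.linalg.norm(F_next - F, 2) < 1e-8:
        break
    F = F_next
print(n)                                   # iterations needed to reach the threshold
```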
Example 5.
The Google PageRank system has the initial value $F_0 = P$, where the transition matrix $P$ is given in Example 1, the probability matrix of the random jump is $Y$, and the iteration rule is $F_{n+1} = \alpha F_n P + (1 - \alpha) Y$. We explain the Google PageRank system by the MGWF with mutation and immigration with $\alpha = 0.9$ and
$$Y = \begin{pmatrix} 0 & \frac{2}{5} & \frac{3}{5} \\ \frac{1}{4} & 0 & \frac{3}{4} \\ \frac{2}{5} & \frac{3}{5} & 0 \end{pmatrix}.$$
The initial branching law is $\begin{pmatrix} \alpha P & (1 - \alpha) I \end{pmatrix}$, and the probability matrix of mutation in the mutation phase is $\begin{pmatrix} P \\ Y \end{pmatrix}$. The branching law of the $(n+1)$-th generation is obtained as $\begin{pmatrix} \alpha F_n & (1 - \alpha) I \end{pmatrix} \begin{pmatrix} P \\ Y \end{pmatrix}$. Using $\|F_n - F_{n-1}\|_2$ to measure the convergence of $F_n$ with the threshold of $1 \times 10^{-8}$, the Google PageRank system reaches this approximate stable state after 42 iterations.
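In two-phase form, the Google PageRank iteration of Example 5 can be run as below; the stopping rule uses the 2-norm with threshold 1e-8, so the count printed should be close to the 42 iterations reported, though implementation details may shift it slightly.

```python
import numpy as np

P = np.array([[0, 2/3, 1/3],
              [2/5, 0, 3/5],
              [1/5, 4/5, 0]])
Y = np.array([[0, 2/5, 3/5],
              [1/4, 0, 3/4],
              [2/5, 3/5, 0]])
alpha = 0.9
mutate = np.vstack([P, Y])                 # mutation-phase matrix stacking P on Y

F = P.copy()                               # F_0 = P
for n in range(1, 500):
    branch = np.hstack([alpha * F, (1 - alpha) * np.eye(3)])
    F_next = branch @ mutate               # equals alpha F P + (1 - alpha) Y
    if np.linalg.norm(F_next - F, 2) < 1e-8:
        break
    F = F_next
print(n)                                   # iterations needed to reach the threshold
```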
While the random walk cannot explain the Google PageRank system, the system is explained naturally once immigration is included in the MGWF with mutation. Similar to the basic diffusion process, we propose a modified PageRank system by dividing each unit of time into two phases:
(i) 
We make some minor changes to the Google PageRank model. We assume that the branching law matrix stays the same while the probability matrix of mutation differs between the odd-iteration and even-iteration mutation phases. In the odd iterations, the probability matrix of mutation is $\begin{pmatrix} P \\ Y \end{pmatrix}$; in the even iterations, we change it to $\lambda \begin{pmatrix} P \\ Y \end{pmatrix} + (1 - \lambda) \begin{pmatrix} I \\ J \end{pmatrix}$. The iterative process is
$$F_{n+1} = \begin{cases} \begin{pmatrix} \alpha F_n & (1 - \alpha) I \end{pmatrix} \begin{pmatrix} P \\ Y \end{pmatrix}, & \text{if } n \text{ is odd}; \\ \begin{pmatrix} \alpha F_n & (1 - \alpha) I \end{pmatrix} \left[ \lambda \begin{pmatrix} P \\ Y \end{pmatrix} + (1 - \lambda) \begin{pmatrix} I \\ J \end{pmatrix} \right], & \text{if } n \text{ is even}, \end{cases}$$
where $\lambda \in (0, 1)$, $I$ is the identity matrix, and
$$J = \begin{pmatrix} 0 & \frac{9}{10} & \frac{1}{10} \\ \frac{4}{5} & 0 & \frac{1}{5} \\ \frac{3}{5} & \frac{2}{5} & 0 \end{pmatrix}$$
is another probability matrix of mutation for the immigration mechanism, different from $Y$.
Let $\lambda = \frac{3}{5}$ and apply the revised Google PageRank system with $N = 3$ above. We find that the system reaches two different stable states, at the seventh iteration and at the eighth iteration, respectively, which makes the norm $\|F_n - F_{n-1}\|_2$ a fixed constant. The revised Google PageRank system reaches stability faster than the original one. We will conduct an in-depth study on the relationship and the choice between the two stable states in the future.
The convergence behaviors of the Google PageRank system and the revised Google PageRank system are shown in Figure 6b. The modified version reaches stability within five iterations, whereas the original Google PageRank system needs about 10 iterations.
(ii) 
We can make two further changes to examine whether the branching–mutation mechanism or the immigration mechanism, through its change in mutation probability, is responsible for the rate of convergence of $F_n$.
(a) 
(Change in the branching–mutation mechanism) In the odd and even iterations, the iteration rule becomes
$$F_{n+1} = \begin{cases} \begin{pmatrix} \alpha F_n & (1 - \alpha) I \end{pmatrix} \begin{pmatrix} P \\ Y \end{pmatrix}, & \text{if } n \text{ is odd}; \\ \begin{pmatrix} \alpha F_n & (1 - \alpha) I \end{pmatrix} \begin{pmatrix} \lambda P + (1 - \lambda) I \\ Y \end{pmatrix}, & \text{if } n \text{ is even}. \end{cases}$$
We take $\lambda = \frac{2}{5}$ and find that the system reaches two different stable states, at the seventh and eighth iterations, respectively.
(b) 
(Change in the immigration mechanism) In the odd and even iterations, a different probability matrix of immigration is selected:
$$F_{n+1} = \begin{cases} \begin{pmatrix} \alpha F_n & (1 - \alpha) I \end{pmatrix} \begin{pmatrix} P \\ Y \end{pmatrix}, & \text{if } n \text{ is odd}; \\ \begin{pmatrix} \alpha F_n & (1 - \alpha) I \end{pmatrix} \begin{pmatrix} P \\ \lambda Y + (1 - \lambda) J \end{pmatrix}, & \text{if } n \text{ is even}. \end{cases}$$
We take $\lambda = \frac{2}{5}$ and find that the system reaches two different stable states, at the 11th and 12th iterations, respectively.
From the iteration results of (a) and (b), we find that changing the probability matrix of mutation in the branching–mutation mechanism is the key to improving the iteration speed. The convergence behaviors of (a) and (b) are shown in Figure 6c.
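Both variants share the same two-phase skeleton and differ only in the even-step mutation matrix; the sketch below assumes $\lambda = 2/5$ and the matrices P, Y, and J above, and stops when the period-2 orbit settles (comparing $F_{n+1}$ with $F_{n-1}$), which may count iterations somewhat differently from the paper.

```python
import numpy as np

P = np.array([[0, 2/3, 1/3], [2/5, 0, 3/5], [1/5, 4/5, 0]])
Y = np.array([[0, 2/5, 3/5], [1/4, 0, 3/4], [2/5, 3/5, 0]])
J = np.array([[0, 9/10, 1/10], [4/5, 0, 1/5], [3/5, 2/5, 0]])
alpha, lam, I = 0.9, 2/5, np.eye(3)

def run(mutate_odd, mutate_even, tol=1e-8, max_iter=500):
    """Iterate the alternating two-phase update until the period-2 orbit settles."""
    F_prev, F = None, P.copy()
    for n in range(1, max_iter):
        mutate = mutate_odd if n % 2 == 1 else mutate_even
        F_next = np.hstack([alpha * F, (1 - alpha) * I]) @ mutate
        # With alternating mutation matrices the iterates may settle into two
        # stable states (odd vs. even steps), so compare F_{n+1} with F_{n-1}.
        if F_prev is not None and np.linalg.norm(F_next - F_prev, 2) < tol:
            return n
        F_prev, F = F, F_next
    return max_iter

odd = np.vstack([P, Y])
print(run(odd, np.vstack([lam * P + (1 - lam) * I, Y])))   # variant (a)
print(run(odd, np.vstack([P, lam * Y + (1 - lam) * J])))   # variant (b)
```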

6. Conclusions

In this paper, we have connected the two seemingly unrelated fields of diffusion and branching processes and established an explicit correspondence between the diffusion process and the MGWF. We focused on the MGWF with the degenerated branching law and innovatively proposed the concept of particle mutation in this model. In this variant of the MGWF, the vertices of the graph correspond to the particle types, which explains the diffusion process, and the movement of particles can be observed more clearly from a longitudinal perspective. We divided each step of the diffusion process into two phases, the branching phase and the mutation phase, which not only interprets the mechanism more clearly but also removes the existing limitation that the transition probability $P$ depends on the affinity matrix $A$. By extending the MGWF with the concept of immigration, we further connected the MGWF model with the popular Google PageRank system, providing a new perspective on the latter as well. Both the diffusion process and the Google PageRank system are well explained with the two-phase idea, as shown in Table 3.
In the future, we will focus on the following directions:
From Table 3, we find that the existing improvements of the diffusion process are concentrated in the branching phase; that is, they change the branching law at each iteration. For the mutation phase, however, the mutation probability of all existing algorithms is constant. Therefore, in the future, we will pay more attention to changing the probability of mutation at each unit of time.
In Example 4, the rate of convergence of the diffusion process was significantly accelerated after we made a slight change to the mutation probability. We will study how the probability of mutation should be changed to accelerate the rate of convergence.
In Example 5, after slight changes were made to the mutation probability of the Google PageRank system, we found that although the rate of convergence was significantly accelerated, two stable states appeared. It will be of interest to study how to deal with the two stable states.

Author Contributions

Methodology, W.L. and C.Y.; writing—original draft preparation, Y.Z.; writing—review and editing, Q.L.; visualization, Z.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Yang, X.; Köknar-Tezel, S.; Latecki, L.J. Locally constrained diffusion process on locally densified distance spaces with applications to shape retrieval. In Proceedings of the IEEE Conference on Computer Vision & Pattern Recognition, Miami, FL, USA, 20–25 June 2009. [Google Scholar]
  2. Bai, X.; Yang, X.; Latecki, L.J.; Liu, W.; Tu, Z. Learning Context-Sensitive Shape Similarity by Graph Transduction. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 861–874. [Google Scholar] [CrossRef] [PubMed]
  3. Egozi, A.; Keller, Y.; Guterman, H. Improving shape retrieval by spectral matching and meta similarity. IEEE Trans. Image Process. 2010, 19, 1319–1327. [Google Scholar] [CrossRef] [PubMed]
  4. Wang, B.; Tu, Z. Affinity learning via self-diffusion for image segmentation and clustering. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012. [Google Scholar]
  5. Donoser, M.; Bischof, H. Diffusion Processes for Retrieval Revisited. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013. [Google Scholar]
  6. Li, Q.; Liu, W.; Li, L. Affinity learning via a diffusion process for subspace clustering. Pattern Recognit. 2018, 84, 39–50. [Google Scholar] [CrossRef]
  7. Donoser, M. Replicator Graph Clustering. In Proceedings of the British Machine Vision Conference, Bristol, UK, 9–13 September 2013. [Google Scholar]
  8. Chen, S.; Zheng, L.; Hu, X.; Zhou, P. Discriminative saliency propagation with sink points. Pattern Recognit. 2016, 60, 2–12. [Google Scholar] [CrossRef]
  9. Lu, S.; Mahadevan, V.; Vasconcelos, N. Learning optimal seeds for diffusion-based salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2790–2797. [Google Scholar]
  10. Yang, X.; Prasad, L.; Latecki, L.J. Affinity Learning with Diffusion on Tensor Product Graph. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 28–38. [Google Scholar] [CrossRef] [PubMed]
  11. Elhamifar, E.; Vidal, R. Sparse Subspace Clustering: Algorithm, Theory, and Applications. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 2765–2781. [Google Scholar] [CrossRef] [PubMed]
  12. Berzunza, G. On scaling limits of multitype Galton-Watson trees with possibly infinite variance. arXiv 2016, arXiv:1605.04810. [Google Scholar] [CrossRef]
  13. Miermont, G. Invariance principles for spatial multitype Galton-Watson trees. Ann. Inst. Henri Poincaré Probab. Stat. 2008, 44, 1128–1161. [Google Scholar] [CrossRef]
  14. de Raphelis, L. Scaling limit of multitype Galton-Watson trees with infinitely many types. Ann. Inst. Henri Poincaré Probab. Stat. 2017, 53, 200–225. [Google Scholar] [CrossRef]
  15. Duquesne, T. A limit theorem for the contour process of conditioned Galton–Watson trees. Ann. Probab. 2003, 31, 996–1027. [Google Scholar] [CrossRef]
  16. Le Gall, J.F.; Duquesne, T. Random Trees, Lévy Processes, and Spatial Branching Processes. Astérisque 2002, 281, 30. [Google Scholar]
  17. Chaumont, L.; Nguyen, T.N.A. On mutations in the branching model for multitype populations. Adv. Appl. Probab. 2018, 50, 543–564. [Google Scholar] [CrossRef]
  18. Cheek, D.; Antal, T. Mutation frequencies in a birth–death branching process. Ann. Appl. Probab. 2018, 28, 3922–3947. [Google Scholar] [CrossRef]
  19. Lax, P.D. Linear Algebra and Its Applications; John Wiley & Sons: Hoboken, NJ, USA, 2007; Volume 78. [Google Scholar]
  20. Haggstrom, O. Finite Markov Chains and Algorithmic Applications; Cambridge University Press: Cambridge, UK, 2002; Volume 52. [Google Scholar]
  21. Seneta, E. On the supercritical Galton-Watson process with immigration. Math. Biosci. 1970, 7, 9–14. [Google Scholar] [CrossRef]
  22. Seneta, E. A note on the supercritical Galton-Watson process with immigration. Math. Biosci. 1970, 6, 305–311. [Google Scholar] [CrossRef]
  23. Liu, J.N.; Zhang, M. Large deviation for supercritical branching processes with immigration. Acta Math. Sin. Engl. Ser. 2016, 32, 893–900. [Google Scholar] [CrossRef]
  24. Sun, Q.; Zhang, M. Harmonic moments and large deviations for supercritical branching processes with immigration. Front. Math. China 2017, 12, 1201–1220. [Google Scholar] [CrossRef]
Figure 1. (a) The 3-type GWF $\{Z_n\}$ with the initial state $Z_1 = (1, 1, 1)$; (b) the MGWF with the degenerated branching law, where a parent particle can only produce one offspring particle.
Figure 2. The two cases in which mutations occur (case 1) and do not occur (case 2) during the mutation phase.
Figure 3. (a) The 3-type GW forest with mutation. (b) The mutation details of the 3-type GW forest with mutation.
Figure 4. The immigration in branching process and MGWF with mutation. (a) The 3-type branching process with immigration. (b) The relationship between immigration mechanism and the branching–mutation mechanism in the MGWF.
Figure 5. Random walk on the graph with three nodes. We take an accuracy of $1 \times 10^{-8}$ and find that after 88 iterations, $F_n$ reaches a stable state.
Figure 6. The convergence behavior of the existing methods and the modified diffusion versions: (a) the diffusion process and the modified diffusion version; (b) the Google PageRank system and the modified diffusion version; (c) the convergence behavior when the mutation probability matrix changes in different mechanisms. Green marks the number of iterations for the different models to reach the same accuracy of $1 \times 10^{-8}$.
Table 1. The correspondence between the diffusion process and the MGWF with mutation.

Diffusion Process                      | MGWF with Mutation
N nodes in the graph                   | N types of particles
Edges in the graph                     | Particle type changes
An iteration of the diffusion process  | A unit of time in the MGWF
Initialization $F_0 = P$               | Initial branching law matrix $F_0 = P$
Transition matrix $P$                  | Probability matrix of mutation $T = P$
Update scheme $F_n = F_{n-1} P$        | Update scheme $F_n = F_{n-1} T$
Table 2. The same framework model for the diffusion process and the Google PageRank system.

The Framework: $F_{n+1} = \begin{pmatrix} \alpha F_n & (1 - \alpha) I \end{pmatrix} \begin{pmatrix} P \\ Y \end{pmatrix}$

$\alpha$              | The Model
$\alpha = 1$          | The diffusion process
$\alpha \in (0, 1)$   | The Google PageRank system
Table 3. The diffusion process and the Google PageRank system explained with the two-phase idea.

Method                   | The Branching Law                                            | The Probability of Mutation
Diffusion process        | $F_n$                                                        | $P$
Google PageRank system   | $\begin{pmatrix} \alpha F_n & (1 - \alpha) I \end{pmatrix}$  | $\begin{pmatrix} P \\ Y \end{pmatrix}$